Embedded - 258: Security Is Another Dimension
Episode Date: August 30, 2018. We spoke with Axel Poschmann of DarkMatter LLC (@GuardedbyGenius) about embedded security. For a great in-depth introduction, Axel suggested Christof Paar's Introduction to Cryptography class, available on YouTube. We also talked about ENISA's Hardware Threat Landscape and Good Practices Guide. Axel will be speaking at Hardwear.io, a security conference for the hardware and security community. The conference consists of training (11th - 12th Sept 2018) and conference (13th - 14th Sept 2018) portions. It is in The Hague, Netherlands. DarkMatter is hiring. Elecia has some discount coupons for the Particle.io Spectra conference.
Transcript
Welcome to Embedded. I am Elecia White, and my co-host is Christopher White.
Our guest this week is Axel Poschmann. We'll be talking about security in embedded systems.
There are some things I think you should know. Before we get started with Axel, I have a 40% off coupon or three
to the Particle Spectra conference in October in San Francisco. Hit the contact link on
embedded.fm if you want one. And if you really want one, really can plan to attend and are
out of a job or in school, add a random number in case I have a free ticket to give away
as well.
Axel, thanks for joining us.
Hi, Elecia.
Hi, Chris.
Could you tell us about yourself?
Sure.
I'm currently responsible for the hardware security lab at Dark Matter, but at heart,
I'm a curious engineer with a passion for efficiency.
I did my PhD on lightweight crypto with Christof Paar in Bochum, Germany, and then did my postdoc
at NTU in Singapore, where I eventually became an assistant professor and built up my own
research group.
Then my research focus shifted towards implementation security, that is, in particular, efficient
countermeasures against side-channel and fault attacks for embedded devices.
Then around five years ago, I pivoted to industry and started at NXP Semiconductors
in Hamburg, Germany, because my curiosity wanted completely new inputs.
And then again, around one and a half years ago, I moved to Abu Dhabi in the United Arab
Emirates, again out of curiosity.
And here I am building up a hardware security lab.
All right.
What's the high...
Oh, wait a minute.
I have to introduce Lightning Rounds before I just start going into it, don't I?
So we'll ask you short questions.
We want you to have short answers.
It'll be a little sillier, but...
And then we will get into the meat of embedded security.
Safest form of computer?
Stay away from it.
Do you prefer to start a dozen projects or finish one?
Actually both.
I want to start a dozen and then get satisfaction
when one after the other completes.
Worst movie for depicting hacking?
Password Swordfish.
Okay, yeah, Password Swordfish. I think it was just called Swordfish here, but it was bad.
Do standards make things safer or easier to hack?
Do my what?
Standards.
I do standards.
Yeah.
They make things safer.
Hacking, making, tinkering, engineering, or programming?
Engineering.
Is hacking a good thing or a bad thing?
It's good because
it creates pressure to improve
product security.
What's a tip you think everyone should know?
It's the indicator at the gas tank.
It shows you on which side of the car
your gas tank
opens.
It took me a long time before I realized that.
There's a little indicator, like when you have the fuel gauge.
Yeah.
There's a little caret on the side, and it shows which side the door is on.
Yeah.
I didn't know that until like five years ago.
You need that forever?
Of course you do.
All right. Okay. So you have a PhD and you do security, and I get so intimidated by security. It just seems like an impossible problem.
Is there anything you can tell me that will keep me from being just overwhelmed by futility?
Sure. So I think there's no right. You can't do security completely right. From my point of view,
it's just a trade-off. So security is another dimension that runs contrary to usability, performance,
time to market, cost, and all the other engineering trade-offs. And wherever your customer or you want
to feel comfortable in reducing the other optimization goals, that's where you land
with your security. And it's never perfect.
You can never make it completely secure.
Well, that's the show.
Thank you, everyone.
Sorry.
I was going to jump in and say I've done a very small amount of security work.
I kind of fell out of being a firmware engineer at one place,
and then I kind of fell into this one role there.
And that was really striking to me.
The management really wanted it to be perfect.
And I kept telling them,
this is the best we can do
and it's always going to be a little bit of cat and mouse.
How would you present or convince people
who really aren't familiar with security and do think it can be perfect that there is a good enough?
Well, in the end, it boils down to risk management.
I think if you can show, in particular to management, that you know the risks, you can quantify them as best as possible, and you have done all your homework, like
following best practices, security guidelines, standards, and so on, then there should be
a point where people accept the risk and move forward.
Otherwise, you will never hit the market, or the cost will explode, or usability goes
to zero.
How do I tell my clients or my managers or my bosses that it is worth spending the time and money on security when all they want to do is ship my consumer product?
Yeah, I think that's the core of the challenge, I would say. The challenge actually boils down to a lack of measurement for security. Security is multidimensional, and everybody has a different understanding of what security actually is. So even if there are metrics to measure it, it's very hard to quantify. And to show improvement, you need to first agree on a metric, then measure, then show that you have improved and that the time spent on improving the security has been worth it. Because in the end, for product development, time and money are easy to measure.
You either hit the deadline or not.
You either hit the cost or not.
So these are kind of binary decisions
where security is everything between zero and one.
And the value to the end customer often really depends on someone else. Like, did this device get hacked and go into the script kiddie denial-of-service system? Or was my system just not that interesting, so nobody hacked it? And so how much I spend on security should be reflected in, or should be proportional to, how much it's attacked. But I have no control over one of those.
Correct. And also, I think security is valuable if you don't see it. Only if something is breached do you know it wasn't secure. Until it's hacked or breached, you can't say for sure it's secure, just because no one has found a way to hack it, or hasn't published it.
So I think that's also one way to convince maybe management
to invest the time is to make it slightly better than competitors
because burglars always go for the lowest hanging fruits.
And that's the same for hackers.
If they see that your product is harder to hack,
then they may go to a competitor product instead
if they get the same financial benefits.
Yeah, you don't have to be faster than the bear.
You just have to be faster than the other person.
Yes, exactly.
So you mentioned some standards.
What can I look at?
What can I point to to say this is good advice?
This is cogent advice.
Yeah, I think this depends on your target application or your product.
So sometimes there are regulatory requirements.
I think NIST is doing a great job in publishing guidelines.
There is one for tamper protection, which can be the first boundary or first protection level against attackers.
There are very good implementation guidelines
for how to achieve a certain level of tamper protection.
There are firmware security guidelines
and best practices for coding and design
available everywhere.
Just a quick Google search.
There is plenty of advice available for designers.
There is, but there's so much.
And I wish there was more specifically related to embedded systems
without operating systems or without Linux.
I mean, we've talked about the
CERT top 10 secure coding practices, but most of those are
things you do when you have a Linux system.
Is there a book or a site you trust?
So, I've never designed a secure product. I was only part of it, typically in the red team capacity, so I never really made an exhaustive literature survey of what is available.
There are a few good references.
So, for example, my PhD advisor has this complete lecture series available on YouTube.
So that's a very good start.
Obviously, it's 24 hours of YouTube videos, so that's maybe too much time to invest. But there are also guidelines from ENISA, which is a European organization. They published a guideline in particular for embedded devices: how to harden the platform and mitigate certain threats that embedded devices face.
That one had some interesting tables. A lot of times when I talk to clients, they think about one aspect. They think maybe about their user passwords, or maybe they think about having their firmware eavesdropped upon. But the tables covered all kinds of things and the effects from nefarious activity: how that affects information integrity, to actual damage and whether that affects availability or causes environmental harm, to even legal problems that come up when people steal your software.
And that laying out the threats like that was very helpful to me because it's nice to not have to reinvent it each time.
So being on the red team, and just to make sure,
the red team is the penetration team.
They're the hackers.
And the blue team is usually the defenders who build the thing.
That's the right terminology?
Yeah, correct.
How do you, if I bring you something, I bring you an Internet of Things light, and it hooks up over Wi-Fi, which is not the best option, but still.
What do you do first?
Do you take it apart?
Do you look at the software?
What do you do?
All right, so I'm working in the hardware lab,
and we have a software lab, a telecommunications lab, and a malware lab.
So whenever we get a device in our lab,
then the team starts simultaneously to work on it.
And we focus on the hardware of the device.
So we have a look at it, if there are any debug interfaces available.
That's the first easy-to-check access point.
If there's a JTAG enabled,
you can do a lot of things.
The second thing is,
if you open up the device,
you check the printed circuit board
and try to identify the components
and try to reverse engineer
all the components
that have been built in there. Then you check CVEs that are associated with those components and check whether the vulnerabilities have been closed in that design or not. Then you try to dump the firmware, or you have a look at the PCB; sometimes you find pins have been disconnected in order to make the debug interfaces inaccessible. You can just solder the pins back on and, voila, you have your debug interfaces and you can do stuff.
And if all that doesn't work, you can go on the chip level. You can take off the secure memory or CPU or whatever and put it in a... but that's probably not worth it for an IoT light device. But you can then do physical attacks, like side-channel attacks or fault attacks. You can try to glitch the boot process to boot another boot image, which gives you root access so you can dump the memory. Or you can do side-chain analysis, trying to get the cryptographic keys out of it. So there are plenty of things you can do.
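(As a sketch of that check-the-CVEs step: NIST's public NVD REST API can be queried by keyword. This is only an illustration, not DarkMatter's tooling; the component name "ESP32" below is a stand-in, and a real assessment would match exact CPE identifiers rather than free text.)

```python
# Minimal sketch of "check CVEs for identified components" using NIST's
# public NVD 2.0 REST API. Illustrative only; the keyword search is a
# rough stand-in for a proper CPE-based lookup.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def known_cves(component, limit=5):
    """Yield (CVE id, summary) pairs for CVEs mentioning a component name."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": component, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        yield cve["id"], cve["descriptions"][0]["value"]

# "ESP32" is a placeholder for whatever parts the teardown identified.
for cve_id, summary in known_cves("ESP32"):
    print(cve_id, "-", summary[:100])
```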
You said side-chain analysis?
Yeah, when I say
side-channel, I'm...
Oh, side-channel, okay.
Side-channel, yeah.
So I'm talking, for me, side-channel in particular means power side-channels
or electromagnetic side-channels.
But obviously, since Spectre and Meltdown,
cache timing attacks are way, way more important side-channels.
That's the sort of thing like the ChipWhisperer?
Yeah, where you monitor the power and you can figure out the cryptographic keys
based on how the power is drawn when it decrypts.
I remember hearing about that and thinking, wow, that's
bonkers and amazing and terrifying, which is kind of where I am right now.
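(To make that concrete, here is a rough sketch of textbook correlation power analysis, the technique behind tools like the ChipWhisperer. This is not Axel's tooling; it assumes you already captured power traces and the matching plaintext bytes, and that an AES S-box table is available.)

```python
# Toy correlation power analysis (CPA) for one AES key byte. Assumes
# `traces` is an (n_traces x n_samples) array of power measurements,
# `plaintext_bytes` the first plaintext byte of each trace, and `sbox`
# the 256-entry AES S-box table.
import numpy as np

def hamming_weight(x):
    return bin(x).count("1")

def cpa_key_byte(traces, plaintext_bytes, sbox):
    traces = traces - traces.mean(axis=0)   # center each sample point
    best_guess, best_corr = 0, 0.0
    for guess in range(256):
        # Predicted leakage: Hamming weight of the S-box output
        # under this key-byte guess.
        model = np.array([hamming_weight(sbox[p ^ guess])
                          for p in plaintext_bytes], dtype=float)
        model -= model.mean()
        # Pearson correlation of the model against every sample point;
        # the correct guess produces the strongest correlation peak.
        corr = model @ traces / (np.linalg.norm(model)
                                 * np.linalg.norm(traces, axis=0))
        peak = np.max(np.abs(corr))
        if peak > best_corr:
            best_guess, best_corr = guess, peak
    return best_guess, best_corr
```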
Okay, so if I'm on the blue team, not having JTAG as an easy access,
although that makes manufacturing so much harder.
Yeah, that's true.
And making sure my debug pins certainly aren't labeled debug and have a header.
It would stop you just a little bit, but I need those debug pins.
They're for manufacturing.
And then you can take out my firmware by decapping the chip.
But decapping the chip is more expensive than just reading it out through JTAG.
Yeah, but only slightly.
So there are companies where you can just send them the chips,
and they decap them for you.
You can use kind of specialized chemistry,
but it's still freely available and you can do it yourself in the garage.
There are dozens of YouTube videos showing you how to do it.
It's just a little bit of time and effort that you need to invest.
And with just a little bit of practice, every graduate student can do it.
So decapping a chip is not a big hurdle.
Okay, I feel like the red team won.
Well, that's a good question. So are there things that you've come across, and you don't have to say what they are, where you just said, okay, we figured nothing out? Or is it basically everything is penetrable once you have hardware in hand?
Yeah, I would say everything is penetrable.
But the question is if the effort is worth the price.
And for hardware security, there's this common criteria
and the J-HAS subgroup, Joint Hardware Assurance Security,
I think is the meaning of J-HAS.
They have rating tables for specific products.
So if you talk about a smart card or a security device with security boxes,
then there are different dimensions or factors that all give points,
like the tools that you use.
Is it a chip whisperer for $250?
Or do you need a $50,000 high-end oscilloscope
or a focused ion beam for $1 million
or a laser fault injection device for $100,000?
So you get different points for the price and the availability of the tools, for the
duration of the attack that it takes to do it, for the knowledge that you need, for the
number of samples, and a few other dimensions.
And then there's an exploitation phase and an identification phase.
And for each of them, you add the points together.
And if you get more than 31 points, then you get the highest assurance for this device.
And that doesn't mean that a nation state or a motivated attacker couldn't break it,
but it clearly is the highest grade that you can get and that you can certify.
So I think this is a better approach. You can hack everything, but you try to quantify the multidimensional effort that an attacker has to put in
and try to come up with a rating.
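(To make the scoring idea concrete, here is a toy version. The factor names follow the Common Criteria attack-potential style, but every point value below is invented for illustration; the real tables are maintained by the JHAS working group.)

```python
# Toy attack-potential score in the spirit of the Common Criteria / JHAS
# rating described above. All point values are made up for illustration.
POINTS = {
    "equipment": {"ChipWhisperer": 1, "high-end scope": 4,
                  "laser fault injector": 7, "focused ion beam": 9},
    "expertise": {"proficient": 2, "expert": 5, "multiple experts": 8},
    "time": {"days": 1, "weeks": 3, "months": 7},
    "samples": {"one": 0, "tens": 2, "hundreds": 4},
}

def attack_potential(path):
    """Sum points over the factors of one identification-plus-exploitation path."""
    return sum(POINTS[factor][choice] for factor, choice in path.items())

score = attack_potential({"equipment": "focused ion beam",
                          "expertise": "expert",
                          "time": "months",
                          "samples": "tens"})
print(score, "->", "highest assurance" if score > 31 else "lower rating")
```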
Okay, so there's this rating,
and I can basically get a score.
And I can probably work out how much each point costs.
I mean, they aren't all going to be the same,
but if my boss wants to get a better score,
I can say, well, each point costs about three person-weeks of work.
I say that without having any knowledge of how long it actually takes.
And then it's probably somewhat exponential. At the beginning, it's easier to lock your JTAG
than it is to redesign your platform so you have a chip that is reasonably protected.
Does that all sound about right? I mean, is this the story I'm telling myself is about right?
Yeah.
I mean, there's, again, there are many dimensions.
And in one example, what you mentioned is correct.
The engineering effort and designing more countermeasures can translate to points.
But this is just one dimension.
For example, knowledge of the device is another very important dimension.
And that's also the reason why smart card vendors don't provide very detailed technical manuals to everybody, but only to vetted customers.
Because the availability of this information would cost points in the evaluation.
And people always complain about this and say this is security by obscurity,
but in fact it's a mechanism to make it harder to attack the device.
And coming back to this example, if you want to really fulfill all the requirements for keeping this information tightly secured, it means you need to have a high-end secure IT environment. You need perimeter security with at least two or three different zones, so that means two or three different checkpoints and gantries where you need to go from one zone to the other. So there are a lot of extra costs that translate into these points.
Yes. We were talking recently about a consumer product where the question was, I'm shipping a lot of these, and how do I make them secure enough not to be laughed at? And we talked some about, in manufacturing, making passwords different and forcing users to change the password upon receipt of the device. But it was Chris who brought up,
this doesn't matter if your database, if your server device
is unprotected
you have to protect the whole
chain. When you say zones
what do you mean by that?
So the physical zones
like you have
a fence around your
factory office
and then you have another
gantry where you need to
badge in and there's no
tailgating allowed.
There's a security guard watching it
or CCTV in operation.
And from that zone, you can enter then
the high security zone.
And then you have a similar separation in your IT network.
So the high secure network
which has no internet access,
kind of a DMZ, and then
a kind of
company-wide intranet.
Okay, I'm glad I asked,
because I had this idea that it was
areas on a board.
People do get a notion that it's all about the device,
right? Oh, if you can hack the device, I'm
doomed, and then they do some
strange backend that is
not secured well.
It's not just attacking the device. There's lots of ways to attack the ecosystem of your product, right?
Yeah, correct. I was stressing only this whole overhead. It only translates to a few points in a device evaluation, because it means you have the appropriate measures in place to protect the data sheets and the hardware designs of your product. If you don't have this in place, you don't get the points, because the IT could be hacked and then the data is out.
Something else I've never thought about.
I'm always in favor of publicly available data sheets and manuals
because as an engineer, that makes my life so much easier.
And when I have to sign an NDA to see a chip manual, it's irritating.
And I never thought that there was a reason for it.
Now I do.
Yeah, I think obviously the functionality should be still publicly available, but the detailed working of the countermeasures needs to be closely guarded.
You worked at NXP in their hardware security area.
Yes.
Can you tell me more about what you did there?
Yeah, so for the first two years, I was running the vulnerability analysis team,
and that's side-channel fault attacks and invasive hardware attacks.
And that was the internal red team that hacked, from a hardware perspective,
all the high-end security chips that NXP produced, the secure elements.
And we came in at several stages of the design cycle, from simulations to prototypes to pre-evaluation risk mitigation. And obviously we always gave feedback to the designers on how to improve things: what was easy to hack, what was a pain, and in general how to improve the product. So that's what I did for the first two years. And then for the last year,
I was responsible for the security concepts team,
which was also for the high-end hardware security practice
within NXP and trying to,
I don't know how to say it,
trying to steer the innovation
and the creativity of this great team
towards products, so that all these ideas and patents that have been filed
and papers that have been published
eventually ended up in products of NXP high security chips.
Are there things in the hardware
that make firmware more secure?
Yeah, I think fuses are a good example.
Like when I set that JTAG fuse so that you can't read out my firmware, or you can't use JTAG.
Yes, so that's a simple
measure which makes life a lot harder for an attacker. It's not impossible to overcome it, but I would say it probably keeps a majority of attackers out.
Because now they have to do something like decapping, which means buying a few of my units.
Okay, yeah, or even more. So I think to reverse a fuse, you really need to change the IC. So you need to do circuit editing, which means you need to use a FIB, which means you need to go and rent a FIB, which costs a few thousand dollars. And you need to have the knowledge, and you need to know where to do it, which fuse to change, and all these things. So this is much more sophisticated than just decapping and having a look at it, or just dumping the firmware.
Okay, what's a FIB?
It's a focused ion beam.
Oh, okay.
So it's a failure analysis tool which can be used for circuit editing.
Why wouldn't I just decap the chip, read out the flash,
flash it to another chip, and call it good?
Well, to prohibit reading out the memory, the fuse has been burned, or has been blown.
Oh, so you need to change the state of the fuse.
When you decap the chip, you just get the bare silicon out.
Oh, I thought you got the ROM out too.
The mask ROM, yeah, but not something like...
Not the flash ROM.
I don't think you can see anything physical with flash.
Oh.
All right.
I learn something new all the time that I thought I knew.
Could be wrong.
I don't think so, though.
Okay.
Okay, so you have to change it so you can actually read it out.
And then when you're reading it out, you're reading it out through like a JTAG interface, not through a microscopic interface.
Yeah, exactly.
I mean, that's another way: to try to use other failure analysis techniques to try to read out the EEPROM or the flash, but that's much more sophisticated.
So it's much easier to just try to reverse the state of a protection fuse
and then use the normal mechanisms to read out the memory.
What other chip-level tools exist for firmware security?
That's a good question.
Actually, in commercial devices, I'm only aware of fuses,
which are used in a lot of products.
In secure elements,
there's a completely different architecture.
Secure elements are passive devices,
so it's a completely different architecture
than the normal CPU or APU.
So I would say fuses is current state of the art.
That's kind of sad, but nice to know that I do know what the state of the art is.
And it's a little hard on embedded systems because you can't track intrusion very easily.
And so when you do get hacked, there's no way to tell anybody. On Linux systems, that's part of good security,
is being able to tell when an intrusion has occurred.
Are there other things like that where little devices
and larger devices have very different approaches?
Well, that's...
You can say no and we can move on
I don't mean to ask you things you don't know
I think it's a tricky question
There are so many different classes of smaller and embedded devices. For example, if you look at secure elements again, so the chips that are on smart cards or passports or credit cards, the highest-grade hardware security chips, they have a lot of intrusion detection and anti-tamper mechanisms. But most of them are passive, because the device is passive. So once it's powered up, then they erase the keys or erase the memory, these kinds of things. But it's not that an alarm is sent to some remote operator; it all happens in that chip. But again, this is for secure elements and not for the standard IoT chip.
Yeah. I worked on a sensor where, in military use, you could push a button and it would erase its code multiple times and all of its keys until it ran out of batteries.
That sort of slagging the hardware was expensive, but if you really don't want somebody to have it, that's what you do.
You sometimes talk about open source when you're doing conference presentations.
Do you have opinions about whether open source is really useful or is it only useful in certain cases?
So I think open source is great. It's impossible to clearly state all the benefits open source has brought to innovation and security and humanity in general. I think it's a bit tricky for hardware, because you can't just push an update.
So if the hardware is in the field, you can't change it.
There's only a few things that you can change with firmware updates.
But some flaws you cannot fix other than by changing the hardware. And I think that's also where the reluctance from the vendors comes in to
publish details about
their designs.
And if you look at Spectre and Meltdown,
it haunts the chip industry
now because no one has
checked it for
decades. But
if you combine that
with a
mandatory third-party testing approach,
like what is done for certified secure elements, for example,
then it can work.
You just need to make sure that a lot of people have a look at your design
with different ideas how to hack it,
and you take that feedback and improve on it.
And that's the beauty of open source. Once it's out, everybody can have a look at it and try to find a loophole, and soon enough all these loopholes are closed. Or a lot of people know about these loopholes and they're not telling anyone.
Well, it seems to run contrary to the notion of securing your infrastructure and your designs and things.
If that's true, why not open everything up?
Well, I think it's very beneficial
to keep a lot of things secret
because it adds additional pain for the attacker.
If you can't get the design principle,
if you can't get the countermeasures,
if you can't get all the tripwires that you put in place, it's really a pain. And at some point,
hopefully they lose patience and move on to the next target. That's why I think, in order to increase product security, it's very beneficial to keep these details closely guarded. But then again, it needs to be made sure that independent red teams
have had a look at this design to not make any beginner's mistakes.
And so there's this dichotomy of you want to keep the information,
need to know, because then people who don't need to know don't have it
and can't misuse it.
But having open source gives you more eyes
and more security for everyone in general.
And they seem like opposite paths.
It's true.
Okay.
I mean, if you look at the state of software security or IT security,
most of the devices use open source libraries in one way or the other,
or modified them, or don't tell people that they use open source libraries. And still, there's a lot of insecurity around it. So I think open source, and the power of crowd intelligence having a look at these software things, improves the best achievable state of security. But what is then implemented in the average product is a completely different story.
I mean, one of the good things about open source security libraries
is that you're not implementing your own fantastical made-up algorithm
that may have a shortcut.
Have we gotten past that point where everybody's implementing their own security?
Are we on to the next problem now?
Or do you still see a lot of people saying,
well, I just have to modulo and add
and then multiply by three
and then the number is always 12 when you get it back?
Okay, I think open source libraries are great because, exactly as you said, you have proven secure implementations, or at least a vast pool of volunteers hasn't found any flaws in these libraries. So you have a very good security level already for software. I think for hardware, you don't have all these building blocks readily available. For crypto, you shouldn't change anything. The block ciphers, the hash functions, public key crypto, all of this has been secure for decades.
So there's no need to implement anything by yourself.
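(In code, "don't implement anything yourself" mostly means reaching for a vetted library primitive. A minimal sketch with the widely used Python cryptography package; any audited AEAD in any language makes the same point, and the key handling here is simplified for illustration.)

```python
# Use a vetted, authenticated primitive instead of a homebrew cipher.
# AES-GCM from the audited `cryptography` package; in a real product the
# key would come from provisioning or an HSM, not be generated inline.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)                    # never reuse a nonce with one key
ciphertext = aead.encrypt(nonce, b"light state: on", b"device-42")
plaintext = aead.decrypt(nonce, ciphertext, b"device-42")  # raises on tamper
```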
When I say it's beneficial to closely guard what you use,
then you have to use proven principles that are known to be secure,
but maybe add some extra layers of obfuscation pretty much on top
to make life hard.
But again, it needs to be checked
that this has no influence
on the underlying security principles.
People make fun of obfuscation as a security measure.
So it's a little odd to hear you say it.
Can you explain more about...
Maybe what's a good example, yeah.
Yeah.
When you look at a chip and you can clearly identify all the building blocks,
then it's easy to go for the key storage or the fuses or the CPU or the registers.
If you use glue logic, which is kind of an obfuscated hardware layout,
it's very hard to find it and makes life for an attacker really, really painful.
Okay. And you could, given infinite time as a hacker, unravel the glue logic, but it would be much easier to just walk in and the key is there and easy to find.
Yeah. I mean, there's this example from Christopher Tarnovsky, hacking a secure element from Infineon. I think it was like six years ago. And he found the coprocessor, and he found a way to hack this coprocessor, the secure coprocessor. Using this design in other devices and other derivatives of this product is good from an economic point of view, but it also means that these other devices can be hit with the same attack vector. And if there had been glue logic used, just as an example, then it wouldn't be obvious where these modules would have been and where the attacker would have to go.
So that's also why reverse engineering is actually a very interesting research topic
because there are no metrics for how difficult it is to reverse an obfuscation.
But reverse engineering, like hacking, does sometimes have a bad reputation. How do you deal with that?
Well, I'm not doing reverse engineering, but I admire people who do it, because it's typically very tedious work, and it takes a lot of knowledge and years of experience to be efficient at it. I think currently, if you want to make your product secure, you need to add some layers of obfuscation on top, just to make life painful, so that not every script kiddie can go to a YouTube tutorial and employ some hacking tricks on your product. At the same time, it's unclear which obfuscation is really effective.
And you mentioned that people are laughing at obfuscation.
That's typically because it's not very effective.
Okay, going back to the example earlier with, I want to make an Internet of Things light. I write some firmware. It's not Linux, so it's not open to those attack vectors. What are the easiest, cheapest engineering things to do for the biggest bang for the buck, as far as making it secure so that it doesn't just get hacked in the first week it's out?
Okay.
You first should define your assets.
What do you want to protect?
If it's your firmware, is it keys, user data, whatever.
Once you have the assets defined, you can do threat modeling to see how these assets could be compromised by adversaries, which means what an adversary needs to do to get to these assets. And then you can define your countermeasures. And low-hanging fruits are always to use constant-time implementations of crypto, or to use standard crypto libraries that have constant-time implementations. Then fault countermeasures can be built in. There are some coding guidelines and so on.
So the constant time is to avoid the side-channel attacks?
Yeah, correct. So whenever you touch a key, a crypto key,
then there's a chance that this information can be picked up through side channels.
And time is the easiest.
Typically, computer scientists or engineers want to optimize algorithms
for the average best performance, but that means it's data dependent.
So if you make it data independent,
it's always the worst case execution time, which is secure,
but it's also the worst case execution time every time.
But these are low-hanging fruits.
So then you already have this timing side channel closed out.
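(A tiny illustration of the idea in Python; the same pattern applies in C firmware. The naive comparison returns early, so its runtime leaks how many leading bytes of a guess were right; the constant-time version always does the worst-case work.)

```python
# Timing side channel in miniature: comparing a secret (e.g. a MAC or
# token) byte by byte leaks, via runtime, how much of a guess is correct.
import hmac

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False      # early exit: runtime depends on the data
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    # Always touches every byte: worst-case execution time, every time.
    return hmac.compare_digest(secret, guess)
```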
Okay, let's see if I can figure out what the assets are. One of my assets is the password and username that my user uses on the server. Another might be the user's Wi-Fi password, because I need that to log on to their Wi-Fi.
Another asset would be being able to see remotely whether the light is on or off,
which might indicate whether the person is home or not.
And then another asset...
Did you say like a pairing certificate or something like that?
I didn't, but that's a good one.
A pairing certificate so that I knew on the server that this device belonged to me and should be handled adequately.
Are those the assets?
Did I miss a bunch?
Does that sound about right?
That sounds good.
Given a mythical IoT light?
No, no, no.
That sounds good. It sounds like a challenge to secure it without any Linux or any other standard libraries available.
Yeah. Well, welcome to my world.
So let's talk about that certificate. I want this light to be able to talk to my server.
And so when it leaves manufacturing, it gets a certificate that says, yeah, you can talk to my server.
If I use the same certificate for everybody, then a hacker can come in and make a fake light, a spoof light.
A clone, yeah.
A clone, yes.
Okay, and that is a sort of example of threat modeling,
where I go through my list of assets and I think about,
well, if I did this with it, then someone could do that with it.
Yeah, I mean, using the same certificate in all devices
is probably not what you would do.
It's a bad idea.
It's the number three.
I think IP cameras or something.
Yeah, yeah.
Where they use the same private RSA key in all devices
because someone who read the checklist probably didn't get that.
It means it needs to be a unique key that is completely random
and so on and so forth.
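(A minimal sketch of per-device credentials with the Python cryptography package: a fresh random key pair per unit and a signing request carrying the device's identity, which a factory CA, not shown here, would then sign. The name device_id and the identifier string are placeholders.)

```python
# Per-device identity sketch: a unique random key pair and a certificate
# signing request per unit, instead of one shared key baked into all
# devices. The factory CA signing step is omitted for brevity.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def make_device_csr(device_id: str):
    key = ec.generate_private_key(ec.SECP256R1())   # unique per device
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, device_id),
        ]))
        .sign(key, hashes.SHA256())
    )
    return key, csr

key, csr = make_device_csr("iot-light-0001")   # placeholder identifier
```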
But that's expensive.
I mean, in manufacturing, to generate a unique key
and to have to program each device individually
instead of just mashing the same ROM over and over again,
that's hard.
I mean, as a manufacturing step, that is a pain in the neck.
Yeah, it's provisioning, right?
It's key provisioning.
And it also needs to happen in a secure facility
to make sure no one else can tamper with that key.
Which is tough because a lot of times that's overseas, right?
You're not necessarily in control of that facility.
But the provisioning
can also happen on-premise.
So you can do it
separately from the manufacturing.
Yeah, this is shipping boards to the States and then shipping them out from here, for domestic distribution.
There are also companies who offer that as a service because the key needs to be generated
in a secure way, right?
So you need an HSM, a hardware security module, to generate the key.
The HSM needs to be in a perimeter-secured environment with access control and so on
and so forth.
So you're saying I shouldn't just plus plus the previous number.
What about if I use the serial number of the chip that I'm using?
Well, it's, again, one of these trade-offs between costs, time to market,
usability, and security, and performance. If you have a light
bulb, and the serial number
is only available to an
attacker if he
takes out the light bulb
from the lamp
and has to look at it, then
yeah, probably you can use it.
Because he's already on the premises, and then gets the serial number.
And that's part of threat modeling too, is identifying when, okay, the threat already has whatever the threat wants.
So we don't have to protect beyond this point.
They've already broken the window and gotten in the house.
They know whether or not the human's home.
Now we just have to protect our servers so that they can't do that to everybody.
Right.
Okay, so what was the step after threat modeling?
Okay, so then you can derive the security requirements for your product. So if you want to prevent certain attacks, or you can react to certain attacks, you can mitigate attacks.
In this case, we just talked about serial number.
You want to make sure that it's not easily available remotely for an adversary.
So I think putting a sticker on the light bulb somewhere
where you need to be very very very close to the light bulb
could already be a good way
but then I shouldn't put it in my database
on my server
and use that as a key to organize
my light bulbs because that means that
if somebody has my server they have the keys
to everything
that's ever shipped.
It goes in the key material
as one of the factors.
But there should be other factors as well.
Yes.
And ideally, those are partially company-wide and partially device-wide and partially unique to this device.
But manufacturing constraints and costs may limit what is unique.
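(One way to combine those factors, sketched with HKDF from the Python cryptography package: a perimeter-protected company secret, a product identifier, and the per-device serial all feed the derivation, so the serial alone, or a dumped database of serials, doesn't yield keys. The labels and example values are invented for illustration.)

```python
# Sketch: derive a per-device key from several factors, so no single
# factor reveals the key. Labels and inputs are made up for illustration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_device_key(company_secret: bytes,   # guarded, e.g. in an HSM
                      product_id: bytes,       # per product line / batch
                      serial_number: bytes) -> bytes:
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=product_id,
        info=b"device-key-v1:" + serial_number,
    ).derive(company_secret)

key = derive_device_key(b"\x00" * 32, b"iot-light-rev-b", b"SN0001")
```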
Okay, so I have decided, I've modeled my threats,
I have chosen some things that I'm going to defend against,
I've implemented that in ways that don't open me up to other things,
in ways that are accepted by the industry.
And then I think the next thing I need to do is the independent third-party testing.
Is that the usual next step?
Yeah, I mean, typically you should have a red team early on.
It should be basically once you have a threat modeling done
and you derive your security requirements,
someone else should check whether you have overlooked something.
Because at this point of time, you can easily change your design.
Whereas if you don't do that and you have a product,
you can't change the hardware, the architecture,
it may be very difficult or impossible to fix any flaws that you may have overlooked.
So the red team, ideally an independent third-party evaluator,
should come in very early.
And then once a prototype is there
and your feedback of the evaluator has been implemented,
there should be a testing round and potentially on the final product again.
So that makes a lot of sense to me for things that are infrastructure important.
But it would be hard to sell for my consumer IoT device.
Hard to sell to the people who make decisions about whether to pay for it?
Yeah, because it would be expensive.
I mean, this is not ever going to be a $500 test.
It's not even going to be an FCC multi-thousand dollar test.
It's going to be expensive to get a proper third-party test of security because you have to dig into the details of the system.
Correct.
But this is an ideal case if you aim for high security.
If you have a light bulb and if you do the threat modeling properly,
probably you don't need to have the highest security standards for a light bulb.
But what you can do is collaborate with universities or freelancers who have credentials, who have knowledge in that field.
And that's typically much, much cheaper than working with a proper testing laboratory.
Obviously, you get less, but it's better than nothing,
and it's much better than not doing it at all.
Yeah, that's it.
I'm glad you said that.
That was what I wanted: for you to say, if you can't fork over the big bucks because your team isn't big enough,
that doesn't mean you just raise your hands and say,
oh, well, we can't do third-party testing.
We just won't do security.
That is not the right answer.
What you should do is you should just release your product
and then let people attack it, find out what's wrong,
recall it, and then make the correct one.
You could do that if your brand survives the damage.
Or you just change the brand name.
Yeah.
That sounds good.
But I also think money is not necessarily the biggest hurdle.
I think it's time to market that really suffers
or drives most decisions.
And that's where security suffers
because typically we need a couple of iterations to get it right.
Even if you're a security professional,
even at NXP, we didn't get it first time right.
We had a couple of test chips before we had a real product
because it's very, very hard to do it first time right.
That sounds like much of engineering.
You are going to be speaking at the Hardwear.io conference in September,
which is coming up really soon.
Can you tell me about the conference and what you'll be speaking on?
Yeah, so the conference is a two-day conference
together with two days of trainings.
It's a very practical hands-on focus for both the training
as well as the conference.
Topics are on all areas of hardware security.
And I'll be speaking about different aspects of hardware security,
so challenges for security design in general, or for IoT in particular,
the lack of incentives for security design,
the importance of independent security testing,
the pros and cons of open source, and the importance of reverse engineering.
It's a little bit similar to what we discussed in this podcast.
So basically, we were helping you write your talk.
Yeah, you set the incentive to pull the timeline in a bit earlier, yeah.
Have you seen, and this may be something you discussed, because just listening to you mention the incentives for good design, have you seen over time that the higher-profile incidents and attacks have led to people being more responsible, or at least trying to ask the right questions more often? Or is it kind of still a free-for-all?
So I think, unfortunately, only the really brand-damaging hacks and breaches and publications really changed industry-wide behavior. And the more damaging the publications or results were, the quicker industry reacted.
So I think Charlie Miller and the Jeep hack, even though it's kind of controversial to reveal all these vulnerabilities before industry can fix them, I think that has changed automotive security for the good, dramatically.
And this was the one where a journalist invited Charlie to take over the Jeep he was driving.
Correct.
And then he was a little shocked and appalled at them driving it wildly about the streets.
Yeah. I mean, I think this really made headlines. Everybody heard about it. It had nice pictures, had a nice story. And suddenly, in automotive, people were willing to invest money for security.
Even though cars are very expensive,
it's actually a very, very cost-driven
industry. And if you
increase the price for a few cents,
people will haggle you down
and give you a hard time. And what I
heard is that certain OEMs
from Germany were willing
to increase
the spending for security fivefold just after
this hack.
All the way from a nickel to a quarter.
But it's good that that is talked about because automotive is an industry that tends to move
very slowly. And if we can do better there, then
other industries will maybe be a little bit more
aware that we can't just
keep hoping that somebody else's products are
attacked and that ours are given a free pass.
That's not going to last forever.
Well, Axel, do you have any thoughts you would like to leave us with?
Well, I'd like to thank you for having me on your show.
I'd also like to mention that I'm still looking for a few talented people who are willing to work in hardware security.
So that would be a good opportunity to reach out to me.
What are you looking for?
Embedded firmware, embedded hardware engineers or security researchers?
Yeah, so I'm looking for security researchers: hardware security researchers,
embedded security researchers,
and we have
a cutting-edge lab
and interesting everyday challenges
with projects where we need to do security
assessments, and it's a very
international team.
The lab is based in Abu Dhabi,
United Arab Emirates.
Is it... is it as beautiful there as I think?
Because how would you know what I think?
Actually, it is very beautiful because the offices are right next to the Arabian Gulf.
So there's turquoise water surrounding the office.
The other day we saw a whale shark swimming by.
So it's pretty nice.
And how should they contact you?
They can always email the show if they want, and I will make introductions.
But if they want to contact you directly, is there a good way to do that?
Yeah, it's probably best to just connect with me on LinkedIn and send a message.
Okay. Okay.
Our guest has been Axel Poschmann,
head of the hardware lab at Dark Matter LLC.
He's giving the closing keynote at the Hardwear.io conference in The Hague in mid-September.
Axel, thank you for being with us.
Thanks for having me.
Thank you also to the folks at Hardwear.io for connecting us. There are show notes if you want to know more; you can check them out in your podcast app or on embedded.fm on the web. Thank you to Christopher for producing and co-hosting, and of course, thank you for listening. You can always contact us on that webby thing or at show@embedded.fm.
And now, how about a quote from Neil Gaiman: "I've always felt that violence was the last refuge of the incompetent, and empty threats the last sanctuary of the terminally inept."
Embedded is an independently produced radio show
that focuses on the many aspects of engineering.
It is a production of Logical Elegance,
an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance and listeners like you.