Embedded - 219: Not Obviously Negligent
Episode Date: October 12, 2017

Kelly Shortridge (@swagitda_) spoke with us about the intersection of security and behavioral economics. Kelly's writing and talks are linked from her personal site swagitda.com. Kelly is currently a Product Manager at SecurityScorecard.

Thinking Fast and Slow by Daniel Kahneman
What Works by Iris Bohnet
Risky Business, a podcast about security
Teen Vogue's How to Keep Your Internet Browser History Private
Surveillance Self-Defense from EFF, including security for journalists as mentioned in the show
Bloomberg's Matt Levine
Twitter suggestions: @SwiftOnSecurity, @thegrugq, and @swagitda_.
Transcript
Welcome to Embedded. I'm Elecia White. My co-host is Christopher White. This week, I'm pleased to have Kelly Shortridge to talk about security.
Hi, Kelly. Thanks for joining us.
Hi, thanks for having me on the show.
Could you tell us about yourself as though we had met you at a technical conference?
Absolutely. My name is Kelly Shortridge. I'm currently a product manager at Security Scorecard,
a security ratings company. In my spare time, as much of it as I have, I look at the intersection
of behavioral economics and behavioral game theory as it applies to information security.
So looking at cognitive biases and some of the irrational incentives that we see in the
security industry today.
I have so many questions.
I mean, I love cognitive psychology and game theory, and I'm sure probability is going
to come up.
And these are all things I like, but security isn't something I really like because it's
too hard.
So hopefully you can help me over that hurdle.
Absolutely. I hope so too.
Before we do that, I want to do lightning round where we try to ask you short questions
and have you give us short answers.
And I've totally failed on the questions this week, so we'll see how it goes.
Okay, I'm excited.
Favorite movie or book that you've encountered for the first time in the
last year can be nonfiction or documentary or whatever you like.
That's a good one.
So I actually saw Blade Runner 2049 a few days ago,
and I have to say it was pretty fantastic.
I was really hoping,
I think we've seen some sequels that don't do a great job of living up to the original, but this one I thought was really strong.
It was also just visually absolutely stunning.
Any spoilers?
You want to just go ahead?
No.
So jealous.
I don't think spoilers.
I want to make a good impression on the show.
I don't want to hear your fans angry at me.
Hell with the fans.
I don't want to be angry.
I haven't seen it.
Which is more important, Bruce Schneier's Applied Cryptography or Daniel Kahneman's Thinking, Fast and Slow?
I'm going to have to go with Thinking, Fast and Slow.
I'm a big fan of thinking about really any problem through an interdisciplinary approach.
I think kind of the high level study
of choices is really applicable across domains. So I prefer the macro over the micro, I suppose.
Favorite password or perhaps favorite stupid password?
Favorite stupid password. That's dangerous because I use some of the stupid passwords
just for throwaway things when I'm testing things, which probably isn't a good idea. I have to plug, for sure, any sort of password manager where you can generate really secure, random passwords. That's a must-have for anyone.
I suppose I'm always somewhat amused when I see, you know, in password dumps, when people just kind of enumerate the next digit. Clearly someone has generated a good password for them, and if it's, you know, A, B, C, D, QX, a mix of things, and then the last digit's nine, they just make it 10 or 11 or 12. So I guess that's my favorite stupid one.
My favorite stupid one was introduced to me by my mom, for AOL. And don't worry, it's long gone. But she needed it to be four characters. And so
it was asterisk, asterisk, asterisk, asterisk, because that's what it showed her.
Oh, that's pretty great.
If the three of us robbed a bank and were arrested, then each were told separately that
the first one to rat out the others would get a reduced sentence.
Would you remain silent in hopes that the police didn't have enough evidence and that we remained silent?
Or would you betray us in order to get the shorter sentence?
So this depends on a lot.
Believe it or not, I've actually thought about this a lot.
How would I perform in a prisoner's dilemma?
I think it depends on my role.
I think, you know, how much did I have good OPSEC?
You know, could you tie us together?
Maybe if, you know, we never had any digital trails together, it would be easier to obfuscate.
I think if it were, you know, complete strangers, and let's say I was the one holding the gun, I would probably confess, because it's going to be pretty obvious it was me.
I like how much thought you've put into this.
Yeah.
It's an interesting, and it's also, you know, what role would you play?
I think that's an interesting, rather than personality test,
just ask what would your role be in robbing a bank?
Okay, go ahead.
What would your role be in robbing a bank?
So I think it's the distraction.
I always like the idea of, you know, creating some distractions
and everybody's looking somewhere else, maybe like fainting or something. Just enough time for someone to slip in or quickly hold someone up and only the one person knows, not cause a big scene. I kind of like that idea.
I'm the driver.
Yeah, I know you're the driver.
Would you rather read a book about cognitive psychology or probability? And I think the next question depends on this answer.
Oh, okay. I feel like I've read kind of a lot of both. Ideally, one that combines the two. Let's say I most recently read one about cognitive psychology, so let's go with probability.
Okay. What was the cognitive psychology book you read?
So I read a really interesting one. It's called What Works by, I may butcher the pronunciation, Iris Bohnet. She's at Harvard, and it was about kind of the biases that we have in the workplace as far as gender equality, and what are the solutions to those.
I actually think we've had another guest mention that book. It's a great book.
Okay. So now we have one last lightning round question. Okay. You're on Let's Make a Deal.
And there are three doors, one with the car behind it, and the two other doors have goats.
Which door, A, B, or C, do you choose?
This is, I'm assuming, hinging on the idea that I prefer a car to a goat, correct? Because I'm not sure if that's true. I would say...
Do you get to keep the goats? I mean, they show you the goats. Can you take one home?
I don't think so.
I'm trying to think, you know. I live in New York City.
Cars are really expensive in this city.
A goat could be more useful.
I could use a goat for a lot of things, just helping me carry bags to the subway, lots of stuff like that.
I don't know if you've met any goats.
I have.
I have a little bit.
Yeah, they are really smelly.
And I will give you that.
I'll just stick it on, you know, the rooftop of my building.
Well, with just that information, I guess I'll go with C.
Okay, so Christopher opens door number A, and there's a goat behind it. And we're going to assume you want the car, or that you want the monetary value of the car. Do you stick with your answer of C, or do you switch answers?
I am sticking with it.
Really? But probability says you have to switch.
Okay, let's move on.
I know it does, but I like being edgy. You never know. Plus, if they were pygmy goats and cute, I could...
Plus, she wanted the goat, so not switching is the correct answer.
Exactly.
I mean, the monetary value, I get it.
But that's also a hassle.
You know, you have to sell the car.
Okay, that's probably...
Taxes.
I told you.
So it's similar to the prisoner's dilemma.
It's like, I know what I'm supposed to do,
but I kind of like the idea of challenging that a little bit.
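For listeners who want to check the hosts' math: the switching advice comes from simulating the Monty Hall game. A quick sketch in Python (assuming the standard rules, where the host always opens a goat door you didn't pick) shows staying wins about a third of the time and switching about two-thirds:

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        doors = ["goat", "goat", "car"]
        random.shuffle(doors)
        pick = random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(i for i in range(3) if i != pick and doors[i] != "car")
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(i for i in range(3) if i not in (pick, opened))
        wins += doors[pick] == "car"
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```

Switching wins whenever the first pick was a goat, which happens two times out of three, so probability really does say to switch (goats notwithstanding).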
When I first heard about you, you were introduced as a security expert.
And that's such a big, broad term.
What does it mean to you?
That's a great question.
I think security expert can mean a lot of different things.
Obviously, there are people who are experts in how to build a secure architecture
in an enterprise. There are people who do vulnerability research, more on the offensive side. I would say, if I'm an expert in something, which I find very flattering,
and I hope to live up to that, it would be in thinking about what are the ways how humans
influence the security game, as I called it. Obviously, if you're thinking in a vacuum about just the technology, and if we only had machine driven attacks, it would look very different than
it does today. But really thinking about, you know, simple attackers having their own biases
and what sort of attacks they prefer, and they want to minimize, you know, costs, they'll go for
low hanging fruit. It's thinking through kind of the first principles of what does security even
mean? I guess that's
where I specialize. It's really pretentious, but almost like the philosophy of attack,
I think, or sorry, the philosophy of security, I think is one way you can look at it.
Is it very tactical, what you do? I mean, is it mostly thinking about this high level, or do you deal with the security attacks and the ramifications and the potential loss afterwards?
It can be very practical. I certainly like to think about practical implementations of the theory. I think theory on its own is obviously not going to work very well. I've been trying to find guinea pigs, and found one or two, for some of the behavioral design ideas I have around some defensive security solutions.
That's kind of a work in progress.
But I definitely think, you know, I don't want to just stay at the high level.
I try to actually, for example, in my Black Hat talk, have some sort of tangible takeaways
where people can try it out in their own organization.
I will say I'm not a pen tester or anything like that.
What I do in my day job
is help build security products. So really thinking about what would provide the most
value to someone who cares about improving their security.
Okay. If I want to go to my boss as an engineer, and I want to convince them that they need better security for anything, how do I find the right terms? What do I say? I mean, I don't want them to just start making me change my password every 90 days. That's just tedious and annoying.
Yes. What do I say to them to convince them that, A, security is a problem, and, B, there are solutions that aren't trivial?
Correct.
So I think the why-is-security-a-problem is an interesting one, where I think
a lot of people in the security industry take for granted that not everyone believes that security
is a problem. I think, frankly, there is some evidence, I think Equifax will change things a
bit, that there haven't been enough kind of punitive incentives for people to really care about security. I'd say it's,
you know, you have to care about your users. If you're any sort of customer facing organization,
assuredly in your business goals, you're going to have something about customer satisfaction and,
you know, customer value. And I think sometimes presenting it from the business side of things
can be really useful, not just, you know, it's something we need because it's the right thing to do, but, you know, it can increase
the value to customers, I think is one way to frame it. As far as, you know, trivial or not
solutions, I would also say that changing your password every so often, it's better just to have
a random and complex password. Again, using a password manager to generate, I think is a better
strategy. Say in general, my advice is always eliminate low-hanging fruit.
So implement things like two-factor authentication, use secure protocols, make sure that you have good patching.
You can cover a lot of the basics.
And frankly, a lot of companies don't actually cover the basics.
So that would even be a great first step.
But two-factor, I think that's something that absolutely every enterprise and frankly every individual should have on all their critical accounts.
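As a concrete version of the password-manager advice above: generating a long, random password is only a few lines in any language. Here's a sketch in Python using the standard library's cryptographically secure `secrets` module (the length and character set are just illustrative choices):

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password the way a password manager would,
    drawing from the OS's cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run, e.g. a jumble of letters, digits, symbols
```

Note it uses `secrets`, not `random`: the latter is predictable and unsuitable for credentials.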
Why do you think that companies don't do what's required?
I mean, these giant companies have to have security departments.
They have IT departments.
They have people, you know, chief security officers and that kind of thing.
And yet we still have these gigantic problems that come up, and then sometimes you find out that the attack mechanism was, like you said, something like an unpatched server for six years or something. So is it that they're just paying lip service to it by having this, or is it just so hard to keep track of all the things they have to do?
I think it's a bit of both, honestly. The latter is a very non-trivial problem. I think in a lot of ways we think of attackers as, you know, conducting all these complex operations, and that can be very true on the nation-state side. But I do think in some ways, on the defensive side, there's an asymmetry as far as complexity. Defenders have to deal with so many different products, you know, so many different vulnerabilities, where attackers kind of just need one vulnerability to get in. I think it's a problem of prioritization in large part. Like, what do you end up fixing?
what do you end up fixing? For example, in the Equifax case, it's, do you take a, you know,
server offline, so you can perform maintenance and patch it and everything. But it also means that
your customers don't have access to their, you know, whatever systems they need. So it's that
trade-off between business and security. And I think today, companies don't have a great way of determining what's called the ROSI, the return on security investment. They don't have a great way to quantify that or figure that out. I think also there is a hiring problem in security. I think we could spend the entire podcast just talking about that issue. But I think it's also being understaffed, having too many products that are promising that they'll fix everything.
But it's important to understand security is not just a technology problem.
It's people, processes, and technology.
And if you aren't covering all of those and you don't have a team that has that sort of breadth, it's going to be very difficult to figure out kind of a comprehensive security program.
Like you can't just slap a box on the network and fix everything.
That won't actually work.
We all wish it did.
Yeah, right?
I so wish it did.
So punitive incentives and Equifax.
So far, their solution seems to be to get us to pay them $10 a month in order to make this work better for them.
That doesn't seem very punitive.
No.
Do you think they're going to exist in a year?
I think they will exist.
And I think it's interesting to consider the, I guess it's the big three.
What is it?
I'm blanking on the other two right now.
I don't know if you all remember those.
Experian and something with a T.
Yeah, Experian and TransUnion, maybe?
Something like that.
We can see how important these are to our daily lives.
Right.
They are actually hugely important, but we don't know them.
I think it'll be interesting to see if once all of the kind of buzz and hype has died out, if legislation gets passed. I know
that they proposed it. They obviously had a huge stock market hit. But the unfortunate thing is
it wasn't an opt-in program. For example, if Sephora got breached, I buy makeup from there.
I have an account on there. I opted into that. We don't really have a choice as consumers to opt
in or out from the credit ratings agency. So they're controlling this data. And in some ways, it's a very weird consumer relationship
because it doesn't actually matter very much.
If we're angry, they're still going to have all of our data.
So I think it remains to be seen if, for example,
they change the way Social Security numbers work.
Obviously, that would be absolutely huge
and have huge implications for Equifax's business model.
But that's obviously a non-trivial solution.
It'll be interesting to see if there are any new fines. I think how it, from the regulatory side, it'll be
really interesting to see what happens there. Obviously, from the market side, they did have
a financial hit. And I think that's why the CEO, you know, resigned with his insanely high, you
know, multiple-million-dollar pay package. But right now, that's kind of the worst we've seen. Even Target, you know, they had reduced earnings for maybe a quarter or two, but it didn't hurt them that much long-term. It's just, hacking doesn't really cause that much pain to companies right now. There's stuff like Yahoo, which revealed last week that basically every account had been breached. But, you know, they've already been disintegrated and chopped up, so who are you going after?
Right. So who's culpable for that, and who should be financially dinged?
And, I mean, with Equifax, did you look to see if your information was hacked?
Yes, I did.
And you don't have to say yes or no on that,
but if the answer had been yes,
or if it is yes,
would you have signed up for their- She did answer.
Oh, did she?
Oh, I missed it.
Yes.
Oh, yes.
Okay.
So did you sign up for their freeze?
Do you recommend people sign up for their freeze?
That's a great question. I would say that
credit freezes are important. I think there have been a lot of posts written about kind of the
process to go through that I think are smarter than I can suggest. I think, honestly, I don't
think there's a great solution across the board other than kind of monitor yourself, make sure
that you don't see anything suspicious going on. Unfortunately, there's no silver bullet, as far as, you know, a blinky box to put on the network. There's no silver bullet in this sort of situation. You know, it's this immutable credential that now we can't revoke. So it's pretty dreadful, to be honest.
It reduces the security. It reduces, not sorry, not the security. It reduces the trust in the entire infrastructure.
Exactly.
And that trust means that our credit and our finances are less trustworthy all over.
I mean, if Chris and I decided to go on a huge vacation and spent a huge amount of money and really had a great time and then came back and said,
oh, no, that wasn't us.
Right.
Other than our social media pictures indicating it was, there's just not trust anymore.
Right.
I think that's a really interesting problem.
And there's some good thinkers, or at least an analogy I see is, for example, Matt Levine,
who's a finance blogger or journalist, has talked some about how a lot of the financial system relies
on trust in the financial system and trust that it won't go under and that you can almost view
banking crises as an erosion of that trust. What's interesting is we kind of continually have this erosion, and it's something that we work so very hard to avoid on the financial side, because it collapses the whole system. So to me, it's like, what happens in this kind of, you know, online ecosystem when trust collapses? Is there a point where everyone assumes everything is hacked, everything is being looked at by someone else? How does that change our society, at this point, given that everything is online, basically? Those are the sorts of things I think about, you know, when I can't sleep at night. Or rather, I guess they keep me from sleeping at night.
Okay, so Equifax is hacked, everything at Yahoo's hacked, Target's hacked, everything's hacked. And, I mean, at some point do we just give up and say, how do we get beyond this?
Okay, I can't even feel it.
This feeling of I don't have any pathway from here to create trust for my users and I don't have any trust for anybody else.
Yeah, that's a great question.
I personally think people have talked about the blockchain as a solution.
I don't think that's a solution. I don't think that's something that will fix things.
I'm talking in my upcoming talk about how, I think in some ways, we're going through the stages of grief in security. If we look at security over the past 15 years, you know, I think for a while we had the bargaining idea, where we're like, okay, maybe, you know, if we just put the, again, the box on the network, everything will be fixed, we'll avoid the problem. I think right now we're pretty solidly in the depression phase, where it's like, well, why does it even matter what we do? We're just going to be hacked. But the acceptance phase, which I'm hoping, you know, to help kind of move us into, as much as one person can, that's where the concept of resilience comes out,
which is really the idea that you don't want to rebuild exactly to how systems were after you've been compromised, because clearly there were vulnerabilities in that system.
It's similar to, for example, in Christchurch with the earthquake, they're not rebuilding it the exact same way as before.
They're rezoning, they're using different materials.
It's taking a more, I guess, a transformative approach to make sure that you're less vulnerable going forward.
And I think that's
what the industry needs. It's, you know, accepting that there is that risk, it will probably happen,
but it's how can you reduce the severity of the impact? And how can you reduce the likelihood?
The thing I talk about a lot is, again, raising the cost for the attacker.
like a lot of people get so focused on these kind of like really small probability attacks,
and that's part of cognitive biases. But they don't focus on the fact that even sophisticated attackers will use really simple methods like spear phishing
if they can. So the goal is just to eliminate the low hanging fruit. And you've actually done a
decent job in at least raising the type of attacker that needs to be able to attack you.
So I think it's kind of moving through those steps, accepting that there is a risk,
but there's ways that you can mitigate it as the next step as far as an industry. I want to talk about the people aspect,
but I also want to go back to the resilience and hardening aspect, because you talk some about
not hardening and instead focusing more on resilience. And we heard some of that from
Gretchen Bakke when we talked about the U.S. power grid, about how it becomes brittle if you harden it too much.
Instead, you need shifting responsibilities and shifting areas, an architecture that deals well with change and differences.
Is that similar in security, that we need an architecture that's more able to deal with differences?
I think absolutely.
One thing that I've talked about a few times in my talks is, if you're familiar with Netflix's Chaos Monkey, it's really testing the performance of the system, and it's kind of like injecting failure into the system to see how it responds and make sure that it's still performant. But it also, on the security side, reduces the ability to persist,
because if you have this fluctuating infrastructure,
it's very hard for attackers to figure out, you know,
one, how to time their attacks, but then also how to persist.
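The failure-injection idea is easy to sketch. This toy (not Netflix's actual tool; the service names and kill probability are made up for illustration) randomly "kills" services each round, and the caller is written to survive it by failing over:

```python
import random

def chaos_round(services, kill_probability=0.2):
    """Randomly 'kill' services, as a toy stand-in for Chaos Monkey.
    Returns the set of services that survived this round."""
    return {s for s in services if random.random() > kill_probability}

def handle_request(alive, preferred, fallback):
    """A resilient caller: fail over when the preferred instance is down."""
    if preferred in alive:
        return f"served by {preferred}"
    if fallback in alive:
        return f"served by {fallback} (failover)"
    return "outage"

services = {"api-1", "api-2", "cache-1"}
alive = chaos_round(services)
print(handle_request(alive, "api-1", "api-2"))
```

Because the failures are random and continuous, any code that quietly depended on a single instance gets flushed out in testing rather than during a real outage, and the same churn denies attackers a stable machine on which to persist.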
A few of the other ideas from my Black Hat talk, for example,
a lot of malware is written, so it looks for artifacts of virtual machines
because then it will think that, you know,
maybe the FireEye
virtual machine is looking at it and analyzing it. So if you can put those sorts of artifacts, like, you know, just the executable for VirtualBox, on a physical machine, like your normal computer, then it's likely that the malware won't run, because it has very simple rules like that.
So it's doing things like that, that aren't necessarily intuitive, but that really help with the idea, again, of, even if attackers are changing their methods, what are the easy things you can do to change over time?
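This trick inverts a common evasion check: much malware looks for VirtualBox or VMware artifacts and refuses to run if it finds them, so planting those artifacts on a normal machine can trip that logic. A minimal sketch of both sides (the two file names are real VirtualBox guest-additions artifacts, but the check and decoy routines here are illustrative, not a vetted hardening tool):

```python
from pathlib import Path, PureWindowsPath

# Artifacts that VM-aware malware commonly checks for (illustrative subset).
VM_ARTIFACTS = [
    r"C:\Windows\System32\drivers\VBoxGuest.sys",  # VirtualBox guest driver
    r"C:\Windows\System32\VBoxService.exe",        # VirtualBox guest service
]

def looks_like_analysis_vm(paths=VM_ARTIFACTS):
    """The kind of naive check evasive malware performs before detonating."""
    return any(Path(p).exists() for p in paths)

def plant_decoys(target_dir, paths=VM_ARTIFACTS):
    """Defender side: drop empty files with the same names, so the naive
    check above concludes it's running inside an analysis VM."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    planted = [target / PureWindowsPath(p).name for p in paths]
    for f in planted:
        f.touch()  # an empty file is enough to fool an existence check
    return planted
```

Real samples also check registry keys, MAC address prefixes, and running processes, so the decoys would need to cover those too; the point is just how cheap the deterrent can be against simple rules.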
But the infrastructure part is what I particularly harp on, because you have these legacy systems.
Yes, you can keep patching them.
You can do something retroactively.
But the fundamental problem is you can't do kind of the necessary security hygiene you need to do.
And it's still difficult to patch.
You can't migrate your data easily.
It's very hard to adapt over time.
So as much as you can implement technology
that makes it easy to move on from that technology
or adapt it easily, fluctuate it easily, the better.
I think that's a good point is that
not only the attackers change,
but the defenders should continually change
and evaluate things and fluctuate, as you say.
Because when I was working on something a long time ago,
doing authentication,
trying to keep counterfeiting from happening.
For consumables.
For consumables.
It's not a consumer product,
but a consumable part of an expensive thing.
One of the things management would always come down is,
well, why haven't we just solved this? Why can't you come up with, you know, just finish this and
then they'll stop attacking us and we can stop working on this. And I kept trying to explain
to them, no, we're going to have to be in this cat-and-mouse game for a while. We're going to have to keep up with their evolving methods, because they're not going to stop, because the incentive is too high. And that was really hard to convey.
Absolutely.
Yeah, I think resilience, I mean, in any domain,
I think resilience really comes down to how can you have a sustainable, adaptable process.
And I think we've seen that transformation in other industries,
even climate change, for example.
People aren't thinking so much from the preservation context,
how can you kind of, in some ways,
it's like securing this little ecosystem to help preserve habitats. It's how can you help, you know, for example, animals or plants or whatever else migrate to their ideal conditions, because that's what they naturally do in the wild. It's
about that transitioning and adapting, not just how do we keep this kind of isolated little chamber
exactly the same so something can survive. Okay, now my questions are about island biodiversity, but I'm going to go back to Chaos Monkey instead.
So for people who don't know what that is, Netflix has this process that basically injects bad things into its other processes and it kills random things and it runs on their production servers. So this isn't testing. This is, we're going to randomly kill things in the
wild so that our stuff works always. Is that why the movies I want to watch keep disappearing?
I don't think so. But after the chaos monkey, you mentioned faking things out and trickery.
I love the idea of trickery and how it can be used to make attackers, to baffle attackers.
Exactly.
You mentioned artifacts from virtual machines.
What other forms of trickery do you suggest?
Absolutely.
So just backing up a second, I think a lot of it, considering the way attackers actually
conduct their attack, a key part of it is profiling.
And to keep a very long explanation brief, though, I have, if you go to my website, I
have some talks around this.
Attackers tend to be pretty risk averse.
They want their attacks to obviously work and not be caught.
So profiling is very important, so they can be assured that their attack will actually work.
If you increase that uncertainty and you mess with their ability to profile, it becomes very difficult. So kind of the, if you want to put it as a catchphrase that I used during Black Hat,
it's putting wolf skins on the sheep, like make something look like a hostile environment for
attackers, they're more likely to be deterred. Another example, allegedly the Macron campaign in France for the election used fake email
addresses when they discovered that they were undergoing a phishing attempt.
They put in fake email addresses.
They had fake documents.
So using things like canary tokens in the form of documents that look like something
very scandalous or outrageous, but is actually fake. What will happen with a Canary token is it will actually alert you when
someone has accessed that file and you know, okay, someone's rummaging through, but it's basically a
decoy. So the idea is, again, put a bunch of fake data for the attacker. If they have to sift through
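A minimal sketch of the canary-token idea (the domain, token format, and in-memory alert list are all made up for illustration; real services like Canarytokens generate the beacon URL and send the alert for you):

```python
import secrets

ALERTS = []  # stand-in for an email or pager notification channel

def make_canary_doc(title):
    """Create a decoy 'document' embedding a unique beacon URL.
    Opening the document fetches the URL, which identifies the token."""
    token = secrets.token_hex(8)
    body = f"{title}\n<img src='https://canary.example.com/t/{token}.png'>"
    return token, body

def on_beacon_hit(token, source_ip):
    """Called when the beacon URL is fetched: someone opened the decoy."""
    ALERTS.append(f"canary {token} tripped from {source_ip}")

# Plant a juicy-looking decoy; any access to it is, by definition, suspicious.
token, doc = make_canary_doc("Acquisition plans - CONFIDENTIAL")
on_beacon_hit(token, "203.0.113.7")  # simulate an attacker opening it
print(ALERTS[0])
```

Since no legitimate user ever touches the decoy, the false-positive rate is near zero, which is what makes this cheap tripwire so attractive.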
it, it wastes their time. Another example, thinking about embedded devices is the idea of spoofing what their own profile is. So if you're thinking about all the different IoT devices, if you make an IP camera look like it's a TV or a TV look raises the bar for them to figure out what it actually is, the better. A lot of times, if it's
just people trolling around on Shodan looking for something, you know, to attack, or if they have
some sort of, like, wormified attack where they're really looking for commonalities and indicators
that it's, you know, the right thing for whatever their repeatable attack is, then you can help
deter that, which I think is... I think it's cool to mess with attackers, because they mess with us, you know.
So part of me is like, oh, that would be so fun. And the other part of me is like, okay, now you have a hundred devices in your house, and you don't know which is which, because they're all randomly talking about things that don't even make sense.
Yeah, there's definitely a trade-off there.
And so that makes it harder for the blue team to do half their job.
Because a lot of times the defenders, the blue team,
their job is partially the information technology,
keeping everything running, keeping everything healthy,
finding devices that aren't working and fixing them.
And then as part of their hobby, sideline, 10% job,
it's defending all of that against people who spend their whole time attacking.
Is this a real, I mean, how do we convince our society, our technology groups, that we need specialists for defending the sheep herd, the sheep?
That's another great question.
It, again, rests on the idea, it sounds a little edgelordy, but does security actually matter?
And why does it matter?
If it's not financial costs, then what is it?
I think for companies with compliance, it's a lot easier.
I would say in general,
one thing that I think is missing in a lot of enterprises
is that attack experience.
And it's something that I think we need to encourage a lot more.
There aren't, it's not just criminals
and it's not just good guys.
It's a very, I think it's far too black and white
of a way to look at the industry.
Some of the best defenders I know are former,
quote unquote, bad guys or black
hats.
And I really think it's trite at this point, but the, you know, you have to understand
how attackers actually attack to defend properly.
One thing I always recommend, regardless of the organization, is really thinking through
what are the assets you care about?
Like, what do you actually want to protect?
And what are the ways attackers can get into those and really thinking creatively. And it turns out people across kind of organizations
have a lot of fun kind of thinking like an attacker. But unless you're doing that, you're
just going to be defending potentially everything on a like a mediocre level of security, but maybe
some things you don't need to defend because they're not mission critical. But then customer
data where you face regulatory fines, you need to put kind of all your assets on defending that.
So I think it's that sort of perspective where unless you have someone that kind of understands attackers or at least performs that exercise, it's going to be very difficult to implement proper security.
I don't know how I would go about getting that experience without feeling like I was breaking the rules.
I know a lot of people don't have that problem and that's me.
They like to try out the breaking the rules first.
Do you come across people like me who are like, well, yeah,
I just want to defend because breaking the rules is bad?
To be honest, I think you may be the first person who's told me that.
I have to say, most of the time, when I've done some consulting and stuff, people really embrace the idea of, oh, how can I be an attacker? You know, it's toward a good end. And I think people in general like the idea of being on a red team, and obviously a blue team, because they do like the idea of, you know, we're defending something. But I think a lot of people really embrace the idea of bringing out their inner bad-guy thinking. I'd say, I guess, if you don't want to break the rules, think about environments in which it's sanctioned. For example, capture-the-flag competitions, where that is the goal, and you're not actually breaking something that's in production. I think that's one way to go about it. There are different kinds of games online that can help you.
But I would say that's not a problem I encounter very often.
Good, I guess.
Yeah.
We got a question from a listener that's a good one. He mentioned that he keeps hearing about how we should not rely on routers and firewalls to protect our internet of things things, and that we need to design security more into the applications, into our devices, which often, I mean, they have no extra processing power, they have limited amounts of RAM, they have no available RAM for giant blockchains or things like that. How do we make our devices better so that we don't end up on the cover of Wired?
Right. I think ending up on the cover of Wired generally means someone was able to create a wormable attack, some sort of botnet, because every single device shipped with the same password, and you end up on the cover of a magazine with people pointing their fingers, and you get a Pwnie for, you know, most ridiculous breach of the year. Pwnie awards are hacking awards, basically, at Black Hat, the big conference over the summer. I would say, anywhere you can, forcing users, before they can actually use the device, to change whatever their password is. Making sure that, for example, routers these days are coming out where, when they ship the router to you, it has a unique password rather than just the same one for everything. I think it's honestly basic stuff like that that will help keep you off the front pages,
but not everyone does that. I think the bigger challenge, and I don't have a great answer for this, is patching. The problem, I think, with so many devices is you can't actually patch something once there's a known vulnerability. So trying to, again, think through things like that is
really important. And then there's the issue of, you know, there's not as much memory protection, obviously. You said there's not a lot of processing power, and that makes it very difficult to implement some of the cooler device-hardening things you can do. But I think it's similar to the idea
of like using two factor authentication, honestly, would solve a lot of problems in the enterprise,
and not every enterprise uses it, but it really does raise the cost of the attacker. Again,
if you don't have kind of the homogeneous passwords, if you don't have kind of default
configurations and backdoors,
things like that, you're doing a lot better than a lot of people. I think it's also, you know,
there are different services at this point that are also looking at vulnerabilities and open source
libraries, like being really careful about stuff like that, not rolling your own crypto or trying
to be fancy and creating your own protocols, like looking at well vetted libraries and other things
you can use, I think is another good step. It's kind of, I guess, be a little humble about your
ability to create, you know, secure systems yourself. That's all really good advice. And
some of that sounded like you don't have to be faster than the bear, you just have to be faster
than the other people. A bit, yeah, for sure. Yeah, it's unfortunate. I mean, hopefully
that will one day change, but I really think it's true that if you can just avoid kind of the basic
obvious stuff, then you're doing pretty well. That's unfortunately the state we're largely in.
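Kelly's point about shipping each router with a unique password, rather than one shared factory default, can be sketched at provisioning time. This is a hypothetical illustration of the idea, not any vendor's actual provisioning flow; the wordlist and label format are assumptions:

```python
import secrets

# Hypothetical provisioning step: give every unit its own random
# default password instead of one shared factory default.
ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"  # skips ambiguous chars like l/1, o/0

def per_device_password(length: int = 12) -> str:
    # secrets (not random) makes cryptographically strong choices
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def provision(serial: str) -> dict:
    # The unique password would be printed on the unit's label,
    # never derived from the (guessable) serial number itself.
    return {"serial": serial, "default_password": per_device_password()}

label = provision("SN-000123")
print(label["default_password"])
```

The important property is that compromising one unit's default reveals nothing about any other unit's, which is what breaks the wormable everyone-has-admin/admin pattern described above.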
You mentioned password managers, and I am embarrassed to say that it wasn't that long ago that I started using one.
I found it to be very useful, and for reasons that I didn't expect.
So I use 1Password, and I put in my login, and then I have it generate a nice long password.
And then I put the new long password into whatever I'm logging into.
And I thought this would be a huge pain because it would mean that I had to have this program around all the time.
And it would mean that all my passwords were different.
So if I didn't have the program, there was no way I was going to get in.
And I just had all of these preconceived notions about how it was going to make security harder.
And it turned out that it made security a lot easier because I'm not even trying to remember my passwords anymore.
And it didn't take that long to switch over.
I just did like one every day.
Every time I logged into something, I changed the password, and it was all done right.
Yep.
I noticed that when I got my Apple iPhone with the fingerprint security that I
wasn't typing in my password nearly as often but I felt like it was easier and definitely somebody
couldn't walk up and just look at my stuff on my device.
I even changed my timing so that it would lock faster because it wasn't so hard to type in my password.
Are there other things like this that make it easier for users to not have to remember things and not have to be so aware of security?
Things that make it easier?
That's a really good question. I would say in general, security does have the problem where
it's not really designed to be user-friendly. I personally think we've gotten to the point where,
you know, two-factor codes are now push authentication. I think it's pretty simple
to use. It used to be, I think, a lot more of a pain. But I would say two-factor at this point,
like it's such a minimal inconvenience. I personally think, you know, you can now get your codes offline. It doesn't require you to have a connection, so there's not necessarily a great excuse. I won't pretend that it necessarily improves convenience in any particular way, but I think the inconvenience is definitely minimized. You know, there have been problems, for example,
like Signal, which is a more secure app
because it's end-to-end encryption.
It had a lot of usability problems for a while,
but I would say like a lot of my friends
who are outside of security
are now able to use it
because it has become more user-friendly.
It recently implemented things around social sharing
where it's, you know,
you can have your profile picture,
whatever else.
It doesn't necessarily require, I believe at this point, a phone number in order to work. I could be wrong on that, but I think that was a recent change. So in some ways you can view it as, okay, it's just another app that you can use. It's not like if you've ever tried to set up PGP, for example, for your email; that's a lot more complicated to use. It's now just in an app.
Yeah, exactly. You can do group messaging. It can be fun.
So I think we're slowly getting to a point where security is becoming usable.
I think other than the examples you gave, we have a long way to go before security is
actually increasing convenience.
But I really hope that we do get there.
So PGP was this thing where I would post my public key, and then we would use email, and you would be able to encrypt emails to me, and I would be able to decrypt yours.
And there were servers and if you trusted the server, it was all very, very complicated.
And I work in this industry.
I'm okay with a little bit of complication.
That was not very useful complication. Correct, yes. What other things
are like that, the opposite of good and easy? Honestly, I think a lot of security products
out there right now are a lot like that, even for security practitioners who are trained in this industry and understand it well. You know, I did an informal survey where security practitioners reported the number of maintenance hours for their software every month. And some of the most used
software, for example, SIEM systems, security information event management systems require
at least 30 hours of maintenance every month, which is certainly not the just put the box on
the network idea. A lot of times the interfaces are super complicated.
There's tons of configuration required.
You can just look at the public filings of some of the security companies.
Their services revenues are sometimes more than their product revenues.
So it's a big mess.
Security in general is just not focused on usability, even on the enterprise side.
So I think in some ways the consumer side is leading the way.
So I think there are so many examples, honestly, from the enterprise side of security in that vein. On the consumer side, I think honestly the bigger problem right now is there's just so much snake oil. If you go on Kickstarter and just search for security or privacy, it's honestly sort of a nightmare. And you look at some of the backgrounds of the people, and it's just not the sort of backgrounds you want of people trying to build security systems. And unfortunately, I don't think it's up to the consumer to know better. I think that's a ridiculous assumption. It's no different than if you're getting pharmaceuticals, whatever else; you're relying on an expert to help you. But in security right now, they can't. There's not the equivalent of, oh, go to your local security expert and ask them which
of the 10 million types of antivirus they should get. They're relying on mostly like paid marketing
articles online or whatever else, just bigger brand names. It's a pretty big mess. So I think
it's, there are too many examples to count, I guess is how I'd put it. With the enterprise security software,
does this level of complexity make people feel more secure? I mean, it's clearly not real,
but is it partially bad UIs because that's what people think will make it harder for other people
to attack them? I've wondered a lot about this, whether it's on the part of the vendors trying to make the product
look more complicated as a proxy for communicating value, or if it's that the practitioners want a
bunch of complexity for some unknown reason. My guess is it's probably more of the former,
just because I can't imagine that practitioners really want complexity. And certainly, the work that I've done in the space
has implied that they don't want complexity. Obviously, everyone wants the ability to drill
down and have lots of functionality. But some of the dashboards that you see for the enterprise
software, even just managing devices, every possible functionality is just on the dashboard.
And you have to click through 5 million things. The font is, you know, five point font. It's pretty terrible.
So I think it's largely on the other side that increased complication makes
something look better. I do think that's changing.
I've seen a lot of particularly startups recently that have really gone
towards how can we make security simple for even,
let's say you're hired straight out of college,
you haven't had a lot of experience.
How can you easily kind of manage your security? Or even I think we're seeing
that a lot of, you know, outside of the security organization, people are buying security products.
Now think about engineering, think about you all like, as you said about PGP, you're obviously
technical, but you don't want to have to go through a million different steps just to send
an encrypted email. I think it's similar on the enterprise side. Engineering organizations care about security, but right now security products are pretty
inaccessible to them because you have people being hired just because they know X sort of
brand product in the security team. And I think that's untenable. And I think that will drive
a lot of changes in the industry as far as user friendliness of the interfaces.
How do I become better at security?
Both as a consumer, because I am trying,
but also as an engineer who needs to be better educated about these things
so that I can implement them better in my own products.
A big part of it, I believe, is understanding the realities of attackers.
So for example, there are very real attacks against, let's say, really low-level firmware things, but that's realistically unlikely to be the attack vector people take against you unless you're a very high-value target. It's probably more likely that they'll send a phishing email
with some sort of malicious attachment or link. I think it's, I don't propose that there's really a great way to discover what
are the most likely attacks other than, you know, looking at some of the websites around that have
it. I think it's a lot of just being better at security is just questioning and being careful,
checking your assumptions about, you know, levels of trust, because we don't, as we discussed
earlier, we don't really have a system that has great embedded trust at the moment. Doing things, for example, like two-factor
authentication, having password managers looking at some of the really good guides online, for
example, security for journalists. You don't have to implement everything, but if you can at least
implement some of those steps, you're going quite a long way. But I think it's easy to become
overwhelmed by the number of attacks and the
number of things you can do and the number of things to worry about. I think it's, it's
unfortunate, we don't have a great way of communicating that to kind of the, you know,
average consumer in the US. But I think it's, if I had to give a few things, it's like a two factor
authentication, like some sort of ad blocker, because there's so many malicious ads at this point, password manager, and then just being really careful about what you click on and open.
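Since two-factor authentication comes up repeatedly here, a sketch of how the offline codes mentioned above actually work may help: they are typically TOTP (RFC 6238), an HMAC over the current 30-second time window, which is why no network connection is needed. A minimal stdlib version, shown with an RFC test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, t=59, digits=8))  # → 94287082
```

Both the phone app and the server run this same computation over a shared secret, so the codes agree without either side contacting the other.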
Even if you're not sure about something, open it in like a virtual machine.
I think most of the engineers could probably manage that sort of thing. If you're like,
not quite sure, that'll at least help a little bit, because then they have to have a virtual
machine escape.
Or open it on somebody else's computer.
Yeah, exactly. You know, put it on someone else. It's transferring risk, right?
Your message of resilience and of people, do they go together? How do we build systems that, oh God, how do we build systems that are idiot-proof? I was like halfway through the question and realized what I'm asking.
That doesn't seem possible.
How do we build systems that are resilient to people's cognitive biases?
Because people are going to click on those links, and not everybody can set up a virtual machine.
And sometimes you trust your mom to send you things, and you don't realize she's forwarded them from bad, bad people.
Yep.
So I would say at that point, it's, you know, just talking about personal security, there are a few programs, like VeraCrypt is one, where if you have really sensitive files, let's say tax returns, whatever else, make sure to store them in some sort of encrypted drive. Thinking more on the
enterprise side, I think every enterprise should assume, okay, someone's going to click on something
stupid, how can I make sure all my data doesn't go out? So it's thinking about how do I have,
you know, privilege separation. So, you know, if a typical user gets hacked, making sure that at least the attacker has to fight really hard in order to get privileged access to whatever they need. Thinking about network segmentation, making sure that, say, the sales group is on a different network than the really, really sensitive data. Different layers of trust. So again, even if you get access to
one person's machine, making sure that they have to have some sort of out of band authentication,
again, two factor, whatever else in order to access kind of the next level secret stuff.
And then also, obviously, monitoring is a big part of it, making sure nothing's like
leaving once it's accessed. So I think it's kind of these layers of protection,
assuming, honestly, assuming that nothing can be trusted and how do you operate in a completely
non-trust environment, I think is a somewhat of a good start. There's really good research out
of Intel about kind of like trust architecture, like how do you implement kind of like, it's the
idea of step up authentication. How do you deal with differing layers of trust, depending on,
you know, who the person is from where they're accessing and so forth. And I think it's really thinking about it that way.
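The step-up authentication idea attributed here to research out of Intel can be sketched as a policy check: the more sensitive the resource, the more authentication factors a session must have presented before access is granted. The tier names and factor names below are illustrative assumptions, not Intel's actual scheme:

```python
# Illustrative step-up authentication policy: each resource tier
# demands a minimum set of authentication factors from the session.
REQUIRED_FACTORS = {
    "public": set(),
    "internal": {"password"},
    "sensitive": {"password", "totp"},       # out-of-band second factor
    "restricted": {"password", "totp", "hardware_key"},
}

def access_decision(tier: str, session_factors: set) -> str:
    """Return 'allow' or name the next factor the user must present."""
    missing = REQUIRED_FACTORS[tier] - session_factors
    if not missing:
        return "allow"
    return "step-up: " + sorted(missing)[0]

session = {"password"}
print(access_decision("internal", session))   # → allow
print(access_decision("sensitive", session))  # → step-up: totp
```

The point is that compromising one password gets an attacker only to the lowest tier; each layer up demands something out of band, which matches the layers-of-trust idea in the answer above.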
A key thing also is every time you add a piece of technology, including security software,
security products, really think about how can that technology be used for evil, so to speak.
So for example, antivirus, obviously, the Kaspersky news is a really big case. But antivirus is
essentially kind of like a root
kit because it has such crazy level access into the system.
So you really need to think about every time you're adding a technology, what new attack
services it's adding as well.
And how long it will take to maintain it.
Exactly.
You run out of time eventually.
And then now you have antivirus software that is unpatched itself, which, as you say, is a rootkit.
Yeah. So you have to be careful not to add so much obfuscation that you, as the defender, can't even tell that you've left giant holes in your wall.
Absolutely. I think asset management is one of those things. There are so many dirty secrets of security, but one of them is that very few enterprises know exactly what data and what devices they have on their networks.
There aren't many great solutions for them to do so.
It's obviously a very time-consuming process.
But again, it makes it very difficult to figure out what to defend. It's also all part of this idea that you're just adding complexity into your system, and not really doing what I call going back to basics: just understanding what you have and iterating from there. The other problem is, a lot of times you're adding stuff, for example security software, and maybe if you're a global organization, you add it in one location, and then that person leaves, and then no one thinks to remove it from that location.
There are all sorts of problems around just adding complexity and never removing it.
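The add-it-and-forget-it failure mode described here, software installed in one location by someone who then leaves, is at bottom an inventory problem. A toy sketch of the back-to-basics idea: record an owner and a last-reviewed date for everything added, and flag anything orphaned or overdue for review. The field names and thresholds are assumptions for illustration only:

```python
from datetime import date, timedelta

# Toy asset register: every piece of added technology gets an owner
# and a review date, so nothing silently outlives its justification.
inventory = [
    {"name": "SIEM-collector-eu1", "owner": "alice", "last_review": date(2017, 9, 1)},
    {"name": "legacy-av-gateway", "owner": None, "last_review": date(2016, 2, 14)},
]

def needs_attention(asset, today, max_age=timedelta(days=180)):
    orphaned = asset["owner"] is None                 # the person who added it left
    stale = today - asset["last_review"] > max_age    # nobody has re-justified it
    return orphaned or stale

flagged = [a["name"] for a in inventory if needs_attention(a, date(2017, 10, 12))]
print(flagged)  # → ['legacy-av-gateway']
```

Even a register this crude would surface the orphaned box in the other office before it becomes the hole in the wall.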
And so you just have this kind of downward spiral into not knowing at all what's going on in your own network.
Yeah. And that goes along with the feeling of being overwhelmed and depressed, and not able to fix it, because it's just all too much and security is too hard.
Exactly. It's just, how can we check the boxes to make sure that we're CYA-ing as much as possible, so it looks at least like we did everything we could in the event of a breach?
Not obviously negligent.
Exactly. Yep. That's the goal.
So how do I keep up to date? I mean,
my field isn't security, but I want to be able to read a blog here or there or keep a book
on my nightstand to read occasionally. Do you have suggestions for how I keep up without it
becoming my full-time job? For sure. So there's another podcast that's really good. It's similar to this, but for
security called Risky Business. And they generally cover kind of all the major news stories of the
week as far as topics. The way I keep up to date is, unfortunately, Twitter, because that seems to
be, I think, half of security jobs sometimes are just tweeting about things and attempting to be
a thought leader on it. But it's a great way, like I always find out kind of about incidents and
other information through Twitter first. Wired is, I mean, there are a few publications like
Wired and I'm trying to think of, they're the ones that come to mind most immediately. There's
obviously like Computer World, kind of the standard publications that have their own security
sections. I think that's a good way to keep up. I would say like there are
some of the big conferences, for example, again, Black Hat. There are some internationally; I love TROOPERS, for example. Just checking out some of the talks from those and seeing what seems interesting as far as titles and reading through them. There's probably a lot
of stuff you don't necessarily know. But the way I kind of learned the industry is looking at those talks, figuring out, okay, this is kind of, I like what the talk's talking about. I don't
understand all of this. What are some of the keywords I can look into and really making it
a fun learning process. So it's maybe follow. So I'm trying to think of some of the key people to
follow. You know, the Grugq is probably one of the more famous security tweeters. There's SwiftOnSecurity as well, which is like a parody account.
There are a few of the kind of big accounts, which I think there have been a bunch of lists as far as who to follow for security on Twitter.
But if you follow just a few of those people, you're probably going to get most of the news that happens in security over the course of like a given week or month.
And SwiftOnSecurity.
If you're listening, which I know you do occasionally,
you're always welcome on the show.
We have voice changing technology.
Yeah, I've heard about this neat stuff.
Okay, but back to Kelly.
What are you most excited about for the future?
Do you think this is all going to get better,
that we're going to get to acceptance and it's... I think you have to think that, but...
Yeah. Yeah. You have to believe. I want to believe.
Yeah, I want to believe. Do you see a path there?
Yeah, that's another thing I think about a lot. I think it is there. There is a path there. I
think it's going to be a very hard-fought path. I mean, some of the most brilliant people I know on the offensive side, where they used to break things, are now building things. And that really excites me, because they clearly have thought about how to block the ways that attackers normally attack.
I'm really excited about some of the infrastructure changes
we're seeing and the idea of, again,
this kind of resiliency concept,
you know, more resilient infrastructure,
less, you know, static kind of almost,
if you think about it, it's like heavy infrastructure
that requires that retroactive patching and everything.
And it's difficult to take offline, all of that.
I think some of the new infrastructure,
like containers, I think that will help a lot, even without necessarily a ton of security stuff on top. I just think it'll make it easier to, I guess, embed security into the environment and make it a lot less of a headache, I suppose. I think also I'm excited because, talking to some of the students, sometimes they, you know, reach out to me after conferences or at the conferences themselves.
They seem a lot more excited about the idea of building and really starting things.
I almost feel like, you know, security was this kind of, you know, it's almost like how the joke is that sysadmins kind of hate their job and they always have to deal with tons of BS, all of that.
I feel like people are actually really excited about the idea of defending now.
And it's almost becoming as cool as the idea of being a hacker.
So that really excites me.
It's kind of what the next generation is going to do and kind of the fun they're going to
have on the defensive side.
So I think a big problem is that defenders kind of feel like, oh, God, we're never going
to be as good as the attackers.
There's nothing really we can do.
They think, I'm not going to say it's all attitude.
It's definitely not all attitude, but really starting to think creatively, as creatively as attackers about how to defend,
I think would be a really cool result. And I feel like that's going to happen. I would say that's
one of the things I'm more confident in: we're going to start to see a lot more creative defense, whether tools, designs, mitigations, whatever you want to call it. So I'm excited
about that. So at the top of the show, you mentioned, or at least you mentioned that you were
involved in behavioral economics as part of your work. Can you define that or talk about
how that relates to all of this? I know that's a huge open-ended question, but...
Yeah. So the reason why I decided when I was 11 that I really wanted to study economics, and economics became, from my 11-year-old view, the best thing ever, was that it's the study of choice. And behavioral economics looks at how people actually make choices, not just assuming how people make choices or theorizing about it.
It's actual empirical studies, which obviously there are flaws in actual studies and you have to verify findings.
But I love the idea that you're actually measuring how our brains think, all of the quirks that evolution has kind of created because, you know, it's how we were able to survive, but it hasn't really caught up to, let's say, the information age or the modern age.
So I think all of that is really neat.
So the reason why I think it's important as far as information security is because on both sides, you have humans.
It's humans developing the attacks, humans defending. And you can't ignore the way human brains work in either of those cases.
Again, it's not just about machines. So perfectly logical, you know, again, traditional game theory,
assuming everyone's rational, that's not going to work. And I think it's so readily apparent that
there are tons of inefficiencies and weird quirks, if you want to put it that way, in the security industry, that that's why it makes it really fascinating to study the biases and
figuring out what about human behavior is driving those kind of like inefficiencies and kind of
irrational things that we see going on. You've talked some about defenders being
very risk tolerant and attackers being risk averse.
So the attackers will go for low hanging fruits and will do easy stuff.
And the defenders will take huge risks because they think they're protecting things when
they aren't looking at that easy to get stuff.
What other, I mean, that's a very, gosh, I don't remember the name,
I want to say provost problem.
Do you remember the name of that?
No.
All right, well, maybe we'll cut this little part.
That's a well-known problem in cognitive behavior
where people look at what they're doing and decide their risk profile
based on whether they think they're winning or losing.
Right.
And winners tend to be less assertive, less risk tolerant.
Correct.
What other psychology things like that actually apply in the security world?
I think prospect theory, which you mentioned, overweighting small probabilities and underweighting
big probabilities is by far one of the more prevalent ones. I think there's also the system
one and system two thinking. I think security is such a high stress, you know, like it's it's not the end of the world, but it certainly feels like it, I think, to a lot of people.
So the you know, what I call the lizard brain really takes over and you're really thinking almost like a, you know, fight or flight type response rather than kind of like slower, more controlled sort of thinking.
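The probability weighting referred to here, overweighting small probabilities and underweighting large ones, has a standard functional form from Tversky and Kahneman's cumulative prospect theory. A sketch, using their commonly cited gain-domain parameter γ ≈ 0.61 (the exact parameter value is an assumption from the literature, not from this conversation):

```python
import math

def weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A rare nation-state attack "feels" several times more likely than it is,
# while a commonplace phishing attack feels less likely than it is.
print(round(weight(0.01), 3))  # small probability is overweighted (> 0.01)
print(round(weight(0.90), 3))  # large probability is underweighted (< 0.90)
```

This is the formal version of looking better for stopping the PLA than a 13-year-old from Romania: the rare threat gets inflated weight in decisions, the common one gets discounted.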
I think there's also biases, particularly in group decision making.
You know, there's obviously things like hindsight bias.
There are tons of things where, you know, you think, for example, that let's say the retroactive patching is working really well right now. Honestly, people in security groups aren't really measuring how well outcomes are turning out.
So you can easily have that kind of bias where you're like, OK, well, we should double down
on this.
We should spend more just on, you know, whatever sort of retroactive, like another blinky box,
let's say we need two blinky boxes rather than one. And you don't really have a lot of evidence
that it's actually working. And that's another sort of cognitive bias. There are also things
like people caring more about short term versus long term. I think a lot of the problems we see,
there's such high turnover in the security industry. It kind of doesn't matter. As long as you look good to your boss in the next
two years, if there's some long-term thing that you know will create problems five years down the
line, it's not necessarily the biggest deal. But I think prospect theory is definitely the biggest
because you're going to look so much better if you stop the PLA from hacking you rather than
just a 13-year-old from Romania. So I think that's
a big driver of it. You've talked about dieting analogies with respect to
this long-term versus short-term. Could you rehash that for us?
Sure. So the idea is with dieting, a lot of people know that they should go on a diet and that they shouldn't eat pizza every night.
What?
Yeah, I know. It's shocking.
Oh, no.
Maybe to some people it's shocking.
But so when I feel like a lot of times, you know, people in security get really mad at users.
They call them stupid.
They're very, you know, put-downy. And then, even one step deeper than that, attackers and people who are more on the red team side really poo-poo defenders and the choices they make. But the reality is, generally, when people are in a situation, it's the same thing as someone from the outside being like, oh, well, that person shouldn't eat pizza every night. But then, you know, if they're in their own situation, they probably aren't making the healthiest choices either. I don't think anyone is.
There are some people, I shouldn't say that, but very few people are eating just salmon and broccoli and no sweets ever, you know, just a cheat meal once a month or
anything like that. People generally don't operate like that. And that's, you know, due to evolutionary
reasons, like we needed fat in our diet, blah, blah, blah. So I think it's about having a little more empathy. The reason why I use the diet analogy is, you probably don't have a perfect diet. A perfect diet is the same as, you know, a consumer with perfect security. They have things to do.
The average like salesperson, for example, like wants to get deals closed.
They don't want to have to worry about all these sorts of hoops to jump through security.
So it's having that little bit of empathy, I think, that will also drive, honestly, better security long term. It's back to the usability, right? If you design better UX, if you design something that actually makes it convenient to have security embedded in your everyday life, you're a lot more likely to use it, and then security as a whole will improve. But at least from what I see a lot of times, that's not the way people think about it. Yeah, I wish it was.
But then when I look at people who have nearly perfect diets and exercise for hours a day,
I look at them and think, well, how am I going to get my job done?
Yeah, very true.
It's a tough balance.
There are only so many hours in the day, and there are only so many hours in which I'm
willing to put in that level of effort.
Right. And I'm someone who goes to the gym most days, and I'm pretty active. I try to have a good diet. But also recognize, you know, there's a good quote by Dan Geer, who's what I always call the OG thought leader in security. And he says, good enough is good enough. Good enough beats perfect. Like, it's a lot better, if you have an off day and you're way too tired to go to the gym, to make sure to go the next day. Like, sometimes
users realistically will bypass security and they will do something stupid, like create a really
terrible password just to start. It's like, hey, you know, in the next few days, set up a reminder
yourself, put something in, like don't shame them just for doing something out of convenience. Like
try to find ways to encourage them to then when they do have the time or they do have the energy, whatever else, actually implement security properly, like
make it easier for them. I think that's really important. And that goes back to the diet and
exercise metaphor. If you feel bad because you went off your diet or you didn't exercise,
that is not as useful as just saying tomorrow I will do better and then doing better
exactly my password manager has been a lot like a an exercise manager tomorrow I will do better
and then yes it works out it forms habits that was another thing about cognitive and security
is that if you can get into the habit of doing things like two-factor authentication and password managers and even reading about security just to remind you that it's a thing you should worry about, are there other good habits for security hygiene?
I think there are. Yeah, it's definitely, again, a password manager and two-factor authentication. And making sure you go through regular checks. So I'm someone, for example, who disables JavaScript on most websites. But for obvious reasons, let's say I decide to go online shopping, like I saw Blade Runner and I really want to get some cool asymmetrical blazer from Helmut Lang, I'm going to enable JavaScript. What I try to do at least weekly, though most of the time it's every month, is go back and remove all the one-off websites from my list of JavaScript-enabled websites. It's having those sorts of regular processes. I have calendar reminders for this, similar to backups; have a calendar reminder for backups. Stuff like that, I think, is really useful.
But a bigger point I find interesting: so many people asked me about this. When I was giving the Black Hat talk about why game theory doesn't necessarily work, a lot of people misinterpreted it and thought it was about gamification in security. In some ways, I love game theory, but I would really love to study gamification, because it's something you see so much in other consumer software, even some enterprise software, and I feel like we haven't leveraged it at all in security. How can you make better security practices almost like a game, with achievements that people want to attain, turning it into that habit? Like in World of Warcraft, when they introduced achievements, you know, grind ten of whatever every day, to keep people obsessed and addicted to the product. Why don't we do that for security? That seems like a great idea, but I haven't really seen it yet.
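A minimal sketch of what that gamification could look like (the task names and point values here are invented for illustration, not anything Kelly describes having built) is a habit tracker that awards points and a daily streak for routine security chores:

```python
from datetime import date, timedelta

# Hypothetical security chores and point values, for illustration only.
TASKS = {
    "password_manager_audit": 10,
    "enable_2fa_on_new_account": 25,
    "prune_javascript_allowlist": 5,
    "verify_backups": 15,
}

class SecurityHabits:
    """Awards points and tracks a daily streak, achievement-style."""

    def __init__(self):
        self.points = 0
        self.streak = 0
        self.last_day = None

    def complete(self, task, day=None):
        day = day or date.today()
        self.points += TASKS[task]
        if self.last_day is None or day - self.last_day > timedelta(days=1):
            self.streak = 1          # first task, or a day was skipped: streak resets
        elif day - self.last_day == timedelta(days=1):
            self.streak += 1         # consecutive day: streak grows
        # repeat completions on the same day leave the streak unchanged
        self.last_day = day

h = SecurityHabits()
h.complete("verify_backups", date(2017, 10, 1))
h.complete("prune_javascript_allowlist", date(2017, 10, 2))
print(h.points, h.streak)  # 20 2
```

The streak resets when a day is skipped, which is the mechanic Kelly references from World of Warcraft achievements: the cost of breaking the chain nudges the habit along.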
Yeah, and then you could generate some of that actual data to find out what people are willing to do and what people aren't willing to do, and the lengths they'll go to to defend their products or defend their companies. That would be useful. We don't have a lot of data yet. I mean, we have a lot of data on things that go very horribly wrong, but not on things that are working.
Exactly. So one thing I'm trying to do, as I mentioned, I think, early in the program, I have my guinea pigs, so to speak. I'm trying to enumerate strategies, similar to what we've seen in some government programs. I know the UK has one about behavioral design; I think the most famous ones are around taxes, like how to increase the amount that people actually pay their taxes. And there are different framing effects, like how you phrase "please pay your taxes," to get people to do things differently. I'm thinking about using some of those ideas toward improving security. So one thing I've done is start to enumerate what all of the potential strategies should be, and then have defenders at organizations, generally pretty big ones among the people with whom I've spoken, implement those and, as much as they can, collect data on adherence rates, on how much it actually improves things. For example, with phishing, you can generally test how many people are opening phishing emails. If you introduce some of those behavioral elements, gamification elements, how does that change the rate? So I'm hopefully trying to get a little bit of data around that. But right now, no one's really focused on it, and that's pretty unfortunate. Data in general, or rather the lack thereof, is just such a huge problem as far as what we can do to improve the industry.
All right, well, all of my additional questions would start a whole new podcast, so I probably should let you go. Kelly, do you have any questions for us or thoughts you'd like to leave us with?
Yeah. One question I would love to ask: obviously, you all have developed some really interesting devices and products. When you think about security, when you've considered security, what are the things you're thinking about, and what are the things that kind of scare you? What are, I guess, the known unknowns that you have in mind?
The whole face-on-Wired thing has been brought up a number of times.
Working at ShotSpotter on gunshot location systems and putting devices out around a city means that you can't control physical access. And the data isn't really that important until you aggregate it, but you still have to make sure that each device has its own encryption settings so that it can't be used to hack the other devices, and be sure that our encryption wasn't something anybody at our company invented, that we used something that was well-known and hardware-based, and yet not obviously attacked by other people. There were all these things, and none of them was particularly easy. It was: go through the list of what we need, what we think our threat model is, who we think is going to hurt these things and why.
Right.
It was very homegrown and hacked together, with the fear that we would be attacked publicly for being stupid. That's not really the right way to go, but it was what we had at the time.
Something that comes to mind for me from working on medical devices: there were a lot of times where we had something that we were designing or building to go into an operating room or a doctor's office, somewhere where you're contacting patients and you're doing something to them or assisting the doctor in doing something. It's a safety risk, a direct, immediate safety risk, for something to go wrong with those systems. And at each place, there was always a push for, hey, why don't we hook this up to a network? Why don't we hook this up to the hospital network? Then we can transfer patient data immediately and do firmware updates and that kind of thing. And it was important to push back on that and say, this should not be connected to a network. We should send a person with a USB stick to upgrade these. Despite the inconvenience, it's much better to just not allow that entire attack surface to exist. The air gap. I think people don't think about that a lot. I mean, right now we have the Internet of Things and all these devices for consumers, but sometimes I think for some products there should be a question: does this need to be connected to anything?
Absolutely.
Maybe not.
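Going back to the per-device encryption Elecia described at ShotSpotter: one generic way to give every fielded unit its own key, so that compromising one device doesn't expose its neighbors, is to derive a device-specific key from a factory provisioning secret and the unit's ID. This is a sketch of that general pattern (HKDF, per RFC 5869), not ShotSpotter's actual scheme; the secret and device IDs are made up:

```python
import hashlib
import hmac

def hkdf_sha256(master: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869): extract a PRK, then expand it to `length` bytes."""
    prk = hmac.new(salt, master, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                               # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The master secret lives only in the factory; each unit ships with its derived key.
master = b"factory-provisioning-secret"
key_a = hkdf_sha256(master, b"fleet-salt", b"device-0001")
key_b = hkdf_sha256(master, b"fleet-salt", b"device-0002")
assert key_a != key_b  # each unit encrypts with its own key
```

In a real deployment you might instead provision fully independent random keys, or keep keys in a hardware secure element, so no derivable master secret exists anywhere near the fleet; the point is only that each unit ends up with a unique key.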
Well, that goes to BLE. I've been doing a lot of BLE things, and it's hard to get people to understand that its security is pretty worthless.
Yes.
And the way it works makes people more trackable, and that has some bad side effects. So even if you're building some medical device wearable that is good overall, you may be putting the person in danger, because now people know where they are all the time. And so you have to balance this squawking against the convenience of having a device that is constantly updating you on whatever its function is.
For the large part, though, these are things I always feel overwhelmed by, and I always ask my clients to please consider the security of what they're doing: whether or not it needs to be connected, whether or not it needs to be talking all the time, how we're going to do firmware updates. If it needs to be connected, how is it connected? Don't just throw every radio and protocol into it and say, wow, we can do anything. We'll set up over BLE and NFC, and we'll connect it via USB and Wi-Fi, and it will all be great. You can do that, right?
Yeah, it sounds similar to, back to the Equifax example, it's always a trade-off right now between security and usability. But I think you're absolutely right. I know from the security side, a lot of people question, when devices come out, why does it have to be connected across all these different protocols? So I think questioning functionality is worthwhile in general, no matter the domain, but particularly in y'all's domain: how can this functionality be used for evil, so to speak, as well?
I think that's fantastic to think about. And what information needs to be conveyed, and how, and what needs to be stored. We've had Jen Costillo on the show a few times, and she's always been a big proponent of not storing user data in the cloud unless you have to; keep it local, because your cloud storage becomes an attack point, and instead of attacking one device, you have information from thousands or hundreds of thousands. I've had a client where we had a nice chat about cloud storage.
Yeah. I think that it's interesting.
It can sometimes be double-edged because I think in a lot of cases, cloud service providers have a lot better security than a lot of people can actually implement locally.
So I think it's an interesting debate.
I think it's still being hashed out in that vein as well.
Oh, yeah. And the cloud I was talking about was a particular person's, a particular company's server, not cloud services like AWS. I think those provide a little bit more security than company Joe's server online with a fixed IP address and no updates.
Wait, wait, we should, before I get really depressed,
we should probably close up the show.
Okay. That sounds good.
Kelly, do you have thoughts you'd like to leave us with?
Yeah. I think I'm really excited about the idea of getting more people on the developer side, people developing products, interested in security, and really understanding that hopefully the barriers to just a good level of security aren't too big. Again, thinking about things like don't use default passwords. Obviously, there are some challenges and trade-offs there, but I would love to try to fix some of the incentives and really attack it at the source of the issue, where people are actually building things, and how you can build them in a more secure way. I don't pretend that it will necessarily be fun, but I want it to at least be a more approachable problem, not as scary, and not seem as overwhelming, because I would hope that it doesn't have to be. Don't worry about it too much. There are always things where you can be attacked at the chip level, whatever; read about those if you find them cool, but don't worry too much about those versus the basics.
Cool. That's good advice. Our guest has been Kelly Shortridge, a researcher at the intersection of
security and behavioral economics. She's
currently a product manager at Security Scorecard. Thank you for being with us, Kelly.
Thank you so much.
Thank you to Christopher for producing and co-hosting. And of course, thank you for listening.
I have a quote to leave you with from Daniel Kahneman from Thinking Fast and Slow, which was a very good, if thick,
book. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance and listeners like you.