CyberWire Daily - Usable security is a delicate balance. [Research Saturday]
Episode Date: November 2, 2019
Until recently, usability was often an afterthought when developing security tools. These days there's a growing realization that usability is a fundamental part of security. Lorrie Cranor is director of the CyLab Usable Privacy and Security Lab (CUPS) at Carnegie Mellon University. She shares the work she's been doing with her colleagues and students to improve security through usability. The research can be found here: https://www.cylab.cmu.edu/news/2019/07/29-usability-history.html
Transcript
You're listening to the CyberWire Network, powered by N2K.
That's where Domo's AI and data products platform comes in. With Domo, you can channel AI and data into innovative uses that
deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to
your role. Data is hard. Domo is easy. Learn more at ai.domo.com.
That's ai.domo.com.
Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and
analysts tracking down threats and vulnerabilities and solving some of the hard problems of
protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
And now, a message from our sponsor, Zscaler, the leader in cloud security.
Enterprises have spent billions of dollars on firewalls and VPNs, yet breaches continue to rise, with an 18% year-over-year increase in ransomware attacks
and a record $75 million ransom payout in 2024.
These traditional security tools expand your attack surface
with public-facing IPs that are exploited by bad actors
more easily than ever with AI tools.
It's time to rethink your security.
Zscaler Zero Trust Plus AI stops attackers
by hiding your attack surface,
making apps and IPs invisible,
eliminating lateral movement, connecting users only to specific apps, not the entire network, continuously verifying
every request based on identity and context, simplifying security management with AI-powered
automation, and detecting threats using AI to analyze over 500 billion daily transactions.
Hackers can't attack what they can't see.
Protect your organization with Zscaler Zero Trust and AI.
Learn more at zscaler.com slash security.
I think there was not a lot of focus on usability in security until recently.
That's Lorrie Cranor. She's director of the CyLab Usable Privacy and Security Laboratory at Carnegie Mellon University.
The research we're discussing today is titled, Security and Privacy Need to be Easy.
I started working in this area around the year 2000 and was trying to build a usable privacy tool and went to look at what other people had done.
And there wasn't a whole lot.
There wasn't much in the research literature.
There wasn't much in what companies were doing. And I think there was a group of
researchers that started talking around then, and that spurred some interest in companies.
I ended up starting a conference called Symposium on Usable Privacy and Security,
and that has kind of spurred interest in this. And so now it's becoming much more common for companies to actually have usability teams
that are focused on security and privacy.
Now, back when you first had this realization, what was your conclusion?
Why had it not really bubbled up to the top at that point?
Well, I think that a lot of the security and privacy researchers and developers were kind of insulated.
They were very focused on security and privacy, and it was very technical, very mathematical.
And their attitude was, you know, we're not experts in usability.
We're trying to get the math right.
We're trying to get the technology right.
And we'll maybe throw it over the fence to some usability team later. And what
often happened is there wasn't time. The product shipped without doing the usability work. Or this
was done in a small company that didn't even have a usability team. And so there wasn't one to work on it.
I remember, I want to say back in the 90s when PGP first came out and there was some
excitement about that, that we were going
to be able to apply encryption to our emails and so forth. And it never caught on. And I think a
big part of that was it was just so hard to use. Yeah, I think that that really was a big problem.
And, you know, one of the first research papers in this area was called Why Johnny Can't Encrypt.
And it was a user study using PGP and found that people just couldn't figure out how to use it.
So, I mean, at a basic level, how do you define
usability? So it really depends on the application, but basically we're looking for tools, systems,
whatever that people can use, that people can figure out how to use, that people can use correctly without making errors, without being annoyed by it.
Efficiently, that people find a way to use the security and privacy as part of their normal workflow without having to stop doing whatever it is they really wanted to do
in order to do it. All of those things go into usability.
Can you take us through some of the approaches that you take there at Carnegie Mellon and some
of the research that you've done? We've looked at usability in a variety of contexts, and we try to
do user studies with actual people as much as we can, rather than just having experts look at
something and go, oh, this is going to be easy. This is going to be hard. And one of the challenges
we have that makes usable security different than any old usability testing is that when we're
dealing with a security tool, it involves some sort of a risk. When using PGP, for example,
it's not enough that I can figure out how to encrypt and
decrypt my email, but I need to be able to recognize when someone is trying to send me a
fake email, that I need to be able to check the signature and make sure it's really from who I
think it's from. And so in order to do user studies in this space, we need to make the
participants in the study feel like there's
actually some risk that they're trying to protect against. But we can't actually put them at risk
because, you know, ethics, we don't want to hurt our participants, right?
Those pesky ethics, right?
Yeah, yeah. So we need to design the study so that people behave and are motivated to protect themselves in a realistic way, the way they would do it in real life, without actually putting them at risk.
And sometimes we do it by telling them up front, this is a hypothetical scenario, but
giving them such rich detail that they really get involved and get invested into it, even
though they know it's fake.
Sometimes we do it by giving them payments for being safe and try to simulate it through money. Sometimes we trick them and we
make them think they actually are at risk through something that has nothing to do with the
experiment that just coincidentally happens. And then at the end, we tell them, hey, don't worry,
you weren't actually at risk. We faked all that. So as an example of that, we were testing the phishing warnings that show up in web
browsers. And so we brought people to our lab and we told them we were doing an online shopping
study. And we had them go online and purchase some inexpensive items. And we had them then fill out a survey about their purchase experience.
And while they were doing that, we sent them a fake phishing email that looked like it came
from the vendor they just made the purchase from. And then we told them to go check their email and
get the receipt for the purchase so that we could reimburse them for the purchase. And while they
were doing that, they would then see our phishing email, well-timed, and almost all of them would then click on the
phishing link, which would then trigger the phishing warning in their web browser.
And then we could see what they did at that point. And that was what we were interested in,
do they swat away the warning or do they actually pay attention to it?
Right. And that's fascinating. Are there common misperceptions that you find people have when it comes to designing usability into their
products? You know, for folks like you who are studying this sort of thing, you roll your eyes,
you shake your head and you say, oh, that again. Yeah. So I think that often developers assume
that users know more than they do. You know, the developers are very familiar with the technology, and they just sort of assume that the users will understand it, too.
It's kind of like, you know, once you know something, it's hard to imagine what it was like before you knew it.
And so I think that's common.
I think also they forget that often security tasks are not the main task.
You know, it's something that users only do because they
have to, not because they want to. You know, I'm trying to send this email. I'm not trying
to encrypt the email. That's just a side thing. Right. So this project that you've been working
on and you find very clever as a developer may be little more than a nuisance to the end user
at the end of the day. Yeah. Yeah, that's fascinating.
What about that tension between usability and customization? You know, if you think back again, back into the early days of home computing, you know, there was that common perception that,
you know, Macs are easy to use and PCs are harder to use, but you can do a lot more on your PC
because you can customize it. And,
you know, people would say, oh, Macs are easy to use, but they're just toys. And it seems to me like there's a spectrum there and there's a balance between
usability and not frustrating your users by leaving them unable to control the things they feel like they need to control.
Yeah, that's a great point. So, yeah, users
want to have control, but it turns out good user interfaces for control
are pretty hard.
The more choices you give users, the more likely that they'll be overwhelmed.
They won't know which choice to make.
They won't understand how to make the choice.
It's going to take them more time to go through and make those decisions.
And so I think there's a delicate balance between
offering users choices and not overwhelming them with choice, or finding ways to introduce the
choice so that those who want to get down into the nitty gritty can, but everybody else can just
make a high level decision. So, you know, some of the ways that this is done is sometimes there's kind of a basic setup
and then an advanced setup.
Or I can say, choose option A or option B, or I want to configure option A myself.
And so then I can go down and drill down on option A
and do the minute details rather than just taking
the whole option A package.
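As a concrete illustration of that layered-choice pattern, here is a minimal Python sketch, with hypothetical setting names: most users make one high-level preset decision, while advanced users can drill down and override individual options.

```python
from dataclasses import dataclass, field

# High-level presets: one decision covers many low-level settings.
# The setting names here are illustrative, not from any real product.
PRESETS = {
    "basic": {"share_diagnostics": False, "personalized_ads": False, "auto_update": True},
    "convenience": {"share_diagnostics": True, "personalized_ads": True, "auto_update": True},
}

@dataclass
class PrivacySettings:
    preset: str = "basic"
    # Advanced users can override individual options; everyone else
    # just picks a preset and never sees the nitty-gritty.
    overrides: dict = field(default_factory=dict)

    def effective(self) -> dict:
        settings = dict(PRESETS[self.preset])
        settings.update(self.overrides)  # drill-down choices win over the preset
        return settings

# Most users make one high-level decision...
print(PrivacySettings(preset="basic").effective())
# ...while an advanced user tweaks one option within the preset.
print(PrivacySettings(preset="basic", overrides={"auto_update": False}).effective())
```

The design point is that the preset is the default path; the overrides only exist for the minority who want the minute details.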
When it comes to usability, is there an element of fashion associated with this? In other words,
if someone comes out with something clever, a clever solution,
do you find that that tends to start a trend with things for better or for worse?
Yeah, I think there's definitely a lot of copying of user interface. And I think that's actually usually a good thing. One of the things that makes it easier for users is if the actions they need to
take are familiar. So if I've learned that, you know, hamburger menu thing in the top corner,
you know, the first time I saw it, it's like, three lines, what on earth does that mean?
But now I know that when I click on it, it opens a menu. Now it's easy. I know where to find the menu. And if some websites, instead of putting those horizontal lines, changed them and made them vertical lines, then no one would know what that meant. And we'd all have to start randomly clicking till we figured out, oh, that's a menu too. So it's definitely useful to have similar patterns
across different products and services.
Now, one of the things that you all have been working on there at Carnegie Mellon is this idea
of a privacy and security nutrition label for IoT devices.
Can you take us through that effort?
Yeah, so the problem we're trying to solve here
is people hear that security and
privacy can be an issue with IoT devices. A lot of that has been in the news lately. And so, you know,
you go to Home Depot to buy your IoT device or you go online and you want to find out, well,
which brand should I buy to avoid these security and privacy problems? And basically there's no
information and it's very difficult to do that.
And so what we would like to see is a label
similar to nutrition labels that you find on food products
that would have security and privacy information
for your IoT devices in a standard format.
So you could take two products,
two smart thermostats or whatnot,
and look at them side by side
and compare their security and privacy features.
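To make the comparison idea concrete, here is a hypothetical Python sketch of two machine-readable labels and a side-by-side comparison. The field names and values are illustrative assumptions for this example, not the actual label specification the CMU team developed.

```python
# A hypothetical, simplified "security and privacy nutrition label" for two
# IoT devices. Field names are illustrative assumptions, not the CMU spec.
thermostat_a = {
    "security": {
        "automatic_security_updates": True,
        "security_update_support_until": "2027-12",
        "unique_default_password_per_device": True,
    },
    "privacy": {
        "sensors": ["temperature", "humidity", "occupancy"],
        "data_shared_with_third_parties": False,
        "data_retention": "24 months",
    },
}
thermostat_b = {
    "security": {
        "automatic_security_updates": False,
        "security_update_support_until": "2025-06",
        "unique_default_password_per_device": False,
    },
    "privacy": {
        "sensors": ["temperature"],
        "data_shared_with_third_parties": True,
        "data_retention": "indefinite",
    },
}

def compare(label_a: dict, label_b: dict, section: str) -> None:
    """Print one section of two labels side by side, like comparing nutrition facts."""
    for key in sorted(set(label_a[section]) | set(label_b[section])):
        print(f"{key}: {label_a[section].get(key)}  vs  {label_b[section].get(key)}")

compare(thermostat_a, thermostat_b, "security")
compare(thermostat_a, thermostat_b, "privacy")
```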
So we have been working on designing, you know, what are the ingredients that should
be in the security and privacy label.
So we've done user studies to find out what users are interested in.
And then we've gone and talked to experts about what they think users need to know.
And based on that, we've come up with a proposal.
Now we're taking it back to users to see whether they really understand what experts want them to
know and finding better ways of explaining it to them. So we're slowly converging on what should
be in that label.
And I think that leads us to a conversation about public policy. How does usability intersect with public policy and the folks who make those decisions, who are setting regulations and making laws and so forth? How do they have to consider these sorts of elements of security in the work that they do?
I think that
it used to be that we didn't see much about usability in any of the policies related to security and privacy.
But more recently, I think that's been coming up as an issue. You know, the Federal Trade Commission
will go after companies for being deceptive in their privacy notices, in the information they
provide to consumers about security and privacy. And so companies will settle with the FTC and they
will rewrite their privacy policies. They'll change their consent experiences in their products.
And then the question comes up, well, have they actually improved things? Have they solved the
problem? And there's not kind of a strict test. There's no law that says, like, how do you know that your informed consent truly was informed? You know, there's not a strict measure of that.
But I think it's something that the FTC and other agencies that worry about these things are trying
to figure out. And they are offering guidance for kind of best practices and ways that you can
provide notices to people that are actually informative and meaningful.
When you look at what Congress is doing, there has been a bunch of
proposed legislation about IoT device security and privacy. None of it has gotten very far.
But one of the things that we're seeing in some of these proposed bills is that you have to inform consumers about security and privacy.
So far, they haven't actually explained how.
Some of them have even referred to the idea of putting a label on the product or informing consumers if there is a microphone or a video camera.
We don't have too much detail about how to do it.
And one of the things we're hoping with our project is that if any of these pieces of legislation actually move forward, we will be able to say,
hey, here is a way to do it, you know, adopt this. So we'll see what happens.
What is on the leading edge right now with you and your students and colleagues who are
working on this research? What are the things that you're excited about for the future?
Oh, so many different things.
Right now, we're busy looking at how to improve opt-out and consent on websites and trying
to come up with a set of validated best practices that we can put out for companies to use.
So hopefully, we'll make some good progress on that.
Also doing work on passwords and what kind of password policies companies should adopt so that their users create strong passwords, but they're passwords that they can actually
remember and use.
So some of the work that we've done on that has actually led to improvements in NIST password
guidance a couple of years ago,
but there's more that we're working on to have some more actionable guidance for companies.
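The NIST direction she mentions favors longer, memorable passwords screened against known-bad passwords, rather than complex composition rules. A minimal Python sketch of that idea follows; the blocklist, thresholds, and messages are illustrative assumptions, not NIST's text or CMU's exact recommendations.

```python
# A minimal sketch of a password check in the spirit of NIST SP 800-63B:
# favor length plus a blocklist of known-bad passwords over composition
# rules. The blocklist contents and messages are illustrative assumptions.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # stand-in for a real breach corpus

def check_password(candidate: str, username: str) -> list[str]:
    """Return a list of problems; an empty list means the password is acceptable."""
    problems = []
    if len(candidate) < 8:  # 800-63B's minimum length for user-chosen secrets
        problems.append("too short: use at least 8 characters")
    if candidate.lower() in COMMON_PASSWORDS:
        problems.append("appears on a list of commonly used passwords")
    if username and username.lower() in candidate.lower():
        problems.append("contains your username")
    # Deliberately absent: required digits/symbols and forced periodic
    # expiration, both of which current NIST guidance discourages.
    return problems

print(check_password("correct horse battery staple", "dave"))  # [] -> acceptable
print(check_password("letmein", "dave"))                       # two problems flagged
```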
What are your recommendations for companies who, as they're doing their development, know that this is something they want to have as part of the process, and they want to be effective? How do they measure success?
Well, I think in order to know how successful you are with usability, you really have to
actually do user studies.
And, you know, that's something that, as I said earlier, a lot of companies were just
not doing for a long time.
And I think still a lot of companies are not doing it.
We're now seeing that some of the bigger tech companies are doing it and they have
teams.
They hire my former students.
My graduates are now going to big tech companies and are on some of these usability teams doing security and privacy work.
But that's really it: the way to know if something is usable is to actually test it with users.
Our thanks to Lorrie Cranor from Carnegie Mellon University for joining us.
The research is titled Security and Privacy Need to be Easy.
We'll have a link in the show notes.
Cyber threats are evolving every second, and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker,
a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control,
stopping unauthorized applications, securing sensitive data,
and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.
The CyberWire Research Saturday is proudly produced in Maryland out of the startup studios of DataTribe,
where they're co-building the next generation of cybersecurity teams and technologies.
Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsey Bond,
Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe,
Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner.
Thanks for listening.