CyberWire Daily - Taking a look behind the Science of Security. [Research Saturday]
Episode Date: June 12, 2021

Guest Adam Tagert is a Science of Security (SoS) Researcher in the National Security Agency Research Directorate. The National Security Agency (NSA) sponsors the Science of Security (SoS) Initiative for the promotion of a foundational cybersecurity science that is needed to mature the cybersecurity discipline and to underpin advances in cyberdefense. Adam works in all aspects of SoS, particularly in the promotion of collaboration and the use of foundational cybersecurity research. He promotes rigorous research methods by leading the Annual Best Scientific Cybersecurity Paper Competition. Adam joins Dave Bittner to discuss the NSA's SoS Initiative and their Science of Security and Privacy 2021 Annual Report.

Information on the SoS Initiative and the report can be found here: Science of Security; Science of Security and Privacy 2021 Annual Report.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the CyberWire Network, powered by N2K.

That's where Domo's data products platform comes in. With Domo, you can channel AI and data into innovative uses that
deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to
your role. Data is hard. Domo is easy. Learn more at ai.domo.com.
That's ai.domo.com.
Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and analysts
tracking down threats and vulnerabilities,
solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace.
Thanks for joining us.
Every year we do a report on our activities from the previous year.
You know, we find it is a good way to talk about and increase transparency of what is going on in the program.
That's Adam Tagert. He's a Science of Security researcher at the National Security Agency Research Directorate.
The research we're discussing today is their 2021
Science of Security report.
And now a message from our sponsor Zscaler, the leader in cloud security. Enterprises have spent billions of dollars on firewalls and VPNs,
yet breaches continue to rise, with an 18% year-over-year increase in ransomware attacks
and a record $75 million payout in 2024.
These traditional security tools expand your attack surface with public-facing IPs
that are exploited by bad actors more easily than ever with AI tools.
It's time to rethink your security.
Zscaler Zero Trust Plus AI stops attackers by hiding your attack surface,
making apps and IPs invisible, eliminating lateral movement,
connecting users only to specific apps, not the entire network,
continuously verifying every request
based on identity and context.
Simplifying security management with AI-powered automation.
And detecting threats using AI
to analyze over 500 billion daily transactions.
Hackers can't attack what they can't see.
Protect your organization with Zscaler Zero Trust and AI.
Learn more at zscaler.com slash security.
Well, can we dig into sort of the philosophy behind it here?
Why approach security from a scientific point of view? What's your goal?
So our goal is to build up an academic discipline investigating the fundamentals of cybersecurity.
So we're talking about developing theories, models, and having scientific evidence to help inform cyber. So the reason we do this is a lot of times our gut intuition, our reactions to things, it
sounds right, but you really should dig in and do the study to figure out what is the
best solution.
One of the common ways you look at it is think about, we're always seeing what is the best
password you can make.
We've done plenty of studies on passwords
and often it's like, make it longer,
add special characters, do this.
But the science says that doesn't actually fit
with how humans remember things.
So you get all these other parts involved in it.
So in a sense, we want to really know the nature of how all these things fit together, so that
way when we provide advice and provide technology to solutions, we're confident that it's going
to provide a good solution.
Is this largely a matter of having a good amount of rigor behind the work that you're
doing, good scientific principles?
Absolutely.
We're very much doing rigorous work, stating your assumptions, testing those assumptions, trying to validate
what you're doing. So that way you're not just creating something that sounds right. Let's
test it with the real world and see if that is the actual solution.
Well, let's go over some of the details here of the SOS initiative, the Science of Security initiative.
You sponsor several different groups that you call lablets.
What is that about? Can you describe that for us?
Sure. Lablets are small virtual labs at leading American research institutions, our universities.
And the idea with a lablet is we don't want to just create a good research lab.
We want it to be multidisciplinary. So it's not just a computer science activity or
electrical engineering activity, but philosophy and psychology get involved and actually pull
the questions apart and actually really dig into it. And the other aspect of the Lablet is that it brings in other institutions. So we wanted to have not just one, or in our case, six, really great places.
We want to have them bring in other institutions, other researchers, professors, and graduate students together to really have vibrant discussions and research and collaboration.
So be able to use the, I guess, the scale that these universities and institutes bring to the table, their own resources,
their own network of folks who can help with these hard problems.
Exactly.
Well, let's go through some of those organizations.
Who is in the lineup and are there any particular specialties from each of
them? So we, yeah, so let's just go through it. We have Carnegie Mellon University out in Pittsburgh
and they're very much in scalability and composability, like looking at like programming
languages. And the other aspect is like doing long-term human behavior studies. So we can get really new perspectives from that.
We have the University of Kansas, and they have specialty work.
They're very much in cyber-physical systems.
And one of their big projects is trying to have computers
be able to prove that they are what you think they are.
Not just who they are, but that they're running the right type of software, they're in the right configuration, and have those attributes that you care about to secure a system.
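To make the attestation idea concrete, here is an illustrative sketch (not the Kansas lablet's actual protocol): a device reports a cryptographic measurement of its software state, and a verifier compares it against a set of known-good values. Real attestation schemes add hardware roots of trust and nonces to prevent replay; this omits those.

```python
# Illustrative sketch only -- not the lablet's actual protocol.
# Attestation idea: a device measures its software/configuration state,
# and a verifier accepts only measurements matching known-good values.

import hashlib

# Hypothetical measurement of an approved software state.
KNOWN_GOOD = {
    hashlib.sha256(b"firmware-v1.2;config=secure").hexdigest(),
}

def measure(software_state: bytes) -> str:
    """Device side: hash the running software state."""
    return hashlib.sha256(software_state).hexdigest()

def verify(measurement: str) -> bool:
    """Verifier side: accept only known-good measurements."""
    return measurement in KNOWN_GOOD

print(verify(measure(b"firmware-v1.2;config=secure")))  # True
print(verify(measure(b"firmware-v1.2;config=debug")))   # False: wrong configuration
```

The point is that the verifier learns not just who the device is, but whether it is in a state you trust.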
We have a lablet at the International Computer Science Institute,
which is a research organization in Berkeley, California.
They're connected to the University of California, Berkeley.
They are bringing much more on the privacy aspect of it.
In the privacy, you look at all these things
of how is information flowing,
what are people doing with your information,
and they have quite rigorous resources
to actually bring privacy policies
into understanding contextual privacy.
It's not just what the information is,
it's how it's going to be used.
And that really brings changes to how people perceive things.
In addition, they have a really robust test bed
where they test thousands of Android apps
to see if they're actually following these privacy policies
or doing other types of work.
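The core check in that kind of testbed can be sketched roughly as follows (a hypothetical simplification, not ICSI's actual tooling): compare the data an app is observed sending over the network against what its privacy policy declares it may collect.

```python
# Hypothetical sketch: flag observed data flows not covered by the app's
# declared privacy policy. Field names and destinations are illustrative.

DECLARED = {"device_id", "crash_logs"}  # terms extracted from the policy

observed_flows = [
    {"field": "device_id", "dest": "analytics.example.com"},
    {"field": "location",  "dest": "ads.example.com"},  # never declared
]

# Any flow whose field is not in the declared set is a potential violation.
violations = [f for f in observed_flows if f["field"] not in DECLARED]
print([f["field"] for f in violations])  # ['location']
```

Scaled across thousands of apps, this is how observed behavior can be checked against stated policy rather than taking the policy at its word.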
We have a lablet at NC State, North Carolina State University.
And this lablet is really working on norms
of what are expectations of how information works.
One of my favorite projects they're working with
in collaboration with Rochester Institute of Technology
is they've been working in those collegiate competitions
to get a better understanding of how attackers attack systems.
And finally, we have two more lablets.
We have Vanderbilt in Nashville.
Vanderbilt brings an expertise in cyber-physical systems.
All their research projects have a connection to those computer devices that bridge the physical world and the digital world.
So whether they're understanding how train control systems work, or how the power grid
influences things through information, and then how the power comes together.
And finally, our last one is the University of Illinois in Urbana-Champaign, UIUC.
And they're very much looking at the resilience of systems,
looking at how systems continue to work under compromise.
We're not in a stage where if something gets broken into,
we shut everything down, wait a few days,
try to recover everything new.
That just doesn't work in our world
where everything is dependent upon these computer technologies.
So they're looking at those kinds of tasks, bringing humans involved.
How about uncertainty?
Because a lot of our models assume we know everything, but we really don't.
So how do you bring those into your modeling of what's going on?
Now, to what degree are the lablets their own sort of silos? and to what degree do they interact with each other, if at all?
So we really try to push them to interact with each other.
Obviously, proximity within the lablet creates good collaboration, so we try to work on that.
We have them meet quarterly, where they can present on general themes.
So sometimes we'll do an empirical study day presentation
so all the different lablets talk about their work in that area
and they can build up a more robust connection there.
And then we have our annual conference to bring everyone together
And we also have a continuous virtual organization, which helps people collaborate consistently over the year.
Now, one of the things that you outline in the report is this notion of coming at what you describe as five hard problems.
It's an interesting list. Can you take us through that and give us a little description of what you're after here? Yeah, absolutely. So the five hard problems,
these were developed in collaboration between the lablet leads and NSA, asking: what are the really hard fundamental challenges we have in cybersecurity that we really need to make progress on if we're going to really transform how cybersecurity is done? We have five.
Resilient architectures, the idea of working through compromise and being able to recover from it.
We have secure collaboration, which is the challenge of having information move between devices and platforms
and have it to be secure and meet the objectives.
We have metrics, which is that perennial challenge
of trying to measure how secure is something
or prioritize areas to focus your security.
We have scalability and composability.
This one seems a little weird in the sense that
it's the idea of often solutions work really well
in the small but don't work in the big.
So how do you take those smaller solutions
and scale them to tackle bigger problems and more data?
And then the composability part is,
you can write secure parts,
but how do you put them all together
so you don't have to redo all the security thoughts
of a system?
So the challenge is that a secure product A plus a secure product B
doesn't mean you get a secure system when you put them together.
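The composability problem can be made concrete with a small sketch. Here are two components that are each reasonable in isolation, yet composing them in the wrong order is unsafe (the functions and payload are illustrative, not from the source):

```python
# Two individually sensible components can compose insecurely.

import html
import urllib.parse

def sanitize(s: str) -> str:
    """Component A: escape HTML-special characters."""
    return html.escape(s)

def decode(s: str) -> str:
    """Component B: decode URL-encoded input."""
    return urllib.parse.unquote(s)

payload = "%3Cscript%3E"  # URL-encoded "<script>"

# Decoding AFTER sanitizing reintroduces the dangerous markup:
print(decode(sanitize(payload)))  # <script>
# Sanitizing AFTER decoding is the safe ordering:
print(sanitize(decode(payload)))  # &lt;script&gt;
```

Each piece passes its own security review; the vulnerability only exists in the composition, which is exactly why composability is on the hard-problems list.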
And then the final one is really a very interesting one,
and it's the human aspect, the human behavior of cybersecurity.
And that is all about trying to bring in an understanding
of how humans interact with systems and make decisions
so that way you can have systems that are realistic
because you can develop a perfect system,
but then the human will make a decision and it's like,
what were you thinking?
Well, that just means the technology wasn't prepared
for how a person would respond.
Right.
Yeah, I mean, I'm fascinated sort of at the whole approach to this because I guess what I wonder is, are there any areas of cybersecurity and privacy that have a hard time being fit into a scientific framework?
Are there times when you and the folks you're collaborating with find yourselves thinking, this is a square peg in a round hole?
Generally, I think most solutions do fit into a scientific, rigorous approach to looking at it.
Not every problem would fit into the five hard problems
because they're not intended to be.
They're just five problems that we're really focused on.
So yes, I think science can bring us very much here in cybersecurity, but we're not
trying to tackle every problem.
Right, right.
Yeah, that makes a lot of sense.
Well, let's go through this year's report together.
What are some of the highlights for you?
What really stands out as interesting?
Well, this year has definitely been an interesting year with the pandemic.
That has definitely changed many of the ideas or the activities that we normally do.
But we're very appreciative that the universities considered the national security research being done here as critical and worked through their difficulties to continue to make progress.
So the three areas that we really work in is we do foundational research with universities,
we hold competitions, which is really a unique thing where we're trying to inspire people to do good work,
and then we grow this community because as good as having
20 people work on a project is, 200 is better, and 20,000 is the best.
So let's just talk about some of the interesting research findings.
At Carnegie Mellon, we had this study,
a long-duration study of how humans have been making decisions in cyber.
They have gotten hundreds of volunteers and then they monitor
what they're doing.
They investigated the question of,
we see these breach notification emails all the time.
We get an email,
oh, our systems were broken into.
For your safety, you should change your password.
They started looking at it as,
what do people actually do with that information?
We're all probably guilty of it in the sense that most people kind of ignore it and move
on with their lives.
They don't actually rush out to change the password for that system.
It gets even more interesting in the sense that when people do change their passwords,
they do just a slight variation.
I'm sure many of the people can think,
oh yeah, I just added an A to the end,
or I added a one to the end of my password.
And then if they don't do that,
a lot of times when they change their passwords,
it actually gets easier to guess,
which is really the human aspect of it.
It's like, well, I had this really hard password,
well, I can't remember that,
so I'm going to make it easier this time.
So you get advice out of this and saying,
maybe these breach emails where everybody's saying
change your passwords all the time,
people aren't really listening to it.
So we need to have new effective ways of communicating.
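The weakness of those slight password variations can be sketched with a few lines of code (a rough illustration, not the Carnegie Mellon study's methodology): given a breached password, an attacker can cheaply enumerate the common low-effort mutations and cover most "changed" passwords.

```python
# Rough sketch: why a slight variation on a breached password stays guessable.
# An attacker who knows the old password can enumerate common mutations.

def common_variants(leaked: str) -> set[str]:
    """Generate typical low-effort mutations of a leaked password."""
    variants = {leaked}
    variants.update(leaked + d for d in "0123456789")  # append a digit
    variants.update(leaked + c for c in "!aA")         # append a common character
    variants.add(leaked.capitalize())                  # capitalize first letter
    return variants

# A user "changes" their breached password by adding a 1 to the end:
new_password = "hunter21"
print(new_password in common_variants("hunter2"))  # True: still trivially guessable
```

A handful of mutation rules covers the vast majority of real-world password changes, which is why the study's finding that changed passwords often get easier to guess is so troubling.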
We had a study at University of Alabama,
and this is one I've been working with.
And they've been looking at it as, what is a good research paper?
What do you put in it?
And you're like, well, that's got to be a challenge because every paper is different.
But there are certain attributes of a paper that you really want to see.
Do you want to see the assumptions?
You want a clearly laid out goal and approach?
They've been working on this question for a few years.
This past year they have done an open expert elicitation.
They went out and talked to experts in security research
and said, what are you looking for?
You don't want to just talk to the professors of what's going on.
You want to talk to the engineers who actually have to make use of the papers
and say, is the information in here useful to you?
And one of the things that they often find is
papers struggle in understanding
the validity of their research.
So what are the flaws in my analysis
that might make you not trust my research?
Or what is something going on outside of my research
that influences my research?
So you can think of it as like,
what is just the limitation?
Like, I want to make a big claim,
but really my big claim isn't so big.
Right.
So it's a matter of people having natural human biases.
Yeah, exactly. So you'll have studies that will say we sampled programmers, but in reality,
they all looked at freshmen programmers in colleges. And you're like, well, does that
really apply to somebody who's been a professional for 20 years? Maybe, maybe not. But you need to talk about it in your paper.
I see.
You need to acknowledge it.
Exactly.
So that other people know those challenges with it.
I see.
Interesting.
What other things caught your eye this year?
So there's a project at NC State
that is really interesting.
We hear the advice: when there's a vulnerability out there for your software, patch. Patch now, don't wait, patch.
And in reality, that is just not a scalable solution. You think of these people who have large cloud presences.
There's, I would say, thousands, millions of virtual machines
running on these computers.
If you just took time down to patch,
you're spending huge amounts of time to patch.
And so people a lot of times just don't do it.
And this research project's really been looking at,
well, all right, let's make the assumption
that you don't patch just because it's out there.
You patch when the vulnerability matters
and somebody's trying to attack you.
They've been developing those models and the sensors
so that way it detects and says,
oh, you care about this now,
so install this patch now.
That way you respond to what's going on
rather than just being proactive and patch.
Which I know sounds really weird in the sense that you're like,
why are you doing this later?
But when you have so many machines that you're dealing with,
you need to be able to prioritize.
And the system helps you prioritize to say,
this is what you're going to be attacked on.
Deal with it now.
Versus they're not working on this one right now.
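The prioritization idea can be sketched roughly like this (field names and scoring are illustrative assumptions, not the NC State project's actual model): rank pending patches so that a vulnerability under active attack outranks a more severe but idle one.

```python
# Hypothetical sketch of threat-driven patch prioritization: active
# exploitation dominates base severity. All values are illustrative.

from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    severity: float        # base severity score, 0-10
    exploit_attempts: int  # sensor-observed attack attempts against this vuln

def priority(v: Vuln) -> float:
    # Any active exploitation adds a large bonus, so an attacked
    # medium-severity vuln outranks an unattacked critical one.
    return v.severity + (100 if v.exploit_attempts > 0 else 0) + v.exploit_attempts

pending = [
    Vuln("CVE-A", severity=9.8, exploit_attempts=0),   # critical, but idle
    Vuln("CVE-B", severity=6.5, exploit_attempts=12),  # medium, under attack
]

for v in sorted(pending, key=priority, reverse=True):
    print(v.cve)  # CVE-B first: patch what you are actually being attacked on
```

That inversion (the medium vulnerability jumping the queue) is the "respond to what's going on" behavior described above.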
Right, right. Like, I imagine if you have a whole lot of, I don't know, retail stores,
you're going to prioritize putting security guards on the ones that might be in bad neighborhoods
versus the ones that are in good neighborhoods. So, like you say, you're sort of bringing evidence
to the table. Exactly. Yeah. Interesting. Now, what does NSA get out of their participation here? The
leadership role, what comes back to NSA? So coming back to NSA, one of the things about the Science of Security is this research is completely unclassified and public. So the results are going out in the leading journals and being presented at the leading conferences.
So it's going out to everyone.
So in that sense, as NSA works with people
who use these results, these products,
these ideas and concepts get put into products
that the U.S. government incorporates and uses.
So in that sense, it helps defend the U.S. government
and NSA is responsible for working on the
security of national security systems.
So having better things to build it with is a great benefit.
More directly, we have NSA researchers called SoS Research Champions who actually stay abreast
and work with the research projects.
So that way they can get these ideas
and incorporate it into their research and on NSA missions.
That way we can have the direct response
and understanding internally.
At the same time, we build up the base
of the security technologies
and even all information technologies in the country
that help benefit our cyberspace
and help make it more
secure. So it's really, I mean, is it fair to say that it's sort of a pure research effort here?
You know, it's aside from what's happening in industry with the development of, you know,
products that people are selling. As you say, you're bringing scientific rigor to some of these
questions without, I don't know, the veil of having to worry about marketing or profits or many of those things that the big providers, they have to deal with.
Yeah, absolutely.
We're really looking at what is the fundamental value that you're providing?
What can we do now that you couldn't do before, and release it as widely as we can? Obviously there are intellectual property issues, but we want people to be able to use it and benefit from it.
Our thanks to Adam Tagert for joining us.
The research is the 2021 Science of Security Report from the National Security Agency.
We'll have a link in the show notes.
And now a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses is by targeting your executives and their families at home? Black Cloak's award-winning
digital executive protection platform secures their personal devices, home networks, and connected
lives. Because when executives are compromised at home, your company is at risk. In fact,
over one-third of new members discover they've already
been breached. Protect your executives and their families 24-7, 365, with Black Cloak.
Learn more at blackcloak.io.

Research Saturday is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies.
Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Kelsey Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner.
Thanks for listening.