CyberWire Daily - A new approach to mission critical systems.
Episode Date: July 14, 2018
Andy Bochman is senior grid strategist for Idaho National Lab's National and Homeland Security directorate. Today we're discussing the research the INL has been doing, developing new approaches to protecting mission critical systems.
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
That's where Domo's AI and data products platform comes in. With Domo, you can channel AI and data into innovative uses that
deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to
your role. Data is hard. Domo is easy. Learn more at ai.domo.com.
That's ai.domo.com.
Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and
analysts tracking down threats and vulnerabilities and solving some of the hard problems of
protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
And now, a message from our sponsor, Zscaler, the leader in cloud security.
Enterprises have spent billions of dollars on firewalls and VPNs, yet breaches continue to rise, with an 18% year-over-year increase in ransomware attacks
and a record $75 million ransom payout in 2024.
These traditional security tools expand your attack surface with public-facing IPs that are exploited by bad actors
more easily than ever with AI tools.
It's time to rethink your security.
Zscaler Zero Trust plus AI stops attackers
by hiding your attack surface,
making apps and IPs invisible,
eliminating lateral movement,
connecting users only to specific apps,
not the entire network,
continuously verifying every request
based on identity and context,
simplifying security management
with AI-powered automation,
and detecting threats using AI
to analyze over 500 billion daily transactions.
Hackers can't attack what they can't see.
Protect your organization with Zscaler Zero Trust and AI.
Learn more at zscaler.com/security.
We spent like the past decade or two trying to get awareness that this was an emerging problem.
That's Andy Bochman. He's a senior grid strategist for Idaho National Lab's National and Homeland Security Directorate.
Today, we're discussing the research the INL has been doing, developing new approaches to protecting mission-critical systems.
At first, it was a nuisance-level problem, and it was treated accordingly. It didn't require
vast amounts of spending or restructuring of organizations in order to be able to keep it
at arm's length. And we were trying to, I'd say, wake up, bring more awareness to seniors,
telling them it's going to get worse, probably, so be aware of that, where we weren't sure if they were aware. I would say, fast forward till now, I spend a lot of time on the Hill and with companies, and senior folks in companies, they're aware, they're awake, they're nervous. The problem is they don't know what to do. It seems like we keep doing more of the same on cyber hygiene or
say cyber defense. We spend more every year on products. We spend more every year on services.
We do more training and improve our policies. And yet it's not changing anything. The number
of attacks is increasing, and they're getting more powerful, and the damage they're inflicting is getting more severe.
So today, the problem is not a nuisance level problem.
It's a strategic business level risk and a strategic risk to the nation.
The Idaho National Lab has come up with a way that I think is practical and relatively easy to understand to begin to mitigate it in a demonstrable way.
Can you take us through some of the history of this?
So we're talking about industrial control systems and critical infrastructure.
How did we go from the previous era of analog feedback and gauges and so forth to where we are now with everything being digital?
I think the drivers are primarily economic, and they have to do with efficiency and new capabilities. The efficiency is obvious. If you can automate a
process that used to require 100 human beings performing different types of maneuvers hands-on,
and you can replace them with an automated system and maybe have just a handful of humans
touching the process, guiding the process,
well, heck, you've just saved yourselves a ton of money.
You might've spent something on the automation
and keeping it running,
but you've not only reduced your headcount
and improved your bottom line,
but you've probably sped up the process too
and made it more standardized.
Those are some of the additional capabilities besides efficiency and money saving that come along.
You also may get situational awareness.
You may be able to monitor much better using sensors than you ever were using human eyes and ears.
And so you can be much more in tune with the processes that are most important to you
and can run closer to the edge, thereby,
again, being more efficient, saving more money, making more for less.
I think that's been the siren call of digitization and automation.
And I think it's only accelerating now with artificial intelligence, internet of things,
and all varieties of automation.
Yeah, one of the things that your paper points out is
in the old days, you had these three physical pillars of security, gates, guards and guns.
That's right. Those, by the way, are still there, Dave. Gates, guards and guns are present at every
nuclear reactor site, and they're present on other important parts of the electric utility, chemical, and oil and natural gas sectors.
Certainly they're present. They may not be quite as visible, but they're present in financial institutions, too.
Any place that has something really valuable to protect has to have strong physical security.
And I think for decision makers, physical security is a lot easier to understand. You can see it.
You can feel it in a way that cybersecurity has proven much more ephemeral, much more intangible.
I think that's what delayed the understanding of it, and the increase in anxiety came only relatively recently, once they started to see the impacts.
So describe to us, what are the impacts now that we've hooked everything up to the internet? It seems like we've made that perimeter porous in a way.
considered to be the seminal book on cybersecurity, where the first nefarious actor on a very, very tiny network was doing misdeeds, and Stoll figured out what was going on eventually.
The internet was sort of a one size fits all way for everyone to get connected.
So first LANs were connected to it. And now, you know, in many cases, everyone's just directly connected to the internet, and devices are directly connected to the internet, as the search engine Shodan reveals.
Now, you all make the point that what is referred to today as cyber hygiene is inadequate to protect industrial control systems.
Yeah, and this is a subtle point I want to make sure I get across, Dave. The article and the methodology from the Idaho National Lab,
it's not a diatribe against cyber hygiene. We want people to continue to do cyber hygiene to
the best of their ability. Again, cyber hygiene and the way I'm using it, and it is a somewhat
loose term with different definitions. I mean, it has everything that we now do in a typical
enterprise, whether it's technology, whether it's services, whether it's training, whether it's
governance and policy, all of those things are cyber hygiene conforming to accepted best practices.
We need to keep doing those things, else every WannaCry and NotPetya and all of their offspring that are constantly being born will cripple large companies, or companies of any size, actually.
Ransomware, too, is something that's really increasing awareness of the link between cybersecurity and dollars and cents.
The idea is that among all the many different systems that you might protect in a medium or large size enterprise,
you have endpoint security and you're doing due diligence all across your networks and systems with cyber hygiene. What we're saying is, one of the mantras of INL is, if you are targeted,
you will be compromised. It's just a plain fact. And it's very easy to demonstrate. It's been
happening over and over again.
Well, you might say, we're the type of company that hopefully won't be targeted.
The problem is, you don't get to choose if you're targeted or not. I would say it's fairly
clear that if you're anywhere near critical infrastructure in terms of what your responsibilities
are, then you are a target. And back to the mantra, if you're targeted, you will be compromised.
Therefore, that makes the whole narrative a lot more compelling to people that were
sort of wishing and hoping that this problem would go away.
One more part.
The methodology isn't about the entire enterprise and your hundreds or thousands or millions
of endpoints.
It's very selective. It's about the handful of processes or functions that you perform, or products that you make, that are so important that if you were to lose them for more than a day or two, you'd be out of business.
And that introduces a new term beyond strategic business risk.
It's not a new term in the world, but it's new related to cybersecurity, I think, for most people. It would be called corporate viability,
that you are, in many cases, now in a position where, through cyber means, you could be put out
of business. And you might not be put out of business, but as a CEO, and we've seen this
multiple times now, you could lose your job because ultimately the buck stops with you.
So take us through what you're proposing here. What are you suggesting?
What we're trying to do, again, is to say, keep doing cyber hygiene, do it to the best of your ability.
It will help keep the ankle biters that Mike Assante refers to in the HBR paper at bay to the greatest extent possible. But for the handful of things, the systems and processes that absolutely must not fail,
first of all, figure out what they are, because not everybody understands what their most
important processes and dependencies are.
So that's the first step of the methodology.
The second step is create the map of the different hardware, software, communications, and human processes that support those processes that must not
fail. And the third part is flipping the table around and looking at
yourself from the outside as an adversary would. INL can help with this,
but many organizations can already go a fair ways towards accomplishing
this by sort of asking yourself in the first phase, if I was going to take my
company out of business, if I was going to put us out of business, what would be the most damaging thing
I could do? What would I target? In the third phase of the methodology, we have cyber-informed
people, people with experience and being on the offensive side, navigate through the landscape
that was defined in the second phase, all of the hardware, software, comms, and processes that support the most important things that must not
fail, and find the easiest pathways to achieve the effects that they want,
the company-ending effects. The last part is called
mitigations and tripwires. That simply means, and this is probably the part that I think
people latch on to for better or for ill, because we're saying
when you see now that some of your
most important systems and processes are at extreme risk because of the numerous digital
pathways in that are extremely hard to police, we're talking about selectively introducing
out-of-band solutions, analog solutions. Humans are analog. So adding a trusted human where you might have removed
that person years ago, because, and people will take issue with this, but in theory,
he or she can't be hacked. Of course, they can be social engineered. But if you understand what
I'm saying, not simply just layering on more complex software defense solutions and hoping
for the best, things that you can't even understand,
but rather adding engineered solutions that you can fully understand will protect a machine,
say, from killing itself if it's given the instructions via software to do so.
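The phases Bochman walks through (identify the must-not-fail functions, map their supporting hardware, software, comms, and people, assess them from an adversary's perspective, then plan mitigations and tripwires) could be sketched as a toy prioritization exercise. This is a hypothetical Python illustration, not INL's actual CCE tooling; every name, score, and threshold here is invented for the example.

```python
# Hypothetical sketch of the four CCE-style phases described above.
# All data and thresholds are illustrative assumptions, not INL artifacts.

from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    downtime_tolerance_days: float                    # how long the business survives without it
    dependencies: list = field(default_factory=list)  # hardware/software/comms/people (phase 2 map)
    digital_pathways: int = 0                         # remotely reachable ways in

def prioritize(functions, max_tolerance_days=2.0):
    """Phase 1: keep only functions whose loss ends the business in a day or two."""
    return [f for f in functions if f.downtime_tolerance_days <= max_tolerance_days]

def adversary_score(func):
    """Phase 3 (toy scoring): more remote pathways and dependencies => easier target."""
    return func.digital_pathways * 10 + len(func.dependencies)

def plan_mitigations(functions, score_threshold=20):
    """Phase 4 (toy): flag high-scoring functions for out-of-band, engineered controls."""
    plan = {}
    for f in sorted(functions, key=adversary_score, reverse=True):
        if adversary_score(f) >= score_threshold:
            plan[f.name] = "add engineered/analog backstop + tripwire monitoring"
        else:
            plan[f.name] = "standard cyber hygiene"
    return plan

# Illustrative inventory; a real phase-2 map would be far more detailed.
inventory = [
    Function("turbine overspeed protection", 0.5,
             ["PLC", "historian", "vendor VPN", "operator"], digital_pathways=3),
    Function("cafeteria ordering", 30.0, ["web app"], digital_pathways=1),
]

critical = prioritize(inventory)
plan = plan_mitigations(critical)
```

The point of the sketch is the narrowing: most of the inventory stays under ordinary cyber hygiene, and only the handful of company-ending functions get the extra, often analog, engineered backstops.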
It strikes me, you know, one of the things that stood out to me when I was reading through
the report was there was a phrase here that said, but if your own employees must physically update the software at those plants, the effort can be
prohibitively expensive. And I wonder if chasing after cost savings, has that ultimately been a
blind alley? If we can't trust the data that's being sent back to us from a remote location or
something like that, were we chasing after something we shouldn't have been chasing? It seems to me that if something goes wrong, we're going to have to
send a human out there anyway to see what's really going on. Yeah, well, I mean, one of the big
features that purveyors of industrial systems and industrial equipment tout is that it's
remotely accessible, that you can do remote diagnostics and remote updates. It's sort of
a blessing and a curse. If in the situation where you need to issue a security patch because you
found a high severity vulnerability, the ability to deploy that patch to hundreds of thousands of
systems quickly from one central location is fantastic. It's really necessary these days.
The downside of that convenience, I don't think I've
used the word convenience yet, but that's certainly a big player here. And it's related to efficiency.
The downside is that if trusted folks inside your company have that capability, then armed with the
proper credentials and adversaries are getting better and better at acquiring credentials,
so they don't even look like hackers on your systems. They can do the same thing. And if they can do the same thing,
now you're in peril. So you've got to balance the convenience, in order to have efficiency.
In some cases, that's important for cybersecurity to be able to issue patches,
but convenience for other reasons too, to save money, to speed up other functions,
for competitive reasons many times over. People are taking on risks they don't understand. I guess that's one
way to sort of summarize this topic is we don't think that senior leaders, we don't think people
in government fully understand the risk that they're carrying, not trying to scare them,
just trying to show them reality.
And once they come to understand that, as with the CEO of the first pilot that we did with a large utility, and the second pilot that's going on with the Department of Defense right now,
once they understand their exposure and how their company or their mission is at risk,
then they can go ahead and make informed engineering decisions to how they want to mitigate that risk. In some cases, we think they can reduce it. I shouldn't say to zero,
but they can greatly reduce it from where they are now and be able to continue modernizing while
they're still reducing their risk. Now, take us through what's going on with those pilot programs.
What are you learning from that? How are your theories standing up to practice in
the real world? So far, so good. There's definitely a cognitive leap that has to be made when we're
introducing the concept that you may be doing a great job on cyber hygiene. You may have a very
competent chief security officer. You may have good policies and you may have a budget that's equal to or superior to your peer organizations.
The problem is the pivot is the hard part, because back to the mantra I cited earlier,
if you're targeted, you will be compromised. And that statement stands independent of how
robust your cyber hygiene is. So you can imagine the CEO who's been told
for many quarters or years, we are very strong on cybersecurity. We go to conferences and we
learn that we are among the best. And that's good. I'm not pooh-poohing that at all. It's just the
pivot is if they're in critical infrastructure, they're a target. And if they're targeted, they will be compromised.
Once they come to accept that,
and it doesn't always happen in the first couple of minutes,
it takes a little while and sometimes some demonstrations.
But once they come to understand that,
they become very eager to figure out what they can do about it.
Like I said, the solutions here are not all that mysterious.
There are solutions that I would say CEOs and other seniors, even people that aren't real comfortable in the computer realm, can understand, in industrial companies anyway: engineering solutions, because they make
so much sense. And it's actually their own engineers sometimes that come
up with the best mitigations. In the case of the pilot, even before the
INL folks could lead them
to a solution, they were already way out ahead thinking of ways to better protect a very
important piece of equipment. And what sort of feedback are you getting from other folks in the
ICS community? Has there been much pushback or are folks embracing it? I'd say two-thirds positive,
one-third either confused or negative. The two-thirds positive are chiming in on Twitter and elsewhere with, this is very similar
to what I've been advocating for a long time.
And it's true.
There are a number of people that have been thinking about leveraging the core acumen
of these large critical infrastructure companies, which is engineering, by the way, leveraging in
ways that would reduce their cyber risk. It's just it hasn't been an easy sell, I don't think.
And maybe no one's really tried to package it up before in a way that is deliverable. We're at the
edge of that now with several pilots either done or underway and beginning to get closer to something
that's more repeatable and scalable.
So I'd say it's mainly been positive, Dave, but there are going to be people who,
for perhaps religious reasons, like I mentioned when I got academic pushback just on the word
analog, they can't even listen beyond that. Then that's fine. That may be appropriate in their
world. But when you're out there in the field, hands on these systems, and they're vital to the
survival of a company, or to the performance of a mission in the military, you don't really have the luxury of intellectual purism; you've got to find things that work. I think in this case, this consequence-driven, cyber-informed methodology, so far at least, and I think it'll continue, is proving
itself to be something that works. The methodology, the CCE methodology, it might look at first blush like it's about a one-time assessment that will lead to improvements, security improvements in an organization.
But that's actually not the ultimate intent of it.
It's not why INL created it and why it's being birthed now.
What we're really trying to do is use it almost as on-the-job training.
And that by going hand-in-hand, arm-in-arm with the end-user
customer, both with their senior leaders and with their engineers and with their
cyber teams, we're trying to change the way they think about
this, up until now, intractable, overwhelming problem.
So we're not so much trying to leave them with a one-time set of
updated improvements to their security. We're trying to change their minds.
And so that when we do leave and have made some
of the mitigations together, left them to continue some of them on their own,
that that new type of thinking fully informs everything they do
from that point on.
And they don't need any outsider to hold their hand anymore.
And the thinking permeates not just the C-suite and the board and not just the engineers and operators and the cyber team,
but the procurement folks and the HR folks.
And everybody comes to see that they have a role in substantially reducing the amount of risk that they're carrying now.
Again, you probably have captured this already, but they're a critical infrastructure provider.
They're a target. They don't get to choose that. And if they are targeted, they'll increasingly understand that now that there are a handful of
fairly straightforward things, things that also don't necessarily have to be
very expensive at all, they can do to substantially improve their standing.
Our thanks to Andy Bochman from Idaho National Lab for joining us.
If you'd like to learn more about the work they're doing, there's an article he recently
published in the Harvard Business Review. That was the May 15th edition of the review.
The title of the article is Internet Insecurity.
And now a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses
is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home,
your company is at risk. In fact, over one-third of new members discover they've already been
breached. Protect your executives and their families 24-7, 365, with Black Cloak. Learn more
at blackcloak.io. The Cyber Wire Research Saturday is proudly
produced in Maryland out of the startup studios
of Data Tribe, where they're co-building
the next generation of cybersecurity
teams and technologies. Our amazing
Cyber Wire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsey Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening.