CyberWire Daily - TRISIS Malware: Fail-safe fail. [Research Saturday]
Episode Date: January 6, 2018

Robert M. Lee is CEO of Dragos Security, a company that specializes in the protection of industrial control systems. He describes his team's research on TRISIS, tailored ICS malware infecting safety instrumented systems (SIS), so far found only in the Middle East. It's only the fifth known incident of malware targeting ICS.
Transcript
You're listening to the Cyber Wire Network, powered by N2K.

Like many of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me. I have to say, Delete.me is a game changer. Within days of signing up, they started removing my personal information from hundreds of data brokers. I finally have peace of mind knowing my data privacy is protected. Delete.me's team does all the work for you, with detailed reports so you know exactly what's been done. Take control of your data and keep your private life private by signing up for Delete.me. Go to JoinDeleteMe.com slash N2K and use promo code N2K at checkout. The only way to get 20% off is to go to JoinDeleteMe.com slash N2K and enter code N2K at checkout. That's JoinDeleteMe.com slash N2K, code N2K.
Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down threats and vulnerabilities and solving some of the hard problems of
protecting ourselves in a rapidly evolving cyberspace.
Thanks for joining us.
And now a message from our sponsor Zscaler, the leader in cloud security. Enterprises have spent billions of dollars on firewalls and VPNs, yet breaches continue to rise, with an 18% year-over-year increase in ransomware attacks and a $75 million record payout in 2024. These traditional security tools expand your attack surface with public-facing IPs that are exploited by bad actors more easily than ever with AI tools. It's time to rethink your security. Zscaler Zero Trust Plus AI stops attackers by hiding your attack surface, making apps and IPs invisible, eliminating lateral movement, connecting users only to specific apps, not the entire network, and continuously verifying every request based on identity and context. Protect your organization with Zscaler Zero Trust and AI. Learn more at zscaler.com slash security.
Mid-November or so, we ended up coming across the malware.
That's Robert M. Lee. He's the CEO of Dragos Security, a company that specializes in the protection of industrial control systems.
He's describing Trisis, a bit of malware affecting safety instrumented systems,
so far only in the Middle East. Basically, there's not a lot of malware samples tailored towards
safety equipment. There's not a lot of software that should be popping up on our radar related
to safety equipment. So we were doing the normal sort of hunting thing, found the sample, took a look at it,
pretty quickly realized it was weird, but we didn't immediately notify folks.
I think this is also just an interesting aspect about intelligence for most folks.
When you look at some of the ways people treat information sharing, it is an "as fast as I can get it, throw it out the door, let me see it" kind of approach.
And not to put anybody down, but that's typically the DHS government kind of approach.
They set that standard with the Automated Indicator Sharing program as well: as quickly as I get it, I'm going to throw it out the door.
That in the industrial community can have some pretty big implications.
So we want to make sure everything that we knew about it was correct before freaking anybody out.
So we actually had to go and acquire a safety system, a Triconex safety system, use the malware against it, and not only reverse engineer the malware but actually rip into it pretty deep. Towards the end of November, we ended up quietly informing international partners and CERTs and DHS and DOE, and bringing up the fact that this was something pretty significant.
And so when you say you discovered this through threat hunting,
what can you share with us about what that process is like?
So what I will say is a lot of people, I think, have a weird perception of how you do collection. The Kaspersky silent signature discussion has sort of opened people's eyes to one side of the industry, while they think that that extends to everybody, which it does not. So normally, there's a couple of different
ways to do this, whether it's at Dragos or anywhere else. The first is if you have access to customer networks.
So like antivirus vendors or software vendors
that see intrusions,
they get collection out of their customer networks
and then they hunt through that data
looking for malware and interesting aspects.
Another way is to use malware repositories,
whether it be VirusTotal or malware.com or other places that
folks are submitting malware, even if it's accidental. And sometimes it's just by the
vendors. And then there's also other mechanisms to do this. Some of our favorites are actually
looking at the beginning of malware campaigns. Like if you were an adversary and you were going to go target industrial asset owners,
how would you do that?
Generally speaking, you might go after the vendor websites and the vendors themselves.
You might even test out your capabilities
in smaller locations.
There's just a lot of different ways to do collection.
But generally speaking, the point is get a bunch of data
and then you're looking for things that are abnormal.
For us, a lot of times it's an understanding of the industrial software paths, where we know, on the industrial software side of the house, what a correctly configured Triconex system should look like.
What does that actually look like?
Now, let's look for abnormalities compared to that and try to go from there.
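As a rough illustration of that baseline-versus-observation idea, here is a minimal sketch; the paths and baseline here are invented for illustration, not Dragos's actual detection logic.

    # A minimal sketch of hunting by baseline comparison: flag anything
    # observed on an engineering workstation that is not in a known-good
    # software profile. Paths below are illustrative only.
    KNOWN_GOOD = {
        r"C:\Program Files\TriStation 1131\TS1131.exe",
        r"C:\Program Files\TriStation 1131\help\ts1131.chm",
    }

    def hunt(observed):
        """Return observed paths that deviate from the known-good baseline."""
        return sorted(set(observed) - KNOWN_GOOD)

    # 'trilog.exe' echoes the file name from public TRISIS reporting; here it
    # is simply an example of an anomaly that would stand out against baseline.
    observed = list(KNOWN_GOOD) + [r"C:\Users\operator\Desktop\trilog.exe"]
    for anomaly in hunt(observed):
        print("investigate:", anomaly)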
So can you tell us anything about, was this discovered before any potentially dangerous action was taken?

No, it was discovered after the fact. And so I think it was discovered for
everybody after the fact. I can't speak for everybody else involved in the investigation,
but I know the incident occurred earlier this summer.
And it was in...
Can you say what the incident was?
Yeah. And so we can tell, even without being in the customer site, just by knowing sort of how these
operations work. But there's also knowledge out there about what actually happened. But
in sort of a two cent version, a safety system exists
to govern the process and make sure that it's safe. You don't want to have gas leaks. You don't
want to have over-pressurization events. It's there to control the process. And so what occurs is, if
a safety system is compromised, it'll shut down the entire plant or shut down that portion of the
process. That's a good thing. And a lot of people look at that as the fact that it's been compromised, but that's actually it doing its job. Its job is to shut down
the process if something is unsafe. So when the Trisis malware got loaded onto the Triconex system, it ended up failing safe. The impact was the plant shut down, which is definitely a disruption and an attack, but everybody was safe in that environment, which is exactly what the system is supposed to do.
Now, looking at the capability and looking at some of the things the adversary was doing, we can make assessments, but not facts, about what the adversary might have been trying to do.
And so as an example, based off of what we've seen with the capability, it looks like, well, actually we know for a fact there were errors in the code. But it looks like the adversary may have made some errors in being able to change the logic on the safety system. So if you think about it, the safety system says, hey, if I see these parameters in the process, I detect something unsafe. So if you mess up in changing those parameters, you'll drop the system. It fails safe, does everything it's supposed to do.
If you do successfully change the parameters, what could come from that is potentially you
taking out the safety functionality of that safety system, meaning that the adversary could do a
different attack against the industrial process now without that safety net, if you will, being there.
So you'd almost need two attacks to do something potentially life-threatening.
But obviously, we don't want to see anybody messing around with the safety system.
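To make the parameter discussion concrete, here is a toy sketch of trip logic; the setpoint and values are invented, and real SIS logic runs in the controller itself, not in Python.

    # Toy sketch of SIS trip logic: if the process leaves its safe envelope,
    # the system trips and fails safe. The setpoint below is invented; it is
    # the kind of parameter an attacker would want to tamper with.
    TRIP_PRESSURE_PSI = 150.0

    def scan(pressure_psi):
        # Botching a change to TRIP_PRESSURE_PSI tends to fault the controller
        # and trip the plant (disruptive but safe, which is what happened).
        # Successfully raising it far above any real reading would silently
        # remove the safety net while the plant keeps running.
        return "TRIP" if pressure_psi >= TRIP_PRESSURE_PSI else "RUN"

    print(scan(155.0))  # -> TRIP: the process shuts down, everyone stays safe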
So this is potentially disabling the fail-safes. Is that a good way to describe it?
Yes. And so the impact at the site was specifically taking down
operations. So there was an operational outage due to this malware. The implications of what
the adversary appears to have been doing is that in our assessment, they were trying to figure out
how to remove the safety functionality from the system. Just to back up a little bit, to describe how these systems work.
I mean, these safety systems run independently of the operational systems, and they're there
to sort of watch over and take over if something potentially goes wrong?
Yeah, that's correct.
A safety system is another type of industrial control system.
It is very, very specifically configured as well.
And this is where we try to capture the nuance of, no, this is not a highly scalable attack.
Some of the other researchers accurately noted the Python code that was used, the framework that was developed, is scalable.
You could absolutely use that in other sites.
And you have a framework now to be able to go after other sites. But the
nuance there is the framework is just that. It's a framework. The actual attack
is specific to each and every safety system. And I don't mean just Triconex. So what generally
happens is you go and you build an industrial process. You're super excited about it. You're like, I'm going to build widgets, or, in this case, oil and gas or manufacturing or some level of industrial process you've built. And what happens is you do a process hazard analysis. You come in, you have the safety engineers, people trained specifically for this, look at the process and go, hmm. I'm dramatizing this, of course, but they look through it, they study it deeply, very deeply actually, and they go, all right, well, this is a decently unsafe process. We're going to say that people could definitely get killed here, or environmental damage could occur. You need a safety level of one, or a safety level of two or three, or the hypothetical maximum, which nobody should be running. I don't think anybody runs a process like this, and I don't think you can even buy safety equipment for it, but that would be level four. Level four usually means you did something wrong, and you try to rebuild your process to be safer.

Anyways, I'll also caveat this with: I always sort of advocate knowing the limits of your expertise. I'm not a safety engineer, so this is just from my understanding.
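For reference, the "safety level" being described informally here corresponds to the Safety Integrity Levels of IEC 61508/61511. A small sketch of the standard's low-demand bands, as our gloss rather than part of the interview:

    # Safety Integrity Levels per IEC 61508 (low-demand mode): each step up is
    # roughly an order of magnitude more risk reduction. SIL 4 is the maximum
    # and, as noted above, essentially unseen in ordinary practice.
    SIL_PFD_AVG = {
        1: (1e-2, 1e-1),  # average probability of failure on demand
        2: (1e-3, 1e-2),
        3: (1e-4, 1e-3),
        4: (1e-5, 1e-4),
    }
    for level, (low, high) in sorted(SIL_PFD_AVG.items()):
        print(f"SIL {level}: PFD between {low:g} and {high:g}")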
But generally speaking, you come in, you figure out what the safety level of that environment is, and you can do one of two things. You can either redesign the process to try to bring it into a safer condition, or, if that's just not possible, especially in some industries like nuclear environments or oil and gas production environments where it's just not possible to be completely safe all the time, you then build a safety system out of the vendor technologies available, like Triconex, and you tailor it specifically for that plant. And so its safety rating and what it's doing is absolutely specific to that environment: not just a Triconex for an oil and gas environment, but specifically that physical process. And so Trisis is trying to take advantage of the Triconex system to change the logic, to understand the process enough to change the safety parameters around it.
And that's also why we noted there is no vulnerability in Triconex. And this is where, I have to admit, the media is getting a lot better; a lot of the journalists that I've worked with over the years are getting a lot better about this. But of course, as soon as this came out: vulnerabilities in Schneider Electric's Triconex system allowed a plant shutdown. No, that's legitimately not accurate at all.
The functionality to change the parameters of a safety system is needed. Being able to use your own system has to occur.
What the adversaries were in effect doing was learning the industrial process as well as an engineer for the purpose of changing the parameters so that it would no longer be in a safe condition.
Now, there were some errors that the operators made in terms of the physical status of some of the controllers on these Triconex systems. Can you describe that for us?

No. And I'm happy to talk about why I won't.

So I guess what I'm saying is, in the report that you mentioned, there was a key switch in program mode instead of run mode. That's what I'm getting at.

Different question. Good question.
Okay, so I think it's useful for the audience.
I'll sort of explain the previous hesitation
as well as the one you're talking about.
Okay, great.
So the previous hesitation, the "no, I'm not going to go into that," is that there were actually errors in their code.
I see.
And we don't want to publish on that
because we don't really want to advise them
on what they screwed up on.
Because we don't, I mean, you can't assume the adversary knows what they did wrong yet.
Okay, so the errors in the adversary's code is what you're referring to.

Yeah, exactly. And maybe when I said "operator" I was thinking of adversary operators. But yes, the adversaries could have made this a lot better, and we just don't want to tell them how. And this is also the weird thing, by the way: at Dragos, I've got people who have been on the offense on the industrial side of the house who are now defenders, going, oh yeah, I would have written it this way, and you could have done this.

And on the defender's piece, yes. There are two key points in this scenario that make sense for the safety systems. Number one, the safety system itself, the SIS, the safety instrumented system, should not be connected to the network. It should be a truly air-gapped, segmented environment.
You want it completely out of band so that it can govern the process natively.
This one was connected up.
And so it was on the network where a remote attacker could get access to it more easily.
Now, simply air-gapping it is not the standalone solution,
but it's definitely a much better position to be in.
And is that done for convenience? They hook it up to the network for some reason of convenience for themselves?
Yeah, for that convenience, especially when you're talking about
if you've got a lot of sites and a lot of remote sites,
your ability to connect it up to access it is convenient.
But also sometimes it can even be, you could almost say, for safety. If your risk scenario, your threat model, doesn't include remote adversaries compromising your safety system, then it's completely reasonable to say, I would really like to get more information from my safety system to know that it's working correctly. So I don't want to blame the victim. I don't think the practice is a good one; I think it's one that we should critique. But in the victims' mindset here, they could have done it for all the right reasons in the world. It could have been done for even added safety.
But in my professional assessment, it is absurd to network that industrial safety system.
And I will say that this is one of the areas of manufacturing and oil and gas that is contentious, because there are plenty of environments you walk into where the operator is used to going to a separate system, a separate network. It seems completely separate, and so they think they're doing all the right things. And if you look on the control network, it's not connected. But you might have level 3.5, which would be like your DMZ in the traditional kind of Purdue model, that's actually shared across these. So in other words, one network out from where you are, there might be shared systems that then come over and use LDAP and such for the safety environment. So I have seen a majority of them actually connected incorrectly, but no fault to the people involved.
But I would say no fault in the sense that they were trying to do the right thing.
So, long story short, that's one of the things we recommend: everybody should segment that.
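A toy way to see that "one network out" problem; the zones and links below are invented, and a real assessment uses network diagrams and traffic captures rather than a script.

    # Toy Purdue-style zone graph. The SIS looks separate from the control
    # network, but a shared level-3.5 DMZ service (LDAP, historians, etc.)
    # creates an indirect path from the outside. All links here are invented.
    from collections import deque

    LINKS = {
        "enterprise": {"dmz_3_5"},
        "dmz_3_5": {"control", "sis"},  # shared services reach both zones
        "control": set(),
        "sis": set(),
    }

    def reachable(start):
        seen, queue = set(), deque([start])
        while queue:
            for nxt in LINKS.get(queue.popleft(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # True: the SIS is reachable from the enterprise side even with no direct
    # control-network link, which is the misconfiguration described above.
    print("sis" in reachable("enterprise"))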
And number two is, on the controllers for the safety system itself, the PLCs or final control elements, you have a physical key switch that you can turn.
Turn one way to program it,
and you can load new logic and things into it.
Turn it another way, and it's on run.
And unless it's already been compromised
and has some sort of rootkit functionality to bypass the key,
you're not doing anything in that system.
So if it's in the run mode,
the attacker can sit on that engineering workstation all day long
and throw code at the controller. It's not open for business. So the attack isn't feasible in that way. That attack
path doesn't work. Now, technically, if you're the adversary, you could do some other things to put a
rootkit on the system that I don't want to go fully into, but it's not hard to figure out.
And then it would bypass the functionality of the key turning
because it's still logic at the end of the day.
It's still code at the end of the day.
But in this environment, we understand that the key was left in the program mode
and that the safety system was interconnected incorrectly.
So these two things led to an ability to more easily compromise the system.
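Here is a toy model of the key-switch point: logic downloads only succeed in program mode. The class and method names are invented; on a real Triconex controller this is enforced by the physical key, not by software like this.

    from enum import Enum

    class Key(Enum):
        RUN = "run"
        PROGRAM = "program"

    class SafetyController:
        # Toy model: new logic is accepted only when the physical key is in
        # PROGRAM. In RUN, an attacker on the engineering workstation can
        # throw code at the controller all day, absent a pre-planted rootkit.
        def __init__(self, key):
            self.key = key
            self.logic = "validated safety logic"

        def download_logic(self, new_logic):
            if self.key is not Key.PROGRAM:
                return False  # not open for business
            self.logic = new_logic
            return True

    print(SafetyController(Key.RUN).download_logic("evil"))      # False
    print(SafetyController(Key.PROGRAM).download_logic("evil"))  # True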
But before we move on, I do want to throw a giant asterisk here. Of course, as soon as our team said in the report, you should do these two things as mitigations, and then went on to say, and all of these other things, and here's how you should treat this scenario, the vendor community, the people who sell firewalls, segmentation devices, et cetera, jumped ferociously on that and said, see, if you bought our box, you would have been protected. And I just, for the purpose of education, want to note that that is stupid.
Go on, Rob.
So segment your environment, use these types of devices, fantastic. But Monday-night quarterbacking attacks to go, if the scenario was entirely different, we would have won? Yeah, and if I was taller and could dribble faster and dunk, I could be a basketball player, but that's not the world I live in, right? So we need to realize that the attack would have looked different in a different environment. We saw this after the Ukraine 2015 attack as well, when people came out and Monday-night quarterbacked it: oh, well, that couldn't happen in the United States because we have NERC CIP, and NERC CIP mandates two-factor authentication, so the attackers couldn't have remotely connected. It's like, dude, number one, it was distribution; NERC CIP doesn't apply. And number two, if the attacker was going after a NERC CIP certified location, they could read the standards of what you're going to have in place and design a different attack.
So all of these things are important to put in place.
They make your environment significantly more defensible.
And I'm a huge fan of segmentation and firewalls.
I don't think anyone that's serious in this conversation isn't a fan of those devices. But saying that you're going to prevent the attack
is very disingenuous in understanding how these attacks occur.
So digging into some of the nitty gritty of exactly what was going on here, what can you
tell us about the technical aspects of what they were trying to do once they were in the system?
Yeah. So in general, once they established themselves on the engineering workstation, which is where you would program your control elements, the code effectively validated that it had communication to the control environment. Which also means, again, they were there before, because you had to load in the actual IP address of the control element. Once it had the IP address of the control element, the software started verifying its connection and basically did a look through, using the native protocols, to try to interact with the system and identify, where could I load the logic, where is there a place for me to interact with the system. After it found where it could upload new logic onto the system, it then tried to push new logic to the system.
And that new logic, if it had been successful, and I'm being very careful with words here, it is our assessment that it would have removed the safety function from the system. Going back to our previous discussion, the system would have operated normally, there would have been no failure, and the defenders would have had no idea the safety functionality of their system was denied. And then the attacker could have followed up with an attack on the industrial process where the safety system wouldn't have kept it in check, or they could have just waited to see if anything unsafe ever happened, and then it would just look like the safety system didn't work correctly. And unless there was really good forensics done later, you may not even suspect it to be a cyber attack.
That one's a little less realistic.
It's more realistic that they would have caused a second attack on the industrial process itself.
What occurred, because of their errors or because they intended it, and it's always difficult to get to intention, right, but it appears they did mess up. What it appears they messed up on is some aspects of their code and interacting with the Triconex system, which caused the safety system to do exactly what it's supposed to do and fail when it realizes that something is wrong. And then it shut down the industrial process, the plant ground to a halt, and the engineers got called in to try to figure out what was going on.
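Schematically, the sequence described, validate communications, find where logic can be loaded, push new logic, looks something like the sketch below. Every name here is an invented stand-in, not a real API; the actual malware drives Schneider Electric's proprietary TriStation protocol from the engineering workstation.

    # Schematic, self-contained sketch of the described sequence.
    CONTROLLER = {"ip": "10.0.0.5", "mode": "program"}

    def verify_connection(ip):
        # Step 1: validate comms to the control environment. This implies
        # the attacker already knew the control element's IP address.
        return ip == CONTROLLER["ip"]

    def find_writable_logic_region(ip):
        # Step 2: walk the native protocol looking for a place to load logic.
        # Only possible here because the key switch was left in program mode.
        return "user_program" if CONTROLLER["mode"] == "program" else None

    def push_logic(ip, region, payload):
        # Step 3: upload the new logic. A malformed payload faults the
        # controller, which fails safe and trips the plant.
        return b"\xff" in payload  # toy stand-in for a validity check

    region = find_writable_logic_region(CONTROLLER["ip"])
    if verify_connection(CONTROLLER["ip"]) and region:
        accepted = push_logic(CONTROLLER["ip"], region, b"\x00bad")
        print("logic accepted:", accepted)  # False: controller faults instead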
So looking forward, I mean, this was discovered, and this is only the fifth known ICS-tailored malware, so this is pretty novel and new. Looking forward, how does this inform what you all do from here on out?
I think it's very clear to me at times that many of the ways that we try to inform the community are off IT best practices or off pen testing tricks. And when I say "we," I mean the general infosec community. And that's not appropriate.
And so I like to be able to inform asset owners about the risk based on real threat data.
I think the appropriate way to do it is what are real threats?
It goes back to, people are like, oh my gosh, you can't have a sticky note on a substation HMI.
Like, why?
Well, the sticky note has your password.
Like, OK, well, what if somebody sees it?
I'm like, do you have cameras in here?
Well, no.
Okay, what's your threat model? Is your threat model that the Russians are paratrooping into your substation? Because if so, sure.
Is your threat model a remote adversary compromising your control center
and pivoting to your substation?
Because if that's your threat model, give the operator his damn password
so that he can actually use his system.
And if it needs to be a complex password, there's no problem actually writing it down.
Anyway, so there's a back and forth. You know, the things we've learned in InfoSec don't always translate to ICS.
So for me, what this means for a portion of the community is a good case study on why some of these vendor recommendations matter, take Schneider as an example. Schneider had already made recommendations, before the attack ever occurred, on how you implement a safety system. If you had followed their best practices, the attack would have been significantly more difficult.
This gives weight to that. With adversarial data
validating some of the recommendations and helping us develop other
recommendations, helping us move the community forward. So that's a good
thing. Lessons learned here are good. The other thing that it does is it's
starting to highlight more of the activity that's going on that most of us had suspected but couldn't prove.
And in the absence of proving something, I don't think you should just guess. And so you really
want to validate these things. So, you know, speaking from a very biased perspective because of running Dragos, but we really haven't had an ICS-dedicated threat intel team in the community before. We have had one, Critical Intelligence, back in the day. They focused a lot on vulnerabilities and research, which was fantastic. They ended up getting bought up by iSIGHT, who ended up getting bought up by FireEye, and some of those folks are still there today doing some fantastic work. But we really haven't had the threat intelligence folks that are actually going out, tracking the adversaries, and seeing what they're doing, and really understanding, down on the industrial side of the house, what's going on. At best, we've had IT security companies who've
seen adversary groups, but don't know exactly what they're doing on the ICS side of the house,
but they'll report out on them. So what I'm trying to get at here is, well, there's more focus today,
and there's teams that are focusing on it.
So I expect to see more and more stuff.
But in that vein of seeing more and more stuff, I would also note that more and more stuff is occurring.
So we're sort of at this juxtaposition: you're going to see more ICS-based threats because we're looking.
And so we're going to see more of the ones that already exist.
But there is also a very apparent uptick in adversaries being focused on this environment.
So we're also going to see more anyway. So to the layman who is not involved in security and is reading the media: get used to the words "industrial" and "ICS," because you're going to be seeing a lot of them. And it's going to seem more scary than it actually is, I guess is my point.
So in summary, I would say there's a lot of activity going on and it's an increase in
momentum that I'm not comfortable with nor used to.
But there is an aspect of that that's just looking and starting to see more and more.
So lessons learned is there are real threats out there.
Obviously, we have to be informed off of threat data, not just off of pen tester tricks. And no offense to the pen tester community.
They do a lot of great work.
But when designing our industrial systems, we don't get the opportunity to do a tech
refresh every couple of years.
We really need to get this right.
And being informed off of what threats really are doing is a great thing for the community. So Trisis, also known as Triton, is another good case study to help us make better decisions in the future.

You mentioned FireEye, and they were, I believe, first to come out publicly about this, and they called it Triton.
But you all take a different approach.
You intentionally aren't the first ones out with information on these things.
Can you explain that philosophy?
Yeah, absolutely.
I don't trust the larger community, I should say.
I'm sure I sound like some elitist jerk, and it's not my intention,
but I don't really trust there to be a nuanced discussion in the global media
when it comes to industrial attacks.
And that trust is based on a lot of actual data about these things going wrong.
And I hope that our international media community grows up as it relates to reporting on industrial threats,
but I don't want to put my customers at risk waiting for that. So in short, if you're not doing the
mission, knowing about the problems may not actually help you. It's probably the easiest
way to summarize that. So our philosophy is when we identify a new threat, we work on it,
we inform our customers first and foremost. And then when we
think they've had at least enough time to process the information, we reach out to DHS and DOE and
any of the international partners that reach out to us. There's a couple of governments that have
reached out to us and said, hey, here's our line of communication. We would like to interact with
you, and we give them the information free as well. So you're in this weird position as a company like mine.
So we're a software company, but we have an Intel team.
And in one way, my folks are persisting and growing that team based off customers.
And so it is entirely fair to say my customers get more information.
But I don't like holding information over the community, going, you know, other people would benefit from this. We're talking life and safety stuff.
I don't like being like, you're not a customer.
You can't get this.
It's both not fair to the company, but it's really not fair to the customer, or I guess the non-customer. And personally, between those two choices, I'd rather side with the industrial community.
So we still get the information out there where it's needed.
The thing that I don't have time for is going and seeking out every CERT in the world. We tried that, actually. When the CRASHOVERRIDE stuff occurred, we tried to make communications with literally every CERT that had a public email address and phone number. And many of them ignored us.
And only a couple came back in contact with us. And so I'm
not going to go chase down your government. I'm not going to go chase down your national CERT.
But if they come and reach out to us, we have no problem sharing information. I'm not going to just
give them all of our intelligence reports because I'm still running a business. But on things that
they could actually influence, obviously, that's the right thing to do.
So I could summarize the policy very easily: the industrial asset owner community needs the information before it's in the public to actually be able to process the information and focus on it.
We hear from our customers, and having been in the industrial community, I know that when something comes out and your executives and PR department are having to deal with a New York Times article saying, oh my God, we're going to die, you don't have time to actually fix anything. So to even buy them 72 hours, a week, two weeks, three weeks, whatever it might be, is very useful for them getting their heads wrapped around it, going, hey, do we even have this system? Do we run this process? Well, what are our mechanisms to do security on top of this? It gives them time to focus on the issues.
The race to the media is not appropriate. Now, FireEye did not race to the media. And this is the piece that I also think is important to capture. We put out a very strongly worded statement, like, hey,
this is not how we handle business
and we don't think it's the way
you should handle business.
And that was actually aimed at our community, talking to other ICS vendors, because there are some of them who do exactly that.
They find a vulnerability,
they race to report it.
They find a piece of malware,
they race to report it.
And that's not helpful to the community.
FireEye actually did the right thing. They found it. And, actually, this is where they'll have to speak to their own timeline. My understanding of the situation, and I was not on site, was that the asset owner found it.
They called in Schneider.
Schneider did fantastic work to try to uncover as much as they could.
Then FireEye got called in.
Then FireEye did the incident response after the fact and then did an amazing bit of work on the analysis.
So I don't know the exact timeline, but I don't know that FireEye actually found it.
And that's not a diss on FireEye. I just think it's important that if the vendor, Schneider, did some good work, we also sort of give them accolades.
But the FireEye analysis has been fantastic. They got a guy over there, Blake, who's going to do a lot more research on this coming forward.
And I think it's going to be fantastic for the community.
So I don't think they handled themselves incorrectly.
But yes, our policy is very simple.
We publish to our customers first,
then to any of the international partners
that can influence the change.
Like if you're just some random CERT
that doesn't deal with ICS,
then it's not useful for you to know this information.
And we'll work with people as need be to get them
information they need we'll try to do victim notification if possible through the certs
we're never the ones to knock on your door um and then uh we always prepare a public report
immediately by the time we notify the dhs and this is going to sound like another giant diss in the
dhx i actually really like a lot of folks over there, but we know for a fact that, that when we notify any government or any government agency, that there
is a clock that starts on when it's going to get leaked. Um, the U S government specifically, um,
the DHS and DOE areas do not have a great track record on not leaking these things. Um, so we
know when we pass it over that we, we have a clock to when it's going to come out.
Sometimes it never does.
Sometimes they do a fantastic job.
Many times it gets out eventually.
And so we already prepare the public report.
In this case, we had a public report.
When we found this, we didn't know FireEye had it.
We didn't know anything about the larger context.
We just were doing our job.
But we went ahead and prepared the
report for when it was going to get published. FireEye published it. And so we published ours.
So for those of us on the outside who are just leading our lives, minding our own business,
what's the appropriate level of concern for us?
Probably not a lot. Because what are you going to do with it?
Yeah.
It doesn't impact you. Or if it impacts you, there's nothing you can do about it. It's like those random events.
I don't know an appropriate analogy. I generally hate analogies anyways, but
if you have information that you cannot act on, you might as well not have that information.
And so if your vocation is in industrial facilities, let's have a conversation about safety. But if you're living your life normally, what is useful to know is that the industrial asset owners and operators, and the larger industrial community, actually take a lot of these things seriously. And although we definitely need to do more, especially in some industries where not a lot is actually being done and we want to do a lot more there.
But in some industries, they do a ton of things.
You've heard me plenty of times talk about the power grid operators in North America where they do a lot of training and exercises and tabletop exercises of how they would respond to these events.
We've got customers that don't have safety systems in their environment and still they're like, okay, well, what if this wasn't a safety system?
What if it was this type of system?
And what if it did these things?
Let's walk through what we do in that.
And they take it very, very seriously.
And so to the larger community, I would just say, just because you're not getting information
about all the good work that's being done, please don't assume it's not being done.
And I honestly wouldn't fear the scenarios we're talking about. If there are loss-of-life conditions, the people I worry for are the people at the plant level. It could be far worse, but so could a lot of things. I don't think that there's value in
people fearing about these things. I mean, look at what it's done. Look at what that fear has done to the nuclear industry. Nuclear energy is one of the cleanest, safest forms of energy that we have.
But the fear from Hollywood, the media, anything else around radioactive monsters and all this,
it drives a fear that has crippled that industry to where we have to look to other sources.
And the people that operate those types of plants almost operate at a loss, but do it because it's a good energy source for the base load of the American power grid. So, I don't know, I'll just say to the larger industry: fear and hype are always going to lead you to a bad situation and put undue pressure on the people that are operating equipment. And if we're so scared that we ask them to change the way that they are doing business, not from a safety perspective but from a PR perspective, you're going to be pulling resources away from the safety perspective.
If we let fear and hype drive or motivate or push unnaturally the evolution of that industry,
we will get an answer that looks different than the right one.
So I would just say to everybody,
we don't generally see New York Times headlines saying the power is still on.
Things are working well.
Like, don't worry.
There's a lot of good people doing a lot of good work.
Our thanks to Robert M. Lee for joining us.
You can find the complete report on the Trisis malware on the Dragos website. It's in their blog section.
And now a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses
is by targeting your executives and their families at home? Black Cloak's award-winning
digital executive protection platform secures their personal devices, home networks, and connected
lives. Because when executives are compromised at home, your company is at risk. In fact,
over one-third of new members discover they've already
been breached. Protect your executives and their families 24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
The CyberWire Research Saturday is proudly produced in Maryland out of the startup studios
of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies.
Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsey Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner.
Thanks for listening.