Risky Business - Risky Business #816 -- Copilot Actions for Windows is extremely dicey
Episode Date: November 26, 2025

In this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news, including:

Salesforce partner Gainsight has customer data stolen
CrowdStrike fires insider who gave hackers screenshots of internal systems
Australian Parliament turns off wifi and bluetooth in fear of visiting Chinese bigwigs
Shai-Hulud npm/GitHub worm is back, and rm -rf'ier than ever
SEC gives up on SolarWinds lawsuit
Dog eats cryptographer's key material

This week's episode is sponsored by runZero. HD Moore pops in to talk about how they're integrating runZero with Bloodhound-style graph databases. He also discusses uses for driving runZero's tools with an AI, plus the complexities of shipping AI when the company has a variety of deployment models.

This episode is also available on YouTube.

Show notes:

Google says hackers stole data from 200 companies following Gainsight breach
Gainsight Status Trust Status
CrowdStrike fires 'suspicious insider' who passed information to hackers
Salesforce cuts off access to third-party app after discovering 'unusual activity'
Attacks of the Striking Panda: APT31 Today (Positive Technologies, in Russian)
Office of Public Affairs | Seven Hackers Associated with Chinese Government Charged with Computer Intrusions
Australian federal MPs warned to turn off phones when Chinese delegation visits Parliament House
Sha1-Hulud: The Second Coming of the NPM Worm is Digging For Secrets
FCC eliminates cybersecurity requirements for telecom companies
Trade Associations Cybersecurity Practices Ex Parte
SEC voluntarily dismisses SolarWinds lawsuit
Record-breaking DDoS attack against Microsoft Azure mitigated
The Cloudflare Outage May Be a Security Roadmap – Krebs on Security
Critics scoff after Microsoft warns AI feature can infect machines and pilfer data
vx-underground on X: "I've had a surprising amount of people ask me about Copilot"
Researchers warn command injection flaw in Fortinet FortiWeb is under exploitation
Two suspected Scattered Spider hackers plead not guilty over Transport for London cyberattack
Russia arrests young cybersecurity entrepreneur on treason charges
This campaign aims to tackle persistent security myths in favor of better advice
Oops. Cryptographers cancel election results after losing decryption key.
Uncovering network attack paths with runZeroHound
Model Context Protocol
Transcript
Hey everyone and welcome to Risky Business. My name's Patrick Gray. This week's show is brought to you by runZero, which is a fabulous tool that lets you do, I guess, inventory, asset discovery and even vulnerability scanning. And its creator, H.D. Moore, will be along in this week's sponsor interview to talk about a couple of things. They have been playing around with BloodHound's OpenGraph to do some cool stuff with that.
And we're also going to talk to HD about what they're doing with AI, which is probably more than they're kind of shouting from the rooftops because I think they just don't want to be those people, right?
But they're doing some cool stuff there and we'll find out about all of that after this week's news, which starts now.
Adam Boileau, welcome.
And the first thing we are going to talk about today is Salesforce's rough year continuing.
There has been, apparently, what people are couching as, like, a supply chain attack involving an app or a platform called Gainsight, which has resulted in Salesforce data for something like 200 organizations going walkabout. What do we know here?
So I guess to start with, Gainsight is a company that you glue into your Salesforce so that it can, like, give you insights about your customers, and it's designed to, like, use that data to, you know, task your sales team with doing things. So it necessarily has quite deep integration into Salesforce, deep integration into your data.
It looks like somebody got hold of some access from Gainsight into those integrations.
We're not 100% clear how that happened.
Probably someone got some credentials, got some access to a system at Gainsight,
and then kind of pivoted onwards into their access into Salesforce customers.
What's interesting here, I think, also is that Salesforce appear to be the ones that initially spotted this.
So they spotted some unusual interactions with their APIs using Gainsight's access, and then went back to Gainsight and said, hey, you know, like, what's going on here?
Which I guess on the one hand is a good news story, right?
It's just that Salesforce has kind of learned something from the mess that has been going on
for them earlier in this year.
I think they initially went to Gainsight with like three customer accounts that were being abused.
As you said, it has now expanded to significantly more.
So it looks like kind of a repeat of the earlier Salesforce breach, except using Gainsight's access.
It looks like it's the same people, like the Scattered Lapsus$ Hunters, ShinyHunters, you know, that kind of lot.
And I expect we will see the same kind of, you know, endgame play out,
where the companies whose data got stolen get extorted,
and we'll see whether that makes them any money.
Yeah, so meanwhile, a spokesperson for the ShinyHunters group...
they have a spokesperson now.
I wonder if it comes with dental benefits.
They said that Gainsight was a customer of Salesloft Drift.
You know, this was the earlier one that you mentioned, a company called Salesloft and its Drift product,
and that's how people grabbed a whole bunch of Salesforce data.
You know, we don't really precisely know what the mechanics of the breach are here, but it looks like, based on a couple of things that have come out, like one of them from Gainsight, it looks like probably the attackers were just able to steal a bunch of API keys or something like that, or creds, right?
Because they've got here: Gainsight can provide the IP ranges and subnets that Salesforce login events from the Gainsight connector should originate from. At this time, these details can only be shared through a support ticket, blah, blah, blah, blah.
But that suggests to me that the way that this thing works is, it's a cloud service, and you have to get an API key from your Salesforce instance, plug it into the cloud service, and then the cloud service interfaces with Salesforce,
which begs the question, why wasn't this IP restricted to those ranges in the first place?
if those logins are only supposed to come in from that range,
why was there no restriction there?
You know, I think we need to start having these conversations
and asking these questions, personally.
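The control being asked about here is just an allow-list check on the source address of each API login. A minimal sketch, assuming nothing about how Salesforce implements it internally; the CIDR ranges below are RFC 5737 documentation addresses, not Gainsight's real connector subnets (which, per the episode, are only shared via support ticket):

```python
import ipaddress

# Hypothetical connector egress ranges (illustrative documentation
# addresses, NOT Gainsight's real subnets).
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def login_allowed(source_ip: str) -> bool:
    """Accept an API login only if it originates from an approved range."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in ALLOWED_RANGES)

print(login_allowed("203.0.113.42"))  # inside the allow-list: True
print(login_allowed("192.0.2.7"))     # stolen token replayed from elsewhere: False
```

With a check like this in place, a stolen API key is useless unless the attacker can also originate traffic from the connector's published ranges.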
I mean, yeah, we did see in the earlier breaches
these crews, you know, pilfering API tokens to move laterally.
So it's absolutely in their wheelhouse, you know,
in terms of a modus operandi, a technique they're used to using.
And it makes sense that, you know,
that's kind of the avenue they went down.
Whether or not the earlier breaches kind of led to the Gainsight one,
like maybe there was some access into Gainsight with tokens stolen,
and then they can steal tokens to go back into Salesforce and so on and so forth.
But you're right.
The, you know, kind of assembling things out of cloud components is how we build modern,
you know, modern tooling, but it does come with a bunch of risks and those of us that
are old enough to remember when, you know, perimeters existed and networks were restricted
and you had to be in a certain place.
And like where you came from was an auth factor.
You do rather end up feeling like, you know, surely grandpa's controls here would have
prevented this kind of abuse.
But that's kind of not the world we live in anymore,
even though there is something to be said for grandpa's IP restrictions.
Yeah, I mean, it's a, look, it's a funny old world.
Let's just, let's just say that.
But, you know, I think Salesforce did well to detect this, right?
Because if it is a detection based on a bunch of unusual logins from the
wrong IP ranges kind of thing and like actually detecting that when you're operating
at the scale at Salesforce is like kind of commendable, right?
I guess they have to get on this because they've been just dragged all year. Yeah, yeah, exactly. In the end,
it's their logo and name that is at the top of this story, even if it wasn't directly their
fault, like even if it was, you know, some customer application and their customers, customers,
etc. Like, it's still their name. And, you know, I think good on them for having spotted this
proactively enough to kind of get onto it early, but in the end, did it make most difference? I don't
know. I mean, they're still there with their name, as you say, being dragged through the mud.
Yeah, so what we do know is that Google has said that something like 200 customers of this platform have had their Salesforce data walked.
But we also know that the Lapsus$ ShinyHunters, or whatever you want to call them, have made some claims here that are being denied.
They're claiming to have breached a whole bunch of different companies.
Like one of the companies that they claim to have breached was CrowdStrike.
So they said that as a result of this, you know, they were able to breach CrowdStrike and whatever.
They used a few screenshots to kind of make that claim.
We've seen them do, you know, the Com kids kind of do this before.
In the case of Okta, where they managed to get onto some, like, third-party service agent's desktop
and get a couple of screenshots of, you know, an Okta support panel,
and that was all that they managed to do.
And then they claimed that they'd breached Okta.
But, you know, in this case, it looks like there was someone on the inside actually taking some of these screenshots,
and they have now been fired.
So it was interesting seeing CrowdStrike getting dragged for this,
because honestly, that's a good result, right?
Like if you've got someone in there doing the wrong thing,
detecting them and firing them seems like a win.
To me, actually, you know,
no organisation the size of CrowdStrike
is going to be able to completely exclude, you know,
a malicious insider, right?
Like, it's not realistic to expect to screen them out.
So being able to detect them and give them the heave-ho,
that's a good thing. That's a win.
Yeah, yeah, I absolutely agree.
I mean, hiring people is difficult
and the background checking people trying to figure out who you can trust is really, really difficult.
And, you know, as an organisation, especially one that's hiring, you know, young technical kids,
you often do just want to give them a chance.
I know, like when we were hiring at Insomnia, you know, we were hiring, you know, by and large,
punk hacker kids, right?
And you have to give them a degree of trust that, hey, even if you have done some shady things in the past,
you have done some things that, you know, a normal corp background check might well pick up and flag on,
at the same time you're saying, hey, we're giving you an opportunity,
a legit job. It's going to make you, you know, more money than you were making doing dumb hacks and stuff.
You know, come on board, take the win. Let's, you know, move on with your life. And, you know,
99% of them take that and do the right thing. You know, sometimes you lose one. As you say,
spotting it, reacting to it, that's the best you can do. Well, I'm going to guess, I'm going to go out
on a limb here and say CrowdStrike's hiring policies are a little bit different to your hiring policies
when you were running a small pentest firm in New Zealand,
where you were prepared to roll the dice. Like, I mean, anyone who admits to anything dodgy
getting a job at a major EDR vendor?
It's not happening.
Sometimes you'd like to give these kids a chance, you know?
They're good people mostly.
Speaking of Insiders, too, I want to follow something up.
A couple of weeks ago, a week or two ago,
we spoke about this thing that happened at Intel
where someone tried to sneak data out on a USB device,
and then that didn't work, so they spun up a NAS
and were able to get the data out on the NAS.
You know, and I made the point at the time,
well, they still got caught, right?
But we sort of used that as a way to kick the DLP companies. I've since found out it was actually the DLP company's product that caught it, because it detected the attempted USB exfiltration as suspicious, and then the product just went screenshot crazy, and that's how they caught the guy. So, you know, interesting.
Yeah, it's funny how these things work out, I guess. Yeah, in that case, good job. Good job, DLP, snapping someone, you know, mounting up a NAS and copying stuff around.
And with screenshots, too. Like, you know, and I'm sorry, but, you know, we say it for attackers, we've got to say it for defenders as well: it ain't dumb if it works. Yeah, and I guess it did that time. So yeah,
good job. Now, moving on, we've actually got a write-up to cover from, it's Positive Technologies,
right? The Russia-based, you know, big consultancy, which is, you know, very competent,
but does a lot of stuff for the Russian state, and we're not big fans of the actions of the
Russian state these days. But what's funny here is we've got a pretty detailed write-up about
a Chinese APT crew called, you know, APT31. This is a group that has been,
I think, indicted in absentia by the US Department of Justice.
So they're like a known APT group.
They do a lot of economic espionage, intellectual property theft, that sort of thing.
And yeah, you know, the Russians have done a big write-up on the Chinese hacking them.
You know, with friends like China, who needs enemies?
Yeah, exactly.
I always like reading write-ups like this from other countries
because you do get a slightly different perspective,
there's a slightly different flavor, like the culture of how much you share or what you say,
is a little bit different than what you might get out of Western firms.
So they're always a fun read.
This crew, APT31, I think has been attributed to the Wuhan branch of the Chinese Ministry of State Security.
So, you know, they're pretty serious business hackers.
And it's just a great write-up, a bunch of great details.
They're using kind of tradecraft that you would expect.
One thing I did quite enjoy in their tradecraft is they are using the old Microsoft dev tunnels, where like Visual Studio came with this feature a couple of years back where you could just like tunnel stuff out of your network into Microsoft, and then Microsoft would kind of port forward it back to you. So you could have a listener, with a Microsoft certificate, in Microsoft address space, that would terminate on a local service on your machine, and they were using that for backdoor access. So it's a great
way to get around network controls. And red teams have used it. So it's always fun to see, you know,
that stuff trickling down into APT crews. But yeah, overall it's just a great write-up. And, you know,
it's a good reminder that there are competent people everywhere,
and just reading the stuff that's in English,
you know, sometimes you do miss out on things.
I mean, you know, you've got to make a no-limits partnership crack here.
You know, Russia and China have a no-limits partnership.
I guess that includes, you know, the limits placed on each other
by their various security controls, but fun stuff.
Look, staying with the Chinese as well,
we had an interesting thing hit the news cycle here,
which was, there's a bigwig from the Chinese Communist Party visiting Australia at the moment as part of some huge 100-strong delegation.
And as part of that, I think he's meeting the Prime Minister.
It's a pretty big deal, state visit, you know.
And the Australian Parliament House, like, you know, security team put out this memo.
Not to be, you know, not to be forwarded onto anyone, of course, and subsequently, of course, it leaked.
But this memo basically said, hey, we've got a Chinese delegation coming through Parliament House.
At various points, the Wi-Fi ain't going to work because we're turning it off.
You should update all of your devices to lock down mode, turn off Wi-Fi, turn off Bluetooth.
This, I don't know how to feel about this, right?
Because on one level, I think, do they know something here?
Does China try to slip in a couple of MSS guys holding, you know, super advanced Chinese equivalents to, you know, the Hak5 Wi-Fi Pineapple?
or is this just like, you know, the juice jacking thing or don't use public Wi-Fi advice?
I sort of can't figure it out, to be honest, if this is based on actionable advice.
What did you make of this?
I mean, my initial reaction, the initial feeling, was kind of the same thing,
which is like, this feels like scaremongering: oh my God, there's, you know,
there's Chinamen in our midst, quick, everybody close the curtains and, you know, turn off the Wi-Fi.
Set fire to your laptop. Eat your phone. The Chinese are coming.
Yeah, exactly. When all of the laptops were made in Shenzhen to start with.
But so, like, that was my initial feeling.
But then, yeah, you do start to wonder.
Like, I wonder if there is something actionable.
Like, is there some proximity thing?
Because, I mean, there have absolutely been proximity-based attacks, you know, against these platforms.
But I'm always reminded of Mark Dowd's Apple AirDrop, you know, path traversal, you know, bugs.
So, like, there are proximity bugs.
It would be kind of a, you know, a bold move to do it while you're there in an official delegation.
But China does bold moves, you know what I mean?
So that's it.
And that's why I'm conflicted about this, because I'm like, oh, it's ridiculous to assume that, you know, there would be someone malicious as part of this delegation here for a friendly visit.
And then you're like, actually, no, probably not.
I mean, you know, like, on the other hand, why not?
I guess for comparison, the same delegation came through New Zealand before it went through Australia.
And this guy's like third in charge in China.
Like, he's head of their, like, I guess the equivalent of the Parliament, so like not quite Speaker of the House, kind of higher up, but still like third in line
if, you know, something bad happens in China. But yeah, they came through New Zealand.
And I don't see, I didn't see any news articles about our parliament turning off Bluetooth. So
perhaps the New Zealand stuff is just absolutely impregnable. And, you know, it's the Australians being,
you know, worrying about things that, you know, we've got totally sorted. Or, you know, maybe we
didn't get the actionable advice. You know, who knows? But, yeah, either way, it was just,
it's an interesting story and it does make you wonder, you know.
It does, yeah.
So it was Zhao Leji, I think, is the pronunciation there.
He is the chairman of the Standing Committee of the National People's Congress of China.
So, yeah, visiting Parliament.
And look, I think one thing about this story, too, is I don't think the Chinese are going
to appreciate these news stories very much.
I mean, at one point, like Australia's relationship with China deteriorated to the point
that China issued its like 10 demands.
I think it was 10.
They had a demand list of things that Australia needed to do to improve relations, and one of them was to stop getting the press to be critical of China,
which we had to explain to them like that's not really how it works here, but they hate stuff
like this. And in particular, they hate anything where they're being singled out. I think
Dmitri Alperovitch on his Geopolitics Decanted podcast had a great interview. He was talking about,
the podcast was about tariffs and trade negotiations with China. And at one point, the Chinese
were going apoplectic about some Trump-introduced trade measure,
because they were complaining about being singled out.
And as soon as they found out, no, this applies to everybody.
They were like, oh, cool, yeah, no problem.
You know, so when they feel like they're being singled out,
it's a huge problem.
So I sort of feel like, you know,
we've got some pretty strong comments
from the opposition party here in Australia
in this piece from The Guardian.
And I feel like, well, if they leaked this
to sort of score some points here,
I don't think that was in the national interest.
Is it in the public interest in terms of like,
should the public, you know about this sort of stuff?
I think absolutely.
but does this undercut the country in other ways?
Like I kind of feel like it does.
So I don't know if there's going to be any sort of flow-on effects from this.
We'll just have to wait and see.
And, you know, if anyone knows why they did that, you know, email us, let us know.
Meanwhile, Shai-Hulud is back.
This is the NPM worm that gave us the warm and fuzzies earlier this year.
Someone's had a second go of it.
There's apparently 500 packages affected.
What does the worm actually do in this iteration, Adam?
So much the same as it did last time with some refinements.
I mean, essentially it infects developers through backdoored JavaScript packages,
rummages around, steals their git credentials, steals their GitHub tokens,
steals their npm tokens, publishes updated packages that are backdoored as well.
So it kind of propagates via npm, you know, every time a developer with publishing rights gets compromised.
Then it adds itself.
It also uses GitHub Actions
to set up a backdoor in the environment that it's running in,
which I think is one of the newer features.
And it steals all the secrets: it uses TruffleHog to rummage around,
steal all the creds it can find,
and just straight up publishes them in a new GitHub repo in that account.
So you can rummage around on GitHub and help yourself to the credentials.
The backdoor component is new,
and that's quite fun, because the GitHub Action
uses hooks on GitHub discussions.
So you can post a message in the, like, discussions forum feature of a repository, and it straight up takes the body of that discussion and passes it to a shell to exec on the machine that it's backdoored. So it's a great, you know, public internet to command exec in all the places that this thing has ever run. So that's pretty
fun. But yeah, it's, you know, it went a little bigger, I think, than the previous one; it was a bit more
aggressive. GitHub is changing a bit about how auth for publishing packages works, so I think maybe
the people who wrote this felt like, you know, if they want to do it again,
they better do it now before npm ruins their party.
But yeah, it's been spreading pretty big, gotten some quite big name packages,
but it's also pretty noisy.
One of the things it also does is it will delete all of your files if it can't find
credentials that are useful to it.
So it punishes you for not having your credentials.
Not having a passwords... no passwords.txt? rm -rf.
Exactly.
Exactly. Exactly. It punishes you.
So, yeah, but I don't know who's behind it.
Like, there's a couple of bugs in it that's making it not quite as effective as it could be.
And because it's so noisy, you know, it's being shut down pretty quick in terms of how it propagates.
But still, like, I do love a good internet mess and it's making a mess.
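The discussion-hook backdoor Adam describes boils down to attacker-controlled text being handed straight to a shell. A minimal sketch of the pattern, not the worm's actual code; the event dict here just mimics the shape of a GitHub discussion webhook payload:

```python
import subprocess

def handle_discussion_event(event: dict) -> str:
    # The backdoored workflow reportedly does the equivalent of this:
    # take the attacker-supplied discussion body and exec it in a shell.
    body = event["discussion"]["body"]
    result = subprocess.run(body, shell=True, capture_output=True, text=True)
    return result.stdout

# Anyone who can post a discussion gets command exec on the runner.
fake_event = {"discussion": {"body": "echo compromised"}}
print(handle_discussion_event(fake_event))
```

The point is that there is no validation step at all between the public internet (anyone can open a discussion on a public repo) and shell execution in the compromised environment.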
Well, how is npm going to get, you know, how are they going to get on top of this?
Really?
Like, is this just going to keep happening, or are they actually going to get on top of this?
So they have proposed some changes in the workflows for publishing packages to try and require, you know, like live,
human proof of presence.
You can't automate everything quite so much,
which has some downsides, obviously,
but clearly, you know,
having worms propagate through your packages,
not ideal.
And just because the JavaScript ecosystem is so fluid
in terms of how often it pulls upstream dependencies,
like this is a thing that, you know,
you do want a human in the loop somewhere.
You don't want this just, you know,
going full auto crazy in npm generally.
Yeah, I remember when we spoke about this
when they were first talking about those sort of controls
and like, it's a good idea,
but it's only going to slow it down, I feel like, you know.
Well, yeah, I mean, the supply, like the nature of the JavaScript ecosystem
just kind of lends itself to these kinds of, you know, supply chain attacks.
So, yeah, we're going to see more of them.
And, like, honestly, it's just, you know, it's kind of fun to talk about.
I'm glad I'm not a JavaScript dev, though.
Yeah, it's a good time for us.
It is a good time for us.
Basically.
Now, this is something that has been coming for a while.
We've covered it in Risky Bulletin, and I don't know that we've talked about it on the main show, but the FCC in the United States has eliminated these minimum security requirements for telcos that were sort of introduced, I think, in the wake of Salt Typhoon, or they may have even predated that. But the idea was that the FCC had sort of reinterpreted various bits of, like, I think the CALEA Act, to say, well, you know, according to this act, you need to hit this bar and make your systems this secure.
The Trump administration has rolled it back, basically.
He voted with one of his colleagues, which is Olivia Trusty.
There you go.
While the Democratic Commissioner Anna Gomez voted against this.
So it looks like telcos are kind of off the hook, right?
I mean, the telcos had some legitimate complaints in my view, right?
Which is, hey, this is going to be really difficult and expensive,
and it's going to cost us a bunch of money, which is, of course, what they're going to say.
But, you know, it's all academic now because it's done; those requirements are now gone for the telcos.
What do you make of this? Right? Because is getting telcos to spend billions of dollars to try to update their core networks to a more secure state, is that what's going to win us anything here? Or, you know, do we need to be thinking more about how to use over-the-top modern services to mitigate most of the risks from groups like Salt Typhoon?
I mean, I think having some minimum requirements for telcos, I thought, was a good idea.
And these are pretty low bar minimum stuff,
like claiming this is going to require billions of dollars worth of work.
You know, I feel is a bit disingenuous.
Some of these things are like,
change the default passwords, right?
They're not things that are surprising to anyone
or are specifically, you know, are telco specific.
They are things that you, like, you as a customer of a telco,
would kind of expect them to be already doing.
And, you know, in that respect, I don't,
I feel like letting telcos off the hook isn't a great plan.
And also, as you say, like, telcos are less important than they used to be, you know, because of over-the-top crypto, because, you know, we tend to, you know...
Well, no, hang on, hang on. Let me just stop you there. I mean, I would rephrase that and say that telcos should be less important these days. But as we saw through the Salt Typhoon stuff and how much the Chinese got out of that, you know, clearly they still are very important in terms of figuring out who's talking to who and this and that. So, you know, I mean, unless you really change policy and pivot towards using apps that regard the telcos as threats, you know,
just as a matter of policy like they are a problem right if you are if you're an attacker uh you know
in in a u.s telco trying to figure out who your FBI agents are talking to i mean chances are they're
using phones they're using text messages and whatever i guess what i'm asking is like can we can we
reasonably expect that a minimum baseline is going to be able to stop foreign adversaries from being
able to do that and i just don't think it will it might make it a bit harder but i think the solution
here is going to be moving towards you know over-the-top stuff and
doing a lot of education, both internally at places like FBI and then externally as well,
which is, hey, if you want to talk to us, maybe don't send text messages, or maybe don't ring the number.
You know, maybe you want to use a pay phone or send us an email.
You know what I mean?
Yeah, I mean, I think defending in Telcos against nation states, right, is kind of a bar that, you know,
regulation is never going to meet, right?
If your adversary is the Chinese MSS or the Russian FSB, then, you know, a telco is always going to be, you know,
fair game for them because they'll find a way.
But that doesn't mean that we should let the telcos off the hook completely, I suppose.
In this week's Between Two Nerds conversation, the Grugq and Tom were kind of comparing
the lack of regulation for cloud services, like infrastructure providers like Amazon and so on,
to telcos, and saying, like, do we feel like we can trust cloud providers to do a good job
of this?
And, like, someone like Google, or someone like Amazon EC2,
like, they do do a good job. Amazon, AWS, like, they do a good job, because it's in their DNA to have built, you know, robust systems that are resilient against all kinds of threats.
Like security is super important.
Telcos don't really have that kind of cultural background of caring about security.
And so regulation there maybe is more appropriate.
And I think maybe it was the Grugq who made the point that if you're an engineer inside a telco, having regulatory requirements to point at and justify why you need to be in the way,
why you need to slow things down, why you need to, you know, kind of be an impediment
is actually pretty useful.
And having done a bunch of work in telcos, I found, you know, like, being able to point
to external requirements was useful for justifying things.
And in that respect, I feel like letting them go is a loss, you know.
But on the other hand, China's MSS is always going to be up in your telcos.
So what can you do?
What can you do?
Oh, well, onto the next story.
That's what you can do.
the remaining parts of the SEC lawsuit against SolarWinds have been tossed.
This was the lawsuit that the SEC filed in 2023.
It was an interesting one.
We covered it at the time because basically one of the big parts of the lawsuit was,
hey, you know, you've been putting all of these security statements on your website
and into your SEC filings where we've looked at your internal chats
and your security people are freaking out about all of the deficiencies in your security.
you know, so therefore you were lying to the market.
You know, bits and pieces of that have just been getting tossed over the subsequent years
and the last of it's gone.
So I don't know whether this sends a message that people maybe need to be a little bit more
careful in their statements.
Like having had a couple of years to think about it, I think even if charges were proven
and there was some sort of ruling and whatever, you know, if the lawsuit succeeded,
I think the end result of that is you're just going to wind up with a different type of
weaselly boilerplate language going into SEC filings.
Instead of, like, you know, the generic stuff we've got now, it'll be a slightly more hedged generic language.
And I don't know that that really changes much.
Yeah.
And I think in the end, weasels are going to weasel.
And, you know, this particular lawsuit, in some respects, I quite liked it because, you know,
I do like seeing weasels get some comeuppance, because I've been involved with many weasels over the years.
But as you say, like, I don't think it was going to make much difference with a big picture.
And maybe it's time we let SolarWinds go and, you know, they can continue on with their life.
We can all just move on from this ugly episode.
That's right.
And people will continue to weasel regardless.
It's time to find someone new.
It's been five years.
Basically.
Oh, now someone in Australia really annoyed some people
because a world record DDoS attack
hit a single endpoint in Australia.
It was a 15.72 terabit per second DDoS targeting some Azure endpoint here in Oz.
Yeah, and apparently Microsoft just kind of weathered it, which, you know, I guess good for them.
Good job on the network engineering team.
All the people involved in, you know, pushing that many packets.
It was, what, 3.64 billion packets a second, they said.
That's a lot of packets.
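For what it's worth, those two figures, 15.72 terabits and 3.64 billion packets per second, together imply an average packet size. A quick back-of-envelope check in Python, using only the numbers quoted above:

```python
# Back-of-envelope check on the reported Azure DDoS figures:
# dividing bits/sec by packets/sec gives the average packet size.
bits_per_sec = 15.72e12      # 15.72 Tbps
packets_per_sec = 3.64e9     # 3.64 billion pps
avg_packet_bytes = bits_per_sec / packets_per_sec / 8
print(f"~{avg_packet_bytes:.0f} bytes per packet on average")
```

Around 540 bytes on average, which is at least consistent with a junk-traffic flood from consumer devices rather than a bare SYN flood.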
So, like, good work taking it.
It looked like it came from one of these, like, you know, compromised home routers and cameras botnets
because, you know, people's domestic internet is so big these days that, you know,
chucking 15 terabits a second at a target is totally a thing you can do with, you know, a few thousand compromised IP cameras
and routers and things.
So that's kind of terrifying in a way.
But, you know, I guess it's kind of embarrassing the internet works, given that you can
throw that many packets around and people do.
And yet things are basically still fine.
You know, it wouldn't have been that long ago that, you know, the entire place would have
been brought to its knees at, you know, that many packets per second.
Yeah, I mean, I'm amazed like we have the links to this country to support that sort of thing.
You know what I mean?
Like, it's amazing.
It's like a good news story, isn't it?
It does.
And Krebs has got an interesting kind of think piece here.
You know, there was an intermittent outage.
What was that like last week?
I think it was before.
Yeah, it was before we recorded last week's show.
We didn't cover it, though, because it was just an outage.
Cloudflare went down for a bit and was flapping around and, you know, having a hard time.
And some people had to spin up, you know,
some sort of alternative, like hacked together CDN or whatever.
And some people just removed CDN protection so that they could be online.
And you know, Brian's got this story here, which makes the point that, hey, Cloudflare isn't just about availability.
It's also your WAF.
And people have been very lazy about stuff like SQL injection because they're using Cloudflare and just relying on Cloudflare to mop that stuff up when people try it.
So basically the thinking is, you know, a strategy for attackers might be to just wait until Cloudflare has an outage again and, you know, monitor for DNS changes on targets and whatever,
and then you're going to have an easy time going at them.
I don't know.
I mean, sure.
What did you think of this?
So I felt the bit about, like, the WAF being missing
is a thing that ideally people shouldn't be relying on.
And I guess, you know, in my previous professional career,
you know, we did some shootouts between various WAF products,
including comparing Cloudflare with various other CDN and on-premise WAFs.
And the thing that is worth remembering about Cloudflare is Cloudflare does an amazing job,
but they also only have, you know, a couple of hundred milliseconds' worth of time to make a call on every request.
And so you're going to get at most, you know, 80, 90 milliseconds' worth of CPU time spent deciding whether a request is malicious.
And any attacker that can exceed that threshold is probably just going to get let straight through.
So, like, the amount of WAF you can get from something at Cloudflare scale and the amount you're paying for that WAF, you're kind of getting what you pay for.
So that's one thing to remember.
The thing I liked about this particular Krebs piece, though, was anytime you have a big outage like this and people have to make ad hoc changes to your infrastructure to stay alive, it's really important that you've got a process afterwards to say, okay, what did we change, what ad hoc stuff got created, you know, how did we solve those problems? And did we do so in a way that, you know, maintained our security posture? Did we spin them up with personal devices? Did we use, you know, accounts that were outside of regular process? What shadow IT got created, which will then get forgotten about and left to bit rot and then compromised,
you know, three years from now or something? So that part I thought was a really good call-out.
The idea that, you know, an attacker is going to wait for Cloudflare to go down and then
pounce, like that, I didn't find particularly compelling. No, and it's funny what you say,
right, because you just triggered a memory for me, which is nearly 20 years ago when I founded
this podcast, one of the early sponsors was Checkpoint, but in Australia. So obviously the audience
early on skewed very heavily towards Australia.
And the reason Checkpoint sponsored baby Risky Biz, you know, a new podcast, was so that they could
have someone come onto the show every month or two and just beg their customers to go and find
and remove allow-any-any firewall rules from their Checkpoints, because of exactly what you say,
which is something breaks.
They throw an allow-any-any into their firewall and then, okay, everything works again and
they just leave it there.
And this was such a problem that they were literally sponsoring the podcast so that they could
beg their customers.
to like roll them back.
So yeah, it's a thing and it's still a problem.
Although now it's Cloudflare and not your Checkpoint firewalls.
That's how it goes.
Now let's have a chat about Copilot Actions in Windows
where Microsoft has got some new experimental AI agent
shipping with like a Windows beta that you can turn on.
But what's really funny is they're shipping it and just saying,
hey, this is like super experimental
and unless you're like a super duper power user,
like don't turn this on.
And we can't really, we don't really know what's going to happen here.
So they're just hedging the absolute crap out of this release.
And I think quite rightly people are pointing at this,
you know, people in security are pointing at this and saying,
oh my God, this is going to be a problem in the future.
Now, a couple of things.
Yeah, probably.
I would agree with that that it's going to be a problem in the future.
But complaining about it ain't going to help
because I have a feeling that in three, four years from now,
like even sooner, the job of security people is going to be dealing with stuff like this.
You know, companies, they want the productivity gains that come with AI.
They're going to demand it.
They don't care that you shouldn't mix code and data.
They don't care.
That's all foreign gobbledy gook.
Your job now as a security professional is to help organizations do this in a way that doesn't get you immediately, like, digitally murdered, basically.
Yeah.
Yeah.
And we're going to solve the halting problem while we're at it.
I mean, it's just, the idea of hooking up an LLM
to be able to just, like, randomly do stuff with your Windows machine,
it sounds terrifying, and you are correct
in that they are just going to do it anyway
and I think the, like, the Microsoft caveat was
only turn this on, quote,
if you understand the security implications outlined, unquote,
which nobody does, nobody does, right?
nobody knows what's going to go on
and like
I don't know like
it's just, it's terrifying.
And at the same time,
you know, as a hacker,
as someone that covers security,
as someone that likes breaking stuff,
like this is a,
it's wonderful.
We're going to be,
you know,
future hackers are just going to be like
convincing windows to give you a shell,
convincing windows to,
you know,
do whatever you want to do,
you know,
by asking nicely.
Like, no more do we have to think about,
you know,
memory corruption or, you know,
security boundaries or, you know,
like, complicated things that require,
you know,
symbolic execution or whatever.
It's like, no, we just ask nicely now.
We just, you know, be convincing, like, everything becomes social engineering,
which, like, what a world, man, what a world.
It is.
I mean, I think we're going to be able to deal with a lot of the basic stuff, right?
Like, I think that's, I'm pretty bullish on our ability to deal with, like, very basic
prompt injection and whatnot.
But you're right that we can't fundamentally, from a first-principles sense, solve this problem.
Now, look, as regular listeners would know, these days Risky Business, you know,
and maybe me in particular, like, we work very closely with a VC fund.
You know, I work very closely with Decibel and some of its portfolio companies.
And what's amazing about that is you really get a sense for the sort of technology that's being funded
and where all of this is going.
And I can tell you absolutely we're going to be dealing with like AI on endpoints.
Like it's happening because when you see, you know, you just look through a pitch deck of what people are saying they can do with AI agents on desktops.
And it's amazing from a productivity perspective.
It's amazing from a like organizational efficiency perspective.
And you get some real security gains.
Like just really powerful, sophisticated stuff.
So it's coming.
Like it is absolutely coming.
Should it be turned on by default in experimental mode on every Windows endpoint ever?
Probably not.
But, you know, it is, I guess that's my point, I've said it a bunch of times.
It's happening.
Meanwhile, the VX Underground folks had a bit of a poke at this
beta feature in Windows and did a little write-up on X, which I found quite funny, actually.
Yeah, so they dug through the implementation of this thing. And interestingly enough,
it's actually not so much client side. There is actually a lot of heavy lifting talking back to
Microsoft servers. They make the good point that if you want to turn this off, you can just,
like, remove the DNS entry or make a fake DNS entry in your hosts file for the particular
endpoint it uses off in Azure. But yeah, the idea that your local machine is doing
all of this AI stuff and then plumbing it off to Microsoft, and that somewhere in Microsoft there was,
like, potentially a real-time feed of, you know, once this
thing is deployed at scale, like every Windows user on the planet, everything that they're doing,
everything they're interacting with the AI on, going through, you know, some endpoint of Microsoft's, which,
like, man, there must be some ways to monetize that, but also how much is that going to cost them?
Jesus. So, yeah, that's, you know, when you pull apart these things, it does start to look, you
a little bit, you know, equal parts panopticon and terrifying.
And also, you can see why NVIDIA stock price is so high.
Yeah, I mean, I think with a lot of this, the, you know, the bull case,
I guess, is that they start shipping a bunch of these features.
They become really useful.
And then when the companies eventually turn around and say that'll be 50 bucks a month per seat,
you can't imagine not paying, right?
So we're a little ways away from figuring out whether that's actually what's going to happen.
But let's see.
Now, from talking about the problems of tomorrow to talking about the problems of 20 years ago,
that are still problems today, Adam.
We actually spoke about that FortiWeb, what was it, like a dot-dot-slash, like, you know, bad URL command execution bug.
We spoke about that last week.
Of course, it is now being exploited in the wild, and I believe there's a Metasploit exploit module for it now.
So, hooray, you too, dear listener, can just go out and exploit this one easily.
Yes, yeah, this one is
definitely being attacked in the wild.
It's on the CISA KEV list.
It was actually a two-part bug.
There was, like, a path traversal
that led to, like, an auth bypass,
or let you kind of create accounts,
an auth bypass.
And then there was a second one
where you could do
arbitrary command execution
as root on the underlying device,
and those two have been chained together.
Last week,
we hadn't seen a PoC for the second half of that.
Now,
both of these have been checked into a Metasploit module.
And the funny thing is,
you know,
in this sort of ironic sense,
the command injection part of it is you inject into the file name of a SAML user.
So you log into the command line interface of your FortiWeb and you set up a user to authorize via SAML.
And then the file name gets processed by some kind of underlying command line.
You can shell-metacharacter inject into that.
And it's just kind of, you know, it's funny that it's in setting up federated authentication,
you get code exec as a route.
So, like, it's yesterday's, you know, grandpa's bug,
but with today's, you know,
federated authentication technology,
so, by our powers combined.
Good job, Fortinet.
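The bug class being described, attacker-controlled text interpolated into a shell command line, is easy to sketch in miniature. This is an illustrative Python toy with made-up function names, not FortiWeb's actual code:

```python
import subprocess

def process_file_unsafe(filename: str) -> str:
    # Vulnerable pattern: the filename is spliced into a shell command,
    # so shell metacharacters (;, |, $(), backticks) in it get executed.
    return subprocess.run(f"ls -l {filename}", shell=True,
                          capture_output=True, text=True).stdout

def process_file_safe(filename: str) -> str:
    # Passing argv as a list means the filename is never shell-parsed.
    return subprocess.run(["ls", "-l", "--", filename],
                          capture_output=True, text=True).stdout

# A "filename" like "x; id" makes the unsafe version also run `id`.
```

In the FortiWeb case the injection point was the SAML user's file name, but the underlying mistake is the same one.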
Now, meanwhile,
two suspected scattered spider kids
have pleaded not guilty
over the Transport for London cyber attack.
This is a piece here from Alexander Martin
talking about that one.
We'd already reported, I think,
on the arrests of these two guys,
Thalha Jubair and Owen Flowers,
aged 19 and 18, respectively.
And yeah, they've thrown in the not guilty pleas.
You read this story and you think, one of the things they've pleaded not guilty to is, like, failing to hand over their passwords or passphrases to various encrypted devices.
That seems an odd thing to plead not guilty to, doesn't it?
Because you would think that that would be a fairly straightforward charge.
I think your note in our weekly planning document on this story was, well, yeah, good luck with that, guys.
I think by the time it gets to this point and you've had like, you know, National Crime Agency task forces and stuff all over you,
they've got all of your, you know, your data on drives and whatever.
I don't know.
I don't think it's going to go their way, but of course I'm not super familiar with the case.
But the vibes here don't feel good for them.
The vibes really don't feel good.
And although they were behind the Transport for, allegedly behind the Transport for London thing,
you know, any sentencing or any kind of trial process and sentencing,
assuming they're found guilty, is absolutely also going to be thinking about, like,
look at the mess at Jaguar Land Rover.
You know, like, it's just, this is a big thing in Britain,
and yeah, it's not going to go well for those kids.
No, it's not.
Speaking of someone else who's up on charges,
another one from the record,
Daryna Antoniuk over there,
has a report about this 21-year-old guy
who's been arrested in Moscow on treason charges,
and some people in the Russian media
are saying this is because he was smack-talking
that Messenger app, Max,
which is like, I guess, Russia's answer to WeChat.
So they're trying to corral everyone
onto a surveillance-friendly state-controlled app,
and he's like, yeah, this is a piece of crap,
and there's all these bugs in it,
and blah, blah, blah, blah.
So it seems like he didn't mind sort of poking,
you know, the Russian establishment in the eye,
and that's possibly what's gone wrong for him.
But, of course, this is the issue with Russia.
We don't know.
I mean, maybe he was doing something treasonous.
You would never know.
It's all going to go into a closed court,
and God knows what's going to happen to him.
Yeah, yeah.
I mean, he at the very least, he called Max, quote,
a disgusting product, unquote, on his telegram.
So that's, you know, not going to make you friends in Russia.
But, yeah, as to what's going to happen to him, I mean, he may end up in the, you know,
like severe penal colony alongside the Group-IB guy, or he may end up, you know,
working for the Russian establishment, you know, in the cyber world.
Or he might just be sent to the front.
Like, you just don't know in Russia, you know, what's going to happen and the lack of
kind of transparency.
And maybe if you speak Russian, there is more transparency to be found
there. Like, you know, obviously we see these things through Google Translate, et cetera.
But yeah, who knows what's going to happen to the guy?
But it doesn't feel like a good time to be someone doing, you know,
maybe even good faith security research in Russia, you know?
No, unless you're working for one of the companies that the state smiles on, like positive, right?
Like, if you're working there, you should be okay.
But, yeah, crazy stuff.
Just don't say mean things about Max.
No, that's right.
Don't say mean things about the Russian state generally.
You know, Max, the military, Vladimir Putin, you know what I mean?
you just got to know what you can say and what you can not say.
We got something I just wanted to mention this week.
You found this one.
This is a piece from Tim Starks over at CyberScoop.
A bunch of cybersecurity professionals have gotten together
and signed a letter asking everyone to end so-called hack law, right?
L-O-R-E.
So this is the idea that we keep putting out this advice,
well, not we as in us, but, you know,
the cybersecurity advice du jour is still like,
don't use public Wi-Fi,
and juice jacking is a big threat and whatever.
So they've got together and signed a letter saying, please, God, let's stop this.
And also put out a list of recommended advice which is actually sensible, which is stuff like, you know, you should patch your stuff and use a password manager and multifactor authentication.
So I think this is a really good idea.
A whole bunch of sort of senior cybersecurity executives have signed on to this.
Bob Lord has something to do with this as well.
And, you know, it just seems like a great idea.
And it's a very handy resource that you can actually point people towards, because, you know,
You know, you're like me, I imagine in that people frequently ask you, like, what can I do to be more secure?
And, you know, this is just something you can point those people towards.
And I hope they keep building this out.
Yeah, and I think this is a great project.
And, you know, I thought I wanted to put it in the run sheet this week because, you know, so many people, so many listeners are, you know, going home for, you know, Thanksgiving in the US or, you know, festive season coming up.
You know, you're going to be giving family IT advice, and having something to point people to when they're like, you know, should I get a VPN?
NordVPN sounds really good on YouTube.
You can point to something that says,
actually, you know, a VPN doesn't do anything.
Just use a password manager,
use multifactor, you know, that would be a good start.
So it's nice to have something to point to
and something that they can kind of go and read on their own time
so that you don't have to, you know, fix their internet
whilst you're at home, you know,
try to eat Thanksgiving turkey or whatever.
Yeah, it's funny, man.
Like, I hate listening to a podcast where the podcast is good
and the host is saying smart things and it's really cool.
And the next thing, they're reading an advert.
for NordVPN, you know.
Just kill me.
It's so bad.
Like, I never wanted to do that.
Like, people are like, oh, why is your sponsorship model the way it is?
Like, why do you think?
It's like, we do not want to do that.
We're never going to do that.
Never.
And, okay, so we got our comedy story for the end of the show.
This is, this had me dying.
Dying.
The International Association of Cryptologic Research,
Adam, ran a very cryptographically
secure, like, election for some position within the organization.
They had to abandon the election because, drum roll please.
One of the people involved in this process lost their key material.
So they were supposed to have three people with a third of the key material each
and then by their powers combined,
they could get the results back in a cryptographically good way.
And yes, one of the three lost his key mat.
And so they've had to annul the whole thing
and they're going to have to rerun a whole other election.
Which, I mean, it's a beautiful thing.
It's a thing of joy to see that, you know, everybody struggles with, you know,
the real hard part of crypto, which is not the algorithms.
It's not the, like, key exchange primitives.
It's not all of those things.
It's where do we put the damn key?
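The arrangement described, three trustees who must all combine their shares, can be sketched with a simple XOR split. To be clear, this is a hedged illustration of the all-shares-required idea, not the IACR election system's actual cryptography:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int = 3) -> list[bytes]:
    # n-1 uniformly random shares, plus one share chosen so that the
    # XOR of all n shares equals the original key.
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    final = key
    for s in shares:
        final = xor_bytes(final, s)
    return shares + [final]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```

With all three shares, combine() recovers the key; with any one share missing, the remaining two are statistically independent of the key, which is exactly why one lost share sank the election. A threshold scheme like 2-of-3 Shamir sharing would have survived the loss.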
Yeah.
Where's my YubiKey?
I thought I left it over here.
I thought it was in this drawer.
It's basically that, but this is the International Association of Cryptologic Research.
It's just so good.
And the person who lost their key mat has actually resigned their position,
which I think is really funny.
It's like you get drummed out of those circles for losing your key, right?
Yeah.
The guy actually is, it's Moti Yung.
He, I actually have a book by him on my shelf.
You know, there's amazing research on early cryptovirology and stuff.
So like, he's a legit guy, but it can happen to anyone, right?
Dog eats your YubiKey.
What are you going to do, you know?
Yeah, yeah, exactly.
Well, mate, that is actually it for the week's news.
Big thanks for that.
It's great to chat to you, as always.
We'll do it all again next week.
Yeah, we certainly will, Pat.
I'll see you then.
That was Adam Boileau there with a check of the week's security news.
Big thanks to him for that.
It is time for this week's sponsor interview now with industry legend, H.D. Moore.
He, of course, created MetaSploit a million years ago.
But these days he runs Run Zero.
And Run Zero is a, I guess, asset discovery platform,
which can also measure like risk exposures as well, right?
So you can point it at your organization.
And you can know, hey, wow, we've got some really risky stuff hanging out on the perimeter here.
There's a whole bunch of stuff happening internally that we didn't know about.
Like, what's this network over here?
It's a fantastic tool.
It works both as a network scanner and as a data cruncher.
You can feed it data from other tools.
You can give it API access into your cloud environment.
It's extremely, extremely cool stuff.
Now, one of the things HD and the team at Run Zero have been playing with lately is something that we spoke about on the show in a sponsored segment with SpecterOps, who make Bloodhound.
They introduced the ability to take their attack path mapping technology,
and they've made it more open so you can start creating your own extensions, I guess, to Bloodhound.
So it's not just looking at Windows credentials and things like that,
and finding attack paths through Active Directory.
So, yeah, HD joined me to talk about what they were doing,
what they've been doing with Bloodhound's OpenGraph.
And I also quizzed him on what they're doing around AI.
Because Run Zero, being a primary source of data,
is a fantastic tool to start throwing some AI at
and it makes a lot of sense for them, for example,
to have an MCP server for other products
and tools to use and agents to use.
So yeah, that's basically the interview with HD Moore
that I did last week or the week before. Please enjoy.
So in Run Zero, we've got graphs in the product already.
We do like layer two topology, layer three bridges, segmentation graphs,
uh, route path tracing.
But we're curious, like, what would happen if we extended this to more than just network assets?
And so the obvious choice to play this out was to go with Bloodhound and say, let's go build an OpenGraph connector for Run Zero to bring Run Zero data into Bloodhound itself.
And then let's try overlapping the nodes within Run Zero with the nodes that are in the Bloodhound Active Directory and kind of figure out can you like chain things together.
So the hard part of this is like, you know, oftentimes when you see a graph, it's what they call an executive distraction machine.
It actually doesn't do anything useful.
It doesn't tell anything new.
And so I was really kind of hitting my head against the wall trying to figure out what is the value of a graph when you can get the same data with just a linear query, right?
Like, what does the graph tell you that a linear query doesn't?
And so where we landed is anytime there is a relationship between two assets that define the security relationship, that's really important for a graph and not something you can easily do outside of a graph itself.
So, you know, when you think about it from the network side, one example we came up with that I thought really hit the nail on the head is: is there any network segment that has both an iPhone in it and a Cisco router with the
default SNMP configuration? Like, by definition, you don't need to know which network is BYOD.
You find them by the presence of a BYOD device. And in the same segment, is there a misconfigured
core infrastructure device exposed directly to it? And there you go. You've got, you know, either
a wireless guest segment or whatnot. Without having to know anything at all about which ones are wireless
or how it's configured, you've found, like, a pretty significant configuration flaw just by looking
for any connectivity between a BYOD device and an insecure infrastructure device.
But, I mean, do you need a graph to do that sort of query? I mean, I would have thought that you could
just sort of query a Run Zero, like, inventory data set and find that anyway, right?
Sure, you could do it. So if you happen to start off with, here's my wireless network,
now let's go find all the insecure infrastructure, that would be great. I mean, that would get
to the same answer. The question is, did you know which of your networks were guest networks
to start with? So if you're going into a network with zero knowledge and the only thing you can
really do to tell you what's what is to look at the relationship between two nodes, that's one way
to get there. And that's where I really found it to be useful. It's like, did I find any segment of my
network that has an AS400 and also has a Windows XP machine? Do we have any network that has,
you know, an HMI, but also has a consumer device in the same segment, like a Hikvision IP camera
or whatnot? So it's those types of relationships that I think are really interesting
that you can do with the graph that are really difficult to do outside of it. How fancy did you go
with it, right? Because these seem like fairly basic use cases. Did you go through the whole
step-by-step attack path kind of thing with OpenGraph? Yeah, we took a different approach
of, like, we open source everything.
So all the code that effectively takes an export from Run Zero,
whether it's the free version or paid or whatnot,
and then you run this Go code on it.
It produces an OpenGraph file,
which is nodes and edges and all the information linking together.
You import that into Bloodhound,
either the free version or your paid enterprise version,
and then you can do your Cypher queries
to show how it all chains together.
So I look at this as a way to like prototype stuff
that we want to build and run zero natively.
Let's go play with it within Bloodhound first
and then take the parts back
that make the most sense into the product,
just like Bloodhound and SpecterOps is doing the same thing with OpenGraph and their enterprise product.
So it's kind of a neat kind of community prototype land that would be OpenGraph.
And all you really have to do is put together nodes and edges and lay them out a certain way,
and then you'll be able to create a query to check the relationships.
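As a rough sketch of the nodes-and-edges shape being described, and the kind of relationship query you would run over it; the field names here are illustrative rather than the exact BloodHound OpenGraph schema, and the assets are invented:

```python
# A toy graph in the spirit of an OpenGraph export: nodes with kinds and
# properties, edges linking them. (Illustrative field names, invented assets.)
graph = {
    "nodes": [
        {"id": "asset-1", "kinds": ["NetworkAsset", "Mobile"],
         "properties": {"name": "iPhone", "segment": "10.1.2.0/24"}},
        {"id": "asset-2", "kinds": ["NetworkAsset", "Infrastructure"],
         "properties": {"name": "Cisco router", "segment": "10.1.2.0/24",
                        "default_snmp": True}},
    ],
    "edges": [
        {"start": "asset-1", "end": "asset-2", "kind": "SameSegment"},
    ],
}

def risky_pairs(g):
    # The BYOD example as a graph walk: a mobile device sharing a segment
    # edge with infrastructure that still has its default SNMP config.
    nodes = {n["id"]: n for n in g["nodes"]}
    for e in g["edges"]:
        a, b = nodes[e["start"]], nodes[e["end"]]
        if "Mobile" in a["kinds"] and b["properties"].get("default_snmp"):
            yield a["properties"]["name"], b["properties"]["name"]
```

In Bloodhound proper this would be a Cypher query over the imported nodes and edges rather than a Python loop, but the relationship being checked is the same.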
Something similar that we do within Run Zero today is we look for, you know, every TLS fingerprint
behind the firewall and then we look at the whole internet and see, did we see the same fingerprint someplace else?
So there's no vulnerability unless you have a match in both places.
And so it's a similar thing with Open Graph.
Like vulnerabilities only exist because of the relationship between two devices in a certain way
and a certain connectivity between them.
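The cross-referencing check being described reduces to a set intersection; a toy sketch with made-up fingerprint values:

```python
# Reused-key detection as described: a TLS fingerprint is only a finding
# when it shows up both inside the firewall and on the public internet.
# (These fingerprint strings are made up for illustration.)
internal_fps = {"fp-aa11", "fp-bb22", "fp-cc33"}
internet_fps = {"fp-bb22", "fp-dd44"}

reused = internal_fps & internet_fps   # present in both data sets
print(sorted(reused))
```

Only the fingerprint seen in both sets is worth flagging; everything internal-only is just normal self-signed clutter.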
The thing that we really tried to highlight in Run Zero is what do you not find through
other visibility tools?
Like what are the things that we can identify about network segmentation, multi-home connectivity,
inter-asset and credential relationships, that you're not going to see someplace else.
We felt like, you know, playing with Bloodhound is a really great way to show that stuff off.
And then we'd love to, you know, go the other direction and import Bloodhound data into Run Zero.
So now you're overlaying your Active Directory with your Run Zero stuff, and vice versa.
So we already import your LDAP and AD and your Entra ID today.
So we have some of that data today.
But I think we're going to meet in the middle someplace.
We feel like we'll have a great way to make Bloodhound users' lives much easier
and also a way to make our users' lives much easier by pulling in Bloodhound data.
This is just a great kind of community pool to play in, in the meantime.
Now, look, I do actually want to talk to you about AI a little, right?
Because I feel like there are certain services and products that are kind of at risk from AI more than others.
I feel like yours less so, actually, because an AI agent just isn't going to immediately be able to do what Run Zero does.
I can see, though, that Run Zero would be a tremendously useful product for agentic platforms to use.
I just wondered how you're thinking about all of that, right?
Like, are you trying to set Run Zero up as something that's very AI-friendly?
Are you trying to build, like, query builders for your product as well?
So instead of people having to actually query at using some sort of, you know, structured query language,
you can just get them to use natural language.
Like, where are you at in the whole AI thing?
Because I haven't seen much from you guys at the moment.
And I'm guessing there's a reason.
Yeah, we're pretty quiet about it.
We use a bunch of AI stuff right now to help us identify vulnerabilities before they make the news.
So before you cover it in, you know, Risky Biz News, before it shows up in Bleeping Computer,
we have stuff scraping all the socials, looking at release notes, flagging stuff to us
so that we can help our customers get ahead of it really quickly.
And that's a friend of the show; we've been working with him and his company to do that.
I don't want to name drop them here, but that's all AI-based.
We also use some of the AI stuff to help us find more references to enrichment.
And then we have an MCP server in Run Zero, so you can hook it up to Claude and do whatever you want to do with Run Zero, including running new scans, pulling data, cross-referencing stuff, all directly through your tool of choice via MCP.
What we haven't done so far.
Do you have some people operating AI platforms who are already using your MCP server?
Like, because I'd imagine that would be very popular, right?
Yeah, a lot of folks will plug us into either their Tines workflow through MCP,
or pull us into, you know, nearly anything else.
Claude for reporting; we've got other folks who plug directly into Power BI
and then hook that into whatever Microsoft's Copilot-type thing is.
But there's lots of ways.
Yeah, I mean, it's just funny, right?
Because everyone's shipping an MCP server and like Run Zero is the only one that I can think
of where I'm like, no, it actually definitely makes sense for them to have an MCP server, right?
Where it's not always clear.
People are just, you know, shipping them anyway.
but yours actually makes sense.
Yeah, we hope to like not just give you data,
but help you take actual actions,
like actually do things in the product from the MCP.
That's the part that we've been shipping incrementally here.
So we're going to build natural language search.
That's, you know, an easy thing.
It's kind of been in the works for a while.
Where we've really been thinking more about the AI side
is, like, you know, we're hard to replace there,
because we provide data you don't get from anything else.
You can't synthesize your way to, you know,
the data we provide about network assets or some of the core...
No, you can't infer your way
to the sort of information that you're capturing.
Like, that's not how it works.
Yeah, the kind of goal for Run Zero since day one is we need to be a primary source of data.
We're not just going to aggregate stuff.
We need to be telling you things you didn't know before.
Otherwise, what's the point?
We don't want to just sell your data back to you.
So going forward with AI, we feel like there's some really cool things we can be doing.
Like, we already have some things that are not AI-based but do something very similar, like
autonomous network discovery. We have automatic asset discovery.
We'll scrape your entire private RFC 1918 space, put it all together and show you
the big, fancy map.
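For reference, the private space he's talking about is defined in RFC 1918; a quick sketch of what enumerating it looks like (illustrative only, not runZero's actual implementation):

```python
import ipaddress

# The three private address blocks defined in RFC 1918.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def rfc1918_subnets(prefix=24):
    """Break each RFC 1918 block into /24s -- the granularity a
    scanner might walk when sweeping the whole private space."""
    for block in RFC1918:
        yield from block.subnets(new_prefix=prefix)

total = sum(1 for _ in rfc1918_subnets())
print(total)  # 65536 + 4096 + 256 = 69888 candidate /24 subnets
```

Nearly 70,000 /24s is why brute-force sweeping is expensive, and why prioritising which ranges to probe next is an attractive problem.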
We're building some more AI support around that so that it's predictive
about which subnets and which octets to go after next.
It's basically using some internal learning to figure out, you know,
how can I do the same thing more efficiently, and then highlight and continuously scan the things that are more at risk.
That would be a good example of that.
Like, let's increase the frequency of testing for the things that are most likely to be exposed
and let's hold off from things that just don't change very much over time.
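The scheduling idea he's describing could be sketched like this (a toy illustration, assuming a simple change-rate score and an exposure flag; nothing here reflects runZero's actual internals):

```python
from dataclasses import dataclass

@dataclass
class Subnet:
    cidr: str
    change_rate: float  # fraction of assets that changed between the last two scans
    exposed: bool       # e.g. internet-reachable services observed here

BASE_INTERVAL_HOURS = 24

def scan_interval(subnet: Subnet) -> float:
    """Scan volatile or exposed subnets more often, quiet ones less."""
    interval = BASE_INTERVAL_HOURS
    if subnet.exposed:
        interval /= 4          # exposed networks get 4x the frequency
    if subnet.change_rate > 0.2:
        interval /= 2          # churny networks get 2x on top of that
    elif subnet.change_rate < 0.01:
        interval *= 7          # near-static networks drop to weekly
    return interval

# Order the scan queue by interval: riskiest, churniest subnets first.
queue = sorted(
    [Subnet("10.1.0.0/24", 0.30, True),
     Subnet("10.2.0.0/24", 0.00, False),
     Subnet("192.168.5.0/24", 0.05, False)],
    key=scan_interval,
)
for s in queue:
    print(s.cidr, scan_interval(s))
```

In practice the "internal learning" he mentions would replace the hard-coded thresholds, but the shape of the output is the same: a per-subnet cadence rather than one flat sweep.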
So that's another area we're looking at for building AI into.
The challenge, of course, is we ship a self-hosted product that runs in a SCIF.
And we also ship a SaaS product that runs in Amazon, and everything in between, right?
So whatever we build has to work just fine everywhere.
And so, you know, for the AI stuff, we need the ability for the customer to plug in their own local LLM or just turn it off entirely.
And for the cloud side, we need to make sure their data never leaves that particular region, country, et cetera.
So we're a little bit hamstrung in that we're not willing to just throw all our customer data into Anthropic.
And that kind of sets us apart from many other people in the space.
Like, we actually care about where your customer data goes and where it ends up.
And that means we take a little bit of a slower path to get there.
Yeah, it is.
There's a fair bit of YOLO out there, which I'm sure there's going to be some interesting
headlines for us to cover in the news over the next few years because of that.
One thing we should just touch on before we go, like you've done this big kind of pivot,
really, to being a vulnerability scanning platform, right?
Like a risk exposure platform.
You're currently doing a fair bit of work on the front end, right, to sort of reflect
that change.
Yeah, a while back we redid the product UX so it was really designed to help you bring data in
from as many places as you can, like your passive network discovery, active scans, your connections,
and it really kind of walked you through that process, gave you the dashboard, told you how
all that worked. We've since then been adding a ton of features around vulnerability detection,
exposure management, vulnerability inference, rapid response, risk dashboards, but none of that is like
turnkey today. You have to kind of know what to do to set it up. So we're really looking at doing
a refresh where, as you log in, you're getting a list of, like: here's my external attack surface
management, here's the things I want to plug into it, I want to watch that stuff, you know.
Ask the customer what EDR they use, make sure their security tools are properly
represented everywhere, really kind of learn from the customer what goals they have as we do the onboarding,
and then be able to show them, you know, that really quick hit list of here's what's going according to plan,
here's unexpected new stuff that's come up and here's what's falling behind.
All right, H.D. Moore. Thank you so much for joining us for a chat. It's always a lot of fun.
My pleasure. Thanks, Patrick.
That was H.D. Moore from runZero there. Big thanks to him for that. And I've dropped a couple of links
into this week's show notes, both to the BloodHound OpenGraph stuff and what they're doing
around AI, so you can click through and have a bit of a read if that interests you.
But that is it for this week's show.
I do hope you enjoyed it.
I'll be back next week with more security news and analysis.
But until then, I've been Patrick Gray.
Thanks for listening.
