Risky Business - Risky Business #831 -- The AI bugpocalypse begins
Episode Date: April 1, 2026

On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They cover:

Those pesky North Koreans shim a backdoor into a 100M-downloads-a-week npm package
TeamPCP appear to have ransacked Cisco's source and cloud environments
AI is getting legitimately good at being told to "just go find some 0day in this"
Kaspersky says Coruna and Triangulation do share code lineage
Iranian hackers dump Kash Patel's gmail spool
Oh, and of course there's a Citrix Netscaler memory leak being exploited in the wild

This week's episode is sponsored by Dropzone AI, who make automated AI SOC analysts. Head honcho Ed Wu explains how they've built pre-canned 'hunt packs' to lead the AI off into your environment to find weird, interesting and security relevant things.

Show notes

Google links axios supply chain attack to North Korean group | The Record from Recorded Future News
Cisco source code stolen in Trivy-linked dev environment breach
chiefofautism on X: "someone at ANTHROPIC just showed CLAUDE finding ZERO DAY vulnerabilities in a live conference demo"
h0mbre on X: "Claude is somehow better at kernel exploitation than creating meal plans."
Vulnerability Research Is Cooked — Quarrelsome
MAD Bugs: vim vs emacs vs Claude - Calif
MAD Bugs: Claude Wrote a Full FreeBSD Remote Kernel RCE with Root Shell (CVE-2026-4747)
A Risky Biz Experiment: Hunting for iOS 0day with AI - Risky Business Media
Security leaders say the next two years are going to be 'insane' | CyberScoop
Coruna framework: an exploit kit and ties to Operation Triangulation | Securelist
Apple says no one using Lockdown Mode has been hacked with spyware | TechCrunch
Reverse engineering Apple's silent security fixes - Calif
Jury finds Meta's platforms are harmful to children in 1st wave of social media addiction lawsuits | PBS News
Meta and YouTube found liable in social media addiction trial
Iranian hackers publish emails allegedly stolen from Kash Patel
'Legitimate targets': Iran issues warning to US tech firms including Google, Amazon, Microsoft, Nvidia - The Times of India
Drop Site on X: "IRGC: From now on, for every assassination, an American company will be destroyed"
OSINTtechnical on X: "Starlink shutdowns are forcing Russian troops even deeper into Ubiquiti's ecosystem."
State Department reissues $10 million reward for info on Iranian hackers | The Record from Recorded Future News
National Cyber Authority: 50 Israeli companies 'digitally erased' | Israel National News
Stryker restores most manufacturing after cyberattack | Cybersecurity Dive
Citrix NetScaler products confirmed to be under exploitation | Cybersecurity Dive
CISA tells federal agencies to patch Citrix NetScaler bug by Thursday | The Record from Recorded Future News
Using a VPN May Subject You to NSA Spying | WIRED
Post reporters called the White House. Their phones showed 'Epstein Island.' - The Washington Post
Transcript
Hey everyone and welcome to another episode of Risky Business.
My name is Patrick Gray.
This week's show is brought to you by Dropzone,
which makes an AI SOC platform.
And Dropzone's founder Ed Wu will be along in this week's sponsor interview
to talk about why they have launched a whole bunch of pre-canned AI threat hunts
and the logic behind them.
It's actually a really interesting interview.
Ed is a very, very smart guy, as regular listeners would know.
And really, they started developing their pre-canned
threat hunts from the premise of, like, what would we do if we had unlimited man-hours to throw at a query, for example, right? And they sort of worked backwards from there, and they found some really interesting stuff. So that is this week's sponsor interview, with Ed Wu from Dropzone, coming up after this week's news segment with Adam Boileau and Mr James Wilson, which starts now. And you know, I'm so happy, I've got an extra spring in my step this week, basically because chaos. Uh, because
so much chaos, between AI finding, like, 0day and everything, and supply chains getting torn apart.
I'm just, I'm a happy guy.
I just, you know, it reminds me of the old days.
It really does.
It does.
Yeah, yeah, super messy.
So Adam, let's start with you.
I mean, we've got a supply chain attack against something called Axios, which is apparently
used everywhere by everything.
and this has now been linked to a North Korean group.
This is a huge big deal.
Also feels a little bit like the dog who caught the car,
but walk us through the rough shape of this story,
if you would, please, sir.
Yeah, so Axios is a JavaScript, like a wrapper
around the HTTP libraries that you would use
if you want to retrieve content.
Normally, in a browser,
there's kind of the AJAX-y kind of APIs
that people use to retrieve external web content.
There's an equivalent thing for, like,
server-side JavaScript. Axios was a framework that kind of let you use the same APIs in both
server-side and client-side JavaScript. It's wildly popular, something like 100 million downloads a
week. And so, yeah, turns out some North Koreans managed to get a Trojan version of it into the
NPM repository for not particularly long, like a few hours, but when you're talking 100 million
downloads a week, that's still a lot of people. And the Trojan version was dropping like
full-on backdoor, cred-stealers, the whole shebang,
exactly as you would expect.
And of course, the JavaScript ecosystem has been going crazy lately
with all of the team PCP attacks.
And so we initially assumed, like,
this is probably the same kind of thing.
But no, it's the North Koreans reminding us that, you know,
they're still out there, they're still around.
Presumably they're going to be going after cryptocurrency stuff.
But, I mean, who even knows anymore?
They might have some other plans.
I mean, I think from now on when we talk about North Koreans,
we should always, I'm sorry, refer to them as
pesky. Those pesky North Koreans
at it again. It just feels like the
right word. Yeah, James,
initially you were thinking this was probably team
PCP as well.
And then, yeah, I mean, you seemed
a little bit disappointed that it wasn't.
Yeah, I would have
lost a fair bit on Polymarket if I was
that way inclined on this one, I think.
It just felt like it was them. It felt like a
continuation of what they were doing. But
delving into some of the details on
here that I found interesting, for example,
yes, they published malicious versions of Axios, and they did both the latest and a legacy version to get
maximum coverage. But the way they did it is kind of cool. They created a separate package
called Plain CryptoJS and then published that so that there were some legitimate versions out there.
And then what they did to Axios was, they basically injected this Plain CryptoJS
as a new dependency. And so this sort of fooled a lot of detection and scanners that went,
well, this is just a new dependency in Axios, and that other new dependency, well, it's got
publishing history, so she'll be okay. Still caught quickly, but just interesting to see them going
that extra little step. They could have just compromised Axios, but a little bit of extra care.
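The trick James describes, planting a benign-looking package and then adding it as a fresh dependency of the trojaned release, shows up in registry metadata as a plain dependency diff between versions. A minimal sketch of the kind of check a scanner could run; the package names and versions below are illustrative, not the actual malicious ones:

```javascript
// Sketch: flag dependencies present in a new release but absent from the
// previous one. A trojaned release that pulls in a freshly planted package
// (the "Plain CryptoJS" trick described above) surfaces as exactly this diff.
// All names and versions below are hypothetical examples.
function newDependencies(prevPkg, nextPkg) {
  const prev = new Set(Object.keys(prevPkg.dependencies || {}));
  return Object.keys(nextPkg.dependencies || {}).filter((d) => !prev.has(d));
}

const previous = {
  name: "axios",
  version: "1.7.0",
  dependencies: { "follow-redirects": "^1.15.0" },
};
const latest = {
  name: "axios",
  version: "1.7.1",
  dependencies: { "follow-redirects": "^1.15.0", "plain-cryptojs": "^2.0.1" },
};

console.log(newDependencies(previous, latest)); // ["plain-cryptojs"]
```

As James notes, the planted package had its own publishing history, so a diff like this isn't a verdict on its own; it's just the point where a human or a scanner should start asking questions.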
The really interesting question that's not answered is how did they get the credential
for the maintainer of Axios? He even said in the GitHub issue himself, he said, I'm trying to get
support to understand how this even happened. I've got 2FA and MFA on practically everything,
yet still this cred got out. Sounds like a, you know, browser, some sort of browser token to me, right?
Could be, could be. But I mean, the last sort of sad trombone on this is, the poor guy did have
the OIDC trusted publishing model configured for Axios, which is what npm wants you to do these days,
um, to prevent, to a degree, some of these sorts of attacks. But he'd set it up such that the
publishing environment was basically saying, if you've got an OIDC token, use it, or if there's an npm token,
use it, and if there's an npm token, prefer that. And so it was like, he'd tried to do the right thing,
but it just wasn't complete. Yeah, yeah.
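The misconfiguration Adam describes boils down to a fallback ordering. A tiny sketch of the decision logic, hypothetical and not npm's actual implementation, with illustrative function names:

```javascript
// Hypothetical sketch of the publishing-auth decision described above -- not
// npm's actual code. The point: preferring a long-lived static npm token over
// short-lived OIDC credentials keeps the stolen-token attack path alive.
function pickAuth(oidcToken, npmToken) {
  // The incomplete setup: if a (possibly stolen) npm token exists, it wins.
  if (npmToken) return { method: "npm-token", token: npmToken };
  if (oidcToken) return { method: "oidc", token: oidcToken };
  return null;
}

function pickAuthStrict(oidcToken) {
  // The complete version: OIDC only, issued per publish by the CI identity,
  // so a leaked static token is useless for pushing a trojaned release.
  return oidcToken ? { method: "oidc", token: oidcToken } : null;
}
```

Under the first policy, whoever stole the maintainer's npm token could still publish; under the second, they couldn't, which is the "it just wasn't complete" part.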
You just made me think of something funny, too, in terms of, like, funny names for JavaScript packages where the Dark Sword Exploit leak that people have been talking about, this is the other iOS exploit chain.
Like, I listened to last week's edition of Between Two Nerds, and I think it was, yeah, the Grugq and Tom pointing out that, I think, what was it, one component of that whole chain was called RCELoader.js, which is, like, way to be stealthy, guys.
Like, you know, iOS, super secret iOS exploit chain.
Dot JS. Yeah.
Did we obfuscate the code in the RCELoader.js? Yeah.
Code's totally obfuscated.
Job's done, job's done. Yeah. Now, look, I mean,
PCP, Team PCP,
may not have been behind this Axios
supply chain compromise, but they had other stuff
that they were doing, James, and by the looks of it,
um, that involved tearing through Cisco.
They have managed to clone something like 300 repos.
They've racked off with a whole bunch of AWS tokens as well. Like, it
looks like Cisco is having a real bad
time. Adam's comment earlier was, like, gee, Brad Arkin, you know, he co-hosts a show
with you, James, on Risky Business Features. Gee, he must feel he's lucky that he's not there
anymore as the CSO, but honestly, knowing Brad as I do, he's like what I would call a combat
CSO who enjoys stuff like this because he's sick. He's sick. But yeah, walk us, walk us through
what we know is happening actually at, at Cisco at the moment. Yeah, it's a bad day and I don't
think we've seen even close to what the full impact of this will be.
So this is fallout from the Trivy supply chain attack, which was Team PCP.
And I think as we said last week, it's like these guys are now sitting on a massive
trove like tens of thousands or hundreds of thousands of credentials that they managed to get out
of both the Trivy and the Checkmarx supply chain exploits.
And so obviously they've done a grep for some interesting names in there, found that Cisco
was in there and have just gone absolutely to town.
Like, as you said, 300 GitHub repos.
There's AWS credentials in there.
And, you know, when you hear of Cisco vulnerabilities and stuff, it sounds exciting initially
and you hear that it's in something like, you know, Cisco secure firewall and you think,
okay, well, if you've bought a product that's called Cisco Secure Firewall, you kind of deserve it.
But this feels like it's literally lock, stock, the lot, the kitchen sink, and everything out of
Cisco.
So, you know, next wave of supply chain attacks, here we come, I think.
Well, yeah.
I mean, Adam, what do you expect might shake out of this?
Because I'm at a loss, you know, because it's, it's.
It's so weird that this is like the second thing that we're talking about this week, right?
And then like later on, like halfway through the run sheet.
It's the FBI director's personal email spool got dumped on the internet by Iran.
And it's like, yeah, okay, like we've got other stuff that we need to talk about first, right?
Like that's this week.
But, you know, what's your take on the seriousness of this?
What could the implications be for Cisco and its customers?
I mean, it's kind of hard to say, because we haven't got concrete details about, like, what products,
what repositories, what kinds of environments
have credentials been taken for.
Cisco is a big place, and there's so many acquisitions over the years.
I've spent plenty of time plonking around inside Cisco products,
and you can see the acquisition histories in the products in some cases.
There's other companies' logos still in there
from when they were shopping it around to sell the different vendors.
So you don't know exactly how bad it's going to be,
but Cisco gear, you know, across their product range is everywhere,
and anything that exposes the gubbins of that
is going to result in bugs shaking out,
because, you know, a lot of this stuff is old, right?
And Cisco does not do great quality assurance until they are forced to, I don't think.
So whether it's getting a whole bunch of code, or getting a bunch of creds leading to intrusions
into test environments and dev environments, leading to, you know, being able to research and find bugs,
like, the fallout is going to be, I think, pretty interesting.
But, you know, we haven't really seen the actual data itself.
So it's hard to say, except probably bad and as you said, chaos.
Mm-mm.
Will this disaster spread to Cisco and its customers?
Stay tuned and find out next week
and another edition of the Risky Business Podcast
now in its 20th season.
Yeah, it's kind of that way.
Now, the AI apocalypse is upon us
the disruptive period
that we have been predicting on this show for some time, right?
Like the theory is,
AI is going to help people find a lot of exploits,
a lot of vulnerabilities, right?
And that's going to be chaotic.
At the same time,
you know, AI models are going to help people who maintain software to find similar sorts of issues and patch them, right?
But in the meantime, there's going to be this period of massive disruption.
That period of massive disruption appeared to have started over the last seven days, guys.
Let's start with you, James, on this.
We have a presentation from someone who worked at Anthropic.
They got Claude to find 0day in Ghost, which, funnily enough, is our newsletter platform.
The Risky Bulletin and Seriously Risky Business newsletters
are all published via Ghost.
It was a blind SQLi that, like, no one else had found before,
a pretty serious bug.
They also found like a kernel bug or something,
and the whole talk is apparently just fascinating.
You've been through this.
I mean, you know, it's, I mean, this is incredible, right?
The dawn of a new era.
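For anyone unfamiliar with the bug class: a "blind" SQL injection like the Ghost finding usually starts with a string-built query, with the attacker inferring answers from timing or behaviour rather than visible output. A minimal illustrative sketch, not Ghost's actual code or the actual Claude finding:

```javascript
// Illustrative only -- not Ghost's actual code. A SQL injection typically
// starts with a query assembled by string concatenation:
function buildQueryUnsafe(slug) {
  return `SELECT id FROM posts WHERE slug = '${slug}'`;
}

// Attacker-controlled input rides straight into the SQL text. In a *blind*
// SQLi there's no visible query output, so a probe like this one leaks
// information one bit at a time via response timing instead:
const probe = buildQueryUnsafe("x' OR SLEEP(5)-- ");

// The standard fix: parameterized queries keep data out of the SQL text,
// so the payload can never be parsed as SQL.
function buildQuerySafe(slug) {
  return { text: "SELECT id FROM posts WHERE slug = $1", values: [slug] };
}
```

The fix pattern is also why this kind of bug reads as simple once found; the hard part is that somebody has to go looking for it in the first place.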
Yeah, truly.
And you know, the thing that is most startling about this
is how ridiculously simple the prompt was.
You know, in the case of finding the Ghost vulnerability,
it was literally just, hey, I'm at a CTF and I've got to find a vulnerability in this codebase,
what you got? It's like, is that it?
We should say, too, that the person who was doing the research actually works for Anthropic,
so presumably some of the guardrails were not present for them.
Well, that's the other thing, Pat, because it was interesting that the prompt was so simple,
and I was taken aback by that. But then, you know, this morning after we'd had our sort of run-through
meeting, I was working away on Claude trying to replicate this. And I can easily
replicate the finding of the bug. But when I try to, when I tell Claude, okay, that's great,
now I want an exploit, it is adamant that it won't create one. Interestingly, though, it'll create
what it calls a safe exploit, basically an exploit that demonstrates proof of concept, doesn't
exfiltrate anything. But I could get it, with a bit of, you know, questioning, to say, well, okay, but
in the write-up of this exploit, or in the write-up of this vulnerability, I need to be able to
explain how that safe exploit could be made into a dangerous one so that the severity is well
understood. And it's pretty clear that Claude has a hard guardrail on creating malicious code.
At least that's what I run into, but it's happy to talk about how to do it till the cows come
home. And so it's just so much knowledge in there. Yeah, now we'll talk about a podcast that you did
where you actually had a conversation with Claude and got it to help you do some
vulnerability research into WebKit. We'll talk about that in a minute. But Adam, I want to get your
opinion on this, because we've got a couple of things here to talk about. So first of all, there's this
talk that the Anthropic guy gave on doing vulndev.
And also we've got a blog post here from Thomas Ptacek talking about how, hey,
vulndev is for the clankers now, basically.
Like, if you enjoy doing this sort of work, like, okay, that's great,
but it's kind of irrelevant to the fact that you are about to be replaced.
It certainly feels like AI is extremely effective at doing this kind of work much more so
than the naysayers have, you know,
been prepared to admit. And I mean, I know this is a big call, but it feels like we are
absolutely at the dawn of a new era. That's a big statement. I want to know what you
think of it. Yeah, I don't disagree with you. Like, it has gotten so much better over the last,
you know, three, four, five, six months, and yeah, it does feel like the state of the art
for this stuff is moving really quick. And Thomas Ptacek, you know, is very experienced
at vulndev and the industry around it, and like, his opinion, I
think, like, the blog post is just a really interesting
read. One of the things he points out is that, you know, a lot of our defense against security
vulnerabilities is predicated on, you know, really, the top-tier human exploit devs focusing on the top
tier targets, you know, browsers, core operating system kernels, things that are big money, big reward,
big kudos, and everything else a bit further down the ecosystem has really only ever got pretty
cursory attention. And now these models are as good as, you know, maybe they're not quite a Dowd
unit yet, but they are pretty damn good, and against most software, that's more than what you need.
And that's going to upend everything for defenders, because, you know, these kinds of bugs are,
in some cases, you know, 10 minutes' worth of reasoning time away for a model, instead of days for an
experienced security researcher. And, you know, some of these bugs are not super complicated. Like that
Ghost SQLi bug, like, it's not a complicated bug. Someone had to bother going to look for it, and
that's the hard bit, is clearly no one has bothered. You know, that bug, I would like
to think, when I was back at Insomnia, if we had been reviewing that, we would have
found it. Like, it's the sort of thing that, you know, we would find, you know, half a dozen of
those a week around the team. But that's at the cost of, you know, 40-whatever pretty experienced
pen testers getting paid a lot of money, versus five minutes of GPU compute,
you know.
Yeah, this is 10 bucks in tokens, right?
Yeah, 100%.
And I think, like, I hear what you're saying on the Dowdbot 3,000,
but I feel like Dowdbot 3,000 is probably closer than we think.
I mean, I agree.
Like, we're not far off being able to deploy, you know, a whole room full of Dowd units,
and that's going to be terrifying.
On the other hand, it makes me wonder what Dowd's up to these days,
because, like, I bet he's doing something wild as well.
Exactly.
So, you know, the, we, but yeah, it's just, it's moved so much in the last few months,
and I'm actually impressed,
you know,
apropos James' conversation
with a clanker about WebKit bugs,
like that the guardrails in the retail versions,
you know,
do seem pretty adamant about it.
You know,
obviously that's going to trickle down
into the clone Chinese models
over the next, you know,
six months through a year.
And the next couple of years
are just going to be wild.
Well, exactly, right?
And I don't know that I'm as reassured
by the guardrails as you are,
and as I said, we'll get to that in a moment.
I did include in this week's show notes,
a joke that I just thought was too good not to include
from Twitter user h0mbre,
which said, Claude is somehow better at kernel exploitation
than creating meal plans.
It's wild how bad it is,
and I just think that is funny because, yeah,
it is really wild how bad AI is at some stuff
and how absolutely fantastic it is at vulndev.
And speaking of vulndev,
we've had bugs pop up in Vim and, kind of, Emacs.
We'll get your thoughts on that in a moment, Adam.
And also we've got, like, a FreeBSD
Claude bug pop out as well.
Like it has just been absolutely raining bugs
thanks to Claude Code this week.
James, let's get your thoughts first on the VIM stuff.
Yeah, look, this is worthy for just another data point
of how simple it is to get the models to spit out the exploits,
but also just another beautiful short prompt.
You know, I've heard there's an RCE
zero day in this codebase, can you find it for me, please? And off it goes. But the fact that
this all began as a Twitter thread, I think it was, or a Bluesky thread, of, you know, hey, let me see
if I can find a Vim vulnerability. And someone says, oh, damn it, that's it. I'm switching to
Emacs. Dot, dot, dot. Hold on a sec. You're not going to believe this. And, you know, I think, as
Adam pointed out, the Emacs one, not as good, but just, you know, it feels like you could sit down
with any codebase at the moment, fire the simple prompt in it, and you're going to find interesting
stuff. It's wild. Yeah, and speaking of, that's exactly what you did for Risky Business Features. And what's
amazing is, like, you cooked this one up, like, I don't know, the original concept for this was like 10 days ago,
which is, hey, what if I rig up Claude via OpenClaw and text-to-speech and speech-to-text, so I can
have a conversation with it and record a podcast where I ask it to help me audit WebKit. And that's what
you've done. And what's amazing about it, I think you used like 15 bucks' worth of tokens
to do this.
Yep.
And it was, I mean, I've listened through to the whole thing.
We've published it yesterday.
That's in risky business features.
For those who haven't subscribed, you need to go and subscribe because there's so much good
stuff going in there.
I also published an interview I did with Rob Joyce, former NSA and also former CIA CCI director,
Andy Boyd.
I spoke to them about AI and whatnot.
That interview is in risky business features.
So we're putting a lot of stuff in there, very interesting stuff.
But, you know, the timing on this one is amazing because you did this whole
podcast where you interviewed Claude and got it to help you try to recreate parts of the Karuna
exploit chain in WebKit.
And it was really amazing, like the level of analysis you got out of this bot and into this
podcast was amazing.
The podcast kind of drags in some spots, but then there's like there's always this epic payoff
where the model says something incredible.
But what, you know, just what is your reflection on having been through this exercise?
was this more fruitful than you expected?
Because I was quite surprised.
Yeah, absolutely.
I actually went into this with a ton of things prepared.
I'd set up all of these little experiments I was going to do
with like a very targeted sort of thing of like,
here's this repo, here's this particular commit I want you to look at,
here's the exact thing I want you to do.
And I thought I'd just move through a set of these.
And as I was doing it, I sort of got to the point of thinking,
I think I'm just overthinking this.
And it got down to as simple as let's run up a clawed bot.
Let's make sure I can talk to it with my voice.
I can hear its voice, hit record, and let's see where this goes.
And even from just the first question I asked about, you know,
I obviously knew that first stage in the exploit kit of Karuna,
and I said to it, you know, have a look at decode audio data.
And the first response it gave me back on that was vastly enlightening.
Like I hadn't considered some of the aspects of how the decoder would work,
how different protocols would, or different audio formats would affect different vulnerabilities.
What, you know, got weird was, I kept trying to go from, tell me about the exploit,
tell me about what's possible, and it would give me an exceptionally detailed response
to the point where I think I could have taken that and written an exploit myself,
but I wanted it to write the exploit, but it just wouldn't, no matter what I tried to do.
And then it got weird.
It started talking about me and like as if I was not in the room and trying to appeal to
the podcast audience, of, you know, James is being really
mean to me here, he's trying to get me to do things. And look, the point is, the point is, like, okay,
it didn't write exploit code, but it certainly gave you, as a skilled developer who's familiar
with iOS internals, enough to, you know... there, I think, there were points where, like, you were asking
it to analyze one bug, and it's like, well, there's going to be more of these, because you've just got to
look for anything else with the same characteristics. And, like, you know, it could have really
helped you to do some stuff. And I think the point that you made towards the end of the podcast,
when you sort of did your conclusion, was absolutely right, which is: you
are a software developer with experience with this sort of stuff, and
using Claude could turn you into an exploit writer, basically, right?
But your dad, also a smart guy, works in orthopedics? Probably not.
I think is the point.
I thought that was a really interesting lesson there.
Adam, I know you've listened through to this.
I mean, what was your reaction to this?
Because, you know, something like this, an experiment like this can wind up being, frankly,
a little bit piss weak.
And I feel like this wasn't.
I thought it was really enlightening and gives us a bit of a window into what the future could look like.
Yeah, I like James' starting premise of, like, take the current exploit,
take one piece out, and get Claude to recreate the missing piece, in the same way that Anthropic had done with, like, getting Claude to make a C compiler.
Like I thought as a starting premise, that made a lot of sense.
And then, yeah, just hooking it up to, you know, text to speech and speech to text and recording a podcast is such a,
it's such a crazy idea, and yet it does kind of work.
You know, I liked it.
But, you know, the level of analysis it spits out about really complicated topics, right?
And having, you know, for a living had to read codebases like that and reason about them.
The fact that you can get such concise reasoning about it in such a short period of time, right,
compared to the days that it would take a poor human
trying to shove it into their skull and think it through.
And, you know, the being-able-to-write-the-exploit part of it,
in the end, like, having written a lot of exploits in my time,
like, I'm kind of okay with it not doing that part.
It would be nice if it did.
But, like, the level of explanation it gives you and being able to just, like,
spitball ideas about how the exploit should work with it.
Yeah, if you're already an exploit writer, that's going to get you, you know, so,
it's such a force multiplier.
And that's, it's pretty scary.
It's hella cool, but also, you know, kind of scary.
I like how, where for me it's frustrating that it won't write the exploit, for you,
you almost take it as, like, job security.
Stay out of my lane.
Something that the human can sprinkle on top.
Yeah, yeah, exactly.
The lack of ethics, yes.
But you're right about the speed too, Adam,
because it's like, you know, there's a point where James goes,
hey, go grab the repos for WebKit.
And it's just like, beep, beep, beep, done.
You're like, whoa, okay.
You know, like the whole thing is just crazy.
And look, we've got this story here from CyberScoop.
Headline is, security leaders say the next two years
are going to be, quote, unquote, insane.
And that's based on comments from Kevin Mandia,
Morgan Adamski and Alex Stamos.
Alex, of course, a regular guest on the show.
Morgan has been on the show too back when she worked at NSA,
and we did a live podcast at NSA, very smart woman.
And Kevin Mandia, of course, being Kevin Mandia,
hasn't been on the show ever, actually.
We might have to fix that at some point.
But, you know, they're predicting that, you know,
AI is going to make things pretty nuts.
And, you know, Rob Joyce said similar things,
the podcast that I did with him that's in Risky Business Features.
I think, though, the framing that the next
couple of years are going to be insane is wrong. I mean, James, this is also what you were saying,
where you were like, well, I think that might be too long a time frame. I think it's too short.
I think we're entering a really crazy period that's going to drag out for longer than people
realize. I think people will tolerate insanely bad security for longer than we can comprehend
because we've seen it happen before, because we're old. And we've been in this industry for a while.
Adam, what's your feeling there? Like, how long is this going to be crazy
for? Because I think, yeah, longer than you expect.
It's going to be crazy for a while, man.
There's a long tail of old tech and a long tail of industries that are, you know,
we like to think that all technology is as agile as say like Chrome updates
shipping out every week or Microsoft shipping a patch Tuesday.
Like there's entire industries that, you know, patching at all is still new, right?
It's still a thing that, you know, they kind of struggle with. And, you know, all of a sudden, the depth
you could previously only put into specific high-value things, being able to throw that against everything, is going to create,
you know, such a defensive-side burden for all these industries that aren't really geared up for it,
that have very long life cycles, where it's not computers, it's plant equipment that's meant
to last 20 years, you know, cranes and all sorts of stuff like that.
But it's going to be wild for a long time, I think, right?
You know who's turning into a real winner for this one?
Like, it's anyone who does those old-school fundamental controls, like allow listing.
One of the big winners, man, like, Knock Knock is getting so
much interest at the moment, because all of a sudden everybody's like, okay, I can't rely on people
just not knowing that I have an attack surface, right? Everybody knows it. Um, so yeah, it's just crazy. So
I think, I think a lot of the solution here is going to be old-school security controls, like least
privilege, access allow listing, detect and respond. I don't know if it works at AI speed, right? Like, do you
have any feelings there, James? Yeah, it's the same place I keep ending up, Pat. Like, I think I've said
it on a couple of podcasts now, where it's like, there's almost like this arc of AI security where
you go, wow, this can find a bug really fast.
Oh my God, this is going to completely change the way attacks are, you know,
launched and created.
And oh my God, the scale that this is going to create.
And then you sort of go, okay, dot, dot, dot, what incredibly cool space age technology are we going
to need to defend against this?
And you kind of go, well, privilege management.
Yeah, right?
Can't run the binary?
No problem.
Can't get to the box?
No problem.
Damn, it's the same stuff we've always done.
Now we just actually need to lift the bar, because good enough is not good enough anymore.
Yeah, I think where it gets a little bit more complicated is some of this software supply chain stuff, right?
Because of the speed that that moves already, I think, I don't see any easy solutions there.
I don't see any easy solutions anywhere, but at least I see some solutions in some places.
But yeah, look, the vibe too from RSA, James, you and I are talking about doing a podcast on this at some point.
But the vibe at RSA, from all of the founders, seems to be that nobody really knows what they should be doing,
and they're all scared of AI, and, you know, like, everything's completely up
in the air at the moment. So it's a wonderful time to be a cybersecurity podcast host. Let's put it that way.
Now, look, let's move on to our next topic. And Kaspersky has put out a really interesting blog
post that has given us a little bit more information on what's been going on between
Triangulation and Karuna, and the links between them. Now, when Karuna first surfaced, we said,
yep, that's L3 Harris, and so was Triangulation. Then the next week, I came
out and I said, well, now I'm being told that the triangulation exploit kit was not an L3 Harris
product, but there might have been bits licensed in, maybe, or shared. But Coruna definitely
is an L3 Harris product. Now we've got this post from Kaspersky, which I think lends weight to that.
Because they're saying that a couple of the bugs in triangulation are older versions of
bugs that were in Coruna, like obviously compiled from the same source. So it looks like
there's a couple of bugs in triangulation, which were also in Coruna, different versions of
them. And it made me think back to when Adam and I, you and I first discussed this triangulation
thing that was at the start of 2024. And we went back and had a listen, actually. And it seems
that, you know, even back then we were like, well, this one looks like it was cobbled together
from a bunch of different places. Like you even pointed out that it kind of shelled the device twice
before it even dropped a payload, right?
Which just seemed a little bit weird.
So I think where this has landed is, yeah,
triangulation was put together from multiple different sources,
including some stuff from L3 Harris, from Trenchant,
but it wasn't a Trenchant product.
That seems to be, I think, the rough shape of this.
Adam, is that about where you landed as well
after this latest info from Kaspersky?
Yeah, it kind of seems like that.
I mean, it's, you know, we all wondered about
whether there were directly shared components
or whether it was just the same bugs,
and this seems to confirm that, yes,
in fact, there are shared components between the two.
I mean, these types of kits, you know,
do, by their nature, have to be somewhat modular,
have to be, you know, things that you can reconfigure
into the exact kind of setup that you need
for a particular engagement.
And also because different bugs die at different times,
you want to be able to swap them out,
and vendors like L3 Harris, you know, their contracts are,
you know, we're going to sell you this capability,
and to deliver that, we need to be able to swap out components
as they die.
So, like the modularity of it makes sense.
And of course, you know, people like the NSA, the Five Eyes agencies, are also perfectly capable of taking stuff from vendors and reintegrating them or integrating them into other sets of tooling or whatever else.
So like modularity, patchworkness, like gluing things together, you know, is kind of how this has always been done.
So it sort of makes sense that now that we're starting to see more bits of the story shake out, you know, that there were some shared components and maybe, you know, some shared lineage, you know, shared
suppliers, that kind of thing.
You know, we're probably never going to know, but interesting.
Notably not across the entire exploit chain, right?
Not even Kaspersky's arguing that.
So it does look like, you know, triangulation might have been a bit of a collab.
Let's put it that way.
But I guess the question too that I have for you, Adam, is, you know,
we know that Peter Williams leaked the Coruna exploits.
Well, I mean, that's the working theory.
It's pretty solidly understood, I think, that Peter Williams leaked the Coruna exploits from
Trenchant, which would have included
a couple of bugs that were used in triangulation.
By the way, this theory now explains why Trenchant people were wearing the, you know,
triangle t-shirts and whatever. Like, I feel like this is all tied together pretty nicely now.
Feel like it's got a bit of a bow on it.
But do you think the leak of those exploits, some of which, a couple of which may have been in the triangulation
exploit chain, would have helped the Russians to somehow detect that exploitation against some of their people?
I mean, Kaspersky's story has always been that they caught it on their Wi-Fi,
which I think is just ludicrous considering the target set for that triangulation exploit chain
was like diplomats all around the world, Russian diplomats and government people,
and what, you're going to also throw it at a handful of Kaspersky threat researchers?
I mean, come on, man.
Like, that's ridiculous.
But yeah, do you think the Williams leaks could have helped the Russians actually find triangulation?
Yeah, I mean, I can't see it not helping.
I mean, the nature of these bugs, like, they're delivered over HTTP, you know, they're going to be inside the browser.
So, like, the places that you can spot it are somewhat limited.
But on the other hand, like, knowing what you're looking for vaguely, like, knowing the shape of the exploits,
knowing, like, it's going to be in this kind of component of iMessage or this kind of component of, you know, Safari's connection to the internet.
Like, if you're in the proxy on the network and you kind of know roughly what you're looking for in terms of web traffic,
it probably would give you some hints.
Like, like, it's hard to know without seeing what the actual stuff on the wire looks like in detail,
but, like, I can't imagine it doesn't help.
And at the very least, you know, maybe there's some other tradecraft hints in there.
Because I think, like, with triangulation, they were a bit more critical of the tradecraft than, say, Coruna.
Like, Coruna seemed a little more polished, whereas, you know, triangulation, maybe it was a little rough around the edges.
So maybe it helped more there than it would have against, you know, like, detecting Coruna by knowing some of the details.
Now, James, just to bring you into this, I mean, we've all been talking about this over the last few days, obviously.
I mean, you feel like that is about the shape of it, right?
Yeah, 100%.
I mean, on the technical details, the first bit of commonality that we found between triangulation and Coruna was that use of the undocumented hardware registers.
And that, I think, was what people jumped at first and went, aha, that's the same.
The whole thing's got to be the same.
It's come from the same place.
And then, you know, we sort of resolved that and said, you know, no, no, no.
look, there is a similarity there.
We can't quite explain it.
Is it a parallel discovery?
Don't know.
I would love to know.
But the Kaspersky thing this week is, yeah, it comes down to a different part of the exploit
chain, which is the kernel exploit itself.
And I think just to pick up on something you said there, Adam, whether it's shared knowledge
of bugs versus shared actual code and modules ready to go, I think given that this was a binary
kernel exploit and what they were essentially doing was disassembling it and looking at it and saying,
well, you know, both of these do all the same actual exploit steps.
And this later version checks for more versions of the kernel,
additional chipsets, etc.
I think the fact that, you know, from a binary perspective,
that the rest was the same, but there was additional functionality added on.
That's code.
That's not just knowing the bug.
Yeah, that was kind of what my reading of that particular detail suggested.
Like, there was some shared source tree behind this, which, yeah, I mean, that kind of makes sense.
All right.
Now look, for anyone listening, thinking, well, what can I do here to prevent myself getting owned by this sort of stuff?
Well, use lockdown mode because Apple has come out and said nobody ever in the history of anything.
Nobody using lockdown mode has ever been hacked with spyware, which is pretty interesting.
I mean, they have pretty decent telemetry on handsets, especially ones running lockdown mode.
So I believe it.
I run lockdown mode.
I would have to say, though, it does have a few rough edges and is probably not suitable for
mainstream consumption. But that is an interesting data point. We also have a piece here about
someone has reverse engineered Apple's security fixes, the ones we talked about last week,
James, where we couldn't figure out what this, like, partial reboot thing they were talking about
was. This blog post addresses that. Yeah, it was Catalin who sent this my way, and it was great to see
because it answered the question that I sort of left off with last week, which was, you know,
this notion of a faster reboot doesn't make sense to me,
because, knowing a lot about how the software update internals work,
you know, its task is to essentially update a file system that has been mounted as read-only.
And so you can only do that if you reboot the system,
mount the file system that you want to update as read-write,
but then you're in a very protected mode at that point
because you don't want anything else running while that's the case.
You do the update, then you go,
and that's why it takes so long to apply a software update.
But this write-up is amazing for two reasons.
One, it explains that, yeah, basically Apple is shipping these cryptexes,
which are essentially cryptographically signed extensions to the file system.
So you can basically say, here's the whole iOS version, but please just patch in this little
bit of the file system here, and the cryptographic sort of trust of the file system is maintained.
So that's how they're doing the fast patching of certain components, certainly those high
vulnerability components like Safari and WebKit.
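If you want the rough shape of that idea, here's a toy sketch. To be clear, this is not Apple's actual cryptex format or tooling, just the general "signed overlay on a sealed base volume" concept, and the HMAC here is a stand-in for the asymmetric signatures and certificate chain a real implementation would use:

```python
import hashlib
import hmac

# Stand-in for a real signing key; an actual cryptex is verified against
# Apple's certificate chain, not a shared secret like this.
SIGNING_KEY = b"vendor-signing-key"

def sign_manifest(manifest):
    # Deterministically hash the (path -> contents) manifest, then MAC it.
    digest = hashlib.sha256()
    for path in sorted(manifest):
        digest.update(path.encode())
        digest.update(hashlib.sha256(manifest[path]).digest())
    return hmac.new(SIGNING_KEY, digest.digest(), hashlib.sha256).digest()

def apply_overlay(base, manifest, signature):
    # Refuse to graft the overlay unless the signature checks out.
    if not hmac.compare_digest(sign_manifest(manifest), signature):
        raise ValueError("overlay signature invalid, refusing to patch")
    patched = dict(base)      # the sealed base volume is never modified
    patched.update(manifest)  # only the manifest's files are overlaid
    return patched

base_fs = {"/usr/lib/WebKit": b"v1", "/usr/bin/safari": b"v1"}
patch = {"/usr/lib/WebKit": b"v2"}   # fast-patch just the hot component
patched_fs = apply_overlay(base_fs, patch, sign_manifest(patch))
```

The point being: the base volume never has to be remounted read-write, because the patch is a separate, independently verified graft on top of it.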
So just great to see them advancing their craft on this. But towards the end of this
write-up, there's a very interesting thing
where they said, you know, we basically pulled apart this first silent update.
Yes, it has the advertised update in there around Safari's navigation bug that was in there.
But they also said, there's a fix here in libANGLE, which is part of WebKit, that Apple has not actually documented at all.
And, you know, it's connecting some fuzzy dots, but funny to see a silent security update come out with an undocumented fix in libANGLE around the same time
Dark Sword was discovered, and that had an exploit in
libANGLE. Not the same bug at all, but you kind of imagine someone at Apple went,
can we please go and have a look over all of these libraries, please?
Yeah, and you're guessing if it's a silent fix, it's probably a doozy.
They usually are.
Now, look, I just wanted to update the listeners on something as well.
We spoke about how Meta is abandoning E2E in Instagram,
and I was suggesting that this is about safety teams.
It's not about, you know, law enforcement access or anything like that.
funnily enough, I wasn't sure why they were doing this, and it turns out that Meta is being sued
quite a lot, both by state governments and by people who claim that they have been harmed by
Meta's lack of due care, and Meta keeps losing these lawsuits.
So if you wondered why they are rolling back E2E, it's because they are going to have to
have a renewed focus on user safety, particularly around kids, and having a social network
where young people hang out, where other people go hunting for said young people,
offering E2E in that sort of context, I think, is going to be very legally problematic moving forward.
I would not expect to see them remove E2E from WhatsApp because it is not a social network.
So I think that just sort of reinforces the commentary that we had around that recently.
Look, as we already mentioned earlier, I don't know that there's much to add here.
Kash Patel, the FBI director in the United States,
his Gmail or whatever popped and like, you know, his holiday photos leaked.
I think his trip to Cuba and whatnot.
But I mean, you know, whatever.
Like this is just, you know, it's whatever news now, Adam.
Yeah, I mean, having your email spool hacked, you know,
happened to Dan Kaminsky when he was on stage at Black Hat.
It happens to the best of us.
So, you know, I have some sympathy for Kash Patel in that regard.
But at the same time, it is just funny.
Bad joke.
Yeah.
I guess good job Iran.
Yeah, yeah, that's right.
Now, look, speaking of Iran,
they are putting out the warnings
that American tech companies in the Middle East
are now legitimate targets.
We saw this all over social media this morning,
and the best we can tell,
this is actually a legitimate warning out of the IRGC.
So they have said Cisco, HP, Intel, Oracle,
Microsoft, Apple, Google, Meta, IBM, Dell,
Palantir, NVIDIA, JPMorgan, Tesla, GE,
Spire Solutions, G42 and Boeing, all of their products contribute to America's
war effort, and thus they consider these companies to be valid targets. Now this
of course comes after Israel has struck at Iran's steel mills, all of them.
Iran had spent a long time building up a steel production capability as a hedge
against oil restrictions. You know, honestly, it feels like Israel's goal here is to
turn Iran into a failed state, which I don't think
is an admirable goal at all, despite the fact that the Iranian government is horrible.
And yeah, we're going to see these sorts of strikes if these, you know, if the types of
actions we've been seeing against Iranian economic interests continue.
I think this is inevitable.
We've already seen them hit AWS data centers and whatnot, like deliberately to see what
would happen to see if various, you know, U.S. military capabilities were degraded.
And yeah, we're just going to see more of this.
I mean, James, you know, you've worked in big tech.
You've worked both for AWS and Apple.
You know, I can't imagine if you were an Apple staffer in Dubai that this would be warming your heart.
No, this is not a good feeling.
And I actually heard from a few friends at AWS after we were talking about the Bahrain incident.
And some of the things they shared with me just really brought into focus how huge the damage is.
And goodness knows how to recover from that.
But thankfully, you know, this is different, right?
This is talking directly to.
And I think there was a line in this threat that says, you know,
civilians are warned to stay away from these,
and to stay away from banks.
And that just does not feel good at all.
It's crossing a boundary.
But at the same time, you know, what are they going to do?
These are the options that are left, you know, left open to them at the moment.
Now, look, speaking of civilian tech being used, you know,
being relevant to a conflict,
we got this really interesting post here on Twitter.
which is, you know, since the Russians have been kicked off Starlink, which, geez, why did that take so long?
You know, it looks like probably they got kicked off Starlink because SpaceX is preparing an IPO
and they didn't want their IPO complicated by that old situation.
But we got this amazing video from a Russian soldier who is like in a snowy field
currently meshing Ubiquiti stuff together to get some sort of coverage on the battlefield.
I mean, Adam, what a world, as we like to say.
Yeah, yeah, exactly.
That's not the fun sort of network engineering.
You have to go out there in the field and the snow and so on.
I bet the Ukrainians are super happy about it, though,
because, like, Ubiquiti gear is not fantastic from a security perspective.
There's been so many issues with that in the past.
So I bet they're rubbing their hands together and chuckling about, you know, all the fun
they're going to have ruining the Russian infrastructure again.
So, yeah, bad time to be Russian network engineers, that's for sure.
Yeah, I mean, that's some adminning under fire, right?
Oh, and we probably should have talked about this earlier, though,
but it's been that sort of week.
there is a Citrix NetScaler bug that is out there being exploited in the wild. watchTowr
had a write-up of this, and of course they buried the fact that this is being exploited in the
wild, like, deep under all of their meme images in their write-up. I mean, I actually like their
write-ups, but they probably could have put that one a bit higher. But, you know, this looks real bad,
and CISA is out there telling agencies to fix it by Thursday. So yeah, a bit of a
Citrixpocalypse. Yeah, this is a sort of a variant of Citrix Bleed
where it leaks memory when it's processing requests.
The only thing that makes this less terrible, I guess,
than some of the previous ones is that there is a prerequisite
that your NetScaler is set up to be a SAML IDP,
like an actual identity provider.
You're going to back your auth off your NetScaler,
which surely no one is crazy enough to actually do,
but, you know, I guess by virtue of the fact that it's out there being exploited in the wild,
people clearly are, although Lord knows why.
But yeah, the watchTowr write-up is wonderful as usual.
And yeah, anyone who's got Citrix NetScalers on the edge of the network is very used to having to emergency patch everything anyway.
And this probably won't be the last one, judging by the quality of memory management in Citrix products anyway.
So, yeah, good times for the admins. At least they're not in the fields in Ukraine.
I mean, I think the IDP configuration is just, it is just insane.
Because when I think, hmm, how should I anchor, you know, where should I anchor?
all of my trust in my organisation,
do I think, you know, a bit-rotted pile
of crap Citrix NetScaler
box that I'm already trying to figure
out how to sunset? Yeah, that's where I
want my IDP. Yeah, where they're parsing
their XML with C badly.
Yeah, yeah. Oh, speaking
of parsing things with C badly,
FFmpeg did their
April Fools' post
today, and they said that
they were, like, switching to Rust or whatever,
and it wouldn't be performant, but at least it'd be
secure and, like, shut everyone up. I don't think this
is the own that they think it is, right?
Like, this is like an April Fools' joke that is somewhat of a self-own.
But anyway, what else we got?
You pulled in this story, Adam, which is very funny.
It's from Del Cameron.
It's from Wired that says, hey, it's just pointing out that, hey, when you use a VPN,
it might actually subject you to being spied on. Like, it's going to be much more likely that you
wind up in the 702 data set because you're shunting traffic around outside the US.
Yeah.
Like having your internet, having your ISP be in another country does make you a foreigner
from the point of view of American surveillance.
So yeah, maybe not the privacy improvement that you were imagining.
And apropos of the ongoing 702 reauthorization debate that we have to have every few months,
yeah, maybe just a little data point and some comedy.
Well, I should say it doesn't actually make you a legit target for the Americans.
It just makes it much harder for the Americans to actually do their job without incidentally
collecting on you.
So yeah, that's a real bummer, man.
That's a real bummer.
And finally, we have some malicious, like, SEO work here.
James, why don't you talk us through this one?
It turned out, for a while,
people, you know, reporters and whatever, started ringing the White House, you know,
just by tapping the number into their keypad,
and it was bringing up, you know, because sometimes, like, you know,
Android phones or whatever will actually look up who's at the number that you're calling
and show that on the screen.
It was showing Epstein Island, which, you know, well,
This article was great for the setup of it.
I started reading it.
I don't know why I'm reading this.
It was like, you know, this will begin with Melania's special debut of her humanoid robot,
and she was walking like a model does, foot in front of foot,
and it made us wonder, what does one wear?
What designers do you wear as the First Lady to an event with the first humanoid robot?
I'm like, what am I reading this for?
And then it's like, so we called the White House.
And it popped up on our phones as Epstein Island.
and then it gets into what happened.
And yeah, it's like you say,
it's like Google phones will go to Google,
look up Google Maps and Google Business
sort of records of who's at this phone number.
But that's all crowdsourcy kind of stuff.
And, you know, someone, you know, craftily
snuck in a fake edit and changed the listing
for the White House number to being Epstein Island.
So it's not dumb if it works.
And it's certainly got the laughs.
Yeah, I tell you there's,
I see people playing tricks on Maps apps
around my area because I live in a pretty tourist heavy part of Australia. It's very beautiful
beaches, rivers, you know, swimming holes, that sort of thing. And people will deliberately,
like, there'll be some beautiful remote swimming hole and someone will like map it to the
centre of a town so that people just can't find it, right? Like they go there to look it up in the maps
and it's like, well, if you don't know where it is. So it's like this sort of, yeah, underground
anti-tourism efforts. But anyway, we'll wrap it up there. Gentlemen, thank you so much for
joining me, James and Adam. And yeah, we'll do it all again next week.
Cheers. Yeah, thanks Pat. We'll see you then. Thanks, Pat. What a week. That was Adam Boileau and
James Wilson there with the check of the week's news and what a week it was. It is time for this
week's sponsor interview now with Edward Wu, who is the founder of Dropzone. And yeah,
Dropzone does like AI SOC stuff, right? So it does your basic SOC work, automates that
with AI so that you don't have a whole team of people sitting there chasing down every single alert
in your SOC. I mean, basically every SOC is doing something like this, whether it's homegrown,
whether they're using a specialist vendor like Dropzone, it's kind of the future. But, you know, Ed,
while he's there, figures, hey, I'm plugged into all of this data, why don't we do some extra stuff?
And that's what they've built. They've built in some automated AI-based threat hunting.
And yeah, that's what this interview is about. So here is Ed Wu talking about why they chose
to build AI threat hunting into the Dropzone platform. And also, you know, sort of
what the starting premise of it was, which turns out it's like, well, you know, what would you do
if you had basically unlimited man hours to throw at, like, certain queries and whatnot?
And anyway, here's Ed starting off by explaining why they chose to build these features.
Enjoy.
We're doing this for a couple of different reasons.
First and foremost, a lot of our early adopters and customers, after they have AI agents
investigating alerts, their
next ask is some sort of help with regards to threat hunting.
And at the same time, technically,
there are a lot of similarities between threat hunting
and alert investigations.
Some might say alert investigations are kind of like
a smaller scoped threat hunting.
Well, threat hunting is investigations without the alert, right?
Like it's just a different starting point,
the same kind of workflow, really, once you get going.
Yeah, exactly.
And threat hunting in general is a little bit broader, right?
Because for alert investigations, you are looking at one specific thing,
such as this user logged in abnormally from this particular unusual geo region at this time.
Well, with threat hunting generally, you start off by casting a much wider net:
let's review all login attempts from certain countries.
And then you might find one match, you might find zero matches, or you might find 100 matches.
And then after that, you need to do additional filtering before diving deep into each match
to see if there's actually any corresponding breach or suspicious activities.
So I guess my question for you is how far do you go with the automation with something like this, right?
Because what you just described, you know, show me the number of logins from country X or whatever.
I mean, you can pretty much do that with Splunk already, right?
If you are prepared to learn a cursed, you know, query language.
So the AI takes care of that part of it, right?
Because you've got a much more natural interface where you can just ask it, hey, show
me those things. But in that instance, you're still kind of manually guiding an investigation towards
something. You know, how far do you go with something like that in terms of automation to the
point where you just, instead of asking it, hey, show me the number of, you know, logins we had
from this region, you're just saying, show me something weird. Yeah, good question. It's definitely
a spectrum, because different startups and products are kind of picking
different spots within this whole automated threat hunting spectrum. On one end, it's a chatbot.
On the other end, extreme end, you just say, show me something weird and then it finds you stuff.
For Dropzone specifically... Yeah, but it might... The best thing about AI is you ask it to show you
something weird, and it might show you something weird that has absolutely no connection to some sort
of incident, just something strange. But yes, anyway, sorry, I interrupted you. Go on.
No, it's good. Yeah, from our perspective, we are choosing somewhere in the middle.
So we define our AI threat hunter as a piece of software that takes hunt packs as input.
And a hunt pack could be one or more sets of TTPs or IOCs.
And then what it generates for output is a complete hunt report.
So generally, when we zoom into a specific threat hunt, we are trying to automate what a typical human analyst or engineer is looking for.
So we break down the threat hunting actually into three phases.
The first phase is what we call a collection phase.
That's where if you are hunting for, for example, a TTP of unusual logins of some sort,
you write a pretty broad query looking for login attempts from unusual countries
or countries that maybe the organization has no business interacting with.
So that's kind of the collection phase.
generally what happens after that is you write a single query or multiple queries and you end up getting a lot of responses.
So that's where we enter the second phase, which is the filtering phase.
So the collection phase might find 100 matches.
It might find 100,000 matches.
So what do we do then?
It's practically infeasible to thoroughly look into each of 100,000 matches.
So this is where we are building software that
replicates the data pivoting and slicing that a lot of human threat hunters are doing.
When they are faced with a lot of data, the human threat hunters generally are using their intuitions to find
unexplainable anomalies.
So they are using statistics, they are slicing the data across different filters, columns,
and trying to see, okay, do I see anomalies within this data that I should spend more time on?
So at the end of the filtering phase, we might go from 100,000 rows to maybe 150 rows
that truly require additional in-depth analysis.
And that's where for each of these anomalies, our system is performing an in-depth analysis,
kind of similar to an alert investigation, to look into, okay, for this particular instance,
this user, this time, in this country, exactly what's going on.
Was the user traveling to that country?
Did this particular user actually recently move to that country?
What is the country of residence of this employee within the HR system, etc.?
To try to figure out if any of these anomalies actually correlate to malicious activity.
And at the end of the threat hunt, we show the full pipeline: starting off with a couple of queries
that maybe resulted in 100,000 rows.
After the statistical filtering phase,
we might weed out the majority of that
and have 100 anomalies that the system dug deep into,
which resulted in 98 benign results and two suspicious results.
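For listeners who like to see the shape of these things, the collection, filtering, and deep-dive phases Ed just described look roughly like this. A toy sketch only: the event fields, rarity threshold, and verdicts are ours, not Dropzone's, and a real hunt would be querying a SIEM rather than a Python list:

```python
from collections import Counter

# Invented sample login events for illustration.
events = [
    {"user": "alice", "country": "US"}, {"user": "alice", "country": "US"},
    {"user": "bob", "country": "US"}, {"user": "bob", "country": "US"},
    {"user": "mallory", "country": "KP"},  # the lone outlier
]

def collect(events, allowed=frozenset({"US"})):
    # Phase 1 (collection): broad query for all logins from outside the
    # countries the organisation normally does business with.
    return [e for e in events if e["country"] not in allowed]

def filter_anomalies(matches, events, rarity=0.25):
    # Phase 2 (filtering): statistical slicing that keeps only matches
    # from countries that are genuinely rare across the whole data set.
    counts = Counter(e["country"] for e in events)
    total = len(events)
    return [m for m in matches if counts[m["country"]] / total < rarity]

def deep_dive(anomaly):
    # Phase 3 (in-depth analysis): in a real system this is where you
    # would pull HR country-of-residence, travel records, and so on.
    return {"event": anomaly, "verdict": "needs analyst review"}

matches = collect(events)                      # broad net, many rows
anomalies = filter_anomalies(matches, events)  # whittled to a handful
report = [deep_dive(a) for a in anomalies]     # per-anomaly drill-down
```

The real value is in the funnel: the same structure takes you from 100,000 raw matches to the small set that actually deserves a human's attention.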
Now, you are releasing this at RSA,
but I'm imagining that you've probably already, you know,
back tested it against some customer data and whatever to see what shakes out.
Like, you know, how useful has it proven to be in those, in those tests?
Great question.
It's quite interesting because as we started to really test it against real world customer data,
the number one impression is just that the overall thoroughness and the depth of the analysis is truly eye-opening.
I think this is where for alert investigations, when we are leveraging AI agents, we generally see compression of automating maybe 60 minutes, you know, 90 minutes of work.
No, no.
I mean, I already see where you're going with this, which is like now that you've got this sort of unlimited labor paradigm, you can say, okay, well, what if we had unlimited labor?
What would we ask those, you know, completely, you know, basically free people to do?
and it's tackled this absolutely gigantic task
that would take a ridiculous number of hours
just to see, you know, what shakes out the other end.
I mean, it makes a lot of sense,
but the question is, you know,
is interesting stuff shaking out the other end
of all of those virtual, you know, people hours?
Yeah, absolutely.
During the early beta alpha testing,
we have already found a couple interesting anomalies
that we have surfaced to our early adopters.
none of them turned out to be, like, true, you know, true positives yet, but we have received
very positive feedback in terms of the team really appreciating flagging those interesting
situations some of them had to do with some sort of misconfiguration where when you look at the
data it's totally very concerning but if you look at the broader compensating security controls
that's around it. It's not the end of the world.
But it's still something that they're like, well, glad we found that out, right?
Yeah, exactly. So in one case, we saw essentially a very suspicious looking web request path
being accessed. So that really resembles, ultimately, a web shell. And that's kind of where
the particular environment actually had some sort of WAF in front of it, and then
had a somewhat poorly misconfigured, or maybe intentionally configured, web gateway that
always responds to these file path requests with 200s, even though technically there is no actual,
like, web shell executing on those paths. So you've just given me two examples there of the sort
of threat hunts that you can start with, which is stuff with, you know, impossible travel or weird
logins from weird regions. Another example there was
just, you know, well, that looks like web shells.
Yeah.
That looks like web shell activity there.
That's a bit strange.
Might want to investigate that.
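That web-gateway anecdote is a nice illustration of why the filtering phase needs a sanity check before anyone gets paged. One common trick (sketched here with entirely hypothetical paths and a stand-in for the HTTP call, this is not Dropzone's logic) is to probe a path that cannot plausibly exist: if that comes back 200 too, the gateway is a catch-all responder and status codes alone prove nothing.

```python
import random
import string

# Hypothetical web-shell-looking paths flagged by a hunt.
SHELL_PATHS = ["/uploads/cmd.aspx", "/images/shell.php"]

def canary_path() -> str:
    # A random nonsense path that should not exist on any real server.
    token = "".join(random.choices(string.ascii_lowercase, k=16))
    return f"/{token}.php"

def classify(fetch_status) -> str:
    # fetch_status stands in for a real HTTP request returning a status code.
    hits = [p for p in SHELL_PATHS if fetch_status(p) == 200]
    if not hits:
        return "clean"
    if fetch_status(canary_path()) == 200:
        # Gateway answers 200 to everything, so the hits are inconclusive.
        return "catch-all gateway, status codes inconclusive"
    return "possible web shell: " + ", ".join(hits)

# Simulated catch-all WAF/gateway that answers 200 to every path:
verdict = classify(lambda path: 200)
```

With that one extra request, the misconfigured-gateway case gets downgraded instead of generating a 3am false alarm.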
What are some other examples you can give me of these sorts of,
because it sounds like you're very much doing, at this stage, pre-canned hunts for the most high-impact
stuff.
You know, there's two of them.
Can you give me a couple more that you're releasing in these sort of, you know, pseudo-playbooks?
Yeah, absolutely.
So when we think about hunt packs, right now we are primarily focused on
leveraging our own internal threat research to generate hunt packs that's either focused on specific
MITRE ATT&CK TTPs as well as well-known threat actor activities and
common software stacks. So we are looking at
hunt packs, for example, that's targeted at a specific threat actor group, as well as
TTPs such as remote services. So we have hunt packs that's
looking for PowerShell, the usage of PowerShell remoting, as well as PsExec.
Beyond that, we also have hunt packs that's focused on network activities, ranging from
unusually large data transfers to anomalies within DNS that might indicate some sort of DNS tunneling.
So far, we have already built around 50 pre-canned hunt packs, and we're
expecting to go public very soon with around 100 more.
Also, we're doing some active research to leverage AI agents to continuously monitor open
source intelligence feeds like threat reports and Twitter feeds.
So we can get to a place where our system is also programmatically generating hunt packs
live for novel emerging threats, to get to a future where
security teams can actually adopt some sort of 24/7 autonomous hunting,
where the combination of AI threat intelligence analyst and AI threat hunter
is able to perform hunts on net new emerging threats over evenings,
before the security team has actually woken up, and report to the executive team.
Yeah, I've spoken to people in threat
intelligence who are doing just that, like they're using AI systems to do like all of the IOC
extraction and whatever from various reports and findings and then generate hunt rules and whatever.
But being able to do that all in the, all in the one box, yeah, I mean, I can see,
I can definitely see the appeal. Ed Wu, fantastic to chat to you. I wish you all the best with
it. Very interesting conversation. And yeah, we'll chat to you soon. Cheers.
Thank you, Pat.
That was Ed Wu there from Dropzone. Big thanks to
him for that.
And of course,
Dropzone is this week's sponsor.
And if you run a SOC,
yeah, you should check out Dropzone
because it can certainly save you
a lot of time and frustration.
But that is it for this week's show.
I do hope you enjoyed it.
I'll be back soon with more security news and analysis.
But until then, I've been Patrick Gray.
Thanks for listening.
