Risky Business - Risky Business #817 -- Less carnage than your usual Thanksgiving
Episode Date: December 3, 2025

In this week’s show Patrick Gray and Adam Boileau discuss the week’s cybersecurity news. It’s a quiet week with Thanksgiving in the US, but there’s always some cyber to talk about:

Airbus rolls out software updates after a cosmic ray bitflips an A320 into a dive
Krebs tracks down a Scattered Lapsus$ Hunters teen through the usual poor opsec…
… as Wired publishes an opsec guide for teens
Microsoft decides its login portal is worth a Content Security Policy
South Korean online retailer data breach covers 65% of the country

This week’s episode is sponsored by Nebulock. Founder and CEO Damien Lewke joins to talk through their work bringing more Sigma threat detection rules to macOS.

This episode is also available on YouTube.

Show notes

Airlines race to fix their Airbus planes after warning solar radiation could cause pilots to lose control | CNN
Congress calls on Anthropic CEO to testify on Chinese Claude espionage campaign | CyberScoop
Post-mortem of Shai-Hulud attack on November 24th, 2025 - PostHog
Update: Shai-Hulud and the npm Ecosystem: Why CTEM Must Extend Beyond Your Walls | Armis
Glassworm's resurgence | Secure Annex
4.3 Million Browsers Infected: Inside ShadyPanda's 7-Year Malware Campaign | Koi Blog
Post by @spuxx.bsky.social — Bluesky
Meet Rey, the Admin of ‘Scattered Lapsus$ Hunters’ – Krebs on Security
The WIRED Guide to Digital Opsec for Teens | WIRED
Perth hacker Michael Clapsis jailed after setting up fake Qantas Wi-Fi, stealing sex videos - ABC News
Ed Conway on X: "The person who first downloaded the OBR's document at 11:35 on Budget day (I'm guessing someone at Reuters, given they first reported it) had already guessed the web address and tried and failed to download it 32 times so far that day(!) https://t.co/6iLm2uEUj2" / X
Reuters accused of hack attack | ZDNET
The Destruction of a Notorious Myanmar Scam Compound Appears to Have Been ‘Performative’ | WIRED
Microsoft tightens cloud login process to prevent common attack | Cybersecurity Dive
Fortinet FortiWeb flaws found in unsupported versions of web application firewall | Cybersecurity Dive
Cryptomixer platform raided by European police; $29 million in bitcoin seized | The Record from Recorded Future News
Officials accuse North Korea’s Lazarus of $30 million theft from crypto exchange | The Record from Recorded Future News
Data breach hits 'South Korea's Amazon,' potentially affecting 65% of country’s population | The Record from Recorded Future News
NSA Contractor Groomed Teenage Girls On Reddit, DOJ Alleges
Nebulock developed coreSigma for MacOS
coreSigma repo:
Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray.
We've got a great show for you this week. We'll be chatting with Adam Boileau about all of the week's security news.
And then we'll be hearing from this week's sponsor. And this week's show is brought to you by Nebulock,
which makes, I guess, what they're describing, a bit cheekily, as a vibe hunting platform, right?
So this is an AI-enabled threat hunting platform where instead of having to comb through horrible blog sources and do a whole bunch of stuff manually,
you can kind of ask it, hey, you know, is anything weird going on?
And can you have a bit of a deeper look at that and answer these questions?
And, you know, it's actually pretty cool stuff.
So the Nebulock founder, Damien Lewke, is along in this week's sponsor interview to chat about some work they've done on Sigma.
Sigma detections for macOS, basically taking what they're calling core Sigma detections for macOS and piping them through to Elastic.
So you too can have that information in Elastic when,
like, I don't know, there was an unsigned kernel extension load on your macOS box. That seems like
something you might want to know about. And it's actually currently a little bit difficult to know
these things. So that is actually a really interesting interview. Do stick around for that. But
first up, Adam, let's get into the news. Although, actually, before we get into the news, we've got to say
a big shout out to our editor slash producer slash Seriously Risky Business host, Amberley Jack,
who is laid up recovering from surgery. Yeah. She's definitely been looking
forward to that for a while, so I'm glad that it's been done, and we'll see her back at work,
you know, towards the end of the year, early next year, I think. So yes, it'll be good. We'll miss
her. Unfortunately, we are working harder and, um, enjoying her Slacking us, uh, photos of her with two thumbs
up on the good stuff they give her, the good stuff, uh, in recovery from surgery. So Amberley, uh,
hope you're feeling okay, mate, and, um, yeah, looking forward to having you back. All right. Uh, but by the way,
for those of you joining on, uh, YouTube, this is why there's no superimposed, uh,
you know, images or news ticker or anything like that.
I ain't doing all that.
That's Amberley's job.
So we'll wait for her to get back before that resumes again.
Now, look, we're going to start with a story that is not actually cybersecurity,
but is definitely about patching and the perils of patch management.
Airbus had a hell of a week.
It turned out cosmic rays caused a bit flip in a controller,
something that controls the elevators of the
Airbus A320 plane, and that has led to them issuing an emergency directive that all of the
operators of this aircraft needed to roll back to a previous version of the firmware that
controlled this device.
This caused groundings of airlines all around the world and just threw the world's
airline system into absolute chaos.
But, I mean, this is ultimately the story about a patch.
Yeah, it's a software patch management story, which is pretty wild.
There are, yeah, I think it's what, something like
6,000 Airbus A320 family aircraft around.
And yeah, they demanded that all of them have this particular software version
rolled back on their elevator controllers, which in some planes is a like in situ
software thing.
And in some configurations, you actually have to pull the device and replace it with one
that's been downgraded in the maintenance facility beforehand.
So pretty disruptive.
The, I mean, Airbus has said that it is because of,
yeah, cosmic ray bit flips.
And there was kind of, you know, there was a solar storm,
some heightened, you know, activity in, you know,
the Earth's magnetic field and so on that could have done that.
I've seen some people like saying, well, actually the peak of the solar storm
had already kind of passed at that point.
So, but bit flips can happen.
To be clear, what they're alleging happened is that a, you know,
some solar activity caused cosmic rays to flip a bit somewhere in an airbus's sort of control
system which resulted in an Airbus A320 having an uncommanded pitch down, right, which put
everyone in zero G, a couple people got injured, and then they looked into it and they're like, oh,
okay, this was a bit flip. And you sort of wonder why the updated version of this firmware couldn't
deal with a flipped bit, whereas the previous one could. You sort of think, well, maybe they took
out some check-summing or something, because apparently the newer version has new features.
And you wonder if the reason that they couldn't keep the sort of integrity checking is because
they needed the processing headroom for these new features.
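The check-summing idea being speculated about here is easy to make concrete: a checksum stored alongside a value will catch any single flipped bit in it. This is a hypothetical Python sketch of the general technique, not anything to do with the actual Airbus firmware, and the parameter value is made up:

```python
import zlib

def store(value: int) -> tuple[int, int]:
    """Keep a 32-bit parameter alongside its CRC32 checksum."""
    return value, zlib.crc32(value.to_bytes(4, "big"))

def read_checked(value: int, checksum: int) -> int:
    """Return the value only if its checksum still matches."""
    if zlib.crc32(value.to_bytes(4, "big")) != checksum:
        raise ValueError("checksum mismatch: possible bit flip")
    return value

# A stored flight-control parameter (hypothetical value)
value, checksum = store(0x00001234)

# A cosmic ray flips a single bit of the stored value
corrupted = value ^ (1 << 17)

assert read_checked(value, checksum) == value  # intact value passes
try:
    read_checked(corrupted, checksum)          # corrupted value is caught
except ValueError as err:
    print(err)
```

The interesting engineering question in the Airbus case is exactly the one raised above: why a check like this, or whatever equivalent the older firmware relied on, apparently wasn't enough in the newer version.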
It's just, I just think it's an interesting software engineering story.
Yeah, yeah, it certainly is.
I mean, and I've seen, you know, some people in like the airline industry,
you know, airline industry commentators talking about it,
saying that there are some safety features that were in the new version
that, of course, now that they're rolling back,
that they're not going to have for some things that may improve the characteristics
of the plane and certain, like, when it's an alternate control law
and it's approaching a stall, then there is some software that dealt with that circumstance
that they've rolled back.
But yeah, it's kind of unclear, you know, how, one of the things I read was that the older versions would refresh particular parameter values in memory more often, which maybe leads to your idea that there wasn't enough headroom with these new features and that had to take something out and that was the thing that got taken out.
But we don't really know whether that was accidental or, I guess it wouldn't be accidental in the aircraft industry, but what the tradeoffs were that led to that.
But, the net result for most people is standing around waiting, you know, for their planes and all of this.
because patch management's hard.
And I actually had a friend of mine say,
you know, well, like, surely these things have automated, you know,
updates these days.
Why does this require planes to stop?
You know, that's, you know, kind of, it's a little,
this is the only area of technology where we have good, robust controls around
something like patch management.
So it's actually really nice to see this happening in the real world
compared to, you know, CrowdStrike, where we're just going to YOLO roll out an update
and, you know, brick computers all around the world.
Yeah, I'm not complaining about there being no OTA updates for your Airbus, right?
Like this component apparently is made by Thales,
and you've seen pictures.
I don't know if they're just file photos of people updating other components,
but it's of someone with a terminal plugged into the thing,
you know, in the tail of the aircraft,
actually updating the firmware.
And I'm glad it doesn't have some random LTE modem in there,
you know, downloading a bunch of code
and not actually checking the signatures properly, right?
Which is how it would normally be if it's an OTA update.
That is how it would normally be.
I mean, imagine if we made, you know, banks and enterprises
and government agencies responsible for the integrity of their computer systems the way we do
for airline operators.
You know, so this seems like a good news story all around.
Yeah, yeah.
I don't know, though.
I think it's just, connecting them to the internet kind of would be a mistake,
and you can't really take the bank stuff off the internet.
But sure, okay, we'll go with that.
That's fine.
It's more like the rigour is what I mean.
Sure, sure.
I get it, I get it.
All right, now, look, speaking of, you know, carefully rolling out technology that
we really understand the implications of.
Let's have a chat about AI now.
And the US Congress is asking the chief executive of Anthropic
to come along and have a chat about this recent campaign
that we spoke about a couple of weeks ago
where a Chinese APT, presumably the MSS,
were automating part of their campaign using Anthropic.
I think, you know, based on the comments we've seen from people on various committees,
I think this could actually be an intelligent conversation.
So the House Homeland Chair, Representative Andrew Garbarino from New York, said this incident is consequential for U.S. homeland security because it demonstrates what capable and well-resourced state-sponsored cyber actors, such as those linked to the PRC, can now accomplish using commercially available U.S. AI systems, even when providers maintain strong safeguards and respond rapidly to signs of misuse.
So I think, okay, it's slightly encouraging, but I still think it's missing the point,
because they could have done this just as easily by spinning up, you know, DeepSeek.
I don't think the, oh, wow, it used US technology angle is particularly interesting here.
What do you think?
No, I felt the same thing.
That felt like a bit of a rah-rah.
Let's advertise how good US technology is kind of bit of it rather than a meaningful piece of it.
But like, it's good to have these conversations and I'm glad that, you know, they are talking about this stuff, you know, at this kind of level,
rather than, I mean, there's so much room given to AI companies to kind of, you know, go out and innovate right now that having any kind of oversight of all seems like a good step.
But it does, yeah, I'm not sure how meaningful it's going to be.
Like, I mean, you can kind of imagine the level of questions that Anthropic is going to get.
I think the CEO of Google Cloud is going to be there as well, and some other companies too.
So I'm not confident that it's going to be a super detailed and deep, you know, line of questioning.
But it's good that we are having these conversations at all.
Well, I don't think they're going to table IOCs, if that's what you mean.
Like, I think keeping at high level is kind of okay for Congress.
But I don't know.
It sort of seems like a hearing that, yeah, we'll be, we'll at least be tracking at some point.
Moving on, and we've got a decent post-mortem of the latest Shai-Hulud worm attack.
This one's been written up by a company whose name I still can't believe exists:
PostHog.
Yes, and this one's interesting because they are one of the patient zeros.
So they were compromised to seed the initial deployment of the Shai-Hulud worm along with, you know, a couple of other places.
And they've got a write-up of how the attackers went through the process of compromising them.
And that's actually quite an interesting, you know, quite an interesting kind of campaign in itself, like the process of doing that.
They managed to kind of sneak a commit in, you know, they submitted a pull request for a commit that they had made that kind of exploited a misunderstanding in some of the, like,
automation workflows used when they process commits.
And PostHog has written up and said, like, we straight up didn't really understand
how this particular, you know, GitHub integration worked exactly.
GitHub's actually changing some of that behavior because it is confusing.
And they talk through that initial kind of intrusion process.
So that kind of speaks to the cunningness of the attackers that were behind this.
Like they had plenty of tricks up their sleeve.
And I think, you know, the PostHog write-up is actually really good, like, great details on
the worm itself, but also good details of the process they went through and some of the steps
they're taking to address it. So yeah, in the end, kind of good job writing it up and good job
responding to it. Yeah, we had a bit of feedback too from a listener via Blue Sky who was saying
that, you know, that we got it wrong when saying that the NPM solution to this is to
introduce manual steps into the publishing process. But this is more of a, I think it's more of a
miscommunication, I guess, because, I mean, basically NPM's solution here,
GitHub's solution, is to gate all of these repo updates through some sort of CI/CD pipeline, right?
So you've got like a trusted platform where you can submit code and it goes through this
and, you know, then winds up in your repos.
But I think there's also like U2F challenges and stuff involved, which is the manual bit that
you were talking about.
Like clear this up for us, would you?
So there's a couple of bits here.
One is, NPM, for the case where you do just want to publish a package yourself, will require U2F or some kind of, like, human-in-the-loop auth as a kind of interim step, I guess.
I mean, but that will probably be there forever.
But the real thing they're trying to do is move to a point where, yes, everything goes through a CI/CD pipeline.
And there's some technical mechanisms for doing, like, short-lived tokens issued by the CI/CD integration provider itself, like GitHub in the case of many people,
that then can be trusted through a, like, federated auth mechanism into the publishing provider.
So npm or PyPI or whatever other platform is taking the artifacts made by the build process.
So there's sort of a technical level of automation there.
But really the overall goal here is to push responsibility for the integrity of code
and therefore the artifacts generated and published back up into the code repository
and therefore back up into the human process.
So in the case of a big open source
project, you know, it's not going to be particularly likely that one person can commit, approve,
and, you know, publish a piece of code; there'll be more people involved. And that's the kind of
level of human checking. And then all of that human interaction is gated behind some kind of U2F or,
you know, other phishing-resistant auth. So it's kind of, everybody's right in a way. Like,
there is more automation in the publishing side, but the goal is to push auth back up to where
humans are and where they are able to make better decisions about the code and the repositories
and the rules around how and when stuff can be built and deployed and published and so on.
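To sketch what that federated, short-lived token flow looks like from the registry's side: instead of honouring a long-lived secret, the registry checks a token minted by the CI provider for this one build. A minimal Python sketch; the claim names and package names below are illustrative assumptions, not npm's actual token schema:

```python
def verify_publish_token(claims: dict, package_repo: str, now: int) -> bool:
    """Accept a publish only if the CI-issued token is unexpired,
    comes from the trusted issuer, and is scoped to the repository
    that owns the package."""
    if claims.get("iss") != "https://token.actions.githubusercontent.com":
        return False  # not minted by the trusted CI provider
    if not (claims.get("iat", 0) <= now < claims.get("exp", 0)):
        return False  # outside the short validity window
    if claims.get("repository") != package_repo:
        return False  # minted for a different repository
    return True

claims = {
    "iss": "https://token.actions.githubusercontent.com",
    "iat": 1_700_000_000,
    "exp": 1_700_000_600,  # valid for ten minutes
    "repository": "example-org/example-package",
}

print(verify_publish_token(claims, "example-org/example-package", 1_700_000_300))  # True
print(verify_publish_token(claims, "example-org/example-package", 1_700_001_000))  # False: expired
```

In the real flow the token is a signed JWT verified against the issuer's published keys, but the point stands either way: there is no long-lived secret lying around to steal, and a stolen token is only useful for minutes, for one repository.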
Well, I think one of the positives of that approach, too, is you have theoretically,
you have fewer long-lasting, like, access tokens and secrets lying around, right?
Yes, yes, exactly.
And in the, actually, in the PostHog write-up of the Shai-Hulud bit,
they discussed whether using that kind of short-lived token would have actually helped
in their particular case, for the pre-Shai-Hulud kind of patient zero part.
And their conclusion was it probably wouldn't have because the attackers were moving sufficiently
quickly that they would be within the time limits of those kind of like interim tokens.
So, you know, nothing is going to make this a perfect process,
but anything that means you are less likely to leave tokens lying around
and then the value of those tokens is either more tightly scoped or reduced in time
or have some other kind of restrictions on it,
all of that absolutely improves the resilience of the ecosystem as a whole.
and it's a good plan.
Yeah, so we've linked through to a bunch of stuff
talking through all of that.
So if you're really interested,
you can go through to this week's show notes
at risky.biz and have a read.
Now we've got a write-up from a friend of the show,
John Tuckner over at Secure Annex,
looking at, like, you know, these VS Code extensions
that are malicious.
And he's had a look at 23 extensions
which copy other popular extensions.
I mean, this stuff isn't new,
but it's a good write-up
on how these people are
getting these extensions into these stores and making them look legitimate by doing things like
running up the downloads. And he's got a great, you know, he's got some great screen caps here
where he's got the bad ones next to the good ones and it's real hard to tell the difference.
Yeah, yeah. I mean, some of these are like cryptocurrency-related things. And like you, it's basically
impossible to tell just at a glance which of these is legitimate and which is not, which of course,
is the point of the process. And that's how they get you. There was a like ongoing campaign that's
been going for, I think, a month or two now, targeting VS Code extensions in the Microsoft
store and some of the other kind of related stores. So, yeah, the idea of kind of leveraging that
trust and then using it to target extensions is not just a thing that's limited to browsers.
Right? We're seeing it in other platforms. And, you know, the tradecraft that we've seen
in browser extensions is absolutely applicable in those other places. And that's kind of one
of the things that he's written up here. Yeah. So there's a whole bunch in the VS Code Marketplace and a bunch
in Open VSX.
And I mean, I guess this is the point, right,
is the supply chain is absolutely a mess.
And it's not just limited to things like Shai-Hulud worms
and, you know, the occasional dodgy commit
or typosquatting in npm.
Like, top to bottom, I feel like this is,
there's going to be, there's going to need to be some real work done here
to sort this out, right?
Yeah, well, exactly.
And the problem is it's really quite hard
because with most of the controls that we see in place
in code marketplaces and app stores and things,
there is an initial bunch of checks at the time when you originally set up or deploy it,
and then subsequent updates don't get the same amount of scrutiny.
And in the case of browser extensions and this kind of VS Code stuff,
a lot of it is submitting something that's useful and perfectly valid
and then six months down the line replacing it or updating it with some kind of malicious thing
and that can lead to a lot of success.
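One cheap defensive check that falls out of that update-scrutiny gap is diffing what an update requests against what the installed version already had. A hypothetical sketch, where the extension name and manifests are made up for illustration:

```python
def new_permissions(installed_manifest: dict, update_manifest: dict) -> set[str]:
    """Return permissions the update requests that the installed
    version did not, a common sign of a hostile takeover."""
    before = set(installed_manifest.get("permissions", []))
    after = set(update_manifest.get("permissions", []))
    return after - before

# Hypothetical extension manifests, before and after an update
installed = {"name": "handy-formatter", "version": "1.4.0",
             "permissions": ["storage"]}
update = {"name": "handy-formatter", "version": "1.5.0",
          "permissions": ["storage", "cookies", "history", "<all_urls>"]}

escalated = new_permissions(installed, update)
if escalated:
    print("review before updating:", sorted(escalated))
```

A benign formatter suddenly wanting cookies, history and access to every URL is exactly the shape of change that six-months-later malicious updates tend to have.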
There are even people who will sell their extensions after they've got bored of them;
attackers, you know, go around and offer to buy, you know, unloved extensions that have an install base and then,
you know, update them with malware. That's something that people have done with great success in the
past. Yeah, so Secure Annex does a lot of work tracking things like VS Code extensions,
you know, Chrome browser extensions and whatnot. Another company that does similar stuff,
but with a different kind of approach, is Koi. And Koi has a write-up here on a group they're calling
Shady Panda, which they reckon has been at this for seven years. And it's amazing, right? Because you know I've been banging on
about stuff like browser extensions for years.
And they don't often make the news.
And it's like, oh, well, it never happens.
It's like, well, that's because no one's really looking at it.
And they've taken a look.
And apparently over seven years, this group, Shady Panda,
conducted a campaign that infected 4.3 million Chrome and Edge users
with malicious extensions.
Seems like a pretty comprehensive write-up here, too.
Yeah, yeah.
They kind of look through the timeline as these people get more experienced
with how they're going to work it, and the various features
they've added over the years.
But yeah, getting to the point where, you know, their extensions are stealing, you know, all of the cookies, all of the form input, all of the browser histories, all of the mouse clicks.
So they've got multiple ways to kind of turn that into money or turn that into onwards access.
In some cases, also just having these extensions poll an endpoint for arbitrary JavaScript to run.
So when they want to do something more exotic that they haven't thought of, they can just task their fleet of browsers.
And this is the same kind of thing where they, you know, submit a good extension, let it run for a
while and then update it.
And in the case of, you know, the Google Chrome slash Microsoft Edge Store, you know,
the controls there are really not great for updates.
So it's kind of humbling when you see how successful this stuff is.
And, you know, I know people who've been hacked by browser extensions getting updated.
And like it can happen, right?
It's, you know, even if you are kind of conscious about it, like this is a very hard thing,
especially when, you know, you install the browser extension and it works for a while.
You just forget about it.
It sits there and then, you know, eight months later,
bam, you're infected with an infostealer.
It's, yeah, it's pretty rough.
Yeah, I mean, controlling this sort of thing is quite difficult as well, right?
Like, you can do it with, you know, Microsoft and Google, you know,
browser management stuff, but it's, it's hard.
It's harder than it should be.
Yeah, yeah.
And like at an enterprise scale, it's difficult.
And then even if you, you know, even if you get to the, like,
I guess if you are going to let your users install extensions at all,
like, having approved trusted ones is kind of like the gold
standard. But even those can be taken over, because, I mean, you're just saying, I trust this person, who
I don't really know, to put stuff in my browser. And maybe they're trustworthy today, but six months
from now, a year from now, things can be different, and the mechanisms for dealing with that are very
limited, unfortunately. Well, yeah, I mean, you've got Airlock Digital, who have, um, allow listing for browser
extensions, which is really cool. Um, and then you've got, as I mentioned, like, John Tuckner. Like, I've got
no business relationship with Secure Annex or John at all, zero, uh, but
I think what he's got there is a really good business where you can pay him money to monitor
extensions that are present in your environment. So you just give him a list and he charges like per
extension and if one of them goes rogue, you get an email or you got a, you know, Slack or, you know,
whatever it is, however he notifies you. And that way you can go and take care of it because, you know,
you're quite right. Even if you've got that control element, whether or not that's using the native
tools from Microsoft, Google or using something third party, more like the airlock approach,
that's only useful if you actually know when to act, right? And no one's really doing a good job
of detecting this stuff. So yeah, I think this is going to just continue to, like,
keep trucking on, right, and become more and more of a problem. But, you know, I've been saying
that for years and it hasn't really accelerated yet. But, you know, when these things kick off,
it tends to happen quick. So I reckon it's better to be prepared now, personally.
Yeah, browser extensions don't make me feel good. So, yeah, being prepared and having some solution
to this problem, I think is pretty important because as we make things harder elsewhere, you know,
as we, you know, U2F everything, as we do some other things more, you know, more robustly,
attackers are just going to move around, and browser extensions are, they're a juicy target.
They sure are.
All right, so let's talk about some classic Krebs, because Brian Krebs has identified, you know,
what looks to be a fairly key player in the old Scattered Lapsus$ Hunters, you know,
advanced persistent teenagers kind of crew.
And it looks like he's like a, what, a 15-year-old Jordanian kid,
maybe with some Irish heritage as well.
Maybe I think it looks like an Irish mom and a Jordanian dad living in Jordan.
And yeah, this is pretty brutal stuff, right?
Because the thread that Brian pulled on was when this guy dropped a screencap of, like, it was a scam email that he got that said,
oh, you know, we've accessed your email account using this password, and then inserted, like, you know, his old password from, like, some old dump or whatever.
Or from an infostealer log,
apparently. And he, like, screencapped it, redacted his email address and popped it into a
Telegram channel saying, oh no, look at me, I'm done. You know, ha, ha, ha, being sarcastic. But he,
he didn't redact the password. So from there, all you needed to do was look up that fairly
unique password, you know, match it to an email address and then onwards from there. And yeah,
Brian wound up talking to this guy. Yeah, yeah, he correlated infostealer logs and figured it out,
ended up identifying the dad and emailing the dad
because the dad appears to work at Royal Jordanian Airlines,
from the infostealer logs.
So he mails the dad saying, hey, I think your son's into something shady,
you know, et cetera, et cetera.
The dad thinks it's a scam and forwards it to the son for tech advice.
And then the son's like, well, I guess I'm cooked now,
decides to reach out to Brian, has a conversation about it,
tells Brian that he's kind of winding down his involvement,
claims he's winding down his involvement with,
I think he used to be involved with the Hellcat ransomware
and then that's the basis for some new, like, ShinyHunters,
Scattered Lapsus$ Hunters ransomware
that he's been AI-modifying.
Anyway, he says that he's trying to get out of it
and he's in touch with European law enforcement about it,
although Krebs is pretty skeptical.
And yeah, kids 15 years old.
So the assessment that this is a bunch of teenagers,
yeah, bang on here.
Yeah, there was a bit of research,
it's not in this week's show notes, it's not in our run sheet,
but there was a bit of research that was doing the rounds this week too,
academic research that suggests that kids grow out of this stuff as well,
which is kind of hopeful.
By the time they're in their 20s, they all go off and get real jobs,
which is interesting, but, you know, see if you can avoid prison in the meantime, guys.
Not so good.
And speaking of that, Wired, helpfully, has published a guide to digital opsec for teens.
This is Lily Hay Newman and JP O'Masson over at Wired.
So, you know, maybe he should have read this opsec guide.
Yeah, exactly.
It's actually a pretty good opsec guide, and it's just a funny juxtaposition.
That's, you know, if these criminals would follow such useful opsec guides,
then they might do a little better, get a little less Krebsing going on.
But it's actually, it's quite a good write-up.
And if you do have a teen in your life that is potentially into cybercrime,
and you'd like to help them avoid going to jail, definitely give them this.
One of the authors of this piece
originally started writing this for their 15-year-old daughter.
So it's kind of geared towards teenagers,
has some quite kind of teenagey sort of language in it.
So yeah, it's a useful resource for you
to share with your family and friends.
Did you forward this to your teenage daughter?
I will have a conversation with her, yes.
I bet you will.
Now, we got some follow-up reporting here on this guy,
Michael Clapsis, who's an Australian guy in Perth,
who, this is the guy who got busted,
spinning up Wi-Fi points using like a Wi-Fi pineapple
on aircraft and in airports.
And he was doing the typical thing, which is saying,
oh, you know, Qantas free Wi-Fi or whatever,
and waiting for people to click through.
And then I guess it was like, you know,
enter your Facebook password or your iCloud password
to get free access, you know, that sort of scam, that sort of thing.
What was really interesting about this case
is that he got busted because one of the cabin crew noticed this, right?
And was savvy enough to know what this guy was doing.
So I'm pretty sure the way it worked is like they got him at the airport when they landed.
He did all sorts of stuff.
He tried to like remote wipe his phone and his laptop.
He then like spied on his boss having meetings with the AFP,
you know, our federal police, our FBI and whatnot.
So yeah, he's gone down.
He was convicted.
But he's been sentenced, man.
He's going to go down for seven years for this.
So what he was doing is he was taking these creds, cred pairs,
and dipping into people's, like, iClouds and whatever,
and stealing mostly nude images of young women,
which is, you know, like just a revolting crime.
But, I mean, it's unusual to see a sentence like this
for a nonviolent crime in Australia.
Yeah, it was seven years, four months, I think.
And he pled guilty as well.
So, I mean, I guess the sentencing probably reflects a little bit of that.
And, yeah, that's pretty serious sentence.
I mean, as you say, it's a gross crime,
and I'm not mad about him going to jail for it.
But yeah, that's a fair amount of time in the clink. Yeah, I'm pretty sure he's, uh, married with kids
too, I think. Um, but, uh, yeah, not good. He's eligible for parole in, uh, 2030, and, um, yeah, just, as I
say, wanted to talk about it because I was really surprised to see a sentence like that from an
Australian court. Now, um, we got a funny one I just wanted to touch on quickly, right, because, uh, Reuters,
uh, famous hackers over at Reuters. Yeah, it's the elite, uh, red team over at Reuters
has apparently hacked the UK government's Office of Budget Responsibility to get the drop on a budget report and distributed it, you know, before its intended release.
So this has turned into a sort of minor scandal over in the U.S. over in the UK, I'm sorry.
And yeah, it looks really like what they did is they just guessed the URL by changing the, you know, the date of the previous report with, you know, today's date to November underscore 2025.
And of course, the way it works is, you know, with corporate earnings results and these sorts of reports, you put the PDF
onto the web server and then you press publish at the time that the results are supposed to go live.
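The "guessing" involved is about as simple as it sounds: if reports are published at date-stamped URLs, anyone who has seen one URL can construct the next. A hypothetical sketch with a made-up URL pattern, not the OBR's real paths:

```python
from datetime import date

def guess_report_url(base: str, publish_date: date) -> str:
    """Build the likely URL for a report by substituting the date
    into the pattern used by previously published reports."""
    return f"{base}/report_{publish_date.strftime('%B_%Y')}.pdf"

# A previously published report reveals the pattern...
known = guess_report_url("https://example.gov/publications", date(2025, 3, 1))
# ...so the next one is guessable before it is formally released
guess = guess_report_url("https://example.gov/publications", date(2025, 11, 1))

print(known)
print(guess)
```

Which is also why an unlinked URL is not a security control: if the file is sitting on the server before publish time, it is effectively published.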
Now, what makes this funny for me personally is I covered Reuters doing exactly the same thing
to a Swedish software company called Intentia.
And I've got the story linked in this week's show notes.
I was working for ZDNet at the time.
That was published on October 29, 2002.
So what's funny is the...
techniques. Yeah, the initial story where it's like,
Reuters got accused of hacking, like that has survived the bit rot on the
internet, but the follow-up where it was determined that it was actually just,
I guess, the URL, that story's lost forever because I think that one didn't make it.
This one made it global, but the other one, I think it just stayed on the ZDNet Australia site,
and I think that one's lost to bit rot. But there you go.
Reuters are elite URL guessers.
And it was interesting at that time as well
because I spoke to lawyers
and it's like I think I still remember one of the quotes
which is if you publish information
you can't get mad that people read it.
If you put it on to,
you publish a PDF to the internet
and someone guesses the URL
like that's not a security control.
You haven't bypassed anything.
It's perfectly legit.
So the people getting cranky with Reuters
need to chill.
I would suggest.
Now you and I spoke a few weeks ago
about the buildings being blown up at the KK Park complex.
This is the scam compound in Myanmar,
and we were like, you were flagging it already at that time of like,
well, maybe this might be a little bit performative
and just kind of like blowing up a couple of empty buildings for the cameras.
And here we have a story from Wired by Matt Burgess that says,
headline is,
the destruction of a notorious Myanmar scam compound appears to have been performative.
So good call, Adam.
But yeah, walk us through this one.
Yeah, and basically the deal is that they are trying to, the government there is trying to present an image that they're doing something about it without actually doing a whole bunch.
There were suggestions that a bunch of the ringleaders and people kind of in charge of the compounds were basically allowed to walk and go off and continue their operations elsewhere.
And the actual KK Park compound itself, only a very small fraction of buildings appear to have been demolished.
And we've seen some updated satellite photos.
and it's just like one corner of the complex,
which has, you know, like, what is it, like 600 buildings
and they've demolished, you know, a couple of hundred or something.
So, like, not a, you know, like not a small amount,
like enough to make the photos look good,
but probably not super effective.
And it may well be that, you know,
maybe these buildings were surplus to requirements anyway
and they were going to, you know, replace them with something else.
So the idea that it would be performative, I think,
I feel a little bit justified in saying that,
but that was, you know, certainly even when we were reporting it,
There was plenty of skepticism about whether they were serious at all about dealing to this.
And it does not seem like they are.
Well, I mean, you know, I've always thought the biggest challenge in dealing with this stuff
is the fact that those scam compounds represent something like, you know,
40% of the GDP of Myanmar, Laos and Cambodia, right?
Like it's just an absolutely insane amount of money.
So when you think scam compound, what is the government in Myanmar doing about scam compounds?
just substitute the words scam compound in your head for money factory.
Are they going to blow up the money factory or are they going to try to seize control of the money factory
or get the people who run the money factory to give them money from the money factory?
So I think that's really the issue is, you know, in countries with small economies like these,
when you have these money factories popping up, the temptation is always going to be too great
to just get in on the action as opposed to trying to stop it.
So yeah.
All right.
Let's move on.
And Microsoft is making some changes that should help prevent people from running cross-site
scripting attacks against its login interface.
Please explain this better than I just did.
So the deal is Microsoft is rolling out an update to the content security policy on some of the
Entra auth process, or the Microsoft Online auth process,
and this will restrict browsers' ability to execute script
that didn't come from Microsoft.
So if there are cross-site scripting vulnerabilities
or other cases where script would be included,
then the content security policy is designed to prevent that.
Really, the funny thing is that, like,
I'm impressed that there wasn't already
a content security policy that said this,
and, you know, that seems like a bit of an oversight.
I mean, that's kind of what I was wondering.
I'm like, hang on, they didn't have a content security policy.
Yeah. I mean, configuring a content security policy is fiddly, right?
And I will give them that.
Like it's the kind of thing that you don't want to screw up.
And then the process of rolling out is a bit fiddly.
And you have to kind of collect logs and make sure,
like you have to put it in like a monitoring mode.
And then collect logs to see what would have been blocked that actually didn't get blocked,
et cetera, et cetera.
So it's like it's fiddly.
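The monitor-then-enforce rollout Adam describes maps to two variants of the same HTTP response header. The directives and origins below are illustrative, not Microsoft's actual policy:

```http
Content-Security-Policy-Report-Only: script-src 'self' https://*.msauth.net; report-uri /csp-reports

Content-Security-Policy: script-src 'self' https://*.msauth.net; object-src 'none'; base-uri 'self'
```

The report-only variant blocks nothing and just reports would-be violations, which is the "collect logs in monitoring mode" step; once the reports come back clean, you switch to the enforcing header and browsers refuse to execute script from any other origin.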
But on the other hand,
we are talking about Microsoft here,
like center of the digital world.
Their auth system you would hope would be CSP-able.
There are some, like, edge cases, or there's other bits where the auth is actually
done in other places. Like, Microsoft has such a complicated ecosystem that there are other bits where
this kind of CSP wouldn't be appropriate. But I think, you know, overall, this is kind of a defense
in depth control. And as part of Microsoft's, you know, their, like, Secure Future Initiative thing where
they are trying to take security more seriously again, like going back and retrofitting, you know,
preventative controls like this to make future screw-ups less of a problem. Like, that's a good thing.
They don't, they didn't have to do this. But, but.
but it's good that they are.
It's just, it shouldn't be necessary.
They should have done this right in the first place.
And, you know, a comprehensive, robust CSP early on
would have made sense, I would have thought.
Yeah, I mean, you know, one of the reasons is, well,
Microsoft scale.
One of the, well, one of the reasons this all came about, right,
is because of that CSRB report into the incident
where, you know, attackers stole keymat, right?
Key material that allowed them to mint tokens.
And if you looked into the report as to why Microsoft didn't actually
stick this keymat into an HSM,
Yeah.
And I think it's nice to see them moving on and actually doing sensible stuff,
even if it's hard.
And I'm guessing something like this, as you point out,
at Microsoft scale is hard.
So I think, yeah, you and I can both agree,
looks like good news.
Let's take it as good news.
Yeah, it is good news.
And once they've done this well in one place,
maybe it becomes policy.
It maybe becomes, like, best practice for how they deploy other systems that need it.
So, you know, always steps in the right
direction, but, you know, it just shouldn't have been rubbish in the first place.
That's right.
Now, moving on to a company we love, Fortinet.
We've got some more Fortinet FortiWeb stuff to talk about.
And this is like there's bugs in old, unsupported versions of their WAF, which we'll get to in a
minute.
But I just want to tell a quick anecdote, which is on show days.
So that's every Wednesday here in Australia.
After the show, you know, I do some other work.
And then I go out with a group of friends.
And we go to a local pub.
We eat some steak and just have a bit of a chat.
There's a rotating lineup of characters, about eight dudes,
and we all turn up and, you know, have a steak.
And last week, I've finished work, you know what I mean?
I've done some stuff.
I go off to have dinner.
We're eating at this pub, and there's a golf tournament on one of those TV screens in the pub.
And everywhere is Fortinet signage.
I guess they sponsor this tournament.
And I'm trying to have a nice relaxing steak.
And Fortinet is just projecting itself into my sightline.
It was a disturbing meal.
but walk us through this one.
Is it more just typical comedy bugs
from Fortinet?
So this is actually the same bug as last time.
So this is the bugs we talked about last week
in their FortiWeb, which they had patched,
and it was in the CISA KEV list.
I think Rapid7 did some work
and discovered that the same bugs
also exist in older, end-of-life versions
of their FortiWeb.
So the products that use their FortiWeb bits.
And yeah, people of course are still running those,
and those people are not going to get patches because there aren't patches,
and they are going to get owned because Forti-everything.
So, you know, I guess if you're running an unsupported Fortinet
anything on the internet, like you've probably already been owned 47 times.
So like this is actually zero difference for you.
But still, it's just, it's any opportunity to, you know,
kick them in the old Forti-nads, the Forti-balls,
kick them in the Forti-anything is always welcome on this show.
You remember when we met those
Fortinet people at AusCERT that time?
That was funny.
When they came up and they're like,
hey, we love the show.
It's like, oh, anyway, and they're like, yeah,
we work at Fortinet, man.
I mean, they were good sports about it,
which I guess you'd have to be, working at Fortinet.
Yeah.
Oh, man.
But I mean, like,
imagine being career Fortinet at this point.
I know.
I mean, it would be different, right?
If they were like saying,
we're spending all of our money,
we're going to get on top of this problem
and spin up the most amazing security program
you've ever heard, you just don't really hear a lot of that coming out of them.
No, I mean, they tend to announce vulnerabilities
and other people's products they've found,
which is legitimate work and good.
But there does seem to be some correlation between when there are Fortinet bugs
and when they then subsequently announce other people's flaws;
it tends to kind of correlate with the media cycle for their own bugs.
I'm not saying they're trying to squash their stories by burying them.
But, you know, the data is a little bit suss on that front.
You never know what's happening inside, though, right?
Like they might be just doing an incredible job on the next generation of products.
And, you know, we're just too distracted by the failures of most of their stuff to even think about it.
Oh, man, it kills me.
It's like Sophos, right?
Sophos is a good example where a bunch of their stuff was getting owned.
And then, oh, it turns out that they were running like counterintelligence operations against Chinese spies.
And, like, you know, had made really seriously good changes to their products.
Like, you know, a good example of that is Sophos has like a cloud management interface for a lot of its products, right?
And it works really well and protects you against a lot of these sort of, you know, perimeter-based attacks because the thing's checking into them.
You know, there's no open ports, basically, right?
So, you know, this is a really cool thing, but often their customers choose not to use it, right?
And it's like, who do you blame at that point?
Is that their fault?
You know, do we really expect them to say, no, you must use this cloud interface, which deals with these issues?
I mean, at some point, I kind of think, because I'm a bit of an absolutist, that maybe they should.
But I guess, you know, I guess I'm not sticking up for Fortinet.
I'm saying sometimes the situation's a little bit more complicated than you realize.
Yeah, yeah, exactly.
Anyway, to the wonderful world of next generation finance now.
And a crypto mixing platform was raided by the European police.
They seized $29 million in Bitcoin.
It looks like this thing over the years had laundered something like $1.5 billion US dollars, you know, over the last nine years or so.
This is a piece by Daryna Antoniuk over at The Record.
Yeah, this was an operation called Cryptomixer that ran out of Switzerland,
and I think Swiss and German authorities cooperated to dismantle it,
seized a bunch of servers, as you say, some Bitcoin proceeds.
They also seized, they said, 12 terabytes of data.
And I'm curious, because if the only thing that you do is run a cryptocurrency mixing service,
12 terabytes is quite a lot of data.
And one hopes that it is the logs of the mixing process,
such that they can reverse it.
And they don't say that in any of the docs from Europol
or any of the reporting.
But like, what else could 12 terabytes of data be worth seizing?
And I do, I hope that that is what they've got
and they've got some way to reverse the mixing
and they can go backwards from there.
And that will be an interesting thing to see whether, you know,
six months from now we see a bunch of, you know,
arrests or raids or whatever else coming out of Europol
based on this data.
But we don't know that.
It's just kind of an interesting data point.
Well, I mean, when you think about,
in the early days or earlier days of stuff like Bitcoin when everybody was using it to buy drugs
off Silk Road, thinking that they were opsec geniuses. And then they realized, like, the creeping
realization kind of came over the next few years of like, oh, the authorities could pretty much
just look up my home address at this point based on the historical blockchain information.
But of course, you know, cops were, you know, too busy going after the sort of bigger suppliers
and stuff to bother with people who are just buying drugs for personal use. But it is
interesting how stuff that seems like a good idea at the time, often, you know, as time progresses,
seem like a less good idea in retrospect. We've also seen, there's another crypto story here.
There's Lazarus apparently has, you know, Lazarus Group or whatever you want to call them.
The North Koreans have been accused of stealing 30 million bucks from some South Korean
crypto exchange. This one's by Jonathan Greig, also at The Record. Yeah, this one's called Upbit,
is the name of the cryptocurrency exchange.
They lost something like $30 million worth of cryptocurrency.
There's a little bit of kind of discussion
as to whether it was an insider-assisted thing
or whether there was some technical issue.
The company itself actually put out a report that said,
we're not saying that this is how we got our money stolen,
but we did make some mistakes in our wallet system
and you could infer information about our key material
from the signatures being placed on some of our transactions
up on the blockchain, and they're kind of suggesting they got actual crypto-hacked by the North Koreans,
but then other reports are it was an insider so you know did the North Koreans develop a quantum
computer, or did they send a guy to apply for a job at that crypto company? Like, which of these two
is more likely, right? Yes, Occam is firmly on the side of probably just an insider, and obviously that's
a modus operandi that we've seen Lazarus do, you know, fake job interviews and blah blah blah. But the
crypto company would like you to think that it was very fancy crypto bugs that let them recover
key material from the blockchain. Both of which are bad, but if you will give your money to
a cryptocurrency exchange, you kind of expect it to end up in North Korea's nuclear weapons
program anyway. So, yeah, everything normal. Now, one thing South Korea does better than anyone
else is data breaches, right? Because I swear every time you see a major data breach in South Korea,
it's like, oh, they got details on literally everyone in the country.
Another one similar to that, there's like an Amazon-like company in South Korea where their contact, the contact details of all of their customers leaked, right?
So that's like, you know, names, addresses, phone numbers and whatnot.
So, I mean, not terribly critical, but still useful data that you don't want floating around out there.
But apparently it potentially affects 65% of the country's population.
Alexander Martin wrote this one up again for their record.
Yeah, I think it also maybe included some amount of order history data,
which could be interesting, I guess, if you're buying things that you wish people not to know about.
But yeah, they, I think initial reporting said that there was something like
four and a half thousand customer accounts where the data were taken.
Yeah, but then that's expanded very rapidly to 65% of the entire population.
So, yeah, North Korea, sorry, South Korea, always does seem to do things pretty large
when it comes to, you know, data breaches and loss.
I mean, that telco hack that we've been following,
that Catalin's been following over on Risky Bulletin, you know,
for the last little while at Korea Telecom, KT,
like that thing is just huge.
And like, they get to the point of having to replace all their SIM cards.
So yeah, they definitely, you know, do it at scale in South Korea, that is for sure.
They're the getting-owned world champions.
Yes.
All right, so we're going to finish off this week's podcast with a discussion of
the, oh, just, I mean, I can't even, right?
So this guy, 50-year-old Booz Allen Hamilton contractor who was working at the NSA was basically
catfishing teenage girls on Reddit and getting them to send him nudes.
Now, that is bad.
Already bad, right?
It's very, very bad.
But what makes it incredibly stupid is he was doing this from his work computer, which, of course,
at a workplace like NSA, you're going to expect if there's nude images,
on there, there's probably some endpoint thing sitting there, monitoring for inappropriate use
of a work computer, and yeah, like it really does look like he was caught by NSA because the FBI
got onto it after a, quote, you know, referral from another federal agency. So, I mean, just what a
moron. This guy deserves everything he gets, not only for being a catfishing creep, but also for being a
massive dumbass. I know. I mean, you have to click through, like, consent banners just to log into these
systems that say your stuff's going to be monitored.
Like maybe don't do anything dumb.
And yet here he is, cruising Reddit, like a chump.
Like, oh, how do you think this was going to go?
And I guess, I mean, he hasn't, I guess we should probably say that he hasn't
pled guilty or anything yet.
But, so, like, I guess this is all accused.
But, geez, man, what were you, what were you thinking?
Yeah.
And what is it with Booz?
Like, because that's where Snowden worked right as well.
Like they, man, Booz Allen Hamilton, like, is a nonstop source of, like, disastrous employees for NSA.
But yeah, Douglas, Jason Martin, you know, good luck to you.
And yes, as you point out, innocent until proven guilty.
But, man, you know, by the time it's popping up in Forbes, that's a lot of smoke.
You know?
Like, if the NSA is fingering you to the FBI, like, probably you're all.
They don't want to wear the bad publicity if they didn't have to.
And here they are wearing it.
Well, mate, that is actually it for the week's news. Thanks a lot for joining us, for joining me to have that discussion. Always great. And we'll do it again next week.
Yeah, thanks so much, Pat. I will see you again next week.
That was Adam Boileau there with a look at the week's security news. Now we're going to hear from this week's sponsor.
And this week's show is brought to you by Nebulock, which is a startup focused on AI-enabled threat hunting. Right. So, you know, you might have attackers trying, you know, partially successfully or even unsuccessfully,
to do various things in your organization, and your detection stack just isn't going to surface
that sort of thing, which is why threat hunting, you know, is a good thing to do. The problem with
threat hunting though is it often, it's a very specialist skill set and involves a lot of time,
right? And a lot of effort to actually go and make it fruitful. So Nebulock was created with the
idea of like, hey, why can't we just have vibe hunting, right? Why can't we just use natural language
to ask questions about data collected in our environment? And you might ask, hey, you know,
are there any remote management tools deployed on any of our computers?
You know, how did they get there and onwards and onwards, right?
So this is the idea.
They've got some great customers already.
I'm working as an advisor with these guys.
And, you know, Damien Lewke, who is our sponsor guest this week, very smart dude.
And one of the things they realized, though, in building out this product is there were no really great sources of telemetry for MacOS, right?
And EDR is a little bit vague in general, and especially on macOS, right, where it's like,
hey, something bad happened or we stopped a bad thing from happening, but it doesn't really
give you that many details.
So he took a look at like Sigma detections for MacOS and realized like they weren't really
built out quite the way they wanted.
So they've done a little bit of work there and they've developed a thing they call Core Sigma.
So I've linked through to Damien's blog post on that and also to the GitHub repo for Core Sigma.
But they've developed basically a whole bunch of detections for MacOS that they then pipe into
elastic and you can go and grab them off GitHub and have a look at them. But yeah, things like,
as I mentioned at the top of the show, like an unsigned kernel extension load in MacOS, probably
the sort of thing you want to know about. And there's a bunch of others as well. And we talk about a
few of them in this interview with Damien Lewke, which starts now. Enjoy. Yeah, I mean,
fundamentally, macOS has been a second-class citizen in endpoint security for a really long time,
despite the fact that it is increasingly popular in enterprise environments.
So beyond the fact that endpoint vendors have support for macOS,
in the open source community,
and when you think about particularly Sigma
and being able to write detections as code,
there's really good coverage for Windows and some coverage for Linux,
but we don't really have a lot of coverage for Mac
just because there isn't a lot of signal mapping, right?
We've got two particular signal maps as it stands.
And the challenge that we saw was like, hey, wait a minute,
if we want to be able to understand at a more granular level
what's happening across all macOS EDR suites,
we need a way to figure out how to bridge that gap in telemetry.
So basically, like, as it stood,
there was an 85-ish percent gap across MITRE tactics for Sigma,
and we built CoreSigma.
We built a new framework.
So what do you mean by like signal mapping, right?
Like, just explain that for people who are not au fait.
Yeah.
Basically like the endpoint has a bunch of different events.
These events need to be tagged.
And what you need to do is basically take a bunch of different pieces of data,
map them to a consistent format,
so that you can ultimately translate this stream of data,
what's happening within an environment,
into a detection language.
So we saw that there was this gap in signal mapping
from what was happening on macOS
to how to interpret it as a Sigma rule.
So we created a framework to help with that.
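The mapping Damien describes, taking raw vendor-specific endpoint events and normalising their field names so one detection rule can match across sources, can be sketched like this. The field names on both sides are invented for illustration (a real mapping would target something like Elastic Common Schema) and are not taken from coreSigma itself:

```python
# Sketch of "signal mapping": translating a vendor-specific macOS endpoint
# event into a normalised schema that a Sigma backend can match against.
# Field names here are illustrative assumptions, not coreSigma's own.
RAW_TO_NORMALISED = {
    "proc_path": "process.executable",
    "proc_args": "process.command_line",
    "signing_id": "process.code_signature.signing_id",
}

def normalise(raw_event: dict) -> dict:
    """Map known raw fields onto the consistent schema; drop the rest."""
    return {
        norm_key: raw_event[raw_key]
        for raw_key, norm_key in RAW_TO_NORMALISED.items()
        if raw_key in raw_event
    }

event = {"proc_path": "/usr/bin/osascript", "proc_args": "-e 'display dialog ...'"}
print(normalise(event))
```

Once every EDR's events pass through a table like this, one rule written against `process.executable` matches regardless of which product emitted the event, which is the point of the framework.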
I mean, it's almost like you're trying to a degree
like normalize these signals coming from MacOS
so that when you shove them into like a data store,
it's like very much the same as the data sitting alongside it from Windows boxes.
I mean, is that more or less, you know, the vibe here?
Yeah, absolutely. But importantly, like, to not keep that proprietary or shelved away,
right, we want to be able to provide that as something extensible. Probably 99.99% of all
enterprise security environments are comprised of not just Windows machines, but have some
sort of Mac presence. We want to give security teams the ability to normalize that data,
right, to have the same level of visibility. So that was the inspiration behind it. And of course,
it allows us to do all the fun hunting things that we do on the back end too.
Now, EDR platform's a little bit opaque when it comes to explaining why they're alerting
on certain things on macOS, right?
Because it's almost like I'm getting the impression that it's like 20 years ago when, you know,
the big leader in the network intrusion detection space back then was, what was it,
Internet Security Systems, right, with their network-based sensor.
And one of the funny things is they'd give you an alert, but they wouldn't
give you like a packet capture, they wouldn't actually explain the rule and how it alerted
and why. You know, is EDR on Mac a little bit that way as well? Yes, is the short
answer. I think EDR is really good at giving you a good visual aid for what might have
happened on a Mac machine, but it's still, we've remained in this like black box ecosystem
still, right? Like that openness, that visibility, like beyond, you know, this is critical and
therefore it's bad. Why? Because I say so, we haven't gotten much better on the EDR side. So we want to
be able to kind of crack open that black box and have more complete visibility, actually understand
what happened. And as we know, EDR isn't going to catch everything. So what about all the other
pesky things that get past that black box? Well, right now teams don't really have that visibility
or struggle with that visibility on MacOS. So we want to give that visibility back.
Yeah. Now, you've given a few examples. You say that you've mapped out.
like 50 production ready sigma rules.
You've got three here which are like all timers, like absolute bangers,
but are these not standard with like existing sigma?
Like that's before I list them, these aren't standard?
No.
We didn't find any in terms of like the, we'll shout out the unsigned kernel extension load.
I mean, no.
Yeah, it's a big deal.
That's not a standard load.
I mean, that is a big deal, right?
So number one here in your list of three out of 50 is an unsigned kernel extension load,
which you're saying is a critical.
And yeah, I mean, hello.
I mean, it's very unusual to use kernel extensions in macOS at all.
But the idea of using an unsigned kernel extension, and even if you want to use a signed kernel extension,
you know, you've got to jump through a bunch of hoops to actually get that to work.
So I imagine that by the time an unsigned kernel extension is popping up on a box,
then things have gone really sideways.
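A Sigma rule for that kind of event has roughly the following shape. This is a hand-written illustration of what such a rule looks like, not a rule copied from the coreSigma repo, and the logsource category and field names are assumptions:

```yaml
title: Unsigned Kernel Extension Load (illustrative)
status: experimental
logsource:
    product: macos
    category: kernel_extension    # assumed category name
detection:
    selection:
        event_type: kext_load     # assumed field names, not coreSigma's
        signature_status: unsigned
    condition: selection
level: critical
```

The value of the signal-mapping work is that a rule like this can be written once against normalised field names instead of once per EDR product.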
Yeah.
But I mean, the reality is obviously like Apple's very strong
when it comes to operating system security,
but they're not perfect.
And the bigger problem that we're seeing really is like
beyond unsigned kernel extensions,
MacOS is increasingly getting targeted by adversaries.
And we know like the quickest way to build a 13 foot ladder
is a 12 foot wall.
To your point, like, yes, a lot of things have gone bad.
But like, you know, I could be running on a match.
macOS system, I get drawn to a watering whole site. I think I'm downloading, you know,
the macOS image for team viewer because I need to access somebody else's machine because I'm
working in a help desk. And before I know it, it's got an unsigned kernel extension and I've
got malware running on my box. Yeah. Yep. So another one you've got here, number two, is the SIGKILLs
sent to security tools. Again, why is this not already there in Sigma? You know, beats me, Patrick.
I think again, it really is because Windows telemetry is really open.
It's really straightforward to be able to focus your time on that.
So I think it's just, again, like visibility predicates actionability.
So all these people have just been focusing on a particular telemetry source that's really open, really extensible.
Like everybody knows Windows.
So it's really easy to write detections for it.
I think the other challenge was signal mapping, right?
Like, we weren't able to necessarily get the same kind of visibility on macOS.
So even if we wanted to write a SIG kill detection, it wasn't necessarily obvious how to do that.
And the other question is, like, since it's open source and we're doing it out of the goodness of our heart, was the juice worth the squeeze?
And was it?
I mean, it's true, I mean, it's been really exciting to see, like, the feedback that we've gotten, right? Our whole approach in terms of sharing all of this is to provide a framework so that every organization has the same visibility into their own
macOS environment, right? Like, it doesn't need to just belong to Nebulock. Like, everyone should have that,
right. Yeah. And the third one that you've highlighted, which is just a, you know, a handy one to
have stored somewhere else, right, is when macOS itself either quarantines a file or detects
something via XProtect. You know, it's great that macOS will do that and shut it down, but it's not
exactly chatty about it, right? It's not really set up to integrate with SIEMs and things like that,
so you know there's been a malware detection on a machine.
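One plausible way to make those events chattier, sketched here under assumptions: XProtect activity lands in the macOS unified log, which `log show` can export as JSON lines. The subsystem names and sample records below are invented for the example, not captured from a real machine or taken from the coreSigma repo:

```python
import json

# Illustrative records in the shape a unified-log JSON export uses
# (timestamp / subsystem / eventMessage); the subsystem and message
# strings here are made up for the example.
SAMPLE_NDJSON = """\
{"timestamp": "2025-11-28 10:01:02", "subsystem": "com.apple.XProtectFramework.PluginAPI", "eventMessage": "XProtect scan matched rule MACOS_ADLOAD"}
{"timestamp": "2025-11-28 10:05:00", "subsystem": "com.apple.launchservices", "eventMessage": "routine app launch"}
"""

def extract_xprotect_events(ndjson_text: str) -> list:
    """Keep only XProtect-looking records, flattened for SIEM shipping."""
    events = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if "xprotect" in record.get("subsystem", "").lower():
            events.append({
                "ts": record["timestamp"],
                "source": "macos.xprotect",
                "message": record["eventMessage"],
            })
    return events

for event in extract_xprotect_events(SAMPLE_NDJSON):
    print(event["ts"], event["message"])
```

The output of a filter like this is what you would ship to Elastic, so a detection on a machine stops being a silent local event and starts being a searchable record.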
So you can pipe that out into, all of this goes into elastic, right?
Yes.
Yeah, we built everything with an elastic back end.
We have like a breakdown of our architecture.
So if people want to redo the experiment or run this themselves, you know, we're more than happy to share how we did it.
Yeah.
So I'm guessing the rationale.
I see too that you've put this in as a pull request to mainline Sigma and they're like,
well, it doesn't belong here and you're still trying to figure out like quite where you might
give it to them or you're just going to have to run it as standalone.
But I'm guessing the reason you did this is being kind of a threat hunting platform.
I mean, threat hunting relies on telemetry, right?
So I'm guessing you're just trying to fill out that sort of completeness of data coming
from the max systems in the environments you're in.
Is that about the long and short of it?
Yeah, I mean, you're completely correct, right?
Hunting is entirely predicated on visibility, right?
So I actually liked your, you know, the third example of the detection.
that we used, right, being able to show you all quarantining.
You want to know everything that's going on so that you can hunt effectively.
So for us as a hunting platform, being able to first map and understand what's actually
happening inside a Mac OS environment means that we can get the same kind of visibility
across EDR platforms and across operating system platforms, which is key.
I want to be able to put my money where my mouth is and say, hey, we can do visibility
across all these operating systems.
we can actually see telemetry.
And if you don't have that visibility,
it's really hard to do hunting.
Yeah, so have you deployed this stuff in anger yet?
Is it up and running?
Yep.
We, I mean, we're open.
We've got a lot of macOS coverage,
I'd say a bit beyond what we've got going on in Core Sigma.
But we also want everybody to have access to this kind of capability.
Yeah.
And did you find that once you turn this stuff on an environment,
it was like, you know,
were you getting more alerts than you expected?
or was it like less chatty?
Like what was the vibe there?
In terms of chattiness,
it wasn't super FP-prone, right?
The way that we think about hunting is like,
you want many different sources of signal, right?
You want to understand several events occurring in real time.
I think one of the classic pitfalls
when people think about threat hunting
is alert detection, like alert hunt.
For us, we use these detections as a series of triggers,
right, to understand, okay,
if A, then B, then C, like, that goes from a medium to a critical, right?
Or if we can see these kind of pre-indications, it just allows us to be a bit more proactive
instead of being like, oh, snap, you know, eight things are fired and now we have to take action.
So we've seen a lot of value in it.
But of course, there's always a bit of fine-tuning that you've got to do too.
Yeah, of course.
I would never take, like, a detection and just, like, chuck it into prod.
You know, you always want to do good testing, hygiene, stuff like that.
I guess what I'm trying to understand is, so, you know, obviously,
the top three that we just talked about are really good detections, but, you know,
obviously in those 50 there's going to be stuff that's below that threshold.
You know, how has that been useful in, you know, doing some hunts in your customers' environments?
A few different ways. First way is of the 50 detections that we've published, right, they've got
varying severity levels. So a lot of times, like, you might get some lows. And it's just
valuable information, right? Maybe you've got a risky user who downloads one too many
files, or somebody, you know, is using pbpaste to copy and paste commands when they shouldn't.
Yeah.
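[A low-severity hunting signal like the pbpaste one mentioned above could be expressed as a Sigma rule roughly like this. This is a sketch, not one of Nebulock's actual coreSigma rules; the log source and field names assume a typical macOS process-creation event source:]

```yaml
title: Clipboard Read Via pbpaste
status: experimental
description: Detects invocation of the macOS pbpaste utility, which can indicate
    commands or secrets being pulled from the clipboard. Low severity; intended
    as a hunting signal rather than an alert.
logsource:
    product: macos
    category: process_creation
detection:
    selection:
        Image|endswith: '/usr/bin/pbpaste'
    condition: selection
falsepositives:
    - Legitimate clipboard automation in developer workflows
level: low
```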
But then you go kind of one step further on the customer side, and where the macOS
visibility has been really helpful is with macOS-specific malware and then
a lot of remote access tools, right?
So folks using a combination of capabilities to do, like, a drive-by download, write to disk,
and then try and open up a C2 channel,
as an example.
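[A multi-stage chain like that is the kind of thing Sigma's correlation meta-rules are designed to express: several low-severity base detections that escalate when they fire in order on the same host. A rough sketch, with hypothetical base rule names and assuming a backend that supports Sigma correlations:]

```yaml
title: Drive-By Download Followed By Execution And Beacon
status: experimental
description: Escalates to critical when a browser download, execution of the
    downloaded binary, and an outbound beacon are seen in order on one host.
correlation:
    type: temporal_ordered
    rules:
        - browser_file_download        # hypothetical base rule
        - downloaded_binary_executed   # hypothetical base rule
        - outbound_c2_beacon           # hypothetical base rule
    group-by:
        - host
    timespan: 30m
level: critical
```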
Yeah.
Yeah.
I mean, that's going to be handy, right, when you see, and you will see that, thanks to these rules.
You will.
Previously, you would not even have that data to hunt on, right?
I mean, that's kind of the whole, that's the point of what we're talking about.
Yeah, right. And of course, there's always, like, the hygiene perspective too
whenever it comes to writing and deploying rules, right?
You always want to run them against production data, and it's really helpful for us
to also understand, okay, when you're building in your framework and you have greater
visibility, there's also the question of, like, okay, how is this going to
perform in real time? So we also had to go through a series of steps of, like, okay, we've got all these
ways to generate signal. Let's actually run this against production data and, like, see how valuable
this rule is, how chatty it is, does it require fine-tuning? And that's an iterative process.
Some fine-tuning was required on a few. But, you know, now we're able to say, hand on heart,
you know, our macOS visibility is commensurate with our Windows
visibility, which is really important for a hunting team.
Now, before we wrap it up, I'm just really curious here about how this came together,
because I'm just going to take a wild guess, because I've worked with so many startups where
they wind up throwing together like a set of features for Mac and paying a lot of attention
to Mac because there's that one big customer who comes along and says, hey, we'd love to buy this,
but we need support for Mac. Is that what happened here?
No, actually, it's funny. It was more, no, it was based around the fact that, like,
most of our customers use Mac, right?
Like, gone are the days of the all-Windows enterprise,
although they still exist.
Or the one really weird, modern, you know, Silicon Valley company
that's like 80% Mac or whatever.
Exactly.
For us, no, it was just the fact that there was Mac everywhere.
And we were like, there's got to be a better way.
We have to understand what's happening.
So very much borne out of need.
All right.
All right.
Well, Damien Lewke, thank you very much for joining us to talk
through all of the work you're doing in, yeah,
extending coreSigma, well, extending Sigma out to do better telemetry on macOS.
I'm sure that's going to be interesting to a number of our listeners.
I'll drop a link to the blog post talking about that into this week's show notes.
Thanks again.
Thank you, Pat.
That was Damien Lewke of Nebulock there.
And if you want to get into some vibe hunting, you can just look up Nebulock.
N-E-B-U-L-O-C-K.
Google for Nebulock security and you are going to find them.
But that is it for this week's show.
hope you enjoyed it. I'll be back next week with more security news and analysis, but until
then, I've been Patrick Gray. Thanks for listening.
