Risky Business - Risky Business #808 -- Insane megabug in Entra left all tenants exposed
Episode Date: September 24, 2025

On this week’s show Patrick Gray and special guest Rob Joyce discuss the week’s cybersecurity news, including:

Secret Service raids a SIM farm in New York
MI6 launches a dark web portal
Are the 2023 Scattered Spider kids finally getting their comeuppance?
Production halt continues for Jaguar Land Rover
GitHub tightens its security after Shai-Hulud worm

This week’s episode is sponsored by Sublime Security. In this week’s sponsor interview, Sublime founder and CEO Josh Kamdjou joins host Patrick Gray to chat about the pros and cons of using agentic AI in an email security platform.

This episode is also available on YouTube.

Show notes

U.S. Secret Service disrupts telecom network that threatened NYC during U.N. General Assembly
MI6 launches darkweb portal to recruit foreign spies | The Record from Recorded Future News
One Token to rule them all - obtaining Global Admin in every Entra ID tenant via Actor tokens | dirkjanm.io
Github npm changes
Flights across Europe delayed after cyberattack targets third-party vendor | Cybersecurity Dive
Major European airports work to restore services after cyberattack on check-in systems | The Record from Recorded Future News
When “Goodbye” isn’t the end: Scattered LAPSUS$ Hunters hack on | DataBreaches.Net
UK arrests 2 more alleged Scattered Spider hackers over London transit system breach | Cybersecurity Dive
Alleged Scattered Spider member turns self in to Las Vegas police | The Record from Recorded Future News
Las Vegas police arrest minor accused of high-profile 2023 casino attacks | CyberScoop
DOJ: Scattered Spider took $115 million in ransoms, breached a US court system | The Record from Recorded Future News
vx-underground on X: "Scattered Spider ransoms company for 964BTC - wtf_thats_alot.jpeg - Document says "Cost of BTC at time was $36M" - $36M / 964BTC = $37.5K - BTC value was $37.5K in November, 2023 - Google "Ransomware, November, 2023" - omfg.exe https://t.co/uv2EzbL5HT" | X
JLR ‘cyber shockwave ripping through UK industry’ as supplier share price plummets by 55% | The Record from Recorded Future News
Jaguar Land Rover to extend production pause into October following cyberattack | Cybersecurity Dive
New plan would give Congress another 18 months to revisit Section 702 surveillance powers | The Record from Recorded Future News
AI-powered vulnerability detection will make things worse, not better, former US cyber official warns | Cybersecurity Dive
Transcript
Hey everyone and welcome to risky business.
My name's Patrick Gray.
We've got a great show for you this week.
Adam Boileau is on break.
So we've got a guest co-host, Mr. Rob Joyce, formerly of the NSA,
but these days he is an advisor to various companies.
So that's going to be a lot of fun because we've got so much good stuff to talk about this week.
And then we'll be hearing from this week's sponsor after that.
And this week we're chatting with Josh Kamdjou, who is the
co-founder and CEO of Sublime Security, which makes an email security platform, which
is very AI heavy.
And indeed, he'll be joining us a little bit later on to talk about some of the nuance there
because it's not like you can just throw every single inbound email into a large language
model for analysis.
So we'll be talking to him about some of the science of working out when you apply AI versus
when you don't.
That is an interesting chat and it is coming up later.
But first off, of course, it is time.
to chat with Mr. Rob Joyce about the week's news.
First of all, Rob, thank you very much for filling in for Adam.
We appreciate it.
Hey, Pat.
It's great to be on again.
And we're going to start this week with like, I mean, what better week to have a former NSA guy on the show?
Like the morning of recording, morning for us at least, I wake up to the news that this extremely strange, like covert comms network got rolled up in New York.
It involved something like 300 SIM servers, 100,000 SIM cards.
There's like a foreign nation involved doing covert comms with organized crime.
Please walk us through this story.
Tell us what on earth is going on here.
Yeah, so it's a good one, Pat, definitely.
So the Secret Service came out and announced it had dismantled a sprawling sim farm, right?
This technical capability allowed them to
access cellular networks with tons and tons of unique SIMs. And it was spread across abandoned
New York City apartments. And if you looked at the pictures of these things, man, they had some
exquisite cable management and racking capability. It was well maintained, well
architected. So the equipment, the Secret Service said, could send 30 million text messages
a minute. So, you know, you've got to wonder what's this for.
A few things also came out in the discussion.
They said it's capable of encrypted comms, encrypted messaging.
So right there may be a piece of your answer.
You know, you can use these SIM cards as almost disposable one-time pads
where you make a message and that communications channel never gets used ever, ever again.
So that's a really good kind of anti-law-enforcement or surveillance
countermeasure. They also were using it, you know, in what appeared to be threatening ways.
The Secret Service talked about menacing messages sent to U.S. government officials.
And, you know, everybody also is fixated on the topic that this is such a high-capacity entity
with multiple locations spread around the networks of New York that they think that it could
do things like DDoS attacks that might block communications for EMS
and police dispatch. So it's just a really, really surprising story. But I think this is only
the first chapter of what we're going to learn about it. So I want to quote from a CBS news article
here. It says early analysis shows the network was used for communication between foreign
governments and individuals known to U.S. law enforcement, including members of known
organized crime gangs, drug cartels and human trafficking rings, according to multiple
officials briefed on the investigation.
The U.S. Secret Service said it is combing through the more than 100,000 SIM cards
in an ongoing exhaustive forensic analysis.
Now, my question to you is if you've built this thing for covert comms and spy stuff,
why do you then use it to send threatening messages to elected U.S. representatives?
That seems insane.
Like, that just seems like, why would you do that?
Yeah.
Some people don't have the discipline they really need in nation-state espionage
and nation-state activities.
But I think, you know, pure conjecture,
we may find this was like a contracted effort, right,
where somebody was paid to do it on the behalf of another entity
and they used it, but they didn't know what was moonlighting
on top of that network, right?
And so that'll be the downfall.
There's like those people who get paid 50 bucks a month to allow, you know,
home proxy networks
to use their computers or their connections and stuff.
Same sort of thing, bigger scale.
Yeah, well, I think they knew what it was.
You know, you don't have these racks full of antennas
without knowing that it's a pretty powerful capability.
I think what they did was they partitioned off
and they used some of it for the activities of the funding entity
and then they used others of it to make a little cash in the evenings.
Yeah, right.
So this was truly a multi-purpose
bit of infrastructure.
Yes.
But, and I also thought it was interesting.
So Matt McCool, who is my favorite-named Secret Service agent, right, heading up New York.
Mr. McCool is pretty cool.
Yeah, I'll give you that.
Yes.
He said that they got multiple locations in New York and even in New Jersey.
But he intimated that there were others in, you know, other cities in the U.S.
And I will tell you, Pat, I think those SIM cards are going to be the downfall of this operation
because it's really hard to get SIM cards in large quantities.
And, you know, somewhere you either have to become an MVNO or you've got to pay somebody
who has an issuing capability for SIM cards.
And, you know, they have to be activated onto these networks.
And so they'll be able to follow the money and follow the accounts.
And they'll also be able to tie those SIM cards across,
you know, multiple of these devices, whether they're operating around New York or in other
cities. Who knows, we may even find these in other international countries. But I think that
first round, I think the SIM cards are going to be the downfall and the unraveling of this.
Well, I see what you're getting at, right? Which is the fact that they had 100,000 of them
just in New York indicates they were getting them from somewhere and someone's about to be in,
look, there's no other way to say it, someone's going to be in deep shit.
Yeah, and I think they got them from multiple places, right,
but there will be a finite number of threads to pull.
And, you know, big data, this is the kind of thing that's
just going to be really, really lucrative for the investigation.
Now, before this news broke, we already had a story in this week's run sheet,
down towards the end, right, which we've pulled up towards the front
because it's now very relevant to this discussion,
which is MI6 in the UK, I guess that's the UK's CIA.
They have launched a Tor Onion service to allow possible collaborators
to connect to them and share information.
And they've released a YouTube video about this as well saying,
look, in your country, it might even be risky for you to view our videos on YouTube
to look at the how-to guide of how to connect to this Tor service.
First of all, there's some irony there in the fact that they're explaining that it's dangerous to access their YouTube on YouTube.
So that's the first little thing there.
Second of all, I mean, I would have thought in, you know, heavily contested sort of network environments in some of these countries,
very little is going to stand out more than attempting to connect to a Tor Onion service.
I really question the wisdom here.
And this is me as just as a civilian with no deep expertise in covert comms and whatever.
But I just think this just seems insane to me, Rob.
And I just, seeing as you're here, I wanted to get your opinion on it.
Yeah, well, CIA did a similar tip site several years ago.
Didn't end well from what I remember.
No, no, they had covert comms that didn't go well.
So they had websites that, you know, were really, if you kind of looked at the tradecraft used,
were not well thought out, well architected.
I think they didn't consult the right expertise.
But they did set up a Tor site for tips several years ago.
And I couldn't imagine that one of our closest partners in the UK
wouldn't have asked them about whether it was worth it or not.
Because you have to think about other things.
There certainly is the safety and security of your assets, right?
But you also, by announcing it publicly,
you're going to get everything from whack jobs connecting
to give you their imaginary leads, to people sowing real disinformation who want to, you know,
try to slide in that double agent or misdirect you to something that they want you to
see and focus your resources on, right?
So there's a lot of reasons that this is, you know, challenging, but at the same time, I think
the fact that they're doing it is an acknowledgement that, you know, you've got to have multiple
ways to get connected to people who want to help your cause.
And whether it's walk-ins to an embassy, which is really dangerous or trying to find the
right person in your country, where, you know, the home team has the huge advantage.
In many cases, the people can't travel, and that is one of the safer ways to get away
and try to make a meet.
So the virtual meet certainly is an option.
It'll remain to be seen, I think, how successful
and how useful it is, but I think they're going to try a wide and diverse set of ways to allow
their sources to get in contact.
It's so funny, just every time I talk to someone from the intelligence community,
there are so many aspects of intelligence work that are so similar to journalism.
Like when you were describing the whack jobs who flood your tip line and whatever,
it's like, I've been there, dude.
Like, I have everybody who's having some sort of mental health crisis,
with cyber characteristics, like they will reach out to us and tell us about how they're being
tracked by the, you know, the intelligence world and the satellites are getting them.
And, you know, so I, yeah, been there, been there.
And even when it comes down to interviews and sort of trying to unpack people's agendas,
like it is interview techniques, it's, it's, it's amazing.
I've had some funny conversations about that and you just, uh, made me remember some of them.
Definitely some strong parallels, Pat.
Yeah, 100%.
So let's move on to some more
bread and butter infosec news now. And of course, the big research news that broke over the last
week was from Dirk-jan Mollema, who we've had on the show before. This guy is an absolute expert
in all things, you know, Azure AD, which is now Entra ID, I'm sorry. And he accidentally
stumbled onto like the holy grail of Entra ID attacks. I'll explain it badly because
it's hard to explain well. But essentially there are these, like, service tokens
you can generate in your Entra ID tenant
to perform various, you know, privileged tasks.
And he was messing around generating some of these tokens
and then just thought, well, I wonder what happens
if I change like the tenant ID on one of these tokens
and try to run it in another tenant
or use it to authorize something in another tenant.
And it just worked because for some reason,
Microsoft didn't think, well, you know,
whoever developed this didn't think to validate
that the token was, you know, for a specific tenant.
So that basically meant that you just needed to generate a token in your tenant,
change the tenant ID, and you could just use it anywhere.
So this meant, yeah, full compromise of every single Entra ID tenant on the planet.
This is like a 10 out of 10 megabug.
Obviously, Microsoft patched it immediately when he reported it to them.
They've assigned it a CVE.
I don't see any mention of a bounty payment for this,
but you would think if there is one, it's going to be huge.
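To make the class of bug concrete, here's a minimal Python sketch of the tenant check that was evidently missing, assuming a JWT-style token that carries a tenant ID claim (Entra tokens use a "tid" claim). This is illustrative only, not Microsoft's actual validation code, and it deliberately skips signature, expiry and audience checks to keep the tenant comparison in focus.

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims segment of a JWT-style token (no signature check here)."""
    claims_b64 = token.split(".")[1]
    claims_b64 += "=" * (-len(claims_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(claims_b64))

def authorize_request(token: str, tenant_being_accessed: str) -> bool:
    """Illustrative resource-side check. A real validator must also verify the
    signature, expiry and audience; the point here is the tenant comparison.
    Skip it and a token minted in tenant A gets honoured in tenant B, which is
    the class of flaw being described."""
    claims = decode_jwt_claims(token)
    return claims.get("tid") == tenant_being_accessed
```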
Oh, and we need to mention, too, before I get your reaction on this, I think you've done a little work with Microsoft, so there's a bit of a conflict there, so we should just mention that in the interests of transparency, but what was your reaction to this? Because, like, wow.
Yeah, so Pat, it certainly is a God mode token, right? So Microsoft really has these, they had these actor tokens that were meant for Microsoft's internal services to impersonate the accounts across tenants to do the background tasks.
You know, you could launch a service that would execute in yours.
And you hit the nail on the head: the flaw was that those tokens didn't validate if the tenant was changed or altered.
It meant you could issue your own for anywhere.
And the other part you didn't mention was it really bypassed logging.
Yeah.
Because it said, you know, an admin did this.
And the assumption was it was your admin and your tenant, not a fan of,
admin from another tenant. So I think the response he got and the speed at which it was patched
was indicative of how powerful it was. So I do think the upside is, you know, the bug reporting
channels work. This is how it's supposed to work. He found something before it was used at scale,
but it does make me think back to, you know, the study we did on the Cyber Safety Review Board
where, you know, there was another token problem, where a series of kind of cracks and seams in authentication let people mint tokens that allowed you, in the O365 world, to do some really impressive things with accounts that were not yours.
So we continue to see, kind of, you know, that authentication and identity is hard, and it's harder when you have legacy
architectures, right? And I've watched Microsoft since that CSRB report and seen how much they've
cleaned up in the way authentication and, you know, the tenants are secured, the way things are
logged, and especially the amount of oversight and overwatch that has been added. But it's still
clear that, you know, there's still those cracks and seams that will bring you to your knees.
and you've got to, in the cloud world, you've got to trust your provider because they have
God mode under the hood. And the question is, can somebody else get to that God mode without knowing
or properly authenticating? And that's what you've got to protect against.
Now, look, I hear you. And it's great that they've lifted their game. But I think, you know,
the criticism that I have of Microsoft is that they let too much technical debt accumulate.
My criticism is that they didn't do enough until now.
So it's great that they're doing stuff now, but, you know, they've sort of got this massive technical debt problem that they've got to tackle now, and it's kind of too late, right, even if they are taking it seriously. I guess that's what I would say: awesome that you came to the party, now it's a little bit late. I don't know though, look, I think bugs like this ultimately, they happen everywhere, they can happen everywhere, but still, like, wow, it's an absolute clanger. Now, look, speaking of recent issues, I spoke
with Adam last week about this npm supply chain worm, the Shai-Hulud worm, which is very cool.
GitHub has taken some steps now.
Well, they are taking steps.
They are moving towards a world in which you will need to do some sort of Fido-based
multi-factor authentication event in order to publish an update to a package in NPM, which I think
is good, but it's going to really annoy developers.
And then they're working towards a trusted publishing system where package repositories
trust specific workflows or services to publish code using short-lived OpenID Connect identity
tokens instead of, like, API keys that last forever. These both seem like sensible changes.
But I guess, yeah, once you've plumbed through an automated publishing workflow, I mean,
you're kind of going to wind up back where we started, aren't you?
If everybody's using some sort of pre-approved workflow.
So, like, once the attacker's on that machine, I don't know.
I'm guessing they know more about this than me.
And if they think this is going to help, that's great.
But it did seem a bit strange to me.
It will help, Pat.
So, you know, the trusted publishing plan they have is intended to keep me from scraping
your token and then using it somewhere else to publish in your identity.
So if they're shorter-lived, the window of exposure is cut down.
The idea that you're going to use FIDO tokens means issuing those authentications
is going to be harder and tied to you.
But in the end, somebody has to have the authority to do these publishing activities.
And I think attackers are going to go after you and your identity to try to get into your pipeline.
Yes.
But what they're doing is they're raising the bar.
They're making it harder.
You know, multi-factor hardware tokens aren't invulnerable either, but they raise the bar a lot
for the attackers and then shortening the window that the generated tokens are valid also closes
down opportunities. So I do think this is a step in the right direction. Yeah, 100%. I do too.
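As a rough illustration of why short-lived, workflow-bound tokens shrink the exposure window compared with long-lived API keys, here's a hedged Python sketch of the kind of check a registry could apply. The claim names (repository, workflow) are generic placeholders for illustration, not npm's or GitHub's actual OIDC schema.

```python
import time

def accept_publish_token(claims: dict, expected_repo: str, expected_workflow: str,
                         max_lifetime_seconds: int = 900) -> bool:
    """Hypothetical registry-side check on a short-lived, workflow-bound CI token.
    Claim names here are generic placeholders, not npm's or GitHub's schema."""
    now = time.time()
    if not (claims.get("iat", 0) <= now < claims.get("exp", 0)):
        return False  # expired or not yet valid: a stolen token goes stale fast
    if claims["exp"] - claims["iat"] > max_lifetime_seconds:
        return False  # refuse long-lived credentials outright
    # Bind the token to one repository and one publishing workflow so it
    # cannot be replayed from a different pipeline or a developer's laptop.
    return (claims.get("repository") == expected_repo
            and claims.get("workflow") == expected_workflow)
```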
I just wonder, though, like, if, you know, you look at the way the Shai-Hulud worm worked,
which is it would edit people's packages on their
machine and then forcibly publish them. I mean, this would mean that you would just edit the
packages on the machine and wait for the user to publish them, like, authenticated, right? So it's going
to slow things down. It won't stop. And that doesn't mean it's a bad thing. I think it's a good
thing. But we ain't done here, I guess, is my point. Definitely. Definitely. Now, we saw a fairly
significant ransomware attack affecting flights in Europe. They were delayed because it was this company
called Collins Aerospace. They provide check-in and boarding technology for a whole bunch of
airlines. And yeah, this caused, you know, massive disruptions at a bunch of airports in Europe,
including Heathrow. I believe recovery, I don't even know. I had a quick look around this morning.
I couldn't see whether everything's back to normal now, whether the recovery efforts are still
ongoing. It's not, yet. As of today, Heathrow said most of their flights were running, but Brussels
still canceled flights even today.
They lost half their flights yesterday.
And Dublin and Berlin were still slowed, right?
They didn't give metrics.
But it's, you know, and anytime a flight into or out of one airport is delayed, it cascades, right?
Because those planes probably don't just fly London to Brussels, right?
They go elsewhere.
And so if they can't get through London, they're not getting to other places.
So the cascade knock-on effects are massive. Yeah, a few months ago I was flying domestically
in Australia, not to Sydney, but because there'd been bad weather in Sydney, I mean, it just caused,
like, cascading problems through the whole network, right? So yeah, 100 percent. I guess what's
interesting here, though, is we've seen some big game ransomware lately for the first time in a while,
you know, when you think about Jaguar Land Rover, Marks and Spencer, now this Collins Aerospace.
Now, there are rumors at the moment that
this Collins Aerospace thing may have been Scattered Spider or Scattered Spider adjacent.
That was the case for Marks and Spencer.
It looks like it was the case for Jaguar Land Rover.
So what I find really interesting here is that we saw this massive initiative
from law enforcement and intelligence agencies to target Eastern European, you know, Russian, essentially,
ransomware as a service organizations and all of the affiliates and whatever
and go for a disruption effort there.
And that actually appears to have done
something, and now we're seeing big game ransomware and it's Western kids, it's British and
American teens. They're all going to get caught, we're going to talk about that in just a moment, but
it's bizarre, isn't it, that we've seen this shift in the location, the geography, of the attackers
from these places where law enforcement can't work, because they're in hostile jurisdictions,
into areas where law enforcement can work. It's just such a bizarre outcome. Yeah.
And the other part is, you know, you've pushed the age of the attackers down, right?
The folks that are doing the Scattered Spider stuff are often relying on juveniles to take the action,
or juveniles are in the driver's seat.
And that also changes some of the tools that law enforcement has in the UK and the U.S.
Yes, yes.
Now, we do have some reporting here from DataBreaches.net, where they reached out to ShinyCorp
and asked them if they were behind
the Collins Aerospace thing,
and they said no comment.
They did say that that's not unusual,
for them to say that,
but what was unusual
is that they said, I said no comment,
but you can say that I said no comment.
So it's all a bit strange.
Like, everyone's getting, like, Scattered Spider vibes
off this.
Now, meanwhile, two more
alleged Scattered Spider members
were arrested over the
London transit system
breach in the UK. Your lot, the Americans have also dropped an indictment for one of the guys
who's been picked up in the UK. Another guy turned himself in to the police in Las Vegas pertaining
to the 2023 attacks on the casinos, that was MGM and Caesars. So whoever that is, they're in
deep, deep trouble. So yeah, there's all of this happening at once. It feels like Scattered Spider
is in the process of being rolled up. And of course, there were all of these really
interesting alleged facts dropped by the DOJ in these indictments
that have been unsealed. So, you know, one of the ones who was arrested in the
UK, Thalha Jubair, who's only 19, you know, apparently they took $115 million in ransoms. They
breached a US court system. And VX Underground noticed something very interesting, which is in these
documents, it says that Scattered Spider obtained a ransom of 964 Bitcoin from one
organisation. The charging document said the cost of Bitcoin at the time, of that Bitcoin at
the time, was $36 million. So they went back and looked at the Bitcoin chart to see
roughly when that moment in time was, and it corresponded to November 2023. And who was being ransomwared
in November 2023? It was the Chinese bank. It was ICBC,
the world's largest bank, right? So we're just finding out all sorts of stuff, but it does
feel, I predicted incorrectly in 2023 that these guys were all going to go down very, very
quickly. It's taken a couple of years, but it does feel like, oh my God, they're all about
to get arrested. What do you think? Yeah, and this is the snowball rolling downhill, right? It just
rolls right over people and scoops them into this big ball and continues down and hits more people.
So as you get people to turn, they will turn on the associates they know.
They also will have, you know, machines and tradecraft, and nobody does obsec perfectly.
And so even if they don't know who their co-conspirators are, they just know them by nicknames and handles,
they have interactions that are going to give nuggets and clues that law enforcement get closer and closer.
And even inside that, you know, there will be
people who are online and still participating, and they will either be assets of law enforcement
or they will be law enforcement on those accounts. And so they're all going to look over their
shoulders and wonder. And so, you know, the noose tightens in this space. So I do agree with you,
it's going to keep yielding more and more arrests. It's going to be hard to stay ahead and
outside of it. And even the Bitcoin, you know, that's a permanent record. And as little bits and
wallets turn up in some of these arrests, it also starts to be another, you know, piece in the jigsaw
that you can use to connect other people and other topics. So I can't remember if it was
this guy or someone else, but they'd use some Bitcoin to buy some gift cards and then loaded it into
like Uber Eats credits or whatever. And that's, like, one of the ways that they got connected. Like, it's just,
it ain't easy. I think if you are not a Russian with good protection, with a decent umbrella, like, you've got no business doing this. And even they don't want to have a bar of it at the moment, it seems. Yep. And, you know, back to your no comment, but you can say no comment. You know, that smacks very much of Scattered Spider, you know, having fun on Telegram and, you know, on the leak sites talking about their escapades, right? I don't think they can help themselves in some of these. So,
yeah, we will continue to learn more and more and put it together just like the law enforcement
will, and I think we'll see outcomes and results.
Yeah, just before we move on from this discussion, what's your feeling as to the effectiveness
of some of the joint intelligence and law enforcement operations targeting the ransomware
ecosystem, you know, particularly the Russian and, you know, Ukrainian ransomware ecosystem
over the last year or so?
Yeah, it's hit and miss,
Pat. So, you know, we've got to put sand in the gears and you've got to keep adding friction all the time, right?
There's not one solution to any of this. It's going to be law enforcement. It's going to be the
offensive cyber you keep hearing people talk about. It's going to be a lot of industry and the
commercial entities helping. And all of that just kind of works together in one
direction. So, you know, the criminals are going to keep innovating. The governments and the private
industry will up their game over time and, you know, we'll have to work them one shot at a time.
But I'll tell you, you know, even though some of these are juvenile folks involved in this,
it's serious business, right? There are, you know, death threats amongst them and even out to the people
that work in law enforcement and intelligence pursuing them.
So it's not an easy job.
In fact, I was talking to some of the private companies' threat intel teams.
And they've got serious threats against themselves and their families to the point they don't
even like to be acknowledged in the role they play and where they help because the threat's real.
Yeah, I mean, I've seen, I've spoken to American law enforcement people who've had, you know, old addresses doxed and things like that and death threats and it's like, it's bad.
It is, it is, you know, there's a level of nasty out there that's, that's just insane.
And look, speaking of the impact, we've got more reporting here just on the cascading effects of this Jaguar Land Rover ransomware incident.
There is a company that supplies Jaguar Land Rover and its share price has plunged 55%,
you know, what's the name,
Autins, a company providing specialist insulation components for Jaguar vehicles.
You know, they're just in all sorts of trouble.
That's an Alexander Martin report over at the record.
So, you know, this stuff has genuine consequences.
And meanwhile, Jaguar Land Rover itself has extended
its production pause, like, into October at the earliest.
And that's according to cybersecurity dive.
So, yeah, bad, bad, bad, bad stuff.
Yeah, when you talk about the knock-on effects, I read
something that said, you know, Jaguar has 33,000 employees, but there's 200,000 workers in
the supply chain, right? Autins and others. So, yeah, the knock-on effects are huge.
Yeah, I mean, look, thankfully, thankfully, Jaguar Land Rover is a very profitable company, right?
So it will be able to eat the losses here because, you know, there's a lot of people who want
Range Rovers. And, you know, thankfully for the mechanics all over the world, people will continue
to buy Range Rovers. But, yeah, it's just, it's a crazy
old time. But I see that stock plunge and wonder if there's opportunity. I have a friend that
built a bot to go out and invest in companies right after the news broke about a cyber intrusion
because the stocks always dip, but often they come back and level out or jump higher. He did
really well with that strategy. Now, in the physical supply chain of the smaller companies,
I'm not so sure, but certainly the big companies, it's a good strategy.
Yeah, indeed. Just quickly, because you're here, we wouldn't normally cover it. It's just amazing the degree to which, like, the Section 702 reauthorization story just like pops up every year. And it's starting to hit the headlines again because I think it's, like, is it due to expire again?
Yeah, April 2026, Pat. So, you know, there was a huge fight last time. It barely got through. That was in late 2020.
They kicked it 18 months.
There's a proposal now, you know, to get this authorized through the midterm elections here in the U.S.
So that, you know, it won't be a political football before the elections.
So we'll see if that's a strategy that works.
I mean, just to be clear, like 702 failing to reauthorize would be extremely bad.
Yeah.
It would be a huge hit to a lot of
the things we're talking about, right? It is an amazing tool that helps us in, um, you know,
cyber intrusions, ransomware gangs and other things. So, you know, but at the same time,
foreign intelligence services, you know, conspiring against you over Facebook, you know, like,
yeah, or maybe building, uh, you know, cell phone farms in New York
City. Yeah. So all of those things, you know, it helps.
I'm surprised this missed your run sheet, though, because the CISA authority, not CISA the organization, but...
Oh, this is the sharing stuff. Yeah, yeah. Tell us about that.
It is just about to fall off a cliff as well. And, you know, there's been a number of attempts to kind of wire it into other bills and get it passed.
but there's so much discussion of whether the authority needs to be amended with more privacy
components or not that it's never quite made it across the line.
And, you know, it's about to fail, and that would leave a bunch of companies without
protections, legal protections for sharing information with the government and across to
others that can help in investigations.
You know, the privacy components, you know, it's good to be worried about it, but there are zero examples of this being misused.
So, you know, I think we're throwing the baby out with the bathwater talking about hypotheticals when it has proven value in pursuing intrusions.
So I'm hoping we get the CISA renewal long before we're fighting over, you know, any of the Section 702 renewals.
Yeah, so that's a 2015 law, right?
Like, it's coming up for, like, 10-year sunset,
and it's the one that enables people to share logs and bits and pieces
and threat information with the government and have, like, safe harbor, right?
And that's, yeah.
Correct.
I did have that one originally in the run sheet.
I did cut it because I was just like, well, let's talk about it when it falls over.
It hasn't fallen over yet, but, yeah, we'll see.
It probably will, though, won't it?
I am not optimistic right now.
It's not looking good because typically what you do is you get an
agreement to attach it to something that's definitely going through, you know, the annual
defense budget or something. And those aren't moving. And, you know, there's even debate.
We're heading to a one October government shutdown if we don't get a general authorization for the
U.S. government. And, you know, even that is under question at this point.
Yep. Things in America going real well at the moment. Everything's functioning exactly as it should.
Now, let's move on to our last story here.
And it's about you.
Eric Geller over at Cybersecurity Dive has written up a bunch of comments you made on a panel.
It was a panel discussion between you and John Hultquist.
And you were talking about how don't be too sunny about the fact that AI models are finding vulnerabilities now
because attackers are probably going to outpace defenders when it comes to using those vulnerabilities,
as in exploiting them before they can be patched.
And I wanted to talk to you about this because every time I game this out, right?
Because this is a discussion I've had a bunch of times with various people.
You know, where I land is that you're right, in the short term, this is going to be a period of massive disruption.
But in the long term, I think we're probably going to wind up in a better place.
And I just, you know, they've obviously drawn out your Rob is concerned comments for this piece.
But do you think that ultimately in the long run we wind up better off?
I do, but I worry about, you know, the chasm we have to cross to get there, Pat.
I am an eternal optimist.
People that have known me for years know that I see the glass half full in almost everything.
And boy, I have convinced myself in the last couple of years that, you know, this vulnerability discovery and automation activity is going to be painful for us.
Because while I do think, you know, our operating systems and our browsers and our cell phones are going to get much, much more protections through this,
and they will be updated at a speed and efficiency that will benefit all of us,
there's so much legacy tech in the environment that I think it's going to be like the forest fire that comes through,
cleans it out and gets rid of all the unsupported stuff. Yes, it's a cleansing fire. But,
you know, if your house gets burned down, it's no great comfort that the new one's going to be, you know,
shiny and new and better built. You've got the pain and maybe the dangers of that fire
coming through. So I do think that there's a lot of pain between here and the utopia you see,
and the faster we get there, the better.
But, you know, the quote in the article, one I live by, is: we suck at patching, right?
So if you identify all these vulnerabilities, we're going to have a hard time of just locking the doors and windows.
Yeah, I mean, I think, though, that if you turn identity, sorry, vulnerability identification and discovery into something that's really easy to do in the development life cycle, you're just shipping less bad stuff.
You're shipping less stuff that needs to be fixed, right?
Until you vibe code and it pulls in crappy packages, right?
Yeah, yeah, yeah.
So there's a little of that on our future too, right?
Yeah, yeah, vibe-coded dog crap, basically.
There's going to be plenty of that.
All right, Rob, that is actually it for this week's news section.
Thank you so much for joining us and for filling in for Adam, and yeah, we'll catch you again soon.
It's always great to be here.
Thanks, Pat.
That was Rob Joyce there with a check of the week's security news.
Big thanks to him for that.
It is time for this week's sponsor interview now with Josh Kamdjou,
who is the chief executive and co-founder of Sublime Security.
Sublime Security is just, I guess, a reinvention of email security,
of the email security platform.
So it filters for things like business email compromise, malware, phishing, that sort of thing.
That's what we mean when we talk about email security.
And, yeah, they've released some really interesting stuff lately.
They've got a security analyst, like an agentic security analyst,
and also an agentic, like, detection engineer.
These two agents work on the platform in conjunction to really tighten up detections,
do things on, like, lazy auto mode.
Works pretty well.
I'm also publishing today a demo of the platform,
including an overview of those agents on our YouTube channel,
so you can find that by looking for Risky Business Media, that is our
YouTube channel. But yeah, in this conversation with Josh, I wanted to talk to him about finding the
line between when you throw something at an agent and when you don't, because obviously you cannot
throw every single email that comes into your organization against an LLM for analysis, right? So you
need to actually, yeah, find a bit of a line to work out what sort of detections you're going to do
and what sort of analysis you're going to do. So here's Josh talking about that. Enjoy.
Yeah, this is like, this is the hard problem in doing real-time detection and prevention at
scale, and particularly when it comes to agents. You cannot, it's not practical to send every message
to an agent or to our automation layer where we have some of our more traditional models that live
either. And so the way that we've architected Sublime is essentially a two-layer system.
Layer one is our DSL, it's like this programmable layer that we've created,
our message query language, that can describe complex attacker behavior. Like that whole
output that we just went through from ADE is written in MQL. And so layer one essentially
acts as a filtering layer to say, hey, is this message suspicious or not? Does it match some
you know, various behavioral indicators, like does it match a known sending pattern or an unknown
sending pattern? It's essentially like, does this look sus compared to what's normal or to what we
know is suspicious behavior? And if it passes that layer, so you get something that flags
layer one, then that gets escalated to layer two. And layer two is where we have our models,
we have our agents, this is where ASA lives as well, to actually do the deeper investigation
and then make remediation decisions.
Are we going to block this message?
Are we going to insert a warning banner?
You know, are we going to move it to the spam folder, things like that?
I mean, I'd imagine, though, that if you flag something, I'd imagine your first line there,
you're flagging either as suspicious or, oh, hell no, this is absolutely malicious right into the bin
with you.
Is that the result of that first stage like triage?
Yeah, the first stage, there's a lot of times where we're just like,
this is obviously bad.
We can just bin this immediately.
There's others where we're just looking for things
that don't look completely normal.
So as an example, just to use a kind of simpler example,
it's like an HTML attachment from a first-time sender or a newly registered sender domain,
like in the last seven days or 14 days.
Neither of those things are by themselves malicious,
but they are very commonly associated with like attacker activity or they're highly abused.
Like the first example is an HTML smuggling delivery mechanism,
but it's also sometimes used for secure message sending services.
And then the new sender domains, like depending on your environment,
that might be, you know, that might be something you see or it might not be.
So we're looking at this type of, like, activity.
We're looking at, you know, what you typically see in the environment,
and then we'll escalate that to layer two, basically.
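For listeners who want the two-layer idea in code form, here's a minimal Python sketch: a cheap deterministic filter standing in for the MQL layer, with only flagged mail escalated to the expensive analysis layer. The predicates and remediation labels are invented for illustration; they are not Sublime's actual rules or API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    sender_domain: str
    sender_domain_age_days: int
    first_time_sender: bool
    has_html_attachment: bool

def layer_one_flags(msg: Message) -> List[str]:
    """Layer one: cheap, deterministic behavioural indicators (illustrative only)."""
    flags = []
    if msg.has_html_attachment and msg.first_time_sender:
        flags.append("html_attachment_from_first_time_sender")
    if msg.sender_domain_age_days <= 14:
        flags.append("newly_registered_sender_domain")
    return flags

def triage(msg: Message, deep_analysis: Callable[[Message, List[str]], str]) -> str:
    """Only mail that trips layer one is escalated to the expensive layer two."""
    flags = layer_one_flags(msg)
    if not flags:
        return "deliver"              # the vast majority of mail stops here
    return deep_analysis(msg, flags)  # e.g. "block", "warn_banner", "spam_folder"
```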
Yeah, I mean, it's funny, right? Because, like, having spoken with one of your competitors about this challenge, like, it is the challenge, right?
Particularly when you're operating at scale, which is, you know, you can't throw every binary into a sandbox.
You can't analyze every single message with AI.
So it's all about finding that line where you're catching everything that's suspicious, but you're not burning just crazy amounts of computational power, analyzing stuff that's benign.
Yeah, yeah.
And the other thing, it's just like the reality of what we do in security is that no matter what, no matter how good, like, our detections or models are today, there will always be something that gets through, right?
Like, no security solution is perfect. And, like, we fully acknowledge that. We know that. We're practitioners
here too. And, you know, and so, like, the question is, when something gets through,
like, especially as we're seeing adversaries adopt more autonomous tooling, generative AI,
like we've talked about this a bunch on how we're actually, we're seeing this happen in the
wild. And, like, there's going to be continued innovations and things that evade detection systems.
That's just the reality we live in. And so the question is, the thing that we thought about
is what then, right?
Like, we don't want to be a bottleneck for our customers.
Hey, we got to go update this model.
You got to put in a ticket or you have to go build your own detection or anything like that.
So that was more part of the big inspiration for ADE: we know it's going to fail.
So how do we enable our customers to be in control of their own destiny without creating work for them?
And that's why we created ADE, to automate that whole process and bring the time to closing gaps from what's traditionally, like, weeks or months to go and retrain a model, down to hours now.
ADE can turn things around.
Yeah, I mean, I think the key thing with Sublime, I mean, really, if we boil it down, the key value proposition is that it's different for every installation, right?
like every rule set, every tolerance threshold is going to be different per customer.
That's right. Yeah. No two environments are the same. There's a different lock on every house.
You learn to pick one lock and you still don't know what's happening
at the other house. That's kind of like one of the analogies that we use, for better or
worse. Hey, everybody's got to do marketing, right? Like, maybe worse. I don't know.
The other question I got, right, is so who's handling the compute for all of this AI, right?
Is that something where the customer plugs it into their own, you know, LLMs that they're hosting,
or are you doing it?
And if you're the one, I'm guessing it's a combination of both.
But if you're doing it, like, since this thing went live properly, like, what does the
compute bill look like every month, right?
Because I reckon that would be, you know, you've got to see some sticker shock
there, guy.
Yeah, and to be clear, for ADE and ASA, like, these are all included in Sublime
Enterprise, so, like, we're not charging our customers more for this. And in terms of, like, who owns
the bill for these agents: we have our deployment models. We have a
SaaS-based deployment where it's our cloud infrastructure, it's hosted, you know, just like
normal SaaS, and then we have self-hosted deployments. And so just in terms of, like, how we
price Sublime, the licensing fee is the
same, whether it's self-hosted or whether it's SaaS. But in terms of who's responsible for
the infrastructure bill, if you're SaaS, it's Sublime. If you're self-hosted, you own your
own infrastructure bill. So that's the difference in terms of the bill
for the agents. So what, it's a small enough amount of compute that you're
actually just able to eat that on the SaaS side? Yeah, that's right.
Yeah, okay. That's cheaper
than I thought it would be, right, for doing this sort of analysis at scale.
Yeah, I think one of the things that makes this really possible is the DSL.
So, like the hunt, for example: when ADE generates a detection
and it wants to validate the efficacy of that detection,
it's using the generated detection to run the hunt. So it's not like an agent is going and doing
that work; these are leveraging primitives that already exist within the architecture of Sublime. So
when we're going back and analyzing the last 14 days of history, that's just a hunt
that is already built in the system. It's already highly optimized in how Sublime works.
So it's only at the layer of, hey, we need to investigate these, the matches.
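A hedged sketch of the back-testing idea described here: take a candidate detection (modelled as a plain Python predicate standing in for a compiled MQL rule) and run it over a window of message history to see what it would have flagged. Function and field names are hypothetical, not Sublime's.

```python
from datetime import datetime, timedelta, timezone
from typing import Callable, Dict, Iterable, List

# A candidate detection is modelled as a plain predicate over a message record;
# in the product it would be a compiled MQL rule.
Detection = Callable[[Dict], bool]

def backtest(rule: Detection, history: Iterable[Dict], days: int = 14) -> List[Dict]:
    """Return the historical messages the candidate rule would have flagged."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [msg for msg in history
            if msg["received_at"] >= cutoff and rule(msg)]

# Usage sketch: a rule matching HTML attachments from first-time senders.
suspicious = lambda m: m.get("has_html_attachment") and m.get("first_time_sender")
# hits = backtest(suspicious, message_history)  # message_history is your own archive
```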
No, I'm with you.
I mean, you want your agents to use the tools, not be the tools, right?
Which is why I find it funny when everyone's like, oh, but LLMs are going to take over this.
And it's like, well, that doesn't seem like a good way of doing business, if I'm honest, right?
I mean, this is the reason why our two agents have been so effective: it's because of the tools that we have in Sublime that we make available to them,
and because of all the context and knowledge that we have
and that we've given these agents,
like this wouldn't be possible without the expressiveness of MQL
and the retrohunt capability and things like that.
So one of the really key things when doing this at scale for this type of problem
is that generally speaking, LLMs are nondeterministic,
but we are turning it into a deterministic problem with the DSL, in terms of what we actually generate, output, and deploy to make the decision at the end of the day.
Just going back to something you said earlier, which is, if stuff does occasionally slip through, I'd imagine the back testing component of ADE is going to become quite useful then.
Because you might run a back test and go, oh, okay, this didn't cause, like, six million messages to get flagged as malicious,
but it did cause three of them to get flagged as malicious,
and that might be something we're going to look at.
Yeah, you mean in terms of reviewing the back testing results
and making that transparent?
Yeah, yeah, definitely.
So there are cases where, and we've seen this happen
since we've released ADE, like, done our public launch,
where basically the ADE detection is picking up
other variants of the campaign as well.
Yeah, and historically as well, right?
Historically.
It is historically.
Yeah, exactly.
So it's like, because ADE strives to be behavioral, there's a spectrum, and anyone
who's done any sort of, like, detection engineering knows there's a spectrum in how
you build detections, where on one side of the spectrum you're super specific, like you're
almost IOC-based, or you are IOC-based.
On the other side of the spectrum, you're, like, purely behavioral driven.
Your TTPs. Your TTPs. Yeah, exactly. You're using AI and things like that. And so there is a
trade-off in terms of how quickly you can do something. Generally speaking, if you want to do an
IOC-based detection, you don't need ADE for that. Like, it's just a block list. You know, it's like
you literally just hard-code the IOCs in the detection and you're done. But, like, that's useless because
they're going to rotate IOCs. So you want to be on this side of the spectrum where you can be
much more behavioral. And so, like, there's just a tradeoff, because the farther you go,
the more time it might take to actually iterate and build something that's highly effective.
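To put the two ends of that spectrum side by side, here's a small hedged Python sketch: an IOC block list that dies the moment the attacker rotates infrastructure, versus a behavioural predicate that describes the technique instead. The domains and field names are invented for illustration.

```python
# One end of the spectrum: IOC-based. Trivial to write, trivial to evade.
KNOWN_BAD_DOMAINS = {"evil-sender.example", "phish-kit.example"}

def ioc_rule(msg: dict) -> bool:
    return msg["sender_domain"] in KNOWN_BAD_DOMAINS  # rotated infrastructure sails past

# Other end: behavioural. Describes the technique rather than the infrastructure,
# so it survives IOC rotation, but it takes longer to tune without false positives.
def behavioural_rule(msg: dict) -> bool:
    return (msg["has_html_attachment"]
            and msg["first_time_sender"]
            and msg["sender_domain_age_days"] <= 14)
```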
And in the meantime, if it's an active campaign that you want to respond to, you actually want
to get something out relatively fast. So there's kind of a spectrum. One of the
things that we're thinking about for future iterations of ADE is to essentially output multiple
rounds of detections, where the first one is going to be a little bit more specific, but it
mitigates the impact of the current ongoing campaign, like, immediately, or the one that you
might see tomorrow, right, so that you just don't feel the pain. But then it keeps going. And it's
like, all right, how can I get more behavioral? How can I get more expansive? How can I use these
ML functions in a better way as well? And then just keep going so that you just
get more pervasive, broader coverage. All right. Well, Josh Kamdjou, it's amazing talking to you
about this stuff, right? Like about how AI is legit changing detection. Thanks a lot for joining me
to talk about it. Always fun. Thanks for having me, Pat. That was Josh Kamdjou from Sublime
Security there with this week's sponsor interview. Big thanks to him for that. And also big thanks to
Brie Campbell from Sublime.
I did the original sponsor interview
for this week's show with Brie, but
unfortunately we had a technical issue which meant we needed
to scramble last minute to
replace it, which is why
Josh was joining us from the lobby of a
hotel. Sorry about that
for everyone who was impacted.
It's just one of those things. But that is it
for this week's show. I do hope you enjoyed
it. I'll be back in a few weeks
because I'm just about to go on leave,
with more security news and analysis, but until
then, I've been Patrick Gray. Thanks for listening.