Risky Business #798 -- Mexican cartel surveilled the FBI to identify, kill witnesses
Episode Date: July 2, 2025

On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news:

Australian airline Qantas looks like it got a Scattered Spider-ing
Microsoft works towards blunting the next CrowdStrike disaster
Changes are coming for Microsoft's default enterprise app consenting setup
Synology downplays hardcoded passwords for its M365 cloud backup agent
The next Citrix Netscaler memory disclosure looks nasty
Drug cartels used technical surveillance to find, fix and finish FBI informants and witnesses

This week's episode is sponsored by RAD Security. Co-founder Jimmy Mesta joins to talk through how they use AI automation to assess the security posture of sprawling cloud environments.

This episode is also available on YouTube.

Show notes

Qantas hit by cyber attack, leaving 6 million customer records at risk of data breach
Scattered Spider appears to pivot toward aviation sector | Cybersecurity Dive
Microsoft to make Windows more resilient following 2024 IT outage | Cybersecurity Dive
The Ultimate Guide to App Consent in Microsoft Entra - YouTube
When Backups Open Backdoors: Accessing Sensitive Cloud Data via "Synology Active Backup for Microsoft 365" / modzero
AT&T deploys new account lock feature to counter SIM swapping | CyberScoop
Iran-linked hackers threaten to release Trump aides' emails | Reuters
US government warns of new Iran-linked cyber threats on critical infrastructure | Cybersecurity Dive
Actively exploited vulnerability gives extraordinary control over server fleets - Ars Technica
Critical vulnerability in Citrix Netscaler raises specter of exploitation wave | Cybersecurity Dive
Identities of More Than 80 Americans Stolen for North Korean IT Worker Scams | WIRED
Cloudflare confirms Russia restricting access to services amid free internet crackdown | The Record from Recorded Future News
Mexican drug cartel used hacker to track FBI official, then killed potential FBI informants, government audit says | CNN
Audit of the FBI's Efforts to Mitigate the Effects of Ubiquitous Technical Surveillance - Redacted Report
NATO members aim for spending 5% of GDP on defense, with 1.5% eligible for cyber | The Record from Recorded Future News
US sanctions bulletproof hosting provider for supporting ransomware, infostealer operations | CyberScoop
US, French authorities confirm arrest of BreachForums hackers | TechCrunch
Spanish police arrest five over $542 million crypto investment scheme | The Record from Recorded Future News
Scam compounds labeled a 'living nightmare' as Cambodian government accused of turning a blind eye | The Record from Recorded Future News
Transcript
Hey everyone and welcome to Risky Business. My name is Patrick Gray. We've got a great
show for you. We'll be getting into the news with Adam Boileau in just a moment and then
it'll be time for this week's sponsor interview and this week's show is brought to you by
RAD Security. They do a lot of stuff around Kubernetes and sort of cloud infrastructure security.
And we're talking to Jimmy Mesta, one of the founders there, all about all of the stuff
they're doing with AI in cloud.
And it just feels like everyone is doing stuff with AI now.
And that's an interesting chat.
And it is coming up later.
Before we get going, Adam, we should probably mention that we're actually about to go on
break. The weekly show is going on break for a couple of weeks.
It's school holidays here and I'm, you know, going to enjoy that with my family.
So if anyone's wondering why there's no weekly show next week or the week after, that is why.
But let's kick off the news now and yeah, airlines.
So last week we were talking about how Scattered Spider had pivoted into attacking the insurance industry, and it's like, wow,
it's so weird that they target verticals like this. It looks like they found a new vertical, which is the airline industry.
Yes, we had seen some reports last week about their potential involvement with breaches at WestJet and Hawaiian Airlines
And then in the last kind of hour or so,
we've seen a report in the Australian media
that Qantas, the National Airline of Australia,
has also suffered a pretty big breach of some sort,
lost, I think, six million customers' worth of data.
And that is being described as, you know,
Scattered Spider-like, or -esque, or, you know,
probably involves that group of people.
So yeah, we got three. Does that make a trend? Probably, right? Well, I mean,
last week you grabbed that WestJet story and stuck it in the news items and we
were like I don't know what this is could be ransomware could be anything
but it looks like yeah that was the the first one to fall right and yeah now
Hawaiian Airlines and then Qantas and you just think, it's got the makings,
you know, it's a crime wave, right?
Like let's just call it what it is,
which is these advanced persistent teens
are just out there raising hell.
This almost feels a little bit like,
you know, for those who are around back then,
because it has been a long time,
it feels a little bit like LulzSec back in the day, right?
Where it was just like, just kept happening.
And you never knew where they were gonna pop up.
Yeah, no, that parallel I think is pretty apt, APT.
This particular one at Qantas does have some other
kind of elements that certainly seem pretty familiar
from other Scattered Spider breaches.
So it sounds like this was a third party platform
used by Qantas' contact center.
And that kind of like third party,
outsourced provider breach
seems to be the methodology for Marks and Spencer,
the sort of thing Scattered Spider does a lot.
So they do understand how to go find the weak points
in these ecosystems. And, you know, the
relationships between a lot of these organizations and their outsourced
partners, or people they use for these sorts of things, you know, are not really
designed to be super resilient, because no one's really gone around and targeted
them. Certainly as a pen tester, you know, people doing assurance, you never
practically get to go exploit these relationships. You can point at them on
paper, but there's no practical kind of like, we've done a thing.
And so this is not an area that has got much focus, I don't think.
Yeah.
Well, I mean, we spoke a couple of weeks ago, didn't we, about the advanced persistent teens who were going
around and changing people's MX records by socially engineering their domain registrars
and whatever. And as you pointed out at the time, you can't
really go and do that as part of a pen test. You can't socially engineer the help desk at a domain
registrar and have that transferred over. It's just not a thing. No, no, that's unfortunately out of
scope. And if you raise those things, and I've written reports raising exactly these issues
with customers saying, hey, look, there is a weak spot here if someone were to do this,
it gets put down the list way below all of the other easy technical issues to fix. Go update this
or patch that. That's straightforward for them to wrap their heads around doing something about.
These ones, they go, eh, that sounds complicated. Kind of big picture, kind of, you know, ecosystem.
Eh, that's risk accepted next.
Yeah, well, I mean, I think, yeah, outsourcers, and SaaS is the other one, right?
Which you've got to worry about because we used to worry about this when it was infrastructure
as a service.
Like we used to worry about, well, what happens if there's bugs in AWS or whatever and your
pen testers can't, you know, go and hack AWS?
And mostly, I mean, they're doing a really good job of that with the exception of Oracle, right?
They're the ones that do not appear
to be doing a good job with that.
But I think with SaaS, when it comes to major SaaS,
you're mostly OK, but there's so much of this stuff, right?
God knows what it was, whether it was an outsourcer
or a SaaS platform or a combination of both.
And more and more, that's how business operates these days.
And more and more, we're going to see third party breaches
just like this.
I mean, it's hardly new, though.
But I think it's just ubiquitous now.
Yeah, I mean, I guess it's a good reminder that as we
improve other things, the deployment of passkeys
and multi-factor everywhere and things
that make traditional hacker tradecraft,
like stealing creds and logging in, harder.
Once that goes away, it doesn't mean everything's secure.
It just means people are gonna start looking
for other things that unfortunately
are probably gonna be more difficult to defend
than just bad passwords.
I mean, the rise of info stealers as a credential source
already suggests that we're kind of getting better at not just having Monday1 as our password for half our user base because that's the
default when someone joins or whatever.
These attackers don't go away.
They move and they find something different.
Unfortunately, fixing things is just going to get more complicated as the attack surface
gets more gnarly.
Yeah, gnarly.
Good choice.
But it is like, it is squeezing the balloon, right?
You squeeze the balloon and yes, it does not get smaller, unfortunately.
It's enough to make one, you know, given enough time in this industry, it's enough to make
one maybe a little bit jaded.
Yeah, yeah.
I mean, the amount of people who work in this field who are like, I'm just going to quit
and go be a llama farmer or something.
Yeah, that's the joke.
So just farming, that's the future.
Yeah, yeah.
The joke LinkedIn thing, which is what is it like, you know, sysadmin through to CISO
and then goat farmer.
It seems to be the correct career trajectory.
We got a story here from CyberScoop about AT&T making some changes.
They've got like an account lock feature now, which is supposed to
make SIM swapping harder.
I think this is good.
I do worry that things like this, if they're not properly implemented, which is very difficult to do at scale,
just turn into speed bumps for people like, you know, the Com kids.
Yeah, yeah, exactly. I think this is a feature they've launched where in their mobile app
You can basically put a lock on your account that prevents number porting, changes to details,
changes to numbers and other bits of your account set up.
And if you do want to do those things,
you have to go turn that off in the app.
The natural question is, what do you do
if you've lost your phone, you don't have the app?
Well, you can't sign into the app for whatever reason.
What does that flow look like?
But I guess, anything that adds,
even if it is just a speed bump,
to Com kids doing these takeovers, is worth having.
We're going to make them drive over that speed bump.
Yes.
I think the AT&T docs did say that at least if you turn this
account lock feature off, it sends a message to all of the numbers
in the account and emails everybody and so on.
So,
you've got a chance that you might spot it happening, which is why of course the com kids will do it at two in the morning, or on a long weekend, or when you're at the pub.
So yeah.
Yeah. I just would have thought, like, even standard things that they don't do,
which aren't always going to work, but like trying to ring the person who's being SIM
swapped. Like, they're like, I lost my phone,
I need to SIM swap. Ring the number and see if it rings, see if someone picks it
up, you know what I mean?
Like maybe put a 24 hour delay on it.
I don't know.
It's just always seemed way too loose.
Um, so look, it's good that they're doing something there and maybe that'll
slow a few things down.
I don't know.
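The account lock flow described above can be sketched in miniature. This is a toy model, not AT&T's actual implementation, and all the names here are illustrative; the part worth noticing is the notify-everyone-on-unlock behaviour, which is what gives the real owner a chance to spot a takeover in progress.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    numbers: list
    email: str
    locked: bool = False
    notifications: list = field(default_factory=list)

def lock(account):
    account.locked = True

def unlock(account):
    # Disabling the lock notifies every line on the account plus the
    # account email, so a 2am unlock doesn't go entirely unnoticed.
    account.locked = False
    for n in account.numbers:
        account.notifications.append(("sms", n, "Account lock disabled"))
    account.notifications.append(("email", account.email, "Account lock disabled"))

def request_port(account, number):
    # Port-outs (and other sensitive changes) are refused while the lock is on.
    if account.locked:
        return "denied: account lock enabled"
    return f"port of {number} approved"
```

Even in this toy form, the flow only slows an attacker down, it is the speed bump the hosts describe, not a wall: whatever recovery path exists for "I lost my phone and can't open the app" is where the Com kids will go next.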
Uh, let's talk about some more meaningful changes now.
So in the wake of this, you know, crowd strike disaster, when was that?
Last year?
Yeah, I think it was 2024.
Yes, last year, 2024.
Yeah, yeah, yeah.
So Microsoft made some noises about how they were going to, you know, change things basically
to try to make Windows more resilient to this sort of stuff.
And that involves a combination of things like encouraging security companies to move as
much as possible out of the kernel, which seems quite sensible actually.
And encouragingly, they didn't just knee-jerk.
They weren't like, everybody's got to get out of the kernel.
But now they've got a bit more of a roadmap for what those changes are going to look like.
Now one of the things they're doing is, and they sort of buried it in their report,
in their blog post about this, and were like,
oh, we're releasing all of these features
to help people get out of the kernel.
It doesn't look like they're,
they haven't said whether or not
they're gonna kick everyone out,
but it doesn't feel like that quite yet to me.
I think what they're gonna try to do
is see exactly what they can get out
before they make any further decisions and whatnot.
But they've also introduced a bunch of recovery features
and stuff, which should help quite a lot next time.
Well, hopefully there's no next time.
But if something like this were to happen again,
these features would make a real difference.
Can you just walk us through exactly what they got planned
here?
Yeah, so it's a kind of multi-pronged approach.
One is they've been talking a bunch with security vendors
about kind of what they want from an API,
what do they need to do in kernel,
and is there a way to be provided that,
those services, those callbacks,
whatever it is out in the user space.
So that's kind of one tack.
The next is there are a bunch of guidelines from Microsoft
about how you should deploy updates for this kind of like really privileged software.
So things like, you know, staggered rollouts, ring deployments, you know, sort of best practice
stuff that some vendors do.
Well, what was funny is like how much emphasis there was on the, in the Microsoft post about
ring deployments.
Like you can almost tell the people writing it could not believe that CrowdStrike weren't
doing that.
You know, and they're making it a requirement.
Like if you want to have kernel access, like you have to do ring deployments.
And like they shouldn't have to demand that people do that.
Like that was the sin, right?
If you look back on the CrowdStrike thing, you know, it was this weird confluence of events.
They were very unlucky for that to have happened.
It is a well-designed, like if you're going to be in the kernel, it was actually very well designed.
They just got really, really, really unlucky.
But that's why you do ring deployments in case you get
really, really unlucky.
So anyway, sorry, I cut you off.
Yeah, yeah, yeah.
So that's, I guess that's the second part is, you know,
some better guidance and requirements for vendors that
are going to have this access.
And then the third bit that you mentioned is, you know,
when it happens,
what does the recovery process look like?
So Microsoft has been introducing a,
so there's already like a Windows recovery mode process,
but one of the problems was stuff was getting stuck
in kind of recovery mode and not able to get updated.
So they've, and there were a number of clever hacks
that people came up with to try and remotely recover
machines that were stuck in a reboot loop.
But Microsoft's kind of formalizing that structure and then introducing a mechanism for administrators to deploy updates to machines that are in recovery reboot mode.
So better resilience for an overall fleet in the event that something terrible happens, you don't actually have to roll truck and go and visit every physical machine or every, you know, get along the console of every server
or every virtual machine or whatever else, like some mechanisms to automate that process.
And that seems like good progress, you know, regardless of, of what thing goes wrong, you
know, with people's machines, having a mechanism to recover them at scale clearly is something
that we needed.
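The ring deployment idea discussed above, pushing a change to a small slice of the fleet first and halting if health checks fail, can be sketched like this. It's a minimal illustration under assumed ring sizes, not any vendor's actual pipeline:

```python
def ring_deploy(hosts, apply_update, health_check, ring_fractions=(0.01, 0.10, 1.0)):
    """Roll an update out in progressively larger rings, halting on failure.

    apply_update(host) pushes the change; health_check(host) -> bool.
    Ring fractions are cumulative shares of the fleet (here 1%, 10%, 100%).
    """
    deployed = []
    for fraction in ring_fractions:
        cutoff = max(1, int(len(hosts) * fraction))
        for host in hosts[len(deployed):cutoff]:
            apply_update(host)
            deployed.append(host)
        # Halt before widening the blast radius if anything looks unhealthy --
        # this check is exactly what a straight-to-everyone push skips.
        if not all(health_check(h) for h in deployed):
            return deployed, "halted"
    return deployed, "complete"
```

The point of the first ring being tiny is that a bad update bricks one percent of machines instead of all of them, which is why "you must do ring deployments" is now a condition of kernel access.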
Yeah. And I'm sorry to all of the listeners who might have been triggered by this conversation,
you know, having flashbacks like that GIF of the dog having the Vietnam War flashback.
Sorry for going full flashback dog on you folks.
It was pretty full on.
Alright, so Microsoft is also making some changes to app consents in Entra, which is
a very positive thing.
These are baby steps though.
So it was a listener who alerted me to this through the Entra chat podcast, which is a
thing that emanates from the bowels of Microsoft.
It even has a theme song involving singing the words Entra Chat, which I find incredibly cringe, but
it is really good that they're actually out there talking about this.
And indeed there was a young woman there, Erin Greenlee, who, and you watched this whole
thing and you were like, you know, this is a smart person making a lot of sense.
But it does seem like these changes are baby steps.
Like walk us through what exactly they're doing here. So the deal is that in a Microsoft, like, Azure tenancy, the app consent process,
the current default is end users can consent to stuff. And you can change that,
but the default has been end users can just consent to whatever.
And that has led to a proliferation of apps that are consented for access to people's cloud
resources that have too much permission or aren't really used or the opportunity of social
engineering people to grant access to apps into their work environments.
There's just a whole bunch of bad stuff that happened.
And the plan here is that they are going to change the defaults so that
admins are required to consent to certain kind of high-privileged access.
And in particular, there's one of the permissions that you can grant is like
read, write access to all files or read, write access to the directory.
And these are things that, you know, there's not often a good reason to do.
And often it's abused by app developers just because
it's easy, rather than asking for specific permissions.
There often is a good reason to do them, but that's going to be something that is going
to be rolled out enterprise-wide, right?
It's not a permission you want a user to be able to grant.
Exactly.
And so the new default settings will be that for certain privileged, particularly powerful
permissions that the user has already,
but that it's not desirable for them to be able to just grant
willy nilly, those are going to be restricted
and require admin consent.
And as an admin, you'll be able to configure exactly which
permissions meet that threshold, set up escalation.
There's some options for
delegated, so you can delegate some permissions, can be approved by the mail team, and some can be
approved by the desktop team, whatever. A bit more granular and powerful control. The downside is a
bunch of this is going to require you to talk PowerShell to an API rather than pointy clicky,
but as you say, baby steps. But overall, it's just good that they are thinking
about this and the era of people just being able
to grant access to all of the corporate resources
at a click without really even understanding
what they're doing, like that's a thing
that Microsoft should probably end,
and it sounds like they're on the road to do that. Well, they should have done it five years ago, which is my beef here, which is like,
oh, it's great you're doing this thing that you should have done in 2018.
We didn't really understand how...
Yes, we did.
We talked about it at the time.
We did, but we the ecosystem, we the industry as a whole, we didn't really know which cloud
model was going to win.
We didn't know if we were all going to end up in some Oracle cloud future or whatever
instead of Microsoft one.
The current mess of OAuth granting and federated authentication and so on and so forth.
It took us a while to get better at that.
Unfortunately, we're doing it on the internet with all of the businesses that run the entire Western world.
So it's a little bit of a YOLO time.
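The default change being described, where certain high-privilege grants get routed to an admin instead of being user-consentable, boils down to something like the sketch below. This is illustrative only: the real policy is configured against Microsoft Graph permissions via PowerShell or the Graph API, and the exact set of scopes that get gated is an assumption here, not Microsoft's published list.

```python
# Illustrative high-privilege scopes: the first two are the "read/write all
# files" and "read/write the directory" permissions mentioned in the episode.
HIGH_PRIVILEGE = {
    "Files.ReadWrite.All",
    "Directory.ReadWrite.All",
    "Mail.ReadWrite",
}

def consent_decision(requested_scopes, user_is_admin):
    """Split an app's requested scopes into user-grantable vs. admin-review."""
    escalate = [s for s in requested_scopes if s in HIGH_PRIVILEGE]
    if user_is_admin or not escalate:
        return {"granted": list(requested_scopes), "escalated": []}
    # Under the new defaults, the risky scopes go to admin review instead of
    # being silently granted to whatever app asked for them.
    return {"granted": [s for s in requested_scopes if s not in HIGH_PRIVILEGE],
            "escalated": escalate}
```

The delegation feature mentioned above would extend this so, say, mail scopes escalate to the mail team rather than one global approver, but the core idea is the same: the user's click stops being sufficient for the dangerous grants.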
Yeah, indeed, indeed.
And yes, Erin Greenlee is the product manager
for App Consent on Microsoft's App Platform team.
That is a real job title.
And she sat down with host Merill Fernando
to chat about App Consent for one hour and 12 minutes.
And that is on YouTube and you can find it. Although Adam, you said that you were sort
of impressed with the understanding sort of being shown there.
Yeah, yeah. I mean, it felt like quite a lot of thought has gone into this and you get
the feeling that there are absolutely pockets of competence at Microsoft, and this is one of them. But, you know, the
overall big picture is still like, we're a way away from where we need to be,
you know, big picture, ecosystem-wide. Yeah, now if only we had a sort of recent
example of how this can go wrong. Oh look, we do. We got this blog post here from a company, I think named modzero, which is looking at
a pretty disastrous Synology backup thing, right?
So you've got this Synology active backup for Microsoft 365, which already I have so
many questions.
So this is a backup product for your cloud.
So this is if you like don't
trust Microsoft to backup your cloud data and you would prefer Synology to do that for
you like in your house? Is that the basic idea of this product?
That's basically it. Yes. If you have a Synology NAS and you want to backup your stuff out
of the Microsoft cloud, then this is the kind of the plugin that you use to do this.
So you install the like active backup app on your Synology
and then you have to like authorize that
into your cloud environment
so that it can then read and write stuff.
And how do you authorize it into your cloud?
Well, so there's like an initial kind of setup process
where you have like an OAuth grant that you give,
that creates an account in your Microsoft environment,
which then gets passed off to Synology and then passed back to the NAS through some back channel, and then it
provisions, like, through that, basically admin access to your cloud so that it can
back stuff up. And that part of it, like getting admin access to
your cloud, is kind of done as you'd expect, with OAuth grants and blah,
blah, blah. But the initial setup step, where you are approving this app into
your environment, there's something kind of weird going on.
And these researchers, this security company ModZero,
they were red teaming a customer that used this product
in their cloud and were looking around
and decided to install it themselves
and look and see how it worked.
Which is a lot more effort than just telling them
not to run it because it's a bad idea, but anyway.
Well, you know, you want to be informed about these things, you know, when you're giving them back.
You want to know why it's a bad idea, I guess.
Yes, you want to point it out, exactly. Because otherwise, like, computers are a bad idea,
but that's not, you can't just write that in the report
and collect a check, which you'd like.
But no.
So they pulled it apart, and they were, like,
looking at the web requests going back and forth,
and they saw a username and a password in one of the requests.
So they pulled it apart, and they were, like,
looking at the web requests going back and forth,
and they saw a username and a password in one of the requests.
And like, well, that's weird,
because this is all OAuthy, token-y,
like why is there a credential going past?
And they pulled the thread, and it turns out
this credential is shared.
So this process creates accounts, service principals,
in your Azure tenancy, but the process that creates this
seems to have a hard-coded password.
So they checked and yes,
they can auth with this password.
Then they checked Synology's own Microsoft tenancy,
and it works there too.
So they've created a password for everybody.
Then this particular account has relatively restricted
permissions, but amongst the permissions it does have is
read all your Teams messages, which is not great.
And I think if it's, I was a little unclear if it's read
everybody's Teams messages, cause like you do have to be
admin to do, yeah, it's a bit confusing.
But then the other weird thing is that there does not actually appear
to be any legitimate reason like this password is not actually used in any
legitimate flow it seems like maybe at some point they were trying to do it
just the easy way with passwords and then they moved to doing it the correct
way with you know tokens and blah blah blah blah it gets kind of a bit murky
but maybe they didn't tidy it up.
So this all seems pretty bad.
Like being able to just log into anyone's Microsoft tenancy
that uses the Synology product
and read their Teams messages seems bad.
But the thing that's kind of worse also
is that Synology's like disclosure of this bug
really underplays what the bug is and says that no one has to do anything,
even though as best I can tell all of their customers that use this kind of need to go
roll credentials at the very least, let alone look at the access logs or whatever else.
Synology's actual advisory is like, allows attackers to obtain sensitive information by unspecified vectors.
Like, that's the entirety of it.
That's some weaselly language if ever I've heard it.
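The core failure modzero describes, one password baked into a provisioning step and shared by every install, versus minting a fresh secret per tenant, is easy to show in miniature. This is purely illustrative; the string below is obviously not the real credential, and the provisioning functions are hypothetical stand-ins.

```python
import secrets

# The anti-pattern: one constant baked into the provisioning code, so every
# customer's service principal ends up sharing it.
HARDCODED = "s3tup-p@ssw0rd"

def provision_shared(tenant_id):
    """How not to do it: same secret for every tenant."""
    return {"tenant": tenant_id, "password": HARDCODED}

def provision_per_tenant(tenant_id):
    """The fix: a cryptographically random secret minted per tenant."""
    return {"tenant": tenant_id, "password": secrets.token_urlsafe(32)}
```

With the shared variant, anyone who recovers the constant from one install (or from watching their own setup traffic, as the researchers did) can authenticate into every other customer's tenancy, which is why "no action required" is such an odd thing for the advisory to say.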
Now riddle me this, right, because I want to connect it back to our previous
conversation.
What is to stop a typical corporate user connecting their Microsoft 365 account to this via OAuth?
I think at some point in this flow you will need to be a global admin.
Oh really? Okay, okay.
So an end user I don't think can do this.
You have to be an admin.
I think you have to be an admin to do it, which I guess is good.
But the fact that the other, I guess the problem is I'm not clear.
Yeah.
If you like, and that is kind of the bigger problem is I don't really
know if an end user can consent to some of these things and do half this process.
Or like, it doesn't, I mean, no, I'm pretty sure...
The fact that the OAuth grant says review for your organization,
because they've got a screen cap of the OAuth thing, the pop-up.
The fact that it says review for your organization tends to suggest it's admin-y.
I think that you do need to be global admin at some point in this process.
Like there may be some bits that don't have to be admin, but either way, it's not great.
And the fact that no one really understands how all of this works, and how to design sensible,
you know, kind of like, how should we back up our stuff to our NAS in our house?
And do so in a way that doesn't result in everybody in the world being able to read
your Teams messages.
Like I...
What a world!
I guess it's my mistake.
It doesn't really connect to the previous story
because it is an admin doing the consent.
Probably, but we don't know, but maybe, but anyway,
I don't know.
It's just a strange, it's a strange old world.
And computers are weird.
Yeah, the point is that cloud auth is weird and hard.
Yes, exactly, right.
So let's move on to a story now where, look, we're just going to mention this one
briefly. You know, some hacker linked to Iran, this is the person calling himself Robert,
who leaked a bunch of stuff, I think during the presidential campaign, is now threatening
to release more emails. I mean, really, care factor pretty low on that, I would imagine.
And we've got all of these reports too, talking about US government warnings about Iranian
threats against US critical infrastructure and whatnot.
You've got also the, you know, mostly the sort of MAGA right, predicting there's going
to be massive terrorist incidents because of these Iranian
sleeper cells that crept into the country under Sleepy Joe's watch, you know.
So there's all of this like, you know, fear being pumped about Iranian cyber attacks and
we have not seen any really.
And I just have this feeling, and Tom, our colleague, Tom Uren, he's working on that this
week. He's writing about this for Seriously Risky Business tomorrow, which you can subscribe to at
risky.biz. You know, I think once the bombs start flying around, you know, doing something that
doesn't achieve a military aim, but causes a bunch of damage like attacking US critical infrastructure,
like opening a dam in the Midwest,
I don't think is really gonna do anything good for Iran,
and they're not dumb.
So I just don't think it's gonna happen.
What do you think about this?
Because for the last like two weeks,
we've just been looking nonstop at news headlines
coming into our central sort of news repository
where we scrape everything into.
And it's just been this constant drumbeat of fear about Iranian attacks and no actual
attacks.
Yeah, yeah, but there really just has not been much to show for it.
And you know, when we did see, like, what was it, some Iranian attacks in Albania, was it,
that we talked about?
That was like MEK stuff.
That was MEK related, but like just kind of not very exciting.
And yeah, they took down a municipality's website or something.
Yeah, exactly.
And I guess the problem is, yes, they have cyber capability.
Yes, they can do cyber, but they don't want to do cyber that's escalatory.
And if you want to do cyber that's not escalatory, you haven't really got very much left.
You could only really do inconsequential stuff.
Well, that's why I think the hack and leak.
That's why I think that we might see the hack and leak stuff kind of kick off again,
because if they dump a couple of mail spools, that doesn't justify a bombing run,
but taking out some critical infrastructure does.
Yeah, yeah, exactly.
So we might see some, you know, Roger Stone emails or whatever.
And like, maybe there's something in there, but like, the US has kind of already elected
Trump, a bit late for that.
Yeah.
And it's not like he's particular.
Like he's, he's just one of those, he's Teflon, right?
So I mean, this is a guy who's making millions and millions of dollars out of like, you know,
coins and selling fragrances and doing all sorts of stuff.
That's like very, very unconventional.
And I just don't think you're going to cause him much political damage with a bunch of leaked emails.
And if they were that damaging they would have released them already.
Well exactly, yeah exactly.
But of course now we've said this, probably, you know, immediately something really, really bad is
going to happen. But it is just our feeling internally, isn't it, that it's unlikely that they're
going to do anything real.
Yeah, it doesn't feel like it. But hey, you know, maybe we'll be wrong. Maybe we'll have a really
juicy episode after you come back from holiday.
Sorry if we cursed you all by saying that. Now, look, let's talk about these AMI MegaRAC
bugs. These bugs were disclosed in 2024. I'm almost certain we spoke about them at the
time. But these are like lights out management bugs in lights out management stuff used
in data centers, which is, you know, look, if someone's actually going to use these
things, like, if they're using them, you're in trouble, right?
If you're dealing with this sort of stuff.
And it looks like CISA has just added that stuff to the KEV list.
So we've got a report here from Dan Goodin looking at that.
That's not good.
No, no, it's not. I mean, these, it makes sense that these bugs are going to get used in the world because
they're pretty trivial.
I mean, the bug that Eclypsium found was like, you can basically bypass auth with an HTTP
header, which, that's not great in a lights out management system where, of course, you've
got access to the underlying hardware, the underlying disks, you can bypass early boot security controls, you can do all
sorts of really good stuff with access to these things.
Most people hopefully don't put their lights out management systems on the internet.
Obviously some people do, which is not great.
But there's also the case of, if you get any bug where you land on some server,
some server-side bug or whatever else, quite often you can reach the lights-out
management system locally from the machine,
so as an escalation vector once you've got any form of compromise.
Yeah, it's the lateral movement to everything, everywhere, basically,
as soon as you're in one thing. And also just as a privilege escalation vector,
be able to go from an un-priv web user on a box
to I control the lights out management system
on that same hardware, you bypass all the rest
of the OS security controls, plus there's not really
going to be great logging up there,
there's not going to be any EDR.
It's a great route to go down.
So I'm not surprised that people are using it.
The other aspect that's complicated about this is
that lights-out management software,
because of the integration with hardware,
tends to come from the BIOS manufacturers
or the hardware vendor with a licensed tool
from a BIOS manufacturer.
So what you're saying is this will be easy to patch?
Yes. So, patch supply chain wise, this is messy.
And that's one of the reasons I imagine it's ended up
on the KEV: there are a dozen hardware vendors using
AMI's MegaRAC lights-out management framework,
either under license
or however the commercials work.
So very reminiscent of the Android ecosystem
in terms of how quickly things can get patched.
Yeah, and look, staying with bugs, right?
We've got a new one to talk about here,
which is a bug in Citrix Netscaler,
which is a 9.3 CVSS.
And last time we saw something like this,
it got a fancy name back in 2023, Citrix Bleed.
And I remember when that one came up,
I think that was when I was in the US,
and I just remember saying to some people, like,
everybody's about to get owned with this, and it actually happened, like, instantly.
Right. So I'd imagine that, yeah,
certainly by the time I'm back in a couple of weeks,
we'll be talking about all of the people who got owned by this,
but this bug looks bad.
Yeah, yeah. It's very similar to the original Citrix bleed.
So it's a memory leak where you get the contents of other parts of memory back due
to, I'm assuming, buffer mismanagement somewhere. But the real
issue with these is, if you leak memory and that memory happens to contain
session tokens of in-flight communications, which is pretty likely, I
guess, if you grab enough memory,
and the attacker gets a session token,
they are now post-auth, which bypasses multifactor.
And that's the thing that really made a mess last time,
is like, people assume the multifactor makes it okay
to put this stuff on the internet,
but if you can steal a session token
and ride an existing session,
or take over an existing session, then, yeah, all the
auth in the world doesn't really help you. All the Okta, all of the other controls, all
of that login stuff, all the impossible travel, all of the things you do at the login step
are no use if you've stolen the post-auth session authorization material. So, yeesh.
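The memory-disclosure class Adam is describing, a length-unchecked read spilling adjacent memory that happens to hold someone else's session token, can be modelled with a toy like this. Purely illustrative, not NetScaler's actual code; the buffer layout and token are invented:

```python
# Toy model: a service keeps request buffers and live session tokens in one
# flat memory region. A length-unchecked read lets a client ask for more
# bytes than its own buffer holds, so adjacent memory (including a victim's
# post-auth session token) comes back in the response.

memory = bytearray(b"GET /vpn HTTP/1.1.......")   # attacker's own request buffer
memory += b"SESSIONID=9f3a2c77d1b4e605"           # victim's in-flight token nearby

ATTACKER_BUF_LEN = 24  # the only region the caller should be able to read

def read_response(offset: int, length: int) -> bytes:
    # VULNERABLE: no bounds check against the caller's own buffer length,
    # so a large `length` over-reads into adjacent memory.
    return bytes(memory[offset:offset + length])

leak = read_response(0, len(memory))       # ask for more than our 24 bytes
assert b"SESSIONID=" in leak               # token disclosed: replay it, you're post-auth
```

Which is why the fix isn't just patching: any token that could have been sitting in leakable memory has to be treated as stolen, so sessions need to be killed too.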
Yeah. I mean, it's like people don't think about stuff like pre-auth RCE either.
They just don't think about it because they're like, oh, you need to log into it.
It's like, well, if there's pre-auth attack surface, no, you don't.
Authentication is not access control for this stuff, really.
It's really not, you know? Anyway, got a great story here from Andy Greenberg.
I mean, we're seeing something that I feel like I've predicted on the show, which is
that the weak point in these North Korean fraudulent IT worker schemes has always been
the laptop farms.
It was my feeling that once the FBI had turned investigations into those laptop
farms into a process, which is often how they tend to do it when they're cracking down on a crime type. Once they've got a template
for how to investigate and enforce, that's what they do, and that's what they're
doing. So we've seen 29 laptop farms across 16 states in the United States being shut
down, computers seized, people arrested, and, yeah, they're going at it. So I think the
easy wins for the North Koreans,
you know, just using these laptop farms, I think that's going to get harder for them,
but they'll pivot to something else, whether that's residential proxy networks or whatever,
but it's going to be harder, I think.
Yeah, yeah. I mean, it's certainly, there's a number of steps in this process that are
just made easier by having a physical presence on the ground, right? And things like having
identity cards and all of the other like bits and bobs
you need to go through initial employment validation,
like when you get onboarded or you go through the hiring process.
So it was kind of like a bunch of that stuff
that these people were facilitating.
And then there's the receiving the laptop,
plugging it in, operating the infrastructure for the North Koreans to access it.
And I think there were like 200 computers all up,
seized across the 29 or so laptop farms.
And it seemed what they were using was IP KVMs.
So they would plug these devices into the USB
and video outputs of these laptops,
and then the North Koreans would use the console
through an IP KVM, which, you know, I imagine with the round-trip
time to Pyongyang internet is probably not a great user experience.
I mean, can you imagine having to log into someone's enterprise Citrix via an IP KVM of
a laptop in someone's basement in Missouri?
I don't think these guys are actually based in Pyongyang.
I think often they're based all around Asia, but yeah, like the round trip time is still gonna suck.
It's still gonna be a terrible user experience.
Can you imagine having the video conferencing? We've seen plenty of stories about like, you know,
these North Koreans don't show up on camera on their video conferences.
Yeah, it's because they're having to use an IP KVM to video conference.
It's because they're gonna look like a slideshow.
Like USB pass through for camera.
Like, it's just, yeah. I think it was one of the UK retailers that was making people turn
on their cams because they were worried about hackers in their environment.
Maybe that's another way to deal with, you know, North Korean workers:
actually having a functional camera across this kind of janky connection
is not gonna be a good time.
So that's how you can detect them.
But you know, you are right, this is an easy place
for them to round people up
and the fact they are doing so is good.
I think we've only seen two of the farm operators
identified, both in New Jersey,
but I imagine we'll see details of more
as they unveil the rest of the prosecution documentation and stuff.
Well, and I think they're using these laptop farms for a reason, right, to give them that local presence.
But there's got to be a reason they're not using residential proxy networks,
which would be the obvious way. It's just something about actually having a physical computer.
Like, there are ways to detect when a computer is physically located roughly somewhere.
I just don't know how, like, it would be harder to do this through a residential proxy network,
I think.
Yeah, yeah. Because otherwise you have to get the physical laptop to them in Pyongyang,
or wherever they are, you know, wherever they're hanging out. And then you've got to try and
tunnel everything back through, which, I mean, you could do. You could give them little Linux
boxes that, you know, Tailscale out through someone's residential proxy
network. But I imagine the logistical side of that
is also complicated, physically getting the laptop there.
And then the consequences of any opsec breach
at that point, right?
Because if you start going like,
where do the WiFi networks around this corporate laptop
think it is?
And it looks like it's in Taipei or wherever.
Yeah, exactly, right?
Like, it's just complicated. It's better in the US, you know?
Yeah, there's reasons, it's easier just to do it that way. Meanwhile in Russia, and this is not the
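The "where do the WiFi networks around this laptop think it is" check mentioned above can be sketched roughly like this. Everything here, the BSSIDs and the expected-location table, is invented; a real check would collect nearby BSSIDs via an endpoint agent and query a WiFi geolocation database:

```python
# Hypothetical opsec check: compare the WiFi access points a corporate
# laptop can actually see against the set expected for its claimed location.
# If nothing matches, the "remote worker in Missouri" may be somewhere else.
# BSSIDs below are made up for illustration.

EXPECTED_BSSIDS_MISSOURI = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def location_consistent(observed_bssids: set, expected: set) -> bool:
    # Any overlap between observed APs and the expected neighbourhood
    # counts as consistent with the claimed location.
    return len(observed_bssids & expected) > 0

# Laptop genuinely sitting in the claimed city:
assert location_consistent({"aa:bb:cc:00:00:01"}, EXPECTED_BSSIDS_MISSOURI) is True
# Laptop shipped abroad and tunnelled back: the surrounding APs give it away.
assert location_consistent({"de:ad:be:ef:00:99"}, EXPECTED_BSSIDS_MISSOURI) is False
```

Which is exactly why keeping the physical laptop in-country, and paying a farm operator to babysit it, beats shipping hardware overseas and proxying the traffic.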
first time Russia has done this, but they're increasingly cracking down on Cloudflare. Like,
it's getting real hard to get to websites behind Cloudflare in Russia.
Roskomnadzor has been saying this would happen for quite some time,
so there's no real surprises here.
I think interestingly enough, it really is because Cloudflare uses encrypted client hello,
which means that it's quite easy to bypass things like censorship restrictions
by tunneling traffic through Cloudflare, and that's really why they're doing this.
So if you want your website to be available in Russia, you just can't sort of use Cloudflare, I guess,
is the moral of the story.
Yeah, apparently they are dropping connections
after the first 16 kilobytes of data,
which ain't going to get you very far.
That's enough for a TLS connection to stand up
and maybe the page header to come through,
but that's about it.
So yeah, it must be pretty rough being on the internet
in Russia at the moment, you know,
because like Cloudflare,
it's a pretty big swathe of the internet.
Yeah, and Cloudflare didn't leave Russia
after it invaded Ukraine, saying at the time,
Russia needs more internet access, not less.
Very Cloudflare-like response.
All right, so an interesting one this week.
We've got a report out of the FBI.
It's partially redacted.
It's really interesting.
It looks at the risks posed to FBI operations by what they call ubiquitous technical surveillance.
And it's got some amazing anecdotes in it from 2018 about how the Sinaloa cartel, I
believe, was able to surveil FBI
personnel. I don't think they were actually agents; I think it was, like,
a legal attaché in one case and whatnot. But they were doing
fairly sophisticated technical surveillance of FBI personnel
in order to locate sources and witnesses and murder them. Which is really not great when you're the
FBI, like if you are actually leading the cartel to the identities of the people
who are, you know, being witnesses and sources against it so that they can be
murdered. I mean really that's just not doing your job properly. I suppose
perhaps these risks weren't as well understood by the FBI in 2018. They certainly
would have been understood by other agencies, like the CIA.
But really this report just is looking at like
what a threat this sort of stuff is to its operations and how they really need to
mix up the way they do things. And it's an interesting report as I say.
And I think Tom is looking at this one for Seriously Risky Business this week as well.
Yeah, I mean the idea of law enforcement agencies having to operate in a, I mean, what was that,
contested information space?
Is that correct?
Oh, they've got some euphemism.
I can't remember what it is right now, but yeah.
You know, it kind of makes law enforcement feel more like, you know, intelligence work,
like foreign intelligence work,
because you're operating in an environment
that is surveilling you and has all these kinds
of interesting tooling and the sort of trickle down
of surveillance techniques into the,
I guess criminals or private sector,
does kind of change things a little bit.
So some of the examples they gave include things like
compromising the phone of, I think as you said,
it was an attaché or something, to be able to see
who they were calling, who they were talking to,
track location.
There was some other reports of cartel hackers
breaking into camera systems around Mexico City
and using that to track movements of agents and so on.
So, you know, kind of movie hacking, in a way. Which, you know, it's
gotta be pretty scary if you're an agent, and
certainly if you're cooperating with, you know, drug enforcement
agencies or whoever else. You know, the fear of being murdered by the cartel for snitching, anything
that increases that fear is good for the cartel.
Even if it's not necessarily real in all cases, like, you know, the idea that they have this
kind of spooky capability. You know,
get yourself a Stingray on Alibaba, buy some data from a data brokerage, and
you're up and running.
I don't know about that.
Did they compromise the phone, though? I thought it was more that they were surveilling,
and it felt more like a Stingray from the bit that I read.
Yeah, it may have been stingray.
It may have been like some of the SS7 tracking tricks.
Yeah.
Figuring out what cell site a roaming phone is associated with.
There's enough redaction and unclearness in this that
I wasn't clear if it was on-device
compromise, something like a Stingray, or some other
mechanism. The point is that this type of
surveillance has been democratized to the point where organized crime can use
it. I've had some fascinating discussions with people who are sort of
intelligence community adjacent about like,
you know, this is sort of a related topic.
If you wanted to put a human spy into somewhere like China,
you just sort of can't anymore without them knowing, right?
Because with one photo of that person on the internet,
you've got their identity, with things like facial recognition and whatnot now.
And if they don't have any trail on the internet, well, that's suspicious too.
So how do you even have a legend for... it's very, very hard.
So I mean, I'd imagine there would be some interesting recruitments to solve that problem
where you would really need to take people who have
established identities doing something quite benign and patch them in and make them spies.
I don't know, maybe that's one solution.
But how sustainable is that?
I don't know.
But look, the report is an interesting read and again, Tom's going to have some more coverage
on that tomorrow.
We got one here from Alex Martin over at The Record, which is
looking at how NATO members are being squeezed by the Americans to
increase defense spending to 5% of GDP, which does seem a little bit high, if I'm
honest. But what they're doing is they're taking like 1.5% so they'll do 3.5%
on core defense. 1.5% is gonna go to other stuff like, you know, infrastructure
security improvements, cyber, things like that.
So I'm guessing there's going to be a bit of a spending boom in Europe
thanks to this, which is like, sure,
yeah, we're spending 5% on defense, but it's not more, you know, planes and tanks.
Yeah, it's Microsoft 365 licenses.
It's like E5 and, you know, more CrowdStrike.
Yeah, yeah, yeah.
Which, you know, I guess, you know, whatever way they want to weasel it, but it's just,
it is a bit funny to go like, yeah, okay, we're going to classify our Fortinets or whatever
as, you know, defense spending when actually it's probably making it worse.
I was just wondering if oddly this could actually work out quite well, which is to sort of think
of this as national defense, because it's all pretty ad hoc in most countries, right?
When it comes to like the way civilian government systems in particular are defended, like,
you know, let's see how this plays out.
It could could turn into a really good thing.
Yeah, it's certainly possible.
It's not often we have glass half full moments on the show, but yeah, it's possible.
Like maybe the kind of coordinated purchasing
and sort of, you know, the process that would go along
with, you know, defense and air quotes, spending, you know,
maybe, I mean, despite all budget overruns
and all sorts of cost overruns that go into, you know,
normal military procurement, but like,
maybe that procurement process results
in better outcomes than ad hoc purchasing
or whatever it is that we do at the moment in the IT world.
So yeah, maybe.
Maybe.
Well, we live in hope.
Meanwhile, the United States government has sanctioned a bulletproof hosting provider,
according to this piece by Matt Kapko over at Cyberscoop.
It's called the Aeza Group, A-E-Z-A. They were linked to Lumma Stealer, Meduza, BianLian,
RedLine, a whole bunch of stuff.
I mean look, sanctions are nice.
I kind of prefer what ASD did, though, to a bulletproof host, which is to release the rm -rf
shark and just burn the whole place down.
I would much rather see them do that than this personally.
I mean, why don't we have both?
Why not sanction them and burn them down?
That would, you know, get the best of both. I mean, sanctions are good, but I feel like just actually
destroying their operations is just a better way to go.
Yeah, I mean, ultimately, those bulletproof hosting
providers are such an important part of that ecosystem.
So like, anything that makes people not trust
the bulletproof-ness is good.
And I guess maybe being sanctioned suggests
that other stuff may be happening to them
or that they are targets for that.
But it would be better if they just RMRF them off the internet.
I'm with you.
Yeah, yeah.
And meanwhile, this guy, Kai West, 25 years old, apparently known as IntelBroker,
was behind a bunch of hacks
and one of the Breach Forums, because apparently there's more than one, I don't track this
stuff very closely. He's been arrested.
Walk us through this one, Adam.
Yeah. So the name Breach Forums, there have been sort of a dozen different
forums over the years that have had basically the same name, the same design. And
every time one of the Breach Forums gets raided or shut down or seized
or whatever, it sort of splinters into two more, with some admins from the last one
or some high-ranked users from the last one starting their own fresh one.
So it's a bit of a mess. But yeah, this guy, I think he was British, and then the rest of
the admins of this instance of Breach Forums were Frenchmen, and they were running essentially
the same kind of thing as, like, Pompompurin was, even though unrelated, other than the
name and
vibes. But yeah, there was one kind of high-profile thing that this
particular Breach Forums was into.
Was it the 23andMe?
Maybe. There was something, I forget. It's hard to keep track of because the names
are all the same, many of the same people are on the different instances of the forums, and then they're
all getting arrested and talking smack about each other.
Either way, some people have been arrested, and I guess, you know, now we'll have another
five new Breach Forums run by different people.
Yeah. And Daryna Antoniuk over at The Record has reported on some arrests in
Spain: five people arrested over $542 million worth of crypto scamming.
So they apparently fleeced 5,000 victims worldwide.
I'm not really sure if this means they're operating some of these compounds in Asia
or whatnot, but either way, people doing bad stuff at quite a high volume and they've been
arrested.
Yes, I think a couple of them, three of them were in the Canary Islands, so like Tenerife,
I guess is a good place to run your scamming business.
But yeah, they-
I mean, if you're gonna do crime, right?
You may as well work from home somewhere nice.
Yeah, exactly, exactly.
But yeah, there was a bunch of mules that were
cashing out people's bank accounts via ATMs
and then depositing that, and then that was
getting sent onwards through cryptocurrency.
So it's kind of part of this money laundering merry-go-round, and then
all of the crypto investment scamming that was feeding that.
So yeah, I mean, $500 million is not a small operation.
No, it's not.
And just a reading list item, I guess, more than anything else.
James Reddick at the record has written a nice small feature about what life is actually
like for people in those scam compounds.
Basically a write-up of an international rights watchdog report into what life is like there.
They interviewed 58 survivors of those compounds in Cambodia.
Pretty grim reading.
Yeah, yeah. I think he links through to the Amnesty International piece.
And, yeah, it's just such hard reading, like, people being forced into criminal labour.
Some of the individual stories are pretty harrowing,
and there's a bunch of details about the compounds and the guards and the controls they have in place. So, yeah, it's not really an enjoyable read, but, you know,
this is the reality for a bunch of people who are stuck in these places, so yeah. All right, mate, well,
that's actually it for this week's news. Thank you so much for joining me. A pleasure to chat to you
as always, and we'll do it again in three weeks. Yeah, a lot of bad stuff could happen in three weeks, so I'm kind of looking forward to when
you get back. I'm sure there'll be something terrible
and enjoyable, and we'd love a good disaster, so yeah. Good luck, Internets, we'll see you in a few weeks.
That was Adam Boileau there with a look at the week's security news. It is time for this week's sponsor interview now, and this week's show is brought to you
by RAD Security.
Now RAD specializes in cloud security.
Formerly, like a couple of years ago, they were known as KSOC, for, I think,
Kubernetes Security Operations Center. I think that's what the acronym stood for,
but the point is they do a lot with Kubernetes right so they really know
Kubernetes they've done some really interesting work around like container
fingerprinting as well and they've done a lot over the last year in particular
around AI so that's what I wanted to chat with this week's guest,
Jimmy Mesta.
That's what I wanted to talk to him about,
which is like, what's the use case for AI
in cloud and in Kubernetes?
And here's what he had to say.
Cloud telemetry is vast and ephemeral
and it is generally noisy.
So AI can really start to help us make sense of
that. And a few different use cases that we're starting with would be the age-old vulnerability
triage problem, right? You have a thousand different AWS accounts, a sprinkle of GCP over here, you have many thousand different
images, packages, misconfigurations. It's a hard task to figure out what to work on
first and what matters the most and can actually prevent a breach. And we're finding that automating
those workflows with the help of AI can help teams get that done much faster,
at least the first 80%.
There's still work to be done,
but it takes that burden off of their plate.
So we're seeing a lot of success there.
I mean, so what does that look like, right?
So I guess one thing that's different in cloud world
versus, cause we are talking about vulnerability management,
but vulnerability management writ large, you know, a lot of the effort there, a lot of what you're
really trying to do is apply context, you know, the unique context of your environment to a finding,
right? And that's something that I'm a little bit skeptical of how well LLMs are going to do there.
I think it's a little bit less that case in cloud security
because at least there's some uniformity there.
But like you said, it gets you 80% of the way there
with some of these workflows.
Like in what way?
Can you make it tangible for us?
Like how does AI help in one of these sort of cloud-based
or cloud environment vulnerability management workflows?
Yeah, sure.
So depending on where the data is coming from,
we'll just pick on some of the sensors
we've built over the years.
You typically have a collection of SBOMs, right?
SBOMs are annoyingly kind of dense
and hard for humans to reason about.
So we've built tools over the
years, but it's still hard to take an SBOM and do something with it at scale.
So something that we've been able to use the SBOM for is essentially helping build a RAG
pipeline that stores the data. We can do similarity search with LLMs,
understand the ecosystem as a whole,
and say which of these workloads is most similar,
and help prioritize based off of how many times we've seen
that application running, what types of clusters is it in,
is it even live,
which packages are in use.
And then the LLM can help kind of rank order that.
And then we overlay traditional cloud posture data
on top of that to say, okay,
we know this workload exists 500 times.
We know that it's in a cluster that is in use, it's in production, and there's
some external IP addresses tied to these workloads. And then you just keep chipping away at the
context until you get to the point where the LLM can help you kind of rearrange the Lego pieces and give you
some actual insight on what to tackle first, second and third.
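The workflow Jimmy is describing, SBOM similarity plus posture context feeding a ranking, can be sketched in miniature. This is a toy with entirely invented data, using Jaccard similarity over package sets and a hand-rolled score in place of embeddings and an LLM:

```python
# Toy prioritisation sketch: represent each workload by its SBOM package
# set, find workloads similar to a known-bad SBOM, then rank by exposure
# context (copy count, production, external IPs). All data is invented.

def jaccard(a: set, b: set) -> float:
    # Set-overlap similarity between two package sets.
    return len(a & b) / len(a | b)

workloads = {
    "api-gateway": {"sbom": {"openssl", "log4j", "nginx"}, "copies": 500,
                    "prod": True, "external_ip": True},
    "batch-job":   {"sbom": {"openssl", "numpy"}, "copies": 3,
                    "prod": False, "external_ip": False},
}

def priority(w: dict) -> float:
    # Exposure-weighted score: more live copies, production status, and
    # internet exposure all push a workload up the queue.
    score = float(w["copies"])
    if w["prod"]:
        score *= 2
    if w["external_ip"]:
        score *= 2
    return score

ranked = sorted(workloads, key=lambda n: priority(workloads[n]), reverse=True)
assert ranked[0] == "api-gateway"

# Similarity search: which workloads share packages with a known-bad SBOM?
bad_sbom = {"log4j"}
hits = [n for n, w in workloads.items() if jaccard(w["sbom"], bad_sbom) > 0]
assert hits == ["api-gateway"]
```

In the product, embeddings in a vector store and an LLM would do the similarity and ranking; the toy just shows why the posture overlay changes the ordering.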
Yeah.
I mean, I guess that's what I was getting at when it comes to cloud world because it's,
you know, usually you're protecting a production environment, and often the context can be as
simple as: is it vulnerable, and is it on the internet?
That's a priority, right?
So yeah.
Yeah.
But there's more elements that I think adding LLMs
can help us with on that prioritization versus, you know...
Well, it helps you with the is-it-vulnerable problem, right?
When you're actually looking at the SBOMs,
because you might not immediately be aware that you've got some, you know...
like, Log4j is the example that everybody uses, right?
Yeah.
If you've got those SBOMs and you've got LLMs understanding them, you're going to
know where you've got a vulnerable library. Exactly. And another example that we've had
success with too has been with our eBPF telemetry that we've had for a long time, we are able to see
and inspect low-level kernel information,
so system-level information, process trees, files that
are created, network connections.
So we've had customers who use that data,
and the LLM interface that we have,
they can generate basically firewall rules
based off of a 30-day average of traffic, right?
Actual, you know, HTTP requests that we've seen. And they can start to build
policies proactively that match all of that data that we've seen.
And an LLM is a lot better at that than doing it manually.
It's a hard task.
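The "policy from 30 days of observed traffic" idea can be sketched as a toy: collapse observed flows into a deny-by-default allowlist. The flow records below are invented; in the product the data would come from eBPF telemetry, with an LLM drafting the policy text:

```python
# Toy sketch: derive firewall allowlist rules from observed traffic over a
# window. Flows seen only once are treated as anomalies and left out, so
# rare oddball connections don't get baked into policy.

from collections import Counter

# Invented (src_workload, dst, port) flow records from the window:
flows = [
    ("payments", "db.internal", 5432),
    ("payments", "db.internal", 5432),
    ("payments", "api.stripe.com", 443),
    ("payments", "198.51.100.7", 6667),   # one-off oddball connection
]

counts = Counter(flows)
# Only allow tuples observed more than once over the window.
allow = sorted(f for f, n in counts.items() if n > 1)

rules = [f"allow {src} -> {dst}:{port}" for src, dst, port in allow]
assert rules == ["allow payments -> db.internal:5432"]
```

The threshold here is a crude stand-in for the judgement an LLM (or a human) would apply when deciding which observed flows are legitimate baseline traffic.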
Yeah.
I mean, but we've been doing that sort of stuff with like machine learning classifiers
for quite a long time.
Like what's the advantage of an LLM there?
I guess, often when people talk to me about things like this, the advantage
of an LLM is that engineering that thing, part of it, is a lot easier when you're using an
LLM.
When you're using, yeah.
You can move faster.
Like, ML models need to be maintained over time.
And I think the biggest advantage is just people like using natural language for this sort of thing.
Like, humans have imperfect questions.
And I think the way we've built, at least, our chat interface, it's very much
kind of an interview, where the chat will
say, did you mean this?
Do you want to see that?
And then you can keep adding more context.
And ultimately, you don't have to have
any special query language or anything on top of it.
The LLM can handle that.
Yeah, I mean, I think vendor-specific scripting
languages are dead, thankfully.
Thankfully. Thank
God.
That was a long era.
Yeah. And even before stuff like ChatGPT existed, you know, I'd do interviews like
these, and they'd be like, yeah, so we've got this great
query and scripting language. And I'm like, oh, God, no. You know? No, no more of that. Yeah.
It's not necessary now, right? And like everything can be kind of translated
into whatever language you want through that prompting.
And then, you know, one more thing we have released too
that helps add a little more context is a knowledge base.
So we've had customers who, you know, you spend years building internal policies, you've
done, you have architecture diagrams, you have existing tickets from the past, you have
pen test reports, SOC 2, whatever these artifacts are, they're vast.
And when it comes to cloud security, another kind of dimension of
priority is like, does it matter to me?
Right?
Like, um, it's one thing to say, is it connected to the internet?
Like everyone should care about that, but maybe you have a policy or some,
something you're trying to protect that's buried in this sea of data.
And the LLM is also pretty good at saying, hey, you have these vulnerabilities and they're
even a bigger problem because you have a FedRAMP deficiency or something.
I made that up.
But there's ways to pull in internal data that can really make that effort even better.
So we're excited about that too.
Now, one thing I wanted to sort of query you on
is it feels like we're in a pretty different place now
than we were a year ago when it comes to the acceptance
of AI actually in enterprises, right?
Like, a year ago, people were like, I don't know, man,
I don't wanna let ChatGPT loose in here causing damage, you know? And fair enough, too. But it
sort of feels like it's matured a bit, and there is just this growing acceptance
where people are willing to give it a go, right? They'll have a look at
it. They'll say, oh, okay, cool. And they'll try it. Is that sort of your understanding
of where we are as well when it comes to people using AI to do stuff like this?
Yeah. I mean, we started a year and a half ago, a little more than that,
using it for our core runtime detection product. And I would say at that time, it was like,
we're curious, but, you know, we're going to have to go through some hoops to do this. And now,
maybe it's a mix of people just threw their hands up and said, like, our team's
going to do this anyways, and we might as well pick the right one.
And I think everyone's using AI for their day to day, and they just got used to the
way it works.
Legal teams, there's still pushback for sure, but there's more of an ecosystem to run your own models,
to tie into somebody else's AWS Bedrock or something like that, where you can kind of defer
the risk and put it back on the person using it. And we've seen some of our customers do that,
where they said, we love it, but you have to use our model. And you're like, okay, right.
And we have a way to do that.
Yeah.
And that's got some sort of weird policy and governance framework applied to that model.
And that's what helps them work well.
But doesn't that make your life harder? Because that model might behave in
interesting, unexpected ways.
Yeah.
We've kind of taken the line of, like, you can't just pick any random model or you're not going
to have the same tool that all of our other customers do.
So we have kind of a list of supported, more frontier models that help with that.
Yeah.
Another question I've got though is like so many vendors I'm talking to, their customers
are like, okay, but we need to run this standalone model, right?
Like nothing we do with this model can be used to train, you know, the model.
So how does the model get better in a situation where everybody's running
private models?
This is something that I worry could be a
bit of a, like, a pain in the you-know-what for people like you who are
trying to improve their products, and you just get carved out of that
visibility by legal.
Yeah.
And, yeah, it reminds me of something everyone forgets: even ChatGPT, not that long ago, all the data in it was a year old or something.
And I think for us, we've leaned into two things. One is that token counts are rising, so you can start putting more and more into a prompt, essentially. That's helpful. But also RAG and vector databases, giving the model unique context that way. It's not the same as retraining a model and publishing it, but at least it's a way to make it much more specific to what the customer wants.
And it's unique, right? And you can separate it through tenancy.
Yeah.
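The RAG-with-tenancy idea they're describing can be sketched very simply: each tenant gets its own retrieval namespace, so the context stuffed into a prompt only ever comes from that customer's own data. This is a toy sketch under stated assumptions, not their product; bag-of-words cosine similarity stands in for a real embedding model and vector database, and all function names are hypothetical.

```python
import math
from collections import Counter, defaultdict

# Toy per-tenant RAG index: each tenant's documents live in their own
# namespace, so retrieved context never crosses tenant boundaries.
_index: dict[str, list[tuple[Counter, str]]] = defaultdict(list)


def _embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words token counts.
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def add_document(tenant_id: str, text: str) -> None:
    """Index a document under one tenant's namespace only."""
    _index[tenant_id].append((_embed(text), text))


def retrieve(tenant_id: str, query: str, k: int = 2) -> list[str]:
    """Return the k most similar documents, restricted to this tenant."""
    q = _embed(query)
    ranked = sorted(_index[tenant_id], key=lambda d: _cosine(q, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]


def build_prompt(tenant_id: str, question: str) -> str:
    """Prepend the tenant's retrieved context to the question."""
    context = "\n".join(retrieve(tenant_id, question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The point of the separation is exactly what's said above: the base model is never retrained on customer data, but each tenant's prompts are enriched with context only that tenant could have supplied.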
But how do you know? This is the problem. How do you know when they've done something cool, where you'd want to make changes across all of your customers because they figured out something cool, and you can't see it?
Yeah, I don't think that's solved yet.
Yeah.
Other than them sharing it, which isn't going to be the case all the time.
So you know.
Have you had that issue, though, where people have bumped into a corner case where it's doing something weird? Working through that without that inspectability, I imagine, is just a bit of back and forth, right?
Yeah, we drew a pretty hard line: we don't look at prompts, we don't see anything that's really happening. So it has to be a conversation, basically, because, you know, we're a security company. We would just jump on a Zoom and try to reproduce it. Or we have ways to send kind of generic reports back to us, but I don't think that's totally solved yet.
And these systems are so unpredictable at times, you couldn't reproduce it if you tried.
Yeah, that's what I was going to ask. Like, how does that conversation go?
Cause you're like, oh, you jump on a Zoom call,
try to reproduce it, and the model just won't.
You're like, no, it worked this time.
You're like, well,
I'll tell the model to do more of that, I guess.
I don't know.
Yeah, yeah, yeah, exactly, exactly.
All right, Jimmy Mesta, thank you so much for joining us
to talk about where AI is
when it comes to cloud security. Very interesting stuff.
Yep, thank you.
That was Jimmy Mesta there from RAD Security. Big thanks to him for that. And that is it
for this week's show. I do hope you enjoyed it. I'll be back tomorrow with the Seriously
Risky Business podcast in the Risky Bulletin podcast feed. And then I'm off for a couple
of weeks. But yeah, until then, I've been Patrick Gray. Thanks for listening.