Risky Business #830 -- LiteLLM and security scanner supply chains compromised
Episode Date: March 25, 2026

On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They talk through:

TeamPCP's supply chain attack on GitHub, and they threw in an anti-Iran wiper, because why not?!
Anthropic hooks up its models to just… use your whole computer
After Stryker's Very Bad Day, CISA says maybe add some more controls around your Intune?
Another iOS exploit kit shows up in the cyber bargain-bin
The FCC decides to ban… all new home routers?! U wot m8?!
Supermicro founder was personally sanction-busting Nvidia GPUs into China?!

This week's episode is sponsored by enterprise browser maker Island. Chief Customer Officer Bradon Rogers joins Pat to explain how its customers are using Island to control the use of personal AI services in regulated industries.

This episode is also available on Youtube.

Show notes

'CanisterWorm' Springs Wiper Attack Targeting Iran
TeamPCP deploys CanisterWorm on NPM following Trivy compromise
Andrej Karpathy on X: "Software horror: litellm PyPI supply chain" attack
Checkmarx KICS GitHub Action Compromised: Malware Injected in All Git Tags
Felix Rieseberg on X: "Today, we're releasing a feature that allows Claude to control your computer"
A Top Google Search Result for Claude Plugins Was Planted by Hackers
Lockheed Martin targeted in alleged breach by pro-Iran hacktivist
CISA urges companies to secure Microsoft Intune systems after hackers mass-wipe Stryker devices
FBI seems to seize website tied to Iranian cyberattack on Stryker
Stryker confirms cyberattack is contained and restoration underway
Hundreds of Millions of iPhones Can Be Hacked With a New Tool Found in the Wild
Someone has publicly leaked an exploit kit that can hack millions of iPhones
Russia-linked hackers use advanced iPhone exploit to target Ukrainians
Apple rolls out first 'background security' update for iPhones, iPads, and Macs to fix Safari bug
Post by @wartranslated.bsky.social — Bluesky
Signal's Creator Is Helping Encrypt Meta AI
Hacker says they compromised millions of confidential police tips held by US company
Millions of 'anonymous' crime tips exposed in massive Crime Stoppers hack
Feds Disrupt IoT Botnets Behind Huge DDoS Attacks
FCC bans import of consumer-grade routers amid national security concerns
White House pours cold water on cyber 'letters of marque' speculation
Google launches threat disruption unit, stops short of calling it 'offensive'
Supermicro's cofounder was just arrested for allegedly smuggling $2.5 billion in GPUs to China
Cyberattack on vehicle breathalyzer company leaves drivers stranded across the US
Man pleads guilty to $8 million AI-generated music scheme
Two Israelis AI generated "intelligence" and sold it to Iran
Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray. We have an absolutely insanely jam-packed show for you this week. We've got a whole bunch of news to get through in just a moment with Adam Boileau and Mr James Wilson. And then later on in this week's show, we'll be hearing from this week's sponsor. And this week's show is brought to you by Island. And Island, if you are unaware, is an enterprise browser built really with the sort of features you'd imagine an enterprise browser should have. So it's heavy on DLP, you can do things like per-domain file restrictions and whatever.
You get incredible visibility into what your workforce are doing.
Like, it's just like there's a lot you can do with it.
You can do sort of like pretty secure app delivery and whatever web app delivery.
So yeah, there's a lot you can do with it.
But Bradon Rogers of Island joins us this week to talk about how their customers are using
it to restrict the sort of unsanctioned use of AI.
You know, personal AI accounts and stuff.
This is a problem for a lot of enterprises.
So, you know, with Island, you can see when someone is, like, using a personal account with, like, ChatGPT instead of the corporate account, for example, because you have that visibility in the browser.
So he'll be along to chat about that a little bit later on.
But let's get into the news now.
And goodness, there is a crew called Team PCP that we've just watched over the week, gradually causing more and more havoc.
Adam, let's start with you on this one.
Tell us about Team PCP and what they've been up to.
Yeah, so this group has been out compromising a bunch of like, you know, supply chain stuff on GitHub.
They were involved.
There was an attack on a security scanner called Trivy last week sometime.
And then a subsequent security scanner from Checkmarx, both of whom had their GitHubs compromised.
But once the attackers were in here, they were dropping, you know, credential-stealing malware.
And then there's sort of been this process of watching them evolving their tooling,
kind of in real time as they've been, you know, deploying it out to people.
And it kind of feels like they're using a bunch of AI to build it, like, build the plane
as they're kind of flying it.
But they seem to be doing it at kind of scale and a speed that, you know,
whilst blunt is pretty effective.
And then we saw them drop, alongside the credential-stealing parts, we saw them drop, like, a wiper that targets specifically Iranian machines as well.
So that was, you know, another kind of aspect of this story that's been pretty wild.
We're going to do some cyber war as well.
Why not?
You know?
Why not?
Why not?
Throw that in.
Yeah, the wiper targeting Iranian machines seems to be just a bit of, like, a weekend project of, hey, we've got all these creds.
Why don't we see if we can write a little script that's going to mess with the Iranians?
Like it's like a side quest.
It totally was.
And, like, I'd have a hard time calling it a wiper, because it was just literally a script that was: if in this time zone, or if running Farsi, rm -rf. And nothing special to it.
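For a sense of how trivial that kind of targeting gate is, here's a minimal Python sketch. The actual script's code isn't reproduced in the discussion, so the function name and exact checks here are illustrative, and the destructive payload is deliberately omitted:

```python
import locale
import time

def targets_iran(utc_offset_seconds: int, lang: str) -> bool:
    """Illustrative targeting gate: 'fire' only if the machine looks Iranian.

    Parameters are passed in so the check is testable; a real script would
    read time.timezone and locale.getlocale() directly.
    """
    # Iran is UTC+3:30; time.timezone reports seconds WEST of UTC, so -12600.
    return utc_offset_seconds == -12600 or lang.lower().startswith("fa")

if __name__ == "__main__":
    lang = locale.getlocale()[0] or ""
    if targets_iran(time.timezone, lang):
        print("would run destructive payload here")  # deliberately omitted
    else:
        print("not a target")
```

A dozen lines, no exploitation required: once you're already executing inside someone's CI via stolen credentials, "cyber war" is just an if-statement.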
So what are they doing?
Look, they've heavily focused on supply chain.
That's their bread and butter.
That's how they get the credentials.
And then what they do with it seems to be at the whim
of whatever takes their interests on any given day.
The thing that happened today, or at least in the last sort of 48 hours,
is, as Adam mentioned, they went after Trivy recently.
Then they pivoted and went after Checkmarx, not the main Checkmarx product, but one of their products, I think it's called KICS. It's essentially their infrastructure-as-code scanner, which is not as, you know, prevalent as their main code-scanning SaaS sort of thing.
But nevertheless, it's the same pattern.
Compromise an open source repo that's producing GitHub Actions, put malicious code in that GitHub Action. GitHub Actions is such a disaster in terms of security. Millions and millions of workflows just say, grab this image and use it. And so they pick it up, credential-steal the runs, and off they go.
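For context on why that "grab this and use it" pattern is so dangerous: most workflows reference third-party actions by a mutable tag, which whoever controls (or compromises) the action's repo can repoint at will. Pinning to a full commit SHA is the commonly recommended mitigation. A workflow sketch showing the difference, with hypothetical action names and a placeholder SHA:

```yaml
name: scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      # Mutable tag: whoever controls the action's repo can repoint
      # "v3" at malicious code -- the attack pattern described above.
      - uses: some-org/some-scanner-action@v3

      # Pinned: resolves to exactly one immutable commit.
      # (The 40-character SHA below is a placeholder, not a real commit.)
      - uses: some-org/some-scanner-action@0123456789abcdef0123456789abcdef01234567 # v3
```

SHA pinning doesn't help if the action was already malicious when you pinned it, but it does stop the "tag quietly repointed after compromise" variant that bit these scanners' users.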
Fascinating thing they did today was they took those credentials they'd gotten from the Checkmarx GitHub Action run somewhere, and realized they'd gotten enough credentials to compromise LiteLLM, which is a heck of a package to compromise. That's like 90-plus million downloads a month. So it was only live for about an hour, but if you managed to do a pip install, or, more to the point, something else you were installing grabbed that as an indirect dependency, all your credentials got snapped. But again, like, what are they doing with it? We don't know. Sitting on a massive pile of credentials and, you know,
wrecking balls still to come. Yeah, I mean, it says they're grabbing crypto wallets like where they can
and whatever. So, I mean, I think it's like, yeah, I think this crew just seems to be going out
there collecting as much access as they can. If some crypto wallet keys happen to fall out of
whatever thing they've owned, then that's great. But there doesn't seem to be any sort of clear
like criminal strategy here. It's not like it's a ransomware crew trying to do X, Y, Z.
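On the defensive side of that indirect-dependency exposure: pip has a hash-checking mode that refuses to install anything whose archive digest doesn't match a pinned value, and it forces every dependency, direct or indirect, to be pinned too, which is exactly what would have blocked a trojaned release slipping in for an hour. A sketch of a hash-pinned requirements file; the package versions and digests below are placeholders, not real values:

```
# requirements.txt -- install with: pip install --require-hashes -r requirements.txt
# In --require-hashes mode, pip errors out unless EVERY dependency,
# direct or indirect, is listed with a matching hash.
litellm==1.0.0 \
    --hash=sha256:<expected-digest-here>
some-indirect-dependency==2.3.4 \
    --hash=sha256:<expected-digest-here>
```

The trade-off is operational: you have to regenerate the pins on every upgrade, which is why tooling-generated lockfiles are usually how people actually live with it.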
Well, I didn't even mention the wallet stealing, because frankly, that seems to be just par for the
course for anything, right? That's like your base level activity you do now to make sure you can
continue your efforts. But what you're doing novel on top of that,
not clear. The one novel thing on this, though, is, and we all sort of nerded out about this because we discovered it together, was this use of this thing called the Internet Computer Protocol, which, when I looked at it, I was like, how can something be called that in today's
day and age? And then even then you go and look at it and you're like, oh my God, not only is it
called internet computer protocol, it's a blockchain thing and it essentially allows like
bulletproof hosting on the blockchain as well as, as long as you're willing to dip in a little
bit of cryptocurrency along the way. See earlier comment about crypto wallets being par for the course.
This is how they ran their C2.
As exciting and weird as that is,
I also would not be surprised if they said to an LLM,
I need a really novel way to store my C2.
I don't want to pay for bulletproof hosting.
What else you got out there?
And it said, I've heard of this thing.
Why don't you get to use that?
Exactly, right?
Like, it is a deep cut that an AI model would absolutely suggest, right?
Totally.
Yeah, 100%.
Now, speaking of AI models,
you know, a big thing that's come up over the last couple of months,
and we've talked about it at length on the show is OpenClaw.
And in fact, OpenClaw is now used by you, James,
to do various tasks for risky business.
So, you know, we got a little lobster on the team.
That's fine.
You know, we keep it away from anything sensitive.
Don't worry.
But it looks like Claude, you know, Anthropic got the message,
and now they've released a similar sort of thing for Claude, right?
So you can now control your computer with Claude.
Unlike OpenClaw, though, this will actually run in the cloud,
and they've got like a little agent that can run your machine to enable it to, you know,
move the mouse around to do whatever it needs to do.
But like I feel like this is going to be, this is going to provide us with a fair bit of
content for the next six months.
I feel like like probably Anthropics going to do a better job of wrapping some guardrails
around this.
But I just wanted to get your feelings on this, James, as to how this is going to go.
It's such a wild ride.
I mean, you know, OpenClaw, yes, I use it, but I keep it on a separate VM, separate machines,
all the separation and I still feel nervous about it.
And, you know, there's a couple of different ways to use OpenClaw.
There is use it on a separate machine.
Good. Use it on your main machine.
Not so great.
Use it on your main machine and allow it to be remotely controlled through telegram or other messaging.
Well, then you're really on the frontier.
And then, you know, Anthropic turns around and says, well, hold my beer.
We're going to productize this for the masses.
So over the last couple of days, they released things like that.
I think they call it dispatches, which is essentially a way to talk back to the Claude instance running on your local machine, which I would find handy, right?
I might be out and about and think,
hey, there's that feature I want to go on creating this repo.
Can you get a head start on it?
That's fine.
I love that.
But then they just went to the nth degree and said,
actually, you can now chat to your home computer
and that home computer will have full computer control
and do things like, hey, I'm going to be late for the meeting.
Can you find that PDF document that Charlie's waiting for and email it to him?
And I'm just like, that's not safe.
That's not good.
Yeah, so the question is like, well, the question is, can this be secured to an acceptable degree?
The answer to that is it has to be.
And it's going to be the definition of acceptable that changes, not the actual security threshold of that product.
But Adam, well, I mean, you know, is that the feeling you get, mate?
When you look at this, is that, well, we're just going to have to tolerate these things occasionally doing unpredictable and damaging things on our endpoints?
Or do you think that Anthropic has a reasonable,
opportunity here to make this thing behave sensibly.
I just, what do you think?
I'm kind of torn in a way because on the one hand,
like this is kind of what we expected Microsoft to do with copilot, right?
Was to just go buck wild, bolt an LLM into the core OS and just let it go crazy on your
stuff.
And clearly Microsoft, for all of its faults, is not quite that crazy.
And so then somebody else is coming on and doing it as an aftermarket product,
you know, there's a degree of, well, at least it wasn't them. Like, Microsoft can say, well, hey, we didn't do it.
And then Claude, you know, Anthropic will, you know,
kind of try and do a competent job because it is their core business
in a way that it isn't, you know, wouldn't be for Microsoft.
But the whole concept is just terrifying.
And, you know, to think back to how we felt when MCP started to become a thing,
right, we're like, oh my God, we're going to let the AI, like,
make web, you know, HTTP requests.
Oh, my God, how terrifying.
And now we're just going to let it pointy-clicky around your desktop, like, image-recognizing, you know, a VNC stream
of your computer? What kind of mad world is this? And yet, you're right, like, they are just going to have to make it good enough, because people are going to use it and it's going to go horribly wrong. We're going to get some great fodder for, you know, for talking about terrible things. But it's just going to be wild, you know. And, like, there's no way to make it okay, so it's just going to have to be okay enough, right? Because, well, that's kind of where I get to: our definition of what's okay is probably going to have to change.
And sadly, this is just how it is.
It's funny, actually, James, you're working on a podcast at the moment where you are actually
having a conversation with your Claude bot, with your OpenClaw instance, about trying to get it to recreate the Karuna exploits.
I've heard some of that.
That is going to be absolutely hilarious.
When's that one getting published?
I would like to get that out this week, Pat, and I've got to tell you, when I sent the
demo last night, I was sure you were going to absolutely hate the concept.
And so when you said, I think this might work, I was like, okay, let's get this going.
But yeah, like, you know, just as Claude created a C compiler by saying,
hey, here's a working reference,
and I've deliberately removed a part of it,
can you recreate it?
And I thought, why not do that with the exploit kits?
Let's take certain elements of it
and see if an LLM can actually reason with it
and recreate it from scratch
and maybe even help us, you know,
find angles to deal with parts
that have been patched out of the OS.
I don't know.
It's interesting to see what we've come up with.
Yeah, that's right.
So can you cajole OpenClaw into being your exploit writer
to complete, you know, once you remove part of the Karuna chain, can you get it to actually fix it?
So interesting experiment, can't wait to hear the whole thing.
It's sort of very strange hearing you have a conversation with a clanker that you have, you know,
animated with, like, an ElevenLabs voice. It's like the whole thing's very weird.
Yeah, it's it's sort of oddly compelling, kind of creepy, like you'll see what I mean when we published that one.
Real quick, too, just to make sure, everybody: if you do want to rush out and install the latest Claude tools, make sure you are actually getting them from the correct Anthropic website. Because, people, this is something we've spoken about on the show with the team at Push Security, and now there's a 404 Media piece about it as well, which is, there's a lot of malicious SEO, malicious Google ads for Claude that aren't Claude. So you go to what you think is a Claude download page and it's actually an install-fix thing that gets you owned. So I'm guessing most people who are listening to this podcast are not going to get tricked by that. But they are spending the money on the malicious ads because they are getting shells with it.
So that's something to keep in mind.
We've got a report here from Cybersecurity Dive saying that Lockheed Martin was targeted in an alleged breach by pro-Iran hacktivists.
Eh, you read this story.
It doesn't seem that compelling.
These people claim to have stolen 375 terabytes of data from Lockheed Martin, including what they're calling the blueprints for the F-35 aircraft.
Look, I'm guessing most of the sensitive blueprints for an F-35 aircraft are not going to be lying around on an internet-accessible LAN.
This whole thing smells highly suspect.
Adam, you had to look at this this morning and felt the same way that I do.
Yeah, the story that Cybersecurity Dive links to, which is kind of the upstream source, is pretty thin. Like, there's almost no detail.
Basically, we have a telegram post claiming some stuff and that's about it.
Like, I mean, the only response from Lockheed Martin that anyone's got is that they are aware of the claims, and that's about it.
But it just doesn't really pass the sniff test.
And, you know, maybe they've got some related documentation or something.
But, you know, the idea of it just being, like, these blueprints in an S3 bucket that they're going to find lying around doesn't seem super credible.
Yeah.
Yeah.
We do have a fair bit to get through on the Iran stuff, though.
First one of them, or next one of them, is that CISA is urging companies to maybe have, like, dual-key controls on, like, sensitive operations in Intune after this Stryker breach.
And this seems like good advice.
We actually heard from a listener, Matt Flanagan, who's a, you know, Australian security guy
who's been kicking around the industry for a while.
And, you know, he wrote to us and said, look, you know,
CISA's announcement here is on point, because it's something like, if you're an admin and you haven't done this, it's like a nine-click operation to go and just wipe everything.
And that doesn't seem real smart.
I mean, it's funny, actually, James, because when we first spoke about this,
like, your immediate reaction was, like, why is it possible to do this so easily with Intune? You know, maybe they should time-lock it or something.
But, I mean, you know, that there is a, you know, dual admin control for operations like this
makes sense. It's just no one seems to know that it exists. Yeah. Like it's, like you said,
it's really good advice from CISA. And Brad Arkin and I sat down and talked about this on the most recent Risky Business features episode, around, you know, even if you do turn on this, like, you know, dual-key approver or some sort of, you know, extra step in the way.
Like as not, they're still going to find a way around that, right?
There might be like an API way to do it, a PowerShell way to do it.
Maybe it's just another phish that they've got to do to get the second key to turn.
The point is, though, don't think about turning things on like this as a complete mitigation.
Think about this as just one of many things you can do to make it harder for the attackers.
If it's harder for the attackers, they've got to spend longer in your environment.
That will be emitting more signals, and that's how you can catch them.
Because that's where Stryker failed here.
Nine clicks, bang, gone, so fast, no hope of catching it. So, you know, good advice, but just remember it's not, you know, a complete resolution. It's just
yet another way to make your environment a harder place for attackers to actually get stuff done.
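The dual-key idea being discussed can be made concrete with a toy sketch: a destructive operation that refuses to run without sign-off from a second, distinct admin. This is a generic illustration of the concept, not how Intune's approval feature is actually implemented:

```python
class ApprovalRequired(Exception):
    """Raised when a destructive action lacks a second approver."""
    pass

def wipe_fleet(requested_by: str, approvals: set[str]) -> str:
    """Toy dual-key gate: requires sign-off from an admin other than the requester."""
    # The requester's own approval never counts toward the quorum.
    others = approvals - {requested_by}
    if len(others) < 1:
        raise ApprovalRequired("need sign-off from a second admin")
    # The destructive action would happen here; we just report the authorization.
    return f"wipe authorized by {requested_by} + {sorted(others)[0]}"
```

As the discussion notes, an attacker who can phish one admin can often phish a second, so a gate like this raises cost and dwell time rather than eliminating the attack, and that extra dwell time is where detection gets its chance.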
Yeah, and just a note, too, we keep talking about these podcasts James is doing in risky business
features. You can go and find that feed in your podcatcher. Just search for risky business
features. That's our new podcast channel. Do subscribe to it. It would be great for us,
like you would be really supporting us and supporting James's work. If you do just go out right now,
now to your podcatcher, type in Risky Business features, or head to risky.biz, find that feed and subscribe to it.
Adam, I'm guessing your take here is going to be broadly similar.
Yeah, much the same.
I mean, you know, ideally you would have as good controls as you can possibly have around it, you know, multiple people involved, obviously a useful thing.
You know, the question is, if you're already an admin in Intune and the only thing standing between you and wiping everything is, you know, another Intune admin, then the path to get one is probably not that dissimilar from the path to get a second one.
And being able to leverage, you know, between Active Directory and Azure AD, you know, to follow that path doesn't seem super unlikely.
And I would have said that, you know, in this case, the attackers, maybe they landed on Intune and they got lucky and they were able to pointy-click their way through.
And maybe they didn't have the expertise to go around these controls.
But with LLMs these days, like, you know, navigating through the Microsoft ecosystem, you know,
now you just ask Claude to help you out. It'll probably find a way, you know.
It's funny, man.
I'm remembering actually an IRGC linked attack against the Australian Parliament years ago.
They actually managed to get like directory access of some kind, like it was read only
directory access or something, but they got a bunch of stuff that like was not good for
them to have.
And sources inside the Australian government actually said, man, that they just happened to get
creds for a misconfigured account and they were really lucky because there weren't many accounts
with that misconfiguration.
They just like absolutely got so lucky.
And I think, you know, I mean, this,
James and I, we did an interview with the team at SpecterOps,
actually about, you know, attack path management and AI
and all this sort of stuff.
And it just really is the case that, yeah,
there's always going to be those misconfigured accounts, right?
There's always going to be that one that you can hit
that gives you that path to something like Intune.
Stryker, meanwhile, has lodged a bunch of documents with the SEC. You know, their 8-K filing says they're not sure if it's material yet and they're rebuilding.
They've described the attack as being contained, which I'm guessing means that they have evicted
the attackers from their environment.
James's joke this morning was what environment.
Their environment is gone.
Their environment was deleted.
So, you know, they evicted themselves.
But yeah, it looks like rebuilding has begun and hopefully they are back up and running soon
because they're a very important company to the medical supply chain.
Now, as Tom Uren and the Grugq put it in this week's Between Two Nerds,
again, one of our other podcast feeds.
You can subscribe to the risky bulletin feed if you want to listen to that podcast,
but they did a podcast this week about how it is currently raining iOS exploit kits,
which is not something I think we've ever said before.
I'm thinking of like a George W. Bush meme here,
which is Sir, a second iOS exploit kit has hit GitHub.
But, you know, James, you've obviously been following this one very, very closely.
This one is called Dark Sword.
Looks like it is being used by the same crew who are using the Karuna exploit kit.
But the origin of the kit itself, it doesn't look like it was also L3Harris Trenchant.
It looks like it came from somewhere else.
But, I mean, seeing one of these kits in the wild is pretty crazy.
Seeing two of them in a couple of months is insane.
Yeah, it's just like the most incredible coincidence, I guess.
But, you know, in saying that, I guess not surprising because you're right, it does have the same hallmarks of Karuna insofar as this was clearly once something that was a very prized set of exploits that then sort of went on to a secondhand market for some shady work with Russians targeting Ukrainians.
And then ends up in like this almost like the dollar store of exploit kits that it just gets snapped up and looks for crypto wallets.
You know, it's like the saddest end for the most advanced code. Yeah, I think in a previous podcast of Tom and the Grugq they described it, or was it Tom and Amberleigh in Seriously Risky Business? Tom described it as, like, you know, what was once a mint, like, mid-1980s BMW M3 is now being used as a paddock basher, you know, driven around on a farm by a bogan. You know, it's just really, it's just a really sad end for something that was really beautiful once upon a time. Yeah, exactly. And the great thing from that was it spawned an argument between you and I as to whether that was the E46 BMW or the E30 BMW. I think we can probably say the Karuna was the E30, and this is more like your E46, but in terms of actual...
Laughs in car guy.
I'm just going to sit here looking blank.
It's like, I got no idea about something.
Yeah, there's about like 10% of the audience got that,
but I, you know, I'll allow it.
Go on.
So just in terms of the tech behind this,
it's like it's almost like the same playbook,
and I found that fascinating insofar as, like,
if you zoom out of what Karuna did, Dark Sword does the same things.
It's latent bugs in WebKit, gets you your read-write primitives,
escalate up into your sandbox escapes, then you've got access to talk to the kernel,
get your privesc.
The one thing in this that I'm still pulling the thread on that I'm curious about
is the really fascinating thing about Karuna was its use of undocumented registers
in the arm processor to get basically an arbitrary read into memory
that was otherwise protected by the page protection layer.
There's no mention of that.
in this Dark Sword kit.
And so I'm kind of curious as to
what did they do that didn't require that?
So it's kind of interesting.
Yeah, there was some notes in it about, like, some kind of user-mode PAC bypassing.
But then there was no real specifics.
Although I guess we've got the code down
so we can go dig in.
But yeah, that also struck me as like,
how did this work?
Is it similar?
Because like it did feel like
some work has been done on this kit
after it was sold or shared around
because I think Google's write-up
has a bit of detail about, you know,
where different,
users have bolted on extra stuff, you know, to meet their own particular needs or to solve
bugs. And there's some like examples of maybe the code got fixed at some point. And then you can
sort of see the lineages, like the family tree of users of this code, sort of starting to diverge
a bit. So anyway, there's some interesting nuance in here. And I guess as people dig through it more,
we'll discover more of the like, you know, nuggets of juicy detail that, you know, really
only tragics like you and I would care about. But, you know, are still super interesting.
I think, I think a nice thing here, though, like the silver lining here, is, you know,
is that for people who are interested in learning a bit about exploit development,
now there's a couple of repos here with some really good stuff in it
that they can use to just get an idea of how all of this stuff works.
And I don't think that's necessarily a bad thing.
I think it's good for more people to understand the way this sort of stuff works.
I should also clarify too, because I used a term there that is extremely Australian,
which is paddock basher, a paddock being a field in a farm,
and a paddock basher is typically a car that is like no longer suitable to be registered on public roads.
so you just drive it around the farm and bash the crap out of it.
That's why we call it a paddock basher. And yes, these are the paddock bashers of exploits at this point.
It's like so sad.
And yeah, being used to target Ukrainians and then onwards to target like Chinese cryptocurrency users and whatever.
Just very, very sad.
So we've linked a bunch of stories about that in this week's run sheet.
This one is very relevant to you, James. Apple, according to TechCrunch, has rolled
out its first background security update for iPhones, iPads, and Macs to fix a safari bug.
Apple has previously rolled out one of these background fixes to MacOS.
And I guess the reason it's relevant to you is one of the first projects you ever worked on at Apple
was actually like writing this bit of MacOS to do these updates.
Yeah, I managed the team at Apple that looked after the Mac App Store amongst a few other things.
And this was back in the day when software updates were done through the Mac App Store, right?
It feels quaint to think back to that time.
You used to launch the Mac App Store to do your OS updates.
And so we were basically the UI that was driving the software update code behind the scenes.
And one of the features that I personally wrote was this thing called Code Red,
which allowed Apple with a specially crafted and very highly guarded software update configuration,
we could push something out that would be silently deployed and installed on MacOS machines.
Now, at the time, that was only for MacOS, and I'd left Apple before it was first used.
And you're right, it was used in 2019 to basically patch Zoom, when Zoom released a version that had an open web server that could arbitrarily launch apps. And so it was fun to see my work get used eventually, after I'd left there.
So seeing this story this week is interesting to me because it sounds like that little simple
thing that I created has been evolved significantly since then because the original version I did,
I don't think it would have gracefully handled installing stuff that needed a restart.
and this is something that piqued my interest in this story.
It talks about, you know, I think the original write-up says that when the update had been installed,
it only did a quick device restart rather than a longer reboot.
And I'm really interested to know what exactly that means.
When you install a software update on MacOS or iOS,
there's two, sometimes three reboots involved to boot the system into something that can write to the OS,
deploy the fix, rebuild the caches, come back up in the right security environment.
So I just wonder if Apple has sort of ring-fenced stuff that they know they need to update often,
looking at you, WebKit, and made it such that you can actually update those,
and there's a lighter method to restart the system.
Maybe it's as simple as, like, a killall, and the system comes back up without a full reboot.
So, yeah, it's evolved.
It's interesting.
It's funny, man, because having you working with us,
it's sort of like hiring a North Korean defector or something, right?
Because, like, Apple do not talk about, like, any of their engineering.
So, yeah, it's pretty cool.
No, I am a North Korean defector, given the state of AI.
Yeah, I mean, let's see if they send a team of ninjas after you to neutralize the threat.
But yeah, we've linked through to the tech crunch piece on that one.
Look, an update two on the mobile internet situation in Russia.
St. Petersburg has had its mobile internet fully cut as well,
which the war translated account on Blue Sky has called a Russian edition or Russian-style digital detox.
which is not a bad gag.
Just wanted to put that out there
because it is strange.
Like is this a drone defence measure?
Is this Kremlin paranoia?
We don't know.
Things are getting a little bit interesting
in the war over there as well.
The Ukrainians have started targeting soldiers
instead of material
and that is actually proving to be quite successful
and they are succeeding in attriting Russian forces at the moment.
So it just feels like the pendulum
is swinging a little bit back towards Ukraine.
side at the moment and things are a little bit politically sketchy in Russia so you know always careful
about um you know being too hopeful about anything happening there but there's some interesting
signs let's just put it that way uh but we've got a piece here from matt burgess and lily hay
newman uh over at wired and this story is about moxie marlin spike
Apparently he's going to start applying cryptography to AI, and we are all somewhat confused
as to what end.
Adam, walk us through this story because like none of us can make really heads or tails of it.
But, you know, let's start with you.
Yes. I mean, most people would know Moxie from his background with the Signal project.
He did a lot of work on the underlying protocol and then led the project early on in its life.
One of the things he's done later on is this thing called Confer,
which is his work on integrating, you know, some, I guess, some ideas from the Signal E2E model into the AI world.
And one of the things that they're trying to do there is to make chatting with an LLM that is hosted in somebody else's environment feel more like a private chat.
And like the current model of, you know, like bolting LLMs into existing chat applications, you know, the security model of that doesn't really match people's mental one.
And so the idea I think is he's trying to come up with a way to bring some of the privacy and security guarantees that Signal provides into a world where you are chatting with a computer.
And obviously the idea of end-to-end crypto when one of the ends is Meta is kind of a bit ass-backwards.
And I think that's the thing that we've all been struggling with, is like, even if you had that, how is this different from SSL of you talking to Meta, right?
I mean, you're protected against in-transit communications interception by the underlying, you know, by SSL.
I think what they're going for here is to try and anchor it into some kind of trusted execution environment.
So, like Apple's Private Cloud Compute: have an LLM conversation where the operator of the hardware and the operator of the LLM stack doesn't just by default get access to everything.
So they would have to do some extra work to, for example, train their models on the contents of your chat or whatever else.
So I think they're trying to build a system where the client can to at least some degree guarantee that you are only talking to, you know, some GPU somewhere in a cloud and that there isn't something else involved.
You know, and Apple tried to do that by making private cloud compute,
like sort of publicly auditable and having some kind of guarantees
that, you know, they're only running hardware
or they're only running combinations of software that have already been approved
or that have some manner of inspectability.
But bodging all of that together into something that really means something
to the average person, kind of difficult.
Doing that inside Meta, which is not exactly the world's most trusted technology provider,
is also kind of difficult.
So, yeah, it feels like Moxie's got his work cut out for him here.
Yeah, it's funny you mentioned Apple's AI there because like, what AI?
I had to re-enable it recently because I was in Brazil and I wanted to use it, you know,
back when I went on my trip to Brazil.
So I had it disabled.
I had to re-enable it to let the live translate stuff work, which is pretty cool.
But now I keep getting those like Apple's like notification summaries that like are just
comically indecipherable these days.
Like, you know, AI is really good at summarizing stuff.
But, like, you just never know what your notifications actually say when you
get a summary from your iPhone telling you what your notifications are. James, is this your
understanding of this as well? Yeah, 100%. Like, for all the points that Adam just mentioned, I just
don't get it. You have to trust both sides of the conversation for this to be worth anything.
And as Adam said, if they could do that with private cloud, great. To be honest, this just feels like
Meta really struggling to get a good headline out there. Hey, look at us, we're still relevant with
AI, and hey, look, we got that Signal guy to come along and, yay, privacy will result.
I mean, I've got a different vibe off it, to be honest.
Like, I don't think it's, I don't think anyone working in comms at a company like Meta thinks that this is a good headline because they just don't think about stuff like this, right?
I think this is more likely some project manager or someone in Meta trying to do the right thing.
And, you know, we just got, and that's how you wind up with a lot of good stuff, right?
It's always one champion in an organization trying to do something.
I just don't think it's been clearly articulated here.
And we, you know, we have to wait.
We have to wait and see what he actually comes up with there.
But at the moment, I think, yeah, we're all just a little bit confused.
Okay, moving on.
We've got a piece here from Raphael Satter over at Reuters, which is about a, there's this
online platform.
It's like a crime stoppers style deal where you can route like law enforcement tips
to various law enforcement agencies and whatever.
That company got owned.
Eight million confidential tips have been stolen.
And that ain't good if you are dropping a dime on
a Mexican drug cartel.
And in fact, we have the original write-up, which I went down a rabbit hole today reading about
this news outlet, which is called Straight Arrow News, which was founded like by a Republican
mega-donor billionaire in the US who just like wanted straight arrow unbiased news and has founded
this website.
Now, the report's actually pretty good.
They've got a really comprehensive report on this incident.
But yeah, look, I think this is just one of those cases where we've got a whole bunch of
information in the hands of God knows who, which could put people's lives at risk.
You know, you would really hope that police agencies would do better due diligence on
organizations like this to prevent their, you know, exposed S3 bucket or whatever it was
from leaking this sort of info.
Adam, do we have any idea how this breach may have gone down or is it still a big mystery
at this stage?
So the hacker group that was responsible for doing it then subsequently leaked it to
Distributed Denial of Secrets.
Like they've got some text files that are not particularly detailed.
It feels like direct object reference.
Like they mentioned that as a like a particular aspect that was weak.
And they mentioned making like 8 million requests.
And there was 8 million records.
So it feels like direct object reference.
Gee, their WAF did a really good job of stopping that, right?
Like 8 million records.
Eight million queries from one IP, presumably, and, like, nary a peep.
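The pattern being described here, sequential record IDs returned without an ownership check, is a classic insecure direct object reference (IDOR). As a minimal, self-contained sketch of the idea, with all data, IDs, and function names invented for illustration and nothing to do with the actual breached system:

```python
# Toy illustration of an IDOR: tip records keyed by sequential integer
# IDs, where the vulnerable endpoint never checks who is asking.
# Everything here is invented; no real service or data is involved.

# Pretend backend database of 100 "confidential" tips.
_DB = {i: {"id": i, "tip": f"tip #{i}"} for i in range(1, 101)}

def get_tip_vulnerable(tip_id, caller):
    # IDOR bug: returns any record by ID, ignoring the caller entirely.
    return _DB.get(tip_id)

def get_tip_fixed(tip_id, caller, owners):
    # The missing control: only return a record to its own submitter.
    record = _DB.get(tip_id)
    if record is None or owners.get(tip_id) != caller:
        return None
    return record

# Enumeration is then just incrementing the ID, exactly as described:
leaked = [get_tip_vulnerable(i, caller="attacker") for i in range(1, 101)]
```

With the vulnerable version, the loop recovers every record; with the fixed version, an attacker who doesn't own a tip gets nothing back, and millions of sequential misses become an obvious signal for rate limiting or alerting.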
Yeah.
So I mean, that's kind of what it feels like, which, you know,
I guess law enforcement doing due diligence on their suppliers are unlikely to go,
you know, try incrementing the tip ID by one and seeing what happens. Like, you know, they ought to, but
that seems unlikely. But yeah, that's as best we've got. The actual data itself, I think,
Distributed Denial of Secrets are attempting to constrain who gets access to it, but, you know, once the
stuff is out, you know, it'll be bad. And as you say, like, the kind of people who are giving tips
to law enforcement about, you know, organized crime or whatever else, it's just, it's not a pretty
story for anyone involved. And James, you know, we were talking about this and your point on this
one is that de-anonymization has been, de-anonymization tech has come a long way thanks to AI, particularly
this year. And, you know, even though these tips are supposed to be anonymized, I mean, in some cases,
it's going to be very clear who someone is, you know, and there's even tips that have been
pulled out by this SAN, Straight Arrow News website, where they're talking about these tips where
they're saying, don't tell the person, because, you know, just from this information being exposed, people are
going to know who it is. But, you know, your point more broadly is that, like, you know,
de-anonymization now is, like, a lot easier than it used to be. Heaps, heaps easier.
You know, if you think back to the state of the art of the Netflix Prize, and that was, like,
the big oh-wow moment: I think it's typically three data points is all that's
required for a really high-accuracy de-anonymization of someone. And now you've just got more and more
and more of this data coming out. You can marry this up with, you know, the private
data that's available about location and other things that's been tracked through marketing and
ad tech, et cetera.
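The point above, that a handful of quasi-identifiers is often enough to re-identify a "de-identified" record by joining it against auxiliary data, can be sketched as a toy example. All names, records, and field choices below are invented:

```python
# Toy re-identification by linkage: join an "anonymized" record against
# an auxiliary dataset (e.g. a public roll) on a few quasi-identifiers.
# All data here is fabricated for illustration.

anonymized_tips = [
    {"zip": "90210", "birth_year": 1984, "gender": "F", "tip": "..."},
    {"zip": "10001", "birth_year": 1990, "gender": "M", "tip": "..."},
]

voter_roll = [  # auxiliary public data with names attached
    {"name": "Alice Smith", "zip": "90210", "birth_year": 1984, "gender": "F"},
    {"name": "Bob Jones",   "zip": "10001", "birth_year": 1990, "gender": "M"},
    {"name": "Carol White", "zip": "10001", "birth_year": 1975, "gender": "F"},
]

def reidentify(record, auxiliary):
    """Return the unique name matching the record's quasi-identifiers,
    or None if the match is ambiguous or absent."""
    keys = ("zip", "birth_year", "gender")
    matches = [p for p in auxiliary if all(p[k] == record[k] for k in keys)]
    return matches[0]["name"] if len(matches) == 1 else None

names = [reidentify(r, voter_roll) for r in anonymized_tips]
```

The more auxiliary datasets an attacker can join against (location history, ad-tech profiles, leaked breach data), the smaller the anonymity set gets, which is exactly why each new leak compounds the last.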
You know, for all the work going into Section 702,
it just feels like we're almost at the point where
being anonymous is kind of not possible anymore
unless you've got an extremely high level of OPSEC,
which is a sad way to be.
Yeah, we've got, speaking of the commercially available information,
we did see a very, look,
there's an unverified and probably not true report
from some sort of military journal or podcast
spreading over X saying that,
the way the Iranians were able to locate the troops who were killed in,
the American troops who were killed in Kuwait was through like ad tech tracking,
which I don't know, man, they were sitting in a double wide that was probably quite visible.
So I don't know, I don't know how true that is,
but we've also seen news that, you know,
Kash Patel's FBI has resumed the purchase of commercially available information.
Like, this is a big issue.
It's one that we've been talking about, particularly in seriously risky business,
for years now.
And, you know, it has a big impact
on anonymity. Now, look, let's move on to our next piece now, and this one is from Krebs on Security.
Brian Krebs has written it up. The Aisuru, Kim Wolf, Jackskid and Mossad botnets have been disrupted
by the Justice Department in conjunction with authorities in Canada and Germany.
Now, what's funny about this is we've talked a bunch about like Kim Wolf and whatnot over the last
month or so because Brian Krebs has been writing a series of articles about them. And this is the one
where there were, like, you know, elements of it
shipped with, like, set-top boxes, Android set-top boxes. Other ways that this was deployed was
through people's like residential proxy networks. People would like rent access to the
residential proxy network or the operator of that residential proxy network would just
basically go onto the local IPs of the network of the user and like start infecting their
Android, you know, set-top boxes or whatever it is. End result is that you wound up with a,
with a, you know, residential proxy network inside another residential proxy network.
And you also wound up with like, you know, a lot of DDoS capacity here.
But my main takeaway from this whole thing is when Brian Krebs starts writing a series of articles
about your botnet, like when he's like, this is part one of a series that we're doing on this botnet.
Look, it is time to move to Belarus, fellas.
Like, it's time to pack it up and get the hell out of there.
I mean, Adam, you know, like what more can we say?
I think you summed it up pretty neatly there. Like, when Brian Krebs starts
doxxing you, like, it's just going to end badly.
Like, either you're going to end up in jail, or all your infrastructure is going to be disrupted,
or he's going to phone your mum, or, you know, like, it's just not going to end well for you,
and it's time to find a new hobby or find a new way to do business, or, you know,
just kind of get out and move on. Because, like, Krebs on your case, that's just, you know,
that's just end times. Move on.
Yeah.
Now, moving on to another story.
And noted wackjob and FCC chairman Brendan Carr has made an announcement.
The FCC has made an announcement.
They're going to make home routers great again.
And the way that they're going to do this is they are banning the import of new models of consumer-grade routers.
So basically, if a model of router does not yet have an FCC mark on it, that's it.
you are not able to import it into the United States if it's foreign, which is like, what?
It basically
seems a little bit bonkers.
So there's a process here for getting some sort of exemption to this where I think it's like
the, is it the DoD and like the FCC or DHS or something?
It's not really clear.
But I think the idea here is if you want to start importing routers, you need to go through
some sort of bureaucratic process, which presumably involves buying a lot of Trump coin
or Melania coin or a series of Mar-a-Lago memberships
in order to be allowed to import your home routers
into the United States.
I mean, Adam, you looked into the FAQ on this this morning.
I mean, that's about the long and short of it, right?
Yeah, pretty much.
Like, new devices are gonna have to go through some kind of approval process.
Everyone is disallowed by default.
And, like, I think you're right that this is just going to involve
having to either bribe someone or do some kind of administrative process
to get on the allow list to be permitted
to sell or import routers into the US.
And the thing that seems strange, though, is, like, the incentives are just all out of whack.
I mean, there was this Cyber Trust Mark program that we talked about a while ago that ended up being abortive,
where they were going to get, like, Underwriters Laboratories to test the security of devices.
They would have some kind of, like, cybersecurity rating thing.
This feels like the crazy Trump world alternative to that program, so that one kind of failed.
Then we've now got this where, as you say, you buy Trump coin or whatever to be allowed in.
But the thing is it's all foreign routers.
Like what domestically manufactured American home routers are there?
I mean, even the American ones are still made in Taiwan or Indonesia or Vietnam.
So James, you looked into that part of this, right?
Which is like Netgear's stock went up and it's like, but they don't, they're foreign.
Like what?
Yeah, yeah, but why?
Like it's just, the only thing here that I think could be successful is we've got to remember,
folks, that for us as tech guys, yes, we go down to the Best Buy
or whatever store and we go and find the best router that we can possibly get with the latest,
you know, 28 Wi-Fi antennas on it. But that's not what your typical, you know, cable or
internet subscriber does. They take whatever router comes from the carrier. And so, you know,
I'm trying to imagine a positive outcome for this. And if this forces a carrier to adopt a router
by default that's more secure, even if it costs them more, maybe that's good. But I'm really out
on a limb here trying to add some logic to this. Yeah. And I'm just checking. And for anyone who's
curious,
Melania coin is trading at about 12 cents at the moment.
So get cracking, everyone.
Hold on, hold on.
Yeah, that's right.
It might be a good buy opportunity, you know what I mean?
Just ride the back of corruption coin.
That's, you know, go long corruption.
There's an easy way to do it now.
Well, the Trump router.
The Trump router is surely what comes out.
Here we've got the Trump.
Yeah, we've got the T1 mobile.
I want to see the Trump router.
Yeah, that's it.
That's it.
Now, moving on to another story now.
and the White House, through statements of a couple of different officials,
we've had the National Cyber Director, Sean Cairncross, and also another guy,
the Senior Advisor at the Office of the National Cyber Director, Thomas Lind.
They've come out and they've sort of rubbished this idea that the United States government
is going to start issuing letters of marque, allowing private operators to go out and just,
like, hack back and whatever.
Our colleague, Tom Uren, I mean, he predicted this in his seriously risky business newsletter
over the last like six months.
He said like this is a bit of a distraction really and it's not where it's likely to go.
And you know, when companies like Google announced they were going to launch, you know,
threat disruption units, everyone thought, oh, they're going to get their letters of marque.
And no, it just hasn't worked out that way.
So we've had those announcements from the White House and also Google has launched its threat disruption unit,
which doesn't use the word offensive anywhere.
And, you know, their most recent takedown, which I think was just over the last week or so,
they, you know, it was mostly like legal takedowns of domains and whatnot.
But I think the point is,
companies like Google are going to get more aggressive,
but I don't think they're going to be popping shells under color of law, right?
Like I don't think that's the way it's going to go.
Adam, thoughts here?
Yeah, I think you're right.
I mean, the idea of letters of marque has always been, like,
we've enjoyed talking about it,
but it's so kind of complicated and impractical to do in real life.
So I'm not surprised to see them pouring cold water on it.
And, yeah, the, you know, making the current disruption process of, you know,
taking down domain names, taking down C2 infrastructure,
taking down hosting, you know,
that does basically work. It's just clunky and takes a long time and smoothing that over.
I think Google was talking at RSA this week about, you know, kind of basically just kind of making
that smooth and integrating it so that all the players in the industry are kind of working together.
And maybe that gets us a bit closer to, you know, being able to do the kind of stuff that, you know,
Krebs is getting done to botnets without it taking a five-part, you know, Krebs story
to make it actually happen.
Now, this next story we're going to talk about is like my favorite of the week.
It's not a cybersecurity story, but you're going to have to listen to me talk about
it anyway because it is just too funny.
A founder, a co-founder of Super Micro
has been arrested for smuggling $2.5 billion
worth of Nvidia GPUs to China.
And what's really funny about this,
I mean, this is, you know, this is a billionaire
co-founder of Super Micro.
And what's funny about this is like, okay,
so the way the scheme worked is that
he would get Southeast Asian companies
to order a whole bunch of Nvidia kit.
It would arrive in warehouses in Southeast Asia.
Then he would swap the labels,
like, showing that it had all of the Nvidia stuff inside, onto, like, replica hardware,
which would then stay in these warehouses in Southeast Asia
and the real stuff would be sent across to China.
So, you know, we've talked about how it's funny that China always claims
that DeepSeek is being trained on a series of rack-mounted potatoes, right?
And they don't have, they don't have Nvidia stuff.
But yes, they do and this is how they're getting it.
But the detail in this and in the indictment is just absolutely hilarious.
The feds or NSA or whoever were all
up in his messages, and they were also all up in the security cameras of the facilities where they
were swapping the labels. And here's this guy, a billionaire co-founder of Super Micro, using a hairdryer
with his wife to swap the labels personally. And, you know, say what you want about, you know, violating
these sorts of trade restrictions, but this is a win for customer service. Like, this is a billionaire who's
happy to get his hands dirty and do all of this. The whole thing is just absolutely crazy,
Adam. I mean, it's funny because Super Micro Gear is actually pretty good. And like Super Micro,
the company has had all sorts of like financial reporting problems and they've, you know,
like they haven't done themselves many favours. So like in that respect, I'm kind of not surprised
that, you know, sea levels or, you know, execs at Supermicro were involved in kind of shady
sanctions busting. But it just does seem weird for all of the things that they have kind of screwed up
over the years. I mean, we had that story about, like, you know, grain-of-rice chips hidden
on Supermicro motherboards, which kind of turned out to be bunkum, but there's just been a lot of
weird smells around Super Micro over the years, and for this to be the thing that sees some of them
go down, it just is kind of weird. But on the other hand, like, if you want to buy x86
kit, you know, server kit, and you don't want to buy big-name expensive like Dell or HP,
like, the Super Micro stuff, as generic rack-mount server gear goes, was pretty good. And, like,
a lot of their gear is still used in some of the big cloud operators,
you know, for making custom hardware.
Like, it's built by Super Micro under contract.
So like they are deeply embedded and very American, right?
I mean, despite all of the kind of Taiwan integration of it,
like if you were going to name an American hardware company,
Super Micro is kind of more American than any of the router vendors that we imagine
the FCC are talking about.
So yeah, like, it's a wild ride, and yeah, I mean, when you posted the story
I was like, what do you mean he was doing it himself?
Like, it's crazy, but yeah, it's where we are.
Moving on, and we've got a couple of skateboarding dogs to get through this week.
The first one is that there has been an attack against one of those companies
that makes like the breathalyzer interlocks for people who've been done for driving while drunk
and they have them installed in their car and they need to blow into them before they can drive their car.
And there was some attack that bricked them so people couldn't drive.
I mean, probably a net win for public safety, right,
if the sort of people who drive drunk can't drive their cars.
I don't know.
Probably not the end of the world.
What do you think, Adam?
Yeah, probably.
Probably, probably. I think this was the back end server that supported calibration went away.
So they kept working for a while and there was like a regular recalibration process.
And that wasn't working, which is why the cars then stopped working.
And I think, yeah, net results still probably a win for everybody.
And now we've got our two favorite AI scams of the week.
A man has pleaded guilty to netting $8 million in an AI generated music scheme
where he would just like load up Spotify and streaming services and whatever
with, like, thousands of AI-generated slop songs,
and then get bots to actually listen to them
and get money that way.
I mean, that's pretty straight up like bot fraud.
So, you know, they're going to be in a bit of trouble.
But, you know, this dovetails nicely with our final item,
our final skateboarding dog of the week,
which is an indictment in Israel of these two guys
who were basically, who approached Iranian intelligence
and said, hey, like, I'm totally like a, you know,
Israeli spy and I'm going to sell information to you.
And they were just like making stuff up with AI.
And it's really funny.
They've been charged for this.
I personally think they need, like, 10 bucks and a sun hat for having done this.
But it's full of all these amazing details.
Like, one of them was using a stolen identity of an Israeli citizen that they got off
Telegram.
And then when the Iranians were like, I need proof that you're real, you know, send me a photo
of you doing the, you know, the OK hand gesture,
they just Grokked it.
You know, just used Grok to generate an image of the person in the license throwing an OK sign,
and like just really, really wild stuff.
I'm guessing you had a look at this one, James.
Any thoughts?
It's just beautiful, right?
This is our future now.
Nothing is real.
No one is who they say they are.
And I also just, I comically imagine the moment when this scheme came together.
I sort of imagine like two people sitting at the pub like, hey, do you reckon we can make some fake intel?
Yeah, I reckon we can make some fake intel.
Let's see if we can sell it.
And then it just gets wildly out of hand.
Like they do manage to sell it.
Then they've got to verify who they are.
But who would have seen the right turn on this, which is, yes, you successfully sold it to the Iranians.
Good job.
you send the fake intel, you drained their coffers, you've taken money off the regime,
good job, here's your espionage indictment.
Yes.
Yeah, it doesn't seem right.
Yeah, it's funny too, because like an Israeli fellow sent this one to me, and what was funny
about it was, you know, he pointed out to me that someone who actually had done real
espionage for the Iranians wound up getting paid like a thousand bucks, and these guys
got away with, like, tens of thousands in IRGC crypto, which is just the cherry on top, really.
But guys, we're going to wrap it up there.
Thank you so much for joining me for this week's news segment.
A really fun one this week.
Adam, James, yeah, great stuff.
Can't wait to do it again next week.
Cheers.
Yeah, thanks, Pat.
I'll see you then.
Yeah, Pat, see you then.
That was Adam Boileau and James Wilson there with this week's news segment.
Big thanks to both of them for that.
It is time for this week's sponsor interview now.
We're talking with Bradon Rogers, who works over at Island.
Island is the maker of the Island browser, which is an enterprise browser
that can do all sorts of really
interesting things. You know, the browser is kind of the OS these days, so it makes sense that we would want some better instrumentation and visibility. Island is really popular in heavily regulated industries where people need to keep an eye on where company data is going and what people are doing with it. And in that spirit, today's conversation with Bradon is about how customers are using Island to really make sure that people are using only the sort of corpo-sanctioned AI tools. So if there's people
trying to go off and do stuff in their personal OpenAI account, like, you know, blocking that is one
option. But with Island you can get a little bit more granular. You might say, hey, they can go over there,
but they can't cut and paste into that, or they can't, you know, grab something from the file system
and just throw it into OpenAI. So yeah, here's Bradon Rogers talking about how
Island is helping customers deal with that problem. Enjoy.
It does vary a lot, but I'll tell you, it's
interesting. So there's pressures from a lot of different angles
to use AI. Number one, the users feel the pressure: they're intellectually curious,
but then they also feel the pressure, everybody's telling them, that if
they don't learn AI, they don't figure out AI, their job could be at risk, you know, so they're all
experimenting and trying stuff. You've got the lines of business that are pushing tools that are
specific to their business into the org that, you know, how do I bring this thing into the call
center and make my call center folks, you know, more capable in this process. And you've got the
executive pressures as well. So the boards and all the others are thinking, looking at the competitors,
going, how do we stay competitively capable?
So it's just pressures from all over.
I walk in all the time.
It's all over the map.
We see, you know, there's kind of a couple of common little factors I see.
The old-school traditional block page, and I'm sure you know these things,
the upstream kind of proxy, SASE provider, whatever they are.
There's some block page.
You try to go to Claude, you're blocked.
You go to whatever, you're blocked, until you go to the sanctioned destination.
And then it starts getting tricky:
we start seeing challenges with tenancy, being able to actually discern tenant A versus tenant B,
because they're all using the same URLs and they all look the same to each other.
So a lot of times they're struggling with tenancy.
This is why at the intro that I will now no doubt put on this interview later,
I mentioned that this is actually a tricky problem because it's not like you can just block a domain.
You know, as you say, like there's a tenant problem there,
which is like, how do you know they're using the corporate account versus a personal account?
The only way you're going to do that is via something like Island or, you know,
like a plug-in-based product, which can also surface these sorts of issues.
But, you know, I guess it's hard to know how prevalent it is generally.
But I'd be stunned if there's any enterprise out there of scale where this isn't happening
at least to some degree, right?
Yeah, 100%.
I had one recently where they had an executive in the company who insisted, a very
powerful executive in the company insisted, it's not the company's standard, but you're
going to give me access to Gemini.
And they're literally all screaming about it, but they can't do anything about it.
And they're looking for an answer.
what they really want is an answer
that's not a block page.
They want an answer that lets that person have access to Gemini
but the company data doesn't go there.
So how do we balance that?
That's like it's almost the impossible physics.
And that was one we did a really good job of solving
with the thing we've talked about in the past called say yes
where you can just say yes to personal stuff
and corporate data won't spill to personal stuff.
But that was an interesting one
because it was again, it was a pressure from a powerful person
in the organization that just goes against everything
the policies of the org wanted,
but they just prefer Gemini for themselves.
Yeah, now when it comes to these DLP use cases, like, I find Island actually much more compelling than a lot of these plug-in-based products, because you've got things like the ability to apply file system restrictions to different domains, right?
So you can say, like, someone can go to OpenAI, you know, they can go to a chatbot there, but they can't hit the file system at all.
Like they can't upload any files to it, right?
So, I mean, that's, I guess, the thinking there, right,
is being able to apply those, like, DLP-like controls to, yeah, chatbots, AI, AI bots everywhere.
Yeah.
The important part is balancing the control with context.
So the thing that we talked about a moment ago, the tenancy, for example, if someone's going to the
corporate Gemini tenant, well, cool, those things are freed up.
If they're going to the personal tenant, they have access to it, but that stuff's not
available now in this particular situation.
So a different policy over the different tenants in the process.
But certainly, to your point, the governance living locally in the browser, because the natural habitat of all this stuff,
the kind of starting habitat for the majority of user work, is the browser.
And it makes sense whether it's Chrome or Edge with an extension control or whether it's the full enterprise browser.
Controlling it at that presentation layer is most important.
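The tenant-aware, action-level control being described can be sketched abstractly as a policy lookup keyed on destination and tenant. This is a generic illustration only, not Island's actual policy engine or API; all domains, tenant labels, and rules are invented:

```python
# Generic sketch of browser-side, tenant-aware data controls of the kind
# discussed above. Illustrative only: names and rules are invented.

POLICY = {
    # (domain, tenant) -> set of allowed browser actions
    ("gemini.example.com", "corporate"): {"chat", "paste", "upload"},
    ("gemini.example.com", "personal"):  {"chat"},  # allowed, but no data out
    ("chatgpt.example.com", "personal"): set(),     # blocked outright
}

def is_allowed(domain, tenant, action):
    """Return True if policy permits `action` for this domain/tenant.
    Unknown destinations default to deny."""
    return action in POLICY.get((domain, tenant), set())
```

The interesting design point is the key: blocking on domain alone can't distinguish a corporate Gemini tenant from a personal one, so the policy has to hang off (destination, tenant, action), which is information only something sitting at the presentation layer reliably sees.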
Now, normally we're used to hearing of examples of this sort of stuff going wrong in, you know,
it might be in an SEC filing where there was a data exposure or whatever because a staff member, you know,
accidentally pasted a whole bunch of sensitive information
into a personal account or blah blah blah blah blah.
You weren't even aware of this one because it only happened a couple of weeks ago,
but it's very funny, which is a guy at the Chinese Ministry of State Security
actually got in trouble recently because he had been using chat GPT to summarize internal documents,
which is just I think when, you know, if there's one example of this sort of stuff that
belongs on the slide deck, that's got to be that's got to be it, right?
Which is the actual Chinese intelligence operative is pasting stuff into an American chatbot.
But what sort of stuff?
I mean, why don't we pull it back for a second and talk about, like, I understand that there's rules and, you know, governance concerns about corporate data going into these sorts of things.
But what are the practical concerns about where this information can wind up if it is pasted into, like, a personal, you know, AI chatbot thing?
Like, what's the actual concern that, you know, what's going to happen there that's bad?
Well, I certainly think there's the fear of the unknown, because you don't know how it could be used.
Is it being used by the AI provider for their own purposes, for their own organization?
It's one thing to make their models better.
It's quite another thing when the intelligence you've fed into the model
is now feeding your competitors, because they're using the same models.
So you could be inadvertently in it, especially when you start getting into purpose-built models
or purpose-built providers, you know, legal providers, for example.
You're feeding your legal documents in there, and then all of a sudden competitive legal firms,
they're not seeing your data specifically, but the model's certainly better at its job
and more in tune with the legal system because of the work that you've put into it.
But wouldn't that assume that the data that you're submitting to those queries
is actually then being used on the next training run for those models?
I mean, is that how they do it?
I think in some cases that does exist,
because when you go into the settings of the providers,
there are core settings.
I know, like, in the OpenAI universe, there are checkboxes:
do not use my data for training your broader ecosystem of learning,
on that particular model or that particular provider.
So at the end of the day, they do use the data on the back end.
I'm certainly not an expert on how they're doing all that stuff
and how that ultimately carries over to how other clients may use that data.
But that's the kind of fear people have.
And obviously the example, you mentioned with the China example,
okay, cool, they use that.
Now I'm sure there's somebody sitting in a government somewhere going,
how does that get used against us somewhere down the road,
maybe other nation state stuff or something else for the provider,
who knows what it is.
Yeah, it's funny, man, that China,
MSS example, that's like someone from the Director of National Intelligence doing the same thing,
but using DeepSeek, right? So I think there's the, you know, there's the national security
concern there. I mean, look, you know, when we started off this interview, you were talking about,
like, oh, there's a lot of pressure to use AI and whatever. And, you know, everyone's worried that
if they're not using AI, they're going to get left behind. I mean, I'm guessing in most
situations, though, there is going to be a sanctioned corporate chatbot that is included
with their productivity suite. Like, we use Gemini, because we're a Google shop,
you know, and other people are going to use M365.
You know, I'm guessing where this is most a problem, though, is in some of these regulated
industries where they aren't using AI yet in a sanctioned way?
Like, is that where this pops up as a serious concern?
And I understand, too, that that's where Island finds a lot of its customers, in
those regulated industries where they're likely to be most cautious about AI.
So is this a thing that's popping up mostly in, you know, those regulated verticals?
Yeah, the example I used earlier was, you know,
exactly that, a financial services example
where the executive wanted to use a specific preferred
tool. Because maybe there were no options. And again, at the end of
the day too, the sanctioned option may not be what someone prefers.
In the past, I've joined calls before, you know, where the person up the
other end brings recording tools to the table. And someone on the call goes, hey,
wait a minute, we have a corporate standard recording tool, why are you using that
one? And they prefer that one versus the other because it gives them a better
transcript or whatever. So again, users' preferences
start weighing into some of this stuff, which creates the Wild West.
It's the same thing we saw with shadow IT years ago.
It's just that on steroids at the end of the day, because of the intellectual stimulation
for the user. Everybody's curious about it, and everybody has their preferences, and
not everybody wants to use Copilot.
Somebody wants to use Anthropic, because maybe they're a developer and they lean more
that direction and they hear more about the capabilities of Copilot or Claude in the
development world.
So they may lean that way.
They may want to experiment with Claude Cowork, and the company doesn't provide that
with their Copilot environment or their
ChatGPT environment. So as a result, they start experimenting with those tools. Yeah, I'm with you.
I'm with you. I get it. So what's the, what's the like desired end state for most of these orgs,
right? Because you mentioned earlier that like, okay, you've got your sort of block page thing, right?
That's one way to do it. You can just block it. And then you've got, well, different controls
on the file system for different tenants and like getting a bit more granular. You know, I'm guessing
these features have been around long enough now that you've got a sense of what the average org is
doing, you know, with these controls. Like, what are they doing with the controls? Are they going
more the block route or more the fine-grained, you know, access control route? Well, the challenge is
they're going the block page quite often because that's the tooling that's the status quo.
When they start needing to get into the granular stuff, tenancy is a
real challenge. I see that all over the place. They just struggle to gain tenancy control.
And how are you doing that? Are you doing that by sensing, like, by logging when someone's logged in
with like their corporate domain or something.
Is that how you're doing that?
You can do it a number of different ways.
So that is the advantage of sitting in the presentation layer.
So I can see the input of the credential and know,
oh, they just plugged in the corporate credential.
That's this policy.
They plugged in a different credential, that's a different policy.
I can see attributes that are passed by the provider.
Sometimes they throw attributes in the headers.
So in the HTTP headers.
Sometimes you could filter those on firewalls, etc.
The challenge is there's just no standardized way to present tenancy.
So like one provider does it this way.
another provider does it this way in the SaaS universe and the same in the AI universe.
And as a result, you have to have a number of different angles to be able to hit it from.
Sometimes it's attributes on the screen.
There's a couple of SaaS providers we work with where there's nothing that presents the tenancy
except for the login username.
And then they actually throw the tenancy in the actual DOM of the application on the screen to show you what tenant you're working in.
So those are things we can see because we're living at the presentation layer in the process too.
So combination of these things usually gives us a pretty foolproof way to identify tenancy.
And I've not yet run into one thing that's been presented to us where we couldn't find our way through it.
Again, with the status quo tooling, it's really hard.
Yeah, I mean, I guess the question was like, are you, you know, are people going for that sort of tenancy identification thing and just blocking everything else?
Or are they dialing back what people could do in those non-corporate sanctioned tenants?
Like, are they still allowing some access to the unsanction stuff, I guess was the question.
More often than not, I'm not seeing people allowing the personal stuff.
I see the block page for the personal stuff,
and even more so for the non-sanctioned providers.
And what they want, they would love this kind of holy grail of,
you can go to your personal stuff, we don't care,
but the company data is not at risk and everybody wins.
The challenge is that's just very difficult to pull off.
And again, to your point you made earlier,
it's a lot easier to pull off when you're controlling the mechanics of the browser
at the end of the day, and we find a lot of value in that.
What I do see, since you asked earlier what the end state is:
Orgs are going to be multi-provider and multi-model.
Every large org, yeah, you're going to have a
Microsoft license, and you're going to have Anthropic licenses, and you're going to have Gemini
licenses and OpenAI licenses, and different parts of the business: the legal team's going to use
Harvey, and the medical practitioners are going to want to use ChatGPT Health. And so you're going to
have different environments and you're not going to want 10 different front doors to each one of those.
You're going to want a homogenized front door, and you're going to orchestrate the right provider
to the right user at the right time. So based on context: this user's a developer, oh, let's orchestrate
Claude to be a part of their ecosystem and a part of
their workflows. But this user is a designer in our marketing team, so let's let that be
Grok, because that's our preferred image creation tool. Yeah, I mean, this just becomes like a
directory thing, you know, ultimately, like it's about permissioning, you know, provisioning.
It's provisioning and directory at the end of the day, but you get to enforce it because
you've got the presentation control. Yeah, you're ultimately going
to need two things in that process. You're going to need context about what the
user is. And directory services doesn't tell you enough about that. Like,
directory services tells you about somebody's title,
but it doesn't tell you the day-to-day work they do.
It doesn't observe their workflows and see,
oh, one minute they're working in this system over here
and doing this ticket triage.
In the next minute,
they're writing a summary of some sort of root cause analysis
at the end of the day.
If you can observe those things,
you can build a profile and learn about what the user does,
then you can make recommendations on the right model
to be used at the right time.
You can make recommendations the right provider.
And then when you bring content to the table too,
you can say, you know what?
The context is this user doing this, and the content they're engaging with is code.
We should probably use Claude over here for that.
But this user's asking about the weather in Seattle, so let's use free ChatGPT for that query
in that particular case.
So the content and context orchestrating the proper provider at the right time, that's the
ultimate end state where I see a lot of organizations going, rather than the fixed state
of just one provider for this user, and then a different thing that's tied into
a different provider.
You're not going to want the Atlas browser for OpenAI
just for their world, and then the Comet browser for Perplexity,
and then the Gemini browser with Google, and the Copilot browser.
You're not going to want all those different doors.
You're going to want something to orchestrate those things,
and that's what we're doing with the enterprise browser.
No, it makes a lot of sense.
Makes a lot of sense.
All right, Bradon Rogers, great to see you again.
It's been a little while.
Good to chat to you about all of that,
and we'll catch you again soon.
Thank you very much.
Thanks, special.
That was Bradon Rogers there,
chatting about how you can use an enterprise browser
to get a better handle on the use of unsanctioned AI
in your enterprise. You can find them at
island.io. And big thanks to them for being this week's
risky business sponsor. But that is it for this week's show.
I do hope you enjoyed it. It was a fun one this week.
And yeah, I'll be back in a couple of days with more security news and analysis.
But until then, I've been Patrick Gray. Thanks for listening.
