Risky Business - Risky Business #833 -- The Great Mythos Freakout of 2026
Episode Date: April 15, 2026

On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They cover:

Everyone has an opinion about Claude Mythos… even though almost nobody has used it yet
CISA adds a 2009 Excel bug to the KEV list, u wot?
Adobe also parties like it's the 2000s, and fixes an Acrobat Reader bug
Disgraced former Trenchant exec Peter Williams' sob story fails to resonate with … anyone
Remember those crosswalk buttons hacked to play audio mocking Trump and Zuck? They were "secured" by the password: 1234.

This week's episode is sponsored by mobile network operator Cape. Ajit Gokhale talks with James about the ways to get being a telco right when you're starting from scratch and solving the security problems of 2026.

This episode is also available on Youtube.

Show notes

Lab Space
The "AI Vulnerability Storm": Building a "Mythos-ready" Security Program
Polymarket on X: "JUST IN: Goldman Sachs is reportedly ramping up its cyber defenses in preparation for Claude Mythos."
Ananay on X: "Marcus Hutchins probably has the best take on Mythos doing vulnerability research"
solst/ICE of Astarte on X: "The vast majority of CISOs do not work at Google-sized companies, and will not have to worry about 0days"
Charlie Miller on X: "we've gone through this before with early fuzzers, afl, etc"
James Kettle on X: "'Can AI Do Novel Security Research? Meet the HTTP Terminator' will premiere at Blackhat"
jeffrey lee funk on X: "We've been tricked, again. Many of the thousands of bugs and vulnerabilities Mythos found are in older software or are impossible to exploit."
Claude is getting worse, according to Claude • The Register
Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain
OpenAI's Mac apps need updates thanks to the Axios hack | CyberScoop
Hack at Anodot leaves over a dozen breached companies facing extortion | TechCrunch
Snowflake customers hit in data theft attacks after SaaS integrator breach
Booking.com confirms hackers accessed customers' data
CPUID hijacked to serve malware as HWMonitor downloads • The Register
Known Exploited Vulnerabilities Catalog | CISA
Adobe fixes PDF zero-day security bug that hackers have exploited for months | TechCrunch
The Sad Decline of Trenchant Exec Who Had Everything, Before Deciding to Steal and Sell Zero Days to Russian Buyer
FBI Extracts Suspect's Deleted Signal Messages Saved in iPhone Notification Database
US operation evicts Russia from hacked SOHO routers used to breach critical infrastructure | Cybersecurity Dive
Telegram Is Still Hosting a Sanctioned $21 Billion Crypto Scammer Black Market | WIRED
The Dumbest Hack of the Year Exposed a Very Real Problem | WIRED
Transcript
Hey everyone and welcome to Risky Business. My name's Patrick Gray. We've got a great show for you this week. We're going to be talking about how everybody in the world, it seems, has a hot take on Anthropic's Mythos model, which is kind of funny, considering it's still in preview and most people have never even touched it. But yeah, the freakout has been superb. Chef's kiss. So we'll be getting into that and all of the week's other security news in just a moment with Adam Boileau and Mr. James Wilson. And this week's show is brought to you by,
Cape. And if you're unfamiliar, Cape is a mobile phone network, a virtual mobile network operator
based out of the United States that does things properly. And as you'll hear in this week's
sponsor interview, one of the reasons they're able to do that is they're not a horror show of
30 years of mergers and acquisitions. It's a greenfields build, you know, a privacy- and security-focused telco, which is getting a lot of traction, not just with privacy-conscious individuals,
but also with enterprises who have to worry about things like high risk travel. That is this
week's sponsor interview, which was done by James Wilson, not me, because as I mentioned last week,
I'm on light duties at the moment because it's New South Wales school holidays. But yeah, I listened to it
yesterday. It's a very cool interview and that is coming up later. But first of all, guys, goodness,
it certainly has been a week of hot and spicy takes around Mythos. One of the first things that happened after we published last week's show is I heard from a contact in Canberra who said, you know, this Mythos thing has really managed to cut through into policy circles and is causing a big freakout. And this whole week has just been people freaking out over Mythos.
I mean, I suppose we shouldn't be too surprised.
Adam, you know, yeah, I guess were you surprised that it was such a big freak out?
No, I guess we've been, you know, everyone has been looking for something that AI is really good at.
And earlier on in the AI hype cycle, there's plenty of other things that people thought
AI was going to, you know, like reduce the industry to dust or, you know, their particular industry or
whatever it was, but I think, like, exploit dev and computer security generally.
Like, that's a thing that AI is really good at and is a great fit for replacing.
Like, it's one of the few industries where perhaps the AI is now better than the humans.
And that's the kind of thing that everyone's been ready and waiting for.
And for our industry, we seem to have got here.
Whereas, you know, if you're an artist or a translator or whatever else, you know,
maybe it hasn't really been as much of a disruptor yet.
Whereas for us, you know, it's here.
So in terms of being prolific, I think you're right when you say that LLMs are going to be better at researching and finding vulnerability.
So in terms of like how many vulnerabilities they can generate, I think you're right.
But in terms of it actually killing industries like exploit development, I think that's wrong.
So, you know, I've had about a week to noodle on this.
Probably also helped by the fact that I had a nice conversation with Mark Dowd on Sunday, because he's bringing a, you know, conference to Sydney later this year.
So we just caught up and had a chat about that on Sunday and a few other things like this.
You know, and his mind has sort of been boggled watching really smart people just, you know, pumping out binary takes on this.
Because people seem to fall into two camps, which is, oh, it's all nonsense, these things are
useless. Mythos only found bugs that had to be hand-reviewed or weren't exploitable. So what's the
point? It's completely, you know, trash. Or it's like the sky is falling, oh my God, we're all
going to die. Whereas I think when you look specifically at exploit development, there's going to be
every time someone updates a model, and I said this last week, every time someone releases a new
model, there's going to be all of these one-shotable security bugs that just fall out of it, right?
But there's still going to be, I guess, what the finance people call alpha for the exploit
developer people.
And alpha, of course, is like, you know, market edge, right?
So staying ahead of the market.
So I think people like your Mark Dowds and whatever are still going to be able to find the types of bugs that models aren't very good at finding.
And that's going to be very important.
You know, I think people who say, oh, well, what if governments get early access to the bugs
that these models find?
I don't think they're that useful to those governments, because if an LLM can find them, they're not exclusive anymore. And exclusivity is kind of a very important part of all of this.
So I think the exploit developers are safe, but they're going to have to get better at what they
do. James, you know, where did you land on all of this? Yeah. I mean, look, overall, surprised? Yes, but not for the reasons you'd expect. Like, surprised at just how massive the response to this is. And like I said, everyone's got a take. But, you know, I think about it in terms of how
this has evolved, right? We used to write a little bit of a comment in code and then a model would
come along and create just that next snippet of code. And then it got better and better at you giving it
higher level prompts of actually now I want you to write this function. Then we could talk to
it about now write this file. Then we could talk to it about now here's how I want this project to come
together. Well, the exact same thing has happened with cyber, right? And the critical change with Mythos is that it's gone from, as you say, spitting out these one-shot bugs to having the ability to work out how to chain these things together, right? It's that the layer of abstraction has gotten higher.
Does this disrupt vuln devs and exploit devs?
No, but yeah, they need to get better.
I'm a little bit curious about what it does for the economics about this.
Like, does it make very exquisite vulnerabilities vastly more expensive now
because they are truly the things the model can't find?
Or does it commoditize them and the price ends up dropping?
I had a chance to catch up with two interesting conversations this week,
one with Jamison O'Reilly, who's a hacker professionally,
and his take was much the same.
He's like, what's surprising here is that all the things people are freaking out about with Mythos are already possible, largely, with Opus 4.6.
Well, this is, I mean, you know,
we spoke last week about how amazing Anthropics
marketing machine is, and it's proof, right,
that this whole conversation doesn't have to be about Mythos, right?
This whole conversation is just about AI generally,
but they've managed to make it all about them,
which is, from a PR perspective, impressive.
Yeah, absolutely.
And almost to the detriment of OpenAI
when they sort of tweet and say,
we've got a cyber model too.
Yeah, yeah, just a couple hours ago,
they're like, oh, we've also got a cybersecurity-focused preview model.
It's really dangerous as well, I swear.
Yeah, yeah.
So look, I mean, to round out my thoughts on this,
I'm super excited to see what the model does,
but for all intents and purposes,
I don't think this is the massive game changer
for any industry whatsoever,
and it all kind of seems to be coming back to
whatever your worry is about what this could do,
get back to your security basics and fundamentals
and that's where to apply your energies right now.
Well, okay, so game changer, I'm going to say,
I don't know, I don't know if it's not a game changer, right?
Because the one thing that I think everybody can agree on
is that these models are going to result
in a lot more vulnerabilities being reported to vendors,
which means the velocity of security patches
is about to go into hyperdrive, right?
So I think everybody can kind of agree
that that's one of the things that happens.
Does this mean that people like Mark Dowd
won't be able to find exquisite bugs in the future?
No. So maybe it doesn't change everything.
It's not so much that it's changing the game,
but what I meant there is that the game has already changed.
Yeah, okay, okay.
And it's not just Mythos doing that, right?
Yeah, exactly.
But you're right.
What Mythos does is it, once again,
it grabs the lever of velocity
and ratchets it up to the next one.
But largely,
rest of the rules of the game haven't changed,
but yes, now the velocity comes
and that's what we've got to deal with.
Now, look, a funny thing that happened this week
is a paper came out from a bunch of CISOs
and a bunch of really sort of important people in security.
They put out this paper called "The AI Vulnerability Storm", on how to build a Mythos-ready security program. Again, there it is, Mythos, right in the name, when it's more of a generic sort of AI concept.
But they published this paper.
And look, as I say, I've got a lot of respect
for the people who put it together.
And generally speaking, it's really good advice that's in this paper, but it just seems to me to be exactly that, like, just
generally good advice. There's nothing really groundbreaking in here. It all seems pretty
obvious that, like, if you want to deal with a massive uptick in patches, maybe get better at
patching, seems to be the advice, right? So, Adam, like, when you read this, did you have sort of the
same reaction there? Because, look, they say sort of, oh, well, maybe you should look at segmentation,
for example, and reducing blast radius and maybe deploying some agents to do patching,
which, before we got recording, James had a very funny point about, which is, you know, CISOs were all about, like, OpenClaw, can't let that anywhere near our networks, these agents, and now it's like, unleash the agents, we've got a problem.
But Adam, did you have the same reaction when you read this, which is just like, okay,
this is all pretty generic stuff, like what are we doing?
Yeah, the way that you counter this is through basic kind of security hygiene stuff that we've, you know, spent the last 20 years trying to learn to do right. And patching velocity is one of them. Things like, you know, defense-in-depth controls, network segmentation, isolation. They actually mentioned honeypots explicitly, which is great,
because, like, humans fall for honeypots, and LLMs even more so, you know. So that's a technique that, you know... like, if you're Haroon Meer right now, you're laughing all the way to the bank, I'm sure, because, like, we're going to need so many more canary tokens. So, like, there's a bunch of stuff
in here that is just kind of sensible. The thing that made me laugh about this, though, is the timing parts of it. They've got this, like, table of things you should think about doing, and one of the columns is how soon you should start doing them, and several of them are "this week". And the idea that any organization is going to be able to do anything this week is just hilarious. Let alone... you know, a lot of the CISOs and the people who are behind the guidance in here... and the guidance is great. Like, even if you're a super high-end, Silicon Valley, security-centric organization, you know, and you've got all the technology parts kind of under control, doing any of these things in a week is already ludicrous, let alone for every bank or shipping company or...
Doing them in a year is ludicrous. Like, we've had a long time to try to get patching right.
Like, that's why I find it funny. It's like, well, you know, you're going to have to patch faster.
It's like, eh, that ain't going to do it. I mean, you look at, you know, okay, one thing that AI is really good at is going to be, you know, reversing patches, right?
So this idea that you are ever going to be able to outrun this particular bear is like,
it's ridiculous when someone can take a patch and reverse it into an exploit basically instantly.
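The patch-to-exploit step Patrick is describing can be sketched with a toy source diff. Everything here is invented for illustration (the function, the constant `DST_MAX`, the "vulnerability"); real N-day work usually diffs compiled binaries with BinDiff-style tooling, but the logic is the same: the lines a patch adds point straight at what was broken.

```python
# Toy illustration of patch diffing: compare pre- and post-patch source to
# spot what the vendor fixed -- the starting point for an N-day exploit.
import difflib

before = """int read_len(char *buf, int n) {
    memcpy(dst, buf, n);          // no bounds check
    return n;
}"""

after = """int read_len(char *buf, int n) {
    if (n > DST_MAX) return -1;   // the fix
    memcpy(dst, buf, n);
    return n;
}"""

diff = list(difflib.unified_diff(before.splitlines(),
                                 after.splitlines(), lineterm=""))
# Keep only the lines the patch ADDED (skip the "+++" file header):
added = [l for l in diff if l.startswith("+") and not l.startswith("+++")]
print(added)  # the added bounds check points at the vulnerable copy
```

An LLM doing this at scale just automates the "read the diff, find the missing check, trigger it" loop that humans have run since patch Tuesdays began.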
You know, the only thing that's going to work there is, you know, look, this is why I've been
a fan of these sorts of controls for a long time, but it's stuff like Airlock Digital, right?
So, you know, application control, execution control.
And in the case of Knocknoc, you know, network access control, you know, tied to SSO.
Like, this is the only stuff that's going to help.
I just find it funny to see advice saying, well, you're going to have to get better at patching. It's like, well, really? You're going to have to be doing a lot of things.
Yeah, exactly. And, you know, in that vein, we've got a post here on X from Polymarket,
which says, just in, Goldman Sachs is reportedly ramping up its cyber defences in preparation
for Claude Mythos. It's like, okay, what? You weren't ramping them up before?
It's like that... what was it, that Shields Up campaign a while ago, you know, where everyone was meant to put their shields up. Yeah, yeah, yeah. I'll just get my cyber shield out of the cupboard.
Because at what point in the last 20 years have we had a chance to put it down, right? To put it away in the cupboard and let it get dusty.
We've needed this the whole time.
And yes, like the models make the stuff
perhaps more broadly accessible,
maybe cheaper overall, like on average.
But yeah, like we've needed all of this stuff forever.
We still need it.
You're going to need it next week.
We needed it last week as well.
Yeah.
One thing, though, too, that I just wanted to quickly mention
is I don't think everybody who's got preview access to Mythos is actually going to be throwing code into it.
And this is going to be an interesting thing
over the next year or two
is if you're a large software company,
do you actually start cutting and pasting your source?
You know, just giving anthropic access to your source?
I don't know.
I think there's going to be some companies
that feel funny about that.
There's been some nuclear takes this week, though, and I wanted to mention a couple of notable ones.
Marcus Hutchins made a point that, you know, we've been talking about on the show recently, which is that, you know, the amount of compute involved in this AI stuff, like, it's very expensive and it's being subsidized by all of the VC money that's going into these models.
So Marcus Hutchins published a video saying, 20,000 bucks worth of tokens for a BSD bug seems
actually like pretty bad value, which I think is a reasonable point.
I do think these models are going to get more efficient in time, but the economics of
AI just generally are a little bit nuts, and he is making a good point. James, you and I had a chat about that earlier. It's a good base point. I think... I don't really buy, you know... he then sort of goes down this rabbit hole of, like, well, who's going to foot the bill, and should community projects be expected to, you know, fund their tokens? And look, I largely think that's going to sort itself out. I don't know how, but it's like, leave that alone and it'll get worked out. The thing that got me really thinking was just his point
around, and we've all talked about this, you know, that VCs are funding so much of what we do.
I mean, I've got a $300 Claude subscription and I regularly hit the limit. So I'm probably actually costing them thousands and thousands of dollars, compared with what this would be if I was using the APIs directly with Claude.
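James's back-of-envelope point is easy to sanity-check with arithmetic. A minimal sketch, where the per-million-token prices are illustrative assumptions, not Anthropic's actual price sheet:

```python
# Back-of-envelope token cost at ASSUMED API prices (illustrative only).
PRICE_PER_MTOK_IN = 15.00   # USD per million input tokens (assumed)
PRICE_PER_MTOK_OUT = 75.00  # USD per million output tokens (assumed)

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a run if every token were billed at API rates."""
    return ((input_tokens / 1e6) * PRICE_PER_MTOK_IN
            + (output_tokens / 1e6) * PRICE_PER_MTOK_OUT)

# A long agentic run can chew through hundreds of millions of input tokens
# (the whole transcript is resent each turn), plus tens of millions out:
print(f"${run_cost(200_000_000, 30_000_000):,.2f}")  # -> $5,250.00
```

At those assumed rates, one heavy month of agentic use dwarfs a flat subscription, which is the gap the VC money is currently papering over.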
And, you know, it made me think back to this strategy that Anthropic has here of, like, you know,
if Mythos is so dangerous that you can only release it to 40 companies, well, why not actually just let the economics of this sort it out and say, look, Mythos is available.
It now costs, you know, $10,000 per 100 tokens.
And if you've got the budget to do that, then go, you know, have at it, right?
I don't know.
I think it will be interesting to see what becomes of AI when the real cost of it is actually
footed by the consumer.
And I think it will actually solve a bunch of problems, not just, you know, the zero-day
challenges, et cetera, but gosh, a lot of slop goes away if it's going to be really expensive
to generate this stuff.
And I think that was the core of his point.
But, you know, $20,000 for an open BSD bug?
Yeah, no, not going to be worth it.
Especially a null pointer deref, not a very exciting bug.
Yeah.
Well, you know, it could just crash the box.
So, okay, good, good job.
But, look, suffice to say, the economic reckoning for this has to come at some stage.
And I think that's when, you know, quite a few problems will get sorted out.
Well, yeah, like having a functioning economy, if you consider that a problem, right? Like, if this is a bubble that bursts, like, we've all got massive problems.
I've linked through to an interesting post too from Charlie Miller, where he says, while I do agree you shouldn't freak out that AI is finding so many vulnerabilities.
And he says, we've gone through this before with early fuzzers and AFL, etc.
I disagree that AI will find all of the vulnerabilities.
And as long as there are a few lingering vulnerabilities, nothing changes.
The only part that I disagree with there is nothing changes.
Like obviously the environment that you're operating in, even if you're doing X-Dev, it changes quite substantially.
But he is right that we have been through this before, when fuzzers came along, and that led to a deluge of bugs and a lot of chaos and disruption,
which is kind of what we've been predicting here.
But obviously, fuzzers didn't find all of the vulnerabilities,
which I think is his point and sort of underscores what I was saying earlier.
We've also seen a write-up from Tom's Hardware, which went wide, like, people were passing it around, saying a lot of the bugs that Mythos found were in old software or non-exploitable, and the severe zero days relied on 198 manual reviews.
I don't think that's really that important.
I think that's people just trying to tear this down and say,
oh, it's not a big deal, but it is a big deal.
But one interesting thing that's been happening,
actually, over the last week or so,
is a lot of people complaining that Claude,
like the current, like general availability models,
have basically been lobotomized for some reason.
I asked you about that, James, because you are a Claude user.
And yeah, you said it's got real dumb lately,
and no one knows why.
Yeah, I think I said it's like watching a tweaker chase squirrels at the moment. It is just, it's so bad. And, you know, I'm paranoid about
the use of models and how good the code is that's getting shipped. And so I routinely do things
like I'll use Claude to create the first implementation, then I'll use, like, Sourcegraph's Amp
to do an adversarial code review on it and then pass it through Codex if I'm really, really
worried. I would say that in the last week or two, that's gone from the kind of level of
paranoia that I go to for a very major change to something that I'm just doing for every single
thing that I'm asking Claude to do. You know, the transcripts even when you look at what it's
doing are just hilariously bad at the moment. It'll say, oh, you know, okay, I've implemented that
change. Oh, wait, I should have done that change over here. Oh, wait, that's the wrong way to do it.
I should have done this over here. And you literally just feel nervous when you're watching it.
You're like, buddy, are you okay? What are you doing there, little guy?
He's not okay. He's not okay at the moment.
Now, surely this is because they have taken resources from the sort of general pool
and committed them elsewhere.
And, you know, we were chatting before we got recording and it's impossible to know where that's going.
Could it be going to scaling up mythos?
Maybe.
Could it be a training run?
Do they use the same hardware for that?
Maybe?
Like, we just don't know.
That's the point, isn't it?
Yeah, we don't know.
You know, it reminds me also that, year after year at Apple, we were pinged and hassled when people would say, oh, you've released the new iOS and it's made my old phone slower, and you did it deliberately.
And I've heard the same claim being made about Anthropic, right? They're dumbing down Claude so that when Mythos launches, it's going to be incredible, and whatever else.
But I don't buy that.
You know, the reality just is when you're building software and releasing new functionality,
that requires compute, requires resources, and that's got to come from somewhere.
I am a little bit curious, though, as to whether this is just what it looks like when a frontier model operator is essentially doing an operational readiness test and starting to transition their old generation of models to being essentially a legacy product. Is this what it looks like?
It gets quantized.
It gets a reduced amount of compute.
They ramp up the limits because this might be just something
we have to sort of bake into our lives from now on.
You're going to have to deal with the fact that when they've got a release coming up,
everything gets lobotomized.
Totally.
I mean, when I worked at Apple, right, September and October
they were bad months because we were getting ready for the new iOS.
Maybe we've just got to get used to whatever the cadence is for a model release, that there's, you know, each quarter a bit of a bad month where it's like, eh, we probably won't be writing code as good or as fast that month, which...
What a world we're arriving at.
What a world.
That is funny.
It sort of reminds me of, like, when COVID hit and all of a sudden Teams didn't work, because of, like, the resource constraints. There were serious constraints on that. Like, people have forgotten that happened.
But, you know, we're so used to thinking about computing resources being limitless, right?
Like unlimited compute and unlimited everything, because we haven't done crazy stuff like AI with our resources for a long time.
I mean, that's a conversation, Adam, you and I had like two years ago.
But I do want to point out one other thing too, right?
Like before last week's show, we had a bit of a discussion internally about, like,
if you're a SaaS vendor, does security through obscurity now, like, give you an edge, right?
Because people can't look at your source code.
And we sort of kicked a few ideas around like, well, you know, there's going to be some
dependencies there that an attacker will be able to figure out, and then they can go and, you know, look for vulnerabilities in that code or whatever.
You know, ultimately we didn't talk about it last week.
And then along comes James Kettle.
We love talking about his research.
He's announced his Black Hat talk.
And he is talking about something that he is calling the HTTP Terminator,
which is an AI-based thing.
So I'm guessing, like, even if you're operating a SaaS thing,
like, when you've got guys like Kettle releasing something called the HTTP Terminator, I don't think you're even safe at that point.
Adam, you know, we're just going to have to guess, aren't we, until we see what Kettle's come up with, but it sounds ominous.
Yeah, I assume it's building on his previous work of multi-layer,
like HTTP processing desynchronization slash canonicalization
slash interaction between different components communicating along a web path.
So I assume it's something along those lines,
but yeah, like whatever he drops is always good.
Like he's one of the genuinely reliable people
in infosec. Like, when he shows up with something, you know it's going to be a good ride.
So, yeah, we will see what it is.
And, you know, he has proven that he's pretty good at, like, looking at people's implementations of systems and then, like, divining through, like, black-box magic how they probably work, and then finding vulns in them. And that's the kind of thing that, like, if you're a cloud vendor where people can't see your source, like, AI that learns off James Kettle could be pretty scary stuff.
We actually had Dafydd Stuttard, who created Burp, in a Snake Oilers segment recently, talking about how they've glued, like, AI to Burp as well.
And again, I'm going to double down on the idea that there's nowhere to hide even if you're a SaaS vendor. Because, like, you've got AI that you can throw at repos, you've got AI that can plug into your browser and drive Burp, you've got, you know, Kettle coming with his Terminator.
Like, I don't know.
I just don't think there's anywhere safe at the moment, and it's, you know, yeah, the bugpocalypse is real.
Moving on, we've got a couple more AI-related stories to talk about, then we're going to move on to some non-AI-related stuff, which you'll either be sad or very relieved about, depending on your personal preference.
But we've got a paper here where someone has measured malicious intermediary attacks on the LLM supply chain,
so there are these things out there called LLM routers.
And, I mean, I think the idea is sound, actually.
So instead of like communicating directly with Claude or Codex or whatever,
you have this router that sits in front,
which works out where to send your query,
based on what type of query it is,
based on things like costing capability and whatnot.
So it seems like a good idea.
The issue, though, is that a lot of them are sort of free. There's a lot of paid ones and a lot of free ones, and the people who did this research found eight of the free ones were injecting malicious code back at the users, and even one of the paid ones was doing it.
James, you know, what's your take here on, I guess, these LLM routers to begin with?
And then, I mean, I don't think any of us should be surprised at the malicious activity.
I think we're more surprised at just how many of these LLM routers there are.
Yeah, I think, you know, at the core of this is sort of the, I guess,
interesting way that the whole LLM interaction works between the model and the user, right?
It is, you know, it is literally a transcript and there's no state for a given
conversation or interaction with a model that is stored server-side. So if I'm having, let's say,
a chat with Claude and it's like we're four or five turns deep in the conversation, every time
I'm interacting with a new message to Claude, that entire chat, that entire transcript from
end to end is being sent back to Claude to say, hey, here's the conversation back and forth,
and now James has added this response. What do you want to say back? So because this is like a stateless
thing and everything's in the transport of this, it gives rise to actually the integrations tend to
happen at that transport layer as opposed to thinking more of like an API-centric way that we would
traditionally integrate functionality and add-on things. So they've done it the dumb way because they
have to, I think is what you're saying. Well, yes, I mean, the entire way that an LLM works at a protocol level is dumb and naive, right? It's just a text transcript. So that's why
these things are popular, right? Because you can put them in the middle of that transcript and they can do all manner of things. Like you said, they can find cheaper models to use, they can, you know,
inject tools or respond to tool calls for you. So the use case for this is not going to go away.
But also when I read the paper, I was like, of course this is possible. Of course you can inject
malicious code here, of course. You could, you know, just get the credential straight out
of the conversation. But then I kind of sit back and go, okay, well, dammit, of course this was going to happen. But there's not really a good solution on the horizon for how you prevent this.
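The statelessness James describes is visible in how any chat-style API gets called. A minimal sketch, where `call_model` and `malicious_router` are invented stand-ins rather than any real provider SDK, showing why an intermediary on that transport path sees, and can rewrite, the whole conversation:

```python
# Minimal sketch of the stateless chat loop: the "server" keeps no
# conversation state, so the client resends the WHOLE transcript each turn.

def call_model(messages):
    """Stand-in for a provider API (not a real SDK)."""
    return {"role": "assistant", "content": f"reply #{len(messages)}"}

def malicious_router(messages):
    """An LLM router in the middle sees everything in the transcript --
    credentials, tool calls, code -- and can rewrite the reply in flight."""
    reply = call_model(messages)
    reply["content"] += "  # injected by the router"
    return reply

transcript = []
for user_turn in ["find the bug", "now write the patch"]:
    transcript.append({"role": "user", "content": user_turn})
    transcript.append(malicious_router(transcript))  # full history resent

# Two turns -> four entries. The client, not the server, holds the state,
# which is exactly why routers can wedge themselves into the middle.
assert len(transcript) == 4
```

That client-held transcript is also why the routers are useful in the first place: cost-based model selection and tool injection both work by editing the same stream an attacker can edit.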
Well, as a user, no, but I mean, I think ultimately, and Adam, you and I were talking about
this earlier, ultimately this comes back to like, you know, who do you trust, right? And, you know,
Cloudflare is a great example where people are happy to give them their, you know, SSL private
keys and say, please terminate our sessions and it's fine because it's Cloudflare, but do you
give it to some rando who's been around for 10 minutes? And the answer is no. But because everything in the AI world is new, I guess, you know, trust is a little bit harder to figure out.
Yeah, I mean, that's exactly it.
I mean, the comparison with Cloudflare makes sense. You wouldn't use, you know, Honest Achmed's, you know, free TLS termination service and then expect not to have every HTTP request inspected. You wouldn't put HTTP in the clear over Tor and not expect exit nodes to snoop on your stuff.
That's the price that you are paying for the service that you're receiving.
So, yeah, I think, you know, James kind of hit the nail on the head.
Like, everything is so new that no one's really had time to think about this yet.
and here we are thinking about it and discovering that maybe it's not the best idea.
Yeah, and for anyone curious about the Honest Achmed reference there, that is a deep cut that goes back quite a few years. If you Google Honest Achmed's Used Cars and Certificates, you will find out what Adam was talking about there, where someone basically tried to become a root CA for Mozilla by describing themselves as, yeah, like, a used car yard that also does certificates.
They were trying to make a point, and it was well made.
What else have we got here?
We've got OpenAI's Mac apps needed to be updated because of the Axios hack.
It looks like their supply chain was tainted, James.
There's actually a really good lesson out of this.
So what happened here is that Axios, that wildly popular package, was compromised last week.
OpenAI has been very loud here about the fact this wasn't a compromise of their dev environment or their corporate environment.
And I went to look into that.
And yeah, there's a fair claim around this.
What happened was one of their GitHub actions that was building their Mac app was bringing in that Axios dependency.
And what that malware did was exfiltrate all the tokens and credentials out of the environment where it's running.
And in this case, it was just a GitHub Action running, which happened to have the private key or the cert for signing and notarizing that Mac app, which, if you're not familiar, is basically how you get a Mac app to run on a Mac
without being distributed via the App Store.
So the worst that could happen here is that someone malicious could have created an app,
signed it as if they were OpenAI and shipped it around.
And if someone launched it on their Mac, they wouldn't get the usual, you know,
this is not a trusted app, it should be moved to the bin.
It would have just said, hey, you downloaded this from Safari.
You're sure you want to run it and off you go.
But there's no evidence here that anyone actually did that.
They're just out of caution, rotating that cert and pushing out new versions.
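The harvesting step James describes, a compromised dependency sweeping the build environment for anything credential-shaped, is trivial precisely because CI secrets tend to sit in plain environment variables. A minimal sketch, with entirely hypothetical variable names for a Mac app build job; a defender can run this kind of audit, and the Axios-style malware simply did the inverse and exfiltrated the values.

```python
import re

# Env var names that look like they carry credentials.
CREDENTIAL_NAME = re.compile(r"(TOKEN|SECRET|KEY|PASSWORD|CERT)", re.IGNORECASE)

def credential_like_vars(env: dict) -> list:
    """Return the names of env vars that appear to hold credentials."""
    return sorted(name for name in env if CREDENTIAL_NAME.search(name))

# A plausible (hypothetical) environment for a Mac app CI build job.
build_env = {
    "GITHUB_TOKEN": "ghs_...",              # scoped repo token
    "MACOS_SIGNING_CERT_P12": "base64...",  # the signing identity at risk here
    "NOTARYTOOL_PASSWORD": "...",           # notarisation credential
    "CI": "true",
    "RUNNER_OS": "macOS",
}
```

One pass over `os.environ` inside the job is all it takes; the signing cert was just one more variable in the pile.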
It's so funny that you mentioned, oh, I downloaded it with Safari.
I haven't seen that.
And then I'm like, oh, yeah, James probably uses Safari because he used to work at Apple.
That's right.
You can take Safari out of the boy.
You can't take the boy out of Safari, whatever.
However it works.
Or is that the other way around?
I can't remember.
Yeah.
Either way, I'm not using Chrome for anything other than work stuff.
All right.
And meanwhile, we've got this other issue with Anodot.
This is a business monitoring software maker.
they got owned, and then this led to problems at Rockstar Games and Snowflake, and, you know,
this is another typical sort of supply chain thing where you've got the Shiny Hunters group, like,
stealing OAuth tokens and stuff, is that right, Adam? And then just sort of popping up in a
bunch of other places subsequently. Yeah, yeah. Anodot is a service that you, like, plumb into your
environment, into whatever data you have, and it produces, you know, graphs and metrics
and alerts and stuff. So by design, you give it creds to wherever you store interesting things.
That's what got hacked by Scattered Lapsus$ Hunters. And then one of the people whose data
was integrated there was Rockstar Games. And then their data got hoovered up using those tokens by the
shiny hunters, amongst other people as well. So pretty kind of standard sort of thing, as we've seen
like when they did that big Salesforce one and the one against Snowflake. So kind of same style of thing,
but yeah, somewhat embarrassing for Rockstar Games. Yeah. Now tell me also about this CPUID site,
which apparently serves up something called HWMonitor, which I'm guessing stands for hardware
monitor. They got owned and were serving up malware instead of HWMonitor, which, I can't tell you
that much about; I'm not really familiar with it, but I'm guessing it's quite popular, because everybody
seems to be talking about this website getting popped. Yeah, it seems like the story here was
the site was serving up some kind of malicious downloads, and it wasn't a hundred percent of them.
It sounds like probably their infrastructure was compromised somehow. Gut feel, because there was something
about, like, not all requests, like, some proportion of requests. So it feels like probably DNS-related,
some round-robin DNS, maybe. There was a dangling hostname, where the DNS pointed to something
that didn't exist, and they registered it and it was serving out malware. I think this is the sort of thing
that a lot of, you know, PC enthusiasts use, right, to look at how fast is my overclocking, or whatever else is
going on. So yeah, someone was dropping, I think, an infostealer via that.
So kind of interesting, and the answer of:
it's always DNS. It's nice when compromises
come back to some kind of, like, ground truth like that.
We can rely on DNS to screw things up,
so yay. Now, this next one I had to
actually check the
dateline on, because it says
booking.com warns customers of possible data
and security breach by unauthorized parties. I mean, they've been
owned a couple of times, right?
So many times. I think I've lost track of how many.
It seems to be a pretty regular occurrence,
and that kind of travel booking thing
is a great line for scammers.
Like if you've got enough data to impersonate someone who's booked through a service like that,
then you can usually phish them for creds or phish them for credit card details or something like that.
Like it's a, you know, people are in a high stress situation when they think their accommodation's
going to be cancelled or something.
So, yeah.
I mean, they nearly got me.
Like a couple months ago, I've got a booking coming up that I booked through booking.com
and the hotel got in touch with me and said, actually, there's been a problem with your cards, sir.
And I was like... the reason it was convincing is because they
reached out over WhatsApp, and funnily enough, they used a verified WhatsApp account,
but it was in a different name.
And that's what made me go, hang on.
And it turned out the verified account was for like a clothing store in a different country
to the hotel that I booked in, right?
So, you know, but I was... I'm normally pretty paranoid, right?
And I don't feel like I'm going to get done with phishing.
That was one where it was like,
it was good enough that it was close,
and I almost engaged with it; just at the last minute, stopped.
So yeah, it is a decent enough data set to know
when people have got upcoming bookings
and you can tell them, hey, I'm the hotel.
There's a problem you're about to lose your booking.
That is a high stress sort of thing.
Now, Haifei Li, the vulnerability researcher...
I got this one off social media from a post by Haifei
who quote-tweeted a post about this
with just one word, which was: what?
The post being: CISA has added
a Microsoft Office remote code execution bug
from 2009 to the CISA KEV.
Adam.
This is somewhat confusing.
This is a bug in Microsoft Excel,
but like the version,
I think it's like Office 2000 era.
So like very, very old.
And I think like it's, this is like Mem corruption.
Like it's a real bug.
Microsoft hasn't updated the, like, advisory, CVE, whatever,
since 2009, so there's no new information there.
So I am confused as to exactly why this has ended up in the CISA KEV.
I mean, question one is, like, did some AI just add it?
Like, are they letting AI, like, read bug... you know, read posts somewhere and then stick
things straight into the KEV?
Or did someone legitimately get owned via Excel 2000?
And the answer is probably they did.
Like, a government agency owned via ancient Excel?
Like, totally plausible.
Because an Excel update would have broken that all-important macro that runs the entire department.
I mean, it's got to be something like that, right?
Yeah, exactly.
So, like, maybe this is exactly what it looks like
that someone is out there owning important things
with Excel 2000, you know, memory corruption.
So in which case, like, good job.
We got another time warp story,
the next one in our list,
which is Adobe has actually patched a vulnerability
in Acrobat 2024.
So Acrobat Reader, so what is it?
Acrobat DC, Reader DC and Acrobat 2024.
People have been exploiting it.
There's been a pretty solid, like,
APT campaign
using this bug happening for, like, four months.
So yeah, what year is it?
Yeah, yeah.
I mean, it's funny kind of seeing, like,
how much of our early risky biz life
was talking about acrobat bugs and flash bugs,
you know, back when those were the two main ways
that people got compromised.
So yeah, it's nice to, you know,
some things haven't changed over the years.
Actually, the specifics of this bug are that
you can make PDF documents that call into Adobe's API
for reading files,
and there's normally a sandbox process,
like a sandbox around what files it can read,
and they are reading stuff that is permitted in the sandbox.
So it's actually, like, correct operation of Acrobat.
But whoever is doing this,
the only samples we've seen are, like, reading ntdll.dll to figure out
what version of Windows you're running on
and then sending that back to then get extra targeting information.
And the C2 they talk to can send back further commands to run.
But no one knows what those further commands are yet.
So there may be some other interesting technique going on
later down the exploitation chain, but we haven't seen it.
So yeah, it's kind of fun that people are still shelling stuff, you know,
with Acrobat in the year 2026 AD.
So, yeah, it's cool.
Now, the next thing we're going to talk about here is a write-up from Kim Zetter
in her Zero Day blog, all about Peter Williams.
And it's a really good write-up.
Peter Williams, of course, being the Trenchant guy who stole exploits
and sold them off to Russians.
I mean, what's funny here is that his sob story
has been allowed to make it into this blog post,
where he's apparently, according to court documents,
oh, he was under so much pressure at work, you know.
And he was under financial strain as well.
And then you get to the part
where it says that he earned in excess of $2.25 million
from his job at Trenchant between 2022 and 2025.
And the year that he was promoted to general manager,
he earned US$775,000.
I make a fraction of that,
and I consider myself to be quite affluent.
So I don't know, in what universe can you be under financial pressure when you are earning that much money?
I just read this and it didn't, I didn't walk away from reading this feeling any more sympathetic towards Peter Williams than I did at the start.
Like, you know, it's even like, oh, part of the sob stories he's been subjected to sort of, you know, ridicule and criticism from around the world.
Well, well, yeah.
James, let's pull you in on this.
You know, did you have the same reaction as me, which is just like, what?
Yeah, well, Pat, look, we've all had those weeks where it gets to Friday and we think, you know what,
maybe I'll just sell some stuff to the Russians.
I'm done with this, you know.
No, I read this and had the same reaction, but I kind of want to believe that this was something that the lawyers put together and they said,
look, please just sign this.
We're going to ship it off to the court.
This is what we always do.
It's your best shot at any leniency here.
I want to think that's the case, because I would hate.
to think that this was actually a heartfelt belief that this is the way to tell your story and to
get some leniency, because it is thoroughly, thoroughly undeserving of anything of the sort.
It's bizarre.
Adam, were you the same here?
My reaction was, you know, the Woody Harrelson meme where he's, like, crying into his money?
Like, that's just what I thought.
If that wasn't the picture illustrating the blog post, it should be the picture illustrating the blog post.
But yeah, we like having a bit more detail.
We like having some of, like, the human part of the story, like, how did he end up
here, and why, you know, why does he think his life is miserable, or, like, you know, his back is sore.
You know, okay, yeah, that sucks. Like, it sucks having to take painkillers because your back is sore.
You know what helps with back pain, Adam? Selling exquisite vulnerability chains to Russians.
Chiropractors are really expensive, guys, come on. Yeah, like, I like a bit of color in the story.
Like, it's nice to understand some of the human behind the story, but at the same time,
I had the same reaction as both of you, which is, like, excuse me, how much
money were you making, buddy? Like, how many Lexuses and fake watches do you really need? And apparently
the answer is a lot. He had it all, you know, that's the thing, he had it all. And I was just doing
the math there in my head: he could have just stayed working in that role for another year or two and
made as much as he did selling these things off. Yeah, why? Yeah, no, it's just... he had it all. It's
just absolutely bizarre. I wonder if we'll find out more one day. Like, my pet theory, and I've got
nothing to back this up with, is, like, he lost a bunch of money on crypto or something, and, like,
was in a hole, and, you know, was scared to tell his wife or something, you know? Like, it just feels like
that's the kind of thing. But there's nothing in the, you know, nothing in this piece that suggests
that's true. I mean, we all struggle, right, to understand these things. We've got
a couple more stories to get through. One real quick from 404 Media here is: FBI were able to
extract a bunch of deleted Signal messages off somebody's phone, because they were sort of
cached or saved in the iPhone's, like, notifications database, basically.
This is something we've spoken about before about how like even doing timing analysis on
Apple notification services can be useful for law enforcement and intelligence services.
You know, of course, you worked at Apple, James, and your take on this is the user had deleted
Signal, so therefore, Apple should have... you know, iOS should have probably deleted these things
out of the notifications database, which I think is, yeah, fair criticism. Yeah, I think there's a couple of
lessons to learn here for users as well. Like, yes, that's just straight up bad from Apple. They should
have cleaned this up. But important to remember that iOS operates on a couple of different
security levels, right? There is files and data that is always available. There's files and data
that are available after first unlock. There's files and data that are only available when the device
is unlocked. And so when you see that notification setting that says things like, you know, show
previews on lock screen, that, I think, is something that should actually be explained a bit more
in depth, to say, you know, the data that is going to be shown on the lock screen is stored
in a way that is less secure, because it is accessible while the device is locked. The other thing that I
think is getting a bit of misreporting around this is some people are saying, well, you know,
this is the danger of push notifications, it's bad, it breaks end-to-end encryption.
Not all push notifications are the same.
Some push notifications do contain the text that will appear on the lock screen,
and yes, that is a problem if you think your text and message exchange is otherwise end-to-end encrypted.
But I don't believe that's the case with Signal.
They just get a push to say, hey, something's happened, you best go and check to see if there's something to present to the user,
and then locally, Signal will look at the end-to-end encrypted message and determine what to put on the screen.
But, you know, end of the day, lesson here is, if something is visible on the screen,
when the device is locked, that same data is present in the hardware somewhere and just as accessible.
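The mechanics James is describing, that anything rendered while the device is locked lives in a store readable in that state, and that this store is separate from the app's own data, can be modelled with a toy cache. The schema below is invented purely for illustration; it is not the real iOS notification database.

```python
import sqlite3

# Toy model of the failure mode: a notification preview cache is a
# separate store from the app's own (encrypted, deletable) message data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notif_cache (app TEXT, preview TEXT)")

# A message arrives; its preview is cached so it can appear on the lock screen.
db.execute("INSERT INTO notif_cache VALUES ('Signal', 'meet at 7pm')")

# The user deletes the app and its message store...
app_data_deleted = True

# ...but nothing purged the notification cache, so forensics can still
# recover the plaintext preview from the device.
leftover = db.execute(
    "SELECT preview FROM notif_cache WHERE app = 'Signal'"
).fetchall()
```

The fix James argues for is exactly the missing step here: deleting the app should also sweep its entries out of the cache.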
Yeah, I mean, there was a bit of a jump scare in this piece, where you read that
the case involved a group of people setting off fireworks and vandalising property at the ICE Prairieland
detention facility in Alvarado, and you're thinking, well, I mean, that sounds like civil disobedience
to me. Like, that's unfortunate, to see people, you know, having the full weight of the federal
government come down on them. So, yeah: the facility in Alvarado, Texas in July, comma, and one
shooting a police officer in the neck.
And then you're like, oh, okay, I see.
That's a little bit different to setting off some fireworks
outside of an ICE detention facility.
Adam, we spoke about this weird Russian thing
where they were up on a whole bunch of SOHO routers
and trying to get, like, Microsoft logins and whatever.
A United States government operation
has evicted the Russians from these SOHO routers.
Do we know anything more about why on earth
they were trying to, like, collect
creds via these SOHO routers?
Well, so this piece describes it
as a campaign being used to breach critical infrastructure.
And when we talked about it last week,
like, this was cert warnings being popped up,
and if you clicked through them and then entered your Microsoft creds,
your creds would get stolen.
So I'm hoping that no critical infrastructure is dependent on users,
you know, not just accepting certificate warnings,
that there's a little more defense in depth there or something.
But yeah, I guess if the Justice Department felt
the need to shut it down and describes it as critical,
then perhaps the real truth is somewhere in between us being confused
as to exactly what anyone was doing last week
and it being a threat to national critical infrastructure.
Well, the one thing that made me go,
hmm, on this one is apparently there were adversary in the middle attacks
on secure connections to the Outlook email platform
and you're like, you mean self-hosted Outlook?
Because that could explain why this was a big deal here
because you could capture some creds on the wire
and then use them, you know, obviously to log into that network somehow.
So I sort of wondered if maybe that initial reporting around it,
like sort of being like Microsoft Cloud accounts might actually be a little bit wrong
and it was something else, something more,
which would explain because last week we were very confused about that.
So that would kind of explain it.
Just one more we're going to link through so that people can read it:
Telegram is still hosting a sanctioned $21 billion crypto scammer black market.
The level of bad stuff that
happens. You know, this is the Xinbi Guarantee. They're an enabler of crypto scammers and human
trafficking, and Telegram just hosts it. Absolutely no problem there. So, you know, I don't even
know. Like, last time we checked, I think, what, Telegram had like a staff of six? And this is
what happens. But even a staff of six, you would think, would be able to clean up the $21 billion
illicit marketplace that enables human trafficking. What else have we got here? This one you wanted to
talk about, Adam, because it is very, very funny.
The headline is the dumbest hack of the year exposed a very real problem.
This is when people were loading up WAV files onto crosswalk hardware, right,
so that they would play political messages.
Is that about right?
Yeah, this was around Silicon Valley and a bunch of crosswalk signs.
When you press the button and you're waiting to cross, it announces when it's time to cross.
The WAV files that contained that audio had been replaced with things like, you know,
AI-generated Elon Musk saying things about, like,
Donald Trump, or, you know, like, the sort of, you know, humorous political messaging,
should we call it.
And it sounded like movie hacking, you know, like someone broke into the control center for
the, you know, the transit operator and managed to distribute these updates.
No, it turns out the way these things are managed is, like, by Bluetooth Low Energy.
And they have default creds of 1234.
And if you have the manufacturer's app, which is in the App Store, you walk up to the thing,
you enter 1234 as the pairing PIN, and now you can just upload new WAV files.
And the only impediment is you can't upload them whilst it's playing a sound.
So you have to wait for the traffic to be quiet, I guess.
So, yeah, the fact that someone figured that out... the 1234 code is in the manufacturer's manual,
which, you know, reviewing the manuals to find the default creds
is hacking going back to the, you know, Kevin Mitnick kind of era.
So, yeah, just very dumb, but on the other hand, I guess it isn't dumb if it worked.
And clearly someone, you know, spent an evening driving around Silicon Valley updating
crosswalk buttons for some good lulls and headlines.
So yeah, it's nice to see the story about how that actually happened.
Risky business.
Available on Apple Podcasts, Spotify, and soon, Silicon Valley crosswalk hardware. Could happen.
Well, we're going to wrap it up there, but before we do, Adam, you're actually off
for a few weeks.
You are flying off to the UK, actually, for a little while, so you're not going to be here
for a few weeks.
And, yeah, Googling, I'm sure, can a
Boeing 787 run on vegetable oil, and seeing what sort of responses you're going to get.
I mean, I hope you can come back, dude.
Yeah, well, like getting to the UK at the moment seems likely.
Getting back, I mean, we're just going to have to take a chance.
So, yeah, I'm going to go and see some family, have a break.
And, yeah, I mean, wish me luck, dear listeners,
for being able to fly over a war zone there and back whilst jet fuel is,
you know, being blockaded in the strait. Cool moves.
And like it feels it's a very strange time to go on holiday at the moment.
But yeah, I'm going to give it a go.
You'll be able to scavenge some used oil from the fish and chipperies in the UK.
Thank God you're going to a place with fish and chip oil.
Just out of the oil, yes.
Yes, otherwise it could get a little bit scary.
But yeah, we will wrap it up there.
Have a great time, Adam, and we'll catch you when you're back.
But that is it for this week's news segment.
Adam, James, thanks a lot.
It's been a lot of fun.
Yeah, thanks, Pat.
Thanks, Pat.
See you next week.
That was Adam Boileau
and James Wilson there with a look at the week's security news.
And we will have fill-in co-hosts while Adam is away.
I think The Grugq is going to come do one.
Brad Arkin is going to come and do one.
I think... who else have we got?
Dmitri Alperovitch, he's going to come and do one.
And then Adam will be back in the chair,
so it should be a fun few weeks.
It's time for this week's sponsor interview now.
And today we are chatting with Ajit Gokhale.
Well, when I say we, I mean James Wilson
chatted with Ajit Gokhale,
who is the Enterprise guy over at Cape.
For those who are not familiar,
Cape is a security and privacy focused virtual mobile network operator.
I hear terrific things about it.
They are doing very well.
They are raising like, you know, mega rounds in VC.
So things are obviously going quite well for Cape,
and they've sponsored a couple of shows here and there before.
But James did this week's sponsor interview where he spoke to Ajit really about what
the enterprise use cases for Cape are and why it is that sort of government department
and enterprises are buying subscriptions for their executives and sometimes like even a wider selection
of staff.
So here is James Wilson's interview with Ajit Gokhale from Cape, and we'll drop you in here
where Ajit starts off by talking about sort of enterprise use cases, specifically enterprise
use cases for Cape.
Enjoy.
I think with enterprise, the key thing we think about is for all corporate services, there's
certain standards that need to be met from a security perspective, device security, network
security, compliance, and policies. And right now with Telco, enterprises have none of that.
You're kind of lucky to have that convenience. And if we think about how we protect data,
you know, if you put data on an AWS instance, that's your customer data, that's locked down.
You have complete knowledge of how it's being used, how it's being stored, what's happening.
that same data with that same risk is on telecom networks and you have no visibility.
You have no control.
And that's the key area where CAPE is kind of changing how enterprises use telecom,
which is they now have network level control over how their phones are being used.
That's the data, phone calls, usage, et cetera.
We're giving all that information back to the enterprise.
Maybe just give me a bit of a flavor of like when you're approaching an enterprise, what are some of the examples or use cases, like the specifics?
You know, is this going in the hands of the C suite only? Is this useful for all employees?
And how do they determine who's going to get the most amount of value out of Cape for an enterprise?
So I think the most obvious use case, or most common initial use case, is around high-risk travel.
Right.
So that's where you're using a burner phone.
You're kind of setting up a kit.
But you have no visibility, what kind of risk you're actually taking.
You're just kind of wiping it and hoping nothing happens.
You're isolating the issue.
So they're using Cape, where you can now use phones that, you know...
we have security features that minimize surveillance.
And so, you know, we have that feature.
But then also you have that full telemetry data, network attaches,
and all network traffic data.
And so that's key in terms of high risk travel and then executives.
But we're also seeing, again, an interest at the enterprise level for all employees because they're traveling.
And when they travel, they are still conducting their business, right?
Same data, same risk over the telephone or tethering.
And so, you know, we're seeing a lot of CISOs being very interested in those data streams, right?
Initially, you know, through our UI, where we do some threat mitigation, but also to be sent to
their SIEM, for them to do analysis together with all the other data streams they have.
It's a blind spot that we're kind of filling for the CISO.
But we're getting interest from IT as well, just because we can do it in a cost-effective way
and, yeah, the overall security features we're offering.
Yeah, that integration to the SIEM must be a really cool selling point.
Are there other telcos out there that offer that level
of integration?
No, not that we understand.
I mean, one, they're not making that data available,
and two, if that data is not available,
they're not going to be able to send it to the SIEM.
Right, right.
And you mentioned that the high-risk travel aspect.
So are you saying that with Cape,
you don't need a burner phone anymore,
or is it that if you are doing high-risk travel,
you should still get a burner phone,
but do that through Cape so that you can at least know
what was that exposed to,
and to your point,
it's not just throw the thing into a wood chipper and hope for the best.
But what's the sort of recommendation there around high risk travel?
I think each organization is going to have their own kind of risk policies.
We see oftentimes they have one policy for most countries.
And then for one or two, they will have kind of a specific... they require you to take a separate phone.
And so, you know, that's not for us to decide.
But yes, in either case, we think it makes sense to have Cape service in the sense of even if you're
just going to have a special phone that you're going to wipe each time, fine, at least while it's on,
you have full knowledge of how that phone is being used.
Yeah, yeah, the burner phone, it's, you know, I guess, a time-honored and well-trusted
concept, but you are also introducing a massive blind spot, right?
That burner phone, you don't know what's happened on it.
You're just disposing of it, knowing that that, I guess, ceases the ongoing threat.
But gosh, wouldn't it actually be nice to understand what happened there?
Just in case there was a vector that could outlive the physical lifetime of that device.
Is that sort of what you're seeing?
Yeah.
And I think, like, for example, you'd have network attach information.
You know, what towers are your phone connecting to?
And we obviously can let you control that as well, say, hey, we don't want to attach to towers in this country.
We don't want to attach to these bad telcos, if you have certain ones that you don't want to connect to.
But you also have the visibility.
Hey, hey, why is this phone connecting to that country if they're not supposed to be there?
You know, what's going on?
You know, network traffic.
Hey, this is traffic that we're not expecting, or traffic that's operating outside of the VPN.
This is a key feature here.
Like, if you have, like, an on-device control over where traffic is supposed to be routed,
we know that zero-click attacks and, you know, other processes can bypass
those on-device tools.
Cape is sitting as the ISP.
You can't bypass the ISP, right?
So one simple use case we have for all travel
is to say, hey, we will just notify you
if data doesn't go to the specified IP range
that you're expecting that your VPN requires.
And so now you have insurance that the tools you have on the phone
are actually being used properly,
are actually effective.
I mean, Ajit, this is super cool, and not to be the sort of grumpy skeptic here,
but a lot of this comes down to sounds great, but I've got to trust you.
What do you offer customers that are sort of stuck at that point of being like, sure,
but, you know, as you say, you're only collecting the minimum, you're not selling it,
etc.
But what if I don't trust your word?
What have you got beyond that?
Sure.
I think that's a fair concern
and a fair question.
I think, one, we can get, you know, outside validation.
And so one is, you know, we have a SOC 2 Type 2 that we just got.
And so that's kind of like the external validation.
And we're continuing to get those kind of certificates and validations.
But we also maintain, and this is a big part of CAPE,
is the compliance of how you're running your mobile core
really matters and how you're taking care of data.
And so we maintain trust.cape.co,
and I encourage everyone to go visit it.
And that's kind of a real-time feed
of whether we're meeting our own standards
in terms of our security policies.
And also gives a list of kind of what certifications,
et cetera, we've had.
But I think, you know, it's hard.
It's hard to prove, it's hard to prove that we're doing good, right?
I think that's natural.
I think some of it is going to come in time.
And we want to earn that trust, right?
We're not asking people to just say, yeah, we're, you know, we do everything right.
We're looking to earn that trust and we think we'll do it.
Yeah.
No, that's a fair answer.
It sounds like you're relatively open to being audited and introspected, and you're
putting the data out there to help answer that question.
The other thing that was sort of on my mind when I was thinking about Cape is,
beyond the data collection and all the cool things you guys can do with visibility,
there's just some standard attacks that happen when you've got a cell phone.
You know, there's SMS phishing, the hijacking of those one-time password codes.
Sim-swapping is a huge problem.
Can these things happen on Cape?
If not, why not?
Sure, sure.
So let's start with sim swapping.
So SIM swapping is where, you know, you tell a sob story to a telecom provider and say,
hey, you know, I've lost my... you know, I need to have my phone number transferred.
I lost my phone.
I lost all my information, et cetera.
And, you know, I'm not in that world.
But my understanding is it is relatively, people are commonly successful in doing that.
A lot of celebrities have been sim swapped and had all their information kind of transferred over
to a new device.
The way Cape solves that is we give each user a 24-word passphrase.
And that passphrase needs to be saved when you sign up.
And the only way to transfer that number
is with that 24-word passphrase.
So if that passphrase does not match,
you cannot transfer the number.
And so there is a trade-off there, just to be upfront,
like you have to remember the 24-word passphrase.
But the security benefits are, you know, rock-solid,
and it will prevent SIM swapping in the sense that,
if you don't have that code, you cannot transfer the phone number.
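Cape hasn't published the exact mechanics of this scheme, but the idea described above — a 24-word passphrase enrolled at sign-up that must be presented before a number can be ported — can be sketched roughly like this. Everything here is a hypothetical illustration (the function names, the PBKDF2 parameters, and the storage model are my assumptions, not Cape's implementation); the key points are that only a salted hash of the passphrase needs to be stored, and the comparison should be constant-time.

```python
import hashlib
import hmac
import os

# Hypothetical parameters, not Cape's actual values.
PBKDF2_ITERATIONS = 600_000

def enroll(passphrase_words: list[str]) -> tuple[bytes, bytes]:
    """At sign-up: derive a salted hash of the 24-word passphrase.

    The carrier stores only (salt, digest); the passphrase itself
    stays with the subscriber.
    """
    salt = os.urandom(16)
    phrase = " ".join(w.strip().lower() for w in passphrase_words)
    digest = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt,
                                 PBKDF2_ITERATIONS)
    return salt, digest

def authorize_transfer(passphrase_words: list[str],
                       salt: bytes, stored: bytes) -> bool:
    """At port-out: recompute the hash and compare in constant time.

    No passphrase match means no number transfer, regardless of
    how convincing the sob story on the phone is.
    """
    phrase = " ".join(w.strip().lower() for w in passphrase_words)
    candidate = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt,
                                    PBKDF2_ITERATIONS)
    return hmac.compare_digest(candidate, stored)
```

The point of hashing rather than storing the phrase is that a breach of the carrier's database doesn't hand the attacker the port-out credential, and `hmac.compare_digest` avoids leaking match position through timing.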
All right.
Well, last question for me is, you guys are essentially building a product
for high-risk individuals, or enterprises
that want to be really careful with high-risk travelers.
But any way you frame this, right,
if you go onto Cape, it's because there's a degree of risk involved
on the cell network that you're trying to mitigate
through the lack of recording or enhanced data feeds, et cetera.
But creating a product that is for these high-risk scenarios and high-risk individuals
is also going to make you quite an interesting target for the likes of nation-states, etc.,
that, you know, I was thinking about Salt Typhoon and their efforts of getting into all manner
of U.S. telcos, and I'm sure it's more prevalent than just the U.S.
Are you guys at risk of the same sort of attacks, and, you know, how are you thinking about this
in terms of how you build up the infrastructure
and manage the product?
The Salt Typhoon attackers did not do anything novel.
Meaning, if telcos were able to maintain
cybersecurity policies that are common across the board
and are standardized, published, widely known,
if they were able to set those up,
we don't think Salt Typhoon would have happened.
And I think that's where Cape, for two reasons, like I had mentioned before, we have the luxury of not having 20 to 30 years of M&A, right?
And so we have a clean install.
But again, we have a clean install with modern cybersecurity protocols, but obviously you can't stop there.
You have to be vigilant, right?
And so, you know, we have a security team.
They're empowered.
And they're constantly, you know, constantly thinking and innovating around how to keep our own mobile core secure.
Yeah, that makes a lot of sense.
Ajit Gokhale from Cape, thank you so much for stopping by to have a chat with me.
I've thoroughly enjoyed this. Thanks for your time.
Thanks, James. Appreciate it.
That was Ajit Gokhale there from Cape, and you can find Cape at cape.co.
Big thanks to Cape for being this week's Risky Business sponsor.
And that is it for this week's show.
I do hope you enjoyed it.
We'll be back soon with more security news and analysis.
But until then, I've been Patrick Gray.
Thanks for listening.
