Risky Business - Risky Business #818 -- React2Shell is a fun one
Episode Date: December 10, 2025In this week’s show Patrick Gray and Adam Boileau discuss the week’s cybersecurity news, including: There’s a CVSS 10/10 remote code exec in the React javascri...pt server. JS server? U wot mate? China is out popping shells with it Linux adds support for PCIe bus encryption Amnesty International says Intellexa can just TeamViewer into its customers’ surveillance systems …and a Belgian murder suspect complains that GrapheneOS’s duress wipe feature failed him? This week’s episode is sponsored by Kroll Cyber. Simon Onyons is Managing Director at Kroll’s Cyber and Data Resilience arm, and he discusses a problem near to many of our hearts. Just how do you explain cyber risk to the board? This episode is also available on Youtube. Show notes Risky Bulletin: APTs go after the React2Shell vulnerability within hours - Risky Business Media Guillermo Rauch on X: "React2Shell" / X React2Shell-CVE-2025-55182-original-poc/README.md at main · lachlan2k/React2Shell-CVE-2025-55182-original-poc · GitHub Hydrogen: Shopify’s headless commerce framework Researchers track dozens of organizations affected by React2Shell compromises tied to China’s MSS | The Record from Recorded Future News Unveiling WARP PANDA: A New Sophisticated China-Nexus Adversary Three hacking groups, two vulnerabilities and all eyes on China | The Record from Recorded Future News Risky Bulletin: Linux adds PCIe encryption to help secure cloud servers Sean Plankey nomination to lead CISA appears to be over after Thursday vote | CyberScoop 🕳 on X: "This guy is complaining that GrapheneOS “failed him”. Showing a Belgian 🇧🇪 police request for an interrogation regarding premeditated murder (as a suspect)." / X Sanctioned spyware maker Intellexa had direct access to government espionage victims, researchers say | TechCrunch To Catch a Predator: Leak exposes the internal operations of Intellexa’s mercenary spyware - Amnesty International Security Lab Is ransomware finally on the decline? Treasury data offers cautious hope | CyberScoop UK cyber agency warns LLMs will always be vulnerable to prompt injection | CyberScoop In comedy of errors, men accused of wiping gov databases turned to an AI tool - Ars Technica
Transcript
Discussion (0)
Hey everyone and welcome to Risky Business. My name's Patrick Gray. We've got a great show for you this week. We'll be recapping the React to Shell stuff, which has been going on, of course, over the last six or seven days. So we'll be talking about that with Adam Weilow and a whole bunch of other security news. And then we'll be hearing from this week's sponsor. And this week's show is brought to you by Kroll. And today, this week, we're going to be hearing from Simon Oneeons, who is the managing director of cyber and data.
resilience at Kroll. Kroll, of course, does MDR stuff. It also does incident response and
whatnot, known as a very competent large shop. And, yeah, Simon is along to tell us all how we should
think about the way we can interact with boards, right? Now, a lot of this is stuff that we've heard
before, but I guess this time it's kind of different. I mean, of course, Simon's based in London
and England is still reeling from the Jaguar Land Rover ransomware attack, and there's a bit of a
window here to make a dent and convince some of these board members who think cybersecurity is
just for the nerds down in the IT department that it is actually an issue they need to pay attention
to. So that interview, a very entertaining one I'll add is coming up after this week's news with
Adam Boiloh, which starts now. And Adam, firstly, good to see you. And secondly, man, React
Server components. This thing has been a really big story. Of course, we love it when a story like this
breaks like the day after we've done our most recent episode. I think it might make sense.
First of all, to just say, yes, I will use the bug's name. I think it's a serious enough event
that we get to actually use its name, React to Shell. But why don't we start off by actually
describing what React Server components, which is where this bug is, or in the protocol between
React Server components and the client? It's a deserialization thing. But why don't we actually start
by describing what React server components actually does,
because it is surprisingly interesting.
It's a relatively new sort of back end for the front end technology.
I've been reading about it, but you're going to explain it better than me.
So React is a framework that many people have heard of
in the context of web development that was intended originally
for client-side JavaScript kind of application development.
There was kind of a trend a few years ago towards what we call
single-page applications, which is basically when you browse to a website,
but instead of constantly pulling HTML in from the website
and going back and forward every time you click on a link,
instead kind of loading a whole application down into the browser,
into the JavaScript runtime,
and kind of rendering an application locally,
so you could have kind of websites that felt more like, you know,
mobile apps or more like real applications.
And React was one of the frameworks that was very popular
for doing this.
As kind of JavaScript web development, you know,
picked up Steam and lots of people started doing more and more complex things
out on the client side, the temptation was, you know, why is the server side written in some
ancient, you know, grandpa language like Java instead of modern hip JavaScript? And why don't
the same developers work on it? And why don't we tightly couple the client side to the server
side so they can work more closely together? And that's kind of the React client side framework
started to expand into the server side. And now we've kind of got to the point where many modern
JavaScript applications have a JavaScript front end and a JavaScript's back end.
And the rise of Node.js and a bunch of other JavaScript server-side frameworks made that
a pretty popular path. And so now we have this thing where a thing was traditionally felt
like it was client-side is actually now kind of both client-hand server. And of course,
there's a communications mechanism between these components. And that's where this particular
bug came along. So it's a like deserialization flaw in, I guess it's kind of like J-Chi.
some encoding on the wire, you know, inside HTTP, obviously, used between the client and server.
And it kind of, the bug is quite clever. It leverages sort of the asynchronous execution
properties of modern JavaScript environments to lead to kind of code injection. So you can craft
a JavaScript object, which refers to the future execution of another JavaScript object.
And then you can basically provide code and have it run by the server as it, you know,
the process sort of resolving these these dependencies and it's pretty cool bug i gotta say uh and
CVSSS 10 and in the implementation used by the React server components unorthed which that's a good
time bug hell yeah it sure is i mean it was funny right like just in researching this stuff
watching a video from a developer who was like talking about this stuff like next j s has been all
over this like React server component stuff, right? And they were giving examples and stuff and
showing like really, you know, being able to do really cool stuff with this. Like, is it the client?
Is it the server? Well, it's kind of like you need to think about it differently. And as this guy's giving
examples, he's like, I don't know how this part of it actually works entirely. I'll just consider it
magic, but this is what you do do with it. And it was really cool. Like, you really do get the
impression that this is not just some dumb change for dumb changes sake, but with anything that's
new and involving passing serialized bits and pieces between client and server.
You're going to expect there's going to be some attack surface there, and that's what we're seeing.
Yeah, yeah, there absolutely is.
And there is good reason for it.
I guess you say, like, this is not just like, let's bolt this in because we can, right?
There's been a lot of thought about, you know, like people want experiences that feel more
like mobile apps, right?
They feel like slick applications.
And delivering that requires a bunch of things that the old model of, you know, that the old model
of like transactional HTML rendering didn't really deliver and then we moved to single page
applications but then the back end was kind of out of step and now you know things like single page
applications make delivering search engine the search engine discoverable content really complicated
because the search engine is not going to execute the JavaScript so you it's very bad for discoverability
and it's bad for things like accessibility so there's a bunch of reasons why they aren't ideal and
so we've moved towards the world where you can decide do I render the content the server side do I
render it client side? Do I render it a mix of both depending on which client I'm serving?
Like if I'm delivering it over a mobile network where it's really like the end device is low
power. Maybe I want to do a bunch more rendering server side. And so having frameworks where
you're flexible or where the code executes and where content gets generated or rendered or
processed and how API calls get made and so on is attractive for a bunch of reasons. But as you say,
it tends to look like magic because you've abstracted away the actual complexity of dealing with
this kind of distributed application developer.
And distributed programming is very, very hard.
And we end up with flaws like this.
And this stuff, you know, it moves so quickly, right?
The developers move very quickly.
The frameworks move very quickly.
Us poor security focal left scratching our head thinking,
but we thought React was a client side technology.
How can it have a CVSSS10 remote code exec in it?
Yeah, well, I mean, here we are talking about the CVSS10 in server side JavaScript.
What?
You know, like it's just, for us alt-farts, it's a bit, it's a bit,
weird.
But just thinking on this a little bit more, these server components you would think, I mean,
just immediately off the top of my head, I'm thinking, is there any way you could just
containerize this part of your web application, right?
And lock it down pretty hard to, you know, sort of protect yourself somewhat from bugs in these
processes.
I wouldn't have thought you would need to give these React service components absolute, like,
privileged raw access to your web servers, right? I mean, or am I wrong there? I mean, you know,
you're not wrong, but I guess the question then becomes does access to the underlying shell actually
even matter anymore? Because as you say, everything's containerized. A lot of the stuff will run like,
you know, ephemeral sort of environments where the underlying Unix isn't important. And what is important
is the data. But I mean, but I mean, you know, once you've containerized it, you might, you know,
put some rules about like what sort of data access, you know, database access to this container is,
and things like that, right?
I just would have thought breaking it up a little bit
and treating it as a, you know, distinct thing
would be perhaps a sensible thing to do,
but I don't know these environments.
That's why I'm asking.
Yeah, I mean, I guess, you know,
there was a trend for a while towards microservices
where we would break up all these applications
into very small components
with kind of better-defined contracts
about how they were going to interact
with databases or other resources on the network
or back-in systems or whatever else.
And we went down that road for a while
as a kind of a model of application development.
I remember it got a bit out of hand.
And that's it.
It got out of hand.
It'll be over-containerized things.
We over kind of component-ize things.
And then the amount of interaction that you would have to do
with the environment to get anything done,
you'd end up making a thousand HTTP requests on the back end
to render one page to go out to the client.
And so finding a middle ground between breaking applications up into tiny components
and then having to have flexibility and routing
and load balancing between every one of them,
or giant monolithic apps that do everything in one process
and therefore have no externally controllable security boundaries.
Like the reality is we kind of need the middle ground,
and we are, you know, we slosh backwards and forward,
you know, between these sort of approaches to things.
And, you know, us security folk tend to be a bit behind the ball
because the stuff sloshes very quickly,
and developers move very quickly,
and coming along and saying,
hey, how about some controls here or whatever else,
is not, we historically have not been very good at that
and then traditionally developers just kind of did their own thing
and didn't take responsibility for that security thing.
In the modern world, like developers that are, you know, continuously publishing
and don't really have the supervision of external security folk
have to take responsibility for that stuff themselves.
And we have seen more sensible outcomes as a result.
But, you know, there's collateral damage along the way
and a lot of people getting a lot of shelled with this reactor shell thing.
Yeah, so let's look at the time.
line here. So I think it was like last Wednesday US time, which would have been Thursday for you and I.
There was, well, it was a patch, wasn't it, that came out and then everyone sort of scrambled to
find a pock. And then what happened is the proof of concepts gradually got smaller and smaller
as people whittled down and figured out this bug. And I think someone dropped a patch for some
service that was like a dead giveaway that you could do it in this really simple way. And then
before you know it, we got like a 10 line pock. And then all hell breaks loose, right? So you've got
cloud flare basically dosing itself because it needed to fix this and made some boo-boo they had like a
25-minute outage which comes on the back of another outage they had like a week earlier so cloud flare's
been having a pretty pretty rough quarter shall we say um you would think if this continues it's
actually going to start causing problems for them too uh and then uh of course we saw the usual stuff right
like apt crews out of china just going nuts with this thing uh i mean it's a free-for-all right
And they're dropping web shells and every actor under the sun is doing, God knows what with this.
I mean, I get the impression with the Chinese stuff.
They're just dropping web shells and figuring out what they can do with them later, right?
Which is the usual approach from our friends at the Ministry of State Security and their contractors.
Yeah.
I mean, just shell everything right now while you can and deal with the mess later, you know,
because you can't stop to think when you're competing with everybody else for these shells.
Because, you know, when a bug like this drops that you don't have foreknowledge of,
like it really is, you know, a racer who gets to shell at first.
and shore up their access and, hey, figure out what it is later.
And I think we've seen some graphs and data from the grey noise people
about like the scale of scanning.
And it's just like that graph.
That's a hell of a chart.
That is a hell of a chart, that one.
We've seen a couple people like Kevin Beaumont, actually.
He was sort of downplaying this one saying, you know,
security is like, you know, the security world is sort of panicking over this one.
I wouldn't say that.
I'd say that it's an interesting bug and it pops up in a lot of places.
Like his point is, oh, it only affects 2% of organizations or something.
but you know go look at Shopify's latest thing which is called hydrogen which uses like
next J.S React server components right so you know if you have developed your store using this like
it affects you so there's a lot of like downstream products you know cloud flare for example
which obviously fixed it so I think that that's where the issue is is there's going to be a lot of
downstream stuff most of which I have a positive feeling though Adam because because it is such a recent
software tool,
I have a feeling that the sort of people who use this stuff are the sort of people
who are going to be aware that there is a problem and they're going to upgrade it.
It's not like this is like Log 4J when we're talking about ancient Java stuff
where the people who wrote it retired 15 years ago and nobody knows about it
and there's like that huge long tail.
I think this one has a pretty good chance of getting tidied up pretty quickly.
What do you think of that?
Yeah, I think your instinct there is right.
I mean, the JavaScript environment, the ecosystem moves so rapidly already
that anyone who is deploying anything kind of serious out of this stuff
already is used to having to roll the stuff all the time.
Like constantly be redeploying, constantly rebuilding.
Like that's a part of their, like, you know,
the way that they approach development
is that kind of constant rolling and reintegration of upstream stuff.
And, you know, that's a curse and a blessing, right?
I mean, it's a curse in that we've seen supply chain attacks on, you know,
NPM and stuff are really bad
because people are used to pulling new dependencies in very quickly.
But on the other hand,
when it comes time to patch something like this, they are used to doing this, and it's not a scary,
exciting process like, as you say, the Java log for shell stuff, where no one knows how to
even compile those apps anymore, let alone, you know, fix it and deploy it and blah, blah, blah,
so there's a very, very long tail. So, yeah, compared to a bug in like ancient PHP or ancient
Java applications that are long since dead, I think this will get, you know, there are much less
of a long tail, but it is interesting because, you know, as a kind of, because this stuff is so new,
there are opportunities for new and quite exciting bugs like this, and that's always a good time.
Funny, funny, you should say, because the very next thing I was about to say is I saw some researcher
of note. I can't remember who it was, but they were on Twitter and just saying, well,
this is the sort of thing that's going to make people go back and look at this code, right?
And you get a feeling that, I mean, this one did sort of come about, like it was a pretty,
bad mistake that led to it, right? And you sort of think, well, if there's that bad a mistake,
maybe there's other stuff that's like not so obvious. So I guess we look forward to more
fixes. But yeah, I mean, it looks like we're getting a happy ending here, right? Like, it's not like
Log 4J where like everybody is still getting shelled with it, right? I mean, that's said, there are,
there is some real damage being done out there. So, well, you know, we, it'll be a while before
we actually see a lot of the reports coming out indicating what sort of
victims they were, what was taken and whatnot. But look, staying with Chinese APTs, right? Because
there's a lot to talk about with the old Chinese APTs this week. We've got some research here
at a crowd strike looking at a, what are they called? They're calling them Warp Panda, a Chinese
APT crew that's going after VMware V-Center. I mean, it just, VMware just depresses me so much
because it went off to Broadcom and they've done nothing to try to improve it.
It's just been left to rot, and, you know, the vultures working for the Chinese government
are certainly having fun picking the bones of the old VMware's out there.
Yeah, CrowdStrike wrote up this particular actor and some of the tooling that they're using.
They use a tool called Brickstorm, and it's actually, honestly, pretty cool.
I mean, there's so much, you know, old VMware infrastructure and big enterprise, and it's such a great target.
Is this the one that's written in Go?
Yes, there's a bunch of Golang components and stuff, which is very, very hip, very modern of them.
And it does like all the sorts of things that you would expect, you know, being able to access the virtual machines.
So you compromise the ESX hypervisors and drop this, you know, Golang tooling on it.
But one of the bits that I really appreciated about this is, you know, I've compromised some VMware in my time.
And one of the things you often want to be able to do is land on the SX hypervisor, you know, a steal the file system, which is straightforward.
you know, take some snapshots or whatever else and X-fill those,
but being able to pop up inside the virtual machines and then use their connections
onwards to the rest of the world is a thing that, like, you can do it, but it's fiddley.
Like you have to kind of rig up some power shell or get a shell and do it by hand.
It's kind of kind of funky.
This tool thing they've got, they drop it like a socks proxy on the guest VMs,
and then they have some plumbing for their C2 so they can connect through their C2 network
into the hypervisor and then from there, pass the connection into the guest, which then were
proxy it onwards through the Sox proxy onwards to wherever it needs to go. So if, for example,
there's network plumbing or VPNs or application access, they can just straight up use that.
And that's a super slick tool as an attacker. And we've seen these Chinese groups, for example,
like steel browser tokens, like a session cookies out of the browsers, and then pivot through
virtual desktops inside the VMware infrastructure using this so that they can then
piggyback on existing sessions, come from the right place in the network and not look anomalous,
you know, blend them with all the other network traffic. Like it's just, it's exactly the
tooling that if you are a real serious business actor and you want to use VMware in your day
today, this is kind of what you need. And I love that for them. Like they're good, good work,
Chinese, you know, MSS contractors or whoever it is. Yeah, I mean, I was, I read this earlier and
I was just thinking, wow, you know, go like new school meets old school, right?
Like they're writing an ice queen, go malware.
But yeah, there are many other signs, in fact, that they're doing a good job there.
Sorry to say.
Now we've got to just quickly discuss a piece by Alexander Martin that appeared in the record.
Look, it's a good piece, but I feel like he kind of buried the most interesting part of this.
And his conclusion perhaps should have been his introduction.
which is he's looked at the
absolute mess of overlapping actors
targeting end-day vulnerabilities.
Like one of the good examples he uses here
is the
sharepoint stuff that happened
six months ago or whatever.
When it looked, yeah,
it looked like there'd been some low and slow exploitation.
But then when, you know, the attackers got wind
that this was going to get patched
that Microsoft knew about it,
well, all of a sudden a whole bunch
of other threat groups in China got it
and they went absolutely crazy.
exploiting it, right? So, you know, he's looked at how this seems to be a bit of a pattern with
Chinese crews when they might be using some sort of bug and then, well, there's an announcement
that a bug in XYZ is going to get patched and then it just, you know, everyone starts going
absolutely ape with it. But, you know, it's right at the end where he says, he writes that
China sprawling cyber ecosystem is very busy and there are limited data points differentiating
state units, contractors and sub-criminal actors. That, the, that, that, that, that,
that three distinct entities converged on tool shell is not unprecedented.
From proxy logon to Avanti, a pattern is emerging.
I mean, as I say, I think it's a great piece.
I just feel like that should have been the introduction if I was the editor.
Yeah, yeah.
It's just, it's a good kind of state of the, I was going to say state of the nation,
like state of affairs,
or understanding how the stuff works in China and kind of what it means for defenders as well,
right?
I mean, being able to understand how your adversary is thinking and what they're doing
and the kind of patterns that we see emerging there.
was interesting. Another thing I thought that
he didn't mention in this, which I thought would have
rated a bit, specifically
talking about the SharePoint bug, is that
when ProPublica was covering it,
and they mentioned that, like, most of the
SharePoint maintenance or development teams
are also in China.
Yeah, the on-prem sharepoint
is a Microsoft China product.
It all leads back to China.
Yeah, which is just, you know, also an interesting
kind of nuance, especially because he brings up the, you know,
kind of requirements for
software developers and researchers in China.
it to cooperate with their government as as you do with everyone does with their host governments you
have to obey whether they obey the local rules and laws and things but yeah it's it's a good write-up and
i think that you know it's it's definitely one for the kind of policy wonks to think about perhaps
more than the infrasect people because like we kind of understand what's been going on what we've
seen but like big picture you know we do need to think about how it all fits together and you know
how they roll yeah and i think what's interesting is over the last year or two you know the ice soon leaks
also help with this. We've got a bit of a clear picture of how things work in China, and it is
a bit nuts. I mean, it's funny because there's this current discussion happening in the USA
about whether or not using contractors to go after cyber criminals or state-backed adversaries
is good policy. And you think, well, I mean, it's the Chinese are doing it and it's a mess,
but is that an argument to not do it, or is that an argument to do it? And I just don't know if I can
make up my mind on that just yet. I might need to mull it over over a few weeks off. So anyway,
it's just a good piece to read in, you know, as a backdrop to that whole discussion.
I've also linked through to Catalan Kimanu's write-up from July about the timeline of the
tool shell stuff because he did a spectacular job where he wrote a newsletter piece,
which was essentially all just dot points. But man, it's a useful resource and he did a great job.
Now, speaking of Catalan, he had a few weeks off. And, you know, we had to lower him into
a vat of regenerative fluid, basically, which is what you do with cattle. And you leave him in there
for a few months. He's basically regenerated. And we've put him back to work. So those risky bulletin
news podcasts are back. The risky bulletin newsletter is back as well. And in today's edition,
he's actually got some really cool news. I don't see anyone else covering it yet, but they probably
will. But Linux is adding support for PCIE encryption. And the first thing I thought was, man, how would
you even key that and then I read his piece and then like that's what the support is doing.
It's doing the actual keying for it.
But why would you want a PCIE card to have, you know, encryption and where should it be
encrypted from and to?
Adam, help me understand here.
Yeah.
So the idea here is that between the CPU and PCIE connected peripheral devices and I guess
GPUs are the big thing at the moment that are PCIE connected, sometimes you want to
have a situation where a snooper on the bus isn't able to see the contents of the traffic
between devices.
And if you're cynical, you would say that so people don't steal AI models that are executed
on your hardware.
The more actual reason is that we're seeing this trend towards trusted computing in the
cloud where we can't do all of the AI compute on the edge of the network, so we're going to
ship it up to the cloud, but we have privacy concerns.
And we've seen both Apple and Google release their work on implementing, you know,
kind of trusted confidential computing in the cloud, where we have VMs that we trust to execute,
you know, operate on data that the end user doesn't necessarily want to share with the hosting provider
or the place where the hardware lives and the hardware vendors, AMD, Intel and Arm,
kind of cooperated on the standard, trying to come up with a mechanism where a virtual machine
that's executing some particular code
can trust that it has a clear path out
to probably the GPUs or other PCI-e-connected hardware
and the trust anchor on the platform,
the T-E on most platforms or the, you know,
kind of that particular trusted part of the hypervisor
is able to assert to these virtual machines
that, hey, there is a trusted path,
I've done the key exchange, it's all good,
you can have a private chat with the GPU
and no one outside is starting to snoop.
And, you know, that's a, you know, the old adage of, you know,
if it's your computer you controls, what control goes on to it and goes on on it.
And by extension, if it's somebody else's computer, you can't, you know,
that adage is always going to be true to some extent,
but we can definitely make it more difficult for someone who can, you know,
has physical access to the computer to kind of observe what's going on and abuse that.
So it's an interesting work.
And the fact that we're seeing this,
you're seeing the main vendors cooperate on this
and contribute the co-back kernel.
It's also really cool.
Like seeing them work together, it's nice.
But yeah, it's just an interesting kind of sign of the hardware times
that we're trying to make it so that you can actually have some guarantees
about the robustness of computers that aren't in your hand.
Yeah, I mean, we've seen this before with like confidential computing
and hypervisors and stuff.
And I feel like the reason this is happening is, as you say,
for GPUs, right? It's they just want the same sort of level of assurance and it's probably a
compliance thing where, oh no, you can't do this in the cloud because for GDPR reasons, you can't
tick this box anymore and I figure this is probably a way to help people tick the box more than any
actual security gains. That's just my feeling, but I don't know. Am I too cynical? I mean, box
ticking is an important part of the process. We do a lot of box. And sometimes by accident,
box ticking does improve things. So, but no, the whole kind of how we build trusted confidential
computing in a remote environment.
Like that's a very hard problem and we're making steps towards it and yeah, but cynicism is
still pretty warranted though.
Yeah.
Now let's put our cynicism into overdrive now and talk about Sean Planky's nomination to lead
Cicester.
This guy, I mean from what I've heard, competent, good pick, you know, going to be great and
it's not happening because of shipbuilding in Florida.
It's the whole US like legislative process, regulatory process, whatever you call.
all this. Like, it's just so bonkers some days, like the way that, you know, this kind of
completely unrelated, I'm not going to call it pork, but you know what I mean, like some
completely unrelated, small issue like this can change the way things work. So there was a
vote that covered a whole bunch of nominations to various positions in the US government,
and he was excluded from them because of the objections of, as you say, I think a Senate
or a representative from Florida over, yeah, maybe shipbuilding.
Who even knows anymore?
Well, you know, Sean Planky has ties to the Coast Guard
and Christy Noam has partially cancelled a Coast Guard contract
with a Florida shipbuilder.
And that seems to be the reason for the roadblock
from a Republican senator.
And meanwhile, our good old mate, Ron Wyden,
is also saying no to, you know,
installing a competent head of SISR
until a report is released on Salt
typhoons. So we've got like, I feel like it's like they're both being equally stupid,
but for reasons much more aligned with their sort of party brands. You know what I mean? But either
way, stupid is stupid and it's not getting done. So CIS is going to continue, I guess, to have an
acting head. We did see, though, during Trump's first term, there were a lot of agencies that just
had acting heads. They were never sort of Senate confirmed. And, you know, the wheels didn't fall off
for the most part, I mean, at least in the first term, but let's see what happens now. Yeah. I mean,
it's just,
CISR is such an important,
like,
cybers is kind of relevant and kind of important,
and it would be nice to have strongly issue there,
but hey,
there are so many other things going on that,
you know,
as you say,
we may just have an acting and survive the term,
and then we'll see what the mess looks like afterwards.
Well,
Siss has been pretty much gutted at this point.
And it was funny because I held back saying that on this show
when people were being a little bit hysterical about it
saying, oh, it's gutted,
like a lot of good people are gone.
It's, you know,
all the people they worked with.
with there are gone. And then you just hear it enough from enough sources where they're just
like, man, they have absolutely just taken a machine gun to the place and it ain't what it
used to be. So, God, what it means for the future of that agency? I don't even know.
Now, here's a fun one. And I do not know if this is a troll or not. If it is a troll,
well done. I think it's hilarious. But there is a guy all over X complaining about graphene OS
and saying the feature where you give the copseid your S pin, they put it in it and it raises the
phone. It doesn't work and he's talking about it being a failure of graphene's platform. And then he's
posting like documents that he's received apparently received from the Belgian police where they're
talking about having extracted data from his phone. But then you look at like what he's been he's
being investigated for and it's murder. And you just like, and he's like, yeah, I'm a suspect in a
murder. This is terrible that this like this feature didn't work for me. I only ever used this phone
indoors and blah, blah, blah, blah, blah. As I say, it could be a troll. If so, it's a brilliant one.
but it's led to the most hilarious conversations
on that platform.
Just, you know, 10 out of 10 as a spectator.
Like, amazing.
Yeah, I went and read the,
there's a message thread on the, like,
graphene forums about it.
And, like, it's just such comedy reading.
And, like, this guy, I mean, he's come back,
like, he initially posted on the forums
and made his claim,
and it's just filled with, like, you know,
20 pages of people telling me he's trash
and doesn't know how to computer
and he must have, you know,
got his friend to set him up for him or something like that.
And then he actually comes back and defends himself in the thread at the same time that
he's also being accused of murder.
And basically says like this is a straight up stock graphene OS.
I wasn't using any third party trash.
Like I, you know, did all the right things.
I gave, I watched the cop type in the DURAS pin and the phone did not reboot and did not
destroy the murder evidence contained therein.
And then there's a few people in the thread going.
So you're telling us that the evidence of the murders that you did are on the phone.
And he's like, yes, yes, they are.
This is why I think it's a troll because he's also expertly calibrated all of his posts
to cause maximum distress and outrage amongst the graphene OS people who are exposing,
who are, I'm sorry, responding exactly how like a completely triggered privacy absolutist would
respond, right, by saying, it's not our fault that you're dumb and like, you know.
And we can never guarantee you that, you know, the human dimension and blah, blah, blah, blah, blah,
but our stuff is perfect
and although we don't claim it is
and it's just, it's fun
but it does,
it does,
did you read it and think
maybe this guy's trolling as well
because it feels like it,
it really does.
It does have an element
like, yeah,
I mean,
but at the same time,
sometimes,
sometimes the best trolling is not trolling.
Like it's actually,
it could actually be,
like it's just believable enough.
Honestly,
like I'm willing to give the guy
the benefit of it now
that he's just that nuts
that it's not actually calling.
What I find,
what I find interesting
is the graphene people
like defending their operating system instead of just saying I'm glad the cops
unlocked your phone if you committed a murder and I hope you go to prison.
Which would be the response from a normal human being Adam.
But sadly it was not to be.
Instead of getting outraged and triggered and like just go on for the guy's throat,
absolutely hilarious.
Now staying with mobile security and whatnot,
we've got a story here from I think this has to be Lorenzo, right?
Yeah, it is.
Lorenzo over at TechCrunch
he's taken a look at an Amnesty
International report into Intellexa
which of course is alleged to have done a bunch of
pretty shady stuff in the spyware field
they make the what is it Predator
Predator?
Yeah the Predator spyware which it's the sort of thing
that's popping up in places that you know
you would hope it wouldn't pop up in
but apparently Amnesty got a hold of some
internal training material
which showed
someone from Intellexa
being logged into a large
customer environment, which was, you know, actively targeting a device and collecting intelligence
from it and whatnot. And they're saying, you know, this is outrageous that Amnesty is saying,
you know, it's outrageous that Intellexa might be able to do this and is access against
customer sites and whatever. But, you know, there's a couple of people quoted in this story
who are like, look, it feels like that's actually, you know, probably a demo setup with team
viewer and not actually a customer setup because, you know, most countries that use this sort of
stuff, they're probably not going to give your team viewer access into their super secret
surveillance stuff. I mean, look, who knows, right? Who knows? I mean, Intellexa, I feel like
are not a good player, whether or not they're allowing team, you know, making their customers
allow team viewer access in. I feel like that's sort of a peripheral issue. What did you make of
this? So the underlying amnesty piece does have a bunch more details. They haven't released,
they got a whole bunch of data that included the video of like,
Microsoft Teams meeting where someone was running through a training session for other, you know,
intellects of staff, and he'd demo remote access.
And in that meeting recording, apparently one of the people attending says, is this a test system?
Or is it the live thing?
And the guy says, no, this is the live thing.
So Amsse haven't released the video.
That's what you tell the newbies.
So they go, cool, you know what I mean?
When it's really, it's just, you know, there's a VMware thing in the cupboard.
I mean, it's absolutely, well, we don't know.
We certainly don't know.
But the, yeah, Amnesty's written it up, and, you know,
there are some things that make it look kind of like it may actually be straight up,
you know, team viewer into some Pred of the Control Panel in Kazakhstan or whatever.
And there's a bunch of customers in some of the drop downs and stuff that they all have
code names, we don't know they are, but one of the things the Amnesty Report does do
is try to correlate a bunch of the data they have got from this leak with,
things that have been previously reported or data they previously obtained. So things like
domain names that line up that they've previously seen Predator being used, being used to
like deliver Predator, they've seen some of these domain names in the data they've got. So there's
sort of a number of points that kind of spoke to the authenticity of the data to start with,
but then, yeah, as to whether or not you can just straight up Team Viewer into, I mean,
the idea that a government agency would buy a surveillance system that you could just Team Viewer
into, I agree, does seem a little bit, you know, a little bit incredulous.
Makes me feel a little bit incredulous.
But on the other hand, you know, people do just yolo this stuff.
And, I mean, the idea that you could just team view into it is pretty funny, DBA,
other than the whole spyware thing being, you know, not a pleasant ecosystem to start
with.
But yeah, the amnesty write-up is pretty good.
And I'm glad that they are doing the research and coordinating with other people that
report on this stuff.
but it's just
honestly
the idea that
they would just do that
kind of rings through to me
so color me convinced
yeah
well I mean
NSO group
you know apparently
I mean
they hosted a lot of
their own attack infrastructure
and whatever
this seems to be this
dividing line
between a lot of the spyware
companies
where the really high end
pro shops
they don't do any operations
they're like
here's a tool
you know something goes wrong
you can call us
we can help you work through it
but we are not going
anywhere near your operations
don't want to know
You know, that's fine.
And, you know, when you're dealing with a professional agency in a jurisdiction that has
decent rule of law, decent court oversight, decent political oversight, that's generally how it goes.
So, yeah, team viewer for the Algerians or something, I don't know, maybe that's,
maybe that's just how it's going to go for Kazakhstan.
Maybe that's how it's going to go.
You just don't know.
Yeah.
I mean, I think, you know, that's an important differentiation, right?
There are customers for this stuff that are sufficiently sophisticated that they can deploy it and
manage and operate them themselves. And then there's plenty of customers that are not that
sophisticated. And that's kind of the niche that Interlexa are going after here. Because
I mean, they have a bunch of kind of ancillary products for deploying Predator on devices
using things like if you're in the national ISP and you've got access to the National Route
CA, then it's got mechanisms to like man in the middle HTTP, H-DPS, drop the like trigger for,
you know, installing the predator in the middle of the HTTP. So you can use that for like,
targeted infection when you are the national
telco or the national kind of operator
in your internet privilege position.
They've also got mechanisms deploying it via
close access like through fake base stations
and like baseband bugs.
I think they had a Samsung baseband bug that they were using for it.
So, you know, they are targeting people that
need to buy the whole package
because they don't have that capability in-house.
And, you know, I guess there is a market.
Well, and as I say, them having team viewer access to that
is like pretty small on the list of concerns
that flow from what you just said
about all that they're up to.
Now let's move to what,
it's a tentative good news story, I guess.
Matt Capco over at CyberScoop
has looked at some treasury data,
US treasuries, released some data
that says that ransomware payments fell by one third
to 734 million last year,
but apparently the number of victims is remaining the same,
but like, you know,
it's generally a positive write-up saying that the numbers were down last year.
I mean,
Given we're nearly in 2026, I don't know what this tells us about trends, but fingers crossed,
we don't have to wait a year for the next Treasury data set.
Yeah, I mean, the graphs are honestly not super compelling when you see the, I mean,
ransomware has done this kind of drop off as well.
Like it was an equally large dip between 21 and 22, so like it's happened before.
But yeah, I think 2025 is going to be the year where we find out whether the amount of disruptions
and like the sort of generally making ransomware's life a bit tougher,
whether or on it has actually made a difference.
So yeah, next year's numbers or this, you know, 2035's numbers are going to be the real telling ones, I think.
Now, another one from CyberScoop here.
Derek B. Johnson has reported on a warning from NCSC, which is saying something that you and I have actually been saying this year,
which is that large language models are always going to be vulnerable to prompt injection in one form or another.
I mean, you can try to filter it, you can try to massage it, you can have other models watching them,
but fundamentally when you are mixing code and data, this is an inherent problem and one that
cannot be solved. It is nice to see a prestigious agency such as NCSC come out and say the same thing.
Yeah, this was based on a blog post from a David C as an NCSC technical director for platforms research
and he says exactly that, right? The very nature of an LLM is its job is here is context, predict next token.
There is no instructions or data. There is no concept of separation between those things.
it's just give next token and anything else.
There is no dedicated signal path for this thing.
Yes, exactly.
Exactly.
And so like the blog post makes the sensible kind of like the name implies that this is somehow like SQL injection,
it's comparable to SQL injection, like where there is by design already separation
between, you know, kind of data and code.
And that's just not what it is in LLMs.
And I think this is important for people to read and understand, right?
we are, you know, just letting the computer yolo the next token and then kind of thinking about it like there was some separation between code and data and then hoping and pretending isn't an approach that's, you know, really serving us well so far.
And it's just fundamental to that type of, you know, next token prediction model of computing.
So, yeah, I'm glad someone else said it as well.
It makes me feel good.
Yeah, and not for one second are we claiming we're the first people to have that thought.
when people first started saying that,
we were very,
very definitely quick to,
to agree with them.
And I also want to reiterate something
that I said,
I think it was on last week's show,
which is whatever you think of this stuff,
as a security professional,
this is your job now.
Congratulations.
Get on board.
Like, we're going to this party.
We're going, man.
There's nothing you can do.
And it doesn't matter
that they're mixing code and data.
It doesn't matter.
That's your job now.
It doesn't matter that you don't like it.
It doesn't matter that there are inherent problems
with that.
It doesn't matter.
This is your job now.
You need to get your hands around this risk.
Now, this one from Ars Technica, it's a bit of fun.
But I don't really think it's a particularly novel story.
Dan Gooden has written about these guys like Tweedledum and Tweedled Dumber,
two guys who have been arrested for the second time
for doing cybercrime stuff to the US government, which is amazing.
And in this case, apparently they were trying to use AI to cover their tracks.
to me the fun part of this story is just how bad these guys are at doing angry cybercrime.
So I'll let you explain that in a moment.
But the whole angle of, oh, they were trying to ask AI, like how to cover their tracks and whatever.
It sort of reminds me of when I think there was like a congressional testimony or a Senate hearing or something into these LLMs.
And one of the senators or Congress people turned up with, you know, bomb making instructions or whatever that they got an LLM to talk about.
And then the AI company representatives were like, well, here's that.
that bomb recipe you can Google it as well like it's right here so I sort of feel like people
asking AI how do I cover my tracks it's just sort of like asking Google it's not the novel part but
still worth talking about just because of how brazen and dumb these guys are tell us about them
yes so these two uh both 34 year old brothers from Virginia uh Alexandria Virginia
munib actor and soheb actor uh and uh one of them got father both they both got fired
uh I don't whether that was simultaneously ever the same thing well you they both got fired uh
One of them had their access, you know, kind of revoked from the systems that they were looking after.
The other one didn't.
And so they logged in to their like MSSQL database server that they had access to
and then proceeded to delete 96 databases from that system, which, you know,
contained some manner of, you know, sensitive information, important stuff.
Apparently something related to like Freedom of Information Act requests or something like that.
So they deleted this stuff and then went onwards to, you know, clean up the process, you know, delete
the evidence by asking Chachi Pitti or whatever, which as you say is good comedy angle.
The thing that's really weird about this, though, is that this is not their first time.
Like, the pair of them also played guilty in 2015 to hacking the State Department.
I think they were also, you know, employed in some, you know, by some contract or employed in some way.
And they had stolen a bunch of, like, passport and visa information.
But apparently this, you know, a previous guilty conviction for deleting data.
from a government agency where they worked was not enough to stop them getting employed
at a government agency where of course they have subsequently done the same thing.
I think they might have been working for a third party that was doing work for a government
client like they weren't directly employed by the government,
which is probably just answers your question of like how did this happen?
But the fact that they both have like they love to commit crimes together every time,
like 10 years apart.
It's amazing to have family connections, right?
It's nice to do things with your siblings.
That's always good.
I do think that Dan saved the best bit for the end of the,
of the piece, which is, wait, wait, hang on a second.
They were asking for how to delete logs from Windows Server 2012.
Like, what year is it, sorry, which government agency is running Windows Server
2012?
Like, isn't that out of support?
So that's the kind of the other good bet is that, yeah, the lack of competence or the,
you know, the sort of the comedy of this extends to all parties involved, not just the
brothers in question.
Full stack comedy on that one.
Oh, man.
actually it for this week's news. This is our second to last show for 2025. And next year,
we're actually in, well, it's my 20th season. I think it's your like 17th or 18th next year,
which is a pretty crazy milestone for this program. We've got a new host joining us as well,
who's going to take on some new work next year. There'll be more to hear on that when we're back.
So yeah, next week's the last show. I've got a soapbox coming out, actually, in a couple of days,
probably tomorrow, actually. You've listened to that one. It's with the spectroops guys talking about
open graph. Headline is graph the planet. It is really cool. It's a really good soapbox edition.
I'd really recommend people listen to that. I mean, even you were like, wow, that was,
that was good. It takes a bit for you. Yeah, yeah, yeah. Yeah, so I'd really recommend people
have a listen to that. That's me chatting with Jared Atkinson, who is the CTO of Spectrops and,
you know, Bloodhound. Yeah, it's a sponsored interview, but, you know, very interesting stuff. But,
yeah, we'll wrap it up there. Adam, thank you so much for joining me for that discussion. It was a lot of fun,
and we'll do it again next week. Yeah, it was a good time.
I will see you next week.
That was Adam Boyleau there with the check of the week's security news.
Big thanks to him for that.
It is time for this week's sponsor interview now,
and this week's show is brought to you by Croll Cyber.
And Croll is, you know, a big risk firm that does a lot in cybersecurity.
It's got a really competent incident response shop,
and they acquired like an MDR provider some time ago,
so they do MDR as well.
And, you know, they've got a really good reputation.
And they work, you know, with big companies,
with governments, like with all sorts, and yeah, as I say, they definitely have a reputation for
knowing what they're doing. So today we are chatting in this week's sponsor review with
Simon O'Neillan's, who is the managing director of cyber and data resilience for Kroll in London.
And really, Simon has been chatting with a lot of boards lately about cyber security,
certainly in the wake of all of the drama that has engulfed the UK with the attacks
against the retail sector and Jaguar Land Rover. And he says, you know, really there's a bit of an
opportunity at the moment to get in there and educate boards about cyber security. But you want to
do it right. You know, you don't want them to ask a simple question and for you to be sitting there
spinning your wheels and unable to answer it because that just is embarrassing for everyone. So here's
Simon O'Nions and I started off by asking him where boards are at really with their understanding
of cyber issues at the moment and here's what he had to say. It's really interesting and it does change.
It's not, you know, I can't make a sweeping statement that says, you know, all boards are full of, you know, 60 year old, you know, died in the wall, business people who don't understand technology, but more often than not, that that is the case.
But we're typically not dealing with a generation of digital natives here.
We really understand what this is.
This is witchcraft to them, right?
This is a dark magic that they don't understand and they need to pay, you know, a deeply technical chief information security officer or someone, a ton of money to come in and fix for them.
and there's a kind of delegation of responsibility that's associated with that,
or in some cases maybe even an abdication of responsibility,
where they're, you know, they're just trusting that somebody else is fixing the problem.
And from a sort of business perspective, from a business governance perspective,
that's just, you know, you can't do that anymore.
This is an existential risk.
We've seen in recent times so many, you know, damaging issues that are real core strategic business issues.
and so we can't continue in that vein anymore.
We've got to bring boards on the journey.
At a crawl, even now, we see disconnect between the executive and the board
in terms of what their level of understanding of this risk is,
and we need to get better at explaining it and bridging that gap.
I think that's one of the core issues that we've got,
because if you think about the responsibilities of the board,
they're otherwise challenging questions,
they don't understand the way the business is being run,
and if they don't, then we've got a big issue.
I mean, it's always seemed to me like a lot of boards, board members just, yeah, they don't think about it at all.
There's just this assumption that it's being handled.
And you kind of alluded to that just there.
But there's this assumption of like, oh, there's some nerds down in the IT department.
They've got all of this under control, right?
Like, and that's quite often, you know, said nerds in the IT department are pointing at things and saying, you know, we've got a real problem.
But that message just doesn't get through or it sounds like typical sort of corporate whining about being under resourced.
like and not like a real problem.
And I think some of that is a problem of our own making, right?
I know if you go back five years,
I think a lot of cyber was technical people saying this is a really tricky problem.
Let me come and fix this for you.
You don't need to worry about the detail.
I've got it.
Yeah.
And we can't do that anymore.
We have to migrate this into something that is a proper business consideration.
And to do that, we need to speak business language.
We need to be talking about all the impact of the business in terms of revenue,
reputation, you know,
broader harm to consumers or markets
or whatever else it might be in a way that the board
get.
Because we can't be selling this as Black Magic anymore.
We have to be talking about this in those terms
because they don't get it at the moment.
There are talk kits out there to help people,
you know, but if you,
in a UK context here,
if you talk about the National Cybersecurity Center here,
all the CISOs will understand
or the techies and understand,
the analysts will get it.
The board many times have never really heard of them
other than seeing news articles that talk about this government agency
that in their mind is probably rather untouchable
and that's just not the case.
So we need to be better.
Yeah, yeah.
I mean, many of my friends in security consulting
have made jokes about having to do like finger puppet shows.
It is a little bit like that.
And we're not trying to deride board members,
but it is difficult to cut through, right?
when they haven't really understood that this is a risk that is bored level.
But I'm guessing, you know, being someone who's based in the UK,
one thing that definitely would have changed that outlook in the United Kingdom at the moment is the Land Rover incident,
which is, you know, it's an incident of national significance.
It's showing up in like the GDP projections, right?
Because it's such a big deal.
You know, has that really lit a fire under their glutes, shall we say?
I think inevitably has.
But history, maybe not at this scale, but history is littered with victims in this sort of thing.
So you see these peaks and troughs where there's a, there's huge swaves of highly reported attacks.
You know, we've got M&S, co-op, Harrods, JLR.
But they tend to be sort of vertical specific, right?
Like retail's under attack and everyone freaks out.
Not patchy in 2017.
Oh my God, terrible day for logistics.
And, you know, it is that thing where it seems to be a vertical specific thing.
I think what's different, though, about the Land Rover thing is the economic impact, right?
But that's where we're going now, right?
Yeah, yeah, yeah.
You've got really high-profile manufacturing or whatever, right?
I mean, I don't want to get into a victim-shaming thing here because, you know, we can opine about whether or not they should have done more before the event.
I don't think any of us really know yet.
So it could be anyone, I think, is the thing here, right?
And now we're starting to see really high-profile brands get here with huge economic impacts up and down their supply chain.
and there are moves in regulated environments to look at addressing some of that through third-party risk management,
which is really difficult through better information sharing, which we've been talking about for over a decade,
you know, and all these different things that people are pushing.
The really interesting thing in the UK is if you look at the critical national infrastructure,
and if you look at the Cybersecurity Resilience Bill that's gradually parliament now, J.L.R. would not be caught by that.
They're not critical national infrastructure, not deemed it anyway.
if you look at the economic impact of an attack there,
and you can extrapolate that out over, you know, pick a large manufacturer.
You know, they're not going to be captured by this.
So the incentives have got to change, I think.
You know, we can't be relying on somebody in the boardroom looking at that
and go, well, you know, thank goodness that wasn't us.
I think it's happening quite a lot at the moment.
But with these peaks and troughs in mind, in six months time,
it's to put a, again, to put a British phrase on it, right?
That's going to be chip paper.
that's, you know, these things are quickly forgotten because there are, you know, things are just too fast paced.
So we've got to be better at making the lessons that we learn out of these big high profile incidents actually bring about fundamental change in the way that businesses are being run and the way that they're looking at cybersecurity.
Because the risk management programs at a lot of places just don't work either, right?
I mean, the ideologies, the motivations, the tactics, all these things change like a daily, hourly, whatever basis, right?
I mean, these things shift all the time.
You're absolutely right.
It's the retail sector at the moment that seems to be bearing the brunts of it.
But that could shift.
That could be finance.
That could be farmer.
That could be anything.
And then I guess what you're saying is what you're seeing now is like a pulse of interest from boards rather than like a permanent, you know, sea change or watershed moment, right?
I mean, that's my concern.
Absolutely.
So let me ask you this, right?
So say you're a CISO, say you're a security person, you know, these, you got a bunch of olden cross.
board members, right? You've got to go fill them in, get them on your side. You're talking about
people who, you know, dictate their emails to their secretaries and, you know, only use an iPad.
What is it that you can actually, you know, how do you do that? How do you talk to these executives
and sort of bring them onto your side and get them sort of OFA with a security program and what
it should look like? That's a really good question. And I think the most foundational one
is explain this to me in a way that I can understand, right? Why are we spending
this much money on cyber. Why are you telling me it's not enough? What are we not doing? It's foundational
questions like that. I mean, one of the most kind of beautiful things in this is that that lack of
knowledge is actually a real power if you exercise it in the right way and ask what might seem like
dumb questions. They're not. They're actually really insightful because if you as a see-so or as a
technical leader can't explain something to someone in terms they understand, you probably don't
understand it well enough yourself. So it drives a level of rigour.
into that process. So, you know, what are we doing well? What are we not doing well? What are the risks
associated with that? Because, again, I think we find ourselves baked into an environment where people
look at this as an inherently technical risk, which it is, but it has big business impact. So when we're
testing this stuff, right, I mean, when you're sitting down doing your tabletop exercises or your
pen tests or whatever else, are you taking the results of those and extrapolating that out so you can
look at what a capital impact on the business is, right? I was a really good example of that.
have they a key question I would have
were I a board member is
you know have we looked at
what the business viability looks like
if our core operations are shut down for a month
two months three months
I got to tell you one of my
one of my smarter CISO buddies
who works in healthcare
was very smart during COVID
because they did have to close various bits
of their business
very big business had to close various bits of it
due to certain things happening in the pandemic
I won't give away too many details because that'll give away who it was.
But what that meant is all of a sudden he could quantify
what outages and downtime actually cost the company
because of the experience during the pandemic.
So he used that to justify a bunch of really cool programs and initiatives
which were essentially around ransomware control.
And I think, you know, knowing a bit about what he's done there,
I think they're actually in really good shape now.
And it was because he could go to the board, he could go to the CEO and go to the board and say,
look, you know, this is what it would cost us if we had a ransomware incident.
And we know this because of X, Y, Z.
So I think you're right.
Like, you know, being able to answer that question of like, why are we spending all of this money?
Like, if you can't jump in immediately, like, instead of sort of scoffing and being unprepared, you know,
jumping in with an answer there is probably a good idea.
It's so important because what we should be doing.
I mean, risk management in cyber is typically what, you know, what controls do we have in place to
mitigate this threat. What it should be is what controls do we have in place to mitigate this threat?
And if those controls fail, where's the impact on the business? Because that's what the board
care about. How much money am I going to lose if this thing manifests? How likely is it to manifest?
And that's where it gets really tricky because the likelihood changes. I mean,
banks are a good example where a lot of these risks are managed like quarterly risk and audit
committees. The risk in quarter one is going to be completely different potentially to the risk in
quarter two. And many of, not all of them. And it sounds like you know a few people that
have done this reasonably well. Not many of them have got mechanisms in place to set trip wires
and to set mechanisms into detect when that threat changes to an unacceptable level and what that
means to the business. So, you know, we know that there is an increased loss of losing X million if this
threat manifests or X billion if this threat manifests and therefore we need to act now. We can't
wait for the next risk and audit committee to approve a spend cycle that's out of budget and, you know,
these other things. So we've got to be much, much more adaptive and dynamic with the way that
we deal with this. And people are getting on board with that now. I mean, cyber risk quantification's
been a thing for a long time. And a lot of people, including myself, up until recently, were
fairly skeptical about whether you could ever actually do that. I think now we're moving to a place
where we can. And I think some sectors are better than others. I mean, finance have been forced to do
this through various regulatory models for a long time. Do I have enough money to ride through a
crisis because we've been through enough of them in the past? But those lessons haven't rippled out
into other sectors and I think now that's starting to happen
and I think things like Jay and are big
high-profile attacks are helping to focus the mind.
All right, Simon O'Neyans, thank you so much
for joining us for that conversation.
Good ranting there, actually, good ranting about
where boards and executives need to be thinking.
A pleasure to chat to you.
Yeah, thanks very much. Really enjoyed it.
That was Simon O'Nions there from Kroll Cyber.
A big thanks to him for that.
And that is actually it for this week's show.
I do hope you enjoyed it.
I'll be back tomorrow with a soapbox edition of the show.
And then again, next week for the last episode of Risky Business for 2025.
But until then, I've been Patrick Gray.
Thanks for listening.
