Risky Business #832 -- Anthropic unveils magical 0day computer God
Episode Date: April 8, 2026

On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They cover:

Anthropic's new Mythos model hunts bugs and chains exploits together so well that... you can't have it... unless you're one of their Project Glasswing partners
The world isn't short on bugs, though: F5, Fortinet, Progress ShareFile, and TrueConf are all getting rekt by humans
GPU Rowhammering goes into the GPU, past the IOMMU, and back into the host-side Nvidia driver
North Korea is spending serious time and money on its crypto hacking
Just when the US needs CISA most, they slash its budget some more!

This week's episode is sponsored by identity verification firm Persona. Tying digital actions to actual human identities isn't just for banking know-your-customer any more. Persona's Benjamin Chait says know-your-staff checks belong in high-value flows inside your organisation, too.

This episode is also available on Youtube.

Show notes
Claude Mythos Preview \ red.anthropic.com
Anthropic Claims Its New A.I. Model, Mythos, Is a Cybersecurity 'Reckoning' - The New York Times
Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything | WIRED
FFmpeg on X: "Thank you to @AnthropicAI for sending FFmpeg patches" / X
Critical flaw in F5 BIG-IP faces wide exploitation risk | Cybersecurity Dive
React2Shell vulnerability helps hackers steal credentials, AI platform keys and other sensitive data | Cybersecurity Dive
Critical flaw in FortiClient EMS under exploitation | Cybersecurity Dive
Researchers warn of critical flaws in Progress ShareFile | Cybersecurity Dive
CISA gives agencies two weeks to patch video conferencing bug exploited by Chinese hackers | The Record from Recorded Future News
New Rowhammer attacks give complete control of machines running Nvidia GPUs - Ars Technica
North Korea's hijack of one of the web's most used open source projects was likely weeks in the making | TechCrunch
Drift crypto platform confirms $280 million stolen in hack as researchers point finger at North Korea | The Record from Recorded Future News
Drift on X: "Drift Protocol — Incident Background Update" / X
Trump's FY2027 budget again targets CISA | Cybersecurity Dive
CISA's vulnerability scans, field support on chopping block in Trump budget | Cybersecurity Dive
Iranian hackers break into U.S. industrial systems, agencies warn
FBI labels suspected China hack of law enforcement data 'a major cyber incident'
Russia Hacked Routers to Steal Microsoft Office Tokens – Krebs on Security
Massachusetts hospital turning ambulances away after cyberattack | The Record from Recorded Future News
Exclusive | 'Ghost Murmur,' a never-used secret tool, deployed to find lost airman in Iran in daring mission
A Secure Chat App's Encryption Is So Bad It Is 'Meaningless'
Transcript
Hey everyone and welcome to another edition of Risky Business.
My name's Patrick Gray.
Adam Boileau and James Wilson will join me in a moment to have a chat through the week's security news.
And of course the big news this week is that Anthropic is putting its Mythos model into like a limited preview
because it's too powerful a 0day discovery machine and whatnot.
So we'll be talking through that in just a moment plus all of the other news over the last week.
And then we'll be hearing from this week's sponsor.
and this week's show is brought to you by Persona,
which is an identity verification company, I guess you would call them.
I mean, they started off more in that sort of KYC space,
but these days, as you'll hear
when we are joined by Benjamin Chait,
who works in product over at Persona,
that sort of technology is actually very useful in the enterprise,
particularly with remote workers who you may not have ever met IRL.
Like making sure they're not North Koreans, maybe,
kind of useful, or making sure that the person you hired
is the person who's still doing the work.
So that is this week's sponsor interview coming up later.
And for a first in Risky Business History, it's not actually me who did the sponsor interview.
James Wilson filed that one because I'm on light duties this week because it is school holidays in my state.
So yeah, that's one for the history books.
All right, let's get into the news now.
And yeah, as I mentioned at the intro there, Anthropic is releasing its so-called Mythos model,
but it's not releasing it to the public just
yet because, as per Anthropic, oh my God, it's too powerful.
The world cannot contain its power.
Adam, walk us through the rough shape of this story, if you would, please.
So we've been seeing LLMs getting better and better at interpreting code and reasoning
about it, you know, over the last couple of years.
And this Mythos is kind of Anthropic's work at building a model that is specifically
well suited to reasoning about code.
And they have found that it
is very good at doing specifically cybersecurity things.
Understanding code, finding bugs,
reasoning about it, writing exploits,
or at least proving that the bugs are real things.
They appear to have hooked this Mythos LLM
up to a whole bunch of code bases out of some of the open source world
and let it loose and it's found a heap ton of bugs.
So far they are documenting a few of them
in this particular blog post talking about it.
They say there's plenty more, like they said, like thousands to come.
They've managed to put some hashes in here
of things they're going to release in the future once it's been through the
patching and disclosure process.
But this thing, you know, per their write-up, is, you know,
terrifying and amazing and as good as high-end humans at finding bugs.
And, you know, that, I guess this is not unexpected,
given the trajectory that we are on,
but yeah, it feels, you know, like pretty interesting times.
Yeah, I mean, it feels real, right? I guess is what you're saying there. And, you know,
the fact that we woke up this morning here in Australia and New Zealand and it's all over
the New York Times and Wired and here and there, like kudos to Anthropic's PR people,
because they've certainly managed to make a splash with this as like a cyber security story.
You know, it's been an interesting few weeks for this stuff, right? Because as you say,
like we're seeing more and more LLMs being used to do this stuff.
I'm seeing a lot of cope on social media, frankly, from security researchers.
I'm seeing them say, oh, well, you know, there's going to be a very limited pool of one-shotable bugs that you could just ask the model to produce.
And I'm thinking, okay, that might be true.
But then, you know, what happens when the next model comes out, right?
So, I mean, even Anthropic says that, you know, keep in mind that today's best model is, you know, tomorrow's worst model eventually, right?
So you do sort of wonder how this is going to shake out.
I kind of feel like, and I want to get your opinion on this, James,
I kind of feel like the job for vulnerability researchers in the future
might be leveraging these models into finding stuff that is not one shotable
until the next model, right?
So that's going to be the job, is going to be actually trying to stay ahead
with the models that are available now before the next one comes
and auto-magically discovers those sort of bugs.
I kind of feel like that might be where we're heading.
What do you think?
Possibly.
it's interesting, you know, the concept of prompt engineering seems to be all but gone, right?
If you were to say the goal of the human now is to work out how to craft the prompt
to stay one step ahead of these one-shot finds, I don't think they even need to do that.
You know, the prompts that they're sending to Mythos are things like, please find a security
vulnerability in this program. So, you know, what do you do? Append, and I'd like to stay
ahead of the next model, please? I just don't really know what you can do to stay ahead other than just
really be on the forefront of adopting this, pointing at as many repos as you can that you're concerned about.
But it still hasn't solved the problem of how do the defenders actually go about triaging this
and making a meaningful change to their repos and also avoiding the fact that we've got to remember,
pointing a model at hundreds and thousands of repos and producing heaps and heaps of bugs and getting those all fixed is wonderful.
But the number of times I've found a simple bug fix turn into yet another exploit is like,
you know, it's a common thing that happens.
So there's all of a sudden going to be a massive perturbation
throughout the open source community of all these fixes landing
that is probably going to give rise to the next wave of bugs anyway.
So it might not be that the next model has to come along, Pat.
It might be just that throw the model at all these fixes
that are getting generated in rapid time,
and that's where you'll find your next set of exploits.
I mean, maybe, but I would point to a tweet, actually, from FFmpeg.
And we covered this months ago, like last year sometime,
that FFmpeg were getting really salty
when people kept submitting bugs to them without patches.
And now they've posted, thank you to Anthropic AI for sending FFmpeg patches.
Now, I did talk about the coverage in Wired and the New York Times,
and this coverage is looking at the fact that Anthropic has launched this thing called Project Glasswing,
where they're giving preview access to this model to all of the major vendors,
some security vendors as well, like CrowdStrike, and, you know, Microsoft, Apple, whatnot.
The idea is they're going to get a head start on finding bugs.
But again, I do feel like this means that every time they update their model,
we're going to have to wind up with something similar, right?
Because otherwise it's going to be like a mini catastrophe every time they do a point release,
which is probably not an ideal way for things to go.
But I just want to mention one thing there, James, which is you spoke about the hype factor here.
This is something you've got to really keep in mind with Anthropic.
It's that Anthropic decided early on that their brand, that its brand, is very much around safety, right?
So they do this whole thing about like, oh no, you know, AI is big and scary and can cause so many societal harms.
And this is why you really need to regulate strongly to prevent people from using Chinese models, right?
Like this seems to be a big part of their brand.
And it is sort of self-serving where they want to ban, I guess, models that have been distilled out of theirs and whatever.
It kind of makes sense.
But look, another factor in all of this is, as we'll see from the news this week,
it's not like we're already short on bugs, right?
Like, Adam, do you think an infinite 0day machine really actually changes that much?
I mean, I think it does, but maybe not in the way people are expecting.
I mean, it definitely changes some things.
And like, I'm thinking about, you know, all the people I've worked with over the years
who have been very good bug hunters, you know, the sorts of work that we did.
Like, I think, you know, when we first saw LLMs and AI models, you know,
starting to rise over the last couple of years, you know,
some of the people that felt particularly victimized were artists and creatives,
people who felt like all of a sudden anyone could generate art and post it on social media
and they felt like their work was being undermined and undervalued,
and that the human touch on that in art was really important.
And I think the funny thing about being a vuln hunter is the human touch doesn't matter here.
What matters is the shell out the other end.
Like the artists have something to point to.
You need us because we bring the humanity.
Not so for us. Exploits don't need to have soul and suffering, right?
They don't. They don't need to have suffering.
Like Nick Cave said about people who are having AI write, you know, fake Nick Cave songs.
He's like, yeah, but where's the suffering?
You know, that's an important part of good music.
You know, we don't have that in the exploit dev world.
And so AI is actually going to be pretty good at replacing us.
And that's a crazy time.
And, you know, the Anthropic blog post has a line.
What was it?
Ultimately, it's about to become very difficult for the security community.
After navigating the transition of the internet
in the early 2000s, we've spent the last 20 years
in a relatively stable security equilibrium.
It's like, excuse me?
Did we work in the same industry?
It's been bonkers since the internet
and it's going to continue to be utterly bonkers.
And like, as you and I have said
on the show before, like we are here
for that. Like, that's what we're riding.
That's what this podcast is all about, the bonkersness.
And so like in that respect,
Like, it's just going to be fun.
It's going to be wild.
It's going to be messy.
And, you know, even if they keep this model secret for a little bit,
like, what's that going to buy us?
Like, three months of Microsoft getting preview access.
Well, okay, okay.
So let's take a little tour, shall we, of this week's horror show bugs.
And explain, I guess, why Anthropic's preview matters.
You know, are they giving this technology in preview to F5?
Because there is apparently a critical
flaw in their BIG-IP stuff, which is facing widespread exploitation risk, according to
Cybersecurity Dive, this piece here.
There's people out there still exploiting React 2 Shell, according to Cisco.
This is turned into a bit of a bit of a drama, and there's a whole bunch of stuff getting
stolen in that.
Critical flaw in FortiClient EMS under exploitation, Adam.
I mean, so I sort of think, okay, we're already in trouble with these sort of bugs.
when Mythos goes GA, what happens to Fortinet?
Right?
Like, what's your feeling there?
Adam, I'll get your feeling on that first.
Well, I guess step one, they can fire all three of their QA staff,
because clearly they don't have very many,
and replace them with an LLM,
and that will produce hopefully much better results.
But the hard part is, I think,
hooking up test environments and test harnesses to the LLMs.
If you're Fortinet, you have to get an LLM
into a position where it can introspect your products
and test them.
And for some people,
that's going to be more difficult than others.
And I think some vendors,
like obviously Fortinet,
should run all of their products
through Anthropic's stuff
and hopefully that will produce
a significant improvement in their things.
And maybe this is what Fortinet needs.
Whether everybody, you know,
Cisco and, you know,
what else do we have, like Progress Software,
you know, like all the people,
F5, you know,
who are in our list of, you know,
terrible vendors this week. Like, are they going to be able to survive this much change,
like organizational change? Like the tech side of it, like throwing AI at your products,
probably good. But can they do what Dan Guido's Trail of Bits did, like reorganizing their
whole company around AI? Can you see a Fortinet doing that? Probably not, right?
Yeah, so I want to bring you into this part of it now, James, because I had a really interesting
conversation with Adam Pointon the other day, CEO of Knocknoc, and one of the things that
they've done is they have worked on getting their whole code base into a state where it's
Claude-friendly. The way Adam described it, I thought, was actually really interesting. He's like,
you know, this product has been around for a while, a bunch of people have, you know, have contributed
to it. It's sort of like an apartment block and every apartment has a different style and, you know,
has been decorated according to different taste. And so if you throw an AI agent at it and say,
hey, you know, we would like to build another wing to this apartment building, it gets a little bit
confused because it doesn't know what style to use and whatever. So you sort of have to refactor your
code base a little bit and get it in the right shape. But it feels like that's what everyone's going
to have to do now is work towards being an AI-first code-generation shop. Right? And, you know,
you're the one with the experience in software engineering, James. What do you think of that idea that,
you know, now the job is about figuring out how to make your code base friendly for Claude to work on
because that's, like, that's a sign of success for Anthropic. They've done well.
Yeah, absolutely. And you know, it even challenges some of the just fundamental, I guess,
paradigms we were operating in software engineering. You know, for a longest time, there was this
debate of do we go mono repo or do we have lots of repos? And the argument was, well, mono repo's got
all the code in one place, it's just easier, but that repo gets massive. And so people would break it
out into, you know, here's the repo for our iOS app and now here's different repos for our web
services and whatever else. And that made it easier for a human to reason with and to sort of keep
separation of concerns. But, um,
Certainly when you've got Claude coming along and you want Claude to reason with your entire software suite,
you kind of have to go back to putting all of these things into one repo.
And I don't actually think it takes a lot of human work these days to think about how to structure it.
Like, as long as you get all the software in one place as the starting point,
and then prompt Claude and say, okay, this is a longstanding product.
Here's the general run of the mill of what it's got.
How would you structure this to be more accessible? And then write an AGENTS.md
along the way and in partnership with the human there.
I think you can, you know, you probably get a long way towards that.
And that is, I mean, you know, that that is literally table stakes at the moment now
for any business that wants to remain relevant in this emerging landscape.
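For listeners who haven't seen one, the kind of agent-facing notes file James mentions is usually a plain markdown file at the repo root. A purely illustrative sketch (the directory names, commands, and conventions here are made up, not from any real code base):

```markdown
# AGENTS.md (illustrative sketch only)

## Layout
- /ios: the iOS app (Swift)
- /web: web services (TypeScript)
- /shared: protocol and schema definitions used by both

## Conventions
- Run `make test` from the repo root before proposing any change.
- New endpoints get a schema entry in /shared before implementation.
- Match the style of the file you are editing; do not reformat whole files.
```

The point of the exercise is exactly what James describes: once everything lives in one place, a short file like this gives the model the map and the house rules that a long-serving human would otherwise carry in their head.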
Well, I mean, I can't think of a bigger change in software development like ever.
Like it makes like the conversation around something like, oh, DevOps, should we or shouldn't we?
DevOps versus Waterfall.
I mean, that was the big topic for a while.
And now you just look at that and you're like, how is this even a conversation?
Yeah, I haven't thought about DevOps for a long time, and I'm still amazed that for someone that loves writing code, I haven't written a line of code since probably November last year.
I was amused last night, popping open a code editor just to compare two dot-env files to make sure secrets are there.
And that's the only time I'm opening a code editor anymore. Wow.
Yeah, yeah. And you're working on some stuff for risky business as well, which could be a lot of fun.
We'll stay mum on that for now, but it's going to be cool.
Now, look, you know, those bugs I mentioned before are not the only ones.
We've also got bugs in something called Progress ShareFile.
Is that a file transfer appliance, Adam?
It most certainly is.
And actually, this one is, like, a cloud control plane, but you can store your data on-prem kind of situation.
And this bug is super funny.
Like, you hit the admin page and it redirects you off to the login
page if you're not logged in, but it still renders and runs the page. So like the page is still
there, your browser just redirects you away before you could see it, which is the kind of thing that
happened in PHP apps in the early 2000s, and they've managed to reinvent that bug class on
modern .NET. And yeah, literally you just go to an unauthed admin page, and then the forms
are there, and you can fill them in and edit the config, and onwards to, I think watchTowr
worked that up into code exec. So yeah, it's just a very funny, like, throwback
of a bug.
So, yeah, fun.
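The bug class Adam is describing, often called "execution after redirect", fits in a few lines. This is a toy sketch of the control-flow mistake, not ShareFile's actual code; the handler names and response shape are invented for illustration:

```python
# Toy model of an "execution after redirect" bug: the handler sets a
# 302 redirect for unauthenticated users but forgets to return, so the
# sensitive page body is rendered into the very same response.

def admin_page(authenticated: bool) -> dict:
    """Vulnerable handler: issues a redirect but keeps executing."""
    response = {"status": 200, "headers": {}, "body": ""}
    if not authenticated:
        response["status"] = 302
        response["headers"]["Location"] = "/login"
        # BUG: missing `return response` here; execution falls through.
    # Rendered regardless of the auth check above.
    response["body"] = "<form action='/admin/config'>secret settings</form>"
    return response

def admin_page_fixed(authenticated: bool) -> dict:
    """Fixed handler: the redirect short-circuits the request."""
    if not authenticated:
        return {"status": 302, "headers": {"Location": "/login"}, "body": ""}
    return {"status": 200, "headers": {},
            "body": "<form action='/admin/config'>secret settings</form>"}
```

A browser obediently follows the 302 and the user never sees the form, but anything that ignores the redirect (curl, a proxy, an exploit script) gets the full admin page in the same response, which is why the forms were "still there" for unauthenticated requests.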
Yeah, let's see if Clop goes nuts with that one.
What else have we got here?
We got CISA warning agencies, government agencies,
that they need to patch a bug
that is being exploited by Chinese intelligence services,
and they've got two weeks to fix it.
This is in software that I'd never heard of.
What is a TrueConf?
What is TrueConf?
I've never heard of it, Adam.
Yeah, so it's like an audio-video conferencing kind of platform thing,
so I think like Zoom, I suppose.
The funny thing is it's kind of not really so much a bug as a sort of misfeature.
There's, like, client software that the server will drop on you, and,
like, if you're an admin, you can upload updated versions of the client software to this thing,
which will then push down to end users.
And so this is the Chinese getting access, probably through cred stealing, to these servers,
putting backdoored updates on them and then initiating video conferences with people,
which will push down the modified client to them and then shell them.
So it's quite a, you know, quite a fun campaign in that respect.
I don't know how widely used this is, but CISA seems to think that it is, you know,
being used by government organizations across, you know, Europe and Asia and America, obviously.
Yeah, I mean, this is what struck me about it as well, which is like,
oh, here is CISA warning government agencies to urgently stop using this piece of obscure teleconferencing software that I've never heard of.
Then again, you know, someone might pop out of the woodwork and say, obscure, it's everywhere.
But yeah, anyway.
We've also got some Rowhammer news
where someone's figured out how to apply Rowhammer
against, like, Nvidia GPUs.
Now, Nvidia GPUs are a big thing in the news,
obviously at the moment, Adam.
But is this research actually interesting?
Is it novel?
It's actually pretty interesting, I thought.
I mean, obviously, you know,
Rowhammering has developed a lot over the years.
There's been a lot of interesting techniques
and different types of memory and work that you have to do.
And a lot of it honestly isn't super exciting for us to talk about.
But this one, there's three separate sets of research that all basically did the same thing,
which is to Rowhammer modern Nvidia GPUs.
Some of the researchers leveraged that to, like, bit flip the contents of memory on the GPU
and like degrade the performance of LLMs that are running or, you know,
other AI models that are running on the GPUs.
Not very exciting.
A couple of the researchers leveraged it to modify, like, memory page tables in the GPU
and then leverage that into an arbitrary memory write.
And one set of researchers turned that into manipulating the Nvidia driver back over in CPU memory to then get code exec off that.
And so you can go into the GPU and then across from there back over into the driver and pop out in a privileged context in the kernel.
So, like, privilege escalation via the GPU, even in the case where the IOMMU controls are turned on, which is, you know, kind of like the best case security setup at the moment.
So yeah, that's pretty cool research.
like actually weaponizing it like that.
So yeah, that's pretty cool.
Yeah, nice.
Do that, Claude.
I mean, it probably will, right?
It probably has.
I shouldn't joke about that.
Now, James, let's have a chat about the post-mortem
on the Axios supply chain compromise.
We talked about that last week.
Jason Saayman, who maintains Axios,
has published a write-up, a bit of a post-mortem there,
and Zack Whittaker wrote it up for TechCrunch.
The gist here, I think, is that the North Koreans
did some very on-point social engineering,
which enabled them to do this.
But when you read this, you're thinking,
geez, it didn't take much, right,
to do something so high-profile.
So walk us through this, if you would.
Yeah, I remember when we talked about it last week,
I said, you know, we weren't sure what happened.
And I think you said, I was probably just,
you know, they managed to get the credentials out of the browser.
But it was kind of more interesting, but also just kind of sad,
the way it ended up, right?
But yes, I think this is yet another data point of the North Koreans getting more and more crafty
and actually taking their time to really plan this out.
Like this one was a couple of weeks in the making.
Essentially, you know, they put together a very real-looking Slack workspace and invited Saayman to come along and join it.
And it was very, very convincing.
They scheduled a meeting to connect.
This was basically people that were going to help and contribute to the project
in the effort. But the thing that got me is that it's like, yes, they took the time to build
up rapport and they took the time to build up trust and they took the time to look very convincing.
Then when you join the meeting, you know, the meeting popped up an
error and said, oh, you need to update something. Don't worry, here's the updater. Can you just go
and install that? And he went and installed it. And it's just like, buddy, like, I don't want to
throw too much shade on the guy, but why? Why would you have just taken that link?
I mean, I think it's because, as you say, like, they made the whole thing very believable.
And I think when you are in that situation, you're going to be thinking,
oh, right, so this whole thing is just a ruse, is it, to get me to run this executable?
You're not going to be thinking that, right?
And I think that's the whole point of spending weeks and faking websites and Slack workspaces
and all of these conversations over a long period is so that you don't have that suspicion.
It's not like this was someone they met five minutes ago in a forum.
You know what I mean?
So I think we've got to be careful not to
throw shade at people on this. I think it's the same as a lot of these people who get victimized by
scammers is, I think I remember one point like 10 years ago, there was evidence that there were
psychologists working with some of these criminals to help build these scenarios that were
most likely to get people to respond. That said, I think I can't see myself doing that, but
I'm a professionally paranoid person. This guy isn't, you know what I mean? So I think, I don't know,
Cut him some slack, I guess is what I'm saying.
As I said, I'm trying not to throw too much shade in because I do feel for the guy.
But I think maybe I should frame this better and say, look, it is fascinating that despite the fact that the whole concept of a ClickFix attack, the install-a-fix attack, is a known vector, you almost wonder, did they even have to go to this much effort to be that convincing when someone is still willing to just install an update on the fly when they can't join a meeting, right?
There's something about that concept of, I joined a meeting, there's a problem,
and they gave me an update to install, that should be ringing bells:
I've heard about this before, this is a relatively known concept, don't fall for it.
And yet people do.
So maybe we're just too close to it, and I'm being unfair.
Maybe, maybe. Who are we to judge, etc?
Moving on, though, and staying with the pesky North Koreans,
they pinched $280 million.
from a DeFi platform called Drift.
Is there anything particularly novel about this one,
apart from the fact that it's $280 million?
I think the novel thing here was like the previous one James was talking about,
like that was a couple of weeks of prep.
This one was months.
Like they went to conferences with this drift platform's people.
They met them IRL.
They shook hands.
They made some deals.
They invested like a million bucks of their own money,
the North Koreans money,
into building rapport with these guys
to get to the point where they could compromise
enough systems to bypass the like multi-signature controls and eventually loot the entire platform.
So it kind of made the level of...
This is the one where they got the multi-signatures done months ago and then waited for a chance to use them, right?
Yeah, I have read about this.
Yeah, so like it was pretty good work.
And I think, you know, the amount of work that North Korea is willing to put in
clearly suggests they're making enough money out of this for it to be worthwhile.
So, like, you know, months worth of prep, like,
That's a pretty high bar, and, you know, people are going for it, as clearly happened here
and in the previous one.
James, you had some thoughts on this one.
You said there's some interesting tradecraft here.
Yeah, it's like a one-liner in this article, but it really stood out to me, obviously,
because of my knowledge of the Apple ecosystem.
Two of the attack vectors are kind of interesting.
The one that really stood out to me was they deployed test flight apps to the victims.
Now, TestFlight, of course, is a way for developers to get unfinished apps that aren't yet in the App
Store into the hands of people that are testing them.
The other bit that was just kind of amazing that they're latching onto is they put hooks in the repos so that if you even point your VS Code editor at it and say,
yes, I trust this directory, which is something we all do whenever we open something in VS Code,
there's a ton of scripts that actually run with zero user interaction at all, and that's how they deployed some of that malware in other cases.
So they're really stepping up their game.
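For context on the VS Code vector James mentions: workspace trust gates things like automatic task execution, and a repo can ship a task configured to run when a trusted folder opens. A hypothetical `.vscode/tasks.json` along these lines (the label and script path are made up for illustration) would fire without any further clicks once you answer "yes, I trust this directory", subject to VS Code's automatic-tasks setting:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build-helpers",
      "type": "shell",
      "command": "sh .ci/setup.sh",
      "runOptions": {
        "runOn": "folderOpen"
      }
    }
  ]
}
```

The `"runOn": "folderOpen"` option is the interesting bit: the "script" here can be anything the attacker checked into the repo, which is why casually trusting a cloned directory is riskier than it feels.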
And speaking of, you actually have a podcast going out today in the Risky Business Features channel. For those who are not
yet subscribed, just load up your podcatcher and look for Risky Business Features, which is James's
channel. You've done an interview with Geoff White, who is the guy who put together The Lazarus Heist
podcast. He's got another season of that coming, and you did a big deep dive with him on the entire
ecosystem around, yeah, North Korean fake IT workers and whatnot.
Yeah. I've always been sort of intrigued as to, um, what's beyond the headlines of, a North Korean IT worker was found,
or a laptop farm was found and torn down.
You know, what actually goes on behind the scenes?
How did they get to that point of even assembling a laptop farm?
How do they produce enough credentials to get a job?
How do they build up credibility to move from small-time job to medium-time job
to big-time job at an enterprise?
And so Jeff and I sat down.
I think the podcast episode is about an hour.
It's a thorough deep dive and an end-to-end look at everything from the moment
when they apply for the job, how they get the job.
But then what's really interesting is they don't just show up to work and wreck things.
They work really well, and it seems to be a farm of people delegating work out there as one person.
But interestingly, there's a lot of triage that goes on.
You know, it really seems to be a well-oiled machine where if there's a target that seems like it's got interesting IP,
that gets moved to a different department and they handle it differently,
and you might suddenly get the A-players behind the scenes working on this.
And then it culminates in how does the money get to Pyongyang,
and actually the most interesting thing is
a lot of it doesn't need to get to Pyongyang
to be super valuable for the regime.
So incredible, incredible chat.
It was so much fun.
All right, people can check that one out in risky business features.
Meanwhile, CISA's century of humiliation continues.
Apparently the 2027 financial year budget looks to be cutting
like hundreds of millions of dollars out of their budget.
Ah, yeah, they're gonna reduce CISA's budget by $707 million.
That's like 30% of its budget.
It looks like a whole bunch of stuff is gonna get chopped,
according to this piece from Cybersecurity Dive,
written by Eric Geller.
Their vuln scans, field support,
like a whole bunch of stuff is getting chopped.
I mean, you really do get the impression
that the reason this is happening,
there are two reasons this is happening.
First of all, CISA did disinformation takedown coordination
with a bunch of the social media platforms, which was, you know, controversial among
MAGA types.
And I guess the other reason is because Chris Krebs said that the 2020 election was secure,
and Donald Trump really didn't like that.
So you really get the impression they're trying to make it as dead as possible.
I mean, Trump's only been in power, like just over a year in this term.
So you get the feeling that by the time he's done with CISA in a few years from now,
they're going to be a shell of what they were.
I mean, James, what are your thoughts here?
Yeah, exactly. And you know, what's just the most galling about this is the capacity that they're cutting is exactly what they need right now. It's the scanning. It is the relationships with industry. And like, I don't know, it's just a massive own goal, but I agree. Like, there won't even be a CISA to talk about, I would imagine, before long.
Yeah, I mean, and we've seen, like, FBI doing some CISA-related stuff as well lately. So it really doesn't
feel like they're trying to move some stuff away from CISA and onto other agencies
while cutting their funding, and just, like, anyway.
Well, also they said here that they're going to move it to the states
and the states have just all thrown their hands in the air and said,
with what resources?
Yeah, yeah.
So depressing for those who are left at CISA.
And as you pointed out, James, this is, it is the case that the sort of stuff that they're
cutting is what they need now.
Like if we look at this next story for NBC News, apparently Iranian hackers are doing their best to break into U.S. industrial control systems.
And, you know, this is a warning from federal agencies.
And like they're cutting CISA as this happens.
This is happening also at the same time that the FBI has just labeled China's hack of their, like, CALEA system again as, quote unquote, a major cyber incident,
which means there's some sort of material impact or threat to life or, you know, demonstrable harm.
So all of this is happening and they're cutting back CISA.
I mean, Adam, thoughts.
Yeah, I mean, it makes no sense whatsoever
And as soon as the Trump presidency is over
And whoever comes in and replaces them
The first thing they're going to do
Is build something to replace the functions
That they're just cutting out of CISA, right?
This stuff is absolutely necessary
And it's just so pointless
To go throw it all out
And then start again for no reason
Other than partisanship
So yeah, it's frustrating
And like, as James said
Like this is what they need right now
And it's the exact thing they're throwing out
And it's so dumb
but I guess that's where we are, right?
Yeah, yeah.
Now, Brian Krebs has got a report on this one.
Apparently, Russia's military intelligence is like hacking home routers,
fiddling with the DNS entries,
and sending them to fake like Microsoft login pages,
which is generating a zillion cert warnings
and just hoping people click through them so that they can capture a whole bunch of creds.
I guess my question, Adam, is, but why?
And it is a very good question.
And like I don't know that the but why is particularly well answered here.
Like they really do just seem to be putting themselves in the middle
and stealing auth tokens to Microsoft services for we don't know exactly what.
And like the idea that somewhere in, you know, Russian military intelligence,
there is a group of people whose job is breaking into residential TP-Links
and changing the DNS settings.
Like in the time when, you know, Russia is at war with Ukraine
and there's all this stuff going on in the world.
And like, that's your contribution to it is,
compromising someone's TP-Link, you know, on their cable modem in, you know, Florida somewhere.
Like that doesn't feel particularly satisfying.
We don't know exactly why they're doing it.
The same group at the GRU were doing slightly more stealthy things with their compromises.
Lots of TP-Links and MikroTiks and other routers like that.
They got called out by the NCSC and then they've kind of pivoted to now more overt hacking of Microsoft infrastructure.
But yes, to answer your question, I don't know why.
I don't know what they're doing, who knows.
Yeah, yeah, who can tell?
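For listeners reading along, the mechanism behind those "zillion cert warnings" is worth a sketch. This is a hedged illustration of certificate pinning, one way a client can notice this kind of DNS rerouting; the certificate bytes and names below are placeholders invented for the example, not real Microsoft material.

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

# Pin recorded out-of-band for the genuine service (placeholder bytes).
PINNED = fingerprint(b"placeholder: genuine login.microsoftonline.com cert")

def looks_hijacked(presented_cert_der: bytes) -> bool:
    """True when the presented cert doesn't match the recorded pin --
    the same condition the browser cert warnings in the story flag."""
    return fingerprint(presented_cert_der) != PINNED
```

The point is that even if the router's DNS lies about where `login.microsoftonline.com` lives, the attacker can't present a certificate matching the pin, so the mismatch surfaces; the GRU operation described here relies on users clicking through that warning anyway.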
Meanwhile, there's this piece here from the record,
which on the surface of it is not that interesting,
but then you read it, and it is.
Kudos to Jonathan Greig, who wrote this one up for the record,
but there's a hospital in Massachusetts
that turned ambulances away in the midst of a cyber attack,
and it looks like they kind of got this under control,
but they interviewed this guy, Errol Weiss,
who's the Chief Security Officer at Health ISAC,
about basically what's going on in like healthcare with attacks and whatnot.
And he said what they're seeing is a sustained high level of malicious activity targeting the healthcare sector.
There's been a bunch of incidents, but in many cases, organizations were able to contain the activity before it reached the level of a major public outage,
which is why you haven't seen them disclosed.
Because these incidents are still being worked through with law enforcement regulators,
we're not in a position to share specifics or name organizations.
Now, I'm going to infer a couple of things from that.
I'm going to infer that perhaps the attackers these days, given the disruption to ransomware
as a service operations and whatnot, the attackers have got a little worse and the defenders
have maybe got a little bit better.
That's what I take away from this.
Adam, what's your take on this story?
I wonder reading this, whether or not some of the outages we saw in the past was because
of overzealous responses.
So, like, how many times when you see, you know, a cybersecurity event
that's led to a bunch of, you know, interruption to service, is it because the company had to turn
everything off because they didn't have a plan about how to deal with it in a more controlled
manner. And I wonder if some of the fact that these disruptions are, incidents are smaller and
aren't necessarily being publicly disclosed is because they've gotten better at coping with them
without going scorched earth, turning off the internet, turning off the network, unplugging everything
because we didn't understand. So if that's the case, then that's also a good improvement. Like it
maybe the attackers are constrained by some of the, you know,
law enforcement disruption that's been happening over the last couple of years.
But I feel like we are probably getting better at more managed, more controlled,
probably more playbook responses to bad stuff happening.
Yeah, I mean, I get that impression too.
Defenders a little better, a little bit more organized, you know,
attackers maybe the opposite.
Now, look, discombobulator, eat your heart out,
because we've got a new Trump-disclosed mystery device
to talk about via the very reputable
masthead of the New York Post.
We've got to read this because
we've got to talk about this one because it is just too interesting.
And what I love about Donald Trump
is that occasionally he just gets up and starts talking about stuff
that other presidents absolutely would not have.
And we've got all these sources apparently talking to the New York Post
about a device called ghost murmur
which they use to find the heartbeat of a downed,
you know, F-15 pilot or whatever in the desert of Iran
by, I don't know, detecting his heartbeat
electromagnetically with some sort of quantum device.
Now, could this be someone making stuff up
and telling New York Post people about it
after Trump spoke about a mystery device?
It absolutely could. Could it be real?
It also absolutely could be real.
So that's why I just thought we have to talk about it
because it's right up there with the discombobulator.
What's your feeling on this one, James?
Yeah, I loved reading this one.
Just some of the details.
It's, you know, specifically built sensors built around microscopic defects in synthetic diamonds.
Like, oh, okay, wow.
But also, it's such a weird story because it sounds fanciful, as you said.
And also the article kind of gives a blueprint to anyone that wants to defend against this working in future.
It basically spells out that, you know, this is super sensitive electromagnetic sensing technology that was able to pick up a heartbeat.
And it worked really well in the desert for two reasons.
It's an area without a lot of electromagnetic interference,
and the heat signature difference is quite wild,
so it gives you a very good secondary indicator.
So you can bet that the IRGC is now readying a whole bunch of electromagnetic interference kit
to go and deploy whenever there's a rescue mission going on.
Like this is just, if it's true, it's a wonderful use of technology,
but you think of it as an 0day being burnt.
This is a fantastic technology, if it's real, it's just been burnt,
because it's exposed its fundamental weakness.
And who knows?
Like this could just be CIA disinformation, you know,
which is one thing apparently where this,
where one of these pilots went down,
like there's some talk that like there was some OSINT people
who kind of got it wrong and thought the person went down in this particular area.
And then like CIA controlled accounts were like,
yeah, totally. That's where the guy is.
You know, so there's a lot of fun disinformation happening, I guess,
throughout this whole thing.
So anyway, fun stuff.
All right.
So we're going to wrap it up with this
one. You found this one. It's from 404 Media, and you wanted to talk through it because it is
extremely funny. There is a secure chat app, and Joseph Cox has put the headline on it
thusly, which is: a secure chat app's encryption is so bad it is, quote unquote,
"meaningless". And you've looked at this, and you're like, yeah, wow, this is really something.
Yeah, this is some proper comedy here. So there's this app called TeleGuard, which comes from
a Swiss, you know, kind of privacy-centric technology
provider with the very reputable name of Swisscows, and they have built this, like,
end-to-end encrypted messaging app.
And when you install it, like, so it's kind of like signal, you, like, you set up an account
and it generates some crypto keys to then use to end-to-end crypto your data.
Someone looked into it and discovered that in the process of, you know, signing up your account,
it generates a private key on your device, and then it sends that private key to the server,
to TeleGuard's, like, centralized service.
And, you know, it's sort of toy encrypted, like not properly encrypted,
sends them the private key.
And then when other people want to chat with you,
there's some kind of like API that lets them,
you get like the public key for the people you want to talk to and so on and so forth.
But the server has the private keys.
And then you can just kind of ask the server for those private keys.
And at that point, you, as well as the operator of the service,
can both just like decrypt other people's messages if you're on the wire between them
which, the whole point of end-to-end crypto is somewhat defeated by sending the private key to the server
to start with, but then the fact they also have bugs that just expose those private keys to
everybody, it's deeply, deeply funny. And, um, the 404 Media folks talked to
Dan Guido over at Trail of Bits
and they had a look at
Telegard and Dan sent back
a very fine meme which
404 Media have now been running with
as the artwork for the story
and yeah like it's just
if you want a brief smile
in these otherwise very bleak times
then the story is definitely worth a
read because the details of
the fail are most entertaining
and a rewarding way to spend your lunchtime
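For anyone who wants the failure mode spelled out, here's a toy sketch of the architecture described above, assuming the reporting is accurate. It is not TeleGuard's actual code: the stream cipher is a deliberately insecure stand-in and the server API is invented, purely to show why uploading private keys, combined with an API that hands them out, makes "end-to-end" encryption meaningless.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT real crypto): XOR data against a
    SHA-256-derived keystream. XORing twice with the same key decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class Server:
    """Models the reported design flaw: clients upload their private keys."""
    def __init__(self):
        self._private_keys = {}

    def register(self, user: str, private_key: bytes) -> None:
        self._private_keys[user] = private_key   # the fatal step

    def fetch_private_key(self, user: str) -> bytes:
        # A buggy or over-obliging API hands the key to whoever asks.
        return self._private_keys[user]

server = Server()
alice_key = os.urandom(32)
server.register("alice", alice_key)

# Bob sends Alice a "secure" message (toy model: her key encrypts it).
ciphertext = keystream_xor(alice_key, b"meet at noon")

# Eve never touches Alice's device -- she just asks the server,
# then decrypts the traffic on the wire. So much for end-to-end.
eve_plaintext = keystream_xor(server.fetch_private_key("alice"), ciphertext)
```

Once the server holds the private key, "end-to-end" is just a label: anyone who can query that API, including the operator, decrypts everything.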
all right guys we are going
to wrap it up there
a big thanks to both of you for joining me to talk through the week's news.
And yeah, I'll catch you both next week.
Yeah, thanks so much.
We will see you next week, Pat.
Thanks, Pat.
See you next week.
That was Adam Boileau and James Wilson there with a look at the week's security news.
It is time for this week's sponsor interview now, in which James Wilson interviews Benjamin
Chait, who works in product at Persona.
So Persona is an identity verification company, and, you know, this is the sort of
company that would do a lot of stuff around KYC, right?
So you might need to do a persona check if you're trying to open a bank account or whatever it may be.
There's a few companies doing that sort of stuff these days.
But where it gets interesting for our purposes is that these days there's a need in enterprise security programs for similar sorts of technology.
So, like, as you're hearing this interview, James posits, well, you know, maybe at Stryker it would have been useful to have a check, an identity check, before someone could, like, remotely wipe everything via Intune.
You know, maybe you want to insert a check there.
or, you know, if you're hiring someone for the first time,
maybe you want to actually do a bit of an identity check
and make sure that the person who does the interview
is the same person who turns up for the job.
So Benjamin joined James really to talk through these, you know,
more modern and contemporary enterprise use cases for KYC technology.
And here's what he had to say.
Enjoy.
KYC is not a new concept.
You've been doing this.
Many businesses have handled some form of government ID
and kind of selfie verification for years.
I think a couple things have changed in 2026.
One is that we are building tools to connect directly to platforms that companies already deploy.
So whether it's their identity and access management platform, whether it's their applicant tracking system.
We can talk about candidates in a few minutes, but we've got those types of integrations.
We connect with productivity tools like Microsoft Teams or Slack so that when someone fails a verification, you're not waiting for
that employee to contact the help desk; you get an immediate notification, or your help desk
can then proactively reach out. We have also got some integrations where we're able to go get the
employee's legal name and date of birth from a system of record because not every company,
many companies actually aren't putting that information into their IAM. And if you don't
have that information in the verification flow, you're limited based on, well, was this a legitimate
human who seemed to go through the verification, yes, but that doesn't actually tell you that that
human is meant to access this system. And so I think where we've been building and spending a lot of
our time is making sure we can connect to the tools and the ecosystems that already exist. And as any of us
who've worked in any enterprise know, every company is a little bit different and a little bit unique.
And so you've got many different combinations and permutations of here's where we're going to go
to get this name. Here's where we're going to fetch the date of birth. And here's
the tool that's going to trigger this or that's going to lock someone out.
Yeah, the plumbing is super complicated, right?
In any given enterprise, there's such a fractured set of data between the HR database,
the authentication database, the IT systems, the who knows where Jenny from payroll stored
that copy of the driver's license that you sent through.
So, you know, I think what I'm hearing from you is the strength of persona here is that
you can, you're not necessarily trying to aggregate this altogether, I don't think,
but you're connecting into a lot of different surfaces.
I think what you're doing there is you're making "is this
person really who they say they are" a check that any point in the system can really do, right?
And so that it can be integrated at whatever points required. Is that a sort of good way to look at
this? I think that's right. The thing I want to push back slightly on is it's not just about
that point in time check. So we've got to do all that work, get connected to all the right places
in the plumbing. But another place that we add value is you're not doing just one check when you
bring on a Benjamin on day one, you're also able to do a check, you know, 90 days in when I get a new
phone or when I'm getting elevated access. And what becomes really interesting or powerful is we can
have consistency of that identity across so many different touch points. And we're then able to find,
you know, is it truly the same person? But we're not just looking at the, you know, the photo
that's being submitted. We're also looking at was this coming from a similar device? Was this coming
from a similar network range.
And I think that's where all of those other signals can also be really interesting from a
security standpoint, because then you can devise an experience that meets the, you know,
the friction that you want at that moment based on how risky that interaction might be.
Right.
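The cross-touchpoint consistency idea Benjamin describes can be sketched in code. This is purely illustrative and not Persona's API: the event fields, flag names, and comparison logic are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class VerificationEvent:
    """One verification touchpoint: the photo check plus passive signals."""
    user: str
    device_id: str
    network: str        # e.g. a coarse network range or ASN
    passed_selfie: bool

def consistency_flags(history: list[VerificationEvent],
                      current: VerificationEvent) -> list[str]:
    """Compare the current check against the user's prior checks and
    return risk flags; an empty list means everything looks consistent."""
    flags = []
    if not current.passed_selfie:
        flags.append("selfie-mismatch")
    if history and current.device_id not in {e.device_id for e in history}:
        flags.append("new-device")
    if history and current.network not in {e.network for e in history}:
        flags.append("new-network")
    return flags
```

The design point is that the selfie is only one signal: a passing photo from a never-before-seen device on a new network can still raise the friction for that interaction.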
So, I mean, you know, not to, I guess not to pour further shade on these poor folks,
but imagine if Stryker had Persona sitting in front of their Intune instance, and right before
that person had clicked the button
saying, yeah, go wipe those 200,000 devices and 12 petabytes, if only something had actually
made sure that person is who they say they are. And so you guys can actually integrate into those
points in time. And as you say, it's not just about day one new employee. It's not just about
that initial establishment of identity. You can actually integrate this into really critical
step-ups in authentication, but also just, I think I heard you say that, like, sometimes
maybe even just a periodic thing. Like, it might be just good to make sure in three months time,
I'm still who I say I am. Because for all I know, I was
a highly paid actor that showed up for the first three months and then collected my paycheck.
And then my farm of other folks doing the work showed up from that point on.
Well, and I think, yes, that's part of our philosophy is being able to tie in not just at these, you know,
moments of onboarding or when someone's being interviewed.
I really think the value is when that person as an employee is interacting with any of your critical systems.
But we also work with enterprises that they have a once a year, you know, you're going to go through compliance training.
So at that same time, they want to send out a verification, or maybe even gate compliance training, kind of combining them together, so that they know Benjamin went through this.
And also they have a recency record of that human.
And I think that becomes really interesting.
Yeah.
But, you know, along those veins, why don't you tell me about some of the cool things that customers are doing with this?
Like what does great look like beyond some of those examples that we've just talked about?
Yeah, I mean, I think the challenge of describing great is that a lot of this space is so new that there's a wide range of what we see and how we've helped customers.
I think the most exciting ones are where they're starting to leverage identity as a part of their actual authentication processes.
They're not just using it as a one-time check.
But I want to go back: those one-time checks do become really critical and really powerful,
because suddenly you're taking something where we talk with customers,
you know, in order to reset someone's password,
they're going to get me, they're going to get my manager,
and they're going to get, you know, IT all in a Zoom call.
And then my manager is going to assert that, yeah, that looks like and sounds like Benjamin.
And I think I have this personal fear that the tools that bad actors have
are getting so good that maybe even after this podcast,
they're going to be able to create a likeness of me that could fool,
could fool a lot of folks.
And so I think that's where
even adding a simple check
right before a help desk performs an action
can become really, I think,
impactful and valuable for a company.
But again, going back to the, you know,
what does great look like?
I think it's employing identity
not as an additional burden,
but can you find a way to weave it into your everyday
business processes?
So maybe that's in hiring,
maybe that's in onboarding, maybe that's, you know, not meant to make compliance training harder.
It's just, take a quick selfie, show it's still you, and keep on going, right?
Well, that's the bit I wanted to delve into because I, you know, I think it's important to say,
I think there's different levels of like validation here, right?
We're not talking about before I do that compliance training that you're going to make me go to
the closet and dig out my passport and go find my birth certificate and whatever else.
There is, there's different forms of validation that you can do at different points in time.
So am I right that there is like,
a really light touch way that you can do that
revalidation at that point in time?
How does that work?
Absolutely.
And so I think when we think about a suite of products,
we start with being able to collect just device signals
so we can run a background JavaScript that helps us
understand, is this a known device?
Is this something that we've seen before?
Tells us something about the network.
So I might not see anything at all.
We could.
Exactly.
Yeah.
So there is that kind of from the lowest friction to maybe there is an active
collection where we're going to say we need you to take a selfie or take a picture of your government
ID. And I think depending on the riskiness of what a company is trying to protect, we can do,
we'll call it a high friction flow, a high assurance flow where we might be asking for,
you know, a stronger form of verification, maybe using a digital ID or reading the NFC up a passport.
We don't want to do that for everyday use cases because A, most people don't carry their passport all
the time. And B, it's just, it does take a little bit more work.
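The low-friction-to-high-assurance spectrum Benjamin walks through can be sketched as a simple decision function. This is a hedged illustration, not Persona's product logic: the tier names, risk levels, and mapping below are invented for the example.

```python
def pick_verification_step(action_risk: str, device_known: bool) -> str:
    """Map an action's riskiness plus passive device signals to a check.

    Low-risk actions on a known device sail through on background
    signals alone; the riskiest flows demand a high-assurance check
    like reading the NFC chip of a passport.
    """
    if action_risk == "low" and device_known:
        return "passive-device-signals"    # no user-visible friction
    if action_risk == "low":
        return "selfie-reverification"     # quick liveness re-check
    if action_risk == "medium":
        return "government-id-plus-selfie"
    return "nfc-passport-read"             # high-assurance flow
```

The design choice this models is the one Benjamin describes: friction should scale with what's being protected, so everyday actions stay invisible while password resets or privileged access pay the higher verification cost.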
Interesting that you mentioned there that you can do this, but it's, I don't, is it so much
that like this is baked in sort of canned ways that persona operates or is this sort of a modular
thing that actually it's the customer that chooses, you know, for this workflow, I want this,
this, this, and this. Like, who's got the control here over how they model their
flows?
So I will say, like, we do this.
Like, this is not just a, you know, a capability that we can do.
We have this deployed for many of our customers.
And what we've built is this platform that allows, it's not just a government ID
and selfie.
It's a platform where a customer can say, we want to accept documents X, Y, and Z,
from these geographies.
And we want to perform these types of verification checks.
So it's very customizable for workforce settings.
There's this interesting tension that I think about in cybersecurity, you have usability versus security.
When we talk about workforce solutions, we have the complexity of we can infinitely customize our platform,
but many of our partners, many of our customers, like, they might not have gone deep into identity.
And so we come to them with, here's our baseline, here are our recommendations, here are the checks that we want to perform.
and from that you can either increase the checks or you can change them based on
either your workflow or your tolerances.
Oh, that's cool.
So you've got basically a catalog of this is what you should be doing, but ultimately it's
up to the customer to then if they want a fine tune out they can.
So they're not starting off with, like, this gigantic toolbox and having to assemble everything
themselves; it's like some of these ready-made, good-to-go sort of options.
Yeah.
And I think we've learned that, you know, going back to that KYC example, we've been in this business for
years. And I think one of the hypotheses that started Persona is KYC flows vary from institution
to institution. No two verifications necessarily are the same because every business has different
requirements. And we see that play out, especially in the workforce setting, because you have
different user segmentations or user populations, full-time employees, contract employees,
vendors. And you also have different geographies. So we're really, really aware of the different
compliance and privacy needs that you might have working in Europe versus working in other
locations.
Right.
And this is probably a good opportunity to take a little bit of a detour, I guess,
into the other end of the spectrum where this is useful, which is in candidates and
candidate management and validation.
Now, you know, we were talking before we hit record that, you know, I've seen this
firsthand, right?
I've worked in startups where people are calling me for a sort of a back channel reference
check on someone that I've never worked with, or never heard of.
There's the LinkedIn page and they claim they've worked with me.
But the particular angle I want to explore here with you on this is, this is where it feels like there's a bit of a disparity between what's going to be readily available to an enterprise because they've got the budget, versus a lot of the actual threat and attack surface exists in the small and medium enterprise and even startups, right?
Because by and large, they're still the ones that are really openly adopting remote and hybrid.
you know, they're the ones that are open to hiring lots of different geographies around the world.
Those are the kind of things that an enterprise, maybe I'm being unfair, but they tend to have a little bit of a limited appetite for, which gives them sort of those built-in safeguards.
But talk me through, you know, what's persona's role here in that sort of candidate validation before you've even made the hiring decision or maybe when you're doing the loop?
And, you know, is that a fair sort of thing to say that this is more of a small to medium enterprise startup scale-up sort of problem?
or, you know, how does that sit with you?
Well, let me start with what can we do or what are we deploying for customers in the
candidate space?
And we have a philosophy, again, going back to this idea that we want to help build identity
solutions for the entire lifecycle.
So we recommend to our customers at some point in your hiring process, probably between
the recruiter and hiring manager stages, you probably want to build confidence that you're
actually talking with a real human.
And so we start with a government ID and selfie verification, just a very simple link that they can send
to the candidate's personal email or texted to them. And then you start that record of that identity
profile. And as that individual comes back maybe for a panel interview or an onsite, you could
choose to do selfie reverifications just a quick, simple, like is this the same human? And then when you get
to the point of an offer or wanting to run background checks, we can either help with background
checks through a partnership we have. And then say you offer someone the job, they accept; on day
one, now we're granting them access to the actual tools and services you have internally.
So that's where we bridge the gap from being candidate Benjamin to employee Benjamin,
and we can simply take that same profile in our system, maintain that record of verifications.
And so we know that on day one, it's the same human that you started talking to two or three months ago.
All right. Well, Benjamin, listen, this has been a great, great chat. I understand, I think, so much better now what you guys are doing. And I want to thank you for taking the time to have a chat with me today.
Absolutely. Thanks so much. And nice to hang out and take care.
That was James Wilson interviewing Benjamin Chait from Persona there. Big thanks to both of them for that. And thanks to Persona for sponsoring this week's episode of Risky Business. And that is it for this week's show. I do hope you enjoyed it. We'll be back next week.
with more security news and analysis, but until then, I've been Patrick Gray. Thanks for listening.
