Risky Business #825 -- Palo Alto Networks blames it on the boogie
Episode Date: February 18, 2026

On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They cover:

- Palo Alto threat researchers want to attribute to China, but management says shush
- An increasing proportion of ransomware is data extortion. Is this good?
- Cambodia says it's going to dismantle scam compounds
- CISA suffers through yet another shutdown
- Google Gemini's training secrets are being systematically harvested to improve other LLMs
- Academics assess SaaS password managers' resilience against a malicious server

This episode is sponsored by SSO-firewall integration vendor Knocknoc. Chief exec Adam Pointon joins to talk about the latest in defences... which is to say Knocknoc for Solaris/SPARC and HP-UX on PA-RISC?! Okay, also that other little-known OS... Windows.

This episode is also available on Youtube.

Show notes

- Data-only extortion grows as ransomware gangs seek better profits | Cybersecurity Dive
- Arctic Wolf Threat Report 2026
- Exclusive: Palo Alto chose not to tie China to hacking campaign for fear of retaliation from Beijing, sources say
- Risky Bulletin: Cambodia promises to dismantle scam networks by April - Risky Business Media
- Age of the 'scam state': how an illicit, multibillion-dollar industry has taken root in south-east Asia | Cybercrime | The Guardian
- Critical flaw in BeyondTrust Remote Support sees early signs of exploitation | Cybersecurity Dive
- CISA Navigates DHS Shutdown With Reduced Staff - SecurityWeek
- Kimwolf Botnet Swamps Anonymity Network I2P – Krebs on Security
- BADIIS to the Bone: New Insights to a Global SEO Poisoning Campaign — Elastic Security Labs
- Over 500,000 VKontakte accounts hijacked through malicious Chrome extensions | The Record from Recorded Future News
- Password managers' promise that they can't see your vaults isn't always true - Ars Technica
- Zero Knowledge (About) Encryption: A Comparative Security Analysis of Three Cloud-based Password Managers
- Google finds state-sponsored hackers use AI at 'all stages' of attack cycle | CyberScoop
- Google: Gemini hit with 100,000+ prompts in cloning attempt
- Proofpoint acquires Acuvity to tackle the security risks of agentic AI | CyberScoop
- Cisco Redefines Security for the Agentic Era with AI Defense Expansion and AI-Aware SASE
- Sophos Acquires Arco Cyber to Bring CISO-Level, Agentic AI-Powered Expertise to Every Organization
- Dave Kennedy on X: "Regarding this, there was a couple questions on does the pacemaker continue to advertise - most BLE implantable devices go into a sleep type mode. In this case, we are lucky - it does not. We know based on law enforcement answers that she is using a more modern pacemaker with" / X
- Clash Report on X: "BIG: Dutch Defence Minister Gijs Tuinman hints that software independence is possible for F-35 jets. He literally said you can “jailbreak” an F-35. When asked if Europe can modify it without US approval: “That’s not the point… we’ll see whether the Americans will show https://t.co/f11cGvtYsO" / X
- Dutch police arrest man who refused to delete confidential files shared by mistake | The Record from Recorded Future News
Transcript
Hey everyone and welcome to Risky Business. My name's Patrick Gray. We've got a great show for you this week.
We'll be checking in with Adam Boileau and James Wilson to talk through the week's security news.
And then we'll be hearing from this week's sponsor, Knocknoc. This week's show is brought to you by Knocknoc.
And Adam Pointon, the chief executive of Knocknoc, will be along in this week's show to talk about some new stuff that they've built.
Probably the thing that most people are going to be interested in is a Windows agent for Knocknoc.
So this means if you've got some Windows boxes, either on your internal or external network,
and you don't want them to just open their ports to everyone,
you can just drop the Knocknoc agent on those boxes.
And, you know, unless you have been through an SSO challenge,
you can't even get network ports on those boxes.
They've also built some agents.
Last week, I incorrectly said for mainframes.
What I actually meant was, like, HP-UX on PA-RISC
and Solaris on SPARC.
So you know, like old school stuff, not mainframes.
I meant sort of like, you know, old school big computers.
They built some there, and that was actually funny because, yeah,
you can't build a Go agent for Solaris on SPARC.
So Adam will be along to talk about that little journey.
Very interesting stuff.
But before all of that, of course, it is time
for a check of the week's news headlines with Adam and James.
And guys, first up, we got a report here from Arctic Wolf that says data-only extortion is growing as ransomware gangs seek
better profits. That's how Cybersecurity Dive have written it up. I had a quick look at the report.
It isn't as good news as you would expect because ransomware is still like number one
caseload, you know, crime type that they're dealing with. However, it does seem that the data
extortion stuff is just really growing in popularity and maybe that'll cannibalize some of the more
disruptive ransomware.

Yeah, it's an interesting distinction between the two because, you know, the upfront impact of encrypting ransomware, like in terms of availability of services, is very immediate, whereas with the data theft one, the costs are much longer tail, right? They're spread out across the user base. And in many cases, you know, the companies themselves probably are not going to feel the costs of those. Like, it's their users, for people whose data gets, you know, stolen and presumably at some point in the future leaked, because even if you pay, right, the data is still floating around out there. And the case of Vastaamo in Finland, I guess, is the extreme end of data theft. But, you know, there are impacts that are ultimately, you know, borne by the rest of us, like tragedy of the commons style. So, like, you know, it's a change, but in a way I kind of like the immediacy of, you know, availability ransomware as opposed to data theft ransomware. But, you know, maybe that's also just because I like big splashy things going wrong, you know.

Well, I mean, I just think it's a much better look. If you had to pick one to live with as a society, which crime type would you want? You would certainly want the data extortion stuff as opposed to the stuff that makes hospital computers not work. So, you know, just an interesting report there, and
we've linked through to it in this week's show notes. Now, a big story from Reuters this week. I'll read you the headline. It's by Raphael Satter and A.J. Vicens. We've got the headline:
Palo Alto chose not to tie China to hacking campaign for fear of retaliation from Beijing, sources say.
So the story is there was a threat report.
I think we actually covered this one like last week or the week before.
There was a threat report about some, you know, Asian threat actor
doing a bunch of stuff that really looks quite Chinese.
And apparently there was no attribution in the report
because the first draft had it in and then a bunch of executives came down and said,
no, we can't have that.
This will put our staff in China at risk and, you know, put the company at risk, and, you know, we're just not going to do it. So they toned it down and released it.

I'm not sure I can really get angry with Palo Alto Networks over this. I mean, if you're at the point where you've already got a presence in China, you know, you do kind of have to think about this stuff. I think really the organizations that need to be doing these kinds of attributions is probably governments. I also think, when you're dropping a report that doesn't name China that
everyone can kind of figure out you're talking about China. Does that really matter?
Like, is it the role of these of these private sector vendors to do these attributions?
So Adam, I want to start with you on that, but I also want you to weigh in on this one, James.
So, yeah, Adam, what do you think here?
I mean, I'm kind of with you on this. The, I mean, the choice about how you deal with
adversarial nations is one you kind of have to make pretty early on.
I think like the point of comparison here would be Google, you know, as Google versus China
from when the Aurora hacks happened back in the 2000s,
you know, and they made a conscious choice that, you know,
they're just not going to do a business there.
And that is, you know, what you have to do.
And if you're someone like Palo Alto or Cisco or, you know,
any of these other big tech giants that, you know,
manufacture in China, have business presence,
have staff and, you know, everything there,
like, at that point, it is kind of too late.
And I think, you know, I was thinking about, you know,
we saw some cases where tech firms that had offices in Russia,
and the Russian government was applying leverage to the local
staff there.
You know, you're getting into a position where it's corporate entity versus sovereign power, and corporates can't really win that, other than maybe capitalism generally. So I do have some sympathy for them, but at the same time it does just feel kind of weaselly, and, you know, we're all reading between the lines anyway.
And I guess the other point is, like, I feel like Palo Alto has already gotten consequences from the Chinese government. They were on the list of Western tech vendors that China was discouraging its private sector from using.

Well, by discouraging, telling them not to use it. Yeah. So look, they were already kind of in trouble there.
I just, you know, I don't know that it feels weaselly. And I think if you're a threat researcher and you've leaked this to the press... I mean, I think if I'm a Palo Alto Networks executive, I'm thinking, why are we doing this threat research in the first place, if all it's getting us is headlines in Reuters calling us a bunch of cowards?
James, let's bring you into this.
What do you think here?
Yeah, look, I tend to agree.
I was at Apple at the time when we had to create a completely separate version of the iCloud infrastructure and hand it over to a Chinese state-aligned vendor to run alongside us.
And there was a lot of internal discussion about how we didn't feel that that was the right thing to do, whether it was weaseling out.
But I think it really brings into focus the realities of this is a private entity, it's a business.
They're going to prioritize shareholders, profits,
etc. And that's the nature of business.
I also don't think that the lack of attribution makes a material impact on the value of the research they did. I'm not going to decide whether I act upon it or not based on who it's attributed to.

Yeah, I mean, that's the thing, right? It is the old line about
the responsibility being to shareholders and stuff. All right, well, I actually think we're all
on the same page with that one. Let's move on. We've got a write-up here from our very own Catalin Cimpanu for the Risky Bulletin newsletter. The Cambodian government, under international pressure, is promising to crack down and dismantle cyber scam networks operating within its borders by April this year.
Look, I mean, Catalin's done a really nice write-up here about, you know, just the situation and what sort of things we can expect to see.
I'm skeptical, right?
I'm actually skeptical as to whether or not they're going to be able to do anything here.
And the reason I'm skeptical is borne out in a Guardian piece from late last year, which I've also included in this week's show notes, written by Tess McClure. And really it looks at the idea of, like, these,
you know, scam states, right? Like just as we had narco states in the in the 80s in Latin America,
due to the cocaine trade, we've kind of got a similar situation here with the with the scam
centers. And they've got a great chart here which shows that, you know, in Myanmar, you know,
the value of these scams is 23% of GDP. In Cambodia, it's 30.2% of GDP. And in Laos,
it's 68.5% of GDP.
So as much as the Cambodian government can say,
yeah, we're going to completely eliminate this,
can you think of any government
that is willingly going to destroy economic activity
within its borders that contributes 30% of GDP?
Like, if they're actually successful in doing this,
it's going to hurt the whole economy.
Adam, what do you think here?
I mean, unfortunately, that is the economic reality of it,
and it's not very palatable,
but I think that is a very realpolitik kind of way of thinking about it.
Like, this stuff is, you know, kind of too big to fail.
I mean, some of the numbers, like in terms of people involved,
like, I think the Cambodians said that, what, 110,000 people have left the country that were working in the, you know, in the scam compounds.
Do we think all of those people are going to go home?
You know, are they just going to move across to Myanmar, move across to Laos? And as you say, like, that whole region has this kind of problem, and there is a bit of fluidity about it. You know, if the scam operation worked in Cambodia, it's probably going to work in Laos or Myanmar, and, you know, they've got a model for doing it. You know, even if Cambodia is entirely above board, right, even if they were like, we actually are really going to do this, we're going to do a good job of it and take it seriously, you know, it's not going to change the existence of a very, very large pile of cash and a pool of people that are now experienced at operationalising it.

And as you say, that's in regions where, you know, there isn't a lot of... you know, 68% of Laos's GDP, like, that's massive, right? And it's going to be very hard to walk away from that, you know, across that whole region.
Well, I mean, and this is the thing, none of the incentives line up towards making this go away, right, at any level in the government. So you've got the people lower down who get to benefit from being corrupt, by being paid to look the other way, or being paid not to patrol down that street, or being paid not to take reports from people who escape from the compound, or whatever it is, you're getting your protection money. And then, like, at a macro level, if you're the president of a country like this, oh my god, it's bringing in so much money. A lot of it, I'm guessing, is going to be spent in the region, then again a lot of it's going to be repatriated back towards where the scam ring owners operate, and that's China, right? So I don't know, maybe the economic hit isn't going to be as severe as all that. I do kind of feel like the ultimate solution to this problem is going to have to involve foreign aid. And with the United States not doing that sort of thing at the moment, yeah, I'm not particularly hopeful. Where did you land on this one, James?

Yeah, to me, this is straight
up supply and demand. It doesn't do anything to address either side of that equation. You know,
the amount of billions and billions of dollars involved here, there's going to be massive demand
for that. Someone else will come up and want to set up the next industry around this. But it also doesn't
address the supply problem. The fact that there are so many people that fall for these scams,
that's a societal problem that, you know, as you said, there's an aid aspect to this.
There is a social problem here that needs to be addressed, and just shutting down the venue
where the scam compounds can run, I don't think that helps at all. It wouldn't surprise me if it actually
puts the Cambodian leadership at risk of a coup because they'll end up in a downturn of the
economy and some political upstart will come along and say, you know, we're going to make Cambodia
great again and bring back the money.
Well, I don't even know if it's going to be that overt,
but when you do have, like,
illicit industries controlling that much money in your country,
it can create those sort of risks.
So let's see.
All right.
What else do we have here?
The BeyondTrust bug that, Adam, you flagged, what, a week or two ago, as, like, you were like, oh, this one is absolutely going to get exploited. Well, turns out...

It's being exploited. Shock and horror. That's exactly what we expected. It's like, what, CVSS 9-point-something, so, you know, we were expecting it,
and sure enough, here it is.
We've seen some people tracking campaigns to attack it.
I think, was it GreyNoise who were spotting some of it? And they said one of the groups they've seen using it was the group that went against the US Treasury Department.
Silk Typhoon, yeah.
Silk Typhoon, yeah.
So there are, you know, bugs like this in big demand by attackers, and of course they're going to be using it. And, you know, landing in the privileged access component, like in the privileged access manager that has credentials and has access, like, it's such a sweet place to shell that, you know, just why wouldn't you?

I mean, shelling the PAM gear, the PAM gear that is on the internet for some reason. I mean, I don't know.

Yeah, I mean, I've done that a bunch of times in my professional career, you know, back in the Insomnia times, and, like, landing in the middle of the privileged remote access, it's just, it's a thing of beauty.
Yeah, sure it is.
But, you know, it's okay, because I'm sure CISA is on this.
Oh, oh, wait, I'm being told we have some news.

Yeah, so CISA is kind of caught up in all of this stuff around ICE, you know, the masked paramilitary police in the United States. Yeah, so there's some sort of funding halt on DHS at the moment, and that is spilling over onto CISA. Apparently something like, you know, 888, very auspicious, by the way, 888 of its current workforce of 2,341 staff are at their desks without pay. So that's a wonderful situation for them. But, you know, they can apparently bring back others if there's, you know, relevance to national security or whatever. Anyway, CISA's century of humiliation is continuing, is basically the vibe here, Adam.
Yeah, I mean, it just, I cannot imagine what morale is like inside that agency at the moment. I mean, you know, they got shredded by staff cuts, you know, and they've been
kicked around for their election security work, which was great work, but of course, now they
got punished for it. And then this, you know, the, well, there was the previous shutdown of the
whole government and now this one against DHS. And of course, the irony is that the actual target
of this, the ICE part of DHS, isn't even funded in this way. And so this shutdown doesn't
affect them. It just affects, you know, things like the TSA and so on. So, you know,
them being a partisan political football being held to ransom, like, you know, I'm not surprised that, you know, people are quitting. And I think over in the Risky Bulletin today, there's a story in there about, you know, some high-up in CISA announcing his resignation and departure.

Yeah, it was the head of, like, threat hunting, wasn't it, in an all-hands, and said, yeah, I'm out, you know.

So, I mean, there's been so many people leave. Like, it just, as I said, it's CISA's century of humiliation. Yeah, no, it really is. And, you know, CISA does such important
work. And yet, you know, here they are sitting at the desk, not getting paid, having to do it
anyway. And I, you know, I guess my hat's off to the people who are still there at their desks,
but, you know, boy, but what a, just what a mess, you know, what a mess. I can just imagine the
recruiters, like, the amount of inbound the average CISA person is dealing with from recruiters at the moment would be insane, you know. They can smell, they can smell death.
They get in there quick. All right. Next story here is one from Brian Krebs. He's been writing
a lot about this Kimwolf botnet. This is, like, a residential proxy IoT botnet thing. But, like, it looks like whoever's running it at the moment just keeps making really dumb mistakes and, like, accidentally losing, like, hundreds of thousands of endpoints in this botnet. The latest thing they've done that was kind of dumb is they're thinking, I know, let's introduce a fallback, like, decentralized C2 channel. And instead of using Tor, we're going to be, like, anonymity network hipsters and we're going to use I2P, which I'm guessing a lot of people listening wouldn't have even heard of. I have, due to a whole bunch of, like, reasons I won't get into, but I2P is sort of, like, it was like a Tor competitor, but it never really took off. So of course, when they joined, like, 700,000 boxes to this, like, tiny little anonymity network, it started falling over. So that's the latest here. And what's crazy about this is, you'd think if they had thought this through, they would have been able to task enough of the bots under their control to actually participate in the I2P network that it would have scaled the network, and it would have worked out okay. But they just didn't do that. They're just like, lol, we'll just throw them all on as clients, as identities, and here we all are.

Yeah, I'm not sure what they were really thinking.
It looked like they were intending to operate the nodes that they were joining as routers that
would also add extra capacity to the network, and maybe they screwed up the configuration, or maybe there's some reason why it wasn't working. It seems like they added all these nodes, but then those nodes weren't forwarding traffic correctly, and that's kind of what made the I2P network itself grind to a halt.
But it's overall just, you know, it's somewhat comedic, I guess.
And, you know, I feel like, you know, maybe they could have done this right
and it might have worked out for them.
But on the other hand, you can't put 700,000 nodes onto a 50,000-node network
and still have anonymity, right?
That's not really how it works.
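The arithmetic behind that point is pretty stark. A quick back-of-envelope sketch, using the rough figures from the discussion (about 700,000 bots joining a network of roughly 50,000 nodes):

```python
# Back-of-envelope anonymity arithmetic for the Kimwolf/I2P story.
# Figures are the approximate ones from the discussion: ~700,000
# bot nodes joining an anonymity network of ~50,000 nodes.
bots = 700_000
legit = 50_000
total = bots + legit

# Probability that a randomly chosen node on the network is a bot.
p_bot = bots / total
print(f"{p_bot:.1%} of nodes are bots")    # 93.3%

# The botnet doesn't hide in the crowd; it outnumbers the
# original crowd fourteen to one.
print(f"outnumbered {bots / legit:.0f}:1")  # 14:1
```

At that ratio there is no crowd left to hide in: almost every node an observer samples belongs to the botnet, quite apart from the capacity collapse.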
You have to hide in the crowd, not be the crowd.

Yeah, I mean, what I'm thinking too is that if you're just one of those
I2P hipsters who's been using it forever and like enjoying your little corner of cryptographically
anonymous, you know, computing la-di-da-di-da, and then someone comes along and basically
makes it toxic. And it's like now you get this massive benefit of killing this botnet if you
kill I2P. It's like, I'd be getting a little bit nervous if I was there. James, what did you
think of this? I mean, as someone with a background in engineering and running teams, this must
drive you nuts seeing them rack up these own goals. Yeah, I think I'd best put it as: this is a team that
has some coaching opportunities ahead of them. It doesn't seem to be a highly skilled operation.
It did make me wonder, though, you know, was the choice of I2P not just for cryptographic hipster reasons, but just due to the attention that Tor gets these days? Like, it does feel like law enforcement is generally pretty well skilled at finding and taking down things off Tor, despite the anonymity that it should provide. But I don't know, when you weigh that up against just the dumb stuff that they do, I'm hesitant to read too much strategy into this.
Yeah, so James is going to PIP you guys. You're going to go on James's PIP plan.
It's a very short plan. Now, we've got a write-up here from Elastic. I insisted this one go in just because, you know, this is such an old-school type of criming that I just love it. It's the BADIIS kind of global SEO poisoning IIS botnet. Someone's going around owning IIS servers.
And then what they do is they're loading them up with this, like, BADIIS module that publishes a whole bunch of, like, SEO spam designed to do, you know, malicious SEO optimization for casinos and pornography and all sorts of stuff.
And it's just such like old school black hattery that I just think it's, I just think it's awesome.
It also explains why sometimes you're Googling around for some product or something, and then you see, like, Google will spit out a link to some university in New Zealand or something that has a page on it with this product. It's like, these sort of malicious SEO optimization networks, this is how they do it. They take over web servers and they start using them to publish content. The better the domain, the better the reputation, all good. And I just really enjoyed this write-up. What did you take away from this, Adam?

I mean, it is definitely retro. Like, this is going to the thrift store and getting your stories from, you know,
10 years ago, I guess, we would have been covering this. Maybe 15, I don't know. It seems pretty retro, and in that respect, it's kind of funny in a way.

The, I mean, the way they're getting into these boxes is exactly what you expect. It's just normal, you know, either web apps or credential stuffing or credentials from infostealers, you know, whatever else it is they're getting into the IIS with.
They've got 1800 IIS boxes and like, that's a pretty decent haul, I reckon.
Yeah, yeah, that's, you know, that's not too bad.
I am surprised that's enough to SEO your way up these days, and maybe it's not. Maybe they're doing SEO because, you know, no one's bothered to update that particular module for 10 years, and, you know, the SEO is that old. But yeah, overall it's just kind of funny, and, you know, it's kind of heartwarming in a way that someone's still out there doing this in this modern, like, AI agentic world. Here they are putting, you know, SEO spam on people's hacked IIS like it's 2007. So, you know, well, well, well.

Speaking of old school, let's talk about the next story, which really dovetails with this one, which is 500,000 VKontakte, you know, VK accounts. VK, of course, being Russia's, like, Facebook equivalent. Yeah, about half a million accounts have been compromised by malicious Chrome extensions,
which is funny, right?
Because we used to do this sort of thing just by going after the browser itself.
And now it's like you just go after the extensions.
But it's still doing the same stuff that people would have done with the browser like 10, 15 years ago,
which is, like, like-harvesting and whatever, right?
So these people have built this botnet that's doing, like, like-and-subscribe harvesting on VK.
I mean, James, you must have enjoyed this one.
I just, I got the feeling that you would have enjoyed this one.
Yeah, yeah, I love this one.
You know, so often when you're going through the malicious extensions, it's the same old thing. It's like, this extension looks like that extension, and if you're silly enough to install it, you get owned. But these extensions were doing kind of exactly what they said on the tin, just a little bit more. And there were two levels to it: like, you could use this extension to improve your profile, et cetera, and then your profile got owned. But really, the chef's-kiss bit for this was, you could put in your payment details and get some extra merch as well, and then they snapped up your payment details too. It was just, it was beautiful.
Yeah, yeah, it is.
I mean, I just think it's funny that like the vector here has changed
as in now it's extensions,
but they're doing the same stuff as so long ago, right?
Yeah, between this one and the one we had just before, it's like, as someone who's watching all this AI stuff unfold, and it feels like everything's changing, everything's moving fast, it's just like sherbet for the soul to see that the same old attacks are doing the same old stuff, and it's still porn and adult toys that end up on the SEO-hacked sites and the extensions. It's a relief. We're still the same.

Now we've got some dense research to talk about, which is actually really interesting.
It brings up a bunch of interesting sort of concepts, I guess. So this is work out of ETH Zurich and also the University of Svizzera, well, God, don't yell at me, Swiss people, the Università della Svizzera italiana, also in Switzerland.
And yeah, so this is research looking at password managers and the claims that password managers cannot see what's inside your password vaults. And this research has said, well, kind of, it kind of depends. It kind of depends on a few options that people have selected, kind of depends on the product, and in certain configurations the server can extract certain bits.
I mean, what I found interesting about this research is, first of all, how comprehensive it was. They went through and they did an awful lot. And second of all, our initial reaction to the write-up of this, which was by Dan Goodin over at Ars, which was, well, you know, if someone compromises your password manager company, if they supply-chain attack the password manager, well, it's all over anyway. Or if someone's malicious on the inside. Really, though, once you look at this research, you realize that a bunch of it would actually be useful to attackers who may only compromise parts of the password manager infrastructure, as opposed to the whole enterprise going rogue. So in that case, like, after what we saw with, was it 1Password or LastPass, you know, the one that got owned, where the theory is a bunch of sort of crypto seed phrases and stuff wound up getting cracked over the next year and whatnot. You know, we have seen partial compromises of these sorts of companies in the past. So I do think this is very interesting research. I don't think it's a dead-end thought experiment to go through this
stuff. But first of all, James, I know you read the paper just today. I don't think you probably had a chance to absorb it completely, but what was your take on the research here? Can you just describe to us what the research was?

Yeah, the research is great.
It really goes into the internals of how these key vaults work and the crypto around them. And the central sort of, you know, hypothesis they're putting forward here is that, yes, when the password managers claim they can't see your passwords, it's not entirely true. But also, it's not like the implementation is bad and flawed and actually they're lying to you and they can get access to all of your passwords.
Fundamentally, these are relatively secure,
but there's layers and features around them
that quickly expose you to problems.
And the first one they highlighted was around account recovery, like, straight away.
If there is a means by which you can recover your account
if you've lost that master password,
that is absolutely an exploit vector as well.
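It's worth sketching why recovery cuts against the "we can't see your vault" claim. This is a toy model, not any vendor's actual design: the vault key is wrapped once under a key derived from the master password, and, when recovery is enabled, wrapped a second time under recovery-key material the service side can reach, and that second path never needs the master password at all.

```python
# Toy model of why account recovery weakens "zero knowledge"
# vaults. NOT any real product's design: the XOR "wrap" stands in
# for a proper key-wrap scheme such as AES-KW or AES-GCM.
import hashlib
import os
import secrets

def wrap(key: bytes, kek: bytes) -> bytes:
    # Toy key-wrap: XOR the vault key with the wrapping key.
    return bytes(a ^ b for a, b in zip(key, kek))

unwrap = wrap  # XOR is its own inverse

vault_key = secrets.token_bytes(32)   # encrypts the actual vault

# Path 1: master password -> slow KDF -> wrapping key.
# The server stores only the wrapped blob and the salt.
salt = os.urandom(16)
mp_kek = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 600_000)
wrapped_by_password = wrap(vault_key, mp_kek)

# Path 2: account recovery -- the SAME vault key wrapped again
# under recovery-key material that the service can reach.
recovery_kek = secrets.token_bytes(32)
wrapped_for_recovery = wrap(vault_key, recovery_kek)

# Legitimate unlock needs the master password...
assert unwrap(wrapped_by_password, mp_kek) == vault_key
# ...but the recovery path does not. Whoever controls the
# recovery key material recovers the vault key directly.
assert unwrap(wrapped_for_recovery, recovery_kek) == vault_key
print("recovery path bypasses the master password entirely")
```

The zero-knowledge claim then rests entirely on how that recovery material is generated and held, which is exactly the sort of option-dependent surface the paper probes.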
The reason I'm asking you about this one first is you have some experience with this sort of stuff, because ultimately you were working on these sorts of trade-offs in your Apple days.
Yeah, absolutely. So when I was there, we built essentially 2FA for the masses, and then also added onto that the advanced iCloud data protection, which was just lovingly known as "key drop" internally, because that's essentially what it is, right?
hey, customer, over to you, the keys are only on your devices and they're syncing
between the devices. We would have loved to have created that in a way where we could turn
that on for everyone. And the technology is there, and it's entirely possible. It's not a
cryptography or an engineering problem. That's all solved. It's actually just a, you know,
what is the right trade-off here? If we enabled this by default for everyone, that means you can ring Apple's help desk as much as you want, and they just can't help; they can't recover your account. And the sense was that when you're talking about hundreds of millions or even billions of accounts, that's the wrong trade-off. You've got to make it possible for data to be recovered, despite the technology making it possible to make it essentially completely unrecoverable if that main key is lost.
Yeah, and I'm guessing that's the sort of trade-off that these password storage companies bumped into.
Adam, what did you make of all of this?
I mean, I found it interesting because, at the big-picture, macro point of view, you just have to trust the people who write your software, right? The supply chain aspect of this: if you're using Bitwarden or 1Password, ultimately you have to trust them to write good software, and to write software that operates in your interest as a user. And, you know, the actual mechanics of how you leverage that, from a macro point of view, don't really matter if an attacker can compromise the whole company. But then, like the point you made, these companies are big, there's many moving parts and many people, and you want to minimize how much of those companies you actually have to trust. And these architectures let us kind of compartmentalize down, you know, who inside the company really matters, who do we have to actually trust?
And that's, you know, as we saw in some of the early cloud services, where all staff had access to everybody's data through whatever back-end system they had.
And that, of course, ends up getting abused, because people are people.
Help desk will go rummage, or whatever else.
And so we started to compartmentalize that down a bit.
And I think, you know, this research is really useful because it validates that choice that we've made to start to restrict, you know, the blast radius of any individual or, you know, part of an organization getting compromised.
And so, you know, on the one hand, yes, you do have to trust who writes your software, but it is these days just a bit more nuanced than, you know, the sort of cypherpunk idealist version that we would have had in the 90s.
Yeah, I mean, it's not like your build servers are attached to your customer storage, right?
So, like, in the case of, and it was LastPass, I just checked, so we don't get sued by 1Password for putting them in that same sentence.
But, you know, in the LastPass thing, I mean, they stored, you know, customer backups or whatever.
But that doesn't mean that the attackers would have had commit authority in their repos to be able to push a poisoned update to do X, Y, Z.
Right.
So I think, as much as it's the case that you do have to trust your vendor, I also think that, you know, this sort of research is actually quite valuable,
because it wasn't just stuff around account recovery.
Like they did find some pretty, you know, some somewhat exotic stuff there.
Yeah, and we had some engagements back in the Insomnia days where we got retained by, you know, a cloud company. And one of the goals was: can you go from network edge to our build system? Like, can you get to the point where you could compromise the stuff that we ship out to end users? You know, what does that attack path look like?
And having that explicitly be a goal, right, to validate those internal controls, that was a really interesting exercise.
And obviously these were customers that were mature enough
to have already implemented some of the segregation
and then want to go check, like,
does the Active Directory reality of this match?
And so, you know, those are, you know,
whether you're doing it academically, you know,
looking at the software and crypto protocols,
or you're doing it, you know,
looking at infrastructure built internally,
you know, validating that the segregation works is, you know,
super useful work.
Now, we've got a story here from CyberScoop, which is looking at a report from Google. It's CyberScoop's write-up of a report from Google, and it's been written by Derek B. Johnson. And really, it's about how Google has found that AI is going to be useful across every sort of step of offensive operations. This story contains a bunch of comments from John Hultquist over at Gugient.
He's Gugient, right?
Yeah, I think he is.
Yeah, that's right, of course.
He is Gugient.
And look, I don't know.
Like, he says a bunch of interesting stuff.
he says a couple of things I disagree with.
So like in this para they say,
Hultquist said that some state groups,
particularly those focused on espionage,
may not find the speed and scale advantages of agentic AI
useful if it results in louder,
more detectable operations.
That's a very big if.
The if in that sentence is doing a lot of work.
But, you know, I just think it is interesting
that Google is seeing this stuff
already pop up in caseloads where it's like,
well, they used AI for this bit,
they used AI for that bit.
I think the vibe I'm getting
here is that they're still figuring out exactly, you know, attackers are still figuring out exactly
how to use this. I do have a feeling they're going to get there quicker than most people think.
You know, James, let's bring you in on this. I mean, you know, we were talking about OpenClaw.
What was that? A couple of weeks ago. I actually saw some Twitter thread from a guy who
plugged OpenClaw into a bunch of pen testing tools and told it to do stuff and was actually
pretty blown away by what it was able to do. And that's just someone's five-minute dumb project.
you spend your time, a lot of your time with your head in AI.
How do you think this is going to play out?
Yeah, I think the article is interesting insofar as, if you think of the framing a couple of weeks ago, I think on the show you guys talked about the fact that there were early signals, and Google or Anthropic was, like, blocking these things. And at the time it kind of felt like, well, clearly they don't care about getting snapped.
So they must be just experimenting with this.
And it's like a, we don't mind so much if this is out in the open.
I think that has rapidly progressed, though, to being actually,
we are using this to figure out a whole bunch of stuff.
As it says in the article, it's not like there's like a super defined,
really refined way that they're using AI,
but the fact that it's getting so much experimentation, I think, is telling.
It's not just little skunk works anymore.
I think this is now probably a bit more mainstream
in terms of the priority that's being put on it.
This sort of dovetails into a couple of other stories
that have come up this week around the fact that I think
the most interesting thing about this is the fact that at the moment
Google is able to see this,
and that turns into these articles.
I think that's the bit that's going to rapidly change.
The visibility of this from a frontier model perspective,
that is probably the biggest thorn in the side of a APT
or a threat actor at the moment.
And there are rapidly increasing ways
in which they might be able to make this invisible in the future.
Yes, so, I mean, that's, yeah, it's exactly that.
When people were like, oh, but look, Anthropic's catching them in the act and they can block them and, you know, tune their models and whatever, it's like, yeah, no, they're going to be using their own models soon. Like, just look at DeepSeek.
They probably just wanted to burn Anthropic tokens
just because it's what they're used to or whatever.
Yeah, so 100% agree there.
Before we get onto the ways in which some of these models that can be used by attackers might come to be, Adam, I just wondered, you know, you're offsec.
You've spent, you know, decades in off-sec.
These days you're just working with us,
but I imagine you're still in touch with a bunch of your old, you know,
off-sec people.
Where are their heads when it comes to AI in, you know, offensive stuff?
And I imagine they're going to be split between the people who see the opportunities and the people who are completely in denial.
What's the vibe out there?
I think one of the things that I did a lot in my hacking times was, you know, writing disposable tools, right? You know, if you've got a solid toolchain that you always rely on, when that gets burnt or you can't use it for some reason, you're dead in the water.
And so, being an effective hacker, if you've got the ability to whip up the exact tool that you need in this exact environment, for this exact operating system, for whatever problem you're trying to solve, and you can build that quickly, use it, and then throw it the hell away, then that tends to be a better trade-off than investing in high-quality, long-term, maintainable tools.
Now, I'm sure the trade-off inside, you know, a big organization,
like an intelligence agency is a little bit different there,
but for, you know, smaller teams, being able to very rapidly iterate makes sense.
And the, you know, the strength of modern AI dovetails exactly with that, right?
Being able to build exactly what you need when you need it.
Things like language barriers, you know. Once again, unless you're an intelligence agency with linguists available, if you're dealing with breaking into something that isn't English-centric, then you're relying on those kinds of tools to do very rapid translation, you know, for lures for social engineering, but also just for reading technology documentation, etc., etc. Like, you know, we were using Google Translate for that long before modern LLMs came along. So I feel like that kind of work lends itself to use of AI, and, you know, people I know who are still working in the space lean on it for those reasons: you can iterate rapidly, move fast, keep ahead of what's going on and adapt to your environment. And I think that makes sense.
Yeah, I mean I just I guess the reason I picked on John's comment there though is that I think you can kind of agentically get around some of these issues, right?
So like what you're talking about, I'll build a unique tool.
Like you can pretty much tell Claude to do that for you already.
If you give it a good description of what you want.
Now, does that description need to come from a human operator or can it come from another agent?
And this is kind of what I'm thinking.
Like you can once you, you know, build the right agents, get them working together.
You can really do some pretty impressive stuff.
So I feel like, yeah, I feel like this is the year that a lot of it happens, right?
It started kicking off towards the end of last year.
And I think, yeah, we're just going to see so much.
Now, speaking of the idea that a lot of these threat actors are going to have their own sort of frontier-style models, or something approximating a frontier model, maybe with some of the guardrails removed,
which is what they really want.
James, we've got a story here from NBC News. Who wrote this one?
Yeah, it was Kevin.
Hey, Kevin.
I know Kevin's a listener, Kevin Collier.
So he's looked at another bit of research out of Google, which says that someone tried to clone Gemini, right, by giving it 100,000 prompts. And it turns out the number of prompts that you have to give a frontier model to extract a lot of its value, or, as I like to say, to extract its soul, right, its mana, its essence, is not as many as you might expect. So, this is something where you've spent a bit of time trying to understand how far this sort of thing can get you. What, you know, what did you find?
Yeah, look, I remember when DeepSeek sort of broke, I guess that was a year ago now. There was talk at the time that they'd sort of gotten a leg up from doing a lot of these distillation attacks, right?
Just endlessly prompting a model, getting the responses, and using those in training.
And at the time, I remember thinking that that just doesn't stack up for me, right?
You think about a model training taking petabytes and petabytes of the entire corpus of the internet.
I couldn't understand how possibly just flinging a whole bunch of prompts at a model could possibly result in a new model being trained.
Turns out, though, I was wrong, and it is actually really meaningful.
And so I've been doing some research into this.
When you first train a model and give it essentially access to all that data
and it goes through its training process,
you actually don't end up with something particularly useful.
It's kind of like if you sat down and watched all the YouTube videos on how to paint watercolor,
you're still going to be pretty useless when you first go to paint your first painting.
So it's this combination of what happens after that initial training run that's super valuable.
The RLHF, the reinforcement learning from human feedback, the post-training that occurs to try to add these skills and really bake reasoning in, that's the valuable thing that they're trying to extract with this distillation. And it turns out there was actually a really good paper on this that said you could take Llama, which is an open-weights model, and get it to approach 90% of performance comparable to ChatGPT with just 70,000 of these distillation prompt-responses.
And so it's definitely possible.
And I think this is sort of the tieback to the story that we were talking about before,
which is this is how an adversary goes from an open weights, open source model,
adds in these prompts, closes the gap of reasoning and the other skills that are required to get a
really high quality result, especially working in software engineering and cyber.
And if they can get the chips to do that training and the inference for themselves in their own data centers,
then we have completely lost visibility into these attacks.
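The distillation loop being described here, query a frontier model, capture the prompt/response pairs, then fine-tune an open-weights model on them, can be sketched in a few lines. In this toy sketch, `query_model` is a stub standing in for a real API call, and the record shape is just the common instruction-tuning format; none of this is from the paper James mentions:

```python
import json

def query_model(prompt: str) -> str:
    # Placeholder for a real frontier-model API call; in an actual
    # distillation run this would be an HTTP request to the target model.
    return f"[model answer to: {prompt}]"

def collect_distillation_pairs(prompts):
    """Build a supervised fine-tuning dataset from prompt/response pairs."""
    dataset = []
    for prompt in prompts:
        response = query_model(prompt)
        # One record per prompt: this is the data you'd later fine-tune
        # an open-weights model on.
        dataset.append({"prompt": prompt, "completion": response})
    return dataset

pairs = collect_distillation_pairs(["Explain TLS certificate pinning."])
print(json.dumps(pairs[0]))
```

The point of the sketch is how little machinery is involved: the value is entirely in the harvested responses, which is why tens of thousands of pairs can move an open-weights model so far.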
Yeah, I mean, I'm sort of like, I think you could get most of this stuff done with something like DeepSeek already, right?
And this is just the idea that like even if the Western models get like a long way ahead,
China's always going to be able to extract something that's nearly as good, just the lazy and cheap way, right?
And if I'm them, that's the way I'm doing it.
Adam, the part that I want to bring you in on is I've been thinking about how the large companies might stop this.
And I'm coming up completely blank, because they're operating at such a scale that you would think you just need a large number of accounts, which you could probably use AI to generate and register, and then distribute that through some sort of residential proxy IP service, and you're always going to be able to run enough queries. Although, you know, does it need to be 100,000 sequential prompts from the one place? It doesn't. Okay, so that's the thing, right? Like, you can't stop this. You know, what are your thoughts here, Adam?
You'd have to tie this, you know, to signals from the network,
like originating IP or query load over time.
You know, we've got so much experience in bypassing, you know,
kind of scraping prevention techniques.
So we're kind of into a how do you stop web scraping problem at that point.
And that was already hard enough.
And now we've got a situation where these models are the backends of so many things, right?
When Gemini is used in so many contexts and so many places,
the idea that even Google could get to a point where they could have a holistic view of how Gemini is being used, like, that seems like a lot of work as well, when it's being used in third-party products, where there's API calls coming from all over the place. Like, I mean, you know, there's no reason that you would go and do this directly to Google itself. Like, why wouldn't you do it to the chatbot of a travel agent that happens to use Gemini in the back end? Now, you know, you've got to kind of try and tie all these things together. So, like, they're in a really hard place, right? I mean, the standard solution for anything involving AI, when we want to put controls on it, is to put another AI on the front. But, like, you can only bolt so many on, and, you know, maybe the ones on the front are the ones that are being used for distillation attacks. Like, it's a crazy-ass world.
Yeah, well, actually, funny you should mention that, because another thing that you wanted to talk about this week, James, is that there's been a bunch of activity in the commercial world, with, you know, Proofpoint acquiring a company that's trying to tackle the security risks of agentic AI. Cisco's released a bunch of stuff as well. Sophos has acquired a company. But the point you wanted to discuss is that all of these things are basically just little proxy shims stuffed in front of these agents, and you're not particularly bullish on the technology.
I'm not particularly bullish on it. And if
anything, it's starting to feel like this is the new version of the pain that we feel when we see
the same old bugs in the same old places, hitting the same old enterprises. It's just, look,
don't get me wrong. There is definitely a new category of products here that will be exciting, and I'm really excited to see what startups come up with. But what I'm seeing get played out in this M&A activity is just things that are kind of like a proxy sitting in between your applications and your models, giving you visibility, making sure that if you're a Gemini house, no one's using OpenAI, just by looking at domain names.
And it's like, well, you know, I think the key message here is we have to remember that LLMs and AI,
it's not using some new protocol that's suddenly endowed with like cyber telekinesis for extracting data.
This is still just plain old HTTP.
It's structured JSON flying back and forth.
It's conversational stuff in plain text.
It's nothing special, right?
It should still be the bread-and-butter domain
of a lot of the existing security tooling.
The thing that it really ultimately puts the pressure on
is your authentication and access controls.
Because at the end of the day,
what we're seeing with these agents is
it's not that these agents are getting spun up
and acting on behalf of the user with a separate identity,
although that tends to be the preference in the industry,
they're actually just operating as the user.
And so, even with these acquisitions and putting on the AI sticker and saying, yeah, we've now got "powered by AI", or we're an agentic detection company, at the end of the day it comes down to what the human's got access to, because now a bot's got access to it, and that bot knows a whole lot more about how to exploit those things than the human that used to be the only one that could access them.
Now, I should mention that we're in the process of spinning up your podcast feed, because you're going to have your own podcast feed doing your own stuff, and we've sort of soft-launched that already. It's only available as RSS; I've got to, like, submit it to the Google, you know, podcast store and whatever, and we'll be publishing an interview, hopefully either late this week or early next week, into that feed as well. But the one thing that is published there now is just a solo podcast of you talking about your adventures with OpenClaw. And the one bit that really got me, that relates to this, is that, you know, you wanted to get OpenClaw to do something with your Twitter, but you didn't want to give it access to your browser. So it talked you through getting the session cookie for Twitter so that you could just give it to it, and then it would go hit the API and just, like, do whatever and pretend to be you. And the point is, yeah, how do you know if that is the agent or James doing it? And the answer is, you don't. And I actually recorded a fairly long conversation yesterday with Josh Devon, formerly the, you know, co-founder of Flashpoint, who's now
working in a company called Sondera, which is trying to tackle these problems.
They've built harnesses to try to control agents, right?
But some of them, there's no way to hook them, right?
So you just have to do that proxy thing.
So they're trying to get away from that, but it's not always possible.
And the reason people are using these proxies is because that's the only way to instrument them.
I think I made some joke in that interview about how, like, it took Azure 15 years to get, like,
network taps in there. And even then, they only half work.
So I don't know if there's going to be the same sort of situation here.
But for those who are interested, you can head over to risky.biz and subscribe to James's new podcast
via RSS.
It is called Risky Business Features.
Not in the iTunes store yet or any of the podcast stores, but we're working on that.
All right.
So moving on, we've got a couple of just small items to finish off with.
There's been this crazy story in the United States where an elderly lady, who's the mother of a high-profile television presenter in the United States, I think her name is Nancy Guthrie, she's like 80 years old or something, and was abducted from her home and is being held for a Bitcoin ransom. Just a crazy story. But it turns out Dave Kennedy, HackingDave on Twitter, had written some sort of Bluetooth scanner, and it got out that the cops over there were using his Bluetooth scanner to look for Bluetooth probes from the lady's pacemaker. So they're flying a helicopter over the city trying to pick up those probes, and I
just thought that was like the most awesome cyberpunk story that we just had to include.
I mean, Adam, you love this as well, right?
Yeah.
I mean, that's, it is. It's so cyberpunk, dystopic future, right there. And yet at the same time, also, like, quite a legitimately good idea. I mean, of course, if you've got an implanted medical device that sends out radio signals of any sort, it's worth going and hunting for those. And so, yeah, flying around in a helicopter or whatever, some drones, looking for Bluetooth beacons coming out of her heart.
I mean, hell yeah.
It's a great idea.
I would just hope that the people on the chopper had full, like, you know,
steampunk outfits, right?
You kind of need that for this caper.
We've also got a report here.
It's been picked up in a lot of places.
A lot of people making fun of it.
But there was some Dutch defense official, the State Secretary for Defense, who said something like: I'm going to say something I should never say, but I'll do it anyway. Just like your iPhone, you can jailbreak an F-35. And I won't say more about it.
Now, this, of course, comes in the wake of Europeans being somewhat concerned about using American defense tech.
I just wanted to talk about this because, like, what?
Like, even if you could get control of the software of an F-35,
like, it is such a complicated bit of kit that if you are not regularly plugging it in
and doing the right things with it and sending data,
like, it's just so complicated that unless you're maintaining it properly,
it's just a very expensive F-16.
It's not special anymore.
So I just, yeah, felt like I wanted to mention that.
But yeah, when you've got European officials talking about jailbreaking F-35s, like, what a world.
And finally, we've got a piece here from Daryna Antoniuk over at The Record.
This is our skateboarding dog this week, Adam,
so I'm going to let you take it away.
Yes, so a 40-year-old man in Holland
had some evidence of a crime that he wished to share with the police.
And like many police agencies, they have a mechanism where you can contact them and say,
you know, I've got some video footage or some photos that are relevant to an investigation you've got.
The police will provide you with an upload link into whatever their file transfer server,
you know, assuming they have one that hasn't already been owned.
And upload files.
The police inadvertently sent this man a download link instead of an upload link.
And he hits this download link, presumably gets a list of files that other people have uploaded,
decides to just help himself download those files,
which clearly he probably shouldn't have.
And, you know, okay, that would not be ideal.
And then when the police figured out what had happened,
they asked him very nicely, would he mind deleting them?
And he said, no, not unless you pay me.
And so no one will ever guess how this went.
Of course, the police rolled to his house and arrested him.
Who could have foreseen it.
I mean, no real surprises there.
I mean, it's really,
Da-da-da-da-da-da-da-da.
Oh, dear.
All right, well, that is actually going to do it for this week's news. Adam Boileau, James Wilson.
Thank you so much for joining me to talk through it all.
Pleasure to see you guys.
Yeah, thanks, Pat.
I'll talk to you next week.
Yeah, see you next week.
Thanks, Pat.
That was Adam Boileau and James Wilson there with a check of the week's security news.
Just before we get into this week's sponsor interview, here is our colleague Tom Uren, telling us about what he spoke about in this week's Between Two Nerds podcast with the Grugq, and what he's got planned for tomorrow's Seriously Risky Business newsletter and podcast.
This week on Between Two Nerds, the Grugq and I spoke about what we've learned about
military use of disruptive cyber operations.
We look at a couple of case studies and we wonder,
does it really make sense for countries like Australia or other middle powers to invest in
those types of capabilities?
In seriously risky business, I'm writing about the shift in Europe.
A whole lot of officials are talking about European countries investing in cyber capabilities for intelligence and for disruption.
They want to fight back against Russia, but does that really make sense?
I also look at how both Google and OpenAI are all of a sudden talking about the rise of distillation attacks.
These are attacks that basically steal the special sauce of AI models.
Why are they talking about it all of a sudden?
What does it mean?
I'll look at that in seriously risky business as well.
That was Tom Uren there with a look at what he's been up to this week.
And of course, you can subscribe to his work by heading over to the Risky Bulletin podcast feed.
So just search for Risky Bulletin in iTunes.
Or you can catch a lot of what he does on YouTube in addition to this podcast.
That is the Risky Business Media channel on YouTube.
See you all there.
All right, so it is time for this week's sponsor interview now, with Adam Pointon, who is the chief executive of Knocknoc. For those of you who are unfamiliar, Knocknoc is basically a way to instrument firewalls, and that can be, like, your Palo Alto or your Fortinet type of firewalls, or it can be hosts' own firewalls, so Windows boxes, Linux boxes or whatever. And the idea is you can set up these systems so that when you're not authenticated via your SSO, there are no open ports available to a user, right? A user wants to authenticate, they sign in with SSO, they hit a little web app, and then bang, it opens up the assets that that user is supposed to have access to. So this is something that's useful for blocking off external attack surface in a lot of instances.
But what we're finding is a lot of people are using this internally, right?
So they might have a group of machines that they just think are really risky, whether that's, you know, KVM over IP or whether it's a bunch of RDP or jump hosts or whatever.
And they use Knocknoc so that, you know, the average user on that network, or an unauthenticated user on that network, just can't even access the network ports.
Very popular in OT environments as well.
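The core idea described above reduces to a simple rule: no SSO session means zero open ports, and an authenticated session opens only the ports that the user's groups are entitled to. Here's a toy sketch of that decision logic; the `Acl`/`allowed_ports` names and the session shape are invented for illustration, and this is not Knocknoc's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Acl:
    # Ports each SSO group is allowed to reach; everything else stays closed.
    group_ports: dict = field(default_factory=dict)

def allowed_ports(acl, sso_session):
    """Return the set of ports to open for this user; empty if unauthenticated."""
    if not sso_session:          # no SSO session -> zero open ports
        return set()
    ports = set()
    for group in sso_session.get("groups", []):
        ports |= set(acl.group_ports.get(group, []))
    return ports

acl = Acl(group_ports={"ops": [22, 3389], "dev": [22]})
print(sorted(allowed_ports(acl, {"user": "jane", "groups": ["ops"]})))  # [22, 3389]
print(sorted(allowed_ports(acl, None)))  # []
```

In a real deployment, the output of a function like this would be pushed to a firewall (network or host) as allow rules scoped to the authenticated user's source address, and torn down when the session ends.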
Anyway, they're at the point now where they are building out more and more agents, right? So the big one to be released is a Windows agent. So on any version of Windows back to Windows 2019, you can now use Knocknoc so that, you know, you can put all of that machine's ports in a closed state to everyone bar people who are SSO-authenticated. But they've also built some fun agents. Last week, I mistakenly said mainframe. I just had a brain snap. They've actually built Knocknoc agents for Solaris on SPARC and HP-UX on PA-RISC, at the request of a customer. So here is Adam Pointon talking, first of all, about building those clients, because you can't exactly use Golang on Solaris on SPARC. So here's him talking about how they built that Solaris on SPARC agent in 2026. Enjoy.
So we've got all these hosts, HP-UX on PA-RISC architecture and Solaris on SPARC architecture. We want them self-defending. They're in a network that doesn't have, you know, multiple layers of firewall controls, because real-time, low latency, etc. Can these things become self-defending? And I was like, oh yeah, Solaris on SPARC, that's not hard. You know, I was a huge fan of SPARC. No. Golang on SPARC, not a thing. HP-UX on PA-RISC, not really a thing. And I was like, I'll jump on eBay, get some hardware, you know, we'll be able to work this out, maybe we can virtualise it. And, yeah, it was a little bit more of a challenge than we originally thought. But yeah, that is the use case: self-defending systems that don't have layers of firewall surrounding them, but they're important assets that do important things.
Yeah, so, I mean, in the end, you wound up Clauding this one, right? Into C?
Sort of.
So we knew the natural thing was, well, we're going to have to write it in C, but we were very hesitant, obviously very hesitant, in doing that. So we tried Golang first. We could go back to 1.18, not a modern version. Then we thought about all the libraries, and we're like, let's go down that path first. We kind of had C over here, we're in Golang world, and we're like, well, before we go straight over to C and just write it, you know, should we go through another layer? Should we do it in Python and do bytecode? Should we use another language that's kind of ubiquitous? And then we had the Python version 3, version 2 thing. We kind of went through this whole thing, and I'm like, all right, we're just going to have to do it in C. So what does our attack surface look like for the orchestration agent? Let's thin it out. We don't need the thing on HP-UX managing your Palo Alto, right? Like, it doesn't need to do that. It just needs to do the thing on HP-UX. So this kind of very slim, intentional agent is the path we went down.
And then, yeah, we used Claude and other things. But a lot of that was actually automated testing, fuzzing. Obviously, DAST and SAST is part of that from a, you know, static analysis security perspective, but also testing. So a lot of automation was sort of built using gen AI, to just make sure everything that we wrote would work on all these architectures in a very easy way. So actually, I love the ability to automatically test, and the scale that we could do that at was just nuts compared to, you know, five, six years ago. Testing was not an afterthought, but today it's just like you have multiple agents that are just fighting each other to test every single line of code that's written.
It's, yeah, it's great.
Yeah, so you, what, you actually wrote this the old-fashioned way, did you?
A bit of both. So, yeah, it wasn't just,
Like, you know, here's the Go agent.
Please rewrite this in C.
Off you go.
No, no.
There's a little bit of, like, you know, architecturally we had to really think about it, because the current Go source code is complete, right? It's got all the Fortinet, Palo Alto stuff. So we had to really say, you know, this is actually what we're going for, and then start to write it the old-fashioned way, and then augment that with, you know, taking our internal protocol, converting that across to C, how are we going to do that, etc.
So there was a bit of both.
It was kind of hybrid, right?
Yeah, but quicker, right?
And I think that's the point is that things have got a lot easier.
Oh, so much faster.
And it wasn't just full vibe code, because we had our existing agent, we had the existing code, we knew what we wanted.
It was intentionally slimmed down.
But the real time saver was the ability to test, the ability to cover multiple architectures, you know, SPARC, RISC.
And, yeah, it was a great outcome.
So much faster.
So much faster.
But the big thing that you've released now is a Windows agent, and I'm guessing it supports all flavors of Windows that you're likely to encounter in a contemporary enterprise.
It goes back to 2019, and we found Golang, modern Golang, just doesn't go earlier than that, really.
So we originally figured Server 2008 is going to be in environments still, and we do see it.
But we could only go back to 2019 without fundamentally changing the stack.
Yeah, so what was involved in creating a Windows client that can instrument the Windows native firewall?
Was it unruly? Or, I'd imagine, something like that these days with Go is probably pretty simple.
Like something like that these days with Go, probably pretty simple.
Super simple.
Yeah, dead simple.
And it's orchestrating the firewall; it's not a client.
So it's in a weird spot, right?
It's not a client that you use to get access; instead it manages the Windows firewall.
So pretty trivial, really.
Like, Golang's super portable.
So it was as easy as we thought it would be, which was quite easy, to port it across and make it work.
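For a sense of what "orchestrating the Windows firewall" from a Go agent can look like, here's a minimal sketch. This is not Knocknoc's actual code: the rule name, the helper function, and the choice of shelling out to `netsh advfirewall` (rather than using the Windows Firewall COM API) are all assumptions for illustration.

```go
package main

import "fmt"

// buildAllowRuleArgs returns the netsh advfirewall arguments that would
// create an inbound allow rule for one TCP port, scoped to a single
// source IP. Illustrative only; rule naming and the netsh approach are
// assumptions about how such an agent might work.
func buildAllowRuleArgs(ruleName string, port int, srcIP string) []string {
	return []string{
		"advfirewall", "firewall", "add", "rule",
		"name=" + ruleName,
		"dir=in", "action=allow", "protocol=TCP",
		fmt.Sprintf("localport=%d", port),
		"remoteip=" + srcIP,
	}
}

func main() {
	args := buildAllowRuleArgs("knocknoc-rdp", 3389, "203.0.113.7")
	fmt.Println(args)
	// On Windows the agent would then run something like:
	//   exec.Command("netsh", args...).Run()
	// and delete the rule again when the grant expires.
}
```

Keeping the rule construction in a pure function means the arguments can be tested anywhere Go cross-compiles, without touching a real firewall.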
Now, I'm guessing most people aren't putting their...
Windows boxes on public IPs
just for the whole world to connect to without a firewall, right?
So, you know, I'm guessing this is very much targeted
at that internal use case.
Exactly, yeah, lots of Windows internally,
lots of jump boxes, lots of RDP.
I don't even know how much Windows would be on the naked internet anymore,
not a lot, but yeah, mostly the internal use case.
But there's a lot of it.
The progression has been, as you said, external first.
They start to use Knocknoc and see,
I can have this external thing invisible.
Now let's do that internally.
Now that I can do it on Windows,
let's just do that internally.
Why do we have these Windows machines
even on the internal network visible all the time?
So it's that natural progression.
Yeah, so I mean, one thing, though...
So this is sort of turning into a micro-segmentation use case, right?
For Knocknoc internally.
But it's a different way, I guess, of thinking about micro-segmentation.
Because normally people think about doing that network-wide, right?
So it's like this big project,
you're slicing and dicing a network and carving it up into little segments that can't talk to each other
or can only talk to each other in certain ways.
But this is just like, that box over there is super vulnerable.
Let's protect it.
I mean, it's more of a, you know, it's like asset by asset, right?
Yeah, network segmentation has usually been a centralized view of everything,
whereas this is self-defending hosts.
Like, everything should be self-defending and not
necessarily relying on something else to make a decision for it.
And our approach has been to have the agent on the Windows machine, which then becomes
self-defending and not reliant on some other recipient of traffic, some routing thing.
So yeah, it's a distributed self-defending approach instead of more of a centralized control
approach.
Yeah, yeah.
But I guess the point is you might not want to make every single host in your internal
environment self-defending.
You might just have, like, 10 boxes that keep you awake at night.
You can just whack this on them.
Yeah, exactly.
It's the critical stuff.
Let's start here.
Let's make this thing invisible by default.
As opposed to kind of this whole arduous, like, what do I need, when, why, how.
It's just that thing's scary.
Let's have it off the network.
Well, and you could do it port-specific, right?
So you can have a Windows box on the network where, you know, port 443 is available.
So you've got all of your, you know, SSL/TLS stuff, you know, being served normally,
but RDP is only available to people who've been through that SSO challenge.
Yeah, exactly.
It's not like...
And you're in the right group.
So people can't even access that port unless they've been through the SSO challenge
and are in the right group.
Yeah, exactly, which is where, you know, a lot of people start with admins first.
Admins have broader access.
RDP is a classic.
Backup services is another one.
You know, backup services that run on weird ports.
Like, should they be accessible to everybody all the time?
No.
So that's another thing that is often covered off.
Now, of course, you were very much at the point with this company where the core of it is done,
and now you're moving out to sort of integrations, right?
So you've got Palo Alto, Fortinet and, you know, Checkpoint and all of that.
Now you've got Windows firewall, you've got a Windows agent, you've got a Linux agent, like on and on and on.
But of course, there is the cloud stuff and the SaaS stuff.
You've been doing Amazon for a while, but now you're branching out, you've got DigitalOcean,
a few other bits and pieces.
Yeah, so AWS and Azure, anything that's got a firewall.
But the other thing that is out there: Cloudflare has an IP address allow list that you can manage,
and that basically controls the traffic that then goes to the underlying Cloudflare-protected assets.
So we're orchestrating that in conjunction with AWS and Windows and Linux and all the other bits and pieces to kind of give that total view.
But that means only trusted sources of traffic will be pushed through to Cloudflare.
So it gives another layer: rather than just using Cloudflare
for its WAF and other benefits, it's kind of an allow list within it, in conjunction with all the other platforms.
And you're doing, like, stuff like Salesforce as well, right? Yeah, well, anything that
has an allow list. A customer would come to us: you know, we've got Salesforce, how do you handle that?
Well, actually, it's got an allow list, so let's get IPs in there. DigitalOcean, which is sort
of more similar to AWS and Azure. But yeah, going down the SaaS path: if it's got an allow
list and we can orchestrate it, why wouldn't you tie that in? Why wouldn't you remove that attack surface? It
just becomes a natural extension.
When a customer's using it already, they're like,
well, I've got this other third party app.
It's got an allow list. Can we get you in there?
Yeah, we can do that.
Yeah, sure.
We can.
All right, Adam Pointon, thank you so much for joining us for that update.
Always great to talk to you.
Thanks, pleasure to be here.
That was Adam Pointon from Knocknoc there,
chatting about some work that they've been doing.
Full disclosure, I am a Knock Knock shareholder.
I'm also on the board of directors,
so that's something you all should know.
But that is it for this week's show.
I do hope you enjoyed it.
I'll be back soon with more security news and analysis.
But until then, I've been Patrick Gray.
Thanks for listening.
