Risky Business #757 – The ClownStrike cleanup continues
Episode Date: July 31, 2024

On this week's show, Patrick Gray and Adam Boileau discuss the week's security news, including:

The insurance industry's reaction to CrowdStrike's mess
Google's Workspace email validation flaw and its consequences for OAuth'd applications
Is the VMWare ESX group membership feature a CVE or an FYI?
Secure Boot continues to under-deliver
North Korea's revenue-neutral intelligence services
And much, much more

This episode is sponsored by allowlisting software vendor Airlock Digital. Airlock uses a kernel driver on Windows, so Chief Executive David Cottingham joined to discuss what the CrowdStrike kernel driver bug drama means for security vendors.

This episode is also available on YouTube. If you want to ruin the magic of radio and see the faces behind the show, well, now you can!

Show notes
Business interruption claims will drive insurance losses linked to CrowdStrike IT disruption | Cybersecurity Dive
Delta hires David Boies to seek damages from CrowdStrike, Microsoft
CrowdStrike disruption direct losses to reach $5.4B for Fortune 500, study finds | Cybersecurity Dive
Why CrowdStrike's Baffling BSOD Disaster Was Avoidable - YouTube
CrowdStrike offers a $10 apology gift card to say sorry for outage | TechCrunch
Crooks Bypassed Google's Email Verification to Create Workspace Accounts, Access 3rd-Party Services – Krebs on Security
Hackers exploit VMware vulnerability that gives them hypervisor admin | Ars Technica
Microsoft calls out apparent ESXi vulnerability that some researchers say is a 'nothing burger' | CyberScoop
AMI Platform Key leak undermines Secure Boot on 800+ PC models
Chrome will now prompt some users to send passwords for suspicious files | Ars Technica
Google Online Security Blog: Improving the security of Chrome cookies on Windows
A Senate Bill Would Radically Improve Voting Machine Security | WIRED
U.S. told Philippines it made 'missteps' in secret anti-vax propaganda effort | Reuters
Cyber firm KnowBe4 hired a fake IT worker from North Korea | CyberScoop
North Korean hacker used hospital ransomware attacks to fund espionage | CyberScoop
North Korea Cyber Group Conducts Global Espionage Campaign to Advance Regime's Military and Nuclear Programs
North Korean hacking group makes waves to gain Mandiant, FBI spotlight | CyberScoop
ServiceNow spots sales opportunities post-CrowdStrike outage | Cybersecurity Dive
Chaining Three Bugs to Access All Your ServiceNow Data
Cyber Supply Chain Risk Management Conference (CySCRM) 2024 | Conference | PNNL
Transcript
Hey everyone and welcome to another edition of Risky Business. My name is Patrick Gray.
This week's show is brought to you by Airlock Digital, which makes a fantastic allow listing
platform. And David Cottingham, a co-founder and chief executive of Airlock, will be along
in this week's sponsor interview to talk about why they use a kernel
driver to do allow listing on Windows and how they implemented their product using the Mac OS API.
Obviously, this is very relevant in light of what happened with CrowdStrike's
adventures in Windows kernels. That is a terrific interview, and it's coming up after this week's news with Adam Boileau,
which starts now.
Adam, how's it going?
Yeah, not too bad, Pat.
Not too bad at all.
Excellent.
Okay, so we're going to start off with this story right now, which is, well, a few stories
on the same thing, which is the fallout from the CrowdStrike blue screen of Deathapalooza,
which happened, you know, what, 10 days ago or so now?
What a mess. Such a mess.
Yeah, such a mess.
So it looks like, you know, this is turning into a mess,
not surprisingly, for the insurance sector.
And we've got a cybersecurity dive report here
written up by David Jones,
looking at some comments out of Moody's.
And they have said, quote,
we expect reinsurers to re-evaluate
underwriting practices, especially for systems failure coverage, to ensure there's a clear
understanding of the risk and pricing of the exposure, which I think is a very long-winded
way of saying that premiums are going to go up. That certainly is what happens in the insurance
industry, yeah. And it makes sense that they probably didn't expect that much exposure from one event, right? I mean, having so many insured companies now claiming for business interruption, or whatever else they're going to go after their insurance policies for, you know, because of one incident, that's the sort of thing that makes the insurance industry pretty nervous.
Yeah, well, yeah.
And the type of company that uses CrowdStrike is also the type of company that is going to have this sort of insurance, right?
Expensive insurance, yes.
That Venn diagram is pretty closely aligned.
Now, it's not just causing drama in the insurance sector.
Delta Airlines was one of the major corporations that was most severely impacted by the CrowdStrike outage. We all saw the pictures on social media of people being stranded at
airports, not just social media, mainstream media as well. There were people who were stuck in
airports for days. It just sounds like my personal nightmare. They've now engaged David Boies' law firm, and they're seeking damages from CrowdStrike and Microsoft. And I'm
going to be real curious to see how this goes, because as we know, software license use agreements
tend to say, my joke, my regular joke on this is they reserve the right to throw their users
through a wood chipper. But you kind of think, given how anomalous CrowdStrike's approach to rolling these updates is, I mean, maybe there's a case to answer here.
Yeah, I mean, I think it's going to be legitimately interesting to see whether those license agreements terms really hold up to this.
Because, like, we're talking billions.
The number we saw was like five and a half, I think someone said.
Yeah, 5.4 billion, just for the Fortune 500, right?
So, yeah.
Right. And, I mean, I get the impression that's a conservative estimate as well.
I mean, I would not be surprised at all, right, because it broke all sorts of things. And so, you know, those click-through license agreements, there's a lot of words in those. A lot of people don't read them, and then there's the idea that you're going to go out and get your corporate counsel to read them and agree to them.
And there's just so many kind of parts of that whole ecosystem
that don't fill you with this is a legitimate practice
that is all squeaky clean and has nothing wrong with it.
And a multi-billion dollar set of losses.
You know, this is going to be interesting to see.
Yeah, well, Delta says, I mean, according to this article,
at least Delta's
losses alone are something like $300 to $500 million. I mean, that's half a billion dollars
for one company, which is why I say, you know, $5.4 billion feels a little, a tad on the conservative
side. Yeah, I mean, I remember when, you know, early in my career, I went out to the airport to
do some work on some Unix system, and I had to sign a bit of paper that says, you know, if you
stop the planes working, we're going to send you the bill, right? And I was working on air traffic control radar Unix boxes, and I went, well, my boss told me to go here and do whatever they said, so I'm doing it. But when I went back to work and said to my boss that I had signed us up for, you know, airplane-delaying liabilities, he was rubbing his temples. So, you know, like, I don't know, I'm interested to see.
Well, now that you work full-time for Risky Business Media, I would just ask that before you sign a similar document, you maybe check with us first, right?
Yeah, that's extremely not great.
Now, I also did an interview on all of this with Alex Stamos and Chris Krebs. Chris Krebs, of course, being the original founding director of CISA,
and Alex being the former Facebook CISO and Yahoo CISO.
Of course, they both work now for SentinelOne,
which is a direct competitor to CrowdStrike.
So you've got to view the interview with that in mind.
They're very transparent about that.
But that was a really interesting chat from another EDR company. I thought that was interesting and
they really didn't hold back and just, they did not. And just said like, I think when they roll
updates and I imagine for most EDR companies, it would be the same. The thing that I found most
interesting about that interview is when they talked about their process for updating rules
that go out to the sensors on people's boxes, right?
Where it is staggered, there's a formal dynamic testing process.
And the other thing that I found really interesting
is that they actually collect telemetry from boxes,
you know, as those updates roll out
so that they can make sure that they're not falling over.
And they did admit that sometimes things go wrong.
They might hit a box with an unusual config
or some obscure software and there's a conflict.
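For readers who want to picture what that staggered, telemetry-gated rollout process looks like in practice, here is a minimal conceptual sketch. The ring sizes, the crash-rate threshold, and the push_update/crash_rate hooks are all hypothetical placeholders for illustration; this is not anything CrowdStrike or SentinelOne has published.

```python
import time
from typing import Callable

RINGS = [0.001, 0.01, 0.1, 0.5, 1.0]   # fraction of the fleet per stage (assumed values)
CRASH_THRESHOLD = 0.001                # halt if more than 0.1% of sensors report faults
SOAK_SECONDS = 30 * 60                 # let telemetry accumulate before expanding

def staged_rollout(update_id: str,
                   push_update: Callable[[str, float], None],
                   crash_rate: Callable[[str, float], float]) -> bool:
    """Roll a sensor update out ring by ring, halting if health telemetry degrades.

    push_update and crash_rate are hypothetical hooks into a fleet-management
    and telemetry pipeline; they stand in for whatever a vendor really uses.
    """
    for ring in RINGS:
        push_update(update_id, ring)       # expand the update to the next ring
        time.sleep(SOAK_SECONDS)           # soak period while sensors report back
        rate = crash_rate(update_id, ring)
        if rate > CRASH_THRESHOLD:
            print(f"halting {update_id}: crash rate {rate:.4%} at ring {ring:.1%}")
            return False                   # stop before the blast radius grows
        print(f"{update_id}: ring {ring:.1%} healthy ({rate:.4%})")
    return True
```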
I did find that interesting. Now, as I say, they're a competitor of CrowdStrike, right? They're not going to be sitting there making excuses for them. But certainly the process they described seems a lot more in line with what I expected CrowdStrike would have been doing.
Yeah, and I found that as well. I mean, you would have expected an organisation the size and importance of CrowdStrike to be doing it in a more robust way. But as we have seen with, you know, a bunch of outages in, say, Microsoft stuff, not all vendors have, and especially, I think, the ones that have adapted to this a little bit later. So I think, like, in Microsoft's case, right, with Azure, right,
they pivoted pretty hard to cloud
and they're still kind of in some sense,
say, catching up with what Amazon have spent
a long time with AWS building.
Like Amazon had a much gentler introduction
to being critical infrastructure,
cloud infrastructure than Azure had.
You know, Azure went from zero
to runs the entire planet
in the space of what feels like months,
although I guess it was longer than that.
And I do wonder, like, because CrowdStrike's been in this game for so long, you know, I wonder if there was a degree of complacency or a degree of, you know.
Well, I mean, I think it was just they did it this way by validating these updates and not testing them because that's the way they always did it. And it just strikes me as classic eye off the ball
and the people who used to care about that sort of stuff
don't work at the company anymore.
If you want to ask me for a root cause,
that's, I suspect, what is behind it.
But look, CrowdStrike also not doing itself any favours
in the way that it's responded to this.
I mean, the most insane thing that we saw
was them offering a $10 apology gift card.
It was like an Uber Eats gift card for $10.
You can't even get a slice of pizza on Uber Eats for $10.
It just seems like the most bizarre PR response, which is like, here, peasant, have a peanut.
You know, this was only going to make it worse. Because, and here's the thing, right, if you want to put it into words why this was a bad idea, it's because it trivialises the grief that people have been going through. Oh, here's 10 bucks, because we just nuked your entire environment. I mean, they just keep sticking their foot in it. I don't know what they're doing.
Yeah, like, we grounded your airline, here, have a cup of coffee.
Oh, no, you have to get it delivered.
Right, a delivery cup of coffee.
Like, I don't know what the PR, like, surely they have a PR firm, right?
And surely the PR firm would have said, like, just no, right?
I mean, yeah, it just has not helped at all.
And I guess this will be taught as, you know, what not to do lessons.
Yeah, no, 100%.
This is a case study in how not to respond to an incident like this.
Look, we're going to change gear.
That's it for our CrowdStrike discussion.
Although we are going to mention it a little bit later on, in regard to a ServiceNow piece.
Anyway, that's funny.
We'll get to that in a bit.
But now I want to have a bit of a discussion about this piece from Brian Krebs.
It's interesting on a lot of levels, right?
Because it actually exposed to me something that I didn't know about how OAuth works.
So the rough shape of this story is that there was a bug in Google Workspace that allowed
people to register email verified accounts.
So this doesn't get you a fully featured workspace environment,
but it allows you to spin up a Workspace account for that single email address that you've signed up for.
So there was some sort of bypass in the email verification step.
So, for example, you know, joe at risky.biz, if we weren't a Workspace shop, someone could have signed up a Workspace account for joe at risky.biz. And where this gets interesting is that this apparently allows you to OAuth into services registered under that email address, because these services don't pin an authentication method to accounts. And I did not realise that was the case. So I tested this by signing up via email for a Dropbox account, because I didn't have a Dropbox account. Signed up with my email address, and then OAuthed straight into it from a different browser profile, from my Workspace account.
So that would mean, yeah, I mean, and that's exactly what it looks like happened here. It looks like these attacks were targeting the crypto space.
People were using this bug to register workspace email validated accounts or trial accounts and then onwards OAuth from there.
So the thing that really surprised me is, I would have thought that if I have set up an account to be a password-based account, like, why is OAuth working? And it's because everyone places trust in Google, essentially, as an identity provider. Which, this was expected behaviour as far as you were concerned, but I can't be the only one who was surprised by this.
No, I think it makes a lot of sense that you would be. You know, if you haven't built a system like this, you know, understanding the exact plumbing of how you integrate these kinds of auth and stuff, you know, it isn't immediately obvious. And I think it's entirely a reasonable
expectation that you had. But, you know, this is the downside, I guess, of federated authentication,
of outsourcing your auth to somebody else, is in the end you are saying, whatever Google says, that's cool. Like, that's what that authenticate-with-Google or sign-in-with-Apple
or Facebook or whatever else button does.
It just completely delegates the auth to the other platform.
And if you have a flaw, like in this case,
being able to bypass email verification,
then that's really, that's not good.
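To make that delegation concrete, here's a minimal sketch of the account-linking step a third-party service might do with a Google ID token, using the google-auth library's documented verify_oauth2_token call. The in-memory "user database" and the lookup-by-email logic are assumptions for illustration, not any particular vendor's code; the point is that if the service keys purely off the verified email claim, a freshly minted Workspace identity for that address lands in an account that was originally registered with a password.

```python
# pip install google-auth
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

CLIENT_ID = "example-client-id.apps.googleusercontent.com"  # placeholder

# Hypothetical stand-in for the service's real user store.
USERS_BY_EMAIL = {
    "joe@risky.biz": {"email": "joe@risky.biz", "created_via": "password"},
}

def login_with_google(token: str) -> dict:
    # Verifies the ID token's signature, audience and expiry against Google.
    claims = id_token.verify_oauth2_token(token, google_requests.Request(), CLIENT_ID)

    if not claims.get("email_verified"):
        raise PermissionError("Google reports this email as unverified")

    email = claims["email"]
    # The step that surprised Patrick: the account is looked up purely by the
    # verified email claim, with no check of *how* that account was created
    # (password sign-up, OAuth, etc.), so the Workspace identity is treated
    # as the same user.
    user = USERS_BY_EMAIL.setdefault(email, {"email": email, "created_via": "oauth"})
    return user
```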
The kind of idea of pluggable authentication, where you can have lots of different authentication options, you know, be it local password auth, be it OAuth, be it some kind of integration with a directory, you know, like an LDAP or an Active Directory, or something that isn't OAuth integration. Like, that as a pluggable mechanism, to me as, you know, a sysadmin, like, in Unix and Linux, for example, we have PAM, which allows you to have pluggable authentication. And so this idea of it being entirely independent of how you do auth to the application, it makes sense to me that these are interchangeable, but I can see why that's not what everybody expects.
Yeah, 100%. And it also looks like it's possible, and
we don't know,
it's possible that when all of that Squarespace stuff was going on, and people were saying that these Squarespace bugs allowed people to seize control of Google Workspace accounts and whatever, if they had bought them through Squarespace, and it turned out Squarespace came out and said that's not the case. I think it's kind of possible that this activity that's been written up, because this was being exploited in the wild and targeting the sort of same set of people, so I think possibly those two things got conflated. And that's why there was some confusion about whether or not there was a connection between the Squarespace bug and weird stuff happening with Google Workspace.
Yeah, I think that makes sense, because the Squarespace one was also, like, it being a little bit unclear how local auth and OAuth worked together, even though it wasn't this exact situation with, you know, a bypass of email verification.
Anyway, it's still a really great reminder to think whole-ecosystem about your auth systems, because I had never thought of, you know, using a Google Workspace sign-up to get into an account via OAuth. I never thought of that.
Okay, well, extending that a little bit,
imagine if you've got control of someone's zone file
and you can do a domain-based validation
for someone who's not a Workspace shop.
So you've got control over their zone file, you can register an entire Workspace on their domain, and then you can impersonate every user with third-party services. And we shouldn't be surprised there, like, a zone file compromise is a big deal, but this is one way to, like, turn that into essentially an IdP compromise, if you consider Google to be an IdP
for a lot of these third-party services.
So even if people are using email-based auth,
if you can register a workspace account for that domain
because they're not using it, off you go.
Yeah, and it's a much smoother way of doing it
than the traditional approach,
which is to hijack the MX, get in the middle of the email,
go through email password reset, which is a noisy, messy...
Exactly.
...you know, easily observable set of things. You're going to break their mail unless you get in the middle. Anyway, there's a lot of stuff that can go wrong. It's a big difference between, you know, inserting a TXT record into a zone file, right? Like, people are not going to notice that.
Yes. Yeah. And the process of setting up a Workspace,
like it makes a little bit of noise, but it's not much.
No.
It's going to look like spam.
Yeah, exactly right. It's going to look like regular kind of fishy spammy stuff and not look quite as sus and important as it is. If I were a pen tester or if, you know, when I ran the team of pen testers and someone came to me and said,
okay, I want to try this on this job,
I would probably as the person responsible for making sure they stay in scope
be a little bit like, I'm not 100% sure if we can do that
or like that sounds like I'm going to have to have meetings about that.
Do you have any other bugs?
Yeah.
Do you think it's worth people who are not Workspace users to just register a single domain-validated account, so that people can't do this to them? Because, I don't know, there's something about this that just gave me the willies.
I mean, there are other options, though, right? I mean, there's always something with Apple, there's always something with Facebook, all of the other kind of options for, like, the more Workspace-y of these.
Workspace is domain-wide.
Workspace is pegged to your domain.
This is what I mean.
This is why this one in particular makes me a little bit.
It's not like you have to pop, you know,
if you get that text record into a zone file,
you can impersonate all of a sudden every user in the,
you know, every user in the org.
You know, I guess like if you're in a position where your attacker
has got your zone file, there's so many ways you're gonna get wrecked. It's just that this one is smooth.
And that's the point, right? And, like, it doesn't even have to be the zone file. Like, they can own your domain name registrar, do you know what I mean? Like, it's just, I don't know, this just feels like something that, I don't know, like, someone's gonna get done this way eventually, right?
Yeah, yeah. Like, I mean, I feel
like surviving your DNS zone being compromised is very hard. But something like spotting unauthorised changes to your zone, like, just pull your zone down and have a look at it every now and again, like, that sort of detection would be a more generic and useful thing. But yes, like, this gives me the willies when I've got a Workspace account for my personal stuff, and like, I'm just looking at it going, hmm, I hadn't really considered that.
So, yeah, yeah, it's worth thinking about and discussing with your security team or people or pen testers.
Let's see, the problem with alerting on a zone file change is, like, how many people are messing around with that zone file? Like, is it truly weird that it changed, right? Like, someone whacked in a TXT record, could that have been someone else over here, if you're a big org? I don't know.
I mean, yeah, it depends. I mean, I don't know, they don't change a whole bunch these days. Like, because, I don't know, like, DNS is a relatively slow amount of change, and hopefully relatively controlled, like, change management processes and all those kinds of things. Not that many people are yoloing their zone files all the time.
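Adam's "pull your zone down and have a look at it" suggestion is easy to automate. Here's a minimal sketch using dnspython's zone transfer support; it assumes your nameserver allows AXFR from the host running it (otherwise you'd swap in an export from your DNS provider's API), and ns1.example.com, the zone name, and the snapshot path are placeholders.

```python
# pip install dnspython
import difflib
from pathlib import Path

import dns.query
import dns.zone

NAMESERVER = "ns1.example.com"       # placeholder: a server that permits AXFR to you
ZONE_NAME = "example.com"
SNAPSHOT = Path("example.com.zone")  # last known-good copy on disk

def check_zone() -> None:
    # Pull the whole zone via AXFR and render it as sorted text for stable diffs.
    zone = dns.zone.from_xfr(dns.query.xfr(NAMESERVER, ZONE_NAME))
    current = "\n".join(sorted(zone.to_text().splitlines())) + "\n"

    if SNAPSHOT.exists():
        previous = SNAPSHOT.read_text()
        diff = list(difflib.unified_diff(previous.splitlines(), current.splitlines(),
                                         "previous", "current", lineterm=""))
        if diff:
            # A surprise TXT record (say, a domain-validation token) shows up
            # here as an added line.
            print("\n".join(diff))
        else:
            print("no changes")
    SNAPSHOT.write_text(current)

if __name__ == "__main__":
    check_zone()
```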
I don't know, man, I don't know. Anyway, as I say, like, this was a much more interesting story than it appeared at first glance, for sure, because it certainly updated my understanding of how OAuth works. Now let's talk about this VMware vulnerability, right?
Feature.
Yeah, so we've got a write-up from Dan Goodin here,
and I don't know, this is one that feels like a feature
that is, you know, working as intended, it's by design,
it's been written up as a vulnerability.
And what's great is we've got two contrasting stories here.
We've got Dan Goodin's write-up from Ars Technica, and then this morning, just before we recorded, AJ Vicens over at CyberScoop wrote up this one, where, you know, the headline is, Microsoft calls out apparent ESXi vulnerability that some researchers say is a 'nothing burger'. And honestly, I kind of had that reaction to it when I first saw it, because it's like, yeah, if you can create new user groups in
Active Directory, you can spin one up that is called ESX admins and then, you know, log
into these, you know, these machines.
I mean, if you've got domain admin, it's kind of over at that point, isn't it?
Like, walk me through why this is a vulnerability because I'm not quite sure I understand.
So I agree.
I don't think it's a vulnerability. It's a feature that is perhaps surprising, which is often the best sort of
vulnerabilities in a way. Where this is impactful is when you're a ransomware crew and you want to
pivot rapidly into the ESX to deploy ransomware, and you've got domain admin through the regular mechanisms by which you would normally get domain admin, and encrypting the ESX is the fastest and easiest way to go do it.
Now you're right.
If you've got domain admin, you're going to win eventually.
And going from domain admin to control of something like the ESX
without it being, if the ESX wasn't domain joined,
which is one of the prerequisites here,
then you'd have to go find the ESX admins
and key log them and follow them in.
And it's a time consuming process.
This is just a rapid way
if someone has domain joined their ESX,
which I don't know that there's a bunch of great reasons
to do other than as you suggested compliance.
Yeah, we were talking about this before we got recording, and you're like, what's a good reason? I said compliance, and you said, no, what's a good reason? I'm like, I think our definitions of what a good reason is are different. But yeah, because when it's compliance, you've got to do it.
Yeah, yeah, a good technical reason is what I meant, but yes, I agree, compliance is unfortunately sometimes a good reason. So, yeah, mostly it's a rapid technique, and what was interesting is that ransomware crews were
using this.
Yeah, the surprising bit, I guess, is that if you didn't use this feature, like, so when you enrol your ESX, join them into your domain, if you hadn't set this up, that you could then just go create the group and it would magically start working.
Yeah. And then there was also, like, because this is not a group that would exist normally, like, if you hadn't created it in your AD, it still was known about by your ESX, because it was kind of default by name, et cetera, et cetera. So it could have been communicated better. And it was in the documentation, so, like, VMware did say this is how it works, but, you know, if you didn't turn it on, you probably didn't expect it.
It's a little surprising. Yeah, I guess.
But again, it just seems to me like it's designed to speed something up for legitimate users. It also speeds things up for attackers, but it gets a CVE. I just find that a bit strange.
Yeah, like I don't know that I would give this a CVE.
And like it's in the hardened guides.
Is it a CVE or is it an FYI?
You know?
Exactly.
I mean, either way, I guess,
like ESX as a cloud plumbing is going the way of the dodo,
thanks to Broadcom and other options for clouds.
But there is still a lot of it out there
and getting a little ransomware is pretty bad.
So if you do have a bunch of ESX and it is domain joined,
it's probably worth defensively creating this group
and not putting anyone in it.
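If you do want to pre-create that group defensively, as Adam suggests, here's a rough sketch of doing it over LDAP with the ldap3 library. The DC hostname, credentials, and DN are placeholders, and the OU and group settings are assumptions for illustration; the same thing is a one-liner with the ActiveDirectory PowerShell module's New-ADGroup if you'd rather do it natively.

```python
# pip install ldap3
from ldap3 import Connection, NTLM, Server

# Placeholders: your DC, an account allowed to create groups, and the target DN.
server = Server("dc01.example.com", use_ssl=True)
conn = Connection(server, user="EXAMPLE\\svc-admin", password="...",
                  authentication=NTLM, auto_bind=True)

# 0x80000002 = global security group, stored as a signed 32-bit value in AD.
GROUP_TYPE_GLOBAL_SECURITY = -2147483646

# Create the group name ESXi looks for by default, keep it empty, and lock
# down who can change its membership.
conn.add(
    "CN=ESX Admins,CN=Users,DC=example,DC=com",
    ["top", "group"],
    {
        "sAMAccountName": "ESX Admins",
        "groupType": GROUP_TYPE_GLOBAL_SECURITY,
        "description": "Pre-created defensively; membership should stay empty",
    },
)
print(conn.result)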
Funnily enough, the thing that people are ripping and replacing the, like, Citrix and VMware stuff with, like, for any VDI, it's all going Island. It's all going custom browsers, like enterprise browsers,
which is interesting because it's a completely different technology
that you can use to achieve the same outcomes.
Obviously, Disclosure, they're a sponsor of the show,
but I do find that interesting that that's what's turned out
to be the number one killer use case is like ripping out VMware
and replacing it with a fancy browser, you know?
Yeah, I mean, VDIs were always a compromise in other ways,
like things like sound support or video multimedia support.
It was always just kind of like VDIs never worked as well as people imagined they would and i'm not surprised
that we're seeing it replaced with just the browser with some instrumentation and control
It was so exciting when it was new, you remember?
Yes. Yeah, it was very exciting then. But yeah, I thought I'd be able to run Linux on the desktop forever, because I could just use Windows on a VDI.
Yes.
Now, let's talk about secure boot drama.
I've linked through to our colleague Catalin Cimpanu's write-up on this.
We've got some key exposure,
which is causing big problems for a bunch of motherboard manufacturers.
Walk me through this whole one.
I didn't follow it that closely.
Right.
So secure boot's the infrastructure for verifying
that the boot process hasn't been tampered with
on regular, you know, x86 and now to some degree ARM computers.
This involves BIOS manufacturers and motherboard manufacturers
and operating system manufacturers all working together
to implement a PKI, right, and have some kind of certificate-based system
for asserting trust, platform trust,
all the way up into the OS during the boot.
How could that go wrong?
I mean, so far, so good, right?
Right.
And so we have a cryptographic scheme
that relies on everybody involved
understanding what they are doing
and making good choices.
And we're going to do this in a market
where price really matters, where there's lots of different vendors all kind of competing
and of course it's a mess, right? It takes a centralised org with vertical, you know, full-stack control, like Apple, like Microsoft on its Surface tablets and stuff, where they can really assert strong control, to do this well. In a more distributed ecosystem, like regular PCs, this has gone wrong. In this specific case, the way that it went wrong is that American Megatrends, who are one of the big BIOS manufacturers, so they build BIOS or EFI firmware software, and then that gets licensed by motherboard vendors to implement in their products. AMI shipped some example key material
in their implementation of Secure Boot.
And the key says like,
for test use, do not ship or do not trust
or whatever it is right there
in the name of the certificate.
A bunch of manufacturers and vendors of motherboards
used that key material,
didn't replace it with their own, shipped it.
And that's not great.
Like that would not be ideal,
except that then it got worse, when somebody uploaded the private portion of that, the private key that corresponds to that certificate, to GitHub. Yay. And then, of course, because this whole ecosystem is quite distributed, there's no, like, actually doing revocation, or actually changing this, is very complicated, and no one bothered. And so now we have hundreds, like, I think 800, according to research from Binarly, 800 products from the likes of Dell, and another big-name manufacturer, Supermicro, on the server side, using certificates that are compromised in their boot process. And the net result of this is, if you land code exec on a system like this, and you want to embed yourself in the BIOS for long-term persistence, as nation states are rather fond of doing, then you can sign your own key material, update the BIOS, and you're there forever.
Yeah. Well, and increasingly, it's my impression
that advanced threat actors, make of that what you will,
are doing more and more to sort of like touch hardware
and peripherals as well.
They know how to really top to bottom own systems,
like not in the way that Edward Snowden disclosed.
They're doing other stuff now, right?
And obviously we've got very limited visibility into that,
but this is something that is a much bigger deal now than it used to be.
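If you want to check whether a given Linux box is sitting on one of those untrusted AMI test keys, a rough sketch is below: it reads the Secure Boot Platform Key variable out of efivarfs and looks for the "DO NOT TRUST"/"DO NOT SHIP" marker strings Binarly reported in the leaked test certificates. The GUID is the standard EFI global-variable GUID; treat a hit as a prompt to check your vendor's PKfail advisory rather than proof on its own.

```python
from pathlib import Path

# PK variable under efivarfs: variable name, then the EFI global variable GUID.
PK_VAR = Path("/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c")

# Marker strings reported in the leaked AMI test certificates.
MARKERS = [b"DO NOT TRUST", b"DO NOT SHIP"]

def check_platform_key() -> None:
    if not PK_VAR.exists():
        print("No PK variable found (legacy boot, or Secure Boot not provisioned).")
        return
    data = PK_VAR.read_bytes()   # 4-byte attribute header + EFI signature list
    hits = [m.decode() for m in MARKERS if m in data]
    if hits:
        print(f"Platform Key contains test-key markers {hits}; check your vendor's PKfail advisory.")
    else:
        print("No known test-key markers found in the Platform Key.")

if __name__ == "__main__":
    check_platform_key()
```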
And, you know, what's amazing about this is it's hard to imagine
how you would chart a path towards something more sensible, because of, as you pointed out when we first started speaking about this, the number of stakeholders involved in trying to make all of this work together. It's just too many people with too much margin pressure. And you just think, wow, is there a way that we can re-architect this to make it more sensible? You'd sort of have to start again, wouldn't you?
I mean, this is the problem with the open PC architecture, is implementing this kind of, you know, hardware-anchored trust root requires that you trust everybody involved in this ecosystem. And frankly, we don't, right? We've got hardware manufacturers, you know, all over the world. Even understanding how many bits of firmware, like, who do I trust? Who do I need to trust to assure the
integrity of my platform? Like who made the firmware on my network card? Who made the firmware on my
graphics card? Who wrote the graphics drivers for my NVIDIA graphics card? Like how many other
components are integrated into those? Like charting that, understanding all the microprocessors and
all of the bits of firmware in your system. And I remember when, I think, Google started down the road of trying to understand the provenance of all of the drivers and firmware in their systems, and, like, that's a big project. And doing it in an
open ecosystem with price pressure and all those sorts of things. It's just really, really hard. So unfortunately, you know, what Apple has done,
which is build a vertically integrated system,
and then, you know, if you want to use the, you know,
the fingerprint sensor on an Apple external keyboard,
it's got cryptographic key material that it shares across
with the host computer to ensure the integrity of the firmware
on that fingerprint reader. Like, I mean, that's kind of what it takes.
Yeah, and this is why Apple hardware is expensive.
Well, yeah, it's one of the reasons.
One of the reasons. It's also pretty.
That's another reason. But you just sort of think, you know, okay, is there a market opportunity here for someone to do sort of more higher-assurance hardware, right, where this is less likely to be an issue? And then you start thinking about, like, what that would cost in terms of R&D. You know, okay, you can buy in the processor from somewhere else, but, like, doing the R&D for, like, a whole motherboard and a BIOS, and then spinning up manufacturing, like, it's such a heavy lift, you know, and it's such a risk. Like, you'd have to spend so much money to spin something up like this, and then you've got no guarantee that it's going to materialise. So I think we're kind of stuck here for a while.
I mean, this is the value proposition of someone like Dell or Hewlett-Packard, right, HP, who are so big and sell to lots of big American organisations. And, you know, why you see them in American government environments is because they are the most trusted of these open-platform vendors.
And yet we are talking about Dell firmware and HP firmware.
I'm pretty sure HP was on the list.
So these are the people who are doing the things
that you suggested and they still have this problem.
And what's interesting is, I think Binarly, who did this research, I think they did this research in coordination, on behalf of Dell. So, in some respects, the process is working. We are getting there, presumably because there is a propensity for the sorts of things that consumers of hardware in this higher tier of vendors are worried about, you know, firmware and hardware attackers. It's probably what's pushing it. But yeah, clearly we're not there yet.
No, we're not. You know, that's years of work, right? Let's see how
that one goes.
Now let's talk about some Google stuff. Now, we've got another one from Dan Goodin. We're looking at how Chrome now, when you try to download, like, an encrypted, you know, password-protected zip, or, you know, some other encrypted file format, it'll now give you the option to enter the password and send that off to Google for scanning. You know, there's a bit of hand-wringing in this article about, like, you know, confidential corporate information, whereas Google
is saying, yeah, we rip it open, we scan it, we keep it for like, you know, a short amount of time
and then it's destroyed. Personally, I think this is like absolutely a net positive. And I mean,
we've already seen that like some of the large email security firms, I think Proofpoint does
this as well, where before they release a password protected file
to a recipient,
they'll ask you to enter the password
so that they can do some scanning on it.
Because otherwise it's just this completely opaque,
whatever box of malware
that you're delivering to a user.
So, you know, I think this is a good thing.
Are you with the hand wringers on this
or what do you make of it?
I think it's a net positive.
I don't think I have any argument there.
I think in some corporate environments, handing over key material to a third party has probably got some hoops you'd have to jump through.
But how often when you download a zip or something, is it coming from something where you kind of trust the origin, versus, like, just something you downloaded from the internet that's got a password on it for whatever reason, where you don't really feel bad about sharing that with Google? So there's plenty of times where, even if you were the hand-wringiest of persons, this would still be a net positive. And then in some environments, like, I imagine there'll be some guidelines, but those guidelines probably exist anyway, with, you know, what material you put on Dropbox or what material you give to other cloud vendors in other contexts. And when everything's being hoovered up and, you know, processed by AI anyway, you know, what's one zip password and zip contents compared to the data soup that is the modern internet anyway?
So, yeah, I mean, there is also a
download anyway button
currently for this implementation.
I am a bit skeptical though that,
you know, you sort of intimated
that there would be,
that users would make decisions around this.
Whereas like, this is a random one.
This is one that got sent to me by my colleague,
you know, and that they're going to be
in a good position to make sensible decisions there.
I also think, you know, quite often
the way that spear phishing
with these sorts of things work is they appear to be coming from a trusted source.
Like I might get one from you.
It's difficult, yeah.
For example, yeah.
So I think something like this like currently is useful for people like you and me, but less useful for the average user.
But, you know, got to start somewhere. And we've got another blog post out of Google too, which is looking at how they're doing app-bound encryption
for Chrome data like cookies and whatnot.
This is actually pretty cool what they're doing here.
This is like getting Windows to do app-bound encryption stores
so that if you spin up another process on that box
and try to access that material, you just can't.
Yeah, I think this is a smart move. They have a blog post up describing their implementation, but yeah, the net result is they have a driver, or a process, that runs in an elevated context, so SYSTEM, that uses the existing Windows data protection APIs to encrypt stuff. But to get the key material for that, instead of asking the operating system as the same user, because in Windows there isn't a boundary inside a user context, like, if you're running as that user you can ask the Windows DPAPI for the key material, in this case you have to ask Google's system-level process for it, and it gets it from SYSTEM's DPAPI storage, and it will only do that if Chrome is asking, you know, if the binary asking is at the path of the Chrome executable. So the net result of all of this is, if you're malware, or, you know, a dirty pen tester, running in the context of a user, you can't just help yourself to their cookies. And of course, hopefully you can't inject a debugger into the Chrome process, because OS security controls, et cetera.
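As a rough conceptual sketch of the pattern Adam is describing (not Chrome's actual code, and with the key store and path check as stand-ins), the privileged broker's job boils down to: hold the key under an identity the user can't impersonate, and only hand it back when the calling process's executable path matches the one it was bound to.

```python
import os
import psutil  # pip install psutil; used to resolve a caller's executable path

# Stand-ins for the real thing: Chrome's broker runs as SYSTEM and uses DPAPI,
# here we just keep the secret in the "privileged" process's memory.
BOUND_EXE_PATH = r"C:\Program Files\Google\Chrome\Application\chrome.exe"
_APP_BOUND_KEY = os.urandom(32)

def get_app_bound_key(caller_pid: int) -> bytes:
    """Return the key only if the requesting process is the bound executable."""
    exe = psutil.Process(caller_pid).exe()   # image path of the caller
    if os.path.normcase(exe) != os.path.normcase(BOUND_EXE_PATH):
        raise PermissionError(f"{exe} is not allowed to read the app-bound key")
    return _APP_BOUND_KEY

# Malware running as the same user, but from a different binary, gets refused:
# get_app_bound_key(malware_pid)  ->  PermissionError
```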
So this seems like a smart hardening
and brings it in line with what you would expect on Mac OS and on iOS.
So yeah, Microsoft, I imagine,
will probably implement this as a feature at some point later down the track
and Chrome can move to using that implementation.
But right now, it seems smart.
Good job.
Yeah, it's just good engineering, right? And a good use of everybody's time. And, you know,
with cookie theft being such an issue. And we already walked through like what North Korean actors had to do to try to silently install a malicious extension in Chrome and the amount of
engineering work they had to do just to do that. And yeah, and this is just another great example
of that. Now, we're going to look at this other story by Matt Bracken from CyberScoop, which looks at KnowBe4, which is, yeah, what do they do, they're like a cybersecurity awareness training company. They have admitted that they hired a fake IT worker from North Korea, who made it through all of their vetting process and everything. I had a couple of people ask me, is this the company that I was speaking about
when I spoke about another company experiencing this recently?
No, it's not.
This is an extremely widespread problem
and it's really good that KnowBe4 have actually come out
and publicly talked about this.
Yeah, because I guess other people who've had this happen
have been a little bit more reluctant to talk about it,
even though it sounds like in this case, KnowBe4, you know, their process has almost worked. Like, they were pretty close to catching this, and they did get it pretty quickly once they had hired this guy and shipped him a laptop, or at least shipped a, you know, a mule that was running VPN farms for North Korea out of their, you know, bedroom in Arkansas or whatever. So it's good that they got there at the end. But it's just, it's like, if you had told me 10 years ago on Risky Biz that this is what we would be talking about, like, North Koreans legitimately applying for jobs, and, I guess, doing reasonable work, by all accounts. Because, like, they must be competent enough to get the job. And these people must be actually, like, I wonder how good they are at their jobs, you know? Like, is there some process where you could hire these guys? Like, is there a net benefit to having North Korean contractors in your environment, like, if the quality of the work is good enough and you have controls around it?
It depends how you measure that net benefit.
It's probably not a net benefit,
given what they spend the money on.
Well, then that's the other half of this story.
So we've started to see some description
of this money being used for, you know,
funding nuclear weapons
and funding other North Korean government programs
and so on, which, yeah, okay,
maybe that part, like, society-wide, not a net benefit, but, you know,
unsolicited outsourcing,
maybe there's something to this, you know?
Well, there's work from home
and then there's work from Pyongyang, right?
Which is the latest post-COVID trend.
Now, look, speaking of North Korea,
it's an issue that we've encountered
a couple of times in the last couple of months, which is
there's more and more talk about the ransomware
activity that North Korea is
performing. It looks like at the moment
there's a couple of crews, or one in particular
that we're talking about this week,
that they've
been hitting hospitals and stuff. So they hit their
espionage targets and do some ransomware on the way
out. So it's like a double attack. But they're also doing pure revenue raising
attacks against the US healthcare sector. And it looks like this activity is designed to fund that
group's operations, right? So they're sort of like a self-funded espionage unit working for the North
Korean government. And this is just one of the ways they make money. Again, you know, my opinion on this hasn't changed.
If this starts escalating, it's going to be, you know, it's going to be bad.
Yeah, I mean, it's certainly we've seen so much impact in the health sector.
And, you know, the idea that the North Korean government has, you know, as you say, self-funded,
like revenue-neutral intelligence gathering gathering where they just have to go off
and pay for their own hosting, pay for their own, you know,
bills or whatever else they have to do to collect the intelligence data
and to do that by ransomware in hospitals.
And, you know, when I was reading this,
I was thinking about our conversation last week,
maybe there was one before that, where we were talking about, like,
whether or not there are circumstances
where it's appropriate to pay the ransom,
where banning ransom payments has some downsides as well.
But then on the other hand, I read this and I went, well,
like, is there ever a – I mean, they're sanctioned already,
so I guess you can't really pay North Koreans to start with.
Well, but that assumes that you know they're North Koreans, right?
Well, that's it, right?
And the people who are sanctioned start doing a better job
of obfuscating who they are.
Yes.
Yeah.
Yeah.
So, yeah, I don't know.
It's just this is such a thorny and nuanced kind of set of problems
because we do want our health stuff to still work,
and at the same time we don't want to fund North Korean spooks.
And, yeah, it's a mess. And it's interesting, though, that we've seen, like, the stories we've been talking about are backed by a National Cyber Security Centre advisory from GCHQ, talking about APT45, which is the North Korean crew in question, and it has a whole bunch of TTPs and IOCs and useful stuff like that. But just seeing the whole story, like, you know, laid out like that, end to end, with the funding part of it and the intelligence part of it and the ransomware victims and so on, yeah, it ain't great.
Now, look, I hinted at this earlier, but we're going to talk about ServiceNow.
ServiceNow has been doing, I guess, a little bit of marketing around the CrowdStrike outage.
So their CEO is like, there are sales opportunities after this thing because we helped people recover.
And okay, fair enough, right?
Like you're using ServiceNow, you're doing it right.
Like it's probably helpful in a crisis situation.
But it just seems a little tacky. And I'm well aware that yesterday I published an interview I did with CrowdStrike's competitors on this, but, you know, in my view, I don't think they were really ambulance chasing, they were just presenting a competitor's view. They weren't like, call us now and, you know, sign up for some free licensing for three months or whatever. That was not what they were doing. I don't know. What's funny here, though, is that, you know, they're spruiking themselves at the same time that, you know, earlier this month, the Assetnote guys published just some incredible research into ServiceNow, which I know you're a massive fan of. So why don't you walk us through it?
Yeah, so this was from Assetnote, who looked into a bug in a piece of ServiceNow's platform.
So ServiceNow has like a cloud component
and an on-premise component.
And you bridge those two together
with a third component called their mid-server.
And AssetNote had looked into this thing,
which is like 20 gig of Java, which is never a good sign.
And they basically found a way to go remote code exec, pre-auth remote code exec, against this thing,
which kind of intentionally is on the internet
and has access to, you know, through ServiceNow,
all sorts of privileged bits of your org.
So what you're saying is threat actors have the opportunity
to do something really funny.
Well, they do indeed. So this particular Assetnote bug was, like, they've written it up in a blog post, it's kind of three bugs chained together, and the research is great. Like, it's the sort of bug that I really loved finding when I'm looking at enterprise Java stuff, and I really feel for the amount of work that went into it.
Like the blog post is authored by someone else called Adam.
So hi to you, sir.
Great choice of name.
And his mental health, I feel, probably suffered trying to find this bug.
And in some respects, ServiceNow did a reasonable job.
There were a number of security controls they put in place that they had to circumvent or bypass or find ways to work around. So, like, a small biscuit for ServiceNow. But overall, it was a very funny juxtaposition, seeing, you know, like, if your ServiceNow gets wrecked from the internet, you are in such a bad place. And so seeing them out, you know, ambulance chasing after CrowdStrike, you know, within the same couple of weeks this bug dropped, just felt a little bit funny to me, and I enjoyed it.
Indeed. Before we wrap it up, I just wanted to mention that there's a call for papers
uh for a conference being run by the Pacific Northwest
National Laboratory in the United States. The conference is called the Cyber Supply Chain Risk
Management Conference. And just, you know, given that this is something a little bit different,
something a little bit novel, they asked me if I would mention their CFP and that's what I'm doing
now. But Adam, we're going to wrap it up there, mate. Thank you so much for joining me to talk
through the week's news. And indeed, this is our first pilot edition that we're pushing out onto YouTube. Hello, everyone. This is what we look like. Terribly sorry if it's disappointing.
That's right, we've got great faces for radio. So, yeah, exciting stuff. So thanks a lot, and we'll do it all again next week.
Yeah, thanks very much, Pat. I will talk
to you then.
Alright, well, it is time for this week's sponsor interview now. The other day, I
did an interview with David Cottingham, who's the
Chief Executive of Airlock Digital.
Regular listeners would know that I'm a huge fan
of Airlock. They make an allow listing
solution, which is also really
useful for hardening Windows boxes. You can crush lolbins. They're just launching browser extension allow listing as
well. It's a terrific product. And I've been talking to Dave a lot over the last couple of
weeks because there's been all of this discussion about security companies using kernel drivers,
and it's turned into such a big topic. And Airlock does use a
kernel driver to get their things done. It's only like 60 kilobytes. It's a very simple static
driver. But I thought it would be a good opportunity, seeing as they were scheduled as the
sponsor this week, to talk to them about like why they're in the kernel and what it looked like when
they were forced to do an API-based implementation for macOS and also talk about their Linux implementation, which is not kernel driver-based.
So yeah, a fantastic discussion with Dave Cottingham from Airlock Digital. And I'll drop
you in here where Dave explains really why Airlock's in the kernel to begin with. Enjoy.
I think the main thing about developing for APIs is you really lose the ability to be specific.
All right.
And that has an impact on the level of security you provide.
And also, you lose the ability to be as performant because the API has a lack of specificity.
That's always a difficult word to say.
Specificity.
Oh, nice.
That's good. Yeah, so because you don't have that granularity, I'm going to say, you end up getting flooded with events, because the temptation is to go, well, I need to see everything, so I'm going to subscribe to all of the API endpoints, and then you'll end up processing way too much, which is just not possible. So, you know, especially, I guess, our experience on Linux and Mac is the biggest challenge is getting what you need, filtering appropriately, and only processing the bare minimum, because, you know, it's a visibility plus performance constraint that you have with the APIs.
Yeah. Now, I know that, you know, you and your co-founder, Daniel Schell,
have spent time looking at re-implementing your software
and before all of this CrowdStrike stuff, right?
Because it's an existential fear for anyone who's operating security software
that lives in the kernel, that one day Microsoft's just going to say,
we're kicking you all out, you know, here's an API or just use our existing APIs.
And indeed, Microsoft has, you know, published APIs, has engineered APIs, which
allow you, would allow security companies to re-implement most of what they need. But I guess
I'm curious what the shortcomings are. I mean, you've said, what you've told me so far is it
would be hard, but you haven't said that it would be impossible, right? So, you know, help me understand what's difficult and what's impossible when you're
dealing with Microsoft's current APIs here. So I guess it's, well, Microsoft doesn't really
have an API for endpoint security vendors in the kernel today. There are little APIs around,
like let's say a user mode example is AMSI
or the anti-malware scan interface for PowerShell.
And let's say we want to inspect bad things
that PowerShell is doing.
You subscribe to that API
and then what happens is PowerShell
has a defined list of sort of criteria
internally to its application
about what it sends to that scan interface. And then it sends you back part of whatever its latest buffer is that it processed, and it does some really cool things, like it de-obfuscates, if there's Base64-encoded code it decodes it, and it gives you, it says, this is a buffer that I think is malicious. And what you're left with as a security vendor is saying, okay, well, do I think that buffer is bad or not?
And you have a choice to allow it or deny it.
And then you send it back to PowerShell and then it will either, the AMSI interface will either allow or prevent that code from executing.
So that's sort of the level of control you have.
And you're like, great, I'm adding a lot of value here, because, even though it's great, you're really not adding a lot of value as a security vendor on top of what, you know, anyone else can do. So it's really about the limitations, it's about capability, it's about performance, as you mentioned earlier.
Performance, yeah.
It's about, you know, for so many EDR vendors, it's all about detecting the latest stuff.
You know, it's about being technically the best that you can be.
And APIs really stop you from doing that
because you get to a bar very, very quickly and then it stops.
But would it be possible, I guess,
is the question, for Microsoft to implement something decent, right? And you've told me that the macOS one, because your software obviously uses the Apple Endpoint Security API, and based on what you've told me, it's great. It's fine. It does what it needs to do, although there are some things you can't do. So let me ask you a question in two parts, which is, like, you know, what could Microsoft do and offer which would be acceptable? And, you know, based on your experience with macOS,
what would you expect the limitations to be? Yeah. So I think based on our experience with
Mac OS, we've definitely hit for our product type, which is generally a file access control
solution. If you think about it,
we're about allowing or denying access to particular files. We've really hit a limit there
and we've had to make some concessions. So for example, like the AMSI example,
Apple, you sort of say to the ES framework, which is the API framework in the kernel in macOS, tell me about all files that are executed. And it goes, okay, I'm going to tell
you about every file execution. That's great. Really, you know, you can, you can process that.
Um, now, that's a lot of messages already to process. And that resulted initially in really high CPU for us, so we had to do something called muting, which is, okay, well, we have to get rid of this flood of API messages, because macOS and Linux are sort of file-based operating systems, so we've got to calm the API down a little bit. So there's flags in macOS that say, don't tell me about any execution of Apple native system binaries. Great, you just drop out 80% of the chunk there.
Sure, but a big part of the hardening work you do is about, like, neutering all the lolbins, right? So I guess that's...
Yeah, exactly right. So that's the first concession. And then you look at that definition of, tell me when a file is executed. So Apple, Linux, you know, Windows, will have sort of a definition,
I guess, of what is an execution. And that's really any file that executes natively on the
operating system without a third-party interpreter. But let's say a script execution. Well, the
framework doesn't consider that an execution. So what's your next best option if I want to
catch script file types? Well, I need to subscribe to another endpoint, which is tell me about every single file that's
opened on the system.
And now, as you can imagine, that is orders of magnitude, even more of a flood.
And then you end up filtering on that.
And then you end up saying, okay, well, I want you to tell me about all file opens except for these type of things. And you have limited options to do that, which is like, okay, I want to mute based on file
paths or X, Y, and Z.
And then you're like, okay, I've seen that a script file has been opened.
What do I do now?
I see the command line of that.
And then you sort of start to chain APIs together in order to achieve what you want. This is why vendors are in the kernel
because you can...
You can do what you want.
Change things together.
Yeah.
And you can say,
tell me when I need to process something.
Look at me.
I am the OS now.
Yeah, exactly.
And I think that...
Like, I did a talk at AusCERT this year in Brisbane,
and I talked a little bit about the security lines.
Gold Coast, but anyway.
Gold Coast.
Oh, sorry.
It's the same thing.
Queensland.
They're pretty close.
And essentially there was, I showed an absurd attack chain recently
where an attacker did one thing into the next thing.
You know, you drop a script to run another thing.
And that's a really good news story,
is the fact that detection software and EDR
has gotten so good nowadays
that we've pushed attackers
to such an absurd level of complexity
that we're talking about these edge case techniques.
That's awesome.
Well, and often they rely on misconfiguration
and no one watching, right?
Yeah, true.
I mean, people love to crap on EDR, but when it's configured correctly and it's being observed, it's very good.
Exactly right. And, but you now need to engineer for those types of really low-level detections, you know, via an abstracted API, which inherently has limitations.
One example: we're not just talking about endpoint security here.
This is backup software, encryption software, DLP, virtualization.
And you look at the state of virtualization on macOS today,
and the API is limited, right?
So there's a software called Parallels for Mac,
which allows you to virtualize things.
And they actually say that the virtualization framework
is in its early stages.
And as a result, it does not enable many features
that exist on the older Mac processors.
You can't do snapshots yet.
You can only recently resize disks in guest VMs.
Yeah.
And so there's this slow march to capability,
but Parallels as a company is essentially constrained by that.
I think in order to develop an API,
back to the question for endpoint security,
to do what EDR does today, you would have to expose...
I mean, excuse me, this is where I keep landing as well:
you kind of need to expose APIs
that have the power of a kernel driver.
And there's going to be some risks involved there,
you would think, right?
Exactly.
You're writing a kernel driver for an API.
It's just everybody's kernel driver at this point.
And it's got to be mega flexible.
Yes.
And then the problem you have is,
like, the reason why you want to be in the kernel
is so you can detect everything above it.
Attackers don't land in the kernel
when they get in a box, right?
Like, you've got that sort of, you know,
you're on top of the castle and you've got attackers running across a bridge.
You've got a chance to see everything really well and get them before they get to you. It's sort of like
that fortress mentality. You've got to have that level of separation. I think in user mode,
looking down through an API, it is far easier for attackers to say, okay, well, I'm seeing your user
mode security program that's subscribed to this API. How about I tamper with the API? How about I mess with you in user mode? You've just got a different challenge there when you're trying to defend against that.
You do, but, you know,
you don't have BSODs.
Yes. And I guess that, look, I've got to recognize my bias here as a vendor that uses kernel drivers, of course. But yeah, it's true, you know, and I think that's a legitimate discussion to be had. But, like, the ES framework is brutal to security vendors. It's like, if you stop working, or you're not performant enough, or you don't respond in time to...
This is the macOS API?
Yeah, the macOS one, yes. They just kick you, they just destroy you. There you go, you're deregistered, now go away. Because they're like, user experience is the number one thing. And they also, yeah, this is our experience, I haven't seen any documentation that suggests this, but
because you're in user mode, it also, well, one, you can see all the CPU usage that you're using, but they attribute your usage of the APIs to you, as if you were doing something bad. So on macOS, if you pop up, if you make the APIs go...
Yeah, they kill you.
Yeah, they go, this security vendor is using a lot of CPU over here, you know. This is the problem. Whereas if you're in the kernel, as a user you look at Task Manager and you go, you know, it's all good. Like, we've had so many people that have reached out to us over the years and gone, hey, AppLocker uses no CPU, you know, WDAC uses none, it's zero. And I'm like, no, there is a cost, but you just can't measure it or observe it, because the work is actually done at the filter level. But kernel code is, you know, really performant, so yes, it is less, but, you know, it's also invisible. I guess this is why people don't really think about drivers. Why CrowdStrike appears to be extremely efficient is because it's chugging away in the kernel, right? So that's another thing to keep in mind.
And I would attribute that design to being, you know,
the reason why they are the leader in the space.
Yeah, yeah.
Yeah.
Yeah, it's a big reason for sure.
So another thing I wanted to ask you about, though, is, you know,
we've been talking a lot about what it would look like
if Microsoft were to implement a full sort of EDR capability,
you know, API. But, you know, being an allow listing company, you don't really need all of
that. And I imagine it would be quite straightforward for Microsoft to engineer
an API that you could use to reimplement your product that's not like a full EDR thing.
You know, and I think your co-founder is, you know, your CTO, Daniel,
he's already played around with trying to use
like native functionality,
like AppLocker and stuff
to try to reimplement your product.
And it's a bit rough around the edges,
but it would be very easy for Microsoft
to give you the sort of API you would need,
wouldn't it?
Much easier than trying to do an API for all of EDR.
Yeah, that's correct.
I guess we don't need to sit around
and wait for the Microsoft named pipes API, right?
Yeah.
You know, and you could argue
that something like that would never happen.
But, you know, we're playing a different game.
We're a different product.
And as I said, we're really about file access.
So largely where we are in kernel space is really just,
hey, tell us when a file loads with specific flags
and then we'll do an allow or block operation.
And we sort of have that luxury.
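On Windows, that "tell us when a file is being opened for execute, then allow or block" model is what a file system minifilter driver gives you, and it's also what "done at the filter level" refers to a little later in this conversation. Here's a heavily trimmed, illustrative minifilter skeleton in C. It is not Airlock's driver, the allowlist lookup is omitted, and real products typically watch more than just the create path; it simply shows the shape of registering a pre-create callback and (commented out) how an operation would be denied.

// allow_sketch.c -- skeletal file system minifilter (illustrative only).
// Builds against the Windows Driver Kit; needs an INF and a signed package
// to actually load, which is omitted here.
#include <fltKernel.h>

static PFLT_FILTER gFilterHandle;

static FLT_PREOP_CALLBACK_STATUS FLTAPI
PreCreate(PFLT_CALLBACK_DATA Data, PCFLT_RELATED_OBJECTS FltObjects,
          PVOID *CompletionContext)
{
    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(CompletionContext);

    // Only interested in opens that request execute access.
    ACCESS_MASK desired =
        Data->Iopb->Parameters.Create.SecurityContext->DesiredAccess;
    if (desired & FILE_EXECUTE) {
        // Allowlist decision would go here. To block the execution:
        // Data->IoStatus.Status = STATUS_ACCESS_DENIED;
        // Data->IoStatus.Information = 0;
        // return FLT_PREOP_COMPLETE;
    }
    return FLT_PREOP_SUCCESS_NO_CALLBACK;  // allow, no post-op callback
}

static NTSTATUS FLTAPI
Unload(FLT_FILTER_UNLOAD_FLAGS Flags)
{
    UNREFERENCED_PARAMETER(Flags);
    FltUnregisterFilter(gFilterHandle);
    return STATUS_SUCCESS;
}

static const FLT_OPERATION_REGISTRATION Callbacks[] = {
    { IRP_MJ_CREATE, 0, PreCreate, NULL },
    { IRP_MJ_OPERATION_END }
};

static const FLT_REGISTRATION FilterRegistration = {
    sizeof(FLT_REGISTRATION),     // Size
    FLT_REGISTRATION_VERSION,     // Version
    0,                            // Flags
    NULL,                         // Context registration
    Callbacks,                    // Operation callbacks
    Unload,                       // Unload callback
    // remaining optional callbacks default to NULL
};

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    NTSTATUS status = FltRegisterFilter(DriverObject, &FilterRegistration,
                                        &gFilterHandle);
    if (!NT_SUCCESS(status)) return status;
    status = FltStartFiltering(gFilterHandle);
    if (!NT_SUCCESS(status)) FltUnregisterFilter(gFilterHandle);
    return status;
}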
And we actually, on Linux, for example,
we started out in the early days with kernel drivers.
And because we're a smaller team, the prospect of continuing with kernel drivers and recompiling for all the different Linux kernels that were out there, all the time (because basically on Linux you have to build your driver against a specific kernel version for it to be safe), that in itself is a huge amount of work. So we sort of went, can we make this work with the API in Linux, which is called fanotify, or file access notify? And we can. And depending on the kernel version you're on, there's different performance. As they've added additional functionality to fanotify, we check to see what functionality is available on the kernel when we register, and then we operate in different modes. On older Linux kernels there's a far higher CPU overhead on busy systems than there is on newer ones, just because of that capability constraint. So, look, yes is the answer, we could. But again, we're playing a simplistic game compared to a lot of these other vendors.
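For Linux, here's a minimal sketch of the fanotify pattern Dave describes: request permission events for executions where the kernel supports it (FAN_OPEN_EXEC_PERM, Linux 5.0+), fall back to the far noisier open-permission events on older kernels, and answer allow or deny from user space. Again, this is illustrative only; the mount point and the allow-everything policy are placeholders, not Airlock's logic.

// fanotify_sketch.c -- user-space allow/deny on file execution (sketch).
// Build: gcc fanotify_sketch.c -o fanotify_sketch   (run as root)
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/fanotify.h>

int main(void) {
    // FAN_CLASS_CONTENT lets us receive *_PERM events and veto them.
    int fan = fanotify_init(FAN_CLASS_CONTENT, O_RDONLY | O_LARGEFILE);
    if (fan < 0) { perror("fanotify_init"); return 1; }

    // Prefer execute-open permission events (kernel 5.0+); older kernels
    // reject the flag, so fall back to all opens -- hence the different
    // operating modes and the higher CPU cost on old kernels.
    uint64_t mask = FAN_OPEN_EXEC_PERM;
    if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT, mask,
                      AT_FDCWD, "/") < 0) {
        mask = FAN_OPEN_PERM;
        if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT, mask,
                          AT_FDCWD, "/") < 0) {
            perror("fanotify_mark");
            return 1;
        }
    }

    char buf[4096];
    for (;;) {
        ssize_t len = read(fan, buf, sizeof(buf));
        if (len <= 0) continue;
        struct fanotify_event_metadata *ev = (void *)buf;
        while (FAN_EVENT_OK(ev, len)) {
            if (ev->mask & (FAN_OPEN_EXEC_PERM | FAN_OPEN_PERM)) {
                // Resolve the path of the file being opened via /proc.
                char link[64], path[4096];
                snprintf(link, sizeof(link), "/proc/self/fd/%d", ev->fd);
                ssize_t n = readlink(link, path, sizeof(path) - 1);
                path[n > 0 ? n : 0] = '\0';

                // Allowlist decision would go here; this sketch allows all.
                struct fanotify_response resp = {
                    .fd = ev->fd, .response = FAN_ALLOW };
                write(fan, &resp, sizeof(resp));
                printf("open-for-exec: %s\n", path);
            }
            if (ev->fd >= 0) close(ev->fd);  // always release the event's fd
            ev = FAN_EVENT_NEXT(ev, len);
        }
    }
}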
Yeah. So it sounds like, from your point of view, kernel driver is amazing. If you had your way,
you'd keep it. Second best choice would be something like the Linux implementation and,
oh God, don't make us deal with something like the Apple API again. Seems to be what you're saying.
Yeah, look, the Apple API is more full-featured,
but I think it's just, it's the Apple mentality of,
do you really need this anyway?
That, you know, they sort of, yeah, you know, yeah.
All right, Dave Cottingham,
thank you so much for joining me for that conversation.
That was all really interesting stuff.
I'll be curious to see what they announce for Windows 12, right? Because I imagine they're going to announce some sort of API
that won't be quite enough of what you need
and you're going to have to ring Microsoft and beg them
and can you please implement this or implement that?
Or you'll have to cobble something together,
probably with AppLocker, I'm guessing,
which is going to be a big part of the solution there, right?
Yeah, I think we're in very interesting security times
from an ecosystem point of view.
But I guess the point is, though, it'll be okay, right?
The industry finds a way, right?
Yes, that's right.
That's right.
All right, mate, great to chat to you.
Look forward to doing it again.
Cheers, Dave.
Thanks, Patrick.
That was Dave Cottingham there from Airlock Digital.
Big thanks to him for that.
And that is it for this week's show.
I do hope you enjoyed it.
Until next week, I've been Patrick Gray.
Thanks for listening.