Risky Business - Risky Business #780 -- ASD torched Zservers data while admins were drunk
Episode Date: February 19, 2025On this week’s show Patrick Gray and Adam Boileau discuss the week’s cybersecurity news, including: Australian spooks scrubbed Medibank data off Zservers bulletp...roof hosting Why device code phishing is the latest trick in confusing poor users about cloud authentication Cloudflare gets blocked in Spain, but only on weekends and because of… football? Palo Alto has yet another dumb bug Adam gushes about Qualys’ latest OpenSSH vulns Enterprise browser maker Island is this week’s sponsor and Chief Customer Office Braden Rogers joins the show to talk about how the adoption of AI everywhere is causing headaches. This episode is also available on Youtube. Show notes Five Russians went out drinking. When they got back, Australia had struck Dutch police say they took down 127 servers used by sanctioned hosting service | The Record from Recorded Future News Further cyber sanctions in response to Medibank Private cyberattack | Defence Ministers What is device code phishing, and why are Russian spies so successful at it? - Ars Technica Anyone Can Push Updates to the DOGE.gov Website Piracy Crisis: Cloudflare Says LaLiga Knew Dangers, Blocked IP Address Anyway (Update) * TorrentFreak Palo Alto Networks warns firewall vulnerability is under active exploitation | Cybersecurity Dive Qualys TRU Discovers Two Vulnerabilities in OpenSSH: CVE-2025-26465 & CVE-2025-26466 | Qualys Security Blog China’s Salt Typhoon hackers targeting Cisco devices used by telcos, universities | The Record from Recorded Future News RedMike Exploits Unpatched Cisco Devices in Global Telecommunications Campaign A Hacker Group Within Russia’s Notorious Sandworm Unit Is Breaching Western Networks | WIRED How Phished Data Turns into Apple & Google Wallets – Krebs on Security New hack uses prompt injection to corrupt Gemini’s long-term memory Arizona woman pleads guilty to running laptop farm for N. Korean IT workers, faces 9-year sentence | The Record from Recorded Future News US reportedly releases Russian cybercrime figure Alexander Vinnik in prisoner swap | The Record from Recorded Future News EXCLUSIVE: A Russia-linked Telegram network is inciting terrorism and is behind hate crimes in the UK – HOPE not hate Remembering David Jorm - fundraising for Mental Health research
Transcript
Discussion (0)
Hey everyone and welcome to another edition of Risky Business. My name is Patrick Gray.
We're not playing our intro music this week and that is to mark the passing of a very popular and
lovely man in Australian cyber security, Mr David Jaum, and we'll be talking a little bit about David
towards the end of the news. This week's show is brought to you by Ireland. They of course make an
enterprise browser and Brayden Rogers who works for Ireland will be along a little bit later on to talk about the challenge of trying
to figure out where all your data is going when it comes to these AI
services right so you got your chat GPT sure you can block that but what about
all of the integrated like AI agents that are springing up everywhere it's
an interesting conversation and that is coming up after the news with Adam Bailo, which
starts now.
And Adam, we're going to start with a report from the Sydney
Morning Herald by Mr. Andrew Probin, which is heavy on color,
kind of light on details and timeline.
But it talks about how ASD apparently
did a bit of a search and destroy on some data,
some Medibank data being held
by Z servers now, or Z servers.
This is the Russian bulletproof hosting org that was taken down last week, sanctioned.
Looks like there was some, the RMRF shark may have been released and some servers seized
in Amsterdam and whatnot and sanctions imposed against them.
But this piece, as I say, it's light on details,
but what details there are in there
are actually kind of intriguing
and give us a little bit of insight
into what Australia's Signals Intelligence Agency
has been doing to target criminals.
Yeah, it's an interesting kind of counter
or kind of end to this particular story
because when the original Medibank hack went down,
we saw
and we on the show kind of encouraged the response by the Australian government
because a bunch of sensitive medical data belonging to a lot of Australians
something like what 10 million Australians something like that were
taken by ransomware guys and they had tracked it back to this guy Alexander
Irmakov they continued to pull that thread and now it turns out he was hosting a bunch of data
in this organization's Z servers that is a bulletproof hosting product on cloud servers
for crims and it looks like the Australian spooks figured out who was behind it or you
know broke into the organization behind it, looked at the various people involved and
that led to some of the sanctions and things that we have seen or at least we're not quite
quite clear how all those pieces fit together but yes the RMRF shark
allegedly got rid of about half a terabyte of Medicare data that was
stored on there and presumably they were pretty confident that that was the like
the place where Ermakov had stored it as opposed
to just one copy of it.
And yeah, I guess they had a few terabytes of, or many terabytes of other people's crime
related things stored there and bad times all around for the guys that run it.
It's also not clear when this action occurred, whether this was in the lead up to sanctions
or in the wake of Medibank.
So as I say, it's like heavy on color light on detail, but the color is good
Which is that apparently ASD waited until these guys were
Like heading out they were based in Siberia apparently and heading out for like a big party where they were gonna all get smashed on
vodka and waited till they were out
Doing that and then just like you, went a bit wild on their
environment, which is the way you want to do it. But look, there's some other details in here that
I found fascinating. One of them is that when they were investigating like Ermakov for his role
in Medibank, you know, they had linguists and psychologists building profiles on these guys,
which I think this is interesting because I don't know if you remember, but when Australia first announced sort of sanctions against Ermokov over his role in the Medibank attack,
like a bunch of brainiacs in the CTI space were saying that they got the wrong guy.
And you read through this and you're like, no, they got the right guy. And I remember one comment
I got from a Govee on this is, we know Mr Ermicov very well, which I guess makes sense when you've got a team of very
skilled psychologists actually profiling the guy.
Yeah, exactly.
That's kind of what you expect from an agency like this.
Like they're going to do a serious job.
They have serious tasking.
They have serious resources.
You expect a good quality, you know, well-resourced, thorough job.
And you kind of get the impression
that that's what they did here.
Yeah.
I mean, I was talking to a CTI pal actually the other day
about all of this.
And he pointed out that there had been two data breaches
at z servers or z servers last year.
And one of them, some data got published
that allowed them to actually figure out the IRL identity
of some of the admins.
And he wonders, like, was that ASD who did that?
And he doesn't know.
I mean, these bulletproof hosts have form hacking each other.
So it could have just been competition, but it could have been the spooks.
And we don't know.
And that's great.
That's really good.
We love this sort of uncertainty, right?
Yeah, exactly.
It really goes to the roots of how that crime ecosystem
has to work, right?
Which is there is a degree of trust
and a degree of reputation.
And to operate in that world, you have to build,
build trust, build reputation.
And when that's undermined, it introduces cost,
which is, that's what we're all about. Well, and it's undermined, it introduces cost, which is, you know, that's what we're all about.
Well, and it's an environment,
it's an ecosystem that operates based on rules,
you know, many of them unwritten,
and what are the rules now?
And that's not clear.
Yeah, exactly.
So, yeah, that's good.
Good works, books.
Yeah, exactly.
And, you know, this is the stuff
that we definitely like to see.
So we've dropped a link into that one, into the show notes.
And look, staying with the same story and Joe Walminski over at The Record has a write
up about the Dutch police's involvement in this.
This wasn't just an Australia thing when it comes to the sanctions and whatnot, but they
took over something like 127 servers that were in the Netherlands, I think.
And it's a bad time to be
like one of those guys, basically. Yeah, yeah, exactly. And also, if you were running infrastructure
on top of them, like if you were running your cybercrime, you know, you're hacking using their
stuff, then you've got to expect that all those disks are going to be sifted through and threads
pulled and yeah, you're going to have a bad time. Yeah, and I've linked through to the Australian government
like defense website where they link these sanctions
against Z servers specifically to the Medibank private hack.
So yeah, very interesting, nice to see it.
You'll love to see it as we say.
We do.
Now let's have a look at device code phishing.
Now, apparently this isn't new and that makes sense
but it's become
a technique that's very popular with Russian APT crews at the moment. And I think you know everyone
listening to this at some point would have used a device code. It's when you might log into a
streaming service or something and that session's expired on your TV and it says you know just pull
up your browser and enter this device code and that will authorize the device. Obviously this is fishable and we got a
great write-up here from Dan Gooden over at ours about how the Russians are using
this to access M365 accounts and I don't know why you would need device-based
authentication into an M365 account but some product manager at Microsoft made
this happen,
obviously, so you can get your email on your TV or something. But walk us through
this one, Adam, because I found this was a great write-up, actually, and
just a really interesting, yeah, just a really interesting phishing technique
that I think obviously is gonna work in a lot of places. Yeah, it is. It's a
great write-up. This technique involves, so if you're trying to log in a lot of places. Yeah, it is. It's a great write up. This technique involves, so if you're
trying to log into a cloud service,
normally these days you're going to be presented with multi-factor
authentication flows, maybe with handoff
to third party auth providers in the case of federated
authentication.
And that whole auth process, because it's
so flexible and configurable and there's so many ways of doing it,
like the web browser does it for you.
But if you are on a device that doesn't have a full
featured modern web browser, then you're kind of in trouble.
And so the OAuth standard has some kind of specifications
for how to deal with input constrained devices,
which is what they call things like TVs and printers
and your car and stuff that either doesn't have
proper input devices or doesn't have USB ports
you can plug a YubiKey into or whatever else.
And kind of go through the auth process on two devices.
So there's a device that you're trying to authorize,
your printer, your TV, whatever,
and then you use a real computer with a real browser
to do the normal auth flow and then authorize.
And there is like a shared code that you use
to kind of bind those two things together.
And this phishing process is designed to kind of capture
that code by exploiting the confusion
about what you're authenticating to.
So one of the examples Dan has some screenshots of
is someone saying, hey, you know,
we'd like you to speak at our conference.
We need to arrange some details.
Let's get together on a video call and blah, blah, blah,
blah.
And then they send you an invite that comes from Microsoft
to trigger that auth flow.
And so then the regular person,
like if you don't realize that you're going, if you're not familiar with that auth flow and so then the regular person like if you don't realize that you're all you know you're
going if you're not familiar with that auth flow you might not consider it to be out of place
and then you provide the code to the attacker in some you know whatever context they're fishing
you about and then they use that to get an access token long term access token you know for onwards
connection to your 365 world which Which, to answer your question,
if it's a printer, it makes sense
that you might want to print documents from the cloud.
It needs to be able to do those kinds of things.
So, you know, there are reasons why it exists,
but it's, you know, in the end,
phishing's gotta work.
That's the main thing, and this is a technique that works.
And clearly we've made the rest of the oil flow so difficult and complicated that they
reduced to this, but hey, it gets the job done and honestly, you know, I kind of got
to hand it to them.
Yeah, I mean, I, that was my take here as well, which is like, wow, this is really cool.
Exactly.
But you know, it ties back to something that we've been talking about more and more, which
is just about how modern authentication flows are just confusing.
Yes, absolutely.
Because there's so many parties involved, it's so modular, and things like when you
go through one of those OAuth grants or a flow like this, it will usually tell you what
you're authenticating to.
But there are always stupid generic names like Microsoft Trust Center or things like that.
We really, and often it's like a green tick for good luck, why not, or a padlock.
And so you really have no, like as an end user, what basis do you have to make a good
decision?
And like even people like you and I, who presumably are experienced at this kind of stuff, like
signing into Google's cloud stuff and using their various Google cloud things and handling all
the auth from that and understanding what's an app and what's a built-in function, it's
honestly confusing.
Yeah.
And I don't know how anyone has any hope to be able to auth safely.
Well, that's a little bit nihilistic, but I guess that's what our listeners have come
to expect from us.
Yeah, I mean, we, you know, I compare it to like back in the days when we were telling people to look at the padlocks in their browser bars and check TLS certificates.
Well, I mean, we never told them that, but I know what you mean.
Well, we the industry, not you and I personally. expecting users to make informed decisions about trusting CAs and certificates was unreasonable,
and expecting people to be able to complete these kinds of multi-party federated multi-device
auth flows safely, also unreasonable. And you just think, what do you tell one of your mates,
or a CISO who's got to deal with this? What's the control?
Yeah, and I was thinking about what's this going to look like in
the browser, like in the browser that's being attacked here?
Because you are ultimately, you're authing to a legit service for a legit thing.
You just don't realize that what you're authing to is being misused for something else.
Like you're authing to the real Microsoft.
There's no phishing, you know, there's no impersonation of the site.
It's all in your head.
Yeah.
And like that's not a thing that technical controls are great at.
I mean, again, though, I don't understand.
I mean, I guess like, why are you doing full auth to an M365 account with a device code?
Like, what?
I mean, is that for video conferred devices and why do they need access to everything?
I don't know. It just seems like quite often we just roll features without thinking about it,
but I guess that's nothing new, right?
And then underlying it is not one service, right?
It's some giant graph API with a thousand endpoints
to get assembled by apps,
client side into whatever functionality.
So you're like a video conferencing app,
it's not a video conferencing app,
it's a collection of 1700 raccoons and a trench code,
and every one of those is an auth graph API service
or blah, blah, blah.
You know, it's complicated.
And there's reasons why it's complicated,
but that does not make it okay.
No, and look, we're gonna move on now to some news
that could come from a simpler time.
I mean, this stuff, this OAuth extension
for device code phishing, you and me,
we've been around a while, we're old, it's confusing. It's stressful. It's anxiety-inducing.
Let's talk about something that's a little bit more old school, which is the doge.gov website.
We're, look, you know, it's not a good sign, right?
So why don't we just walk through exactly what's happened here?
We're relying on the write-up from Jason Cobler over at 404 media
so does has a website does dot gov and
It was it looks like like much of those infrastructure set up in a hurry and
Somebody was able to kind of post messages on this site
It's meant to collect like, you know stats and social media posts from X that doge posts
like, you know, stats and social media posts from ex that Doge posts. A little simple website, but yeah, it turns out there was some kind of like underlying data store that had no auth or something.
And yeah, you could just like post your own content onto their website. Great. Good job.
Yeah, it was not difficult to figure this out, I guess, is the point. And look, from the very start of this, we've just said the concern
for us as sort of cyber security people is around data governance. You know what I mean? And if this
is sort of happening on their website, you just sort of wonder what's happening in the background.
And I think my social media post on this was, this isn't funny. I'm a serious commentator. This is not
funny. You know, basically is where I am.
Oh, and speaking of social media too, we are now on LinkedIn.
So there you go. You can find us. If you search for risky business media,
we're going to be doing a fair bit on LinkedIn.
But yeah, Doge website got defaced essentially.
Now, this is my favorite story of the week by far. I was expecting, I've been
expecting Cloudflare to have issues for years, mostly over the fact that they platform a lot
of hate speech and cybercrime stuff, but it's not that that got him in trouble in Spain. Adam,
walk us through this one. Oh dear, oh dear. So, soccer is quite popular, especially in Europe and in Spain.
You mean football, but anyway.
Football, yes, whatever, whatever, whatever you want to call it. I call it soccer.
And there are many, many sites on the internet which offer pirated versions of commercial streams of football, soccer games.
And some of those are hosted behind Cloudflare
or accessed through Cloudflare,
or you get the apps to get them through Cloudflare
or whatever else.
The Spanish Football League went through the courts
to demand that Cloudflare block access to this stuff.
And the court said, yes, you can get them to block it.
And so the net result of all of this is that on the weekends in Spain,
when football matches are going on, the local ISPs are being forced to block,
by IP address, bits of cloud players infrastructure, so as to stop access to these football games.
Which of course means that on the weekend, if you want to do commit some code to GitHub or browse
Reddit or you know something else that happens to GitHub or browse Reddit or something else
that happens to go through that particular bit of Cloudflare,
well tough.
Yeah, so you can't do your code commit on a weekend
because of football piracy.
It's amazing.
I don't know if they actually directly sued Cloudflare
or if they sued local ISPs and demanded that they block them.
But you'd imagine that if the order is from the court,
you have to block this stream and it's hosted on Cloudflare.
The only thing you can really do is to nuke the CDN, right?
Yeah, I mean, no one comes out of this looking great.
And I think the local ISPs have basically said,
we're just doing what the court told us to do.
And Cloudflare has said a great many things.
Net result is, you know.
Freedom of speech.
Freedom of speech.
I think it's always freedom of speech with them, right?
Whether it's Nazis or pirated.
What have they actually said about this?
I didn't get to that bit.
Basically they said that Cloudflare has repeatedly warned
about the consequences of IP blocking
that fundamentally ignores the way the internet works
Is their argument. So that's the way the internet works is that they do piracy and that's okay
Yeah, so they can I mean I think they you know, it's sort of safety in numbers
If you launder, you know useful services on the same IATP addresses criminal services then
But that works until it doesn't and that's what I've been saying for years, you know,
is eventually there's gonna be problems.
I just didn't expect it to be football stuff.
But, you know, and again, they're using this line.
I see the part now in this write-up,
while Cloudflare cannot remove content from the internet
that it does not host, blah, blah, blah, blah, blah.
So they always stick with this line of like,
we're a CDN, we don't host,
but it is perfectly within their capability to, you know, kick people off their CDN. So
it's just a stupid semantic argument in my view.
Indeed, indeed. Yeah. Some stuff has to come home to roost with cloud, eventually, right?
But yeah, Spanish soccer, probably not it.
Well who knows? Maybe this is the first domino my friend let's see if the wicked punished
speaking of the wicked Palo Alto networks has another firewall vuln
under active exploitation apparently you can use it with a previous vuln and you
know just just own panos devices I mean it's like we see one of these every week at this point.
It's comedy gold.
Yeah, this one is comedy.
This was actually also a research out of Asset Note,
who were looking into the patch from a previous Palo Alto bug,
went and pulled the thread, and it turns out,
it's the same kind of request smuggling
when you've got 17 different HTTP proxies involved in parsing a request
and different ones implement different rules,
then you can kind of thread the needles.
I think in this case with like URI encoded dot dot slash
or something like it's IIS from the early 2000s.
So that's embarrassing for your security appliances.
But yeah, the result is if you left your Palo Alto
web interface on the internet,
you're going to get hacked again, presumably.
And you have to pay for incident response again, presumably.
And yeah, good job with your choice of security appliance.
And congratulations too to the AssetNote team who the company was acquired actually by a
company based out of England.
So well done to them.
But yeah, just, oh my god. But look, perhaps a more interesting vuln
that you don't see very often.
And as soon as I saw that this was a Qoloss one,
I knew there would be gushing.
So I'm getting ready for the gushing.
You're going to let me gush now.
But yeah, the team at Qoloss have
discovered a couple of bugs in OpenSSH.
And one of them's DOS, but the other one, like it affects non-default
configurations. But it's a bug that you would not expect to find in OpenSSH and kind of a bit
concerning. But why don't you walk us through the whole thing here? Because I've been getting
questions on this one. You know what I mean? It's one of those, well, what do you know about this
SSH bug? So do your thing, Mr. Boileau. I shall do my thing. So this is a beautiful, wonderful piece of research,
as always.
I'm gonna get the gushing out of the way.
Gosh, gosh, gosh, gosh, gosh, gosh, yeah.
Whoever it is at Qolus that does this
was reading some open SSH source code
and noticed an idiom that they were using for,
in particular, they're checking a couple of values
and then returning from a function.
And there's sort of an idiomatic set of code that they use to do that, and the idiom relies on them,
you know, getting something right every time. And so the researcher went, well, how many instances
in the code base are there of this idiom where they don't do it exactly as they should? And found,
you know, they wrote some code QL queries and read some code
and they found a whole bunch that were not security relevant
but they found one that was.
And this was in the process where an SSH client checks
the identity of the SSH servers.
It was like a host key that it uses to validate
that it's the real server.
And that particular check in a certain case,
like when you're using host key fingerprints
that are sourced out of DNS,
if you could make that process run out of memory,
then the way it returned its return value
was not done correctly,
and you could bypass the hosted entity check.
And the net result is, if you're on the network
between the client and the server,
you can impersonate the server
and not get a warning from SSH,
which gives you access to compromise the system.
One of the core guarantees of SSH is that this shouldn't happen.
It's meant to warn you about this.
The other bug you mentioned, which was the denial of service, well, they got to this
point where they were like, okay, if we can cause it to run out of memory here, then we
can gain access by bypassing the host validation. Are there any memory leaks? And what they found was a denial of
service commission that they can trigger to cause it to run out of memory pre-authentication.
And that's the kind of the second bug. So you can use that separately just as a denial of service,
but it's an integral part of actually exploiting the first bug. So both of these have now been patched by the OpenSSH team,
but these bugs have been around for a long time,
and the necessary configuration was on by default on FreeBSD for a few years.
By a few years, I mean like a decade.
So it's unusual that we see bugs like this in OpenSSH
because the guys behind it are very
good at writing.
It's the best piece of C code you're going to see anywhere in my opinion.
So bugs like this are unusual and of course it's whoever it is that qualifies this that
would find it because instincts on this one were just good.
They spotted something and went hmm, I wonder if, and then next minute, you know, shells.
Yeah.
So if you are running that config,
and the config is something to do with validation via DNS.
Yeah, the host key DNS validation, yes.
Yeah, host key DNS validation.
If that is something that you've got turned on,
this could affect you.
But I mean, how common is that configuration?
It's not wildly uncommon, because managing host key verification in bigger networks is
a kind of fiddly and the other options which are like certificate trust based or distributing
for your configuration management, which is that when I was looking after a bigger fleet
of boxes, we did this with the configuration management, but that led to a bootstrapping
kind of problem.
Yes.
So doing it in DNS honesty is not a terrible idea,
especially in a world where DNSSEC exists, but of course DNSSEC brings you
a million other problems. So yeah, I mean some people could get hacked by this,
which is pretty cool. Well and they're the people who are trying to do it the
right way right? Well yeah. Now speaking of further problems with the
technology that underpins our world, Adam,
which is what we like to do, we got a great write up here actually from Recorded Future.
And they've written an article about it as well, published to the record.
It's on salt typhoons adventures in Cisco devices, but they, it's funny, right?
Because there's two bugs that are exploited here.
It looks like they've compromised something like a thousand
Cisco devices, largely at telcos,
but some universities and whatnot as well.
What's interesting about this is the two bugs
that they're using are both termed privesk.
But the first one like allows you to add a user.
And this is like termed privesk,
which I thought was a bit strange.
But yeah, they've rolled up.
Well, I don't know that they've rolled it up,
but they've identified this campaign that has exploited
over 1,000 Cisco devices, a lot of them in the United States.
But it's just a really good write-up.
And I think it's for any salt typhoon watches out there,
you'll want to get this link.
Yeah, yeah, definitely worth a read.
And these bugs are in iOS XE, which
is the kind of Linux underneath variant of iOS, where
it's the Linux kernel doing, you know, routing and control, but they give you a user interface
that looks like a traditional Cisco iOS.
And these are, you know, in many places people may not necessarily realize that it's not
a like Cisco iOS Cisco device.
It's actually a Cisco iOS XE Linux device.
And once you've shelled it,
you have all of the power of being able to hide on a real Linux box and do other things.
So they're great targets, but they're also not quite as common as traditional Cisco IOS.
But lots of telcos still run them. They're in lots of interesting places. There's quite
a product range that run it as well. And I don't know what Cisco's rationale is
for which ones are which.
But yeah, if you look after a network with iOS XE,
you know, it's probably worth having a read
to understand what you're up against.
Because also once these guys are in your network,
boy oh boy, you're not gonna get them out in a hurry.
No, and these are CVEs from 2023,
but this is the sort of stuff that doesn't get patched, right?
Which is why we're seeing it. Funnily enough, you this is the sort of stuff that doesn't get patched, right?
Which is why we're seeing it.
Funnily enough, you know, this XE stuff that you're talking about, I don't know if it was
known by the same name at the time, but something like 15 years ago, some friends of mine found,
it was like a physical access bug, where if you plugged in via serial and like type the
right command, you would, you would exit the Cisco iOS CLI,
and it would just dump you into a Linux shell,
and then you could just trojan the absolute crap
out of that box exit, and the admin,
like it would be invisible to the admin.
So not a critical one, given that it required,
you know, physical access or whatever,
but it was still worth reporting,
and they actually hired me like by the hour
to handle the bug report off to Cisco.
This is back when I used to do this sort of stuff, you know, on the side.
And what was really funny was Cisco's response to this was just to EOL that
entire line of products.
They were just like, ah, because they were already old at that point.
They're like, we're not going to patch these.
They just slapped EOL, EOL, EOL.
They ended like 20 products or something on it because of that
report. So that was pretty funny. Moving on and let's talk about what Sandworms getting
up to. They're going after some Western networks, but they've got some sort of somewhat, you
know, somewhat more interesting C2 happening as part of this campaign relying on like Tor,
right? Which is interesting and I'm gonna talk about
why that's interesting after you give us a rundown
on what's happened here.
So Microsoft has written up, I guess,
an initial access campaign by Sandworm
and it looks like there is a kind of a part of Sandworm
that does the initial access and the part that does
like the ongoing operations, intelligence gathering,
you know, whatever other actions on objectives they have been tasked with doing. So this is the like
initial access crowd that have been pretty busy in Ukraine for the last few years but are now
you know back out doing things on you know on the wider internet a bit more indiscriminately and
Microsoft's analysis says like you know there's lots of targeting going on all around
the Western-speaking world at the moment from them. They've been at it for a while, and as you say,
they're one of the things that's a bit interesting here is that they are dropping
tall hidden services for command and control on some of these systems. And that's,
you know, not at all unheard of, but still kind of interesting.
And not all that difficult to detect, right? Like if you were set up for it,
not all that difficult to detect. But the reason I find this interesting
is you would only do something like this if there's a reason.
That's why I find it interesting. They are obviously doing this for a reason.
You know, you don't bother spinning up onion services on an infected host unless maybe
you're getting snapped using more traditional C2. And for a long time I've been expecting to see
more of this sort of stuff and more of like using social media services for C2 and whatever,
more using stuff like TLS 1.3 through CDNs, you know, kind of like a domain fronting style thing.
And we just don't see it, right?
We just see people just doing the dumbest, most basic C2 ever.
And you see something like this and you're like, well, they're doing this for a reason.
I wonder what that is.
Like, I wonder if it's that their C2s keep getting taken away from them.
And you know, that's an efficient way for defenders. Like when I'm talking about like nation states here to just like black hole,
you know, some C2 IPs or whatever it is.
And maybe this is a hedge against that.
I don't know what it is, but I guess what I would say is like, you don't do,
you don't do this for no reason.
Yeah, no, I agree.
And that I also had similar, similar kind of thoughts because there's always a
trade off when you're going for initial access between kind of size and complexity. Like we want something that's simple enough that
there's a very few things to go wrong because early on in this process you
don't really know the environment you're running in and so you want to do like
minimal amount of stuff because every extra complexity you add is a chance
that things might just go wrong or get snapped or whatever else. And pulling in a whole tour environment
obviously makes a lot of noise on the network.
You know, there's been, you know,
back when Microsoft used to run
like an IPv6 relay network for transition,
like when people had v4 and didn't have native v6
and Microsoft provided like a Torito tunneling service.
We saw people kind of abusing that
because it was minimal lift for you
as the attacker to bring in
because all the things you needed
were already there on the windows,
but went outside, normal stuff didn't look like
normal things on the network,
was a great kind of mix of those things.
So yeah, having to drag a whole tour runtime in
seems complicated and seems risky.
And I guess it means that people are pretty good
at spotting stuff on the wire.
Well, but I think the interesting thing for me is that perhaps the concern is less about detection,
but more about the fact that if C2 is highly concentrated, it's easy to disrupt.
You know, C2 on the open web, you know, you can find those boxes, you can hack them, nuke them,
RMRF them, you can talk to the ISP.
You've got a lot of options for taking away
a lot of access all at once.
Whereas with this, okay, sure, each individual victim
might be able to detect it, but that's like,
is that actually gonna happen?
Each individual detecting it,
and then they've actually got to remediate it.
So it's sort of, to me, this seems like sacrificing
a little bit of stealth on target to get more of a,
you know, result from a macro perspective. That's,
that's what I think is going on here.
Yeah. I think that's a pretty good, a pretty good analysis. And yeah,
I think so. Yeah. Yeah. Okay. Cool. I'm not crazy. That's good.
Now we've got this, um, terrific piece,
absolutely like riot fun piece from Brian Krebs looking at
what Chinese criminals are doing in terms of essentially phishing credit card holders
to add their credit cards to mobile device wallets.
And then what they do is they like sell the phones
with the actual, you know, with full wallets
full of people's credit cards.
And like, I think this is a fascinating walkthrough.
I mean, the photos are terrific.
The whole thing's really interesting.
I do wonder though, like, if the banks are doing
a sufficiently good job, like this shouldn't happen.
You know what I mean?
Like you should be able to detect when someone's trying to spin up a wallet like
when the device is on a different IP and like in a different cut. Like it just seems like this shouldn't work.
And yet it does and we have this nice write-up from Brian Krebs to talk about.
Yeah, I mean this is kind of what modern credit card skimmy looks like. In old days, you go to a restaurant, the waiter would swipe your mag stripe through his recorder
and clone your card later.
That's not how it's done these days.
This is how it's done.
And it's just, I also, thanks Brian for the great write up.
It's really good.
So yeah, they send you,
so normally the process of enrolling a card
for contactless payment on your phone send you, so normally the process of enrolling a card for
contactless payment on your phone
involves your bank sending you some kind of verification code through an existing contact mechanism they have via text message, via a banking app, whatever you've got and then these people will
usually in the context of another smaller transaction, so they might run up a fake
web store or you know an auction site or something.
Well, one of the things they're doing here is like sending texts saying you have an unpaid
toll or you have an unpaid, you know, you need to pay a little bit to the post office
to release this thing. And that's how they're setting up these transactions. And then as
part of that flow, they say, well, we're going to send a code to your banking app, just give
us that code. And then that's how they're enrolling the cards in Apple Wallet.
Or I'm not sure if it's Apple, but mobile device wallets.
Yeah, in mobile device wallets,
we have both Apple and Google.
And then yeah, I mean, it totally makes sense.
That's another example of people being confused
about the context of authentication,
which is very similar to the device code phishing
that we talked about earlier on.
And then yeah, load up the phones with a few dozen payment cards and then sell them which is smart. They're also providing a
service where you run an app on your Android phone which real-time relays the
payment through one of their phones that's got the wallet, got the full
wallet so you don't have to get the physical device from China. You can just
install an app on your phone, go up to a payment terminal in your regular country,
and then it will relay it in real time
to the other phone in China,
play it out to the payment terminal,
and voila, you've paid your stuff.
And there's actually a picture that Krebs has got here
of some guys being arrested in Singapore
at an Apple store or something doing exactly that.
So yeah, I mean, clearly people are doing it.
And the fact that it works so well, I think you were bang on.
It's interesting that the banks and the phone manufacturers, they're the ones that are in
a position to detect this from an anomalous pattern of use point of view.
And as this gets bigger, obviously that will become more of a focus for them because it does you know this this should be unusual. Yeah and you
know you would just think most of the banks should have an app presence on
devices where they're rolling like where they're trying to enroll these sort of
cards and there's like there's so much you could do here and I think I think to
a degree like for at least the big banks I think they'll probably be
able to tackle this but you know it's going to take time and effort which is why we are
where we are.
Yeah and anything that requires individual banks to do stuff is going to take a while
right because there's a whole range of banks with different ranges of maturity and yeah
it would be nicer if there was a Google and Apple end approach that could kind of cobble you know hobble this right
at the end point. Yeah and it's interesting too that those phishing
messages are actually or smishing messages are actually going out through
iMessage and in the case of Google RCS so they're not even like hitting the
telcos so anyway good stuff go have a read of that if that is interesting to
you. Another banger from Dan Gooden, he's really had the goods over the last week, but he's
written about sort of like trends, I guess, in prompt injection and how people are doing
funny stuff with LLMs.
Like in an example he talks about in this piece, it's where you can essentially send
people something that tricks the Gemini LLM that's in your
Google account into going and grabbing sensitive documents and sending them to the attackers
and whatever, and looks at Google's approaches to fixing that, where he's quite critical
of the way they've done that.
But then I just think, well, what's your alternative here?
This is just a great example about how plugging large language models into
absolutely everything is going to come with all sorts of very strange risks.
Yeah, yeah, exactly.
I was, I was struck reading this by the similarities of this and, you know,
traditional, you know, uh, memory corruption text where you're confusing code and data.
Well, and I was thinking about web attacks and things like cross-site scripting and
getting contexts all mixed up. Right. But it's, it's, I mean, I guess that's the
thing, right. It's all about, you know, a new technology that requires, you know,
boundaries to be better defined. Right. So it's, I guess everything that's new is
old again.
Oh, yeah, exactly. Yes. That's exactly what this is, right? It's confusion about who is providing
the instructions and who is providing the data. And some of these tricks for, like one
of the ones Dan talks about is they call it delayed activation or something where the
LLM has got some filters that stops it from taking actions immediately from the data that
it's processing. You can't just have it summarize a doc that contains
instructions to send all your passwords to the attacker, but in this case they're
doing a thing which is next time I type yes send all the passwords to the
attacker. So it's not doing the action then but it's queuing it up for later
which is the sort of stuff that you you know, a real artificial intelligence,
the ones that we one day may have,
ought to be able to understand
that that's not exactly what I meant.
But once again, now we're into like, you know,
crazy reasoning about it,
and who says a computer is gonna be any better
at making security choices than the humans are?
So like, it's, I don't know.
We don't talk about this specifically,
but like this week's sponsor interview,
the whole theme is like,
what do you do about LLMs everywhere
and not just the major services,
but like LLMs in products like Gemini, right?
And how do you avoid exposing the wrong sort of data
to that stuff because that gets risky
and how do you do that?
And they've, you know, they're taking a stab at it.
Island, you know, being an enterprise browser,
they can see a lot more in the browser
than you would otherwise and block various endpoints
and whatever, but you know, basically they're saying,
you kind of need to have that DLP approach
of like not allowing that data to get anywhere
near those things in the first place.
Yeah, it's complicated, you know.
It is complicated, man.
Like there's some interesting new challenges here,
especially now this stuff is everywhere.
Just quickly going to mention it.
John Greig has reported that that woman who was arrested in Arizona last year for running
a North Korean IT worker laptop farm, she's pleaded guilty.
Prosecutors are seeking like seven to nine years.
So she's having a bad time.
But I've always said, well, I have said for the
last year or two that I think that's the weak point with these scams or the laptop farms. They're
pretty easy to identify. And, you know, word gets out that this is illegal. And I think you'd be,
you'd have to be a real dummy to engage in that sort of activity and not expect to be caught. But,
you know, the world is not short of dummies. So expect we'll be reporting on that for some time to come.
We've also seen Alexander Vinik, who's the Russian guy who was operating BTC-E,
the cryptocurrency exchange, which is now defunct. And he was convicted of a bunch of cyber crimes
and he's in prison in the United States. Apparently he is being or has been released in exchange for some teacher who was being held in Russia for possession of marijuana,
which is their go-to charge when they want to hostage. So yeah, it looks like there's
been a swap there. I mean, it's depressing, isn't it?
It's depressing and it kind of rewards Russia's use of hostages to get these kind of concessions
and releases and stuff. So, sigh. I guess at least he did have to forfeit a hundred million bucks.
So that's not nothing.
That is actually mostly it for the week's news.
But I did just want to talk a little bit, as I mentioned, at the top of the show.
David Jaume unfortunately passed away last week.
He was a good friend of mine. I'd known him something like, you know, 20 years.
Good hacker, great person and, you know, I'm definitely
gonna miss him. But I did just want to, I spoke with his parents last night, right?
And they've agreed that I can disclose a little bit of detail about
how he died, basically, because there's a little bit of confusion
out there and understandably because the posts announcing his death were somewhat vague and linked off to a mental
health charity. Look, as best as everyone understands, this wasn't a case of suicide.
Dave did suffer from bipolar disorder and could become very sick occasionally. Right.
So, so I knew him quite well and you know, he would, he would be fine most of the time,
but then he would get on a massive like manic upswing and then a crash afterwards.
And you know, this is something that he'd been living with for quite a long time.
And you know, it did complicate his, his life somewhat, but he knew how to manage it.
He got better at managing it, right?
And an example of that is like last week when he felt himself going manic, he actually called
him sick because he had previously not done that and that had caused him problems at work.
And his previous work at the bank, I won't name the bank, they had to deal with some of that when
he would go a little bit off tilt. And they were very good.
He always really appreciated the way that they handled all of that.
So I think it's important some of them would be listening to this and they would know.
And indeed, I spoke to his parents last night, as I mentioned,
and they went and visited his office at one point when he was off sick and saw his desk.
I think they had to go and get something from the office for him and saw all of the messages
and gifts that people had left for him.
And, you know, he was definitely supported there and certainly appreciated it.
But it does look like what happened is sometimes when Dave would be on one of these upswings,
he would drink as a way to calm the sort of manic anxiety that he would experience.
And it looks like perhaps he just drank too much and that may have suppressed his sleeping,
suppressed his breathing while he was asleep.
But it certainly just looks like he passed in his sleep.
And, you know, I went back and forth with these folks last night about what to say,
because if you say it was misadventure, you know, as his dad says, like that,
that makes it sound like he thought he grew wings and jumped out a
bloody window.
If you just say it was mental health related people will assume it was suicide.
I just wanted to clear that up for people who knew him because so many people knew him
in Australia.
He was a hacker who had been hacking since the 90s.
Always fun to be around, terrific guy, wicked smart.
He organised the Tuscon Conference,
which was the one in a caravan park in Queensland
where it was basically just 40 people camping somewhere nice.
He presented at KiwiCon.
He was on the show talking about some of his research
into North Korea.
He spoke about Red Star OS at KiwiCon
while using his alter ego, which was a stuffed toy
called Lord Tuskington, the wal ego, which was a stuffed toy called Lord
Tuskington the walrus, which was up on the lectern and I think he was down
there with a microphone. You know, you knew Dave, you weren't as close to him as
I was, but you know he's definitely gonna be missed.
Yeah, he really is. I carried Lord Tuskington the walrus, the stuffed walrus,
I carried him out on like a decorative pillow for that presentation and put him on the lectern and position the microphone in front of the walrus, I carried him out on a decorative pillow for that presentation and put him on the lectern
and positioned the microphone in front of the walrus
while David voiced the walrus in a husky,
sort of faux English accent from offstage.
And it was just, it was a wonderful piece of infosec comedy
doing amazing research, but also presenting it
in a way that's engaging and memorable and fun.
And that's kind of who he was, you know?
And it's very sad that he has left us.
Yeah, it is.
And led an amazing life.
And indeed speaking to his mum last night,
the thing that they're taking comfort in is just,
you know, just how much he lived his life on his own terms.
He was an avid outdoor adventurer.
I once told him about a coastal walk
through a national park here
that's like a hundred kilometer long
Multiday walk and within a few months he'd just done it, you know
So he would go on these epic multi-day bush walks. He was an SES volunteer as well for the state emergency service
He was actually a devout Hare Krishna as well, which a lot of people wouldn't know he had been for decades
But that was something he was quite guarded about until more recent
years when he's like, no, this is okay. This is a part of myself that I can share. And
my condolences to everybody else who knew him. I mean, another great example of Dave
is he volunteered to work in an election one year just because he wanted to see how elections
would work and what the security situation is like with elections and
as it turns out according to him pretty good you'd need multiple people at multiple different levels of an election to try to
alter an account and whatnot. He was also a great friend to me when I, you know, my family went through its own health crisis many years ago.
Yeah, just a wonderful guy and yeah, we're really going to miss him.
What more do you say?
Yeah, exactly.
And I believe also he was afflicted by the same disease as you when it came to Java applications.
Yes, there are very few people who were willing to punch Java right in its Java files quite as much as I enjoyed and he was absolutely one of them.
So yeah, we
had that in common and we shared some good bugs over the years.
Yeah, lived around Australia all, you know, he would stay in a place for a while and could
say, okay, I've done that, onto the next place, onto the next place, onto the next place.
And you know, his skills development was the same, like this is a guy who just, you know,
couldn't stop learning. And yeah, again, okay, I'm ranting now. It's been, it's, this one's
hurt. I gotta be honest. It's really hurt.
I think a lot of people are feeling that at the moment. Okay, so let's wrap it up there, Adam.
Thank you so much for discussing the week's news with me and we'll do it all again next week.
Yeah, thanks, Pat. I will see you next week.
That was Adam Bialo there with a check of the week's security news.
It's time for this week's sponsor interview now
with Brayden Rogers, who is the Chief Customer Officer
at Ireland, the browser maker,
and you can find them at Ireland.io.
And they're, yeah, fully featured enterprise browser
with all sorts of very, very interesting features
and use cases.
And one thing they've been spending a bit of time on lately
is looking at just how much
company data can wind up being exposed to large language models and not just through people
spinning up their own private chat GPT accounts and pasting stuff into it. But, you know, as we
were just talking about in the news, like if you're a Google user or a Microsoft user, you know,
there's LLMs everywhere now. And so Braden joined
me to talk about, I guess, this issue and what people can do about it. Here he is.
There's the obvious destinations that are generative AI and, you know, we make decisions
about whether we decide to block those or we allow them or we put them through potential
workflows. And the workflow could be maybe we let a user make a request for an approval natively
in the user experience,
and then that approval flows through the organization
and an appropriate business level approver
makes the decision to allow it and understands the risk.
And what we do in that particular case,
we might put that in a place where the user can access it,
but keep it outside the boundary of the corporate app.
So one of the foundational things that we think a lot about is how do we deal with unstructured
data scenarios?
And I think this is one of the biggest challenges, but your comment, you just made, I saw the
eyebrows raised there.
The old school approaches of DOP.
The challenge with DOP, DOP feels like a bit of a washing machine.
And I've been working with it for 20 years.
It feels like a bit of a washing machine, and I've been working with it for 20 years, it feels like a bit of a washing machine stuck on spin cycle.
Because the way you deal with the challenge around your data is you start with tactical
pieces of data.
Let's say you just build your taxonomy that you're looking for, you build your lexicon
that you're identifying content with.
The problem is you struggle with that with structured data, much less unstructured data.
So what happens now with world of AI,
you get this whole issue around derivative forms of data.
So now think about that the AI can fling out versions
of my data that they're semantically different,
they look different, but they say the same thing.
And they're certainly not conforming to my DLP structures
that I created before.
So one of the things we're really focused on is thinking about how do we
tackle non-structured data in non-structured data in non-traditional ways.
And big part of that for the obvious stuff is, well, here's my corporate
applications and here's the things outside of those boundaries.
So don't let corporate data spill to those things outside of those boundaries.
Let the user freely use chat, GPT, or whatever the obvious thing the outside is that it's personal, but don't let the user just copy
beyond the boundary.
Yeah. So that'd be like a copy paste restriction solves some of this like out the gate, right?
Because they can't, they just can't do that copy paste.
Correct. Could be file movement in the same way.
Yeah.
But again, the boundary is a unique construct. Now within that, I think one of the things
that we think a lot about is there's obvious
areas of sensitivity and applications that are where unstructured data exists.
I'll use a perfect example.
Think about EMR environments, electronic medical records.
Physicians don't type their patient notes to conform again back to your structure of
your data protection technologies.
They write them in the way they want to write them.
And that unstructured language is a very difficult thing for old school DLP to tackle.
So we might do something like in that particular case, redact the fields of a form and govern
how people have accessibility to the fields in the form.
And when someone unredacts that data, don't let that specific set of objects be moved
over to the application that's in question here, this generative AI technology or whatever
outside the boundary.
So the boundaries and kind of the governance of the presentation layer give us some ability
to handle unstructured data in very unique ways.
Now, about your comment a moment ago, we can combine those things with DLP as well.
And it's not like DLP is dead, you know, Orgs have made years of investment in that.
And what we want to do is use it effectively.
So used in the context of those boundaries,
I don't have to worry about my data flowing
to the wrong places.
So now I apply DLP much more selectively
within the boundary and maybe even outside of it a bit.
But tying into the investments somebody's got,
you know, if they've been investing in Microsoft
information protection for years or a semantic DLP
or something they spent 15, 20 years investing in,
tying into that and leveraging it,
but not throwing it all the way right out of the gate can be important for
the places where you're just not sure.
So this field, what's backing this field?
This field got behind the scenes and we don't know it and it's an application we don't know
much about.
Maybe I'll just govern with my traditional DLP approaches there but again, the boundaries
change the game a bit.
Yes, I mean that's the next question, right?
You're talking about like from first principles just restricting the way that people can move
stuff around which is going to naturally limit their ability to like just paste whatever,
you know, copy and paste into ChatGPT, whereas they can copy and paste between corporate
apps and whatnot.
So, you know, that all makes sense.
But what about when it comes to all of these large language models that are popping up
in existing services and
they might be personal accounts, they might be corporate accounts, but the point is, you
know, it's data you don't necessarily want to expose to, you know, you don't want it
becoming part of someone's training set. You know, how do you then go about trying to deal
with that? Because I imagine, I mean, that's nigh on impossible.
Yeah, it's a, you know, within within the boundary you've accepted that you understand those apps and
You've adopted their standard corporate apps like so for example
Let's use an example of G suite as Google introduces AI or directly in the interfaces of their apps
You're making an analysis of the apps and saying hey, I'm gonna accept that that's the model is now part of their universe as an application provider
So you deal with those accordingly and And then obviously, as I mentioned a moment ago,
you may apply some DLP policies within that. Again, maybe the old school vehicles, again,
because of the fact that it's living in the boundary. But at the end of the day,
some of that you're bound by a bit, because at the end of the day, when they get your data,
you've got an understanding of how their models work. If their models train the other things
in their environment as well,
they're not isolated in your world.
There's not an obvious way to understand that
unless you have a relationship and understand your vendors
and you've done due diligence
and understand how they use the models.
For the things outside of that,
again, it goes back to those constructs
I mentioned a moment ago.
Sometimes generative AI objects
in the applications are obvious.
And those are things we can easily identify
because we see object level items in the DOM of the applications and things
a lot of those last one might govern those a little more tightly and sometimes they're
not so obvious. And again, the data is still the data nonetheless at the end of the day,
whether it's going into a generative AI model in the backend or it's going into somebody's
database on the backend, still all the data to flow to the wrong place.
Yeah. So really, I mean, I guess what you're saying here is that ultimately the thing you
want to do is just put some boundaries around stuff you don't want going into LLMs and don't
let it go anywhere it's not supposed to go.
Yeah. I think that's, I think it's true. And I think the, the challenges for the, for this,
this, this isn't going to work like, you know, shadow IT would did before, you know,
wake up calls coming for everybody. And I mentioned earlier that the competitive advantage, the pressure that
you feel as an executive in an order, when you see your competitor adopting it,
we're going to have to think of ways to empower people to say yes to things.
And otherwise we might not have said yes to you in the past.
We learned how to say yes to things like visitors in our office and our employees
roaming and getting all foreign networks and you know, wifi networks all over the
place. And we just had to figure out a way to say yes.
And again, with the browser being the center of how the user engages,
most of it seems like an obvious place to be able to find the creative ways to say yes.
Now, speaking of, I want to just change topics if that's okay with you,
because one thing I found myself wondering about is, you know,
if you could give us a bit of insight into what the maintenance of something
like Island looks like, right? Because you are maintaining, you know, essentially like
a Chrome fork, you know, chromium fork. And you know, what, how big does your maintenance
team that is just responsible for writing and distributing patches, like how, you know,
just talk, talk to us a little bit about how all that works because I imagine that is
a lot of what you do. Right? So there's all of the features and the DLP and the,
you know, the, the whiz bang stuff.
But I imagine like a big part of the business is just keeping this thing
running stable patched up to date. Yeah. Give us some insight there.
Fortunately for the chromium side of the fence,
you know, being a part of the ecosystem, we get the inherited advantages of being part
of the contributing community as well. So we see a lot of these things earlier and we
inherit those things as they make it to the ecosystem, both good and bad sometimes. But
obviously we put a lot of effort into making sure that we provide facilities for the org
from a change management standpoint, the ability to make decisions about when they take the new things on.
Very much like the Chromium ecosystem,
we do divide, we separate new feature capabilities
from security patches or security updates
so you can make decisions.
And, you know, God forbid that like some of the incidents
we've seen over the past year,
let you ring fence things, make you policy decisions.
You know, maybe I don't wanna take new features
as an end customer.
I don't want to take new features during the Christmas holiday shopping season if I'm a retailer.
Maybe I'll take a zero-day update if I need to be able to segment those out. And then again,
obviously, ring pinch your audiences. Maybe I don't take those in the stores, but I take those in this
part of the business any time that I need to. So again, the contextual clarity or contextual
awareness that it uses to be able to identify
circumstantial situations where we shouldn't,
shouldn't do these things is important.
For us, the reality is everything we do on top of that
is like any other software company,
it's building capabilities.
And we just happen to be building on top of Chromium.
So we set our teams up in what we call islands, obviously.
And there are ones for different types of situational things.
We have data protection island.
In turn, we have a team of folks doing that.
We have user experience island.
And they all kind of run as their own separate business inside the company.
They work very closely in alignment, but they're all trying to drive for some of them slightly
different outcomes, user experience, but then they start coming together where maybe I need
data protection to communicate something to the user slightly different than they had before.
So those two teams come together to work.
Cause I just would have thought like, you know, some Chromium updates might break an
island feature that you've implemented.
Right.
And like, I guess I was just more asking what the, you know, what sort of scale this sort
of, you know, maintenance and QR teams are, you know, do you have like just a specific
team that is dedicated to like the core Chromium stuff and figuring out like, well, we can
just merge this in like right away or this might break something or, you know, what,
what is that sort of testing QA and maintenance process?
Fortunately, fortunately today's climate, automating all these things is, is, is user
friend.
So a substantial amount of automation
takes care of the human side of the house in that as well.
The great thing about the way Chromium launches
and obviously the way we work with it with Iowans,
we follow that Chromium launch trend.
We see code early on.
So we begin our builds early on in the process
around things like the Canary builds of Chromium.
So Canary is early, early, early adopter stuff that most people in the world never see much
about and we continue our build process through beta.
In that process, before we release, we even have early adopter customers in different
segments as well that take these things on.
So we're not likely to really break something in a large scale production environment because
we see stuff so early in the process.
It doesn't take a lot of...
Honestly, our team's not very large that has to do these things because the automation frameworks that are
available now for handling many of these things are made much easier but certainly we have people
that spend their time focused on this to make sure that you know on the other end of this is
a resilient environment that that uh you know keeps the customer safe at the same time doesn't
break things. So I imagine you're doing like dog food in your own betas for example. Oh 100% yeah obviously we you know everybody internally we're the earliest of adopters of
the technology so as you know as we build something new and things like beta cycles etc we're all
consuming it internally ourselves but we've also got customers that they believe and they
want to understand early code you know some people have staging environments and obviously
I mentioned the ring fencing this allows allows some, some more to take on earlier
code, ring fence some of that code off from people and so earlier groups can get stuff.
So you've got like ring deployments and customers want to do that as well because they want to make
sure, because I'd imagine too that, you know, everybody's going to have their own use for this
thing. So it is theoretically possible. You could push an update that, you know, will be fine for 99.9% of people and might break something over here. So is that
why people are doing that is just to really make sure for their use case that everything's
going to go without a hitch?
Yeah, sure. I mean, think about, I think you probably talked about this ad knowledge on
the show, but there's been a lot of learnings over the past year. Um, crowd strike. And
obviously things before, I mean, we've seen,
we've seen things up in the night across the board that, that is the purpose of,
you know,
setting things up in a structured way where we don't just thrust everything upon
everybody all at once. And, uh, our world, we, you know,
it may be days before, you know,
the first group gets something four or five days earlier than the next group or
months, you know, some orgs will take,
take a change freeze
for something for a period of time that there's just no
tolerance for risk in that environment from a,
from a resiliency standpoint.
So built a lot of facilities, put a lot of thinking
into that and put a lot of thinking into things like
external services that we could be dependent on,
they could cause issues as well so that we have resiliency
around those.
Cause like, for example, your single sign-on provider,
you know, a lot of the world's dependent on that
when they have a bump in the night suddenly people
can't do work so you know thinking of things of external dependencies where
people can still do work even when some external service provider maybe have an
issue. All right, Braydon Rogers thank you so much for joining us on the show to
walk us through some yeah some Ireland stuff always great to see thank you.
You're as well Patrick. That was Br Braden Rogers from Ireland there. Big thanks to him for that. And again,
you can find Ireland at Ireland.io. But that's it for this week's show. I do hope you enjoyed it.
I'll catch you next time.