Risky Business #779 -- DOGE staffer linked to The Com
Episode Date: February 12, 2025

On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news, including:

Musk's DOGE kid has a history with The Com
Paragon fires Italy as a spyware customer
Thailand cuts power to scam compounds…
… and arrests Phobos/8Base Russian cybercrims
The CyberCX DFIR report shows non-U2F MFA is well and truly over
And much, much more.

This week's episode is sponsored by Dropzone.AI. They make an AI SOC analysis platform that relieves your analysts of the necessary but tedious work, so they can focus on the value of human insight. Dropzone's founder and CEO Edward Wu joins to talk about how they approach the problem.

This episode is also available on Youtube.

Show notes

Teen on Musk's DOGE Team Graduated from 'The Com' – Krebs on Security
ACLU Warns DOGE's 'Unchecked' Access Could Violate Federal Law | WIRED
Lawsuit accuses Trump administration of violating federal information security law | The Record from Recorded Future News
The Recruitment Effort That Helped Build Elon Musk's DOGE Army | WIRED
States prepare privacy lawsuit against DOGE over access to federal data | The Record from Recorded Future News
Union groups sue Treasury over giving DOGE access to sensitive data | The Record from Recorded Future News
Student group sues Education Department over reported DOGE access to financial aid databases | The Record from Recorded Future News
Hackers exploiting bug in popular Trimble Cityworks tool used by local gov'ts | The Record from Recorded Future News
DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers - Ars Technica
DeepSeek Is a Win for Chinese Hackers - Risky Business
Owner of spyware used in alleged WhatsApp breach ends contract with Italy | WhatsApp | The Guardian
Another person targeted by Paragon spyware comes forward | TechCrunch
Apple fixes security flaw allowing third-party access to locked devices | The Record from Recorded Future News
U.S. sanctions bulletproof hosting provider for supplying LockBit infrastructure | CyberScoop
Thailand cuts power supply to Myanmar scam hubs | The Record from Recorded Future News
8Base ransomware site taken down as Thai authorities arrest 4 connected to operation | The Record from Recorded Future News
Two Russian nationals arrested in takedown of Phobos ransomware infrastructure | The Record from Recorded Future News
The Company Man: Binance exec detained in Nigeria breaks his silence | The Record from Recorded Future News
Deloitte pays $5M in connection with breach of Rhode Island benefits site | Cybersecurity Dive
DFIR - Threat Report 2025 | CyberCX
Request a Demo | Dropzone AI
Transcript
Hey everyone and welcome to another episode of Risky Business, my name's Patrick Gray.
On this week's show Adam Boileau and I are going to talk about the week's news and then we're
going to take a bit of a deep dive into the latest DFIR report from CyberCX, which is Adam's
former employer.
They've just dropped an absolute banger of a report which is full of really interesting
information, trends on TTPs, what attackers are doing, how they're winning. So we'll be talking
through that, and of course this week's sponsor is Dropzone AI, and they make a
really interesting tool that you plug into your SOC and it does all of your
tier one triage automagically with AI, and it is less insane than it sounds. And
in this week's sponsor interview we're chatting with one of the founders of Dropzone, which is Edward Wu,
and he'll be talking to us about model coachability, because as it turns out,
when you tell an LLM, hey, stop doing this thing, most of the time it just won't
listen to you, right? So they've had to put in some work at making these models
coachable, which in the context of SOC operations is very, very important. But Adam, just before we get into the news, I just want to do, you know, corrections and
clarifications from last week.
Someone pointed out to me that I was talking about how amazing it was that that malware
that did OCR on people's photo reels looking for seed phrases for Bitcoin.
I'm like, wow, secrets discovery and whatever.
Turns out like those wallet recovery phrases
are from a pretty limited dictionary
and doing that is actually not too hard.
So my bad on that.
And I also had an interesting conversation
with an Australian journalist about the Terrorgram thing.
Of course, last week we talked about the sanctions
being imposed on Terrorgram and how that might be linked
to a spate of sort of vandalism and, you know, torched cars and whatnot happening in Australia.
This journalist, he's a bit skeptical that it's actually Terrorgram behind that
stuff. He pointed to a similar campaign in Sweden which was actually traced back
to Iran, but there was more of a sort of organized crime nexus there, I don't know.
But anyway, the point is he's skeptical, he thinks the sanctions against Terrorgram could just be the government doing something
to be seen to be doing something. Who knows? In time, we will find out. But look, again,
we're going to start with Doge news, unfortunately, and again, we're not going to get bogged down
into it. Brian Krebs has dropped an absolutely terrific write-up here, which has linked one of these kids
that Musk has sent into various bits of the US government to The Com, which is probably not what you want.
Yeah, that is really not great. It's not a great thing, and Brian's write-up is, as usual, detailed and pulls on lots of threads.
But you know, we've seen some reporting
about, like, kind of how these kids that are working for DOGE end up in these
roles, and this particular kid had come up through Neuralink, Musk's, like, AI
startup thingy, but yeah, had been hanging out in The Com. It's, yeah, they're
really having that crowd rolling around inside the US government and you kind of
get the vibes that they
are just gonna throw a whole bunch of stuff into an LLM and just kind of see
what falls out and call that work because yeah it's I know the whole thing
is just, it just seems a little bit nuts. But yes, having people from The Com, and
especially given, like, the history that The Com has of attacking its own members.
I mean, Brian Krebs has got some screenshots and grabs and stuff of people talking about
this kid, and you know, the extent to which he can code and so on and so forth. You know,
it's just bad all round. Like, you've got leverage against these people,
you've got, you know, them not being necessarily very good at what they do,
you've got all these connections into violence and organized crime and so on. It's just such a mess.
Well, a lot of them appear to have pretty Nazi stuff tied to their social media accounts,
and it's just amazing how unlucky Elon Musk is. People keep saying, questioning whether
or not he did a Nazi salute, and that's really unlucky when you're just trying to say, my
heart goes out to you, as is retweeting Nazi accounts, platforming Nazis,
you know, all of the Nazi stuff that seems to happen around Elon Musk. He's just misunderstood.
Just some like Nazi bogon flux that just happens to flow around him for some reason. I don't know why
Unbelievable stuff and of course, you know all of this has triggered a
deluge of lawsuits
from various unions, the ACLU, students, like what, there's just a million lawsuits here.
It looks like one of these, like a federal judge has ordered a stop to some of this stuff.
And I think the Trump camp is just saying, yeah, whatever. And they're kind of ignoring
it. You get the sense that this is heading to some sort of constitutional crisis. And
it's all traced back to IT systems at this, at this point. And you know, one thing that occurs to me is that sending a bunch of
like Nazi 4chan kids into the most sensitive government systems in the United States and
getting them to just randomly flick switches and turn things on and off. Like, I think eventually
people are going to understand that that's a pretty bad idea. You know, last week we spoke about how it might be great to have
some bright young things going in there, figuring out like drawing insights from government
data and whatnot. You know, I don't think that's what's happening here. I think this
is headed for something bad. Yeah, I think this is heading for something bad. I mean,
there's a couple of different ways to learn that this is a bad idea, like maybe, you know,
paying attention in civics and taking some time
to understand how government works
and why it does what it does.
And, you know, about how inefficiencies
are sort of inherent at scale.
And that's in the private sector
as well as the public sector.
Or you can do this.
You can just keep getting people to flip switches
until something big breaks.
And yeah, so look, again,
we're not gonna get bogged down in this,
but you know, we spoke about it last week
as a data governance issue.
It looks like that's the basis of a lot of these lawsuits
that some of this activity might violate various laws
and regulations and it's all going to the courts
and we'll just see where it goes.
But yeah, links to the com, extremely not great,
given where these kids are going, you know, and what they're doing.
Now we're going to turn to a, look, let's get into some actual, you know,
proper cybersecurity news now, not get bogged down in this. And you know,
something is interesting too,
is with this current administration in the United States,
they just generate so much, you know,
so many things to talk about and the media is running after every little thing
like a golden retriever. And, you know, let's not do that. Let's just cover it very simply and then
move on to some more bread and butter stuff. We got a great report here from John Grieg over at
The Record talking about a bug in this software called Trimble Cityworks, which is used by
municipalities to like manage assets and critical
infrastructure and whatnot or public infrastructure and people are exploiting this bug and you know
that's not good but this ties back to the discussion we had you know a couple weeks ago
about niche SaaS being the next tire fire for, you know, 2025.
Yeah, this particular set of software obviously manages important stuff,
and actually I did have a rummage to try and find the exploit, but there's very
little detail about the bug itself. It's on the CISA KEV list because it's being used in the wild,
but I didn't find an exploit. I did find some details about the software, and it feels like a
.NET deserialization bug, and usually with these types
of software it's because there's some kind of, like,
secret key that's shared by the vendor
across multiple installations.
And if you find it in one place then you can use .NET
deserialization to go almost straight to code exec,
kind of by design, because these keys are meant to be unique.
That's usually what this is, but we don't know for sure.
But either way, getting people shelled.
Yeah, I mean, I should clarify too
that this isn't actually SaaS,
but it is niche software, right?
Because it refers to how you can get RCE
on a customer's Microsoft
Internet Information Services (IIS) web server.
So you're thinking some sort of, yeah,
some sort of deserialization
is what would kind of make sense here, right?
Yes, yeah, I think they've said it's deserialization,
but we don't know whether it's the usual.NET thing,
which is, you know, using a hard-coded key, womp womp.
Yeah, yeah, all right.
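To make the pattern Adam describes a bit more concrete, here's a minimal sketch of how you might hunt for it on your own estate: flag ASP.NET web.config files that ship a literal machineKey instead of auto-generated keys, since a hard-coded key shared across installs is what turns ViewState deserialization into near-automatic code exec. This isn't tied to the Cityworks bug specifically, and the paths and output are assumptions for illustration.

```python
# Hypothetical sweep for hard-coded ASP.NET machineKey values; illustrative only.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

def find_static_machine_keys(webroot: str):
    """Return web.config files that carry literal (non-auto-generated) keys."""
    findings = []
    for config in Path(webroot).rglob("web.config"):
        try:
            root = ET.parse(config).getroot()
        except ET.ParseError:
            continue  # skip malformed or non-XML files
        for mk in root.iter("machineKey"):
            vk = mk.get("validationKey", "")
            dk = mk.get("decryptionKey", "")
            # "AutoGenerate" means per-machine keys; a literal hex string is a
            # key that may be identical across every install of the product.
            static = [v for v in (vk, dk) if v and not v.startswith("AutoGenerate")]
            if static:
                findings.append((str(config), [v[:12] + "..." for v in static]))
    return findings

if __name__ == "__main__":
    for path, keys in find_static_machine_keys(sys.argv[1]):
        print(f"literal machineKey in {path}: {keys}")
```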
Moving on, and we've talked about DeepSeek a little bit over the last couple of weeks.
We've got a report here from Dan Goodin over at Ars.
You and I are both, like, pretty lukewarm on this report,
because the headline is, DeepSeek iOS app sends data
unencrypted to ByteDance-controlled servers.
I mean, are we really worried about the transit security of this information?
Like I don't think that's the problem. The problem is where it's winding up not how it's getting there, right?
So I think you know people are kind of losing the forest for the trees a little bit when they're looking at the risks involved in
using deep seek
Yeah, yeah, exactly. I thought, you know, this piece
focuses on, you know, the transport encryption, with some stuff that's in the clear,
there's some stuff that's going to ByteDance rather than DeepSeek itself,
presumably because of hosting or whatever else, and then also the use of, I think, 3DES for crypto.
Which, you know, all of these things point to not particularly mature security practices in an app that's gone viral and
gotten big very quickly, but really doesn't, you know, deal with the big picture, which
is you can't use a Chinese AI app if you don't trust China. It doesn't really matter how
you transport that data to China. You know, that's the bigger picture problem. So yeah,
I think it is a case of getting lost in, you know, in the weeds when really this
is a big picture, you know, geopolitical issue, not a technical, you didn't crypto their stuff
on the wire issue.
Well, and it's a business issue as well, because I think, you know, the interesting thing for
me about all of this is it looks like models might be going commodity, which is going to
be very awkward for XAI, OpenAI,
you know, all of these companies
that have invested gajillions of dollars into models,
less meaningful for like Nvidia,
who are still gonna just sell, you know,
absolutely gargantuan amount of chips.
But I think our colleague Tom Uren,
and I've linked through to this in this week's show notes,
Tom's newsletter last week,
he had a take on DeepSeek
that I thought was very interesting,
which is currently, you know, Google will do a, you know,
threat report or whatever on malicious use of say Gemini.
OpenAI will do the same thing.
Now that there's an indigenous capability in China,
you know, we're gonna lose insight into how Chinese APT
crews, for example, are using this technology
in their operations.
And I think that's a really interesting component to this.
It was always going to happen.
And I guess there are open source models they could have been using or whatever, but they
still obviously found value in using some of the cloud-based models to do various things,
and we're just going to lose insight there.
Yeah, yeah. I think, yeah, absolutely.
I mean, the open source ones are useful and handy,
but the thing that made DeepSeek really stand out
is that it was producing stuff that was as good as,
and in some cases, better than the very, very expensive
models, the commercial ones, the ones that took so much
work to build for the various US firms.
And the fact that DeepSeek itself was open source,
so you can, many actors beyond just China
now have access to something that's as good as those.
So this industry is so young and it's developing so quickly
that keeping a lid on it was never going to be
a good strategy for controlling it.
But the visibility we did get from those reports
was interesting.
Yeah, and that is gone.
Now let's follow up on Paragon.
You know, the news, I think when was it last week, week before, we were talking about how
the Paragon spyware was popping up on journalists' phones and stuff, and this was really not
great for them, having just agreed to be sold off to some, you know, American private equity
firm.
And as I said last week, I would expect that would be complicating their deal.
You know, Paragon itself has now, like, booted Italy as a customer for violating its terms of service, which say you can't do this.
Um, this is interesting.
And I feel like, I wonder if this would happen if all of the
action hadn't been taken against NSO.
I sort of feel like all of the stuff that was done to NSO for this sort of activity
has resulted in this company now going, oh no, and booting Italy as a customer.
I suspect that one thing that's different here though is NSO controlled the malware delivery. I don't think that's the case for Paragon, so they don't have quite
the same level of insight into how their customers are deploying the software. But you know,
if it is the case that Italy was misusing this, I mean, what do you do if you're
like one of these spyware companies? What, you can't sell to Italy because you
suspect they're going to do human rights abuses?
I don't know, this is so complicated, all of this.
Yeah, yeah, it really is.
And I think you are right that I think the action
against NSO has probably brought this into a focus
that it wouldn't have otherwise had,
like that plus them now being acquired by a US firm.
And the US being such a big market,
and Tom makes the point that, you know, all the different US agencies
purchase these tools independently.
It's not like you buy one license for the US government.
And so the US is an extra big market for these companies
because they can sell to so many places there,
whereas presumably the Italians buy at once kind of thing.
But yeah, it's, you know, is there a, you know,
there's plenty of people who would argue
that maybe there isn't an ethical way to sell spyware, right?
That, you know, you can sell it to your own government,
maybe at best, but once you start, you know,
dealing with others, then yeah,
you end up in positions like this,
because like Italy, the member of the EU,
surely they should be, you know, a reasonable customer.
But, you know, between the stuff in Poland,
the stuff in Italy, you know, the stuff in Spain, maybe Europe, you know, isn't the safe customer
here for Israel, which, you know, funny world, right?
Yeah, it is.
And you just sort of think, I mean, I touched on this last year through some of our
discussions where just the existence of this stuff, I think for so many countries
that couldn't
develop it in-house, so to speak, the temptation to misuse it is just so great.
And we see it being misused in unexpected places. If anything, I'm sort of impressed that we haven't
seen the stuff sold to Five Eyes countries popping up where it shouldn't, right? Because we have not
seen that so far. But it does surprise me when you're seeing like, wow, Italy is popping shells on journalists phones. I mean, that's what
the allegation is here. It's turning into a bit of a political scandal in Italy as
well, not surprisingly. Yeah, yeah, it is, and they haven't quite gone down the same
route as, you know, Poland has gone because of the, you know, which government was in
power, etc, etc. But yeah, it's the temptation, I guess, is irresistible.
And part of me wonders, because of the long history
of electronic espionage in the Five Eyes,
I was like, do we have more mature approaches
to governance and approaches to oversight and stuff,
because we've been doing it for a long time?
But then again, spying is a very old profession.
So yeah, I don't know what it is with the Europeans. Like you would think it would be
a little better behaved.
Get your act together guys. Come on. But I think, no, I think, I think you hit on it
there, which is the oversight is right. And I think certain things that happened in the
English speaking world, particularly in the United States, resulted in good oversight. And I'm thinking of things like, you know, Watergate, for example,
as soon as politicians realized that intelligence agencies could threaten them, then it's like
oversight, oversight, oversight. That's how it happens. So I think, you know, I do think some
of this is just sort of historical
and the oversight in the English speaking world is actually pretty good. And there's
reasons for that. We've also got a report from Lorenzo over at TechCrunch looking at,
you know, the specifics of a couple of the people who were targeted. And, you know, there
are people who run like refugee organizations and other people who run like news websites
that have been critical of the government and whatnot.
So yeah, definitely a bad look for Italy.
Yeah, absolutely.
Yeah, not great.
Not great.
Now, Apple has patched a security flaw that has made people very excited because they
described in their sort of advisory or their patch announcement that it was being used,
you know, it was extremely sophisticated and being used against very specific targets,
like with physical access to the devices.
Everyone was thinking James Bond.
I'm kind of thinking probably Cellebrite, right?
But they have patched a bug that allowed you to bypass
like the, you know, the sort of, you know,
how they disable the fun bits of the USB interface
until you've authenticated to a computer,
this bug would allow you to essentially present yourself to iOS devices as a trusted device.
Yeah, yeah. Anything that weakens those controls, I guess, is of concern.
But also, you know, it wasn't that long ago that we didn't have those controls, you know.
So it kind of shows that they have been pretty effective.
The things that Apple have added to, you know, lock the device down when it hasn't been used in a while or unlocked in a while.
So, kind of a win in a way.
Yeah, and you'd imagine like,
if you're a company like Cellbrite,
you would have another one of these,
like the plan would be to have another one of these
ready to go when the one that you're relying on
is inevitably patched.
I think the people I know who work in exploit dev,
the stuff they worry about are the big changes to iOS,
like various mitigations that squash whole classes
of bugs and whatever.
So you would think they'll have another one ready to go,
but it is getting harder now.
So we might be heading to a situation where companies
like Cellebrite won't have coverage for modern iOS,
for example, for a period of time at least.
Yeah, you certainly talk to people who work in that game and they say their jobs are getting harder, right?
And it's no easy days in the iOS exploit dev business.
No, that's right. It's not exactly a low stress occupation these days.
What have we got? We've got some US sanctions hitting a Russian-based
bulletproof hosting company. I mean, this is good, right? I wonder what sort of effect
this is going to have now that Russia is almost running this parallel decoupled economy these
days. But broadly speaking, I think these sort of actions are pretty good.
And that dovetails nicely with another item that we're speaking about this week.
James Reddick has the report for The Record, which is Thailand has cut power, fuel and
internet access to scam hubs across the border in Myanmar.
I mean, this is something that I advocated for, I think it was last year, where I'm like,
why don't we just cut connectivity to these compounds?
If you're going to concentrate huge amounts of activity in a physical location,
surely you can make life tough for those compounds. It looks like that's what's happening now.
Yeah. When you've got compounds that are serving what, a hundred thousand scammers, like that's
got to make a pretty big dent in terms of power utilization, you know, all the air con
and all of the computers and so on, plus the comms. So yeah, the interior minister of Thailand posed for a photo op
at the control center for the local power company.
I've got to say I'm very disappointed that the photo shows him holding a mouse where
he's about to click the power off button. I would hope they would mock up a Hillary
Clinton reset-style button that he could smash for the cameras, but no, click. Yeah, boring.
Yeah, or an axe through a, you know, big nice thick power cable with some sparks flying off.
You know, that would be a good photo op, but no. I was looking for some details about how they,
like, how granular are the power cuts, was the question I had. Like, are they cutting off- I was wondering that as well.
Like are they cutting off nearby homes?
Like how does that work?
Yes, right.
Like is the collateral damage or like
are they targeting individual like power subscribers
based on their usage?
Like I wanted to see a few details.
I did click around a bunch of the reporting
and no one seemed to have answered that kind of question
because you know, it would be sad if there were,
you know, if there was collateral damage to other people that just happened to live
in the neighborhood or nearby or whatever else.
But on the other hand, I guess they said,
well, to the Thai economy,
it's costing them what, $2 million a day in scamming?
Yeah, the whole thing's really nuts.
And Thailand and China are actually setting up
a coordination center in Bangkok to try to tackle this because
a lot of Chinese citizens are being sort of stolen, kidnapped and forced to work in these
compounds. So you know, China's involved in cracking down on this. I think the Russians
are getting involved as well and the Americans. Like I feel like finally we're seeing some
movement here. There's a great piece from CNN about a Chinese actor who turned up for a casting call or
whatever in somewhere in Southeast Asia and wound up being vanned and taken to one of
these compounds.
They got him back after a few days, but this turned into a really big issue in China,
given the number of their citizens that are being stolen.
It's something I mentioned before.
I know someone who works at a jobs listing website and they had this issue
years ago where I think it was in the Philippines because they operate in the Philippines as well, where people were turning up for job interviews and then
just disappearing and they did not know why.
So it turned out this is where they were going.
But obviously, what do you do?
You work in the sort of anti-fraud and sort of user safety department of job listing websites,
and then your job candidates start going missing. Like, what do you even do?
Yeah, I know. That is a mess, yes. Actually, I also clicked through to the link about that
Chinese actor, because that seemed to have caused the Chinese to do something.
I think also there is some government visit from someone in Thailand to China as well.
So I think that probably has applied a bit of pressure to them, you know, go shake some hands and show some progress, etc.
So whatever gets the job done in the end.
A lot of it's going to be coordination and resources and whatever.
And I think my point is everybody's on the same page.
It's got bad enough that everybody's like, okay, we've got to do something about
this. You know, again, it's going to be like ransomware, right?
Where, you know, release the hounds, you know,
going after ransomware actors, was never about ransomware elimination.
It's about suppression. It's about at least making it somewhat difficult.
You know, Tom's doing a write-up for this week's
newsletter.
We just spoke to him this morning.
And there is progress there, like the amounts,
the total profits, the total revenue going
to ransomware crews is down.
There's less big game stuff, more targeting SMEs.
Like the economy is sort of,
the ransomware economy is contracting.
I mean, I hear through sources
that some of these offensive operations against ransomware crews are actually having an effect.
Like they are, they're ongoing. Like governments aren't talking about them, but they are doing that.
The hounds are out there, man, and they're causing a ruckus, right, for ransomware operators.
Good job, hounds. Good job.
Good job, hounds. I know we've got a lot of hounds listening to this. So, you know, happy hunting. Ah-hoo! Ah-hoo! So let's stay with Southeast Asia now. We've seen a bunch of arrests by Thai police of
ransomware actors. And then we saw like a DOJ indictment land that might be connected to that,
but we're not entirely sure. Walk us through this.
Yeah. So the 8Base ransomware crew got a bunch of their infrastructure
taken down with a coordinated law enforcement operation in Europe and the
UK, FBI, the usual kind of people. They arrested four people in Thailand, and
then, yes, simultaneously we saw a DOJ release saying that they had charged a
couple of Russian men
for their role in the Phobos ransomware infrastructure,
the ransomware gang, and 8Base and Phobos
are pretty well tied together.
Like, they share a lot of plumbing and things.
So I think those are the same people.
No one has specifically said that,
but the DOJ says they're in custody
and the Thai police arrested two men and two women.
So I think it probably lines up
but you know that crew goes back quite a long way through other ransomware groups
so yeah these are experienced people so yeah good work good work. Sucks to be you guys!
Now look, we've seen a whole round of coverage about this guy Tigran
Gambaryan? Gambarian? I don't know. I always fail at pronouncing
his name. He was the guy who worked for, what was it, Binance?
Binance, yeah.
Binance, yeah. So he was a guy who worked for Binance who got detained in Nigeria because
the government held him personally responsible for their currency tanking. Because really the government had mismanaged the economy and a lot of people
were starting to use stable coins through platforms like Binance, which,
you know, can't blame them for voting with their feet and, you know,
going to a stable form of currency. But yeah, government didn't like it.
They put him in prison. He was going to be there 20 years. Eventually he got out.
I mean, this is a story that's been going on for quite a while.
We've talked a lot about it.
The Record's Dina Temple-Raston and Sean Powers have a write-up here, and indeed he
appeared on their podcast.
We've also got a write up from Andy Greenberg,
who was apparently in touch with this guy while he was imprisoned.
And yeah, just a fascinating story.
And I guess this would be our reading list item for this week.
Yeah, there's a lot of interesting details about kind of the process and
how it kind of went down. Like if you're trying to imagine what it's
like, you know, being arrested as part of essentially, you know, sort of
almost like a blackmail operation, you know, and where international, you
know, the Nigerian government in this case, you case, just kind of wanted to hold someone responsible
so they were doing something.
And seeing the inside of that process, I think, is insightful,
especially with those of our listeners that work in cryptocurrency
and areas of cybercrime where you kind of get messed up in this stuff,
like to see how badly it can go.
So yeah, definitely worth a read just for those little details of like
how do you buy a cell phone in a prison in Nigeria when they think you're a billionaire
cryptocurrency mogul, as opposed to, you know, he used to work at the IRS. So I guess he
had some federal government salary but wasn't really a rich man. Just lots of interesting
details. Go have a read.
Yeah, yeah, indeed. Now we're gonna talk about this threat report from CyberCX.
CyberCX, of course, is the sort of large Australian
security consultancy, which bought the business
where you worked, you were kind of a co-founder
at Insomnia Security, which was acquired by CyberCX.
They bought, for people outside of Australia,
they might not know, they went around
and they just bought up a bunch of consultancies.
It was like a private equity roll-up.
And, you know, as a result, they are the big player in Australian cybersecurity services these days.
We don't have a business relationship with them.
You know, you did work there for years after the acquisition, but you are no
longer there, you are full-time Risky Biz.
But they've dropped an absolute banger of a DFIR report.
Unfortunately, it is, like, regwalled.
So you have to cough up your email
address to get a copy of it but it's probably worth it right because as far as these sort of
reports go, this is a good one. Yeah, I mean, the incident response team over at CyberCX are
absolutely the best in the region, like they are very good at what they do and they get into some really
interesting places, so yeah, I think it's worth a read even if you do have to fill in a form. Now, there's some really interesting key
findings here. Where's that? Page five, right? So the key findings just on their
own are very interesting. One is that MFA, if you're not using phishing-resistant
MFA, you're not doing really anything to slow down business email compromise,
because phish kits these days grab session tokens, not credentials. And I think this is something that's not perhaps well understood enough these
days. If you're not using some sort of WebAuthn or U2F or YubiKey or some sort
of phishing-resistant, um, you know, MFA technology, it's rubbish.
Like, you know, you're basically not using anything at all.
Yeah. Yeah. And I think they specifically talk about a case they had where someone got compromised and they had conditional access
policies restricting logins to only come from Australia.
And the actor just went, eh, no problem.
We'll come from a NordVPN or whatever out of Queensland.
Job done, move on.
Because those kinds of controls are not
effective against an attacker that's motivated.
And that's, you know, get the job done whatever way works.
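As an aside on why that class of MFA holds up, here's a toy sketch of the WebAuthn-style origin check that a session-token phish kit can't relay, assuming a hypothetical relying party at login.example.com. The clientDataJSON field names follow the WebAuthn structure; everything else is illustrative, not a full verifier.

```python
# Toy illustration of WebAuthn's phishing resistance; not a complete verifier.
import base64
import json

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying party

def _b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def verify_client_data(client_data_b64: str, expected_challenge: str) -> bool:
    """Reject assertions minted on any origin other than the real site."""
    client_data = json.loads(_b64url_decode(client_data_b64))
    if client_data.get("type") != "webauthn.get":
        return False
    if client_data.get("challenge") != expected_challenge:
        return False  # replayed or mismatched challenge
    # The browser, not the user, fills in the origin, so an assertion generated
    # on a lookalike domain being proxied by a phish kit never matches.
    return client_data.get("origin") == EXPECTED_ORIGIN

# An assertion relayed through a lookalike domain fails the check:
phished = base64.urlsafe_b64encode(json.dumps({
    "type": "webauthn.get",
    "challenge": "abc123",
    "origin": "https://login-example.evil.com",
}).encode()).decode()
print(verify_client_data(phished, "abc123"))  # False
```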
Now, some other key findings here that are less interesting to talk about, like the healthcare industry was the most
represented sector, and some stolen data is never advertised on the dark web, it just
gets stolen. But a couple of interesting bits here: the espionage-
aligned incidents, like the dwell time is insane, right? So you've got the financially
motivated actors who do smash and grab, and then when you look at the espionage stuff, like,
the time to detect was 400 days. And espionage-related incidents,
I mean, they're 5% of their caseload, so it's a non-trivial amount of incidents.
Yeah
No, they definitely do a lot of work in that kind of area. And the contrast of dwell time, I mean,
the average is kind of skewed around
by some of the big numbers, but I mean,
we're talking three years in some cases
of dwell time in an organization
before the actor gets snapped.
And I think even then, some of those are
because they also get ransomwareed,
or they also get other actors in there,
and then you start investigating,
and once you roll IR, you find, you know, all of the, you know,
Chinese or whoever are up in the Exchange server.
Well, we saw a story this week about some company in the United States that makes
label makers or whatever. They got ransomwared and then they did DFIR, and it
turned out like they'd been Magecarted for, like, you know,
eight months before that or something. So this is, this is a thing that happens.
But then there's the other interesting findings here,
which is there's a downward trend of C2 framework usage
because of EDR.
So you can't just Cobalt Strike your way around to glory
anymore.
But that is sort of counterbalanced
by finding that EDR is just too often misconfigured or not
appropriately monitored and is not doing what it should be doing. Yeah, I
think that was one of the takeaways here, is that EDR is definitely causing
problems for attackers, because there's a couple of areas where they are changing
their behavior, so for example the increased use of DLL side-loading to
circumvent EDR detections.
But yes, EDR still has to be monitored and configured correctly to be effective, and clearly, you know,
although it's being deployed more, it's still not, you know,
doing everything that people expect of it, just because, you know,
hey, this is fiddly and complicated and you do need to monitor it and tune it and so on.
Yeah, I mean DLL side loading right? I think it's important that people understand
that EDR will flag it.
It's going to pop an alert.
When something shady happens as a result of a DLL
being sideloaded, it's going to generate alerts.
But they're only useful if someone is actually
watching the alerts.
And I think that's what people don't necessarily
realize about EDR is without the monitoring piece
and good monitoring, it doesn't really get you as far as people
realize. Like the endpoint protection side of it is not incredible.
Yeah. But they talk a bit in this report about attackers using web shells for, you know,
backdoors for long-term access, to long-term persist. And those of course don't
trigger EDR until you use them. So you can go sprinkle them around the web
servers, and then, you know, sure, you're gonna get snapped when you spawn a cmd
with them, but until then you've still got a way in, and then, you know, you've just
got to outrun the people looking at the logs and the response and so on. So it
gives you an opportunity to have another swing at it even if you get evicted next time.
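Since web shells only trip EDR when they spawn something, a lot of the hunting for them ends up being file-level. Here's a rough, assumption-heavy sketch of that idea: sweep a web root for recently modified script files containing strings that commonly appear in web shells. The extensions, marker strings, time window and path are all illustrative, not a vetted IOC list.

```python
# Illustrative web shell sweep; tune extensions, markers and window to taste.
import time
from pathlib import Path

SCRIPT_EXTS = {".aspx", ".ashx", ".asp", ".php", ".jsp"}
SUSPICIOUS_MARKERS = [b"eval(", b"Request.Item", b"ProcessStartInfo", b"cmd.exe", b"base64_decode("]
RECENT_DAYS = 14

def sweep_webroot(webroot: str):
    cutoff = time.time() - RECENT_DAYS * 86400
    hits = []
    for path in Path(webroot).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in SCRIPT_EXTS:
            continue
        if path.stat().st_mtime < cutoff:
            continue  # only look at files touched recently
        data = path.read_bytes()
        markers = [m.decode() for m in SUSPICIOUS_MARKERS if m in data]
        if markers:
            hits.append((str(path), markers))
    return hits

if __name__ == "__main__":
    for path, markers in sweep_webroot(r"C:\inetpub\wwwroot"):
        print(f"review {path}: matched {markers}")
```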
Yeah, it's funny actually, one of the recommendations in the report is that people consider using Airlock Digital, which as everyone knows
I'm a huge fan of, but also CyberCX are actually a shareholder in Airlock. So I think that's actually a bit cheeky.
But look, I mean that's also the right solution to the problem
That's the thing. It is also the right solution to the problem, right? Which is they specifically
Solve these sorts of issues, right?
Like, you might have a couple of exceptions or whatever, like, yeah,
you're going to have a few exceptions and maybe an attacker gets lucky,
but still it's going to slow them down and whatever.
And like it is shifting that whole protection left, right?
Which is you can't even execute it.
You're not relying on a tool to detect something after it's gone. You know,
you're, um, uh, you're stopping it from happening in the first place.
But all in all, an interesting report. And here's something else that was interesting.
So we've got someone starting on Monday, basically like a trial run to see if she likes working
with us and if we like working with her for this editing job, right?
Doing production and editing text, video, audio, all sorts of stuff, because we're at the scale now where we need that.
It's cool actually because it's actually Amberley Jack,
who's the sister of the late Barnaby Jack.
And when I first met Barnaby, the first thing he ever said to me is,
like, my sister's a journalist.
And so it feels very cool that she's going to come and see how she likes working with us.
But it meant that I had to go through the process, the mental exercise of thinking,
well, we're going to need to provision her with some sort of managed device.
Right. And what sort of protections are we going to need to put on that device?
And there were three. And it almost surprised me where I landed up, where I landed, excuse me,
because the three things that I decided we would
need to provision to our remote, our one remote user who really needs this stuff because she's
going to have access to a lot of social media, our CMS, you know, etc, etc. You know, I thought,
okay, Airlock Digital for the anti-malware side and to restrict browser extensions, because they
do that now as well. We've got Knock Knock to restrict access to our content management system so that ties that to
her Google Workspace SSO status. We can use Google Workspace to limit OAuth grants. We're going to
have to set that up and that's going to be a nightmare in terms of navigating the Google
Workspace admin interface to actually make that happen. So much white space and yet still so
difficult to use. What are you doing Google? Yeah. And then the final piece, right,
and this is the one that I thought was surprising is like, um, cause I always, you know, I kind of
thought they were a nice to have and I've realized they're more of a must have now. And you know,
full disclaimer, I am an advisor to the company, but Push Security, because they make a browser
plugin, which is going to detect these phish kits that are outlined in this report, right?
And they've got password protection features as well where you can't actually reuse passwords, you know,
on different sites and whatnot, it just won't let you paste them in.
And it also does, you know, phish kit detection to detect the major phish kits that do this token theft stuff.
So I thought that was just a really interesting thought exercise where, you know,
I didn't land on CrowdStrike and an MDM solution.
You know, I landed on some stuff that is,
I guess, kind of esoteric,
but is actually gonna solve the problem.
Yeah, yeah, I thought, you know,
we've talked about this a bunch,
about, you know, what we're gonna do
and how we're gonna do it and so on.
And yeah, it's been interesting to walk through
this exercise of like, what does a 2025 AD new set up
for this look like?
Like, what do we need?
How are we gonna make it work?
And yeah, like it is a different list than, you know,
perhaps at first blush you would have thought of.
So yeah, I guess we'll talk to our listeners later on
once we've had some experience with it and see how it works for us.
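On the OAuth-grant piece of that setup, a hedged sketch of what auditing grants in Google Workspace can look like, using the Admin SDK Directory API's tokens listing. The admin account, scope and service-account delegation setup here are assumptions; check the Admin SDK documentation for your own tenant before relying on it.

```python
# Hedged sketch: list third-party OAuth tokens a Workspace user has granted.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

def list_oauth_grants(admin_user: str, target_user: str, key_file: str):
    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=SCOPES
    ).with_subject(admin_user)  # domain-wide delegation as an admin
    directory = build("admin", "directory_v1", credentials=creds)
    tokens = directory.tokens().list(userKey=target_user).execute()
    for token in tokens.get("items", []):
        # Review what each app is, and which scopes it holds.
        print(token.get("displayText"), token.get("clientId"), token.get("scopes"))
        # directory.tokens().delete(userKey=..., clientId=...) would revoke one.

if __name__ == "__main__":
    list_oauth_grants("admin@example.com", "newhire@example.com", "sa-key.json")
```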
And we're not posing this as a challenge, by the way, to anyone who would like to bypass these controls.
We're doing the best we can, but I don't know, you're the CISO, right?
Thanks, buddy. Yes, well, I guess if someone wants to deface us then post on our LinkedIn then.
Hey, well, best of luck to you.
Yeah.
Now look, just, just going back to that report for a moment.
Another thing that it's highlighted, and this is something that I'm hearing
across the board from everyone I talk to who's, you know, adjacent to incidents,
which is living off the land is, you know, that used to be kind of niche and now it is just standard workflow, right? And just more and more people are just using
whatever baked-in tools they can, whatever, yeah, whatever existing code
there's there that you can misuse, that's how people are doing it. And we'll talk
about the lull bins, but also this report looks at how an attacker was
actually using Microsoft's e-discovery tools to do exfil, which is just like, I We'll talk about the lull bins, but also this report looks at how an attacker was actually
using Microsoft's e-discovery tools to do exfil, which is just like, I mean, it makes
total sense, but it's also like a little bit, you rub your temples a bit thinking about
that.
But yeah, I mean, I can't say I'm terribly surprised that people are moving more towards
LOLbins.
Yeah.
I mean, it makes sense that, you know, LOLbins and also existing kind of remote
management tools, either ones that exist in the, like in Windows, but also ones that are
in use in the environment or are generally widely available.
They also call out the use of cloud for data exfil, so rather than having to, you know,
RAR up your files and send them out the old-fashioned way where you might get spotted
by, you know, DLP controls or whatever else.
Like using cloud-based file storage,
whatever that's already in use in the environment to exfil it.
If your data is already up there in the cloud
and they can get it out, then you're not
sending it out of the network a second time.
So that was interesting.
I think one of the cases they worked,
the same one with eDiscovery for stealing people's email,
they were also adding,
they added like extra credentials to the cloud accounts
used by the whatever, like mail filtering provider
they were using.
So just logged into the Azure portal and the Entra ID portal
and then add extra accounts to that
so that you can then use that to exfil the mail.
So an increased agility in using those kind of
cloud environments and using those tools against you,
be it OAuth grants, be it setting up extra accounts
with access, I think one of them,
they set up Microsoft Graph API access
so they could use that to query data
out of the cloud remotely.
So, yeah, just-
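A hedged sketch of one way defenders hunt for that "extra credentials quietly added to an app" pattern: query Microsoft Graph for application registrations and flag recently added client secrets. The /v1.0/applications endpoint and passwordCredentials field are standard Graph, but the token acquisition, required permissions and the 30-day window are assumptions for illustration.

```python
# Illustrative hunt for recently added app client secrets via Microsoft Graph.
from datetime import datetime, timedelta, timezone

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def recent_app_secrets(access_token: str, days: int = 30):
    headers = {"Authorization": f"Bearer {access_token}"}
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    url = f"{GRAPH}/applications?$select=displayName,appId,passwordCredentials"
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        for app in body.get("value", []):
            for cred in app.get("passwordCredentials", []):
                start = datetime.fromisoformat(cred["startDateTime"].replace("Z", "+00:00"))
                if start >= cutoff:
                    print(f"{app['displayName']} ({app['appId']}): secret added {start:%Y-%m-%d}")
        url = body.get("@odata.nextLink")  # follow paging

# recent_app_secrets(token_from_your_auth_flow)
```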
Well, and what's funny is how the new stuff echoes
the old stuff, because a lot of the time,
they're spinning up accounts that have MFA exceptions,
because they're essentially service accounts but cloud.
And it's like, oh my god, we've made progress,
but in some ways, we're kind of back where we started.
Yeah, these are all the same things,
but just a different flavor, like the same, you know, pattern, but in the cloud versus on-prem. And
it does make you wonder like if you did the old you know classic Brett Moore same bug
different app spreadsheet where you make a list of all the applications and all the bugs
that you know about and then look at the gaps where those bugs don't exist in other products
that you use then you know what bugs to go look for.
There is a lot of scope for exactly the same thing here with on-prem AD and, you know,
business networks, to Azure and, you know,
Entra and all of the cloud stacks. So it all needs to be found there. Lots of fun to be had.
Yeah, I mean it's so funny that we've spent 20 years like throwing mud at AD.
And then, you know, we kind of realize, well, OK, there's
some serious foot guns in AD.
But even when you take them away and you
move to an all singing, all dancing cloud future,
identity abuse is just inherent to any directory.
It's an inherent problem.
And the tooling on AD is OK now. Cloud tooling for this stuff, I mean,
now we're seeing a lot of investment activity
and a lot of startups trying to address this problem,
but they're not mature yet.
Like it is kind of insane where we are at the moment.
You look at how people are running through organizations
and you just think, well,
we should have been better prepared for this and we're not.
Yeah, I mean, I think if anything, like whilst AD was a mess, it kind of moved more slowly
than the cloud.
It took us a long time, though, from what, NT 3.51,
NT 4, whatever it was that introduced
the kind of, what became, modern on-premise Active Directory.
It took us 10, 15 years to really get our heads around
how to do that robustly. Whereas the cloud,
right, it's just moved, especially Azure and Entra, like it's moving so quickly that no one
has time to figure out what's best practice or whatever else. And, you know, we're going to be
finding new and exciting things in that setup for such a long time. And actually, I think that while you were talking about AD,
that occurred to me, it was,
we can imagine, like, a parallel future where
Sun hadn't been ruined and Sun's LDAP
and Unix environment and "the network
is the computer" future had happened.
We would have all these same problems,
except they'd be in Sun's LDAP and Sun ONE Directory,
whatever it was called. And it would totally be worse. Sun RPC services, getting rid of it? I'd love
it. But the Windows future happened, but we would have been in this boat regardless of
Microsoft or Sun or whoever else.
Another thing that is interesting to me about all of this is just how much this reinforces
the idea that the browser is the new operating system because that's where
OAuth grants are made.
For example, OAuth grants are the new code execution, and authentication tokens are the
new, you know, the tokens that you bear to prove an identity, and that may lead to
code execution.
And like, it's just, it's just crazy how much of this is just about browsers,
you know, granting authorizations and exchanging tokens.
And that can lead to a full compromise of cloud environments.
And to make it worse, like external SaaS as well.
Yeah. And I think that's kind of where we ended up with, you know,
Airlock for controlling the underlying OS,
so making that robust, and, yeah, Push for making the actual real OS, i.e. the browser,
you know, do the things that we kind of need to manage, you know,
like the high-level security properties that actually really matter. Whereas, you know, underlying file system permissions or whatever else,
you know, underneath the browser, really not that relevant anymore.
Yeah, I mean the missing bit there is the OAuth grants,
which you kind of have to do, in our case,
because we're a Google shop, we have to do that via Google,
can't really do that with Push,
they do some limited stuff there,
and you can actually use that tool to pull in
what's authorized and whatever, get a bit of a list.
But in terms of controls, and then, oh man, I mean, Airlock covering browser extensions is good
because that's been a way in: malicious extensions that lead to token theft and whatever. But
yeah, it's just all, all the action is identity and you know, the way the browser interacts with
the identity and stuff. I do feel like we've solved or at least, you know, we've got good
tools for some of those old challenges. And you know, what does this mean for ransomware? Right? When we when we look at all of
this, what does this mean for ransomware? Are we going to see more attacks against
cloud services? And I think I think we absolutely will, which is just great.
Yeah, I mean, at some point the internet will run out of vulnerable Fortinets.
Yes. And they'll have to come up with some new tricks. I mean, I don't think
that's gonna happen very quickly, unfortunately. Probably not.
Crazy stuff.
Anyway, that concludes our discussion about the CyberCX Threat Report.
We've put a link into the landing page in this week's show notes, but congrats to all
of the people who worked on that.
It was very thought-provoking, interesting stuff.
And of course, that concludes also our news segment this week.
Adam Boileau, thank you so much for joining me.
We'll do it all again next week.
Yeah, thanks, friends. Pat, we certainly will.
That was Adam Boileau there with a check of the week's news and also a discussion of the CyberCX threat report.
Very interesting stuff. This week's show is brought to you by Dropzone. Dropzone makes an AI-based tier one SOC analyst, right, so it can do
some of that drudgery that all of the SOC analysts have to do in terms of like
looking at those first stage alerts seeing if they're a false positive or
worth investigating. It's very cool, I've been through it with Edward, who is our
next guest, Edward Wu, who is one of the co-founders of Dropzone, and yeah, it's very interesting stuff. Now,
they have been around a little bit longer than some of the new entrants who are sort of flooding this space, right?
It's a really sensible application for AI, which is why there are so many companies now flooding the zone
and
You know because they've been around for a while, they've been able to bump into a lot of the corner cases
that pop up when you're trying to use an LLM
to automate this sort of low level decision making.
And one of the issues they've come across
is actually an interesting one,
which is model coachability.
AI models, as it turns out, they're really stubborn.
Like you'll tell them, stop doing that. And they're like, yeah, whatever, meat sack, I'm a robot,
I know better.
So they just keep doing the same old stuff.
So this is something that Edward and his team have had to work at really quite hard.
And here's Edward talking about all of that.
Enjoy.
What we have found is, unfortunately, most AI agents, when you first build them, they come across
as somewhat smart, but very stubborn,
because they don't take your input,
they ignore specific organizational policies
and practices you wanted to adopt.
So what we have done is, as we are building our product,
we added a number of capabilities that allows
the users of our AI socket analyst to influence the activity,
the technique, and the behavior of our AI system.
By doing that, we transform
an AI agent from somebody who is very kind of smart, but quite stubborn,
to somebody who is smart, but can actually listen to additional instructions and do things,
you know, in a way that aligns with other members of the team.
Yeah, and I'd imagine too that this type of coachability to
some degree like it's better than coaching a human because you only need
to tell it once you know it's you know you're not gonna have you know it's not
gonna leave it's not gonna resign and be replaced with someone else who hasn't
had those instructions. So why don't you first of all let's talk about like what
you can coach it to do and then we can talk about how you actually do it.
So what sort of stuff, we were talking earlier
and it's sort of stuff like,
if you don't alert on this unless it happens over here,
or maybe we wanna know about this,
if it happens over there, it's that sort of stuff, isn't it?
It's really the sort of instructions and training
you would give to someone, a human in the SOC.
Correct, yeah, so there's a couple concrete examples.
One very common example is, hey, we're a cloud-native AWS
shop.
We have 500 different AWS accounts.
We frequently get publicly exposed S3 bucket alerts.
It's a very common alert.
But hey, for these 50 accounts,
we actually don't care too much about this type of alert
in those accounts, because those accounts were
for our partners and we just stuffed a bunch
of fake data into them so people can play with that.
So that's kind of one concrete example where something,
it's something that you would have told a new hire
joining the security team as well.
But I can see exactly why you would have needed
to do this, right?
Because initially it just would have been stubbornly alerting
on something like that in a test environment.
There was no real way to tell it not to, right?
So yeah, that's funny.
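To make that concrete, here's a toy illustration of the kind of coaching note being described, not Dropzone's actual data model: a rule that marks public-S3-bucket alerts in named sandbox accounts as expected while leaving everything else alone. The account IDs and field names are made up for the example.

```python
# Toy coaching-note structure; purely illustrative, not a real product schema.
SANDBOX_ACCOUNTS = {"111111111111", "222222222222"}  # hypothetical partner/demo accounts

COACHING_NOTES = [
    {
        "applies_to": "aws.s3.public_bucket",
        "condition": lambda alert: alert["account_id"] in SANDBOX_ACCOUNTS,
        "guidance": "Sandbox accounts hold fake partner data; treat as informational.",
        "verdict": "benign_expected",
    },
]

def apply_coaching(alert: dict) -> dict:
    """Attach any matching organizational guidance before the agent triages."""
    for note in COACHING_NOTES:
        if alert["type"] == note["applies_to"] and note["condition"](alert):
            alert = {**alert, "verdict_hint": note["verdict"], "guidance": note["guidance"]}
    return alert

print(apply_coaching({"type": "aws.s3.public_bucket", "account_id": "111111111111"}))
```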
Yeah, so these kind of organization,
it could be organizational policy, preference,
or sometimes practices, right?
This team always does this.
You should keep that in mind
as you're looking at security alerts.
The director of IT has a script that he or she runs
every now and then to reset people's MFAs
for a variety of different reasons.
And you know, this behavior is actually expected. So these kind of, you know,
organizational knowledge is one type of coaching that can be done. But another type of coaching
is more about, hey, when you are looking at AWS alerts, I want you to look at this specific label that's associated with the asset.
Why?
Because in our organization, we use this label
to determine if it's prod or dev.
Yeah.
So, you know, not something that you should be doing
everywhere, but in our environment, we use this label.
This label is like the ground truth of, you know,
of a lot of assets.
And as an AI SOC analyst,
you should really look at this label and
consider the value of that label
as you are looking at security alerts.
That's another example.
It's less about the context,
it's more about specific techniques that's
unique to the environment.
So how do you actually achieve
this with a large language model?
I'd imagine that, say you want to get it to do a thing,
you enter that somewhere into the drop zone interface,
and then in the background, what is it continuously reminding it?
Hey, keep in mind when you process all queries
that you need to do x, y, and z.
Is that kind of... I mean, that's just me, an idiot,
who doesn't really understand AI all that well.
That's how I'd imagine it would work.
Yeah. One way you can think of it,
it's kind of, I'm sure you have heard of systems like RAG.
So it's essentially a retrieval augmented system
where very similar to how,
like for example,
when we go through our day-to-day lives,
when I walk out of the door,
subconsciously, the activity of walking out of
the door and closing the door creates this reminder,
I should have probably locked the door as well as I exit my house.
I would say our
coachability subsystem actually works in a very similar
way, you know, you monitor what the agent is doing. And
when the agent is performing certain activities, that, you
know, correlates to certain configured, you can say, tips or suggestions or advice
that the user has wanted it to remember.
It will recall that advice,
which will then help to remind the system,
hey, as you are looking at AWS assets and you are
trying to understand the context about an AWS asset. Remember,
this user two weeks ago told you that you should look at this label. So it is kind of a retrieval
augmented generation system where, depending on the activity of the agent, the system is recalling certain advice
and tips and using that to remind the agent,
here are some additional things you should be looking at.
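A rough sketch of that retrieval loop, with a keyword-overlap scorer standing in for real embedding search so the example stays self-contained; the tips, field names and prompt shape are all illustrative rather than how Dropzone actually implements it.

```python
# Toy retrieval-augmented coaching loop: score stored tips against the agent's
# current activity and inject the best matches into the prompt as reminders.
TIPS = [
    "When looking at AWS assets, check the 'env' label; it is our ground truth for prod vs dev.",
    "The IT director runs an MFA-reset script from time to time; that behaviour is expected.",
    "Public S3 bucket alerts in partner sandbox accounts are low priority.",
]

def retrieve_tips(activity: str, tips: list[str], top_k: int = 2) -> list[str]:
    """Score each tip by word overlap with the current activity description."""
    activity_words = set(activity.lower().split())
    scored = [(len(activity_words & set(tip.lower().split())), tip) for tip in tips]
    return [tip for score, tip in sorted(scored, reverse=True) if score > 0][:top_k]

def build_prompt(alert_summary: str, activity: str) -> str:
    reminders = retrieve_tips(activity, TIPS)
    reminder_block = "\n".join(f"- {r}" for r in reminders)
    return (
        f"You are triaging this alert:\n{alert_summary}\n\n"
        f"Organisational reminders relevant to what you are doing:\n{reminder_block}\n"
    )

print(build_prompt(
    alert_summary="GuardDuty: unusual API calls from an AWS asset",
    activity="investigating context about an AWS asset and its labels",
))
```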
It's still pretty mind bending, all of this stuff.
Now we should point out too that your customers,
their data isn't just going into consumer grade models,
like everybody kind of gets their own instance
of these things.
What's interesting is, you know,
I work doing some advisory with you
and I know early on, like you would run
into a bit of a barrier where people were a little bit
nervous about like, well, is my log data
gonna start training like chat GPT
and then leak out through people's queries?
And of course, people are getting more comfortable
with the idea of, oh, okay, no,
you can sort of license your own tenant.
And there's contracts which say they can't train based on your data and whatever.
And there's a certain level of confidentiality that's similar to when you get an EC2 instance
on Amazon, et cetera.
And one thing that's changed recently, and I find this really interesting, is, you know, what some of the business drivers are that are leading
people to actually reach out to you at the moment. And it is boards having a
meeting and saying we need to find ways to make our business more efficient by
using AI. And they're sending that out to every single department, every single
manager in their entire enterprise.
And that's driving a lot of inbound for you.
And I think that's really fascinating
that we're seeing this as a absolutely a top-down thing
from boards who are like, we've got to find ways to paste AI
into every corner of our business
to extract that efficiency.
I mean, that's been a recent change, hasn't it?
Yeah, yeah, we have noticed a huge uptick from organizations
that as they wrap up 2024, they are looking at 2025.
I think between the recent DeepSeek announcements,
to NVIDIA stock crashing, to the $500 billion
Stargate project, I think gen AI is definitely top of mind.
Frankly, at this point, it feels a little bit difficult
to even isolate yourself from some of those news.
Yes, I can imagine.
For you, you go home at the end of the day.
The last thing you want to think about is LLMs.
You turn on the news, and there it is.
Yeah, yeah.
And I get calls from my parents, like, hey,
when are you building a stock trading AI using large language
models?
It's all over the place.
But yeah, I would say we have noticed definitely
a tremendous uptake in the number of organizations
where it's a board-level mandate.
We need to improve the efficiency
in every single business function,
and that obviously includes security teams.
And unsurprisingly, one of the most universal
and glaringly obvious needs for a security team
is alert investigations.
Detection response is a core component
of most security teams.
And as a result of that, yeah,
we have definitely
been getting tons of inbound interest
in learning about how our technology can allow security
teams to do a whole lot more than what they are currently
doing, but without doubling the budget
or doubling the headcount.
Yeah, yeah.
I feel like, yeah, 2025,
there's going to be a lot of prime time,
a lot of people using AI for real stuff.
But I mean, you built Dropzone to do what it does for a reason
because it is frankly the most obvious use case
for large language models in security.
Yeah, yeah, it's definitely a very,
and it's a very good use case as well
because from our perspective, our goal,
and when we are building Dropzone,
it's not to replace human engineers
and human analysts.
Yes, it's to take away the grindy, boring,
crappy parts of their job, basically, yeah.
Yeah, absolutely.
And we have seen that not only in enterprise security teams,
we have also seen that with some of
our early adopters and customers
on the service provider front.
As you can imagine, most managed detection response and
managed security service providers
have a lot of security analysts.
Frankly, a lot of these security analysts
are doing 24-7 alert investigation,
not because they couldn't do something
that's more interesting and higher value,
it's because out of necessity.
So one thing we have been helping,
in addition to enterprise with service providers on,
is allowing them to up-level their team so they can
allow the existing talent and engineers they have on
their staff to offer and work on higher value projects,
whether it's, you know, SIEM configuration, visibility fabric deployment, you know, security
architecture, red teaming and pen testing, instead of endlessly triaging things that
probably aren't incidents.
Yeah, no, I get what you're saying.
It's funny, though, because one thing
I know through discussions we've had
is there's really two different types of MSPs out there,
or MDRs.
There's the ones who have hacked up DIY, horrible, horrible
stuff where you're limited in what you can do there.
But there are these new categories of MDR where they're using some of the more established off-the-shelf tools, you know,
they might have like a multi-tenant arrangement of some big existing commercial SIEM or whatever,
and you're having a lot of success there. Yeah, yeah, absolutely. What we have seen is a lot of
these MDRs or MSSP's, like you said, they will have a multi-tenanted license of a large SIEM,
because that's what allows them to achieve economies of scale.
Then in addition to that,
they will oftentimes build their own content packs on top of those SIEMs.
So they're not taking Splunk,
Sumo Logic, or Panther, and just running it,
but they are actually building additional detection content packs,
which to some extent is their unique value proposition.
They have taken their expertise in security and built additional detection rules that are relevant
to the segment of customers and organizations they serve.
With those organizations, because they are using the same SIEM across different organizations
and different clients, that really allows an automation product like us
to plug into their ecosystem
and help them, you know, autonomously take alerts
from this multi-tenant SIEM, you know,
perform investigations by pivoting across
different data sources and tables within the SIEM,
and ultimately, you know, generate
draft investigation reports
that the service providers,
their human analysts, can review and build on top of.
We're out of time.
Edward Wu, great to see you, great to talk to you.
Always a fascinating conversation
and we'll chat again through the year.
Thank you.
That was Edward Wu from Dropzone there
and you can find Dropzone at Dropzone.ai.
And yeah, full disclosure, I am an advisor to that company.
Yeah, I'm into it.
Really cool stuff.
And that actually wraps up this week's edition of Risky Business.
I do hope you enjoyed it.
We'll be back through this week with more Risky Business for you all and another weekly
show next week.
But until then, I've been Patrick Gray.
Thanks for listening.