Risky Business #727 -- Mr Gray goes to Washington
Episode Date: October 31, 2023

On this week's show Patrick Gray talks through the news with Chris Krebs and Dmitri Alperovitch. They discuss:

The SEC enforcement action against SolarWinds' CISO
The White House AI executive order
CitrixBleed exploitation goes wide
How Kaspersky captured some (likely) Five Eyes iOS 0day
Elon Musk's Gaza Strip adventures
Much, much more

This week's show is brought to you by GreyNoise. Andrew Morris, GreyNoise's founder and CEO, is this week's sponsor guest. He talks about how GreyNoise is using large language models to help them analyse massive quantities of malicious internet traffic.

Show notes:
comp-pr2023-227.pdf
Biden signs executive order to oversee and invest in AI tech
Risky Biz News: CitrixBleed vulnerability goes from bad to disastrous
Andrew Morris on X: "Confluence bug is popping off. VAST majority of it is blasting thru Tor, similar to the first wave of Log4J exploitation two years ago. If you haven't patched, it's probably popped. https://t.co/4JC0uiTaqc https://t.co/wLDgQpq7r0"
How Kaspersky obtained all stages of Operation Triangulation | Securelist
Kaspersky reveals 'elegant' malware resembling NSA code | CyberScoop
Sophisticated StripedFly Spy Platform Masqueraded for Years as Crypto Miner
A cascade of compromise: unveiling Lazarus' new campaign | Securelist
Near-total internet and cellular blackout hits Gaza as Israel ramps up strikes
Amichai Stein on X: "Israel's Communications Minister @shlomo_karhi in response to Elon Musk: Israel will use all the means at its disposal to fight this. Hamas will use this for terrorist activity. There is no doubt about it. We know it, and Musk knows it. Hamas is ISIS."
Shashank Joshi on X: "Wonder what encryption, if any, they use? Vulnerable to tapping. "Hamas has maintained operational security by going "stone age" and using hard-wired phone lines while eschewing devices that are hackable or emit an electronic signature." https://t.co/ALVSXb55Zn"
Hackers that breached Las Vegas casinos rely on violent threats, research shows | CyberScoop
Octo Tempest crosses boundaries to facilitate extortion, encryption, and destruction | Microsoft Security Blog
GitHub - cloudflare/har-sanitizer
Russia to launch its own version of VirusTotal due to US snooping fears
iPhones have been exposing your unique MAC despite Apple's promises otherwise | Ars Technica
VMware warns of critical vulnerability affecting vCenter Server product
Judge tosses Khashoggi widow's lawsuit against NSO Group
Transcript
Hi everyone and welcome to Risky Business. My name is Patrick Gray and we've got a special
show for you today. I'm still in the Washington DC area and we've got two local co-hosts with
us here in our makeshift studio in Dmitri Alperovitch's dining room. We have with us
the first ever director of CISA, Chris Krebs, who is also a co-founder of the Krebs Stamos Group.
G'day, Chris.
Patrick, how are you?
I am great.
And also we have with us, of course, Dmitri Alperovitch, the co-founder of CrowdStrike turned geopolitics guy and also a podcaster.
Geopolitics Decanted is the name of his podcast.
It's very, very good.
Dmitri, thanks for joining us.
Hey, Patrick, when are you moving out?
Yeah, it's been a very, very fun 10 days.
And I'm here for a few more days and then headed out.
But it's been terrific.
So we'll be talking through all the week's news with these guys in just a minute.
And I should mention, too, that I am taking three weeks off after today's episode.
I am on vacation, so set your alert levels accordingly.
Adam Boileau is going to be holding the fort back in APAC,
so you can still get your six Risky Biz podcasts
every week delivered into your ear holes
by subscribing to our second podcast feed.
That feed is called Risky Business News,
and I'm really surprised, actually,
that there are still listeners of this show
who don't know that that feed exists. So you can find it by searching for
Risky Business News wherever you get your podcasts. It is a separate feed.
This week's show is brought to you by GreyNoise. And GreyNoise's founder, Andrew Morris,
is this week's sponsor guest. And funnily enough, we recorded this interview a few weeks ago when I
was still in Australia. But when I first got to the US, I actually spent a weekend at Andrew's
place. And he was a terrific host. And that was a lot of fun. So thank you, Andrew.
But in this week's sponsor interview, he's going to be talking about how he's using
large language models to automate the tagging of a lot of this shady internet traffic that
gets picked up by GreyNoise's sensors. And you know, if you're a bah humbug AI hater like me,
it is actually somewhat annoying how well
GreyNoise's thing works. That is coming up later, but let's get into the news now with Dmitri and
Chris. And we're going to start off by talking about, you know, and it's the talk of the town,
really, this SEC enforcement action against SolarWinds and the SolarWinds CISO. I understand,
too, and Chris, let's start with you,
SolarWinds were once upon a time one of your clients,
so I'm guessing you're not going to get into the specifics here.
But, you know, in general, what do you think about an action like this?
They were in fact client number one with the Krebs-Stamos Group,
first client in the door.
So obviously not going to be able to talk to the specifics of SolarWinds.
But I think there's kind of two outcomes here. First is that if you're working in an organization,
and again, I'm not making this specific to SolarWinds, but there will be a chilling effect,
I think, across industry, where if you're a CISO in an organization, a publicly traded one specifically... I would not be surprised to see some turnover in the CISO ranks.
Because getting the Wells notice this summer,
which means, hey, we're pursuing an enforcement action against you.
You have time to object here.
And then to see this,
the actual enforcement action go forward,
it's got to be scary for CISOs out there.
At a minimum, you're probably asking for a raise, right?
Dude, I don't know if that covers it, right?
No, of course not.
Like, yeah, I got a 20% pay bump,
but oh no, I'm banned from ever being a director of a company.
And your insurance policies aren't going to cover it
because they don't cover fraud,
or at least that's the allegation.
And that's what the SEC is alleging here.
Like, it's serious stuff.
The interesting thing is that, depending on the company, D&O insurance, directors and officers
insurance, may not cover you if you're not actually an officer of the company or a director of the
company. So at a minimum, if you're a CISO, you might be asking to review those policies and ask
for special coverage or be added to that existing coverage. Well, but along with E&O, errors and omissions, that's not going to cover it either,
potentially because it's fraud. And you can't insurance policy your way around that.
So I think that's one possible outcome here is that we do see some turnover in the ranks of the
CISO community. But on the flip side of that, I'm sure that there are CISOs out
there in more mature companies that are well-resourced, that have the support of their
leadership, that are probably not sweating this at all. Because they're like, look, I don't have
to do that. If I have challenges, I go to my leadership, I get the budget I need. I'm not
getting paper in front of me that says X, Y, or Z that I'm signing off on.
So again, independent of this case in particular, there is going to be some interesting fallout.
But we don't want a situation where we got CISOs, like an exodus of CISOs from the companies that
really need CISOs, you know? Like that's a bit of a-
I think at a minimum, what you're going to have, and we were talking about this earlier because Matt
Levine from Bloomberg had a great column on this.
It was so good. It's so good.
But the worst thing for SolarWinds here was the fact that you had all these emails in writing
from engineers of the company saying how bad things are. So at a minimum, you're going to have
policies from legal departments saying, don't write that stuff down. Yes, right. Get it out of email, get it
out of discoverable information. Because if you don't know for a fact that
your policies have been a problem, then the SEC can't claim it's securities fraud.
I mean, just from my, you know, I'm obviously not a lawyer, and I read the SEC release on this and
I'm like, wow, this does actually seem pretty weak sauce. Like, I thought everyone got really
uptight when the stuff happened with Joe Sullivan around Uber. To me, that always seemed like,
you know, at the very least there was a case that had to be worked through there. But you look at this
and you just think, how many companies do we know, right, where that's the state of the security? Like, you
know, there's people internally warning about all sorts of horrible things. That's pretty
much situation normal at an awful lot of places. I think the SEC actually knows this, and I think what they're
trying to do is make an example out of SolarWinds in the hope, I don't think it will actually work, but in the hope that it's going to raise the security standards across all kinds of organizations.
I think what's going to happen is that people are going to get more lawyered up.
They're going to want to say, I don't want to know this versus I'm actually going to spend the effort on making myself more secure.
Yeah, I mean, are you awaiting some sort of magical brief of evidence
that's going to make this look less dumb than it is, Chris?
Or do you think it is as dumb as it looks?
I mean, when you read the filing, there are some claims, I think, in there
that are a little bit out there by the SEC, that the defense counsel should
maybe have some degree of success contesting in the case.
But again, kind of stepping back.
So there was a good Steve Schmidt article.
So AWS, or I guess Amazon, chief security officer now.
And it's like the six or seven questions
you should ask your board.
I think it was in Fortune or something like that.
And he said one of them is like, who really owns the risk? And it's the business unit
or the division head. Now I agree with that, that, that those that own the business need to be able
to accept the risk here, but that's not what we're seeing play out here. And I don't, I think that's,
that's not what the board expects. The board is expecting that the CISO that's tabbed with the
chief and the information and the security and the officer label is the one that's signing off on these things.
And so we're going to have to right size this.
And, you know, I've said this before, but I've never met a CISO that is successful when their C-suite doesn't support them.
Like, that is the threshold question
for how you're going to get through all this stuff.
But I thought the weakest part of the SEC filings...
The password stuff?
The password stuff is fine,
but their claim,
and I'll read it from the filing right now,
a reasonable investor,
considering whether to purchase or sell SolarWinds stock,
would have considered it important
to know the true state of SolarWinds password policies.
Yes.
Like what investor,
I'm not even talking about a reasonable investor.
I wonder if there's a single investor in the world,
even the most sophisticated hedge fund
that is actually looking not just at the password policy,
but the security policy of any company
before they're investing in that company
or deciding to sell the company stock.
There's this whole section of the SEC complaint,
which talks about SolarWinds, like, you know,
misrepresenting the state of its password policy
to the market and like, yeah,
find me an investor who cares.
And Matt Levine, look, for those who don't know,
Matt Levine writes a blog for Bloomberg,
writes a newsletter for Bloomberg, I'm sorry,
called Money Stuff, which unfortunately,
if you try to get it on the web, it's behind a paywall,
but the email is free and it's just terrific.
And, you know, he's got a running joke about how everything is securities fraud, right?
And looking at this, like, yeah, passwords, bad passwords as securities fraud. And it's,
it's a really intelligent write-up, but yeah, it just seems, you know... And normally I'm like,
well, let's just wait and see. You know, like in the case of the Sullivan stuff, I was very much
of that mind, but this, just right out of the gate, I mean, it does look pretty dumb. It just does.
It's going to be interesting to watch. And you know that everyone out there in our
community is going to keep a close eye on this, because it's going to have massive
implications one way or the other. But by the way, I think the one difference,
and obviously the cases are very, very different between the Sullivan prosecution and this.
But the important thing about this is that this is a civil measure, not a criminal one, right? A lot of the CISOs were
very concerned about the Sullivan case. And I agree with you that I think it was misguided.
But going to jail is very different from not being a director of a public company in the future.
Well, I mean, even though the charges were proven against Sullivan, he didn't go to jail. So I think,
you know, people were just hyperventilating about that one. He is appealing it too. So
God knows where this is going to be in a couple of years as a case study.
But look, let's move on.
We've got heaps more stuff to talk about.
But seeing as I'm in Washington, may as well talk about the AI executive order.
And oh boy, I've spent a bunch of time with policy people since I've been here, been to
a bunch of dinners, went to a Hewlett Foundation event in Philadelphia
and met with a bunch of people there.
And AI is just the talk of the policy people right now.
And the executive order came out today, didn't it?
When did he sign it, today?
Yesterday.
Yeah, so-
I was at the signing.
You were at the signing, that's right.
So what do we think about the executive order on AI? Because I have just read
the coverage about it. I haven't read the order itself. And I was expecting it to be really
infuriatingly dumb. And it actually looks kind of reasonable. What do you think? Let's start with
you, Chris. First off, it would take probably the entire flight back to Australia for you to get through the EO. It's like 290 pages.
It's 118 pages.
118 pages.
118 pages.
It's a monster.
It's probably five to six standalone EOs cobbled together.
Have you seen an EO that long?
I have not.
I have not.
It covers a lot of waterfront.
There are probably 10 different issues involving a dozen plus agencies.
So it's a beast. But what's also remarkable is the fact that it was not even a year ago,
it was November 30th, 2022, when ChatGPT hit public release. So in 11 months, we're talking
11 months that the technology landscape from a
policy perspective has changed just so dramatically. I mean, it really, I don't think it
dawned on me until the RSA conference this year in San Francisco. And what was that, April? You go
out there, and that's all everybody was talking about, was AI. And just from a cyber perspective
alone, it could take up a whole week of conversation.
So I think that they were able to pull this together,
cover as many bases as they did.
It's remarkable.
It's impressive.
That's what I thought.
And,
and the fact that they could actually put out something that doesn't just
seem like the first take reactionary thing.
Do you know what I mean?
Like, someone
smart has clearly put some time and thought into this, and it covers the whole gamut from model
safety to protecting the industry to all sorts of stuff. Dmitri, you got some thoughts?
So the someone that put a lot of thought into it happens to be a friend of mine, Ben Buchanan, who was
responsible for putting this together. Obviously a lot of people contributed to this, but this is an individual that knows AI inside out, wrote a book on AI, has spent several years at
the White House working this. That's why you're seeing this being very, very technical. There are
specific requirements with regards to the interconnect networking between chips, the maximum computing
capacity in floating point operations per second for training models
and for doing inference on them. So
there's a lot of thought that went into this on a whole range of topics, from
regulating biotech, which is a huge concern for people with regards to AI. I
think a lot of people think, with good reason, that AI is
going to revolutionize medicine. That's fantastic, but you could also use it to create very dangerous
compounds in the future. So there's attempts to regulate that. There's obviously an attempt to
introduce watermarking standards to try to deal with the social engineering issue. I think that's
going to be really difficult. No one has... I mean, it depends, though. I mean, if you've just got the major
platforms that are generating things like imagery, if you make it a requirement that they have to
watermark stuff, I mean, I think that's... Well, first of all, I think technically doing that, in a way that you can't easily strip the watermark, could be
very challenging. And secondly,
unfortunately, some of the open source models are getting very, very good. So, being able to strip that out.
Yeah, but again, this is a scale thing, right?
And I just think if you rob the ability of someone with bad impulse control from just being able to really quickly do something dumb with one of these models and have it not be watermarked, I think that's a positive thing.
So on the watermarking thing, I'm not sure we're thinking about it the right way.
And it's not so much social media platforms or whatever that we're worried about. It's going to be content creators
that self-apply the watermarking to their own stuff. And so what you start to see is a provenance
distribution across the information ecosystem where you can kind of tell the provenance,
you can use your own discernment to figure out if it's real or it's not. So I would expect that the White House, that the Biden campaign, will self-select
into whatever the guidelines are, to ensure
that any of the media that comes out of the Biden administration or the Biden campaign is labeled, so it can't be misused,
unlike anything that's not labeled.
So sort of like, yeah, I've heard people describe it sort of like DRM, but for, you know.
And you self-selected.
But the issue, Chris, is going to be, I mean, yes,
from the campaign literature,
you may have watermarks coming out of it,
but when you're on a call with someone, a Zoom call, right,
and it's an AI-generated image or it's a voice call,
someone calling you up,
your ability to discern that that's real or not
is going to be very, very difficult going forward.
Absolutely, but I'm not sure that's the use case.
But this is a different problem.
Yeah, I'm not sure that's the use case.
Moreover, let's be clear here.
The bad guys aren't going to play ball anyway,
so this is only going to apply to those who play ball.
Now, that in and of itself can be useful.
That's how you can control your own likeness,
to your point about DRM,
and keep control of whatever you generate out there.
And this is gonna get,
they're gonna get the end run.
They're gonna work around it.
Bad guys are gonna work around it.
This is maybe a couple-of-years solution.
Yeah, it's gonna be messy.
Absolutely.
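What Chris is describing is essentially a content-credential scheme: the creator signs a claim binding the media to its origin, and anything unlabeled earns extra suspicion. Below is a minimal illustrative sketch of that idea in Python, not the actual standard the executive order contemplates; real schemes like C2PA use public-key certificates and embed the manifest in the file, and the key and manifest layout here are invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the content creator (e.g. a campaign).
# Real provenance schemes use public-key certificates; a shared-secret HMAC
# is used here only to keep the sketch self-contained.
SIGNING_KEY = b"campaign-signing-key"

def attach_provenance(media: bytes, creator: str) -> dict:
    """Produce a manifest binding the media bytes to a creator claim."""
    claim = {"creator": creator, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "signature": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()}

def verify_provenance(media: bytes, manifest: dict) -> bool:
    """Check the media is unmodified and the claim was really signed."""
    claim = manifest["claim"]
    if hashlib.sha256(media).hexdigest() != claim["sha256"]:
        return False  # media altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

media = b"...rendered campaign video bytes..."
manifest = attach_provenance(media, "Example Campaign 2024")
print(verify_provenance(media, manifest))          # True
print(verify_provenance(media + b"x", manifest))   # False: altered content fails
```

As the discussion notes, this only authenticates what the cooperative parties label; it does nothing about unlabeled fakes beyond making them stand out.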
Ben Buchanan too.
And I just checked and yeah,
this is the author of The Hacker and the State, right?
Yeah.
Yeah, and he just published a book on AI last year as well.
I haven't read his book, but everybody tells me that I should.
From my point of view, you know, I was just at a policy powwow in Philly.
And I think the thing that really dawned on me is that, you know,
tech is such a fast-moving space,
and it's very difficult to see what's around corners.
So trying to focus on making policy move a bit quicker is something that I think everyone should do,
and this is just a good example of policy moving quickly.
But look, we've spent enough time on AI.
Let's move on to some proper cyber disasters, shall we say.
There is a Citrix bug called Citrix Bleed.
And basically you can, the reason they've called it that,
I guess it's a little bit Heartbleed-esque in that you can enumerate things out of Citrix servers,
including authentication tokens.
And this bug, I think it's been around a little while.
And then POCs started popping up online,
and now there is just so much scanning.
Like last week, we were talking about Cisco, about the number of Cisco devices.
What did I call it in the subhead?
40,000 to 50,000 feral Ciscos, right?
It's what we've wound up with.
And now it looks like we've got something similar happening with Citrix Netscalers and ADCs,
where there's just this mass scanning activity
and people are in bulk enumerating access tokens.
And I've linked through to Catalin Cimpanu's
write-up on this.
Not really sure what's going to happen next,
whether those threat actors are going to deploy ransomware
or whether or not they're just going to start selling
those access tokens.
But this is just something that's been,
like, the last year or two,
just anything on the edge of the network
that's based on anything not completely modern
is just getting done.
And the challenge is that people really are not doing a good job
of upgrading these things, right?
The Fortinet bugs, the Pulse Secure bugs,
this is just going to be yet another thing.
On Citrix and VMware and all sorts, right?
Right, but that's going to create problems for years to come
and it's going to cause a lot of ransomware, a lot of other intrusions, like all sorts of actors are
going to leverage this and not just today, not just next month, but two years from now, three
years from now. Yeah. Yeah. This one's going to be on the KEV list maintained by CISA, you know,
like that. Yeah. But I mean, that doesn't really help us when they're already in. No, no, no, no,
it doesn't. No, that's for sure. And I think you need to update your buttons.
So you've got like the America button,
but you're gonna need a Rob Joyce,
seizing the high ground button
and then your edge device comment
because this is evergreen.
This happens every week on your show.
There's just some new product
that we're talking about.
Yeah, and Confluence as well.
I've got a tweet here from Andrew Morris,
who's funnily enough this week's sponsor guest,
but they have observed Confluence
just going absolutely ape in their telemetry.
But there was a real interesting thing in here,
which is he's noticed that of his sensors
that he's got based in China,
it's not really showing up.
And he thinks that is because they block Tor exit nodes, right?
For censorship purposes.
And that's how a lot of the threat actors do this mass scanning.
They use Tor.
So, I mean, you know, I've been saying for years on this show
that a really nice low-cost thing you can do,
just to get rid of a lot of this sort of stuff from,
you know, from hitting your machines, is just block Tor. Or just deploy the Great Firewall right here, right?
Is that what you're suggesting? Are you pro-censorship? Yeah, no, not at all. But anyway, I've
linked through to a couple of tweets there.
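For anyone wanting to act on that block-Tor suggestion: the Tor Project publishes the current exit-node addresses as a plain-text list, so turning it into firewall rules is a few lines of scripting. A minimal sketch follows; the list URL is the one the Tor Project serves at the time of writing, and the emitted nftables rules assume an existing inet filter table with an input chain.

```python
import urllib.request

# The Tor Project's bulk exit list: one exit-node IP per line.
EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

def tor_exit_ips():
    """Fetch the current list of Tor exit-node IP addresses."""
    with urllib.request.urlopen(EXIT_LIST_URL) as resp:
        lines = resp.read().decode().splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

# Emit one drop rule per exit IP; adapt for your firewall of choice.
for ip in tor_exit_ips():
    print(f"nft add rule inet filter input ip saddr {ip} drop")
```

Exit nodes churn constantly, so a real deployment would refresh the list on a schedule rather than emitting rules once.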
Now, let's talk about Triangulation. This is a campaign that targeted researchers at Kaspersky, actually. This was some months ago, and it looked like it was some, you know,
most likely some sort of Western intelligence operation
targeting people who worked at Kaspersky, their iOS devices.
And Kaspersky managed to catch this,
pull the 0day out of the payload and get it to Apple,
and, you know, that's how we wound up with a bunch of Apple 0day
getting patched at the time this was discovered.
But they've now published a blog post talking about how they were able to get the payload of this thing.
And it wasn't entirely straightforward.
It's a great write-up because essentially what would happen is their device would get owned.
You know, one of their iPhones or whatever would get owned.
But the threat actor behind this was doing a very good job of nuking the payload pretty much instantaneously.
So it was not possible to recover it and discover those wonderful bugs.
And then, yeah, these Kaspersky people went down the rabbit hole of trying to get the payload.
And what was interesting too is the implant that was being dropped on them didn't survive reboot.
So there was this constant process of their devices sort of rebooting and then getting reinfected. So they knew that they would have an opportunity to actually grab these bugs.
And eventually they did. But Dmitri, you read this and I know you really enjoyed reading this.
Well, there are a couple of things that stand out there. One of them is obviously the level
of effort, literally months of effort for them that went into trying to capture these exploits.
And the fact that they didn't give up
and get bored throughout all of this
is really a testament to...
Oh, come on.
Are you going to get bored?
Like when you know you're being 0dayed?
Well, actually, bored is the wrong word.
The fact that their bosses didn't come and say,
we're going to pull you
and have you do something else is really interesting.
Yeah, yeah, yeah.
But what this is actually testament to
is how difficult it is to do forensics on iOS devices.
Yes, they're incredibly locked down and it's difficult to compromise them, but when you do
get compromised, we've talked about this a lot, a lot of times it's really, really hard to figure
out what is going on. Very hard to detect it. And just the level of effort that they went through...
Now, they were limited. They probably didn't have access to American forensics tools,
like GrayKey, for example. Probably not available.
But see, the forensics tools wouldn't have helped them after the fact, right?
Because the threat actor was dealing with that.
I mean, this was less of a forensics issue and more of a...
Well, there were forensics issues
because they had to go through the backups
to try to get some of the data.
Yeah, but there was nothing there, is my point, right?
There was nothing actually there.
So this was more of an instrumentation issue where they were actually trying to catch the
payload as it came in.
And because Apple just loves locking everybody out of the devices, it was a giant pain in
the you-know-what.
Before they realized that, they went through a lot of effort doing forensics.
And ultimately, it was a dead end.
But because they didn't have some of those capabilities, it took them a long time.
But yeah, some of the techniques that Apple has implemented,
like certificate pinning, made it very difficult
to actually look inside the protocol and actually capture this.
They did a lot of work to try to figure out
how to break the exploit
so that it would actually leave some of the artifacts on the device
so that then they could use the forensics.
So the forensics were important in the end to actually capture it off the device,
so it was still pretty critical.
But a lot of effort, but just tells you how difficult this stuff is to analyze.
And Apple does not make it easy and does not go through any effort, really,
to make it easier for security researchers to, A, figure out that the device is compromised,
and B, to actually investigate it. I mean, this is just such a wonderful write-up of a bit of cat and mouse, and I think
everyone will enjoy it. And it's called How to Catch a Wild Triangle. I even appreciate
the title of the blog post there. So it looks like, yeah, they did a good job. Crushed someone's beautiful,
beautiful 0day. I mean, what's your gut feeling here? Like, it definitely seems like someone
Western, Western intelligence, but hard to know who. You kind of get the sense that... The fact that there was crypto at literally every stage
of the protocol, and well-done crypto, tells you that this is something, someone, a SIGINT agency,
that knows what they're doing. Yeah, and combine it with the counter-DFIR throughout. It just,
it's, yeah, it screams SIGINT. You know, you just gotta wonder who, right? And, you know,
I think it's also interesting
that they didn't bother with persistence.
And I wonder if that's because
they did know that eventually this would get caught.
You know, like there's some real...
Because it's conspicuous.
This is a typical model,
even with NSO and other tools like that.
Persistence gets your exploits caught.
So you really don't want to persist.
Well, it gets your implants caught,
not necessarily your exploits, but you really don't want to persist. Yeah, well, yes, implants, but implants could lead to catching
artifacts of the exploits too, potentially. So you really want to leave as little presence on the
device as possible. And honestly, when you think about mobile targeting, it's
not like targeting a desktop or server where you need continuous collection. And a lot of times it's okay to ping that device, you know, every few days, grab the data
off of it, and go on your merry way. So it actually lends itself well to those types of
devices that you're trying to infiltrate. Yeah, and it was like a malicious Apple Watch face
file, which is, yeah, so much fun. And it took them two months to figure out, right? Yeah.
Yeah, delivered via iMessage as well. So yeah, ouch. Now look, we're gonna stay with some Kaspersky
research here, because they've got some cool stuff, where they had a look at this thing that they
first saw in 2017. And it's, like, it's a crypto miner. But then eventually they took a good look
at it and they're like, hang on,
this is just pretending to be a crypto miner.
Now, I think, look,
there's a lot of innuendo in the write-ups here
because it's got Eternal Blue in it and stuff.
And they're saying, oh, there's some code similarities
to the way NSA writes stuff.
They're trying to heavily imply NSA involvement.
I don't think they've got there necessarily.
I mean, it could be them, who knows.
But I just love the idea that some SIGINT is running around
dropping an implant that looks like a coin miner
because it is great cover.
Like this is, I mean, does it, okay.
Here's a question, Chris, does this count as false flagging?
I don't see why it doesn't.
Yeah.
I mean, Dmitri might not agree.
No, I don't agree.
I think this is blending in the noise
because the purpose of false flagging
is to mislead the attribution.
Is to assign blame onto someone else.
This is actually trying to not get caught, right?
I don't know if it's not trying to get caught.
I mean, you are assigning blame to somebody else.
Yeah, criminals.
Criminals.
You're waving your little criminal flag.
But the goal is for someone that finds this
is to say, oh, this is just a coin miner, delete, move on, versus looking deeply and trying to understand how it works, what it does, and trying to actually publish information about it that would get this blacklisted all over the place.
But also might suggest that it's not in the top tier echelon of tools that the actor's using.
They'll burn it and just hope somebody didn't pick up on it. It's not like the prior tool that Kaspersky was unpacking. But I
do like that. And I think this is gonna, this is gonna strike fear into the hearts of a bunch of
incident responders listening, who previously might have just been able to say, well, that's a known
coin miner, get rid of it. Whereas now you kind of might have to go a bit deeper and say, well, is it? You know, it's just great. I
love it because everybody's so dismissive of coin miners, that it's the perfect thing to pretend to
be. You can't pretend to be ransomware anymore, because people take ransomware so seriously,
but you can pretend to be a coin miner and everyone's like, oh, we'll just get rid of it.
It's fine. By the way, there's another lesson here, in that Kaspersky says they first detected this miner back in 2017, which is years after the NSA Equation
Group report came out and EternalBlue was burned with the Shadow Brokers release.
And the fact that some of the code is still present in this new malware shows you how
difficult it is to actually write everything from scratch
and not have the desire to reuse some of the older code.
And that's what gets you burned, right?
Is that those functions that start appearing elsewhere,
even if they're very small,
and a small part of the overall code base
can get this connected to other things.
And look, another bit of research out of Kaspersky is looking at a Lazarus campaign,
looking at the way they're doing supply chain stuff inside an unnamed vendor.
I mean, we don't really need to talk about this one in depth,
but North Korea, I mean, I hate to say, am I a fan?
Is that the right word?
Like the stuff they're doing
around supply chain infiltration to target crypto, I think, is just fascinating. And there is a
sort of badassness to the way they roll. And then it's funny, actually, because, you know, last night
I turned up to the Alperovitch Institute at Johns Hopkins, to Jason Kikta's class, and
we had a bit of a conversation. I spoke with some of the students.
And, you know, I was saying like,
if you're not a fan of the sort of crypto ecosystem,
like all of this activity has the added benefit
of also being quite funny, quite amusing, right?
But, you know, I understand also on another level,
it's quite alarming when you can see
how effective threat actors can be
at worming their way through supply chains.
Chris, I mean, you know, you were the first director of CISA
and this is the sort of thing that I'd imagine
when you were director of CISA would kind of leave you cold.
Am I right or am I stretching?
I think when you look at '17,
and you look at NotPetya and BadRabbit and WannaCry
and those, NotPetya specifically with M.E.Doc.
That was what I think was a big shift was looking at the supply chain piece and how adversaries are
really starting to go up the kind of ladder of dependency and reliability. And you just get a
much broader spread of potential victims. That's where it was, I think, a big wake up call for us.
And you just
continue to see it. Every year there's something else that kind of follows that model. Yeah, but
now the North Koreans are not just, you know, hacking into one company. They're sort of using
their supply chain access to access other parts of the supply chain, right? And they're good at it.
And you just get the sense that, after a while, you know, for as much as we're seeing, I bet there's a
whole bunch we don't. And they're sort of creeping into the supply chain fabric.
And, you know, I'd be lying if I didn't, you know,
on one level, oddly kind of admire it.
Do you know what I mean, Dmitri?
No, I mean, absolutely.
I mean, remember the 3CX issue from earlier this year
that Mandiant discovered, where it was originally
the X_TRADER markets trading platform
and then got into the
3CX software. So they're daisy-chaining these hacks. And look, I've been saying for over a decade
that the North Koreans by far are the most innovative actor out there. They may not be the most
technically sophisticated, although they're quite technically sophisticated. They're getting there.
They're definitely getting there. Yeah, I mean, they're not quite on the level of a Five Eyes. They're not like an Equation Group or anything, yeah.
Right, that's what I mean.
But they're very, very good.
Don't get me wrong.
But on the creativity of the operations, oh, my God.
They were the first ones to use destructive attacks back in the 2000s.
They were the first ones to do the document leaks long before the Russians started doing that with the Sony hack, right?
They were the first ones to use it to make money from crypto from a nation state perspective.
And the supply chain stuff, they were not the first ones, but they really excelled at
it, right?
And we talked last week about the hiring their people inside companies, right?
So they're doing a lot of things and they're not afraid to take risks, to be creative.
And they're unconstrained.
They clearly don't have lawyers like, you know, over their shoulder every 10 seconds.
Other actors don't either, but they're just really, really creative and willing to try things that others are not.
It shows, I think, at least in cyber, that this is where necessity is the mother of all invention, really.
Yeah. It takes root.
And they have a smaller set of capabilities,
or at least access,
than the Chinese and the Russians do.
And you see them really innovating and hitting pretty hard above their weight class.
But also, they do a better job
than I think anyone else out there
in grooming their offensive cyber force, right?
They identify people in high school that show promise. They put them into the best universities.
They really push them through that pipeline to get them into the offensive forces. No one,
to my knowledge, is as good at that as they are. Obviously, you need a highly authoritarian system.
Was that spoken with a note of admiration?
I'm trying to figure it out.
No, but it's a huge issue.
I mean, yeah,
it was like the tweet issuing a correction on a previous post regarding ISIL:
you do not, under any circumstances,
gotta hand it to them, right?
It's the old wint joke.
Look, let's move on.
Let's move on from this.
But, you know,
I have linked through
to the SecureList Kaspersky write-up about this,
and it is very interesting.
So people can go to our show notes
and check that out.
So let's, unfortunately,
let's talk about Elon Musk and Starlink
because this whole thing briefly came up again
and now has been squashed.
Obviously, there is a lot going on
in Israel-Palestine at the moment,
and this military action in Gaza is very intense,
and the whole thing is just horrible.
It looks like Israel is really intensifying its campaign
in the Gaza Strip, and at one point
internet access and telephony went down in Gaza,
and Elon Musk had a brainwave, didn't he, Dmitri?
Yeah, so he thought he would replicate his success in Ukraine,
although you'd think that he would learn from supplying Starlinks to a conflict zone,
given what happened in the Ukraine situation.
But he once again decided to step into it and say that he's going to provide Starlinks
to aid organizations, internationally recognized aid organizations that are operating
in Gaza that were impacted by the communications blackouts.
And he immediately backtracked.
Well, he didn't immediately backtrack.
Very quickly.
It took the Israelis saying, this is not happening.
Like, no way is this going to happen.
For him to then turn around and say, oh, well, I'll check with the Israelis first. But I mean, the first thing that would happen if a
Starlink dish made it into Gaza is some Hamas goon would turn up with a gun and steal it from this
humanitarian organization. Like, it's just the most obvious thing that would happen. Chris, what do you
think of all this? I think the lesson learned here, my takeaway, is that if you're a company operating at a global scale, particularly with designs on geopolitically hot regions, you have to have a competent geopolitical risk management advisory team with you.
Or you need to, say, get a firm like yours, Krebs Stamos, under contract to do that for you.
Yes, thanks for the plug here.
I'll send the check after the show.
But again, you don't just go off half cocked
into these zones.
I fully appreciate what Elon was trying to accomplish here.
I do.
I appreciate his good intentions behind it.
The road to hell, Krebs.
Particularly thought-
The road to hell.
It wasn't particularly thought out, I think.
And I'm a big fan of no surprises, right? If you're going to mess around in a space like this, you do what he said
he was going to do. I will check with the Israelis. You do that before you come out and say you're
going to take a certain action. You've got to kind of square the circle before you put yourself out
there. Now, of course, there's a lot of confusion still about how Hamas was able to stage such a
large-scale, you know, terrorist attack in southern Israel. And I just wanted to
include this tweet here from Shashank Joshi. He's at The Economist, I think, isn't he? Yeah, he is.
And he has linked through to an FT piece
on Twitter, and I read the whole piece, and right towards the end it sort of says that
Hamas maintained its OPSEC by going stone age and using hardline phones while staying away from
mobile phones. Like, I see Dmitri's pulling his skeptical face, and I'm kind of a bit skeptical
about this as well. But I think still there needs to be, at some point,
obviously not now, there's other things happening,
but at some point there needs to be some accounting
for how SIGINT and HUMINT,
the entire Israeli intelligence apparatus, missed this,
and I'm guessing there is going to be
improved OPSEC on Hamas' part,
and avoiding the types of technologies that, you know,
Israel is very good at intercepting,
that's going to be a part of this, surely, don't you think?
Yeah, I find it hard to believe that they use hardwired phone lines
because if there's one thing SIGINT agencies know how to do
and have known how to do for decades, it's tap phones.
So that doesn't seem like it was the thing that was the problem here.
What is much more likely is that they didn't use any devices, communicated in person, and limited the number of people that had access to this information to the very few.
We know, for example, that Mohammed Deif, who is responsible for designing this operation, who is the head of the military wing of Hamas,
he's been at the top of the Israelis' targeting list for decades now.
And the person that was killed by the Israelis in the 90s, who was his mentor,
was killed because his phone was booby-trapped.
His mobile phone was booby-trapped by the Israelis and exploded.
Yeah, the explosives in his phone.
Yeah, and the word on the street is that since that time,
since the 90s,
Mohammed Deif has never come close to a mobile phone, right?
So to have that type of operational security and discipline, right?
And the other thing that's said about him
is that he always sleeps in a different house
every single night for decades, right?
So very, very likely they weren't using anything that emits an electronic signature
and would not just be interceptable,
but would potentially reveal their locations
and would allow Israelis to target them.
So that's probably how they actually did it.
It is possible that they built some sort of...
I mean, Gaza is a small place.
You would think you could actually build
some physical infrastructure,
a PSTN network, limited.
You could probably build one.
Yeah, but any time this has happened in the past,
the cartels are a great example, right?
They build their own PSTN network.
But they built wireless.
They didn't build PSTN.
Yes.
All it takes is one person that compromises it. Right.
And the Israelis are really good at turning assets.
So I don't think they would have trusted that because that has the problem of being a single
point of failure.
Anyway, I mean, I just think it's time to, you know, at least very preliminary stages
of thinking about how this happened.
But you had something.
Yeah.
I mean, there will be a lot more that comes out, yeah, about what the
security and the intelligence failures were. You know, I do wonder, you know, if you see the
playbooks that were found at some of the kibbutzes and, you know, at the concert, that
looked to have kind of targeting sets and maps and other things, those were pulled together somehow.
Was that done with a laptop and subsequently printed out?
Were they stitched together like a ransom note,
you know, pasted on paper?
So there may have been technology involved here somewhere,
but how they kept the, you know, the OPSEC appropriate,
I think is going to be interesting to find out.
Yeah, it's just such an unanswered question at the moment.
Yeah, now just quickly, Apple is updating iOS because, I don't know,
the way this has been reported I don't think is that great.
So a few years ago, Apple introduced MAC address randomization into iOS,
which is really good.
It can stop people from being tracked, you know,
physically tracked by people who drop, like,
MAC address sensors around the place.
But, you know, a few people out there are having conniptions
because it turns out, like, after you connect to a network, there's some service, some UDP service, that if you query it, it'll give you
the real MAC. Which is like, okay, sure, you can scan a network on this port to elicit a MAC address, but
that's not really what Apple was trying to fix the first time around, and now they're making
changes to that service anyway. And this is sort of being written up as some egregious privacy failure on Apple's behalf. I mean, did you
have the same take as me, Dmitri, on this? Yeah, I mean, it was a problem that needed to be fixed, but
the reality is that there are many ways to track your phone, and changing the MAC address is helpful,
but it's not the key to keeping your privacy. No, I mean, you know, really, the change that they made
was to stop you being tracked around your, you know, shopping centers.
Yeah.
You know.
But only if you're connecting to the Wi-Fi in those shopping centers.
Well, no, previously it was because you're doing the SSID probes,
looking for networks that you previously connected to.
Every time you're walking around, you know,
your phone is actually broadcasting its MAC address
and that's what they changed.
So I kind of feel like the reporting on this is a bit...
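For context on the mechanism being discussed: a randomized Wi-Fi MAC is just a locally administered address, distinguished from a vendor-assigned one by two bits in the first octet. A minimal sketch of how such an address is generated; this illustrates the randomization concept only, while the Ars Technica story is about the real MAC still leaking via a separate service after you join a network.

```python
import random

def random_private_mac() -> str:
    """Generate a locally administered, unicast MAC address.

    Randomized Wi-Fi MACs set the locally-administered bit (0x02 in the
    first octet) and clear the multicast bit (0x01), so the address is
    valid for a station and can't collide with vendor-assigned space.
    """
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

print(random_private_mac())  # e.g. 3a:1f:9c:04:7d:e2
```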
And real quick, I'm just going to flag it now
because I am about to go on vacation,
but there's some awful new VMware bugs out there.
So you've been warned.
We've got a CVSS on one of these of 9.8 out of 10.
And I'm going on vacation.
You have been warned.
And finally, and I want to get both of
your thoughts on this, Jamal Khashoggi's widow had been trying to sue NSO Group in a Virginia
court. The judge has thrown out the lawsuit, basically over jurisdictional issues, saying that
Virginia wasn't really a place that had jurisdiction to hear the case. This is sad. I can't imagine what this woman has been through.
It's hard to know where we go from here
in terms of her getting any sort of redress for this.
What do you think about all of this?
I think that you haven't heard the last of this one.
The judge was pretty clear in saying
you've got the wrong defendants here.
Instead of NSO group, perhaps it's other individuals,
including Saudi individuals.
Now the question is, do they have sovereign immunity,
and are they therefore immune from her case here?
Also, it didn't seem that in her filing or in her claims
that she established that the infection happened in Virginia.
Now, I read that and thought it was a little suspect
because I would think that every subsequent time
that the tool was used to access the device
and effectively commit a CFAA violation,
that that would be a new invasion.
Well, every time that Trojan would be executing a command on the device,
that would be a CFAA violation.
Yeah.
So, again, I think the judge was sympathetic,
but said,
based on the way you've filed this, you know,
you need to go take a harder look
and maybe change up
some of your defendants.
Guys, we're going to wrap it up there.
Thanks a lot to both of you
for joining me
to do this week's show.
It's been really interesting.
It's been great to hang out in person
because, Chris, you know,
we've chatted for years
and it's the first time
we've actually been able
to see each other face to face.
This one's three years overdue.
Three and a half.
We were supposed to do this
in RSA 2020.
That's right.
I was supposed to come
to the US in 2020.
I actually have my,
you know,
up-to-date media visa.
Flew down to the consulate.
I have a visa stamped 2020.
It's a five-year visa.
And yeah, quite funny actually getting that visa
and then having the border slammed shut
for a couple of years.
So just my luck.
But thank you very much for joining us to co-host.
And Dimitri, as always,
a pleasure to chat to you too, my friend.
Thank you.
Thanks a lot.
That was Dmitri Alperovitch and Chris Krebs there
with a look back on the week's security news.
It is time for this week's sponsor interview now
with Andrew Morris, the founder and CEO of GreyNoise.
GreyNoise operates a vast and adaptive honeypot network
all over the world, and the idea is that they collect
incredible telemetry on large scale and automated attacks
that are happening over the internet.
And this type of information is very handy for when you want to know when something is targeting you
or, you know, whether it's targeting the whole internet. And, you know, GreyNoise can also act
as a sort of early warning system, picking up mass exploitation events as they kick off.
But as you can imagine, the internet is a noisy place. It's a very noisy place. So there's a lot of work that goes on behind the scenes at GreyNoise, tagging new traffic types with various attributes. But yeah, Andrew and his team have actually thrown a large language model at this work, and I regret to inform you that it actually works really well. So here is Andrew talking about the new auto-tagging tech at GreyNoise, which they've called SIFT.
SIFT is blowing my mind. So the short
answer is that we've built something that basically surfaces net interesting network traffic of
what's hitting our sensors across the internet, right? So there's a long kind of series of things
that had to happen for that to work. So rewind a year or two ago, some folks internally at GreyNoise built basically like a
binary sort of clusterer that what it would do is internally it would just show us instead of like,
hey, this is all the traffic. It was like, here's the big blobs of traffic, right? Here's the big
blobs of traffic. And then we started doing that over time. Here's the big blobs of traffic and
here's how they change over time. And then
what we started doing is we started looking at what are the net new blobs of traffic. So stated
differently in our little classification models, we've got, hey, here's new binary payloads, new
HTTP payloads that are passing GreyNoise sensors. A net new cluster has emerged,
right? A new cluster. So now we've seen
this new cluster of things. And what we're doing is, and this is where SIFT really comes in,
we're plucking out basically a member of that or a handful of members of that new cluster of things
that GreyNoise has never seen before. And we're basically shoving it over to LLMs to actually
decorate and tell us what is this thing? What is this thing?
What does it mean? Right? And then do that basic first level triage on everything that is net new
and only surfacing the things to us that basically seem interesting and severe, right?
And so this is the sort of journey that we've gone on to go from first, we've got to figure out what's interesting
and useful in this giant series of basically needles, right? And then now it's like, cool,
we've got our hands wrapped around that, but that last squishy part that's always been really
tricky for us is annotating and making that into something that makes sense. So we've actually
pushed that last part of the problem over to the LLMs. And the LLMs are actually basically doing the very first round of triage for us.
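A rough sketch of the pipeline Andrew is describing: collapse traffic into clusters, and only hand members of never-before-seen clusters to a model for first-pass triage. The hash-based clustering and the ask_llm placeholder below are illustrative assumptions, not GreyNoise's actual implementation.

```python
import hashlib
import json

def cluster_key(payload: bytes) -> str:
    """Crude stand-in for a payload clusterer: bucket traffic by a
    whitespace-normalized fingerprint so near-identical payloads collapse."""
    normalized = b" ".join(payload.lower().split())
    return hashlib.sha256(normalized).hexdigest()[:16]

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client you use; not a real API."""
    raise NotImplementedError

seen_clusters = set()

def triage(payload: bytes):
    """Send only net-new clusters to the model for first-pass triage."""
    key = cluster_key(payload)
    if key in seen_clusters:
        return None  # already-known traffic: no analyst or LLM time spent
    seen_clusters.add(key)
    prompt = (
        "You are triaging raw traffic captured by internet-wide sensors.\n"
        'Respond with JSON only: {"summary": str, "severity": 1-10}.\n\n'
        f"Payload: {payload[:2000]!r}"
    )
    return json.loads(ask_llm(prompt))
```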
So the really cool part is that we've taken the same approach and we've actually back processed a bunch of our old data for things that took a lot of people to find.
And we've basically said, hey, would SIFT have found this?
Would SIFT have found Drupalgeddon?
Would SIFT have found Log4j?
Would SIFT have found blah, blah, blah?
And the answer is yes.
And what's really interesting is that a lot of the times it's been more right than us.
And so we'll go and we'll look at it and we'll be like, I didn't know that that's what that
thing was for.
And we'll start by saying like, we've got to fix this false positive.
We're like, ah, actually it's right.
And so the answer is yes.
And it surfaces a bunch of,
it makes a lot of really- Well, hang on, hang on, hang on. You just mentioned,
you just mentioned false positives. I'd imagine that that might be a bit of an issue when using
an LLM to do this stuff. And obviously you're going to have a human in the loop to double
check the work that this thing is producing. Funnily enough, like last week's sponsor interview
with Socket, I mean, we were having a very similar conversation
about how to use LLMs.
And this seems like for doing this sort of analysis,
yeah, it seems like you can automate a lot of it,
but you still sort of need someone there to check it.
Have false positives been a bit of an issue with this?
Honestly, the false positives are,
it's just a different kind of false positive.
So in this case, it's like-
It thinks something's new when it's actually not, or no, it thinks that something's interesting and dangerous when it's actually just
somebody, you know, like basically doing something with a certain header where it'll be like, Hey,
this is so bad. It's someone trying to check if you're a proxy. And I'm like, well, yeah, but you
know, that's going to happen a million times, but SIFT doesn't know the difference between whether that's a big deal and when it's not. The interesting thing for us is it doesn't
misclassify very often, but it does misinterpret the severity of things. For example, it'll be
like this connect request that's going to you on this public web server is severity nine. And I'm
like, it's not though, you know? Yeah. Yeah. So it's an excitable child
who's been sat in front of a console for the first time. Yeah. Yeah. And, and there are,
and then some of the things that it's not so good at is like, then now it'll write an IDS signature
for us. But sometimes that IDS signature really doesn't make a whole lot of sense. And so then
we're like, okay, you were right about that first part. But the second part here is like the,
the rule that you created is a terrible rule, right?
So then we've got to teach it how to be better at writing rules.
So you end up with a lot of the same problems that you have with maybe junior analysts of
kind of like teaching them a little bit more about what matters and what doesn't.
But the really cool part is that like LLMs don't sleep.
LLMs don't, like... this is, you've got as much scale on this thing as you want.
So I imagine, yeah, I imagine this just simplifies your whole process, right?
Because instead of having to shrink it down.
Yeah, exactly.
Instead of having to do this whole end to end process, basically you can just look at
what the LLM is pooping out and say, well, yeah, we'll keep that one.
We'll keep that one, that one.
That one's nonsense, but we'll keep this one, this one, this one. Like, when it comes to judging, like, the percentage of the output that you keep versus discard,
what does that ratio kind of look like?
I don't have the numbers for it off the top of my head.
What I do know.
Ballpark finger in the air kind of thing.
Honestly, like the big thing is that it just, it's a lot less likely to miss things that
we would, that we would have missed.
And it's picked up a lot of things
that we have not picked up.
And so what it's done is it's basically,
it's created a lot more throughput to our human team
for actually putting into production for something.
But it's not, and it's much more so that like,
we're now looking at,
instead of looking at 1 million
things a day or 100,000 things a day, we're looking at a much more manageable amount of
things every day.
And that's what has become so beautiful.
It has a perfect memory, right?
It hallucinates from time to time.
It's not very good at writing rules, but it's applying basically the same set of standards
to the raw data, day in, day out. And it's turning the things a human has to look at into a much, much smaller, more manageable pile. So we've gone from needing a lot of people to look at all this traffic and write rules that go into something a GreyNoise customer gets value from, to basically having the five experts who understand this traffic, and all they have to do is teach this thing when it's right and teach it when it's wrong.
That's another, I mean, same question as last week: they do call it machine learning, right? You can teach these things to gradually improve. I know it's early days, but what Feross was saying last week is that it's a matter of figuring out better prompts rather than trying to change the model. Has that been your approach as well?
Yeah, that's right. A lot of it is basically just prompting the crap out of the LLM and being really, really strict about which responses are acceptable and which are not. Really what you want is to get closer to an API, with predictable responses and results. And so you really have to kind of beat the machine into submission.
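A rough illustration of that "beat it into submission" approach, reusing the hypothetical llm_complete stub from the earlier sketch: validate the shape of every response and re-ask until the model complies.

# Hypothetical sketch: force the LLM to behave like a predictable API by
# rejecting any response that doesn't match a fixed JSON shape.
# llm_complete() is the stub defined in the earlier sketch.
import json

REQUIRED_KEYS = {"summary": str, "category": str, "severity": int}

def ask_strict(prompt: str, max_retries: int = 3) -> dict:
    for _ in range(max_retries):
        raw = llm_complete(prompt)
        try:
            reply = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not even JSON; ask again
        if all(isinstance(reply.get(k), t) for k, t in REQUIRED_KEYS.items()):
            return reply  # conforming, API-like result
        # Wrong shape: tighten the instruction and retry.
        prompt += "\nReply ONLY with the exact JSON object described above."
    raise ValueError("model never produced a conforming response")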
Yeah, you don't want to give it any wiggle room, right? You want to very narrowly scope what can come back out of that query.
That's right. And then you end up having to make it compliant with the rest of your data model and ecosystem, which is a series of problems in and of itself.
Do you expect that you'll be able to get this thing to poop out signatures that are useful, in the future?
Yes, absolutely. So the name of the game... there are kind of two different products that we're looking for, and I'm using the term product a little abstractly. One is guidance on high-end block lists, that's AIP: block it right now. And the other product is a detection rule that can be eaten by another product to produce those same block requests, right?
Yes.
Or those same blocks. So ultimately, the short answer to your question is I'm very confident in LLMs being able to produce rules that are good, rules that will generally outperform people.
But yeah, what we're talking about doing is ultimately shrinking the amount of time it takes to get from stimulus, whether that's reconnaissance, an attack, news, et cetera, to: what is this thing going to look like on the wire? How do I know when it happens, and how can I make sure it doesn't talk to my network? We're really just trying to shorten that as much as we possibly can. And for what it's worth, it's okay with me if new information comes out later that makes a signature better after the fact; there's nothing you can do about that. So the question is not, will we be able to do this? The question is, can LLMs do this better than the state of the art right now, or can they do it this year? And I think the answer to that is yes. I do think LLMs are going to be able to produce better detection rules than humans this year, with the right data, and if we shorten the path to getting access to that raw data and telemetry.
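As an illustration of what that might look like in practice, here's a hedged sketch of drafting a Suricata-style rule from a traffic sample and gating it behind a cheap structural check before any human review. Again, llm_complete is the hypothetical stub from earlier, the regex is deliberately shallow, and a real pipeline would load the draft into the engine itself to validate it properly.

# Hypothetical sketch: draft a Suricata-style rule with an LLM, then
# reject drafts that aren't even structurally plausible. This catches the
# "you were right about the traffic, but that rule is terrible" case early.
import re

RULE_PROMPT = """Write ONE Suricata rule that detects this traffic.
Reply with the rule only, no commentary.

Traffic sample:
{sample}
"""

# Expected shape: action proto src sport -> dst dport ( ... sid:N ... )
RULE_SHAPE = re.compile(
    r'^(alert|drop)\s+\w+\s+\S+\s+\S+\s+->\s+\S+\s+\S+\s+\(.*sid:\s*\d+.*\)\s*$'
)

def draft_rule(sample: str) -> str | None:
    rule = llm_complete(RULE_PROMPT.format(sample=sample)).strip()
    return rule if RULE_SHAPE.match(rule) else None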
I think another advantage of this approach is you might be able to shrink down the scale at which these things have to hit before you're detecting them, right?
That's right.
Whereas something had to be really big for it to land on an analyst's plate, now, because you've automated that, you might be able to catch something as it's kicking off, as opposed to when it's in full swing. I'm guessing that's kind of what you were getting at before with shrinking that time down.
Exactly. So a huge part of it is that you want to go from lots and lots of unmanageable things to a smaller number of manageable things, faster. And LLMs don't have the same biases that people do. People are going to want to see big. People are going to want to see lots. People are going to want to see vast. But computers and LLMs don't care about any of that stuff, right? They have really good memories. So they're saying, look, I'm not excited about this thing because it's happening a thousand times; I'm excited about this thing because it's happening for the first time ever. And that's part of the reason why I'm more confident in LLMs being effective in certain areas here: they don't have some of the same biases that we do.
Yeah, this might sound a weird connection to make, but it reminds me of the way the mobile handset companies like Google and Apple handle crash dumps, right? They'll see a crash dump come in from an iPhone, and it's the first time an iPhone's ever done that, and they're like, okay, that's probably NSO Group or whatever, and they go out and they crush the bug. It's the same sort of thing when you're dealing with scale, in their case iPhones, in your case network traffic: you can't do it manually. It's not like Apple has someone going through every crash dump by hand to say, oh, that's a weird one. All of this stuff has to be automated, and it's all about picking out the weird stuff. The problem, I guess, with networks is that they're less predictable than something like a crash on a uniform OS. So it's always going to come back to where you set your threshold for interest, right? And that's what SIFT is going to be about.
Yeah, that's part of it. But part of why the world needs a GreyNoise is that you need that base level, right? You need the internet-wide expected; you need to know what normal is before you can figure out what's abnormal, before you can figure out what's net new. There are too many things for me to say, SIFT up every net new thing that happens to me. But there aren't necessarily too many things to say, SIFT up everything that happens to me that has only happened to me, right? Because now you've got a really narrow aperture: you've got the control plane of what's happening to everybody else all the time, and then you've got you. That's already a really good place to start. But a really good place to go from there is, of the things that are only happening to me and that look like attacks, only show me the ones that are dangerous attacks, that are only happening to me, today, that have never happened to me before.
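That two-plane filter, "new to the whole internet" versus "new to me, and it looks dangerous", might look something like this minimal sketch, where the counters are stand-ins for real global and per-network telemetry and the thresholds are invented for illustration.

# Hypothetical sketch of the "only happening to me, for the first time,
# and it looks dangerous" filter. Counters stand in for real telemetry.
from collections import Counter

global_hits: Counter = Counter()  # behaviour tag -> hits across all sensors
my_hits: Counter = Counter()      # behaviour tag -> hits on my network only

def worth_surfacing(tag: str, looks_like_attack: bool, severity: int) -> bool:
    global_hits[tag] += 1
    my_hits[tag] += 1
    new_to_me = my_hits[tag] == 1
    internet_background = global_hits[tag] > 10_000  # everyone sees this
    # Surface only dangerous attacks aimed at me specifically, seen for
    # the first time, that aren't just internet-wide background noise.
    return looks_like_attack and severity >= 7 and new_to_me and not internet_background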
Because that feels to me a little bit more reminiscent of what the security analyst's job was 20 years ago, right? When the internet was quieter. So it's a little bit of a return to where things were before everything got so noisy and automated.
I'm guessing this is pretty popular with governments, right?
Yeah.
Because of the scale thing, right? Because governments care about both offense and defense, and because governments have lots and lots of networks and IP space and alerts to work through.
Yeah, that's right. And of the people we've been surveying this with, we've gotten really, really good responses back from threat research teams, vulnerability management teams, stuff like that: anywhere they're trying to figure out what's going on out on the internet, whether they care, whether they're showing up in it, whether anyone or anything they care about shows up in it. Again, it shrinks the problem.
It shrinks the problem. Yeah. All right, Andrew Morris,
thanks a lot for joining us to talk through your annoyingly useful use case for LLMs.
A pleasure to chat to you as always. Cheers.
Thanks so much, Patrick. Anytime, anyplace.
That was Andrew Morris there from GreyNoise.
Big thanks to him for that,
and big thanks to GreyNoise for sponsoring this week's show.
And again, another big thanks to Andrew
for hosting me at his place for a few nights in Washington.
That was a lot of fun.
But that is it for this week's show.
I do hope you've enjoyed it.
I'll be back with more Risky Biz on November 29.
But until then, I've been Patrick Gray.
Thanks for listening.