Risky Business #740 -- Midnight Blizzard's Microsoft hack isn't over
Episode Date: March 12, 2024

On this week's show Patrick and Adam discuss the week's security news, including:

- Weather forecast in Redmond is still for blizzards at midnight
- Maybe Change Healthcare wasn't just crying nation-state wolf
- Hackers abuse e-prescription systems to sell drugs
- CISA goes above and beyond to relate to its constituency by getting its Ivantis owned
- VMware drinks from the Tianfu Cup
- Much, much more

This week's feature guest is John P Carlin. He was principal associate deputy attorney general under Deputy Attorney General Lisa Monaco for about 18 months in 2021 and 2022, and also served as Robert Mueller's chief of staff when he was FBI director. John is joining us this week to talk about all things SEC. He wrote the recent amicus brief that says the SEC needs to be careful in its action against SolarWinds. He'll also be talking to us more generally about these new SEC disclosure requirements, which are in full swing.

Rad founder Jimmy Mesta will be along in this week's sponsor segment to talk about some really interesting work they've done in baselining cloud workloads. It's the sort of thing that sounds simple that really, really isn't.

Show notes

- Risky Biz News: The aftermath of Microsoft's SVR hack is rearing its ugly head
- Swindled Blackcat affiliate wants money from Change Healthcare ransom - Blog | Menlo Security
- BlackCat Ransomware Group Implodes After Apparent $22M Payment by Change Healthcare – Krebs on Security
- Change Healthcare systems expected to come back online in mid-March | Cybersecurity Dive
- LockBit takes credit for February shutdown of South African pension fund
- Ransomware gang claims to have made $3.4 million after attacking children's hospital
- Jason D. Clinton on X: "Fully automated vulnerability research is changing the cybersecurity landscape Claude 3 Opus is capable of reading source code and identifying complex security vulnerabilities used by APTs. But scaling is still a challenge. Demo: https://t.co/UfLNGdkLp8 This is beginner-level… https://t.co/mMQb2vYln1" / X
- Jason Koebler on X: "Hackers are hacking doctors, then using their digital prescription portals to "legitimately" prescribe themselves & their customers adderall, oxy, and other prescription drugs https://t.co/6elTKQnXSB" / X
- How Hackers Dox Doctors to Order Mountains of Oxy and Adderall
- CISA forced to take two systems offline last month after Ivanti compromise
- VMware sandbox escape bugs are so critical, patches are released for end-of-life products | Ars Technica
- A Close Up Look at the Consumer Data Broker Radaris – Krebs on Security
- Brief of Amici Curiae Former Government Officials Securities and Exchange Commission v Solarwinds Corp
Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray. This week's show is brought to you by RAD Security, the company which was until very recently known as KSOC.
RAD's founder, Jimmy Mesta, will be along in this week's sponsor segment to talk about some really interesting work they've done in baselining cloud workloads, which is the sort of thing that sounds simple,
but really, really isn't.
And they've managed to get all of that working.
And you can already head over to rad.security slash catalog
to check that out.
It is spelled C-A-T-A-L-O-G, the American spelling,
the simplified English spelling.
But yeah, check out some of those baselines.
They include process trees, CVEs, SBOM information,
network domain information and whatnot.
Jimmy will be along a bit later to talk to us
about all of that.
KSOC slash RAD, of course,
is the company that does Kubernetes-related security stuff.
We're also hearing from a feature guest this week,
John P. Carlin.
John was the Principal Associate Deputy Attorney General under Deputy Attorney General Lisa Monaco for about 18 months in 2021 and 2022. He was also the man behind the recent amicus brief.
An amicus brief is a letter, essentially,
that is filed with the court as like a friend of the court.
But this amicus brief has some issues with the SEC going after SolarWinds in the way that it has.
So he'll talk to us about that.
And he'll also be talking to us more generally
about these new SEC disclosure requirements, which are, of course, in full swing these days.
That is coming up later.
But first up, of course, it is time for a check of the week's security news headlines with our good friend Adam Boileau.
And Adam, it looks like Russia's SVR is still causing some drama out there for Microsoft and its customers. Of course, they were able to access a whole bunch of Microsoft corp email with some fancy O365 tricks, which apparently hit a whole bunch of organizations, these tricks and this actor, Midnight Blizzard. What it looks like is happening now, and we know this thanks to Microsoft releasing information on this on a Friday afternoon, which is very...
That's how they roll, yes.
...proper old-school Microsoft of them.
But it looks like they are now using secrets
they obtained in their initial breach
to further their attacks.
Yeah, we've seen reports from Microsoft that the attackers gained access to a bunch of internal systems, you know, kind of beyond just their email, helped themselves to the data, and also source code repositories. Given what Microsoft does, that could be of concern, you know, those options for using that source code for evil. Microsoft has said that, you know, the data that they got from the email spools, and presumably from other things, contained a bunch of stuff like, you know, tokens and API keys and so on, that the attackers have been using to continue their attack despite being evicted, presumably evicted, after that kind of one that started with Azure and went onwards into Microsoft.
They have said... so, they haven't said any customers have been compromised, but what they did say was that some customers have been targeted, which is a little bit weaselly.
It's nice. It could mean anything, right?
It could mean anything. But yeah, so we don't know. We saw reports after the Midnight Blizzard thing about, for example, HP Enterprise, who said they had seen actors with the same TTPs inside their environment, but we haven't really seen anything much about it since. And, you know, clearly they are not just packing up and going home, and they're working with what they've got to make money.
Yeah, it is interesting, right? Because back when all of this first surfaced, I did hear from a few different places that there were just so many organizations impacted by Midnight Blizzard using these TTPs, and yet we haven't really seen much publicly come out, and it's been a little while. I've got to say I'm surprised by that.
Yeah, that is a little weird, and I had imagined that we would see more. And I guess, like, there's two options. One is that, you know, the thread didn't go particularly far, and they investigated, and it wasn't as bad as it sounded. Or it's so bad that the investigation is huge, and no one's willing to talk about it yet, and all of the people who were, you know, mentioning it to you have been told to shut up because it's terrible and horrible. You know, I guess we'll just have to wait and see which one of those things it is.
I've realised, you know, "we'll just have to wait and see" is our ultimate get-out-of-jail-free card here at Risky Biz when we don't know what on earth is happening. We just have to wait and see.
We'll just have to wait and see. Now look, let's follow up briefly on the change healthcare debacle.
We've got an interesting post here from Menlo Security looking at a bit
of a timeline around the affiliate. There's a very
interesting little data point in here,
which I'll get to in a moment.
But, you know, this is a decent walkthrough
around a bunch of what happened.
Yeah, so obviously when Change Healthcare got compromised by, you know, an ALPHV BlackCat affiliate, got ransomwared, and then ALPHV eventually absconded with the ransom payment, that kind of division between who the affiliate was and who the group were, you know, kind of became more interesting. And so this is a dive into the affiliate, Notchi, who appeared to be the one behind the intrusion into Change Healthcare, and the one who still has Change Healthcare's data, but not their money. And so understanding who that person is, and what their motivations are, et cetera, et cetera, and how their relationship with the rest of the, you know, ALPHV BlackCat crew looks now, which the answer is pretty rough, that's, you know, becoming more important now because of that weird, you know, kind of social dynamic going on.
Yeah. And I think this post points out quite
rightly that if Change Healthcare did pay to keep the data from being published, well,
that's money down the drain because the affiliate who actually has the data didn't see the money.
Now, the data point that I mentioned earlier is that, and I'll just read it out, it's an analyst
comment, and this just appears in the post in italic. Some of our human sources with direct
contact to Notchi says it's high probability that Notchi is associated with
China nation-state groups. Now that is thin, okay? Obviously that is thin. But you remember
when we first spoke about Change Healthcare, the first thing that we said about the incident
was that they had initially come out and told everyone that the intrusion was a nation-state
backed thing, right? So they didn't come out and say ransomware. They said, this is, you know, an APT crew has owned us. And then we, you know, of course, fired up the microphones and said, look at this, they were covering for ransomware. Oh, how silly of them.
I mean, why not both is what I'm thinking here. Yeah, no, this is definitely a really interesting data point, and we are very used to companies saying, oh, look, this was an advanced persistent threat. Look how advanced and persistent they are with their off-the-shelf...
Sophisticated is the word you're looking for.
Yes.
Sophisticated adversary.
Yes.
And, you know, often they are not.
But especially in light of, you know, the data we saw out of that I-Soon data leak in China, we have a bit more understanding about, you know, how fluid the relationship between state-sponsored, state-directed cybercrime and, you know, just hacker kids is in China. You could certainly imagine a scenario where, you know, a Chinese hacker is also a nation-state actor, but is in their spare time using their same tooling, their same techniques, maybe some of the same infrastructure, to go and do ransomware. And that would explain why it might have looked like a nation-state attacker at first, because it was just a different role, different day of the week.
I mean, I wonder too,
like we have seen China target healthcare data in the past, and I would think terabytes upon terabytes of healthcare
data stolen from a United States company would be something that, you know, say MSS would be
interested in just, you know, importing into one of their giant databases. So once you've done that,
once you've taken the data, handed it off to the MSS, why not make a bit of extra dough?
But it would be very interesting if we started seeing, you know,
people who are essentially Chinese APT operators
in league with Russian ransomware gangs
to sort of put a bit of extra monetisation into their day-to-day, right?
And, I mean, the I-Soon leaks...
These damn communists always monetising things.
Free market communists is what they are over there.
But, yeah, I think the I-Soon leak also showed us
how poorly paid a lot of these people are, right?
So I think, you know, hopefully we're not going to see more of this,
but I just got that uneasy feeling, Adam.
In the I-Soon leak, there was also that kind of wrinkle
where we saw sort of a proactive targeting
where, you know, the I-Soon hackers were going out and hacking stuff
and then attempting to flog it off to the Ministry of State Security
or whoever else after the fact.
Yeah, and they weren't interested, yeah.
And then they weren't interested,
which suggests that there's a degree of speculation here
and it may be like, let's see who we can hack, go in,
maybe we can sell it to the state,
maybe we can monetise it through ransomware,
maybe we can do something else with that data.
Well, this is a terrific development, isn't it?
Well, exactly, because this is exactly what we need in, you know, in the security... this is just what the doctor ordered.
Yes. But no, you know, I just thought that was a very, very interesting wrinkle in this particular story, and, you know, it fits what we have seen from the outside so far, and certainly sounds believable. And, yeah, we'll see what Notchi decides to do with the terabytes of data now. So, yeah, you know, maybe Change will have to pay them a second time.
Yeah, yeah. I mean, meanwhile, Change say they'll be back up any day now. I think mid-March is when they're aiming to have everything back online. But it will, you know, it will have been something like a month that they've been down, and the costs are just mind-boggling. You know, when you're essentially running a payments clearinghouse for a significant chunk of the US healthcare complex, that's a lot of money. So I wonder what the sort of fallout is going to be once this is all back up and running. I don't know. Like, it's a big disruption to hit US healthcare is my point.
Brian Krebs also has a write-up here about,
ALPHV, who are also called BlackCat.
Some people write it as AlphaV as well,
which I always find strange because there's no A there.
No, but the V is kind of stylized as an upside down A.
Then wouldn't it just be Alpha?
Yeah, but I don't know.
I'm just going to stick with ALPHV. I don't care about their style guides. You know, we had to change a sponsor's logo on our website recently because they changed the dot on their i to a square, and initially they sent us the new logo and we're like, hang on, it's the same as the old one. They're like, no, the dot. Anyway, we love you, by the way, that particular sponsor, but it's always funny. And I honoured that, but I will not honour the branding style guide
for a ransomware crew.
For Russian hackers, yes.
Yeah, that's right.
So, you know, they are also BlackCat.
They're known as BlackCat slash ALPHV or AlphaV or Alpha.
And, you know, drama, drama.
They're imploding, basically, is what Brian has written up here.
Walk us through his piece here.
So, I mean, obviously ransomware crews have a fractious relationship,
you know, amongst themselves and amongst the crime ecosystem
because, you know, trusting criminals is hard.
The fact that BlackCat have absconded with the ransom from Change Healthcare, you know, threw a cat amongst the pigeons in the forums and in the scene there.
But, you know, there's quite a long history of this kind of shenanigans amongst ransomware crews.
I mean, we've seen BlackCat get FBI'd towards the end of last year as well.
So, you know, there's a lot of distrust.
I mean, like Black Cat's supposed exit scam, you know, being written up as, oh, we got
seized by the feds with a, you know, a fake, you know, fake seizure page.
It's not a community that is filled with trust.
And that's kind of a good thing in a way.
See, this is the thing. Everyone's like, oh, look, they got disrupted last year, but they came back, you know, so disruption doesn't work. They came back, got one epic payday, and then exit scammed. Would they have still exit scammed if they didn't have that sort of heat on them, right? And that was a conversation I was having with Tom Uren, our colleague, this morning. Tom is our in-house contemplator, and he has been sitting around, head cocked, contemplating this. And, you know, he makes a good point, which is, you know, it's his assertion that these disruptions... you know, disruption plus a big payday, time to call it a day.
Yeah, yeah, exactly. And I think, on this week's Between Two Nerds episode, Tom and the Grugq talked through some of the ways
to more effectively disrupt ransomware operations
by attacking what the Grugq calls the structural parts,
the structural members underpinning that.
And that's things like trust and things like knowing
that your ransomware as a service operator
isn't going to just exit scam on you and take a ransom after you've done all the hard work.
And he talks through, you know,
things law enforcement could be doing
beyond just takedowns to try and foment that distrust.
And we don't know whether this exit scam was actually them.
Wouldn't that be funny?
So I was just thinking there, you know,
wouldn't it be funny if the FBI, instead of putting up official seizure notices and putting out press releases and whatever, maybe they could just make them look like fake seizure notices and don't announce anything from DOJ?
That's some...
Don't claim it. Make it all look like exit scams. Exit scam, take the money, give it back to Change Healthcare along with some decryption keys.
Like, yeah. I mean, that's...
I'd like to imagine that that was what happened.
I don't think we're quite there yet.
I don't think we're there yet at all,
because they've still got to claim credit for the win, right?
But I think you're touching on something interesting here, right?
Which is that the thought process in government,
particularly around in law enforcement,
on this stuff is a little bit off, right?
Because instead of doing the full announcement with the comment from the Attorney General and whatever, you know, you've got to be a little bit nasty. You've got to confuse these people. I think taking them down and making it look like they're all exit scamming would be much more effective long term.
Yeah, yeah, yeah. Much more effective, you know.
So, I don't know. It's less good on the PowerPoint of the report, you know, for what you've done this quarter, et cetera, et cetera, when you're, you know, reporting back up to your politicians.
But in terms of effectiveness, like we do need to be a bit nastier. And I think, you know, those of us that grew up in, you know,
90s IRC scene wars have kind of the right stuff for, you know,
playing a little bit dirty, making a little bit messy.
A good understanding of head ****.
Yes.
Like advanced PhD level old school internet head ****.
And I think, yeah, the Grugq makes the point that maybe the British
are a little bit better at that than the Americans
and the Americans are a little too, you know,
square and straight up and down.
But as we, was it last week, a couple of weeks ago,
we were talking about like,
maybe the CIA is good at that kind of, you know, staring at guys.
Yeah, the FBI people, they, you know,
and I know we've got a lot of listeners at the FBI,
howdy to you, and you're not all like that,
but there is that sort of reputation of a culture
of people who iron their underwear, you know?
Yeah, yeah, very square-jawed, square-up G-men,
you know, that, as you say, iron their tighty-whities.
No offence to those of you at FBI who do not.
They have fine underwear, very supportive, good choice.
Now, look, speaking of other disrupted ransomware groups,
Lockbit apparently shut down some South African pension fund
back in February, but it's really not clear.
Everyone's like, oh, look, LockBit reformed, they're back, the disruption didn't work. But you read this piece from Jonathan Greig over at The Record, and it really looks like current LockBit looks like one of those film sets for a Western, you know, where there's all the facades of the buildings, but behind them there's no building, there's just, like, a couple of bits of wood holding the front up.
That's kind of what LockBit feels like at the moment.
I don't know that they have bounced back the way people have insisted
that they have.
Yeah, and this is, you know, it's also a good example
of the ecosystem is just more complicated than that, right?
There's Lockbit affiliates who are still out there,
perhaps mid-campaign, you know, who've already broken into things,
already started the process.
So, yeah, I think it's too soon to say LockBit dead, LockBit alive.
Like LockBit is – Schrodinger's LockBit, right?
Yes.
I just had the same thought, yeah.
But either way, right, it's still bad for the South African government
pensions agency because, I mean, that's the largest pension fund in Africa
and if they're having problems with their computers, I feel bad for them, son. I don't know whether the South Africans have hounds to release as well, but, like, I would imagine if you went into Australia and messed with pensions, you would be seeing some dingoes, I suppose.
Well, that was the expectation our friends in the Australian government were trying to set, yes, wasn't it? You know, which is, we will overreact, basically. So, just one last ransomware piece here. The Rhysida group was behind that ransomware attack against the children's hospital in Chicago that we spoke about a while ago. They claim to have sold the data that they obtained in the attack
to someone for $3.4 million,
which smells not right to me just at all.
I don't know who out there would pay $3.4 million for that data.
Just stolen children's healthcare data.
Because when I first saw the headline,
I was imagining that maybe they had paid a ransom,
but no, they claim that someone bought it from them, and it wasn't the hospital itself paying to have it not published.
Yeah, the reporting does not say that it was the hospital. It was that they had successfully sold it, not taken a ransom.
So, I'm not sure I believe them.
Like, it may be this is a, you know, maybe this is a head game or a pressure tactic or whatever for future victims.
They look at how successful we are at selling the data.
You better pay us so we don't sell it again.
But if we take it at face value and that someone paid this money
for stolen children's medical data, then what a world, man. Like, I just... I mean, it just feels like complete horse...
It does. Like, and you read the statement from the hospital. It says, we're aware that individuals claiming to be Rhysida, a known threat actor, claimed to have sold data they allege was taken from Lurie Children's. We continue to work closely with internal and external experts as well as law enforcement and are actively investigating the claims. You know, if it was them who paid, I don't think they would be investigating the claim, right? So the whole thing is very, very strange.
Now, look, changing topics for a moment. I've got a really interesting post here from Twitter that someone sent to me about using large language models to drive fuzzers to do vulnerability research, and the results were actually really interesting. So one of the things that first popped up when LLMs first came up and the hype was just huge, whenever it was, a year ago or whatever, I got asked by someone, oh, you know, isn't this going to allow attackers to go wild with bug discovery and whatnot? And I said, well, possibly yes, but it also allows defenders to, you know, find bugs at scale. And, you know, so my prediction at that time was that once this automated vulnerability discovery stuff kicks off, if large language models can make that happen, we'll see a pretty disruptive period, but we should come out the other side of it in much better shape. And besides, it ain't like there's a shortage of vulnerabilities at the moment anyway. But this is really interesting work, and you found this interesting as well.
Yeah, but I'm pretty sceptical about AI and LLMs in general and their applicability to a bunch of these kinds of classes of problems. And so this particular write-up was talking about the Claude large language model being fed code snippets and asked to find bugs. And this example was, like, some C code, I think from the Linux kernel, that was vulnerable to, like, a re-entrancy attack, where the code was not correctly acquiring a lock and you could kind of corrupt heap data structures. And the AI identifies it and writes it up and proposes a solution, and it's all extremely sensible. And, you know, these are bugs that are not... you know, even if a human is reviewing the code, like, it takes real focus to be able to find bugs like this. And I was quietly impressed with this, actually.
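[To make the bug class being discussed concrete, here's a minimal sketch in Python rather than the kernel C code the write-up actually analysed; the counter, thread count, and iteration numbers are invented for illustration. The fix in this class of bug amounts to holding a lock across the whole read-modify-write of shared state.]

```python
import threading

counter = 0
lock = threading.Lock()

def locked_increment(iterations: int) -> None:
    """Correct pattern: hold the lock across the whole read-modify-write,
    so concurrent threads can't interleave mid-update. The vulnerable
    variant -- updating `counter` with no lock -- is the same class of
    bug as a kernel path that touches a shared heap structure without
    taking the lock that guards it."""
    global counter
    for _ in range(iterations):
        with lock:  # critical section
            counter += 1

threads = [threading.Thread(target=locked_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock held around every update, the result is deterministic.
print(counter)  # 400000
```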
But gluing the LLM to the fuzzer, too, seemed like...
Yes, right. And we've seen a lot of work out of Google on, you know, using AI with their existing... they've done so much work around fuzzing already at scale that using AI to drive test cases and triage bugs and so on, like, it makes sense. But it's one of those things where the idea is different from the actual implemented reality. And I was surprised to see that the implemented reality is actually better than I was imagining.
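[A rough sketch of the architecture being described, hedged: `buggy_parse`, `mutate`, and the whole harness here are toy inventions, not Google's or Anthropic's actual tooling. The fuzzer supplies the loop and the crash oracle; the language model would slot in as a smarter replacement for the dumb mutator, and again for triaging whatever crashes fall out.]

```python
import random

def buggy_parse(data: bytes) -> None:
    """Toy target: crashes only on one structured input."""
    if len(data) >= 4 and data[:4] == b"RISK":
        raise ValueError("parser crash")

def mutate(seed: bytes) -> bytes:
    """Dumb random mutator. The idea in the write-up is that an LLM
    which has read the target's source code can propose far more
    structured, bug-adjacent inputs than random byte flips ever will."""
    buf = bytearray(seed or b"\x00")
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seeds, budget=200_000):
    """Core fuzz loop: mutate a seed, run the target, report a crasher.
    (A real fuzzer would also keep coverage-new inputs as fresh seeds,
    and crash triage is the other job the LLM gets handed.)"""
    for _ in range(budget):
        candidate = mutate(random.choice(seeds))
        try:
            target(candidate)
        except Exception:
            return candidate  # crashing input found
    return None

random.seed(1)  # fixed seed so the demo is repeatable
crasher = fuzz(buggy_parse, [b"RISQ"])
print(crasher)
```

The only input that crashes this toy target is `b"RISK"`, so any crasher the loop returns is that value; swapping the mutator for model-proposed inputs is the whole trick being discussed.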
Yeah, yeah.
I mean, I thought it was very cool as well.
404 Media, which is the website established
by a bunch of the ex-motherboard people,
including Joseph Cox and Jason Koebler.
There's two more.
I can't remember their names.
I'm very, very sorry.
But they've got an absolutely terrific report up at the moment
looking at how criminals are obtaining access to doctors' accounts
in these e-prescription systems
and selling scripts for things like Oxycontin, right?
So you can go to these hackers, say,
yeah, send an e-script over to this pharmacy
so that I can then go and pick it up
because all of the prescriptions
in the United States are electronic.
It's a little bit different here,
but it's fascinating.
It's a fascinating write-up
because what they're doing too
is they're, like, getting dox on some of these doctors
and like applying to some of these platforms as them.
So it's not just that they're doing account takeovers.
They're actually establishing new accounts.
And now that this is happening, this is going to be hard to rein in without some serious work.
And I think the DEA is going to have to get all over this just in the interim to try to stop it.
But did you also find this one interesting?
Yeah, this was a really interesting story.
And attacking medical systems is a thing we've been worried about for a while.
And we've seen concerns around privacy and stuff.
But this is a whole other way to monetize it.
And what we've seen is that when a new vector for monetizing misuse of a computer shows up, people leap on it really quick and it gets big very, very quick.
I mean, think, you know, it's not that long ago that ransomware didn't exist.
And now it's the central trend in so much of InfoSec.
And I think, you know, this is going to be a really big deal.
You know, when these kids are able to have a Telegram bot that will sell you prescriptions for Adderall,
like that's going to go big quickly.
And protecting these systems is very, very hard at scale because exactly as you described, right,
the identity management during sign-up,
all of the password reuse and more computer security issues.
Well, the identity verification, right?
Like it all, so much bad stuff comes back to our inability
to centrally and robustly authenticate identities.
Tie the use of a computer to a human being.
That's difficult to do and banks struggle with it.
We've seen all the work around money laundering and identity.
You know your customer stuff.
This is going to be really, really hard.
And these are niche, complicated systems in healthcare
that are largely not well-funded, not well-cared for, old.
It's got all the ingredients of being a total mess.
Yeah, and I think one point the story makes
is that largely a push to use these systems
has been driven by the fact that there were pill mills
giving out dodgy prescriptions in the past, right?
So, yeah.
Can't win, why try?
That's our motto, isn't it, Adam?
Exactly.
Ah, now in a story that absolutely delighted everyone in security
on social media, CISA itself experienced some Ivanti-related drama, Adam.
Yeah, they were running a couple of Ivanti platforms for connections.
I think one of them was out to, like, the chemical industry partners
for something, and, yeah, they got themselves compromised amongst everybody else who's had all their Ivantis compromised. And, you know, it's just, it's a little bit funny. Like, I mean, it's not funny when people get hacked, but it is a little bit funny that even CISA can't manage to secure their Ivanti products.
Well, no one can secure Ivanti products, right?
But I think the lesson here, I've got a different takeaway from this,
which is that we all put our pants on one leg at a time.
Even CISA.
Even CISA, yeah.
Their Infrastructure Protection Gateway
and their Chemical Security Assessment Tool
both experienced... they both got Ivanti'd.
But why is anyone surprised
that CISA is running enterprise software?
That's the bit that gets me, right?
Do we think that they're going to be running Google-grade zero-trust tooling?
I almost think it's good that CISA is running the same crap
that the rest of the US government is expected to run
because they've got a better understanding of the issues.
Yeah, I mean, that's a much more charitable view. And I think it's also correct, right? I mean, they do have the same problems as everybody else, and it makes it more relatable. But at the same time, it must stick in the craw a little bit when you see some of the advice that they give out, and you're like, well, have you guys done that too? Because it's hard.
Yeah, yeah, that's true. That's true. And finally, yeah, the VMware sandbox escape. It's so bad that even out-of-support products are getting patches for it.
I think these bugs were actually first demoed at the Tianfu Cup as well.
So congratulations.
The MSS has them.
But yeah, these are awful, aren't they?
I mean, it feels like Broadcom squeezing the last drop of life
out of VMware is just doing us all a favour.
Yeah, probably big picture it is.
And so many people rely on VMware for that isolation.
And so guest-to-host, and not just one,
like three or four different guest-to-host bugs
across the full set of products, you know,
Workstation, ESXi, and Fusion on Mac OS.
And the fact that, yeah,
these were used in the wild at Tianfu,
which means they've been in MSS's hands
for quite some time.
It makes sense that Broadcom are trying to front run this
and patch them even in end of life stuff
because, yeah, like these bugs are,
you know, they were worth a lot of money once upon a time.
And, you know, these days, I guess, they're worth a lot of cups of tea in China.
Yeah, yeah, they're worth a laugh, I think. Worth a laugh now, yes.
Oh, man. And look, finally, I'm just going to link through to a piece that Brian Krebs wrote up. It's a deep dive on a consumer data broker named Radaris, and there's links to Russia in there.
And it's all like, you know, reading it,
it kind of made me think of some of these data brokers
as being a little bit like spammers back in the day.
You know, they've just all got that whiff of shady about them.
You know what I mean?
Yeah, exactly.
Yes, they do.
This is a great read.
I'm sorry, not spammers.
Email marketing companies.
Yeah, that's a good...
It's a classic Krebs thing.
It's not strictly InfoSec,
but it's definitely worth a read
if you enjoy the art form of the Krebs.
And yeah, the Russian links are very interesting as well,
especially in light of US worries
about exporting personal data
and sharing those to other places.
So yeah, have a read.
All right, Adam, that is actually it for the week's news.
Thank you so much for joining me
and we'll do it all again next week.
Thanks so much, Pat.
We certainly will.
It is time to bring out this week's feature guest now.
These days, John P. Carlin is a partner
at the law firm Paul Weiss,
where he chairs its national security practice. And as you heard at the top of the show, he worked as the
Principal Associate Deputy Attorney General under Lisa Monaco in 2021 and 2022. And he even served
as Acting Deputy Attorney General for a few months back in 2021, in addition to having worked at the FBI as chief of staff under Robert Mueller. But he's here today
because he wrote an amicus brief in support of SolarWinds. An amicus brief is like a letter,
you're a friend of the court, you're sharing your opinion with the court. The SEC has charged
SolarWinds and its CISO with a bunch of offences because it says SolarWinds knew its security was terrible,
but issued misleading statements to the market saying otherwise. So John, why don't you start
off by telling us why you wrote this brief and why so many former government types signed it?
We don't want enforcement, however well-meaning, to send a message that chills the type of
information sharing that can actually help protect us. And the information sharing that can actually help protect us best
is information from those who know about what's happening, the chief information security officers,
to those in government who can take action because they have the right tools, they have the right
legal authorities, they can go after the bad guys. They can share information in ways that can help protect their systems. And that tends to be at the Department of Homeland Security, at the FBI, at the Justice Department. And so number one was, whatever the judge does in this case, we want to make sure it doesn't chill that type of disclosure, which takes place, right, at a time where you're uncertain.
And so there are risks to having that conversation when you're uncertain. And we want the risk calculus to favor coming in, telling people what you know at the time, what you think it might be, even if that later turns out to be wrong.
So that was number one. and linked was we want to make sure we don't put out such detailed vulnerability information
that we actually give a roadmap to the bad guys, to the crooks, to the terrorists,
to the nation states that makes it easier to attack companies and cause harm. And we're in a
different space than a lot of other spaces where you enforce, where every time you enforce,
you're going after the victim in some sense, which isn't to say that there are no circumstances where that should take place, but it does mean you want
to make sure you get right what the carrots or sticks are and make sure you have a real
clear theory of the case as to why you are going after a victim.
Yeah.
I mean, reading the SEC complaint though, you know, the SEC says that this action doesn't
really have anything to do with the incident that occurred at SolarWinds.
What it does have to do with is the fact that they published a security statement to their website, which said that they were using a secure development lifecycle, that they had access controls on everything, and their security was wonderful.
And yet a whole bunch of internal communications revealed that that was, you know, the allegation is that that was not true. And they explicitly say in the complaint that, you know, this had nothing to
do with the incident. It is simply because they were giving misleading information to investors.
So, you know, why then should they be defended? You know, why then should an enforcement action
not happen if they have indeed been misleading
investors?
And to be clear, our brief does not take a position as to whether there should be an
enforcement action or not, which would be defending SolarWinds and you'd have to know
the full facts of the case.
But there were, to answer your question, there were two complaints.
So there's an original complaint, and we wrote our brief, and others wrote their briefs, and then they filed an amended complaint to distinguish it from what otherwise sounded like the day in the life of a CISO.
And what we were finding from version one was when we talked to people in this position, they were saying, I don't understand what it is that I'm supposed to do as a CISO based on this. I thought
my job was, even if I think our security program is good, my job is to go find things that can
improve and to point out when there are weaknesses. Does that mean I can't do that and at the same
time say my security program is good?
Look, look, look, there's always,
you know, it's a long road to get there and there's always work
to be done. But, you know, some of the internal communications that are spelled out in the SEC
complaint say things like, we are not doing what we say we are doing. You know, we are nowhere near
this security statement that we've published for investors. So it's not like, you know,
the case that, well, you know, it could use a few tweaks here. You know, certainly the SEC
seemed to have surfaced some complaints.
Oh, well, some, yeah, complaints from internal staff
saying that their security program was woeful.
You know, I guess the...
There were complaints from internal staff.
It's true.
I do think you need to be careful with complaints.
This isn't something that we weighed in on the Amicus brief, though,
because their disclosures to the public were different,
and they said in their general disclosure how
this could be a critical risk to the company. Where the SEC, and this is interesting, at least
under US law, and Australian law it's a little different, but the SEC isn't normally the regulator
that would say you're making a deceitful statement to a customer. They're the body
that monitors what you say to the shareholder.
I did find it interesting that a lot
of this hinges on a statement published to the website rather than information contained in SEC
filings, right? So I did also notice that that was a little bit strange. The question is though,
you know, you mentioned that they've amended this complaint. Have the amendments changed at all how you think or feel about this action?
I think we'll stand on our brief for the government officials, which really was about
educating the judge on the type of information sharing that we think is critically important
and cautioning and asking the judge however he decides to make sure that he doesn't chill
the CISOs from having that type of candid open conversation with other CISOs and with government
officials, number one. And number two, to make sure that the rules that they set don't require...
So in other words, you could say, I know I have a big risk in this area without saying,
and here's how you could exploit it,
so that they don't require so much detail on the vulnerability. So on that, I think, we'll stand with
our brief. I do think the amended complaint makes more clear the particular facts that led them to
take action and distinguishes it from the broad swath of CISO and company behavior.
It's also more focused on the person.
That has its own drawbacks.
I don't know what you're hearing, but here in the States, there have been so many actions lately that it's becoming harder to recruit.
I know I've been looking for clients.
People don't want to be a CISO anymore because they don't really have the authority, and no matter how good they are,
you can still have an attack that's successful, right?
And then the feeling is hindsight's 20-20.
Okay, but I mean, I will say that over the years,
every time the SEC does anything at all around this,
every single CISO on the planet, or even the DOJ,
we saw this with the Sullivan stuff with Uber.
Everybody was saying, oh, who would want to be a CISO and whatever.
But, you know, really, if you look at the complaint, there was some misconduct, some
pretty specific misconduct alleged there that I don't think most CISOs would have to worry
about.
So they're a nervous bunch, John, is what I'm saying, because they deal with risk.
But look, I guess what's...
Yeah, well, taking your point, I think what's key is,
and I can't talk about that case
coming from when I was,
dating from when I was at the Justice Department,
but I'll say more generally that,
taking your point,
I think it's important if you're able to articulate,
hey, here are specific things
that were done in this case,
don't do them.
And it's clear that that's the ideal place when you're
an enforcer, whether you're the Justice Department, SEC, or otherwise, right? That you shouldn't be,
you don't want to be in a gray area when you have the hammer, particularly Justice Department,
where the penalties are criminal. But also with SEC, where you can ban an individual from being
in, it can have enormously consequential personal harm versus, you know, a drop in
share price or losing salary, et cetera.
So you want to be very careful that you've set a standard so that someone knows, okay,
I'm in enforcement or not enforcement territory.
Yeah.
Now, look, one area where I think I can see, you know,
a little bit of a problem with the SEC complaint
is that we would not have known about the true state
of SolarWinds internal security or lack thereof
were it not for them being somewhat transparent
in the aftermath of the incident.
Is that one of the motivating factors behind, you know,
leaping to their defense?
So that was... again, I want to be clear about our brief
in terms of leaping to the defense, which was we don't take a position on the merits of the case.
But there was, and this is what I think they clarified in the second version of the complaint,
in the first complaint, to your point, there was discussion about what in US law is a so-called 8-K, which is, and there's a new rule that's about to go into
effect. This was brought prior to the new rule being in effect, saying you have to determine
what's material to the company in a very short period of time. And then if you determine it's
material, you have to make that public. And so according to the complaint, they were informed by one of the victims
essentially that weekend. And then they went public that Monday before the markets opened.
That's very fast. And then they said that it wasn't essentially fulsome enough, that disclosure
on the Monday morning. That had a lot of people scratching their head because their share price went down something like 25, 26%, a significant impact. And yet they still were
saying that wasn't fulsome enough. And then later that week, they added to it and they said,
you should have been able to put that additional fact in your first disclosure.
So the second round, I think to your point, focuses more on, okay, here are statements
that you made through your public blog or to consumers that were different than the facts
that you know. Investors read those types of sites, and so you're misleading your investors.
So it was a different theory, essentially, than theory one of the complaint.
Yeah, yeah. So I got a question, though, about this idea that this
could cause a chilling effect, which is that, you know, now the SEC, and this brings us to the
second part of this interview, right? The SEC now requires that companies disclose cybersecurity
incidents when they have a potential to be material, right? So if these disclosures are
now mandatory, does a chilling effect have any difference if they're forced to disclose anyway?
Yeah. So let me walk through that because that's not quite. So what happened, it already was mandatory if you concluded that a company had a material incident that you were supposed to report it.
The new rule just puts a tight clock on it.
Yes.
It says you need to look, and it puts a tight clock on it. The chilling that we're concerned
about and exactly you've hit what we wanted to educate the court about is not mandatory. And
that would be, you have an incident, you're at the early stages, you haven't concluded that it's
material and you go tell the FBI or the Department of Homeland Security that we're having this
incident. Or the incident concludes and it's never material. We still want them
coming in, because we want to know and learn about who is this bad guy, how do they operate, what are
their tactics, what are they...
And I guess what you're saying is you don't want the information
that's being shared with the FBI finding its way into an SEC enforcement action, right? Or they later
come back and say, hey, you never disclosed this to the public. You must have thought it was really serious.
You went and told the FBI.
Yeah.
I mean, but couldn't we sort of,
in the case of publicly listed companies,
couldn't we just mandate that sort of sharing?
Can the SEC, I mean, you know,
I don't know if you'd have to do that through new legislation.
I mean, I'd imagine you could just do it
through regulation
because the SEC is a powerful organization.
You could just say, hey, when this happens, you've got to go contact the relevant authorities because that's what a responsible management for a publicly traded company would do.
It's a great point.
And in fact, in US law, it's just a different agency that has the regulatory authority, there's a new statute, so-called CISA, that says that at least
if you're in critical infrastructure, you have to report significant incidents. A lot is going to
be left for the definition, but they gave two years to come up with regulations as to how that
new statute should work. But for the first time in the US, there is going to be a mandatory reporting
regime in critical infrastructure. And that was carefully calibrated and includes things like the information that you provide
in the report that's mandatory can't be used against you as the victim, but it can be used
So there's like a safe harbor provision to prevent exactly this from happening.
Exactly.
And so there's a question of what, that was carefully thought
through by statute. And we would talk about that in the brief as a different approach.
I'm not saying that the SEC, again, and we don't weigh in, if someone, for instance,
has a material event, hides it from the public, shareholders get hurt. SEC should enforce. That
is their mandate. It's a different
fact circumstance, but they're not the right agency to set up this mandatory reporting regime
because they can't require you to report to law enforcement. Congress can. What they can do is
report to them, but they don't do anything other than their mission is about the shareholder.
Yeah, you're not sharing it with the SEC's global SIGINT team, right? Because they don't have one.
No, I mean, I get it. But I mean, it sounds a little bit like, and you know, I'm just being kind of an ass when I say this, but it sounds like, you know, we're kind of advocating for
the right of companies to sort of lie about it as long as no one finds out when they have an
incident and share information. I mean, surely there's got to be some balance here. Absolutely. And you can't
lie. And if it's a lie to your shareholders, it's clear that the SEC would have enforcement
authority. If it's a lie to a consumer in the US, you're deceiving your consumers. It's more
likely the FTC that would have the regulatory, or it could be even a criminal.
Now, look, we've hit on this whole concept of disclosing events that might be material.
That, I'm guessing, is a little bit difficult.
I wouldn't want to be the one making that decision at a company because I've been reporting on cybersecurity for a long time.
It seems like shareholders have pretty uneven and unpredictable reactions to cybersecurity incidents.
So, you know, on one hand, it would be, I think,
very, very difficult to come up with some sort of reliable system
for determining whether or not something is material.
And on the other hand, I feel like because it's so arbitrary,
you know, companies should probably just disclose
everything. And that kind of seems like the way this might be shaping up. I suggested on a
different podcast, well, maybe the SEC can issue some guidance on, you know, what should and should
not be considered material. And then I got shouted at by someone with some experience in securities
law, because they told me that the SEC will never do that because they don't issue guidance
on what is and what is not material.
So how is one supposed to navigate this whole concept
of breaches, incidents, ransomware, whatever,
and materiality?
That's a great question.
And it's still, this is one of those areas
that's currently uncertain.
What the SEC has been saying is,
we care more about your process. We want to make sure in terms of your governance that the people with the information about the incident, the chief information security officer, et cetera, get that information to the people qualified to figure out a disclosure, the general counsel, the chief financial officer, the key business leads. So you can make an informed decision that's particular to the facts. That's why they don't
give out great guidance on this for your particular company as to whether or not it's material. But
to your point, in some areas, that's close to a quantitative test. You can say, okay,
my financial harm is more than 2%. Yeah, look, we're going to be dealing with this much money
in terms of like having to deal with the regulatory fallout.
We're going to be dealing with, you know, brand hit worth this much, whatever.
But then you look at, you know, incidents here like the, you know,
I know it's not an American company, but the Medibank Australia incident,
which was huge.
It was in the national press for a long time.
They did take a bit of a dip.
Share price recovered.
You know, they didn't crash nearly as hard
as you might expect them to.
So it seems hard is what I'm getting at.
Like you say, okay, well, it's the job of the CISO
to pass it to general counsel.
How on earth is general counsel supposed to know?
I mean, in the case of an incident like that, sure,
you're going to disclose it.
But I mean, I just don't understand in this environment what sort of incident you wouldn't disclose.
Yeah, no, it's what you've seen since the combination of this case plus the new guidance is a number of companies now make so-called 8-K disclosures, which means they're using the form
that's appropriate for sharing information with shareholders, but they're saying they
didn't conclude it was material. Yes. They're saying maybe we will down the track is what
they're saying, right? We've seen that a few times, which is at this stage, we don't think
it's material, but hey, who knows? Later on, leaving themselves a bit of wiggle room there, right?
Yeah, and that's pretty unusual in this space, but I think you've walked through the thought process why folks are arriving there. And the SEC did put out some guidance on this,
and they said explicitly, reputation, qualitative factors, not just quantitative
factors like reputation, should be part of your calculus as to whether
or not it's material. One thing they didn't publish, which I was curious about, but if you
look, and this surprised me, frankly, as being in this field for a long time, if you look at the
share price of most companies, even hit by some of the most severe incidents, including business
disruption, their share price was up maybe a month, maybe two months later.
It didn't actually have a long-term impact.
And then you wonder, well, gee, your point is that you can spook the market essentially inaccurately.
And now the guidance is I need to spook the market in order to not deceive the market.
It landed in a bit of a strange place.
What may happen is so many people
report it has no impact anymore. And this is why I wonder if the SEC is better off issuing some
sort of guidance. But I'm told that this is absolutely something they won't do. Why is that?
I'm not aware that there's some rule on guidance. They did issue a circular as to how you should
think about materiality when it comes to cyber that talked
about the different factors that go into the calculus. In terms of a bright-line test, I think
from their view, it wouldn't be right because it depends so much on your sector, on your particular business line, the prior reputation of your company as to the impact.
And their view is there are knowledgeable shareholders out there. What we want to do
is make sure that they have information that would be material to them in making an investment
decision. And so they want you to over-report. Yes, I suspect. My expectation was they want people to over-report,
and then maybe they'll issue some sort of statement saying,
well, these ones that fall into this category,
we're not so interested in anymore.
Do you think that could be sort of how this goes?
I'm pausing because...
It's impossible to know, right?
I would hope that they would issue, yeah, some...
If they get enough input from the market
that there's just confusion
around it, that at some point they may do one of these circulars that they put out to give you a
little more interpretation. What I hope they don't do is have people figure it out by
enforcement. So then you're reading the tea leaves of when they enforce it, when they don't,
which is a little bit the concern that we have, unless the judge is very careful in how they will evaluate the evidence, that might occur in
SolarWinds, that it sends a clear message if you're doing it by enforcement, unlike an advisory where
you can really lay out, okay, here's our thinking or thought process. Now, just to conclude this,
can I suggest to you, John, that a positive to come out of this SEC enforcement action is that companies will be very careful
about publishing bold claims
about their internal security practices
to their websites from now on.
I do think that will be a change that's resonated.
I expect that if you use the Wayback Machine,
you might've seen a few statements
just drop off the internet after this, right? Well, I don't know if you have this in Australia,
but in the US, every advertisement, et cetera, has this long series of tiny font disclaimers,
and it's all about legal liability risk. So it'll be like, buy this drug, it's terrific,
and then I'll say- The funniest one for us is you cannot advertise
prescription medication on television uh in australia if there's a new treatment for some
sort of condition you can say there's a new treatment but you can't talk about a specific
drug which is why it's very funny for us to go to the united states and see
primetime tv advertisements for clotting medication that come with you know for 50
percent of the advertisement is a disclaimer about how it might kill you.
Exactly.
So that, you know, I'm not sure how useful as a consumer that that approach is.
That is the approach that we have.
That may turn out to be in cyber where you get, you know, some claim we have cybersecurity,
but we are likely to be hacked.
There is no defense against a hack.
If we are hacked, it's going to be terrible consequences for us. We're constantly in a cycle of improvement.
But that's what they already put in their filings, right? So that's why I find it funny that the SEC
has gone to the statement from the website, not the boilerplate text that everybody uses. Anyway,
anyway, John P. Carlin, thank you so much for joining us on the show to have this conversation.
It's great to get some expert commentary around all of this because I know it's a big issue for a lot of our listeners.
Great to see you again.
Cheers.
Cheers. That was John P. Carlin there with a discussion of the SEC action against SolarWinds
and also the SEC disclosure requirements more generally.
Big thanks to him for that.
It is time for this week's sponsor interview now with Jimmy Mesta of RAD Security.
RAD Security was KSOC, but they changed their name literally this week.
And, of course, KSOC slash RAD specializes in Kubernetes security.
And Jimmy is joining us this week to talk about how they've been able to baseline
cloud security workloads. So you can head over to rad.security slash catalog, C-A-T-A-L-O-G,
to flip through their baselines. And I have actually done that and it is actually very
cool. So I started off by asking Jimmy to explain what's
in these baselines and here's what he had to say. So the baseline includes obviously metadata about
the container that we're fingerprinting or baselining. We kind of pin everything to the
container ID so it's unique and we can track those over time, bat them off of a base image and different versions.
Also within the fingerprint itself is a full process tree.
So we kind of go down the entire process tree
and learn everything that's happening
when the container starts up and when it runs.
We also look at all of the network connections
and codify those and track those.
You know, oftentimes something like MongoDB, it's going to go to downloads.mongodb.com
to kind of get some patches on startup and grab some updates.
And we also will look at any files that are accessed, right?
If, you know, the container is accessing /etc/passwd or any variety of files, that's included in the fingerprint.
That's formatted in JSON.
It's versioned, too.
So the aspiration here is to be a fully formed spec.
So these fingerprints will be reproducible and used in other locations. But the challenging
part and why this took a bit of innovation is getting these fingerprints to be reproducible.
If you actually watch the behavior of these containers and track everything,
there's a bunch of stuff that's kind of temporary, that's ephemeral,
that changes, that makes the fingerprint not actually verifiably reproducible.
So we have deduped and stripped all that out and basically made it so when you run Mongo once,
twice, three times, 25 times, it's going to look the same from a fingerprint perspective.
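The dedup-and-strip idea Jimmy describes can be sketched roughly as follows. This is a toy Python illustration, not RAD's actual implementation: the event shape, field names, and `"version": 2` schema are all my own assumptions. The key move is dropping ephemeral values (PIDs, client-side ports, timestamps) and sorting everything, so the same workload always produces the same fingerprint.

```python
import hashlib
import json

def normalize(events):
    """Collapse raw runtime events into a stable, deduplicated fingerprint dict."""
    procs, conns, files = set(), set(), set()
    for e in events:
        if e["type"] == "exec":
            procs.add(e["binary"])                       # drop PID and start time
        elif e["type"] == "connect":
            conns.add((e["dest_host"], e["dest_port"]))  # drop ephemeral source port
        elif e["type"] == "open":
            files.add(e["path"])
    return {
        "version": 2,
        "processes": sorted(procs),
        "connections": sorted(f"{h}:{p}" for h, p in conns),
        "files": sorted(files),
    }

def fingerprint(events):
    """Hash the canonical JSON form, so identical behaviour hashes identically."""
    canonical = json.dumps(normalize(events), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()
```

With this normalisation, two captures of the same container with different PIDs and different ephemeral source ports hash to the same digest, which is the "run Mongo once, twice, 25 times, it looks the same" property.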
So I'm guessing customers are using this to fingerprint their own containers, right?
Like whatever image it is that they're shoving into Kubernetes, like you can fingerprint it.
And you're doing that, a bit of that's being done with eBPF, right?
That's correct.
Yeah.
So we've also gone back to the drawing board with how we get the data from these workloads.
eBPF is kind of at the center of that, but we've decided to turn this into a data engineering
problem versus like, let's write more signatures and do more things on the box.
Because one of the big problems with a lot of the traditional CWPP sort of technologies or workload protection technologies is that they take a lot of compute and memory and they are prone to problems.
And you just keep adding sidecars to your sidecars and more daemon sets.
We've really tried to put performance in our customers' environments as the top priority.
And so that makes it a data science problem, right?
The fingerprints are generated elsewhere and drift is detected elsewhere.
And that's really different than what we've seen before.
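Once fingerprinting is a data problem rather than an on-box signature problem, detecting drift off the workload reduces to comparing a stored baseline against a freshly observed fingerprint. A minimal set-difference sketch, again with hypothetical field names rather than RAD's real schema:

```python
def detect_drift(baseline, observed):
    """Return everything the workload did that its baseline never saw."""
    drift = {}
    for key in ("processes", "connections", "files"):
        new = sorted(set(observed.get(key, [])) - set(baseline.get(key, [])))
        if new:
            drift[key] = new
    return drift

baseline = {"processes": ["/bin/mongod"],
            "connections": ["downloads.mongodb.com:443"],
            "files": ["/etc/mongod.conf"]}
observed = {"processes": ["/bin/mongod", "/bin/sh"],
            "connections": ["downloads.mongodb.com:443", "198.51.100.7:4444"],
            "files": ["/etc/mongod.conf", "/etc/passwd"]}

print(detect_drift(baseline, observed))
# {'processes': ['/bin/sh'], 'connections': ['198.51.100.7:4444'], 'files': ['/etc/passwd']}
```

A shell spawning inside a Mongo container, a new outbound connection, and a read of /etc/passwd all surface as drift, with zero compute spent on the workload itself beyond event collection.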
I mean, I really like this.
I remember several years ago, we had some of the ExtraHop people on the show and they
were talking, and right then everyone was talking about SBOM,
right, software bill of materials. And they came onto the show and said, you know what might be a
little bit more useful right now, and I know everyone's real scared and real panicked about
stuff like, you know, Log4j and whatever, but how about a network bill of materials kind of thing,
right? Like, wouldn't it be nice to know what everything in your network is expected to connect
to, for example. And look, I thought they made a really good point. And I think, you know, hopefully
someone from CISA eventually is going to get around to forcing vendors to actually provide
things like, you know, what IPs this thing's going to connect to, in some sort of standard format.
Hopefully that'll happen one day. But essentially what you've done is you've done that part of it
for Kubernetes environments, right?
You've actually built the tooling
where you can do that for your environment.
And it's not just network connections.
You've added a lot of other stuff there as well.
So you can really very easily know
when something's doing something weird.
That's the name of the game.
Yeah, if you think about,
every vendor talks about sort of these major supply chain attacks. If you really think about verifying the integrity of your software, SBOM's part of it, signatures are part of it. But if I'm actually shipping software to customers who are going to run in their environment, I want to verify with certainty its behavior.
It's great to get an SBOM, but then you're sort of assuming that it's correct, right?
Exactly.
Right.
The SBOM is a point in time snapshot of the components that make up that particular piece
of software.
And that's fine.
And you need an SBOM.
But I want to know all of the things the SBOM doesn't catch, right?
And there's a lot.
Well, there is.
And I've done a bunch of work over the last few years with Airlock Digital, right?
Who do allow listing, which I think is fundamentally a great approach to solving a lot of problems.
And when you start thinking about expanding that out a little bit to not just is this file okay but once you start expanding that
out to certain behaviors right and a bigger list of whether or not something can just execute or
not it's pretty powerful but it's also way too fiddly unless you can automate it right like
having some sort of manifest like i think it's probably at this point impractical for stuff like
you know stuff that runs on windows but you know in a Kubernetes environment, you can do this.
Right.
And I guess that's, you know, that's really what you've done here.
That's the bet.
Right.
And I think you can... we're really trying to push towards that positive security model
that we've been preaching about in security forever.
Right.
Instead of the deny list, let's go for an allow list.
Um, and we're doing the same, interestingly enough, for identities.
So we found that this fingerprinting method works great for service accounts,
even for humans or IAM roles in your cloud where you can say,
hey, Patrick does these things over the past 90 days.
He's only done this, and he's only ever done this
inside of this Kubernetes cluster, for example. So let's codify that, and we can start detecting drift when
Patrick kind of goes crazy and does something different. We just released a whole identity
threat detection and response suite for Kubernetes this week, actually, where we can go find
risky identities that you didn't even know were lurking in your clusters.
So there's a lot of innovation that still needs to happen here.
And we think we're at the forefront of it.
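The identity baselining Jimmy describes, "Patrick has only ever done these things in this cluster," can be sketched with the same fingerprint-then-drift pattern. The event shape below is a hypothetical stand-in for something like Kubernetes audit records, not RAD's actual model:

```python
from collections import defaultdict

def build_identity_baseline(audit_events):
    """Record which (verb, resource) pairs each identity has actually used."""
    baseline = defaultdict(set)
    for e in audit_events:
        baseline[e["user"]].add((e["verb"], e["resource"]))
    return baseline

def is_drift(baseline, event):
    """True if this identity has never performed this action before."""
    return (event["verb"], event["resource"]) not in baseline[event["user"]]

# Trailing-window history for one identity (illustrative data).
history = [
    {"user": "patrick", "verb": "get", "resource": "pods"},
    {"user": "patrick", "verb": "list", "resource": "deployments"},
]
baseline = build_identity_baseline(history)

# Patrick reading pods is normal; Patrick deleting secrets is drift.
print(is_drift(baseline, {"user": "patrick", "verb": "get", "resource": "pods"}))        # False
print(is_drift(baseline, {"user": "patrick", "verb": "delete", "resource": "secrets"}))  # True
```

The same shape works for service accounts and cloud IAM roles, which tend to have far narrower (and therefore more baselineable) behaviour than humans.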
Yeah, yeah.
You know, we've spoken about the profiling you're doing
and how you can spot things that are weird.
What's the approach then of like, you know,
what does your stuff do when something weird happens?
Is there blocking actions?
Is there alerts?
You know, what are the false positives like?
Like, can you talk through the then what part of all of this?
Yeah, yeah.
So, you know, the natural progression here is, you know,
if there is drift, which is expected, drift happens,
then we need to kind of inspect that drift with a microscope. Is it dangerous? Is it, you know, is
it benign? Or did this software package just update the set of IPs or the domain where it gets its
future updates from?
Exactly. And, or did this person get a job promotion and wind up with more privilege,
or, you know, like... It's all going
to be stuff like that, right? Yeah. And when you're dealing in the kernel level from a workload
standpoint, you can catch a lot of things, right? I mean, it's not a stretch to say this is a zero
day detector because that's how zero days work. You don't have a CVE. There isn't a known vulnerability or risk
configuration. It is just software or existing tools that may be already bundled into that
software. And we can detect that when they get used. So, um, it, it, it works by, by basically
extending that concept of drift detection into kind of other realms. And you can
act on that by, you know, quarantining the workload, severing a network connection, or just,
you know, out of the gate, creating a, you know, slack alert or something, you know, that you
already used and as part of your kind of SOAR or other security tooling that you have in place.
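The "then what" Jimmy outlines, inspect the drift, then either alert or take a harder action, might look like this toy dispatcher. The scores, thresholds, and action names are all invented for illustration; they are not Rad Security's actual logic.

```python
def classify_drift(event: dict) -> int:
    """Toy triage score for a drift event. Expected drift (e.g. a
    package fetching updates from its own known domain) scores low;
    new processes or outbound connections in a workload score higher."""
    score = 0
    if event.get("new_process"):
        score += 5
    if event.get("new_outbound_ip"):
        score += 3
    if event.get("known_update_domain"):
        score -= 3  # benign: software updating its own IP/domain set
    return score


def respond(event: dict) -> str:
    """Map a drift score to a response: hard actions for high scores,
    a Slack/SOAR alert for low ones, nothing for expected drift."""
    score = classify_drift(event)
    if score >= 7:
        return "quarantine_workload"
    if score >= 4:
        return "sever_network_connection"
    if score >= 1:
        return "slack_alert"
    return "ignore"


print(respond({"new_process": True, "new_outbound_ip": True}))          # quarantine_workload
print(respond({"new_outbound_ip": True, "known_update_domain": True}))  # ignore
```

The interesting tuning problem is exactly the one discussed next: where those thresholds sit determines the false-positive rate versus what you might miss.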
In your experience, and I know this is quite new, right, so not many customers are using it,
what's the feedback been? Have there been certain little triggers
where they've said, that one's no good because it goes off all the time, or, you know, it missed
this? Because, you know, you're always going to have that, right? Like, you can say this is version
one of this product and it's going to be perfect, but you and I both know that's going to be
bullshit if you say that, right? There's always going to be some tuning. So, you know, how's that played out?
Yeah. I think it is new, right? And it's one of those
things that, when you see it, I think people get it, and there's like a light bulb
that goes off, and they think of some way they can use this kind of fingerprint model. There's
challenges in certain container environments, and like certain builds of containers, that make
the fingerprinting hard, right? Like, BusyBox has just weird stuff that happens, and it always will,
and even the ECR image scanners from AWS don't handle BusyBox that well. Same goes for trying to make a fingerprint
for it. But those are just, you know, edge cases we'll keep chopping away at. But overall,
I think we're going to ease into this, right? The biggest thing will be getting the
drift mechanism's scoring algorithm dialed in, so you've reduced false positives but don't miss
something important.
something important. Yeah. And I'm guessing for most people who are existing customers, they're like, cool, I'll plumb
this through to our Slack.
And it's just, you know, a useful FYI. I'm guessing that's the way the people
I know would treat this, as a bit of an FYI Slack bot.
Yeah.
I mean, I think when you build your own custom registry that is yours, tied to the software you build, it's unique, and you're going to be able to use it for all sorts of different things.
I mean, most people are really doubling down on the kind of CI checks, right?
If you talk about supply chain again, like, you're going to want to make sure in CI that behaviors match up. Or just trying to get away from this:
we talked to someone at a bank yesterday
who wrote 600-plus signatures
for their runtime product last year,
their whole team, that's all they were doing.
And it's a lot of people,
it's a lot of signatures to write
and you don't even know how to test all of them.
This is a different model.
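The CI-check idea, fail the build when a new image's observed behavior drifts from the fingerprint of the last release, could be sketched like this. The fingerprint format and function are hypothetical, made up for illustration:

```python
def ci_behavior_check(release_fp: set, observed: set) -> int:
    """Fail the CI step (return 1) if the new build exhibited behaviors
    that are not in the fingerprint recorded for the last release."""
    new_behavior = observed - release_fp
    if new_behavior:
        print("FAIL: behaviors not in fingerprint:")
        for behavior in sorted(new_behavior):
            print(f"  - {behavior}")
        return 1
    print("OK: behavior matches fingerprint")
    return 0


# Fingerprint captured from the previous release's test run.
release_fp = {"exec:/app/server", "connect:api.example.com:443"}
# The new build unexpectedly spawned a shell during its test run.
observed = {"exec:/app/server", "connect:api.example.com:443", "exec:/bin/sh"}
ci_behavior_check(release_fp, observed)  # fails: exec:/bin/sh is new
```

This is the contrast with the 600-signature approach: instead of hand-writing a rule per threat, the team maintains one fingerprint per workload and lets the diff do the work.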
Well, I mean, it sounds like a worthwhile endeavor to me.
Jimmy, great to see you again.
And we'll chat again soon about all of this.
And yeah, as I say, best of luck with it.
It sounds great.
All right, thanks.
And check out rad.security
if you want to see these things in action.
That was Jimmy Mesta of rad.security there.
Big thanks to him for that.
And you can find them at rad.security.
And honestly, it's worth going to their website just to check out the new logo.
But that is it for this week's show.
I do hope you enjoyed it.
I'll be back soon with more risky biz for you all.
But until then, I've been Patrick Gray.
Thanks for listening.