Risky Business - Wide World of Cyber: A deep dive on the F5 hack
Episode Date: October 21, 2025
In this edition of the Wide World of Cyber podcast, Patrick Gray talks to Chris Krebs and Alex Stamos about the F5 incident. They talk about what happened, whether it's... a big deal, and why private equity ownership of mid-tier cybersecurity companies is often a red flag.
Show notes
Transcript
Hey everyone, and welcome to another edition of the Wide World of Cyber podcast.
My name is Patrick Gray.
Wide World of Cyber, of course, is the podcast that we do here at RiskyBizHQ with Alex Stamos,
the former CISO of Facebook, former CISO of Yahoo, founder of ISEC partners,
and now he's back in startup land.
What is it now, Alex?
I am the Chief Security Officer of Corridor.
Fantastic.
And we are also joined by Chris Krebs, the founding director of CISA, co-founder with Alex
of the Krebs Stamos Group, currently fun-employed, I guess.
Enemy of the State.
Well, hold on now.
Come on.
That's like, I hope my mom's not listening to the podcast because she would not like that very much.
I am, let's put it in this way.
He's not the enemy of the State.
He's just disliked by the State, I think.
you would say. He's the, the frenemy of the state. I'm in stealth mode. Let's just put it that way.
All right. So stealth mode, uh, stealth mode, Chris, uh, joins us as well. Currently, this podcast is
unsponsored. Uh, if you are interested in sponsoring this podcast, please do reach out to sales at
risky.biz if you are brave enough. Um, so today we are going to be talking about the F5
compromise, which of course came to light, uh, last week. As it turns out, and of course,
details are still coming to hand. We're recording this on Tuesday, the 21st of October.
But it looks like we had a Chinese APT group inside F5 for something like two years.
They were doing things like raiding the internal bug tracker, looking for bugs and whatnot.
So of course, now that F5's discovered this, they've rotated a bunch of keys, which is
interesting because those keys should be in an HSM, and this rotation kind of implies that they're not.
And they've also announced patches for something like 44 bugs.
I actually want to start with you on this, Alex,
because I've been through the bugs that F5 is patching.
None of them look particularly serious at first glance.
That said, all of them, described as DoS conditions and the like,
look like they actually do have the potential to be quite serious.
You being the sort of hard tech guy on this podcast,
what was your read on the bugs that F5 actually patched in the wake of this
thing. Yeah. So they're rated low so far. These are also exactly the kind of flaws that I would never bet my career on somebody not being able to turn into something exploitable. You know, what we have seen over the past several years is high-end threat actors really liking to go after these sealed network devices, because they are identical to each other, you cannot run EDR or other kinds of security products on them, and often they lack good anti-exploit technologies. So if you do have a use-after-free bug, if you have a buffer overflow, if you have some kind of memory management flaw, the odds of you having an exploit that you can make reliable are actually quite good. And, like you said, they are described generally as DoSes, but I would trust F5 about as far as I can throw them, metaphorically, at this point. So I would not trust them at all. Fortunately,
you can get their source code now pretty easily. So, you know, there are a number of people who are
reversing out these bugs themselves and trying to figure out from the patches. So I expect that we
will figure this out pretty soon, as well as probably start to see more exploitation in the wild.
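To make Alex's point concrete, here is a minimal, hypothetical C++ sketch of the two bug classes he names, a use-after-free and a buffer overflow; the struct, function names, and input here are made up for illustration and are not from any F5 code.

```cpp
#include <cstring>
#include <iostream>

// Hypothetical session state for an appliance's message parser; not from any real product.
struct Session {
    char peer[16];
    void (*on_close)(Session*);  // a function pointer an attacker would love to control
};

void handle_message(const char* attacker_input) {
    Session* s = new Session{};
    delete s;                    // session torn down early on an error path...
    s->on_close = nullptr;       // ...use-after-free: a write into freed memory. A crash if
                                 // you're lucky; attacker-groomed heap data under s if you're not.

    char buf[16];
    std::strcpy(buf, attacker_input);  // buffer overflow if input exceeds 15 bytes: corrupts
                                       // adjacent stack memory such as the return address.
    std::cout << buf << "\n";
}

int main() {
    handle_message("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");  // oversized input exercises both flaws
}
```

Whether either of these is "just a DoS" depends entirely on what happens to sit next to the corrupted memory, which is why a low severity rating on an appliance with no exploit mitigations is worth very little.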
Yeah. So, I mean, one thing that initially occurred to me when I heard that they'd patched a bunch of bugs, and everyone's talking about how these F5s are all at risk, is I thought, geez, you know, are all these people exposing the management interface of their F5 to the internet? Because, man, that's a really risky thing to do even without these bugs. But then when I started diving into the detail of the advisories, there was a line that kept popping up over and over and over: undisclosed traffic could cause this condition, you know, could cause this process or service to terminate. And I was thinking, undisclosed traffic? So wait, all you've got to do is get the right bits to hit an interface and you can trigger this condition? That's what made me think, oh, okay, this might be quite serious. Yeah, and it's not totally clear to me. I mean, obviously the F5s include WAF functionality, right? So they're doing deep inspection of everything that goes past. And some of this stuff does involve, you know, a lot of it is, like you said, the management interface,
and you should absolutely never, ever, ever, ever, ever be exposing the management interface of any kind of sealed network device to the internet. Those things should always be extremely highly isolated, and those management interfaces should only be exposed for the minimal time possible to do that kind of management. You should be VPNing in, honestly, into those VLANs, to be doing that kind of maintenance. But yes, the language here does hint that some of these are probably exploitable from in-band traffic to the applications that are behind the F5s themselves.
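As a rough illustration of the exposure check being discussed, here is a small sketch, assuming a POSIX system, that simply tests whether a TCP port answers from wherever it is run; the hostname and port are placeholders, not real F5 defaults.

```cpp
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <iostream>

// Returns true if a plain TCP connection to host:port succeeds from wherever this runs.
// Run it from outside your network: success against a management address means that
// interface is internet-exposed and belongs behind the VPN / management VLAN.
bool tcp_reachable(const char* host, const char* port) {
    addrinfo hints{}, *res = nullptr;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0) return false;

    bool ok = false;
    for (addrinfo* p = res; p != nullptr && !ok; p = p->ai_next) {
        int fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0) continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) ok = true;
        close(fd);
    }
    freeaddrinfo(res);
    return ok;
}

int main() {
    std::cout << (tcp_reachable("mgmt.example.internal", "443")  // placeholder target
                      ? "management port reachable: exposed\n"
                      : "not reachable from here\n");
}
```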
Nice.
Real nice.
We love it.
We love it.
Okay, so Chris,
what's your take on this whole thing?
I mean, so it's bad,
but one thing that I found interesting
about this whole thing
is they had access for two years
including access
to the update infrastructure,
right?
Where F5 prepares and ships the patches
but there's no evidence that they actually tried to ship a malicious patch.
And I think that's very interesting when we look back over similar, you know, supply chain
compromise incidents. SolarWinds is the one that comes to mind.
They did manage to ship out malicious updates that would pull malware once they had access
to similar infrastructure at SolarWinds.
Why do you think they didn't do it in this case?
Is that because that's how they got caught last time?
Well, it was not them, right?
So first things first, right?
Oh, that was the, was that the Russians?
That was, oh, how soon we forget.
Yeah, amazing.
Wow.
Look, I mean, maybe we're making a couple of assumptions here.
One, I know they say state actor and then the rumor mill spits out China and I think it's probably accurate.
The second is how long they've been in, right?
So F5 talks about sometime in August of 2025 and there's some other kind of jumbled reporting that says, you know, it looks around a year.
And there's other reporting that says maybe it's two years.
I have heard that F5 knows exactly when it happened.
And they're providing that information in, you know, individual engagements,
not broadly to-
Well, that's what's leaking and that's what's leading me to suggest that the initial intrusion occurred in late 2023, so about two years ago.
Yeah, and all I'm doing here is, like, let's just kind of stitch together the context around this one, because as much as we like to think these things are cut and dried, they're not, and the details just kind of come dribbling out. So maybe there's a comms lesson learned here as well, and we've seen much of that. So why not? You know, it's China, whether it's a contractor or it's MSS or whatever. I don't know it yet, or maybe someone knows and it just hasn't been shared yet. Maybe they're that much more deliberate. Maybe they weren't in the right spot yet. Maybe they didn't have a payload they were ready to ship. But it also could be, like, this stuff's hard, and maybe they just weren't ready to go yet. If the background story is true, if the rumors are true that they have the appropriate logging in place to kind of know when they got in and see some of the movement, then maybe F5 had a better security posture than we would have expected, given, you know, where they sit in kind of the mid-cap space. You know, they're not Microsoft, and Microsoft's had their own set of issues, or AWS, Google, all those guys. So, you know, I think there are a bunch of different
possibilities, but I do always keep coming back to that private equity backed mid-cap enterprise
software space. And if I'm a threat actor, I'm trying to paint these guys up, because, and I'm not saying this is the case with F5 at all, I'm saying that is a super attractive target set, because of the way private equity really looks to maximize profits out of some of these companies, and maybe they're not investing to the extent a larger security company is.
So just kind of some immediate thoughts about where they are.
And there's a whole other set of issues that we can talk about, just like what's it like
to execute the cleanup and the mitigation set, particularly in places like the U.S.
federal government right now.
Yeah, I mean, F5, we should point out, is actually a listed company.
It's kind of an interesting company in a lot of ways because they make, you know, these load
balancers and things that help you do hybrid sort of on-prem cloud-based stuff.
Like, they make some fairly unique equipment, which is not known for being particularly
secure.
They also took on NGINX as well.
So just an interesting sort of mid-tier company.
We actually took them on as a sponsor years ago just to find out more about what they
were doing on the NGINX side.
But they are not a PE-backed company.
They have a bunch of institutional investors, though, not that that kind of changes the impact.
I will say they did something really interesting, though.
They, from a personnel perspective, and that's always something to keep an eye on when you have these big events.
What happens on the personnel front?
They took a board member, a board member of F5, a former CISO of Equinix and a bunch of other places, a guy named Mike Montoya.
He was a board member, and they brought him down.
He left the board, and he came down as the chief technology and operations officer, something like that.
But it's a really interesting move by the board to kind of like send a shock troop in there,
get their arms around what they understand is a really significant problem for them.
Branding-wise, reputation-wise, engagement with customers.
I mean, Alex, we've seen this before in places like SolarWinds and Ivanti.
It's just some of these decisions you make right out the gate are huge in terms of stabilizing the company itself
and kind of the perception of how you're responding.
Mike Montoya was on the show years ago
talking about the Equinix Breach and, you know, definitely a sharp guy,
definitely liked him.
That's a great interview, that one.
But, yeah, Alex, what do you think of this response, right?
Of, you know, getting someone from the board
to actually go into the trenches while all this is going on.
I mean, that does, you know, to Chris's point,
that does seem like kind of a good thing to do in this situation.
Yeah, I mean, it was a quick move.
I think, you know, the other thing is what they have said so far, which we'll see if it changes.
You know, F5 does not just ship pizza boxes.
They have cloud services, right?
So this could get much worse. One of the reasons why you would want access to F5 over a multi-year period would be access to their broker services, right? So their distributed CDN and their cloud services. If these actors did have access to those services, that would be access into the back end of many, many companies.
They are saying right now that there is, quote unquote, no evidence of access to those systems.
We have all seen that change, right?
And especially since those services obviously use base F5 products.
So if they had access to vulnerabilities in those products, you know, it is pretty early in the incident response cycle to be making those claims.
So that would be another reason why you would see them not doing something as obvious as backdooring patch cycles,
because the big victory here might be access to those customer consoles that are already hosted in the cloud
and then utilizing those, you know, basically the zero trust access broker mechanisms to get access.
But I believe a lot of that came from F5 picking up NGINX, which is why I mentioned it earlier, right?
So it's almost like a completely different company.
So, you know, I'm not prepared to speculate.
Yeah, I don't know how much of that is NGINX versus F5 proper, yeah.
Yeah, well, we don't know.
I mean, it's interesting because F5 is like this consolidation of a bunch of different companies.
It is, like Chris said, like some of these PE-backed firms where you have a bunch of different sub-brands that get smushed together.
And then it's extremely hard to tell from the outside where the technical lines are on the inside,
because from a branding perspective, they try to make it all look the same.
From an attacker's perspective, they're all probably quite different.
Yeah.
And once you get on the inside, it actually gets quite hard probably to cross those administrative domains
because the internal integration has actually probably gone quite poorly.
Yes.
Well, I mean, that's kind of what I was driving at, which is, to expect that they would be integrating the old part of the business and the new part of the business, and doing that well, and not providing those sorts of opportunities to attackers, yeah, I'm a little bit unsure about that. So we will get into that sort of private-equity-led, you know, mid-cap vendor discussion
in a little bit. But more immediately, you two were heavily involved in SolarWinds' response to
their incident. This was a Krebs-Stamos group gig when you went in there. You know, what did you
do in there that worked? Where was the emphasis in that response? And where do you think F5 should be focusing? Yeah, I mean, that was our first project. That was actually what kind of pushed us off the bench. Chris was trying to actually take a little bit more time to spend with his family and decompress from his, uh, from his first run-in with Trump. Yes, exactly. And Alex, you were, you were trying to write a book. I was, which is still not done. Right. I mean, it's partially up, right? If you go to TSbook.org, our trust and safety textbook is partially done, and we continue to release chapters as they get done.
But yeah, so basically SolarWinds happened, and they had a new CEO coming in.
He had accepted the job not knowing that they had been secretly breached.
He finds out that this breach is being announced and he says, okay, I will still come,
but you have to let me bring in my own people.
And he calls us and says, hey, will you be my guys to help me navigate this?
And we, you know, formed the company and came in to help with the response.
And so, you know, it was a big effort.
There were tons of people helping on the forensic side.
But Chris and I were kind of on the really focused less on the hands-on forensics for which there are, you know, lots of skilled people and more on the corporate recovery of how do you rescue this company and, you know, retain some enterprise value out of what could have been a complete and total write-off of the organization.
So what was the advice, Chris?
There are two components of that, right?
There's an up and out, and then there's a down and in.
And what I mean by that is, you know, working with the board,
working with the C-suite on the strategy of response where, you know,
you're not listening to every single thing your outside counsel says.
You're not listening to every single thing your comms team says.
Because if you did, you wouldn't do anything ever.
You'd file your SEC-required filings, and that would be it.
And yes, that's an exaggeration, but nonetheless, you know, our emphasis was,
hey, you've got to kind of meet your customers where they are and you can't withhold information.
You have to be as transparent as possible. And really, the case here is like Okta, right? I mean,
we've talked about this on the show before, about the, you know, a couple years ago they had
their incident. They didn't share enough. The next one, maybe they overshared. But nonetheless,
it's working with these teams, helping them put into context, the advice they're being provided
by outside counsel on engagements, on transparency. Sudhakar Ramakrishna, who's the SolarWinds CEO,
had a regular blog post. He did webinars. Very, very transparent, not just on kind of what the
state of the response was, but more importantly, like, how are they getting to good? And I think that's the
real question right now for F5, is if they have truly been internally owned front to back,
where a customer looks at their service catalog or their offerings and says,
how do I know that the Chinese didn't own every single one of these?
Like, you're going to have to rewrite the entire codebase for everything.
How can you get me something that makes me feel better about that?
And so that's a lot of what the down and in was: Alex working with the team and Tim Brown
and those folks over there at SolarWinds.
They're like, okay, all right, here's how you clean up your pipeline, your CI/CD pipeline.
Here's how you restructure your security team, put a governance process in place.
Here's some of the accountability measures you can put in place.
And it's going to take you X period of time.
And, you know, at the same time, you're getting a lot of pressure, right?
Because the sales organization says, hey, I need an attestation from KSG, from Alex and Chris that says we're clean.
And Alex and I are looking at each other, like, are you out of your mind?
We can't do that.
We can tell you that what you're doing is, you know, headed in the right direction.
And so a big part of it, we had to basically build a whole new application security function
for them.
We also massively simplified their product line.
So one of the problems SolarWinds had was they ship too many products.
And you see this at these kinds of security companies where they never throw a product
away, right?
Well, I guess SolarWinds is not really a security company; it's kind of a general IT vendor.
And what will happen is they'll accumulate this long tail of products that they
put into sustained engineering, where one or two people are still churning out dot releases every once in a while, and it's making a couple hundred grand or half a million dollars a year, and they're like, well, why throw that tiny little bit of revenue away? And they don't realize that thing is actually not an asset, it is a liability on their books, because it represents a significant amount of security risk, and they don't see it that way. The opposite of that, Alex, is something like 20 years ago, when some friends of mine came to me and asked me to help them coordinate a vulnerability disclosure to Cisco. It was a Wi-Fi product that had come to them via acquisition and was quite old. We reported these bugs to them, and their response was, well, we're just going to EOL this entire line, which was quite funny. We thought that was crazy at the time, but, you know, in retrospect, that was probably the smart thing to do.
Yeah. I mean, I think the ethical thing to do is to end-of-life stuff, you know, before you get the bug report, right?
Like, if people are paying for stuff, you should patch it, and then you can say, we're end-of-lifing this a year out.
And to their credit, I mean, SolarWinds didn't just kill stuff.
They announced, we're end-of-lifing stuff in the future.
But that's one of the things we did was we helped them kind of triage, hey, you know, this stuff
is like, we're just not making enough money for this to make sense.
And so they announced, you know, six months from now, a year from now, this stuff is
dead and simplified it.
And then we built an application security process to go through and do assessments to triage.
We're doing pen tests.
We lined up all these pen test firms.
We lined up code review firms to go through all of the products
other than the core product that had been backdoored.
And to go find,
because the other thing that happens,
this is the same thing I did with Zoom,
when Zoom had the same problem,
is you have the core issue,
and then you have all the hangers on, right?
What happens is the media starts paying attention.
And so every tiny little thing
that would usually just be a $1,000 bug bounty payout
becomes a front page New York Times story.
Another bug in Solar Winds.
Another bug in F5. Yeah, no, I know.
That's like little tiny bugs in Zoom
literally became like front page stories
in the New York Times business section.
And I would have to like yell at a New York Times reporter
of like, you know,
Microsoft Teams had a bug that you could take over all of teams
with like a bad JPEG or something.
And you guys never mentioned it,
but like this stupid little bug in like a Zoom installer,
you guys are now turning into a huge thing
just because Zoom is in the news, right?
And this is not ethical journalism.
There were some pretty decent bugs in those Zoom installers.
There's some real bugs in Zoom,
but like that wasn't the real bug, right?
So what you have to do is then you have to front run
all the stupid bug bounty guys who now want to become famous
because, you know,
and so you have to go find all those bugs and fix them
because everybody wants to become famous,
everybody wants to run to the media with every tiny little bug.
And so you have to go do that in every single product, even, and you have to massively
overextend.
And so we're doing all that kind of stuff while rebuilding the application security team,
while, you know, doing a bunch of code review and then uplifting the code that's
getting built going forward, while, like Chris said, rebuilding the CI/CD pipeline. Which, in the end, this wasn't a security flaw. What happened here was the SVR decided to spend nine months infiltrating inside of SolarWinds, and then they built custom malware to, in memory, and I'm just reminding people of what happened here, in memory, they waited for the build process to happen, and then they replaced, in the memory page in the kernel, the code as it got compiled by Visual Studio, replaced it before it hit disk. So it never actually even touched disk. It, like, decrypted the code, changed the page in the kernel, and then cleaned itself up. The only reason we ever figured out what happened is that one of those build servers was a VMware machine and it got snapshotted at the right moment during the build. And so we had a memory snapshot of the malware decrypted in memory. The SVR actually cleaned all this stuff up perfectly. They just didn't know there happened to be an old snapshot that got found, and then one of the KPMG guys, I think, did forensics on it. And we got super lucky. Otherwise we would still not know exactly what happened.
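A sketch of the defensive idea this story points at: because the swap happened in memory before the artifact hit disk, hashing the output of the one compromised builder proves nothing, but rebuilding the same commit on an independent machine and comparing the two artifacts can surface the tampering, assuming the build is reproducible. The file paths below are hypothetical.

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

// Read an entire file into memory; returns an empty vector on open failure.
static std::vector<char> read_all(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}

int main() {
    // Hypothetical paths: the artifact from the primary build farm versus the same commit
    // rebuilt on an isolated, independently administered builder. An in-memory implant on
    // one machine shows up as a mismatch here, provided the build itself is reproducible
    // (pinned toolchain, no embedded timestamps or signatures in the compared bytes).
    auto primary     = read_all("dist/agent-primary.dll");
    auto independent = read_all("dist/agent-rebuild.dll");

    if (primary.empty() || primary != independent) {
        std::cerr << "MISMATCH: artifacts differ, investigate the build chain\n";
        return 1;
    }
    std::cout << "artifacts are byte-identical\n";
    return 0;
}
```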
I think you just described the only instance in known history
where there was a security benefit from running VMware.
But this is also like why I get kind of pissed about all this SEC bullshit that happened.
Like, Sorwin's made a bunch of mistakes.
But the idea that like the SEC who couldn't even like secure their own Twitter account
from being hacked because they didn't turn on two factor,
the idea that they would be able to stand up against a concerted SVR effort
at this level is just a joke, right? Like, this was the absolute scalpel, the surgeons of the Russian Federation, slowly infiltrating this company and then building this incredibly beautiful malware and then doing a really good job of cleaning up after themselves. And, like, we got spectacularly lucky that we have any idea of what happened here. Anyway, like, we had to rebuild the whole application security team at this company to uplift it. Like, SolarWinds did not have the AppSec program that they should have. That is honestly true of pretty much every software
company their size. Yeah. Now, look, before we get on to talking again about what others of these mid-tier companies can do, because you both see some opportunities there for ways that they can, you know, at least make this problem not as bad. Chris, I wanted to ask you, we've just talked about what F5's response should be. You know, what should users' response to this be? And in particular, I wanted to ask you about the U.S. government response to this, because CISA has issued an emergency directive for government agencies to patch this. But the government's shut down. Are security people still actually, you know, working and turning up to work in U.S. government agencies at the moment?
I mean, do you guys remember when emergency directives out of CISA, and the authority wasn't even granted until like 2015 or '16, do you remember when these were rare? Binding operational directives, yes, but emergency directives were super, super rare. Like, one of the first ones we pushed out was mandating the removal of Kaspersky from civilian networks. That was in the 2017-18 time frame, and that actually went all the way up, kind of in an ancillary case, up to the Supreme Court.
But immediately after that, or not too long after that, in 2019, there was the DNS records tampering, the Sea Turtle case, I think it was. And that was January of 2019. And again, that was in the middle of a partial shutdown. It was not a full shutdown. It was a partial shutdown.
But for the outside observer, it doesn't matter. It's the same freaking thing. Nonetheless,
a good chunk of IT teams were furloughed. So they were sitting at home. And yet we had this
active exploitation out in the wild, changing records, DNS records. And there was this whole list of things we had tasked federal agencies to do. And there were some agencies that had to recall IT folks. And, you know, the thing is, like, typically security operations folks are not furloughed. But a lot of this stuff, even with the F5 directive, it's not going to be done by the security operations team. Some of it will be, maybe, pulling some of the CDM data and some of the other scanning and inventory data; they can pull that.
And in terms of actually patching systems,
that's not going to happen probably in the security team.
That'll happen in the IT operations side.
So it's hard.
And then, you know, at the same time, you're dealing with a lot of the communications
angles, where again, your PR team, your comms team, those folks are out too.
And then you have the internal coordination piece, like reporting up to CISA.
So a lot of the important people that would be involved in executing this,
would not be necessarily on duty.
You can recall them when they're furloughed,
but that takes time.
There's a process.
There's a 24-hour process.
You got to notify them.
You got to say, hey, we need you on duty at station.
And while you're furloughed, you're supposed to check your phone or email with a certain frequency.
You can't go too far away because you have to get back to the job.
So it adds just that much more complexity
to something where obviously CISA thinks it's a big, big damn deal.
But it obviously needs to happen.
I think there's kind of a knock-on effect as well. Something that I learned in '19 was that as soon as CISA pushes out an emergency directive, every other organization, commercial, state and local, or whatever, it doesn't matter that they're not in the federal government, they're looking at that and they're saying, hey, we need to do this too, because their bosses are going to expect them to do it too. Plaintiffs' lawyers are going to ask, hey, did you do this? No, you didn't? You got popped? Well, you should have done it. So there's a long tail to those emergency directives. And again, the period of a government shutdown makes it that much more complicated.
So you think probably it ain't going to get done quickly within government, but this will still hurry along the private sector?
I don't know. For sure on the latter, it's definitely going to hurry along the private sector, because it takes a bunch of disparate assets, including F5's and other recommendations, and puts it down into one really digestible set of guidance and instructions that you can kind of checklist off and execute against.
As for what happens inside government, look, I think they're still probably for the most part
going to hit their timelines, the 72 hours and other deadlines that are in the emergency
directive.
It's just going to be like hurt locker stuff.
I mean, they're going to be working just crazy hours to get this done, through the weekends, which, hey, I mean, that happens in the private sector, too.
I think they'll get it done.
I think, again, on the plus side, though, as far as I know, we're not seeing
active exploitation in the wild, at least not yet, from what I've heard.
Even before the shutdown, there's basically nobody working on cyber in the United States
government.
I just want to point out... Oh, that's not true. Okay, so, but, okay, even before the shutdown, there is one confirmed leader working on cyber. It's Sean Cairncross, the National Cyber Director. Right. We do not have a director of the NSA. The acting director was just informed that he will not be promoted to be the director. So we have no confirmed director of the NSA. Yeah, but there are CISOs and admins and people doing the actual work, Alex, come on. Right. We have an acting director of CISA; Sean Plankey has not been confirmed. Under the Biden administration, there were about 400 people working on the National Security Council, including a fully staffed cyber division. There are fewer than 40 people working in the National Security Council now. Fewer than 40 people. There have not been that few people since the creation of the National Security Council.
There's nobody, basically nobody works in the EEOB, right? Like, it's like you're going back to the time
at which the entire Department of Defense fit in that building before the Pentagon.
Well, look, look, I got to say, I got to say, I was actually surprised to see an emergency directive come out from CISA, given everything that's been going on with CISA being gutted, right, Chris? And it's a testament, I'm going to say, to Chris and Jen Easterly, who set up these processes, and to the professional staff who are still there, that they're keeping the ball rolling when you have this complete and total turnover and there's no political staff left. But, like, we can only go for so long when there's nobody in charge. Yeah. And there's nobody in the White House coordinating any of this stuff.
Chris, Chris, you had something there.
Yeah, just to, right, to defend CISA here, I think, Alex, to your point, there is a lot of institutional knowledge and muscle memory that's been built up over the last several years. In fact, I don't think this is the first emergency directive of this administration. I believe there was one not too long ago. So the team that knows how to do these can do them, and they've been doing them for years. So that's there. I think what you're kind of hitting at, though, is the more strategic lack of capacity or capability, really, within the federal government, at CISA, perhaps at the White House. I mean, I don't know what's going to happen with Sean Plankey's nomination to be CISA director. Ted Budd from North Carolina is holding all DHS nominations, including Plankey's.
So, you know, who knows when that's going to wrap up.
But the lack of leadership and the strategic push behind cyber is for sure going to hamper, you know, pushing the ceiling on where these agencies can go. And I swear, I mean, this is just the op-ed waiting to write itself. You know, we sit here in October of 2025, and the policy conversations around cybersecurity are virtually the exact same as they were in October of 2015: information sharing and offensive cyber, including hack-back.
Dude, there's not a lot of drift and there's not a lot of net new.
Because the only real appreciable difference, frankly, is that AI is finally here.
Now, now, now.
That's a perfect opportunity to segue here because the last thing we're going to touch on is
this PE stuff, right?
So, you know, Ivanti is a great example of, oh, look, it's not even private equity.
It's just old software being acquired and the new owners just
turning the handle on that software, you know, churning out invoices, doing as little maintenance as possible.
Now, that's one model in the software industry.
I'm going to actually stick up for PE a little bit in some circumstances because you can see them do smart things.
Sophos, for example, was pretty adrift.
Then they got some new PE backers.
They're on the march again.
They seem to be doing really well.
Proofpoint got taken private by Thoma Bravo.
Things seem to be going really well for Proofpoint. So PE is not always a disaster, but there is this certain category of acquisition, not always PE, but often PE, where they acquire some software product that's deeply embedded in government and enterprise, and they just flog that old horse until it collapses in the street. And this is obviously leading to some pretty bad security outcomes. One piece of software like that at the moment is Ivanti. Adam and I were talking about this last week, and that
code base is something like 35 years old, right? So, you know, we haven't really seen a way out
of this, but the three of us were talking before we started recording, and you two are a little bit
bullish on the opportunities with AI to improve some of these legacy code bases. I want to get
your thoughts on this first, Alex, and then you, Chris, and then we're going to wrap it up.
Yeah, so this is a great example. I mean, there are different kinds of PE firms.
And I think Joe Levy is actually a great CEO.
And I think he has, he came up with a good deal for his company and he has made the best of it.
And look, I just quit being a public company CISO.
Being at a public company can totally suck, right?
Like, having to live up to quarter-by-quarter Wall Street expectations, you know,
I can totally see the benefits of PE in certain circumstances.
Yeah, I think, you know, Chris and I did some work for Ivanti and we've done some work for other companies where you're looking at
this old code base and you're trying to estimate what would it take to renovate this code base
and fix all the security flaws and you're coming up with these estimates a couple years ago
and you're like, oh, okay, great. So you would have to hire hundreds of software engineers
and they would have to spend hundreds of man years to fix these flaws. It is completely
impossible. And the truth is, now, in the AI coding era, that is not true anymore. It is possible, instead of fixing individual security flaws, to just rewrite significant parts of the code base, to refactor the code, to eliminate entire classes of vulnerabilities; that becomes a possibility. So I do think we are moving into an era where that becomes possible with AI coding, and in the next couple of years that is going to become a totally practical solution for some of these companies.
I mean, not like going from C to Rust yet, but going from, like, old C++ to C++ 21, moving to, like, the standard template library, you know, getting rid of your mallocs and moving to, like, a secure allocator. Those are the kinds of things that I think are actually possible.
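A small sketch of the kind of mechanical refactor Alex is describing, on a made-up buffer-copy routine rather than any real product code: manual malloc-and-length bookkeeping on one side, standard containers owning the memory on the other.

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <span>
#include <vector>

// Before: the legacy pattern. Manual allocation, hand-computed lengths, manual free.
// Any mistake in the arithmetic is a heap overflow; any early return is a leak, or
// worse, a later double-free / use-after-free.
uint8_t* copy_payload_legacy(const uint8_t* src, std::size_t len) {
    uint8_t* out = static_cast<uint8_t*>(std::malloc(len + 4));  // 4-byte header + payload
    std::memcpy(out + 4, src, len);
    return out;  // caller must remember the +4 offset and must remember to free()
}

// After: the mechanical modernization. std::vector owns the memory (no free, no leak,
// no double-free), std::span carries the length with the pointer, and bounds live in
// the container instead of in hand-written arithmetic.
std::vector<uint8_t> copy_payload_modern(std::span<const uint8_t> src) {
    std::vector<uint8_t> out(4);                    // 4-byte header, zero-initialized
    out.insert(out.end(), src.begin(), src.end());  // payload, bounds tracked by the container
    return out;
}
```

This is not full memory safety, but it removes the manual allocation and length bookkeeping that produce most use-after-free and overflow bugs, which is the class-elimination argument being made.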
Well, I mean, AI is getting so much better at that stuff so quickly. And, you know, on the AGI case, I'm a bear; on stuff like this, I'm a total bull. I think it's amazing, and I see what you mean. But now my question, and this is where we're going to end it.
But my question for you, Chris, is, you know, Alex has just described how you can turn a gargantuan, very complicated effort into something more manageable.
But I think one of the key problems here has always been, and is always going to be, incentives. Even if it's easier, are these companies going to be incentivized to actually bother doing this?
I think it depends.
I mean, you rattled off a couple of examples of companies that improved after PE takeover or acquisition.
But that's on the execution side.
There are at least two other elements they have to drive.
And that is the quality of the code.
That is separate from execution.
Execution is getting the product out there, the go-to-market and all that.
That stuff, entirely different animal.
And then the third piece is what the internal security program is.
The security program is always the afterthought, always the last thing considered in these sorts of deals.
So I think, in terms of what the incentives are and what I am seeing, talking to a number of private equity folks that we worked with at KSG and even with SentinelOne, but also just ones that I know, I think the differentiator here is whether the private equity team comes from a security background, like if these are former CEOs that sold companies, did a kick-ass job, and then set up their own fund. They take a really rigorous GRC approach to their portfolio. They do a really rigorous job on due diligence.
I've seen plenty of others that don't.
And there, Alex and I and our team had to come in after the fact
and build risk registries of the existing portfolio and the companies,
because they didn't want to get whatever-company'd.
They didn't want that to happen to them.
They didn't want to have to give up all that money when the stock craters.
So there are a bunch of different things going on here.
But again, I think it comes down to who's the owner, who turns the screws,
who has the influence over the company,
and do they effectively have a security mindset?
Yeah.
But that's the same everywhere.
No, I mean, I think what you're saying is the obvious answer, right?
Which is the ones who haven't been doing it because it's not been economical,
but they want to do it, we'll do it, and the ones who just don't think that way won't, right?
Like, that's just, that makes a lot of sense.
Yeah, it'll be interesting to see if you have, like, PE firms who actually specialize in this.
I could totally see firms saying, I'm going to buy this underperforming asset,
and I'm going to have a group that specializes in, we know how to renovate it:
we're going to fire half the engineers and replace them with AI,
and we're going to rebuild this code and maintain it.
And I think that actually might become an interesting little specialty.
Yeah, I'm not sure about that.
I don't know if the business model works.
I don't know whether you can squeeze more blood out of that stone,
but I think it would be interesting to see the write-up for sure.
All right, we're going to wrap it up there.
Chris Krebs, Alex Stamos.
Thank you for joining me for another episode of the wide world of cyber.
Always great to chat to both of you.
Cheers.
Thanks, Patrick.
