Risky Business #811 -- F5 is the tip of the crap software iceberg
Episode Date: October 22, 2025

In this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news, including:

China has been rummaging in F5's networks for a couple of years
Meanwhile China tries to deflect by accusing the NSA of hacking its national timing system
Salesforce hackers use their stolen data trove to dox NSA, ICE employees
Crypto stealing, proxy-deploying, blockchain-C2-ing VS Code worm charms us with its chutzpah
Adam gets humbled by new Linux-capabilities backdoor trick
Microsoft ignores its own guidance on avoiding BinaryFormatter, gets WSUS owned

This episode is sponsored by Push Security. Co-founder and Chief Product Officer Jacques Louw joins to talk through how Push traced a LinkedIn phishing campaign targeting CEOs, and the new logging capabilities that proved critical to understanding it.

This episode is also available on Youtube.

Show notes

Why the F5 Hack Created an 'Imminent Threat' for Thousands of Networks | WIRED
Breach at US-based cybersecurity provider F5 blamed on China, sources say | Reuters
Network security devices endanger orgs with '90s era flaws | CSO Online
China claims it caught US attempting cyberattack on national time center | The Record from Recorded Future News
Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials
Hackers Say They Have Personal Data of Thousands of NSA and Other Government Officials
ICE amps up its surveillance powers, targeting immigrants and antifa - The Washington Post
John Bolton Indictment Provides Interesting Details About Hack of His AOL Account and Extortion Attempt
US court orders spyware company NSO to stop targeting WhatsApp, reduces damages | Reuters
Apple alerts exploit developer that his iPhone was targeted with government spyware | TechCrunch
A New Attack Lets Hackers Steal 2-Factor Authentication Codes From Android Phones | WIRED
GlassWorm: First Self-Propagating Worm Using Invisible Code Hits OpenVSX Marketplace | Koi Blog
European police bust network selling thousands of phone numbers to scammers | The Record from Recorded Future News
Stephan Berger on X: "We recently took over an APT investigation from another forensic company. While reviewing analysis reports from the other company, we discovered that the attackers had been active in the network for months and had deployed multiple backdoors. One way they could regain root" / X
Linux Capabilities Revisited | dfir.ch
CVE-2025-59287 WSUS Remote Code Execution | HawkTrace
TARmageddon (CVE-2025-62518): RCE Vulnerability Highlights the Challenges of Open Source Abandonware | Edera Blog
Browser threat detection & response | Push Security
How Push stopped a high risk LinkedIn spear-phishing attack
Transcript
Hey everyone and welcome to the risky business podcast. My name's Patrick Gray.
This week's show is brought to you by Push Security. Big thanks to them for that. And a little
bit later on, we'll be hearing from Push Security's very own Jacques Louw about a LinkedIn-based phishing campaign that they were able to detect and disrupt. Very interesting campaign targeted at chief executives in a particular vertical, and yeah, Push has built some new bells and whistles that have been really useful in helping people walk back those campaigns and discover where phishing links have come from and whatnot. Jacques will be along a little bit later on to talk through all of that. But before all of that, of course, it is time for a check of the week's security news with Adam Boileau. And Adam, of course, the F5 hack is the talk of the town.
We've already published a podcast on that.
I spoke with Alex Stamos and Chris Krebs yesterday in a Wide World of Cyber edition of that. But let's just, for the listeners who have not heard that, let's just recap the guts of the story here. It looks like the Chinese were rattling around inside F5's, like, you know, dev network. Well, their network, their whole network, for a couple of years.
Yeah, it's not a great look for a security vendor, that is for sure.
The reporting seems to be that they spotted this incident, I think around August this year, filed some notices with the SEC as they're required to, and since then it's become clear that the attackers have probably been in there a good couple of years now. Some of the stories we've seen say that it's probably China. I don't know that we've seen, like, a concrete attribution there, but, I mean, all signs point that way at this point, right?
Yeah, yeah. And it seems like
initial entry was probably through vulns in their own products, which, you know, I guess they eat their own dog food, so that's nice.
But, yeah, attackers got in there, have been rummaging around with source code access
and build environment access for F5 Big IP, which, of course, is one of their key products
that people use on the edge of their network.
And that's the thing that I think has got a bunch of people pretty concerned, because, you know,
devices that are on the network perimeter that implement security controls, that is a place
that attackers love to go.
And being in the supply chain for that, even just being able to see the source code... there's some reports that the attackers took vulnerability reports and stuff.
It looks like, yeah, they were all over the internal bug tracker, right? Which is why, as soon as this has all come to light, F5's dropped, like, 44 patches or something.
It was interesting, too. Like, one thought that I had, though, is, like, I mean, surely there's not that many BIG-IP management interfaces on the internet, right? So how can this be a big deal? And then you start reading the advisories, and it's all like, unspecified traffic crossing the device could cause this service to terminate, and you're like, oh, okay, right, so that's a DoS condition. But they all read like the sort of things where, if you put in the research, you might actually get them to be exploitable just by getting packets to transit the device. So I get why people are freaked out.
Yeah, yeah, absolutely. Of course, once you're in a position... like, if you compromise an edge device like this, like, they are by design break-and-inspect SSL, like, they're front-end proxies that are terminating TLS, they've got credentials and access for onwards movement, they're on the edge of the network without EDR, without other controls. Like, it's just the perfect place to be.
And stealing the source, looking for new bugs, stealing vulnerability information, and maybe turning DoSes into actual weaponizable things. Like, that's just smart hacking. And, you know, you kind of got to hand it to them, I think. Like, that's just, like, good work, China. That's, I mean, I'm jelly.
That would be a great thing to have.
One thing I found interesting, though,
is they did have access to the part of the network
where the patches were sort of signed and distributed, and it doesn't look like they pulled the trigger on that. Whether or not, you know... I mean, we sort of speculated on that in yesterday's podcast with Chris and Alex. You know, maybe they were going to pull the trigger on that later and they just hadn't got around to it, or maybe they were worried they were going to get caught, or, you know, I don't know.
But, um, yeah, it seems like the sort of thing you would save for a special occasion. Like, that's not just a let's-do-that-on-Tuesday thing, that's, like, you know, preparing the battle space kind of thing rather than a, you know, let's-just-pull-the-trigger-because-we-can.
Yeah. Now, a big part of yesterday's conversation was really about what to do about these mid-tier vendors that, you know, aren't really taking appropriate care of their products. I mean, I've got some sympathy for F5, to be honest, because they make a fairly unique set of products, right? Some less unique than others, like there's plenty of WAFs and stuff out there, but try finding a solid replacement for, like, a BIG-IP load balancer. You know, like, they are just the industry standard, and people are buying them, so
where's the motivation to actually fix the product, right?
When you're kind of unique in the market.
And along those lines, like CSO magazine has done this great feature.
The journalist's name is Lucian Constantine.
And he's covering sort of like stuff we've spoken about a fair bit over the last year or two,
which is, yeah, these edge devices are just crap and no one really seems motivated to work for them,
to improve them.
You know, yesterday, Alex and Chris made a good point, though, which is that at least
now with some of these AI tools emerging, these mid-tier vendors that want to improve
their code quality, they can actually do it now. It's becoming much more economically viable
to do that. But I have a feeling that there's plenty of these, you know, private equity
controlled companies where they won't even get the budget to do that, right? So, but walk us through
this feature by Lucian here because I think it's very interesting. Yeah, it's a really great
piece. He's kind of written up this kind of whole scene where there's, you know, security critical
devices that ultimately, you know, have been getting people owned. And he's pulled together
interviews with the boss of watchTowr, the guy from VulnCheck, some other people from, like, Veracode, I think Chris Wysopal. So, like, he's pulled together a bunch of really smart people to talk to, the guy from Bishop Fox even, and, like, it's just a really great survey. He's got, like, a list of,
you know, here's a whole bunch of products that are in this category, the bugs they've had,
the kind of things that have gone wrong with them. And it's just, you know, it's a really good, you know,
sort of state of the world as far as network edge devices go.
Yeah, just solid work there.
Now, I probably massacred your name, by the way, Lucian, or is it Lucien?
And is it Constantine or Constantin?
I have no idea.
I'm very sorry.
I'm just going to put that out there.
Now, look, one of the reasons, you know, look, there's some reporting from Reuters, too,
just going back to the F5 thing for a second.
There's reporting from Reuters that people familiar with the investigation are pointing
the finger at China, which would suggest to me that that attribution is more than likely correct.
But another thing that points to China actually having been behind all of this is the fact that they are now accusing NSA of doing a whole bunch of stuff, which is like, you know, every time they get accused of hacking something, like they usually put out a release saying, you know, the Americans are the worst cyber terrorists in the world, they suck, blah, blah, blah, blah, blah.
A little bit more often, though, over the last few years, they'll attach some claims of, you know, American hacking against their interests.
This piece is from the record
written by Alexander Martin.
Very funny that up the top
there's an editor's note
which says this article was updated
at 2:45 PM EST with a comment from an NSA official, and the comment is just a Glomar, which says we don't confirm or deny, which is like, okay.
Really glad you got that detail in there.
Even getting a glomar though out of NSA
is an achievement.
But yeah, like what's the go here?
China's accusing America
of going after some central timekeeping
authority that has pretty significant military applications.
So, like, this is the other funny thing is when China tends to whinge about the Americans
hacking them, it's always like universities that are doing like missile research and,
you know, military timekeeping departments.
Is that what we're looking at here again?
Yeah, that seems to be.
It's their National Time Service Center, which Alexander Martin kind of compares to the
U.S. Naval Observatory in terms of functions, or it provides, you know, kind of ground-based
timing information or timing services for a whole bunch of users, including the military.
And yet, it totally seems like a legitimate target.
China's national CERT has put out a release, and we are really reading the machine
translated version, because it's in Chinese, obviously, and it has a bunch of details
about a particular intrusion, which they say started with mobile phones of people who
worked at this particular organization, and then escalated into, you know, kind of compromise
and onwards from there.
They kind of attribute it to NSA based on a bunch of, like, this looks similar to things we'd seen in some of the previously leaked NSA tools. They draw a bunch of parallels with DanderSpritz, which was an NSA tool that we saw, I think, in the Snowden docs, or maybe the Shadow Brokers dump. I can't remember which one it was.
Well, you notice also that every time China's trying to dox an NSA operation, they're linking it to, like, decade-old tools, right? Which is like, how old is this operation? Well, I think in this case they are saying that there's a bunch of similarities, in that, like, the tool that they are seeing looks like a modernized replacement for DanderSpritz.
Okay, okay.
They draw some parallels, which are honestly a little bit flimsy.
I mean, some of it's like, you know, this just sort of generic good hacking tradecraft as opposed to specific, like, this was compiled on the same workstation by Dave from, you know, Maryland or whatever.
So, yeah, a little bit, you know how China is with their attributions.
it can be a bit flowery sometimes.
A little bit vibes-based, yeah.
Yes, yeah, exactly.
But yeah, it's very much like, you know,
those capitalist imperialist pigs are all up in our timing systems
and, you know, how dare they, etc., etc.
Yeah, and how dare they accuse us of like...
Anyway, kudos too to Alexander Martin for this paragraph in the story.
Although the MSS described the US
as repeatedly trampling on international norms governing cyberspace,
the activity it described,
the specific targeting of a service with military applications,
would not necessarily be considered
a breach of non-binding cyber norms agreed to at the United Nations.
But I mean, look, we're not,
we are not their audience for this sort of stuff, right?
The audience is the wider public
and indeed policy makers
who are not subject matter experts
and don't necessarily have a good grasp on norms.
And, you know, if they can influence people to say,
well, you know, they're just doing to us
what we do to them, which is the, you know,
what you really want out of an information campaign like this,
then they have succeeded.
Okay, now let's take a look at a couple of stories out of 404 Media written by Joe Cox. These Scattered Lapsus$ Hunters, Com-adjacent kids who've been raiding Salesforce for data, they've actually managed to dox hundreds of DHS, ICE, FBI and DOJ officials using the information they've obtained from Salesforce, but they say they have the data on thousands more. This would seem to be
somewhat of a provocation, Adam, and I don't think this, you know, given most of these kids
tend to be based either in the United States or like European countries where they will
happily extradite these kids to the United States. This just strikes me as suicidal. Like,
what are they thinking? Yeah, it's definitely not a well-thought-through plan. And, you know, the Com is a bunch of kids doing dumb stuff. And I think, you know, you're sitting there on a trove of, what do they say, like, billions of records they've looted out of various Salesforce users, you know, mining that for whatever topic of the day and then, you know, starting to dump some data around trying to make some, you know... In the end, like, these kids want
publicity more than anything else. So, I assume probably also they want money. But yeah, like
making a big mess using the data you've got without really thinking through the consequences
seems pretty on brand. But I do feel like, you know, winding up the NSA in particular is probably
not a smart move and all the other various agencies.
NSA's not really in much of a position to do anything about this.
I mean, FBI certainly are,
but you're also looking at a climate in the United States,
going after ICE when this current admin is in power?
Like, that's how you get declared a terrorist in America right now, right?
Yeah, yeah, and they are not going to be particularly shy.
I mean, I suppose the only saving grace is the US government's kind of half shut down at the moment, so that might slow down the response a little bit.
But I don't think the shutdown affects this sort of stuff, man. Like, you know, that's just... yeah. Anyway, it just seems, you know, it's very much like, you know, kicking the hornet's nest for really no good reason.
Yeah. So, yeah, I don't know what they're thinking, and I guess the answer is not much. Now, look, speaking of ICE, I did just want to quickly mention this piece from the Washington Post, which has taken a look at the surveillance technology being procured by the United States Immigration and Customs Enforcement agency.
It's a part of DHS.
They are just going on a spending spree at the moment.
Of course, they've had $170 billion allocated to them through the big beautiful bill.
Now, the reason I wanted to mention this is I've had quite a few people over the last six months ask me if I think NSA is going to go rogue and present some sort of huge problem, you know,
that they're going to step out of the bounds of their authorized activity and start surveilling their own citizens,
perhaps, you know, surveilling people outside of America just for political purposes.
And honestly, I don't see it.
But if you want to know where the problem's going to be, it's going to be ICE, right?
Because at this point, ICE is sort of being empowered to investigate people who oppose ICE.
I think the funniest tweet on all of this I saw was like, oh, well, I don't have to worry about this anti-antifa stuff, says person who doesn't realize they're antifa.
This is sort of how it's going.
I've dropped a link into this week's show notes, but I feel like, you know, some of the numbers here... oh, they've spent 3.4 million on Clearview AI. Man, tip of the iceberg. They've got the money. They're going to spend it. They're going to ramp up their domestic enforcement agencies. Even these kids who are doing the doxing, like, I can imagine ICE going after them for doxing their personnel, because they are being empowered to protect themselves as an organization. So that's just my two cents on that. I don't think Americans need to be worried about NSA quite yet, because abuses are going to follow the path of least resistance, and the path of least resistance runs right through DHS and into ICE. So, you know... yeah.
Now, that seems like a fair take to me, man. Now, look, just speaking of another... look, this is a really interesting story. Kim Zetter's done a great write-up on this for her Zero Day blog, where she's taken a look at the indictment against John Bolton. And it really
looks like, so he was Trump's national security advisor in his first term, it really looks
like there's a case here. So we've seen a bunch of investigations and, you know, that look
like retribution launched by this White House. But the one, this one against John Bolton,
I mean, sure, it might be politically motivated, but you walk through this report, Adam, and it
certainly looks like John Bolton was acting extremely foolishly with information he should have known
was classified. Yeah, this is a great example of, like, it can be both retribution and also, like, legitimate, like, this guy probably did some bad stuff here. The story has a bunch of details about him sharing classified information with his family members in, like, a group chat, and a bunch of work that he was doing to prepare for writing
a memoir once he was out of the Trump administration and I guess he probably never particularly
thought Trump was going to come back around again.
Every day he was writing 15, 25 pages of notes. He was transcribing the notes and
then using his personal email address to email them to his wife and daughter every day.
Now, these were not like marked classified documents, but obviously when you are making notes
about classified meetings and whatever, like obviously a lot of that's going to be going to be
covered. So he emailed all of that from his personal email address to his wife and daughter.
Now, indeed, when he wrote his book, he submitted the book to the National Security Council to
review it. They did identify a whole bunch of classified information in his book, which he
removed, but then he published the book without getting sort of final clearance, which was
also a no-no. But it looks like the information that was deemed to be classified was stuff that
he had emailed his wife and daughter from his personal email account, which is the stuff
that it looks like Iranian hackers got their hands on. And they were trying to blackmail him
with this stuff. So all in all, not a great look for John Bolton. No, no, really not. And there's been lots of mishandling-of-classified-information cases, but this one does seem a little worse than average. You know, I don't know if it's, you know, Mar-a-Lago-bathroom kind of standard, but it's certainly not good.
Yeah, I mean, it's hard to feel sorry for John Bolton, though, who those of us who are old enough would remember as one of the architects of the second Gulf War. So, you know, sucks to be you, the walrus. Moving on, and a US court has ordered NSO Group to no longer target
WhatsApp, which is pretty
funny, isn't it? It's like, you know, okay,
spyware company, you can no longer target
WhatsApp, here's an injunction. Like,
what? Yeah, it's pretty
funny. The, uh...
NSO Group was challenging the result of its court case against Meta, uh, where, you know, Meta got a whole bunch of damages awarded to it from NSO. NSO talked that down from 167 million down to four, so that's a good result. But then, yeah, this injunction,
where NSO said like,
But you can't tell us not to target WhatsApp because what else will we do?
That's what we do.
We are a spyware company.
We can't not hack meta accounts or WhatsApp accounts or whatever else.
Like that's, you know, what else are we going to sell our customers?
So, yeah, a little bit of QQ more there from NSO Group, because, yeah, it's just a funny outcome, you know.
And if they do, what's the remedy?
Like is meta going to sue them a second time if they violate this, you know, this particular thing?
Well, I don't know. I mean, I think the point is, with the news we spoke about, what, last week, you know, things have just not really been going NSO Group's way, which is, you know... And look, staying with sort of spyware-related news, Lorenzo over at TechCrunch has a story of intrigue, Adam, where he has chronicled the story of this guy. I think they've given him a fake name for the story, but they've given him the name Jay Gibson. He was apparently fired from L3Harris Trenchant, which was a big part of, like, Azimuth and, what was it, Linchpin Labs as well; they got bought into Trenchant and whatever. So, you know, they were doing, um, you know, exploit development for, like, iOS and whatnot.
So this story talks about this guy who got fired by Trenchant because they thought he was leaking exploits, um, and then he gets a message from Apple saying that his phone was being targeted by sophisticated state-backed adversaries and whatnot. And he's making the case now, through Lorenzo, that he was improperly fired, because clearly it was the hackers what done it and got access to those exploits. So just an interesting little peeling back of the curtain into, um, I don't know, you know, the sort of problems you have when you work for these companies.
Yeah, I mean, yeah, it's an interesting insight, I guess, and Lorenzo's talked to a few other people that, kind of, you know, worked at, or maybe still work, I'm not quite sure, at Trenchant, um, and have kind of corroborated some aspects of the story. But it's just an interesting insight, and
I guess a reminder that, you know, if you're someone who does have to work in this field,
like, it can just get a little bit weird because you end up mixed up in all sorts of, you know,
like life can get complicated, being a, you know, being an exploit dev for the military for the government.
It's not all, you know, money and, and Ferraris.
Well, I mean, you know, this is the second time today we've spoken about attackers going after bugs, right?
Because this is something that has happened as long as bugs have existed is people would, you know,
go to the source. And, you know, we heard about people knocking over F5's bug tracker; this is a similar sort of thing. You want some sweet iOS exploits, you get them from the people who create sweet iOS exploits. Yeah, yeah, absolutely. Yeah, it's a good methodology.
Now, look, we're going to talk about this Dan Goodin piece that originally appeared in Ars Technica about a technique used to steal MFA codes, like numerical codes, out of Android devices. This is academic research. I can't imagine it's terribly practical in the wild, but it's still cool, and you wanted to talk about this one.
Yeah, I just really liked it because it's a great example of, you know, turning academic research into something that's, like, legitimately interesting, even if not necessarily practical in the real world. I just, yeah, it kind of tickled my fancy. So this is research that
uses GPU side channels to kind of extract data about what's being rendered to the screen.
And this is implemented on Android, where you get a malicious app onto an Android device, and then it can, through the, like, Android intent system, etc., open other apps. Now, you can't see the contents of those other apps, but what this technique does is, like, if you open the two-factor code generating app, like your Google Authenticator or whatever, then the original app tries to overlay something on the screen and times how long that takes to go through the GPU rendering pipeline, and by kind of crafting that, you know, you can end up extracting pixel data essentially out of that timing side channel, and from that recover the contents of, you know, a two-factor auth code or something. And that's kind of a cool application of a timing side channel. And I just, like, that combined with the way, like, Android works, I thought was pretty cool research. And it's certainly not the first GPU rendering side channel that we've seen, but it's just, I thought it was good, you know, kind of lateral thinking to apply that to stealing 2FA codes or whatever else.
Yeah, I just don't understand the workflow of how this is supposed to work from the attacker's side, which is, what, you've got a user who has their phone unlocked, screen on, and then you're trying to pop their authenticator app and then scrape the screen? Users are maybe going to notice that. And, like, the malicious app that you've got them to install, can that actually run in the background on Android? It can on iOS, right? So I'm just sort of thinking...
Yeah, I think in this case, like, those concerns, I think you can get around. Like, you'll be able to, you know, maybe if the thing you're overlaying on top of it really completely hides it, maybe. And, you know, you can open other apps through intents in ways that, you know, don't require permissions on Android. So, like, it's just, yeah...
I thought it was an interesting one. Like, normally you would look at this side channel and go, hey, that's interesting, but what uses it? And kind of chaining all of the other bits and pieces together to turn this into something that is useful... And the history of this particular graphical rendering side channel was that originally they tried to use it in a browser context to steal stuff between browser tabs, and then the amount of isolation and complexity there made that difficult, and so they reapplied it to, where else can we use this primitive? And it turns out Android as an OS actually is a great place to do that. So, yeah, that was my thinking.
Now, I think this is one of the most awesome stories of the week, which is this VS Code worm. It's a VS Code extension available on the OpenVSX marketplace, it's a worm, and, man, it's knocked over tens of thousands of boxes. It's designed to steal crypto, but it is brutal, man. Like, you're reading this just going, oof. Yeah, yeah,
this is good stuff. So this worm, um, showed up in one of the, um, VS Code extension marketplaces, both the official one, I think, and the OpenVSX one, and when you have an extension that is infected, they get auto-updated, so your VS Code automatically picks it up and it's deployed onto you.
And then it uses the Solana blockchain for C2. So it reaches out to the blockchain, pulls in a C2 URL. So that's already interesting. It has backup C2 via Google Calendar. And then the next stage brings in a cryptocurrency stealer. So it rummages around, supports something like 49 different cryptocurrency wallets. So it will steal crypto. It will also scrape for credentials, for GitHub, for the VSX extension marketplaces, and then use that to propagate itself. So that's kind of the worm part of it. And then it also drops, like, a SOCKS proxy and a WebRTC peer-to-peer-based, you know, kind of comms mechanism for access to those proxies. It drops a VNC server so that the attackers can use it to interact with your desktop or graphical applications. Like, this is pretty polished, honestly.
Yeah.
And whoever did this, this is not their first rodeo. Like, they are, you know, they've definitely worked on this pretty hard. And, yeah, like, it's just going to be hard to shut down because of the way that the C2 works and the self-propagating stuff. Like, it's out there live right now, like, people are getting infected as we are speaking. Which is pretty cool work, honestly.
I mean, that was my reaction as well. As I'm reading this, I'm going, my God, like, this is brutal, but sort of impressive. And it feels old school in a bunch of ways, right? Like, this doesn't feel like Com kids, this feels like someone of our vintage.
I mean, honestly, it does. Like, I mean, and far be it from us to say that our way of, you know, old school hacking like we did it, you know, is the only way to do good hacking, but it does have that kind of, like, someone really thought about this and they did a good job.
Good job. There was some engineering in this one. Uh, basically, um, just quickly, we've got one cool thing: the code it injects into the things that it uses to propagate, it uses invisible Unicode characters, so when you look at it in a text editor, you can't see it. Like, just chef's kiss.
It is, yeah, chef's kiss, 100%. What was the GIF that I dropped as my reaction into Slack today? It's the American Psycho... ooh. Yes, exactly, exactly. Very, very chef's kiss.
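If you want a feel for why that trick works, here's a tiny Python sketch of the general idea. This is my own illustration using zero-width characters, not necessarily the exact encoding the GlassWorm write-up describes, but it shows how two strings can render identically in an editor while one of them quietly carries extra content:

```python
# Two strings that render identically in most editors and terminals,
# but one has zero-width Unicode characters smuggled into it.
clean = "require('http')"
hidden = "require('http')" + "\u200b\u200c\u200d"  # zero-width space / non-joiner / joiner

print(clean)                            # looks normal
print(hidden)                           # looks exactly the same on screen
print(clean == hidden)                  # False: they are different strings
print(len(clean), len(hidden))          # lengths differ: 15 vs 18
print(hidden.encode("unicode_escape"))  # the hidden code points become visible
```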
Just quickly, I wanted to mention we're going to link through to Daryna Antoniuk's version of this story on The Record, but police in Europe, I think it was Latvia, wasn't it, they knocked over another one of these
SIM farms. So they got something like 40,000 active SIM cards, 1200 SIM box devices.
The reason I think this is interesting is, of course, we saw a few weeks ago a similar
operation getting taken down in New York where there were rumors that Chinese intelligence
services had been using those SIM farms for various things, although they were, you know,
predominantly like fraud, you know, was fraud infrastructure, but also being used by foreign state
actors. So interesting to see a similar operation getting rolled up very quickly after that.
So I just sort of wonder if that's part of the same thing.
Could just be a coincidence.
I've got no idea.
I got no insider information there, but just thought it was worth flagging.
Now we're going to talk about an interesting technique that's been observed in the wild, Adam, some good old-fashioned Unix persistence techniques that popped up in an incident response investigation, and the guy has, like, posted about it on X, and there's a blog post and whatnot.
Yeah, so this is an attacker in the wild using a Unix trick for, like, long-term backdooring. So once you've gained privileged access to a system, you know, at some point in the future you might want to come back and regain privileged access. You know, one of the old-school ways you might do this is you might make a setuid root binary, a binary that whenever it's executed runs as the root user, not as the user who is executing it, so it provides a way for privilege escalation. This particular attacker is using a trick where you can use Linux kernel capabilities to do the same thing.
So you can essentially mark a binary file to run with elevated privilege, in a way that most people did not realize you could do, and wouldn't necessarily know to look for when you are hunting.
And in particular, I feel like I know quite a bit
about Unix local privilege escalation
and ways to escalate privilege in Unix boxes.
I didn't know this was a thing.
My mental model of how Linux kernel capabilities
worked and interacted with file system perms
was wrong.
and this post showed me that I was wrong
and I feel like that doesn't happen super often
and so in that respect like it was just, it's super cool
I tested it out this morning, and it legitimately works like the guy describes: you set the CAP_SETUID capability on a file, and, yeah, in the future you come back and you gain root, which is just, yeah, super cool. And one more thing for people doing incident response on Unix boxes to hunt for.
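For anyone who wants to add this to their hunting checklist, here's a rough sketch of what that hunt might look like. It's my own illustration rather than anything from the dfir.ch post: file capabilities are stored in the security.capability extended attribute, so on Linux you can sweep the filesystem for any file that has one set and review the hits against a known-good baseline, the same way you'd review setuid binaries. getcap -r / does the actual decoding; this just shows the shape of the sweep.

```python
import os


def find_files_with_capabilities(root="/"):
    """Walk the filesystem and flag regular files carrying a security.capability xattr,
    which is where file capabilities such as cap_setuid are stored."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip pseudo-filesystems that will never hold real binaries.
        dirnames[:] = [d for d in dirnames
                       if os.path.join(dirpath, d) not in ("/proc", "/sys", "/dev")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                raw = os.getxattr(path, "security.capability")
            except OSError:
                continue  # no capability xattr (or unreadable): not interesting
            # Print the raw attribute; decode the details with `getcap <path>`.
            print(f"{path}: security.capability = {raw.hex()}")


if __name__ == "__main__":
    find_files_with_capabilities("/")
```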
Oh, now, bug of the week. This one, again... oof.
There's a remote code execution in WSUS, which is, what is it, Windows Server Update Service? What do they call it, Windows Software Update Service? Is that right?
Windows Server Update Services, yeah. Server update.
Anyway, WSUS, right, which is how you, you know, if you want to control patch rollouts via Windows into your internal organization, like, that's how you do it.
Yeah, this is like a full straight up remote RCE due to a deserialization bug, which is a very big deal.
I think the other thing that's a big deal about this is, since COVID, a lot of people started working from home, and even though the best advice is you shouldn't expose your WSUS server to the internet, if you go and look through forums and Q&As and whatever, people are like, how do I update my home users? And they're like, man, just bung her on a fully qualified domain name. She'll be right. So I asked you to go to Censys and see how many WSUS boxes there might be out there on the internet. What did you find? Like 7,000 of them?
Yeah, 7,000-ish, you know, based on certificate names that look like WSUS. So, yeah, like, certainly in the thousands, which, I mean, I'm not that surprised. I've seen, you know, a lot of people do put this stuff on the outside for the reasons that you describe, like managing distributed systems and work.
But hang on, hang on, could some of those certificates have just come from transparency logs, and it's a fully qualified domain name but it's actually firewalled off, it's, like, in a DMZ somewhere?
No, this is from Censys actually connecting out and collecting the certificates, so this is not from CT logs.
Yeah, okay, okay. Anyway, walk us through the bug.
This is a clanger. Holy crap.
So this was a CVE that Microsoft disclosed this month. And it's, as you described, straight-up remote code exec via HTTP into WSUS. But the thing that's particularly beautiful about this is that it's deserialization through .NET BinaryFormatter, which, if you can put untrusted data in front of it, you get code exec. Microsoft itself says there is no way to make BinaryFormatter safe, do not use this ever, you can't fix it. And it's been marked obsolete in .NET since 2020. And, like, they actually fully ripped it out. Like, it's not even there anymore in the most recent releases of .NET. So this is very much Microsoft not following their own advice and getting everybody owned as a result.
Now at least it's been patched, so that's good.
And hopefully people who have WSUS are the sort of people who patch regularly.
So the idea that you would have a WSUS that doesn't then itself auto-patch, I guess, is not particularly likely. But, yeah, it's just the irony of Microsoft getting owned by BinaryFormatter in its own stuff.
Yeah, it's just rude. So rude, Microsoft. Yeah, so rude.
I mean, but this is, like, an old-style bug, right? Like, this is the sort of advisory you expected to see in 2005, not 2025.
I mean, kind of, yeah, pretty much. Well, I mean, deserialization wasn't really a bug class in 2005, but I just mean from a perspective of, like, crufty Microsoft code in a core product, like, you know.
You don't necessarily expect this from Microsoft. The only saving grace here, I guess, is that... and the reason perhaps it hasn't been seen in the past is that the data that gets processed by BinaryFormatter is encrypted using a hard-coded crypto key. And so somebody had to go figure this out and extract the key, which, of course, has now happened. And I think the end result bug is, like, CVSS 9.8 or something, which, in an internet-facing WSUS... oh boy, good times.
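For anyone who hasn't bumped into unsafe deserialization before, here's a minimal Python sketch of the same failure mode. WSUS's bug is in .NET's BinaryFormatter rather than Python's pickle, so this is an analogue rather than the actual vulnerable code path, but the pattern is identical: the serialized bytes get to decide what runs when they're deserialized, so feeding attacker-controlled data to the deserializer is game over.

```python
import os
import pickle


class Payload:
    # pickle asks objects how to reconstruct themselves via __reduce__;
    # an attacker can answer "by calling os.system with my command".
    def __reduce__(self):
        return (os.system, ("id",))


# What an attacker would send over the wire.
untrusted_bytes = pickle.dumps(Payload())

# The vulnerable pattern: deserializing bytes you don't control.
# This runs `id` -- arbitrary code execution, the same class of problem as
# handing untrusted data to BinaryFormatter's Deserialize() in .NET.
pickle.loads(untrusted_bytes)
```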
Yeah, now we're going to wrap it up this week, Adam, with a discussion of another CVE. It's been given a name, and I'll allow it, because it's a cool name: TARmageddon. So we've got a tar bug. You love tar bugs because you are an old beardy Unix guy; walk us through TARmageddon.
Yeah, so this is a bug in an implementation of tar in Rust, the quite popular async-tar Rust library, and it's not the world's worst bug. This is kind of a thing where you can craft a tar file that, when it's untarred by this particular Rust implementation of tar, gives you different results than if it's untarred by some other tar process, which is a problem in environments where the tars are initially, you know, checked by some other process and then later processed by the Rust one. And, you know, it's not straight-up code exec like previous tar bugs that we've seen, but it is just kind of interesting.
But it is just kind of interesting.
And the complexity here is also that the Rust ecosystem,
like Rust is very popular because of its good security properties, right?
You're not going to get memory corruption bugs.
There's a whole bunch of things that make Rust ecosystems more resilient.
But the kind of open-source lineage of this particular tar fork made patching this very, very difficult, because there's a number of downstream forks, and the middle fork, between the original upstream code and the downstream ones that people actually use, is abandoned. So getting it patched was quite complicated. So it's a kind of cool bug, and it is used in, for example, uv, the Python package manager thing, which is very popular in modern environments. So, yeah, it's just an interesting bug in a place that's not great, and in an ecosystem that, whilst generally security-robust, you know, Rust is generally a good thing, this is a great example of where that ecosystem is not delivering a great quality outcome for everybody. Plus, I just love tar bugs.
You do.
Now, we actually have a moment just before we roll onto the sponsor interview.
I just wanted to get your thoughts on a couple of other things in the news.
AWS Outage, oh my God.
We pushed that podcast yesterday.
It does not have a video.
No video on YouTube because Riverside, which is what we use to record, you know, video-based interviews,
was down completely from AWS.
So much stuff was down, including security tools, which, you know, it's just fascinating, isn't it, that you lose US East 1 and everything just stops.
Yeah, like it really is the center of the world for cloud
because so much stuff is dependent on U.S. East 1 eventually,
even if it's through 47 intermediate dependencies.
Like, we were talking in Slack whilst that outage was going on; I couldn't paste a GIF in Slack while we were having a conversation.
Yeah, I tried to paste a screenshot of, like, AWS's support account's most recent tweet being two weeks old or whatever. I took a screencap, tried to post it, couldn't. Yeah, yeah, yeah. It's just funny how much stuff's dead in the water because of US East 1. So, yeah, that was a hell of a thing.
It was a hell of a time watching that status post update
with more and more Amazon services falling over
as they tried to figure it out.
And in the end, it was all DNS?
Oh, was it?
I think in the end they sort of worked out
that there might have been a bit more to it than that, actually.
I think, like, it started with DNS, and then they ended up where they couldn't launch EC2 instances fast enough or something. Yeah, yeah, yeah. Fallout from DNS.
So I guess the main lesson there is maybe have a look at your failovers people and, you know, don't take it for granted that U.S. East 1 is going to be completely up all the time.
I did appreciate Elizabeth Warren, the U.S. politician, saying, oh, this is proof that we need to break these companies up, because the internet shouldn't fail because of one company. It's like, oh my God, if only you knew how bad it is and how many individual points of failure there are.
Also hilarious that it took down a bunch of the crypto world, considering their whole thing about decentralization. It's like, yeah. Decentralization, it seems, involves a lot of centralization on US East 1.
The other thing I just wanted to quickly get your thoughts on, because we've got time, is ChatGPT, OpenAI. They've released a browser today, the so-called Atlas browser. You know, I think agentic browsers are going to be a big thing, but I don't think they're nearly ready for prime time from a security perspective. I think they're going to turn into a problem for enterprises out there when their users are going to be bringing their own browsers in, logging into accounts with them and then getting prompt injected and company data spilling that way.
I mean, I know I'm a pretty black-pilled sort of person, but was this your reaction as well when you heard about this?
Yeah, I mean, we've been talking about how the browser is the OS these days, and now we're saying we're going to have an OpenAI OS. Like, that sounds like a terrible idea. If you said, let's replace macOS with some OpenAI operating system for Apple hardware, you would be like, hell to the no, but that's kind of what we're talking about if they're going to make a browser as well. Like, all of the complexity of a browser
plus all of the complexity of AI
plus how fast
modern AI stuff is moving
like it just does not make me feel good
and as you say like end users
being involved in this makes it worse
and like all of the other
AI enabled browsers we've seen
have had challenging times with prompt injection
and you know just gluing a browser to an LLM
just gives me the willies and I'm not
I'm not happy about it
Yeah, and the whole you-can't-actually-separate-code-and-data issue with LLMs.
You know, yeah, let's stick that in a browser.
That little thing, yeah.
Let's just whack her in a browser.
She'll be right.
That's fine.
All right, let's get on to this week's sponsor interview now, Adam.
And I know you actually listened to this one as well, because I asked you to QA it yesterday.
But it's with Jacques Louw, who is one of the team over at Push Security. Push Security, of course, is mostly a browser plug-in-based product, which will prevent your users from being able to be phished. This is the better way to deal with phishing than just an email gateway
these days, because phishing messages can come in through Teams, they can come in through LinkedIn, they can arrive in texts for all you know, they can arrive in an envelope printed on paper, right? Your email security product ain't gonna be able to do anything about that. So the nice thing about Push is it sees the final payload that the user sees. And in that vein, they actually managed to unpack a LinkedIn phishing campaign which was targeted towards CEOs in a specific vertical. So that's interesting, but the even more interesting bit is the way that they are now tracking... and they started doing this for internal use, and it's something they're now making available for their customers... they actually track the user's whole journey to a phishing page, right? So you can actually say, oh, okay, they hit this phishing link, but trying to go back and actually figure out where that link came from can sometimes be hard, right? It's something that sounds like it should be simple, and it isn't. So what Push have done is, now you actually get the full sequence in a nice little diagram in the product, and you can step back and you can say, okay, well, user tried to enter their password here, we stopped them, you know, there might be a couple more links back higher up the chain, which is, oh, they hit this page, which looks sus, but then you can keep walking it back and actually get to where that link, where that whole, you know, problem initiated, even including redirects and whatnot. I just wanted to know what you thought of that, because I know, you know, I'm pretty sure you thought it was cool.
Yeah, yeah. I mean, I think, you know, trying to do investigations in a browser-centric world without access to the browser is already super difficult. And I think, you know, any time you're investigating an incident and you've got, like, full packet dumps from the network, like, it's super valuable. You've got a point of truth, you know, in terms of dates and times and sizes and so on, that you can try and correlate activity against. Even if you can't see inside encrypted communications, you've got some data points. Having that for the actual, like, inside-the-browser stuff... being able to, like, tcpdump, in my old Unix way of thinking about things, you know, and see the ground truth of what's going on in the browser in front of the user's eyes, like, how good is that? Yeah. So, like, it just sounds like a godsend for investigating any incident that involves a web browser, which is all of them these days. So, yeah, like, super.
Yeah, I mean, there's, like... you've got your EDR, your NDR, and, you know, as Jacques explains in this interview, this is sort of that bit in the middle which has been blind. I mean, what would you call it, browser detection and response, BDR? Did we just coin a new one? All right, so we're going to roll on to that interview now, and here is Jacques Louw talking about that LinkedIn campaign that they rolled up. Enjoy.
This attack was really very targeted. So the malware they were using very off the shelf,
but the link delivery and how they were getting people to that malware, extremely targeted.
So in this case, they were targeting CEOs at tech companies. So these are the kinds of tech
companies where you will recognize the, you'll definitely recognize the name. And what we found
is that they were actually compromising a direct contact of those CEOs with which they had been
having a conversation, use that compromised LinkedIn account to deliver a link through LinkedIn
Messenger. That link then goes through to a legitimate Google.com domain where there's a hosted
document with a link, you click that link, you go through to a legitimate document hosted on a Microsoft
domain, and only then once you've gone through Google and Microsoft, you end up on the actual
phishing page. So very interesting in terms of how people got to the phish kit, but the phish kit itself fairly off the shelf, fairly stock standard. Yeah, right. Okay. So how did you first get wind of this
one? How did you detect it? Because I understand that when someone, when a user, when a push user
tries to enter like their SSO password, for example, into their browser, into anything that isn't
their SSO, like that will, they will be prevented from doing that and there will be an alert flagged.
I mean, is that how you caught this one? Was it going to succeed and, you know, push detected that
password entry, or were there other signals? Like, how did you first become aware of this? Yeah, there's a couple of layers. So in this case, um, we detected the cloned login page. So we're detecting that there's something that looks like... in this case, the first attack was a Google login page; the customer is a Google Workspace customer. Um, and then, uh, obviously, when they tried to enter their password. So we caught the cloned login page, the actual phish kit that was being used; they clicked through two warnings, and then finally we ended up blocking them on the password entry, which is great for us.
turned on that block mode, like, four hours ago.
We're so happy we took that.
Yeah.
Yeah, but I mean, I'm a little bit worried for that customer of what happened four hours prior,
right, and in the time leading up to that.
But the interesting thing here is, so we're talking about this as being a LinkedIn phishing campaign. I guess one of the interesting things here is that the reason you know the link was delivered through LinkedIn is because you're actually capturing enough sort of session information to be able to walk it back, right? And I've seen the blog post that you did. You can actually see where, you know, you can see the full chain of events: user clicking from here, clicking from here, clicking from here, oh, then you get an alert, then you get another alert, then you get a blocked password going in.
So, I mean, is this something that you've always done?
You've always offered the ability for people to walk back through the chain and see where that link first popped up?
And this is how a lot of our detections work, but having that data, that context available
at the point of detection is something we released a couple of weeks ago, or a couple of months
ago maybe now, probably a little while back.
Time is a blur, I know what you mean.
We're moving quick, yeah.
So, no, no, and it's been really interesting, like just the kinds of things we're seeing
as soon as that context becomes available.
We've just recently started doing these click-fix attack detections and like a bunch
of them are popping up and it's very interesting to see where they're originating.
So unlike, you know, the phishing attacks that are coming from... you know, typically, okay, there are still a bunch coming through email, but then typically, like, Twitter DMs, LinkedIn DMs, this kind of thing. The ClickFix stuff is malvertising and popped WordPress. Like, that seems to be the go-to. So the guys are, like, vibe coding some kind of website that is specifically targeted to nail one search result, paying a little bit to boost that, and that is how they're delivering this campaign, yeah.
Yeah, so now that you've been able to... okay, so in this case, you walked it back and detected that the link landed through LinkedIn. You were able to, or the incident responders at your customer were able to, go to the CEO and say, you know, you got this message here, like, the link came from LinkedIn, like, can you go back through your messages and, like, where did you get that link? And then I'm guessing they've discovered that, you know, their contact must have been breached or whatever. Like, have you heard through them, because I'm guessing they're the ones chasing the incident, who the attackers were and what they were trying to do? Do they have any idea? And have you seen this same crew elsewhere, or was this just an isolated incident?
We saw multiple versions of like an identical campaign.
So it's not just this company, multiple tech companies, multiple C levels getting hit in exactly the same way.
We ran that blog post.
As soon as we put the blog post up, a lot of other people got in touch and got like, oh, yeah, same thing hit us.
Obviously, less information about whether it succeeded or not in that case.
So I don't know how many of these attacks succeeded, but like obviously hyper-targeted.
and all in the same vertical.
So very interesting from that perspective.
So now, you know, you've developed this stuff
which allows you to walk back the sequence of events,
you know, find the origin point of a malicious link.
What is the main interest there from users
in wanting to be able to walk back these sessions
and discover the origin point for links?
When you know something bad is happening and it's targeting specific people in your org, that is something to pay a different kind of attention to than when it's just, like, scattergun and has just randomly happened to hit someone in your org.
So I think there's a little bit of, yeah,
there's something actionable there.
But I think to a larger degree,
a lot of this sort of tracing,
this discovery, this metadata collection
is built there so that we can improve the detections.
I mean, that's the core of why we built this stuff, right?
So once you start understanding these patterns.
Some of these chains themselves
will become detections, right?
Absolutely.
Yeah.
And like that context,
like often when you're investigating these things,
so let's say you're looking at things,
like an OAuth app or a browser extension.
It's very hard to tell bad or not bad,
based on the permissions that it's asking for, et cetera, et cetera.
Like, you need a lot more data and context.
The second you have, like, the entire flow
of how this thing came to be installed or approved,
that often makes the difference,
and it will jump out at you very, very quickly
when you start looking at this browser extension.
It's called HubSpot, and you installed it
by going to the HubSpot website.
Yeah, good.
This one, you clicked on a link in Reddit and went through these .ru websites and then ended up installing this thing called HubSpot? Probably not okay. So it jumps out at you quite quickly when you start looking
at that extra context. Yeah, now we should mention too that this is something that you are doing,
which is new, which is you are able to pull extension information from the endpoints now,
from the browsers. Are you actually able to do any blocking or enforcement or is it more
informational at this point? Informational at this point. The blocking is coming right around the corner, off the back of that feature released last week. So we are right around the corner of getting that out, yeah.
Excellent.
Now, you also mentioned just then, intriguingly, Jacques, OAuth.
So are you actually able to now work these OAuth events into those, like, timelines?
Absolutely.
Yeah, I mean, detecting an OAuth event is actually a lot easier than you expect it is.
Like, once you just start looking at some of the network traffic, you pull out indicators from that.
It's very easy to make that generic.
So that's something we're very actively pursuing as well.
But I think, like, ultimately we're pursuing.
everything that happens in the browser,
any kind of attack that's happening in the browser,
whether that is like some kind of social engineering,
getting you to consent to something,
approve something, share something,
whether that is something like getting you to download malware, especially through the clipboard, as we've seen with these recent ClickFix attacks that are so effective for reasons I still don't quite understand.
But yeah, I mean, they're certainly working
and they're effective.
So yeah, every time a new technique comes out,
this is the tooling, this is the back end
that we're using to basically get on top of that
and make sure we get detections out very, very quickly.
Like the second we have this capability,
we're using it in a very defined way.
We're trying to cover broad attacks
that are happening every day.
But every time you speak to a customer,
they're like, oh, we have these four or five problems
that this is the exact data we need to solve those problems.
So we have this weird attack that is targeting our employees
by, you know, they're buying AdWords
with this exact internal tool that we're using
and basically using that to get into our estate.
Or we have this use case where this user shared something, accessed something, did something. We want to know who did
it, where did it happen, when did it happen. So like all those things are very easily queryable
when you have this metadata available. Yeah, right, man. That's funny, right? Because, you know, the killer use case for Push has always been anti-phishing, and for finding, you know, preventing phishing when the mail gateway has failed or is not in a position to find the link or blow up the link, right? And even now, obfuscation's got so good that quite often, you know, mail gateways will see the link but they can't get the payload, they don't know it's malicious, right? So that's always been, in my mind, the number one selling case for Push, but I can absolutely see how the metadata stuff, you know, it's catnip for, like, more advanced teams. Like, is this... so this is mostly user-driven, is it, that, you know, you're developing this? Yeah, I mean, it was
developed internally but then we realized oh there's actually a lot of people that are willing
to invest the time and learn how to use the stuff because the data is very very valuable i mean like
can you imagine, you're in a situation now where you have proxy logs and you have EDR logs.
Something is happening on a website somewhere
and then the EDR lights up.
But what is happening in between?
There's kind of like a missing middle layer there.
And this is exactly the data that fills in that gap.
Now, this isn't, you know... like, if you're a Push customer, it's not just like you automatically get this; you sort of have to reach out. This is kind of in early access at the moment, is that right?
Correct, yeah.
I mean, we're using it internally,
but we want to make that available to customers.
We have a list of customers that have expressed interest in that.
So if you're one of those, and this is something you want to get your hands on,
please join or have a look at the link.
Come have a chat with me.
Would love to hear your use cases.
All right, Jacques Louw, thank you so much for joining me to talk about that. I mean, I'm a big Push fan, as you know, and this is really cool, actually. I think it's really cool. But great to see you, mate, great to chat to you, and I'll look forward to doing it again soon.
Cheers.
Love it.
Thanks.
Just bad.
That was Jacques Louw from Push Security there. Big thanks to him for that. And big thanks to Push Security for being a Risky Business sponsor.
That is it for this week's show.
I do hope you enjoyed it.
I'll be back soon with more security news and analysis.
But until then, I've been Patrick Gray.
Thanks for listening.
