Risky Business #745 – Tales from the PANageddon
Episode Date: April 17, 2024

On this week's show Patrick and Adam discuss the week's security news, including:
- Palo Alto's firewalls have a ../ bad day
- Sisense's bucket full of creds gets kicked over
- United Healthcare draws the ire of Congress
- FISA 702 reauthorisation finally moves forward
- Apple warns about "mercenary exploitation", but what's the India link?
- And much, much more

This week's sponsor is Panther, a platform that does detection as code on massive amounts of data. Panther's founder Jack Naglieri is this week's sponsor guest, and we spoke with him about some common detection-as-code approaches.

Show notes
- Palo Alto Networks releases fixes for zero-day as attackers swarm VPN vulnerability
- CVE-2024-3400 PAN-OS: OS Command Injection Vulnerability in GlobalProtect
- Rapid7 Technical Analysis
- Why CISA is Warning CISOs About a Breach at Sisense – Krebs on Security
- Congress rails against UnitedHealth Group after ransomware attack | CyberScoop
- The US Government Has a Microsoft Problem | WIRED
- House GOP bridges divide to reauthorize FISA surveillance bill - The Washington Post
- Top officials again push back on ransom payment ban | Cybersecurity Dive
- Ex-White House cyber official says ransomware payment ban is a ways off | CyberScoop
- Over 500 people targeted by Pegasus spyware in Poland, officials say
- Apple drops term 'state-sponsored' attacks from its threat notification policy
- "All Your Secrets Are Belong To Us" — A Delinea Secret Server AuthN/AuthZ Bypass
- PuTTY vulnerability vuln-p521-bias
- Security engineer jailed for 3 years for $12M crypto hacks | TechCrunch
- Alleged cryptojacking scheme consumed $3.5M of stolen computing to make just $1M | Ars Technica
- Twitter's Clumsy Pivot to X.com Is a Gift to Phishers – Krebs on Security
Transcript
Hey everyone and welcome to Risky Business. My name is Patrick Gray. This week's episode is brought to you by Panther. Panther's founder Jack Naglieri is this week's sponsor guest, and we spoke with him about some common detection-as-code approaches. That interview is coming up later, but before then we're going to check the week's
security news with Adam Boileau and Adam I guess we should start this week by saying oops a bit of
a mea culpa. We did a server upgrade late last week and we made a change to the RSS feed that
we didn't entirely plan,
which was it basically appended forward slashes to all of the podcast GUIDs,
which for most podcatchers wasn't a problem,
but a bunch of people were using stuff like Pocket Casts
and whatever.
You know, those clients decided to re-download
the entire Risky Business podcast catalogue.
Oh, dear.
So, yeah, sorry about that.
Sorry about that, everyone.
And, I don't know, we've got some good new features in our upgraded server, like dark mode for our content management system, which is very nice. But yes, unfortunately, that seems to have resulted in quite a few gigabytes of podcasts. So, yeah, sorry about that. That was a busy day for the old CDN. Can't wait for the bill.
But yeah, we just wanted to say sorry to everyone who that impacted because, yeah, a few people were having a bit of a WTF moment
when all of a sudden their podcatchers tried to download
the last 150 podcasts in both our feeds onto their devices.
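For what it's worth, the re-download behaviour makes sense if you sketch how a podcatcher typically decides what's new. This is a hedged illustration, not any particular client's code, and the GUID values are made up:

```python
# Hypothetical sketch of podcatcher episode deduplication. Most clients
# treat the RSS <guid> as the episode's identity, so changing every GUID
# (say, by appending a trailing slash) makes the whole back catalogue
# look like brand-new episodes.

def new_episodes(feed_guids, seen_guids):
    """Return the GUIDs the client hasn't downloaded yet."""
    return [g for g in feed_guids if g not in seen_guids]

# GUIDs the client already knows about (made-up values):
seen = {"https://risky.biz/RB744", "https://risky.biz/RB745"}

# The same feed after a server change appended "/" to every GUID:
feed = ["https://risky.biz/RB744/", "https://risky.biz/RB745/"]

print(new_episodes(feed, seen))  # every episode looks "new" again
```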
Anyway, let's get into the news, and let's talk about the PANageddon. I'm talking, of course, about the CVSS 10 in Palo Alto Networks' appliances, like, current versions. It's not a memory corruption bug. The bug itself is actually kind of cool. There's been a lot of freaking out about it, and it is bad, but based on our research we think we might actually skate on this one with
limited impact. But let's start with the bug. Tell us about it.
So some researchers, Volexity, saw in the wild some attackers compromising Palo Alto devices, and they are thinking, you know, nation-state, based on, you know, where they were seeing this kind of pretty limited exploitation happen. The bug turns out to be kind of a path traversal in the value of a cookie. So that cookie, you send a cookie to the web interface of your Palo Alto when you're logging into the VPN, and it writes a file onto the file system of the firewall. That file name, it turns out, you can put ../ slashes in, and write a file outside of where it would normally store cookies. Now, arbitrary file write as root, well, actually, arbitrary-file-name write as root, is a difficult thing to turn into a shell. But the attackers in question did find a way to do it, where the file names are processed by a script that reports telemetry back to Palo Alto, and that will fall for a shell metacharacter injection, and onwards to command execution. So we've seen quite a lot of, you know, good quality memes on, you know, infosec Mastodon.
Yeah, it's a dot dot slash, but it ain't quite your grandpa's dot dot slash, right?
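To make the bug class concrete, here's a minimal, hypothetical sketch of the pattern being described: an attacker-controlled cookie value used as a file name without sanitisation, versus a version that checks the resolved path. The directory path and function names are invented for illustration; this is not Palo Alto's actual code.

```python
import os

# Hypothetical session-file directory, for illustration only.
SESSION_DIR = "/var/appweb/sessions"

def save_cookie_vulnerable(cookie_value: str) -> str:
    # Vulnerable pattern: the attacker-supplied cookie value becomes the
    # file name, so "../" sequences walk out of SESSION_DIR.
    return os.path.normpath(os.path.join(SESSION_DIR, cookie_value))

def save_cookie_safe(cookie_value: str) -> str:
    # Safer pattern: resolve the path and refuse anything that lands
    # outside the intended directory.
    path = os.path.normpath(os.path.join(SESSION_DIR, cookie_value))
    if os.path.commonpath([path, SESSION_DIR]) != SESSION_DIR:
        raise ValueError("path traversal attempt")
    return path

# A traversal-style cookie value escapes the session directory entirely:
print(save_cookie_vulnerable("../../../opt/panlogs/owned"))  # /opt/panlogs/owned
```

Note the file name alone is enough here: the attacker doesn't need to control the file's contents for the follow-on injection the hosts describe next.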
And the thing that struck me about this is, you know, okay, the original, like, arb write with the dot dot slash, that would have been reasonably easy to find. Like, if you're kicking this thing around, you start looking at those cookies and you're like, well, what if I do this? You know, I can see how that would be easy to find. But turning that into reliable, like, CVSS 10 exploitation through this telemetry thing, it actually meant whoever did this had a Palo Alto box they were putting some time into, and they were putting some time into really understanding how it worked. So, you know, it does suggest that whoever did this had the time to actually bother with all of that.
Yeah, yeah, this is a bug that you wouldn't have found without having a Palo Alto on the bench to play with,
and whether it's a virtual one or whether it's a physical one, like, these days virtual appliances do make that kind of research more straightforward. But, you know, getting to the point where you've got a shell on one of these things so you can inspect how it works, and then, you know, instrument weird stuff that you might find. So you might have found this bug through fuzzing. If you were fuzzing the web interface, you'd see weird files get written in the file system. Then you have to track back to where it's written. So it's not difficult to find, but then turning it into 'now what' is a bit more fiddly.
And we've seen heaps of other appliances like this.
If you are in a position to get inside of them and look at their gubbins, often you will find these types of bugs.
So, you know, having a research lab with these devices on the bench for you to work at is a
pretty good place to find high impact bugs, but it is a bit expensive if you need actual physical
devices and a plurality of firmware versions and so on.
Yeah. So, I mean, I guess what we're saying is it's not as dumb as it first looks.
It's still pretty dumb. Don't get us wrong. It's still pretty dumb, but it's not quite as dumb as it looks. Now, the mitigation advice that initially came out of Palo Alto, now keep in mind
that, you know, there was no patch for this thing, right? So the best they could do is say, well, here's how you mitigate it.
You turn off the telemetry.
And, you know, that was the first thing that occurred to me as well.
Well, you just stop that from happening
and then people can no longer, you know, get shell.
But, of course, it doesn't solve the underlying write bugs.
So we're already seeing that mitigation being bypassed in the wild. Finding another avenue to turn controllable-file-name write as root into something, like, there was always going to be other ways to do it. And I think, like, I do feel a bit for Palo Alto having to walk that line of having a mitigation option to give people so they feel like they're doing something, and also, in the case of the attackers they had seen, it would have been effective, but without giving away where the actual bug was, right, which is in a different component, and by their powers combined, et cetera. So, you know, it was a little bit weaselly,
but I think given the trade-offs that they had to make, right, it's under active exploitation. We've seen a bunch of interest from people in this bug, like, it's got a lot of traction. Everyone was looking around for proof of concepts to understand what it was. Weirdly, since the patches have dropped and people could reverse them and figure out what the bug was, like, I was looking at GreyNoise this morning and there hasn't been a massive uptick, which, no, I'm kind of surprised by.
Well, I mean, as you say, there's been plenty of people trying out the fake exploits, right, like fake PoCs and whatever, so, you know, GreyNoise et al picking that up. I should correct myself, too. I kept saying arbitrary file write, but it's not. It's arbitrary file name, not arbitrary content.
Yeah, yes, exactly. But the file name gets it done because that's enough to bust through that script. And yeah.
Shell metacharacter injection in the file name, which, you know, it's like Shellshock, but dumber.
Well, actually, it's hard to get dumber than Shellshock. I mean, you know, overall, writing cookies to the file system without sanitizing it, or storing it as, like, a hash or whatever else, like, it's a dumb bug and you would expect better from a security appliance.
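The injection being described here is the classic unsafe pattern of interpolating an attacker-controlled string into a shell command line. A hedged, generic sketch follows; the command and function names are invented to show the bug class, not Palo Alto's actual telemetry script:

```python
import subprocess

def process_file_vulnerable(filename: str) -> str:
    # Vulnerable pattern: the file name is interpolated into a command
    # string run through the shell, so metacharacters like ";" execute.
    result = subprocess.run(f"echo processing {filename}",
                            shell=True, capture_output=True, text=True)
    return result.stdout

def process_file_safe(filename: str) -> str:
    # Safer pattern: an argument vector with no shell involved, so the
    # file name is treated purely as data.
    result = subprocess.run(["echo", "processing", filename],
                            capture_output=True, text=True)
    return result.stdout

# A "file name" carrying a shell payload:
evil = "session.txt; echo INJECTED"
print(process_file_vulnerable(evil))  # the injected echo actually runs
print(process_file_safe(evil))        # the payload stays inert text
```

That's why controlling only the file name is enough: the name itself becomes part of a command line somewhere downstream.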
Well, yeah. You know, it's not as bad as the memes on infosec Mastodon made out.
No, but I mean, here we are, right? And I remember, what was that, October, November last year, when that Citrix bug landed, and it was immediately obvious to me and to plenty of other people that that was going to be a ransomware-palooza style festival, right? Like, that was absolutely going to be a pile-on.
Now, even though these Palo Alto boxes are generally domain joined, it's still fiddly, isn't it? Like, you're still going to have to know what you're doing to use this bug, get access to the Palo Alto device, and then from there pivot into the Windows network, and onwards to great victory.
It's not straightforward. I mean, once someone's done it and publicized a path through, then it's probably doable, but I imagine there are going to be a few wrinkles, right?
I mean, I'm saying it's hard for somebody who's, like, a run-of-the-mill ransomware affiliate idiot. So instead of landing on a Citrix box where everything's right there, this requires extra steps. That requires people to actually know what they're doing. That's more what I mean. I'm not saying, oh, it's impossible to go from a PAN to it. It's not that there's research that's going to be involved there, just, you know, somebody who knows what they're doing.
Yeah, yeah, no, I mean, I agree. Like, landing on a
Citrix Windows desktop is a pretty familiar environment for most people, much less specialized than having to pivot through a PAN. I'm also not, like... you'd have to go workshop up: how are Palo Alto domain integrations done for auth? How do you steal the creds? How do you pivot onwards? What privs are typical for a Palo auth integration? Like, these are things that systems integrators and people will be able to answer very quickly. You know, for me, as an outsider who hasn't maintained a whole bunch of PANs, I don't know the answers to those questions. And getting to the point where you can answer that question is going to take a bit of work. So, yeah.
Yeah.
Yeah.
But I guess what we're saying here is, you know, we're not guaranteed to see ransomware affiliates going ape with this one, right? Like, it's entirely possible that part of the, you know, edge device CVSS 10 circle of life will be avoided.
I mean, I think it'll happen eventually, but it probably, you know, hopefully it will happen a little bit after people have patched, as opposed to while people are still scrambling to get patches. And indeed, Palo has been providing patches for older but still maintained releases over time. So there's a window while we understand the bug but slightly older point releases of the software are not yet patched, and so hopefully it takes longer than, you know, the week or so that Palo has got to do all of the testing and stuff for the older builds, and people can start applying.
I thought this only affected, like, 11 up and not 10? Or, I don't even know. I'm not, you know, I'm not that deep in the weeds on PAN, same as you, right?
PAN says it goes back to PAN-OS 10.2, but not earlier than that, so not 10.1, but there are maintenance releases of 10.2, 11, 11.1.
And speaking of patches, are patches available yet?
Yes, so they've released
patches, but they released it for the most recent version on the 14th, right? So they focused on the current.
Yeah, because they can say to people running the older one, just, go on, come on, upgrade.
Yeah, if they're already on 11, and they're only talking, like, days different. So, like, every day since they dropped, they're releasing a couple of versions' worth of updates.
I guess the reason I'm talking about this from a ransomware perspective, too, is because we've been talking about how the whole ransomware ecosystem is going through some changes. It's in this weird sort of flux state. And I just don't think we're going to divine any insights from this episode. If it does pass without ransomware actors hammering it, I don't think that's evidence of tremendously successful disruption. I just don't think it's going to tell us anything.
I think this is, this is, like, very much, you know, you've seen China in your Pulse Secures kind of thing. That's what this feels like to me.
Yeah, it does feel like it, you know, was a pretty rarefied bug for a while, and now we're just in a kind of a race to get it fixed, you know, and maybe we'll win that race for once. Maybe we won't.
Yeah, indeed. And yeah, we've linked through to a story from The Record by John Greig. We've also linked through to Palo Alto's advisory, and we also have a link to some work out of Rapid7 looking at this exploit, so people who are interested in finding out more can go to the show notes for this podcast
and check them out.
Now, look, you know, it's so funny, right?
Because last week we were like, gee, pretty slow week.
And then I think it was like the next day, like all hell broke loose.
Shouldn't have said it, right?
Just shouldn't have said it.
But the other big thing that happened since we last spoke, Adam, is the Sisense breach.
Yes, we saw an advisory from CISA talking about a breach of Sisense who make
kind of like enterprise analytics products. So you buy a product slash service from them,
you glue it into a bunch of your business systems, and it gives you analytics about your service or
your environment or whatever else is going on. And to do that, it has to have access to get that data, and that means credentials and tokens and remote access and all sorts of things from Sisense's platform into all sorts of businesses. And the scale of those businesses, you know, there's quite a lot of very big ones that use the service, made this more concerning than your average vendor data breach. The backstory is that Sisense were running GitLab, the kind of open source Git, you know, host-your-own GitHub kind of clone, and that has a pretty checkered security history. Someone shelled that, pivoted from there into an S3 bucket inside Sisense's AWS environment, and that was filled with all sorts of wonderful loot and booty and thousands of customers' worth of creds.
And, just to go back a second, was this a breach or was this just an exposed bucket? Because I'm still not entirely clear on whether or not someone actually got this loot, or if it was just left lying around and then someone noticed.
No, so my understanding, based on Krebs's reporting, is that it was a bug in GitLab that then gave them the necessary access or tokens or whatever to get access to the S3 bucket that contained the data. So I think it was a two-step thing, not just a left-lying-around-on-the-internet.
So, I mean, that's nice, I guess. Doesn't really help the customers, of course. So we've seen Sisense out communicating to their customers about what to do here. The real challenge for people is that this stuff is super deeply integrated with people's business systems.
And it's not like just change the passwords
that have been exposed.
Like, we're talking API keys and LDAP credentials and web tokens and all sorts, certificates even, whether it's SSL or X.509 certs involved in the auth process. Actually changing all of the things, like database passwords, where they're, like, straight-up ODBC-ing into people's databases to get the data out. Like, that is a lot of things to roll creds for. So for the customers, big deal.
And I think you alluded to it just then, which is that, you know, really this company walks into the crown jewels, right, of all of these orgs, and, yeah, that's a bad thing to get popped.
It really is.
So you can see why there was a lot of concern about it. And depending on how widely dispersed the data gets, like, this could have quite a long tail, because changing this stuff is difficult. And the guidance that Sisense's CISO sent out in an email, you know, lists all the things that customers should do. And it's a huge list. And all of them are hard. So yeah, rough times.
Christian Vasquez over at Cyberscoop
has a great report here about the political hot water
United Health Group is in
following the ransomware attack against its subsidiary Change Healthcare.
Yeah, this is interesting because it turns out the merger, well, the acquisition of Change
Healthcare by United was already controversial. It was opposed by the Department of Justice
because they were saying, hey, you know, we're sort of concentrating too much healthcare,
you know, too many healthcare services
with this one organisation.
That turned out to be, I guess, a well-founded concern just on security grounds.
Like not even, you know, forget about competition and whatnot.
But I find this interesting because it looks like
this whole thing has gone political.
Yeah, which, I mean, it's probably good that it has. I mean, the fact that there was a vendor this big in the middle of so many places. And one of the things the story kind of illustrates, I guess, is that, you know, there were downstream users of Change Healthcare that didn't even understand that they were impacted by it, because, you know, there's a couple of levels of, you know, indirection between them and Change. And, you know, it was just so much to put in one place, and that kind of happened without people necessarily understanding. Clearly the regulators understood a little bit. But it's a good reminder that this kind of vertical integration, you know, does come with a whole bunch of risks, not just benefits or cost savings or whatever else is used to justify it.
Yeah, I mean, Anna Eshoo,
the subcommittee ranking member for the,
what is it, the Energy and Commerce Health Subcommittee,
said that this amounts to a national security risk, right?
Because it's just the scale of it.
And as a single point of failure, it's pretty bad.
So, you know, maybe they can think about where to fix this problem in other areas before there's gigantic ransomware attacks.
Yeah, but it also comes back to that thing that you and I have said on this show many, many times, which is everything is critical infrastructure, really.
Exactly, yes. And thinking about critical infrastructure, and the fact that everything is critical, with an eye to how does this look to an adversary, and what leverage does this provide over us, is a thing that's beyond healthcare. It's beyond, you know, cloud services, right? Beyond things like Microsoft. Like, these are problems that we need to face, you know, in how the economy works, right? It's not just a technical issue or a security issue.
Yeah. Yeah. Now look, let's have a look at this Wired piece by Eric Geller, who's filing for them these days. He's written a piece just about Microsoft and the way that it prefers to sell
security products rather than fix the security problems in its products. I mean, it's stuff that
we've talked about on the show many, many times, but here it is in black and white. And it's like, he really ripped them a new one. And I feel like the release of the CSRB report has kind of changed the vibe on Microsoft a bit, right? Like, I feel like things are kind of turning around to the point where Microsoft's deficiencies are getting more attention. And I just think this story is a great example of that, because, as I say, he really ripped them a new one.
Yeah, no, he did a great job. I quite enjoyed it as a good read. And he makes a couple of what I thought were just really good, you know, obvious but good points, you know. And good journalism often does, you know,
say the thing that's on everybody's mind,
but it makes it very explicit and clear.
And one of them was,
when was the last time you saw this happen to Amazon or Google?
Well, I mean, you and I were talking about that
and I thought the Paige Thompson, you know,
metadata service thing was probably a reasonable example
of something clunky in AWS
that was an example of like bad product, right?
But they seem to have actually fixed things as a result of that.
So it's a little bit different.
Yeah, I mean, I think back like Google Aurora
is the last time I could think of Google getting really badly spanked.
I'm sure they have other incidents that they've managed to keep more low key,
but compared to like the Paige Thompson thing,
and indeed the whole design of the AWS EC2 metadata service
was dumb to start with.
Yeah.
But that's still a long way from 'SVR has global admin inside Microsoft', or the Chinese.
Yeah, I mean, I guess what I would say, though, is that AWS, you know, doesn't offer productivity suites, so it is a little bit different there. But, you know, Google offers roughly equivalent services, and you don't hear about people OAuth-ing their way to an entire Workspace, you know, and a super admin, by just getting people to authorize a couple of apps, right? So, yeah, it is certainly, yeah,
certainly Microsoft looks the worst of the three.
Yeah, it's good to see some criticism of Microsoft,
you know, really picking up the pace.
The CSRB report, I think, has given a lot of people license,
you know, to wave sticks at them,
which is exactly what we want, of course.
Yeah. Now it's over. Finally, it's over. The 702 reauthorization has gone through Congress, and now it's with the Senate. You know, geez, this was just a saga, wasn't it?
It has been a very long time coming. And I know, you know, I edit the news that comes out of Catalin, you know, three times a week,
and the amount of times that there's been a 'here is the latest, you know, blow-by-blow about 702', and we go, okay, I mean, it's important, but, you know, let's just bin that one for today, because, you know, call me when it's actually passed or actually failed. And at least now it's passed one of the two steps involved. But, you know, the urgency, you know, because there's a drop-dead date coming up real soon, like, they're going to have to move. And I think, you know, maybe we'll see a return to it.
But it's weird watching this become such a political football, right?
When really it's a key thing for the intelligence community
and the intelligence community is important
for US national security overall.
This shouldn't be a thing that has become a political football,
but the fact that it has is, you know.
I think there's, you know, I mean, we saw with the FBI being a bit YOLO with their queries that there's room for reform, right?
But if you want to reform something like this,
you don't try to do it like two weeks out.
Because it is complicated and nuanced, right?
It is.
And there's so many other uses of stuff collected under this authority
that are, you know, way more important than, you know,
the relatively small scale of abuses that we've seen.
So, yeah.
Yeah, exactly.
And, you know, they've even expanded some authorities,
well, some uses here.
So the warrant requirement didn't happen.
That was never going to happen.
But the, you know, you and I were talking about this before we got going.
Like, I think a lot of people sort of don't realize that the 702 data set is quite constrained
to begin with because it's tasked by NSA and, you know, and friends.
And, you know, they're really looking for those national security threats.
And so, okay, maybe some data gets incidentally collected on Americans or whatever, but it's
not the purpose of the program.
It's quite a constrained data set.
I don't personally believe that FBI should just be able to YOLO search information about,
you know, American citizens unless they have a bloody good reason.
And I think they could use some more oversight there.
It doesn't look like that's really happened.
But, you know, they've suggested some new uses here could be, you know, when someone's applying for a visa to the United States, maybe just see if they're in the 702
data set. Because if you are in that data set to begin with, like that's probably a bit of a red
flag. You might be in there innocently, but they'll be able to tell, right? So I think using
information like that for that purpose is entirely what it's for.
Yeah. I mean, if you're going to collect all of that data, then you're supposed to use it to make good choices, right?
Make good decisions, et cetera, et cetera.
While protecting the rights of your citizens.
Yes, because, I mean, this data,
this authority is for collecting data about foreigners.
And the vast bulk of the data in there
is going to be about foreigners like you and I.
Well, we're five eyes,
so a slightly different class of foreigner,
but, you know, we're still foreigners.
When I go to renew my media visa, which I'm going to have to do at some point, which means flying to Sydney and getting grilled by some really horrible person, because they're not very polite at the embassy, you know, if they want to throw my name into the 702 data set, I mean, that's their prerogative, right? It's their country, it's their data.
But so it's just weird that so much focus is being put on domestic use of this when it's really not the primary use case. Or, you know, it's a very throwing-the-baby-out-with-the-bathwater kind of situation.
Yeah, people get real binary. But, I mean, we even had this case where something I wrote wound up on Hacker News recently, and people are like, oh, yeah, Risky
business is great and then of course there's a few like, yeah, but that guy's like so pro, you know, pro spy, pro NSA man, you know.
And it's like, I don't know.
I think really once you drill down into looking at how this stuff is used, you know, find me the abuse, really.
Like the FBI stuff was more like against the, how would you put this?
It was sort of against the spirit of the thing
rather than being like abuse abuse, if that makes sense.
I worry much more about like state police
and in the case of the United States,
even like county police and whatever,
getting their grubby mitts on all sorts of data sources.
I just don't worry so much about the intelligence agencies
because they are pretty well overseen anyway.
Yes, lots of oversight and ultimately they're professionals,
you know, in a way that maybe small-town cops don't so much.
Yeah, exactly.
No offence to the cops in my area.
Now let's have a look at what's happening in Poland at the moment.
I think we've touched on this briefly before,
that the new government there is still looking into
how the previous government was using tools like NSO's Pegasus on, you know, political opponents, journalists, whatever, domestically. And, yeah, we've got some numbers behind that now. The Polish government now says over 500 people were targeted by Pegasus within Poland.
Yeah, I mean, that's quite a bit bigger, because previously we had seen some kind of indication of, like, the number of victims being in the 30s, for example, or tens, and 500 is pretty significant. And there was some detail here about, you know, like, the leader of the opposition party and all sorts of other pretty high-profile targets. And it's nice that the new Polish government, and of course, I guess, there's political reasons for them to be doing this, are really going to, you know, hunt this down and make an example out of it. Because we have seen Pegasus used, you know, by other governments in the EU, and there being a, you know, pretty solid public example of getting spanked as a result of that, I think, is good. You know, I mean, people are going to be in trouble for this, and that's good.
But I think one thing that's interesting here is, do you think this would have happened
if they had to rely on a domestically engineered capability?
And I think it's, you know, it is almost a case where the availability of the tool just
in and of itself kind of encouraged the abuse, you know, and I find that an interesting thing to ponder.
Yeah, and I think especially in light of, you know,
when NSO Group was kind of at its peak and was arguing,
hey, we've got an ethics committee and we only sell
to Western governments and whatever else.
And, you know, they were trying to make hay out of that.
And, you know, in what world would Poland have not been an acceptable customer for them?
Like, no, regardless of some of the shadier people they sold it to.
Like, you would have thought an EU government would be a safe place.
But then, as you rightly point out, the availability of the tool is just too tempting for people who can get it.
And they would not have stood up a domestic capability to build this just for this, right? It's too expensive, too niche. You know, there would have been too many moving parts to do that quietly. Whereas just buying this, and then a little bit of, you know, slipping it into the numbers, and giving it to your favored security service, and away you go, right?
Yeah, versus if you had to build it all, then that would be a much bigger project. So, yeah, I think you're right that the availability makes this kind of abuse happen.
Yeah, and I think that's why you've got things like the PEGA Commission and stuff in
the EU really having to think about this. I mean, previously, I would have been sympathetic to that argument in countries that didn't have the money or the expertise to develop this stuff, but it never occurred to me that even in the countries that do have the money and the expertise to do this, the fact that they can do it sort of quietly by just acquiring it, that does really change the dynamics. So I think all of this work that's going on in various international fora around spyware and the spyware market, you know, I think it's really important work. I do.
Yeah, no, I agree completely, because, I mean, clearly Poland has a lot of very smart, you know, computer nerds, and they absolutely could build this kind of thing domestically, but you couldn't do it quietly.
Yeah. So, yeah, that's right. Well, it's all blowing up on them anyway, so that's something. Meanwhile,
staying with spyware, Apple has notified people in, what is it, like 92 countries, that they've been subjected to attacks from mercenary spyware. Which tells us a few things. Tells us that Apple's telemetry is actually pretty good. And I saw a couple of journos on socials saying, wow, you know, I had no idea that, you know, Apple's telemetry is good. It's like, trust me, if you talk to anyone in that game who, like, works on exploiting iOS, they know how good Apple's telemetry is. Because, I think we talked about this last week, like, if you crash an iPhone, like, that crash dump's going to a team of very smart analysts to pick it apart, right? So, yeah, so someone messed up, I guess.
And they can push out detections.
Like, they've got really good instrumentation in iPhones.
So if they, you know, discover one infection somehow,
someone else brings it to them, Citizen Lab, whatever,
and there's some sort of artifact they can check for,
they can push out a detection for that.
And they just do it all very quietly.
But, yeah, it looks like they, well, notified hundreds of people that they had been impacted here. And interestingly enough, and Reuters has taken note here, they're using the term "mercenary spyware". They're not using the term "state-sponsored". And that's because, well, Reuters thinks that's because the Indian government has been pressuring them on that language. And I find that very interesting.
Yeah, that is an interesting nuance. Because obviously, you know, amongst the 92 countries would have been some, you know, Indian users. And if you received that "state-sponsored" message from Apple, and you were in political opposition in India, like, you could see how you could very much draw the line between, well, there's only one state that cares about me, and it's, you know, the actual government of India. You know, and that may in fact be true, but, you know, Apple doesn't want to be the one saying that. And clearly they have been, well, Reuters' assertion is that they have perhaps been pressured to alter that wording to make it a little less, you know, a little less spicy.
But, I mean, you know,
given we were just talking about political opposition being spyware,
you know, not unbelievable.
But, you know, Modi, I mean, he wouldn't do anything untoward.
What are you talking about?
Well, yes.
And then, of course, there's the wrinkle that, since China is a hard place to do business in these days, Apple makes quite a lot of phones in India. So, yeah, it's complicated.
It is. You know, I mean, it used to be the thinking that if you manufacture somewhere, you have leverage over them, but it turns out it's the other way around.
So, yeah, that's interesting. Now let's talk bugs, bugs, bugs.
Bugs.
Adam, we've got one in Delinea Secret Server, which is an authn/authz bypass.
I haven't really looked into this one.
You put it in there and I'm like, is this interesting?
And you're like, yeah, it stays.
So take it away, Adam.
Tell us about this one.
So Delinea, which used to be Thycotic, make Secret Server, which is their privileged access management product.
I know Thycotic. I just hadn't heard of Delinea.
Yeah, so they renamed themselves a little while ago now.
So this is a thing that you store your credentials in
and put on the internet.
And they run a cloud version of this
if you don't want to run it yourself and put it on the internet.
So the impact of any bugs in this is not great.
And this bug was honestly pretty dumb.
It's a bug in the like SOAP interface.
But net result is you can steal creds out of the thing that stores creds.
And because it also does, like you can proxy connections through it.
So you can use it as the kind of gateway onto your on-prem environment or whatever else for auditing reasons.
So having access to this thing, you know, could get you remote access. It certainly gets you creds. So this ended up being dropped after they tried to report it over a couple of months; eventually they went through their local CERT instead. So big old disclosure mess, big old bug. And the impact, if you're a user of Secret Server, is pretty rough.
Yeah.
Now, the other bug we're going to just quickly mention is one in PuTTY.
And for those who don't know, well, I'm guessing most people listening to this know, but some wouldn't: PuTTY is like a, you know, lightweight little SSH client for Windows.
Yeah, I mean, it's the SSH client for Windows, and still is, even though Microsoft started shipping, you know, the open source one with base Windows.
So this bug is, if you use PuTTY
to create digital signatures
for a particular type of key,
the NIST P521 elliptic curve key,
then people who can see those signatures
can recover your private key.
Now, in the case of SSH, that's not really so bad
because those signatures are like inside the SSH auth process
and you don't see it unless you're the server.
So server can steal your private key, not great.
SSH is not meant to make that happen, but you know, ultimately not terrible.
You can't do it on the wire.
But the other big use of this
is for things like code signing. So if you're using GitHub and you're signing your commits to
GitHub, then by design, those signatures are public. And so someone can harvest signatures
off something like GitHub, and it requires about 60-ish signatures to do this attack. And then they can
recover your private key, sign commits as you, or log into computers that use SSH with your private key. So it's a pretty bad bug. The Git use case is particularly concerning. There are some patches out from the PuTTY maintainer. But ultimately, you know, I've seen some criticism like, this is a really dumb bug, and I can't believe this happened with PuTTY when it's so widely used. But if you read the gubbins of the bug, it's actually because Simon Tatham, the guy who writes PuTTY, implemented the algorithm that is currently, you know, kind of failing, 15 years or something before the standard was done, because he was well ahead of his time in thinking about it. And his implementation had been fine since 2001, but fell apart because keys have kind of gotten a bit bigger than what he originally thought about. So the bug is not as dumb as it sounds, but the impact was still not great.
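To make the shape of the flaw concrete, here is a hypothetical Python sketch, not PuTTY's actual code, of why deriving a nonce from a 512-bit hash leaks key material on a 521-bit curve:

```python
# Hypothetical illustration of the PuTTY P-521 nonce bias (CVE-2024-31497),
# not PuTTY's actual source. PuTTY derived ECDSA nonces deterministically
# via SHA-512, which can only produce 512 bits, while a P-521 nonce should
# be uniform over a 521-bit group order. Result: the top 9 bits of every
# nonce were zero, and roughly 60 such biased signatures are enough for a
# lattice-style hidden number attack that recovers the private key.
import hashlib
import secrets

def biased_nonce(private_key: bytes, message: bytes) -> int:
    """Deterministic nonce capped at 512 bits, mimicking the flaw."""
    digest = hashlib.sha512(private_key + message).digest()  # 64 bytes
    return int.from_bytes(digest, "big")

# Every nonce falls at least 9 bits short of a full 521-bit value.
key = secrets.token_bytes(66)
for i in range(1000):
    k = biased_nonce(key, str(i).encode())
    assert k.bit_length() <= 512  # top 9 of 521 bits are always zero
```

With those top bits known to be zero, each signature yields a linear relation involving the private key, and standard lattice techniques solve the system once enough relations are collected, which is why observing a pile of public Git-commit signatures is enough.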
So the solution here is to go and revoke keys that were generated this way?
So, I mean, you replace the keys,
but it's not a bug in the generation.
It's a bug in the signing using keys.
So if you signed things with a key
that you generated anywhere
using the particular vulnerable curve with putty,
then you should rekey.
But yeah, it's not actual key generation
that's the problem.
Yeah, got it.
Got it.
Understood. Bugs in PuTTY. I didn't expect to be spending five minutes talking about a bug in PuTTY, like, ever, really.
I mean, it's so widely used that really, you know, it matters, right?
It is, man. PuTTY. And I always think, oh, PuTTY! You know, like, that's the way everyone thinks of PuTTY, isn't it?
Yeah, I mean, it's just one of those sort of fixtures that's been around our industry for so long that, yeah, it's got a degree of nostalgia around it, you know?
Yeah.
It's almost like, oh, thank God someone made PuTTY, you know?
Because it's like, honestly, you're on Windows, you've got to SSH into something, and that, you know, pre-PuTTY, you could not do.
Yeah, absolutely.
Yeah.
PuTTY is, you know, it's one of those little gems in our history.
Now we're going to talk about something insanely stupid,
and that's because this is a couple of stories from the land of cryptocurrencies and crypto exchanges and whatnot.
The world of stupid.
This ex-Amazon security engineer has been jailed for three years
for stealing $12 million in currencies, tokens, whatever.
But the reason this one's interesting is because this guy hacked something called Crema, and stole a bunch of money from them, and then offered to return it if he could keep $1.5 million as, like, a white hat bounty. And my dude, that is not how white hatting works.
They quite wisely, I think, called the police instead.
And yeah, now he's going to prison for three years.
It is how white hatting works in the crypto world
where apparently, yeah, you can just show up,
steal people's money and then demand
that they pay you to give it back.
So I'm happy to see someone going to jail for this
because it's a scourge that we have seen play out
in a whole bunch of crypto systems over the last little while.
And I don't want to see that normalized as a thing that people can do.
It's sort of like mugging someone and then offering
to give their wallet back if you can keep $100.
Yeah, like for a reward.
Like, hey, I found your wallet that I just stole off you 10 seconds ago.
Your vulnerability was you didn't notice when I came up behind you and knocked you over the head with a lead pipe.
You weren't wearing a helmet when I decided to knock you over the head.
Yeah, exactly.
Yeah, so I'm glad that somebody did not just fall for the scam, went to law enforcement, got this guy now prosecuted and jailed.
So, yay, good job.
I mean, one point you were making to me earlier
is that quite often the reason these exchanges and whatever
will pay a bounty is because it's funny money anyway.
Like, it's, you know, they're tokens or whatever
that aren't really traded at any real depth
and probably didn't really cost the people holding them anything, and you can just carve them off and give it to them and there's no tangible loss. So that's perhaps one reason why they often play ball when people rack off with $10 million worth of a token and want to keep a mill.
Yeah. And, you know, the hit to any one of these little crypto startups, if someone does steal all their stuff, is they go out of business. Giving away free money that didn't cost them anything to get it back is pretty attractive, and that's why people get away with shaking them down like this. So I'm just real glad that someone got some comeuppance for it once.
for it once yeah yeah and look staying with crypto stuff this guy here what's his name he's a number
nebraska man this one's from dan goodman over at parks the third charles o parks
the third it has been indicted because what he was doing was signing up to various cloud services i
think under you know fake or stolen identities and then just you know doing epic crypto mining
operations mining stuff like monero but what's interesting about this is we got some numbers
out of the indictment where it turns out he, you know, caused costs.
He incurred costs of like three and a half million dollars to mine one million dollars worth of cryptocurrency, which just goes to show you like, you know, you really need an edge if you want to go do mining in this day and age.
Or you just, you know, you're going to be basically, yeah, spending three bucks fifty to make a dollar.
Yeah. It's just like the blatantness of this. Of, like, just sign up, don't pay a bill for a month, because the bill doesn't come until the end of the month, and crypto mine. And if it's Monero, it's probably CPU mining, which is why it's so extra inefficient.
Yeah, but even if you're using an Nvidia, right, you've got your hardware cost and then your energy costs and everything. Yeah, it's almost like the whole crypto ecosystem is bunkum.
Yeah.
It's almost like.
I'm bracing for the email.
Bracing for the email.
Now, we're going to end this week with a Krebs on security story.
I did actually see this one before he wrote it up,
doing the rounds on the old Twitter,
which is since they changed the domain from Twitter to X.com,
they did something hacky,
which has resulted in funny, bad stuff happening.
I'm really setting this up badly,
but Adam, walk us through this one because it's actually hilarious.
So Twitter is attempting to make it so that you don't see Twitter.com, you see X.com.
And so they are rewriting links that people post in tweets from Twitter.com to X.com or X.com to Twitter.com.
So that you can, you know... you don't necessarily see that. But the net result of how they did this is that if you had a domain that ended in x.com, it got rewritten to twitter.com. So, for example... no, the other way around: if you had one that ended in twitter.com, it would get rewritten to x.com.
Well, so on the display side it gets rewritten to x.com, but on the actual click-through side it's Twitter. So the example was, like, fedex.com. So if you had a link to fedetwitter.com, it would be displayed as fedex.com, but actually would link to fedetwitter.com.
Yes. So people have been going around registering, you know, copycat domain names or whatever else, for domains that ended in x.com, and then sticking a "twitter" in instead.
So like space-twitter.com is SpaceX.
Yeah.
And Carfax Twitter becomes Carfax.
Yes.
So, like, naive string replacing turned out perhaps wasn't the right way to deal with URL rewriting there at Twitter, which is just super dumb and embarrassing.
They've now fixed it, but not before, you know, there was a whole bunch of comedy and face-palming on x.com. And you can totally see how this happened. Which is, Elon comes down to the skeleton crew that's remaining after, like, 75% layoffs, and says, why am I still seeing twitter.com? Fix it or you're fired.
and someone did this and then they all just moved on
and forgot about it.
Like, you know that's what happened.
That is 100% how it went down.
Now, before we go: Martin Groton pointed out that Orthodox Easter Sunday isn't actually until May 5.
So we kind of got our timing a little bit funny
when we were joking around about Orthodox Easter last week.
But that means the ransomware market might resume on Monday, May 6. Let's see.
Well, we'll have to see. I guess we'll look forward to that data point and see whether we were right or wrong.
Yeah. Yeah. After everyone's done celebrating Easter with grandma or observing Easter,
I should say, with grandma. But yeah, we're going to wrap it up there. And there's no show next week.
Well, there's no regular risky business next week.
I'm taking the week off because it's school holidays.
I'm going to hang out with my kids and, you know,
have some fun with them, which is going to be great.
But we are running a podcast next week and it's a new one.
It's a new series that I'm very pleased to be able to announce right now.
We've partnered up with Sentinel One to do a podcast
with me, Alex Stamos and Chris Krebs, because they both work over there now. So we don't really have
a name for the podcast series yet, but we've recorded the first episode. It's edited. It's
ready to go. So we're going to push that out instead of the regular weekly show next week.
And the topic is all about technology and sovereignty and how China, Russia,
the United States all have different sort of supply chain concerns and whatnot. And, you know, it's about 45 minutes long. I found it very interesting. Adam, you edited that one, actually, and, you know, I know you heard it.
Yeah, I really enjoyed it. Like, it's a great listen. I don't know if we're going to call it Krebs, Stamos and Gray or what it's going to end up being, but it's a great listen. It's, like, 40 minutes of just, like, the sort of high-level conversation about these big-picture issues that I think we don't have enough of. So yeah, I think everyone listening to the show will totally dig it. So, yeah, look forward to it. Tune in your podcatcher for next week.
Yeah. So, I mean, it's just great to be able to do a podcast with Chris and Alex, who, you know, they've been regular appearances
on the show over the years and to be able to actually do a podcast
with them is terrific.
I think we're going to push out one of them about once a month
for the next six months or so at least.
So, yeah, hope you all enjoy it.
But, yeah, that's it for the news, mate.
Thank you so much for joining me and we'll do it all again in two weeks.
Yeah, thanks, Pat.
Enjoy your week off, and I will talk to you then.
That was Adam Boileau there with a check of the week's security news. It is time for this week's sponsor interview now, with Jack Naglieri from Panther. And I guess you can think of Panther as, like, a modern SIEM: you know, you can throw an awful lot of data at it and run detection as code on all sorts of events and all sorts of data sources. And, you know, this is just something that is, like, more contemporary. If you had to simplify the description of them, it's like a contemporary SIEM. But yeah, detection as code is the new hotness, and Panther is definitely at the forefront of making a platform that lets you do detection as code. So I started off by asking Jack about how people are actually using detection as code in the real world. And here's what he said.
It's not really about getting a hundred percent coverage in the MITRE ATT&CK matrix. It's more
about covering the tactics and techniques that are important for your company and like really
doing a great job of covering those.
So that's kind of like the first start is the mindset going into it is you have to know what's worth protecting and then you have to work backwards from, okay, based on that,
how could someone get in and cause harm?
Okay.
How do we work backwards from that and then figure out what data do we need to inform
a correlation that would make
that really reliable. Patterns that we're also seeing are incorporating more business logic and
identity logic into rules. And I think it's a little bit more of a recent development. I think
that prior we would just have to hard code that into rules. Now integrations are a lot better to where we can build that into logic by default.
So really pulling in the surrounding context
from the organization, the historical context as well.
And now that things are in data lakes and structured,
we can do really interesting things,
like obviously AI.
Setting a lot of that up really just comes down to having great data.
And once you have that data,
you can start using all of these different bits and techniques to your advantage.
So we're seeing much more interesting combinations of these things,
like deterministic correlations, looking across log streams,
using LLMs to understand outputs.
A lot of it really is collecting the right amount of evidence
to have confidence because analysis is not often black and white.
I think that's what makes it really difficult
because it's all evidence-based.
So what else can we look at that gives us confidence
that this is actually right?
Versus in something like cloud security
tends to be a bit more black and white.
It's like this thing's vulnerable or it's not vulnerable.
It is this version that we know is bad or it's not.
I mean, what's amazing to me
is how much detection we've done until now,
just on what you were saying about identities,
is about how much identity information
has been lacking and missing from these detections.
And it is great to see this starting to sort of catch up,
but it's kind of stunning, isn't it?
That that integration hasn't been there until recently.
Yeah, for sure.
I mean, I think a lot of the reason that we took some time
to lag behind is that we didn't even have
good infrastructure baselines to really build
on the volumes that we needed. So I remember, you know, many, many years ago when I was working as
a practitioner, we tried shoving an off-the-shelf SIEM into our environment, and it just falls over, because it couldn't even begin to handle the amount of data that we had. And this was, you know,
eight years ago, 10 years ago.
I think it's also that, you know,
environments are just so different, right?
Like I had this,
I remember having this conversation
with someone very smart, like over a decade ago,
and they're just like, well, you know,
it's very difficult to baseline
what a network should look like
because they're all such snowflakes.
Yes, they're all very unique.
And I think this is why the promise of ML and security
has been really challenging historically.
But I mean, shouldn't ML be the solution there,
which is where it can come in and actually find a baseline
for every individual environment?
Right, for every individual environment.
But I think in the past, the approach of like,
we train it on a common data set and roll it out.
Yeah.
We just see a lot of tools even still in the wild
that aren't very reliable.
Yeah, that's...
Right.
Ain't that the case?
I was thinking too that like, you know,
a pen tester buddy of mine, a million years ago,
there was one product he used to really like,
which was actually a Rapid7 product called,
I think it was User Insight or whatever.
And it was really because it had that user analytics bit
that it would correlate against other event information. And it's just funny that it was kind of like a small to
medium enterprise targeted tool that he was just like no this is good so here we are a decade later
and you know doing it properly at scale so you know you've talked about the type of information
you know you want to bring together get it one spot, put all that context there.
Obviously, once you've done that, you can start correlating, right? You can start looking across
different data sources and seeing when you've got a five alarm fire. What are the dead giveaway
correlations that people are writing rules for at the moment?
Well, I mean, obvious ones are around... I think exfil is a very loud one, right?
You see a bunch of data being accessed
that is either somewhere very sensitive
or just is a very high volume of access.
Like that one's fairly simplistic
to have pretty reliable results in.
Looking at things like brute forcing in Okta
is a very canonical example we throw around all the time,
which isn't very interesting.
It's like a bunch of failed logins and then a successful login.
It's like, okay, yeah, something went wrong
or someone really forgot their password.
I think the more interesting ones come when you look across data sets.
So it's like the failed login and successful login is not interesting,
but then when you pair it with the exfil, it's like, oh, okay.
So multiple points together are starting to happen.
And then this goes back to what we were talking about about MITRE,
where you might look at one specific technique
and then a sequential technique right after and sort of pair them
together. Right. Those types of correlations are really important and really helpful. And we see
people talking about and building those a lot. I mean, that's going to catch a lot of stuff,
but I mean, there's always going to be the concern, isn't there, that an attacker is doing something
that you just haven't, you know, set up a rule to detect in terms of being able to correlate that particular sequence.
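The pairing Jack describes, a burst of failed logins, then a success, then exfil-sized data access, can be sketched roughly as code. This is purely illustrative Python with an in-memory state store standing in for a real rule engine; it is not Panther's actual correlation API, and the event shapes and thresholds are invented:

```python
# Illustrative correlation: a burst of failed logins, then a success
# (low signal alone), then high-volume data access -- only the full
# sequence for one user raises an alert.
from collections import defaultdict

FAILED_THRESHOLD = 5           # assumed tuning value
EXFIL_BYTES = 10 ** 9          # assumed "exfil-sized" read

state = defaultdict(lambda: {"failed": 0, "suspicious_login": False})

def process(event: dict) -> bool:
    """Return True when the full correlated sequence fires for a user."""
    s = state[event["user"]]
    if event["type"] == "login.failed":
        s["failed"] += 1
    elif event["type"] == "login.success":
        if s["failed"] >= FAILED_THRESHOLD:
            s["suspicious_login"] = True   # tag it, don't alert yet
        s["failed"] = 0
    elif event["type"] == "s3.get" and event.get("bytes", 0) > EXFIL_BYTES:
        return s["suspicious_login"]       # exfil after suspicious login
    return False

events = (
    [{"user": "alice", "type": "login.failed"}] * 6
    + [{"user": "alice", "type": "login.success"}]
    + [{"user": "alice", "type": "s3.get", "bytes": 5 * 10**9}]
)
alerts = [e for e in events if process(e)]
assert len(alerts) == 1  # only the paired sequence bubbles up
```

Neither half of the sequence fires on its own, which is the point: the failed-then-successful login is merely tagged, and the large read only becomes an alert in that context.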
Or people slow down, right?
So we've seen this in the past, right?
People will do one step of an attack
and sit there for 30 days and wait for the logs to flush
and wait for the correlation window to drop off and then continue.
I mean, that's still a win because you're slowing them down,
but you're not going to get everything this way,
I guess is what I'm saying, right?
Right.
Yeah.
I mean,
the best attacks look like normal behavior.
Right.
Yeah.
Um,
or they look like one alert that gets buried,
I guess is more my point,
you know?
Yeah.
There's,
there's obviously not a silver bullet.
It's all about defense and detection in depth.
Right.
Yeah.
You want multiple signals to really like layer on top of each other to bubble things up that are interesting.
And something that we've been talking about a lot
on my podcast as well, Detection at Scale,
is really about this signaling concept.
Because there's a lot of security-relevant events that happen
that we don't necessarily need alerts on,
but we need to sort of tag and use for later.
And the correlations that we've been building
are really along that line of like thing that happens,
we sort of tag it, put it to the side,
and then we sort of create these like derived rules
that look at all these various signals together in different ways.
And that can help improve our ability to detect
more than just like a singular, like atomic rule,
which is what we were doing for a long time.
Yeah, yeah.
So, I mean, is that window, you know,
is a delay useful to an attacker?
Like that window concept
that I was just talking about a moment ago,
are people still doing that to try to, you know,
get around this sort of correlation?
I'm guessing not,
because most people are still just doing smash and grab, right?
Right, it's funny you say that. I just used that exact phrasing. Like, a lot of attacks are just, like, quick smash and grab: we have access, we've got to use it right now. Because the longer that they wait, the longer, you know, the potential it is for them to be discovered. So why lose that opportunity?
Yeah. I mean, probably the most enjoyable thing to watch, and I shouldn't say enjoyable, you know, it's very bad, it's crime.
But the Lapsus$ smash and grab against Uber,
like a couple of years ago, that was just extraordinary, right?
When, you know, they were making so much noise
and everyone was watching them in there.
And, you know, that was a race to eviction, but wow,
that's amazing how much damage they were able to do
in such a short period of time.
Yeah, that one was brutal.
Yeah. Yeah. So can you think of any other sort of, I mean, you've given us a couple of great
examples there, which is failed logins, then successful login, then exfil, you know, what
are some other sort of sequences and steps that are generic enough to catch things that have a
bit of variation in them, but also specific enough not to throw false positives at you?
I think another interesting one is combining security telemetry alerts
with things that are probably
appearing a bit more benign.
So kind of combining this idea of
high and low signal things.
So
layering, for example, additional logic into
something like GuardDuty could be one.
So GuardDuty can
be helpful for certain types
of attacks, and you might want to join that with some other thing
that might appear benign on its own but together are significant.
So that pattern as well of taking something
that is one potentially reliable set of signal
and combining it with something that's typically benign,
and then together that becomes more relevant.
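That layering pattern, tag low-severity signals per entity and only alert on the combination, can be sketched as well. Again this is illustrative only: the entity names, signal names and threshold are invented, and this is not Panther's or GuardDuty's actual API:

```python
# Illustrative "tag signals, alert on combinations" pattern: individually
# benign-looking events are recorded per entity, and a derived rule fires
# only once enough distinct signal types stack up on one entity.
from collections import defaultdict

signals = defaultdict(set)

def tag(entity: str, signal: str) -> None:
    """Record a security-relevant event without alerting on it."""
    signals[entity].add(signal)

def derived_rule(entity: str, min_distinct: int = 3) -> bool:
    """Fire only when several distinct signal types land on one entity."""
    return len(signals[entity]) >= min_distinct

tag("build-server", "guardduty.recon")     # a GuardDuty-style finding
tag("build-server", "login.new_geo")       # benign on its own
tag("build-server", "iam.policy_change")   # benign on its own
assert derived_rule("build-server")        # together: worth an alert
assert not derived_rule("some-laptop")     # isolated noise stays quiet
```

A production version would add time windows and signal weights, but the core idea is the same: the derived rule consumes the tagged signals rather than raw events.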
And that's similar to what we see in more risk-oriented systems. They do similar things, where they're looking at, like, the more bigger picture of what's happening to that particular identity.
Now, your customers are more likely to be sort of more at the cutting edge of SIEM, right? They're more likely to be your people who are comfortable taking all of this stuff, sticking it in a data lake and letting models crawl over it, as in the words of the head of security of Lyft, who was on the show a few weeks ago.
And he didn't seem to think that was awesome, because there's a few questions around that when it comes to, you know, ML models crawling all over it. Which is, you know, fair enough. But, you know,
just my question is what sort of orgs are embracing this type
of approach? Because we've been stuck in a Splunk world for so long that it's almost hard to imagine
that it's going to change. And, you know, so what are the orgs that are starting to embrace this
new way of doing things, of just pumping everything into one place and then doing the analytics there?
It's actually pretty surprising.
We see orgs of all shapes and sizes, some bigger enterprises, some startups,
some mid-enterprise companies that were born in the cloud and continued on that way.
But we tend to find that folks really want to get off of Splunk for a lot
of the obvious reasons. Yeah, it costs a lot. But they, I think, rubbed a lot of people the wrong way after the acquisition, were raising prices and things like that.
I mean, that's the playbook for an acquisition like that. I mean, it's not just Cisco who's
going to do that, right? Like look at Broadcom and VMware.
I think someone who was on recently referred to the, you know, the license renewals; people are getting sticker shock.
Yeah, it's hard.
And everyone's trying to be more efficient anyway.
So it just adds, you know, salt in the wound in a lot of ways.
But yeah, we see all different types of enterprises
wanting the benefits of things like a security data lake
or just getting detection as code.
And a lot of those things are hand-in-hand with each other
because you need well-structured data
to do anything more interesting with security logs anyway.
Are you starting to see much rip and replace
or is it mostly greenfields when people are doing this?
It's a combination.
I think, to your point, you know, it's rare to be fully greenfield. But that's the easiest way to start. We typically see, like, certain workloads that are more cloud-oriented or super high scale; starting with that is a common pattern.
Yeah. So they start where it makes sense, and then gradually it's going to eat the other stuff.
Yeah. It's funny, I was doing research for a blog that I was writing this week,
and they were talking about EPS limits in SIEMs,
and I was like, wow, I haven't thought about that in so long.
But it's still a real thing in some cases.
We just don't have that problem in cloud-based workloads
because things are auto-scaling, they're serverless,
you don't think about it.
Yeah.
That's what we want every innovation cycle, right?
Yeah, I was going to say,
I mean, like maybe it's just modern,
you know, and contemporary,
like, and not based on ancient approaches.
Yeah.
All right, Jack Naglieri, thank you so much for joining us on the show to have a bit of a chat about, you know, really how practitioners are using detection as code to tick off a few, you know, solid detections. Very interesting stuff.
Thank you, sir. Appreciate the time.
That was Jack Naglieri there from Panther.
Big thanks to Panther for sponsoring this week's show.
And yeah, you'll be hearing from me next week in that wonderful podcast that I've recorded with Alex Stamos and Chris Krebs.
But until then, I've been Patrick Gray.
Thanks for listening.