Risky Business - Risky Business #802 -- Accessing internal Microsoft apps with your Hotmail creds
Episode Date: August 13, 2025On this week’s show Patrick Gray and Adam Boileau discuss the week’s cybersecurity news, including: CISA warns about the path from on-prem Exchange to the cloud ... Microsoft awards a crisp zero dollar bill for a report about what a mess its internal Entra-authed apps are Everyone and their dog seems to have a shell in US Federal Court information systems Google pays $250k for a Chrome sandbox escape Attackers use javascript in adult SVG files to … farm facebook likes?! SonicWall says users aren’t getting hacked with an 0day… this time. This week’s episode is sponsored by SpecterOps. Chief product officer Justin Kohler talks about how the flagship Bloodhound tool has evolved to map attack paths anywhere. Bring your own applications, directories and systems into the graph, and join the identity attacks together. This episode is also available on Youtube. Show notes CISA, Microsoft issue alerts on ‘high-severity’ Exchange vulnerability | The Record from Recorded Future News Advanced Active Directory to Entra ID lateral movement techniques Consent & Compromise: Abusing Entra OAuth for Fun and Access to Internal Microsoft Applications Cartels may be able to target witnesses after major court hack Federal judiciary tightens digital security as it deals with ‘escalated cyberattacks’ | The Record from Recorded Future News Citrix NetScaler flaws lead to critical infrastructure breaches | Cybersecurity Dive DARPA touts value of AI-powered vulnerability detection as it announces competition winners | Cybersecurity Dive Buttercup is now open-source! HTTP/1.1 must die: the desync endgame US confirms takedown of BlackSuit ransomware gang that racked up $370 million in ransoms | The Record from Recorded Future News North Korean cyber-espionage group ScarCruft adds ransomware in recent attack | The Record from Recorded Future News Adult sites are stashing exploit code inside racy .svg files - Ars Technica Google pays 250k for Chromium sandbox escape SonicWall says recent attack wave involved previously disclosed flaw, not zero-day | Cybersecurity Dive Two groups exploit WinRAR flaws in separate cyber-espionage campaigns | The Record from Recorded Future News Tornado Cash cofounder dodges money laundering conviction, found guilty of lesser charge | The Record from Recorded Future News Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home | WIRED Malware in Open VSX: These Vibes Are Off How attackers are using Active Directory Federation Services to phish with legit office.com links Introducing our guide to phishing detection evasion techniques The State of Attack Path Management
Transcript
Discussion (0)
Hey everyone and welcome to risky business.
My name's Patrick Gray.
We'll be getting into a discussion of the week's cybersecurity news in just a moment with Adam
Boiloh and then we'll be hearing from this week's sponsor and this week's show is brought
to you by the team at SpectorOps and they make and maintain Bloodhound,
which is a very popular open source package for doing attack path analysis.
pen testers worth their salt all know and use Bloodhound and of course there is an enterprise
edition of that software as well and they've just done a major release I think it's version
8 and forgive me if I got the version number wrong but yeah they've just done a major release
and they have broadened out the attack graph capabilities of Bloodhounds such that you can
basically Swiss Army this thing to get it to analyze whatever type of attack powers you want
Justin Kohler will be joining us a little bit later
on in this week's sponsor review to talk through what all of that means. But it is a major
release and it is very cool stuff. So I do recommend you hang around to watch or listen to that
discussion. But yeah, let's get into the news now, Adam. And kicking things off this week,
and yeah, I think it was like shortly after we put the show down last week, CISA, you know,
put out a warning on the latest exchange vulnerability saying, oh my God, you have to patch it
immediately and like it looks like exchange admins are having a worse time than normal which you
wouldn't have thought was actually possible but there's this bug that allows you to go from like
on-prem exchange like escalate up into exchange online or whatnot like what what can you tell us
about this one so there was a talk at black hat by dirkian mullimer who we've had on the show before
and he's you know kind of understands a whole bunch around how weird Microsoft all things work
and he was looking into how, amongst other things,
how Exchange on-prem integrates with Exchange Online.
And Microsoft has a number of features that rely on that integration.
They describe it very euphemistically as the,
was it like the enhanced coexistence features of their environment?
Yeah, totally normal language that you use when everything's fine.
But basically, this is the thing that says,
like if you've got both exchanges on-prem and cloud,
which many organizations that are migrating
or have different business units,
like it's not unusual for that to exist.
Anyway, if you want to have things like integrated free busy
or integrated profile picture sharing,
where really important features like that,
then there is some kind of like mechanism
for the on-prem exchange to integrate with cloud exchange
and they can share and talk about stuff.
And it feels like the mechanism by which this happens
is probably like dates from fairly early
in Microsoft's cloud transition
and so it's not super well thought through.
And the net result is that, yes,
if you are an admin of an on-prem exchange
that has this, you know,
enhanced interaction stuff turned on,
then you can,
then the account that is used to facilitate that in the cloud
has way more privileged than it should do.
So like read, write, user directory or whatever.
So you can steal the credentials
or steal the access from on-prem
and then leverage that to move up into the cloud.
Durkjohn's talk has a whole bunch of really excruciating details
about exactly how and why things be like they be.
And the answer is it's just really complicated
because Microsoft's stuff is really complicated.
And they don't necessarily all understand how it works.
But yes, Sessa seems to think that if you're a FedGov agency,
then yeah, probably you should apply the patches.
Well, you must.
And I guess what's cool in this case,
right, is that, you know, normally when you see one of these emergency directives from Sisser,
it's because someone is actively exploiting the bug.
In this case, it looks like maybe they've got out ahead of it, which is nice.
Yeah, and actually, what's interesting about this bug as well is that Microsoft,
I don't know if I've seen Microsoft do this before.
They've got this program where they're going to intentionally break these features
for a couple of days at a time over the next three or four months.
So people will see them broken and say, oh, I need to investigate that and hopefully fix it in the means of.
Yeah, so to try and get their customers' attention,
they're just going to turn off, you know,
what is it, like the free busy scheduling
and the other features that this needs.
They're just going to turn it off for a couple of days at a time
and hope that people notice.
And they've specifically said, like,
there's no exceptions.
You can't phone the service desk and say,
oh my God, you know, we really, really,
really need our free busy scheduling
between on-prem and cloud.
Like, that's just tough.
And then they're going to permanently break it
at the end of October this year.
So that's kind of cool in a way.
They must be more worried than average about it.
Or maybe this is a sign that Microsoft will be willing to, you know, like,
this is a company that really loves back with compatibility.
And they're actually going to brick one of their own features just to get people's attention.
They've done it here and there.
Like I remember all the way back for Service Pack 2 of Windows XP,
that was the first time I think that really broken stuff in the name of security.
So they will do it.
And that was like entirely justified.
It had to happen.
They did it.
They pulled the Band-Aid off.
So it is nice to see him do it again.
I mean, one thing I'd note here is like you and I, we don't sit here saying,
oh, you know, there's a class of vulnerable devices.
You know, people just should not use them.
Like, we don't usually do like absolute advice.
But I do remember several years ago, you and I both saying, like,
exchange on-prem has had its day, it's got to go.
You know, and it's like it's something that we try to avoid doing
because, you know, often it's not realistic to tell companies to stop using stuff
that, you know, does have some inherent problems.
But in the case of exchange on-prem, like it's had its day years ago.
Yeah, yeah.
I mean, that's how I feel about, say, Fortinet, for example.
But, yeah, you can't just turn around and say,
yeah, you just can't use them anymore, or stop it.
Yeah.
You know, and if we could stop people using exchange, we probably would.
So, yeah, it's funny that they are, you know,
of all the things that Microsoft has to use this particular stick with,
it's funny that this is the one, you know?
Yeah, now let's stay with Microsoft and talk about some.
research here where someone was just having a poke around and a whole bunch of horrors fell out.
Walk us through this blog post because you put this one into this week's run sheet.
Excuse me.
It's called Consent and Compromise, Abusing Entra Oaf for Fun and Access to Internal Microsoft
Applications.
Again, this was sort of like someone stumbling onto something that was horrifying.
Yeah, this guy, Vaisha Bernard, wrote up this journey where he was using Microsoft's, so
you've ever seen like the AKA.m.S. Link Shorten that Microsoft users for a bunch of their stuff.
And he wondered like, what happens if you just go to AKA.coms? And there's a login prompt
and he tries to log in with his user credits because why not? And it says, no, you don't have
perms to use this app. That's totally fine. But that leads him to thinking that I wonder how many
things in Microsoft you can log into with a consumer account or another tenants account
because of the way, you know, enter app IDs work. So he exhaustively,
like domain discovered all of the Microsoft domains,
went through and found all the ones that have a login prompt
that will take a, you know, as you were, you know, enter ID off,
tries to auth to all of them.
And that leads him into this vein of, you know,
really quite a rich goal of internal Microsoft applications
where the team that deployed it,
deployed it as supportive for multi-tenant
and the app developers that built it
never thought about what would happen if he deployed this multi-tenant
and didn't specifically check,
that you are, you know, coming from an internal Microsoft organization.
And the net result is, yeah, he ends up in all sorts of weird, you know,
Microsoft engineering backwaters with interesting bits and pieces.
And he submits all of this to MSRC, and they say, thank you very much.
They stand up a team to go through and order it all and figure it all out and deal to it.
And then they don't give them a bounty.
Boo!
And so then the end of this particular journey blog post, which is wonderful.
has good memes and quality write-up is him logging into the Microsoft cloud application
where they approve and send payments to people for bug bounty rewards or other things
where like basically you just stick a PayPal account in say how much money you want to send
it hit send and then Microsoft will send you that money and so yeah he leaves the blog post on
that question of whether or not he sent himself some money well assumed that he didn't but
I mean this is the standard right for crypto stuff I mean he could have sent himself 10 million
and then offered to return nine million of it.
No, that's what he should have done, yeah.
It would have been absolutely fine.
Microsoft's lawyers would not have had anything to say about this.
It would have been absolutely fine.
But the real moral of the story is like,
even Microsoft doesn't know how to use their own tooling,
and everybody else who's ever screwed up deploying apps into Azure
and made these kind of mistakes.
Even Microsoft gets the stuff wrong,
and they even get it wrong on important internal things.
So like it's not just you, this really is hard,
and really is hard and it really is kind of a mess.
And I hope Microsoft will learn something from this process.
I mean, it's, you know, this is the strength and the weakness of ASIO, right?
It's just one big system.
You know, it's one big directory and everybody's in the same directory.
And it's just this big multi-tenant thing, which isn't sort of separated along customer lines.
It's one thing.
Yeah, I mean, it's one mainframe.
We've gone from distributed computing back to, like, there's the Microsoft computer and the Google computer.
And it's all just, you know,
multi-user, you know, one big multi-user computer that we all share. Yay.
Yeah, fantastic.
Now, back in, I think it was July, Politico reported about an intrusion into the US court system, right?
The court filing system.
They've got some follow-up reporting here, which is somewhat alarming,
which says that drug cartels may have accessed a bunch of the data stolen out of these court systems.
Now, this is just as per officials, some officials.
that Politico's spoken to, but, you know, they seem to be good at journalism, right?
So you would assume that they're speaking to well-placed people who, you know,
these are reasonably grounded fears, fears that are grounded in reality.
Although the interesting details here is, like, what's not really clear,
how the cartels may have got access to this information,
whether or not they hacked it themselves, or they're just buying it from other hackers,
or whether there's corrupt officials in other places that acted.
And one of the reasons they're not sure exactly what went down,
here is because there were so many different threat actors in these systems at
once that it's a little bit hard to figure out exactly what happened but you know
I know our colleague Tom Uren he's taking a look at this this week for the
seriously risky business newsletter and podcasts to subscribe to those too if you
haven't people but you know this idea that now you've got to worry about like the
Sinaloa cartel you know like we had that report you know a month or so ago where
we were talking about the cartel
actually surveilling FBI people with like stingrays and stuff on the ground to find
cooperating witnesses and whatnot. Now you've got these reports where they're hacking the court
system or at least obtaining information that was hacked from the court system really changes
the way you need to think about the criminals you're investigating when they have this sort of
capability. Yeah, like it's a pretty messy situation and some of the other reporting has said there's
been all sorts of Russians up in there, and they're not sure whether it's Russian, you know,
like government, cyber, or whether it's, you know, cyber criminals or why not both, you know,
or why not cartels as well?
I mean, you get the impression from reading this that, like, everybody had a shell in their systems, right?
That seems to be the case.
And, you know, there's some pretty extraordinary tales of, like, judges being told to not use
it for certain cases or having to handle things on paper only because they can't trust their
computer systems.
I think there was some report to House Judiciary Committee
that one of the systems PACA is basically just not sustainable.
Like unsustainable due to cyber risks and needs to be replaced.
Like it sounds pretty bad from the kind of technical problems they've got.
But also just, you know, in general, the court system is a thing that you kind of rely on
to be able to enforce, you know, sealed filings or whatever else.
I mean, they handle NATSEC stuff apparently independently, like on paper,
not using the system, but there is a whole swath of cases and, you know, Mexican drug cartels,
etc., other sorts of things where, you know, there is very real risk to witnesses and to other people
involved in court processes if you start, you know, having these, you know, documents and data
available to anyone who's buying or anyone who's hacking.
Yeah, and it's not like, look, I mean, it's not like this sort of risk is just contained to the
courts and the FBI.
I mean, we've seen some spectacular failures in the intelligence.
community on the cyber front that have resulted in people being killed.
I'm thinking in particular around that covert comm system used by the FBI,
which by the sounds of things with something, like, you know,
just drop a WordPress comment here and it's fine, you know, ding.
So, but you would think that, like, I think the FBI and the DOJ need to up their game, right?
And that's going to be expensive.
That is going to be expensive.
Yeah, I know when you were talking, I think it was when you were talking to Tom in a previous
episode about of SRB, about the drug cartels, like that the FBI really does have to be able
to operate like it's a real intelligence service, not just law enforcement. And this is much
the same. Like court systems need to be, you know, robust against their worst adversary. And,
you know, now that you've connected everything to the world, that range of adversaries is really
quite big. And all that complex document handling, all the identity bits and all of the, like,
these are complicated systems. It's understandable that they end up.
being kind of a mess especially when they've evolved over a long period of time and across a range
of technology solutions and stuff like you can see why it's like this but it's just not you know
it's not good enough and it's going to be expensive to fix yeah and i think tom's take and you know
more on this again tomorrow uh head over to risky.biz to subscribe to the newsletter if you want to read
about this but um you know tom just his early thoughts seem to be well the trump administration seems to be perhaps
over-indexing on offensive disruption for cyber stuff and we want to see offensive action but
how would that work to address this risk right like you're not talking about disrupting a ransomware
crew or something or you know going after an APT crew like this is just different so we're sort of back
to square one in some ways where it's going to come back to do a better job of defense yeah and that's
hard and expensive and you know sys you know i'm thinking like some of the the resources sysa was
bringing to bear on that secure by design initiative for example like that would have paid off over the
long term, you know, some of those have been kind of disrupted by changes in the administration
as well. So, yeah, it's a mess. Yeah. Now back to some bread and butter infosec.
More Citrix net scalar companies, it's turned out like some critical infrastructure operators
got owned with some Citrix bugs. But apparently these bugs were popping up in the wild
as O'Day before they were patch. Now, is this a different set of bugs to the most recent
bugs we talked about? This is Citrix bleed too. So it's the memory leak that we talked about.
out a little while ago.
Apparently it was being hit in the wild something like a month before Citrix disclosed it.
So, you know, it's a little bit unclear exactly what that timeline looks like.
But, yeah, that's critical infrastructure amongst many, one of the many victims of this particular bug,
which, you know, I think we all predicted was going to go big, and here it is.
Yeah, yeah, that's right.
Well, it turns out it actually had gone big before we predicted that it would go big because it was being used as O'Day.
But, yeah, we've linked through to Cybersecurity Dive.
We've got a report on that.
now this connects quite nicely to our theme last week where we spoke about AI on
offense there was the DARPA you know AI Cyber Challenge or whatever that happened in
Vegas last week the results are in there were three teams that did well
trailer bits disclosure they're a minor sponsor of this podcast trailer bits came second
which is pretty cool so congratulations to them a group called team Atlanta came first
and theory claimed the third spot but what was interesting
here is the way the challenge worked is DARPA grabbed a bunch of open source packages,
inserted like 70-something synthetic vulnerabilities into them, and then these AI agents were
supposed to go and try to find them and auto-patch them. And they did pretty well. They found
something like 50 of the bugs. They had patches for like 40-something of them. But what was really
interesting is they found a bunch of actual bugs that were not put there by DARPA as well, right?
So again, this just sort of reinforces the idea that we're pretty early days into vulnerability research using large language models.
And already they're actually proving to be quite useful.
I feel like it's too early to judge what this is going to look like in a couple of years.
But you would have to say this looks pretty promising.
Yeah, I mean, this is, you know, the whole point of this competition was to shake out a bunch of interesting research and approach and so on.
And one of the great things about it is that it also required, like if you entered this competition,
one of the conditions was you had to then release it as open source afterwards so that everyone
else can see what you're doing and we can kind of build as a community on these kinds of sets
of tooling.
And that's really cool.
Like, I mean, regardless of how well it works, like that's a cool approach to this.
But yeah, it did work pretty well.
I had a quick look through a trailer bits had published their particular set of code on GitHub.
So I was having a rummage through it earlier on, just out of curious.
And essentially like that, their system takes code bases, hooks them up into OSS, uses OSS fuzz kind of bindings to build fuzzing harnesses for them,
triage bugs that are found in that process, and then write patches for them and then kind of iterate through that with AI models kind of guiding that process.
And it's just, you know, it's kind of what you imagine that would look like.
But there's so many fiddly bits of doing that and making it work well.
so like really good work from all the teams involved
and yeah it's going to be fascinating
to see what this is like in a few years
yeah well that's right I mean you know
it's one thing to okay can build a you know fuzzling harness
and whatever and automate a bunch of stuff like
what does that look like in a few years
like when this is the babby steps right
when they start to look more comprehensive and whatnot
so that's you know that's really cool
now last week also we mentioned that James Kettle
was due to present some research
presumably on HTTP like desync
stuff and yeah that's what he's done he's wound up spinning up a website i think it's called
ht p1.1 must die uh and you know the research is yeah typically interesting stuff i mean we're
talking about issues sort of inherent to hdtp 1.1 that are going to be very very very difficult
to fix um why don't you walk us through this because you are much better qualified to do that than i
yeah this is absolutely you know classic james kettling um the research really focuses on
a structured approach to thinking about desynchronizing different levels of web server along the path.
So when you have a client talking to a web server, that's kind of one thing.
But when you've got proxies in the way and nearly everything of value on the modern internet
is not a web server in the internet, it's a web server behind a cloud clear behind an acamey
behind some other kind of reverse proxy.
And that proxying layer is quite difficult to do.
And HTTP, the protocol, H.3 won the protocol, has a whole bunch of,
of, you know, kind of historical baggage and complexity that makes proxying it well really quite
difficult. And the goal of his research here was to get to the point where, like, we accept
that there is no way to make this okay, except moving everything to HTTP2, where, you know,
the whole transport mechanism is, you know, redone from scratch in a way that doesn't have
the same sort of inherent confusion and problems. And most of these flaws kind of come down to
you know, that parses at different steps of the way, interpreting things either in a different
way or in different order, and being able to kind of manipulate that in a way that, you know,
is useful to the attacker. And some of the examples that he shows off here are, you know,
being able to steal other people's auth tokens post-orth, you know, from intermediate proxy layers
by confusing it and so on. But really the point he makes is that, you know, there is an infinite
source of these types of bugs because of the confusion and like point fixing any one of them
is not going to help us and that was his goal and of course as as usual he's released a tooling
that he uses and all the methodology and stuff so that everyone else can also go and find this
stuff so yeah pretty cool yeah it was funny too because last week we were talking about how you would
have like an AI agent eventually with burp and whatever and then you had people yelling at you
through the week because apparently burp already does have an AI agent yeah there's a burp plug-in
that implements an MCP endpoint so that you can control it from your...
Which is like, that's one part of that kind of puzzle, but yeah, I mean, it does already
exist, so yeah.
Yeah, yeah, I mean, I think, yeah, it's just worth following up, given, yeah, James Kettle
is behind burp, so, yeah, that's one that's definitely worth reading about post-Vagus.
You would have to agree.
All right, so moving on, and the United States has taken down some ransomware gang,
apparently that took, you know, $370 million in ransoms,
which is quite a lot of, quite a lot of Bitcoin.
John Greig has a write-up over at the record.
Yeah, this is the black suit ransomware crew
that used to be the Royal Ransomware Group,
and they did a bunch of, I think, US cities, they ransomed.
But they got shut down a few weeks ago now,
but it wasn't really, like, no one had been really talking about it.
Like, they had, like, their darkware, you know,
leak sites and stuff had been seized and put banners,
you know, banners been put on them,
but no one was talking about it
and then the Germans
I think maybe last week
the Germans said yes we were involved
but now we've finally seen
US law enforcement actually
Justice Department come out and talk about a little bit
so yeah they shut down some things
and seized some crypto
and all the things that you would expect
from a cyber crime group
being shut down
yeah yeah
they just it is whack them all though at this point right
but you do wonder how bad it would be
without the takedowns
I don't know.
We're back to that same old discussion about, you know,
is there an impact here from these takedowns?
Is it measurable?
You know, it all comes down to hypotheticals, got it, you know.
Let's not get bogged down in that one again, but sheesh.
It is nice to see another takedown.
Let's just say that.
Meanwhile, Doreena Antinuk over at The Record has a report
that a North Korean cyber espionage group called Scarcraft,
apparently is dropping some ransomware recently,
which is unusual for them, apparently.
You know, I've said a few times,
on the show over the last couple of years, if North Korea really embraces doing ransomware for
profit will have all sorts of problems. It hasn't really happened that way, right? Like we see it,
it's more of an occasional thing. They're not doing industrial grade ransomware. I do kind of
wonder why that is. I think possibly it's because they're making so much money out of crypto theft
anyway. They don't need to do that. But, you know, maybe it's because they're worried it's too
disruptive and it will invite other types of responses. I don't know. But either way, we do have an
example here of a North Korean group dropping some ransomware. Yeah, yeah, which I think,
you know, I think you're right that it is kind of interesting that they don't. And, you know,
part of me wonders, like, I wonder if this is more like the Chinese ecosystem where these
groups do have a little bit of free reign about how they make money on the side. And, you know,
especially if some of these groups are not operating, like, directly inside North Korea,
then maybe they're influenced by other Chinese groups, you know, that are making money this way.
or, you know, I think, you know, it could be something as simple as like maybe they didn't meet their targets, you know,
and if you don't meet your targets, you start to get desperate, you know, maybe they didn't steal enough, you know, from cryptocurrency firms,
so you've got to make up a shortfall somehow and maybe that's good enough.
So, yeah, we don't, you know, it's always hard to know what's going on, you know, inside the Hermit Kingdom and exactly why.
Now, it was only a few months ago, Adam, and you would remember this, when you actually educated me on what an SVG image is and why it's basically like,
a bunch of, you know, it's an image with a bunch of JavaScript, basically,
which is, yeah, not, it's no bueno, basically.
And we've got a great example here of people using SVGs, like, somewhat maliciously.
It's a write-up from Dan Gooden over at Ars Technica.
Apparently, there are a bunch of SVG files popping up on adult websites,
which use JavaScript to do, like, a Facebook like, on a target page.
Like, this seems like a pretty victimless crime, if I'm honest.
So obviously someone is like trying to boost Facebook likes on some page or whatever.
So what they do is when a target visits one of these adult websites,
the JavaScript in the SVG, if that user is logged into Facebook,
like does a like on that page for them.
And I just think this is absolutely hilarious.
I mean, who'd have thought that this is what would happen
when you allowed JavaScript to be, you know, contained in image files?
I mean, SVG is just particularly dumb format in that regard
because most people don't expect image files to be, you know,
full featured image documents, you know, in the style of HTML,
which is what an SVG really is.
The kind of saving grace and the reason we don't see this be an absolute disaster
across the entire internet is that when an SVG is parsed in an image context,
so like loaded by an image tag and a webpage, no scripts get run.
So the scripts only get run when the SVGs are parsed in some other context.
So, for example, if you open it in the iframe or you open it using an object embed.
But the one that the porn sites are using is if you download an SVG and then in Windows, you open it, you open it an edge.
And at that point, when it's open as a bare document, so not in a context of a web page, it does execute the scripts.
So that's the trick here is that they are getting people to download.
Because I read the R's thread, because this is Dan Goodman Rights for Us Technic.
read the comments thread and people are like but what do you mean you're not using incognito
mode when you're browsing porn sites like what are you doing why is that a like you're not logged
into your Facebook how does how does this work and that led me to the question of like how does
this work and yeah it's tricking people to download SVG files open them later and then in
windows the default is you get your full featured edge which probably is logged into your
Facebook and at that point then it can go click on and you know add likes to whatever thing it's
trying to like or no I mean that makes sense because
I was wondering like how that would work if people are actually accessing these websites
with all of those cookies set and I thought I don't know man normie's a weird like maybe
that's what they do right um but this makes a lot more sense that yeah they're they're like
oh here's an image library you can download of this model or whatever and then people you
know store it and categorize it and whatever it is that they do and then bang open it up and
what so when you open a SVG image the default like image opener is what like edgium
actually I'm not sure on Windows I guess it probably is yeah I don't know I don't know how
the file associations work on I don't know what the standard file associate I don't open
SVGs on Windows very often so I don't know but yeah if it opens an edge and then at
that point you're going to open in you know the with your standard browser sessions and
then onwards you go to terrible times which yeah it's just a funny it's a funny world
it's a funny old world yeah we keep finding ourselves saying this one recently now we got a bit
of an update from Sonic Wall here.
Remember like last week we spoke about how their advice was like,
enable MFA, it won't probably work,
but enable it anyway and do this and do that.
They didn't really know what was going on.
They were worried about a O'Day in their product.
Turns out wasn't an O'Day.
Yeah, Sonic Wall has instead blamed the customers,
and they have said that after the most recent round
of previous exploits where people were getting their config stolen,
when people upgraded from that,
that they also needed to change their password.
because the passwords have probably been stolen previously
and people who didn't change their passwords
were the ones getting compromised.
And some of the early reports
did seem to say that that wasn't the case
like that those customers had
and I guess perhaps those customers were confused
about exactly what they'd done
or didn't want to admit that they hadn't changed the password.
But this totally fits with the vibe of like
these boxes are getting owned mysteriously
and we can't figure out why.
It's like attackers have the credts.
That totally explains it.
Have the credits, yes.
And the irony is MFA
probably would have helped then.
But yeah, so either way, I guess,
if you're a running Sonic Balls,
you're still having a bad time
regardless of whether or not
there was zero day this week.
Because, hey, there might still be zero day next week.
Who knows?
Yeah, now speaking of bugs,
you know, it's a win-rah bug.
You always got to go with the golden oldies, right?
Like, that's like,
this is like a radio show.
We're playing a golden oldie.
There's two groups out there
apparently exploiting win-Rar bugs
in separate cyber espionage campaigns,
according to this report from Darina
over at the record.
Yeah, this is another Winra bug.
I think this one was like an integer overflow or something.
I think of the exact specifics.
Well, remember it's a pastroversal bug.
I read the details.
I mean, it's usually a path traversal.
It's usually a path traversal.
In this particular case, the funny bit is that it was being exploited in the wild,
and one of the groups exploiting it in the wild was a Russian-backed, you know,
cyber espionage crew.
And the other group exploiting in the wild was someone hacking Russian organizations.
And so, you know, we don't know who got it first or who was doing it.
Apparently there was rumours of a bug like this being for sale on some Russian
underground forums, which either, you know, for or against Russia, it's going to buy from
Russian underground forums so we don't really know.
But yeah, kind of ironic when we've got, you know, write-ups from ESET saying,
hey, Russians have been using this, and then write-ups from some Russian security firm
saying somebody's hacking us with this.
So everybody gets a Winrow bug.
I mean, it's just amazing that Winra persists because Microsoft, like, Windows doesn't have a good archiver.
Like, it's 2025, man.
Like, come on, Microsoft, either buy Winra, please, just buy Winra and make it better,
or just develop a good archiver.
Yeah, the built-in Windows compressed folders functionality was not great.
I don't know if it's gotten any better since, you know, in Windows 11 or anything.
But, yeah, people still run WinRWA, and it's very common, like, you know, everywhere.
But, yeah, especially it does seem like ex-Russian, you know, ex-Soviet states.
Like, they seem to really love Wynra there more than average.
So, yeah, I don't know.
Well, and what's real funny, you talk to the airlock guys about stuff that just sticks out like a sore thumb that just, like, if it's not in an environment and is suddenly introduced into an environment that's like a red flag is Wynra.
Because there's so many crews, like, APT crews who just B.YO WinRah to, like, archive stuff for X, you know, for X-Fill.
So they're like, if you see, like, a blocked execution attempt for WinRah and you have.
seen that before it's like that's like run go to that box and figure out what's going on um
always a lot of fun uh now it turns out someone hit a pretty massive payday for a chrome bug
250 grand which that would be quite nice um tell us about this bug and why it's worth 250k out
yeah so some guy turned up with a you know a chrome sandbox escape bug uh and you know there's a
thread in Google's bug tracker
and the Chromeian bug tracker like where this
bug gets triage and investigated
so the
it's technically quite interesting so
it's like a logic bug right like not so much
like a classic you know
yeah it's not mem corruption no it's a kind of
it's a complicated design and there's lots of moving parts
but yeah it's sort of a what they call it like
a confused deputy I suppose
but anyway so Chrome
is made up of a whole bunch of processes
and this was a an architectural
choice that Google made pretty early in the development life cycle to separate out
using the existing operating system controls, different tabs, different process, different
components to try and limit the blast radius or any particular bug, which is a great idea
and has really stood crime in good stead, so that when you've got different tabs and
different security contexts, they're running in different operating system level processes.
So even if you get code in second one, you don't get much else, there is communication between
these processes to handle things like, I would like to draw some stuff on the screen,
and I'd like to interact with the file system or the network or whatever.
And there is a gatekeeping sandbox process
that's responsible for mediating all this access.
And this was a flaw where you could basically convince
the sandbox component that you too were the sandbox
and that you were authorized and essentially be able to kind of impersonate it.
And that's a pretty cool, like you can leverage this into full sandbox escape,
which is, you know, in the context of Chrome security model, pretty catastrophic.
But it's just a really fiddly, nuanced, interesting bug,
and a great write-up, and the guy showed up with, you know,
prove-a-concept code and stuff.
So exactly what, you know, Google wants when they get this type of bug report.
And, yeah, they decided to show their appreciation in the manner of a quarter of a million dollars.
So, cha-ching.
Yeah, and I think one thing that's interesting is, as you said,
it's exactly the sort of thing that Google wants.
And they cited that as a reason for the payment.
being so high when they when they passed on that payment it says like this is exactly the sort of
stuff that we want to see we want to reward this we want to encourage this so that's nice and a great
payday i'm sure that that research was very happy now james reddick over at the record has
reported that one of the three founders of tornado cash roman storm what a name uh he's like there's
like two of the founders their first names are roman the romans which is kind of interesting um
He has been found guilty on some charges and not guilty on money laundering.
So I think there was like found guilty of operating like a, you know, unlicensed money remittance business or something, but not guilty on the much more serious charges.
So it's just funny that this is still going through the courts.
I think one of the other founders is on the lamb, you know, outside of the US.
And yet one more founder, he was convicted and imprisoned and then is out on awaiting.
appeal or something like that. So this thing is just still dragging on years and years later.
Yeah, because it was, what, 2019, $209.20 cash, right? So it's going back away in the context
of the crypto world. That's practically ancient. But yeah, it's kind of interesting because
money laundering is such an important feature of the, of cryptocurrency for crime. I mean,
there's other things you can do with cryptocurrency, but like being able to use it for crime
and obscure the origin of your funds. I mean, that's, without that.
that, as we have seen, like with the extent to which blockchain tracking services have
made, you know, using stolen cryptocurrency difficult, right? Without that anonymity, it's very
hard to actually spend the, you know, hundreds of millions of billions of dollars that you've stolen.
I mean, we had a, there was an item last week that we wound up dropping from the run sheet
about some massive Bitcoin heist from way back when, which these days, like that amount
of Bitcoin would be worth, you know, like tens of billions of dollars or something.
And the money's still sitting there on the blockchain untouched because you can't move it around.
Yeah, I mean, it must be really weird to have stolen multi-billions of dollars
and not be able to then use it for something
because how do you launder, was it $4 billion or $14 billion?
It's like you stole a whole truck full of gold bars,
you know, you buried him and then someone built an army base on top of them.
Yes, exactly, right, exactly.
Ah, crap.
But no, so we've often talked about like the value of targeting
the bits of this ecosystem that are good for disruption, right?
and money laundering is one that really makes sense to target.
So it's kind of weird to then see a high-profile money laundering service like this
not get, you know, not be prosecuted,
well, not result in a really big prosecution for the thing that you want to punish.
Well, I mean, you know, this piece actually points out too
that there's been a real shift in tone around cryptocurrency regulation
with the new administration in the United States.
I mean, they've even passed like new regulation.
that allow people to invest in crypto for their, like, pensions and whatnot.
You know, so they're really, like, pro-cryptos.
So I can't imagine there's going to be a lot of, you know, momentum.
Yeah.
Yeah.
But, I mean, things can change, right?
Like, let's see what happens in three and a half years from now.
Yeah, well, exactly, right, exactly.
So let's hope things go back to the normality, or at least some degree of sense.
But, yeah, for now, I guess this guy, if he only gets five years in jail,
which I think is the maximum he's facing now, then, you know,
you probably will hear you know other than being shaken down while in jail for all of his crypto
yeah yeah i mean probably it's better than 20 years yes true true uh now we've got this story from
wired and i feel like i need to dim the lights and play some spooky synth music for the intro here
because as you pointed out to me this one is written up somewhat breathlessly i'm going to
give you the headline here and it's such a cracking headline hackers hijacked google's
gemini a ii with a poisoned calendar invite to take over a smart home beaum
Wow, you know, very, very cool.
And yet, you know, I still think this is worth talking about.
I feel like the smart home takeover bit, like that's the stunt hacking part of this story.
But I think the fact that you can do prompt injection via a Google Calendar invite
and get Gemini to start doing stuff it shouldn't be doing, I mean, that ain't, that ain't good.
No, no, and that's the ultimate reason this PC ended up still in the run sheet,
despite the, you know, rather breathy framing of it all.
So this was right above some research that was presented, I think,
a black-cutter DefCon, which was, yeah, looking at ways to do prompt injection,
you know, kind of second-order prompt injection, I suppose,
in the sense that you send someone a meeting invite or something else
that gets put in their calendar.
At later on, at some point, the user interacts with the calendar
via the Gemini AI model and then it reads the calendar input which then contains some kind of
prompt instructions for it and those then are setting up the AI to later take action when
they use it as something else so it's sort of laundering their instructions the malicious
instructions that you're prompt injecting kind of a couple of degrees away from where they
originally came from to try and confuse you know the source of the instructions so that
you know, existing controls that are in place to kind of stop the sort of thing are ineffective.
Yeah, do this when I say X, Y, Z. So it's like the user has initiated the action, right?
Yes, yeah. And then it kind of asks the, so like the example with the smart home thing was like
the prompt told the AI that it's now in charge of some bits of the home. And when I say,
thank you, you should open the windows or whatever. So that later on, when you say thank you to
that instruction has been kind of loaded in and off it goes. And the idea that, you know,
we're going to hook up all sorts of systems to let these models do stuff on our behalf, right? So,
you know, take this calendar infight and stick into my calendar or, you know, whatever other
stuff you might ask it to do. And then we're blurring kind of instructions and data and then
access to the rest of our personal infrastructure, their smart home stuff or whatever other
stuff you've got, you know, kind of hooked up to your Google accounts. It's all a bit blurry
and a bit fuzzy and the kinds of controls that we were put in place around this really don't
feel very reassuring because they're all vibes, right? It's, you know, hey, I, I don't do
something I don't expect, but please do all of these other things that I do expect and I'm going to
trust you to kind of, you know, make that decision yourself based on being, you know, a very smart
spell checker like it's just yeah well as i say you've got to keep thinking of these AI agents is very
eager to please 14 year olds yes that's what they are given root access on your on your device
it'll be fine yeah yeah i'm glad someone's doing this research and i'm glad that people are talking
about it but um yeah i don't know about this future man i don't know i don't know it's going to
be an interesting few years like i definitely think it's going to be an interesting few years
um now john takner who is the secure annex guy i mean i've spoken
about what he's up to these days like he's the guy who looks at malicious chrome extensions
and whatnot ones that get bought and then turned malicious and whatever another thing that he
looks at is like vs code uh extensions now something that's interesting is cursor and windsurf
have actually been booted from the microsoft like vs code extension store because it's not
an official Microsoft product so you can't access the store so bye-bye now this means that people who
want to buy extensions for cursor and windsurf and now having to go to like these other
stores that are full of malicious extensions and this is extremely no bueno and john's done a write-up of
it here he's also he also sent me an email um earlier about this like alerting me to this and um
i think he said that some of these extensions have been linked to to like supply chain attacks
that have resulted in like half a million bucks worth of crypto going which i know is small beer in
the sort of crypto space but still like this is this is not good and you you sort of wonder like
Microsoft come on it seems a little bit petty of Microsoft to kick you know cursor and windsurf out of
these out of these stores yeah I mean I guess the sort of the the back story there is that
visual studio code got open sourced and then of course people use that code base to build other
products but there are restrictions about how what you can call it you know from trademark reasons
or whatever else, like you can't call it Visual Studio Code anymore,
you have to call it something else,
and distancing Microsoft from the downstream forks of it.
And then I guess they decided that, you know,
having other people's products that aren't Microsofty
using their marketplace for extensions, you know,
brought them some liability.
I imagine the lawyers were involved somewhere.
But the outcome, I guess, is that anytime you end up
with these ecosystems of extensions and plugins
and whatever else, forking and going off and doing their own thing somewhere else.
And I think the OpenVsX marketplace or plug-in extension registry that they're using
is actually operated by the Eclipse Foundation as sort of a good for the community.
But operating any type of this kind of thing brings with it all of the problems of managing a store, right?
Yeah, it costs money to keep the bad stuff out.
Yes.
Yeah, because like the review processes are complicated, you know,
and making hard moderation decisions that's complicated.
like staff and like if it's not core business like say it is for the apple app store then you
end up with a total trash fire and that's unfortunately what's going to happen what is happening
and yeah the you know people having their crypto stolen it's kind of what we expect unfortunately
yeah i just love it that john's managed to carve out a niche business you know bootstrapped niche
business that sells intelligence on dodgy extensions whether they be vs code or chrome or whatever
it's just so cool I dig it
and one more thing I just wanted to cover
quickly is push
and full disclosure I'm an advisor to push
one of those guys are sent over today
just a write-up they've done on some like
ADFS fishing with office dot com
anyway it's just a write-up of a fishing campaign
that shows how creative fishing campaigns are these days
and I think it's probably worth a read for people
yeah yeah it's an interesting track where basically you
can send someone a link
that sends you to logon.microsoft.com,
whatever it is,
and then we'll redirect you onwards
to an attacker controlled site.
So the link looks legit.
All the people you've told to look at the link
so only click on it looks believable
because that's how we solve fishing
can now be kind of tricked by this.
And he makes, I think Luke Jennings
the write-up and he makes the point
that like if there was an arb redirect in
office.com
where you can make it go
somewhere, then that would be bad.
And this is basically the same thing where you just register an Azure tendency,
set up ADFS and use that to redirect people.
So it's just another trick for redirecting, but, you know, it's one that's being used
by fishers because, as you say, they are creative and they find all sorts of interesting ways
to do it.
So, yep, yet another one to look at.
I guess another good reason why just telling users to, you know, think before they click
isn't really that helpful when they're faced with tricks like this.
Yeah, I mean, that's why, like, we are literally a push customer because of that.
and we've got some i've also linked through to their write-up which is called
introducing our guide to fishing detection evasion techniques this is a guide written by jaques
um for like security teams so that's actually a pretty pretty useful thing that i've
that i have linked through to in this week's show notes but adam that is actually it for uh
this week's security news big thanks for joining us and uh yeah i'll chat to you again next week
yeah thanks very much pat i will see you then
That was Adam Boyleau there with the check of the week's security news.
It is time for this week's sponsor interview now with Justin Kohler from SpectorOps.
And SpectorOps, of course, makes Bloodhound.
This is both a community open source project and an enterprise software tool.
And what it enables you to do is to enumerate the attack paths that are present in your organizations, right?
Directories are very complicated things.
And Bloodhound helps you to figure out, like,
where the misconfigurations are, where the risks are in your directory, like, permission
structures, right? So it's not like a permissions audit, it's much more graph-based, and it's very
interesting. Now, people have been using this against Windows stuff for a long time, but there's
been a brand new release of Bloodhound, and they've opened up the graph, right? So it's like an
open graph approach now, and Justin's going to explain what that means. Essentially, it means
you can extend your attack graph analysis beyond just Windows directories, and you
into really whatever you want. So this is most relevant to researchers and, you know, pen testers
and whatnot at the moment, but obviously this is going to trickle down into, I'd imagine, pre-canned
analysis and pre-canned approaches for different types of directories and credential stores.
Anyway, complicated stuff to try to introduce, but I'll let Justin Kohler explain what is in the
latest release of Bloodhound. Enjoy. There's a lot in here. We broke it into three kind of components.
One, usability, so like how can I use it easier and faster and better?
Two, how can I expand the use of it?
So expand to new areas of the platforms we cover today.
How can I integrate that data into other tools that I might use on my side?
And then the last one that we're super excited is the announcement of OpenGraph,
which is the ability to model attack paths into brand new platforms.
So beyond the Microsoft ecosystem that Bloodhound's really known for,
we can now model attack paths in one password or snowflake or use.
fill in the blank. Okay, so walk us through how that works, right? Like, walk us through how
that would work in the context of like all of those things, actually. Yeah. So, um, it traditionally
when a bloodhound would ingest data, it was looking for active directory or Azure data. If you
tried to send us something else, we'd just drop it on the floor because we didn't know what to do
with it. That actually was problematic for two different reasons. One, it was horrifically complex
to expand within the platform that we already covered, like to get a new active directory attack
path in or an Azure attack path in. It was a monumental workload for our team to do so.
And then it kind of prevented us from even thinking about expanding. But that was always the
vision. We've talked a long time about how attack paths are not a Microsoft problem. It's a
complexity issue with identities, right? And privileges. Well, and it's inherent to any sort of
directory, right? Like, that's one thing I've learned from knowing you guys is like this is not
this is not something that you can software QA your way out of. No. And we, you know, the active
directory was the easy button because that's how we took over environments for decades.
But we abused attack paths in AWS and in Kubernetes and you name it all the time.
It's the same logic, but we've never had a way to model it.
And so today, now we do.
We've actually, we quietly released this two months ago, but we didn't talk about it because
we have set our research team on it to see what they could build and what lessons we needed
to learn before we launched it to the community.
So that was like, you know, if you throw in custom data, how do we help you delete it faster?
or, you know, like, clear that out without clearing out the rest of your data.
The other surprising thing is people were able to build stuff really fast, like Jared.
No, so Jared's our CTO.
Jared's not the typical bloodhound user by any means, but he went from idea to attack paths in the graph for a one-password instance within about four hours.
It was insane.
Like, we actually, yesterday, we announced this, right?
And by the end of the day, somebody had already posted their proof of concept for an SECM attack path.
That happens to be in Microsoft, but they used OpenGraph and the flexibility to do it ridiculously fast.
So help this make sense to me, right?
Because one password, not a directory.
I mean, it is a store of credentials with various privileges and whatever, but how do you actually pull that into an attack graph and like make that make sense?
Yeah, so it's incumbent on the researcher, so Bloodhound, this initial step into expanding the
on the Microsoft ecosystem is primarily focused on researchers, both internal and external.
So external community contributions can be adjusted.
We have a clear, like a library of extensions for adding to the Bloodhound graph that we host
on our documentation and then point out to GitHub.
So we have examples for SQL, one password, snowflake.
I can't remember what the last one is.
But we have examples of how you would do this.
And we're incorporating any community submissions.
But these are primarily researchers.
So they define the model and of the attack path by saying this edge and like goes this direction between these two nodes.
And they supply all that in the JSON package they post up to the API.
This might sound complex, but it's actually really not.
If you do any of this work or if you're a pen tester, you can start to model these attack paths very, very quickly.
Okay, right.
So what is proving, I mean, this is very, very new, but what's proving popular so far?
You mentioned like one, you know, one password, SECM, but that's still within Microsoft.
Like, why don't you just tell us, what does an SCCM attack path even look like, you know?
Yeah, well, the cooler one, I think, is SQL.
Then use that one.
Let's go with the cool one.
Not cooler.
The one that we internally developed, I can properly talk about that one.
Yeah.
So in SQL, the same directory problems, like who has what role exists within the SQL system itself.
Yeah, because that's not an active directory permission, is it?
No, it's completely housed within SQL, but ask any SQL.
DBA like who has the rights to do what it's a choose your own adventure of how you would try to
answer that and so we just map all the permissions in our attack graph uh model of SQL and just
hosted that to bloodhound and it just took care of it for us and the cool thing is is it hooks into
any of your existing data so if you build a model that relates you know this uh user is the same as
this user in SQL in your attack graph then you can go from a user in active directory or Azure or
whatever else you put in there to SQL and back and forth.
Yeah, so you can, okay, so you can wind up, you know, a concerning attack path there
might be if someone owns this help desk support user, that can result in a compromise
of the production database.
Yeah.
That's not great.
100%.
And I think, like, you know, the cool thing, like two years ago, right, the data dog team came
out with QPound, which was like their implementation of Bloodhound with Kubernetes attack paths.
And they did that, you know, they probably wanted to release their research and I, I'm not sure if I can talk about it, but they tried to do that in Bloodhound.
Let's just say they had to build their own and they probably didn't want to because then they had to worry about the UI.
They had to worry about like data management and stuff when they really just wanted to show the attack path research that they were doing in Kubernetes.
So they had to build this other separate system.
And then try to glue it to yours and whatnot.
And I guess what you're saying is you've made it now easy just to do that.
Exactly. They couldn't glue it to ours. And the problem is, is like, then you have a standalone system that doesn't understand the rest of your identity footprint. So like, you know, again, like you might harden Kubernetes, but talk to any of our infrastructure engineers. They're not necessarily worried about Kubernetes itself or IWS IM rules itself. They're worried about the crossing of those things. So how are they misprovisioning IAM roles with Kubernetes and like those crossing of boundaries? You might not be.
be able to elevate within Kubernetes, but you can hop back and forth and elevate privilege
as you go. Yeah. I mean, I would imagine this substantially broadens your market, right?
I mean, because it has been just a, you know, very much a Windows focused product for a very long
time. Yeah. And I'm guessing that there is a section of the market out there where they feel like
they've got their Windows stuff under control. And there are some organizations where that is the case.
I mean, I would imagine most organizations that have been around for a while are going to benefit from running bloodhound in their environment.
But there's going to be some where they're like, this isn't really something we're worried about.
But this kind of opens it up to everybody, right?
Because even if your Windows directory is an all singing, all dancing, beautiful, you know, choreographed thing.
All five of you?
Yeah, right.
Because any more than five people and it's not.
But even if it were, like, you know, yeah, you start pulling in other systems.
Oh, yeah.
That hold any sort of privilege information.
That's, it's just never going to be.
It's never going to be good.
And I do want to say, like, that's really where we're going next year.
You're going to see the things that Bloodhound covers natively expand greatly next year because
of this capability we just unlocked.
Again, this is primarily got researcher focused, so pen testers and researchers.
And obviously, if you have tinkers within your enterprise that, you know, want to
bolt on things to Bloodhound, we have a few of those as customers. You can do that right now.
But having it like a formal portion of the app where we have like supported data collection
methods where you can just pull it in like kind of at a click of a button, that's like we're
probably talking early next year. But yeah, like what if you're an organization that uses
Octa, either in combination with Active Directory or not? You know, now we can start to model that
for you. Yeah, right. So if you're yeah, so even if you're multi-SSO2, which a lot of organizations
are. Oh, yeah. 100%. Yeah. So I mean, I imagine you see some really wild stuff like when it
comes to mergers and acquisitions and people like merging different directories and stuff. That's where
you get all sorts of like horror horrible things coming out. Yeah. Like if you thought like trying to
merge active directory domains together was hard, what happens when someone doesn't even use active
directory and you have to make those two companies work together and then like all the weird
glue that you do to temporarily, temporarily in quotes enable the business, but it never goes away. And
like now you just have all these like privilege links and attack paths that you've created. Yeah,
we've seen it all. So, so at this point, this is like in Bloodhouse, Bloodhound community and
enterprise, but it's more of a research, research thing. And then later on, it's more pointy,
clicky part of the enterprise solution. Absolutely. Yeah. So right now, this is just going to be
like a, in like a pipeline of just research that we will come in organically. And we are using
internally to expand our coverage beyond and within Microsoft platforms.
But people are really going to feel this starting next year.
But like for the pen testing community, I mean, this is a huge leap forward in their capability.
Again, like we said yesterday, we just saw a person from the community did it in hours.
Yeah, it's funny because as soon as I finish doing this interview, I'm sending it to Adam
because he's going to really enjoy hearing about it because he was a big advocate for Bloodhound
back when he was a pen tester.
So look, this is a major release.
This open graph stuff, yeah, sounds very interesting.
What else have you released?
Because as I say, it's a major, major, major release.
Yeah, absolutely.
And all of this works together.
So, like, for example, in usability, sometimes, like we represent things in graphs, right?
Like path from A to B.
But sometimes a graph isn't really useful.
So I'll give you an example.
If you're looking for, let's say something super simple.
Users who haven't reset their password or identities that haven't reset their
credentials in the last five years. That's pretty bad. But you don't want to graph. You don't want
like nodes on a canvas to click through. You want a table and drop that to CSV. So we added that.
That's been something that people have wanted for a long time. We added new integrations for
ServiceNow and Duo. So if you want to take that information and put it into the systems that you
use to remediate attack paths at scale, you can. We also cover Azure privilege identity management
or PEM roles. So a lot of people use this. You should if you're an Azure EntryD customer. You
don't permanently assign access, you add people as eligible to give themselves that role when they
need to. Hopefully, you're using that in combination with conditional access and MFA. But what we found
is you don't. And so we cover all the different levels of maturity there so we can identify
if somebody's not going through conditional access or does not have MFA enabled. Yeah, wow. This is,
you know, this is much bigger. Yeah, there's a lot. I mean, honestly, I'm, I'm,
I'm scratching the surface here.
And alongside this, too, there's all these cool things we can do on the technical side.
But a lot of people have asked us, like, how do you operationalize this program?
Because, again, like attack paths are kind of this new thing in security or identity attack paths.
Like, everybody's used to, like, patching systems and hosts, right?
But what do we do for an identity that's misconfigured?
We have a huge state of attack path management report that we released alongside this and a maturity model.
So teams can, like, understand how we've either helped or seen other companies adapt this internally
and the different levels of maturity that you can do and, like, intertwine that data with other teams.
Excellent.
Well, I will be sure to drop a link to that report into this week's show notes.
You know, this is a cool release.
This is a very cool release.
Congratulations, Justin Kohler.
Thank you very much for joining us to walk us through it, and I wish you the best with it.
Thank you.
That was Justin Kohler there from SpectorOps.
Big thanks to him for that,
and big thanks to SpectorOps for being a risky business sponsor.
And that is it for this week's show.
I do hope you enjoyed it.
I'll be back next week with more security news and analysis.
But until then, I've been Patrick Gray.
Thanks for listening.