Risky Business #777 -- It's SonicWall's turn
Episode Date: January 29, 2025

Coming to you from the same room in Risky Business headquarters, Patrick Gray and Adam Boileau discuss the week's cybersecurity news. They talk through:

SonicWall firewalls hand out remote code exec like candy
Mastercard make a slapstick-grade mistake with their DNS
The data breach at PowerSchool and other niche SaaS providers
Academic research proposes taking down Europe's power grid
Apple CPUs get a new speculative execution side channel
And much, much more.

This week's episode is sponsored by Push Security, who make an identity security product that runs inside browsers. Luke Jennings joins to discuss some of the pitfalls of federated authentication, like attackers using unexpected identity providers to log in to your apps.

This episode is also available on YouTube.

Show notes:
SonicWall warns hackers targeting critical vulnerability in SMA 1000 series appliances | Cybersecurity Dive
MasterCard DNS Error Went Unnoticed for Years – Krebs on Security
Data breach hitting PowerSchool looks very, very bad - Ars Technica
OpenAI rival DeepSeek limits registration after 'large-scale malicious attacks' | The Record from Recorded Future News
Hackers imitate Kremlin-linked group to target Russian entities | The Record from Recorded Future News
UK to examine undersea cable vulnerability as Russian spy ship spotted in British waters | The Record from Recorded Future News
Questions grow over whether Baltic Sea cable damage was sabotage or accidental | The Record from Recorded Future News
Researchers say new attack could take down the European power grid - Ars Technica
At least $69 million stolen from crypto platform Phemex in suspected cyberattack | The Record from Recorded Future News
BreachForums admin to be resentenced after appeals court slams supervised release | The Record from Recorded Future News
Apple chips can be hacked to leak secrets from Gmail, iCloud, and more - Ars Technica
Apple fixes zero-day flaw affecting all devices | TechCrunch
I'm Lovin' It: Exploiting McDonald's APIs to hijack deliveries and order food for a penny
Government websites vanish under Trump, from the Constitution to DEI
Trail of Bits: Director, Technical Marketing
Push Security: Security Researcher (remote in the USA)
A new class of phishing: Verification phishing and cross-IdP impersonation
Transcript
Hi everyone and welcome to another episode of Risky Business. My name is Patrick Gray
and for those of you joining us on YouTube, as you can see, sitting next to me is Mr.
Adam Boileau who is visiting the beautiful Northern Rivers from New Zealand to record
this week's show. We're doing a bit of work, hanging out, you know, having some good times
and yeah, we've got a great show for you as always. This week's show is brought to you by Push Security. They make a really interesting
sort of browser plugin based identity security product. And they found some wacky stuff in their
customer environments. So we're going to be talking to Mr. Luke Jennings from Push a little bit later
about that. And what we're really talking about is cross-IdP impersonation, where someone takes their corporate email account, registers, say, a Google account with that email address, and because these things aren't domain validated these days, they can start logging into SaaS apps as if they're that corporate user. And, you know, this is something they've seen people doing non-maliciously in the wild, pretty freaky deaky. We're going to talk about that later. But Adam,
let's get into the news. Although, first up, let's get into some corrections. Last week,
I said that the TikTok app was still available in the Apple and Android store. It turned out
that was wrong. So I think it was Akamai and Oracle were still keeping TikTok ticking over. But I didn't actually mean for that pun to land.
Yeah, they were keeping it ticking over, but they were yanked from the store.
And I think I got a bit confused because I was talking to someone in the US.
And at that stage, very early on, the apps hadn't disappeared just yet.
So that's something I wanted to correct.
Also got some really interesting feedback from a listener who works in industrial
control system security in the United States, who pointed out that if things were to escalate
in the cyber domain between the United States and China, which is something we spoke about last week,
that could get pretty messy pretty quick. The general gist of what this listener said to me is
we're not ready for that. So, yeah, just a couple of interesting follow-ups from last week. But let's start off, you know, another week, another disaster with an edge device, this time at SonicWall. There is a, what is it, a CVSS 9.8, which is, you know, usually you just ask a box to give you a shell when it's that serious and it does.
Yeah, so a CVSS 9.8 under active exploitation, hitting a whole bunch of different versions of SonicWall devices. And to me this one smells like a Chinese APT crew going and building a, you know, network of ORBs, right? Like, that's just what this one feels like.
Yeah, I mean, because why wouldn't you? If you've got code exec in a whole bunch of edge devices,
what more could you possibly want?
I think this was a deserialization flaw in the management interface.
So like the port 8443, I think it is in SonicWall's case.
So yeah, if you have that on the internet,
this is straight up unauthed code execution,
which is not really what you want in a security appliance.
And, you know, we have seen SonicWall bugs abused
by ransomware crews in the past, but, you know,
everybody's going to be all in all of the SonicWalls
doing, you know, whatever they fancy,
be it orb, be it ransomware.
Now, over a brandy the other night, we actually talked about this one because, sadly, we still talk about this stuff just when we're hanging out. But, you know, you gave me some interesting history on, you know, deserialization attacks, where really this sort of thing shouldn't work these days.
Like, to the point where... I mean, we don't precisely know the mechanics, like there's no PoC for this, right? I haven't seen one, and I don't know what tech stack the SonicWall web admin interface is, because, like, Java and .NET are the common ones for deserialization, but PHP, Python all have similar things. We don't know what. I haven't looked at the garbage of a SonicWall in a while.
But the point is like even like contemporary
deserialization attacks, they don't allow you
just to fire off something that gets you InstaShell, right?
So there's something funny going on here,
probably like, you know, feels like old Java maybe, something like that.
Yeah, whatever it is, it's likely to be old rubbish
because the kind of the gadgets that you use to trigger deserialization
and to make it do, you know, attacker-controlled things,
those are, you know, usually in common libraries,
in the standard library or in common libraries
and they get kind of patched out relatively
quick once they get used. Now,
SonicWall's interfaces could well be very, very
old and they may be vulnerable to ancient
old stuff, but yeah, deserialization
can be fiddly, but once
you've found a gadget that works for the particular
thing you're hitting, then it's CVSS
9.8, good times. Yeah,
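For context, a minimal sketch of the bug class being discussed. We don't know SonicWall's actual tech stack, so this uses Python's pickle as a stand-in for the "deserialize attacker-supplied bytes" pattern and the gadget idea Adam describes; everything in it is hypothetical.

```python
# Minimal sketch of the bug class being discussed (hypothetical, not SonicWall's
# actual stack): a service that deserializes attacker-supplied bytes with
# pickle, and a "gadget" object whose __reduce__ runs a command on load.
import os
import pickle


class Gadget:
    """Attacker-controlled object: pickle calls os.system on unpickling."""
    def __reduce__(self):
        return (os.system, ("id",))  # arbitrary command execution


def vulnerable_endpoint(raw_body: bytes):
    # The unsafe pattern: trusting serialized input from the network.
    return pickle.loads(raw_body)


if __name__ == "__main__":
    payload = pickle.dumps(Gadget())   # what an attacker would send
    vulnerable_endpoint(payload)       # runs "id" on the server
```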
I guess my point was more that, like, you shouldn't have a CVSS 9.8 deserialization bug in 2025, you know?
So, yeah, the people who maintain the tech stacks that are targeted with deserialization attacks like that, they have made it harder. It shouldn't be this easy, I guess, is my point.
Now we're going to talk about a Krebs piece here, which is, you know, I mean,
is this huge news? I don't know. But it's comedy. It's definitely suitable as comedy. So MasterCard,
when they, you know, delegated their domain name servers in their domain registration or whatever,
there's like five name servers there. And they're Akamai customers, so you've got names like a1-29.akam.net, you know, so on and so forth. And one of them is supposed to be a22-65.akam.net, but they typoed it as a22-65.akam.ne, so one of their five name servers pointed to an unregistered domain in Niger. Which, you know, and the funny thing is,
like if you typo like a DNS record
with something like this,
you're going to notice
because people are going to be like,
you know, there's typos,
they're going to break stuff.
This doesn't because, you know,
DNS is pretty resilient.
So someone found this,
registered the domain
and holy moly,
did they see a whole bunch of traffic, basically.
$300 to hijack MasterCard's DNS,
which is, that's some value for money right there.
So good job to whoever did that.
They got shafted on the bug bounty here too.
I mean, they didn't even submit it through BugCrowd,
but because they were registered with BugCrowd,
BugCrowd started sending them emails
saying that they
should take it down to be professional or whatever and they weren't following good disclosure
practices like this is kind of the ugly side of corp bug bounties where it's about kind of hushing
things up right like that's just um yeah that's just uh that that's no bueno but also mastercard
said that this wasn't a risk to their systems which i don't know what they're smoking, to be honest.
Because, I mean, you know,
why don't you walk us through some of the things you can do
once you're able to, you know, answer these sort of queries?
Because it's a lot.
Yeah, I mean, everything is glued out of DNS these days,
and in particular TLS certificates is the main one.
But getting into a position where you can intercept email,
you know, forward responses to DNS requests,
and because they've got a bunch of name servers,
you're kind of into slightly probabilistic attacks
where some percentage of name server requests
might end up going through the path that you control,
but for something like MasterCard,
that's going to be a lot,
and you've got some good options.
And things like certificate transparency help us to detect when, you know, things like misconfigured DNS are used to then carry out certificate registration and so on. But you've just got a whole bunch of great options. And, like, if they're Windows on the inside, you've got options for attacking internal domain infrastructure using internet, you know, domain names. So there's just a whole heap of stuff. And I think it's, yeah,
they're kind of playing down the potential here
because it suits them.
I mean, you could even theoretically do like a Web Proxy Auto-Discovery thing, like wpad.mastercard.com. And, you know, Robert's your mother's brother.
Yes, you've certainly got great options.
And in the past, in my pentesting life,
we did register domain names that were glued to people's Windows infrastructure and then led onwards to, in some cases, victory and in some cases accidental denial of service, which is not great. So bad times either way.
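If you want to check your own zones for the MasterCard-style mistake discussed above, a minimal sketch is below: pull the delegated name servers for a domain and confirm each one actually resolves. It assumes the dnspython library; example.com is a placeholder.

```python
# Minimal sketch: check that a domain's delegated name servers actually
# resolve, i.e. the kind of lame-delegation typo described above.
# Requires dnspython (pip install dnspython); "example.com" is a placeholder.
import dns.resolver


def check_delegation(domain: str) -> None:
    for ns in dns.resolver.resolve(domain, "NS"):
        ns_name = str(ns.target).rstrip(".")
        try:
            addrs = [a.to_text() for a in dns.resolver.resolve(ns_name, "A")]
            print(f"{ns_name}: OK -> {addrs}")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.LifetimeTimeout):
            # A typo'd or unregistered name server shows up here -- exactly
            # the sort of thing someone else could register out from under you.
            print(f"{ns_name}: DOES NOT RESOLVE, check for typos")


check_delegation("example.com")
```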
I remember once having a coffee, this is quite a long time ago, with a mate of ours who, his first week on a new gig, got sent into a bank, and he just did a port scan of that floor,
but he took out some D-Link that they were using.
All of a sudden, everyone's running around with their hair on fire.
It wasn't a good first day.
Yeah, we've all been there and done that.
I know I have.
But it was a fairly gentle scan.
And what were they doing using that?
But it's still your problem if you're the one who takes it out. Oh yeah, oh boy, oh boy, is it ever. Now look, we've got a bit of an issue to talk about here, which is the massive consolidation of types of data with specialist cloud providers. And the reason we're talking about it this week is because there's a company in the United States called PowerSchool, which offers school management software for schools.
And as a result of that, a lot of schools use it.
And it looks like they've been popped.
And this is just turning into an absolutely gargantuan data breach.
And I suspect it's one that we're going to hear more of.
I mean, there's, what, 16,000 K-12 schools worldwide. Unsure if it's used here.
But this got me thinking about a recent conversation
I had with a friend whose partner is a therapist.
And they use, like, a, you know, clinical management software where now the therapists leave their laptop open
and the sessions are transcribed by AI.
This is obviously very useful, right? So my friend asked me, he said, when my partner started talking to me about this, I just started feeling
really uneasy and I wanted to get your feeling on it as to whether or not I was being reasonable
or whether or not I'm just being paranoid and silly. And I'm like, I thought about it and I'm
like, look, the AI part of this isn't the problem.
The problem is a centralized repository of patient notes,
where, as he told me, you know,
sometimes people are talking about things like,
you know, childhood sex abuse and things like these.
And the idea that that's all being,
all going onto some disk somewhere.
And then, so I actually looked at the provider.
I won't name them, but I went and looked at their website, and their security statements are things like "we use military-grade encryption", and just the sort of stuff that's like red flag, red flag, red flag. So then when I saw this PowerSchool thing, I just thought, this is going to be a problem, right? Where you've got all of these, you know, industries, disciplines, whatever, you've got all these specialist cloud services, and they're just such attractive targets.
I just think we're going to see more and more.
Yeah, I think you're absolutely right
because, you know, these cloud services
usually are disrupting some incumbent
or some like on-premise thing, right?
And so being nimble and fast and cheap
and attractive, easy to use,
those are the priorities for them,
not robust security
or robust encryption or military-grade encryption.
Perhaps they've got time for that.
This reminded me that we were talking the other night
about Vastaamo, the Finnish psychotherapy chain, where they had a bootstrapped, start-up, minimum-viable-product management system for therapy clinics in Finland, and they ended up getting all of their patient notes stolen, because they put it in a MySQL database with no creds on the internet.
Yeah, but now where the AI becomes relevant
is it's just going to exponentially increase
how comprehensive patient notes are.
There's going to be transcripts and everything.
And because it's an integrated system
where you're managing your patient records,
billing, all of that,
there's no real way that you can use pseudonyms for your patients or anything like
that. And you just sort of get the impression like, you know, I don't think they've thought
of that. No, no, I doubt because everyone's so busy innovating and not really thinking about
the long-term risks of holding data full stop, let alone really sensitive data. And something like school systems where your user constituency is sensitive, I guess,
or is vulnerable.
And there's something like 60 million people in this data set.
Their first, middle, last name, date of birth, gender,
health card numbers, grade level and school information,
all sorts of stuff.
Medical information like allergies, conditions.
Yeah, and disciplinary notes, you know,
like the headmaster gives you a telling off for doing something bad
and that's going to be in there and then data leaked
and next minute, you know, some insurance company will be denying you coverage
because, you know, in the sixth grade you were a bad kid.
Yeah.
Ain't it great?
Ain't it great?
So, look, I just think, you know, whether you're a dentist or whether you're a psychologist, like, everybody's using specialist stuff. We're actually developing some tools at the moment, well, talking about developing some tools, that are going to be actually quite useful for newsrooms, and then, hey, that's maybe something that we'll license to other newsrooms to use as, like, an information management tool. And look, in that case we're dealing with public information.
So it's not really that sensitive if something happens to it.
Ironically enough, less likely to happen to something built by you
because you're a security person.
But I guess the point is, you know, everything's specialized these days.
Like we're moving away from spreadsheets running the world.
I think we've still got 20 years of that.
But eventually, everything's going to be specialised
and you just, you know...
I mean, it used to be MS Access rules the world,
so I'm kind of glad that that era is starting to wane.
I'm sure that some of these psych practices
who keep their own notes and use their own tech
are more vulnerable than these cloud providers,
but the thing is, one of them gets done,
OK, it impacts a small number of people.
One of these big places
where all of this data pulls together,
it's no bueno.
Okay, so let's talk about DeepSeek.
We're going to preface this by saying
we're not AI experts.
We are two dudes on a podcast
who are saying we're not AI experts,
which is incredibly rare.
But it looks like, you know,
a Chinese group has released an open source model called DeepSeek that has AI people all freaking out
because they're claiming that it costs them
very little to develop
and it's very computationally efficient and whatever.
And this has led to, you know,
people bailing out of NVIDIA
and like its share price collapsing 17% and stuff.
I have no idea if this is justified or not.
What is interesting though,
is that they had to restrict signups to people in China now
because they say that they've been experiencing
all sorts of like DOS and API abuse.
And I can't say I'm surprised by that
because if you're the big shiny new thing,
you're going to get attacked.
Yeah.
And I mean, even just the sheer amount of people signing up probably looks like it's a denial of service, and the amount of use and all those sorts of things. You know, once a large enough user base is playing with your things, you start discovering all sorts of weird edge cases. But yeah, they took over ChatGPT as the most downloaded, you know.
Yeah, that's not a good metric. Everybody's already downloaded ChatGPT.
You know what I mean?
So like there's the long tail of people who haven't downloaded it yet
and they get over to, I don't know.
There's so much hype around this stuff and it's, you know,
and the idea that, you know,
I think NVIDIA lost something like $600 billion of value.
And it's like, I've got no idea if that's justified or not.
I don't really understand how this is a breakthrough or, you know, whether or not it's a big deal,
but people are treating it as a big deal
and I guess that's all you can do.
Okay, so now we're going to talk about this story
from Daryna Antoniuk over at The Record.
And apparently this is a story that kind of makes the allegation
that there's a crew out there false flagging
and pretending to be Gamaredon,
and they've been given the name GamaCopy.
So the idea is like they're using enough TTPs from Gamaredon
that people are saying, oh, they're false flagging.
We had a disagreement about this
because you think this is pretty thin,
but I don't know.
I feel like it's specific enough
that I can understand why at first pass,
like if you're a threat intel person,
you're going to cluster this together.
So I can understand why
they would have clustered it together,
but then had to disambiguate later.
So it's stuff like using self-opening 7-zip files.
And I think there's similarity in the payloads.
And there's just a few little things here
where you're like, hmm.
Yeah, there was some obfuscation techniques
that were quite similar.
One of the other data points
was that they were using UltraVNC
as a kind of like thing that they would drop.
You know, 7-zip and UltraVNC
are not exactly unique things.
But, you know, the victimology, I guess, was also interesting
because Gamaredon is generally attributed to the Russian FSB,
and in this case, this group was going after Russian organisations,
so Russian targets with Russian lures and that kind of thing.
And to me, it felt less false flag and more like,
why not just use Russian TTPs for the lulz?
Yeah, why not do what they're doing that works?
Yeah, like, A, it's easy.
B, you're not burning any specialist knowledge.
And C, it's just kind of like,
especially if this is a Ukrainian group doing it,
thumbing your nose at the Russian
by using exactly their TTPs right back at them.
Like, that just feels like trolling more than it does,
you know, false flag.
Yeah.
What's funny is there's threat intel people listening to this
and they're either nodding along, you know,
or they're raging.
And we don't know because like AI,
we are not threat intel specialists at all.
So we are happy if you are happy and we are sad if you are not.
I think that's about all we can say.
Now let's jump into the issue of all of these cable cuts that have been happening everywhere. So Alexander Martin, again at The Record, has written up this story where there's reports that intelligence officials are starting to say, look, all of these internet cable cuts in, like, the Baltic and whatever, they're probably accidents. But other people don't know. For me, the discussion around this has really reminded me of, like, the Havana syndrome stuff, where people thought, you know, maybe their CIA folks were being irradiated by Russians using microwaves or whatever, and there's people who don't believe in it, people who do. I think this is a similar sort of thing, where we don't really know if there is a concerted campaign by, you know, Russia working in concert with China to cut undersea cables. But nonetheless, and this is why we haven't talked about it up until now, because there's just not enough information, nonetheless I think it's time people, certain classes of companies, need to start thinking about what their contingencies are when they start losing major cables. Now, the people who operate these cables, obviously they will have thought of this already, but I'm thinking maybe some people downstream might actually have to think about, like, well, what's the impact of a series of cable cuts to this provider or that provider, and what do we do? And, you know, it's probably something to work into your DR scenarios, sadly.
Yeah, and it's quite difficult to plan around, because I mean you kind of need some deep understanding of what your upstream providers have in terms of their international transit, subsea cables, how they work. And I mean some cable operators will sell you geographically redundant services, so often they'll have, you know, cables that are sort of figure-eight loops or that kind of thing, where they'll sell you capacity on both sides of the loop.
But that doesn't help you when there's a Russian ship dragging an anchor for 100 kilometres.
Yeah, cutting both of them.
Cutting both of them.
So it is very hard.
And also it's quite expensive provisioning diverse access, especially subsea.
It's bad enough across town.
So it's hard.
And some of the reporting we've seen, you know, that does say, hey, maybe this is accidental, you know, some of it sounds kind of compelling. But then, you know, the idea that you could drag your anchor for a hundred kilometres and not notice, like, I am no mariner...
But we're, like, like AI, like threat intel, we are not maritime experts.
We just read the computer shipping news, not the actual shipping news.
Yeah, yeah.
So look, and another thing, there's a great story here,
again from Alexander Martin, where he's, you know,
there was this really funny disclosure from the British Defence Secretary, John Healey,
who said that he'd authorised British submarines to surface near
suspect Russian vessels just to let them know they were being watched. And I just got an image of
that in my head of like, hey boys, what you doing? So I think, does that work? I don't know.
But it's funny. It is funny, yes. And it must be, you know, that would be a fun day at the office,
you know, having to poke the head of your submarine out the door.
To shout at some Russians.
To shout at some Russians, yeah.
Better than slinking around all day.
I mean, you do wonder, though,
like if things were really to kick off in some major conflict,
like this is a point of vulnerability.
It is.
Especially for countries like ours, like Australia and New Zealand,
where we are islands.
Yes.
You know, and if someone were to cut our cables, that would be extremely not good.
Yeah.
And I mean, how many cable fixing ships are there?
Right.
That's pretty specialized equipment.
It takes ages to get it into position.
And it's not like you can task your entire Navy to patrol your cables.
No, exactly.
Right.
Yeah.
These are difficult problems.
And, you know, if and when it kicks off,
boy oh boy, we're going to be in some trouble.
Yeah.
Now we've got a really interesting story here from Dan Goodin,
which has the FUD headline but the nuanced reporting,
which is an odd combo.
But it's a story about some people who've been reverse engineering
some of the protocols that are used in power delivery,
particularly in Central Europe, right?
So it turns out like a lot of the grid
is just controlled via this unencrypted,
unauthed wireless protocol.
And they've managed to actually reverse engineer it and get it working, like being able to deliver payloads with a Flipper Zero, right?
So that's not good, you know?
So that's really not good.
And, you know, the idea is the story sort of argues that if they could do enough simultaneous messing around with this protocol, they might be able to trip some sort of big event like the withdrawal of supply or the addition of supply.
But either way, something that would make the grid unhappy enough to just sort of disappear for a while.
Dan's reporting here, though, does speak to a few experts where they're saying
it's not likely. Oddly enough, though, I really didn't find that all that reassuring, right? Which is what I liked about this piece, is that you're like, okay, probably not the end of the world, but they didn't really convince me it's not a risk worth worrying about. Was that your vibe here?
Yes, absolutely, right. I've done plenty of work in environments
where hacker mindset is relatively new
and all of the old greybeards that are involved in those systems
are pretty dubious about some of the claims that hackers come up with.
And often we are getting it wrong because we're new to the area,
we don't have the engineering expertise.
But this is one of these
proof-of-concept or GTFO kind of situations,
which is a little difficult when you're talking
about the entire European grid,
but the work that they have done is pretty interesting.
It's pretty, like, the researchers
have done pretty comprehensive,
you know, not just, like, naive extrapolation.
So in some cases, like, they made a flipper zero
transmit some of these ripple control signals,
which are relatively low frequency,
and they can do that by kind of abusing
the RFID mechanisms in the Flipper.
So it's very, very short range.
And then they've done the, okay,
how do we build this at continent scale?
Like how long an antenna do we need?
Can we string it up from a balloon
and make it half a kilometre tall
and then transmit it?
So they have talked to a bunch
of people and done some of that work to think
about scaling it up. So
it's more compelling than some of the FUD
headlines we've seen, but I think Dan did a
pretty good job of at least getting
perspectives from everybody involved,
even if we didn't really come to a
comprehensive conclusion about
whether or not you can just turn off the entire European grid.
Yes, which wouldn't be great.
It just wouldn't be.
Not ideal, no.
Now, in some terribly surprising news, Adam, we never see this happen.
Jonathan Greig has reported over at The Record that 69 million bucks has been stolen from a crypto platform called Phemex.
I mean, it's just amazing, man.
It's been like, what, two or three years
of just one of these, two of these, three of these every week.
Yes, and big money amounts to them.
And sometimes we'll get things that are in the news,
the risky bulletin news list,
and it's like $10, $15 million.
I'm like, meh, let's cut it.
And then Catalin says, look, if you had $15 million, you'd be pretty happy about it.
So, you know, sometimes we do put them in.
But yeah, $69 million, nice, is a fair amount.
Yeah, but last year there was $308 million stolen from DMM Bitcoin
and $235 million stolen from WazirX.
Yes.
WazirX.
WazirX.
And this one did look like North Korean.
Some people are saying the way the currency moved after it got nicked
looked like normal North Korean-ness.
And they are kind of the world experts at stealing bulk cryptocurrency.
The question is, are they going to convert it to Trump or Melania?
Well, thanks to the blockchain, I guess we can find out
which the North Koreans prefer to hodl in.
Now, a while ago, we spoke about the BreachForums admin, Pompompurin, Conor Fitzpatrick,
how he'd been sentenced to like, he got a slap on the wrist
and we were just like, wow, that's really weird.
17 days time served.
Yeah, and it's funny because, I mean, he got, like, you know, 20 years of supervised release or whatever. But, you know, it was interesting, because we're normally sitting here talking about how the U.S. justice system has gone too far, but in this case we're like, wait, what? It looks like that sentence is being appealed by DOJ, and he's probably going to get resentenced.
Yeah, they've handed it down to a lower court to go back and do the sentencing again, because it did seem, you know, the judges decided that it was unfair and that he should have gotten more.
But yeah, you definitely got some feeling of frustration
from the prosecution about what he got.
Yeah.
You know, given some of the stuff he was into.
And, you know, Breach Forums was huge.
It's kind of rare to see this, though.
You know what I mean?
Like, it is rare to see judges accept that their colleagues have erred, yes, right, and go, okay, we're going to resentence. Like, it's kind of a big deal. So, yeah, I think he's not going to have a great time.
All right, so let's talk about some academic research now into Apple chips, right? And, you know, Apple's ability to just come out of nowhere and in a few years just switch from Intel to all of the M-series chips is just incredible. But, you know, there's always going to be side channels in these things, and that's what this research looks at. You know, this stuff is all Greek to me pretty much, but what's the go here?
Yeah, so this is a group of researchers, I think mostly Georgia Tech, that have done a bunch of prior work in, you know, Meltdown and Spectre and so on, so they kind of understand side channel attacks. And they've come up with two sets of
side channels against Apple M-series CPUs and A-series CPUs, that allow them to have side channels in the way that instructions are predicted and, in some cases, data loading. So, like, Apple's chips will make up data speculatively whilst the memory system is loading it, and then operate on that speculatively made-up data, which in itself is just wild.
I guess that's why these things are so fast.
But from a practical point of view,
what they demonstrated was doing this in web browsers,
Safari and Chrome,
and in the case of the Chrome and Safari attack,
they are able to leak memory from other browser tabs.
In the Chrome case,
there's a feature called site isolation
where unrelated sites
don't get put into the same address
space so there's a separate process
between your Gmail
and your banking for example
but in some cases
it will share address space if they are sub
domains of each other so like something.google.com
might end up in the same address space
as calendar.google.com
or whatever.
Anyway, so they've demonstrated some data leaking between,
you know, across that boundary.
And honestly, that's pretty cool research using WebAssembly
and some of the, you know, kind of tricks to make the browser
do what it needs to do to trigger these attacks.
So it's amazing academic work.
But on the other hand, I don't know that I'm super worried about it.
No.
But like with all of this stuff and more and more of these sort of fiddly attacks,
you know, it's been 11 and a half years since Edward Snowden walked out of NSA
with all of this good stuff, right?
And I wonder if we'll see another Snowden one day.
And then we're going to find out, oh, my God, people have been using this, you know what I mean?
Maybe, maybe. You just never know
but yeah, you read
this and you think, well, if you're just
trying to get on target, like this ain't how you do it
It seems pretty unlikely, and I would be really interested to hear from, you know, pen testers who've actually used speculative execution bugs, other than like local privescs maybe, like that's
one case where we have seen them being used.
But like in terms of practical things you can do in the wild,
like it's, you know, it's not the bug you're going to be reaching for.
In this case, it is.
Did anyone wind up using Rowhammer, for example, right?
I mean.
Because I know there were viable exploits out there,
but you'd never hear of them being used in the wild.
Yeah.
And I wonder if that's a next Snowden thing where we find out.
Or if it's just that it's so easy to detect
by EDR or whatever that it's just
no one bothered. Or you've got other options.
I don't recall
having used, in my pen testing
career, having used any...
Maybe Rowhammer we did.
What was the... Heartbleed was one of
the ones that leaked memory and
it wasn't really a side channel.
That was a like.
Yeah, it was a memory disclosure.
Memory disclosure through like reuse of a buffer or something like that.
So yeah, I don't know of anyone using them practically,
but maybe that's just my bias.
Maybe I like, you know, trad Unix shells
and, you know, not super obscure.
He's a meat and potatoes hacker.
Yes, exactly, yes.
All right.
And yeah, look, staying with Apple,
and there's a bug in their core media stuff,
which is used by all of their operating systems and whatever.
It's being talked about in this story.
This is a TechCrunch one by Lorenzo.
It looks like it affected software
older than iOS 17.2,
which is a little bit old,
but we don't know when this bug started being used
or whatever,
so we don't know if it was 0day
when it was in the wild.
But yeah, Apple's fixing it now.
I mean, we have seen a lot of bugs in core media, right?
And that's where you're going to find them.
There's a lot of parsers.
Absolutely.
You know what I mean?
Yeah, it's parsers and attack surface.
The two, by their powers combined,
is where you get the good bugs.
Yeah.
I mean, local privescs in Apple stuff
are particularly useful for people
who are running exploit chains
and using them in the wild
because that's a thing you normally have to do
after you land.
So every one of these that Apple kills
probably kills someone's very expensive tool chain.
Well, I mean, in this case, it looks like it was dead already.
But as you point out,
local privilege escalation bugs for any iOS chain,
they're really valuable.
It's not like most OSs where that's eh.
Yeah.
I mean, they've crept up in value for things like Windows as well over the years, but like on iOS, you know, essentially if you want to break the sandbox... yeah, you know, because you need them all the time on iOS, whereas on every other operating system, you know, often you don't need a local privesc, often you're kind of there already, you know, you're kind of already on target, you've got what you need, so they're kind of less valuable there.
Yeah. All right. So we're going to talk about some research here.
And this is like bug bounty style research, which is like, I think it's just the write-up that's so good, rather than the research, because the research is essentially fairly workaday API security research. But someone has written it up. Who's this? Eaton Works, right? Has written this up, and the website's fantastic. It's very funny. So they took a look at McDonald's API in India, and they discovered that it had several vulnerabilities, which would allow them to do things like order food for one cent and, yeah, do all sorts of stuff. They could steal, hijack, redirect other people's delivery orders, and, you know, retrieve the details of any order or whatever. And they've put up a web page here, and it's covered in little McDonald's fries emojis and burger emojis, and the mouse cursor turns into, like, a little packet of fries, GeoCities style.
Yeah, a little bit GeoCities, but, you know, I don't know. It's kind of cool.
But, like, the walkthrough of the research here is just, like,
I think anyone who's responsible for, you know, an API should just take a little thumb through this
because it's just, you know, this is how someone sitting down
is going to have a go at you.
Yeah, I mean, in the end, one of the things that bug bounty kids
are really good at is writing up their research
because you have to be, otherwise you don't get paid. And, you know, you can follow the train of thought, it's all explained pretty clearly, it all makes sense. It's also kind of fun. Like, you know, I miss the fact that security research has gotten so serious. And so it's nice to see someone kind of, you know, horsing about having a bit of fun with it. And the bugs are legit too, right?
I mean, there's some unauthed API parts.
There's some bits where the guy uses mass assignment
on the shopping cart midway through the payment process
to change the values to, you know,
like whatever the minimum the card processor will do.
You know, like it's solid hacking.
And yeah, it's just a fun read.
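A hedged sketch of the mass-assignment pattern Adam mentions. The host, endpoints and field names here are hypothetical, not McDonald's actual API; the point is just that a vulnerable backend binds whatever JSON fields the client sends onto the order, including server-owned ones like the price.

```python
# Hedged sketch of the mass-assignment pattern described above; the endpoint,
# field names, and prices are hypothetical, not McDonald's real API. The point
# is that a vulnerable backend binds whatever JSON fields the client sends onto
# the cart/order object, including server-owned ones like the total price.
import requests

API = "https://api.example-delivery.test"  # placeholder host


def underpay_order(token: str) -> None:
    s = requests.Session()
    s.headers["Authorization"] = f"Bearer {token}"  # ordinary customer session

    # Legitimate step: add an item to the cart.
    cart = s.post(f"{API}/cart/items", json={"itemId": 1234, "quantity": 1}).json()

    # Mass assignment: midway through checkout, resend the cart update with an
    # extra field the UI never exposes. A vulnerable backend copies it straight
    # onto the order instead of recomputing the price server-side.
    s.patch(
        f"{API}/cart/{cart['cartId']}",
        json={"itemId": 1234, "quantity": 1, "totalPrice": 0.01},
    )
```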
Yeah, it is. So, you know, good job, guy. And Eaton Works is apparently a guy, not a company, so there you go. I was just looking into that while you were speaking. And, you know, kind of a quick one this week, isn't it, because we're about to wrap up. But C is back as a computer language.
The White House has deleted its memo on using memory safe languages
because we don't want any of that woke rust stuff in our code.
Adam, it's back to C.
You know, Trump comes back in, you're allowed to use C again.
Yeah, Golang's for communists and hippies.
That's right.
Who are trying to destroy America.
Exactly, yes.
Honest to God, just like Brian Kernighan intended it,
we should all be writing C for everywhere.
But there's a whole bunch of web properties
that have been changing during the transition in the government.
And we've seen executive orders go missing.
We've seen all sorts of other guidance about stuff go away.
But it's just, you just, what a mess.
But hey, if C comes back, I'm not mad about that.
I mean, I like C.
You should be mad about that.
Oh, man.
God's own macro assembler.
That's a beautiful thing.
All right.
Well, that's actually it for the news, Adam.
But you and I are going to do something we don't normally do
and actually have a bit of a chat about this week's sponsor interview
because it is very interesting.
I mentioned it briefly at the intro, but basically Push Security, they make an identity security platform, and a big part of it is a browser plugin, right? So they can collect things like login information that people have done through web browsers and stuff, give you incredible visibility into the SaaS apps you're using. They can do all sorts of stuff. They can prevent people from recycling their SSO password into other sites and, you know, just really, really handy stuff. Yeah. So you can just Google Push Security, you'll find them.
But they found some unexpected stuff. And, you know, that's what, that's a big part of what's
in this interview is they noticed, some of their customers noticed that they were getting all of
these Google logins, logins to Google accounts using corporate email addresses, and they're like, huh, we're not a Google shop, so what's going on there? And it turned out what some of their staff were doing was actually registering Google accounts with their corporate email addresses. And you can have a Google account that's not a Gmail account, so there's no domain validation. They'll just email the address of the person who's trying to spin up the account with a code. And then bang, you've
registered the account. And then the reason they were doing that is so that they could then log
into SaaS apps that they were using as part of their jobs by clicking login with Google.
Now this got the people at Push thinking, well, if you could just phish that code, you can start logging into all manner of SaaS apps, and you haven't had to do any fancy token theft or install a malicious extension or, you know what I mean?
Like this is just a really easy way to do that.
And I remember, yeah, like signing up for an account with Dropbox and then with a username and password and then just OAuthing to it with no password from a different browser profile with that email address
and it just let me in.
So all of this stuff's a bit of a mess.
Yeah, like it's quite a complicated problem
and some of this is because of the way
that kind of authentication has changed over time
from local authentication
through to federated authentication
and identity providers
and so on. So authentication mechanisms in most systems are pluggable, so you can authenticate from a local account, you can authenticate with an IdP, you can authenticate from LDAP or from SAML or whatever else, and the systems that use those kinds of authentication plugins don't really care which one. And using that in the real world,
where you've got malicious use cases like this,
does require a bit of whole system thinking
that you lose when you've got that sort of pluggable mechanism.
And the options you've got for dealing with
that kind of blended authentication
is defensively registering stuff
with other identity providers, which...
Which, you know, can be done, right?
And there's only a handful of them
that are used commonly, right?
So you've got Apple, Google, and Microsoft
are going to be your common ones.
But, I mean, you know, I made a mistake
on this when we were talking
because I'm like, that's part of the spec, isn't it?
It's agnostic.
If that's your identifier,
if it's your username at domain.com,
if that's your identifier,
like it's in the spec that it respects it, whether it's username and password or an IdP, but it's not in the spec. And a lot of SaaS providers can choose to pin people to an authentication method or an IdP, but they don't do that, because, say we were to switch to using M365, that would break all of our SaaS access
and that would cause them help desk problems.
So this is a real issue.
Yeah.
I mean, what the spec says is that identity providers provide
an attribute that says whether or not they have validated control
of the email address.
So that's one kind of data point that you can use. And then they provide the email address
related to it. They also say that you shouldn't use that email address as the kind of primary key, as the identifier for that user, that there is a separate unique value that's guaranteed to be unique, that you should use for that. And those kinds of details are not something that, as an outsider, you can really see, right, if you're trying to assess, you know, do the systems that use third-party identity providers that I rely on, do they care about the email-validated attribute, do they validate that, you know, do they do things per the recommendations? Because there's the letter of the spec, and then there's the kind of intent of the spec, and then the as-built reality of it, and then there's the way that everything kind of interacts, and the glue between an identity provider and relying parties and all. It gets kind of messy and complicated.
As an end user, they want it to be simple.
The flow should be click button,
receive authenticated session.
There shouldn't be much to fiddle around with.
But if you're the security person and you kind of have to care about this stuff,
it's really, really complicated.
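As a rough illustration of "doing things per the recommendations" on the relying-party side, here's a minimal sketch, assuming you already have verified claims from an OIDC ID token (signature and audience checks done elsewhere). The claim names (iss, sub, email_verified) come from the OpenID Connect spec; the issuer allow-list and user store are hypothetical.

```python
# Minimal sketch of relying-party checks on decoded OIDC ID token claims.
# Assumes signature/audience validation has already happened elsewhere.
EXPECTED_ISSUERS = {"https://login.microsoftonline.com/{tenant}/v2.0"}  # placeholder: your real IdP(s)


def resolve_user(claims: dict, user_store: dict):
    # 1. Only accept identity providers you have deliberately federated with.
    if claims.get("iss") not in EXPECTED_ISSUERS:
        raise PermissionError("unexpected identity provider")

    # 2. Don't trust an email address the IdP never verified.
    if not claims.get("email_verified", False):
        raise PermissionError("email not verified by the IdP")

    # 3. Key the account on (iss, sub) -- the stable unique identifier --
    #    not on the email address, which can be re-registered elsewhere.
    key = (claims["iss"], claims["sub"])
    user = user_store.get(key)
    if user is None:
        raise PermissionError("no account linked to this identity")
    return user
```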
Well, and it's opaque as well.
That's why I found a really interesting thing about this story
is the way it turned up was people using push to do something else
and then just looking at the logs and going, that's weird.
And you can totally understand why end users would do it
because it is, it's often straightforward.
You don't have to manage another password.
In some cases using federated auth like that
is more robust and easier.
However.
But however, right?
And then we're into the world also where
how individual services
handle multiple IDPs and whether they pin it.
You're kind of supposed to, but there's no spec that says you should.
That's just kind of accepted wisdom, best practice,
the sort of thing a pen tester will probably write in a report.
Nobody does.
No one reads pen test reports either.
Yeah, they just tick the compliance box.
Yes, exactly.
Got SSO, green tick, next.
All right, so we are going to wrap up the news there.
But I should mention too,
we've got a couple of job postings to talk about.
We've got lots of resumes that we're going through
at the moment for our job
and we're doing some interviews around that.
But there are two more jobs up for grabs.
One is with Trail of Bits
and one is with Push Security,
who's this week's sponsor.
And I'll drop links to both of them
in this week's show notes.
Both United States-based positions.
But here is that interview with Luke Jennings from Push Security talking about this whole issue of cross-IDP impersonation.
Enjoy.
We see people do this with their own accounts for real.
We've got customers where one of the reasons it came out is that they were looking for logins across their estate, and they saw all these logins to Google but with corporate email addresses, and they were confused, saying, well, hang on, we're a Microsoft house, this must be a bug. And then we looked into it, and it's like, no, no, a certain percentage of your users have all registered personal Google accounts with their corporate emails, and they're using those to OAuth with Google into other downstream SaaS apps. So it happens even for legitimate use.
Like, people going, oh my God. So that's a really funny way that you've discovered that, which is that it was a non-malicious use, which is their own corporate users legitimately logging into corporate SaaS apps with the wrong IdP.
Yep, probably because it's easier. They get a button that says login with Google, and then they go and register that with their corporate email once, and they just do that from then on.
It's easy. Wow. That's amazing. So walk us through like the phishing workflow
to spin up those accounts, right? Because you're going to need to do a little bit of
fancy footwork, I'd imagine. Yeah. So, I mean, when I was looking into this, I was thinking, how can an attacker take advantage of this?
And it made me realize, you know, a lot of people have got very strong SSO authentication methods now.
Maybe they're even using passkeys or something to stop phishing entirely.
So, how do you get an account on a different IDP?
Well, actually, as an attacker, you can go and register, say, that Google account yourself. You set the passwords, you do everything, and then it will email the target on their email address, and go to their Outlook, for example. Now you need to get that verification code, but you only need to get it once. And if you think about it, that's way easier to phish than, you know, doing an attacker-in-the-middle phishing attack against their actual SSO account, or, you know, if they've got passkeys it wouldn't even be possible anyway. So you've just got to convince them to give you that code through some social engineering pretext once. For an account they know they don't even use, so why would they be too worried about it? You could create some context around that: I'm not giving away a password, I'm not giving away an account I use, it's unused. You've just got to get that code once, and then you can register that account, you control the password, and then you can start logging into things as Google instead.
And it won't even go into the SSO logs for the real organization because you go straight to the
downstream app. So they won't even see suspicious logins through the IDP. You bypass that as well.
Yeah. I mean, I think that's the interesting thing here, is that, you know, because of course you make a plugin, right, that captures this sort of telemetry. Without that, like, no one would have realized this was even happening.
Yeah, it's true. I mean, it surprised us ourselves, just seeing how many legit users were doing this as well. And thinking through the full implications, you realize it creates quite a lot of potential problems in a large organization, where you've got, you know, just more forms of ghost logins appearing, and you've got ways of circumventing SSO with verification phishing as well.
Okay. So all of this begs the question,
you know, what do you do to prevent this happening to your org?
You know, you sent over some notes, obviously,
before we started having this conversation. And one thing you can do is like register tenants with like Microsoft.
But can you do that with Apple?
Yeah, so you can register your domain and verify it with other providers.
So even if you don't make use of them, you can kind of claim it.
And for like Apple or Microsoft.
Yeah, I mean, you could spin up like a Workspace domain, right, and just have like one user or whatever.
Yeah, so like that's the intention. And if you do that with, say, Apple, for example, you can claim that domain, and as a result you can then stop people making new personal accounts on that domain. So you can kind of close off Apple as a route, for example. The difficulty is with Google, it's a little bit different. With Google, it will then tell you if there are other personal accounts, like unmanaged accounts, on that domain, so you can gain visibility of them, but you have to kind of control how it handles conflicted accounts. It's not simple, but effectively that's one way of doing it: you go and claim your domain on the other identity providers, even if you're not actively using them.
Yeah, okay. And I'm guessing that you would have recommended that to a bunch of clients and they've been through that process. Have they found it pretty simple?
Yeah, with the exception of Google, as we said.
Yeah, but Google's slightly more annoying.
But certainly with Apple, it's very easy. And we've done it ourselves, you know, for our accounts too. I mean, downstream from there you can obviously go to your major apps and try and configure the authentication settings so as to prevent logins from other IdPs too, but obviously you're at the mercy of what the app allows, and you've got a lot more apps to go and do that for, rather than just, you know, a few different IdPs.
I guess the main thing, though, the takeaway from this conversation, is that because most of this OAuth is Apple, Google, Microsoft, like, it doesn't take a whole bunch to seal this off as a viable attack path for people out there, right? So people listening to this, at least they've got something they can do.
Yeah, I think really it should probably become standard practice to go and claim your domains
on the other identity providers in light of this.
It would be definitely good practice for people to do.
And actually, if you just want to test
what your level of vulnerability is
in your own organization,
you can do it pretty simply
without even being an admin too.
I mean, you can just try and register your own account
with other identity providers
and see if you can log into any of your apps yourself.
And like, you'll find out your level of vulnerability
that way too.
It's very simple.
Yeah.
Now, is there anything that those vendors
should be doing differently,
I guess, to prevent this from happening?
I mean, it's, unless they're going to restrict it
with some sort of domain validation,
I don't really see what else they can do, right?
And as you say, product-led growth,
they don't want to slow down the number of users
who are coming to their services.
So I can't think this is going to change on their end, right?
Yeah, for the identity providers,
I don't know what else they can really do
other than look for the account creation looking suspicious for other reasons, but then they've got to go and contact the owner of the domain.
It's going to be some sort of manual process.
It's kind of hard for them to do.
I think for general SaaS application vendors, though,
what they can do is make it so that once you've logged in with one method,
it's not easy to log in with a different method
without some other step
being taken. And that really does just depend on the application. Some of them do that quite well.
If you log in with Google, then you can't just go and log straight in with Microsoft,
but others just let you use whatever method you like, unless you go and explicitly disable those
things. So it's really- Yeah, but there's always going to be that situation, isn't there, where
people are going to switch between providers
for their cloud accounts.
And then what, it breaks all of their SaaS access?
Because I mean, I did have a chance
to have a bit of a noodle on this.
And I'm like, I can't really see how this gets fixed,
to be honest.
I actually, to test this on a related story a while back,
I think I made a Dropbox account
with a username and password from one Chrome profile, then went to my corp Google profile and just OAuth'd into it without the password, and yeah, it let me in no problem. So it doesn't seem that these applications really care what method you use, whether it's username and password or which IdP you're using. And
I can see why they do that, but I also feel like that's, yeah, not ideal. Let's put it that way.
No, it is. Yeah, for sure. And I think it almost seems like some of the biggest apps are the ones
that are more vulnerable there because they've tried to make it as easy as possible.
Well, they're the ones where, every time they make their product a little bit more complicated, their support costs go through the roof, right?
So it makes sense that the bigger services are the more difficult ones.
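A minimal sketch of the per-account method pinning Luke describes on the SaaS-app side, with all names hypothetical: remember how a user authenticates and require an extra verification step before honouring a new method or IdP.

```python
# Hedged sketch of per-account login-method pinning on the SaaS-app side.
# All names are hypothetical; a real app would persist this in its user store.
from dataclasses import dataclass


@dataclass
class Account:
    user_id: str
    pinned_method: str  # e.g. "password", "google", "microsoft"


def check_login_method(account: Account, method: str) -> None:
    if method != account.pinned_method:
        # Don't silently accept a new IdP for an existing account;
        # require re-verification (email confirmation, step-up MFA) first.
        raise PermissionError(
            f"account is pinned to {account.pinned_method!r}; re-verify "
            f"ownership before enabling {method!r} as a login method"
        )


# Example: an account created with username/password should not be reachable
# via a freshly registered Google identity without that extra step.
acct = Account(user_id="u123", pinned_method="password")
try:
    check_login_method(acct, "google")
except PermissionError as e:
    print(e)
```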
But yeah, I just, I mean, look, again,
we've got some actionable advice here, so that's great.
But, you know, not every company listens to risky business
and is going to see this research.
So that begs the question,
what's the response to this work that you've been doing been like so far? Like, have you talked to a lot of people about it? Has it generated a bit of buzz, with people saying, gee, I didn't know that that was an issue?
Sure, yeah. I mean, we posted a couple of blog posts on it and sort of video demos of the attack. I got a lot of good feedback from that. I have seen, like, there's even bits, some other people like Sublime released some detection rules for the verification phishing side of it.
So we've got like quite a lot of feedback
from different people.
And I think most people are pretty surprised
at the outcome.
It seems really simple.
It's only when you sort of think about it a little more
that you realize the implications.
So yeah, it's been, you know,
it's been interesting response to it.
All right, Luke Jennings,
thank you so much for joining me
to talk about cross-IDP impersonation.
Very interesting stuff, my friend.
And yeah, always great to get some actionable advice
out there in the show.
Cheers.
Thank you.
That was Luke Jennings there from Push Security rounding out this week's
edition of the Risky Business Podcast. I do hope you enjoyed it. That's it from Mr. Beardy Adam
Guy over here and me. But we'll be back next week with more security news and analysis.
Between Two Nerds is coming back as well next week. Tom's had a couple of extra weeks leave,
so he'll be back with the Grugq next week, and so will Seriously Risky Biz,
and all of that over at Risky Bulletin.
But yeah, we'll catch you all next week.
Bye.
Thanks very much.