Risky Business - Risky Business #795 -- How The Com is hacking Salesforce tenants
Episode Date: June 11, 2025

On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news:

- New York Times gets a little stolen Russian FSB data as a treat
- iVerify spots possible evidence of iOS exploitation against the Harris-Walz campaign
- Researcher figures out a trick to get Google account holders' full names and phone numbers
- Major US food distributor gets ransomwared
- The Com's social engineering of Salesforce app authorisations is a harbinger of our future problems
- Australian Navy forgets New Zealand has computers, zaps Kiwis with their giant radar

This week's episode is sponsored by identity provider Okta. Long-time friend of the show Alex Tilley is Okta's Global Threat Research Coordinator, and he joins to discuss how organisations can use both human and technical signals to spot North Koreans in their midst.

This episode is also available on Youtube.

Show notes:

- How The Times Obtained Secret Russian Intelligence Documents - The New York Times
- Ukraine's military intelligence claims cyberattack on Russian strategic bomber maker | The Record from Recorded Future News
- Harris-Walz campaign may have been targeted by iPhone hackers, cybersecurity firm says
- iVerify Uncovers Evidence of Zero-Click Mobile Exploitation in the U.S.
- Spyware maker cuts ties with Italy after government refused audit into hack of journalist's phone | The Record from Recorded Future News
- Italian lawmakers say Italy used spyware to target phones of immigration activists, but not against journalist | TechCrunch
- Android chipmaker Qualcomm fixes three zero-days exploited by hackers | TechCrunch
- Cellebrite to acquire mobile testing firm Corellium in $200 million deal | CyberScoop
- Apple Gave Governments Data on Thousands of Push Notifications
- A Researcher Figured Out How to Reveal Any Phone Number Linked to a Google Account
- Bruteforcing the phone number of any Google user
- Acreed infostealer poised to replace Lumma after global crackdown | The Record from Recorded Future News
- BidenCash darknet forum taken down by US, Dutch law enforcement | The Record from Recorded Future News
- NHS calls for 1 million blood donors as UK stocks remain low following cyberattack | The Record from Recorded Future News
- Major food wholesaler says cyberattack impacting distribution systems | The Record from Recorded Future News
- Kettering Health confirms attack by Interlock ransomware group as health record system is restored | The Record from Recorded Future News
- Hackers abuse malicious version of Salesforce tool for data theft, extortion | Cybersecurity Dive
- shubs on X: "IP whitelisting is fundamentally broken. At @assetnote, we've successfully bypassed network controls by routing traffic through a specific location (cloud provider, geo-location). Today, we're releasing Newtowner, to help test for this issue: https://t.co/X3dkMz9gwK" / X
- Ross Ulbricht Got a $31 Million Donation From a Dark Web Dealer, Crypto Tracers Suspect | WIRED
- Australian navy ship causes radio and internet outages to parts of New Zealand
Transcript
Hey everyone and welcome to Risky Business. My name is Patrick Gray. We've got a great
show for you today actually. It's full of intrigue and dumb bugs and criminal activity
and basically everything. It's a real mixed bag but a very fun show coming up. So we'll
be talking with Adam Boileau in just a moment, going through all of the week's security news, and then we'll be hearing from this week's
sponsor which is Okta and this is actually a cracking sponsor interview
this week Alex Tilly who people in Australia would remember as our
friendly local fed he worked for a long time as an investigator for the
Australian Federal Police these days he's in a threat intelligence role at
Okta, and he's got his hands on so much data there to crunch and
analyze and today we're going to be talking to him about how he's going
about identifying a lot of these fake North Korean workers using
threat intelligence from like login events and whatnot. It is a really
interesting interview. I let it
run a little bit long because it's just very compelling stuff and I would
recommend that you stick around for that one. But let's get into the news now
Adam and as I said there's we got a whole broad array of stuff to cover
this week but let's start with a bit of intrigue. There's a group what are they
calling themselves again? Ares Leaks.
Ares Leaks, yeah.
So these guys, presumably guys, have just decided that they want to live dangerous lives,
right?
So their business is apparently offering documents stolen from various intelligence services
for sale over Telegram.
And they have given some pretty juicy documents to the New York
Times as like a free sample and a way to advertise their service. This won't end
badly for them at all. No, no not at all, not at all. They claim to have gotten
hold of some documents from the Russian FSB, the intelligence apparatus, and one
of the samples that they provided to the New York Times
related to the intelligence capability
that Russia uses against China,
to spy on WeChat conversations,
which, you would kind of expect Russian intelligence
to be up in Chinese comms, that seems relevant to them.
But yeah, this does feel like living
a little bit dangerously.
They won't say where they got the data from.
They also advertise to buy data from intelligence agencies.
So they post on Telegram and say, hey, if you work in an Indian intelligence agency,
we'd love to hear from you.
And then they package it up and they sell it on, which I mean, it's very 2025.
So that's nice.
Yeah, but you do wonder if they're gonna get a case of the windows, you know?
Well, yes, exactly. I mean, especially like selling stuff from the FSB like that.
I mean...
On Telegram, which is known to cooperate with Russian intelligence services as well.
Do you want to fall out of a window? Do you want to get Novichok in your underpants?
Like, none of this ends well.
Yeah, now the New York Times can't definitively say
if these documents are legit,
but they have done their homework on it.
They've reached out to a bunch
of Western intelligence agencies,
and some of their sources have said,
well, you know, this seems to vibe
with what we understand they're doing as well.
And other people have looked at the documents
and said, this looks like legit, you know,
Russian intelligence documents. And then of course the New York Times has spun out a couple
of stories about the content of the documents. I think the most interesting one is about how
suspicious Russia's intelligence services are of China and really looking at Chinese efforts to
steal things like military, you know, technology secrets and whatnot from Russia. They're also really trying to understand the capabilities
of the Western weaponry being used in Ukraine
for obvious reasons.
The WeChat stuff though, I thought was a little bit weak
because this times piece doesn't really go into actually
how they're alleging this data came to be obtained.
Like, is it just from like malware on an endpoint?
It says that there's an agreement that WeChat data is
hosted in Russia for Russians.
So maybe they just have legit access to that data
in the first place.
So I actually found that one the least compelling angle
to all of the material they accessed.
Yeah, I think that it sounded more like that was tooling
for processing the data, so less about access to it,
but more about integrating it
into the kind of intelligence pipeline.
But that's the kind of stuff that is work a day
at an intelligence agency, is that kind of processing.
Yeah, it's like the least exciting thing,
which is they have a tool for processing
dumped like WeChat logs, like who cares?
Yeah, it's bread and butter stuff though, right?
Yeah, exactly.
I think the thing that was interesting about that though
is they said that quite often these Russians who
have been targeted by Chinese services
don't realize they're talking to Chinese spies on WeChat
and whatnot.
So I think these stories are worth reading.
So check out this week's show notes if you would like.
Meanwhile, we've got a report here
that Ukraine's military intelligence folks have stolen a bunch of sensitive information from
Tupolev which makes you know long-range strategic bombers and things like that. Although I can't say
I'm impressed when they say that they got away with a whopping 4.4 gigabytes of data because that
seems like one person's mail spool kind of thing. I mean, these days, that's not an impressive amount of data.
But once upon a time, that would have been quite a lot.
And maybe Tupolev's information systems are from the Soviet era,
where 4.4 gigabytes in Soviet times, that would be quite a lot.
So it could be old file formats that are very small.
I don't know.
But yeah, I guess this is a hot story just because of the Ukrainians' attack on a whole
bunch of Tupolev bombers with the drones that we talked about a little bit last week.
So Tupolev is kind of fresh in the mind and it's a bit of a finger in the eye, thumb in
the eye.
Yeah, they defaced the website with their logo as well, but they say they've got sensitive
data on internal communications, personnel files, purchase records, notes from closed door meetings, blah, blah, blah, blah.
These drone attacks in Ukraine are ongoing too.
Everybody was very focused on the attacks against the airfields, but they keep attacking
all sorts of other stuff like munitions factories, trains, all sorts.
That's still going.
That piece was written by Daryna Antoniuk, who is based in Kyiv.
She's Ukrainian and based in Kyiv. And I saw some very upsetting posts from her when she was hunkered
down and it was raining drones and missiles on Kyiv. And Daryna, we hope you're okay. And yeah,
it's a horrible situation. She had posted that she was scared she was going to die,
so I was very relieved to see subsequent posts that she survived the most recent bombardment
and is continuing to deliver such quality work in such difficult circumstances.
I think it's commendable.
So good on you, Daryna. Okay, let's move on now.
And we've got some work out of iVerify
who do iOS focused security stuff.
They've found a bunch of artifacts
on devices they've analyzed,
which suggests those devices have been owned
via some sort of iMessage exploit.
Apple is kind of being weasely about it.
I guess the most interesting thing
about this is iVerify says that some of the targets of this campaign, where they don't quite
have the bug pinned down, but they're like there's evidence of exploitation here, they were
targeting the Harris-Walz campaign. Which look, honestly, those people getting quality malware and exploits dropped on them should not be at all surprising.
Yeah, I mean, they seem like pretty legitimate intelligence targets for some people.
But, yeah, iVerify does have kind of a unique perspective because they're one of the few companies other than Apple that gets to collect crash dumps and logs and stuff from a really quite large swath of Apple devices.
And what they've got to piece together here is that there were a bunch of crash dumps
in a funny sort of place, combined with some targeting information and whatever else.
And they sort of pulled the thread a little bit.
And what they've arrived at is that they think someone has been exploiting a bug that
Apple patched, I think, earlier this year in, what, 18.3, something like that.
And the bug appears to be in the processing of avatar updates
via iMessage.
So sometimes when you're using iMessage,
you'll get a thing that's saying,
your friend has updated their contact photo.
Do you want to update it in your address book?
And so that particular type of message
is processing image data from this kind of contact update,
and it's doing so in a way that apparently was able to escape the sandbox that's normally around image processing in iMessage,
just because, I guess, maybe the contact bit works a little bit differently.
Anyway, that's kind of where they were thinking.
iVerify are pretty careful to say that they don't have, you know, conclusive proof.
This is kind of circumstantial and, you know, Apple says they patched the bug in this area.
So, you know, the kind of dots kind of line up. But it is just, it's kind of,
it's always interesting when you see this research from iVerify because of their,
you know, parallel perspective to what Apple has.
I mean, you know, the thing they point to is really rare crash dumps.
And that is the sort of thing that tends to get you snapped.
If you've written an exploit and you're deploying it against iPhones
and you generate a weird crash dump, that bug is getting crushed.
And I know that this is something that keeps people awake who develop these exploits,
because I've talked to them about it.
And they're like, you cannot generate a crash on an iPhone because
then your bug is going away.
So it looks like, you know, that may be what has happened here.
It's weird that Apple's saying nothing to see here though.
Maybe they know something we don't.
But yeah, they must know something we don't know.
There must be some other aspect that means that they're not willing to talk about it.
But yeah, it's still good
work and I'm glad that somebody is kind of keeping an eye on it because we do have to
because that Apple ecosystem is so closed, we have to put so much faith in Apple. So
it's kind of nice to have somebody else, you know, just kind of keeping an eye and checking
what they're up to.
Even though I mean, I would argue Apple probably has the best team out there.
Yes, yes.
Almost certainly has the best team out there.
But we have almost no visibility into what their team is doing.
No, no, I agree with you. It's like trust but verify. iVerify,
there you go. Maybe that's why they named it that. But yeah, I also just want to point out too that,
you know, we're just saying like if you're in a role like that, you are going to get targeted by
high quality malware, by high quality exploits. I mean, this is why you don't discuss imminent military actions on your signal
group chat. But anyway, Suzanne Smalley over at The Record has this report, a bunch of people
who reported this. Paragon, the spyware manufacturer that had already, I believe, suspended its
relationship with Italy because there was an allegation that the Italian government had deployed its spyware
against a journalist. It has now terminated completely its contract with Italy. The interesting
thing in this though is that Paragon say they had found a way that with Italy's cooperation,
they could have verified that this journalist wasn't targeted and Italy refused to do it.
Okay. Which is interesting because Italy is saying, oh no, this person wasn't targeted at
all, never happened. And then Paragon's like, well, here's a way we can check. And they're like,
no, thank you. So I think that tells you all you need to know.
Well, well, yeah, exactly. There's a sort of like a process going on in Italy at the moment where they're investigating
these allegations and there's like parliamentary committees or whatever the structure is there
that have been looking into it. And in this process Paragon was trying to keep their name
out of the mud, I suppose. And they showed up with this offer and the Department of Information for Security,
which oversees the intelligence agencies in the country,
basically said like, yeah, this is a bit invasive.
It's, what do they say, invasive practices,
unreliable in scope, results and method,
and therefore not compliant
with national security requirements.
So we don't want to let Paragon in
to go process the logs from Paragon's
infrastructure or whatever bits of plumbing are around it. So yeah, as you say, we can read
between those lines, I guess. I mean, look, it could be some sort of compliance issue, but either
way, Paragon's made the right decision, which is, well, if you can't prove to us to this bar
that you haven't done the bad thing that everyone's saying you're doing, well, we're just not going to
do business with you.
I mean, the parliamentary inquiry into all of this in Italy says that the spyware
was used against sort of immigration activists who work to save immigrants at sea.
I mean, again, it's hard to know.
Context is everything when you're talking about that sort of thing
because should people be targeted for activism, being activists?
No.
But once you start getting into the area of like, well, who are you dealing with as part
of this?
And is there involvement of foreign actors and whatever?
And are you using this to collect evidence to press charges or are you just trying to
collect intelligence to understand if there's a nexus between these people
and foreign actors who might be up to no good? Like it all gets pretty fraught, but I think
Paragon ultimately has made the right decision here. Yeah, I mean it's complicated because
outsourcing this kind of responsibility to a private organization instead of a government,
you know, you end up with all of these things, but they don't have the necessary
information to make good decisions.
You can't share it with them.
This is sovereign state business, but they're using private sector tooling.
And we can't really expect private
sector companies to hold sovereign states accountable.
But that's what happened this time.
Other than just not selling them in the first place.
Right. There's no granularity to that, I guess, is what I mean.
Well, I mean, the irony here is that in this case,
it's a private sector supplier providing oversight
over a government, not the other way around, right?
So it's a bit of a-
That's exactly what I mean.
We can't expect them to take that role because, you know-
Well, I think we kind of are though, right?
Like that is kind of how it's working out,
which is that like, if you don't want to get sanctioned
and have an awful time,
you kind of have to put that oversight on your customers or you're going to have
a bad time. Yeah. But I mean, that's not a, you know,
if the only tool you have is to not sell your product to them in the first place,
or to withdraw it after the fact, you know, that's pretty blunt.
I know it works for me. Fair enough. Right. Yeah. Now look,
we've got a bunch of mobile stuff to talk about this week,
including this next story.
Lorenzo has this one over at TechCrunch.
Qualcomm has patched a bunch of bugs in its chipsets
or whatnot, and these things were being exploited
in the wild, and we know this
thanks to some work out of Google TAG.
Yeah, these bugs all seem to be
in graphics driver implementations,
so it looks like one of the Google researchers either found an interesting bug
or saw some crash dumps or something that led them to go have a nose around
in Qualcomm's graphics plumbing, and they've got a bunch of bugs that are
critical severity or thereabouts, which basically say you can go
via the GPU into,
you know, code execution in the kernel or memory leaks
or whatever else.
So, you know, that's a complicated attack surface
in the graphics environment and mobile devices
are just as complicated as everything else these days.
So good work to Google finding some cool bugs.
Yeah, I guess one of the issues with all of this though,
is that patching them, like it's going to be up to the individual handset
manufacturers, because it's in Qualcomm drivers, not in Android itself, right? So that always gets a bit, you know...
That's the age-old problem with Android, I guess. Yeah, takes a while to percolate through the ecosystem.
Now, some business news, some sort of,
you know, mobile
exploit development business news. Corellium is being acquired by Cellebrite
for 200 million bucks. Now Corellium of course is a company that makes like, I guess, how
do you even describe Corellium? It's like they offer like virtualized iOS environments
that people can use that are very handy when you're doing exploit development. I mean,
it does have uses outside of that as well. But I think where it's most commercially
successful is as a tool used by people who are doing exploitation on iOS. They even offer like
iOS over cloud, right? It's pretty crazy. And Apple even sued them trying to say that it was
a copyright problem or whatever. Like amazingly, Corellium actually won that one.
We've had Chris Wade on the show before. He's the founder of Corellium, an Australian guy, actually,
who grew up not too far from where I'm sitting.
And yeah, 200 million bucks for Chris Wade and co.
to sell off Corellium to Cellebrite, which is, you know,
I mean, it seems like a logical place for Corellium to wind up.
Yeah, yeah, I think so.
I mean, it's, you know, it is a pretty niche tool, but so is Cellebrite.
For them, being able to test their products on these kinds of virtualized devices is a big deal.
I mean, anyone who's had to maintain a pool of test mobile devices,
where you've got different device configurations. I mean, in the Apple world, it's bad because
there's all these devices
and all these iOS versions you have to maintain.
In the Android world, it's even worse
because now we're dealing with,
hey, I need to test a Qualcomm exploit.
We need Android, all the versions,
but we also need all the different Qualcomm chipsets
and other vendors, modem chipsets.
And that's a hard problem to solve.
And having it virtualized
in the cloud environment, like super handy for a researcher.
So I can totally see why, you know,
Cellebrite's nerds would be like,
hey, that would be super handy to have in the lab.
Yeah, but I mean, they bought the company,
not just the product, right?
So I think, I don't think it's just gonna be a case
where they'll take it for in-house use
and no one else will be able to use it.
I think it's just gonna be one of their products. I mean, this whole deal is subject to CFIUS review, but I don't
think there's going to be any dramas there. But yeah, they do cover Android as well. I
mean, it is, it's exactly what you say. And even companies that have to manage, you know,
an app that winds up on a really large array of devices and they have to work, you know,
it's very useful technology for them. But yeah, I just think it's crazy to see that, you know, you had this company that Apple
were really trying to shut it down.
And you know, they managed to actually get through that unscathed after appeals everything
and sold the company for 200 million bucks.
So congratulations to them. Now we got this one from 404 Media about Apple giving up push notifications to people,
right? So governments are requesting information on push notifications. I feel like the story here,
not intentionally, but is a little bit misleading because it says push notification data can
sometimes include the unencrypted content of notifications. And there have been requests from the US, UK, Germany and Israel.
I mean, sure, you might get some unencrypted data in a push notification, you know, if
you've, if you've given Apple a warrant, but that's not going to be like an E2E, you
know, messaging app, like, you know, Signal or WhatsApp. This is going to be much more
boring stuff than that.
Yeah, yeah. There are basically two patterns by which people use these push notifications.
So apps kind of register with Apple to get an identifier to deliver a message to their particular user in their app context.
And then when the backend system wants to send a message, it forms up a message, sends it to Apple's endpoint with that particular token,
and that causes it to be routed out to the user. And those messages can either be, you know, here is
a message, here is what the pop-up is going to display, you know, the icon, the type,
and the message, or it can just be a go wake this application up and tell it that there's
something for it to do. And in most cases where you're handling sensitive content like
Signal, that's what they're using. They're just getting, you know, a message saying, hey, something's happened, go poll the
server and get what you need. And those are not displayed to the user. Those are just internal app
things. And then the app can generate its own pop-up message to alert the user once it's gone and got
the relevant context. So most of the time you're going to be seeing really boring content in your
push notifications. It's going to be, you know, the lights in your Apple Home got set to 40% or whatever.
Your car is fully charged.
Your car is fully charged.
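Those two delivery patterns map onto different APNs payload shapes. Here's a minimal sketch in Python: the `aps`, `alert` and `content-available` keys are from Apple's documented push notification payload format, but the sample values and the helper function are invented for illustration.

```python
import json

# A "visible" push: Apple's servers see the title and body in cleartext,
# so this is the kind of content that could surface in a records request.
visible = {
    "aps": {
        "alert": {"title": "Home", "body": "Lights set to 40%"},
        "sound": "default",
    }
}

# A "wake-up" push, the pattern Signal-style apps use: no message content,
# just a flag telling the app to wake in the background and poll its own
# server over its own end-to-end encrypted channel.
wakeup = {
    "aps": {"content-available": 1}
}

def leaks_content(payload: dict) -> bool:
    """Would Apple (and therefore a warrant) see displayable content?"""
    return "alert" in payload.get("aps", {})

if __name__ == "__main__":
    for name, payload in [("visible", visible), ("wakeup", wakeup)]:
        print(name, leaks_content(payload), json.dumps(payload))
```

Either way, the routing metadata (which app token got a push, and when) still transits Apple, which is the correlation angle discussed below.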
So the metadata aspect of it, if you're looking at metadata for timing and saying, hey, there's
a notification going to a crypto wallet app about the time we saw some crypto get stolen,
that might be a useful
correlation point.
But...
Well, and even just message timing back and forth and whatever.
Like, there's plenty of useful stuff.
So I think it's not at all surprising really that we're seeing this sort of data.
No, it's a natural thing that, you know, law enforcement can get a warrant to go get from
Apple and, you know, the list of things you can get out of them is not super huge.
Yeah, I think we had another one from 404 that I don't believe made it to the final
run sheet, but it was about how airlines were selling passenger manifest information to
the US government. And it's funny, right? Because you and I both agreed that this is
like a story for 404. But if the government didn't have access to that information, it
would be a story for the Washington Post.
Like that would be the scandal if the government didn't know who was flying around on planes
all the time.
It's all in the eye of the beholder, I guess.
What else have we got here?
We've got some research.
Again, 404, Joe Cox was the first one to have this.
It's research by a guy who's figured out how to brute
force the phone number out of like a Google account, which would be very, very useful in
the context of doing things like SIM swapping attacks and whatever. Google shouldn't be coughing
up that information. I believe they've now fixed this issue. Yeah, it was some interesting research.
The person who found this basically chained together a couple of things. The core guts of it is they can turn a Google account into a full name and a phone number,
which that's probably useful.
There's a couple of tricks.
The phone number was a case of brute forcing it out
of the password reset flow using the non-JavaScript version.
So like Google has an old legacy version of the password reset flow that doesn't use JavaScript and
the kind of bot, anti-bot scraping protection stuff was tied to like a JavaScript proof-of-work process and
then if you hit it in the non JavaScript way you would get a capture and that was the kind of how that
control was meant to work. But this guy figured out that you could take the proof of work
that you did in the JavaScript version of the thing
and submit that to the non-JavaScript version's endpoint,
and it would be like, oh, OK, so I don't need to CAPTCHA them.
And at that point, now he can brute force phone numbers.
And then combined with getting a username out of some other,
like some obscure Google app that no one's ever heard of,
would cough up a full name in a particular case. So he was able to go and join the dots and turn this into something.
Google ended up giving him I think $5,000 as a bug bounty which yeah seems fair.
It seems a little stingy to me man if I'm honest like okay it's not worth $100k but $5k really?
I mean they originally gave him $1337 and he went back and had a sook.
Okay yeah good well I'm glad he did that because yeah, 1337 is an insult.
Uh, I think, um, another one from Dorina over at the record.
And this one's interesting, right? Cause Lumma Stealer, or Luma Stealer,
whatever you want to call it. Um,
this was one of those botnets that was taken down recently.
It was like an info stealer botnet.
And what was really interesting is Catalin wrote that up in his newsletter for us,
Risky Business News, which if you're not subscribed, head over to risky.biz and subscribe to that.
We also do a three times weekly news bulletin based on his newsletter in the Risky Bulletin
podcast feed. But he wrote that basically this meant Lumma Stealer were done, right? Like even
if they were going to try to recover, people were going to move on. What was interesting is I had someone from the CTI world reach out and say look that was a really good write-up
But they're trying to bring it back online now, and, you know,
they might succeed. And Catalin in our internal chats just said, no way, man,
they're finished. And it looks like this Acreed infostealer is really taking over market share from where Lumma Stealer used to be.
So I think as usual,
Catalin, the final boss of like macro threat intel, appears to have been right on this one.
That guy just huffs so much InfoSec news all day every day that yeah, he just kind of gets it.
It's a dog-eat-dog world over there in Russian cybercrime, so it makes sense that there's someone waiting in the wings to take over.
You can sort of cut the news two ways, can't you?
You can say, well, the bad news here is that people have access to other infostealers,
but the good news here is the disruption appeared to stick.
And it's just, I think really the success of all of these recent law enforcement
actions, they can't be a point in time exercise.
They can't be, you know, they always say, Oh, it's operation, you know, whatever.
You can't just have a single operation and it's great to take down all of these services
at once.
There will be an impact, but all of this stuff has got to be rolling.
You got to have a process for it.
It's got to be happening all the time.
Right?
So, okay.
Acreed has taken over.
Hit them, you know, hit them soon.
You know, don't turn it into the next 18 month operation.
But, you know, I think we're a way away from that, but definitely things
are moving in the right direction as evidenced by this story from John
Greig over at The Record as well, which is the darknet forum BidenCash has
been taken down by the Americans and the Dutch.
Yeah. The Dutch always do seem to be the ones that come out swinging against this stuff. They seem to be involved in basically every one of these that we report on. But
yeah, they just keep knocking them down. And Biden Cash was just funny because of the branding.
I mean, there's just something funny about Joe Biden's face in the login screen
of a...
Very nice photos too.
Yeah, they're good photos.
Biden looking very handsome in the way that US politicians can, you know?
Yes, exactly.
Yeah, but yeah, this kind of run-of-the-mill cybercrimey kind of stuff, but it's just fun.
Yeah, exactly.
All right Alright so Alexander
Martin again at The Record has reported that the National Health Service in the
UK has issued a call out for more people to donate blood. They say they need
1 million blood donors because at the moment their stocks are not doing so
well they particularly need O type blood and this is because of that ransomware attack. I think
it was against that pathology place. We spoke about that like maybe last year. And so it's
got real bad. Like this disruption due to a cyber attack has led to shortages in national
blood stocks. So I mean, this is just one of those stories you talk about because it's
a sign of the dystopian hell we live in. Yeah, yeah, it is. I think in this case, because the pathology system was grinding under the load
and with all of the other things that were going on with it, they were forced to use more generic
O-type blood because they weren't getting results for tests back from people fast enough to then use more specific blood types that they had available. So that's kind of
the mechanism of this. And so now they've diminished their stocks of generic
blood and they need to replenish that because that's the stuff you need, you
know, in a real emergency when you don't have information, you don't have time to
do tests and get something more specific. So yeah, I guess if you are an O-type
blood person in the UK,
then this is a time for you to go do your duty and, you know, provide some juice.
Yeah, I mean, it's important stuff, right? So if you are a listener in the UK and you can
find a way to give blood, we would hope that you would do that.
Yet one more from the record, they are covering this ransomware incident targeting United Natural
Foods. So they've issued a statement, they've filed documents with the SEC, the
attack apparently began on June 5 and it is disrupting their ability to fulfill
customer orders. Now the reason this is news is because they are a massive
distributor of foods in the United States. Yeah, I think Catalin had some details that said, like, this was the biggest distributor.
Yeah, I saw you fact checking him in Slack this morning.
You're like, are you sure?
And he's like, here you go.
Yeah, yeah, he pointed me to some links.
But yeah, they provide most of the distribution
for Whole Foods, which is a very big chain in the US
and a bunch of other things.
It does sound like they are still managing
to do some of the deliveries
and they're kind of prioritizing,
but this is, like, what was it, $8 billion last quarter? They're a $30 billion a year food distribution business.
So that's pretty serious stuff.
And this can turn,
there could be long tails to disruptions
to this kind of ecosystem,
much like with the blood system we were talking about.
So messy, very messy.
Yeah. Yeah. I mean, when you get a disruption to like a major component of your food supply,
like what does that do for farmers? What does it do for supermarket earnings? Like, you
know, there's just all sorts of stuff that can go sideways. You know, it's interesting
that this one, well, I guess it's only just recently kicked off, but as a public incident,
but you know, I mean, this would have been
sort of more front page news five years ago. It's amazing how tolerated this sort of stuff
is now. You know, I would hope that there's a windowless basement somewhere where people
are cooking up a response. I would think that if anything hits the threshold, this would
be it.
Yeah. I mean, you'd certainly hope so. I remember when we saw what was it, JBS Meats a few years back and we were covering that
and that was getting quite a bit of mainstream press coverage as well as in the meat press.
So yeah, you would hope that...
I think the thing that'll move the needle on press coverage is if people go to Whole
Foods and they can't buy their potatoes.
That's when it starts kicking off.
But yeah, the windowless basement people, I'm sure, you know, hopefully they are thinking about, you know, where their hounds are at.
Yeah, there's also been an attack against a company in Ohio called Kettering Health.
This disrupted their operations.
Looks like they're all back up and running now, though.
Yes, yeah. It was 14 medical centers and care facilities in Ohio.
So, yeah, we don't have it in the run sheet or in the show notes, but I think Marks and Spencer are mostly back online now, which is about time, right? But geez, what an incident that was.
Yeah, I think they've got some online ordering is back, so not all of the services. You can't
click and collect yet, but there are some aspects of it. So it's taken them a while to claw their
way back and they've still got a way to go. Yeah. Well, when you've got to rebuild your whole environment, like, while you're being attacked, it does tend to take some time, right? So, yeah.
Now we're going to talk about, look, I actually think this is one of the most interesting stories
of the week. And you're like, when we've talked about this previously, you're like, oh, well,
it's dumb, but it works. I don't think it's dumb. I think this is the sort of thing we're gonna see
an awful lot more of.
And that is hackers using voice phishing
or social engineering really.
Let's not call it voice phishing.
Let's just call it social engineering.
To trick people into authorizing a malicious app into their Salesforce tenant, connecting an app into their Salesforce tenant that allows them to siphon off the data.
Now, the reason I find this such an interesting story
is because I don't really know how you would go
about addressing this threat,
beyond making sure that the only people who are authorized
to make those app authorizations
really understand what they're doing.
And that is not gonna be possible everywhere.
So, you know, on one hand,
oh, okay, it's a bit of a dumb attack.
Well, kinda, but it really does exploit,
you know, it requires knowledge of Salesforce,
how it works, how these app authorizations work.
And, you know, Salesforce are doing the typical thing of like,
well, there's no vulnerability in Salesforce.
It's like, no, this is more of a like,
your architecture allows this to happen kind of issue.
I mean, walk us through this one, man.
Cause like this tale, this goes back to,
I think stuff that was first highlighted
by Salesforce back in March,
but now Google, I think it's TAG again, isn't it?
I think so, yes.
Yeah, so Google is actually sounding the alarm on this
because they're starting to see it.
So yeah, walk us through exactly what's happening here.
Yeah, so the deal here is that essentially
people are social engineering you to, you know,
authorize an app into your Salesforce.
But why this is really clever is,
and, to your point, like, this is dumb technically,
but it's also really clever.
The clever part is that we have mechanisms
to address credential theft now.
We have things that deal with it; multifactor auth ultimately is about preventing credential theft from being useful. Phishing for code exec is much harder than it used to be.
All of the software as a service apps
have a bunch of other protections around it.
But that whole,
how do we share data between applications?
How do we authorize stuff, things like OAuth,
app permissions, and so on.
That's the bit that is still soft, and this is taking advantage of it.
And so this is targeting that authorization process
with basically cloned copies of Salesforce's real apps.
So like things that you'd use for integration
and calling the Salesforce API.
So attackers will take one of those, modify it to be able to do the other things that they want, like extract data in bulk, whatever,
and then get that authorization process
through the regular flow just to a different app
or a different endpoint or whatever else
that the attacker is running.
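Roughly, the authorization flow being ridden here looks like the sketch below, assuming the OAuth 2.0 device flow that Salesforce connected apps (like the real Data Loader) support. The client name is hypothetical and there are no network calls; the point is what the two requests look like.

```python
# Rough sketch of the OAuth 2.0 device flow that Salesforce connected apps
# use, and that a cloned app can ride through social engineering.
# No network calls here: we only build the two requests involved.
# Endpoint and parameter names follow Salesforce's documented device flow,
# but treat the details as illustrative, not a working attack.
from urllib.parse import urlencode

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

def start_device_flow(client_id: str) -> tuple[str, dict]:
    """Step 1: the app asks for a device code; Salesforce replies with a
    user_code that the victim is then talked into approving in their tenant."""
    return TOKEN_URL, {"response_type": "device_code", "client_id": client_id}

def poll_for_token(client_id: str, device_code: str) -> tuple[str, dict]:
    """Step 2: once the victim approves the code, the attacker's app polls
    this and receives a fully authorized access token for the tenant."""
    return TOKEN_URL, {"grant_type": "device",
                       "client_id": client_id,
                       "code": device_code}

# Both requests are perfectly legitimate protocol traffic; the only weak
# point is a human approving a code for an app they don't understand.
url, params = start_device_flow("cloned-data-loader")  # hypothetical client
print(url + "?" + urlencode(params))
```

Which is why "there's no vulnerability in Salesforce" is technically true and practically unhelpful: every step above is the system working as designed.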
And to someone who is being socialed,
like computers are mysterious enough to start
with, but most people understand username and password, and they can recognize that giving someone my password is going to do a bad thing and I shouldn't do that, or I should at least be aware that something happened to me. But something
like OAuth app authorizations or whatever are so opaque to most people and
they're opaque to us like we work and report in this field.
And I was using Google Cloud stuff the other day,
and I had to click through some authorization stuff
into our Corpo G Suite.
And it's like, I'm going to just click yes,
because I don't understand what it's asking me.
And I'm an expert at this.
And if you socialed me like that, it would hella work. Because cloud software moves so quickly, it's so opaque, there's no way to inspect it. You know, if you showed up and
asked a Windows user, can I apply this particular, you know, NTFS tree permission mask or whatever, you'd have no idea what it meant. And we're in the same world now, except that it's all software as a service, so we can't see the insides.
Yeah. I mean, I think, I think, so, you know, I do, I work with Push Security as an advisor,
and they've been saying some interesting stuff like about how, you know, attackers are going
after like OAuth grants, like not through your primary IDP. So, and this is a perfect example
of that, right? So you might log into Salesforce through some OAuth grant
or SSO process, but then you're doing a separate authorization, which isn't tied back to your primary IDP account at all.
So the only way you're gonna be able to address this,
I think, and I don't know that push, I mean,
like maybe they do, they'll ring me up and yell at me
if I get this wrong, but I don't think they've got anything at the moment
that would defeat this in particular.
I think it would be easy enough for them to do something here,
but then you've got to look at like,
well, where does the intelligence come from
that can let you know if this app someone is trying
to authorise into a Salesforce tenant is okay, right?
And we've got other issues that are very similar to this.
It's all browser stuff, right? So you look at the issue of Chrome extensions.
A while back we chatted about some work by a guy called John Tuckner, who runs a small company called Secure Annex, and they do sort of threat intel, I guess you would call it, on Chrome extensions, or browser extensions generally, and other extensions in the developer world and whatnot. And it's really interesting, because what they can do is look at, well, has the ownership changed of this app that you're using? Is this one known bad? Is it using weird code? Like, they can do some sort of analysis of them, and they're getting some pretty high-fidelity and reliable data. But then, okay, what do you then do with that information? Right.
So if you've got a situation where you're allow listing apps,
like you could use that information to figure out when one of them has turned.
But like in terms of like,
if you want anything at all sort of open and permissive at what level do you
start instrumenting that? Yeah. You know, and it, and it gets really hard.
So again, I find this an interesting
story because like, we don't quite have a standard agreed upon approach yet for dealing with this.
You need something that can do the control of the OAuth event. You need something that can block it,
but you also need the intelligence to let you know when you should block it. And we don't really have
either of those things just yet. Yeah. And also in the case of SaaS vendors like Salesforce,
we also need basically their cooperation.
Like they need to care about, explain, document, monitor,
alert on all these kinds of events inside their apps.
When we're used to layering controls around the outside,
we have things like OAuth that standardize
some aspects of that process,
but like this bit might not even be OAuth.
Like it may be some Salesforce internal business
that someone like Push is never gonna know about
until they've got customers that particularly demand it.
And then at that point, you are distributing
the effort required to implement these controls
across hundreds of apps.
Yeah, I mean, well, I mean, yes or no.
I mean, so much of this stuff, you know, if you take the top hundred, right, whether it's
GitHub, Snowflake, Salesforce, right, like you can cover most bases there.
But like, as you said, this might not actually be OAuth.
I mean, I've been talking about it as being an OAuth grant.
It might not actually be OAuth.
It might be some sort of other authorization process.
But the point is, like, even if you're in a position to instrument that and block it,
how do you know what to block?
Yes. Yeah. On what basis are you going to make a choice?
And then if you're going to have to ask the user, is this what you meant? Well, at least then they were involved in that.
Like, yeah, this is the consequence of fixing passwords, right?
Of making phishing-resistant auth a standard kind of thing, of us moving to YubiKeys and FIDO tokens and passkeys and so on.
If we make stealing auth difficult, do attackers stop?
Especially ones that are, like, I think this is the Com-esque group.
People who are pragmatic and kind of don't care about our poor infosec people having
to do this the right way or the cool hacker way from the 90s.
Right, this is people who just want to get it done and they'll do what it takes.
And hey, if you can't steal passwords, why not steal authorizations or integrations or API tokens or whatever else you can get.
Yeah.
And I will say too, I think what John Tuckner is doing on the sort of browser extension side over at Secure Annex, I think it's really cool.
I wish him all the best with that.
I mean, I think it's interesting that he seems to be taking the sort of intelligence
approach, which means that if you're a vendor, which is in a position to do stuff
around extensions, you know, you can take that data and do stuff with it.
I think mostly what their customers, early customers, it's early days, mostly what they're
doing is just like, we use these apps, keep an eye on them.
That seems to be the early business case there, but I think we need more thinking like that.
So I think it's, I think it's very cool.
But you know, we spoke about attributions and threat intel
and whatnot last week.
I just want to read you a paragraph from this Cybersecurity Dive report by David Jones about all of this, which says: Larson said there are broad overlaps between the Salesforce hackers and an underground collective known as The Com, which includes the notorious cybercrime gang dubbed Scattered Spider. Larson cautioned, however, that the threat actor involved in the Salesforce
attacks is a distinct group from the threat group tracked as UNC 3944,
which overlaps with a subset of Scattered Spider activity.
Well, I'm glad we cleared that up. So look, as I've always said about the Com stuff, Scattered Spider, Lapsus$, it's not a group, it's a vibe.
Yeah, exactly.
It's a feeling.
It's a feeling, it's a vibe.
All right, so yeah, we've linked through to the report on Cybersecurity Dive and also the March post from Salesforce about all of that.
Now, another bit of interesting technical work from Shubham Shah,
who is a terrific Australian hacker and the co-founder of Assetnote.
He's built a tool that allows you to inspect the internet from the IP ranges
of cloud computing environments and CDNs and whatnot that are typically
trusted in ways that they just shouldn't be. So why don't you give the background on why this is interesting?
Yeah, so there's a lot of places where, you know, in the old days we relied on origin networks or, you know, source network addresses to kind of make access control decisions. And then as things became a bit more fluid and people started pushing stuff up into the cloud, a lot of times we kept those traditional controls, but now we're using them from an environment
where we don't strictly control the origin networks anymore.
Those origin networks are now operated by Google or Amazon
or Microsoft or whoever else.
And you can just go buy access into those environments.
And so a lot of people end up with access control rules or policies where they've just opened it up to a whole swathe of the network,
and also understanding how big those ranges are,
where you're going to be coming from,
if they're dynamic, it's all complicated.
So a lot of people just let stuff in.
So if you pop up in one of those environments,
sometimes you'll get elevated access.
And Shubs' tool, you basically give it, here's a list of URLs I care about, here's a service
that I'd like you to check from.
And it will just go spin up connections through EC2 instances or through Amazon's other APIs
or through Azure functions or whatever else.
And from different regions as well.
From different regions, from different locations, like near your target, far away from your target,
and just try and find some combination that gets you
something that you didn't expect, which, yeah,
it's the sort of very pragmatic hacking
that we expect from that crew, from Shubs and pals.
Yeah, I mean, I think it's, I thought it was really cool.
I messaged him, I said, this is really cool, man.
You know, like, I dig it.
I mean, he points out that GitLab's official advice is to whitelist the entire GCP region that their shared runners are in.
Yes, exactly. Right. Yeah.
You know, come on. So I guess what he's getting at is, you know, you might find, you might discover through whatever process, a domain name, that for some reason, whatever it is, you can't hit it.
You know, but then you throw that into, yeah, as you said, into this tool and you might say, oh, look, I can reach it from AWS East or whatever. And then, you know, it's party time. I mean,
tricks like these, we've also seen being used to discover the origins. I mean, similar techniques,
I guess, used to discover the origins of things that are behind CDNs and whatever. But you know, you might even find, yeah, you might even find access to different services
based on this, right?
So you might not be able to hit SSH from the raw internet,
but you can from a GCP IP range, right?
You're gonna see stuff like that.
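The comparison at the heart of a Newtowner-style test can be sketched in a few lines. The vantage names and statuses below are made-up sample data; the real tool does the hard part of actually sourcing traffic from each cloud provider and region.

```python
# Minimal sketch of the core Newtowner idea: request the same endpoint from
# several network vantage points (public internet, various cloud regions)
# and flag any vantage that gets a different answer than the baseline.
# Vantage names and statuses are hypothetical sample data.

def find_discrepancies(results: dict[str, int],
                       baseline: str = "public-internet") -> dict[str, int]:
    """Given {vantage_point: http_status}, return vantages whose response
    differs from the baseline, i.e. places the target implicitly trusts."""
    base_status = results[baseline]
    return {v: s for v, s in results.items()
            if v != baseline and s != base_status}

# e.g. an admin panel that 403s the internet but trusts a GCP region
# because someone whitelisted it for "internal" CI runners:
observed = {
    "public-internet": 403,
    "aws-us-east-1": 403,
    "gcp-australia-southeast1": 200,
}
print(find_discrepancies(observed))  # -> {'gcp-australia-southeast1': 200}
```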
Yeah, absolutely, yeah. And you will find treasure. I mean, I know from my pen testing days, sometimes just popping out of Amazon or popping out of a cloud provider somewhere else, you get treasure.
You do. You do.
You find joy.
Now, speaking of treasure, let's talk about Ross Ulbricht because he has received a mysterious $31 million donation from someone who's linked to a different underground
marketplace on the internet.
And the whole thing is extremely suspicious.
People are saying, well, it just might be someone who made a lot of money out of doing
online crime, who considered him an inspiration.
I mean, it could also be that he gave someone 300 bitcoins to hold for him and he's getting
repaid now that he's out of prison.
So, I mean, it feels like a little bit escrow service.
It does a little bit, yes, like a debt coming back.
And I guess having been in jail for all this time, you know, that enforced hodling, if
it was a debt that's being repaid, it's done quite handsomely for him.
$30 million worth of Bitcoin, like, that's nothing to be sneezed at, donated to his, you know, to his funds by mysterious persons. I think it was Chainalysis that did the work to kind of dig through the various mixers and tumblers and other, you know, kind of exchanges it had been through, and pointed the finger back to someone who was selling on AlphaBay way back in the early 2010s.
Yes.
I mean, that amount of Bitcoin when Ulbricht was busted wasn't actually worth all that much, which is why it feels like, here, hold these 300.
But I mean, I'm just speculating.
I've got no idea.
It could entirely just be a donation.
But there you go, from prison to pardon to, you know, presumably a life of luxury now.
I would say though that if this were to occur in Australia
And I'm not sure what the laws are like in the United States on this
Given these funds are linked to criminal activity like they'd just be seized by the government
They would say these are proceeds of crime. Like, you know, there's proceeds of crime legislation here, which would mean yoink.
You clearly did not do the work to earn this money in a sensible way.
We can't point to exactly how, but.
Yeah, exactly.
Sorry pal, you can't keep it.
I mean they do that all the time.
They roll up on a bunch of bikers or whatever and just say, that's a nice Bentley there
pal.
How did you buy it?
Bring the tow truck.
Now look, we're going to end with just a funny story, I think, which is that an Australian
Navy ship managed to, like, DoS radio comms in a fairly large area of your country the
other day, Adam.
Yes, I think it was quite a big ship in the Australian Navy.
It was actually still out in international waters, had its radar on, and a number of kind of regional wireless
internet providers that use the 5 GHz band had all of their services disrupted by this
radar.
And there was a lot of whinging and blaming the Australians for being irresponsible.
Like someone said, oh, the Australians never leave port so they didn't know that they
were supposed to turn it off when they're near land or something. But the actual, the kind of reality of this here is that these
are ISPs using dirty bands that are meant for radar use. And in order to have like the
license for using the spectrum, the spectrum range in New Zealand basically means if a
radar wants to use your bands, you have to GTFO the band. And you have to use what they call DFS, dynamic frequency selection.
Which basically you just have to shut up and change frequency when you see a radar.
And of course the Australian radar was bigger than all of the available frequencies.
And so these wireless ISPs were basically just had to shut down and wait.
And if they wanted that to not happen, they could have paid the $150
a year for an actual allocated spectrum license where the radar wouldn't stomp on them. So
there was a lot of QQ from the wireless providers in question.
That's funny, right?
Ultimately, the Australians just said, oh, sorry, and turned the radar off.
It's funny because this is like perfectly encapsulates the Australia-New Zealand relationship,
which is family, you know?
We're family.
We're right next to each other down in this sort of isolated part of the world.
We love each other deeply, but we love to scrap, you know?
Yes, we do.
And blame each other for this kind of shenanigans.
And I'm sure when a New Zealand ship visits Australia sometime, we'll just leave our radar
on, assuming that we can afford the wattage.
We probably can't afford the fuel to run the radar.
But I'm sure we've got radar.
You're building a ship.
When does it launch?
I mean, after that one sank on the reef in Samoa, or wherever it was, it was meant to be surveying a reef and it sailed into the reef. And then, well, we did have a ship. Yeah, we did have a ship.
Yeah.
Anyway, we're going to wrap it up there.
Love you Kiwis, by the way.
Fantastic.
Well, we do, you know, and I will just say too, it's been like, I still hang around a
bit on X and what's been really weird is watching the American right posting a lot of videos
of like Kiwis doing the
Haka and like sort of ridiculing it and whatever. And I just say like, you know, that pisses
me off as an Australian, like to a very high degree. I just think, you know, leave them
alone. Uh, which is also part of the, I guess the, the, the relationship, isn't it? Which
is like being mean to Kiwis is our job and we don't do it like that.
That is too far.
Don't do that.
But yeah, big love to all the Kiwis
and sorry we dosed your wireless internet with our Navy.
That's it for this week's show.
Adam, thank you very much for joining me.
It was great to chat and yeah,
we'll do it all again next week.
Now you're most welcome, Pat.
Talk to you then. That was Adam Boileau there with a check of the week's security news.
Now it is time for this week's sponsor interview and we're chatting with Alex Tilley who these
days works in a threat intelligence role at Okta.
But some listeners might remember Alex
from his time at the Australian Federal Police where he worked as an investigator for many
years and was a regular fixture around the security and hacker cons in Australia and New
Zealand. And yeah, a friend of mine as well who I haven't spoken to in a few years so
it was really good to be able to do this interview. So yeah, as I mentioned, Alex has a background as an investigator. So when he took this job with Okta, it meant that he was able to access all
sorts of data around identity events and you know with the remit of like well what
can we investigate here? What can we find? He's been spending a bunch of time
looking at what North Koreans are doing in terms of these fake IT worker scams.
So both from the perspective of them putting up job ads that they want people to apply to
so they can drop malware on them, also from the perspective of just getting in there,
getting salaries and also getting in there and dropping malware and on and on and on.
So yeah, Alex joined me for this interview that I let run a little long because I thought it was very, very interesting
where he talks about all of his work, analyzing, you know, Okta data to tie together what these
North Korean operations look like. Enjoy.
From the research, there's three different angles that I'm really looking at here. One
is you've got North Koreans applying for jobs at legitimate companies. So they just search
for remote full stack developer jobs and then probably spam applications.
That sort of ties in with the second model of it, which is they're hosting fake job ads and
they're generating these opportunities that don't exist to draw in legitimate applications
to then understand, okay, well, what does a good CV look like for a full stack developer?
What does a good CV look like for someone with this many years of experience? What sort of school should I have gone to? These types
of things to build that quorum of knowledge as to who they're supposed to be when they
become the job applicant. And then there's the third angle on this, which is the whole get people to apply for jobs and then deploy malware on their machines to also conduct reconnaissance. So, like, it's full-featured RATs that they deploy to then, you know, harvest either, you know, what does their development system look like on their laptop, what sort of tooling are they using, or anything that they can do to make their own personas look more legitimate and get past those instant sort of allow-or-deny gates on these applications.
Yeah, there's that whole, if you want this development job, you got to download this
image and do the coding challenge. And it's just, you know, a giant blob of malware and whatever.
Yeah, go to the Git repo and it's got, you know, a little 5k of obfuscated JavaScript goodness added into the end of it, and off you go.
Yeah. So, you know, you're looking at this, and I mean, you're working at Okta, obviously. How do you then, when your main source of data is login events and identity events like that, how do you begin to start tracking all of this behaviour? How were you able to tie this together and actually draw some insights out?
A lot of times it's around talking to customers and saying, hi, you know, we've seen some accounts being used, we're trying to figure out what it is they're up to. And obviously most of these customers won't tell us what this individual is doing, very much because of privacy. And of course, you know, no one's sure that these accounts are North Koreans. At that point, it's about, you know, does this look strange to you? And they'll say, yeah, it looks strange.
That person was looking at job ads or that person was generating CVs,
et cetera like that.
And with that, and then the subsequent login events that we see from other
customers, we can build a picture of their whole toolkit.
What are the services that they're using to generate these identities to make
them look more attractive, to look more legitimate, to then go forth and apply for jobs and get employment.
So it's around sort of doing that homework piece
in the backend.
So it's not just about that spam CV to a job ad somewhere.
It's about this whole backend infrastructure
of trade craft being set up to make sure
that that spam CV looks good
and we'll get them past that first line.
So, you know, you talked about going to a customer
and saying, hey, this identity,
we think it might be up to no good.
How are you actually, you know,
landing on that detection in the first place?
How are you actually getting this preliminary list of like shady identities?
It's generated in a lot of different ways, through a lot of different methods. Really, sometimes people say, hey, I've identified this account as being a probable DPRK one, and we have a look at it for them. And we say, okay, well, there's a really base level of about five or six different services that a lot of these individuals will use to start with. And I can identify those five or six services and say, yes, it's an account using, let's say, this particular VPN provider that someone in the know has identified as being probable DPRK, and I can see it using one of these, you know, commonly used services over time. And then I can pivot off that and say, okay, well, what other IPs were they using? It's basically, you know, an investigation. Yeah, yeah, yeah. So you can say,
we think this one's sus because it's doing all of the things that we normally associate with
DPRK, DPRK like behavior. What else have you got on this identity? And then away you go. So,
um, you know, you mentioned the three angles to all of this.
What have you been able to learn
through doing this research?
It's funny, actually, I spoke to your colleague,
Brett Winterford, about you coming in to do this interview.
And he told me that when you came on board at Okta,
you were just like, oh my God, data, so much data.
But what have you been able to learn
from all of this Okta data in terms
of like how these schemes are operating and, you know, also where the vulnerable points
might be?
Yeah, it really has been fascinating to me to learn and see them using this backend, almost like a development life cycle, to really get these applications through all the wringers and jump through all the hoops that we as hiring companies may put them through, to make sure that their applications look the best and have the most chance of success. They know which
HR systems have what sort of scoring. They know what a good CV looks like. They know
sometimes what VPN service to not use, because it's been publicized which ones they do use, so they sort of move away from those VPN services. Just like in, you know, the olden days of publications of where botnets sit and do their bulletproof hosting, they tend to migrate away from that. It's similar in this case. And then they're using all these different systems around webcam identification, like, if I conduct an interview with an AI that's tuned to look for webcam fakes or webcam filters,
et cetera, will I get passed? Will I get detected by these systems? And when I look at these
customers, say, okay, well, here's a probable DPRK email address or identity using this particular
service that advertises that they detect deep fakes or they detect fake webcam filters and overlays,
et cetera.
That to me shows that they are using those systems to try and say, okay, well,
will I get picked up? Will I get past that step? And it's all about just advancing one step after another to try and get closer to an individual for an interview,
to try and keep a job for a few months.
Yeah. I mean, what I find fascinating about this, right, is that you've got a situation where you can track a single identity going to all of these different touch points, and, you know, you told me earlier, you can infer from how much time they spend where kind of what they're doing and what the life cycle is of establishing these identities.
What I don't quite understand is why they're using the same identities to do all of these different things at each touch point, to do all of this research, and I'm guessing it's got to be just kind of laziness.
It does appear that some people have a certain way that they like to do business, and some of the facilitators like to just use one or two email accounts as their account to sign up for all these services, to try them and then use them until they get burned or get turned off or stop working.
It does just seem to be, this is how I do my job.
The interesting part about it though, is that the reverse is also true.
When I look at some of these identities and what they're doing, there's a
lot of discipline there.
And by that, I mean, they're only really using either job related stuff or actual
work connection sort of stuff that we sort of see
through customers or whatever.
But we're not seeing them doing a lot of just personal browsing or at least connecting to
services that we have a touch point on.
So very, very much is like the actual workers themselves are sticking to that particular
path of only using those accounts for work purposes, which is interesting.
So there's some segregation there, which will stop you from being
able to pin down their real identities and correlate those to their work identities,
but you can still track their work identities across a large number of different services.
Exactly, and understand what they're doing over time.
So the question becomes, like, what do we do with all of this information, now that you've built a bit of a pattern for these identities?
I'm guessing you're able now to run some sort of batch threat hunting, you know, run some custom queries against Okta data, and a whole bunch of these identities fall out.
I mean, is that what you're doing?
And then from there, what do you do with that?
Yeah, we're definitely working in the direction of making it much more automated. It's really about trying to understand, as they shift their tradecraft over time, and making sure that we keep up with which new customers they're using, and where they're moving to now, so we can make sure that we understand what they're doing right now.
A lot of that is historical looking, to sort of see, okay, well, you might pick up an identity a month later. You know, someone might say, hey, we just sacked this person for being a potential DPRK worker, and then I can look back historically and say, okay, what were they doing? So it's a little bit historical in that respect, to try and get that fingerprint of what they were doing, and then understand what we can do with that going forward. So things like keeping corpora of email addresses, that sort of stuff, are definitely useful for future-proofing, because they do, as we've discussed, tend to use similar email addresses for quite a while. So that's definitely useful, for sure.
Yeah, so I'm guessing from an Okta perspective the priority is to be able to, where possible, flag to customers: hey, we think this identity that's interacting with you is probably a North Korean fake worker. And, you know, I'd imagine you would want to squash those identities, and then just be able to detect and notify. Is that kind of the thing?
I'd imagine you'd be passing some of this stuff to law enforcement as well.
Definitely part of it. We're definitely trying to work down those paths to try and get some sort of remediation done. But a lot of what I'm doing now is understanding that this is not just a high-tech industry thing, this is not just a FAANG thing, shall we say. This is all verticals. And that's
the interesting part about what's falling out of the research is that when we actually
see them logging into, let's say a client that they've successfully got a job with,
and we look back and say, okay, well, what was that particular company advertising for? We're seeing manufacturing, we're seeing healthcare, we're seeing all kinds of different verticals advertising for full stack developers.
People seem to think this is a large-tech problem, but it's not. Any vertical that's advertising for these roles is getting these applications, and probably is going to at least interview one or two of these people.
So it's about getting that information out there.
People saying, hey, hiring managers, you need to be aware that if something looks funny
and smells funny, it could be worth looking into.
And that's a really big part of it.
People seem to think that it's just the big end of town's problem, but it really is individual small organizations' problem as well. And that's sort of why I like to tell people what to worry about. And in this case, it's sort of like, yeah, if you're advertising
for these remote developer jobs, it doesn't matter what size your organization is, you
need to be aware of this particular problem because it's probably going to get worse because
they seem to be having some success.
Yeah. I mean, look, as best I can tell, the MO seems to be: raise money for the motherland. Quite often they'll just take the jobs, half-ass it, only do a few hours' work a week, and they've probably got five or 10 jobs on the go at once, and it's just about raising revenue.
Then they might drop some shells, get some persistence,
see if there's anything else they can do there.
I mean, I'm wondering if ransomware is gonna feature
more heavily in all of this in the future.
And of course, if they land somewhere
that's at all adjacent to cryptocurrency, they're
going to try to pivot into theft.
But when it comes to these businesses that you've just been talking about, like, you
know, not exactly Ma and Pa, but like SMEs, I mean, what's the actionable advice you can
give them that will really help them to defeat this?
Because I guess ultimately what it comes down to is
just really carefully vetting remote hires, trusting your gut and then monitoring them
once they've started.
That's the key point, right? And that's probably one of the key parts of this: it's about vetting upfront, understanding, okay, are there any red flags that we've got to look for? Is there anything pinging straight away saying this is weird? But it's also about, okay, a month down the track, is the person that you hired today still the person doing that job in a month's time? And that's the bit where a lot of places seem to fall down: they may invest heavily in time and effort initially to verify the person, but they don't really come around and do another quick round of verification later on.
I mean, is it possible, from what you're saying, that the person who got through the application process and the interview and got the job isn't the person who's then on the tools doing the work?
Entirely possible, entirely possible. We've seen that through all kinds of other crime types. You know, we saw it back in the days of money laundering, et cetera. Or in the days when I was involved in money laundering.
When you were involved in countering money laundering,
I think we need to be clear there.
That was around, hey, person, hey, student, here's 500 bucks.
Could you go and open a bank account for me?
Show up, give your identity, and then just give me
the card and the means of accessing the account, right?
That was a very basic way it was working.
We're seeing it similar with this.
It's like, hey, person X, could you be me on this webcam interview? Could you answer these questions, et cetera? And then hand over control of those accounts. That's particularly one way that I believe it's happening. It's hard to see that one happening all the time, but it definitely is the case where you see one person who is, shall we say, a white guy from Texas, who then is a different, non-white guy from Texas two months later. And that's the interesting part about this: trying to track those changes over time. And a lot of organizations struggle with that, obviously, because of profiling reasons, et cetera, but there is that scenario of: is that person the same person two or three months later on?
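The re-verification idea discussed here, checking whether the person doing the job still matches the person who was hired, can be illustrated with a minimal sketch. Everything below is hypothetical: the signal names, the `Snapshot` fields, and the example values are invented for illustration and are not Okta's schema or tooling.

```python
# Illustrative sketch only: field names, signals, and values are invented
# for illustration; this is not any vendor's schema or product behaviour.
from dataclasses import dataclass

@dataclass
class Snapshot:
    """Coarse signals captured at hire time and again at later check-ins."""
    asn: str          # network provider the worker connects from
    timezone: str     # OS-reported timezone at login
    webcam_hash: str  # hash of a verification webcam frame

def drift_flags(baseline: Snapshot, current: Snapshot) -> list[str]:
    """Compare a later snapshot against the hire-time baseline and return
    a human-readable flag for every signal that has changed."""
    flags = []
    if current.asn != baseline.asn:
        flags.append(f"network changed: {baseline.asn} -> {current.asn}")
    if current.timezone != baseline.timezone:
        flags.append(f"timezone changed: {baseline.timezone} -> {current.timezone}")
    if current.webcam_hash != baseline.webcam_hash:
        flags.append("webcam verification frame does not match hire-time capture")
    return flags

# Example: the person "hired" no longer matches the person logging in.
hired = Snapshot(asn="AS7018", timezone="America/Chicago", webcam_hash="a1b2")
month_later = Snapshot(asn="AS4134", timezone="Asia/Shanghai", webcam_hash="ffee")
for flag in drift_flags(hired, month_later):
    print(flag)
```

The point of the comparison against a hire-time baseline, rather than against fixed rules, is exactly the one made in the interview: the question is not "does this worker look suspicious in the abstract" but "is this still the same person we verified a month ago".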
Yeah. All right. So look, any final words for people out there? I mean, it's difficult, right, because one of the things is this is a well understood issue in cybersecurity, but the people who are best positioned to do something about it are HR teams, not cybersecurity teams. But are there any parting words that you would have for people on the security side of things, on how they might help their colleagues in HR deal with this, detect it and whatnot?
Yeah, and you've hit the nail on the head there. It is our colleagues in HR. It is definitely about us as security people getting involved with the HR people as much as we can, to understand, well, what data sources are available to us there? What can we actually log into to see? Can we as a security team add value and say, hey, this person's logging into an interview via such-and-such a VPN connection, that's weird. Or this person seems to be using a webcam filter, or their CV was created by the same person as other CVs. You know, these are technical tricks that we can do as security people, if we get involved with our colleagues in HR and hiring managers. Really, it's about being aware. And unfortunately that is the hard bit about the job that I'm trying to do here: to say, just be aware, have a look, and don't think that you're not getting targeted, because you probably are. And understand: what are my pathways internally when something looks strange?
Maybe a coworker says, hey, you know, that particular guy on my team hasn't been at a team meeting for six weeks, or the code seems to be just generated out of an AI, or something like that; something's strange here. Do you have a pathway for those team members to raise a red flag and say, hey, can someone have a look into this person? Because obviously we're in different time zones, all kinds of different things going on, but something doesn't smell right. Technically, we can do a fair bit to identify that there's something strange here, but really it is the coworkers and the managers who can see that.
And the extra bit I will say is that if I call you, please answer the phone.
I'd love to talk to you and try and help you.
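The technical signals Alex lists (interviews run over a VPN, webcam filters, CVs sharing an author, long absences from team meetings) could be combined into a simple "worth a second look" score. This is a rough editorial sketch under stated assumptions: the signal names and weights are invented, and a real detection would draw on far richer telemetry.

```python
# Illustrative sketch only: signal names and weights are invented for
# illustration; the output is a prompt for a human to take a look, not
# a verdict on any individual.
SIGNAL_WEIGHTS = {
    "interview_via_vpn": 2,
    "webcam_filter_detected": 3,
    "cv_author_matches_other_applicants": 3,
    "missed_team_meetings_weeks": 1,  # per week, capped below
}

def risk_score(signals: dict) -> int:
    """Sum weighted hiring-fraud signals into one coarse score."""
    score = 0
    if signals.get("interview_via_vpn"):
        score += SIGNAL_WEIGHTS["interview_via_vpn"]
    if signals.get("webcam_filter_detected"):
        score += SIGNAL_WEIGHTS["webcam_filter_detected"]
    if signals.get("cv_author_matches_other_applicants"):
        score += SIGNAL_WEIGHTS["cv_author_matches_other_applicants"]
    # Cap the meeting-absence contribution so one noisy signal
    # can't dominate the whole score.
    weeks = min(signals.get("missed_team_meetings_weeks", 0), 6)
    score += weeks * SIGNAL_WEIGHTS["missed_team_meetings_weeks"]
    return score

candidate = {
    "interview_via_vpn": True,
    "webcam_filter_detected": False,
    "cv_author_matches_other_applicants": True,
    "missed_team_meetings_weeks": 6,
}
print(risk_score(candidate))  # 2 + 3 + 6 = 11
```

A score above some threshold would feed exactly the internal pathway described in the interview: a flag that routes to a human who can "have a look into this person", rather than any automated action.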
There you go.
Yeah.
No, look, everything that you've just said makes sense.
Security people need to think about what services they
can offer to the whole organization,
how they can raise awareness, how they can tell people, hey,
this is something that's happening.
And here's the number you call, or here's the person you email when you're sus, and this is what we'll do to look into it.
Alex, we're out of time, mate, but it was fantastic to see you. Alex is an old mate of mine and in fact lived very close to my mum's house in Melbourne many years ago, so when I would travel down to Melbourne we'd always make sure we snuck in a few beers. Great to see you again, my friend, and we will chat again soon.
Thank you. Thanks Pat.
That was Alex Tilley there from Okta with this week's sponsor interview.
Big thanks to him for that and big thanks to Okta for being this week's sponsor.
And that is it for this week's show.
I do hope you enjoyed it.
I'll be back soon with more security news and analysis.
But until then, I've been Patrick Gray.
Thanks for listening.