CyberWire Daily - MaaS infrastructure exposed. [Research Saturday]
Episode Date: August 24, 2024

Robert Duncan, VP of Product Strategy from Netcraft, is discussing their work on "Mule-as-a-Service Infrastructure Exposed." Netcraft's new threat intelligence reveals the intricate connections within global fraud networks, showing how criminals use specialized services like Mule-as-a-Service (MaaS) to launder scam proceeds. By mapping the cyber and financial infrastructure, including bank accounts, crypto wallets, and phone numbers, Netcraft exposes how different scams are interconnected and identifies weak points that can be targeted to disrupt these operations. This insight provides an opportunity to prevent fraud and protect against financial crimes like pig butchering, investment scams, and romance fraud. The research can be found here: Mule-as-a-Service Infrastructure Exposed
Transcript
You're listening to the Cyber Wire Network, powered by N2K.

I was concerned about my data being sold by data brokers, so I decided to try DeleteMe. I have to say, DeleteMe is a game changer. Within days of signing up, they started removing my personal information from hundreds of data brokers. I finally have peace of mind knowing my data privacy is protected. DeleteMe's team does all the work for you, with detailed reports.

Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and analysts
tracking down the threats and vulnerabilities, solving some of the hard problems,
and protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
What the team have been doing is working on a generative AI platform where we can talk to and from criminals and actually work out what is happening in that
scam by kind of playing the victim, playing along with the conversation and seeing what happens next.
That's Robert Duncan, VP of Product Strategy at Netcraft.
The research we're discussing today is titled Mule as a Service Infrastructure Exposed.
And by extracting that kind of detailed insight over the course of conversation,
you can then start to link conversations together and then identify groups.
And then that's what's led to this discovery of these centralized mule account services. So I think to kind of take a different
twist on that and think about what that looks like. When you're thinking about, you know,
you're a cyber criminal, you want to eventually extract some money from the victim.
The types of cyber crime we're talking about here are not nation state actors.
They're typically financially motivated and their overall goal is to extract money.
There's lots of different ways to do that.
So gift cards is an option.
But one common option is to use money mule accounts.
So these are bank accounts that are operated by third parties,
so not directly by the criminal,
but are kind of operated by a third party.
And they may be in on the scam
in the sense that they know that they're involved in a criminal activity.
And in other cases, a mule might have themselves
been a victim of a scam, for example, a job scam,
where they've been tricked into thinking
that their legitimate job involves
sorting payments and changing bank account destinations
based on instructions they receive.
So there's a variety of different types of mule
that kind of fall out of that. And what we've been able to do is link those mules together
in different conversations. And so, for example, being able to say, we've seen a mule account,
for example, at a bank in Italy that has been used by threat actors who are based
in Africa and in a separate group by threat actors based in Spain. And the only link between those two groups is the fact that they
used a single central mule account.

Well, before we dig into some of the details of all of these dots that you have been connecting, looking through the research here,
I was struck by an image that you shared,
which was an ad on a social media network.
The image has the logos of several prominent banks,
and there's text added to it that says,
do you have a current account in one of these banks?
It's easy for you to make 1,000 to 1,500 euros in 48 hours.
I mean, I can see that being an attractive lure for folks who may be in a bit of a financial struggle.
Yeah, exactly. So that's kind of an interesting element of looking at this from two different
perspectives. One is, how do these mule accounts become mules?
And it's exactly this, via these advertisements on social media platforms.
Another common idiom is where students are recruited.
For example, when a student's leaving a country,
if an international student's leaving a country,
they often leave behind bank accounts.
And so in some cases, those may get left dormant and then are sold effectively as mule accounts that can be used
when the kind of real account holder has left the country.
There's quite a few different idioms for how that happens.
I think what's interesting is being able to look at the types of mule accounts that you get
and then where they appear in scams can be quite different.
So for example, we can see a mule account in Italy
being used in scams in Spain,
and there's a fairly strong international element,
certainly to the data that we've been investigating.
There's lots of cross-national payments,
and that helps explain why this type of fraudulent activity
is quite hard to mitigate,
especially when it crosses international borders.
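The linking Duncan describes can be pictured as a small graph problem: two conversations with otherwise unrelated threat actors end up in the same group whenever they hand over the same bank account, wallet, or phone number. Here is a minimal, hypothetical sketch of that idea; the conversation labels and account identifiers are all invented for illustration, and Netcraft's actual pipeline is not public:

```python
from collections import defaultdict

# Hypothetical records: each scam conversation yields the payment
# identifiers (mule accounts, wallets, phone numbers) it used.
conversations = {
    "conv-africa-1": {"IT-ACCT-0001", "BTC-WALLET-9"},
    "conv-spain-1":  {"IT-ACCT-0001", "ES-ACCT-0042"},
    "conv-spain-2":  {"ES-ACCT-0042"},
    "conv-other-1":  {"US-ACCT-7777"},
}

def link_groups(convs):
    """Group conversations that share any payment identifier.

    Two otherwise unrelated threat actors land in one group if they
    both routed funds through the same central mule account."""
    # Map each identifier to the conversations that used it.
    by_identifier = defaultdict(set)
    for conv, idents in convs.items():
        for ident in idents:
            by_identifier[ident].add(conv)

    # Union-find over conversations connected by shared identifiers.
    parent = {c: c for c in convs}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for members in by_identifier.values():
        members = list(members)
        for other in members[1:]:
            union(members[0], other)

    groups = defaultdict(set)
    for conv in convs:
        groups[find(conv)].add(conv)
    return list(groups.values())

groups = link_groups(conversations)
```

In this toy data, the Africa-based and Spain-based conversations are linked only because both used the central account "IT-ACCT-0001", mirroring the Italy example above.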
Well, help me understand here.
In the research, you mentioned using generative AI personas
that you all have spun up on your own.
You've created your own generative AI system here.
Can you walk us through how that works
and how you facilitate that technology to create these dots
and establish this web of connections?
Sure. So it's something that's pretty interesting and something I don't think would have been
possible, say, five years ago. What we've built is a system that's able to interact with these scam messages.
So the way that process works is that we start with some of our threat intelligence.
So for example, we've got access to SMS honeypots and have access to large feeds of emails, kind of bulk emails.
So in that, I guess that's a haystack of stuff that involves quite a lot of different types
of activity, not all of which is scam activity. A lot of it's the unsolicited email that you may be used to
that doesn't necessarily lead to these conversations.
But what we do is we pick out those newer messages,
so the messages that we believe are the start of this type of scam,
and then feed that into a generative AI solution that we've trained to take on
the kind of personality of a victim and play along with the scam.
So we're able to communicate back and forth with the criminal as this victim persona and
be able to extract intelligence about what the criminal
is doing. So for example, we can certainly look at the types of messages that we're receiving
and being able to say, well, this looks like an investment scam, or this looks like pig butchering,
or this looks like a romance scam. And then also extracting the kind of payment details
that we've been talking about from the scam.
Sometimes it's Bitcoin, sometimes it's a bank account,
sometimes they're asking for gift cards.
There's a really broad range of techniques that are being used
to extract money out of these messages.
And to kind of demonstrate the scale of this,
so for example, we can talk about some of the conversations that we've been having.
Some of them are hundreds of messages long, spanning over months, where the AI is able
to keep up with the criminal.
So we're able to continue these conversations
for a long period of time.
There are other conversations that we've had
where criminals will be pretty much on the hook.
So they will be sending email updates every 10 minutes
asking us where the payment is.
Of course, we're never going to make the payment.
But one of the nice things that we do,
which is actually pretty interesting,
is when we get sent a mule account,
one of the things that we do,
which doesn't happen all the time,
but one of the things that we do is say that the first payment didn't work
and then ask for a second account.
That approach has been incredibly successful in some cases
where we've had 17 or 18 different accounts over a six-month period from the same threat actor. In that case, we don't necessarily believe that it was a single person typing emails to and from us, but we believe it was either a group or potentially even a generative AI system on the criminal side interacting with us. And that's why we were able to extract such a large number of different accounts from
essentially from the same conversation.
It's incredibly powerful and it's obviously something that's quite difficult to ensure
that you're maintaining that victim persona over a long period of time.
We've been successful using specific training
and specific prompts from fairly industry-standard
generative AI tools in order to be able to do this interaction.
And it's been very powerful at changing behavior.
So for example, if we see a different type of scam, for example, a romance scam is fairly different to, say, a "hi mum" or "hi dad" scam. So the behavior of our victim also changes. We definitely have some of our AI
personas who have their own name, they have their own bank account, they have their own phone number.
In some cases, they've got their own girlfriends too.
So we're kind of interacting with these romance scams.
We've got our kind of portfolio of girlfriends that we have on the go
at any one time that we're kind of stringing along, as it were, I guess,
in order to kind of reach the end of the scam
and figure out what's
happening, what's being used, which bank accounts are being used, which banks are they using,
what Bitcoin wallets are they using.
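As a rough illustration of the triage step described above, classifying an incoming message and pulling out the payment details, the sketch below uses keyword heuristics and regular expressions. This is an assumption-laden stand-in, not Netcraft's method: the real system uses trained generative AI personas, and the categories, phrases, and patterns here are invented for the example:

```python
import re

# Illustrative-only scam categories and trigger phrases.
SCAM_HINTS = {
    "investment": ("guaranteed returns", "trading platform", "profit"),
    "romance": ("my love", "darling", "meet you"),
    "advance_fee": ("processing fee", "release the funds"),
}

# Simplified patterns for payment identifiers.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")
BTC_RE = re.compile(r"\b(?:bc1|[13])[a-zA-Z0-9]{25,39}\b")

def triage(message: str) -> dict:
    """Guess the scam category and extract any payment identifiers."""
    lowered = message.lower()
    category = "unknown"
    for label, hints in SCAM_HINTS.items():
        if any(h in lowered for h in hints):
            category = label
            break
    return {
        "category": category,
        "ibans": IBAN_RE.findall(message),
        "btc_wallets": BTC_RE.findall(message),
    }

result = triage(
    "Send the processing fee to IT60X0542811101000000123456 "
    "so we can release the funds."
)
```

Each extracted identifier would then feed the linking step, so that the same mule account surfacing in two conversations ties those conversations together.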
We'll be right back.
Do you know the status of your compliance controls right now?
Like, right now?
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks.
But get this.
More than 8,000 companies like Atlassian and Quora have continuous visibility
into their controls with Vanta.
Here's the gist.
Vanta brings automation to evidence collection
across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows
like policies, access reviews, and reporting, and helps you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
It's interesting to me looking through the research that there's this progression, the complexity of the webs.
And, you know, there's more and more information gets connected. And one of the things I noticed was how often it is a phone number, it is an email
address, you know, and it strikes me that those things being central points in these webs,
you're able to make these connections at scale. And it's fascinating to go through the
research and see the progression of how things connect over time. Yeah, that's actually a pretty
interesting insight. So that's something that's definitely worth exploring in more detail.
How it changes over time is also pretty interesting. So being able to say, well,
this is what we saw and this is the next time we saw the bank account. We've definitely got conversations where we've
seen the same mule bank account over a year apart. So we've seen it appear and then in a message a
year later from a different threat actor, we've seen that bank account reappear in a different threat actor context.
So on a different email address, a different phone number.
And that in itself demonstrates that that particular account,
certainly the criminal believes that that account is still live
and hasn't been detected as being used in fraud.
And so there's some pretty interesting insights that you can get
looking at the data, even if you just look over a small period of time,
being able to see what those connections look like.
Certainly when we were producing some of the data for this research,
the graph of connected nodes, as it were,
takes 30, 60 seconds to actually load in a web browser.
So it's like a fairly meaty amount of data,
even just from a relatively small section of data
that we're looking at here.
And what do you see as the potential for this data to be actionable?
I mean, is this a case where a bank could use this? Various financial institutions using it to accelerate their ability to shut down these accounts?
Are those the kinds of things we're talking about?
Yeah, that's right.
So that's one particular use case.
So certainly banks are one of the obvious users
of this type of data,
but certainly not exclusively.
And there's also a couple of different interesting twists
on how that data can be used.
So one is maybe the most obvious example,
which is imagine that we've been sent a bank account
that belongs to a particular bank,
say it's Wells Fargo, for the sake of example.
That in itself is a pretty interesting bit of information for that particular bank,
so they know that at that point we've got a fairly strong signal
that that particular account is either involved in fraud already or is just about
to become involved in fraud. But there's a second element, which is actually the entire network is
actually interesting. Because if you're a consumer and you're about to send some money,
you're about to make a bank transfer, a wire transfer. You don't really want to send your money
to any of the accounts that have been flagged as having been marked as a destination in a scam.

So there's a pretty interesting use case, which is to prevent outbound payments. So your customers, if you're a bank, don't have the ability to make payments at all to any of these bank accounts.
Or certainly, they're used as a flag in a risk system.
The neat thing about this data is it's pretty much a binary signal.
Most risk scoring is probability based.
This one is quite different in the sense that
we're certain that we've been sent a particular
bank account or Bitcoin wallet address
in connection with a scam and we've got the transcript to prove it.
And being able to say with certainty that we know this account is involved in this scam
is a pretty compelling bit of data.
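The binary-signal point lends itself to a tiny sketch: a destination that was handed over during a scam-baiting conversation can be blocked outright, while everything else falls back to a conventional probabilistic risk score. This is an illustrative assumption, not Netcraft's or any bank's actual screening logic; the account strings and the 0.8 threshold are invented:

```python
# Hypothetical flagged destinations, e.g. accounts that a scam-baiting
# persona was told to pay. Both entries are made up for the example.
FLAGGED_DESTINATIONS = {
    "GB29NWBK60161331926819",
    "bc1qexampleexampleexampleexample",
}

def screen_payment(destination: str, risk_score: float) -> str:
    """Return a decision for an outbound payment.

    The flag set is binary evidence: the destination appeared in a scam
    conversation transcript, so no probability threshold is needed.
    The probabilistic risk score handles everything else."""
    if destination in FLAGGED_DESTINATIONS:
        return "block"      # certain: destination seen in a scam
    if risk_score >= 0.8:   # assumed threshold, for illustration only
        return "review"
    return "allow"

decision = screen_payment("GB29NWBK60161331926819", risk_score=0.1)
```

Note that even a low risk score does not rescue a flagged destination: the transcript-backed flag overrides the probabilistic model.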
Which isn't to say that we know that the account is run by the criminal.
In many cases, we know that that's not the case.
And the account holder may themselves not be aware of what the account is being used for.
So it's not to say that we've identified the criminals, but what we have done is identified, you know,
bank accounts that are being used in connection with the scam. I'm curious, you know, whenever
we're talking about generative AI and these large language models, we talk about putting guardrails
on them,
you know, to protect them from doing things
we don't want them to do.
Despite the fact that it seems like
on the large part you're talking to criminals here,
to what degree has that been a consideration
of putting some constraints on this model
that you all have constructed?
I mean, that's, yeah, great comment.
So that's definitely something that we've thought about
when doing this research.
And there are many guardrails in place
on our use of generative AI in this.
When we're looking at this data,
we've got some human review processes involved.
We've got lots of automated rules as well
to mitigate any risk that's involved with this.
But the neat thing about how we're operating
is that we're pretty convinced
just from the start message
that we're involved in a scam.
And so future messages,
we of course need to be careful about use of AI tools,
but we've got plenty of safeguards in place
that mitigate that risk.
And then the kind of second twist on that question,
which is that certainly many public generative AI models
do have guardrails,
and if you ask them nicely to generate a scam email,
they will say no, no thank you, and tell you to try something else.
That's also a consideration with this type of research,
but it certainly isn't a concern for how we as the,
I guess maybe it's too presumptuous
to frame ourselves as the good guys,
but we're trying to detect this type of crime.
And so there's definitely a case of
what do we think criminals are doing with the AI as well.
So there's a couple of interesting elements there.
And that's Research Saturday.
Our thanks to Robert Duncan,
VP of Product Strategy of Netcraft,
discussing their work,
Mule-as-a-Service Infrastructure Exposed.
We'll have a link in the show notes.
And now, a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform protects your executives and their families 24-7, 365.
Learn more at blackcloak.io.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity.
If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to cyberwire at n2k.com.
We're privileged that N2K Cyber Wire is part of the daily routine of the most influential leaders and operators in the public and private sector,
from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies.
N2K makes it easy for companies
to optimize your biggest investment, your people.
We make you smarter about your teams
while making your teams smarter.
Learn how at n2k.com.
This episode was produced by Liz Stokes.
We're mixed by Elliot Peltzman and Trey Hester.
Our executive producer is Jennifer Iben.
Our executive editor is Brandon Karp.
Simone Petrella is our president.
Peter Kielty is our publisher.
And I'm Dave Bittner.
Thanks for listening.
We'll see you back here next time.

With Domo, you can channel AI and data into innovative uses that
deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to
your role. Data is hard. Domo is easy. Learn more at ai.domo.com. That's ai.domo.com.