The Decibel - How scammers deepfake businesses
Episode Date: February 28, 2025

Scammers are using generative AI technology to create deepfakes, compelling their targets to send large sums of money. And it is not just individuals getting scammed any more – businesses are increasingly being targeted by these look-alikes too. While there are positive applications for generative AI, these digital replicas may mean the need for better regulation.

Alexandra Posadzki is the Globe's financial and cybersecurity reporter. Alexandra will explain how these scams usually work, how deepfakes are increasingly being used, and what can be done to help protect ourselves against them.

Enter this Decibel survey: https://thedecibelsurvey.ca/ and share your thoughts for a chance to win $100 grocery gift cards.

Questions? Comments? Ideas? E-mail us at thedecibel@globeandmail.com
Transcript
So, Dan Kagan, who is the country manager for Canada for this identity management company
called Okta, he's in a meeting in his office in downtown Toronto, and he gets this panicked
phone call from his 80-year-old mother, who is telling him that his son, her grandson, Jordan, has been supposedly pulled over by the cops,
and they found a large quantity of marijuana in his car,
and he's about to be arrested
unless she, the grandmother,
can produce $2,500 in cash immediately.
Alexandra Posadzki is the Globe's
financial and cybercrime reporter. And Dan immediately realizes that something is off.
And so he goes, hold on, I'm going to go find Jordan.
And lo and behold, it turns out that Jordan is sitting in his cubicle
at Okta, where he also works as a business development professional.
And so Dan tells his mother, like, it was not Jordan.
And she's like, but I heard his voice
and it sounded exactly like Jordan.
And he even called me Bubby, like he always does.
And so she was absolutely convinced that it was Jordan.
AI generative technology is getting better
at replicating people's faces and voices.
But it turns out that some scammer
had essentially taken a recording of Jordan's voice
and then used it to create a deepfake
and used that deepfake to attempt to scam Jordan's grandmother
out of $2,500.
And it's not just individuals who are being scammed.
Businesses are also being targeted by deepfakes.
Today, Alexandra joins us to talk about how these scams usually work, how deepfakes are
increasingly being used, and what we can do to help protect ourselves against them.
I'm Menaka Raman-Wilms, and this is The Decibel from The Globe and Mail.
Alex, thanks for being here again.
Thank you so much for having me on the show.
So you just walked us through this example off the top, Alex, of how this tends to happen.
Can you break it down though, like, from a scammer's perspective, what are they doing
when they're trying to trick someone with a deepfake?
The first thing is they're identifying a victim.
And so in the case of the type of scam that was targeting the Kagans, they often would
look for somebody elderly.
And then they need, you know, to find a relative whose voice is out there.
And so they need to get a sample of the voice.
And so that could be through the person's social media account.
It could be by phoning the person at work and getting them to talk and using that to take a recording.
Or it could even perhaps be by sampling their voicemail greeting. And so once they have
a recording or even several recordings of that person's voice, then they deploy the
technology that allows them to clone that person's
voice. And then what they're doing with that cloned voice is they're then
initiating a phone call where they're creating a sort of sense of urgency. This
is always really key to every scam, the sense of urgency, because you're not
thinking as clearly when you feel like you're under pressure and there's sort
of a ticking clock hanging over you.
And so that's kind of what the scammer relies on. And then of course they have to collect the money somehow if the scam is successful. And so that might be through a cryptocurrency
payment, or perhaps, as was the case with the Kagan story, by sending somebody to
the person's home, which is what the scammers were intending to do. Of course, by that point,
the Kagans had figured out that it was a scam.
They'd called the actual police, and the cops were essentially at Dan Kagan's mother's
home waiting in case the scammers showed up to collect the money.
Wow.
Okay.
So that's kind of how it breaks down.
The key part here, of course, is impersonating someone's voice using this technology.
You mentioned that they could listen to a voicemail and get a sample.
Is that all it takes?
Something so short like that they can get a voice sample from?
That's what some of the experts that I spoke to told me, is that at this point, it only
takes a few seconds of your voice to be able to create a deepfake convincing enough that
a regular person can't tell that it's an AI-generated deepfake.
Now, there are varying levels of technology.
And so some of these services that offer this,
they actually request several samples
that are slightly longer.
And so that's going to maybe determine
how convincing that deepfake is.
But they've gotten it down
to like a pretty limited amount of recording
that's required at this point.
Okay, huh.
And we've been throwing around this term deepfake,
but let's just kind of define it for a second here.
What does that actually include?
So a deepfake is not necessarily
just cloning somebody's voice,
like what we had here;
it could also be copying somebody's image.
And so you could even have, like, a live video, for example, a Zoom call, where it's actually
superimposing somebody else's face on top of your face as the caller. So, you know,
to the person on the other end of the line, it looks like they're having a Zoom call with a
celebrity or whoever it is that you're trying to impersonate.
Okay, wow.
So that's how this happens on a personal level.
We've also now seen deepfake scams targeting businesses, right, Alex?
So how does this work on a business level?
So when I started looking into this story, one of the things that I'd found is this press
release essentially from FinCEN, which is the US anti-money laundering
watchdog. And they had actually put out this caution to financial institutions telling
them to be on the lookout for deepfake scams, in particular, people using AI to modify or
generate images on identity documents. So things like a passport or a driver's license, and then in some cases,
using those fraudulent identity documents
to open up bank accounts so they can, for example,
launder the proceeds of crimes through those bank accounts.
So that's one example.
There's another example, this is probably
the most famous example of a deepfake targeting a business,
and that's a British multinational engineering company
where a Hong Kong-based employee was actually tricked
into wiring the fraudsters $25 million US
based on a Zoom call where every single other employee
on that Zoom call was actually an impersonation created
by artificial intelligence, including the company's CFO.
That's kind of incredible. So this person thinks they're getting on a Zoom call with their colleagues, but actually all of those people on the call were fake?
Yep.
Wow.
Exactly. And what was
interesting about it is the employee apparently was actually contacted by
scammers pretending to be the CFO initially saying,
I need you to do a secret financial transaction.
That kind of tipped the employee off to the fact that something was maybe not
quite right here, but then it was after they got on this video call with all
of these recreations of their coworkers that they were convinced to actually send the money.
Yeah, yeah, I imagine that would be pretty convincing, right?
You think you know these people, you think you're talking to them, but you're
really not. We've been talking about kind of scammers and you also mentioned
services that someone could procure in order to do this. Alex, how exactly would
that work?
Yeah, so there's actually, like, subscription services that allow you to create deepfakes. You can also actually find
the tools to do this
for free on GitHub.
So I think that's a really important thing to keep in mind:
what's really behind the growing popularity
of these scams is that they're so inexpensive.
They're so accessible at this point.
It's so easy to get the voice sample
and create a voice clone or even create a video clone or a fake image of someone.
And then on top of that, we could see from the example
of the woman who wired $25 million US,
how lucrative they can be.
And so it's sort of a perfect storm of,
it's right there, it's not expensive,
you can easily pull it off and if it works,
you can make a lot of money.
We'll be back in a minute.
So not that difficult to execute, quite lucrative.
We're going to focus now kind of most of this conversation on how this is impacting businesses,
because this is, as we just heard from that example, where a lot of the real money is in this situation.
Do we know how many companies in Canada have actually been targeted by deepfake scams?
Unfortunately, we don't have detailed data on how often this is happening. But there
are some clues. So for instance, there was this Deloitte study, it didn't look specifically at Canadian companies, but it polled executives, C-suite and other executives, and it was a
pretty large sample size. It was more than 2000 respondents, and more than a quarter
of them actually said that their organization had experienced at least one incident of a
deepfake that was targeting their company's financial data. And then there's also this US-based company
called Entrust, which processes identity verifications.
And they said that from 2022 to 2023,
they saw a 3,000% spike in deepfake attempts.
OK.
So it sounds like this is getting more prevalent, then,
from what we're seeing at least.
Absolutely.
I mean, that's the other finding of the Deloitte study is that
more than half of the respondents actually expected the prevalence of deepfake attacks to increase.
Okay. Do we have a sense then of how much money deepfake scams are actually costing Canadians?
We don't know that either, but we do know that fraud in general is very, very expensive for the
Canadian economy. So if you look at the Canadian Anti-Fraud Centre, they put out stats every year
on how much money Canadians are losing to scams and frauds. And it's in the hundreds of millions
of dollars. And those figures are really just the tip of the iceberg. You know, a lot of people
don't report these types of incidents to police. And so there are estimates out there that only a tiny fraction, maybe like
5%, of fraud actually gets reported.
So that number you can imagine is probably much, much higher.
Yeah.
So Alex, we've talked about how this looks like it's an increasing trend of these deepfake scams, even though we don't have, like, really concrete numbers to look at yet.
I guess I wonder, though, is it possible that the numbers we do have are even an undercount? Because something we haven't really touched on yet is a lot of people don't want to admit that they've been scammed.
Exactly. I mean, there's a lot of stigma to the idea of having fallen for a scam or a fraud. Some people think, like, oh, this, you know, makes
me look stupid, I wasn't careful enough. And so a lot of people are just embarrassed to admit
that this happened to them.
And so absolutely, Dan Kagan, who I spoke to for the article,
said that he thinks that it's happening much more than we're
aware of, particularly because people are too embarrassed
to report these kinds of incidents.
And I think that's a pretty widely held belief
in the industry.
And I can imagine that would work on a personal level, but probably also on a business level too.
I mean, I imagine that might not be good for business to have that out there.
For sure. And I mean, obviously, if you're suffering some kind of a breach that's very large,
like it's a material impact on your business and you're a publicly traded company,
and there's, you know, loss of customer data that's likely to lead to severe consequences for people's lives,
there are certain rules about having to report that.
But if you're a private company
and maybe your breach didn't actually impact consumer data
or employee data, then it's,
maybe there is no legal requirement for you to report it.
And in that case, absolutely,
why would you wanna tell the world that,
hey, we got deepfake scammed?
Yeah.
So we've been talking about a lot of negative uses
of this technology, obviously, where people
are getting tricked and scammed.
I guess I wonder, are there any positive applications
of this kind of AI technology?
Absolutely.
I mean, in general, generative AI
is expected to be a big boon to productivity.
Whether or not those
promises have really panned out at this point is undecided. But in particular, if you look
at cybersecurity applications, there's a lot of research out there showing that using
AI to protect your company's systems can actually shorten the length of a cyber breach,
it can reduce the frequency of cyber
breaches. So there was a report out from IBM that essentially showed that there was a significant
reduction in the length of a breach by companies that are using AI and automation to protect
themselves from cyber attacks.
Okay. Well, this might kind of roll into my next question a little bit here, Alex, because
I'm kind of wondering about solutions here, what we can do about these deepfake scams.
Are there ways to detect them?
There are a lot of startups actually working
on that exact problem and trying to identify deepfakes.
And the way that a lot of them have sort of positioned
themselves is to try to create technology that can help
identify nefarious uses of AI.
Because not all AI-generated content
is innately nefarious, right?
Like you might have people within a company
writing emails to each other using ChatGPT, right?
You go on LinkedIn and it's automatically prompting you
that like you can rewrite this message with AI.
And so very likely that is not a nefarious email or message.
But then there are these nefarious uses, these scams.
And so what a lot of these technology providers
are focusing on is creating technology
that can help identify those kinds of nefarious uses.
So if they get a sense that an employee is interacting
with some kind of AI bot,
they might compare what the employee is doing
or being asked to do with the company's policies
and procedures.
So if the employee, for example,
is being asked to change their password,
that might be an indication that something nefarious
is going on because someone's trying to gain
unauthorized access to the system.
And so that's, I guess, kind of the next step is,
can we use the same technology
to try to police these nefarious uses?
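To make that idea concrete, here's a minimal sketch of what such a rule-based policy check might look like, assuming a simple Python setup. The action names, policy text, and function here are invented for illustration and don't reflect any particular vendor's product.

```python
# Hypothetical sketch of the policy-check idea described above: flag a
# requested action if it conflicts with company procedure. All names and
# policy text here are illustrative assumptions.

# Actions that company procedure says should never be requested over chat or email.
PROHIBITED_REQUESTS = {
    "change_password": "Password resets must go through the IT self-serve portal.",
    "wire_transfer": "Wires require two approvers and a verified call-back number.",
    "share_mfa_code": "MFA codes must never be shared with anyone.",
}

def check_request(requested_action: str) -> str | None:
    """Return a warning if the requested action violates policy, else None."""
    violation = PROHIBITED_REQUESTS.get(requested_action)
    if violation:
        return f"Possible social-engineering attempt: {violation}"
    return None

# Example: a message (perhaps from a deepfaked "CFO") asks for a wire transfer.
warning = check_request("wire_transfer")
if warning:
    print(warning)  # escalate to the security team instead of complying
```

A real product would have to infer the requested action from the conversation itself; the lookup against written policy is just the comparison step described above.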
Yeah, is that kind of thing available
to the general public right now?
Like, would people have access
to using something like that?
I don't believe that that sort of technology
is yet, like, widely commercially available.
Okay, maybe in the future then at some point. I also want to ask you about government legislation,
because this kind of thing often comes down to regulation, right? To regulate what companies
can and can't do. Is anything in place in Canada to help protect people here from being
deepfaked?
So we don't have any mandatory laws against the use of deepfakes.
We did have a couple of bills that were kind of working their way through the system. One was a
cybersecurity bill that was really around sort of protecting Canada's critical infrastructure,
and the other was this bill called AIDA, which stands for the Artificial Intelligence and Data Act. And initially, AIDA didn't actually specifically talk about deepfakes or identifying AI-generated
content, but then there were amendments made to it, including provisions for identifying
AI-generated content. So the idea being, you know, instructing companies that are providing
services around generative AI to sort of watermark that content
so it was easier to identify it as having been AI-generated.
But of course, when Parliament got prorogued, it killed all of the bills that had not received royal assent, and that included
AIDA and Bill C-26, the cybersecurity bill.
Okay, so they're not actually in place in Canada right now, even though we did have things in the works to maybe try to tackle this at some point.
Okay, other countries must be dealing with this as well, I would imagine, right?
Do we have a sense of how they're approaching it?
Yeah, so there's a couple of countries I was recently reading about.
The US and New Zealand are both considering legislation where people would essentially
be able to own the rights to
their own face or image, and whether that might sort of protect against deepfakes.
And so this kind of would really potentially address the problem of
celebrities who are often kind of being copied. These digital replicas are being
created of them, either for pornographic reasons or for the purpose of scamming
people, right?
So you see a lot of videos out there of like Elon Musk
telling you to invest in a certain cryptocurrency,
or there was even one involving Justin Trudeau
telling you to invest in, you know,
a certain investment scheme that of course
turned out to be fraudulent.
And so on the other hand, even if you are giving people
sort of the right to own their own digital image,
I mean, scammers who are trying to scam people,
they're not exactly trying to follow the law, right? And so at
the end of the day, it's all going to come down to enforcement. How are you enforcing it? How much
resources are being deployed through policing, through law enforcement to go after these bad
actors?
Yeah. Before I let you go, Alex, I'm sure people
are wondering about this,
and if they're encountering anything similar to this,
what they can actually do about it.
So what have experts told you about how individuals can maybe
mitigate their risk of actually being scammed in this way?
I mean, there's a couple of key things.
The first is really like, if you or a family member gets a call from someone
who says, hey, I have your daughter here
and she's been pulled over by the police
or your son's just rear-ended me on the highway
and like, I need some money,
but the phone number is not your relative's phone number.
Hang up the phone, call the relative directly and ask them what's
going on.
That's a pretty simple thing that you can do.
If you're getting a call from an unknown number as opposed to from your actual relative whose
voice you're hearing, try to connect with that person directly.
The other thing that I've actually implemented with my family is having, like, a passcode or
a passphrase for, you know, should my parents ever get some kind of an emergency phone call.
Because I mean, look, there are reasons why maybe like your phone is dead
and you're not calling from your phone.
If it really was me saying,
hey, I've had an emergency and I need some money,
they could ask, what is the password?
And I would be able to tell them what it is.
And so those are two of the kind of simple things
that we can do at this point.
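For anyone who wants to take the passphrase idea a step further, here's a minimal sketch in Python, assuming you ever wanted to automate the check: only a salted hash of the phrase is stored, and the comparison runs in constant time. The phrase, salt handling, and function names are all illustrative assumptions, not anything discussed in the episode.

```python
# Illustrative sketch only: verify a family "emergency" passphrase without
# storing the phrase itself. Uses only Python's standard library.
import hashlib
import hmac
import os

def hash_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a salted hash of the passphrase with PBKDF2."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

# One-time setup, when the family agrees on a phrase (value is made up).
salt = os.urandom(16)
stored_hash = hash_passphrase("blue heron at the cottage", salt)

def verify_caller(claimed_phrase: str) -> bool:
    """Compare in constant time so the check leaks nothing about the phrase."""
    return hmac.compare_digest(hash_passphrase(claimed_phrase, salt), stored_hash)

print(verify_caller("blue heron at the cottage"))  # True: likely the real relative
print(verify_caller("send money now"))             # False: treat the call as suspect
```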
Yeah.
Alex, always great to have you here.
Thank you for doing this.
Thank you. Great chatting with you.
That's it for today.
I'm Menaka Raman-Wilms.
This episode was produced by Tiff Lamb.
Our producers are Madeline White,
Michal Stein, and Ali Graham.
David Crosby edits the show. Adrian Chung is our senior producer and Matt Frehner is our managing
editor. You can subscribe to The Globe and Mail at globeandmail.com slash subscribe.
Thanks so much for listening and I'll talk to you next week.