Front Burner - Police in Canada are using controversial facial recognition software
Episode Date: March 3, 2020
That photo you posted to Instagram? It might be a part of Clearview AI’s massive database of some 3 billion images, all scraped from the internet. The facial recognition app has experts worried about privacy overreach. Canadian police forces first said they’re not using Clearview — until it turned out they are. Toronto Star reporters Wendy Gillis and Kate Allen have followed this story closely, and they’re here to talk implications.
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem.
Brought to you in part by National Angel Capital Organization,
empowering Canada's entrepreneurs through angel investment and industry connections.
This is a CBC Podcast.
Hello, I'm Jamie Poisson.
And we want to make sure that this tool is used responsibly and for the right purposes.
But in the wrong hands, right? It could do a lot of damage. That's why it's strictly for law enforcement. We've had other people...
Well, you hope, but it could go into the wrong hands.
So a couple of weeks ago, I'm sitting at my desk when I get an email from the media contact at the Toronto Police.
It was a follow-up to a question we'd asked them about Clearview AI.
This is a very controversial facial recognition tool.
It's essentially this enormous database of billions of scraped photos from the internet that critics have called dystopian and reckless.
Earlier, I'd asked Toronto Police if they use it,
and they said no at the time.
So I get on the phone, and the police, they change their tune.
Turns out Toronto Police had been using Clearview,
and also many other police forces across the country have too.
Some of that information comes from a massive data breach at Clearview AI.
Today, Clearview's Canadian users and the growing push to rein the company in.
I'm talking to Kate Allen and Wendy Gillis, my former colleagues at the Toronto Star,
who have been reporting on this extensively.
This is Frontburner.
Kate, Wendy, hello.
Hi.
Such a pleasure to have you both on to the podcast.
It's so great to be here.
So before we get rolling, I think we should get a reminder of why this technology is so controversial.
Kate, what is it about Clearview AI software, this giant database of some like 3 billion photographs scraped from the internet
that's causing so much concern. So as you mentioned, this is a database that the company says
is made up of 3 billion images of people scraped off the web, including from social media sites.
So this has the potential to be able to identify anyone if your images are in this.
So I would say that the concerns fall into sort of two buckets, which are if it does work and if it doesn't work.
So if it does work really well and it works as well as, you know, they say it does.
So what the company has said is that this is a tool for law enforcement and it's used to help solve major crimes like murders and
rapes and child exploitation. Right. This is how you could find a pedophile. Yes. And I think a lot
of people would probably be at least willing to discuss the possibility of having their personal
information in this database if it meant catching, you know, child predators. Right. Right. The
problem with this company so far is that its use has proliferated among law enforcement agencies
in Canada and in the U.S. without really any oversight or anyone knowing about it. And so
we don't really know all the ways that it's used. So we've found out that it's been used to try to
catch shoplifters. Another police force told us that it was used to investigate a car theft.
So I think it becomes sort of a different conversation if you're talking about having, you know, this ability to identify anyone at any point if it's for, you know, investigating petty crimes.
Right. And I know, you know, another concern connected to law enforcement has been that police could also use it to like surveil mass demonstrations, for example.
Like maybe you want to come out to a public space to protest because police could run
your photograph, a photograph that they captured at this protest through this database and
identify you.
Right.
So this is called function creep.
And it's something that privacy experts talk about all the time.
So you might adopt a technology for one use, but then it starts to be used in other instances if there isn't oversight and regulation, which so far with this tool there is not.
And so you talked about the second bucket.
What if it doesn't work?
Right. Maybe not the right way to be talking about AI, but basically the idea is they use all these different points on your face to try and identify you as a person. The more photos they have,
the more accurate they say they are. Right. That's actually, that's a good way to describe it.
These tools essentially take your face print, kind of like a fingerprint, and search for matches
in their database. But studies have shown that facial recognition systems in general have higher error rates for people of color than for white people, which is obviously a problem.
You know, black people and indigenous people are already overrepresented in the Canadian justice system.
And there are obvious concerns about adopting a tool that may make more mistakes that lead to more apprehensions, you know, wrongful
apprehensions of people of color.
Hoan Ton-That is the founder and CEO of Clearview.
So one thing our software does is it does not measure race and it does not measure gender.
All it measures is the uniqueness of your face.
And when you have all this training data that we've used to build the algorithm, we made
sure that we had enough of each demographic
in there. So other algorithms might be biased in terms of not having other minorities, enough
training data in the database for them. So that's something we made sure of.
These are some real concerns
about law enforcement, but we also have concerns that this company, this database could be used by
private individuals, right? Like a husband who is abusing his wife,
who then tries to find his wife who may be in hiding? Or, you know, what are some of the other
concerns that we have here, Wendy? Well, I mean, I think so far our reporting has shown that,
you know, for example, a Rexall employee had been using this and Clearview AI had been pretty clear
that this was going to be used by law enforcement. And I don't think we would all consider, you know,
a Rexall employee to fit into that bucket per se. Apparently, it had been recommended by a
Toronto police officer that Rexall actually use Clearview AI to catch a shoplifter.
But that doesn't really rise to the definition of law enforcement, I think.
What we've learned is that Clearview AI is not being incredibly discerning about who they consider to be law enforcement.
I should say that we've asked Clearview AI on several occasions how they check who's using it.
You know, so you can go to their website and find their login page for submitting your email address to get a trial version of this software.
And it says, you know, this is for law enforcement professionals.
But, you know, we've asked on multiple occasions how they know that the people, you know, other than using an email address with at Toronto Police dot whatever, how they know who's using it and what for. And they haven't responded to any of our questions.
So all of this is kind of bubbling around the last few weeks, and then we get this massive
data leak. And Kate, tell me about the data leak.
How does it first come to your attention? What do we find out about what's in it?
So BuzzFeed got in touch with us and told us that they had obtained client data from Clearview AI.
And with respect to that, I can tell you what Clearview has said to other media, which is that an unspecified flaw of theirs allowed
for, quote unquote, unauthorized access to their client list. It's what we're talking about here,
possibly a hack? No, it's not a hack. Okay. Or at least they have not used the word hack
in relation to this. But BuzzFeed News essentially got their hands on a massive amount of Clearview's client data.
And what does the list say?
They chose to share the Canadian client data with us.
And it showed us that at least 34 police forces across Canada were using it, including more than a dozen that had told us that they weren't.
It also showed us that a handful of
private businesses were using it. Right. This is what Wendy was talking about when she mentioned
the Rexall. Yes. Yes. And then also another handful of government agencies. So, for example,
the Department of National Defense, a specific special operations force was using it and a few
more. Okay. And I should say BuzzFeed, like the database that they got in its entirety shows 2,200 law enforcement agencies, companies, and individuals around the world.
And Canada was the second biggest market outside the U.S.
Interesting.
Yeah.
I also want to mention that Clearview has said there were, quote, numerous inaccuracies in the illegally obtained information.
So tell me, I want to come back to law enforcement in a little bit, but tell me what we know about how other entities were using this. The Canadian military was using
this, for example. They tested, so what they told us, I believe it was on Friday, they just got back
to us, that they had used a free trial version of this to just run sort of tests on, I think they said inanimate objects, animals.
And people, publicly available images, essentially. And, you know, their line on it is that anyone
and anything that poses a threat to Canada is going to be, you know, using the latest technology.
And so I think they were trying to say, well, we need to be prepared for those kinds of threats as well. You know, we should say too that all of these tests have
been run on trial versions and companies and law enforcement agencies have been stressing to us
that they have ceased using it or, you know, they haven't entered into any kind of official
agreement. Okay. Although I know BuzzFeed is reporting that in the United States,
like the Department of Immigration, ICE has entered into a formal agreement. Yeah. And
actually there's one Canadian police force that has, which is the RCMP, which is a whole separate
story. Not a separate story, but a little bit different. Well, let's talk about the RCMP then.
What do we know about how they have been using it? This was one of the law enforcement agencies
that several weeks ago would not confirm or deny whether or not they used it.
They used, Wendy, well, the very popular explanation that they don't comment on investigative tools, right?
Yes, very, very investigative tools.
Yeah, so that's what they told us.
They said that they wouldn't confirm or deny it.
And I reached out to the Federal Privacy Commissioner to ask more about that.
And the Federal Privacy Commissioner's Office told me that the RCMP had previously committed to, okay, this is going to be a jargony term, but they
committed to doing a privacy impact assessment before using any type of facial recognition
technology. So this is like an assessment that they would work with the Privacy Commissioner's
Office to do to try to minimize or eliminate privacy concerns around a technology. And the
office told me that they had
committed to doing this before deploying any type of facial recognition technology in late January.
So that was at that point less than a month before. So then after we got the BuzzFeed data,
it showed that the RCMP was not only using Clearview AI, but paying for it and had run
over 450 searches with it. So we went to the RCMP and said, hey, you wouldn't confirm or deny this before,
but we have data showing that you have used this and that you pay for it.
And also, the Federal Privacy Commissioner's Office has told us that you committed to
doing this privacy impact assessment.
What's the deal?
And so we didn't hear back for a day.
And then the next day, they released this public statement to all media that, you know, actually said, quote, in the interest of transparency,
we are revealing that we do use Clearview AI. And here are the ways that we use it.
And how did they say that they've used it? They talked about,
I read this massive press release, too. They talked about a child exploitation case, for example.
Yeah, so they said that their specialized online child exploitation
unit had used it and they had actually been able to identify and rescue two victims through using
this tool. And they also said that an unspecified number of criminal investigative units had also
used it and didn't specify anything further. And they didn't make, they didn't comment at all
on the privacy impact assessment questions that we had sent them.
Wendy, I want to talk with you in a moment about the potential positives, right, of using this law enforcement.
But before we move on, look, you've been a crime reporter for several years.
You've been reporting on all of these police agencies. For people listening and hearing that these are queries that you have asked police agencies around the country and received like a negative response, like, no, we don't use this. What are people supposed to make of that? Like, it sounds like these agencies are lying to reporters.
Right. And I'm glad you asked that because I think that that's an important nuance to this,
is that when we reached out and asked them, my understanding is that for the most part,
the media spokespeople had asked, okay, is there a formal arrangement? Have we, I know in one case,
one media officer reached out to the finance department to determine whether any money had exchanged hands, which seems reasonable and diligent.
They didn't know necessarily that officers from a specialized unit had, for example, gone to a conference or had this technology recommended to them and tried it on their own. And so the information that was provided to us last week that suggested that these police services hadn't used Clearview AI was the best
available information at the time. And in part, that's what makes this so alarming, is that the
police services themselves hadn't known about this testing. Kate and I agree that our reporting and the reporting
on this in general has shown how technology has really outpaced oversight at multiple levels,
federal, provincial, municipal, and within police services themselves.
Interesting. And I would say that, yeah, if we thought that the cops were lying,
we would not let them off the hook on this. Well, yeah, I guess my follow-up question is,
do you buy that? They all seemed genuinely, I mean, Wendy can chime in on this, they all seemed genuinely
surprised.
Like the Vancouver Police Department, for example, when we first asked them, do you
use it?
The response we got was, Vancouver Police has never used or tested facial recognition
from this company and has no intention of doing so.
Right.
And then we went back to them.
Yeah, we went back to them and said, well, actually, we have data indicating you did.
And, you know, this very helpful media person said, you know, I canvassed our senior management.
I was told we never used it.
Media people at police forces actually asked us, like, can you tell us more information about who is using it in our police department?
I think that's like a scary thing for some people to hear.
Okay, so let's go back to how some police forces have been responding to this, because the RCMP is saying, look, like this did help in a child exploitation case.
And so what are they saying, Wendy?
And I mean, that's such an interesting aspect of this is that, you know, while we have been reporting on this because it is alarming to
see how police services have adopted this so quickly without the knowledge of, you know,
higher command, their police services boards that serve as an oversight function,
the knowledge of privacy commissioners at provincial and federal levels.
But at the same time, I mean, we need to take stock of the fact that this is technology that
can be helpful in solving crimes and that there has to be kind of a nuanced conversation about
how it's used, and not necessarily a knee-jerk reaction of
"no." You know, it might be reasonable to ask for a moratorium or a stop while we figure out
oversight of this technology. But I think a balanced conversation includes the potential.
And so that is anything from, you know, there's a lot of conversation about identifying perpetrators of crime, but what we've seen from getting some granular detail about who has been testing this is it's the Internet Child Exploitation Units at a lot of police services.
So they're using this to identify victims, children, and that's something that may be worth talking about if that can help them find otherwise unidentifiable victims.
The idea here being that like they could get an image off the Internet from a child pornography site and then be able to identify that child.
Exactly right.
OK.
Pretty much every major tech platform has told you to cease and desist, stop scouring our pages.
What does that mean to that 3 billion image set?
So first of all, these tech companies are only a small portion of the millions and millions of websites available on the Internet.
But one thing to note is all the information we are getting is publicly available. It's in the public domain. So we're a search engine just like Google.
We're only looking at publicly available pages and then indexing them into our database.
Okay, so now I want to talk about some more reaction here in Canada and when it comes to the concerns of privacy advocates in particular. So what kind of reaction have we seen in Canada? I
understand there's been several investigations launched by our privacy commissioners, right,
Kate? Yeah. So the first thing that happened was that even before any of this BuzzFeed data,
when we just knew that the Toronto police and a handful of GTA police forces used it,
the federal privacy commissioner and three provincial privacy commissioners who all have jurisdiction
over the private sector, they launch an investigation into Clearview AI and whether
or not it breaks Canadian privacy laws. Because, I mean, they all have slightly different privacy
laws, but the sort of bedrock principle of all of them is that companies have to obtain your consent
to use and disclose your personal information. So if you knowingly agree to upload your pictures to Facebook,
another company can't come along and scrape those off a social media site
and then use them as a database for law enforcement.
That's a totally different use.
They have to obtain your consent to do that.
So when we're talking about criticisms that this could contravene Canada's privacy laws,
that's what we're talking about here, that Clearview AI may have scraped all of these from Facebook, Twitter, YouTube. Millions of websites. Millions of
websites without our explicit consent or knowledge for this purpose. Exactly. That's what they're
investigating. So then after the RCMP announced that they were using Clearview, the federal
privacy commissioner then launched another investigation into the RCMP's use of Clearview AI.
And then on, I believe it was Friday, the Alberta Privacy Commissioner also launched an investigation into Edmonton
Police Service's use of Clearview AI. Okay. And what can we reasonably expect to come out of
these investigations? You know, that's a really good question. Last year, the Federal Privacy
Commissioner and the BC Privacy Commissioner, who is one of the ones involved in the new Clearview investigation,
they finished up a long investigation into Facebook
and Facebook's breaches of privacy
and misuse of Canadians' personal information
in the Cambridge Analytica scandal.
And they essentially slammed Facebook for, you know,
misusing personal information.
Right. Same principle here.
Exactly.
That those users hadn't consented for Cambridge Analytica to have their data. And they told Facebook, stop doing these things. Please overhaul, you know, how you handle personal information. And Facebook essentially just, you know, disputed the findings and didn't do anything. And no
Canadian privacy commissioners have the authority to enact fines. And the federal privacy commissioner
doesn't even have the authority to make orders. So earlier this year, the federal privacy
commissioner went to a federal court and said, hey, please take up this Facebook case and put out some orders asking Facebook to stop doing this.
Okay. And, you know, I should say before we move on, the Clearview AI has responded to some of
these criticisms, particularly in the United States. They've been asked by like Google,
Facebook, Twitter to cease and desist because of exactly what we just talked about, Kate,
about not having consent to use these images.
But the CEO has said that this contravenes his First Amendment rights.
So we're seeing this conversation play out in other jurisdictions.
And Wendy, like you mentioned, this idea that there will be more emphasis or more attention
placed on this.
You know, I wonder if we could also talk about the political landscape here in Canada.
The NDP's Charlie Angus has called on the Liberal government Monday
to halt the use of Clearview AI until it can be investigated.
And the Parliamentary Ethics Committee has now launched an investigation as well.
I want to play you a clip from Charlie Angus,
who says he wants to see
stronger laws and better enforcement when it comes to things like Clearview AI's technology.
The Trudeau government has been a little reluctant so far, but I think they're on the wrong side of
history on this. There is a real movement ever since the Cambridge Analytica breach.
People are much more aware of the abuses that can happen.
People are much more literate about their presence in the digital realm. And I think that the reasonable position for Canada would be, let's put a moratorium in, let's lay down ground rules, and if we have to amend our laws, let's do that.
Do you guys think that there's going to be a shift as well in how our politicians deal with this? For instance,
Kate, could you see a scenario in which our privacy commissioners are given more teeth?
It sounds like what you were saying before is that there are criticisms that they're a bit toothless. Yeah, I mean, I don't know. I guess we'll see. Like last summer, we had all
these committees in Ottawa about tech oversight and tech regulation. And the Federal Privacy
Commissioner went before that committee and said, we need more teeth. Our privacy laws were enacted in 1983. You know,
as the BC Privacy Commissioner put it to me, this was at a time when what we were arguing about was
how much information should you put on your dry cleaning slip? Like, can they ask you for your
address? Like, that's a very different landscape. We are not in smartphone territory.
Exactly. Yeah, not like billions of images in a database that can be searched within seconds with artificial intelligence.
So, you know, the Privacy Commissioner asked for more teeth, more enforcement powers, and frankly, a total shift in how privacy laws are built.
That they're not just narrow data protection laws, but enshrine privacy as a human right.
And, I mean, that was last summer. So.
All right. Thank you both so much for being here. This was a really interesting discussion. I hope you guys will come back soon.
Thanks, Jamie. Thanks for having us.
All right. So before we sign off today, just a note to say that we also reached out to Clearview
AI for comment, but had not heard back from them as of Monday afternoon. We're going to keep on
the story, but that's all for today. Stay tuned tomorrow, though, for a special two-part series
on surrogacy in Canada. Our colleagues interviewed dozens and dozens of people,
and this is a really fascinating and complicated snapshot about surrogacy in Canada, which feels
like it's not really working for anybody, the parents or the surrogates.
So stay tuned for that.
I'm Jamie Poisson.
Thanks so much for listening and talk to you tomorrow.