Front Burner - Putting the brakes on facial recognition technology

Episode Date: January 23, 2020

A leaked draft memo recently revealed that the European Union is considering a temporary ban on the use of facial recognition technology in public spaces. And in the last few days, Google's CEO and the editorial board of the Financial Times have called for a moratorium on the burgeoning technology. Facial recognition is evolving and disseminating so quickly that some are saying it's time to pump the brakes. Clare Garvie thinks that's the right idea. She studies facial recognition technology at the Georgetown Center on Privacy and Technology. Today on Front Burner, she explains how it's being used and its potential for abuse.

Transcript
Starting point is 00:00:00 In the Dragons' Den, a simple pitch can lead to a life-changing connection. Watch new episodes of Dragons' Den free on CBC Gem. Brought to you in part by National Angel Capital Organization, empowering Canada's entrepreneurs through angel investment and industry connections. This is a CBC Podcast. Hello, I'm Jamie Poisson. We all know that facial recognition technology has left the realm of science fiction, and it's here now, in the real world. Lots of people use it every day to access their fancy new iPhones. It's convenient, I guess, faster than punching in a PIN.
Starting point is 00:00:48 Maybe even more secure too. But the extent to which facial recognition technology is being used around the world, well, that might come as a surprise. A map of your face or a face print could exist elsewhere, outside your personal device. Facial recognition technology is evolving and it's disseminating so quickly that some people are saying it's time to pump the brakes here. Last week, a draft memo was leaked revealing that the European Union is considering a temporary ban on the use of the technology in public spaces. And in the last few days, Google and the editorial board of the Financial Times have also called for a moratorium on its use. Clare Garvie thinks that's just about the right
Starting point is 00:01:32 idea. She studies facial recognition technology at the Georgetown Center on Privacy and Technology, and she's with me today to talk about this technology, how it's being used, and its potential for abuse. This is Front Burner. Hi, Clare. Hi, how are you? Good, good. Thank you for making the time to speak with me today. Of course. Thanks for having me on. So I wonder if I could start by asking you, I have a driver's license, I have a passport photo, I have a Facebook account and a Twitter account. How likely is it that I am in some database somewhere where this technology is being used? If you have a visa photo, a passport photo, a driver's license, it is more likely than not that your face print has been established and face recognition is used on those photos.
Starting point is 00:02:31 So it doesn't even matter if you have a Facebook account or some sort of public-facing job. Facebook also uses face recognition. That is probably the most typical or most common application that most of us are familiar with. Tag suggestions on Facebook are done through face recognition. Okay. And what is facial recognition technology? Can you take me through how it works? Very simply, face recognition refers to the ability to take a photo of an unknown individual's face and compare it to a database of known individuals, think driver's license database or Facebook database, and identify who is in that photo.
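To make that one-to-many comparison concrete, here is a minimal sketch of the idea in Python. It is not any vendor's actual system: the toy vectors, the gallery, and the cosine_similarity helper are all invented for illustration, and real systems use deep neural networks to turn a photo into a face print with hundreds of dimensions.

```python
# Minimal sketch of one-to-many face identification (illustrative only).
# Assumes some upstream model has already turned each face photo into a
# fixed-length vector, the "face print"; the vectors below are toy data.
import math

def cosine_similarity(a, b):
    """Score how alike two face prints are; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe_print, gallery):
    """Compare an unknown face print against every known person in a database."""
    scores = {name: cosine_similarity(probe_print, known)
              for name, known in gallery.items()}
    return max(scores.items(), key=lambda item: item[1])

# A toy "driver's license database" of two people, reduced to face prints.
gallery = {"Jamie": [0.90, 0.10, 0.30], "Alex": [0.20, 0.80, 0.50]}
name, score = identify([0.88, 0.12, 0.31], gallery)
print(name, round(score, 3))  # the single best match and its similarity
```

Note that the search always returns whoever in the database scores highest, whether or not the person in the probe photo is in the database at all; that detail matters for the misidentification discussion later in the episode.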
Starting point is 00:03:16 And there are a number of applications for that. Let's say somebody else presents your photograph, and then the driver's license agency or the passport agency can run that photo against their existing database and say, nope, that's actually not Jamie. That's somebody that's an imposter seeking to get credentials under your name. Okay, and that seems like an instance where this technology could be used for good. That's right. Ultimately, face recognition is a tool to assist identification. So there are
Starting point is 00:03:45 lots of positive applications for the technology. But because face recognition can be used remotely and in secret, it can identify people at a distance without them knowing. It also carries a number of risks, and the applications by law enforcement in particular raise those concerns. Okay, and I want to talk about the application by law enforcement. I know this is where a lot of your research centers, but first, quickly, can you tell me about some other ways that facial recognition technology is being used already in the United States? You mentioned face recognition on phones. That's pretty typical, as is face recognition in social media.
Starting point is 00:04:30 Retail outlets are beginning to use face recognition in a couple of different ways. We've seen instances where face recognition, or at least face tagging, tagging certain customers by what their face looks like, is used to identify who is in a given store. There are also lists of known or suspected shoplifters that stores purchase, and then they compare anybody entering their store against those lists. Casinos use face recognition, and a lot of public agencies use it as well. Nexus users flying into Vancouver International Airport are the first in Canada to use facial recognition kiosks. Please look into the mirror. The Canada Border Services Agency claims it will speed up processing and align its security with global trends. That's interesting. And it's interesting that you spoke about retail because here in Canada,
Starting point is 00:05:18 there were these two malls in Calgary that were using this technology. It was in the malls' digital directories. You know, when you go up to the directory, you're trying to find where a store is in the mall. And the malls said that they were allowed to do it under our privacy laws that govern private enterprises because they weren't taking people's names. They were just trying to figure out their ages and their genders.
Starting point is 00:05:40 It all started on Reddit when a user posted a picture of one of Chinook Mall's directories appearing to have some sort of coding. That coding included terms like face encoder and face analyzer. Cadillac Fairview says the data is used to better understand traffic patterns and insists that no video or images are stored. Still, though, when it was made public there was this huge backlash here. The Office of the Privacy Commissioner launched an investigation. But CF now says they are suspending use of the cameras inside those mall maps. That's interesting. Yes, face recognition describes a broad group of technologies, one of which is face analysis or face classification,
Starting point is 00:06:20 looking at somebody and determining what their race, sex, or age is. That is used by advertisers or by retail stores to try to identify what their target group is and what ads to serve to individuals. This does raise a number of concerns, even if it's not individually identifying somebody. There's a well-known study out of MIT. Researcher Joy Buolamwini found that gender classification algorithms
Starting point is 00:06:46 overwhelmingly fail in gender identification of dark-skinned women. Oh, interesting. So they get it wrong. They get it wrong. Okay. And I want to talk about some of the implications or the consequences of that with you a little bit more in a moment, but just a few more examples here. I've read that Taylor Swift has used this technology at her concerts. Taylor Swift's security team set up a special kiosk here at the Rose Bowl in Pasadena. The kiosk showed exclusive rehearsal footage, irresistible to hardcore fans. But when the fans stopped to watch, they had no idea they were being
Starting point is 00:07:21 secretly photographed. Yes, there have been rumors that it's used at sporting events and concerts to try to identify individuals suspected of stalking. In Taylor Swift's example, it's also used to identify individuals who should not be at certain sporting events or who have outstanding warrants, perhaps. All right, so we were curious about the laws that might govern this in Canada. So our producer Imogen called former Ontario Privacy Commissioner Ann Cavoukian; she's now the Executive Director of the Global Privacy and Security by Design Centre. Here's what she had to say. To the best of my knowledge, we do not have laws relating to
Starting point is 00:08:02 facial recognition per se. We have privacy laws relating to the collection of personal information, both by private sector entities and by the public sector. But none of them are specific to facial recognition. And the problems arise because facial images can be captured at so many publicly available sites. Everyone acknowledges this is a growing trend. And that's why so many people are calling for the creation of federal laws to outlaw the use of facial recognition like this, clearly without consent. So that's what's going on here in Canada. But what kind of laws exist in the U.S.? At the federal level in the U.S., it's very much a rules-free zone still.
Starting point is 00:08:52 There are no federal laws that specifically speak about the collection of somebody's face template or face biometric. Okay, let's get to law enforcement. So what has your research shown about how law enforcement is using facial recognition technology in the U.S.? Face recognition use by police in the U.S. is incredibly common. It is primarily used as an investigative tool to seek to identify somebody who's been caught on camera committing a crime or being a witness to a crime or a victim or just somebody who's associated with a criminal investigation. We estimate conservatively that a quarter of all law enforcement agencies have access to one of these systems, and increasingly law enforcement has the ability to run or request searches against driver's license photos, meaning that over half of all American adults are actually
Starting point is 00:09:52 enrolled in databases that are used for criminal investigations. And these are face recognition databases. This is unprecedented. Can you give me an example of how that might work? So we have a number of examples from Freedom of Information Act requests that we've submitted to agencies around the country. In one example from New York, an individual was suspected of assault in the Bronx. The investigator had a photo of the individual. They submitted that to their face analysis team, who ran it through the face recognition system and received a candidate list back, a list of possible matches that the algorithm thought might be the person. And then an individual, an analyst, looked through that and decided that somebody was a match. They then sent that back to
Starting point is 00:10:43 the investigators, and the investigators eventually made an arrest. Okay. And then how did that play out in the courtroom? Here's the thing. In the U.S., we have yet to see prosecutors introduce face recognition as evidence in one of their cases. But that's not to say that face recognition hasn't led to the identification of somebody and potentially the misidentification of somebody. In the U.S., we have the Fifth Amendment, which ensures the right to a fair trial. Included in that is the right of a defendant to get information speaking to their guilt or innocence that the prosecution has. So in my view, any information about a face recognition search that's run must be turned over to the defense. But we're not seeing that happen. We're not seeing disclosure of the fact that sometimes the individual arrested might be the 316th person on the candidate list,
Starting point is 00:11:37 meaning that the algorithm thought 315 people looked more like the suspect. Let's talk about that for a minute, this idea of misidentification. You mentioned it before that the algorithms can be wrong, particularly when it comes to gender and people of color. And so how does it misidentify someone? Face recognition is a probability tool. It says this person is more likely to be a match than someone else. Because of that, it is an identification aid, and it relies on an individual to then make a
Starting point is 00:12:11 determination about whether the algorithm got it wrong or not. Sometimes the algorithm gets it wrong, and sometimes the human gets it wrong. The American Civil Liberties Union used Amazon's facial recognition technology to scan photos of members of Congress. But they found the machines mistakenly identified over 20 of them as people who've been arrested for crimes.
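To make the candidate list idea concrete, here is a sketch that builds on the hypothetical example earlier in this transcript (it reuses that sketch's cosine_similarity helper and gallery; the threshold value is likewise invented). Instead of one answer, the system returns a ranked list of lookalikes, and a human analyst chooses from it, which is how someone ranked 316th can still end up under investigation.

```python
# Illustrative sketch: a face recognition search returns a ranked candidate
# list, not a yes/no answer. Reuses cosine_similarity and gallery from the
# earlier sketch; the 0.4 threshold is an arbitrary illustrative cutoff.
def candidate_list(probe_print, gallery, threshold=0.4):
    """Return every gallery entry scoring above the threshold, best first."""
    scored = [(name, cosine_similarity(probe_print, known))
              for name, known in gallery.items()]
    ranked = [(name, score) for name, score in scored if score >= threshold]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Rank 1 is only the algorithm's best guess; nothing in the software stops
# an analyst from picking a much lower-ranked face as the "match".
for rank, (name, score) in enumerate(candidate_list([0.88, 0.12, 0.31], gallery), 1):
    print(rank, name, round(score, 3))
```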
Starting point is 00:13:07 I wanted to ask you about this study that you did called Garbage In, Garbage Out, where you looked at the kind of information that's being used by police in these facial recognition searches. So we talked about the fact that you could put a picture in and it could spit out, you know, the wrong identity. But I understand you also found this case where, like, a police officer put in a celebrity doppelganger to try and figure out who the suspect was in a crime? That's right. NYPD, the New York Police Department, was investigating somebody who was wanted for stealing a few beers from a drugstore. The photo was not very good quality. One of the detectives thought the guy in the photo looked like Woody Harrelson. Mr. Malone, this is
Starting point is 00:13:47 the proudest day of my life. And if you ever need someone to just, you know, yell at, I'm your man. So they went on Google, found a photo of Woody Harrelson, stuck it into their face recognition system, and came up with a possible match.
Starting point is 00:14:04 The person that they thought was a match, not to Woody Harrelson, of course, but to the original suspected beer thief, was the tenth person on the list, meaning that the algorithm actually thought nine other people looked more like Woody Harrelson than the person that they ended up investigating and charging with the crime. This underscores the fact that we talk about face recognition as a biometric, which makes it sound highly scientific. It makes it sound similar to fingerprints. But the reality is when we look at how it's used by police,
Starting point is 00:14:35 it's very much not controlled by any scientific methods or protections against submitting celebrity doppelgangers, submitting highly edited evidence, that type of thing, all of which will increase the likelihood of a misidentification. The NYPD says, quote, facial recognition is merely a lead. It is not a positive identification, and it is not probable cause to arrest. No one has ever been arrested on the basis of a facial recognition match alone. Police say concrete evidence is developed to link the suspect to a crime. So we know that some law enforcement agencies in Canada are using facial recognition technology
Starting point is 00:15:19 as well. The Toronto Police say they use only their mugshot database to identify suspects committing criminal offenses. The Ontario Provincial Police, which is a fairly large provincial service, uses it as well. Our national police force, the RCMP, is refusing to say whether or not they use this technology. What worries you most about facial recognition and policing? In my view, one of the riskiest applications would be face recognition as a surveillance tool. The ability to hook up face recognition to surveillance cameras and basically be able to identify anybody in a crowd walking down the street and locate where they are
Starting point is 00:15:58 at a given point in time, meaning that you could track their movements across time and space. This is something that, in the U.S., our Supreme Court has recognized: we have an expectation of privacy in our movements in public, even though we actually can't help but show our face in public. There are a lot of laws in various states that say we can't cover our face, and we certainly can't leave our face at home like we could our phone, for example. This raises very serious questions about our privacy in public, our rights to anonymity, and also our rights to peacefully assemble and protest and associate in public. What happens when I want to engage in public protest, but I don't
Starting point is 00:16:40 necessarily want myself to be identified as part of that protest? In 2015, we know that the Baltimore County Police Department used face recognition with social media monitoring to try to identify people at a protest around the death of Freddie Gray in police custody. Gray was arrested in 2015 for having a switchblade and died after bouncing around in the back of a police van, which sparked angry protests. No justice! No peace! And riots in Baltimore. For those that believe that I'm anti-police, it's simply not the case. I'm anti-police brutality.
Starting point is 00:17:14 So they were protesting police behavior. Now imagine, even if police aren't using face recognition like that, just the perception may lead to people not participating in those discussions, leading to a fear. Right. So when it comes to the concerns around law enforcement, there's obviously the concern around misidentification, but the second and, for you, even larger concern is around surveillance. That's right. And I think the more accurate the technology gets, the less of a concern misidentification may be. But the more accurate the tool gets,
Starting point is 00:17:50 the better of a surveillance tool it becomes. So just making the technology better isn't going to... I want to ask you about this New York Times article that popped up over the weekend. It's quite remarkable. It's about a company called Clearview AI. And it is being used by law enforcement agencies, including in Canada, though no law enforcement agencies we reached out to would confirm that they use this. The Toronto Police essentially said they don't use it, they just use the mugshot database. But, you know, this company essentially is an app that has scraped up to three billion images from Facebook, YouTube. This data goes far beyond mugshots. Kashmir Hill is a technology reporter for the
Starting point is 00:18:46 New York Times. The police officer said it was incredible that it had helped them solve dozens of cases, dead-end cases that they had abandoned. I did an interview with their founder and he ran the app on me and it pulled up photos of me that I didn't know were online. And then I covered my face and it still pulled up seven photos of me. If the police can use all these pictures to identify anybody on the internet, you know, how concerned are you about that? This company very much takes all the fears of face recognition and realizes them. When I first started researching face recognition,
Starting point is 00:19:26 this was what people were most concerned about. The ability of this technology to make photos Google-able, essentially; to find the photos you don't even know that you're tagged in on social media or just on the internet generally. This company has purported to make a database of 3 billion random images of probably most of us based on the photos we either intentionally or unintentionally appear in. I also can't imagine how, in a jurisdiction where there is a requirement of notice and some sort of meaningful consent, those 3 billion images make it into that database in the first place. Did every person that shows up in those 3 billion images actually consent to their use in this capacity? I highly doubt it. Right. That's a very good point.
Starting point is 00:20:16 Michael Arntfield is a criminologist and former police officer. Well, I think we're deluding ourselves if we think that we have any privacy whatsoever. I mean, people throw around the word, oh, this is Big Brother. There has been no ruckus about what's been going on already with these companies. So to think that now we can actually use this for a productive purpose, for a public safety purpose, I'm not sure why the alarm is being sounded now. So all of this really brings me back to what got us interested in the story in the first place, which is news that the EU is considering a temporary ban on
Starting point is 00:20:50 the technology, a moratorium of three to five years. And I understand this is something that you are supportive of as well. And what do you think it would help accomplish? That's right. I do support the idea of a moratorium. We very much put the cart before the horse in implementing this technology without fully understanding how accurate it is, whether the accuracy rates vary based on who you are and who you're searching for, and the consequences to our privacy and to our other civil rights and civil liberties. You know, it feels like a conversation that isn't being had very loudly. Would you agree with me? I don't hear this conversation very often. I agree. I think I'm in a bubble where a lot of us talk
Starting point is 00:21:40 about face recognition a lot. You must talk about it all the time. I do, but it is a broader discussion, and I think a lot of countries right now are grappling with that. In the UK, face surveillance is relatively common. It's been piloted by the South Wales Police and others, and there has been a court case about this. Ed Bridges sued after he was filmed participating in a demonstration against the arms trade. It's become pretty apparent that the fundamental issue is that the technology has been developed quicker than the law has been able to keep up with it.
Starting point is 00:22:11 At the very least, I think what we can expect to see from this is the law being forced to catch up. On balance, the court did find that individuals have a privacy interest in their identification in public, but that the law enforcement need for this technology outweighs that privacy interest. This shows a court attempting to look at the various interests and weigh them. But the conversation's very premature.
Starting point is 00:22:37 And while we're having those conversations, or not having them, the technology continues unregulated. Do you worry that it's going to be impossible, or already is impossible, to put the genie back in the bottle here? The Clearview AI example does make me worry that it's too late in some applications. It's not too late to regulate. It's not too late to put very, very strict controls on how law enforcement agencies use it. But the question does become, if a company has already put together a database of 3 billion images, even if we regulate now, what happens to that database?
Starting point is 00:23:14 It still exists. So ideally, the time to have these conversations would have been 10 years ago, but here we are. And today is also a good time to have those conversations. A recurring theme in this world, it feels like. Clare Garvie, thank you so much for joining me and having this conversation today. Of course. Just an update on the new China coronavirus. We did an episode on this on Wednesday about the virus
Starting point is 00:23:56 and how the world just isn't ready for a global pandemic. You can find that in our feed. As of Wednesday night, the death toll in China had risen to 17 people. And the city of Wuhan, home to about 11 million people and where the virus began, is stopping buses, subways, ferries and flights to and from the area. People there are being told to avoid crowds and limit exposure. This is happening in a week where millions of people are expected to travel for the Chinese Lunar New Year. The WHO is still trying to decide if they're going to declare this an international health emergency.
Starting point is 00:24:32 They postponed that decision until today. They're saying that they need more information. We're going to stay on the story, but that's all for today. I'm Jamie Poisson. Thanks so much for listening to Front Burner and talk to you tomorrow. For more CBC Podcasts, go to cbc.ca slash podcasts.
