Today, Explained - The fight for your face

Episode Date: May 14, 2019

Today San Francisco could become the first American city to ban government agencies from using facial recognition technology. Vox's Sigal Samuel explains how a cool sci-fi feature might now wreak havoc on civil liberties.

Transcript
Starting point is 00:00:00 If you do marketing in your life, MailChimp has an all-in-one marketing platform that allows you to manage more of your marketing activities from one place. And who among us doesn't want to manage all of the things we have to do from one place? You can learn more at MailChimp.com. Science fiction almost got it right. Face recognition technology has become part of daily life. The ability of computers to identify faces has gotten a hundred times better, a million times faster, and exponentially cheaper. Imagine if instead of fingerprints, we could use people's faces to find out who they are or even where they are. It was all fun and games in the movies, but now facial recognition technology is here and we have to deal with it. The technology can single us out in real time as we go about our daily business, often without us ever knowing.
Starting point is 00:01:08 And today, San Francisco is voting on something called the Stop Secret Surveillance Ordinance. In addition to being a delightful tongue twister, it is also a really important ordinance that, if it passes, will all-out ban the use of facial recognition technology by government agencies. Sigal Samuel writes about technology for Vox's Future Perfect. It would also require other types of surveillance tech, like license plate readers, to only be adopted by cities after they receive a full public airing and are voted on. And it would also require city governments to set super clear policies on how they're going to use any surveillance tech. So if this passes today, San Francisco will become the first city in the U.S. to all-out ban this technology for government agencies.
Starting point is 00:02:03 San Francisco wants to ban the government from using this kind of technology. Are they already using it? So there are a lot of agencies that really favor this technology. The FBI, for one, has a huge database with millions of faces in it. If you're an American, there's approximately a one in two chance that your face is in one of these databases. A lot of people don't realize quite how widespread this is already. One Georgetown Law study found that law enforcement face recognition specifically
Starting point is 00:02:34 impacts over 117 million Americans. You know, this facial recognition technology is already being used by police departments in many states. We don't know exactly how many police departments, or which ones, or where, because they're not forced to, you know, make that information public. But cops can use this wherever they find you. If they pull you over, if they just show up at your door, they can use this technology to identify who you are just by scanning your face. I'm personally dying to know, are we already at the point where it's like the Jason Bourne movies where they're like searching for someone in a train station?
Starting point is 00:03:16 Where the hell is he? You cannot afford to lose this guy, people. And all of a sudden, every camera starts turning and focusing in on people's faces and eventually, boom. Okay, there he goes. They find the guy they're looking for. Is that already happening? This is sort of a bit of the nightmare scenario, right? It's creepy. Yeah, that's creepy because that puts the burden of proof on you to prove that you're not the person that the AI system is saying you are. Which, footnote, un-American. Not how it works here. Right. Airports are increasingly installing these systems.
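To make that Bourne-style scenario concrete, here is a minimal sketch of how one-to-many ("1:N") face identification works in principle: the system converts each face into an embedding vector, then searches an enrolled database for the closest match above a confidence threshold. Everything below (the names, the vectors, the 0.8 threshold) is invented for illustration; real systems compute embeddings with a trained neural network.

```python
# Minimal sketch of one-to-many ("1:N") face identification.
# The embeddings here are made up; real systems derive them from
# a face-encoder neural network.
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.8):
    """Compare one face against every enrolled face; report the best
    match only if it clears the confidence threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score

# Hypothetical enrolled database (think driver's license photos).
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.2, 0.8, 0.5],
}

# One camera frame yields a probe embedding; the system searches everyone.
probe = [0.88, 0.15, 0.28]
print(identify(probe, gallery))  # ('alice', 0.998...)
```

The threshold is the whole ballgame here: set it too low and the system starts confidently matching people who were never there, which is exactly the kind of false match that comes up later in the episode.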
Starting point is 00:03:49 We've seen headlines in the past few weeks about these systems being used to check you into your flight instead of you using your passport or your boarding pass. It just looks at your face. But these systems are also just installed in airports, so they can theoretically just surveil who's going where. So yes, it's happening? Yeah, I mean, I don't think it's quite at Bourne Identity levels yet. But in a country like China, I think, yeah, they're getting pretty close to that,
Starting point is 00:04:16 if not already there. And ordinances like San Francisco's are designed to stop America from going further along in the China direction. Are there other places besides San Francisco that are, you know, scared of going down the Jason Bourne slash China road here? I do think San Francisco could become kind of the model city for a lot of other cities and states if San Francisco does pass this. Berkeley, Oakland, they're already considering, you know, very similar ordinances.
Starting point is 00:04:46 And also Washington State, Massachusetts, they're also weighing bans. The U.S. Senate is considering a bipartisan bill to regulate the commercial use of facial recognition tech. So, like, we are seeing more and more movement around this. Okay, so a handful of American cities and states are feeling anywhere from ambivalent to afraid about this technology. What about the technology community itself? Actually, in April, a lot of leading AI researchers, including some who are affiliated with these big companies like Amazon and Google, wrote an open letter to Amazon saying, you should not be selling your facial recognition technology to law enforcement until that tech can go through an independent review process,
Starting point is 00:05:32 like to assess its civil liberties impact and pass that test. How did Amazon respond? So Amazon has actually tried really hard to quash a vote by its shareholders that would seek to bar Amazon from selling that tech to law enforcement. In the end, Amazon was essentially forced to let that vote take place, and it's going to take place later this May. Do you think there could be like a large-scale review of this technology and what constitutes its ethical use? I think realistically what's going to happen is
Starting point is 00:06:05 we probably will see increased regulation of this technology. I doubt that we're going to see it abandoned wholesale because it is very attractive to police departments and ICE and folks like that who see it as a method to increase their efficiency. After the break, facial recognition software isn't just creepy. It's also racist because, of course. At the top of the show, I told you that MailChimp has this all-in-one marketing platform that allows you to manage more of your marketing activities from one place. Guess what that will help you do? It'll help you grow your business if you're into that kind of thing. MailChimp is trying to eliminate the need for multiple tools by giving you everything you need to create, publish, manage,
Starting point is 00:07:25 and measure multi-channel campaigns. I don't think I've ever waged a multi-channel campaign before, but if you have, you probably know how tricky that can be. MailChimp wants to help you get to know who to talk to, what to say, and when to say it, and the best channel to deliver the message. And that's what its marketing tools are going to help you do. Learn more at MailChimp.com. Sigal, while some are scared of this technology, I'm sure others out there are also finding this all comforting. Like, now the police are extra sure about who you are because they have facial scanning capabilities. Yay!
Starting point is 00:08:24 So first of all, even if you're a perfectly law-abiding citizen with a halo over your head, you shouldn't really take too much comfort in this. Facial recognition tech isn't always accurate. It is pretty accurate if you are a white male. How accurate is it if you're a white male? It will only misidentify you maybe 1% of the time. Okay, that sounds pretty good. But if you're a woman or a person of color or both, it's a lot worse. How much worse? More like a 35%
Starting point is 00:08:52 chance of being misidentified. Wow. Yeah. Last year, the ACLU wanted to test how much of a problem this is. And they actually ran a facial recognition test and found that Amazon's facial recognition system, which is called Rekognition, with a k, wrongly matched 28 members of Congress to criminal mugshots. This is happening largely because these AI systems are trained on data that is biased against people of color. You know, these systems have been fed more images of, say, white men, so they're better at recognizing white men, properly identifying them. But the AI systems haven't been trained on lots of images of women, of people of color, so they're worse at identifying them correctly. Didn't Google, like, tag Black people as gorillas or something awful like that sometime back? It was a glitch. Like, it was totally a mistake. That was probably the most sort of glaring example of this. Again, you know, their system had not
Starting point is 00:09:57 been trained on enough images of African-American faces. And so when the system saw this, it just didn't know what to do with it. There's also this very strange lawsuit going around right now. Basically, a teenager is suing Apple for a billion dollars. 18-year-old Ousmane Bah says the tech giant's facial recognition technology falsely linked him to a series of Apple Store thefts, and that police showed up at four in the morning to arrest him for these thefts. Beyond being creepy, that's just rude. Four in the morning? I know. Pretty harsh timing. I'm guessing this guy wasn't white? Nope.
Starting point is 00:10:38 He subsequently got summonses from multiple states alleging that he committed thefts in Apple stores like all over the map. And he was saying, I didn't do any of this. What the hell? And he says it caused him so much emotional distress. And that's why he's suing for $1 billion. And what did Apple say? Apple denies that it uses facial recognition tech in its stores. But the teenager alleges that they did use this tech. And the teenager is actually going off the police detective's hypothesis. So whatever the truth turns out to be in this particular case, this high-profile, very expensive lawsuit is just emblematic of this much larger wave of backlash that's happening across the country. So why haven't the people behind these technologies or the people using them, be it Amazon or government agencies, fed more data into these systems so that they're more accurate?
Starting point is 00:11:32 Why haven't they fed them more pictures of white women or Black women or Black men? They have to some degree been working on feeding more faces of different types of people into these systems, but that's only part of the problem. Another part of the problem is that these systems just kind of amplify pre-existing biases in our society. Basically, human bias can creep into these AI systems. If communities of color, like Black communities, have been over-policed in the past, which they have been, then any AI system trained on that kind of data is going to just replicate that bias. Are communities of color freaking out about this? Are they even aware that it's happening? Yeah. Other communities that are kind of vulnerable in the U.S. right now, let's say
Starting point is 00:12:18 Muslim communities, undocumented immigrants, are really, really concerned about the use of facial recognition tech. I know, for example, the Muslim community in California has been super adamant about fighting this technology. And one woman who works for the Council on American-Islamic Relations was recently chatting with me, and she told me, you know, let's say you go to mosque, and you know these facial recognition technologies are operating around you, and you know that the U.S. has a history, especially since 9/11, of, you know, kind of surveilling Muslims at mosques. You start to be afraid to even, like, show your face in public in certain places. So it can have an impact on, you know, freedom of assembly and things like that.
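To put the earlier numbers in context (a roughly 1 percent misidentification rate for white men versus closer to 35 percent for women of color), here is a minimal sketch of how a per-group false match rate is computed. The trial records are invented for illustration; a real audit, like the ACLU's Rekognition test, runs large sets of real photos through the actual system.

```python
# Minimal sketch: computing a misidentification (false match) rate
# per demographic group. All records are invented for illustration.

# Each trial records the probe photo's group and whether the system
# wrongly matched it to a different person in the database.
trials = [
    {"group": "white_men", "false_match": False},
    {"group": "white_men", "false_match": False},
    {"group": "women_of_color", "false_match": True},
    {"group": "women_of_color", "false_match": False},
    # A real audit would include thousands of trials per group.
]

def false_match_rate(records, group):
    """Share of probes from `group` that were wrongly matched."""
    in_group = [r for r in records if r["group"] == group]
    if not in_group:
        return None
    return sum(r["false_match"] for r in in_group) / len(in_group)

for g in ("white_men", "women_of_color"):
    print(g, false_match_rate(trials, g))
```

This per-group breakdown is why researchers push for independent review: a single headline accuracy number can look fine while one group's error rate is dramatically worse.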
Starting point is 00:13:10 Do we know if anything like this has happened already, targeting specific racial groups or ethnic communities with facial recognition technology? To my knowledge, we don't know of it being used in nefarious ways in circumstances like those, but it is coming up now as a possibility because mosques, churches, synagogues, places like that are increasingly worried about security in the wake of lots of shootings and tragedies. And so some people are saying, hey, let's bring facial recognition tech into our spaces. And others are saying, whoa, whoa, whoa, like that's actually a terrible idea. That's going to make us more prone to being surveilled. It kind of feels like even if this gets 100% accurate, it might just amplify a lot of deeper problems. Yeah. Just a few weeks ago, the AI Now Institute released a report on this question, and they
Starting point is 00:13:57 said, like, let's just consider that sometimes de-biasing your AI system is not necessarily a good thing. Let's say you're an African-American, and right now facial recognition tech is not great at identifying your face. Maybe you kind of want to keep it not great at identifying your face because your community has already been over-policed, and if the police get better at identifying you and the rest of your community, that might make it actually worse for you. So the AI Now report basically argued that we need to not just focus on technical de-biasing, but look at how these technologies are actually used in the real world where we have histories of racial injustice, et cetera, and then decide who benefits from using these technologies. You know, just because you make a technology work just as well on everyone, it doesn't mean it's working just as well for everyone. On a wider scale, like this is the basic bargain that we've all been wittingly or unwittingly striking with big tech for years now, is like we're trading our data and our privacy for convenience, right? So like Google and Facebook offer us these cool free services and they're not asking for our
Starting point is 00:15:11 money. All we have to give up in return is our data or our privacy, but our data and our privacy are actually a pretty huge deal. And I think the big problem is people see AI through this sort of techno-solutionism lens, and they just think, oh, this cool, shiny new gadget, it'll solve everything. It'll make our lives more efficient. People also think AI will make our judgments more impartial and objective. So there's a real sort of seductive logic around it, and we're just starting to see people push back against that recently.
Starting point is 00:15:43 We didn't even talk about how game people were to have Facebook suggest tags for our friends in our photos or to pay with our face or whatever. If so many people are just going to hit agree and sign up, is the spread of this technology inevitable? It does seem like there's been this huge wave toward adopting this technology on a really large scale. But in the past month, we've seen so much activism against this, and lawsuits, and, you know, just public outcry. We could be starting to see real results of that activism. We could be a society of people who potentially fight back against this if that's not the reality we want to live in.
Starting point is 00:16:34 Sigal Samuel writes about technology and artificial intelligence for Vox's Future Perfect. I'm Sean Rameswaram. This is Today, Explained. Thank you to MailChimp for supporting the show today. One last reminder that MailChimp's all-in-one marketing platform allows you to manage more of your marketing activities from one place. So you don't have to move while you make your marketing smarter and faster and better. Learn more at MailChimp.com.
