Today, Explained - How AI makes policing more racist

Episode Date: July 2, 2020

Turns out it’s just as biased as people are. Transcript at vox.com/todayexplained.

Transcript
Starting point is 00:00:00 The all-new FanDuel Sportsbook and Casino is bringing you more action than ever. Want more ways to follow your faves? Check out our new player prop tracking with real-time notifications. Or how about more ways to customize your casino page with our new favorite and recently played games tabs. And to top it all off, quick and secure withdrawals. Get more everything with FanDuel Sportsbook and Casino. Gambling problem? Call 1-866-531-2600.
Starting point is 00:00:23 Visit connectsontario.ca. A while back on the show, we talked about the dangers of facial recognition technology. And we talked about the first city to ban it, San Francisco. But over the last few weeks, the anti-facial ID movement has gone from a trickle to a flash flood. Until now, a lot of the conversation has been around privacy. These days, it's racial justice. Amazon says it is temporarily banning police and law enforcement from using its controversial facial recognition software. The decision follows weeks of protests against police brutality in the wake of George Floyd's death. IBM said that it is getting rid of their facial recognition programs.
Starting point is 00:01:08 Microsoft is urging Congress to put more regulation on facial recognition technology. IBM CEO sent a letter to Congress saying that he wants to work with Washington to promote justice and create a dialogue on whether facial recognition software should be used by local police. But it's not just big tech. Boston just became the largest city after San Francisco to ban government use of facial recognition technology. Boston should not be using
Starting point is 00:01:32 racially discriminatory technology and technology that threatens our basic rights. And now the coders themselves are speaking out. This week, the Association for Computing Machinery, the world's largest computing group, wrote this in a statement. The technology too often produces results demonstrating clear bias based on ethnic, racial, gender,
Starting point is 00:01:52 and other human characteristics. Such bias and its effects are scientifically and socially unacceptable. I asked Ayanna Howard why all of this is happening now. She's a roboticist who also teaches about ethics, robots, and AI at Georgia Tech. It's because with George Floyd, the word I keep hearing is an awakening, that people started to realize that Black and brown folks, but Black folks specifically, have been kind of on the wrong side of some of these practices and policies with police and with law enforcement. And of all the AI applications that have been out there, facial recognition has had kind of the most movement into some of these fields with law enforcement. We know that AI is used by border patrol.
Starting point is 00:02:46 The FBI and U.S. Immigration and Customs Enforcement are reportedly using driver's license photos for facial recognition searches without their owners' knowledge or permission. We know that the court system uses AI quite a bit for criminal recidivism, for determining who should be paroled. What's scary is when we see AI being used in areas where our civil liberties can be violated. How does that work exactly? If you think about facial recognition, it means you need a lot of faces to put into the system.
Starting point is 00:03:21 Now, where do you think those images are taken from? They're actually taken from police records. They're taken from the web. They're taken from all the places you can think of that you have images. Well, we already know if you look online and you look at media, they historically have represented Black and brown people in a negative light. And then if you're adding in the fact that you're using police records, we already know that there's a bias, and everyone knows this, with respect to specifically Black men. That's the data that's being fed. And so if most of the data you have that's being fed into AI basically says 80% of these individuals that look like this are perpetrators, it means that if you come in and you have someone innocent,
Starting point is 00:04:06 well, you know, you're going to match to a larger percentage of the database of negative images because that's what's been learned. Say you're innocent, but you're brought in. Well, first off, a mugshot is taken. You have fingerprints. Even if you are exonerated, you're now in the system. You have been targeted, which then means you're searchable. So it becomes a systematic cycle where it's very difficult to break out, especially because it's AI. Have we seen any recent examples of AI wrongly accusing a person of color? There was a robbery in Michigan. Someone stole nearly $4,000 in watches from this Detroit Shinola boutique.
Starting point is 00:04:46 Robert Williams, who has no criminal history, was wrongly arrested for the crime on his front lawn in front of his wife and young daughters. Facial recognition software falsely matched his driver's license photo to security footage of a shoplifter. Julia comes out while they're putting me in handcuffs, my oldest daughter, and I tell her, hey, Juju, go back in the house. They took the images off the camera. They then put it within a system and created a mugshot. A detective turns over a picture of a guy inside Shinola, and he's like, so that's not you? I look. I said, no, that's not me.
Starting point is 00:05:26 He turns another paper over and he said, I guess that's not you either. I picked that paper up and held it next to my face. I said, this is not me. I was like, I hope y'all don't think all Black people look alike. They then found out, oh, sorry, wrong person. Which means that there were also human biases introduced along with the AI bias. And so you had a double whammy.
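[A note on the mechanism Howard describes above: the toy sketch below, in Python, illustrates how matching faces by similarity against a demographically skewed gallery yields more candidate "hits" for someone from the over-represented group. This is our illustration only, not code from any real face-recognition system; the embedding model, the 0.5 similarity threshold, and the 80/20 gallery split are all made-up assumptions.]

```python
# Illustrative sketch only: a toy model of one mechanism described above,
# not code from any real face-recognition system. If the gallery of stored
# faces over-represents one group, an innocent probe face from that group
# clears the match threshold against more gallery entries, purely because
# of gallery composition.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64           # toy embedding dimension (made-up)
THRESHOLD = 0.5    # cosine-similarity cutoff for a "candidate match" (made-up)

def make_embeddings(n, group_center):
    """Sample n unit-norm face embeddings clustered loosely around a group center."""
    vecs = group_center + 0.8 * rng.standard_normal((n, DIM))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

center_a = rng.standard_normal(DIM)  # over-represented group in the gallery
center_b = rng.standard_normal(DIM)  # under-represented group

# A gallery skewed 80/20, echoing the "80% of these individuals" point above.
gallery = np.vstack([make_embeddings(800, center_a),
                     make_embeddings(200, center_b)])

def candidate_matches(probe):
    """Count gallery entries whose cosine similarity to the probe clears the cutoff."""
    return int((gallery @ probe >= THRESHOLD).sum())

# Two innocent probes, one from each group; neither is actually in the gallery.
probe_a = make_embeddings(1, center_a)[0]
probe_b = make_embeddings(1, center_b)[0]

print("innocent probe, over-represented group: ", candidate_matches(probe_a))
print("innocent probe, under-represented group:", candidate_matches(probe_b))
```

Run it and the innocent probe from the over-represented group typically clears the threshold against several times as many gallery entries as the other probe, which is the "larger percentage of the database" effect described in the interview.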
Starting point is 00:06:02 Do we need to radically rethink AI too? Defunding the police basically means you don't have as many police officers. But you still have to do something in terms of quote-unquote law and order. What's going to happen? You're going to bring in AI by default, right? That's like what people do.
Starting point is 00:06:20 You bring in automation when you reduce your workforce, which means we need to be able to fix AI. If you abolish AI and you also abolish the workforce, I don't know what you would replace it with. But should AI have a place in policing at all? There's one police department that actually had a nice way of using AI. They had an AI tool, basically predictive policing, that identified hotspots. Traditionally, what police do is they would send police officers to places that have hotspots, you know, to make sure that they find the crime before it happens or find individuals. Instead, what they did is they used those hotspots to then engage with the community leaders within those hotspots. So they worked with the community to actually alleviate the crime. And crime went down, robberies went down. They used the AI not to deploy police
Starting point is 00:07:18 officers. They used AI as a tool to have much more human-human collaboration. So that's rethinking what you're doing. And so if you think about the justice system, instead of using AI for criminal recidivism, how do you use AI to do something else, right? Like, what is it that enables human-human interaction to then fix the problem? It just seems a lot like playing with fire to me, even if people or an institution have the best intentions.
Starting point is 00:07:48 I mean, if we're going to use AI, how do we keep it under control? You know, one of the things we don't have, which I think might actually work, is this aspect of accountability. And it's because if you think about what AI was used for, it was to make our lives better, so we have, you know, nice chatbots online. It wasn't really used for things that can impact us in our day-to-day lives. But when we have something that impacts us, I would say like drugs and medicine, right? There's an organization, for example, the FDA, that monitors. Like I, as a company,
Starting point is 00:08:23 if I decide to make a drug in my house, I can't just release it. Like I have to go through a process and sometimes it takes long and sometimes it gets people frustrated, but there's a process where I have to show like this drug has this positive benefit. And guess what? It does have these harms as well. And sometimes FDA is like, yeah, you can't do it. You got to go back to the drawing board. Sometimes FDA is like, okay, this is an acceptable harm, but you have to put it on the label. If we think about AI for those things that impact our civil liberties, like facial recognition, you know, when a company is developing, it should go through this whole aspect of, these are the positive things.
Starting point is 00:09:03 These are the harms. And we've done the studies. And you have a third body basically say, yeah, no, not acceptable. Up next, how the federal government could manage artificial intelligence, or even ban law enforcement from using it. Ramp is corporate card and spend management software designed to help you save time and put money back in your pocket. Ramp says they give finance teams unprecedented control and insight into company spend. With Ramp, you're able to issue cards to every employee with limits and restrictions and automate expense reporting so you can stop wasting time at the end of every month. And now you can get $250 when you join Ramp.
Starting point is 00:10:11 You can go to ramp.com slash explained, ramp.com slash explained, R-A-M-P dot com slash explained, cards issued by Sutton Bank. Member FDIC. Terms and conditions apply. Bet MGM, authorized gaming partner of the NBA, has your back all season long.
Starting point is 00:10:40 From tip-off to the final buzzer, you're always taken care of with a sportsbook born in Vegas. That's a feeling you can only get with BetMGM. And no matter your team, your favorite player, or your style, there's something every NBA fan will love about BetMGM. Download the app today and discover why BetMGM is your basketball home for the season. Raise your game to the next level this year with BetMGM, a sportsbook worth a slam dunk, and authorized gaming partner of the NBA.
Starting point is 00:11:12 BetMGM.com for terms and conditions. Must be 19 years of age or older to wager. Ontario only. Please play responsibly. If you have any questions or concerns about your gambling or someone close to you, please contact Connex Ontario at 1-866-531-2600 to speak to an advisor free of charge. BetMGM operates pursuant to an operating agreement
Starting point is 00:11:31 with iGaming Ontario. Sigal Samuel, co-host of the Future Perfect podcast here at Vox. In the first half of the show, Ayanna Howard mentioned that she wanted to see something like an FDA for artificial intelligence.
Starting point is 00:11:46 Is there any indication that the federal government is going to do something like that? So far, I haven't seen any indications that the federal government is actually willing to do that. But it's not only Ayanna who's calling for that. Definitely, I'm seeing an increase in calls for that from groups like the Algorithmic Justice League, which is headed up by Joy Buolamwini, a researcher at MIT. Last year, I did a piece for Vox presenting a crowdsourced algorithmic bill of rights. And a lot of the experts I spoke to for that piece voiced this same idea. For example, Ben Shneiderman, who's a computer science professor at the University of Maryland,
Starting point is 00:12:22 he said that we need to create what he called a National Algorithm Safety Board. If you're a major company and you're about to put out a major algorithm or you're a bank and you're going to change the way credit is assigned, I think it's appropriate that you come before the National Algorithm Safety Board and that there's a review. Just like we have oversight boards for airplanes, you know, they investigate the crashes, we need the same thing for facial recognition. So we've got some proposals for these oversight committees,
Starting point is 00:12:54 but is anyone working on legislation that could make these a reality somewhat quicker? Something pretty exciting just happened on June 25th, when lawmakers in the House and the Senate jointly introduced new legislation that would effectively ban law enforcement from using facial recognition in the U.S. It's a pretty big deal. It's called the Facial Recognition and Biometric Technology Moratorium Act of 2020. That was a mouthful. It's sponsored by Senators Markey and Merkley and Representatives Jayapal and Pressley.
Starting point is 00:13:27 The criminal justice system is already rigged against Black and brown Americans. We have to act with urgency to ensure that this technology doesn't become a new tool in the 21st century to subjugate and fill the system with people of color. Basically, it would right away stop U.S. federal agencies like the FBI from using facial recognition, and it would also require state police agencies to put in place similar policies banning the use of the tech if they want to be able to receive certain federal grants. So if that passes, that could be pretty significant, especially since we're not talking about a moratorium here. We're talking about a permanent ban. It's the kind of legislation that would stay in effect until new legislation is passed to unban it. How likely is that to pass?
Starting point is 00:14:22 You know, a few weeks ago, I would have been more cynical and I would have said, I don't think it's that likely to pass. But now I actually think it's more likely. You know, this legislation came just one day after Robert Williams told his story to the press. That's the Black man in Detroit who was arrested falsely due to this racially biased facial recognition algorithm. And this is all coming on the heels of the upswell in Black Lives Matter protests and the major national conversation that it has sparked about facial recognition. So given the cultural climate we're seeing now around all this stuff, I think that might have actually teed up quite nicely this moment where
Starting point is 00:15:03 that kind of legislation might be more likely to pass. So that's the political side of things, but we've seen companies taking steps on this too. I guess, how cynical should we be about companies sort of trying to police themselves? Personally, I think that we have every reason to be skeptical and not too credulous of giant tech companies when they say they're going to regulate or put moratoria on the technologies that they're creating and selling, right? Their interest is in their own bottom line at the end of the day. So, you know, IBM said, okay, we don't need to do facial recognition anymore, but it wasn't making that much money off of its facial recognition tech to begin with, so it's not a really big deal for it to pull out of that business.
Starting point is 00:15:51 Notice that Amazon said, we're doing this one-year moratorium on the technology to give Congress enough time to come up with regulations. Let me actually just read you what Amazon said in its statement. It said, We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested. So what does that mean, right? Translation, we want to help write the regulations so that these new rules won't totally destroy our ability to profit from this tech. So I guess we have government regulation that seems like it's a bit away from being a reality. We have companies that are maybe saying that they want the government to regulate
Starting point is 00:16:32 them, but could be actually trying to get in on the regulation themselves. I mean, how hopeful are you that we can put the cat back in the bag here? It's funny. I tend to be quite cynical about this stuff, but I would say if ever there was a moment when it looked like we actually might be able to push back strongly in law against facial recognition, it's now. A lot of that has to do with the Black Lives Matter protests that we've just been seeing over the past few weeks. During these protests, you saw the FBI, plus police in various cities like Seattle, Austin, and Dallas, publicly and explicitly asking citizens to send them videos of the protests so that they could capture visual images of the protesters and then use facial recognition to identify them by name, so that they could punish them if they were damaging property or looting. You know, and I think the news of that really alarmed people and it helped kind of fuel
Starting point is 00:17:33 this national uproar over facial recognition. I think the other thing, which is a bit bittersweet, is that the protesters were this racially mixed population. You had, of course, a lot of Black people. You also had white people. You had people of all backgrounds protesting together in the streets. And so I think for some of the white people, some of the non-Black people at the protests, that was maybe the first time they came to realize this technology could actually be used on me. It could actually have harmful consequences for my life too.
Starting point is 00:18:07 And so more and more people outside of that Black community and outside of the kind of privacy nerd community are starting to see this as a real problem. So I think that's helping to build up this base for the pushback against facial recognition tech. Sigal Samuel is the co-host of the Future Perfect podcast at Vox. Future Perfect actually just released a new limited series called The Way Through. It's all about how to use philosophy and faith to help us out in times of COVID.
Starting point is 00:18:45 I'm Noam Hassenfeld, filling in for Sean Rameswaram. He'll be back on Monday. The rest of the team is Muj Zaydi, Afim Shapiro, Jillian Weinberger, Bridget McCarthy, Amina Alsadi, and Halima Shah. Cecilia Lei checks our facts and the mysterious Breakmaster Cylinder makes our music. We had help from Hannes Brown this week. And Liz Kelly Nelson is Vox's editorial director of podcasts. One more thing. We're working on some kids episodes for later this summer, and we want to know all the creative ways your kids have passed the time. Have they made up games, written a play, gotten into mysterious kinds of trouble? Or are they just really bored?
Starting point is 00:19:18 We'll listen to that too. Have them record a voice memo and email it to us, todayexplained at vox.com. Or they can call and leave a message at 202-688-5944. That's 202-688-5944. We're off tomorrow for the holiday weekend. We'll be back in your feed on Monday. Thank you.
