Factually! with Adam Conover - Technology and Race with Ruha Benjamin

Episode Date: September 2, 2020

Princeton University professor Ruha Benjamin joins Adam to reveal the issues at play at the intersection of technology and race, her concept of the "New Jim Code," digital redlining, and how technology can't be relied on to solve what are ultimately social problems. Learn more about your ad choices. Visit megaphone.fm/adchoices See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Transcript
Starting point is 00:00:00 You know, I got to confess, I have always been a sucker for Japanese treats. I love going down to Little Tokyo, heading to a convenience store, and grabbing all those brightly colored, fun-packaged boxes off of the shelf. But you know what? I don't get the chance to go down there as often as I would like to. And that is why I am so thrilled that Bokksu, a Japanese snack subscription box, chose to sponsor this episode. What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds. Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Starting point is 00:00:29 Plus, they throw in a handy guide filled with info about each snack and about Japanese culture. And let me tell you something, you are going to need that guide because this box comes with a lot of snacks. I just got this one today, direct from Bokksu, and look at all of these things. We got some sort of seaweed snack here. We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
Starting point is 00:01:15 chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this is so much fun. You got to get one of these for themselves and get this for the month of March. Bokksu has a limited edition cherry blossom box and 12 month subscribers get a free kimono style robe and get this while you're wearing your new duds, learning fascinating things about your tasty snacks. You can also rest assured that you have helped to support small family run businesses in Japan because Bokksu works with 200 plus small makers to get their snacks delivered straight to your door.
Starting point is 00:01:45 So if all of that sounds good, if you want a big box of delicious snacks like this for yourself, use the code factually for $15 off your first order at Bokksu.com. That's code factually for $15 off your first order on Bokksu.com. I don't know the way. I don't know what to think. I don't know what to say. Yeah, but that's alright. Yeah, that's okay. I don't know anything. Hello everyone, welcome to Factually, I'm Adam Conover, and for decades we've heard about how technology will change the world, right? In the 90s, it truly seemed as though the internet had revolutionary, even utopian potential. We were all told, and we believed, that it would break down the barriers that separate people and information. We were told tech would fix all of our problems. It would solve educational access with free, massively online classes. It would eliminate discrimination because no one could tell who was who online. So we would all compete on the merits. We even thought that tech
Starting point is 00:02:55 would be strong enough to topple dictatorships. Remember the Arab Spring? The media literally told us that Twitter was going to be stronger than decades of authoritarianism from Tehran to Cairo. And this was back when Twitter only had 120 characters. Each of those characters was doing a lot of heavy lifting. Well, suffice it to say, if you're living in the same present that I am, none of these dreams has come true. A decade later, the democratic gains of the Arab Spring have largely rolled back and authoritarian governments have proved to be even more adept at harnessing the communicative power of the Internet than democratically minded citizens are. And in America, well, let's just say our utopia of technologically secured equality has not arrived yet. Let's start with the claim that technology would be a boon for education.
Starting point is 00:03:46 That sure would be useful right now, right? At a time when students and teachers are unable to gather together in the same place. Why, surely after decades of talk and iteration on the idea of online learning, we should be ready for this moment, right? Well, unfortunately, we are not, because the digital revolution has not come equally to America. According to the Pew Research Center, about 15 percent of all households with school age children lack a high speed Internet connection. You know, it's pretty hard to log into the Zoom with your teacher when you're stuck guessing the neighbor's Wi-Fi password.
Starting point is 00:04:25 Wi-Fi password. And the reason for this gap, this digital divide, is that rather than our government exerting its power to make sure the Internet in this country was built fairly, we've allowed corporations to do it for us with very little oversight. And the result is that wealthier areas and populations have been prioritized throughout the history of the Internet, while lower income neighborhoods have had to wait or have never received a connection to broadband internet at all. Advocates call this digital redlining because just like residential redlining, it's had a disparate racial impact. Black Americans are less likely than white Americans to have a broadband connection at home.
Starting point is 00:05:00 A future of internet equality, this is not. Now, look, if you listen to the show, you know that being skeptical of the promises of the techno-utopians is not new to me. It's a recurring theme on this show. But I don't want you to think that I'm turning into a technophobe either. I'm not here to tell you that technology is actually bad. Because the truth is, the source of the problem is deeper. Because the truth is the source of the problem is deeper. The truth is that technology is not produced in a vacuum by abstract thinking techno innovators with their minds up in the cloud.
Starting point is 00:05:31 No, technology is produced by a society. And when a society is based in injustice and inequity, our tech ends up reflecting and reproducing that injustice and inequity. So, you know, if you start with a racist police system, we shouldn't be surprised when the algorithms of predictive policing produce racist results. Why wouldn't they? The technology arises from the society.
Starting point is 00:05:55 So it follows that if we want to fix our society, we're going to need more than just technological fixes. Giving children in a broken school system new laptops is not going to be as effective as reforming their school system so that students don't receive a poor education just because of what zip code they live in. Or how about addressing the poverty that those students return home to every afternoon? A free iPad doesn't fill up the empty belly that makes it hard for students to focus in class. None of this is to say that technology is useless, but on its own, it'll always be insufficient because social problems need social solutions.
Starting point is 00:06:34 Well, to talk about how technology, when improperly used and deployed, ends up reproducing rather than solving social problems, our guest today is Ruha Benjamin. She's a professor at Princeton and the author of Race After Technology, Abolitionist Tools for the New Jim Code. This interview was so much fun. I found her incredibly fascinating and lively to talk to. You're going to love it. Please welcome Ruha Benjamin. Ruha, thank you so much for being here. My pleasure. Thank you so much for inviting me.
Starting point is 00:07:04 Thank you so much for being here. My pleasure. Thank you so much for inviting me. So, look, I grew up with in the middle of the tech revolution. Right. And with all these ideas of tech techno utopianism, that the Internet was going to solve every social problem on the Internet. No one knows you're a dog. The famous New Yorker cartoon. Right. Algorithms will bring in a world of equity and fairness across the land. Over the last decade, there's been a lot of criticism of that point of view. I think you're part of that criticism. Could you talk about that?
Starting point is 00:07:39 And could you talk about this phrase that you've coined, the new Jim Code? I'm really curious about this. Absolutely. So there are two dominant stories that we often tell about technology. One sort of goes back to that phrase techno utopian. And that's the that's the Kool-Aid that Silicon Valley is trying to sell us. And it's about, you know, the robots are going to save us. They're going to you know, they're going to make things more efficient. They're going to be more equitable, all the good things.
Starting point is 00:08:02 But there's another story that we're also accustomed to hearing, the techno dystopian story that Hollywood sells us, which is the robots and the technology is going to slay us, right? So it's going to destroy humanity, take away agency, take all the jobs. And so although on the surface, these seem like opposing stories, like the surface is pretty different. When you dig down, they share an underlying logic that technology is in the driver's seat. We're either going to be helped or harmed by technology, but the human agency and agents behind the screen get lost from the narrative. And so partly one of the things we have to begin to do is to tell different stories
Starting point is 00:08:42 about technology that recoup the elements of power and agency that are already there. But part of the issue that I take is that right now, a small sliver of humanity is doing that imaginative work and doing that design work and that programming work. And so their worldviews, their imagination is being embedded into our physical and digital infrastructures. And so part of what we have to do is explode that phenomenon and really make it much more participatory, democratic. And to do that, we have to think about the existing power relations, which bring us to the new Jim Code, which, you know, will sound kind of familiar for those who say read Michelle Alexander's critique of mass incarceration,
Starting point is 00:09:25 which she terms the new Jim Crow, which itself is a riff off of the way that we've talked about white supremacy and racial segregation in this nation when we term the kind of era of Jim Crow of explicit legalized segregation. And so in the same way that Michelle Alexander is trying to get us to see how mass incarceration continues to perpetuate social and racial control, my concept of the new Jim Code is trying to get us to think about how technology continues to do that work, that it hides so many forms of inequity under the guise of progress, under the shiny surfaces of AI systems, automated decision systems, machine learning, et cetera. And so that code part of it is key, that it's coded, it's harder to discern, but hopefully now with the growing language,
Starting point is 00:10:17 as you mentioned in the last few years, of people and organizations and movements shining a light on this phenomenon, we can start talking about it and pushing back against it. Oh, my gosh, there's so much in there that I want to dive into. I mean, you had this incredible idea at the beginning about how we talk about technology as neutral and as something that's just coming like a force of nature that we almost have no control over. It's literally the robots. It's literally the algorithms. And you're right, that decenters the people who are making it, which is really, which is really interesting. Yeah. Can you talk more about that?
Starting point is 00:10:56 Absolutely. And so I think part of the thing to realize is that so much oppression happens out of solutions to things. And so mass incarceration itself was a reform from the penal, sort of like just killing people. Now we're going to hold you in cages. So now the issue is that how do reforms, how do things that seem to present fixes for things actually reproduce certain dynamics and sort of social hierarchies or forms of oppression. And so in that way, innovation goes hand in hand with inequity and oppression.
Starting point is 00:11:33 So often we associate technological innovation with social progress. That's part of the Kool-Aid is to get us to believe that those two things are the same. When in fact, innovation has long been gone hand in hand with all manner of oppression. And so I often think about like the first person who put up a whites only sign in their store. That was an innovation, right? Yeah. And then later they put up a neon sign and they innovated. Exactly. Or like the first person who said, you know what? You know, what's a good idea? Let me let me create a colored water fountain like that was a bright idea. And like whoever did that, like, you know, it caught on.
Starting point is 00:12:15 And so the things in hindsight that we think of as so like backwards and regressive at the time, they were innovative, which should get us thinking about what now are we so enamored with? You know, we think, oh, this is the next bright thing. And so that's why often when I'm talking with folks, I refer to that Better Off Ted episode. It's titled Racial Sensitivity. I love that show. This is a great pull. That show was canceled before its time. I was a big fan. It was too subversive, Adam. So there's this three minute genius clip in which the company decides to install sensors all over the building. But the sensors don't detect a darker skin. And so none of the black employees can open the doors. They can't use the elevator. They can't use the water fountains. But that's what happens is they, you know, when they bring it to the CEO, the response is, oh, don't worry,
Starting point is 00:13:12 we'll fix this. We'll put a manual water fountain for the darker skin employees next to the one with sensors, which to me is just a really great illustration of how, you know, you create all of these kind of like splashy things to make life easier. And then if it doesn't work for some people, we retreat back to this really iconic, you know, example of the manual water fountain and all over the water fountain, it says, this is for black employees because they can't use anything else. And so, again, getting us to think about our assumptions about what innovation is and what it does. Well, what this connects to for me is I've done work in the past on, for instance, we did an episode of my TV show called Adam Ruins the Internet. And we opened that with a critique of techno pessimists, right? People who say, oh, no, this new technology is destroying our minds, right? Oh, our cell phones, we're all addicted to our cell phones, it's destroying human society. And we made the argument, hey, same as it ever was, right? This is what people have said about every form of technology. People said about books, oh, that they're a destructive new technology that will destroy the younger generations. Literally, they did. People said that about paperback books
Starting point is 00:14:21 and train travel and telegraphs, et cetera. Right. And we were saying, no, you know, technological innovation has always been with us. And there's always been people saying the sky is falling. Yeah. But this is the argument that you're making is the same argument, just the obverse, that it's that, hey, technology also doesn't fundamentally change our society the way we think it does. Right. If we've got a built-in power structure where some people hold power and some people don't, then likely the technologies that that society creates are going to just keep reifying, reinforcing that power structure. Does that track for you? That definitely does. And to the first part of your comment, I was thinking about that meme
Starting point is 00:15:02 that shows like maybe it's like the 1950s, like a train scene, like a New York City train scene. And all the guys have, they're holding up their newspapers and not talking to each other. And it's like, see, this is like people never talk to each other. It's not the phones that, the phones that did it. But certainly this idea that it's not the technology that's inaugurating these forms of sort of antisocial interact, you know, antisocial behavior or inequity. It's the larger ecosystem. It's the social inputs that actually continue one era after the other to continue to produce the same predictable outcomes. And so rather than just say, you know, get rid of all the technology, let's look at ourselves.
Starting point is 00:15:45 Like, really, let's use this black mirror, as it were, you know, to shine a light on what we take for granted about our social order and think about addressing the root issues rather than just trying to throw the technology out. Yeah, when we see those stories about, like you said, from that Better Off Ted episode, which was, that was was like 2009 or something that was very ahead of its time it was like five years later we started having actual stories like that come out all the time pressing pressing about yeah about like this or that facial racial facial recognition technology not uh recognizing people of certain races etc um we see those stories and we say, wow, bad technology, but that's not really a story about the technology. That's a story about our society and which built the technology. Absolutely.
Starting point is 00:16:32 There's a wonderful line there where the, one of the bosses said, this is not racism. It's not racism because we're not targeting black people. We're just ignoring them. And for, and, and, and for me, that's just a perfect, like just expression of so much kind of corporate diversity culture and all of the things that has less to do with those sensors and the technology, but really how people, how so much indifference continues to perpetuate inequality. It's not the
Starting point is 00:17:00 big bad boogeyman behind the screen, like, let me get these people. It's like, I don't give a damn. I don't really care about them. That like allows things that the wheels to keep turning. Well, and so let's talk about that because that is literally the way you put that. And that's a great line is that is, I don't see color like made manifest, right? It's like, I literally don't see it. My tech, not the, the algorithm I designed cannot see it. don't see it. My tech, not the, the algorithm I designed cannot see it. But the, the thing about, I don't see color or these sort of neutral ways of doing it is that it seems a lot less pernicious
Starting point is 00:17:31 than the racism or the racist structures that we were taught about that I was talking about in sixth grade. Right. Exactly. So yeah, we have the image of like, you know, the police officers sticking the dogs on the black youth and the water fountains. Like, really, our imagination of what counts as racism is like the, it requires the white hoods. It requires the snarly, like white men who are out to get you when the vast majority of racist systems just rely on us clocking in and out, like just doing our job, like put your head down and just follow the rules. And so there was a really great study last year this time that came out that was looking at healthcare algorithms and this widely used healthcare algorithm that basically was like a digital triaging system. Like you come in as a patient and it'll tell the healthcare provider whether you're high risk or low risk. And so if you're high risk, they use this particular algorithm to get you
Starting point is 00:18:31 more resources to prevent whatever bad things are predicted to happen to you based on sort of past data of people like you. And so what the researchers who studied this algorithm found is that it was flagging white patients at a much higher rate to receive these coveted resources, like more time with your doctor, more services outside of the hospital. It was basically designed to help people stay out of the hospital. And that black patients who were sicker were not getting flagged for these services. And so this particular algorithm in many ways was carrying out the work of a whites only sign at a hospital. Like you can't come, you're not doing it, but it was through this neutral system. Right. And so when the, when the researchers opened it
Starting point is 00:19:16 up to figure out like, what's happening here, why is this, is it like out to get these black patients? This system was in fact, race neutral in fact race neutral. It wasn't keeping track of race at all. Instead, it was using a proxy variable. It was using healthcare costs, like how much we have spent on particular patients. That was used as a proxy to say, if we've spent more on you, that means that you're higher risk, like you're more likely to get sick. But we have a system in which people who need care can't get to the hospital or don't have insurance. So lots of sick people, they're not getting anything spent on them, which makes the system think, oh, you know what? You're fine. You're low risk. And so precisely by using cost as a proxy for the healthcare need, the system
Starting point is 00:20:03 was perpetuating this past inequality and projecting it into the future under the guise of like a neutral system. And so millions of black patients for years were not getting these coveted services because the healthcare professionals were relying on this system called Optum. And so this is just one of many examples
Starting point is 00:20:24 where really we see that race neutrality, or as you say, colorblindness can be deadly because we're ignoring the past, right? The data that's being used to train a system, it has all of these patterns of both institutional and interpersonal discrimination, because both are at work in our health care system, like individuals and other ways in the policy and the insurance is structured. But it was using that as if it was just a straightforward reflection of reality and teaching this algorithm, make more decisions like this. And so this is what we get when we ignore history or we ignore social inequality in the building of technical systems. And the people building that algorithm probably had no idea what they were doing. They were like, hey, we're just making a little cost algorithm here.
Starting point is 00:21:12 Probably if you told them, they'd be like, no, that's not a racist algorithm. That's almost a direct quote from after this study came out. That's like a direct quote from like, you know, a million articles where they were like, we didn't try to do this. This is not what we meant to do. But again, we go back to better off Ted. Indifference is really one of the main drivers of racism. And so hiding behind the kind of, I'm not racist, but, you know, whatever comes next
Starting point is 00:21:42 is probably going to be racist. Yeah. Are there other examples of this that stand out to you? In almost every, every significant area of our lives where important decisions are being made, those who, the human beings, you know, the gatekeepers are outsourcing those decisions and consulting technical systems as if they're neutral. In our penal system, it's everywhere. From every single stage, from who gets policed in the first place, who's paroled, who's, you know, every single stage, we have risk assessment tools that are being employed that are deeply racially biased. And the studies are starting to pile up that have been auditing these. In fact, now during the pandemic, there was a system called pattern that was being used to decide who would be released because, you know, our, our prisons are overcrowded.
Starting point is 00:22:38 And so COVID is running around. So they're like, okay, we've got to figure out who to let out so that we can sort of, you know, deal with the overcrowding. And so some places use this pattern assessment, again, to decide risk, like who's low risk or high risk. And then the people who were deemed low risk were the vast majority were white people who were in, white men who were inmates. People who were homeless were high risk um black men were high risk people with mental illness were so all of the most marginal in this already marginal and oppressed population um were deemed high risk and so were kept in you know caged where
Starting point is 00:23:18 others were released and the difference was something like 30 plus percent of white men were deemed eligible for early release versus like six or seven percent of black men. And so this is and so pattern this was during the pandemic. But those kinds of risk assessment tools are being used up and down the penal system and our education system. There are examples when it comes to getting like a home loan or other kinds of like loans. And so any area of our lives that you can think of where people are making decisions about a lot of people at once, this kind of new gym code discrimination is happening. Yeah. Just mentioning housing, like it, there seems to be a connection back here, like to redlining, for example, which I've talked about, you know,
Starting point is 00:24:06 extensively on our show and on this podcast before. But cause that was like, for me, really, really learning about the history of it and, and the massive effect it had on American society. That's to me the most vivid example of how, you know, these, how, how the structure of our society can be set up in a racist way that you can end up perpetuating without even knowing it today. Like those the restrictive covenants that they literally had in the suburbs in like Levittown, you know, only Caucasians apply, you know, will be given these mortgages. Of course, that was overt racism. But then, hey, 40 years later.
Starting point is 00:24:54 Of course, that was overt racism. But then, hey, 40 years later. Well, now, if you're just continuing to operate on, hey, what is the home values of the neighborhood? What is is the neighborhood, quote, high crime or not, you know, et cetera. Then you end up perpetuating that original sin. Right. Exactly. And all those people, they just had to go to, you know, go do their little office job at the bank, like fill out the little forms. I love to show people like the actual like bureaucratic forms that go into that to sort of demystify it. Because it's not like a big bad banker standing in the front and be like, no, get out of here. We don't want you. It's like a little bureaucrat sitting there and filling out, okay, this many Italians live in the neighborhood, this many Negroes, this many people who are getting welfare, calculate, calculate, sorry, you can't get a home loan to go in here. So it's like, for me, the forms of harm that we associate with kind of like these scary boogeymen of the past are often carried out by people,
Starting point is 00:25:43 again, just putting their head down and doing their job. In my grandma's neighborhood in Los Angeles, Leimert Park, I dug up this flyer from like the 1940s when that neighborhood was getting developed and the housing developers were trying to entice white families to move there. And they put up these flyers that basically said, come, come live here. We have beneficial restrictions. Your investment will be secured. And of course, beneficial restrictions was a reference to racially restrictive covenants. So they were basically telling them, you buy your house here, you have these covenants that ensure that it will stay white,
Starting point is 00:26:19 the neighborhood will stay white, and your investment will be secured. Now, I was following that kind of rabbit hole. and I learned about a black family that was trying to move into the neighborhood, the Wilson's. And when they did, the homeowners association rallied around and there was a white family that wanted to sell their house to the Wilson's and the homeowners association sued that white family and like, no, you're going to mess up the neighborhood. And it's so interesting when you read like the interview of the main plaintiff from the homeowners association, he says, I'm not motivated by any racial animus. This is strictly an economic, you know, issue. And so even in the 1940s, he's like, yeah, I'm not racist, but this black family is not moving into this neighborhood for X, Y, and Z reason. And so like that rhetoric that we are familiar
Starting point is 00:27:10 with now, it's been with us for a while. People don't want to own like what the ill feeling, but they, the economic motivation, the idea that racism is productive. It doesn't just harm people. It actually garners wealth and status and all the good things of life to those who are perpetuating it. And so even this guy back in the 40s was like, you know what? I don't care. I don't give a damn about the Wilsons, but you're not going to mess up my property investment. Last thing I'll say about this story of the Leimert Park is that there was a Reverend and a rabbi who went door to door knocking on people's, you know,
Starting point is 00:27:51 doors and talking to all the neighbors and was like, this is terrible. We shouldn't be suing this family. We should let the Wilsons move in. And so they went and kind of did this like labor of trying to like impress upon the folks in their neighborhood that this adherence to white supremacy, however, sort of hidden behind the language, the legalese of this lawsuit was not like the values that we should be upholding. And ultimately, through their efforts and others, the Homeowners Association dropped the suit,
Starting point is 00:28:24 the Wilsons moved in, there was like a party for them. And so this is just one example, like we don't always have to wait for the laws to change to start to like force changes in our relations and in our own backyards. And to me, the example of this rabbi and reverend who were like, not having it, were like, we're not waiting for the federal government, but our neighborhood is not going to do this. And so that's really, I think, a call to action for all of us. I do. It does raise the question for me, though, that we were talking earlier in this conversation about and by the way, that's an incredible story. Thank you for sharing it. I'm glad it had a somewhat happy ending. But but we were drawing this contrast between the quote neutral algorithms today and the sicking the dogs on the on the folks on black folks, you know, in the old days.
Starting point is 00:29:11 But what you're describing is a is a story that doesn't sound too dissimilar right now. I mean, we still have homeowners associations. Yes. And there are homeowners associations that still use cloaked legalism and economic arguments to keep out black people. And guess what? We still have cops physically attacking and brutalizing black people. So I mean, I love that. Are things really so different or is it same as it ever was? Yeah, to me, I think that that that insight is like the key because it's not there hasn't been this transition from an old timey racism to this newfangled coded racism. So one of the things that I'm really trying to trace in Race After Technology is the continuity. The fact that now computer codes are doing this work of coding, but legal codes have long been doing this. There's all kinds of ways in which this coding of racism, this embedding it into
Starting point is 00:30:06 our systems, other tools have been used before we've had fancy algorithms. And so that's, again, the kind of point we started with is it's not simply the technology that's inaugurating new forms of racism. It's providing a new kind of like twist on something that's been with us. And so one of the things I do is really show exactly how legal codes have done this. But even if we go to something a little less tangible, like I think about the way that we culturally code our names. I start the book talking about people's first names and how we often use that as a proxy for other qualities about people. And it's often used as a pretext to open doors and shut doors for people. And so there's a great audit study from about 2002 or 2003 in which two economists from the University of Chicago sent out thousands of resumes to employers in Boston and Chicago.
Starting point is 00:31:02 And what they did was they just changed the first names. They changed, like some of the resumes had names like Emily and Greg, some like Lakeisha and Jamal, and all the qualifications were the same, the number of years of education, all the things. And they waited to see who the employers would call back. And of course, we wouldn't be surprised that those white sounding applicants, the names of the white sounding applicants received many more callbacks and, and calls of interest. And the economists calculated this to mean to be equal to the assumption that those white applicants had eight additional years of experience,
Starting point is 00:31:38 work experience that they didn't actually have. And so they received 50% more callbacks. And so this is a way in which our names code certain assumptions about us for good or bad. And people use them all the time to actually provide opportunities for people. Now, someone hearing that, the results of that study might think, well, man, humans are crap. We have this implicit bias. We're discriminating. Humans are crap. Like, we have this implicit bias.
Starting point is 00:32:04 We're discriminating. So shouldn't we let computers make the decisions about employment? You know, like, that's the shift. It's like, okay, acknowledging our bias and then saying, okay, let's let this AI-powered system in which I sit here in front of this screen and it kind of tallies all of these data points. And then what these firms that are selling this do is they say, now we'll compare job seekers scores to those of existing top performing employees in our company in order to decide who to flag as a desirable hire or who to reject.
Starting point is 00:32:38 And so, again, the assumption is that this system, which is presumably created by human beings and had to be taught how to judge applicants, it's somehow going to be more neutral. And in fact, that doesn't turn out to be the case. Like if you in your own company have been hiring mostly men or mostly white guys for the last 50 years, and that's your base, that's your standard for who a good employee is. And now you're judging everyone else according to that. However you code that in terms of body language and posture and accent and all the things that the AI system's keeping track of, you're likely going to get more of the same. And the danger is, is that people actually think that that system is more neutral than,
Starting point is 00:33:19 say, a person looking through resumes and deciding, I don't want Jamal working here, you know? Yeah. Well, a simple way to put it is like, you know, AIs, I'm not an AI expert, but I've played around with some AIs. I've talked to some AI experts on the show. You train them on data, right? Like a very common form of AI now is a neural network. It's sort of this general learning machine. You give it a whole bunch of data. It sees patterns in the data without you even knowing how it's quite doing it. Right. And then it's able to tell the difference between A and B. Well, if you're training the system on a racist society, right, you're saying let's let's train it on every employee in America
Starting point is 00:34:00 and you know what, how much money they make and how many skills they have, et cetera. Well, of course, it's going to you're going to end up many skills they have, et cetera, well, of course, you're going to end up with an AI that says, oh, yeah, white people are more qualified than black people because in the system I currently have, those are all the lawyers and paralegals and accountants, right? Exactly. So the smarter it gets, the more racist and sexist it becomes. If we're judging intelligence by how closely it mirrors human decision making. So like this intelligence in quote is actually like the most racist and sexist version of human, you know, thinking. And so, in fact, a couple of years ago, Amazon's own hiring algorithm was weeding out women precisely because that their workforce is predominantly male.
Starting point is 00:34:42 So it was like seeing these resumes with like, you know, Laura or Tanisha. And it was like, oh, this company doesn't want this. Like, checked it out. But then once they got rid of like gendered names, it got smarter and it started looking, okay, this applicant was on the Women's Chess Club. Throw that out. This applicant went to Bryn Mawr. Throw that out.
Starting point is 00:35:03 And so, and then it started looking at how applicants talk about their work, like what kind of adjectives people use. And we know through other sort of social psychological research that, you know, that has a gender dimension to it, the kind of language we use to describe our work. So it got even more intelligent. It was like, okay, like throw those people out. And so eventually Amazon had to recall this whole thing. So if Amazon can't get it right, then we all should be pretty wary about like outsourcing these really important decisions to systems that we assume are neutral. Well, I want to ask you about what we could be doing otherwise, but we've got to take a really quick break. We'll be right back with more Ruha Benjamin. Okay.
Starting point is 00:35:58 We're back with Ruha Benjamin. I want to talk about what we could do differently. I you're a science fiction fan, aren't you? I am. I know that you use science fiction in your work. Yeah, I'm teaching a class this fall called Black Mirror, Race, Technology and Justice. Oh, that sounds great. I wish I could take that class.
Starting point is 00:36:18 Yeah, it's going to be fun. It's going to be fun. Well, what is, what is is why the focus on science fiction and what do you what does it change? I'm sorry, let me take that back. Let me let me edit that question. Well, why the focus on science fiction and why is it important? The question of who gets to imagine the future that we're going to have that question of imagination? I'm curious about. Yeah. As we think our way towards a solution. Yeah, I should say first, you know, my earlier work before I got to all this stuff around AI and algorithms was in the life sciences. And so my first book was about stem cell research
Starting point is 00:36:55 and regenerative medicine. So I was hanging out with all of these really fantastic scientists that were doing cutting edge work, growing things like heart cells in a petri dish in a lab. So that say, if your relative needed a heart transplant, rather than having a donor, the idea, the hope is that one day we can reverse engineer your, their own cells and grow them a heart from their own, their own cells so that their body doesn't reject it. Right. Cool. Like, cells so that their body doesn't reject it. Right. Cool. Like, you know, it's like, you know, out of this world. And so like hanging around people that are doing this just like as their day job and then, and for anyone else seems like this is like, this is science fiction. But I realize in talking to them, how many people were initially like the, their interest in science and technology was sparked by seeing like
Starting point is 00:37:47 a Star Trek episode or some wild, like reading some interesting thing in a book. And so, like from a young age, like those seeds are planted and then they eventually, you know, follow their pursuits. And then some, a small slice of those people actually get the opportunity, the scholarships, the education, you know, the mentors, the institutional affiliations to be able to take that early, you know, those ideas that sort of sparked when they were young and actually get to materialize it in an actual lab where now they're growing actual heart cells or developing a scanner like they had on Star Trek to say, okay, let me figure out what's wrong with you. And so for me, it started with realizing how
Starting point is 00:38:31 important imagination is to the things that end up becoming science and technology for the individuals. And similarly for me, but also like there was a real lopsided investment in imagination. Like I would be one of the few social scientists in these spaces and I would be like, okay, that's great. That's nice. We'll be able to grow people's organs. Now what about the fact that so many millions of people can't even get like the basic healthcare? Like now we're in the middle of a pandemic, like can't even get a test for this deadly, you know, this deadly virus. Like,
Starting point is 00:39:05 how do we match up this great imaginative and actual economic investment in these cutting edge things with a lack of investment in some basic like social provisions and social safety nets and healthcare? And oftentimes my questions around that, those would be cast as like far-fetched, like, oh, we can't ever insure everyone. Oh, there will always be people who will have to die for the common good. You know, like my basic questions about like public health or the common good, like that was seen as the thing that was out of this world, like we'll never be able to do. So for me, it's like, how can we have so much optimism around sort of biological regeneration or AI and so little
Starting point is 00:39:52 imagination and so little investment in our social ills and our social wellbeing, right? And so that's what animates me is like, okay, right now our collective imagination is being monopolized by the people who are able to do all this fancy shit. Whereas people who just want like to be able to take their kid to the ER when they crack their chin open, play riding their bike, they're the ones who have to sit there and like hope that thing closes up because they can't afford to get there. is up because they can't afford to get there. Well, that's what, oh my gosh, you're clarifying so much about the techno utopian worldview to me, because what it is, is these are folks who say that our social problems are inherently unaddressable, right? They say, ah, racism, inequality, like all these things, you can't fix those are too thorny. Like, oh, like, oh,
Starting point is 00:40:46 who's going to what we need to do are technological fixes that are going to skip right over them. Right. That'll that'll that's the easy thing to do. That's the thing I can address. It's Elon Musk sitting in traffic. We beat up on him all the time on the show. But but I really do think this is not being gratuitous. I really think this is a great example. He's sitting in traffic going, I hate being in traffic. I know I'll build a tunnel so that rich people like me can skip the traffic and go right underneath right now. First of all, I don't know why the motherfucker doesn't just buy a helicopter. Like he's rich enough to get a helicopter. Why is he in traffic to begin with?
Starting point is 00:41:20 This problem has already been solved for rich people, dude. Jeff Bezos is just in a helicopter. He's not trying to build tunnels all over the place. But why can't Elon Musk, who is like this, you know, he's made his reputation as being this visionary, imaginative guy, this guy who's like, you know, thinking about the future of humanity. Let's go to Mars. Let's, you know, have AI, da, da, da, da, da.
Starting point is 00:41:41 Why can't he imagine a world with no traffic? He sees traffic as being an unsolvable social. How could you get rid of it? I don't know. There's too many people around. Whatever. Why can't he imagine? Why can't he ask the question? Well, what if we how could we get to a world with no traffic, which would be a world with, I think and hope, public transportation, more walkable cities and like, you know, less people who have to drive an hour to get to work because they could only afford to live out in the suburbs. So more affordable cities, all these sorts of questions. But that whole sphere of questions, which also involve questions of race and inequality, are put in the bucket of unsolvable by our technological.
Starting point is 00:42:21 Because they don't have it. They can't imagine solutions. No, and they're not they're not they don't have, they can't imagine solutions. No. And they're not, they're not, they don't have any reason to. And so I love that you use that example because the epigraph for my first book, People Science, was from someone I interviewed and she said, before we figure out a way to get to the moon, can we just make sure everyone on my block can get to work? That was her position. So it's so perfect. But again, you know, it is really like this lopsided investment. And part of the other speakers was the guy behind the Singularity University. And so he was like beaming in through zoom and he gave a talk and he was ending his talk and he, and he was right before me. And he, so he said, he's basically telling this auditorium full of students,
Starting point is 00:43:20 like painting a picture very much like an Elon Musk type picture of the future. This is what's happening. You either get on board or you get out of the way. You're irrelevant. You either are signing onto this and figure out a way how to navigate. And so it was like the inevitability of the future that he is invested in. And so the first thing that I said when I got up there was like, he's wrong. He's wrong.
Starting point is 00:43:43 No, you do have a choice. That is not inevitable because that's part of the sort of anti-democratic underpinnings of that whole thing is that they really don't want to hear from anyone else. Like this is the vision of what the collective good is. And anyone else who raises questions about it, critiques it, is painted as anti-technology or anti-science. And so the last thing I'll say about that is that even the sort of phrase that we use and we call people Luddites, it misrecognizes who the Luddites actually were. They were not against technology. They were against the social costs of technology. They were against the fact that the inclusion of this technology into industry was going to push down the wages, was going to have all of, maybe I am, because it's not being against something. It's saying we need to talk about the social costs of these technologies and do better. We've been shitting on technology this whole conversation.
Starting point is 00:45:08 Are there technologies that fundamentally do shift the balance a little bit? I think about how our new communications technology have allowed people who were, we took down all the media gatekeepers, right? And now people who were formerly marginalized are able to be loud, spread their message, connect. Now, also, I think one of the side effects is a lot of people who were kept out of media who we don't like, you know, like the fucking like like overt like white nationalists didn't use to have a platform. And now they do as well. Right. So there's there's a give and a take. But but is there are there any technological advancements in your view that that did make a positive change in this sphere? There's quite a few.
Starting point is 00:45:47 And the way that I would characterize them is it's not even simply that the technology is the thing that's so radical or subversive or liberatory. But it's like before you even get to designing a particular technology, you have to identify who is this for? What is it for? How is it actually going to intervene in business as usual? And the subversiveness or the power of a given technology to do that starts well before you start coding, well before you start designing or programming. And so, it has to do with the question that we pose that technology is supposed to answer. Because a vast majority of, especially when we're talking about these risk assessments and automated decision systems,
Starting point is 00:46:30 they cast their view on the most vulnerable populations. And they try to predict, for example, the riskiness of a youth to get in trouble at school or someone to be on parole or someone to not follow their meds or something. So it's looking at individuals who are already vulnerable. The technologies that I find to be so important actually flip the script and point the direction of the technology or the data collection and prediction to those who wield power, to those who are monopolizing and producing risk for the vast majority. So, for example, there's a great project called the Anti-Eviction Mapping Project. So, going back, I think you've had Matt Desmond on your show. And so, rather than try to have some kind of risk assessment
Starting point is 00:47:17 that tells a landlord, okay, is this renter going to default on their loan? It puts the tool in the hands of renters and people who are experiencing housing insecurity to actually look and to judge real estate owners and landlords to tell them how these people are treating their tenants, and then to be able to mobilize and rally people together in terms of housing justice movement. So again, the technology doesn't save people in that case. It gives them the tools and the data to be able to look at where, how the trouble is being produced and then to move in that direction. A second example along those lines, that's more of a parody example, but I like it
Starting point is 00:47:55 because it really shows the absurdity of so many of our tools that are used by policing and our carceral system is called the white collar early warning system. So it's like this system where it flags all of the places and cities where white collar crimes are happening. And it has an app that shows you the image of a prospective criminal. And when they designed the algorithm that produces that facial recognition system, they use the profile photos of 7,000 corporate executives on LinkedIn. And so naturally, the face of a criminal is white and male. And so here you have this data mapping, you have this facial recognition system, and it's throwing in our face the fact that this exact set of crimes and populations
Starting point is 00:48:43 always go under the radar. They always go through the tunnel to go back to your. Yeah. And they are the subject of this kind of, you know, surveillance. Yeah. Oh, my. I want that version of the citizen app, you know, rather than the citizen app that says, oh, there was an altercation with a knife at, you know, a couple blocks away. And you're like, ah, well, I'm not there. So like, oh, but I'm scared now for no reason. I want the citizen
Starting point is 00:49:09 app that's like, yeah, someone's embezzling downtown. Yes, beware that banker. There's a landlord illegally converting an apartment into an Airbnb. Exactly. Like, let's create the techno dystopia for those in power. Like, let's think about technology if we're going to use it in that way. And in both of those cases and many others, it really starts well before you start designing to think about how we, what we think of as the problem that technology is supposed to solve. And too often the problem is the racialized community or, you know, the same old kind of problem spaces. And so technology needs to subvert that. And we have some examples of that
Starting point is 00:49:52 coming up down the pipe. Well, what this makes me think, and going back to your point about the Luddites, is that, you know, your argument isn't anti-technology. It's anti-techno-utopianism. It's anti these sort of views that some have about technology, that technology is neutral and it's going to solve our problems. But I think you'd argue technology is a tool. What we need to solve are our social issues. And we can use technology as a tool to do that if we're mindful of it. Absolutely. As long as we keep technology in its place, as long as we don't think that technology
Starting point is 00:50:31 is going to save us as like one half of that narrative. And so really like putting technology in its place, not as the kind of magical fix, but as a tool, but also recognizing that it doesn't mean that any given tool is neutral, because if the point of that tool is to calculate the risk of, you know, someone who's been locked up, you know, in an unjust system, then it doesn't matter who's holding it. If it was designed to calculate the risk of those individuals, it's oppressive. And so it has to also do with rethinking that design process so we can produce tools that can be used in ways that empower communities rather than oppress them. What is the Ida B. Wells Just Data Lab? Would you tell me about that?
Starting point is 00:51:17 Sure. So it's here at Princeton in the African-American Studies Department. And it's an umbrella initiative that connects students, researchers, artists, and community activists in order to design just tools. And so, over the course of the summer, for example, I had 10 teams of students working from everything. There was a housing team and a work team and a policing team. And each of the 10 teams collaborated with a community organization to build some kind of data justice tool that could be used in the context of advocating for some anti-racist initiative in the context of COVID. And so it's a space to create those connections where academics aren't seen as having all the
Starting point is 00:51:59 answers. Like we need to also humble ourselves and learn from people who are working in communities about what's actually needed. I think this goes for technologists, too. I think too often the Kool-Aid of Silicon Valley is assuming that they can come up with the interventions without talking to the people who all of these things are supposed to help. Right. And so and so part of it is really creating an environment that that can happen. And so for those who are interested, we've posted all of the tools from the summer at thejustdatalab.com backslash tools. And you can take a look at what's been developed over the last few months. It's, oh, sorry. I need to edit that moment out.
Starting point is 00:52:44 I apologize. I'll take a sip of water. I'll wait for you to finish and I'll get back into it. That sounds so cool. I want to ask, what do you advise for folks listening, right? When they're, you know, engaging with technology, right? What questions can they be asking about it to help them improve their relationship with it and sort of see these systems a little better? Yeah, I think what I've found in the last few years is many more people who aren't necessarily working in the tech sector have become rightly skeptical
Starting point is 00:53:27 about the promises that were so commonly sort of marketed to us for the last, you know, 20 years or so. And so I find like the average person I talk to is already thinking critically and engaging these things and basically not taking things at face value, like always with like, okay, what's really happening? And so I think when it comes to the data that's collected behind the screens, behind the scenes, in terms of all of the things that we use for free, as the saying goes, if it's free, then you are the product. Your data is the product. And then so I think, you know, I remember a few weeks ago, Zoom made this announcement where they said that people who use their service for free, Zoom had the prerogative to sell all of our, their data would be protected. And so for a week there, my lab decided we're not using Zoom.
Starting point is 00:54:29 We're going to find another platform. But the outrage, the public outrage was so like loud and vigorous that within a week, Zoom reversed course and said, okay, okay, okay, we won't sell. And so that's an example of us collectively voicing what we want out of these things and not sort of assuming that we just have to submit, submit, submit when we press those forms. Like if we think that something is, you know, is not right when it comes to, you know, what my colleague Shoshana Zuboff calls surveillance capitalism, for example, we need to speak up. This is true, say, for parents right now. The more that classes have gone online, remote learning, learning management platforms, like, find out what the school that your kid goes to, what they're doing, what their policy is, their data policy is about all of that time that your kids are on, you know, whatever the learning management platform is. And I will say that young people in particular are becoming more savvy around this. I know about a year or two ago,
Starting point is 00:55:31 there were students in Brooklyn who staged a walkout from their school, not around the data issue, but because they were spending all their time on this learning management platform and saw their teachers like 20 minutes a week. And so they were like, this is not education. And so we're not doing this. We're boycotting. And so that's another example. But in both of those examples, you see it's working together. It's not simply thinking about ourselves as individual consumers, like, okay, I'm not going to buy this product, I'm going to go over here. That's fine. People should do that. But the more powerful change happens when we team up, when we organize like those students in Brooklyn or like the public outrage around Zoom.
Starting point is 00:56:21 And so I think more and more, we really need to stop thinking of ourselves as users, because as I say in Race After Technology, users get used. And so we really need to think about our relationship to technology as, you know, really as stewards, as citizens, thinking about accountability: what values do we want to be embedded in these structures? Because if we say and do nothing, it's really going to be the same old kind of corporate surveillance values that, you know, we see as the kind of dominant ethos of surveillance valley. And so we have to voice our outrage when it's warranted. And we need to be able to articulate proactively, like what kind of ecosystem do we want technology to be designed in? What do we want the social inputs to be? And I think we do that best collectively. So finding, like, your
Starting point is 00:57:12 local just data organization and teaming up with them too. I really like that because, you know, we're so used to seeing tech companies as being kind of outside of society, you know, and for the first 20 years, hey, they kind of were. They were all insurgents and, you know, these weird small companies that were making, you know, really groundbreaking technology. And a lot of them had, you know, their don't-be-evil type slogans, or they seemed chill, and they seemed like, you know, they were a breath of fresh air coming through. And now these are the most massive companies in the country, right, that have the most entrenched advantage. And we need to start looking at them differently. I think you're right: we're not just users who are clicking a button, we're members of a society.
Starting point is 00:57:56 Yes. And those companies are also part of our society. Yes. And what sort of relationship do we want to have with them, and how much power, and what kinds of power, are we all right with them having? Exactly. And it feels like that conversation is starting to happen. I mean, just seeing the antitrust hearings that were happening on Capitol Hill a couple of weeks ago, I mean, that would have been unimaginable five years ago. And it was still not quite enough, but maybe we're starting to see progress. Yeah, absolutely.
Starting point is 00:58:29 I mean, for me, it's been a dramatic shift just in the last few years, in terms of us just kind of being like, oh, yay, the iPhone 70 million is out, let's stand in line overnight, to people being much more, you know, savvy and skeptical about all of the shiny things. And so I think that, I love the way that you described it, in terms of recognizing that that image, the image of kind of like the little outliers, the innovators in their garage, like now the Silicon Six, these big companies,
Starting point is 00:59:04 not only are they like the biggest entities kind of monopolizing power and resources in this country, but many of them have net worths that are larger than many countries in the world. Right. And so in terms of the power that they wield and monopolize, we really have to culturally put them in a different sort of category of actor and understand the influence that they're having on public life, but behind private doors. Like their decisions have such a huge impact. We've got to completely shift up the regulatory infrastructure, the accountability, and maybe even ask, like, do we want them to be that big? Do we want them to continue monopolizing, even as they fail to pay billions of dollars in taxes
Starting point is 00:59:52 every year? So they say they're doing all this stuff in the name of the greater good, but they don't actually put their money where their slogans are in terms of paying back into the public good. And so that's like a basic 101 thing we need to be demanding in terms of their commitment. And one of the things that really struck me from Tim Wu's book, The Curse of Bigness, about monopoly and the history of antitrust is that, like, the original idea,
Starting point is 01:00:18 when we talk about monopoly and those issues, we talk about them in terms of money a lot. Oh, they make prices higher, and they have too much money, and, you know, inequality and things like that. But the real, the original argument against them was about power. It was that a single company having so much power, we talk about Standard Oil or whatever, right? That's so much power. They have more power than the government and than the democracy, which means that it's inherently anti-democratic. And we would say in America, we don't want a single person who's the CEO of this company to have that much power. And that is happening again with these companies. Look at Jeff
Starting point is 01:00:56 Bezos. Right. And how much power he wields over so many different sectors. And that's the question we need to ask. It's not just economic. It's also about power. Exactly. And the last thing I would just add to that is that, you know, these companies and these individuals, they recognize that the tide is turning. They recognize the shift in public discourse. I mean, even if we just go to the Cambridge Analytica scandal and, you know, the 2016 Brexit and U.S. election. So, part of their reaction, and this is something we have to be very wary about and keep vigilant about, is that now they're trying to do what I think of as domesticating the critique. They're trying to create in-house these methods of accountability and ethics and all of that, hiring people in my field.
Starting point is 01:01:47 You know, Facebook created this board to oversee what it does. And some of my colleagues rightly called it, like, Facebook's Supreme Court, because they're trying to create in-house what really needs to be independent and third party. And so they know that we won't accept the status quo anymore, but we have to be careful about what their solutions are to it, which will just be them kind of creating their own mechanisms that, you know, at least give a face to those slogans and all of that. We need something independent that's in the power of the people to be able to govern, not in-house in terms of these companies' attempts to do that. So democratically, we should all have a voice in how our data is used and who's wielding power on these issues. I think that's absolutely right. Well, I can't thank you enough for coming on the show. This has been such an awesome, fun conversation. My pleasure. It's great to talk to you, Adam. Once I get another book done, I hope you invite me back. Oh, my gosh, I'd love to. I've learned so much from talking to you.
Starting point is 01:03:06 And I'm sure in another hour I would learn just as much. Thank you so much. My pleasure. Take care. Well, thank you again to Ruha Benjamin for coming on the show. I really hope you enjoyed this conversation as much as I did. If you did, please leave us a rating or a review wherever you subscribe. I know, I know, every podcast host says that. It really does help us out. Open up Apple Podcasts, open up Spotify, open up that Google Podcast, give us a five-star rating and a review if you like the show. If you want to send me a comment about what you might like to see on the show in the future, why not shoot an email to factually at adamconover.net and I will be happy to take a look. That is it for us this week on Factually. I want to thank our producers, Dana Wickens and Sam Roudman, our engineers, Ryan Connor and Brett Morris, Andrew WK for our
Starting point is 01:03:53 theme song. You can find me at adamconover.net or at Adam Conover, wherever you get your social media. Thank you so much for listening. We'll see you next week on Factually.
