How to Talk to People - How to Know What's Real: How to Keep Watch

Episode Date: June 3, 2024

With smartphones in our pockets and doorbell cameras cheaply available, our relationship with video as a form of proof is evolving. We often say “pics or it didn’t happen!”—but meanwhile, there’s been a rise in problematic imaging including deepfakes and surveillance systems, which often reinforce embedded gender and racial biases. So what is really being revealed with increased documentation of our lives? And what’s lost when privacy is diminished?  In this episode of How to Know What’s Real, staff writer Megan Garber speaks with Deborah Raji, a Mozilla fellow, whose work is focused on algorithmic auditing and evaluation. In the past, Raji worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products. Write to us at howtopodcast@theatlantic.com.  Music by Forever Sunset (“Spring Dance”), baegel (“Cyber Wham”), Etienne Roussel (“Twilight”), Dip Diet (“Sidelined”), Ben Elson (“Darkwave”), and Rob Smierciak (“Whistle Jazz”). Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:04 You know, I grew up as a Catholic, and I remember the guardian angel was a concept that I really loved when I was a kid. But then when I got to be, I don't know, maybe around seven or eight, it was like, your guardian angel is always watching you. At first it was like comfort, and then it turned into kind of like a, are they watching me if I pick my nose? Do they watch me? And are they watching out for me, or are they just watching me? Exactly. Like, are they my guardian angel or my surveillance angel? Surveillance angel. I'm Andrea Valdez.
Starting point is 00:00:40 I'm an editor at The Atlantic. And I'm Megan Garber, a writer at The Atlantic. And this is How to Know What's Real. I just got the most embarrassing little alert from my watch, and it's telling me that it is, quote, "Time to Stand."
Starting point is 00:00:59 Why does it never tell us that it's time to lie down? Right, or time to just like go to the beach or something And it's weird, though, because I'm realizing I'm having these intensely conflicting emotions about it, because in one way, I appreciate the reminder. I have been sitting too long. I should probably stand up. But I don't also love the feeling of just sort of being casually judged by a piece of technology. No, I understand. I get those alerts, too. I know it very well. And, you know, it tells you stand up, move for a minute, and you can do it. You know, you can almost hear it go in like, bless your heart. Bless your lazy little heart. The funny thing, too, about it is like, like I find myself being annoyed, but then I also fully recognize that I don't really have a right to be annoyed because I've asked the watch to do the judging.
Starting point is 00:01:55 Yes, definitely. I totally understand. I mean, I'm very obsessed with the data my smartwatch produces. My steps, my sleeping habits, my heart rate, you know, just. everything about it. I'm just obsessed with it. And it makes me think, well, I mean, have you ever heard of the quantified self movement? Oh, yeah. Yeah. So quantified self. It's a term that was coined by Wired Magazine editors around 2007. And the idea was it was this movement that aspired to, quote, unquote, self-knowledge through numbers. And I mean, it's worth remembering what was going on in
Starting point is 00:02:30 2007, 2008. You know, I know it doesn't sound that long ago, but wearable tech was really in its infancy. And in a really short amount of time, we've gone from, you know, our Fitbit to, as you said, Megan, this device that not only scolds you for not standing up every hour, but it tracks your calories, the decibels of your environment. You can even take an EKG with it. And, you know, when I have my smartwatch on, I'm constantly on guard to myself. Did I walk enough? Did I stand enough? Did I sleep enough? And I suppose it's a little bit of accountability, and that's nice. But in the extreme, it can feel like I've sort of opted into self-surveillance. Yes, and I love that idea, in part because we typically think about surveillance
Starting point is 00:03:14 from the opposite end, right? It's something that's done to us rather than something that we do to ourselves and for ourselves. Watches are just one example here, right? There's also smartphones and there's this broader technological environment, and all of that, that whole ecosystem, it all kind of asked this question of who's really being watched and then also who's really doing the watching. Mm-hmm. Mm-hmm. So I spoke with Deb Raji, who's a computer scientist and a fellow at the Mozilla Foundation. And she's an expert on questions about the human side of surveillance and thinks a lot about how being watched affects our reality.
Starting point is 00:03:58 I'd love to start with the broad state of surveillance in the United States. What does the infrastructure of surveillance look like right now? Yeah, I think a lot of people see surveillance as a very sort of out there in the world physical infrastructure thing where they see themselves walking down the street and they like notice a camera. Yeah, I'm being surveilled, which does happen if you live in New York, especially post-9-11, like you are definitely physically surveilled. There's a lot of physical surveillance infrastructure, a lot of cameras out there. But there's also a lot of other tools for surveillance that I think people are less aware of. Like ring cameras and those types of devices? I think when people install their ring product, they're thinking about themselves.
Starting point is 00:04:42 They're like, oh, I have security concerns. I want to just have something to be able to just like check who's on my porch or not. And they don't see it as surveillance apparatus, but it ends up becoming part of a broader network of surveillance. And then I think the one that people very rarely think of, and again is another thing that I would not have thought of. if I wasn't engaged in some of this work, is online surveillance. Faces are sort of the only biometric. It's not like a fingerprint. Like we don't upload our fingerprint to our social media. Like we're very sensitive about like, oh, this seems like important biometric data that we should keep guarded. But for faces, it can be passively collected and passively distributed without you
Starting point is 00:05:24 having any awareness of it. But also, we're very casual about our faces. So we upload it very freely onto the internet. And so, you know, immigration officers, ICE, for example, has a lot of online surveillance tools where they'll monitor people's Facebook pages and they'll use sort of facial recognition and other products to identify and connect online identities, you know, across various social media platforms, for example. So you have people doing this incredibly common thing, right? Just sharing pieces of their lives on social media. And then you have immigration officials treating that as actionable data. Can you Tell me more about facial recognition in particular.
Starting point is 00:06:01 So one of the first models I actually built was a facial recognition project. And so I'm a black woman and I noticed right away that there were not a lot of faces that looked like mine. And I remember trying to have a conversation with folks at the company at the time. And it was a very strange time to be trying to have this conversation. This was like 2017. There was a little bit of that happening in the sort of like natural language processing space. Like people were noticing, you know, stereotype language coming out of some of these models, but no one was really talking about it in the image space as much, that, oh, some of these models don't work as well for darker skin individuals or other demographics.
Starting point is 00:06:40 We audited a bunch of these products that were these facial analysis products. And we realized that these systems weren't working very well for those minority populations, but also definitely not working for the intersection of those groups. So like darker skin, female faces. Wow. Some of the ways in which these systems were being pitched at the time for sort of selling, these products and pitching it to immigration officers to use to identify suspects. Wow. And, you know, imagine something that's not 70% accurate and it's being used to decide, you know,
Starting point is 00:07:08 if this person aligns with a suspect for deportation. Like, that's so serious. You know, since we've published that work, we had just this, you know, it was this huge moment in terms of it really shifted the thinking in policy circles, advocacy circles, even commercial spaces around how well those systems worked, because all the information we had about how well these systems worked so far was on data sets that were disproportionately composed of lighter skin men. Right. And so people had this belief that, oh, these systems work so well, like 99% accuracy. They're incredible.
Starting point is 00:07:40 And then our work kind of showed, like, well, 99% accuracy on lighter skin men. And could you talk a bit about where tech companies are getting the data from to train their models? so much of the data required to build these AI systems are collected through surveillance. And this is not hyperbole, right? Like the facial recognition systems, they're built on top of, you know, millions and millions of faces in these databases of millions and millions of faces that are collected, you know, through the internet or collected through identification databases or through, you know, physical or digital surveillance apparatus.
Starting point is 00:08:16 Because of the way that the models are trained and developed, it requires a lot of data to get to a meaningful model. And so a lot of these systems are just very data-hungry, and it's a really valuable asset. And how are they able to use that asset? What are the specific privacy implications about collecting all that data? Privacy is one of those things that we just don't, we haven't been able to get to federal-level privacy regulation in the states. There's been a couple states that have taken initiative.
Starting point is 00:08:48 So California has the California Privacy Act. Illinois has BIPA, which is sort of a biometric information privacy act. So that's specifically about, you know, biometric data like faces. In fact, they had a really, I think BIPA's biggest enforcement was against Facebook and Facebook's collection of faces, which does count as biometric data. So in Illinois, they had to pay a bunch of Facebook users a certain settlement amount. Yeah. So, you know, there are privacy laws, but it's very state-based. And it takes a lot of initiative for the different states to influence.
Starting point is 00:09:21 force some of these things versus having some kind of comprehensive national approach to privacy. That's why enforcement or setting these rules is so difficult. I think something that's been interesting is that some of the agencies have sort of stepped up to play a role in terms of thinking through privacy. So the Federal Trade Commission, FTC, has done these privacy audits historically on some of the big tech companies. They've done this for quite a few AI products as well, sort of investigating the privacy violations of some of them as well. So I think that that's something that, you know, some of the agencies are excited about and interested in, and that might be a place where we see movement.
Starting point is 00:09:58 But ideally we have some kind of law. And we've been in this moment, this, I guess a very long moment, where companies have been taking the ask for forgiveness instead of permission approach to all this. So airing on the side of just collecting as much data about their users as they possibly can, while they can. And I wonder what the effects of that will be in terms of our broader informational environment. The way surveillance and privacy works is that it's not just about the information that's collected about you. It's like your entire network is now caught in this web and it's just building pictures of entire ecosystems of information. And so I think people don't always get
Starting point is 00:10:38 that. But yeah, it's a huge part of what defines surveillance. On Gainty, pain can hit hard and fast, like the headache you get when your favorite team and your fantasy team both lose. When pain comes to play, call an audible with Advil plus acetaminophen and get long-lasting dual-action pain relief for up to eight hours. Tackle your tough pain two ways with Advil plus acetaminopim. Advil, the official pain relief partner of the NFL. Ask your pharmacist at this product's rate for you. Always read and follow the label.
Starting point is 00:11:24 Do you remember a surveillance cameraman, Megan? Ooh, no. But now I'm regretting that I don't. Well, I mean, I'm not sure how well it was known, but it was maybe 10 or so years ago. There was this guy who he had a camera and he would take the camera and he would go and he'd stop and put the camera in people's faces. Oh, wow. And they would get really upset. Yeah.
Starting point is 00:11:47 And they would ask him, why are you filming me? And, you know, they would get more and more irritated, you know, and it would escalate. And I think the meta point that surveillance cameraman was trying to make was, you know, we're surveilled all the time. So why is it any different if someone comes and puts a camera on your face when there's cameras all around you filming you all the time? Right. That's such a great question. And yeah, the sort of difference there between the active, active being filmed and then the sort of passive state of surveillance is so interesting there. Yeah. And, you know, that's interesting that you say active versus passive.
Starting point is 00:12:23 You know, it reminds me of the notion of the panopticon, which I think is a word that people hear a lot these days. But it's worth remembering that the panopticon is an old idea. So it started around the late 1700s with the philosopher named Jeremy Bentham. And Bentham, he outlined this architectural idea. And it was originally conceptualized for prisons. You know, the idea was that you have this circular building. and the prisoners live in cells along the perimeter of the building. And then there's this inner circle, and the guards are in that inner circle,
Starting point is 00:13:01 and they can see the prisoners, but the prisoners can't see the guards. Oh, my goodness. And so the effect that Bantam was hoping this would achieve is that the prisoners would never know if they're being watched, so they'd always behave as if they were being watched. And that makes me think of the more modern idea of the watching eyes effect, this notion that simply the presence of eyes might affect people's behavior. And specifically, images of eyes, simply that awareness of being watched does seem to affect people's behavior. Oh, interesting. You know, beneficial behavior, like collectively good behavior, you know,
Starting point is 00:13:39 sort of keeping people in line in that very bentham-like way. We have all of these, you know, eyes watching us now in, I mean, even in our neighborhoods and, you know, at our apartment, buildings in the form of, say, ring cameras or other, you know, cameras that are attached to, you know, our front doors. Just how we've really opted into being surveilled in all of the most mundane places. I think the question I have is, where is all of that information going? And in some sense, that's the question, right? And Devraji has what I found to be a really useful answer to that question of where our information is actually going, because it involves thinking of surveillance not just as an act, but also as a product.
Starting point is 00:14:24 For a long time when you, I don't know if you remember those, you know, complete the picture apps or like spice up my picture. They would use generative models. You would kind of give them a prompt, which would be like your face. And then it would modify the image to make it more professional or make it better lit. Like sometimes you'll get content that was just, you know, sexualizing and inappropriate. And so that happens in. in like a non-malicious case.
Starting point is 00:14:51 Like, people will try to just generate images for benign reasons. And if they choose the wrong demographic or they frame things in the wrong way, for example, they'll just get images that are denigrating in a way that feels inappropriate. And so I feel like there's that way in which AI for images has sort of led to just like a proliferation of problematic content. So not only are those images being generated because the systems are flawed themselves, but then you also have people using those. flawed systems to generate malicious content on purpose, right?
Starting point is 00:15:23 One that we've seen a lot is sort of this deep fake porn of young people, which has been so disappointing to me, just, you know, young boys deciding to do that to young girls in their class. Like, it really is a horrifying form of sexual abuse. And I think, like, when it happened to Taylor Swift, I don't know if you remember someone used the Microsoft model and, you know, generated some non-consensual sexual images. which is of Taylor Swift. I think it turned that into like a national conversation. But months before that, there had been a lot of reporting of this happening in high schools.
Starting point is 00:15:59 Anonymous young girls dealing with that, which is just another layer of like trauma because you're like who, you're not Taylor Swift. Right. So people don't pay attention in the same way. So I think that that problem has actually been a huge issue for a very long time. Andrea, I'm thinking of that old line about how if you're not paying for something in the tech world, there's a good chance you are. are probably the product being sold. Right. But I'm realizing how outmoded that idea probably is at this point, because even when we pay for these things,
Starting point is 00:16:32 we're still the products. And specifically, our data are the products being sold. So even with things like deep fakes, which are typically defined as using some kind of machine learning or AI to create a piece of manipulated media, even they rely on surveillance in some sense. And so you have this irony where these recordings of reality are now also being used to distort reality. Yeah. You know, it makes me think of Don Fallis, this philosopher, who talked about the epistemic threat of deep fakes and that it's part of this pending infopocalypse.
Starting point is 00:17:09 Oh, my goodness. Which sounds quite grim, I know. But I think the point that Fallis was trying to make is that with the proliferation of deep fakes, we're beginning to maybe distrust what it is that we're seeing. about this in the last episode, you know, seeing as believing might not be enough. And I think we're really worried about deep fakes, but I'm also concerned about this concept of cheap fakes or shallow fakes. So, cheap fakes or shallow fakes, it's, you know, you can tweak or change images or videos or audio just a little bit. And it doesn't actually require AI or advanced technology to create. So one of the more infamous instances of this was in 2019. Maybe you remember there was a video of Nancy Pelosi that came out where it sounded like she was slurring her words.
Starting point is 00:17:57 Oh, yeah. Right. Yeah. But really, the video had just been slowed down using easy audio tools and just slowed down enough to create that perception that she was slurring her words. So it's a quote unquote, cheap way to create a small bit of chaos. And then you combine that small bit of chaos with the very big chaos of deep fakes. Yeah. So, one, the cheap fake is it's her.
Starting point is 00:18:22 real voice. It's just slowed down, again, using like simple tools. But we're also seeing instances of AI-generated technology that completely mimics other people's voices. And it's becoming really easy to use now. There was this case recently that came out of Maryland where there was an athletic director at a high school. And he was arrested after he allegedly used an AI voice simulation of the principal at his school. And he allegedly simulated the principal's voice saying some really horrible things. And it caused all this blowback on the principal before investigators, you know, they looked into it. They determined the audio is fake. But again, it was just a regular person that was able to use this really advanced
Starting point is 00:19:09 seeming technology that was cheap, easy to use, and therefore easy to abuse. Oh, yes. And I think it also goes to show how few sort of cultural safeguards we have in place right now, right? Like the technology will let people do certain things, and we don't always, I think, have a really well-agreed-upon sense of what constitutes abusing the technology. And usually when a new technology comes along, people will sort of figure out what's acceptable and, you know, what will bear some kind of social cost. And will there be a taboo associated with it? But with all of these new technologies, we just don't have that. And so people, I think, are pushing the bounds to see what they can get away with. Yeah.
Starting point is 00:19:54 And we're starting to have that conversation right now about what those limits should look like. I mean, lots of people are working on ways to figure out how to watermark or authenticate things like audio and video and images. Yeah. Yeah. And I think that that idea of watermarking, too, can maybe also have a cultural implication, you know. Like if everyone knows that deepfakes can be tracked and easily. That is itself a pretty good disincentive from creating them in the first place, at least with an intent to fool or do something malicious.
Starting point is 00:20:27 Yeah. But in the meantime, there's just going to be a lot of these deep fakes and cheap fakes and shallow fakes that we're just going to have to be on the lookout for. Is there new advice that you have for trying to figure out whether something is fake? If it doesn't feel quite right, it probably isn't. A lot of these images don't have a good sense of spatial awareness. because it's just pixels in, pixels out. And so there's some of these, like, concepts that we as humans find really easy,
Starting point is 00:21:00 but these models struggle with. I advise people to be aware of it's like sort of trust your intuition. If you're noticing weird artifacts in the image, it probably isn't real. I think another thing as well as, like, who posts? Oh, that's a great one. Yeah. I mute very liberally, like on Twitter, any platform. I definitely mute a lot of.
Starting point is 00:21:21 of accounts that I noticed be either caught posting something, either like a community note or something will reveal that they've been posting fake images or you just see it and you recognize the design of it. And so I just mute that kind of content. Don't engage with those kind of content creators at all. And so I think that that's also like another successful thing. On the platform level, deep platforming is really effective if someone has sort of three strikes in terms of producing a certain type of content. That's what happened with the Taylor Swift situation where people were disseminating this, you know, Taylor Swift images and generating more images. And they were, they just went after every single account that did that, you know, completely locked down her
Starting point is 00:21:57 hashtag, like that kind of thing where they just really went after everything. And I think that that's something that like we should just do in our personal engagement as well. Andrea, that idea of personal engagement, I think, is such a tricky part of all of this. I'm even thinking back to what we were saying before about Ring and the interplay we were getting at between the individual and the collective. In some ways, it's the same tension that we've been thinking about with climate change and other really broad, really complicated problems.
Starting point is 00:22:33 Yeah, yeah. This, you know, connection between personal responsibility, but also the outsized role that corporate and government actors will have to play when it comes to finding solutions. And with so many of these surveillance technologies, were the consumers with all the agency that that would seem to entail, but at the same time, we're also part of this broader ecosystem
Starting point is 00:22:57 where we really don't have as much control as I think we'd often like to believe. So our agency has this giant asterisk, and consumption itself in this networked environment is really no longer just an individual choice. It's something that we do to each other, whether we mean to or not. Yeah, you know, that's true, but I do still believe in conscious consumption so much we can do it. Like, even if I'm just one person, it's important to me to signal with my choices what I value. And in certain cases, I value opting out of being surveilled so much as I can control for it. You know, maybe I can't opt out of facial recognition and facial surveillance,
Starting point is 00:23:37 you know, because that would require a lot of obfuscating my face. And I mean, there's not even any reason to believe that it would work. But there are some smaller things that I personally find important. Like, I'm very careful about which apps I allow to have location sharing on me. You know, I go into my privacy settings quite often. You know, I make sure that location sharing is something that I'm opting into on the app while I'm using it. I never let apps just follow me around all the time. You know, I think about what chat apps I'm using, if they have encryption. You know, I do hygiene on my phone around what apps are, you know, actually on my phone because they do collect a lot of data on you in the background. So if it's an app that I'm not using or I don't feel familiar with,
Starting point is 00:24:16 I delete it. Oh, that's really smart. And it's such a helpful reminder, I think, of the power that we do have here. And a reminder of what the surveillance state actually looks like right now. It's not some cinematic dystopia. It's, sure, the camera's on the street, but it's also the watch on our wrist. It's the phones in our pockets. It's the laptops we use for work.
Starting point is 00:24:41 And even more than that, it's a series of decisions that governments and organizations are making every day on our behalf. And we can affect those decisions if we choose to, in part just by paying attention. Yeah, it's that old adage who watches the watcher, and the answer is us. That's all for this episode of How to Know What's Real. This episode was hosted by Andrea Valdez
Starting point is 00:25:12 and me, Megan Garber. Our producer is Natalie Brennan. Our editors are Claudina Bade and Jocelyn Frank. Fact-checked by Anna Alvarado. Our engineer is Rob's Mara. Rob also composed some of the music for this show. The executive producer of audio is Claudina Bade, and the managing editor of audio is Andrea Valdez.
Starting point is 00:25:32 Next time on how to know what's real. And when you play the game multiple times, you shift through the roles, and so you can experience the game from different angles. You can experience a conflict from completely different political angles and re-experience how it looks. from each side, which I think is something like, this is what games are made for. What we can learn about expansive thinking through play.
Starting point is 00:26:00 We'll be back with you on Monday.
