How to Talk to People - How to Keep Watch
Episode Date: June 3, 2024
With smartphones in our pockets and doorbell cameras cheaply available, our relationship with video as a form of proof is evolving. We often say “pics or it didn’t happen!”—but meanwhile, there’s been a rise in problematic imaging, including deepfakes and surveillance systems, which often reinforce embedded gender and racial biases. So what is really being revealed with increased documentation of our lives? And what’s lost when privacy is diminished? In this episode of How to Know What’s Real, staff writer Megan Garber speaks with Deborah Raji, a Mozilla fellow whose work is focused on algorithmic auditing and evaluation. In the past, Raji worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products. Write to us at howtopodcast@theatlantic.com. Music by Forever Sunset (“Spring Dance”), baegel (“Cyber Wham”), Etienne Roussel (“Twilight”), Dip Diet (“Sidelined”), Ben Elson (“Darkwave”), and Rob Smierciak (“Whistle Jazz”).
Transcript
You know, I grew up as a Catholic, and I remember the guardian angel was a thing. I really
loved that concept when I was a kid.
But then when I got to be, I don't know, maybe around seven or eight, it was like, your guardian angel's always watching you.
At first it was a comfort and then it turned into kind of like a, are they watching me
if I pick my nose?
Do they watch me?
And are they watching out for me or are they just watching me?
Exactly. Like are they my guardian angel or my surveillance angel?
Surveillance angel.
I'm Andrea Valdez. I'm an editor at The Atlantic.
And I'm Megan Garber, a writer at The Atlantic.
And this is How to Know What's Real.
I just got the most embarrassing little alert from my watch.
And it's telling me that it is, quote, time to stand.
Why does it never tell us that it's time to lie down?
Right. Or time to just like go to the beach or something.
And it's weird though because I'm realizing I'm having these intensely conflicting emotions
about it because in one way I appreciate the reminder.
I have been sitting too long.
I should probably stand up.
But I don't also love the feeling of just sort of being casually judged by a piece of
technology.
No, I understand.
I get those alerts too.
I know it very well.
And you know, it tells you stand up, move for a minute, and you can do it.
You know, you can almost hear it going, like, bless your heart.
Bless your lazy little heart.
The funny thing too about it is like, like I find myself being annoyed, but then I also
fully recognize that I don't really have a right to be annoyed because I've asked the
watch to do the judging.
Yes, definitely.
I totally understand.
I mean, I'm very obsessed with the data my smartwatch produces, my steps, my sleeping
habits, my heart rate,
you know, just everything about it. I'm just obsessed with it. And it makes me
think, well, I mean, have you ever heard of the quantified self movement?
Oh, yeah.
Yeah. So, quantified self, it's a term that was coined by Wired Magazine editors
around 2007. And the idea was it was this movement that aspired to quote unquote
self-knowledge through numbers. And I mean, it's worth remembering what was going on in 2007-2008. You know, I know it doesn't sound that long ago, but wearable tech was really in its infancy, and in a really short amount of time we've gone from, you know, our Fitbit to, as you said, Megan, this device that not only scolds you for not standing up every hour, but it tracks your calories, the decibels of your environment.
You can even take an EKG with it.
And you know, when I have my smartwatch on, I'm constantly on guard to myself.
Did I walk enough?
Did I stand enough?
Did I sleep enough?
And I suppose it's a little bit of accountability and that's nice, but in the extreme it can feel like I've sort of opted into self-surveillance.
Yes, and I love that idea in part because we typically think about surveillance from the opposite end, right?
Something that's done to us rather than something that we do to ourselves and for ourselves. Watches are just one example here, right? There's also smartphones and
there's this broader technological environment and all of that, that whole ecosystem, it all kind of
asks this question of who's really being watched and then also who's really doing the watching.
So I spoke with Deb Raji, who's a computer scientist and a fellow at the Mozilla Foundation.
She's an expert on questions about the human side of surveillance,
and thinks a lot about how being watched affects our reality.
I'd love to start with the broad state of surveillance in the United States.
What does the infrastructure of surveillance look like right now?
Yeah, I think a lot of people see surveillance as a very sort of
out there in the world physical infrastructure thing,
where they see themselves walking down the street and they like notice a camera.
They're like, yeah, I'm being surveilled.
Which does happen if you live in New York, especially post-9/11.
You are definitely physically surveilled.
There's a lot of physical surveillance infrastructure, a lot of cameras out there.
But there's also a lot of other tools for surveillance that I think people are less
aware of.
Mm, like Ring cameras and those types of devices?
I think when people install their Ring product, they're thinking about themselves.
They're like, oh, I have security concerns.
I want to just have something to be able to just like check who's on my porch or not.
And they don't see it as surveillance apparatus, but it ends up becoming part of a broader
network of surveillance.
And then I think the one that people very rarely think of, and again, is another thing that I would not have thought of if I wasn't engaged in some of this work, is online surveillance.
Faces are sort of the only biometric, it's not like a fingerprint, like we don't upload
our fingerprint to our social media.
Like we're very sensitive about like, oh, this seems like important biometric data that
we should keep guarded.
But for faces, it can be passively collected and passively distributed without you having
any awareness of it.
But also we're very casual about our faces, so we upload it very freely onto the internet.
And so, you know, immigration officers, ICE, for example, has a lot of online surveillance tools
where they'll monitor people's Facebook pages and they'll use sort of facial recognition
and other products to identify and connect online identities, you know,
across various social media platforms, for example.
So you have people doing this incredibly common thing, right?
Just sharing pieces of their lives on social media.
And then you have immigration officials treating that
as actionable data.
Can you tell me more about facial recognition
in particular?
So one of the first models I actually built
was a facial recognition project.
And so I'm a Black woman and I noticed right away that there were not a lot of
faces that looked like mine.
Yeah.
And I remember trying to have a conversation with folks at the company at the time.
And it was a very strange time to be trying to have this conversation.
This was like 2017.
There was a little bit of that happening
in the sort of like natural language processing space.
Like people were noticing, you know,
stereotype language coming out of some of these models,
but no one was really talking about it
in the image space as much that,
oh, some of these models don't work as well for darker-skinned individuals or other demographics.
We audited a bunch of these products
that were these facial analysis products.
And we realized that these systems weren't working very well for those minority populations,
but also definitely not working for the intersection of those groups, so like darker-skinned female faces.
The way some of these systems were being pitched at the time, they were sort of selling these products and pitching them to immigration officers to use to identify suspects.
Imagine something that's not 70% accurate and it's being used to decide if this person
aligns with a suspect for deportation.
That's so serious.
Right.
Since we published that work, it was this huge moment; it really shifted the thinking in policy circles, advocacy circles, even commercial spaces around how all those systems worked, because all the information we had about how well these systems worked so far was on data sets that were disproportionately composed of lighter-skinned men.
And so people had this belief that, oh, these systems work so well, like 99% accuracy, they're incredible.
And then our work kind of showed, like, well, 99% accuracy on lighter-skinned men.
And could you talk a bit about where tech companies are getting the data from to train their models?
So much of the data required to build these AI systems are collected through surveillance.
And this is not hyperbole, right? Like the facial recognition systems, they're built on top of,
you know, millions and millions of faces in
these databases of millions and millions of faces that are collected, you know, through the internet
or collected through identification databases or through, you know, physical or digital surveillance
apparatus. Because of the way that the models are trained and developed, it requires a lot of data
to get to a meaningful model. And so, a lot of these systems are just very data hungry and it's a really valuable asset.
And how are they able to use that asset?
What are the specific privacy implications about collecting all that data?
Privacy is one of those things that we just don't, we haven't been able to get to federal
level privacy regulation in the states.
There's been a couple states that have taken initiatives. So, California has the California
Privacy Act. Illinois has BIPA, which is sort of a biometric information privacy act. So,
that's specifically about biometric data like faces. In fact, they had a really,
I think BIPA's biggest enforcement was against Facebook and
Facebook's collection of faces, which does count as biometric data.
So in Illinois, they had to pay a bunch of Facebook users a certain settlement amount.
Yeah.
So, you know, there are privacy laws, but it's very state-based.
And it takes a lot of initiative for the different states to enforce some of these things versus
having some kind of comprehensive national approach to privacy.
That's why enforcement or setting these rules is so difficult.
I think something that's been interesting is that some of the agencies have sort of
stepped up to play a role in terms of thinking through privacy.
So the Federal Trade Commission, FTC, has done these privacy audits historically on
some of the big tech companies.
They've done this for quite a few AI products as well, sort of investigating the privacy
violations of some of them as well.
So I think that that's something that, you know, some of the agencies are excited about
and interested in, and that might be a place where we see movement, but ideally we have
some kind of law.
And we've been in this moment, I guess a very long moment,
where companies have been taking the ask for forgiveness
instead of permission approach to all this.
So erring on the side of just collecting as much data
about their users as they possibly can while they can.
And I wonder what the effects of that
will be in terms of our broader informational environment.
The way surveillance and privacy works is that it's not just
about the information that's collected about you.
It's like your entire network is now caught in this web
and it's just building pictures of entire ecosystems
of information.
And so I think people don't always get that.
But yeah, it's a huge part of what defines surveillance.
Do you remember a surveillance cameraman, Megan?
Ooh, no.
But now I'm regretting that I don't.
Well, I mean, I'm not sure how well it was known, but it was maybe 10 or so years ago,
there was this guy who, he had a camera and he would take the camera and he would go and
he'd stop and put the camera in people's faces.
Oh, wow.
And they would get really upset.
Yeah.
And they would ask him, why are you filming me?
And you know, they would get more and more irritated, you know, and it would escalate.
And I think the meta point that surveillance cameraman was trying to make was, you know,
we're surveilled all the time.
So why is it any different if someone comes and puts a camera on your face when there are cameras all around you filming you all the time?
Right.
That's such a great question.
And yeah, the sort of difference there between the active being filmed and then the sort
of passive state of surveillance is so interesting there.
Yeah.
And, you know, that's interesting that you say active versus passive.
You know, it reminds me of the notion of the panopticon,
which I think is a word that people hear a lot these days.
But it's worth remembering that the panopticon is an old idea.
So it started around the late 1700s with a philosopher named Jeremy Bentham.
And Bentham, he outlined this architectural idea, and it
was originally conceptualized for prisons. You know, the idea was that you have this
circular building, and the prisoners live in cells along the perimeter of the building.
And then there's this inner circle, and the guards are in that inner circle, and they
can see the prisoners, but the prisoners can't see the guards.
Oh my goodness.
And so the effect that Bentham was hoping this would achieve is that the prisoners would
never know if they're being watched, so they'd always behave as if they were being watched.
And that makes me think of the more modern idea of the watching eyes effect, this notion
that simply the presence of eyes might affect people's behavior, and
specifically images of eyes. Simply that awareness of being watched does seem to affect people's
behavior.
Oh, interesting.
You know, beneficial behavior, like collectively good behavior, you know, sort of keeping people
in line in that very Bentham-like way. We have all of these eyes watching us now in,
I mean, even in our neighborhoods and at our apartment buildings
in the form of, say, Ring cameras or other cameras
that are attached to our front doors.
Just how we've really opted into being surveilled
in all of the most mundane places.
I think the question I have is,
where is all of that information going?
And in some sense, that's the question, right?
And Deb Raji has what I found to be a really useful answer
to that question of where our information is actually going,
because it involves thinking of surveillance,
not just as an act, but also as a product.
For a long time, I don't know if you remember those, you know, complete-the-picture apps or, like, spice up my picture.
They would use generative models.
You would kind of give them a prompt, which would be like your face.
And then it would modify the image to make it more professional or make it better lit.
Like sometimes you'll get content that was just, you know, sexualizing and inappropriate.
And so that happens in like a non-malicious case.
Like people will try to just generate images
for benign reasons.
And if they choose the wrong demographic
or they frame things in the wrong way, for example,
they'll just get images that are denigrating
in a way that feels inappropriate.
And so I feel like there's that way in which AI for images has sort of
led to just like a proliferation of problematic content.
So not only are those images being generated because the systems are flawed themselves,
but then you also have people using those flawed systems to
generate malicious content on purpose, right?
One that we've seen a lot is sort of this deepfake porn of young people,
which has been so disappointing to me, just, you know, young boys deciding to do that
to young girls in their class. Like, it really is a horrifying form of sexual abuse.
And I think, like, when it happened to Taylor Swift, I don't know if you remember,
someone used the Microsoft model and, you know,
generated some non-consensual sexual images of Taylor Swift.
I think it turned that into like a national conversation.
But months before that, there had been a lot of reporting of this happening in high schools.
Anonymous young girls dealing with that, which is just another layer of like trauma,
because you're like who do you, you're not Taylor Swift, right? So people don't pay attention in the same
way. So I think that that problem has actually been a huge issue for a very long time.
Andrea, I'm thinking of that old line about how if you're not paying for something in
the tech world, there's a good chance you are probably the product being sold.
Right.
But I'm realizing how outmoded that idea probably is at this point, because even when we pay
for these things, we're still the products.
And specifically, our data are the products being sold.
So even with things like deepfakes, which are typically defined as using some kind of machine learning or AI to create a piece of manipulated media, even they rely on surveillance in some sense.
And so you have this irony where these recordings of reality are now also being used to distort reality.
Yeah. You know, it makes me think of Don Fallis, this philosopher, who talked about the epistemic
threat of deepfakes and that it's part of this pending info-pocalypse.
Oh my goodness.
Which sounds quite grim, I know.
But I think the point that Fallis was trying to make is that with the proliferation of
deepfakes, we're beginning to maybe distrust what it is that we're seeing.
And we talked about this in the last episode, you know, seeing is believing might not be enough.
And I think we're really worried about deepfakes,
but I'm also concerned about this concept
of cheap fakes or shallow fakes.
Mm, mm.
So cheap fakes or shallow fakes,
it's, you know, you can tweak or change images
or videos or audio just a little bit.
And it doesn't actually require AI or advanced technology to create.
So one of the more infamous instances of this was in 2019. Maybe you remember there was a video of
Nancy Pelosi that came out where it sounded like she was slurring her words.
Oh, yeah.
Right. Yeah. But really, the video had just been slowed down using easy audio tools and just slowed down
enough to create that perception that she was slurring her words.
So it's a quote unquote cheap way to create a small bit of chaos.
And then you combine that small bit of chaos with the very big chaos of deepfakes.
Yeah.
So one, the cheap fake is it's her real voice.
It's just slowed down, again, using like simple tools.
But we're also seeing instances of AI generated technology
that completely mimics other people's voices.
And it's becoming really easy to use now.
There was this case recently that came out of Maryland
where there was an athletic director at a high school.
He was arrested after he allegedly used
an AI voice simulation of the principal at his school.
He allegedly simulated the principal's voice saying some really horrible things.
It caused all this blowback on the principal before investigators,
they looked into it, they determined the audio is fake.
But again, it was just a regular person that was able to use this really advanced-seeming
technology that was cheap, easy to use, and therefore easy to abuse.
Oh, yes.
And I think it also goes to show how few sort of cultural safeguards we have in place right
now, right?
Like the technology will let people do certain
things and we don't always, I think, have a really well-agreed-upon sense of what constitutes
abusing the technology. And, you know, usually when a new technology comes along, people
will sort of figure out what's acceptable and, you know, what will bear some kind of
social cost and will there be a taboo associated with
it. But with all of these new technologies, we just don't have that. And so people, I
think, are pushing the bounds to see what they can get away with.
Yeah. And we're starting to have that conversation right now about what those limits should look
like. I mean, lots of people are working on ways to figure out how to watermark or authenticate
things like audio and video and images.
Yeah.
Yeah.
And I think that that idea of watermarking too can maybe also have a cultural implication.
You know, like if everyone knows that deepfakes can be easily tracked, that is itself a pretty good disincentive against creating them in the first place, at least with an intent to fool or do something malicious.
Yeah.
But in the meantime, there's just going to be a lot of these deepfakes, cheap fakes, and shallow fakes that we're just going to have to be on the lookout for.
Is there new advice that you have for trying to figure out whether something is fake?
If it doesn't feel quite right, it probably isn't.
A lot of these images don't have a good sense of spatial awareness, because it's just pixels
in, pixels out.
And so there's some of these concepts that we as humans find really easy, but these models
struggle with.
I advise people to be aware of it, sort of trust your intuition. If you're noticing weird artifacts in the image,
it probably isn't real.
I think another thing as well is like, who posts?
Oh, that's a great one. Yeah.
Like, I mute very liberally, like on Twitter, any platform.
I definitely mute a lot of accounts that I notice have been caught posting something, like a community note or something will reveal that they've been posting fake images, or you just see it and you recognize the design of it.
And so I just mute that kind of content.
Don't engage with those kinds of content creators at all.
And so I think that that's also like another successful thing.
On the platform level, deplatforming is really effective if someone has sort of three strikes
in terms of producing a certain type of content.
That's what happened with the Taylor Swift situation, where people were disseminating these Taylor Swift images and generating more images, and they just went after every single account that did that, completely locked down her hashtag, that kind of thing, where they just really went after everything.
I think that that's something that we should just do in our personal engagement as well.
MUSIC
Andrea, that idea of personal engagement, I think, is such a tricky part of all of this.
I'm even thinking back to what we were saying before about Ring and the interplay we were getting at between the individual and the collective.
In some ways, it's the same tension that we've been thinking about with climate change and
other really broad, really complicated problems.
Yeah, yeah, yeah.
This, you know, connection between personal responsibility, but also the outsized role
that corporate and government actors will have to play when it comes to finding solutions.
And with so many of these surveillance technologies, we're the consumers with all
the agency that that would seem to entail. But at the same time, we're also part of this
broader ecosystem where we really don't have as much control as I think we'd often like to
believe. So our agency has this giant asterisk,
and consumption itself in this networked environment
is really no longer just an individual choice.
It's something that we do to each other,
whether we mean to or not.
Yeah, that's true, but I do still believe
in conscious consumption so much as we can do it.
Like even if I'm just one person,
it's important to me
to signal with my choices what I value.
And in certain cases, I value opting out of being surveilled
so much as I can control for it.
You know, maybe I can't opt out of facial recognition
and facial surveillance, you know,
because that would require a lot of obfuscating my face.
And, I mean, there's not even any reason
to believe that it would work.
But there are some smaller things that I personally find important.
Like I'm very careful about which apps I allow to have location sharing on me.
You know, I go into my privacy settings quite often.
You know, I make sure that location sharing is something that I'm opting into on the app
while I'm using it.
I never let apps just follow me around all the time.
You know, I think about what chat apps I'm using, if they have encryption, you know, I do hygiene on my phone,
around what apps are, you know, actually on my phone,
because they do collect a lot of data on you in the background.
So if it's an app that I'm not using or I don't feel familiar with,
I delete it.
Oh, that's really smart.
And it's such a helpful reminder, I think, of the power that we do have here.
And a reminder of what the surveillance state actually looks like right now. It's not some cinematic dystopia. Sure, it's the cameras on the street, but it's also the watch on our wrist, it's the phones in our pockets, it's the laptops we use for work. And even
more than that, it's a series of decisions that governments and
organizations are making every day on our behalf. And we can affect those decisions if we choose to,
in part just by paying attention. Yeah, it's that old adage,
who watches the watcher? And the answer is us.
That's all for this episode of How to Know What's Real. This episode was hosted by Andrea Valdez and me, Megan Garber.
Our producer is Natalie Brennan.
Our editors are Claudine Ebeid and Jocelyn Frank.
Fact-check by Ena Alvarado.
Our engineer is Rob Smierciak.
Rob also composed some of the music for this show.
The executive producer of audio is Claudine Ebeid,
and the managing editor of audio is Andrea Valdez.
Next time on How to Know What's Real.
And when you play the game multiple times,
you shift through the roles,
and so you can experience the game from different angles.
You can experience a conflict
from completely different political angles
and re-experience how it looks from each side,
which I think is something like,
this is what games are made for.
What we can learn about expansive thinking through play.
We'll be back with you on Monday.