Your Undivided Attention - From Russia with Likes (Part 2) — with Renée DiResta
Episode Date: August 1, 2019. In the second part of our interview with Renée DiResta, disinformation expert, Mozilla fellow, and co-author of the Senate Intelligence Committee's Russia investigation, she explains how social media platforms use your sense of identity and personal relationships to keep you glued to their sites longer, and how those design choices have political consequences. The online tools and tactics of foreign agents can be very precise and deliberate, but they don't have to be -- Renée has seen how deception and uncertainty are powerful agents of distrust and easy to create. Do we really need the ease of global amplification of information-sharing that social media enables, anyway? We don't want spam in our email inbox, so why do we tolerate it in our social media feed? What would happen if we had to copy and paste and click twice, or three times? Tristan and Aza also brainstorm ways to prevent and control disinformation in the lead-up to elections, and particularly the 2020 U.S. elections.
Transcript
Ultimately, propagandists have to reach an audience, and so that's one of the things that we look for, is when you have to reach mass numbers of people, what do you do to do it and what makes that action visible?
You're listening to Part 2 of our interview with Renee DiResta, disinformation expert and co-author of the Senate Intelligence Committee's Russia investigation and a Mozilla fellow.
If you haven't heard part one of our interview, we highly recommend you go back and listen now.
It's an essential guide to how disinformation spreads online and why all of us,
from the platforms to the users to law enforcement, have been caught flat-footed by this rapidly evolving threat.
But we can catch up if we take the time to look at the technology that's driving this harmful virality and take steps to stop it.
We'll consider a few of those solutions in part two of our interview with Renée DiResta.
I'm Tristan Harris.
And I'm Aza Raskin.
This is Your Undivided Attention.
If you follow an antivax page or join an antivax group,
Just one of them.
Yeah, just one.
Facebook would start to show you other ones because the algorithm doesn't understand.
It's amoral.
It does not understand what it is showing you.
It just knows that, mathematically speaking, if you like this, you are statistically likely to like this other thing as well.
The word like there is so deceiving.
It means you hit a button in a moment of choice.
In the literal.
Like not in the sense of my whole nervous system is telling me this sort of deep human life-long desire sort of version of like, yes.
Like trademark.
But what we would see was that
accounts that had engaged with this content would not only see more of that content,
because that's sort of a content-based filtering model, right?
But then there's the collaborative filtering, which says that here is what people who are
similar to you enjoy or engage with or like capital L trademark.
Or are most susceptible to.
And that's where you start to see anti-vaxxers getting referred into chemtrails groups.
That was how I found the flat earthers actually for the first time was looking at some of that stuff.
And then you get at the political, the political conspiratorial communities like PizzaGate was
something that was popping up a lot then, too.
And this is in the Facebook group recommendation system.
Yeah, it's just a Facebook group recommendation.
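To make the distinction concrete: a content-based filter shows you more of what you already engaged with, while collaborative filtering shows you what overlapping users engaged with. Below is a minimal, hypothetical sketch of that second pattern applied to group memberships; the group names, data, and scoring are invented for illustration and are not Facebook's actual system.

```python
# A toy sketch of "people similar to you also joined" group recommendation.
# Illustrative only -- not Facebook's recommender; all data here is made up.
from collections import Counter

memberships = {
    "user_a": {"anti_vax_moms"},
    "user_b": {"anti_vax_moms", "chemtrails_watch"},
    "user_c": {"anti_vax_moms", "chemtrails_watch", "flat_earth_forum"},
    "user_d": {"gardening_tips"},
}

def recommend_groups(user, k=3):
    """Score groups joined by users whose memberships overlap with this user's."""
    mine = memberships[user]
    scores = Counter()
    for other, theirs in memberships.items():
        if other == user:
            continue
        overlap = len(mine & theirs)      # crude similarity between the two users
        if overlap == 0:
            continue
        for group in theirs - mine:       # groups this user hasn't joined yet
            scores[group] += overlap
    return [group for group, _ in scores.most_common(k)]

print(recommend_groups("user_a"))  # ['chemtrails_watch', 'flat_earth_forum']
```

Nothing in that scoring knows what a chemtrails group is; membership overlap alone is enough to chain one conspiracy community to the next, which is the amorality being described above.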
What's fascinating about your work, Renée, is you've shown how the same pattern of conspiracy
correlation reinforcement is happening on Amazon, it's happening on Instagram, it's happening on
Twitter, which recommended ISIS accounts to users.
And that's because recommendation engines are broken across the internet,
because recommendation engines are largely the same across the internet.
There's probably various weightings and various nuance and various ways in which, like,
each of the platforms has different visibility into who you
are based on what kind of information they have about you. So you'll notice on Amazon they
have this multi-tiered recommendation engine: people who searched for this also looked at; people
who looked at this also looked at; people who looked at this also bought. So there's the conversion
kind of coming into play. You know, if you look at a topic, they will send you an email with
other topics if you don't convert and buy something on the site. So they're always driving
to their goal and they're showing you all manner of things that you are statistically likely
to be interested in from their statistical model of human behavior and of user behavior on the site.
And ultimately what flows into their business models. So Amazon's, these are the things that people
tend to buy. So they're just going to keep doing whatever works at getting you to buy it.
In Facebook's case, these are groups that, if we get you to join them, we can predict
you're likely to use the site a lot more than if you don't join this group. I mean, this was actually
how the People You May Know suggested friends feature came up. Like when you first joined Facebook,
they said, oh, you're not using it enough. How do we, like, lock you into using
this thing? Let's start recommending as many people, and specifically we'll recommend another
user who also doesn't use it very much, and we'll put you into a double bind, so now you'll
both have increasing reasons to use it, and you're sort of filling up the fuel tanks across
all of Facebook at once.
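The Amazon tiers described above ("people who searched for this also looked at," "people who looked at this also bought") can be read as simple co-occurrence counts over an event log. A hypothetical sketch, with an invented log and item names, not Amazon's actual pipeline:

```python
# Hypothetical sketch of tiered co-occurrence recommendations
# ("also looked at", "also bought"); the event log is invented.
from collections import defaultdict

events = [  # (user, action, item)
    ("u1", "view", "wellness_book"), ("u1", "view", "vaccine_skeptic_book"),
    ("u2", "view", "wellness_book"), ("u2", "buy",  "vaccine_skeptic_book"),
    ("u3", "view", "wellness_book"), ("u3", "buy",  "detox_supplement"),
]

def co_occurrence(seed_item, seed_action, target_action):
    """Count target-action items among users who did seed_action on seed_item."""
    by_user = defaultdict(lambda: defaultdict(set))
    for user, action, item in events:
        by_user[user][action].add(item)
    counts = defaultdict(int)
    for actions in by_user.values():
        if seed_item in actions[seed_action]:
            for item in actions[target_action]:
                if item != seed_item:
                    counts[item] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(co_occurrence("wellness_book", "view", "view"))  # "also looked at" tier
print(co_occurrence("wellness_book", "view", "buy"))   # "also bought" tier
```

Each tier points toward the business goal at the end of the funnel, the purchase, which is the "always driving to their goal" point made above.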
Yeah, it's, I think recommendation engines, they've been sort of one of the areas that I think
we have the greatest amount of leverage over, in that it's something where you don't need
regulation to make it happen.
You need a platform to decide that it's going to go and do it.
YouTube has made some efforts, particularly at the start of this year. Facebook has also made some efforts
at the start of this year. Antivax content taken out. A lot of the pseudoscience conspiracies
taken out. You know, it's interesting because this is where we get at the nuance around
conversations about expression versus amplification. You can still run your page and run your content
on Facebook. You can still put your video up on YouTube. But you're not going to get the free
amplification and lift of it being recommended to other people. So you can build your audience. You can
send them there. But the platform's not going to do the heavy lifting for you. That, I think,
is a pretty marked change that we've only started to see them really work on. I used to call it
do not recommend, but I mean, we don't have a good meme for that one yet. But it's the,
how do we recalibrate as we talk about things like the First Amendment? You have a right to
expression. You have never had a right to amplification. And this is a nuanced and interesting
topic, and I think we're starting to have conversations about what that
looks like.
When I think about recommendation systems, I also think about them as a kind
of autocomplete for the human mind, or an autocomplete for society, or an autocomplete for your
consequences.
For consequences.
They start making realities happen.
They're unfolding realities.
Do you guys run marketing campaigns ever?
I've done two startups and had marketing responsibilities for both in the very early
stages before we hired somebody better than me.
and I would go into HubSpot
and I would try to think of
what's my drip campaign email look like
and it has these remarkably sophisticated suggestions
because they're built
from an aggregate of like
all of the millions of people
who have written emails in HubSpot
so they have this incredible aggregated analytics
and it'll actually tell you like
your email is 13% too long
or it'll like highlight words
and like you know
you can go through basically
this marketing message crafter AI
that tells you that this is what
statistically works for people
and it's interesting because then people
do kind of get wise
to these sorts of manipulative things. You know, the worst emails are the ones where it says like, hey, and you think you know the guy. Or "re:" to make you think it's a reply. Right. Yeah. There's, you know, I always say that if you want to see where disinformation is going, just look at where the marketers are, because it's incentives. Because, you know, the difference between people who are doing it for political motivation versus economic motivation is that the economic motivation really pushes people to get to the forefront a lot faster. It's a
very powerful motivator. It's where I think we actually can learn a lot from our old
efforts in anti-spam and things like that. Nobody ever feels sorry for the spammer whose email
doesn't make it to the top of your inbox. Nobody's ever like, oh man, we're really censoring
that guy who wants to sell me Viagra. You know, it's a fundamentally different way of thinking
about what has the right to appear in our field of vision online and how we should think about
what platforms serve us. If platforms didn't have some protections where they say this is a low-quality
domain, our inboxes would be flooded with crap all the time. That is not a sensational thing to say, but when you port that over to algorithmically disseminated content, especially on YouTube, if you've seen the videos that are just generated by AI, it's weird to think that we would treat our inbox experience differently than our social experience. And that's because I think there's still some sort of residual belief we have that what we engage with in the social space is real people. And that's just not the case.
I think a lot of it is, how can we shine a light on this?
How can we shine a light on tactics, make people realize?
There is a downside to that.
So this is what I would say, which is go on Twitter, click into one of President Trump's tweets
and look at how many people are screaming that the other person is a bot, right?
It's really a very, very high percentage of people.
And those of us who study bots knew that that was going to come, because as people became
more aware that bots were a thing.
Then you distrust everything.
Then you distrust everything.
And that's why, with so much of this stuff, there's no downside,
because if you can convince people
that everyone around them
is a Russian spy or a troll or a bot
Then you win.
Right.
Then there's,
you know,
either you win by staying secret
or you win because you have made people
so deeply suspicious
and unable to recognize
that sometimes there are just people
who hold different opinions.
Right.
That's the point about deepfakes
is that we don't even need
to have deepfakes flooding the market.
Right.
The threat of the deepfake.
Yeah.
And being able to point at it
and say, oh, that was a deepfake.
Impugning the integrity of the person
or the speaker or the voice.
And that's where,
we try to find what are the ways in which you can educate people that this sort of thing
is happening. This is where I always say like the platforms. But in a way that doesn't cause them to
be paranoid about everything. Right. Right. And the one thing I want to add here is, you know, when you get
people into a paranoid mode, once you are convinced that you're part of this paranoid game,
my friends did this thing once for my birthday where they told me to go to this place at the docks in New York
and then someone came up, photographer, was taking Polaroid photos of something. And suddenly she
came up to me and she took these Polaroids of me. And then I realized,
she was part of this script that my friends had coordinated
and I was part of this game that I didn't realize
and they told me to go to this other place
and suddenly when I was looking around
I was paranoid of everyone.
I thought everyone could be in on this thing
that I didn't know was happening to me.
And I say that because the thing that's challenging
about getting people to be skeptical or paranoid
is it makes people,
like it flips your whole nervous system
into a very different relationship with reality.
It's not just like it's slightly different.
It actually flips it in this very different way.
Yeah, constantly on edge.
Right.
So how do we come up with a kind of calm
cynicism, or calm epistemic vigilance, as one of the terms for it goes, where you are
vigilant about the things that are coming your way without going to the sort of extreme of,
it's all fake news, it's all just, well, what is truth anyway, which is a totally unsophisticated
alternative to a vigilant way of looking at reality?
Yeah, I don't have a good answer to that question.
I think I've been to Estonia and I've been to Stockholm, and I was fortunate to be able to
have those experiences last year where I met with people who actually work on devising the sort
of like citizen curriculum. And they begin to have these conversations like very early because
propaganda has been so much an integral part of, you know, countries that sort of border
Russia, and they feel that they've experienced this kind of propaganda, these sorts of
interference, for actually quite some time. And they have these sorts of handbooks. And they're
actually relatively, they're not sensational at all. It's very matter of fact. This is how it looks.
This is what it does.
Does this somehow trip your emotional, like, does this get your hackles up?
Think about why.
Think before you share.
You know, it's not the...
Attacking your emotional ergonomics.
There's a high degree of trust in the government still there.
So when the government puts out these things, people aren't like, oh, the government is lying
to me about there being Russian trolls in the world.
So in some ways they're in a better position than we are, honestly.
And that's where that foundation of trust in something has to come from.
And once you realize that, again, we're polarized right now as a country.
We don't trust our institutions.
But ask how much of that distrust of institutions was there an attempt to make me have, so that now there's no place from which we can say, groundedly, let's look into this.
Because back in the 1940s, we've talked about this, the Committee for National Morale and the Institute for Propaganda Analysis, these were two nonprofit organizations that were meant to try and protect the U.S. democratic psyche from foreign fascist propaganda, saying this is what it looks like, just like you're talking about with Estonia against Russian propaganda.
And we used to have programs like that recognizing that the human mind and the democratic psyche is vulnerable.
But I think it does come from, you know, this sort of sophistication about human nature and a different view of the mind, not as being this authoritative point of all beliefs and choices, but as something that's much more vulnerable that needs protecting.
Hey, listeners, we're going into the 2020 elections without having solved a lot of the problems Renée
has told us about. And now there are even some new ones. So Tristan and I talked about what we could do
about it. Okay, so what should we do for election 2020? What are the solutions? You know, Facebook and
YouTube and Twitter are the only companies with the living, breathing access to the crime scenes,
you know, and to know what is going on there. You know, like they own the satellite network.
So this isn't a solution. It's just, it's a, it puts us in a very tough spot. I think, you know,
one thing we absolutely need is fast lanes for researchers. Fast lanes for researchers. Fast lanes for
researchers just means there's a lot of people who are reporting things to tip lines, but when
you have special people like Renee or Guillaume, who we interviewed last time from YouTube, who
really, really know and spot these things early, making sure that they have immediate fast
reporting directly to the product teams at these companies. That's sometimes an issue, is that they'll
send stuff and the relationships have to get developed so that that stuff is treated as high
priority when it comes in. You know, as we found out, with Facebook during the India
election, they had a tip line, but the tip line only operated in 10 of the 22 different
languages in India. So, you know, there are ways that you can kind of scale up faster
response times for, you know, tips and claims, but even there, you're going to be resource
strapped. And some companies are going to have more resources than others. So how do we tie
the responsibility back to the companies? Let's say we get through the 2020 election
And we look back because, you know, we get Freedom of Information Act requests or, you know, the fast lanes weren't fast enough, but we now have the data.
And we see that on Facebook, some large percentage of the direct political ads were, in fact, just false.
The first target, I would imagine, to lock down, would be the Facebook advertising system, which is currently still based on custom audiences, look-alike models so I can pump in a list of, hey, here, I've got these 20 user IDs who I know are moms who believe in conspiracy theories.
Hey, and I'm Russia. Hey, Facebook, I want to order up 10,000 other users who look like these 10 users that I just handed you.
You know, lookalike models can be really useful because I can say, here's 10 people who like Nike shoes.
And I know they like Nike shoes. And I'm Adidas. So I want to say, hey, Facebook, give me 10,000 users like these because I know they'll like shoes.
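A lookalike expansion can be thought of as a nearest-neighbor search: hand the platform a seed list, get back the users whose profiles sit closest to it. Here is a toy, hypothetical sketch with invented interest vectors; the real Lookalike Audiences product is far more elaborate and its features are not public.

```python
# Toy sketch of a "lookalike audience" expansion: rank non-seed users by how
# similar their interest vectors are to a seed list. All data here is invented.
import math

profiles = {
    "seed_1": {"running": 1.0, "sneakers": 0.9},
    "seed_2": {"sneakers": 1.0, "fitness": 0.7},
    "cand_a": {"sneakers": 0.8, "running": 0.6},
    "cand_b": {"knitting": 1.0},
}

def cosine(a, b):
    """Cosine similarity between two sparse interest vectors."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def lookalike(seed_ids, size):
    """Return the non-seed users most similar, on average, to the seed audience."""
    candidates = [u for u in profiles if u not in seed_ids]
    scored = [(u, sum(cosine(profiles[u], profiles[s]) for s in seed_ids) / len(seed_ids))
              for u in candidates]
    return [u for u, _ in sorted(scored, key=lambda kv: -kv[1])[:size]]

print(lookalike({"seed_1", "seed_2"}, size=1))  # ['cand_a'] -- the sneaker fan
```

The same machinery works whether the seed list is sneaker buyers or twenty accounts known to share conspiracy content, which is exactly the trust problem raised next.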
But they can also be really bad tools. So again, I think until we have a ridiculously rigorous trust layer where we say, well, who do we trust to use a tool like
lookalikes? Like, what is a trustworthy actor to be able to say, like, let me target people who look
like these other people? Who can guarantee that they are a good faith user of a system like
that? And until that can be guaranteed, shut it off. If I was an engineer or designer sitting
within Facebook or Twitter, what could I be doing to even have a small positive impact here?
The first thing is, I would just ask these questions on the inside. What do people say
when you ask, do we have a way of protecting against this happening?
I mean, here's the structure of it, just everything we've laid out.
The answer is going to be no, but are people even allowed to have that conversation?
But we need to recommend that they do.
Because I think everyone in the company needs to understand what are you going to be complicit in, you know?
All right, back to Renee's interview.
One of the interesting things that I've confronted recently is ways in which things that privacy activists want,
I do not want as a disinformation researcher, ways in which there are certain channels that you can push people into that make it next to impossible to see what propaganda is spreading where and how.
So it's how is disinformation going to evolve and what do you do to detect it?
And that is something where as we push people into encrypted channels and things, the AI is not going to be able to do very much.
So you can have the best moderation AI tool in the world, and if it can't see what people are sharing,
you can't do anything.
So this is important because Facebook is just moving all of their conversations.
They've announced a whole directional shift, moving from a public platform where there is the
opportunity to do research, to moving the entire thing into these private channels.
It's a very clever move, right?
Because I can feel inside of me.
I'm like, oh, the part of me that's like, privacy matters, I don't want my data leaking out, I want it to
just be for me.
I'm like, oh, yay, that's a great move.
And then the cynic in me is like, wait, isn't this just because Facebook's getting hammered
for all the stuff that people see that's wrong.
And if they move it to encrypted channels,
then nobody can see what's wrong
and then there's no accountability.
But the question becomes more,
do people maintain a presence on overt platforms like Facebook
where there's more discoverability?
Does everybody fracture back into small friend-based groups?
I spend more time on WhatsApp now than I ever have before.
Funny enough, talking about Twitter.
It's this really weird, you know,
people push tweets from Twitter into WhatsApp
because they want to have the conversation privately
because they don't like the mob on Twitter.
And so it's a way to have a conversation.
with like a trusted smaller group of people about something maybe that's controversial that you don't want to say and, you know, inadvertently attract the Twitter trolls. I once heard Chris Poole describe it as it's not who you share with. It's who you share as. So the idea that we don't have the same identity in all places. And so if you are somewhat public on Twitter, you would welcome the opportunity to have these conversations in encrypted channels where you don't have to face the same risk of, you know, being exposed in some way.
What should I know about my own psychology, as a denizen of the internet, to give me antibodies against this kind of disinformation?
I think the acknowledgement and internalization that you are vulnerable to it is a good start.
I think that there's a lot of defensiveness.
I mean, I think I get two, three tweets a day from somebody telling me, well, maybe you're an idiot and you'd fall for it, but I never would.
I always get a kick out of those people.
But, um, in magic we say the PhDs are the easiest to manipulate.
It's, yeah, it's, I was looking at a page yesterday that I'm pretty sure is not authentic, but you're just constantly second-guessing what you're seeing and trying to come up with some probabilistic weighting of when do I potentially embarrass myself by sending this off to somebody else and saying, what do you think? You look at it now. It becomes a really interesting question because if it's done well, it triggers some little sense of, like, the uncanny valley almost. This isn't quite normal.
This isn't quite right, but you can't quite put your finger on why.
And so I find when I get into those situations, I just try to actually just step away
and go look at something else for a while before going back to it and starting again.
You know, when somebody's serving an ad at you, like you should be ultra-suspicious of why you
are getting that ad.
And that's a thing where there's an innate distrust for product ads on the internet.
Oh, they're just trying to get my money, right?
But I don't think we have that same innate distrust of political ads, and that's because they don't look like ads a lot of the time.
Often they're not even ads. They're just content or pages. Right. It's just content or pages. Issues. It's issues, not candidates, basically.
And this has also come up a little because people think, oh, those ads, I would just not look at the ads, or I don't even look at the ads. But you should talk about how most of this, especially the information warfare happening from all these different countries now who've entered the game, it's not even happening from ads per se anymore.
It's not happening from ads anymore.
Either they have built up pages, and then once you build up pages with the right content,
you'll get the lift of organic shares, or you can just go into a group where people have already declared what they are,
and you can share your article into the group.
And you infiltrate using these techniques that we've gotten to see here.
Yeah, and so there are a lot of fake personas.
There's been a lot of speculation about the yellow vests movement, and to what extent, you know,
as those protests were coordinated in Facebook groups,
were outside agitators coming in to either amp up the more violent elements,
the people who are more likely to set things on fire versus just the people who march.
Whether there were foreign infiltrators, you know, this is, of course, a ripe environment.
You've got really heated, passionate people who are going to march in the streets.
You know, can you push them to do things that are-
Tap into grievances and just make them more violent.
There's actually in this list, we've got this list of techniques here,
forum sliding, consensus cracking.
And one of them here is called anger trolling.
So is there a way to peel off the most anger-driven forum posters, most inclined to violence, identify who they are, and then go directly after them?
And this is one of the techniques that I see you talking about.
I always think of, not to put this on Russia, but, you know, if you think of Putin as that kid on the playground, when two kids look like they're about to start getting into a fight, then the other kid goes, yeah, fight, fight, fight, fight, and that kind of changes the way you see the situation because that kid's really annoying.
He's kind of a brat, and he's making everybody else fight, and whenever he sees people just barely start to get into this battle, he's trying to make it more extreme.
And I find that image helpful because what I think we have to realize is that the antidote to this would be a supernatural ability to find common ground or to be calm and to not fall into emotions, to have more recognition of ways in which the system might be rewarding us for being outraged, rewarding us for being polarized, rewarding us for getting into fights, as opposed to
who can be the most calm, nuanced thinker, I mean, or speaker?
Who can speak the least but say the most?
And this is actually a question I really wanted to ask you, Renee.
So, like, you can think of the whole virality machine, the attention economy, the
extractive attention economy, tilting everything towards a structure where the least nuanced,
most outrageous, most polarized thing wins, knowing what you do about both disinformation
and human psyche and how information travels, how should we start to think about
redesigning sort of the infrastructure, the shape of our social graphs, like at a structural level
to make them more impervious to disinformation?
I think some of that is the signal that you can get about the content, the dissemination
pattern, the content for authenticity, but then also, you know, Zuck put up that graph, I think,
where he says that, you know, distribution trends upward the closer you get to that inappropriateness line
where they would moderate it away. So I think that they're trying to think about what that
looks like and how their incentive structures drive that.
Incentive gradients, I think they call it.
Yeah, I think there's, you know, there's the reactji, right?
If everybody's anger reacting, you know, what do you do with that?
Is there a way that, you know, I'm actually really curious.
I'd love to hear them talk about how the reactjis figure into their ranking algorithm.
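How the reactjis feed into ranking isn't public, but a toy example shows why the choice of per-reaction weight matters so much. The weights below are invented purely for illustration, not any platform's real values.

```python
# Invented weights, purely illustrative -- the real ranking model is not public.
def rank_score(reactions, weights):
    """Weighted sum over reaction counts, a stand-in for an engagement score."""
    return sum(weights.get(kind, 0) * count for kind, count in reactions.items())

post = {"like": 120, "angry": 400, "share": 90}

outrage_rewarding = {"like": 1, "angry": 5, "share": 3}  # anger boosts distribution
outrage_neutral   = {"like": 1, "angry": 0, "share": 3}  # anger counts for nothing

print(rank_score(post, outrage_rewarding))  # 2390
print(rank_score(post, outrage_neutral))    # 390
```

The same post is either a top performer or an also-ran depending entirely on how much the anger reaction is allowed to count.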
I mean, I'm thinking about WhatsApp deciding to limit the number of people you can
share to, which changes the structure of the problem.
So it's not hiring some 10,000 people to try to hold the boulders back.
It's saying we're going to change the landscape that human beings are communicating on.
to let the more nuanced thing have a greater chance.
Yeah, what are the kinds of...
Friction around dissemination and virality, right?
Have you read the article or are you clicking share
before even actually clicking into the article?
Do you even know what it says?
I talked to someone at Facebook who said the number one thing
that he thinks would stop a lot of these problems
is why in the world did we invent instant impulsive sharing?
I mean, there's a story even about Steve Jobs
when the first podcast app was created.
Someone had suggested, oh, we should make it so
you can instantly share a podcast into a feed
and everybody else can see the podcasts that you're listening to.
And he said, no.
He said, if it's that important,
someone will make a conscious choice
to, like, copy the URL and open up a message
and send it to one specific person or two specific people
because it's worth their attention.
Yeah. I agree with that.
In a time when attention is so fragmented
and it's hard to actually say
that we're concentrating or spending attention
on any of the right things,
that actually matter to navigating, like, civilizationally, including in our personal lives,
having that kind of threshold of what actually, genuinely crosses the bar
of what's worth our attention.
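The kinds of friction discussed here, a cap on forwards like WhatsApp's and a pause before sharing something you never opened, are small checks at the moment of dissemination. A hypothetical sketch; the limit value and prompt wording are invented for illustration, not any platform's actual behavior.

```python
# Hypothetical sharing friction: a forward cap and a read-before-share prompt.
# The limit value and prompt text are invented for illustration.
FORWARD_LIMIT = 5

def can_forward(message, target_chats):
    """Block forwarding once a message has already reached the cap."""
    return message.get("forward_count", 0) + len(target_chats) <= FORWARD_LIMIT

def share_prompt(opened_article: bool) -> str:
    """Insert one extra conscious choice before an unread link is shared."""
    return "Share" if opened_article else "You haven't opened this article. Share anyway?"

msg = {"forward_count": 4}
print(can_forward(msg, ["chat_1", "chat_2"]))  # False: would exceed the cap
print(share_prompt(opened_article=False))      # the extra click before impulsive sharing
```

Neither check censors anything; both just change the landscape the message travels across, which is the structural point being made above.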
What are your closing thoughts?
Are there, are there reasons to be hopeful?
What can we, like, learn about?
I think we've come a long way, like a remarkably long way, even just in the framing of the
problem in the last couple of years, right?
There's a long way to go.
There's going to be some really interesting tensions, I think,
between the disinformation versus privacy.
I feel like I am a proponent of decentralization, actually.
I can't quite figure out how we get there safely and effectively, so to speak.
But I do wonder a lot about this idea that we have a global public square.
That's actually ludicrous.
That should never have existed.
The idea of that shouldn't even make sense to people, right?
We don't even have a national public square.
There's no such thing.
And there is something to be said for smaller scales of communication, for almost like an internet federalism, if you will, like a way in which people find communities, they join their communities, and there's less of this constant loggerheads battle for attention. Right now, all information flows through these five channels, so controlling them is critical to controlling the conversation across the entire planet. I really do believe that decentralization would be a net force for good.
What I'm hearing you say there is, like, in the back of my head, I hear things like, you know, the human social
animal was adapted to living in smaller groups. And that's just, it's where we're naturally
most brilliant. And so our technology in many ways should fit the ergonomics of our social
systems. I think that that's true. I think I would agree with that. I'm just not sure how that
happens. And so I've been trying to read other people, including people who I, you know,
traditionally would have disagreed with. Mike Masnick has an interesting thing on protocols, not
platforms. Just people who are thinking about, what does it look like to facilitate a system in which
we aren't trying to push everybody into one central conversation. Can you possibly have a
healthy conversation with all of these people gathered in one place all of the time? I would argue
that probably the answer is no. And so I'm curious to see how the platforms resolve that tension
and how we as users push them to resolve that tension, how regulators push them to resolve that
tension. And I think that that's going to be something that's going to come up a lot over the next two
years. Yeah, let's add one more thing to that, which is, you know, what are the group sizes where you feel
a really inclusive level of participation?
You know, if you have six people, people can really raise their hand.
They know everyone else pays attention to them,
and everyone's getting certainly enough of an opportunity to speak.
If you have 20 people in a group, suddenly you have 20 of the smartest people in the world,
but then they're going to be beating each other out.
So it's just a product of a kind of physics, a kind of human social physics,
that some group sizes are conducive to open-mindedness, civility, decency,
and others are not.
And I hear you saying less the decentralization and more breaking up the big public square into appropriately sized spaces.
I mean, I think the way you do urban planning is you have plazas and you have warrens.
You have big plazas in the center and there are certain things that happen there.
And then you have little side areas and warrens and little benches.
And there's these different sized units, these different like attentional vehicles for group conversation.
And that's one of the things that I would love to see people get sophisticated about: what are those different sized spaces that are best for different
kinds of conversations. Yeah, I agree. Cool. Thank you so very much, Renée. It's always so much fun
to talk to you. Likewise. I learned so much every time. You always blow our minds. On our next
show, we'll be talking to Gloria Mark, a professor of informatics at the University of California,
Irvine. We carried around stopwatches, and we timed every single activity that people
did to the second. So they turned to their email. That would be start time. Click on this
stopwatch. Then they turn away from email. That would be stop time. Click on the stopwatch.
And we recorded all these things so we could be as precise as possible. What was Gloria
measuring? She's clocking our attention spans down to the second. They're eroding fast,
as you might have guessed, but just how fast and what's the problem with that? Gloria will share
those findings and tell us what we can do to stop the downgrading on the next episode of Your Undivided
Attention.
Are you concerned about the next elections happening around the world?
Do you have ideas?
Do you want to chime into this conversation?
Do you want to put pressure on the technology platforms?
Do you want to help them?
After each episode of the podcast, we're holding real-time virtual conversations
with some members from our community to really react and share solutions.
You can find a link and information about the next one on our website at humanetech.com slash podcast.
Your undivided attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi.
Our associate producer is Natalie Jones.
Original music and sound design by Ryan and Hayes Holiday.
Henry Lerner helped with fact-checking,
and a special thanks to Abby Hall, Brooke Clinton,
Randy Fernando, Colleen Hakes, David J.,
and the whole Center for Humane Technology team
for making this podcast possible.
And a very special thanks goes to our generous lead supporters
at the Center for Humane Technology
who make all of our work possible,
including the Gerald Schwartz and Heather Reisman Foundation,
The Omidyar Network, the Patrick J. McGovern Foundation, Craig Newmark Philanthropies, Knight Foundation, Evolve Foundation, and Ford Foundation, among many others.