Offline with Jon Favreau - GOP's "Black Nazi" Porn Posting, Instagram's New Rules, and Tristan Harris's Guide to Humane Technology
Episode Date: September 22, 2024. Tristan Harris, co-founder of the Center for Humane Technology and ex-design ethicist at Google, joins Offline to chat about the attention economy, why tech execs don’t let their own kids on the apps, and how our AI arms race is one giant game of Jenga. But first! Jon and Max break down Instagram’s new sweeping changes for teen users—do they address child safety concerns? Why now? Will kids be able to outsmart the new rules? Then they turn to pet-obsessed Springfield, Ohio, which has been suffering through some of the most pestilent (and catchy) misinformation of this election cycle. To close it out, the guys break down North Carolina Lt. Governor Mark Robinson’s slew of scandals, and how Republicans are shamelessly endorsing him nonetheless. For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.
Transcript
I don't think we should be settling for slightly less harmful social media.
I want the version of sort of the Mr. Rogers, Sesame Street, consciously developmental,
you know, tech that's designed for kids.
I think that we shouldn't be happy until the parents of teenagers who work at these companies
happily give Instagram to their own children.
That is not currently the case.
And that's the easiest way to know whether something is good or not. The CEO of Lunchables Foods did not give
his own kids Lunchables. So that tells you everything you need to know.
I'm Jon Favreau. And I'm Max Fisher. And you just heard from this week's guest,
executive director and co-founder of the Center for Humane Technology, Tristan Harris.
Love that guy.
So he's come up quite a few times on this pod.
I first became familiar with him from watching the Social Dilemma, that 2020 documentary.
Sure.
It's a very offline doc.
It is.
In which Tristan, who's a former design ethicist at Google, outlines the tech industry's three main goals,
to keep you scrolling, to keep you coming back,
and to make as much money as possible.
Simple, right?
The social dilemma cemented Tristan
as one of Silicon Valley's most outspoken critics.
Since then, he's been a vocal critic of social media,
the internet, most recently AI,
where he recently talked to Oprah.
Heard of her?
Okay.
Yeah, for an ABC special about AI.
It was the offline booking that got him on Oprah.
That's what it was.
Yeah, she knew.
She wanted to get him first.
So I wanted to finally bring him on to chat about the state of the tech industry
and ask him if we should still be afraid of AI.
Turns out we should.
Sorry for the spoilers.
It was a great conversation.
Excited for everyone to hear.
But before that,
welcome back, Max.
Thanks, pal.
How was your trip?
Anything happen?
Well, I caused some news.
I did get engaged to Julia.
Woo!
The first Crooked engagement,
I think.
Yes!
Yes, it is the first Crooked engagement.
I don't know if I've told you this yet,
but when I proposed,
she was wearing Crooked merch.
So it technically was a company event, and I will be billing accounting for it.
That is, I'm so
happy for you guys. Thanks, man.
And it was in New England you did it?
We were in Cape Cod. I mean, even better.
I know, I know. It's my dream.
Cape Cod was beautiful. Well, Julia is a Masshole.
She's a big Masshole, so I knew that
she would love that. That would lull her into a false
sense of security. The next day we
went to Boston. We weren't planning on getting a ring, but just like wandered into this old jewelry store.
And there was this lovely old guy behind the counter who I swear was an extra from The Departed.
You have never met a more Boston guy in your life.
Until you go to the next store.
That's true.
Really had like off-duty police detective written all over him.
But just like knew a lot about rings
and various gemstones
and was a lovely guy
and so it was a very
Massachusetts engagement.
Yeah.
Isn't there a movie
about that?
It's Departed meets
what's the one with
I mean The Departed
is a romance.
What's the gem
Hidden Gem?
No, what was the one
Uncut Gems?
Uncut Gems.
Oh, come on, man.
That's based in New York.
It's not a Boston movie.
That's why I was like, if you made Uncut Gems, if you Bostonized Uncut Gems.
By way of love it or leave it.
That's what I'm thinking.
Yeah.
Well, congratulations.
Thanks, man.
Very happy for you guys.
And it's great to have you back.
We have a lot of news to cover today.
We do.
Including the extremely online conspiracy theory about Haitian immigrants eating pets
that has dominated the campaign and the news. Also, the Lieutenant Governor of North Carolina, who's running to be governor, the Republican nominee. It's not all he's running for. He's got like an OG internet scandal. I kind of love the throwback.
Message boards from the 2000s. I have been waiting years for a message board drama political scandal, and it's finally here.
It's here.
It's here.
Okay, but first, we're going to talk about Instagram.
This week, they announced sweeping changes to the way teenagers use the app. The changes make teen accounts private by default, place their accounts into more restrictive content settings,
and show teens a new daily limit prompt after they used Instagram for 60 minutes.
I'm sure that's going to stop them.
Users under the age of 16 will need a parent's permission to change any of these built-in protections.
What do you think?
Real progress or an attempt to deflect criticism and impending legislation and policy changes?
Yeah, there is some real stuff here.
But I think for the most part, this is mostly, like you said, meant to preempt legislation and regulation that is already coming down like this.
What they've rolled out is very conspicuously just a like funhouse mirror, super softened version of this law that New York State passed
a couple of months ago. The law in New York State was going to require under 18s, so instead of
under 16s, to see non-algorithmic feeds unless they had parental approval, barred notification
between midnight and 6am. So again, sounds very familiar to what Instagram is doing.
But the difference, or one of the big differences was the New York state law put the onus on the social media companies to verify that and would impose a $5,000 fine.
And under Instagram's version or Meta's version, rather, the under 16 year old has to proactively identify themselves as under 16, which is just like the easiest get around in the entire world.
And if they don't do that, then the social media
company can say, well, we tried.
We could. Apparently they said that
in the future they're going to use
Meta has said this, that they're going to use AI
to try to verify whether
the teenager is in fact
a teenager. Really? It's like going to ask
them a series of prompts and see if their
knowledge of 1990s TV shows holds up.
They're going to do that.
There's also, they're supposed to, like, the other concern is because there's supposed to be a parent that when you sign up as a teenager, you're supposed to give them your
parent's name and email address and stuff like that.
And then the parent gets some sort of overview accounting.
Right, but they could always make up the parent's email address.
There's a, or you could just not say that you're under 16. You could just, when you log into it, I mean, it's like logging
into any website where it says like, you know, like, I don't know if you go to like a winery
website and it's like, are you under 21? And you just click whatever, you know, so I was born in
1911. Fine. Good. Yeah. But then I wonder, like, if age verification is at the heart of this.
Yeah.
If you had a company who wanted to do the right thing here or say you had like a legislation passed, right?
Sure.
What is the right design for this?
So this is part of what New York State is figuring out because they passed this law in June and they have another year where they're going to figure out basically the guidelines for what they are going to do to require the social media companies to, like you're saying, proactively confirm that someone is under 18. There are versions of this. Like if you go online to any government website
and you want to like sign up for a passport renewal, there's a thing that you do where you
just like hold up your ID next to your face. It takes a picture of it. It goes to a call center
somewhere in the world. Someone looks at it and just clicks a yes button.
Like these are solvable problems.
Like, yes, doing a real identity verification is expensive.
But if the fucking U.S. State Department can afford it, Meta can afford it.
Let me read this out to you.
Meta's revenue for 2023 was $135 billion, which is a 16 percent increase from the year before that.
They can afford this. They just don't want to do it, because it would cut down usage by kids, because that's what all the regulations are deliberately designed to do, and they don't want to do that. And right now, Meta is in an absolute dogfight with TikTok for capturing the youngest generation of users, who are the most aggressive and most frequent users of social media. And also, as we learned from the cigarette companies, you got to hook them young.
What about Apple saying that like you can't like they know how old you are.
Right.
When you get a phone.
Right.
I mean, yes, we think that's another way.
Maybe no phones in school and class when we talked about that.
But if you are underage, under 16, maybe 16 or under, and you have an iPhone, you shouldn't
be able to put Instagram on it.
Right.
Or Twitter, right?
Or any of these.
Yeah.
Now, I know that's probably extreme.
But if I was doing legislation, I wonder why that legislation has not been.
That seems like that could be effective legislation.
I mean, that's going to be the next step is how do you actually require age verification?
Because you're right.
That's also a way that you can do it.
Because people sometimes raise privacy concerns.
Right.
When you have a phone number, anytime you do two-factor authentication, they send you a text message.
Your phone has, because you had to use a credit card to buy it, that has your date of birth associated with it.
So Apple, if you're buying it through or if you're downloading an app through your phone, can actually look at that.
So there is a way for them to actually check all of this. But I just all of this is just about
trying to limit the ability of governments to regulate social media access for kids,
because they know this is coming. They've known for years it's coming. And it's a huge threat to
their business model. And they fight this harder than they fight absolutely
anything else. The Trump stuff, all of the antitrust stuff, like they fight that. But the
tenacity with which they fight anything about kids, like all the stories that I ever reported
on social media abuses, I never got the kind of pushback that I did when I reported stories about
online child sexual exploitation.
I mean, I reported a lot of stories before that that were like bad for the social media
companies, political influence, like literally causing genocides.
And they would fight a little bit.
But when I had these stories about like kids on social media being sexually exploited because
of the way the apps are set up, I am telling you like.
Why do you think that is?
Lawyers coming after you, like threats.
My boss's boss is getting phone calls. And I think
the reason is just because they knew that if there is a place where they are really vulnerable
from legislation and from regulation, it's from the government saying, okay, under 18,
you can't use it anymore. Or you can't use an algorithmic feed, which is what's in the New York
law and is what is maybe going to be in this California law. That's a copy of the New York law.
That's a real thing that can happen and could cost them tens of billions of dollars.
Yeah. So it's money.
It's money.
Absolutely. It's money. Of course it's money.
Money, obviously it's money, but it's like, this is where they're making a ton of money.
It's where they're, and it's where the future of their business is too. They're thinking about 10,
20 years from now, if they don't get the youngest consumers, are they not going to get addicted to
social media? Again, it's just like cigarette companies. And I think this is like, I think we
kind of miss that the companies have the capability to do this fucked up stuff because we think of
tech companies as like kind of in our social cohort. But if we were talking about literally
any other industry, of course, we would assume that they're going to do any dirty tricks or
hijinks they can come up with to avoid regulation. It's going to cost them this kind of money. In my interview with Tristan, he mentions that he's like, look, the optimal product
for kids is one that the people who work at these tech companies would want on their
kid's phone.
Right.
He goes, and right now, he goes, because I talk to people, he's like, a lot of them
don't want to put Instagram on their kid's phone.
Because they know it's terrible for them.
And there's so much research that shows that it's so much more addictive if you're a kid
it's so much more emotionally and psychologically harmful. Of course it's terrible for them. All right.
Let's talk about the pets of Springfield, Ohio. Are the pets too online? The pets, yeah. For the past two weeks, the Republican nominees for president and vice president of the United States have been spreading a completely false rumor that Haitian immigrants in Springfield have been eating people's dogs and cats.
We have covered this at length on Pod Save America, obviously.
But I did want to spend some time talking about it here because it's one of those stories that started as viral disinformation.
Right.
Became a topic of a presidential debate
seen by tens of millions of people,
and then ended up terrorizing both an entire city,
but also entertaining people on TikTok.
Which is just...
That's our new reality.
That is, as you say,
like the perfect encapsulation
of everything we're dealing with right now,
thanks to the phones
and the social media platforms. It's like that Ursula K. Le Guin story about the paradise where some child somewhere has to suffer. There's always, in America, there has to be some small town that's being terrorized by social media rumors so the rest of us can have funny videos to laugh at.
So dark. What are your thoughts on this uniquely American online mess?
So, do we... maybe we feel like people at this point know how the rumor spread, or do you think we want to tell people the actual, like, genesis?
We could do a quick summary.
Let me do a quick – because it's – I do think this is instructive because this is like the cross between real world and online happens a lot and is like so much more pernicious and I think so much more common
than what we thought was going to happen, which is like AI spreading deep fakes. Okay, so the
context is that there were a small number of Haitian immigrants who settled in this town in
Ohio over several years. It was completely fine. But of course, there was some racist backlash.
There was a woman in this town, Springfield, Ohio, who told her neighbor that she had heard
a secondhand rumor
about someone who had lost their cat and later discovered it being butchered by a nearby house
of Haitian immigrants. It was completely untrue. There's nothing to it. That neighbor posted about
it on a private Facebook group called Springfield, Ohio Crime and Information, which you know is just
solid information on that group. That's your best source. That's where I get all my news.
And the neighbor who posted this now fourth-hand story
embellished it with something she later admitted that she just made up.
She said, I've been told they are doing this to dogs.
They have been doing it at Snyder Park with the ducks and geese,
as I was told by rangers and police.
Then a small Twitter account tweeted a screenshot of this post from the group
and then a big Twitter account, End Wokeness.
Jon, do you follow a lot of posts from End Wokeness?
Unfortunately, I do now, because it's like Elon Musk's favorite account to post, and also Trump.
Like, there's a lot.
End Wokeness is all over the place now.
It's like another libs of TikTok.
That's a new wing of the Republican Party.
It's fucking terrible.
Okay, so End Wokeness QT'd it.
Trump picked it up immediately after that, I'm sure.
End Wokeness has like 3 million followers, by the way.
Yes, right.
And then there was separately a photo went viral of a black guy carrying a dead goose,
but it was later reported that, number one, this guy was neither an immigrant nor Haitian.
Number two, the goose had been hit by a truck.
And number three, that wasn't even in Springfield.
Probably driven by RFK Jr.
Driven by the whale, actually.
Okay, so I think there are several lessons from this.
This kind of story, I think, is so much more common than we realize.
The thing bouncing between real life rumors.
This happens, then you get your presidential candidate on a debate stage saying they're eating
the dogs, they're eating the cats. And then it becomes truth.
It becomes hard truth. And that
becomes verification for people.
It's like, oh, well, I heard a rumor and then I saw
a post, or I saw a post and then I heard a rumor
so it's like, it's in two places so it must be
true. These
local community watch Facebook
groups, they're fucking worse
than 8chan.
They are really –
Oh, absolutely.
I've never encountered them.
So I have reported like a bunch of stories about how these groups, they exist everywhere in the world, every small town.
And I cannot tell you how many times they will spin up basically the exact same two rumors, which are immigrants are stealing our kids or immigrants are eating
our pets.
Literally, those two rumors pop up all the time because it's just like you get you get
a hundred people together in a Facebook group.
You prime them with like we're on the watch for like threats to our family and our kids
and our community.
People who spend way too much time online and they just start like posting crazy rumors
and getting each other worked up.
And it's always this.
This is what Nextdoor has become as well.
Right, exactly.
It's Nextdoor plus Facebook like private group.
Which by the way,
I don't know if you listened to my interview
with Robert Putnam,
but at one point he talked about Nextdoor
as like,
because I think one of his students developed it,
as like a potential for like local people
getting together.
And I'm sure,
and it's funny because I didn't get into it with them,
but I was like, I'm sure that was the intention starting it.
But my experience with like seeing what's happened on Nextdoor is not great.
Which is just an example of how these things start with the best intentions.
What an incredible irony that you start an app literally to end the social isolation
caused by the internet and just make it so much worse.
Oh, there's also there's like this is like also a perfect rumor because there's all this research that shows that the most cognitively attractive misinformation for anybody is something that is negative, that elicits outrage or disgust, which this definitely does because it's people eating pets and that denigrates social outgroups and being like ridiculous on its face is totally fine because if you hit those triggers it just
like breezes past all of your usual cognitive barriers that are supposed to check for like
is this plausible at all or not? So Trump brings this up at the debate, which I knew would happen. Yeah. Really? Before the debate, I said to my fellow Pod Save America hosts, how long into the debate until
Trump brings up the cats and dogs?
Because it was out there.
J.D. Vance said something about it.
They were going to talk about immigration at the debate.
It was primed.
Right.
And I mean, it wasn't like any wonderful prediction because clearly the moderators were ready for it as well.
David Muir then fact checked because he, so they were ready for it.
So then Trump says it, it's like you said, absurd on its face.
Everyone thinks it's ridiculous.
This became a TikTok sensation.
A remixed song of Donald Trump saying this.
Let's listen to the clip.
They're eating the dogs.
They're eating the cats.
Eat the cat.
Eat, eat the cat.
They're eating the dogs.
They're eating the cats.
Eat the cat.
Eat, eat the cat.
That's a great beat, honestly.
I have had the challenge of being horrified and disgusted by the story and the effects.
And also not being able to get that song out of my head.
Walking around my house the day after the debate saying, eat the dog.
And Charlie heard it.
And Emily's singing.
It's like, it's so, because it's a little catchy, but it's horrifying.
What do you think? What's the TikTok dance to go with the eating the cats and dogs?
Oh, I've seen them.
Are people really dancing to them?
Yeah.
Austin's like, yeah.
Oh, my TikTok feed now is, the algorithm is giving me the dances.
And it's also, it's crossed over because there's some like MAGA accounts pushing it because they think it's cool.
That's so funny.
It's like counter-cultural Trump, you know?
And then there's liberal accounts saying, oh, this guy's fucking crazy.
Right, right.
This is what's happening.
Right before we were recording, I was telling my friend from college, sent it to our college friend text chain this morning.
I was like, I can't get this out of my head.
It's so funny.
And you and I were both like, I can't believe it's taken him this long.
We've been talking about this for weeks now.
I know.
And I was 48 hours behind it because I had no idea it was going to be on the way because I was out getting engaged, which goes to show you never get engaged because you're going to miss important.
I didn't know that it was going to come.
Also, I'm so sorry to call you out here.
You keep doing something that I realize you have been doing for months. You keep tapping your phone to look at the screen, and it's upside down. The screen isn't even turned up. I don't think you even realize you do it.
Because I was getting notifications, and I'm like, I gotta turn... why is my phone here recording a podcast? Turn the phone around. We're going to start putting the phones out of the room. That's a good idea. I don't know why I brought it in.
Anyway, so I think there's actually something really, really kind of cool about this.
Not just that it's like we're making fun of Trump, which is part of the whole Tim Walz, like, make him really small.
Although it's definitely that.
Like, we maybe accidentally stumbled onto the first effective counter-misinformation strategy ever.
Because every strategy that has been like
fact checks like famously don't work because people don't see the fact checks because they
never go as viral as the initial misinformation or if you challenge people that never works
because then they just dig in like is turning it into a funny meme that you can laugh at that will
go viral on its own, that gives you a way to, like, roll your eyes at it and reject it. Like, actually, finally, the solution to this.
So I think if you're talking about it from a national politics standpoint, yes. Donald Trump says it, even Kamala Harris laughing at him while he says it, was good. And there's also plenty of TikToks of people watching the debate, like live debate reactions from crowds, of him saying that. And so there's people in bars, there's people in some people's homes, and everyone's laughing. And I think that is incredibly effective in terms of, you know, reminding people that Donald Trump is crazy and showing people that everyone else thinks he's crazy. And that it's untrue. I do
worry about, and the story,
the longer it's gone on, the angrier
it's made me. Because
a lot of outlets
have now started reporting from Springfield,
and done a lot of reporting in Springfield,
Ohio, and there's all these
bomb threats that have been called
into schools and
hospitals.
And, you know, now fucking J.D. Vance is like, well, the governor said that the bomb threats were actually from foreign countries.
Right.
So this is like a, you know, foreign interference.
And it's like, OK, it's fine.
Right.
I was going to say, like, the kids still have to evacuate the school because of a rumor
because of a bomb threat.
How do you think the foreign governments or whoever, foreign bad actors
figured this out? Because of you and your
fucking idiot that's running for president.
We really are the easiest country to troll,
maybe, in history. And it's also, like,
so, you know, Springfield
has all this immigration.
All these Haitian immigrants came over the last several years.
By all accounts,
it has improved the economy. People love it.
They found all these jobs.
They're working hard. They've escaped from...
So, from the standpoint
of the Haitian immigrants, they have
left this war-torn,
violent country that basically
has no government right now. It's just like
gangs wandering around.
Very bad. They want to take
their children somewhere else, their family somewhere else, so they
come to the United States.
They hear there's jobs in Ohio because other Haitian immigrants have moved there.
They go to Springfield.
And then you've got people in the community who a lot of them are hiring Haitian immigrants at their companies.
Right.
And every time there are newcomers and immigrants that come to a community like this, there's going to be tensions, right? And in this community, there's tensions around language barriers in schools and Haitians
driving different than we drive, right?
Because they, and so, you know, there's all this sort of friction.
But then you have these, this online viral disinformation campaign and these politicians, these Republican politicians, sort of amplifying this.
And now it comes back to the town and it's hurting.
Obviously, it's terrorizing the Haitian immigrants in the town.
But it's also hurting the people of Springfield who Donald Trump and J.D. Vance are purporting to want to help.
And these people could figure this out on their own.
In fact, they were.
They were going to town meetings.
And the original letter that sparked this,
that brought it to J.D. Vance's attention,
is the town manager in Springfield who said,
we're having issues with housing, right?
Housing prices have gone up because basically what happens is
some Haitian immigrants who come there,
like five or six of them will live in one house,
and so they can afford higher rent.
And they're like, they didn't say in the letter,
we need help with immigration.
They said, we need help with housing, right?
And so this is a community trying to work to figure this out together.
And now they have become this sort of national,
this target and this national symbol, And it is just fucking disgusting.
But it's also what can happen with,
but you know,
we always say Twitter isn't real life.
Twitter is real life.
Yeah.
Online stuff does come back and become a real life challenge to deal with.
There is something really perfect about Trump and Vance,
just like absolutely bulldozing this community ostensibly, quote unquote, to help them.
But in a way that is clearly just meant to create more conflict and strife, not just in this town, but in other towns and encouraging people in all sorts of places to be like be suspicious and skeptical of and hostile toward neighbors who don't look like you.
Yes.
And, like, the social media of it really makes me think about this story from India in 2013. It was the very first thing that ever made me aware of, like, maybe social media is harmful. It's basically the first big instance of social media driving real-world violence. It's a very complicated story we won't get into, but there's this part of India where there's a lot of diversity, and a rumor online started, with fake photos, that members of one minority group were violently attacking.
Oh, I remember reading this in your book.
Oh, okay.
Yeah.
I was like, I knew I've heard this before.
And it led to, and then it got picked up by like basically bad actors in a completely different part of the country who seized on this like very small incident to try to turn people against each other, which led to riots.
And again, a totally different part of the country that led to 300,000 people
being pushed out of their homes by violence.
So nobody was helped by it.
And it's just like something about social media that abets this.
And it was really only a matter of time until first in India with Narendra Modi.
And now, obviously, here with Trump and Vance, people learned how to use this to their advantage
in just the worst and most malicious possible way. And one final anecdote on this.
This woman who said that she lost her cat and filed a police report that she lost her
cat and thought that maybe Haitian immigrants had stolen her cat.
And of course, End Wokeness.
The account promotes this.
And they're like, the media owes everyone an apology.
The Democrats... There really was... So it turns out that her cat, his name is Aunt Sassy, I think. Oh, it's the cat. Yeah. And the cat was located in her basement the whole time. She found the cat. Come on. Really? They had footage. This woman is a Trump supporter. Yeah. And she felt so bad that she went over to her Haitian neighbor's home.
Really?
And with a relative and a translator app, apologized, with her Trump t-shirt on, for doing this.
I know.
Wow.
I know.
That is just America right there.
It is America.
Just all the good and the bad just really wrapped in.
Well, and it also shows you like without, when you're in person and these are your neighbors and this is your community, like you can still have different views and make mistakes and say horrible things, but like you can figure it out.
Yeah.
Not so much when you have politicians and algorithms involved. This is the other thing I was going to say is that something that I feel like I, from like reporting on stories like this, when people are confronted in person with like, hey,
this thing that you spread turns out to be a racist myth or a piece of disinformation that's
causing harm to people, like eight times out of 10, when people are confronted like that,
they're horrified and people don't want to do that. They only want to do it when they're online,
when they're in front of their phones, and there's so much incentivizing them, encouraging them to do it. But you take them away from the device and they don't want to. I actually interviewed this cop in Germany, of all places, who was trying to track down, again, it was the exact same thing: local community policing Facebook groups, community safety Facebook groups, that were spreading racist rumors that were leading to real-world violence. And he would just track down who had spread the rumors and knock on their door and go to their
house and say this post led an Afghan immigrant to be attacked. And they were always horrified,
he said. They would always take it right down. He said the social media companies were never
helpful. He would call them. They wouldn't return his calls, which is the whole thing.
In other creepy Republican news, CNN has broken an explosive story that North Carolina Lieutenant Governor Mark Robinson,
who's currently running for governor, is a Nazi-loving OG porn forum troll who spent the late 2000s on a site called Nude Africa,
writing sex fantasies involving transgender porn and his sister-in-law. CNN's Andrew Kaczynski discovered Robinson's alias because he used it elsewhere,
along with his same birthday and email address.
Great OPSEC.
And mentioned Greensboro in all of them.
Just incredible stuff.
And even on the fucking, I don't even want to say it, Nude Africa.
He kept referencing like, oh yeah, here in Greensboro.
By the way, my birthday is 1968.
I mean. What is wrong with this guy?
Might as well just put his fucking picture up.
Robinson referred to himself as a black Nazi in one post.
Also said, quote, slavery is not bad.
Some people need to be slaves.
I wish they would bring it back.
I would certainly buy a few.
Wait, you missed the best part of the quote.
I wish they would bring it (slavery) back.
Just in case anyone...
He just wanted to make sure, in case an oppo researcher ever stumbled on that quote years later, that he would not have...
No, you want to be really clear on this.
He also admitted to spying on college girls showering when he was a teenager.
Many people thought these revelations might cause him to drop out of the race at the last minute.
And some Republicans did call on him to do so.
But alas, he's sticking around, denying the claims.
North Carolina Republican Party rallying to his side.
Donald Trump, who has endorsed him, called him Martin Luther King times two
and told everyone we should cherish Mark Robinson like a fine wine.
How's that wine aging these days, Don?
He is not appearing with him at a rally now.
Apparently, Trump's going to be in North Carolina
and Robinson's no longer going to appear at the rally,
but is not unendorsing or anything like that.
Of course.
Hasn't said anything like that.
How about this old school internet web forum scandal
that you've been pining for for so long?
Look, everything that is bad on the internet at some point traces back to web forum culture. We talk a lot about algorithms, we talk a lot about infinite scroll, all the tech developments, the major platforms, but there is something about an OG web forum. I don't know if you were ever on any back in the day, but they really bring out the worst in people. Sites like Something Awful, like 4chan, they're really the progenitors for internet culture writ large. And I just love that we finally got our web forum addict shitposter, which I'm sure JD Vance was too. I'm waiting for the day for the JD Vance web forum logs to leak.
Oh yeah, for sure. Well, he was like a blogger too.
That's true. Right. So, you know, he was deep in the comment section of somebody's blog.
Deep in the comment section.
But look, Democratic Party oppo researchers: racist porn site message boards. This has got to be your prime focus from here on out. The number of Republican candidates who are waiting to be taken down.
It's wild to me that these posts are from 2011.
These have been out there for 13 years,
which I credit to the other posters at Nude Africa
for not giving up their boy.
That's loyalty.
That's standing with your soldier.
Do you think Nude Africa is still cooking?
I don't know, but I know that if it is, after the news broke last night about Mark Robinson, he breathed a big sigh of relief when he was logging back on. He was like, it's over now.
It's also very, like, it's a very older person scandal.
That's true.
Because, like, millennial and younger, they have better OPSEC, you know, than the old guy on the message board who's just being like, and this is my birthday, in Greensboro.
Did you see also a part of the way they got him was he kept using this weird malapropism.
What was it?
It was a frog's fat ass.
I missed that.
And he uses that in his official Twitter account, too.
He's tweeting about the frog's fat ass and he's posting it.
Do you know that's how they got the Unabomber?
Well, it's interesting
you said that
because someone on Twitter
made this point
that I thought was right
which is like,
we're used to calling
a lot of Republican politicians
like, oh, it seems like
an internet troll
or someone from
the comments section
became a politician.
But it's even worse than that.
This is like the profile
of a school shooter.
Yeah.
You know?
That's where this party has been going.
When you look at some of these people who have been in mass shootings and stuff like that,
and you go through their website, this is the kind of shit that you see.
Well, speaking of, the story that we bumped to make room for Mark Robinson
was yet another assassination attempt against Donald Trump by someone.
Didn't make it.
Didn't make enough news this week.
By another like extremely internet broken brain.
Right.
Super online.
But like you're saying, it's someone who had fallen down this rabbit hole, who has all the hallmarks that we're really good at recognizing now: school shooter politics, disaffected, weird, all over the place. Like, yeah, you're a deep-into-male-grievance, Tulsi Gabbard-loving Trump voter, then you turn into a Biden voter, then you're also, like, into Vivek and Nikki Haley. And, you know, Trump's trying to use it to say he's a Democrat, Democrats are pushing back like, oh, he's Trump. But it's sort of what we talked about a couple episodes ago, which is this sort of crank politics.
Yes.
Right. And you spend enough time on the internet
and you start getting radicalized
in all these ways
that don't necessarily fit
into neat political categories.
Right.
It's just crank.
It's just weird fucking people.
I really think that you are onto something
with your theory
that the big political divide now
is cranks versus non-cranks.
And you see it so much with who's sorting into the... Even, like you're saying, the top-level endorsements, like Tulsi, Vivek, RFK. It's the cranks. And then you have the non-cranks in the Republican Party, both of them, who are endorsing Kamala.
Well, that's why you have, like, you know, Thom Tillis, Republican Senator from North Carolina, who's like, well, we have to focus on... we gotta win the presidency still, we gotta focus on the Senate, you know. And then Mike DeWine, Republican Governor of Ohio, just had a New York Times op-ed today defending Springfield and the Haitian immigrant community, even as he says in the op-ed that he's a supporter of Donald Trump and JD Vance. He's, like, so upset, but he can't bring himself to not be a supporter of them, even though basically the whole op-ed is attacking them.
I really feel like 2017 was the line, like, we believe the good Trump, the this-is-the-day-he-became-president Trump, is going to finally come out.
Unbelievable.
It's just with these guys,
it's just, it's always Susan Collins
voting for Brett Kavanaugh saying,
I believe that he's going to be a good guy in the end.
Yeah, well, not so much.
They're eating the dogs and they're eating the cats.
All right.
Some quick housekeeping before the break.
First, we got a new podcast, Empire City.
That's so good.
The NYPD is in the headlines again, stirring renewed outrage and questions about police conduct.
But what if I told you the NYPD is working exactly the way it was always designed to?
Empire City, the untold origin story of the NYPD, takes you on a journey to uncover the hidden history of the largest police force in the world, from its roots in slavery to rival police gangs battling across the city
to everyday people who resisted every step of the way. As our society debates where policing is
going, Empire City will tell you where the police came from. From Wondery, Crooked Media, and Push
Black, follow Empire City wherever you get your podcasts, binge all episodes early and ad-free
by joining Wondery Plus in the Wondery app or on Apple Podcasts.
Also, in case you missed it, politics reporter Jane Koston has joined Crooked as the new host of our daily news pod, What A Day.
Woo.
Woo.
We love Jane.
What A Day still offers the same quick listen, but now with even more curated headlines, in-depth reporting, and analysis about the stories that shape how you live.
And we even got Jane to move to L.A.,
just like we did to you.
Come on.
It's great out here.
Is it?
Well, look at you now.
I know.
Especially this time of year,
it's gorgeous.
It's great.
Okay, so tune in Monday to Saturday
to get the top news and stories
that matter most,
all in just 20 minutes.
Oh, Max can riff about
hosting Saturday episode.
Wait.
I was not paying attention whatsoever.
I was about to either.
Little brackets and italics:
Max can riff about hosting Saturday ep.
Yes.
How we got here every Saturday in the What A Day feed
With Aaron Ryan and we have a great
Episode this week, so please tune in
What a day
What a day, what a Saturday
Search for What A Day on YouTube now and don't forget to subscribe so you never miss an episode.
Tristan Harris, welcome to Offline.
Good to be with you, Jon.
I can't believe we haven't talked yet, considering how often your name and work have come up on this podcast.
This podcast might not exist if you hadn't started sounding the alarm more than a decade ago about technology hijacking our minds and attention.
These days, I know you're warning about the dangers of generative AI, which I want to get to.
But just a step back for listeners who might not
be familiar with your story, you were a design ethicist at Google when you left to co-found
an organization called Time Well Spent, which is now known as the Center for Humane Technology.
Do you remember the moment at Google when you first realized that the design of these platforms and technologies
could be a huge problem?
Yeah, well, first, it's great to be with you.
I'm super excited to have this conversation.
There was quite a long moment because at Stanford, where I studied computer science, I specialized
in a program called Human-Computer Interaction.
And I studied with a lot of people.
It's sort of the intersection, you know,
between behavioral economics, you know,
Daniel Kahneman's work, System 1, Thinking, Fast and Slow.
You know, the way that you set up the choices in a menu
determines the kinds of choices people make.
As a magician, I was interested
in the psychological vulnerabilities of the human mind
and the fact that if you sort of sequence things
in a certain way or phrase things in a certain way,
you get different results. You know, I studied neuro-linguistic programming. And, you know, in college, there was a class called the Stanford Persuasive Technology Lab class, where the co-founders of Instagram, and, you know, early people who joined Facebook and Twitter, all studied what was called persuasive technology, which is how do
you design technology in a way
that, you know, causes people to fill out a form? Or let's say you're going to a gym. You want people to achieve a goal they have in their mind, like you want to go to the gym, but people fall off the treadmill, they don't come back. So is there a way we can design technology persuasively, so it'd be easier for people to come back and do the things that you might want them to do? Fill out a form on LinkedIn: your profile is 75% complete, don't you want to fill out the rest of your profile so it's 100% complete? It's called the nearness heuristic. So I think what that did is it sort of put on this pair of goggles that I've been wearing my whole career, which is how technology design influences people's choices. And then there
was, to your question, kind of a moment in 2013 where I just felt in my body that there was something wrong with the way all of this was trending.
This is 2013.
I had, you know, Google Plus was a social network that was getting started.
Google was participating in the social media race.
I saw Path, a social network most people don't know anymore, where they added a notification.
So if you look at someone else's photo, it would send you a notification that someone looked at your
photo. It was like we're getting to that point in the notification arms race where people were
designing for growth and engagement. And I actually, you know, in our work, we often reference
the Charlie Munger quote, if you show me the incentive, I will show you the outcome. And it was
starting to get clear to me in 2013, that I could
kind of see the next 10 years, if this entire tech industry, where you have multi trillion dollar
market cap companies, who over the next 10 years, collectively, are going to be racing for human
attention, attention and engagement, how frequently do you come back? How long do you stay time on
site, not time well spent. If you optimize for those metrics,
I almost had what my friends called pre-TSD for the future. I was sort of traumatized by feeling into it: we're going to be more addicted, distracted, with shortened attention spans. You know, we're going
to optimize for personalized news feeds because those are better at getting people's attention.
Therefore, you're going to break shared reality. I do not claim that we saw the full spectrum of all of those risks. But the general sort of blur your
eyes, you could see the destination at the end of this incentive. And it's a weakened society,
a weakened social fabric. And I, you know, with some friends said, How are we going to change all
this? And I was just a single individual inside of Google who made that presentation that was part of the film, The Social Dilemma.
It was a viral presentation.
I sent it to 15 people inside of Google and it went viral.
There was, you know, 500 people looking at it the next day.
You know, the top right-hand corner of Google Slides shows you the number of simultaneous viewers.
And so it clearly struck a nerve.
I tried to change Google from the inside for a few years.
That didn't work.
They were very generous in hosting me as I thought I could make from the inside for a few years. That didn't work. They were very generous in hosting me
as I thought I could make some change inside there.
But then I left to start this independent organization,
the Center for Humane Technology.
And we have been trying to not just warn,
but really offer interventions and solutions
for what will change those incentives
so that we can change that outcome.
So I'm a political nerd.
I was a speechwriter.
I think a lot about persuasion. I've often wondered if one reason people have been so slow to recognize these dangers is that we don't like to admit that we can be persuaded to do things and believe things and act in certain ways without knowing it.
What do you think about that? Is that right?
Yeah, that's a great point.
There's early on in this work,
when talking about technology being persuasive,
it violates people's sense of autonomy and agency.
Because they think, well, maybe some people
are going to be persuaded, but not me.
And to be clear, we're not talking about
persuading you to completely change your mind
from things that you hold. We're just talking about the ways that you phrase things. You know,
I, yeah, I know a little bit about your work. And, you know, I'm a big fan of people like George Lakoff, who wrote a book called Metaphors We Live By. An example I like is how the way you frame language, you know, controls how people see reality. So for example, invoking the nation as a family: we don't send
our sons and daughters to war. We don't want those missiles in our family's backyard, you know,
the motherland or the fatherland. By invoking the nation as a family, it's like the persuasion is
happening way upstream before we even really get to it. If you frame things as illegal aliens versus
undocumented workers, you have very different results about how people respond.
And, you know, there are politicians who nefariously and maliciously make use of this.
And then there's others who try to be aware of how this is shaping reality and say, how
do we be as effective as possible in trying to create the most pro-social outcome?
That's the healthiest result for, I think, Western societies.
And that's what informs our work.
But I think the question you're pointing to is, you know, what is ethical persuasion? How do
you ethically shape people's choices when you are making a design choice that you can't not make?
There's no such thing as there not being a phone in your pocket. There's no such thing as there's
not some feed of information that's coming at you. So the question is, how do you ethically hold
that asymmetric stance? Well, I'd love to talk about that. I mean, you've been at this for
so long now. What design changes or new policies do you think have made the biggest positive impact
on our relationship with technology that you've seen?
Well, that's a great question. I'm not sure that I've seen that many, unfortunately. I mean,
I think if you look back at the last six years, in the social
media environment, it's broadly gotten worse. And that's because we haven't changed that incentive.
You know, TikTok has shortened attention spans even more with these short video reels, you know,
short viral video clips. It's important to note that, you know, we've been working at
the intersection of Silicon Valley.
We have people inside the tech companies we talk to.
We have an inside perspective.
We're not outside critics.
We're builders.
I started a small tech company.
So we have the respect of people inside major AI labs or social media companies.
And we've been trying to advocate for change from within for a long time.
And there were some changes that YouTube made to not preference shorter videos
or that Facebook made and Instagram made
to limit people's time on the app.
But of course, when TikTok comes along
and is willing to go lower
in that race to the bottom of the brainstem
by doing the short, bursty video clips,
you notice that YouTube introduces YouTube shorts.
You notice that Instagram introduces Instagram Reels.
This isn't about good people or bad people.
It's not about whether I like the CEO of YouTube or Facebook
or OpenAI, and whether Sam's a nice guy
and seems calm and relaxed.
Like it's just not about those things.
It's only about the incentives.
I will say that, you know,
things like Apple's screen time features
by even introducing the concept of screen time,
some of that came actually from our time well spent work and the pressure we put on Apple in the early
days. The do not disturb features, the bidirectional focus mode, so someone can be focused. And then
when you text them, you see, oh, you know, Jon's focused right now. And that creates a social permission space, where people can say, hey, I'm going to be focused for this weekend. And that's okay. And then if you really want to escalate the message, you can click notify anyway. That
actually came from a TED Talk I gave in 2013. So what I want people to take away from that is
there are different design choices, but to really get to that other design universe that I think is
an aspirational view of what technology could be, we have to change the incentive that is maximizing
attention and engagement.
Yeah, it does seem like you can make some progress
by just introducing more friction,
which is what some of the changes
you were just talking about do.
We had an offline challenge here last year
where we tried to reduce our screen time
and the screen time notifications on the iPhone
really did well shaming us for a while.
As did things like grayscale. That helped, and some of the others, yeah.
Right. Well, we actually, I remember popularizing the grayscale idea way back in 2017. And what was
cool about popularizing that idea is it was actually a little viral invitation for a conversation. So
anybody who had their phone
be grayscale, if the colors are not igniting your nervous system and getting you sort of
jacked up in just the excitement of what a red dot might mean, that helps a little bit.
But what it really does is it invites other people who look at your screen and say,
is your phone grayscale for some reason? And then it starts the whole conversation on,
yeah, there's this attention economy, the race to the bottom of the brainstem. And that's why we actually wanted to popularize
that. To be clear, I think there actually is a solution, something that could help that I could
mention if you'd like. My co-founder of the Center for Humane Technology was Aza Raskin,
or is Aza Raskin. And his father was Jef Raskin, who started the Macintosh project
at Apple in 1978. And actually, our name Center for Humane Technology comes from his father's
book called The Humane Interface, the idea that you could design technology in a way
that is considerate of human frailties and human vulnerabilities. And that's what informs our work.
Instead of just designing technology to extract off of human vulnerabilities and manipulate us, how do you design in a way that's considerate of, you know, has compassion for human vulnerabilities? And Aza was the inventor of infinite scroll. People have to remember that these are design choices made by human beings; someone had to invent that. And he invented it in 2006, I think. This is before social media. So before you throw your tomatoes at him, the estimate is that that invention wastes millions of human lifetimes every month or year or so. But, you know, he invented this feature originally in the era of blogging. So you finish a blog post, and then instead of saying, hey, I want to click on the homepage and then load some more options, it would just scroll and you could see, hey, there's the next blog post right there. Or you're on a Google search results page and you had to click on, hey, next page of results, and he said, well, that's stupid. Every time you make a user make a choice, as a designer, we see that as a failure, because you're invoking people's sense of consciousness and friction. Why should we create friction? So he invented this with good intentions, and he has been devastated to see the horrible impact on society. Because he's experienced that, he has thought a lot about how do you fix it. So he came up with a little solution that he actually
built for himself, which is basically a little latency manipulator.
So what it does,
it's based on the insight that Amazon found:
for every 100 milliseconds
their site loads more slowly,
they lose 1% of revenue.
This is a big insight about how much tech companies
invest in making their websites and their content load super fast and super zippy. Because if it's
just slightly slower, people won't use it, or they'll drop off. It's like being on an airplane,
right? And you're loading Twitter on an airplane, and it's loading, but it's not loading, and it's
loading, but it's not loading. You're like, okay, I'm going to just do something else.
That's when I get work done.
Exactly. So, to really fix the screen time problem, instead of Apple interrupting you with, you've spent this much time on an app, do you want to stop, you would do a different thing. You would have a local VPN on the phone, which basically says, as you're using Twitter, after you set a time limit, 10 minutes, after 10 minutes, just imagine your phone added between zero and 400 milliseconds of delay, randomly. So you're doing this to yourself, by the way. It's not so annoying that it's a seatbelt saying, hey, you have to stop, you're not going to use it anymore, and you just say, get out of my way, I want to keep going.
It just makes it just slow enough, just a little bit, just unnoticeably, that you end up using it a little bit less.
If Apple and Google were to implement this directly into phones, I think this would save people millions of hours around the world per day.
Because this is just such a, it's such a choke point. And there's
a bigger conversation we could have about how this insight leads you to an interesting toy model of
what a policy solution could look like. But we can get there later if you'd like.
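The latency manipulator Tristan describes can be sketched in a few lines of code. Everything here is an illustrative assumption, not any real Apple or Google API: the 10-minute limit, the 0-400 ms range, and the function names are all invented to show the shape of the idea.

```python
import random
import time

# Rough sketch of the "latency manipulator" idea: past a self-chosen
# time limit on an app, every request gets a small random delay
# (0-400 ms), which nudges usage down without ever blocking it.
# All names and numbers here are illustrative, not a real API.

TIME_LIMIT_S = 10 * 60   # the user's own limit: 10 minutes
MAX_DELAY_MS = 400       # upper bound of the random delay

def added_delay_ms(seconds_used: int) -> int:
    """Extra latency to inject for the next request."""
    if seconds_used < TIME_LIMIT_S:
        return 0                             # under the limit: no friction
    return random.randint(0, MAX_DELAY_MS)   # over it: 0-400 ms, at random

def throttled_fetch(fetch, seconds_used: int):
    """Wrap a network call with the gentle, self-imposed delay."""
    time.sleep(added_delay_ms(seconds_used) / 1000.0)
    return fetch()
```

A local VPN sitting between the phone and the network could apply something like `added_delay_ms` to each request; the point is that the delay is random and barely noticeable, a nudge rather than a hard stop.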
No, I'd love to. I'd love to hear that. Because I was going to ask you,
what do you think the big changes or policies are that we should be pushing for?
Yeah, well, I mean, obviously, people in the social media debate about policy tend to focus
on content moderation and free speech versus censorship.
And then Facebook says, we're going to start the content oversight board.
We're going to have transparency.
We're going to have oversight.
We're going to have this sort of Supreme Court for violating content.
But what the companies are incentivized to do is create kind of Potemkin villages or like sort of fake forms of governance that are about the side issues that don't actually attack their business model.
Because by making the issue about social media about content, it sort of distracts people's attention from the core incentive, which is they have to maximize engagement. And if you really want to tackle engagement, imagine, and this is just a toy model, so don't think I'm proposing a policy solution, but Aza sort of realized, okay, that idea about slowing things down a little bit: what if we said we're going
to grade these different companies, TikTok, Facebook, Twitter, Instagram, on how much they're
contributing to the downgrading of our democracies
and our mental health and our attention. And they get different grades based on different things.
So for example, there's something called inability to disconnect, which is something
that's documented in psychology literature. It's how much do you feel like it's difficult for you
to disconnect from this app because of social pressures and things like that? So for example, Snapchat, which uses the streaks feature that
tries to get people hooked to posting every day, would rank an F in inability to disconnect because
it's actively designing to make it impossible for especially young people to disconnect by putting
this artificial number of a streak. You've texted this person 100 days in a row. Now you don't want
to lose it. You have to keep coming back to the app. So they would get an F. But YouTube, for example, would get like an
A. They don't actually actively design in a way that makes you feel social pressure for not coming
back to YouTube every single day. They do other things, autoplay, you know, etc, YouTube shorts,
but they don't do that. So imagine that each of these social media companies for all the various harms that we care about, shortening of attention spans,
inability to disconnect, mental health issues, increasing perception gaps in terms of driving
polarization. These would be sort of separate columns in a scorecard. And the scorecard would
be democratically decided by some panel of citizens. And you take basically an A to F rating and you turn that into a latency tax.
So just like we tax cigarettes
for making them more expensive,
this would say,
if Snapchat is doing a really bad job
with inability to disconnect,
imagine that an AT&T or Verizon
implemented a latency tax
on the traffic flowing to that app. They would announce
it six months before it's going to happen and say, hey, Snapchat, you can fix this problem. You can
not have this latency tax. But all you have to do is start to change your design so that people do
have an ability to disconnect. And the reason that something like this could work is it incentivizes
all of the social media companies to actually change all of their design features to reduce these harms. Now, I want to name that you have problems of government capture,
politicization of this. This is sort of conscious non-net neutrality. But I wanted to offer it as
like a toy model for something that would actually have the hope of changing the design incentives
of the tech companies. Does that make sense? Because
it would actually change their behavior. Like nothing else is going to change their behavior.
If you tax them a fine, it's just a cost of doing business.
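The scorecard-to-latency-tax toy model can be made concrete with a small sketch. Every grade, harm column, and millisecond figure below is invented for illustration; the episode only gives the general shape (citizen-panel grades, Snapchat flunking "inability to disconnect," YouTube doing well on that column).

```python
# Toy model of the "latency tax" scorecard: a panel grades each platform
# A-F on specific harms, and worse grades become more injected latency
# on that app's traffic. All values below are made-up examples.

GRADE_TO_DELAY_MS = {"A": 0, "B": 50, "C": 100, "D": 200, "F": 400}

def latency_tax_ms(scorecard: dict) -> int:
    """Average the per-harm grades into one added-latency figure (ms)."""
    delays = [GRADE_TO_DELAY_MS[grade] for grade in scorecard.values()]
    return sum(delays) // len(delays)

# Hypothetical scorecards, loosely following the episode's examples:
# Snapchat flunks "inability to disconnect" (streaks); YouTube does
# well on that column but worse on attention (autoplay, Shorts).
snapchat = {"inability_to_disconnect": "F", "attention_span": "D"}
youtube  = {"inability_to_disconnect": "A", "attention_span": "C"}
```

Under these invented grades, `latency_tax_ms(snapchat)` comes out to 300 ms versus 50 ms for `youtube`. The design point is the one Tristan makes: the only way off the tax is to change the design, not to pay a fine as a cost of doing business.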
Yeah. I was imagining how Elon Musk might respond to that for X.
I'm sure he would claim-
To your point about politicization, right? He would be-
Right. And this is one of the ironies of social media is it's driven up distrust and suspicion
that everything is politically motivated.
And this would only work
if it was done in a completely transparent,
democratic way,
where you had experts weigh in in meetings
that were all recorded
for public visibility and transparency
with transcripts,
in which you saw that the people
who were putting together an agenda of,
you know, what do we care about: addiction, shortening attention spans, mental health. That was all done in a totally
open, transparent way. So Elon Musk tweeting, Hey, you're just trying to come after me and say,
well, look, here's the transcript of the meeting where the citizens deliberated and they came up
with this solution. And then again, he has every incentive. He doesn't have to have the
latency tax. All he has to do is change the design to not drive up division and polarization.
Yeah, no, I agree.
That would be wonderful.
What do you think of Instagram's latest changes?
They did an overhaul of their privacy and safety features for teenagers.
We were just talking about that earlier.
Yeah, I have some notes here in front of me.
I'm not following the news every day,
so my understanding is this places users who are under 16 into private accounts.
Teens have to accept new followers in order for those followers to interact with them
or see their content.
There's new parental supervision tools that lets parents or guardians manage their Instagram settings and see who their
teens are messaging with. You know, I think that this is probably a good step in the right direction.
But what I'll say is, why has it taken like 10 years for this to happen? Yeah, I want people to
really think about that. Like, it's been so obvious for so long
how devastating these harms are for young people. And there has been denial of those harms. There
has been misinformation. The companies have said that this isn't a problem. They've minimized it.
They've said we're giving people what they want. And meanwhile, we know people who are inside of
the companies. And I know people who have met with the CEOs privately and the CEOs said, well, I would do something, but I can't stop TikTok from just continuing to ruthlessly go after the kids in that way.
And the reason that all this is happening is the key to actually unlocking some of these changes is changing the race dynamic,
the race between the companies for attention.
But I will say that I don't think we should be settling for slightly less harmful social
media.
I want the version of sort of the Mr. Rogers, Sesame Street, consciously developmental,
you know, tech that's
designed for kids. I think that we shouldn't be happy until the parents of teenagers who work at
these companies happily give Instagram to their own children. That is not currently the case.
And that's the easiest way to know whether something is good or not. The CEO of Lunchables
Foods did not give his own kids Lunchables.
So that tells you everything you need to know.
Yeah, it definitely seems like a Band-Aid solution to stave off further policy changes or regulation or legislation that might be passed.
Yeah.
So let's talk about generative AI, which you've been focused on lately. So there have been lots of different fears and concerns ranging from, you know, the robots are going to take our jobs to
the robots are going to destroy humanity. How would you summarize your biggest concerns with AI?
Wow, that's a hard thing to summarize. So we gave a presentation called the AI Dilemma last year. And actually,
there's a new special from Oprah Winfrey called AI and the Future of Us that actually
happened because she saw this presentation, the AI Dilemma. That presentation influenced policy
at the Biden administration. It helped get some of the executive orders that happened, including by Gavin Newsom in California.
So it's had quite a bit of an impact.
And in that presentation, in terms of how we outline the issue with AI, it's not whether AI is good or bad.
AI is confusing because it offers this sort of infinite set of benefits.
I mean, literally solving all the issues in medicine. You can have AIs that are doing 10 years of scientific research in less than a month because
you're automating all the human cognitive labor that a scientist would do.
And now you're just running it all through a machine.
You can have AIs that write programming code that actually do all automated AI research.
That is a crazy set of capabilities and benefits. But there's this sort of inconvenient truth in AI that you can't separate the promise
from the peril.
The same AI that speaks the language of biology and gives us all these new cancer drugs or
gives us new antibiotics.
The first new antibiotic in 60 years was discovered by AI in the last year.
That's amazing.
But the AI that speaks the language of biology, can you separate that from it generating new dangerous
things in biology, like biological weapons? Can the same AI that can generate,
you know, funny images of Donald Trump and Joe Biden laughing at a campfire together, which some people
have seen online, can you separate that from an AI that is causing this
proliferation of deepfake nudes in the classrooms, because, you know, it's been trained on images of
human faces and bodies? So that's the challenge: the same promise is inseparable from the peril.
In terms of the risks that we
are worried about and what we presented
in this AI dilemma talk
it's that we have to be aware and learn
the lessons of social media and that's
why I think your show is so interesting to talk about this with:
what were the stories that we told about
social media, about what it was going to do, versus what was the actual
outcome? So the stories we told were:
it's going to help connect people with their friends,
help people join like-minded communities, make advertising more efficient for small
and medium-sized businesses. And all the stories were true. That actually did happen. But we also saw
that that wasn't the whole story. It also caused a more addicted, distracted, polarized society.
That's not an accident. You can predict all of that if you go beneath the stories. You can put
on your x-ray glasses and say, I don't care about the stories. I want to see what's the incentives
behind these stories. The incentives were driving up engagement, which is correlated with personalized
news feeds, which break reality. It's correlated with beautification filters, which hurt people's
self-image. It's correlated with driving up engagement so that everybody can contact everybody
else, which drives harassment. Those are all design decisions that were created by that engagement incentive.
And so we can see through the stories we tell and instead look at what's the future we're
going to get based on the incentives.
With AI, we're telling a whole bunch of stories.
AI is going to solve cancer drugs.
It totally can do that.
AI is going to automate a bunch of labor and make our jobs easier.
It can totally do that.
It's going to make coding more efficient.
It's going to increase GDP.
It's going to give us military advantages, scientific advantages.
It can totally do all those things.
But what is the actual incentive of the AI companies that are building it?
Is it to improve science?
Is it to strengthen democracies?
No.
What is it?
Their incentive is what we call the race to roll out, which is the race to build and launch and deploy the most sophisticated AI system, like o1, which was just released by OpenAI and does PhD-level chemistry, math, and physics now. That's what just happened recently.
And they're racing to release that.
Why?
Because then they get more users on their platform,
more people using ChatGPT.
They raise more venture capital when they're leading the race.
They get to raise the most money.
They then get to hire the best people
because they already have a sort of a flywheel
of some of the best people working on AI.
So the best people want to work on the best AI models, not the third or fourth best AI model. And they
can only even influence policy if they're actually leading the AI race. So let's say you're a safety
oriented person. We've heard from people at Anthropic who say, even we who care about safety,
we can only influence policy if we're also at the frontier of leading the AI race. And so you end up again with this race to roll out,
the race to take shortcuts,
and it's the race to recklessness
that causes all these risks.
There was just a Senate hearing a couple of days ago
in the US with some of the whistleblowers from OpenAI
where they basically said that OpenAI's systems
weren't even secure.
So an internal employee,
if they were paid by someone in the Chinese Communist Party,
could have easily stolen the AI model and leaked it outwards.
So what does it mean when we say we have to beat China to AI if we're not even racing
in a way that secures what we're building when they have whatever we have?
So these are the kinds of questions that I think about when we talk about how do we get to a safe future? Now, we don't have to have those risks. We can talk about solutions.
It's just that this is the current state of affairs that lets you predict the future outcome.
It's a whole bunch of risk driven by the race to shortcuts. Well, I was just going to ask because
you hear AI leaders like Sam Altman and others talk about, well, we welcome government regulation,
right? Like we want some rules of the road here. But then in the same breath, they'll say, but, you know,
if we don't do this, China will get there first. And so there is this, like, we can't slow down
because the rest of the world will not. And so we might as well all, like, go as fast as we can
together. It does present an interesting problem though,
because you don't have global governance here, right?
So is the only way to sort of make AI safer
and sort of avoid some of these decisions
that we made around social media
to have some kind of like diplomacy with other,
you know, it is hard to prevent a China
or another bad actor somewhere from
sort of moving forward on this. Yeah, so this is so important that you're raising,
and I want people to memorize something. When you're in a conversation about AI,
there's this inevitable moment where people say, but if we slow down, then we're just going to
lose to China. So when someone says, what about China? What I want people to memorize is what about social media? Because the US beat China to social media. Did that make us stronger
or weaker? And the point is that it didn't have to make us weaker. What we're actually competing
for with the new technology, when we're competing and doing great power competition,
is we're competing for who's better at wisely integrating a technology such that it strengthens every
layer of your society. The metaphor we often give is like, you know, that board game Jenga,
right? It's like the way we are currently building our future with AI is we're doing it by playing
the game Jenga. We add this new capability at the top of society.
Now anybody can make cool videos and AI art instantly,
or, as a podcaster, you can just clone your voice
and type what you want to say
so you don't have to actually record another podcast.
That's amazing.
That's a new capability.
But we just, in doing that,
pulled out this lower level block
of now no one knows what's true or real,
because the same AI that can generate
your voice can clone grandma's voice, can make a deepfake of a politician saying something
on a hot mic. You know, the same AI that adds new cancer drugs, again, at the top, also pulls out this
foundational building block of biological safety. And so we're building our future to race with
China in a way that's creating a more and more wobbly
foundation. And at the end of the day, I want people to think, like, this is dumb. Like we can
do so much better than that. And once you see it that way, you say, it's not a race for who has the
tallest wobbly tower that's going to collapse. It's a race for who is actually better at wisely
building up an enduring and lasting foundation for the future.
And while that might seem daunting, like we're not doing anything right now.
So we can kind of, it's very easy to do more than what we're doing.
We can start with some basic things like liability. If you were liable for pulling out a block from the bottom,
for creating more risk in society when that risk happens,
you would be much more hesitant to build in a way
where you're pulling out blocks from the bottom to build a taller tower.
It's a basic framework, you know: more liability, more accountability.
Another thing is protecting whistleblowers.
If you don't have a government that has that much AI expertise yet, because it's a new technology, let's protect the people who are inside the AI labs who can see the flashing red lights when there's some problem going on.
And they can warn us about a new wobbly block
of the foundation.
And then we can also be using AI
to strengthen the foundations of society.
I know you had Robert Putnam on your podcast.
I'm a big fan.
And for all the division issues
and all the breakdown of truth issues,
we can be imagining using AI
to actually help add context that bridges across divides
so that you're never seeing kind of one-sided,
hyper-politicized,
hyper-divisive speech.
We could be using AI
to fill that in.
The great digital minister
of Taiwan, Audrey Tang,
has a system where
for trending topics
that are in the system,
she actually uses an LLM,
an AI system
that is trained
on a fact-checking database
that's collectively contributed
by the citizens of Taiwan
to actually add context
to any new trending topics. There's a famous saying in our world that if you
can make it trend, you make it true, which is the risk of trending topics and disinformation.
But if you added automated context to it with AI, you can imagine that AI is part of the solution
for building this lasting, enduring foundation. So we're not anti-AI, we're pro-steering. Last question is one about strategy around your advocacy. You are unique in that you
publicly pressure a lot of these companies from the outside, but as you said earlier,
you also engage in conversations with people who run these companies, people who work at
these companies. What have you learned about that approach in terms of how to be an effective advocate?
It's a really interesting question.
You know, just to sort of orient the situation, I always go back to E.O. Wilson:
the fundamental problem of humanity is that we have Paleolithic brains, medieval institutions,
and godlike tech. In a world where we didn't have
governance that was so lagging behind the technology, we'd be using that governance.
We should be living in a world where we have governance that lets us govern new technology.
We should be living in that world. Because we're not, we kind of have to take these creative
approaches, like what you're talking about, where you have to have a combined strategy of inside advocacy of people inside tech companies, public communication that changes what we call the Overton window, right?
Just the global zeitgeist, the global consensus that social media is a problem versus people saying there's no problem, and the same for AI.
And then policy.
You need there to be law that says we're going to have to do things differently
so that all companies have to abide by that new set of incentives. So, you know,
if culture is upstream from politics, currently tech is upstream from culture. If we want to
change the way that tech is printing both a culture that's more disoriented and confused
and doesn't know what's really going on, which is leading to a lack of ability to pass laws. We have to kind of go upstream. I think the
thing like The Social Dilemma created a shared reality about the problem. And then it enabled
the attorneys general, 42 state attorneys general, to sue Facebook and Instagram for intentionally
addicting children. It enabled the surgeon general to say, we have to put warning labels on social
media.
It helped enable Jonathan Haidt and his book, The Anxious Generation, which is now leading to this huge movement of phone-free schools and the reversal of these trends. If, years ago, I told you in a room with everybody smoking that 60 years from now no one's gonna be
smoking in the room, everyone would say that's crazy. And you have to believe that we can
actually change all this. What happened with big tobacco is you have Jeffrey Wigand, the 60
Minutes whistleblower, coming out in a 60 Minutes interview, you have the attorneys general suing
and doing the Master Settlement Agreement.
You have the insiders
and whistleblowers who made the data clear.
You have the film, The Insider.
You have things like the Truth campaign,
large public service announcements
that swung the Overton window and consensus,
the Surgeon General's warning.
And then now, 60 years later,
no one is around you smoking and the rates are way down. If you look at the social media work, I think we think about
it in a similar way. You have people doing 60 Minutes interviews, you know, I came out,
Frances Haugen, the Facebook whistleblower, came out, people making it extraordinarily clear,
Arturo Béjar, Sophie Zhang, there's so many whistleblowers. And then you have the attorneys
general that are now suing, just like we did with big tobacco.
You have the surgeon general doing the warning about social media.
You have Jonathan Haidt and his new book,
The Anxious Generation,
which is like the Silent Spring.
So I see a world where we can kind of swing this all the way back around
and kind of restore basic sanity for democracy.
Unfortunately,
we're still a little bit further away, and AI is about to supercharge it
if we don't get ahead of it there. I don't know if I really answered the question you were really asking.
It does. It seems like the right strategy at a time, I was going to say,
where, you know, politics is so broken and polarized and it's so hard to get anything done,
that you do need this sort of all-of-the-above strategy, where it's culture,
it's politics, it's pressure from the inside, it's pressure from the outside. Thank you so much for
chatting with me. It is so refreshing to hear someone's take on this who's so thoughtful and
optimistic. And thank you for all the work that you've done and all the work that you continue to
do. Well, thank you for raising awareness the way that you have
and trying to give people pathways to hope yourself on this podcast.
It's really great to meet you.
You too, Tristan. Take care.
Offline is a Crooked Media production.
It's written and hosted by me, Jon Favreau, along with Max Fisher.
It's produced by Austin Fisher and Emma Illick-Frank.
Jordan Cantor is our sound editor.
Charlotte Landis is our engineer.
Audio support from Kyle Segland.
Jordan Katz and Kenny Siegel take care of our music.
Thanks to Ari Schwartz, Madeline Herringer, Reed Cherlin, and Adrian Hill for production support.
And to our digital team, Elijah Cohn and Dilan Villanueva, who film and share our episodes as videos every week. Thank you.