The Daily - Israel Kills The Leader of Hamas
Episode Date: October 18, 2024
Yahya Sinwar, the leader of Hamas, played a central role in planning the deadly assault on Israel on Oct. 7, 2023, that set off the war in Gaza. His killing was a major win for Israel, and prompted calls from Israeli leaders for Hamas to surrender. But what actually happens next is unclear. Ronen Bergman, who has been covering the conflict, explains how Israel got its No. 1 target, and what his death means for the future of the war.
Guest: Ronen Bergman, a staff writer for The New York Times Magazine, based in Tel Aviv.
Background reading:
Analysis: Mr. Sinwar is dead. Will the fighting stop?
A chance encounter led to the Hamas leader's death.
Obituary: Mr. Sinwar was a militant commander known for his brutality and cunning.
For more information on today's episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Hey, everybody. It's Sabrina. I'm here to talk to you one last time this week about
the New York Times audio subscription. Sorry. We just know that change is hard, and we want
to make absolutely certain that all of you, our lovely, incredible audience, understand
what's going on. So two things. First, today's show is not as long as it looks. This week,
we're doing this kind of unusual thing
where we attach episodes of other New York Times podcasts
right to the end of our show.
So today, that's the fabulous podcast, Hard Fork,
where hosts Kevin Roose and Casey Newton
break down the latest tech news
with smarts, humor, and expertise.
We're doing that because,
and here's the second part of all of this,
the New York Times has launched an audio subscription this week. That subscription gets you full
access to shows like us and Hard Fork, Ezra Klein, The Run-Up, The Interview, and Headlines.
So this is included for people who subscribe to all of the New York Times, the big bundle
with news, cooking, and games.
But you can also sign up for audio separately.
Either of these subscriptions will allow you
to log in on Apple Podcasts or Spotify,
and you'll have access to all past shows.
Bonus content, early access, stuff like that.
Reminder, you don't have to subscribe
to keep listening to The Daily.
Recent episodes will still be free.
But we hope you'll see it as a way to support the show and the work we do.
Okay, thank you for bearing with us and with all of these announcements all this week.
TGIF.
We just want everyone to know the deal.
And as always, thank you for listening.
From the New York Times, I'm Sabrina Tavernise.
And this is The Daily.
We are following a major breaking story out of the Middle East.
Israel says the leader of Hamas is dead.
Yahya Sinwar, the leader of Hamas and the architect of the October
7th attack, was killed by Israeli forces in Gaza. Today the images of Sinwar's body lying in rubble
surrounded by Israeli troops sent shockwaves through the region. Sinwar's assassination
dealt a major blow to Hamas amid the threat of wider escalation in the region. It was a major victory for Israel
and prompted calls from Israeli leaders for Hamas to surrender.
This war can end tomorrow.
It can end if Hamas lays down its arms and returns our hostages.
But what actually happens next is unclear.
Today, my colleague Ronen Bergman on how Israel got its most wanted man
and what his killing means for the future of the war.
It's Friday, October 18th. So Ronen, we're talking to you around 3 p.m. New York time, 10 p.m. your time.
Just a few hours ago, we started getting hints that Israel possibly killed the leader of
Hamas, Yahya Sinwar.
And just a little while ago, it was announced that he was in fact killed.
What was your first thought when you heard the news?
So candidly, I thought, oh, here we go again, because on the 25th of August, I got a call from a source who said, we believe Sinwar is dead.
And then the source called again and said they thought it was Sinwar, but it was not.
So I thought at the beginning, maybe it's the same thing happening again. But then we got the first picture. And when you look at the picture that was just taken from the site,
though I'm not a forensic expert, it looks like the body of the leader of Hamas.
An hour later, when my hunch was that it was him, though not yet confirmed, I thought this is a watershed
moment where the war can end, and maybe, maybe the hostages could come back.
This is a critical moment where things maybe can go for the first time in so long for the
better.
Okay, we'll get to that.
But remind us first who Sinwar was.
Sinwar was one of the most important people
in the history of Hamas.
Yahya Sinwar was born in a refugee camp in Khan Younis,
in the south of the Gaza Strip, and he was one of
the young first followers of Sheikh Ahmed Yassin, the founder and the leader and the
spiritual compass for Hamas, the jihadist religious movement he founded in the 80s in Gaza.
Sinwar was leading a small, tough, brutal unit that was in charge of the internal security,
executing what they saw as collaborators with Israel and people doing what they saw as blasphemy.
Then he was sent to prison in Israel with multiple life sentences.
He was released in the Shalit deal in 2011.
Shortly afterwards, he took control over Hamas in the Gaza Strip, where he was the chosen political leader as well as the most important
military leader.
He took control basically over these two wings, and in that capacity, throughout the last four
years, he planned, organized, pushed, motivated, and executed the horrific attack of October 7 that he
masterminded. And he stayed in the Gaza Strip during the year after the devastating Israeli
invasion, leading the movement. And in spite of numerous
attempts by thousands of people, intelligence operatives and commandos,
with American intelligence joining in for a year, nobody knew exactly where he was or was able
to get near him.
Right.
He was the invisible man, basically.
He was a ghost.
Which is why, from the very beginning of this war that came out of the October 7th attack, Israel has had him as their number one target. They've been trying to kill him for a whole year, and now they finally have. Tell us how they finally got him.
So, a boss of the internal domestic secret service, which is taking a prime role in the counterterrorism war against Hamas, said Sinwar is so strict with field security and maintaining secrecy that, he said throughout the last year, I am sure he will be taken out by mistake.
Interesting. So no sophisticated, you know, cyber, SIGINT, visual intelligence, operatives on the ground.
A small unit, manned mainly by soldiers
who are nine months in the military,
in a platoon commanders' course, on Wednesday, on a regular patrol, trying to dismantle
some bomb that had a malfunction.
They saw some people, who they soon learned were terrorists, walking on the horizon.
One of them was throwing grenades at them.
They sent a drone; he, according to their description, even threw stones at the drone,
trying to take it down.
And in the firefight, one sniper from that force, and again, nine months in the military, those
are like very young, between 18 and 19 years old.
Trainees, basically.
Trainees. One sniper shot one of the terrorists
in the head. Then drones came to help them
and took down part of the building where they were hiding.
Hours later, they flew a drone into the building. Those are like miniature drones with cameras.
They saw a body of one of the terrorists. The drone got near the body.
Suddenly, one of them said:
That looks like Sinwar.
Hmm.
And as more evidence from the scene came, the news spread in the circles of secrecy: the intelligence community, the leadership of the military, and later the prime minister's office heard
that there is a strong chance that he is dead.
It's all by coincidence.
So basically, after a year of trying to kill him, using all of the technology and intelligence
that Israel has at its fingertips, they killed him kind of by accident is what you're saying,
like almost, you know, by mistake.
And it was a bunch of trainees who did it.
And he wasn't even in a tunnel, like everyone assumed. He was just kind of walking around out there.
So this is a phenomenon which was discovered by Israeli intelligence only in the later stages of the war:
it is very hard to spend constant days and weeks
in the tunnels.
You know, in our embeds with the IDF on tours in Gaza,
I have been in many of these tunnels.
And let me tell you, Sabrina,
even in the tunnels that are built
for the Hamas leadership, and we have been in a few,
it's very hard to stay.
Humid, claustrophobic, small, narrow. And everybody needed to go out from time to time.
Sinwar thought that the area was free of enemy forces.
He was wrong.
He was killed.
Now Ronen, I understand from your reporting recently
that this was not the first time that the IDF
was at least close to killing him.
Tell me that story.
Yes, okay.
So in January of 2024, so this year, Sinwar is in a tunnel under Khan Younis,
which is where he was born, with his family, hostages, bodyguards, and other Hamas militants,
and he's watching the nightly Israeli news, as he does, because, you know, he's fluent in Hebrew and an avid,
consistent student of Israel.
And Gallant, the defense minister, starts talking about Sinwar, about the hunt for Sinwar, the
number one wanted on the Israeli kill list. And he says that Sinwar can hear the D9 bulldozers
hammering above his head.
And Sinwar is like, wait, that's correct.
I can hear the bulldozers.
So he realizes that the Israelis must know where he is.
So he leaves in a hurry.
He leaves everything behind.
He takes his family,
but leaves behind a lot of money,
one million dollars.
And importantly, he also leaves behind a computer
with a bunch of documents.
And these documents turned out to be the most important insight into the attack of October
7 and Sinwar's role in it.
We'll be right back. So what are these important documents on this computer that they found, and what's the story
that the documents reveal? So what is there is the first understanding of the decision-making process,
the preparations, the deception, everything that Hamas leaders were doing
throughout the last two years before the war.
There are the minutes of these meetings, 10 meetings from July 2021 until the 7th of August
2023, so exactly two months before the attack, where Hamas leaders, Sinwar and five other military leaders, were talking freely because they were sure that
Israeli intelligence doesn't listen in on that room.
So these documents are minutes of high-level meetings about military plans from 2021 on?
Yes, exactly.
So Ronen, what story do those minutes tell?
So I think the first story is that the deception is so sophisticated, it's tactical and strategic.
They have been preparing.
Sinwar is telling his troops: during the last year, do many military exercises, so the Israelis will get used to seeing you drilling and having forces
marching or running from one place to another. The Gaza Strip would not look odd, or like part of
a preparation for war, but just like another military exercise. And also, he says, do not do this in
hiding. Because if you try to hide something, then the Israelis will think that you
are trying to hide something.
But if you do it in the open, if you bring television, then they
think that you don't mean it.
So he's trying to deceive Israel here.
He's trying to lull Israel into a sense of complacency.
Exactly.
And the other thing that they were doing was to try and recruit partners, to
convince the other members of the so-called Axis of Resistance, so Iran and Hezbollah,
to convince them to join in, to attack together. And in one of the minutes, Sinwar is saying, let us not forget that beyond breaking the
defenses of Israel, our goal is to destroy the state of Israel, to collapse the state
of Israel.
If we attack alone, he says, we will probably cause damage, a lot of damage. But if we get the front, the axis of resistance,
to join us, we might be able to achieve our primary goal: to take down the state of Israel,
collapse it, or at least roll the state of Israel years backwards.
And so they sent a special envoy.
They sent him secretly to Beirut and to Tehran to try and convince the Iranians and Hezbollah
that this is the time to attack.
Now, a senior Iranian intelligence official told him, and he updates the military council
when he comes back.
This is in August, just two months before the war.
The Iranian told him, listen, we agree with the plan in principle.
We are with you, but we need a little more time to prepare.
So that actually clarifies something we've been wondering throughout this entire war,
which has been to what extent did Iran know about and participate in the planning and
execution of October 7th?
Exactly.
Hezbollah and Iran tell them, we might support you if you start a war, but we are not joining you on that day in a synchronized, multi-front, regional surprise attack, because
we need more time.
And Sinwar decided that in spite of the fact that he is not getting their consent to a synchronized
attack, he will attack anyway.
And they give two reasons. One is that they are afraid that a new anti-missile
defense system is going to be deployed by Israel in 2024. And the other one is the breakdown
under the new government of Israel: Netanyahu's attempt at the judicial overhaul is weakening Israeli society.
And just to explain, you're talking about that very divisive effort by Netanyahu's new right-wing
government to overhaul the country's court system. A lot of Israel was very unhappy about it. There
were huge protests. For a moment there, the country basically ground to a halt.
Yes.
And so, Sinwar says, this is a historical opportunity, a historical window, for us to attack.
And this is why they decide to attack even when they are alone.
And he wants to attack on Yom Kippur, but then they decide on the last of the high holidays, the 7th of October.
They set the date in August and they strike.
And we know what happens next.
1,200 Israelis die on October 7th.
Hundreds of hostages are taken, and some of them, of course, are still in Gaza today,
around 100. Many others are dead.
And then in Gaza, over 40,000 people have been killed.
And now the war has moved to Lebanon and potentially even Iran.
In a way, Sinwar, a year delayed, got what he wanted: a regional war.
And Ronen, just explain what his death means for this regional war.
Sinwar had a plan.
He knew that he's the most wanted person by Israel.
And long ago, they already agreed on what would happen if he's killed.
And this is for the continuity of this organization.
Sinwar is being succeeded by his brother, Muhammad, who is considered to be even more of a
hardliner: more extreme, more lethal and brutal than his brother.
He has already assumed control.
So as important as Sinwar was, they planned for this.
And a new leader is literally already in control just hours later, his brother, Mohammed.
Yes.
And he will need to make a call.
Is he going to agree to a deal, strive to end the war, or continue it?
Well, let's talk about that deal, the potential ceasefire deal between Hamas and Israel that would cease hostilities and bring the hostages back home to Israel. I mean, this was something that Israel and the US accused Sinwar of really being the main obstacle to. What becomes of that deal
now?
I am not sure it was just Sinwar. I believe that it's 50-50 responsibility with the government of Israel, with the Prime Minister of Israel, and maybe
even a different balance.
And at the end of the day, the government of Israel will need to make two critical decisions
now.
This is a watershed moment in history.
They can use that and have the Qataris pick up the phone, call Hamas and
say, let's reconvene and let's have a deal to free the hostages because they are dying.
And there are dozens of hostages that are still alive, and there's a chance to free
them. And the other decision is whether to use that moment to end this multi-front regional war
and maybe use that moment to bargain some kind of a deal with Iran that would also bring peace to the north with Hezbollah, and at the end form something that could turn the
page into something better, a better day for the Middle East.
Unfortunately, I'm not sure that this will happen because Israel is now gearing towards a massive attack on Iran.
But Ronan, I want to understand that because Israel has always said since October 7th
that they want to eliminate Hamas, and a huge part of that was killing the head of Hamas, Sinwar.
And now they've done that. And Gaza is quite devastated.
So why is this not a reason to say mission accomplished and end the war?
The first thing is hubris.
Israel has gained some successes in its war against Hezbollah.
The Israeli defense establishment is regaining its pride through these
successes, and they feel that this can go on.
And even without a clear exit strategy, some of them believe they can just win again and
again.
The second is Netanyahu is being pushed by the extreme parts of his coalition
to continue, not sign a ceasefire, and start some kind of implementation of military rule
in Gaza. So, Israel coming back to Gaza. And we hear parts of the coalition talking
about the reestablishment of settlements in Gaza that were taken down
when Israel disengaged from the Strip.
Also, when the war is officially over, or there's some kind of agreement,
there will be questions that he will be asked. There will be a demand to establish an inquiry panel.
Maybe elections. Things that he is afraid of.
So in other words, he's concerned with his own power.
And the continuation of the war, as horrible as it sounds, is good for the integrity of
the coalition. Ronen, do you know what the reaction has been inside Gaza to this death?
So our colleague Bilal has spoken with different people in Gaza.
And you know, it's as diverse and as vast as you can expect. Some people blame Sinwar
for October 7 and everything that happened after, so they are happy. Some people
saw him as a local and a regional hero, and they are sad.
They see him as the symbol of the Palestinian struggle and Palestinian defiance, who caused the enemy by far the most devastating damage.
And there are many who say it just won't change anything.
And I think maybe this is the most gloomy and sad view: that even after the death of Sinwar, the mastermind
of the attack, it will just continue the same.
Ronen, what about inside Israel?
What has been the reaction inside Israel?
You're in Tel Aviv.
What do you think Sinwar's killing means for the Israeli people, for the Israeli
psyche?
People are cheering.
They see him as nothing less than the Satan himself.
They identify him, and I think with all good reason, with the atrocities, the kidnapping,
the sexual abuses, the mass murder of October 7, and the war that ensued.
And so, first of all, they have a sense of retribution.
I think most of the public would realize that there is no more symbolic end to the war than
this one.
But they also understand it's not the end.
And they are very much concerned.
And this is common between all Israelis.
They are very much concerned about the fate of the hostages.
And they also understand now, I think better than a year
ago, that it's not just about Hamas, but it's also about the Netanyahu government.
Ronen, thank you.
Thank you, Sabrina.
On Thursday, during a campaign visit in Milwaukee, Vice President Kamala Harris, when asked about
Sinwar's death, said, quote, "This moment gives us an opportunity to finally end the war in Gaza."
Meanwhile, in Israel, Prime Minister Benjamin Netanyahu told Gazans that if they set aside their weapons and returned the hostages,
Israel would, quote, "allow them to leave and live." But he made no mention of bringing the war to an end.
We'll be right back.
Here's what else you should know today.
On Thursday, an independent panel reviewing the failures that led to the attempted assassination of
former President Donald Trump in Butler, Pennsylvania, said that agents involved in the security
planning did not take responsibility in the lead-up to the event, nor did they own the
failures in the aftermath.
The panel, which included former Department of Homeland Security Secretary Janet Napolitano, called on the Secret Service to replace its leadership with people from the private sector
and shed much of its historic role investigating financial crimes
to focus almost exclusively on its protective mission.
And federal prosecutors have charged a man they identified as an Indian intelligence
officer with trying to orchestrate from abroad an assassination on U.S. soil.
An indictment unsealed in Manhattan on Thursday said that the man, Vikash Yadav,
whom authorities believe is currently in India, directed the plot that targeted a New York-based critic of the Indian government, a Sikh lawyer and
political activist. The charges are part of an escalating
response from the US and Canada to what those
governments see as brazenly illegal conduct by India,
a longtime partner.
Today's episode was produced by Mooj Zadie, Rob Szypko, Diana Nguyen, and Eric Krupke.
It was edited by Paige Cowett and M.J. Davis Lin,
with help from Chris Haxel,
contains original music by Rowan Niemisto and Diane Wong, and was engineered by Chris Wood.
Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.
Special thanks to Patrick Kingsley.
Remember to catch a new episode of The Interview right here tomorrow.
David Marchese talks to adult film star turned influencer Mia Khalifa.
I am so ashamed of the things that I've said and thought about myself and allowed others
to say and jokes that I went along with and contributed to about myself or about other
women or anything
like that.
I'm extremely ashamed of that.
That's it for The Daily.
I'm Sabrina Tavernise.
See you on Monday.
We just got another email from somebody who said they thought I was bald.
I have an apparently crazy bald energy
that I bring to this podcast.
What do you think is bald seeming about you?
I think for me, they think of me as a wacky sidekick,
which is a bald energy.
Is it?
I think so.
I don't think of, I don't associate wacky and bald.
Because I'm thinking Jeff Bezos.
I know a lot of very hardcore bald men.
Oh, interesting.
So do you think that maybe people think that I sound
like a sort of titan of industry, plutocrat?
I would not say that's the energy you're giving,
is plutocrat energy, but.
Oh really, cause I just fired 6,000 people
to show that I could.
You did order me to come to the office today.
I did. I said there's a return to office
in effect immediately.
No questions.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, are we reaching the AI end game?
A new essay from the CEO of Anthropic
has Silicon Valley talking.
Then, Uber CEO Dara Khosrowshahi
joins us to discuss his company's new partnership
with Waymo and the future of autonomous vehicles.
And finally, internal TikTok documents tell us exactly how many videos you need to watch
to get hooked. And so I did. Very brave. God help me.
Well, Kevin, the AI race continues to accelerate, and this week the news is coming from Anthropic.
Now last year, you actually spent some time inside this company, and you called it the
White Hot Center of AI Doomerism.
Yes.
Well, the headline of my piece called it the White Hot Center of AI Doomerism.
A classic reporter move, blaming the headline.
Well, you know, reporters don't often write our own headlines,
so I just feel the need to clarify that.
Fair enough, but the story does talk about
how many of the people you met inside this company
seemed strangely pessimistic about what they were building.
Yeah, it was a very interesting reporting experience
because I got invited to spend several weeks
just basically embedded at Anthropic as they were
gearing up to launch an update of their chatbot, Claude. And I sort of expected they would go in
and try to impress me with how great Claude was and talk about all the amazing things it would
allow people to do. And then I got there and it was like, all they wanted to do was talk about how
scared they were of AI and of releasing these systems into the wild.
I compared it in the piece to like being a restaurant critic
who shows up at like a buzzy new restaurant
and all anyone wants to talk about is food poisoning.
Right, and so for this reason,
I was very interested to see over the past week,
the CEO of Anthropic, Dario Amodei,
write a 13,000 word essay about his vision of the future.
And in this essay, he says that he is not an AI doomer, does not think of himself as one,
but actually thinks that the future is quite bright and might be arriving very quickly.
And then shortly after that, Kevin, the company put out a new policy, which they call a
responsible scaling policy that I thought had some interesting things to say about ways to safely build AI
systems.
So we wanted to talk about this today for a couple reasons.
One is that AI CEOs have kept telling us recently that major changes are right around the corner.
Sam Altman recently had a blog post where he said that an artificial superintelligence
could be just a few thousand days away.
And now here, Amodei is saying that AGI
could arrive in 2026, which check your calendar, Kevin,
that is in 14 months.
And certainly there is a case that this is just hype,
but even so, there are some very wild claims in here
that I do think deserve broader attention.
The second reason that we want to talk about this today
is that Anthropic is really the flip side to a story
that we've been talking about for the past year here,
which is what happened to OpenAI during and after
Sam Altman's temporary firing as CEO.
Anthropic was started by a group of people who left OpenAI,
primarily over safety concerns.
And recently several more members of OpenAI's founding team
and their safety research teams have gone over to Anthropic.
And so in a way, Kevin, Anthropic is an answer
to the question of what would have happened
if OpenAI's executive team hadn't spent the past few years
falling apart.
And while they are still the underdog compared to OpenAI,
is there a chance
that Anthropic is the team that builds AGI first? So that's what we want to talk about today, but I
want to start by just talking about this essay. Kevin, what did Dario Amodei have to say in his
essay, Machines of Loving Grace? Yeah, so the first thing that struck me is he is clearly reacting to
this perception, which I may have helped create
through my story last year that sort of he and Anthropic are
just doomers, right?
That they are just a company that
goes around warning about how badly AI could
go if we're not careful.
And what he says in this essay that I
thought was really interesting and important is,
you know, we're going to keep talking about the risks of AI.
This is not him saying, I don't think this stuff is risky.
I've been, you know, taken out of context
and I'm actually an AI optimist.
What he says is it's important to have both, right?
You can't just be going around
warning about the doom all the time.
You also have to have a positive vision
for the future of AI, because that's not only
what inspires and motivates people,
but because it matters what we do.
I thought that was actually the most important thing
that he did in this essay was he basically said,
look, this could go well or it could go badly.
And whether it goes well or badly is up to us.
This is not some inevitable force.
Sometimes people in the AI industry,
they have a habit of talking about AI
as if it's just kind of this disembodied force
that is just going to happen to us.
Inevitably.
Yes, and we either have to sort of like get on the train
or get run over by the train.
And what Dario says is actually different.
He says, here's a vision for how this could go well,
but it's gonna take some work to get there.
It also made me realize that for the past couple of years,
I have heard much more about how AI could go wrong
than how it could go right from the AI CEOs, right?
As much as these guys get knocked for endlessly
hyping up their products, they also have,
I think to their credit, spent a lot of time
trying to explain to people that this stuff is risky.
And so there was something almost counterintuitive
about Dario coming out and saying,
wait, let's get really specific
about how this could go well.
Totally.
So I think the first thing that's worth pulling out
from this essay is the timelines, right?
Because as you said, Dario Amodei is claiming
that powerful AI, which is sort of his term,
he doesn't like AGI, he thinks it sounds too sci-fi,
but powerful AI, which he sort of defines
as like an AI that
would be smarter than a Nobel Prize winner in basically any field and that it could basically
control tools, go do a bunch of tasks simultaneously. He calls this sort of a country of
geniuses in a data center. That's sort of his definition of powerful AI. And he thinks that
it could arrive as soon as 2026. I think there's a tendency
sometimes to be cynical about people with short timelines like these, like, oh, these guys are
just saying this stuff is going to arrive so soon because they need to raise a bunch of money for
their AI companies. And, you know, maybe that is a factor. But I truly believe that at least Dario
Amodei is sincere and serious about this.
This is not a drill to him.
Anthropic is actually making plans, scaling teams, and building products as if we are
headed into a radically different world very soon, like within the next presidential term.
Yeah.
Look, Anthropic is raising money right now.
That does give Dario motivation to get out there in the market and start talking about curing cancer
and all these amazing things that he thinks that AI can do.
At the same time, you know, I think that we're in a world
where the discourse has been a little bit poisoned
by folks like Elon Musk, who are constantly going out
into public, making bold claims about things
that they say are going to happen, you know,
within six months or a year,
and then truly just never happen. And our understanding of
Dario based on our own conversations with him and of people who work with him
is like, he is not that kind of person. This is not somebody who lets his mouth
run away with him. When he says that he thinks this stuff could start to
arrive in 14 months, I actually do give it some credibility. Yeah, and you know,
you can argue with the time scales and plenty of smart people disagree about this,
but I think it's worth taking this seriously
because this is the head of one of the leading AI labs
sort of giving you his thoughts on not just what AI
is going to change about the world,
but when that's going to happen.
And what I liked about this essay was that it wasn't
trying to sell me a vision of a glorious AI future, right?
Dario says, all or some or none of this
might come to pass.
But it was basically a thought experiment.
He has this idea in the essay about what he calls
the compressed 21st century.
He basically says, what if all AI does
is allow us to make a hundred years worth of progress
in the next 10 years in fields like biology.
What would that change about the world?
And I thought that was a really interesting way
to frame it.
Give us some examples, Kevin,
of what Dario says might happen
in this compressed 21st century.
So what he says in this essay is that if we do get
what he calls powerful AI relatively soon,
that in the sort of decade that follows that,
we would expect things like the prevention and treatment of basically all natural infectious
disease, the elimination of most types of cancer, very sort of good embryo screening
for genetic diseases that would make it so that more people didn't die of these sort
of hereditary things.
He talks about there being improved treatment for mental health and other ailments.
Yeah, I mean, and a lot of this comes down to just understanding the human brain, which
is an area where we still have a lot to learn.
And the idea is, if you have what he calls this country of geniuses that's just operating
on a server somewhere, and they are able to talk to each other, to dream, to suggest ideas, to give guidance to human scientists in labs, to run
experiments, then you have this massive compression effect, and all of a sudden you get all of
these benefits really soon.
You know, obviously the headline grabbing stuff is like, you know, Dario thinks we're
going to cure all cancer, and we're going to cure Alzheimer's disease.
Obviously, those are huge.
But there's also kind of the more mundane stuff,
like do you struggle with anxiety?
Do you have other mental health issues?
Like, are you like mildly depressed?
It's possible that we will understand
the neural circuitry there
and be able to develop treatments
that would just lead to a general rise in happiness.
And that really struck me.
Yeah, and when you just describe it that way,
it sounds sort of utopian and crazy,
but what he points out,
and what I actually find compelling is like,
most scientific progress does not happen
in a straight line, right?
You have these kind of moments
where there's a breakthrough
that enables a bunch of other breakthroughs.
And we've seen stuff like this already happen with AI,
like with AlphaFold,
which won the freaking Nobel Prize this year in chemistry,
where you don't just have a cure for one specific disease,
but you have a way of potentially discovering cures
for many kinds of diseases all at once.
There's a part of the essay that I really liked
where he points out that CRISPR was something that we could have invented long before we actually did, but essentially
no one had noticed the things they needed to notice in order to make it a reality. And he posits that there are probably
hundreds of other things like this right now that just no one has noticed yet. And if you had a bunch of
AI agents working together in a room and they were sufficiently intelligent,
they would just notice those things
and we'd be off to the races.
Right, and what I liked about this section of the essay
was that it didn't try to claim that there was some,
you know, completely novel thing that would be required
to result in the changed world that he envisions, right?
All that would need to happen for society
to look radically different 10 or 15 years from now,
in Dario's mind, is for that sort of base rate of discovery
to accelerate rapidly due to AI.
Yeah, now let's take a moment to acknowledge folks
in the audience who might be saying,
oh my gosh, will these guys stop it with the AI hype?
They're accepting every premise
that these AI CEOs will just shovel it.
They can't get enough and it's irresponsible.
These are just stochastic parrots, Kevin.
They don't know anything.
It's not intelligence and it's never gonna get any better
than it is today.
And I just wanna say, I hear you and I see you.
And our email address is Ezra Klein Show
and playtime suck up.
But here's the thing,
you can look at the state of the art right now
and if you just extrapolate what is happening in 2024
and you assume some rate of progress
beyond where we currently are,
it seems likely to me that we do get into a world
where you do have these sort of simulated PhD students
or maybe simulated super geniuses
and they are able to realize a lot of these kinds of things.
Now, maybe it doesn't happen in five, 10 years.
Maybe it takes a lot longer than that.
But I just wanted to underline,
we are not truly living in the realm of fantasy.
We are just trying to get a few years
and a few levels of advancement
beyond where we are right now.
Yeah, and Dario does, in his essay, make some caveats about things
that might constrain the rate of progress in AI,
like regulation or clinical trials taking a long time.
He also talks about the fact that some people may just
opt out of this whole thing.
They just may not want anything to do with AI.
There might be some political or cultural backlash
that sort of slows down the rate of progress.
And he says, you know, like that could actually
constrain this and we need to think about some ways
to address that.
Yeah.
So that is sort of the suite of things
that Dario thinks will benefit our lives.
You know, there's a bunch more in there.
You know, we think sort of will help with climate change,
other issues, but the essay has five parts.
And there was another part of the essay
that really caught my attention, Kevin.
And it is a part that looks a little bit more seriously
at the risks of this stuff,
because any super genius that was sufficiently intelligent
to cure cancer could otherwise wreak havoc in the world.
So what is his idea for ensuring that AI always remains in good hands?
So he admits that he's not like a geopolitics expert. This is not his forte.
Unlike the two of us.
And there have been, look, a lot of people theorizing about what the politics of advanced
AI are going to look like.
Dario says that his best guess currently
about how to prevent AI from sort of empowering
autocrats and dictators is through what he
calls an Entente strategy.
Basically, you want a bunch of democracies
to kind of come together to secure their supply
chain to sort of block adversaries
from getting access to things like GPUs and semiconductors, and that you
could basically bring countries into this democratic alliance,
and sort of ice out the more authoritarian regimes from
getting access to this stuff. But I think, you know, this was
sort of not the most fleshed out part of the argument.
Yeah, well, and I appreciate that he is at least
making an effort to come up with ideas
for how would you prevent AI from being misused.
But as I was reading the discussion around the blog post,
I found this interesting response
from a guy named Max Tegmark.
Max is a professor at MIT who studies machine learning,
and he's also the president of something called
the Future of Life Institute,
which is a sort of nonprofit focused on AI safety.
And he really doesn't like this idea
of what Dario calls the Entente,
the group of these democracies working together.
And he says he doesn't like it
because it essentially sets up and accelerates a race.
It says to the world that essentially whoever invents
super powerful AI first will win forever, right?
Because in this view, AI is essentially
the final technology that you ever need to invent
because after that it'll just, you know,
invent anything else it needs.
And he calls that a suicide race.
And the reason is this, and he has a great quote,
horny couples know that it is easier
to make a human level intelligence
than to raise and align it.
And it is also easier to make an AGI
than to figure out how to align or control it.
Wow, I never thought about it like that.
Yeah, you probably never thought I would say horny couple
on the show, but I just did.
So Kevin, what do you make of this sort of feedback?
Is there a risk there that this effectively serves
as a starter pistol that leads maybe our adversaries
to start investing more in AI and sort of racing against us
and triggering some sort of doom spiral?
Yeah, I mean, look, I don't have a problem
with China racing us to cure cancer using AI, right?
If they get there first, like, more power to them.
But I think the more serious risk is that they start building the kind of AI that serves Chinese interests, right?
That it becomes a tool for surveillance and control of people rather than some of these more sort of democratic ideals.
And this is actually something that I asked Dario about back last year when I was spending all that time at Anthropic, because this is the most common criticism
of Anthropic is like, well, if you're so worried about AI
and all the risks that it could pose,
like why are you building it?
And I asked him about this and his response was
he basically said, look, there's this problem
in AI research of kind of intertwining, right?
Of the same technology that sort of advances
the state of the art in AI also allows you to advance
the state of the art in AI safety, right?
The same tools that make the language models more capable
also make it possible to control the behavior
of the language models.
And so these things kind of go hand in hand.
And if you want to compete on the frontier of AI safety,
you also have to compete on the frontier
of AI capabilities.
Yeah, and I think it's an idea worth considering.
To me, it just sounds like, wow,
you are really standing on a knife's edge there
if you're saying in order to have any influence
over the future, we have to be right at the frontier
and maybe even gently advance the frontier
and yet somehow not accidentally trigger a race
where all of a sudden everything gets out of control.
But I do accept and respect that that is Dario's viewpoint.
But isn't that kind of what we observed
from the last couple of years of AI progress, right?
Like OpenAI, it got out there with ChatGPT
before any of the other labs had released anything
similar.
And ChatGPT kind of set the tone for all of the products that followed it.
And so I think the argument from Anthropic would be like, yes, we could sort of be way
behind the state of the art.
That would probably make us safer than someone who was actually advancing the state of the
art.
But then we missed the chance to kind of set the terms
of what future AI products
from other companies will look like.
So it's sort of like using a soft power
in an effort to influence others.
Yeah, and the way they put this to me last year
was that they wanted, instead of there to be
just a race for raw capabilities of AI systems,
they wanted there to be a safety race, right?
Where companies would start competing
about whose models were the safest rather than whose models could, you know, do your math homework
better.
So let's talk about the safety race and the other thing that Anthropic did this week to
lay out a future vision for AI.
And that was with something that has, I'll say it, kind of a boring name, the responsible
scaling policy.
I understand this, you know, this maybe wasn't going to come up over drinks, you know, at the club this weekend.
RSP.
Yeah, but I think this is something
that people should pay attention to
because it's an example of what you just said, Kevin.
It is Anthropic trying to use some soft power
in the world to say, hey, if we went a little bit more
like this, we might be safer.
All right, so talk about what's in the responsible scaling policy that Anthropic released this week.
Well, let's talk about what it is.
And the basic idea is just that as large language models
gain new abilities, they should be subjected
to more scrutiny and they should have more safeguards
added to them.
They put this out a year ago and it was actually
a huge success in this sense, Kevin.
OpenAI went on to release its own version of it.
And then Google DeepMind released a similar scaling policy
as well this spring.
So now Anthropic is coming back just over a year later
and they say, we're going to make some refinements.
And the most important thing that they say is,
essentially we have identified two capabilities
that we think would be particularly dangerous.
And so if anything that we make displays these capabilities,
we are going to add a bunch of new safeguards.
The first one of those is,
if a model can do its own AI research and development,
that is gonna start ringing a lot of alarm bells,
and they're going to put many more safeguards on it. And second, if one of these models can
meaningfully assist someone who has a basic technical background in creating a chemical,
biological, radiological, or nuclear weapon, then they would add these new safeguards.
What are these safeguards? Well, they have a super long blog post about it. You can look it up. But it includes basic things like taking
extra steps to make sure that a foreign adversary can't steal the model weights, for example,
or otherwise hack into the systems and run away with it.
Right. And this is some of it is similar to things that were proposed by the Biden White
House in its executive order on AI last year. This is also, these are some of the steps that came up in SB 1047,
the AI regulation that was vetoed by Governor Newsom in California recently.
So these are ideas that have been floating out there in the sort of AI safety world for a while.
But Anthropic is basically saying we are going to proactively commit to doing this stuff,
even before a government requires us to. Yeah
There's a second thing I like about this, and it relates to SB 1047, which we talked about on the show.
Something that a lot of folks in Silicon Valley didn't like about it was the way that it tried to identify
danger, and it was not because of a specific harm that a model could cause. It was by saying, well, if a model costs a certain amount
of money to train, right?
Or if it is trained with a certain amount of compute,
those were the proxies that the government was trying to use
to understand why this would be dangerous.
And a lot of folks in Silicon Valley said, we hate that
because that has nothing to do with whether these things
could cause harm or not.
So what Anthropic is doing here is saying,
well, why don't we try to regulate
based on the anticipated harm?
Obviously it would be bad if you could log on to Claude,
Anthropic's rival to ChatGPT, and say,
hey, help me build a radiological weapon,
which is something that I might type into Claude,
because I don't even know the difference
between a radiological weapon and a nuclear weapon, do you?
I hope you never learn.
I hope I don't either,
because sometimes I have bad days, Kevin,
and I get to scheming.
So for this reason,
I think that governments, regulators around the world
might want to look at this approach and say,
hey, instead of trying to regulate this
based on how much money AI labs are spending
or like how much compute is involved,
why don't we look at the harms we're trying to address
and say, hey, if you build something
that could cause this kind of harm, you have to do X, Y,
and Z.
Yeah, that makes sense to me.
So I think the biggest impact that both the sort of essay
that Dario wrote and this responsible scaling policy
had on me was not about any of the actual specifics
of the idea.
It was purely about the time scales and the urgency.
It is one thing to hear a bunch of people telling you
that AI is coming and that it's going to be more powerful
than you can imagine sooner than you can imagine.
But if you actually start to internalize that
and plan for it, it just feels very different.
If we are going to get powerful AI sometime in the next,
let's call it two to 10 years,
you just start making different choices.
Yeah, I think it becomes sort of part of the calculus.
I can imagine it affecting what you might want to study
in college if you are going to school right now.
I have friends who are thinking about leaving their jobs
because they think the place where they're working right now
will not be able to compete in a world
where AI is very widespread.
So yes, you're absolutely starting to see it
creep into the calculus.
I don't know kind of what else it could do.
There's no real call to action here
because you can't really do very much
until this world begins to arrive.
But I do think psychologically, we want people to at least imagine, as you say, what it would
be like to live in this world because I have been surprised at how little discussion this
has been getting.
Yeah, I totally agree.
I mean, to me, it feels like we are entering, I wouldn't call it like an AI end game,
because I think we're closer to the start
than the end of this transformation.
But it does feel like something is happening.
I'm starting to notice AI's effects in my life more.
I'm starting to feel more dependent on it.
And I'm also like, I'm kind of having an existential crisis.
Like, not a full blown one, but like typically I'm a guy who likes to plan, I like to strategize,
I like to have like a five year and a 10 year plan.
And I've just found that my own certainty about the future and my ability to plan long
term is just way lower than it has been for any time that I can remember.
That's interesting.
I mean, for myself, I feel like that has always been true.
In 1990, I did not know what things were gonna look like
in 2040, and I would be really surprised
by a lot of things that have happened along the way.
But yeah, there's a lot of uncertainty out there.
It's scary, but I also like,
do you not feel a little bit excited about it?
Of course.
I, look, I love software, I love tools,
I wanna live in the future and it's already happening to me.
There is a lot of that uncertainty
and that stuff freaks me out.
But if we could cure cancer, if we could cure depression,
if we could cure anxiety, you'd be talking about
the greatest advancement to human wellbeing,
certainly in decades, maybe that we've ever seen.
Yeah, I mean, I have some priors on this
because like my dad died of a very rare form of cancer
that was, it was like a sub 1% type of cancer.
And when he got sick, it was like,
I read all the clinical trials and it was just like,
there hadn't been enough people thinking
about this specific type of cancer and how to cure it,
because it was not breast cancer.
It was not lung cancer.
It was not something that millions of Americans get.
And so there just wasn't the kind of brain power
devoted to trying to solve this.
Now, it hasn't subsequently been solved,
but there are now treatments that are in the pipeline
that didn't exist when he was sick.
And I just constantly am wondering,
like if he had gotten sick now instead of when he did,
like maybe he would have lived.
And I think that is one of the things
that makes me really optimistic about AI is just
like maybe we just do have the brain power or we will soon have the brain power to devote
world-class research teams to these things that might not affect millions of people,
but that do affect some number of people.
Absolutely.
I just, I don't know.
It really, I got kind of emotional reading this essay
because it was just like, you know, obviously it's, you know,
I'm not someone who believes all the hype,
but I'm like, I assign some non-zero probability
to the possibility that he's right,
that all this stuff could happen.
And I just find that so much more interesting and fun
to think about than like a world
where everything goes off the rails.
Well, it's just the first time that we've had
a truly positive, transformative vision for the world
coming out of Silicon Valley in a really long time.
In fact, this vision, it's more positive and optimistic
than anything that has been said
in the presidential campaign.
Yeah.
It's like when the presidential candidates talk about the future of this country,
it's like, well, we'll give you this tax break, right?
Or we'll make this other policy change.
Nobody's talking about how they're gonna
fricking cure cancer, right?
So I think, of course we're drawn to this kind of discussion
because it feels like there are some people in the world
who are taking really, really big swings,
and if they connect, then we're all gonna benefit.
Yeah, yeah.
When we come back,
why Uber has more Waymo autonomous vehicles on the road
than it used to.
Okay, Casey, one of the biggest developments over the past few months in tech is that self-driving
cars now are actually working.
Yeah, this is no longer in the realm of sci-fi.
Yes.
So we've talked, obviously, about the self-driving cars that you can get in San Francisco now.
It used to be two companies, Waymo and Cruise, now it's just Waymo.
And there have also been a bunch of different autonomous vehicle
updates from other companies that are involved in the space.
And the one that I found most interesting recently
was about Uber.
Now, as you will remember, Uber used
to try to build its own robo taxis.
They gave that up back in 2020.
That was the year they sort of sold off their autonomous driving division
to a startup called Aurora,
after losing just an absolute ton of money on it.
But now they are back in the game
and they just recently announced a multi-year partnership
with Cruise, the self-driving car company.
They also announced an expanded partnership with Waymo,
which is going to allow Uber riders to get AVs in Austin, Texas
and Atlanta, Georgia. They've been operating this service in Phoenix since last year, and that's going to keep expanding.
They also announced that self-driving Ubers will be available in Abu Dhabi through a partnership with the Chinese AV company
WeRide. And they've also made a long-term investment in Wayve, which is a
London-based autonomous driving company. So they are investing really heavily in
this and they're doing it in a different way than they did back when they were
trying to build their own self-driving cars. Now they are essentially saying,
we want to partner with every company that we can that is making self-driving
cars. Yeah, so this is a company that many people take several times a week, Uber, and yet I
feel like it sometimes is a bit taken for granted.
And while we might just focus on the cars you can get today, they are thinking very
long term about what transportation is going to look like in five or 10 years.
And increasingly for them, it seems like autonomous vehicles are a big part of that answer.
Yeah, and what I found really interesting, so Tesla had this robo taxi event last week
where Elon Musk talked about how you'll soon be able to hail a self-driving Tesla.
And what I found really interesting is that Tesla's share price plummeted after that event,
but Uber's stock price rose to an all-time high.
So clearly people think that, or at least some investors think that Uber's approach
is better here than Tesla's.
It's the sort of thing, Kevin,
that makes me wanna talk to the CEO of Uber.
And lucky for you, he's here.
Oh, thank goodness.
So today we're gonna talk with Uber CEO,
Dara Khosrowshahi. He took over at Uber in 2017
He took over at Uber in 2017
after a bunch of scandals led the founder of Uber,
Travis Kalanick, to step down.
He has made the company profitable
for the first time in its history,
and I think a lot of people think
he's been doing a pretty good job over there.
And he is leading this charge into autonomous vehicles,
and I'm really curious to hear what he makes,
not just of Uber's partnership with Waymo,
but of sort of the whole self-driving car landscape.
Let's bring him in.
Let's do it.
Dara Khosrowshahi, welcome to Hard Fork.
Thank you for having me.
So you were previously on the board of the New York Times Company until 2017 when you
stepped down right after taking over at Uber.
I assume you still have some pull with our bosses though,
because of your years of service.
So can you get them to build us a nicer studio?
I didn't have pull when I was on the board,
and I certainly have zero pull now.
I've got negative pull, I think.
They're taking revenge on me.
Well, since you left the board,
they're making all kinds of crazy decisions
like letting us start a podcast and stuff.
Oh my God.
Yeah.
But all right, so we are going to talk today
about your new partnership with Waymo
and the sort of autonomous driving future.
I would love to hear the story of how this came together
because I think for people who've been following this space
for a number of years, this was surprising.
Uber and Waymo have not historically
had a great relationship.
The two companies were-
It was a little rocky at first.
Embroiled in litigation and lawsuits
and trade secret theft and things like that.
There was, it was a big deal.
And so how did they, did they approach you?
Did you approach them?
How did this partnership come together?
I guess it's time healing, right?
When I came on board,
we thought that we wanted to establish a better relationship
with Google generally, Waymo specifically.
And even though we were working on our own self-driving technology, it was always within
the context of we were developing our own, but we want to work with third parties as
well.
One of the disadvantages of developing our own technology was that some of the other players, the Waymos of the world, et cetera, heard us, but didn't necessarily believe us. It's difficult to work with players that you compete with.
So one of the first decisions that we made was,
we can't be in between here.
Either you have to go vertical,
or you have to go platform strategy.
You can't achieve both, and we had to make a bet.
I mean, we either have to do our own thing or we have to do it with partners.
Yeah, absolutely.
And so that strategic kind of fork became quite apparent to me.
And then the second was just what are we good at?
Listen, I'll be blunt, we sucked at hardware, right?
We tried to apply software principles to hardware.
It doesn't work. Hardware is a different place, a different demand in terms of perfection, et cetera. And ultimately, that fork: do we go vertical? There are very few companies that can do software and hardware well. Apple and Tesla are arguably among the few in the world. And we decided to make a bet on the platform.
And so once we made that bet, we went out and identified who the leaders were. Waymo was a clear leader. First, we had to make peace with them and settle in court, et cetera. We got Google to be a bigger shareholder. And then over a period of time, we built relationships.
And you know, I do think there's a synergy between the two, so the relationship just makes sense, and we're very, very excited to expand it pretty significantly on a forward basis.
So this was, I feel like, maybe your most consequential decision to date as the CEO of this company. If you believe that AVs are gonna become the norm for many people hailing a ride in 10 or 15 years, it's conceivable that they might open up the Waymo app, right? And not the Uber app. Waymo has an app to order cars. I use it fairly regularly. So what gave you the confidence that in that world, it will still be Uber that is the app that people are turning to, and not Waymo or whatever other apps might have arisen for other AV companies?
The first is that it's not a binary outcome.
Okay, I think that Waymo app and Uber app can coexist.
We saw it in my old job in the travel business, right? I ran Expedia. And there was this drama: is Expedia going to put the hotel chains out of business, or are the hotel chains going to put Expedia out of business? The fact is both thrived. There's a set of customers who book through Expedia, there's a set of customers who book hotel-direct, and both businesses have grown, and the industry in general has grown. Same thing if you look at food,
right? McDonald's has its own app. It's a really good app. It has a loyalty program.
Starbucks has its own app, has a loyalty program, yet both are engaging with us through the Uber Eats marketplace.
So my conclusion was that there isn't an either or.
I do believe there will be other companies.
There'll be Cruises, and there'll be WeRides and Wayves, et cetera.
There'll be other companies and self-driving choices.
And the person who wants utility, speed, ease, familiarity will choose Uber,
and both can coexist and both can thrive
and both are really gonna grow
because autonomous will be the future eventually.
So tell us more about the partnership with Waymo
that is going to take place in Austin and Atlanta.
Who is actually paying for the maintenance of the cars?
Does Uber have to sort of make sure
that there's no trash left behind in the cars? Like what is Uber actually doing in addition to just making these
rides available through the app? Sure, so I don't want to talk about the economics because they're
confidential in terms of the deal, but in those two cities Waymo will be available exclusively
through the Uber app and we will also be running the fleet operations as well.
So depots, recharging, cleaning,
if something gets lost,
making sure that it gets back to its owner, et cetera.
And Waymo will provide the software driver,
will obviously provide the hardware,
repair the hardware, et cetera.
And then we will be doing the upkeep
and operating the networks.
For riders, if you want to get in a Waymo
in one of those cities through Uber,
is there an option to specifically request
a self-driving Waymo or is it just kind of chance?
Like if the car that's closest to you happens to be a Waymo,
that's the one you get.
Right now the experience, for example,
in Phoenix is that it's by chance.
I think you got one by chance and you can say,
yes, I'll do it or not.
And I think that's what we're gonna start with,
but there may be some people who only want Waymos,
and there are some people who may not want Waymos,
and we'll solve for that over a period of time.
It could be personalizing preferences,
or it could be what you're talking about,
which is I only want a Waymo.
Do the passengers get rated by the self-driving car
the way that they would in a human-driven Uber?
Not yet, but that's not a bad idea.
What about tipping?
Like if I get out of a self-driven Uber, is there an option to tip the car if it did a
good job?
I'm sure we could build that.
Why not?
Honestly, it would just be...
I don't know.
I do wonder if people are going to tip machines.
I don't think it's likely, but you never know.
It sounds crazy, but at some point someone is going to start asking, because they're going to realize it's just free margin. You know, it's like even if only a hundred customers do it in a whole year, I don't know, it's just free money.
I mean, the good news is, on tipping, a hundred percent of tips go to drivers now, and we definitely want to keep that. So we like the tipping habits, but whether people tip machines is TBD.
Yeah. And how big are these fleets?
I think I read somewhere recently that Waymo has about 700 self-driving cars operating
nationwide.
How many AVs are we talking about in these cities?
We're starting in the hundreds and then we'll expand from there.
I know you don't want to discuss the economics, even though I would love to learn what the
split is there.
I'm not going to tell you.
But you did recently talk about the margins on autonomous rides being lower than the margins on regular Uber rides for at least a few more years. That's not intuitive to me, because in an autonomous ride you don't have to pay the driver, so you would think the margin would be way higher for Uber. Why would you make less money if you don't have to pay a driver?
So generally our design spec in terms of how we build businesses is any newer business, we're going to
operate at a lower margin while we're growing that
business. You don't want it to be profitable day one.
And that's my attitude with autonomous, which is, again, get it out there, introduce it to as many people as possible. At a maturity level, generally, if you look at our take rate around the world, it's about 20%: we get 20%, the driver gets 80%.
We think that's a model that makes sense
for any autonomous partner going forward,
and that's what we expect.
I kind of don't care, honestly,
what the margins are for the next five years.
The question is, can I get lots of supply?
Can it be absolutely safe?
And does that 20-80 split look reasonable going forward?
And I think it does.
Yeah.
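To make that arithmetic concrete, here is a minimal sketch of the 20/80 take-rate split described above. The fare amount and function name are illustrative assumptions, not Uber's actual billing logic.

```python
# Minimal sketch of the 20/80 take-rate split described above.
# The fare value and function name are hypothetical, not Uber's billing code.

def split_fare(fare: float, take_rate: float = 0.20) -> tuple[float, float]:
    """Return (platform_cut, partner_cut) for a given fare."""
    platform_cut = fare * take_rate
    return platform_cut, fare - platform_cut

uber_cut, partner_cut = split_fare(25.00)  # a hypothetical $25 ride
print(f"Uber keeps ${uber_cut:.2f}, the driver or AV partner gets ${partner_cut:.2f}")
# Uber keeps $5.00, the driver or AV partner gets $20.00
```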
I want to ask about Tesla.
You mentioned them a little earlier.
They held an event recently where they unveiled their plans
for a robo taxi service.
Do you consider Tesla a competitor?
Well, they certainly could be, right?
If they develop their own AV vehicle
and they decide to go direct only through the Tesla
app, they would be a competitor.
And if they decide to work with us, then we would be a partner as well.
And to some extent, again, both can be true.
So I don't think it's going to be an either or.
I think Elon's vision is pretty compelling, especially like you might have these cyber shepherds
or these owners of these fleets, et cetera.
Those owners, if they want to have maximum earnings
on those fleets, will want to put those fleets on Uber.
But at this point it's unknown what his intentions are.
There's this big debate that's playing out right now
about who has the better AV strategy
between Waymo and Tesla in the sense that the Waymos
have many, many sensors on them.
The vehicles are much more expensive to produce.
Tesla is trying to get to full autonomy
using only its cameras and software.
And Andrej Karpathy, the AI researcher, recently said that Tesla was going to be
in a better position in the long run because it ultimately just had a software
problem, whereas Waymo has a hardware problem and those are typically harder to
solve. I'm curious if you have a view on this, whether you think one company is
likelier to get to a better scale based on the approach that they're taking
with their hardware and software?
I mean, I think that hardware costs scale down over a period of time.
So sure, Waymo has a hardware problem, but they can solve it.
I mean, the history of compute and hardware is like the costs come down very, very significantly.
The Waymo solution is working right now.
So it's not theory, right?
And I think the differences are bigger, which is Waymo has more sensors, has cameras, has
LIDAR, so there's a certain redundancy there.
Waymo generally has more compute, so to speak.
So the inference from that compute is going to be better, right? And Waymo also has high-definition maps
that essentially make the problem
of recognizing what's happening in the real world
a much simpler problem.
So under Elon's model,
the weight that the software has to carry is very, very heavy
versus the Waymo and most other player model
where you don't have to lean as much on training
and you make the problem much simpler
as a compute problem to understand.
I think eventually both will get there.
But if you had to guess,
like who's gonna get to sort of a viable scale first?
Listen, I think Elon eventually will get to viable scale,
but for the next five years, I bet on Waymo
and we are betting on Waymo.
I'll say this, I don't want to get into an autonomous Tesla in the next five years.
Somebody else can test that out.
I'm not going to be an early adopter of that one.
It's actually pretty good.
Have you used it recently?
I have not used it recently.
It's really good.
Yeah?
All right.
It's really good.
Now, again, it's, for example, the cost of a solid-state lidar now is $500, $600, right?
So why wouldn't you put that into your sensor stack?
It's not that expensive.
And for a fully self-driving specialized auto, I think that makes a lot of sense to me.
Now, Elon has accomplished the unimaginable many, many, many times, so I wouldn't bet
against him.
Yeah, I don't know.
This is all, you know, my secret dream for you.
Obviously, you should stay at Uber
as long as you want.
When you're done with that,
I actually do think you should run Tesla,
because I think you would be,
just as you've done at Uber,
you'd be willing to make some of the sort of easy compromises,
like just put a $500 fricking lidar on the thing
and we'd go much faster.
So anyway, what do you think about that?
I have a full-time job and I'm very happy with it.
Thank you.
Well, the Tesla board is listening.
Just keep this guy in mind. That's all I'm saying.
Thank you.
I don't know if the Tesla board listens to you two.
That's true.
I made too many ketamine jokes.
We're opening up the board meeting
with an episode of Hardfork, everybody.
They can learn a lot from this show.
What's your best guess for when say 50% of Uber rides
in the US will be autonomous?
I'd say close to eight to 10 years is my best guess,
but I am sure that'll be wrong.
In which direction?
Probably closer to 10.
Closer to 10?
Yeah, okay, interesting.
Most people have overestimated, you know,
again, it's a wild guess.
The probabilities of your being right
are just as much as mine.
I'm curious if we can sort of get into a future imagining
mode here.
Like in the year, whether it's 10 years or 15 years
or 20 years from now, when maybe a majority of rides
in at least big cities in the US will be autonomous,
do you think that changes the city at all?
Like do the roads look different?
Are there more cars on the road?
Are there fewer cars on the road? Are there fewer cars on the road?
What does that even look like?
So, I think that the cities will have much, much more space to use.
Parking often takes up 20, 30% of the square miles in a city, for example, and that parking
space will be open for living, parks, etc. So there's no doubt that
it will be a better world. You will have greener, cleaner cities, and you'll never have to park
again, which I think is pretty cool. I'm very curious what you think about the politics of
autonomy in transportation. In the early days of Uber, there was a lot of backlash and resistance
from taxi drivers.
Sure.
And, you know, they saw Uber as a threat to their livelihoods.
There were some, you know, well publicized cases of sort of sabotage and big protests.
Do you anticipate there will be a backlash from either drivers or the public to the spread
of AVs as they start to appear in more cities?
I think there could be, and what I'm hoping is that we avoid the backlash by having the
proper conversations.
Now historically, society as a whole, we've been able to adjust to job displacement because
it does happen gradually.
And even in a world where there's greater automation now than ever before, employment
rates, etc. are at historically great levels. But the fact
is that AI is going to displace jobs. What does that mean? How quickly should we go?
How do we think about that? Those are discussions that we're going to have. And if we don't
have the discussions, sure, there will be backlash. There's always backlash against
societal change that's significant. Now, we now work with taxis in San Francisco and taxi drivers who use Uber
make more than 20% more than the ones who don't. So there is a kind of solution space where new
technology and established players can win, but I don't know exactly what that looks like.
But that calculus does not apply to self-driving. You know, it's not like the Uber driver who's been driving an Uber for 10 years, that's their main source of income, can just start driving a self-driving Waymo.
You don't need a driver.
No, you don't need a driver.
It's not just that they have to switch the app they're using, it's that it threatens
to put them out of a job.
Well, listen, could they be a part of fleet management, cleaning, charging, et cetera?
That's a possibility.
We are now working with some of our drivers.
They're doing AI map labeling
and training of AI models, et cetera.
So we're expanding the solution set of work,
on-demand work that we're offering our drivers
because there is part of that work,
which is driving maybe going away
or the growth in that work is going to slow down,
at least over the next 10 years. And then we'll look to adjust. But listen, these are issues that are real, and I don't have a clean answer for them at this point.
Yeah. You brought up shared rides earlier, and you know, back in the day, I think when UberX first rolled out shared rides, like, I did that a couple of times. And then, you know, I don't know, I like got a raise at my job and I thought, you know, from here on out, it's just going to be me in the car.
How popular do you think you can make shared rides?
Is there anything that you can do to make that more appealing?
Well, I think the way that we have to make it more appealing is to reduce the penalty,
so to speak, of the shared rides.
I think the number one reason why people use Uber is they want to save time, they want
to have their time back.
And a shared ride would, you know,
you would get about a 30% decrease in price historically,
but there could be a 50% to 100% time penalty.
We're working now.
You might end up sitting next to Casey Newton.
That would be cool.
That would be amazing.
Thank you, Dara.
I would feel very short.
Otherwise, I would have no complaints.
People, so far we've heard,
don't have a problem with company, it really is time.
And they don't mind riding with other people.
There's a certain sense of satisfaction
with riding with other people.
But we're now working with both algorithmically,
and I think also fixing the product.
Previously, you would choose a shared ride and you'd get an upfront discount. So your incentive as a customer is to get the discount, but not to get a shared ride.
So we would have customers gaming the system, they get a shared ride at 2am when they know
they're not going to be matched up, etc.
Now you get a smaller discount and you get a reward, which is a higher discount if you're
matched.
So part of it is making sure customers aren't working against us and we're not working against customers. And we're working on the tech: we are reducing the time penalty, which is we avoid these weird routes, et cetera, that are going to cost you 50% of your time or a hundred percent of your time.
Now, in autonomous, if we are the only player that then has the liquidity to introduce shared autonomous into cities, that lowers congestion, lowers the price.
That's another way in which our marketplace
can add value to the ecosystem.
Got it.
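The incentive change described above, a smaller guaranteed discount plus a bonus that only pays out if you are actually matched, can be sketched roughly as follows. The discount rates here are invented placeholders, since the real numbers were not disclosed.

```python
# Rough sketch of the shared-ride pricing change described above.
# Discount rates are invented placeholders; Uber's actual values weren't disclosed.

BASE_UPFRONT_DISCOUNT = 0.10   # small guaranteed discount for choosing shared
MATCHED_BONUS_DISCOUNT = 0.20  # extra reward, applied only if a co-rider is matched

def shared_ride_price(solo_price: float, matched: bool) -> float:
    discount = BASE_UPFRONT_DISCOUNT + (MATCHED_BONUS_DISCOUNT if matched else 0.0)
    return round(solo_price * (1 - discount), 2)

print(shared_ride_price(20.00, matched=False))  # 18.0: little payoff for gaming a 2 a.m. ride
print(shared_ride_price(20.00, matched=True))   # 14.0: the bigger discount requires a real match
```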
Speaking of shared rides,
Uber just released a new airport shuttle service
in New York City.
Costs $18 a person, you book a seat,
it goes on a designated sort of route on a set schedule.
I don't have a question.
I just wanted to congratulate you on inventing a bus.
It's a better bus.
You know exactly when it's coming to pick you up, like just knowing exactly where your bus is, knowing what your path is, in real time.
It just gives a sense of comfort.
We think this can be a pretty cool product.
And again, is bus gonna be hugely profitable for us long term? I don't know, but it will introduce us to a bigger
audience to come into the Uber ecosystem. And we think it can be good for cities as
well. If you're in Miami, by the way, over the weekend, we got buses to the Taylor Swift
concert as well. So I'm just saying.
Well, I mean, look, it should not be hard to improve on the experience of a city bus.
Yeah.
Like, do you know what I mean?
So I like city buses.
When was the last time you were on a city bus?
Well, I took the train here.
So it wasn't a bus, but it was the transit.
He doesn't take shared, he doesn't take bus.
I'm a man of some people.
I like to ride public transit.
You're an elitist.
No, I would love to see a picture of you on a bus
sometimes in the past five years,
because I'm pretty sure that's never happened.
Let me ask you this.
I think we can make the experience better.
So far I've resisted giving you any product feedback, Dara,
but I have this one thing that I have always wanted
to know the explanation for, and it's this.
At some point in the past couple of years,
you all, when I ordered an Uber,
started sending me a push notification
saying that the driver was nearby.
And I'm the sort of person, when I've ordered an Uber, Dara, I'm going to be there when the driver pulls up.
I'm not making this person wait.
Okay.
I'm going to respect their time.
And what I've learned is when you tell me the driver is nearby, what that means is they're
at least three minutes away and they might be two miles away.
And what I want to know is why do you send me that notification?
We want you to be prepared to not keep the driver waiting.
Maybe we should personalize it.
I think that's a good question,
which is depending on whether or not
you keep the driver waiting,
I think that is one of the cool things
with AI algos that we can do.
At this point, you're right.
The experience is not quite optimized.
But it's for the driver.
It's for the driver.
Time is money.
I get it.
And if I were a driver,
I would be happy that you were sending that,
but you also send me this notification
that says the driver's arriving.
And that's when I'm like,
okay, it's time to go downstairs.
But it sounds like we're making progress on this.
I think the algorithm just likes you.
It just wants to have a conversation with you.
Yeah, they know that I love my rides, yeah.
Well, Casey has previously talked about
how he doesn't like his Uber drivers to talk to him.
This is a man who-
Doesn't share.
Oh my God. Listen, this man coasts through life in a pristine bubble.
Casey, I thought so highly of you before this.
I mean,
Here's what I'm saying.
If you're on your way to the airport
and it's 6.30 in the morning,
do you truly want a person you've never met before
asking you who you're going to vote for in the election?
Is that an experience that anyone enjoys?
By the way, I drove, and reading the rider as to whether they want to have a conversation or not, I was not good at that art of conversation as a driver.
Were you too talkative?
Hey, no, no, no.
Hey, how's it going?
Are you having a good day?
Going to work?
And then I just shut up and have a nice day.
To me that's ideal.
That was, but I don't know if that's.
No that's perfect.
That's going to give you all the information that you need.
I'll be your driver.
I know this is Casey's real attraction to self-driving cars is that he never has to talk to another human
Look, you can make fun of me all you want. I am NOT the only person who feels this way
Let me tell you, when I check into a hotel, they're like, yeah, did you have a nice day?
Yeah. Where are you coming in from? And I'm like, just... key.
I would love to see you checking into hotels. Did you have a nice day?
And you're like, well, let me tell you about this board meeting I just went to because the pressure I'm under.
You don't want to hear about it.
All right. Well, I think we're at time.
Dara, thank you so much for coming.
Thanks Dara.
This was fun.
When we come back, well, AI is driving progress and it's driving cars.
Now we're going to find out if it can drive Casey insane.
He watched 260 TikTok videos and he'll tell you all about it.
Well, Casey, aside from all the drama in AI and self-driving cars this week, we also had some news about TikTok.
Mm hmm. One of the other most powerful AI forces on earth.
No, truly.
Yes.
I unironically believe that.
Yeah, that was not a joke.
Yes, so this week we learned about some documents
that came to light as part of a lawsuit
that is moving through the courts right now.
As people will remember, the federal government
is still trying to force ByteDance to sell TikTok. But last week, 13 states and the District of Columbia sued TikTok, accusing the
company of creating an intentionally addictive app that harmed children. And Kevin, and this is my
favorite part of this story, is that Kentucky Public Radio got ahold of these court documents
and they had many redactions. You know, often in these cases, the most interesting sort of facts and figures
will just be redacted for who knows what reason.
But the geniuses over at Kentucky Public Radio
just copy and pasted everything in the document.
And when they pasted it, everything was totally visible.
This keeps happening.
I feel like every year or two,
we get a story about some failed redaction.
Like, is it that hard to redact a document?
I'll say this. I hope it always remains this hard to redact a document because
I read stuff like this, Kevin, and I'm in heaven.
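The failure mode here is usually that the "redaction" is a black rectangle drawn on top of the page while the text layer underneath is left intact, so select-all-and-copy, or any text extractor, recovers it. A minimal sketch using the pypdf library, with a hypothetical file name:

```python
# Minimal sketch: if a "redaction" is just a black box drawn over the text layer,
# the underlying text is still in the PDF and plain extraction recovers it.
# "filing.pdf" is a hypothetical file name.
from pypdf import PdfReader

reader = PdfReader("filing.pdf")
for page in reader.pages:
    # extract_text() reads the text layer and ignores drawn shapes entirely
    print(page.extract_text())
```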
Yes. So they got a hold of these documents. They copied and pasted. They figured out what was behind
sort of the black boxes in the redacted materials. And it was pretty juicy. These documents included details like TikTok's knowledge
of a high number of underage kids
who were stripping for adults on the platform.
The adults were paying them in digital gifts.
These documents also claimed that TikTok
had adjusted its algorithm to prioritize people
they deemed beautiful.
And then there was this stat that I know you homed in on,
which was that these documents said,
based on internal conversations,
that TikTok had figured out exactly how many videos
it needed to show someone
in order to get them hooked on the platform.
And that number is 260.
260 is what it takes.
You know, it reminds me, this is sort of ancient,
but do you remember the commercial in the 80s
where they would say, like, how many licks does it take
to get to the center of a Tootsie Pop?
Yes.
To me, this is the sort of 2020s equivalent.
How many TikToks do you have to watch
until you can't look away ever again?
Yes, so this is, according to the company's own research,
this is about the tipping point
where people start to develop a habit
or an addiction of going back to the platform,
and they sort of become sticky, in the parlance, the disgusting parlance, of social media apps.
It becomes sticky.
So Casey, when we heard about this magic number of 260 TikTok videos, you had what I thought was an insane idea. Tell us about it.
Well, Kevin, I thought if 260 videos is all it takes,
maybe I should watch 260 TikToks and here's why.
I am an infrequent user of TikTok.
I would say once a week, once every two weeks,
I'll check in, I'll watch a few videos.
And I would say generally enjoy my experience,
but not to the point that I come back every day.
And so I've always wondered what I'm missing
because I know so many folks
that can't even have TikTok on their phone
because it holds such a power over them.
And they feel like the algorithm gets to know them
so quickly and so intimately
that it can only be explained by magic.
So I thought if I've not been able to have this experience
just sort of normally using TikTok,
what if I tried to consume 260 TikToks
as quickly as I possibly could
and just saw what would happen after that?
Not all heroes wear capes.
Okay, so Casey, you watched 260 TikTok videos last night.
Tell me about it.
So I did create a new account.
So I started fresh.
I didn't just reset my algorithm,
although that is something that you can do in TikTok.
And I decided a couple of things.
One is I was not going to follow anyone,
like no friends, but also no influencers.
No enemies.
No enemies.
And I also was not going to do any searches, right?
A lot of the ways that TikTok will get to know you
is if you do a search.
And I thought I want to get the sort of broadest,
most mainstreamy experience of TikTok that I can
so that I can develop a better sense of
how does it sort of walk me down this funnel toward my eventual interest.
Whereas if I just followed 10 friends
and did like three searches for my favorite subjects,
like I probably could have gone there faster.
And so do you know the very first thing
that TikTok showed me, Kevin?
What's that?
It showed me a 19 year old boy flirting
with an 18 year old girl trying to get her phone number.
And when I tell you, I could not have been any less
interested in this content.
It was aggressively straight, and it was very young,
and it had nothing to do with me,
and it was not my business.
And so over the next several hours,
this total process, I did about two and a half hours last night,
and I did another 30 minutes this morning.
And I would like to share, you know,
maybe the first nine or 10 things that TikTok showed me,
again, you know, the assumption is
it knows basically nothing about me.
Yes.
And I do think there is something quite revealing
about an algorithm that knows nothing,
throwing spaghetti at you, seeing what will stick, and then just picking up the spaghetti afterwards and saying, well, what is it, you know, that was interesting.
So here's what it showed me. Second video, a disturbing 911 call, like a very upsetting sort of domestic violence situation. Skip. Three, two people doing trivia on a diving board, and like the person who loses has to jump off the diving board. Okay, fine. Four, just a freebooted clip of an audition
for America's Got Talent.
Five, vegetable mukbang.
So just a guy who had like rows and rows
of beautiful multicolored vegetables in front of them
who was just eating them.
Six, a comedy skit, but it was like running
on top of a Minecraft video. So one of my key takeaways after my first six
or seven TikTok videos was that it does actually assume
that you're quite young, right?
That's why it started out by showing me teenagers.
And as I would go through this process,
I found that over and over again,
instead of just showing me a video,
it would show me a video that had been chopped in half
and on top was whatever
the sort of core content was and below would be someone is playing subway surfers, someone
is playing Minecraft or someone is doing those sort of oddly satisfying things.
This is a growth hack.
Like combing through a rug or whatever.
Yes.
And it's like, it's literally people trying to hypnotize you, right?
It's like, if you just see the, oh, someone is trying to smooth something out or someone is playing with slime.
They're cutting soap, have you seen the soap cutting?
Yes, soap cutting is huge.
And again, there is no content to it.
It is just trying to stimulate you
on some sort of lizard brain level.
It feels vaguely narcotic.
Absolutely.
It is like, yes.
It is just purely a drug.
Video number seven, an ad.
Video number eight, an ad.
Video number nine, a dad who was speaking
in Spanish and dancing.
I think he was very cute.
Now, can I ask you a question?
Yeah.
Are you doing anything other than just swiping
from one video to the next?
Are you liking anything?
Are you saving anything?
Are you sharing anything?
Because all of that gets interpreted by the algorithm
as like a signal to keep showing you more
of that kind of thing.
Absolutely. So for the first 25 or so videos,
I did not like anything,
but that's because I truly didn't like anything.
Like nothing was really doing it for me,
but my intention was always like, yes,
when I see something I like,
I'm gonna try to reward the algorithm, give it a like,
and I will maybe get more like that.
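As a toy illustration of the feedback loop being described here, a recommender can be thought of as bumping per-category scores on each engagement signal and ranking what it shows you by those scores. The categories and signal weights below are invented for illustration, not TikTok's actual system.

```python
# Toy sketch of the engagement feedback loop described above: signals bump
# category scores, and higher-scored categories get surfaced more often.
# Categories and weights are invented; this is not TikTok's actual system.
from collections import defaultdict

SIGNAL_WEIGHTS = {"watch_to_end": 1.0, "like": 2.0, "share": 3.0, "skip": -1.0}
interest_scores: dict[str, float] = defaultdict(float)

def record_signal(category: str, signal: str) -> None:
    interest_scores[category] += SIGNAL_WEIGHTS[signal]

record_signal("pop_music", "like")      # e.g. liking one pop video...
record_signal("teen_flirting", "skip")  # ...while skipping the first video shown

# Surface the highest-scoring categories first
ranked = sorted(interest_scores, key=interest_scores.get, reverse=True)
print(ranked)  # ['pop_music', 'teen_flirting']
```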
So the process goes on and on,
and I'm just struck by the absolute weirdness
and disconnection of everything in the feed.
At first, truly nothing has any relation to anything else,
and it sort of feels like you've put your brain
into like a Vitamix, you know?
Where it's like, swipe, here's a clip from Friends.
Swipe, kids complaining about school.
Swipe, Mickey Mouse has a gun and he's in a video game.
Those are three videos that I saw in a row.
And the effect of it is just like disorienting.
Yeah, and I've had this experience when you like go
onto YouTube but you're not logged in, you know,
on like a new account and it's sort of just showing you
sort of a random assortment of things
that are popular on YouTube.
It does feel very much like they're just firing
in a bunch of different directions,
hoping that something will stick.
And then it can sort of,
it can then sort of zoom in on that thing.
Yes, absolutely.
Now I will add that in the first 30 or so videos,
I saw two things that I thought were like
actually disturbing and bad.
Like things that should never have been shown to me.
Was it a clip from the All In podcast?
No, no.
Fortunately, it didn't get that bad.
But one, there was a clip of a grate in like a busy city, and there was air blowing up from the grate. And the TikTok was just women walking over the grate and their skirts blowing up.
That seems bad.
That was in the first 20 videos that I saw.
Wow.
It was this video, okay?
I guess if you like that video it says a lot about you, right?
But it's like, bad. The second one, and I truly do not even know if we will want to include this in our podcast, because I can't even believe that I'm saying that I saw this, but it is true.
It was an AI voice of someone telling an erotic story
which involved incest,
and it was shown over a video of someone making soap.
Wow.
Like, what?
This is dark stuff.
This is dark stuff.
Now, at what point did you start to wonder if the algorithm had started to pick up on the clues that you were giving it?
Well, so I was desperate to find out, because I am gay, and I wondered when I was going to see the first gay content, like when it was actually just gonna show me two gay men who were talking about gay concerns. And it did not happen.
Ever?
No. It never quite got there. Not even this morning.
In 260 videos.
In 260 videos.
Now, it did show me queer people.
Actually, do you know the first queer person, identifiably queer person that the TikTok algorithm
showed me? Are you familiar with the very popular TikTok meme from this year, very Demure, very mindful?
Yes.
The first queer person I saw on TikTok,
thanks to the algorithm, was Jules LeBron
in a piece of sponsored content,
and she was trying to sell me a Lenovo laptop.
And that was the queer experience that I got
in my romp through the TikTok algorithm.
Now, you know, it did eventually show me
a couple of queer people. It showed me one TikTok algorithm. Now, you know, it did eventually show me a couple of queer people.
It showed me one TikTok about the singer Chappell Roan, who is queer, so I'll count that.
And then it showed me a video by Billie Eilish,
you know, a queer pop star.
And I did like that video.
And now, Billie Eilish is one of the most famous pop stars
in the entire world.
I mean, like, truly.
Like, on the Mount Rushmore of famous pop stars right now.
So it makes a lot of sense to me that TikTok
would show me that also incredibly popular with teenagers.
And so I liked one Billie Eilish video
and then that was when the floodgates opened
and it was like, okay, here's a lot of that.
But just from like sort of scrolling it, no,
we did not get to the gay zone.
Now I did notice the algorithm adapting to me.
So something about me was, because again,
I was trying to get through a lot of videos
in a relatively short amount of time,
and TikTok now will often show you
three, four, five minute long videos.
I frankly did not have the time for that.
The longer I scrolled,
the shorter the videos were that I got.
And I do feel like the content aged up a little bit.
It started showing me a category of content
that I call people being weird little freaks.
You know?
It's like somewhat, these are some real examples.
A man dressed as the cat in the hat
dancing to Sierra's song, Goodies.
Okay.
There was a man in a horse costume
playing the Addams Family theme song on an accordion
using a toilet lid for percussion.
This is the most important media platform in the world.
Yes, hours a day teenagers are staring at this.
And this is what it is so-
We are so screwed.
Yeah, you know, it figured out that I was more likely
to like content about animals than other things.
So there started to be a lot of dogs doing cute things, cats doing things, or, you know, other things like that.
But you know there was also just a lot of like
Here's a guy going to a store and showing you objects from the store or like here is a guy telling you a long story
Okay, can I ask you a question? Yeah. In these 260 videos, were there any that you thought, like, that is a great video?
I don't know if I saw anything truly great.
I definitely saw some animal videos
that if I showed them to you, you would laugh
or you would say that was cute.
There was stuff that gave me an emotional response.
And I would say particularly
as I got to the end of this process,
I was seeing stuff that I enjoyed a bit more,
but then this morning,
I decided to do something, Kevin,
because I'd gotten so frustrated with the algorithm,
I thought it is time to give the algorithm
a piece of data about me.
So do you know what I did?
What'd you do?
I searched the word gay.
Very subtle.
Which like, in fairness, is an insane search query,
because what is TikTok supposed to show me in response?
It can show me all sorts of things, but on my real TikTok account, it just shows me queer creators all the time, and they're doing all sorts of things.
They're singing, they're dancing,
they're telling jokes, they're telling stories.
So I was like, I would like to see
a little bit of stuff like that.
Do you know the first clip that came up for me
when I searched gay on TikTok to train my algorithm?
Who was it?
It was a clip from an adult film.
Now, like explicit, unblurred?
It was from, and I don't know this,
I've only read about this, but apparently at the start
of some adult films, before the explicit stuff,
there'll be some sort of story content,
you know, that sort of establishes
the premise of the scene,
and this was sort of in that vein.
Mm.
But I thought, if I just sort of said offhandedly, you know, oh, TikTok, yeah, I bet if you just search gay, they'll just like show you porn, people would say, it sounds like you're being insane. Like, why would you say that? That's being insane.
Obviously, they're probably showing you
their, like, most famous queer creator,
you know, something like that.
No, they literally just showed me porn.
So it was like, again, so much of this process for me
was like hearing the things that people say about TikTok,
assuming that people were sort of exaggerating
or being too hard on it,
and then having the experience myself and saying like,
oh no, like it's actually like that.
That was interesting.
An alternative explanation is that the algorithm
is actually really, really good,
and the reason it showed you all the videos
of people being weird little freaks is because you are actually a weird little freak.
That's true. I will accept those allegations. I will not fight those allegations.
So, okay, you watched 260 videos. You reached this magic number that is supposed to get people addicted to TikTok. Are you addicted to TikTok?
Kevin, I'm surprised and frankly delighted to tell you,
I have never been less addicted to TikTok
than I have been after going through this experience.
Do you remember back when people would smoke cigarettes
a lot and if a parent caught a child smoking,
the thing that they would do is they say,
you know what, you're gonna smoke this whole pack
and I'm gonna sit in front of you
and you're gonna smoke this whole pack of cigarettes
and the accumulated effect of all that stuff
that you're breathing into your lungs.
By the end of that, the teenager says,
dad, I'm never gonna smoke again.
This is how I feel.
After watching.
After watching.
It cured your addiction.
After watching hundreds of these TikToks.
So, okay, you are not a TikTok addict.
In fact, it seems like you are less likely
to become a TikTok power user
than you were before this experiment.
I think that's right.
Did this experiment change your attitudes
about whether TikTok should be banned in the United States?
I feel so bad saying it, but I think the answer is yes.
Not ban it, right?
My feelings about that still have much more to do
with free speech and freedom of expression.
And I think that a ban raises a lot of questions; the United States' approach to this issue just makes me super uncomfortable.
You can go back through our archive
to hear a much longer discussion about that.
But if I were a parent of a teen
who had just been given their first smartphone,
hopefully not any younger than 14,
it would change the way that I talk with them about what TikTok is and it would change the way that I would check in with them about what
they were seeing, right? Like I would say you're about to see something that is going to make you
feel like your mind is in a blender and it is going to try to addict you and here is how it is going
to try to addict you. And I might sit with my child and might do some early searches to try to seed that
feed with stuff that was good and would give my child a greater chance of going down some
positive rabbit holes and seeing less of, you know, some of the more disturbing stuff
that I saw there.
So if nothing else, like I think it was a good educational exercise for me to go through.
And if there is someone in your life, particularly a young person,
who is spending a lot of time on TikTok,
I would encourage you to go through this process yourself,
because these algorithms are changing all the time.
And I think you do want to have a sense of,
what is it like this very week,
if you really want to know
what it's going to be showing your kid.
Yeah, I mean, I will say,
I've spent a lot of time on TikTok.
I don't recall ever getting done with TikTok
and being sort of happy and fulfilled
with how I spent the time.
Like there's a vague sense of like shame about it.
There's a vague sense of like,
sometimes it like helps me turn my brain off
at the end of a stressful day.
It has this sort of like, you know,
this sort of narcotic effect on me.
And sometimes it's calming,
and sometimes I find things that are funny.
But rarely do I come away saying like,
that was the best possible use of my time.
There is something that happens when you adopt this sort of algorithm-first, vertical-video, mostly short-form, infinite-scroll format.
You put all of those ingredients into a bag and what comes out does have this narcotic effect, as you say.
Well, Casey, thank you for exposing your brain to the TikTok algorithm for the sake of journalism.
I appreciate you.
And, you know, I will be donating it to science when my life ends.
People will be studying your brain after you die. I feel fairly confident. I don't know
why they'll be studying your brain, but there will be research teams looking at it.
Can't wait to hear what y'all find out. Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Jen Poyant.
Today's show was engineered by Alyssa Moxley.
Original music by Marion Lozano, Sophia Lanman, Diane Wong, Rowan Niemisto, and Dan Powell.
Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott.
As always, you can watch this full episode on YouTube at youtube.com
slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com.