Your Undivided Attention - America and China Are Racing to Different AI Futures
Episode Date: December 18, 2025

Is the US really in an AI race with China—or are we racing toward completely different finish lines? In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China's AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From the streets of Shanghai to high-level policy discussions, Xu and Sheehan paint a nuanced portrait of AI in China that defies both hawkish fears and naive optimism. If we're going to avoid a catastrophic AI arms race, we first need to understand what race we're actually in—and whether we're even running toward the same finish line.

Note: On December 8, after this recording took place, the Trump administration announced that the Commerce Department would allow American semiconductor companies, including Nvidia, to sell their most powerful chips to China in exchange for a 25 percent cut of the revenue.

RECOMMENDED MEDIA
"China's Big AI Diffusion Plan is Here. Will it Work?" by Matt Sheehan
Selina's blog
Further reading on China's AI+ Plan
Further reading on the Gaither Report and the missile gap
Further reading on involution in China
The consensus from the international dialogues on AI safety in Shanghai

RECOMMENDED YUA EPISODES
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
AI Is Moving Fast. We Need Laws that Will Too.
The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Hey everyone, welcome to Your Undivided Attention.
I'm Tristan Harris.
In 1957, two events turned up the heat on the Cold War between the United States and the Soviet Union in a major way.
The first was the launch of Sputnik, which showed the world that the Soviets were far ahead in the space race.
The second was the release of a government report called the Gaither Report that warned of a, quote,
missile gap between the two superpowers.
And according to the report, the USSR had massively expanded their nuclear arsenal
and America needed to do the same in order to ensure mutual destruction.
JFK made the missile gap a central theme in the 1960 election.
And after he won, he dramatically accelerated the buildup of American nuclear weapons
starting what we now think of as the nuclear arms race.
But today, we know that the Gaither Report was wrong.
Historical accounting from Soviet documents and early satellite imagery
showed that the USSR was actually far behind the U.S. in nuclear capability.
Rather than the hundreds of ICBMs that the report claimed that they had,
the Russians at the time only had four.
The point of the story isn't that the U.S. shouldn't have taken the USSR seriously as an adversary.
The point was, before we open a Pandora's box with the potential for global catastrophe,
we need to have the maximum clarity and situational awareness
and not be led astray by false narratives or misperceptions.
And if we had had that clarity in the 1960s,
we might have been able to do more to avoid the nuclear arms race
and seek diplomacy and disarmament instead of racing.
Well, today, we're on the brink of a potentially new catastrophic arms race
between the United States and China on AI.
And China had their own kind of Sputnik moment
when DeepSeek was launched in January of this year,
showing that their AI technology was nearly on par with frontier American AI companies.
And now you're hearing a lot of top voices in the U.S. government and technology use the same
familiar rhetoric of the past, the idea that if we don't build extremely capable AI, then China will.
And we must win at all costs.
So in this episode, we want to get to clarity on what the state of AI actually looks like in China.
Do they see the AI race like we do?
Are we racing toward the same things?
Are we in a race at all?
And what kind of concerns does the Chinese government and tech community have about AI
in terms of the risk versus rewards?
Today's guests are both experts on AI and China.
Selina Xu is a technology analyst who's written extensively about the state of AI in China
and co-authored a powerful op-ed with Eric Schmidt in the New York Times.
Matt Sheehan is a senior fellow at the Carnegie Endowment for International Peace,
where his research covers global technology issues with the focus on China.
Selina and Matt, welcome to Your Undivided Attention.
Thank you for having us.
Thanks. Great to be here.
So I want to start by asking you both a pretty broad question.
What do you each see as the most persistent misconception that Americans have about China and AI?
For me, the biggest misconception is the idea that Xi Jinping is personally dictating China's AI policies, the trajectory of Chinese AI companies,
that he has his hands very directly on all of the key sort of decisions that are being made in this space.
And, you know, Xi Jinping is the most powerful leader since Mao.
He runs an authoritarian single-party political system, so he clearly has a lot of power.
But just on a very practical basis, most of this is happening at levels of detail that he's just not involved with.
And that even senior officials within the Chinese Communist Party are not involved with.
There's a huge, diverse array of actors across China, within the companies, within research labs, within academia, the bureaucracy, that all have a major influence on China's AI trajectory, how they see risks, how they see the technology developing. And those people are constantly feeding into the political system. They're shaping how the government thinks about the technology. They're developing the technology themselves without really hands-on guidance from officials in some cases, in many cases. And understanding that diversity of actors and the role that they play in the ecosystem is critical to being
able to understand where China is going, and in some cases maybe affect where they're going on this.
Just to briefly elaborate on that, because there is just this narrative that China is run by the Chinese Communist Party
and Xi runs the Chinese Communist Party. So it feels from external views that he really is running things.
How do we know that things are coming from these different places? What's sort of the epistemology we use?
One of the main focuses of my research is to essentially reverse engineer Chinese AI regulations.
So to take a Chinese AI regulation like their regulation on generative AI and say, where did all the ideas in this regulation come from?
Can we trace them backwards through time and find, oh, this idea originated with this scholar at this university who essentially popularized this concept?
And I'll just give like one very practical example of this.
Their second major regulation on AI was called the Deep Synthesis Regulation.
And specifically, what they were trying to do is they were trying to regulate deep fakes.
And so for a long time, the conversation in China is how we're going to regulate deep fakes.
And then Tencent, one of the biggest technology companies in China who creates WeChat,
who has a ton of money invested in entertainment, video games, digital products, all things that use generative AI.
They started like, everyone talking about deepfakes all the time isn't so great.
We need to just kind of pivot this conversation a little bit.
So essentially, they did what a lot of American companies do.
They did corporate thought leadership, where they started releasing reports on deep synthesis,
how that's really the better term for this technology, and we should really understand all the benefits of it.
And we see just very directly that term, it originated from inside of them.
It made its way into official discussions, and it became the title of a regulation and affected how that regulation was made.
And that's happening at a bunch of different levels across companies, across academics,
think tanks. So it's, yeah, it's a diverse ecosystem. I think the way to think about
Xi Jinping in relation to it, or just say senior leaders, they're kind of the ultimate
backstop. You know, if they are directly opposed to an idea and they're aware that that
thing is happening, they're going to be able to put a stop to it. But in most cases, they don't
have an opinion on the details of AI regulation. They don't have an opinion on what is the most
viable architecture for, you know, large models going forward. And so those things originate
elsewhere. That's super helpful. Selina, how about you? What are some of the most powerful
misconceptions about AI in China? I think this is one that increasingly more people have
started talking about, which is that we've heard a lot about AGI, and the U.S. and China being in a race
to artificial general intelligence, which is AI at human-level intelligence. And I think
if you look at what's really happening on the policy level, including in a lot of companies,
outside of some of the few frontier labs like DeepSeek,
most of these companies are thinking very much about AI applications,
AI-enabled hardware, or thinking about,
oh, if you're a local government official,
how do you integrate AI into traditional sectors,
into things like manufacturing?
So I think this is the kind of the thing you're seeing on the ground in China right now
instead of this very scaling-law-motivated,
heavily leveraged bet on deep learning.
Okay, so Selina, ostensibly both the U.S. and China,
you know, the U.S. at least, thinks that we're racing to this sort of super god in a box,
you know, AGI racing towards superintelligence.
And that's what this whole race is about.
Because if I have that, I get this permanent dominating runaway advantage.
And you're saying that China does not necessarily see AGI as the same prize.
Could you just elaborate on this?
Like, let's really get to ground on this because it is the central thing that's driving
the kind of U.S. approach to AI right now.
Yeah.
Yeah, I think caveat here, first and foremost, it's hard to exactly know what China's top leaders are thinking,
but we can look at what has been happening on the ground in the industry and also in policies.
So if you're looking at the AI Plus plan, for instance, which is this major national strategy that was released,
you don't really see it: there is no mention of AGI.
Secondly, when you look at what they're actually championing, it's very much embedding AI into
traditional sectors like manufacturing and industrial transformation and also emerging sectors
like science and innovation or even like governance. So it's very much application focused
and all of the stuff that they are trying to push for is very much, how do we use AI,
massively deploy it, so as to actually see a real productivity boost and improve our economy.
So that is kind of the way people are thinking about AI. It is a bit instrumentalist.
They aren't trying to build AGI. They're trying to, like, make a profit.
There isn't this kind of anthropomorphic machine god or like the lingo that you see here in the Bay Area.
And that might be because of China's history with other kinds of technologies,
which is kind of interesting philosophically.
But I think also at the same time, it's very much because they don't have the cultural context in the past
that a lot of people in Silicon Valley have been educated on, from, like, The Matrix to, like, Her,
and, like, thinking about AI in, like, the Turing test way.
Yeah, let's break that down a little bit more because so much of this comes down.
to the philosophy or religion almost
or the kind of historical roots of where
your conceptions of AI come from.
And would you both just comment a little bit more
on kind of the roots of the AI philosophy
in Silicon Valley versus the roots of
what are the philosophical or even sci-fi
or just other sort of cultural lineages or ideas
that inform what AI is
for both cultures?
You know, the leading labs
in the United States, they were founded
very much on the belief.
And at the time, I would say it very much
was a belief that we were going to get to artificial general intelligence,
and then that was going to rapidly transform into superintelligence,
and this could have essentially infinite benefits,
or it could wipe out the human race entirely.
Like, that is baked into the DNA of OpenAI, Anthropic,
and some other leaders, a lot of leading researchers in this space.
Ilya and Sam Altman were writing about this in the 2014-2015 kind of days,
or people talking about AGI, Shane Legg at DeepMind,
you know, talking about this in the early 2000s on internet forums.
This is like a very deep, sort of almost transhumanist, influenced cultural idea.
And yeah, you know, it builds on a legacy of the Terminator movies.
It builds on a legacy of science fiction.
And it's not to say this is all siloed in the United States.
Like Chinese people also read international science fiction.
They, many people in China sort of share some of these beliefs.
But I'd say when you think about the DNA of the leading companies, it's very unique in the United States.
When it comes to the Chinese companies, you know, again, we kind of have to disaggregate the different actors here.
and even just individuals.
I think the way Selina characterized
the Chinese government's position on this is exactly correct.
They are very focused on application.
They're saying, how can this technology
help me achieve my political, economic, social goals?
How can it upgrade my economy?
How can it jump over the middle income trap?
How can it empower the party to have greater control?
That's their focus.
But you also do have some people, like the founder of DeepSeek,
who is himself, you know, as we'd say in the U.S.,
AGI-pilled.
He does believe that sometime,
In the perhaps not too distant future, we will achieve something like Artificial General Intelligence.
This will probably have a lot to do with how much computing power we put into the models.
Pretty similar, I think, from what we can tell, from the public statements he's made,
to the way that people like Sam Altman view this.
You know, he's operating within an ecosystem.
He has limits on the compute that he can access.
He has limits on the government that he's dealing with, the talent that he has at his disposal.
So it's not to say that because the founder of DeepSeek believes in AGI,
that means, you know, that's where China is heading.
But there is this diversity of actors, government,
sort of influential policy people, entrepreneurs, engineers.
Selina, do you have any parsing of that on top of what Matt shared?
Yeah, but I would say in response,
I think the main thing here is DeepSeek has been pursuing a slightly different path
than some of the U.S. frontier labs, possibly because of compute constraints.
They are very much more efficiency focused.
And that's why I think they've poured so much, like, technical resources and attention into basically achieving highly efficient models.
And that is kind of the goal he's going towards.
So that's why in January, when people kind of woke up to DeepSeek, part of the surprise was how good it was, bearing in mind, you know, the kind of cost and compute. Even though that's kind of vague and murky, it's definitely, you know, at least an order of magnitude lower than some of the training costs at U.S. frontier labs.
So I think that's kind of a different approach that they're pursuing.
They are AGI-pilled, but even then I think what they're doing is not, like, oh, scaling and building ever-bigger data centers that can compare with Anthropic and OpenAI.
And that's just not the reality in China.
May I build on that a little bit?
Yeah, please go ahead.
You know, one way to think about this is like, where is the government putting its resources and do companies need the cooperation of government resources in order to achieve their goals?
I think in the United States, especially over the last year or two, the way OpenAI has been operating,
not just with the U.S., but with governments around the world,
is this belief that fundamentally this is going to be a large-scale, energy, computation,
huge financial costs, you know, striking deals around the world
to build out these data centers that they believe are going to be essential.
And so if we're sort of thinking about it through that lens,
and we look over at China and we say, okay, where is the Chinese government putting its bets down?
And I think the AI Plus plan that Selena described earlier is a pretty clear signal
that where they are putting their money down
and their sort of bureaucratic resources down
is on applications.
The AI Plus plan, it sounds a little weird to our ears.
It basically means AI plus manufacturing,
AI plus healthcare.
Essentially, we want to use AI to empower all these other sectors.
And that's where they are telling their local officials
saying, you know, if you're going to subsidize an AI company,
subsidize an application that makes sense in your area,
subsidize these things.
They're not saying, hey, let's all consolidate
all our computing resources and devote them
just to DeepSeek so that they can push their one sort of mission.
Well, this is very interesting.
And Matt, you said earlier in a different interview that the Chinese Communist Party is like a big
HR department, that it's kind of run like there's these performance reviews, and they set
these top-level goals as a nation, and they say our goal is to make sure we're applying
AI to all these different industries, and we measure the performance of each local official
in each province and then down to each city according to how good they are at doing that.
And what you're saying is they're not saying to all those officials, we're going to
judge you based on how good you are creating a super-intelligent God-in-a-box Manhattan
project. We're judging you based on the application of AI. Still, there might be some who are
listening to this and saying, yes, but how would we know? What if China's secretly pouring
a Manhattan Project-sized amount of money into DeepSeek? Because it's important to recognize that
they did recently start locking down and tracking the passports of DeepSeek employees. They're sort
of treating them kind of like the nuclear scientists. One could sort of view it that way. I'm trying to
steelman these different perspectives because there's sort
of this, as we talked about in the opening, with this missile gap idea, there is this
deep fear that if we get this wrong and they are building a Manhattan project, and that is
the defining thing, then we could lose here. So how would you sort of further square those
pictures? Yeah, I think it's very important to steelman these, and to also acknowledge how
much we don't know and can't know about what's going on inside China. And I do not rule out the
possibility that sort of somewhere deep in a bunker in Western China, they are slowly trying to
accumulate some level of chips that would, you know, power a supersized data center. Like,
we cannot rule that out. I hope our intelligence agencies are very much on this and would have,
you know, awareness of it before anything came to fruition. But I think, again, to just kind of
where are they putting their money and their bets down, like, if that's what you're trying to do,
we know that China as a country on the whole is compute constrained. They have a limit on how much
computational power, how many chips they have in the country, largely due to U.S. export controls.
And just explain that for a moment, just for people who may not be tracking.
So the U.S. started these chip controls in what year was it?
We stopped basically giving China these advanced AI chips.
Yeah.
So the big restriction came in 2022 and has been updated every year since then:
2022, 2023, 2024.
And I guess the sort of simplest way to understand it is that in order to train and deploy
the best AI models, you need a lot of computing power.
and you want that computing power in the form of very advanced chips
that are called GPUs, made by Nvidia,
a super hot company right now.
And basically what these different executive orders have said
is we will not sell the most advanced chips to China
and we will not sell the equipment needed to make the most advanced chips to China.
We're going to ban the export of these things.
Now, these export controls are very imperfect.
They have a lot of holes in them.
There's smuggling.
Essentially, they've needed to update it
because the companies, NVIDIA, specifically,
are constantly sort of working their way around it.
But despite all those sort of, you know,
holes in the export controls,
they have imposed large-scale compute limits on China.
The United States and U.S. companies,
if they want to access maximal compute, they can do that.
And Chinese companies just have less,
Chinese companies and government.
And so if you're in that situation,
just say that you have, you know,
5 million leading chips.
That's probably more than they actually have.
If you have 5 million leading chips
and you want to lead this kind of Manhattan project thing,
you're probably not going to tell your local officials all around the country
to be deploying AI for health care and manufacturing
and all these local scenarios.
Because they'd be using up all the chips.
So you're saying if they succeed in this AI Plus plan,
then it would take away from their success as a Manhattan project.
They couldn't do both realistically given the finite number of chips
that are currently available to them because of these controls.
Yeah, a lot depends on how many chips you end up needing
for the quote-unquote Manhattan Project.
But just in terms of signaling, the signaling that they're sending to their own
officials is focus on applications, and they're deploying resources in that direction.
Yes, Selina, do you want to add to that?
Completely agree. And also, I think the TLDR is just that, like, if they're trying to
build a Manhattan Project for AGI in China, the sheer amount of chips that are required
for that, that's being smuggled in, I think there's no way that any intelligence agency or
Nvidia itself would be unaware.
Selina, you recently attended the World Artificial Intelligence Conference in Shanghai,
and we'd just love to take listeners on kind of a felt sense for what AI feels like as it's deployed,
because I think the physical environment of AI reaching your senses as a human is very different
in China than in the U.S. currently.
So could you just take us on a tour like viscerally?
What was that like?
Yeah.
And there are a lot of different kinds of AI, I would say, and I don't know whether you, Tristan,
have been to China, but pre-generative AI and like LLMs and chatbots.
Like there was already digital payments, people paid with their like palm or like facial recognition
while you're entering the subway.
Those are other kinds of AI that are already very visceral and kind of all around you.
This time around in July for the World AI Conference, on top of all of that, I think one of the
biggest things that really struck me was how just like pervasive robots were. They were everywhere.
So we, it was basically in this huge expo center. And I think about like 30,000 people were there.
All the tickets were sold out. A lot of like young children, families, even some grandparents.
It was like a whole-of-society kind of thing. And it was like a fun weekend hangout. And everybody was
just like milling around the exhibition booths, shaking hands with robots, like watching them like fight
each other MMA style.
There were also robots just like walking around.
Some of those were like mostly remote controlled by people.
There were a lot of AI-enabled hardware stuff like glasses or like wearables,
including some like AI plus education like dolls, you know,
so all kinds of innovative applications of AI in like consumer-oriented ways.
And you just see people interacting with AI in a very physical, visceral way
that you don't really see here in the U.S.
Like, here people talk about AI as, like, oh, this far away machine god thing.
But, like, in China, it was very palpable.
It was extremely integrated into a real world environment.
Some of it is hype.
Like, a lot of the humanoids and robotic stuff is still very nascent and not very mature.
And you can see some of the limits of that when, like, robots fell down or didn't really react in the right way.
But I think the enthusiasm and, like, the optimism really was very, very interesting.
Like, people were actively, like, excited about AI, right?
Versus here it's more like The Terminator or something.
Yeah, I wanted to ask about that because I feel like if you went to a physical conference like that
and given there are far fewer robots and robot companies in the U.S., although we do have some leading ones,
I still feel like the U.S. attitude is more, this is bad.
Like a lot of the feeling is just, this is creepy, this is weird, I don't really like this.
But the thing that I keep hearing is that when you're there walking the grounds,
everyone is just pumped and excited and optimistic about AI.
And I'd like to develop that theme a little bit more here about why one country seems to be more pessimistic about AI and the other,
China, is largely optimistic.
But Matt, just curious here to add on to Selina's picture here.
You also were, I believe, in China in the 2010s as the mobile internet was kind of coming online.
And that kind of had a role, I think, in how China sort of sees technology optimistically versus more pessimistically here.
Absolutely.
And I think maybe first touching on the sort of optimism, pessimism towards technology more broadly,
and then we can bring it into AI.
I think, you know, there's a lot of questions about exactly what the survey results show,
or whether these are good survey results.
You know, how do we know this?
It tends to rely a lot on anecdotes and sort of, you know, vibes.
But I think maybe the most important factor here is that the rise of information technology,
eventually the Internet, now AI, the way it's come into people's lives in the last 45 years since, say, 1980.
And if you look at what happened to China since 1980 versus what happened in the United States since 1980,
it's very different.
This has been essentially the biggest, longest economic boom in Chinese history.
And normal people have seen their incomes multiply by factors of, you know, 10 or even like 20 over that period of time.
Basically, since information technology came into the world, Chinese people's lives have been getting better.
In the United States, it's very hard to say, you know, are Americans' lives better?
But a lot of people associate technology with impacts on labor, with more dysfunction at a political level,
misinformation, the damaging effects of social media on kids. And this has just been a period of time
when the United States has largely turned kind of more pessimistic about our society, our prospects
at a national level, and I think at an individual level. Or, you know, you can take it to the last
10, 15 years since the rise of the mobile internet. This has been, you know, one of the most fractious
times in American political history. And it's been, with some exceptions, a pretty good time in
China, at least from the perspective of someone who's just trying to earn more, live better,
have more convenience in their lives. So that's a very, you know, 30, 40,000 foot level take on
the sort of optimism, pessimism, but I think it is pretty foundational to how people look at these
things. Yeah, I lived in China 2010 to 2016, and this was really the explosion of the mobile
internet in China. Obviously in the U.S., you know, mobile internet was expanding rapidly to, but this is
when China was very rapidly sort of catching up to and then surpassing the global frontier
of mobile internet technologies. What is the mobile internet doing for ordinary people?
And to me, some of those sort of visceral memories from that time are around 2014, 2015,
when mobile payments kind of kicked into high gear, you suddenly had this explosion of different
real world services that were being empowered by the mobile internet. So here, you know,
in the United States, obviously we have Uber and Lyft. These are, you know, real world service
empowered by the mobile internet. In China, they had their own Uber and Lyft, but they also had just a
huge diversity of local services. You know, as of 2013, 2014, someone will come to your house and do your
nails for you with just, like, four clicks. The guy who's literally selling, like, baked potatoes out of an
oil drum has a QR code up there in 2014 to have you pay via that. It was this very visceral feeling
that like technology is integrating to every factor of our lives and in large part it's making
things way more convenient. Like when I got to China in 2010, if you wanted to buy a train ticket,
especially during Chinese New Year, it meant you get up really early and you wait in a super long
line for a very slow, bureaucratic, in-person ticket vendor to sell you the ticket. When sort of
WeChat, mobile payments, all that got integrated into government services, including ticket selling,
suddenly it became way more convenient, way easier to do these things. And of course, like mobile
Internet has led to convenience in both places. But having, you know, lived at sort of the center of this
in both countries. I just think it had a much more tangible feeling in China and a feeling that
it's genuinely like making our lives better at this point in time. Just to add to that,
I mean, the thing that I hear from people who either visit China, or even Americans who lived
in China for the last little while and then come back to the U.S.: when you visit China, it feels like
going into the future. And everything just works, like you're 10 or 20 years further into the future
than in the U.S. Then when people have actually been in China for a while and they come back to the U.S.,
it feels like you're going back in time
and things feel less functional and less integrated.
I'm not trying to criticize one country or another.
I think it's actually based on kind of leapfrogging, right,
where the U.S. had to build up a different infrastructure stack,
and they didn't jump straight into kind of this 21st century,
gig economy, immediate mobile payments built into everything,
whereas China really did do that.
Yeah.
And I just on our earlier conversation on China in the 2010s,
I should note that simultaneous to this mobile internet transformation
was the huge rise in AI-powered surveillance
of citizens. You know, facial recognition everywhere. You want to literally enter your sort of gated
community, and in China gated communities are much more common. They don't indicate wealth.
To just enter your little housing community, you might need to scan your face. And so, you know,
at the same time that we're pointing to all the conveniences of this, this also has a very much
a dark side that is just important to note here. Absolutely. I think it is really important to
note how, obviously, the surveillance-based approach, which we would never want here in the West,
the other side of it is just this fluency of convenience, where everywhere you walk, you're already
sort of identified, which obviously creates conveniences that are hard to replicate if you
don't do that. And that's one of the hard trades, obviously. Yeah, absolutely.
A recent Pew study showed that 50% of Americans are more concerned than excited about the
impact of AI on daily life. And a recent Reuters poll showed that 71% of Americans fear AI
causing permanent job loss. What is the public mood in China versus the U.S. on AI and job loss,
actually? Because I think this is one of the most interesting tradeoffs that these countries are
going to have to make, because the more jobs you automate, the more you boost GDP through
automation, but then the more sort of civil strife you're dealing with, if people don't have
other jobs they can go to, unlike other industrial revolutions.
I think it's definitely something on people's minds, but not necessarily related to AI.
Like, in the past few years, youth unemployment has been a very serious issue before the government stopped releasing the statistic.
I think it was about like at least 20 to 25 percent of youths are basically unemployed in China.
So that's, I think, something that the society has been grappling with and something policymakers are obviously concerned about.
Did you say 20 to 25 percent youth unemployment?
Yeah.
Wow. Seems high.
Yeah, it's quite crazy.
And because it was so high, they stopped releasing the statistics.
So we can only speculate, like, how high it is.
I expect it to be around the same range.
But if you're talking to, like, you know, young people in China now
who are trying to funnel into, you know, STEM fields or AI vocations,
there is, like, a huge pool of AI engineers and, like, an increasingly limited number of jobs.
So I think this is something definitely that young people are facing and there's real anxiety.
But on the other hand, when you're talking to, you know, policymakers and experts in China,
the sense I've gotten is they're strangely, mostly positive about AI,
and they're kind of slightly blasé about, like, oh, the effects of unemployment.
Like one person I spoke to who basically advises the government talked about, like,
the example where they went to do field research in Wuhan,
which is a city in China that has a huge penetration of autonomous vehicles.
And they talked to some taxi drivers about, hey, how concerned are you about self-driving cars?
And they said taxi drivers generally told them that they are excited
to work fewer hours and are excited about the improvement in labor conditions. And I'm like,
okay, that is the kind of sentiment that they're trying to basically use to justify, I think,
how people are feeling about it. They're slightly probably concerned, but the main thing is
to upskill them. And in general, this is a better thing for society. Obviously, the tune would
change. I think in China a lot of times the pendulum just swings based on, you know, how policymakers
think. Right now, it seems to me they're pretty positive on AI as like more of a productivity booster
rather than like a drag on labor,
but obviously that might change down the road.
And in terms of just everyday people,
I think youth unemployment is just something
that they're really just thinking about
and everyone knows and acknowledges.
I don't know how much they tie it to AI.
But I've heard from friends who work in the AI industry
about just how cutthroat it is to get a good job
and like the sheer amount of like PhD graduates
who are trying to get, like, the right number of citations
in the right journals so as to secure a job at a place like Tencent or Alibaba.
May I chime in on that?
Yeah, please.
Yeah, the picture I have of this is slightly different, or at least I think it's evolved substantially
in the last, say, six months to a year.
I agree that if you go back maybe a year or maybe two years, both Chinese policy scholars,
you know, the people sort of advising the government.
And it would seem the Chinese government were very blasé about the unemployment concerns around AI.
Like one of the things I do in my job is I facilitate dialogue between sort of Chinese policy,
AI policy people and American AI policy people. And in one of our first dialogues, we had
everyone from the two countries rank a series of risks in terms of how worried are you about
this risk: existential risk, military applications of AI, privacy, seven or eight different
things. And in that risk ranking, which I think was taking place in early 2024, the Chinese
scholars ranked the unemployment concerns second to last out of, I think, eight risks. It was really
low. And when I was thinking about, you know, why is this at the time? My sort of shorthand for it
was China has undergone just incredible economic disruption and transformation in the last
30 years. And it's basically come out okay. In the 1990s, they dismantled a huge portion of their
state-owned enterprise system. Millions of people became unemployed because of reforms to the
economic system. And they're like, basically, if we grow fast enough, this will all come out in the
wash. And of course, there are, you know, long-term costs to that, but they seem to have this
faith that, you know, if you can just keep growing at this extremely high rate, then the job
stuff will figure itself out. That, I think that has changed a bit over the last six months to a
year. Again, this is partly, you know, anecdotal speaking to people over there, kind of reading
between the lines of some policy documents. But I have heard people saying that this is a sort of rising
in salience as a concern for the government. And, you know, in some ways, the signals they're sending
are somewhat conflicting.
Like, on the one hand,
they're essentially like all engines go on applying AI
and manufacturing on robotics.
So they're pushing the automation as fast as they can,
at the same time that their concerns about the labor impacts
are also rising.
You know, we might say that that's not a totally coherent sort of strategy,
but government policy is not always 100% coherent.
They're still feeling out these two things.
But people have been suggesting that essentially this is rising in salience
and it might end up affecting sort of AI policy going forward, but it's speculative.
That's fascinating, Matt, that the economic disruption from the past and the fact that they were able to navigate that successfully
means that people see that maybe their job's going to get disrupted, but no big deal.
We did that once before.
We'll retrain.
Of course, what's different about AI, especially if you're building to general intelligence, is that it's unlike any other industrial revolution before,
because the point is that the AI will be able to do every kind of job, if that's what you're building.
So there actually is a secondary benefit of approaching narrow AI systems,
the sort of applied narrow practical AI,
because you're not actually trying to fully replace jobs.
You're maybe augmenting more jobs,
but you're not having the AI eat every other job.
And then when you kind of zoom out,
the metaphor in my mind for this visually is something like
the U.S. and China, to the degree they're in a race for AI,
they're in a race to take these steroids
to boost the kind of muscles of GDP, economic growth, military might,
but at the cost of getting kind of internal organ failure,
like you're hyping up the attention economy, addiction, doom scrolling thing, you're hyping up joblessness
because people's jobs are getting automated, at the cost of boosting the steroid levels.
And so both countries are going to have to navigate this. But it's interesting that if you do
approach more narrow AI systems, you don't end up with as many of those problems, because people
can keep moving to do other things.
I think that's a great metaphor. I've never heard that before,
but steroids is about right on the sort of, you know, we've been through disruption before,
we can deal with it.
I would say I would differentiate a little bit
between the Chinese government,
which is thinking from a 100% macro perspective,
and an individual person.
I think if you told an individual Chinese person,
your job is going to be automated.
They might have something to say about that.
I guess the question is,
similar to the U.S. question for UBI,
let's say we live in a completely automated society,
people don't have to work,
but is AI going to be able to generate enough revenue
to support literally billions of people
on universal basic income?
Like, as far as I've heard in the West, that math doesn't work out.
Yeah.
I mean, does the math math in this situation?
I don't know.
I think it's mostly, in many cases, it's going to be a political decision.
And, you know, I think at a very high level, we might think, okay, China, one-party system, communism,
like they should be all good with just, you know, massive redistribution.
And I think that's possible that it does pan out that way.
But quite interestingly, you know, Xi Jinping, who's a very dedicated Marxist in terms of ideology
or Leninist in a lot of ways.
He personally, from the best we can tell from good reporting on this,
is actually quite opposed to redistributive welfare.
He thinks it makes people lazy.
And, you know, China, despite being nominally a socialist, on-its-way-to-communism country,
has a terrible social safety net.
You know, people are largely on their own,
much less of a social safety net than the U.S.
And so...
Really, than the U.S.?
Yeah, yeah.
I mean, they have essentially, like, welfare that is
paid to people who cannot work or are disabled. It's extremely low. There's nothing like Obamacare
over there. Maybe a lot of people have health insurance in some form, but access to actually good
medical care is really not great. And yeah, it's one of these contradictions of modern China.
They are simultaneously a communist party and sort of deeply committed to certain aspects of
communism, while at the same time being more cutthroat in terms of individual responsibility than
even the United States. That's so interesting. It's definitely not, I think, the common view of China
from outside; knowing that it's a communist country, you would think the opposite.
Let's just add one more really important piece of color here that I think speaks to a long-term
issue that China's having to face, which is that China's population is aging very rapidly,
and they're facing a really steep demographic cliff. Peter Zeihan, the author, has written extensively
about this. There's a sort of view of demographic collapse. Let me just cite some
statistics here. China's had three consecutive years of population decline, down 1.4 million since
2023. They're on track to be a super-aged society by 2035, with one retiree for every two
earners, and that would be among the first in the world. And so how can you have economic growth
if you have this sort of demographic collapse issue? And this has led a lot of people in the
national security world to say that China is not this strong, you know, rising thing. Maybe it looks
that way now, but it's actually very fragile, and demographic collapse is one of the reasons.
Now, some people look at this and they say, but then AI is the perfect answer to this,
because as you are aging out your working population, you now have AI to sort of supplement all
of that. And I'm just curious how this is seen in China, because this is one of the core things
that has been sort of named as a weakness long term. I think one of the reasons that the
Chinese government and also a lot of the companies have been in a frenzy about like humanoid robots and
other kinds of industrial robots is precisely because of this reason. If you're thinking about in terms
of the demographic decline, the shrinking workforce, a lot of the gap has to be filled in by like
automation and that's in the form of industrial robots. If you're looking at like installations,
I think China has outstripped the rest of the world over the past few years. But if you're thinking
about elderly care, companionship, how do you help the elderly
and, like, the growing silver economy continue to expand, you kind of do need AI, not just in terms
of, like, AI companions, but also like, oh, humanoids in some elderly homes, which I think some
local governments have already started to push forward in pilot programs. So I think that's how
people have been grappling with that. But I think apart from that, like, whether AI would be able
to really help elderly people in, like, brain machine interfaces, that's still something that
people are starting to research, and I don't think there's a very clear sign of, like, you know, how
close we are to that.
Yeah, just building on that, I think, you know, the dynamic you described is sort of
right on all the fundamentals. And there's this idea, like, essentially we have all these problems,
and this isn't unique to China or just aging. We have all these problems, they're getting worse,
we don't have any solution for them. But is AI going to be this, you know, rabbit that we pull out of a
hat that's going to resolve them? And I would call that a little bit of magical thinking, or at least,
you know, wishful thinking. It's important to put the aging stuff in the context of their sort of
broader population policies. You know, China for decades had the one-child policy, which was the
greatest sort of population-limiting policy that you can have, even though it was never exactly
one child per family. It took them a long time to realize the damage that this was going to have
on their economy long term, but they did realize it. When I was living there working as a reporter,
was when they put an end to the one-child policy. And since about 2015, they've actually been telling
people: have more children, have more children, here are subsidies to have children. And it's just
not having the effect that they want. And it's a very sticky and intractable problem. And it's not just
China. It's across a lot of countries in East Asia as well as, you know, other societies that aren't
bringing in that many immigrants. So. Which is another issue for China, that they're not actually
bringing in lots of immigrants from all around the world, because they value, you know, their, yeah.
Yeah, absolutely. So, you know, is AI going to be the sort of magic wand that gets waved and
resolves all these problems? I can see why people in government, in society, want to believe that,
and it could end up being true, but probably not something that you should, you know, bank on if you're
the leader of hundreds of millions of people.
So now switching gears, yet again, in the U.S., there's a deep sense that we're in a major
AI bubble, the amount of money that's been invested and, you know, the sort of circular deals
that are going on between Nvidia and OpenAI and Oracle.
And this is just a big house of cards.
I'm just curious, is there a view that there's a big bubble in AI in China?
From my sense, not yet.
Maybe in terms of robotics, I've heard from several VC people that, hey, there's totally
a robotics bubble right now in China in terms of the sheer amount of funding, new companies.
If you're looking at some AI adjacent stuff, like if you're looking at like self-driving cars,
There was a bit of that previously.
But now if you're thinking about LLMs, a lot of consolidation has happened.
And right now, in the AI space, I think a lot of the funding has dried up for frontier model training.
And most of the funding has gone into, like, AI applications.
So I think in LLM or, like, AI frontier stuff, there isn't really a bubble in China.
Yeah, you know, to have a bubble, you need to have huge amounts of money flowing into something and overhyped valuations.
And the very sort of ironic or difficult to grasp thing in China today is that despite the headlines,
despite how well a lot of leading Chinese models are doing when you kind of compare them on performance,
the Chinese AI ecosystem is actually very cash-strapped.
They're very short of funding.
That's one of the biggest obstacles, especially for startups, but also for big companies.
And there's a lot of reasons behind that.
I'd say, you know, the venture capital community in China is very new.
It kind of started around 2010.
So it's only 15 years old.
And around the year 2022, that venture capital industry basically collapsed due to a bunch of things: COVID, the Chinese tech crackdown of that period of time, when they were sort of beating up all their information tech companies, and just the fact that sort of a lot of the first wave VC investments didn't pay off.
So when you look at the actual total amount of venture capital that's being deployed in China, it's been going down every year since 2021.
And even in AI, which is, it's almost hard to wrap our head around, but the venture capital being deployed is actually going down in China.
Now, there are companies that can get around this, you know, essentially deep seek.
They started as a quantitative trading firm so they can sort of print their own money and don't have to take on as much venture capital.
Some of the big companies, Tencent, Alibaba, they have, you know, huge profit-making arms that they can sort of funnel the money into it.
And so it's not to say that everybody is broke, but, you know, the investment is low.
then people might say, well, what about the government?
You know, isn't the government just like flooding them with resources?
Now, you know, the government is putting a substantial amount of money into this.
But the government is actually also much more cash-strapped today than it has been at any point in the last 20-plus years.
This is in large part because the collapse of the real estate bubble in China, the one real bubble over there,
has led to huge shortfalls in local government money, which means the central government has to give money to local governments.
It's, you know, it's a complex system, but I'd say the kind of the shorthand is just,
like, well, the U.S. seems to just be having money flooding into it from a bunch of different
directions. In China, it's very cash constrained. I'll just double tap on what Selina said about
robotics. Robotics is one area where there probably is a bubble. You have a bunch of these
startups that shot to huge valuations that are trying to list very quickly. And they might have
good technology, but it's basically like demonstration technology at this point. It's not actually
being used to make money in factories. And those companies, I think many people would say, are
due for a correction. So we might have our LLM bubble burst and their robotics bubble burst, and then,
you know, where do we go from there? And actually, just to add one more thing, I think instead of
hearing bubble, the word I hear the most in China the past few years is involution, which in Chinese
is neijuan, which essentially just means excessive competition that's self-defeating, because there are just
ever-diminishing returns no matter how much more effort you put in. And that's been something that has
spread from electric vehicles to, like, AI chatbots to solar panels to everything.
Essentially all these companies grind on ever-slimmer profit margins and don't really
see a way to get their profit back.
And there's kind of no way out because of the list of reasons Matt has listed, you know,
it's hard to exit, it's hard to IPO, they want to go overseas, but there's just so much
competition and there's some pushback in other Western countries.
So I think that's a phenomenon that's being seen in China right now, like involution.
And how does that match with this sort of view from national security people in the West
that China is deliberately making these unbelievably cheap products to undercut all the Western makers of solar panels and electric cars and robots and things like that?
And this is part of some kind of diabolical grand strategy to, I'm not saying one or the other.
I'm just reporting out things that I hear when I'm around those kinds of people.
How do you sort of mix those two pictures together?
I think essentially both things are true, like the involution, which basically it means price wars.
It means there's way too many companies that have flooded into the new hot sector, and they're forced to compete on price, and they essentially sell their products for less than it costs to make them, and it leads to long-term consequences.
And that happened in solar panels when I was living there in the 2010s, but it's one of those things where you can have a sort of a price war, a collapse of the industry, and then what emerges at the end is actually still a quite strong industry.
That's what happened in solar panels.
The government, I think, at a very high level, does have a strategy of essentially, if you undercut international markets on price, you can dominate the market and then you can hold it permanently.
It's what's called, like, dumping in international trade law.
You sell something for cheaper, you destroy your competitors, and then, you know, yeah, some might say that was what, you know, companies like Uber might have done to taxis domestically.
So it's both like a self-destructive practice that bankrupts
tons of companies in China. And it also might be something that the government is okay with
on some level. They're currently having an anti-involution campaign sort of policy-wise. They think
that this is at this point more destructive than helpful. So they're trying to limit the damage of
this. But it's a complicated system. One of the main things that we often talk about in this podcast
is how do we balance the risk of building AI with the risk of not building AI, aka the risk of building
AI is the catastrophes and dystopias that emerge as you scale to more and more powerful AI systems
that either through misuse or loss of control or biology risks or flooding deepfakes, you know,
the more you sort of progress AI, the more risks there are. And at the other hand, the risk
of not building AI is the more you don't build AI, the more you get out competed by those who
do. And so the thing we have to do is straddle this narrow path between the risk of building AI
and the risk of not building AI. And all the way at the far side of that is the risk of, you know,
really extreme existential or catastrophic scenarios, which it seems like both the U.S. and China
would want to prevent. And yet, open sourcing AI has lots of risks associated with it, and
China is pursuing that. And one of the sort of key things that comes up in this conversation all
the time is, as unlikely as it seems that the U.S. and China would ever do something like
what the nuclear nonproliferation treaty was for nuclear arms, would something like that,
negotiating some kind of agreement ever be possible
between the U.S. and China, given shared views of the risks?
I think it's very possible. It's just like there's a long list of stuff
that the two presidents have to talk about. And obviously it doesn't have to
happen in this administration. A lot can change with the technology. I think
there is general consensus from experts and policymakers on both sides
when they talk in some of these track two dialogues, which are basically non-government
to non-government, while track one is government to government. And in these track two
dialogues, people generally can agree on a lot of things. These can include things like very basic
areas of technical research, like interpretability. How do you understand what's actually going on in an
AI model, like under the hood? There's also things like general safety guardrails,
evaluations, monitoring, things like that. And then some other stuff that was agreed on
during Xi and Biden's track one dialogue, on, like, you know, keeping a human in the loop when you're
talking about nuclear weapons. So I think there's a lot of stuff that's possible.
I think it's more of like a matter of mutual trust and that's something that's quite lacking today in our political climate.
Trying to say we need to cooperate with China on anything seems quite poisonous.
But I think if we can expand our imagination a bit and really just grapple with like the sheer necessity and the gravity of the situation,
there's a lot that can be done that's, you know, low risk and like an easy lift.
And I would just say it can start from people to people instead of just government to government.
It can be from companies to companies, experts to experts, and stuff like that.
And just to elaborate this in a visceral sense for listeners, did you attend the international
dialogues on AI safety?
Yeah, I did.
So could you just take us inside the room?
As an observer, but yeah.
Yeah, could you just take us inside the room?
For listeners who don't know, there are these dialogues where, just like during the Cold War
age, the American nuclear scientists met with the Russian nuclear scientists.
There actually was the invention of something called permissive action
links, which was a way of making nuclear weapons not fire in some accidental way.
There's a control system.
And there's a history of collaboration like that.
Could you just take us inside the rooms, Selina, of, you know, what does it feel like?
Do you hear Chinese AI safety researchers working with American researchers on, are they agreeing to specific measures?
Yeah, it's a great dialogue.
This year was the first time I actually was in the room for it in Shanghai.
So this happened on the sidelines at the World AI Conference, when you had people like Nobel Prize winner Geoffrey Hinton visit China and participate in both this dialogue and the conference.
And then you also had other people
from the Chinese side, like Andrew Yao, Zhang Ya-Qin, and people like that.
So it's a group of very leading AI scientists, and they get together to basically talk about
what are the risks and red lines that they most agree upon, and they issue a consensus statement.
So for anyone who's curious, you can read the Shanghai consensus afterwards.
But essentially, I think whenever you're in any of the sessions, there was always a lot of areas
of convergence. Essentially, people always agreed on fundamental things like loss of control, all
of these are very well known. But I think the real issue today is that you need the companies
who are building the technology to agree to these things. And right now, the race dynamics,
profit incentives, all of that is just, like, not converging to allow them to take these risks
very seriously. And even if you have the best scientists agree on these things, the current
landscape is basically that the companies are the ones who are building the technology. And it's very
different from, you know, the Asilomar conference, when, like, that was very much held in the hands of
universities and, like, those labs.
So, Matt, with that said, when you kind of ask, what would it take at the political level
if it's not going to happen at the researcher level, what do you see as possible here?
I think it's helpful to think of sort of a spectrum of worlds or outcomes.
You know, on one end is the most binding regulatory approach where the U.S. and China
agree on a very high level of, you know, very top-down system where we're both not going
to build dangerous superintelligence.
And then that international agreement gets filtered down into the two systems to
regulate domestically, and, you know, everything is safe. On the other end is just total
unbridled competition in which we think the other side is racing as fast as they can. They don't
have any sort of guardrails, and so we need to race as fast as we can and sacrifice the guardrails
in the interests of winning. And I think, you know, the first one of international agreement that
trickles down is at this point quite unrealistic, at least in the short term. In the halls of power,
there is such, such deep distrust between the countries. That might not apply to the president
himself or, you know, individuals. But when you look at sort of the entire national security
apparatus in the two countries, they tend to see each other as fundamentally in a rivalry.
Any promise would just be bad faith: you're just saying that to slow me down, and you're still
going to keep building it in a black project somewhere. And so I've got to keep racing.
Exactly. And given that, I think my hypothesis is kind of something in the middle, which
what it fundamentally rests on is the idea that I think the most important thing is going to be
how the U.S. regulates AI domestically for itself, for its own reasons, and how China regulates
AI domestically for itself and for its own reasons. China actually has a lot more regulations
on AI. There's a lot more compliance requirements, mostly centered around content control,
but now expanding beyond that. And essentially, I think both countries are going to be moving
in parallel here. They're both going to be advancing the technology. They're both going to be
seeing new risks come up. And my sort of thesis here is that we have safety in parallel,
where both countries are moving forward and regulating because the risks are not acceptable
themselves. And there can be this sort of light touch coordination or maybe just communication
between the two sides. We're not going to have any binding agreement. You know, I'm not going
to do something in the United States because I 100 percent believe you're doing the same thing over
there. But, you know, we have a best practice over here. We have something that we've learned,
like you gave the example of permissive action links.
We think this is a method by which you can better control AI models.
We're going to do it domestically, and we're going to maybe open source that, or we're going
to share it.
We're going to have a conversation with our Chinese counterparts about it.
It's not relying on trusting one another, but it's sort of building touch points and sharing
information about how to better control the technology as it advances.
And then maybe if we get to a point where both countries have developed really powerful
AI systems, and they've also, in some sense, learned how to regulate them domestically,
or at least they're trying to regulate them domestically, then maybe we're already in pretty
similar places, and we can choose to have an international binding treaty around this.
There's also the fact that we got to the point where we had so many nuclear weapons pointed at each other
that the risk was just so enormous that it was existential for both parties.
And even Dario Amodei from Anthropic has said, don't worry about DeepSeek, because we still have
more compute, and we're going to do the recursive self-improvement.
When you signal that publicly, you're telling the other one: oh, if you're going to take that risk, then I'm going to take that risk. But then that collective risk can be existential for both parties. And, you know, I heard you also say, you know, the need for basically red phones, like we had, communication between the two sides, and also red lines: how do we have red lines of what we're not willing to do? And you can imagine there being, at the very least, some agreement of not building superintelligence that we can't control, or not passing the line of recursive self-improvement. Or another one I've heard is not shifting to what's called
neuralese. So instead of right now, the models are learning on their own chain of thought,
which is like their own language, so the models are kind of learning from their own thought
in language. But what happens when you move from words that you're thinking to yourself in to
neurons that you're thinking to yourself in? And when you have that, that's when you're in some new
danger. So anyway, this has been such a fantastic conversation. I'm so grateful to both of you,
and I think this has given listeners, hopefully, both a lot of clarity around the nature of how
these countries are pursuing this technology and the differences. And also the possibility
for doing this in a slightly safer way than we currently have.
Anything else you want to share before we close?
This has been great.
I've loved talking through this stuff with both of you.
And, yeah, I'd encourage people to try to read some of the good work
that's being put out there about what's happening in China on AI.
I'm not expecting anybody or everybody to become experts on this topic.
But the thing to know is that the Chinese are much more aware
of what's happening in the U.S. than we are aware of what's happening in China.
They're much more interested in learning from what's happening in the United States
than the U.S. is in learning from China.
We have this mentality that that's an authoritarian system,
therefore we can't learn anything from the way that they regulate technology.
You know, they're a rival, so we can't learn from them.
China doesn't see it that way.
They say if there's a good idea in the United States,
let's adopt it and let's adapt it to our own ends.
And that's a huge advantage for them,
being willing to learn from the United States.
I think if we can kind of break down some of those mental walls
and actually take seriously what's happening over there
and see if there are lessons for the United States,
I think that would be a huge boost.
I 100% agree.
And I just think if there is more mutual understanding,
and if people try to visit China if you can,
or read some of the interesting research
or pieces that are coming out,
including Matt's Substack, a gentle plug here,
I think that makes for a better world.
So if you're listening to this
and thinking, oh, what can I do?
Understanding is the first part.
Matt and Selina, thank you so much for coming on Your Undivided Attention.
This has been one of my favorite conversations.
Thanks so much. This has been really great.
Thank you for the great questions and for having us.
Your Undivided Attention is produced by the Center for Humane Technology,
a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudakin, original music by Ryan and Hays Holladay.
And a special thanks to the whole Center for Humane Technology team for making this podcast possible.
You can find show notes, transcripts, and much more at HumaneTech.com.
And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts,
because it helps other people find the show.
And if you made it all the way here, let me give one more thank you to you
for giving us your undivided attention.
Thank you.
