Moonshots with Peter Diamandis - Eric Schmidt: Singularity's Arrival, the 92-Gigawatt Problem, and Recursive Self-Improvement Timelines | 241
Episode Date: March 24, 2026

This episode was filmed at the 2026 Abundance360 Summit. Learn more at a360.com

Eric Schmidt ignites Abundance Summit 2026: AI's reasoning boom crushes code (20-80 human split), scaling laws unbound, orbital data centers, China's robot edge, energy/power crunches, recursive self-improvement asymptote, and steering ASI to American values amid geopolitical races.

Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Eric Schmidt is the former CEO of Google; Chair and CEO of Relativity Space. Dave Blundin is the founder & GP of Link Ventures.

Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: www.fountainlife.com/peter

Connect with Peter: X | Instagram
Connect with Dave: X | LinkedIn
Connect with Eric: X | LinkedIn | His latest book
Listen to MOONSHOTS: Apple | YouTube

*Recorded on March 9th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
We're living through a historic moment right now.
The next thing that's really interesting and terrifying also is recursive self-improvement,
but we don't have it yet.
What we do have...
I keep asking my friends, when does the asymptote arrive and when does the curve slow down?
It is actually true that there is a limit to our craziness.
We have not found it yet.
And that's the great thing, frankly, about America.
The American competitor, not enemy, but competitor is China.
They have lots of money.
They're very, very smart.
Their work ethic is equal or stronger than ours, and they dominate key industries.
But at the moment, it sure looks to me like the robotic hardware of China is the winner.
I don't want to lose the robotic revolution, in my view, the way we lost the electric vehicle revolution, at least on the low end.
It's possible.
But it requires...
Eric, do you remember the first time we met?
Yes.
Larry Page introduced me to you because he was on your board.
Yes, yeah.
So I got a call from Eric out of the blue, which was a great honor, and said, Larry says I should meet you.
When are you going to be down here or up here in San Francisco?
I was in L.A., and I said, how about tomorrow?
Typical, Peter.
And I remember something which was so, I loved the story.
We sat down to lunch at Charlie's Cafe.
And of course, I'm running a nonprofit, and my mission is always raising capital for a nonprofit.
And so we sit down to lunch, and before we get started, you said, Peter, so what is your
highest level of giving of membership?
And I said, well, Eric, it's our vision circle for $2.5 million.
And you go, okay, I'm in.
Now let's have a conversation.
I'll buy them all.
It was crazy.
Yeah.
Oh, there you go.
But, you know, somehow you have, the reason, when we spoke, the reason I wanted to come here is this has become the epicenter of the abundance movement.
And the abundance movement is correct.
That's the important thing.
Thank you.
Yeah.
I just want to thank you for all the support you've given myself, XPRIZE, all these years.
So, I'll start with a question that I'd love to hear you expand on, which is: we're living through a historic moment right now. Could you define the moment
we're in and give us sort of a state of the union of what's going on in AI? We're 10 or 15%
into the impacts of this, and you can see it, you can feel it, and some of it will happen,
some of it will take longer, right? So, for example, hardware takes longer than software,
sort of robots take longer than digital systems on traditional hardware, things like that.
We've not, the next thing that's really interesting and terrifying also is recursive self-improvement.
It's not happening yet.
And so it's easy to convince yourself that you're going to have human agents,
sorry, computer agents that are human-like completely within a year or two.
We don't have the science for that yet.
People are working on it.
I can describe how I think it'll play out,
but we don't have it yet.
What we do have is reasoning systems
that are perfect partners for human beings,
for good and bad, right?
And that has a lot of implications.
So if we stop today, which we're not,
and it's not stoppable or controllable
by any government or any single individual or corporation,
we would still have advanced humanity
because of these reasoning agents.
How fast do you imagine it's going to accelerate?
There's a thing which I call the San Francisco consensus, and the reason I call it is because
everyone in San Francisco believes this, everyone I know anyway, which is that it's easy to understand.
This is the year of agents, which we can discuss, why agents will take over everything this
year.
During this year, the scaling of the use of agents and reasoning will sort of grow at this enormous
rate.
where everyone's limited by electricity.
It's a real boom, right?
It's like the biggest boom I've seen,
and I've been through three or four of these in my career.
In this thinking, once you have recursive self-improvement,
where the system can begin to improve itself,
you have intelligence learning on its own,
and in this argument, it will learn faster than we can
because we're biologically limited.
And the way this is expressed in San Francisco,
and I'll give a simple example,
You have a tech company with a thousand fantastic AI researchers.
So one day they turn on AI research, that is an AI research agent.
Well, how many AI research agents do you have?
Well, as many as you're limited by electricity, right?
You don't have to feed them, they don't need housing,
there's no more housing in San Francisco, all that kind of stuff.
You don't have those problems.
You don't have an HR department for them, if you will.
And you don't have to pay them, you just have to feed them electricity.
So how many could you have?
Well, maybe a million of these agents.
Now in AI, the way you determine that you've made progress is you have clear metrics that
the reasoning or testing or whatever, the evaluation framework, is better, right?
So that's what happens.
So in that scenario, the slope goes like this because you're already at this slope, then
you add more people, then you get the agents and you go like this, and this is essentially a super intelligence moment.
The belief in San Francisco is this occurs within two to three years.
The evidence in favor goes something like this.
Claude Code came out a couple months ago, the latest one, Opus, whatever it is.
4.6, yes, thank you.
And everyone I know in the Bay Area that's doing software says it was 80/20, now it's 20/80.
The best analysis I can come up with
is that it's not the Claude Code part,
it's that the underlying LLM
can produce more reasoning over time,
better-quality tokens over time.
It's a deeper thinker, right?
And all the labs now are competing for that.
This is not just the size of the context we're into,
it's actually the reasoning skill
and the length of which it can think.
It can just think longer and produce more stuff.
I watched this stuff.
I moved to the Bay Area when I was 21,
and I was a programmer in high school,
way back when,
and I was a pretty good programmer.
And I watch what it does,
and I go, my God, I'm over.
You know, there's not a thing
that I could do that it cannot do.
So when they wrote a
C compiler
in Rust,
I could... it's over,
you know?
so I think part of this
part of this is because the people who are building it
are also seeing the diminution of their own skill.
They're being forced to go from programmers,
which is what I'm very proud to have been,
to being the director of a programming system.
And the most likely scenario, by the way,
there's a lot of implications for this.
One is that it's always been true,
speaking as your local arrogant programmer,
that the very top programmers
were worth ten times more
than the ones right below.
There's something special
about the mathematical reasoning skills of programmers.
Those people will become more valuable,
not less valuable,
because these systems need to be controlled
by humans at the moment.
Those people will be capable of grasping
the parallelization and the activities of this.
It also means that you're going to have,
in my view, a relatively small number
of very large companies.
Yeah, yeah.
And this is a big deal.
Yeah.
And a very large number of very small companies
because you don't need as many people.
And you're watching that play out this month.
I mean, this all happened in the last three months.
I was in one startup I'm involved with.
I was talking to the programmer,
who was a perfectly brilliant young man.
And I said, well, what's the truth?
He said, well, here's what I do.
He's working on UIs of various kinds.
And he said, I write the spec of what I want,
and then I write a test function,
an evaluation function,
and then I turn it on.
I said, what time?
And he goes, seven o'clock in the evening.
And I go, okay, what do you then do?
Well, he has dinner with his wife.
And he goes to sleep.
And I said, do you wake up?
No, I sleep very well.
When does it finish?
Oh, four in the morning.
And then he gets up, has breakfast, you know, does whatever he does.
And then he sees what's been invented.
I mean, it's mind-boggling.
And this stupid example I use with this young man,
this is the power of these systems.
If you can define the evaluation function
and you can let it run,
and if you have enough hardware,
you're inventing worlds.
I mean, this stuff would have taken me
six months and ten programmers at Google
to do the same thing.
This poor guy's sleeping.
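The workflow described in that anecdote (write the spec, write an evaluation function, turn the agent loose, check back in the morning) can be sketched in a few lines of Python. This is a hypothetical illustration, not anyone's actual tooling: `propose_solution` is a stand-in for an LLM coding agent, and the function names are invented for this sketch.

```python
# A minimal sketch of the spec -> evaluation -> iterate loop from the anecdote.

SPEC = "return a sorting routine for lists of integers"

def evaluation_function(candidate) -> bool:
    """The human-authored test: does the candidate meet the spec?"""
    cases = [[3, 1, 2], [], [5, 5, 1], list(range(10, 0, -1))]
    return all(candidate(c[:]) == sorted(c) for c in cases)

def propose_solution(attempt: int):
    """Stand-in for the agent: imagine each attempt is a newly generated program.
    Early attempts are buggy on purpose; later ones pass."""
    if attempt < 2:
        return lambda xs: xs          # buggy: forgets to sort
    return lambda xs: sorted(xs)      # a correct program

def run_overnight(max_attempts: int = 10):
    """Loop until the evaluation function passes: the 'turn it on and sleep' part."""
    for attempt in range(max_attempts):
        candidate = propose_solution(attempt)
        if evaluation_function(candidate):
            return attempt, candidate
    raise RuntimeError("no passing candidate found")

attempt, solution = run_overnight()
print(f"passed evaluation on attempt {attempt}")  # passed evaluation on attempt 2
```

The human's leverage is entirely in `evaluation_function`: as long as it is a faithful definition of "done," the loop can run unattended.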
It's so funny you say that
because I was literally backstage,
they said, Eric Schmidt's coming now,
and I have my lid open on my Mac,
and I'm trying to get the jobs
onto the cloud so I can close the lid.
Because if you close the lid,
it'll break the jobs.
So I've got these six concurrent Claude 4.6 jobs.
You can leave it open on them.
It's like important what you're doing.
Too important to interrupt for me.
You're important too, you know.
No, it's crazy because when we got together in Davos just, what, two months ago,
it was in this kind of auto-complete mode.
You'd write the code and then it would help you get it done.
You're about 10 times more efficient, but you're still babysitting it.
Now, literally, it's working right now.
When I get off stage, it will have solved six problems that I launched.
And I appreciate the excitement in the industry,
but I can tell you when I was,
I used to work on BSD.
I basically worked on Unix at Bell Labs
and on BSD Unix at Berkeley.
And programmers invent what they need.
And so we invented the first email system
and the first messaging system.
And nobody thought about it.
It was like, well, we just need this thing.
Yeah.
So one key thing to understand
about digital intelligence
is the first inventors
are the people who are solving problems
of themselves, who are programmers.
Right?
You shouldn't be surprised by this.
You should have expected this.
The other thing that's interesting about programming is it's both scale-free,
which means there's no particular limitations except electricity.
You don't need a lot of data, and you already have GitHub and the equivalents.
And it's also a fairly limited language set.
So the number of language components, if you will, compared to human language is less.
Smaller language, clear objective function, all you need is electricity.
Yep.
Now, how far can this go?
It'll get to the point where you don't have the ability to do completely new things.
Isn't it really quaint and crazy to think that we can sit here and say, yeah, I wrote a ton of code
when I was younger. No one will ever do that again after the end of this year. It'll be like
riding a horse, you know, those faint skills that we all used to have. But I do have a proposal
for universities. Those of you associated with universities: you should stop everything else
you're doing in the university right now and design a course for freshmen, men and women,
starting in September,
which is a prompt engineering class.
Why university?
Why not in high school?
God, you're so aggressive, Peter.
Let's start with universities.
You can improve my idea.
I thought 18-year-olds would be young enough.
Maybe you think it's younger.
Here's the most important thing.
Spend a quarter or a semester.
The first thing they learn in university
is how to use these tools.
Universities are completely opposed to my idea, as usual.
Yeah.
Because it violates every one of their tenets.
But if you think about the student, and I mean every student, liberal arts, you know,
math, whatever, they're going, this platform will be the expression platform for their art,
their music, their writing, and so forth.
Why wouldn't you teach them immediately?
Peter, improve my proposal.
No, I just feel that AI is going to impact every
student in high school today, and that they're shortchanging themselves by not engaging with it.
And when they hit universities for those that still exist,
well, plus your kids are that age, and so they're literally right now doing exactly what you're describing.
People here who have teenagers, you know what I'm talking about, because they're all in it already.
So I think that's an improvement to my argument.
There's a problem of age restriction.
You really have to think about vulnerable
teenagers with this technology.
I did some analysis
of where the real problems are with this stuff.
A simple summary is that
at some point there will be
jobs impact from this stuff.
We're seeing it in software and we're seeing it
in certain customer service industries,
not across the board. At some point that will
happen. That's an issue.
Another one is how do we
as a country
maintain our moral values
while we're also racing against
China. Another one is the impact on young people. It is not okay for 13-year-olds to be committing
suicide because of an LLM. It's just not okay. It needs to be addressed. It needs to be addressed
right now. For sure. And there's all sorts of other issues. The other one I came up with was
in agent orchestration, agents can be combined. I've always been worried that when you put
the agents together, especially if they're from non-compatible vendors, you get
unpredictable effects.
So these are problems to be solved.
So we herald the future and we solve the problems that it brought.
So we're going to talk about China and government and jobs.
But before we do that, I want to stay, I'm in a savor-the-moment kind of mode right now
because I feel like the world a year from today will be nothing like the world today.
And everything we're doing right now, I've enjoyed so much for so long.
And I just want to savor the moment.
But reminisce for one minute about the fact that, well, you were running Google.
The transformer got invented there.
The TPU got invented there.
Demis Hassabis solved protein folding, which is now universally used.
You know, does the work in, I think it does the work in an hour that used to take a PhD student four years.
Four years.
Yeah.
It's like 300 million times more efficient.
All of that and all the diaspora from that, all the people working in the field in San Francisco, as
you mentioned, they all were your people.
And so you were there at the creation of everything we're experiencing right now.
Is there anything like that profoundly strikes you about that moment?
Did you even realize at the time?
No, I think when you're making history, you typically don't know it.
I give a lot of credit to Larry and Sergey, because they were ahead of me.
I'm an operating CEO, and they pushed, pushed and pushed for excellence.
And I'll give an example.
in the early years of Google,
Larry would, my favorite interaction
was one day I said,
we need to hire some people doing Java.
And Larry and Sergey said,
this is the stupidest idea we have ever heard.
And I could never tell with them
were they being serious or not,
or they were just joking with me.
But their argument was that real programmers
were programming one level lower.
Today, Google has many thousands of Java programmers.
But they were so precise
and so driven to excellence in technology
that I could not fool them.
I couldn't market around them.
I needed to have the technical expert.
And they say, oh, that's boring.
Don't do that.
That's another one of your ideas.
We want a new idea.
And I give them a lot of credit for it.
But what about the TPU in particular?
That to me, because I didn't even hear about it until much later,
and it takes years to design and build your own internal chips.
And now it's about to explode.
I don't know how much is public or not, but it's just.
So the TPU version one was essentially a matrix multiplier of a particular kind.
When they went to version two, they changed the algorithm in a complicated way.
And it's particularly good for inference.
Whether it's brilliance or just luck, those decisions made 10 years ago set up the TPU as the perfect inference engine.
And for everybody's benefit, inference is what runs the reasoning steps that I'm describing.
So Google is particularly well positioned.
As you know, Nvidia purchased Groq for the reason of getting that integrated.
Yeah, trying to catch up to what you thought of 10 years ago.
And what's interesting about NVIDIA, if you look at them, I was looking at the Rubin architecture.
They managed to do, NVIDIA managed to do what Intel could never do.
Intel could never get control of the complete server architecture, and they tried.
Nvidia has managed to build real supercomputers that you can really buy with enough time and money and so forth,
and it will really be delivered to you, and it just does the whole thing.
These are major industrial achievements, and that's why both companies will do incredibly well.
Eric, in the AI exponential growth right now, talk to me about where the constraints are.
You were in Congress talking about energy, chips, people, capital, where are the constraints right now?
It's interesting.
I started a data center company with my friends.
In my testimony I said there was an estimated 92-gigawatt shortage of power in America between now and 2030.
And by reference, a nuclear power plant is about 1.5 gigawatts.
So that's about 60 nuclear plants, and we're doing essentially zero or one, right, depending on how you count.
So I got interested in the question of what's the real resource
constraint in America, and it's electricity. We have the universities, we have the smart people,
we have the economics, and we have these amazing finance people who will give all of us
billions and billions of dollars on a wing and a prayer. I mean, there's no country where the finance
people are sufficiently crazy to do that. It's not true in China. It's certainly not true in Europe.
And these guys are incredibly jealous of the American financial system. So I always start, thank you to the
finance people for funding our dreams, whether they
work or not. Thank you. So there's usually a retort at this point where people say, well,
the algorithms will ultimately require less energy. I am sure that that is true. There is this
property that as the power, as the power of the hardware goes up, as the algorithms become more
efficient, you don't need less power, you need even more power and even more computers because
we discover new uses.
Jevons paradox.
It's called Jevons Paradox.
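As a toy numeric illustration of the paradox, with numbers made up for this sketch: if efficiency doubles but cheaper queries unlock even more demand, total energy use still rises.

```python
# Toy numbers only: a minimal illustration of Jevons paradox.
energy_per_query = 1.0   # joules per query (made up)
queries = 1_000

baseline_total = energy_per_query * queries   # 1000.0

# The algorithm gets twice as efficient...
energy_per_query /= 2
# ...but cheaper queries unlock more demand (assumed 3x growth here).
queries *= 3

new_total = energy_per_query * queries
print(baseline_total, new_total)  # 1000.0 1500.0
```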
And so I think that
because humans have trouble with exponentials,
everyone says, oh, well,
in six to nine months, it'll be the bubble,
and so forth and so on. There's no sign of this.
We've been, I and a team
have been working on this for years;
the ultimate,
essentially, scaling laws
are not done yet.
I keep asking my friends,
when does the asymptote arrive and when does the curve slow down?
We've not seen it yet.
There will be one, right?
It is actually true that there is a limit to our craziness.
We have not found it yet.
And we're running to the wall.
And that's the great thing, frankly, about America.
You think it's a limit to the capital or a limit to,
if you just add more and more and more scale and parameters,
something just doesn't work?
Well, the first question is,
is there a limit to the capital available?
A gigawatt of power corresponds to about $50 billion
of hardware software data centers on the order of,
depending on what numbers you use.
Okay.
So 100 gigawatts is, do the math.
Can we raise $5 trillion over five years?
Yeah.
That's the strength of America.
Could we double that?
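The arithmetic behind those figures, using only the numbers quoted in this exchange (a 92-gigawatt shortfall, roughly 1.5 gigawatts per nuclear plant, roughly $50 billion of data-center buildout per gigawatt, and a round 100 gigawatts for the "do the math" step), works out like this:

```python
# The numbers quoted in the conversation, as back-of-the-envelope arithmetic.
shortage_gw = 92        # estimated U.S. power shortfall by 2030
plant_gw = 1.5          # rough output of one nuclear power plant
capex_per_gw = 50e9     # ~$50B of data-center buildout per gigawatt

plants_needed = shortage_gw / plant_gw   # equivalent nuclear plants
total_capex = 100 * capex_per_gw         # for a round 100 GW

print(round(plants_needed), "plants")            # 61 plants
print(total_capex / 1e12, "trillion dollars")    # 5.0 trillion dollars
```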
You know, we're getting, the data center build-out is 1% of America's GDP growth right now.
We're back to a pilot program.
Right, wow.
Thank you.
And the current estimate of electricity use in America is that 10% of the electricity in the United States will be used in the data centers.
And these data centers, these are not the data centers I used to build at Google, which seemed tiny by comparison.
They were immense at the time.
Yeah.
The standard data center that's being built is on the order of
400 megawatts. These things are plus or minus, and they're about half a mile long and about
500 feet wide, right? And they're essentially air flow machines. Take the air, send the air out,
cool it in the middle using typically air cooling, and then they have a water system inside to keep the chips cool.
Using Nvidia as an example, the chips are water cooled, and also the HBM3E, and now the HBM4,
memories put out so much heat that they have to be water cooled. The chips are two kilowatts. I mean, this is insane.
These things will kill you.
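Putting the two-kilowatt chip figure against the 400-megawatt site figure gives a rough chip count per site. The 1.3 overhead factor below is my assumption for cooling and networking losses, not a number from the conversation.

```python
# Rough chip-count arithmetic for a 400 MW data center with 2 kW chips.
site_power_w = 400e6   # ~400 megawatt site, per the conversation
chip_power_w = 2e3     # ~2 kW per accelerator chip, per the conversation
overhead = 1.3         # assumed PUE-style overhead for cooling/networking

chips = site_power_w / (chip_power_w * overhead)
print(f"~{chips:,.0f} chips per site")  # ~153,846 chips per site
```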
You want to hear something truly astounding and funny in hindsight?
When you bought DeepMind, everybody thought it was like $800 million or something.
600 million.
$600 million.
Everyone thought, why on earth would you waste $600 million on this zero revenue, AI?
All it does is play Go.
And then years later, it came out that the acquisition paid for itself just by controlling the air conditioning more efficiently in the data centers,
the entire acquisition price was paid off.
And that became the AI that's changing the world.
And the credit of that one actually goes to Larry Page.
Larry had studied AI when he was a Stanford graduate student.
And we always deferred to him in this thing.
And he said, this is the best team.
And I think Elon and Larry competed over it.
There was some complicated kerfuffle there.
And so Jeff Dean, who's the chief scientist, went over,
and then he and I basically finished the deal.
And I still remember, you know,
They're on one floor, you know, these sort of British people,
led by a sort of Greek-British person, Demis.
And they were smart, but Google's full of other smart people.
To give you an example, in 2016, Demis announces that we're going to win the game of Go.
I figured, well, and by the way, at this point, they're a separate group.
We've let them alone because they have to grow and figure out what they're doing and all that.
And this is the patience of capital.
We could let them do that.
We didn't require that they do anything.
And so he goes, I'm going to go, and I say, well, I'm going to come too.
And I said, okay.
So I fly to Korea, and it's all one floor.
And I go and I meet the team that's winning the game.
And, of course, all of these Koreans, they're all very excited about this,
because they know they're going to beat the computer.
And so the Koreans are in one room.
I'm in another.
I go to the Korean room, and they're all saying, you know,
we're going to beat the crap out of this
Google group. And I go into the Google room and it's very quiet and there's a monitor.
And there's, basically, what I now understand is an RL prediction mechanism of whether we're
winning or not. Yeah, cool. Okay. And it starts at 50-50. So I watch, I go listen to the
Koreans talk for a while and then I go watch the screen and it goes to 51%. Okay. And then it goes to
52%. And then David, who was the architect,
says, well, we just plan for it to get to infinity.
Okay.
So basically, it's the abundance theory.
So it's just, boom.
And everyone is, all the humans are crushed.
And the DeepMind people say, yeah, yeah.
It did what it was supposed to.
Yeah, welcome.
And then I understood the genius of the DeepMind people.
And you can see this today with Gemini.
Gemini 3 is probably the broadest of the non-Chinese
systems in terms of its depth.
And because it's multilingual, multimodal, and so forth.
And so many moments in your life are just turning points in history.
And I don't know if you realize them in the moment,
but that was one of the last moments where humans used to look for challenges
where the computer could try to catch up like chess.
And then I think Go was the endpoint.
And we knew that, by the way.
We understood that Go, the game of Go, was sort of intractable for normal algorithms.
Yeah.
There was lots of math
that said you couldn't solve it.
And so
they came up with a two-tree model
with two different RL trees.
So one of the
other things I learned about these things
is it's not just, write me a Go program
that will win the game.
You actually have to understand the game and so forth.
So they took the same team.
They got bored with Go, having won.
So then they took the same team
and had them work on protein folding.
Yeah.
And in protein folding,
they took a whole bunch of protein scientists,
whom I know.
Just think about the genius of that.
Like nobody would ever connect
the game of Go
to solving all of biology.
But Demis had always wanted to work on this.
Larry and Sergey were very interested in this.
And protein folding is the perfect problem
because you had a contest.
A defined endpoint.
So what happens is people get excited about AI,
but you need to have a validation function
because these things don't have common sense yet.
You have to show them what good is.
Ultimately, they produce this thing called Alpha Zero,
which essentially self-learns.
Let's talk about data centers in space.
I'm in favor of it.
Eight, nine months ago, no one was discussing this.
I mean, all of us-
Do you know why I'm in favor of them?
I do.
You can mention it as you wish.
But all of a sudden, everybody's talking about them.
What are your thoughts?
I'm a part owner of a rocket company and we need-
Which I love having you come into the space community.
You understand this far, far better than-
It's-
Rocket science is named that for a reason.
Rocket science is really, really hard.
And I don't know that much about rockets,
although I certainly know how to manage tech people.
But I think that the opportunity is large and interesting.
There are challenges.
There's an issue of getting heat off of...
Again, you know this much better than I do.
Getting heat off of it because you don't have oxygen.
And you also have radiation issues.
Those have to get addressed.
But it makes the business plan for every rocket company
that's large enough, right?
The small guys aren't going to launch it,
but relativity space,
and, you know,
Blue Origin and SpaceX.
I mean, you know,
Elon's predictions were,
I think,
a launch per hour
to populate the constellation he wanted.
But do you think technically
we're going to get there?
Well, the technology is understood.
Technically, for the data centers in space,
for heat dissipation.
Yeah, that technology is understood.
To me, it's a business question, and the question is, where should the data center be?
Should it be in space with these other issues, but it has other benefits, including infinite power and so forth,
versus on the ground where you have fiber and, you know, it's not shaking too much.
Yeah.
I mean, the energy argument says space wins by far, and I think the cooling is a very big challenge,
but I think it's largely figured out now.
But then the politics of space, you know, one of the turning points in AI history also is you getting in front of Congress
and saying, hey, we need to find almost 100 gigawatts.
And at the time, it seemed outrageous.
And now, of course, it's mainstream.
And it looks like, you know, the crazy investors are solving the problem.
The unleashing of the money has happened.
Welcome to American capitalism.
Tech industry.
I mean, we have another set of problems.
Well, if the next frontier is space, then,
but there's no investment community in space,
but there's also no military jurisdiction in space.
Or maybe there is.
I would have never thought my childhood dreams of going to the moon and Mars
would be fueled by data centers?
Never.
Mine can't carry you at the moment,
but it can in the future maybe all the way.
I read a Time op-ed piece you wrote, last night:
"China Can Dominate the Physical AI Future."
Can you summarize that for us?
It was an important conversation.
In the geopolitical context,
and I've said this many times, I'll say it again,
the American competitor, not enemy, but competitor, is China.
By the way, I think it's an important distinction for you to make.
Not enemy, competitor.
And how to understand them as a competitor.
They have lots of money.
They're very, very smart.
Their work ethic is equal or stronger than ours,
and they dominate key industries.
With respect to robotics,
we somehow decided it was okay for them to dominate the electric vehicle industry.
This was an error.
To be very clear, it's an error.
Why do we not understand it's an error?
Because we don't allow their cars in.
Spend some time outside of this country in Chinese cars, trust me.
They are real competitors.
They've done a great job.
As I understand it, China is capable of vertical integration
and build these gigafactories at a scale that we can't for all sorts of reasons.
That's got to get addressed.
So if you want to compete, and I want to compete with and win against China, and again, competitor, not enemy,
I want us to have the same kind of system.
So in robotics, it turns out robotics, you can understand robots as essentially actuators,
these little stepper motors, click, click, click, click, and a brain,
ignoring the appearance and the eyes and all of that kind of stuff.
It turns out that the industry of the electric vehicle produces the same kind of motors
and the same kind of systems.
They have an expertise that we don't.
My own view is that at least for very low cost,
China is going to win that.
And that was what I was trying to say in that piece.
And I worry about that.
Now, today, these are not particularly useful.
You know, they're fun toys,
they're a replacement for the dog.
If you get mad at your dog, sorry, I love dogs.
But you get the idea.
So we need to address this.
But at the moment, it
sure looks to me like the robotic hardware of China is the winner at the low end.
I'm not talking about the high end. I'm not talking about expensive stuff. I'm not talking
about industrial robots. And if you're confused, watch the Unitree robot dance with the humans
that came out about a month ago. We have Unitree here in the Tech Hub and the co-founder will be
on stage with us later today. Pay attention to them. They're very, very impressive.
And I spent some time with him last time I was in China.
And they're one of many.
And the way China works is that they have brutal competition.
Brutal, brutal.
It's like unbelievable.
I was talking to my friend; we both teach at Stanford.
And he said, you know, in China, we don't have the board dinner.
We have a two-hour meeting, we get back to work.
And there's no preamble.
We're not, you know, we don't say, hello, how are you, how's the family, that kind of stuff.
We're, boom, boom, right?
It's just cultural.
Yeah.
And the work ethic, the precision, and the scale that are
possible in China make it a real competitor. I don't want to lose the robotic revolution, in my view,
the way we lost the electric vehicle revolution, at least on the low end. Interesting.
Because the Chinese model very much has a well-built-out supply chain with many vendors in the loop.
But we've been on a worldwide tour of all the humanoid robotics companies, just by coincidence, I guess.
But when you look at the Gigafactory and look at Elon's vertical integration, and then also Brett Adcock
at Figure, same thing, it's all vertically integrated.
Why? Well, because there's no vendor.
I have no choice.
So it appears that, again, we're the abundance group, the abundance club,
and the way you get to abundance is you drive prices down and you get vertically integrated,
and Elon in our country pioneered that, to his credit, right?
You know, the old joke about Google was we would build anything including the buildings.
Well, Elon is actually doing that, right?
And why?
Not because he's insane, but because he actually, that's how you drive cost down.
He truly believes, and I'd love to get your take on it.
He truly believes that the robot building the robot is imminent.
And I didn't get it until we toured the gigafactory,
and I realized that almost all of it's automated already.
It's just the last piece is the human controlling a few knobs.
And that, you know, the humanoid robot can actually do.
Let me define the boundary because it's important.
Let's use batteries, for example.
Batteries are predictable, straightforward manufacturing processes at huge scale, right?
Things like that will have gigafactories.
One of the questions, and I, of course, used LLMs to do my deep research as a new person in the rocket company,
was how much of the human labor to build a rocket could be replaced by robots.
The current limit, which of course will change, is that in our company
we have these extraordinarily talented
assembly people. And they're more than welders and they're more than, you know, mechanics.
They understand precisely how the little, the tubes and so forth and so on go together and they
use precise tolerances. That kind of assembly is beyond current robots. I'm sure it will
eventually show up, but not for a long time. People don't realize the majority of cost of a rocket
is labor. Exactly. It's not material. Fuel is 2%. And furthermore, when they get inside
the rocket, they understand what they're doing and they see what's wrong and they use human
judgment. We don't have those systems yet. Now, perhaps we will in the future, or perhaps this
will be one of the last things to go. But at the moment, high-skilled mechanical labor is very
important. Low-skilled labor of any kind gets stuffed up. So both of those: there's AI self-improvement,
and then there's robots building robots, self-improvement in robotics. Both of those loops.
We talk a lot about closed loops being...
If I can interrupt you.
Yeah, please.
The term I like to use is learning loops.
In a business, try to figure out all the different learning loops
and then try to accelerate the learning.
Fastest learner wins.
Sorry.
Well, I'm going to lead the witness here a little bit,
but if you asked Daniela Rus over at CSAIL a year ago,
or Erik Brynjolfsson at Stanford a year ago:
do we need one more big breakthrough in AI core science,
or will scaling what we've already got
lead to self-improvement,
and then that will be all we need?
To get to AGI.
And then self-improvement.
I spent the last week doing RSI reviews,
recursive self-improvement reviews.
The scientists do not agree
on the exact approach that will work yet.
So I think it's too early to answer the question.
There's evidence that it will work.
There are tests in the lab that show it,
but they show it in limited cases
that are kind of demos.
Right? Real recursive self-improvement is the following. Start now, learn everything, discover
things, and tell me what you learned. That query doesn't work yet.
We're seeing all of the frontier labs sort of constantly leapfrog each other, right? I mean,
literally every week, it's the new model. Unbelievable. And it's extraordinary. Do you imagine
they're all converging towards the same endpoint, or is any one of them going to pull ahead?
So if you go back to this question of capital, how much room is there in the world for these companies?
How many can there be?
I'm going to make some numbers up.
And these are made up.
I think there's at least 10 in the world at this scale.
I think there'll be a few in China.
I think the majority will be in the United States, the usual suspects most likely.
I think there might be one or two in Europe, though their electricity costs are a problem.
There might be one in India, right?
There's not going to be one in Russia
because of the war and so forth and so on.
So can the world accommodate 10?
Yes.
Do they track together?
I don't think so.
One of the key things to understand about China
is China's approach, which has produced
DeepSeek v4, Qwen, Kimi Version 3,
et cetera, and more coming,
is that it's all open source, open weights.
That they've managed to do this with our chip limitations against them, which annoy them no end, shows you how clever
they are.
They're also, the Chinese strategy is a bit different.
It's less central computing and much more edge computing, which has to do with enveloping
their Chinese customers with AI around them all the time.
We're much more AGI, ASI centered, which is fine.
So the patterns are diverging.
Within the companies, it's a jumble now.
But fundamentally, Microsoft and Google
have these large cash flow streams from Enterprise,
so they can fund that.
Anthropic's done a fantastic job of raising money
from those other companies, and they use, as you know,
Google TPUs.
And they've become the sort of leading player
with the Claude API within the enterprise agent system.
And OpenAI is now shifting its strategy a bit
to include the new things it's doing.
I don't think we can predict.
But the key thing to understand
is that they need so much money.
Right.
I mean, look at what Sam is trying to raise.
They need so much money
that they're forced into these situations
where they have to win those battles
and they're busy winning them.
I mean, this is all good.
I think it's, in a year,
we'll know better the answer to your question.
I have two questions I want to close us out with
that I think are important.
You made a statement a couple of times
once in our podcast,
once elsewhere,
that regarding AI
safety, the world may need to have a modest Chernobyl-like death event in order for us to wake up.
Do you still believe that's the case?
Yeah, and by the way, I'm not endorsing that.
I'm saying it as descriptive, not as prescriptive.
What are the real dangers of this?
There are biological dangers.
I mean, there's obviously the dangers of kids and democracies and so forth.
But like, let's think about like a biological attack, a nuclear attack that's spawned by these things.
It may take such a tragedy, hopefully a small one, to awaken the world to understand that these things are, they do have negative power.
And so I can imagine on making this up that there's some bad thing that happens.
And then all of the leaders, China and the U.S., we all have a meeting.
And we basically say, like, what are we going to do?
You know, we're in brutal competition.
We hate each other.
You know, I don't like you.
You know, you don't speak the same language, blah, blah, blah.
But we are all in it together, right, over this issue.
My sense is that will happen, but I don't know when.
We had a congressperson on this stage last year, and one of the crowd asked,
how much time do you spend talking about AI in Congress?
Well, it's definitely way less than 1%.
Yeah.
And so without that wake-up call, I don't see how you have...
I mean, governments, I spend lots of time with governments.
Governments are super busy, right?
And they're driven by political things.
They're driven by, at least in democracies, by political sentiment.
I'd like us as a nation to focus on the following.
I want to win the AI race.
I want us to do whatever it takes to do that.
This government is doing a very good job of making energy permitting more accessible.
The rate at which data centers are getting built is now accelerated, solving the grid problems.
I also want lots of immigrants in our country, because those immigrants, at least high-skills
immigration, is what we need, because we need the smartest people in the world
on our side to build these systems.
This is a unique moment in history, right?
I'd like to take us home on a positive note.
So, you know, you said we're going to get to ASI at some point, whether it's two years out,
five years out, someplace in this next decade.
So the question is, what steps can we take to steer artificial superintelligence
towards abundance, towards uplifting and in alignment with humanity?
You know, to make this abundance thesis materialize, what's your advice to us,
companies, governments?
There's an over-reliance in our society on people like me to work on this.
Why don't we have the smartest people in politics, history, human psychology, governance,
ethics, working together to make sure this stuff stays in human values and human alignment?
I want the system that we build in America to reflect American values,
the values of freedom and freedom of speech,
freedom of association, all those things you learned in college,
excuse me, in elementary school and high school,
they're still important to our nation.
They're the enablers for the next generation
of our children and grandchildren.
I desperately want that,
and I don't want America to ever get on the wrong side of that battle.
There's lots of people working on this.
Lots of people understand the technical details.
I happen to run an informal group
that discusses this every week.
So it's possible.
But it requires political will
and it requires an understanding
that this can be done
without screwing up the genius of America.
In other words, I'm not suggesting
slowing anything down,
and you said it so well,
shaping it.
Making sure we don't cross lines.
Like I already mentioned
the underage kids problem.
That's a line we can't cross.
We've got to solve that problem.
There are others.
