Moonshots with Peter Diamandis - Elon’s Predictions For the Future: AI, Free Energy, and More | EP #129
Episode Date: November 8, 2024

In this episode, filmed at the FII8 Summit, Peter and Elon discuss Elon's predictions for AI, whether AI will be good for humanity, humans on Mars, and more. Recorded on October 29th, 2024. Views are my own thoughts; not financial, medical, or legal advice.

Elon Musk is a businessman, founder, investor, and CEO. He co-founded PayPal, Neuralink, and OpenAI; founded SpaceX; and is the CEO of Tesla and the Chairman of X.

Learn more about the Future Investment Initiative Institute (FII): https://fii-institute.org/

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Blog

Learn more about my executive summit, Abundance360: https://www.abundance360.com/

Connect With Peter: Twitter | Instagram | YouTube | Moonshots
Transcript
Ladies and gentlemen, prepare for a fun conversation with a very special guest
and someone I'm proud to call a friend. Welcome Elon Musk.
Elon, welcome to Riyadh. It's been quite an incredible 90 days for you, my friend.
Yeah.
xAI Colossus came online in 122 days.
You're about to double in size again.
Extraordinary success of the fifth mission of Starship and the booster capture.
Amazing.
The launch of CyberCab.
Starlink helping in disaster relief, progress with Optimus,
the second Neuralink human patient, new record users on X.
And that's in 90 days.
I'm sure I've missed a few things.
Welcome.
Yeah, I've been playing a significant role in this election.
I think I saw you someplace on X dealing with election issues.
Let's talk a little bit about AI, which is on the tips of everybody's conversations here.
When we spoke last in March at the Abundance Summit, your prediction was that AI was increasing at a rate as fast as 100 times per year, and that by 2029 or 2030
we might see AI as capable as 8 billion humans. Are you still seeing that pace?
Yeah, it's a difficult thing to quantify exactly, but I just feel comfortable saying that it's getting 10 times
better per year, which is, let's say four years from now, that would mean 10,000
times better.
So maybe 100,000.
Yeah.
I think it will be able to do anything that any human can do,
possibly within the next year or two.
And then how much longer than that does it take to be able to do what all humans can do combined?
I think not long, probably only three years from that point.
So, like 2029, 2028, something like that.
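A quick back-of-the-envelope on that compounding, as a sketch: the 10x-per-year rate is the figure quoted in the conversation, and the four- and five-year horizons are just illustrative arithmetic.

    # Compounding capability growth at a constant 10x per year
    # (the rate quoted in the conversation; horizons are illustrative).
    rate_per_year = 10
    for years in (4, 5):
        print(f"{years} years at 10x/year -> {rate_per_year ** years:,}x better")
    # 4 years -> 10,000x; 5 years -> 100,000x, matching the figures above.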
The other conversation we've had, and you came out on the same side as Geoffrey Hinton in this, was 80% probability it's going to be awesome.
20% probability we're screwed.
Are you still on those odds?
Yeah, I mean, I'd say it's most likely gonna be great, and there's some chance,
which could be 10 to 20 percent,
that it goes bad.
The chance is not zero that it goes bad.
But overall, I guess we could say that the glass is 80 percent full.
That is one positive way to look at it. Maybe 90 percent.
All right, I like that it's increasing.
What keeps you up at night besides running six companies?
Well, waking up early to participate in talks like this one.
I felt very guilty asking you to do this, but thank you
for joining us, at least. In terms of worries about the future, well, I think AI is a
significant existential threat and something we should be paying
close attention to. It's probably the most significant near-term threat. The longer-term threat than
that is global population collapse. You know, birth rates have been collapsing
pretty much worldwide, and we're headed to, you know, a situation where, for
example, based on current birth rates, South Korea
would have about a third of its current population, perhaps much less.
Europe would have about half of its current population, perhaps much less.
And I should say those numbers are if the birth rate were suddenly to return to 2.1
per woman, which is a stability point, which
it's not doing.
So if the current compounding effect continues,
you would see really many countries become
5% of their current size or less within three generations.
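As a rough sketch of the compounding being described here: it assumes each generation scales by the ratio of the fertility rate to the 2.1 replacement level mentioned above, and the example fertility rates are illustrative assumptions, not figures from the conversation.

    # Population share remaining after N generations, assuming each generation
    # scales by (total fertility rate / 2.1), the replacement level cited above.
    def remaining_share(generations, tfr, replacement=2.1):
        return (tfr / replacement) ** generations

    # Illustrative fertility rates (assumptions for this sketch, not quoted figures).
    for tfr in (0.7, 1.5):
        print(f"TFR {tfr}: ~{remaining_share(3, tfr):.1%} left after three generations")
    # A fertility rate near 0.7 leaves roughly 4% after three generations,
    # the ballpark behind "5% of their current size or less."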
I would consider that to be a very big problem.
And I think for most countries, they view the birthrate as the single biggest problem
they need to solve.
If you don't make new humans, there's no humanity.
And all the policies in the world don't matter.
I know you've been doing your part
to maintain the birth rate in the US.
Yes, I am.
I mean, you've got to walk the talk.
So I do have a lot of kids,
and I encourage others to have lots of kids.
But on the AI, and I'm sorry to go negative on this, but what are you doing right now
that's most important for countering that 10% probability of dystopian outcomes?
Is there something or is there a regulation that you're promoting?
How do you think about, you know, the upside will take care of itself.
How do you protect against the downside?
Well, my thing with respect to AI safety is that you have to create a maximally truth-seeking AI,
which may sound obvious, but what I'm seeing being produced is not maximally truth-seeking.
It tends to be trained to be politically correct.
And for a lot of the AIs that are being trained in the San Francisco Bay Area,
they have taken on the philosophy of the people around them, which kind of makes sense.
So, you know, you have sort of a woke, nihilistic, in my opinion, philosophy that is being built into these AIs,
and they're being taught to say crazy things in some cases that are very troubling.
So you really want to have a maximally truth-seeking AI.
And I can't emphasize that enough. That's incredibly important.
And obviously, build an AI that loves humanity.
And I think these...
So I'm a little concerned. That's why I created xAI,
which is to have an AI
that is maximally truth-seeking,
that aspirationally does love humanity
and will seek the best interests of humanity going forward.
Now, you just tweeted that you're doubling the size
of the Colossus cluster.
What are your thoughts?
We already have, with xAI,
the most powerful training cluster in the world.
And we're about to double it.
Energy is a point of conversation here.
How concerned are you about providing sufficient energy
for the growing hungry clusters globally?
Yeah, I think things are currently
chip-limited, or not quite chip-limited; it's getting to the point where they're limited by
voltage transformers and installation.
And then they will become limited by energy.
So there will be a tremendous amount of energy that's needed for digital
intelligence and also for electrification of transport. So those two things are a big deal. Yeah, we're gonna need a lot of energy.
In the long term, almost all the energy that we get is gonna come from the sun. So
one way to look at civilization is progress on the Kardashev scale.
We're just barely getting to one.
Well, we're far from- Okay, half.
I think we're probably, we might be close.
It's not clear to me we're above 1%
on the Kardashev scale one.
Because Kardashev scale one means
you've harnessed all the power of a planet.
I think we've probably harnessed less than 1% of the power of Earth. Now Kardashev
scale 2 is you've harnessed all the power of your Sun. The Sun is overwhelmingly the
largest source of energy in the solar system. Everything else maybe amounts to about a trillionth of the energy in the solar system
compared to the sun.
Let's say, safely, less than a trillionth of all the energy in our solar system is non-sun.
Yeah, we're using one eight-thousandth of the sun's energy that hits the surface of the Earth.
Just what hits the surface of the Earth. Yeah. Yes.
And the percentage of the Sun's energy that hits the surface of the Earth is less than
a trillionth of the energy that the Sun produces.
So almost all energy long term will be solar.
It rounds up to 100%. So it rounds up to 100%.
That's how much of the energy in the future
will be solar when you view things from a Kardashev
standpoint.
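For context on the one-eight-thousandth figure, here is a rough check; the roughly 170,000 terawatts of sunlight reaching Earth's surface and the roughly 20 terawatts of human primary energy use are approximate outside estimates, not numbers from the conversation.

    # Humanity's energy use as a fraction of sunlight reaching Earth's surface.
    # Both figures are rough assumptions for this sketch, not from the conversation.
    sunlight_at_surface_tw = 170_000  # terawatts of solar power at the surface
    human_primary_use_tw = 20         # terawatts of average human primary energy use
    fraction = human_primary_use_tw / sunlight_at_surface_tw
    print(f"about 1/{1 / fraction:,.0f} of surface sunlight")  # roughly 1/8,500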
You know, Elon, we have a number of national leaders,
corporate financial leaders from the Middle East here.
What's your advice to decision-makers here
in the room that don't want to miss the AI transformation
and will be part of the leadership of that AI
transformation?
Do they need to build their own clusters here?
Are they partnering?
Yeah.
Well, I think probably all countries will have their own AI clusters over time.
It's currently very difficult to actually build an AI cluster and have it run.
It's not like just pulling a computer out of a box.
They are currently very difficult to run.
And you have to say,
are you gonna be training a frontier model?
Because if you're training a frontier model,
then you need a massive amount of compute
and a level of technical skill
that only a few companies possess.
But over time, I think every country will have
AI compute clusters.
It's just gonna be a normal thing that every country has.
So, yeah.
So basic infrastructure for every nation, like they have an electrical grid.
Yeah, it'll be something like electricity or, you know, having an airline
or something like that.
Every country will have AIs, or multiple AIs.
So there will be a lot of robots.
There will be a lot of robots.
Way more robots than people.
Yeah, let's have that conversation for a second, because people are concerned about, as you
said, dwindling populations. AI and robots have the potential to help support GDP.
Yes.
Congratulations on Optimus 2 and soon Optimus 3.
Your prediction on the number of robots by 2040, humanoid robots to be specific, what
order of magnitude?
By 2040?
Yeah. I think by 2040, probably there are more humanoid robots than there are people.
So on the order of 10 billion.
And your price point on these humanoid robots?
You're pretty good on pricing.
Sometimes you're off on timing.
Yeah, I'm often optimistic on timing.
Although the press will report when I'm late,
but not when I'm early.
For example, our Shanghai factory,
we thought it would take about a year and a half,
and we did it in 11 months.
Our Giga Nevada factory, we thought two years.
We did it in 18 months.
Or the Colossus cluster.
Texas factory, two years, we did it in 14 months.
So I've been early actually many times.
It's just not reported.
So when I make a prediction, I try to figure out, I try to say, what is 50th-percentile
likely, which means that half the time I should be wrong.
So I'm not sandbagging, essentially.
But I think,
once you get out to 2040, that's a long time from now.
Over 25 years, there'll be at least 10 billion humanoid robots.
And price point?
Yeah, the price point will be I think quite low, probably 20, 25 thousand dollars for
a robot that can do anything.
We will be in a future, assuming we are in the good path of AI, I think we will be in
a future of abundance.
Obviously, you wrote a book called Abundance, so I think you would agree that that is probably
the outcome.
Basically, anyone will be able to have any goods and services they want. The actual marginal cost of goods and services
will be extremely low in the future.
Let me, in our last four minutes,
let me change the subject to something near and dear
to both of our hearts.
Congratulations on Starship.
It was literally awesome, probably
the engineering feat of this decade, if not more.
Not bad for humans. Not bad for humans. We did that with no AI involved whatsoever.
So I'm glad to say that was done entirely with human brains and without AI.
I think in the future the AI might look at that and say, not bad for a bunch of monkeys.
When are we on Mars?
When is Starship on Mars?
I think we'll be able to launch some Starships
to Mars in two years.
So the next Mars window, which is in about 26, 27 months,
but we're just about to start,
or we're at the beginning of a Mars transit window now
and they occur every 26 months.
So just over two years,
we'll be sending our first uncrewed Starships to Mars.
And then if those work out well,
and we don't increment the crater count on Mars,
then we'll send humans two
years after that.
So a challenge to be on the surface of Mars before the end of the decade sounds like a
reasonable proclamation for either side of the White House, hopefully.
Yeah. I feel more optimistic about it under a Trump White House than a non-Trump White House
because the biggest impediment to progress that we're experiencing is regulatory, is overregulation.
Got to keep those whales and sharks safe. Yeah. Yeah, exactly. It just takes us...
It takes longer to get the permit to launch than to build a giant rocket.
And the bureaucracy in the US has been growing every year
and has particularly grown under the Biden administration.
And unless we do something to scale back the overregulation,
I call it sort of... America is getting,
and a lot of countries are getting, slow strangulation
from overregulation.
Unless something is done to push back on that,
it'll eventually become illegal to do
almost any large project.
And we won't be able to get to Mars.
Last subject, congrats on the CyberCab rollout.
Pretty extraordinary.
Cathie Wood predicted it to be multiple trillions of dollars of potential GDP growth and impact. Give us some predictions on when we'll see CyberCab,
when we'll be ordering a CyberCab.
Yeah, well, Tesla, unsupervised full self-driving,
we expect to be working in the US next year
with the Model 3 and Y.
So you don't have to wait for the robotaxi
or a CyberCab for Tesla to release autonomy.
We're currently expecting to exceed human safety levels
in Q2 next year, and then substantially go beyond that
thereafter, and so really it's just a software update
to the cars to
be able to launch our self-driving network. So we expect to do unsupervised full self-driving
and start that in California and Texas around the middle of next year. And then we have 7 million cars on the road.
We'll have 9, 9.5 million cars by the end of next year.
So, and eventually we'll have a fleet
of 100 million plus vehicles
and they'll all be autonomous.
The CyberCab, with no steering wheel or pedals,
we're expecting to reach volume production
in 2026.
So that's certainly interesting.
But like I said, the actual launch of a robotic taxi with unsupervised full self-driving is actually
next year. And at the Tesla autonomy event, we had 50 cars, 30 Model Ys that were driverless, and
20 of the CyberCabs.
And so autonomy is here, is what I'm saying.
And all cars will drive themselves.
This is a no-brainer.
And they'll get to where they're 10 times safer than human-driven cars,
which will save, I don't know,
past a million lives a year, globally.
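As a rough sense-check on that lives-saved figure: the sketch below assumes roughly 1.2 million road deaths per year worldwide, an approximate outside estimate rather than a number from the conversation, and that fatalities scale inversely with the safety multiple.

    # If fatalities scale inversely with safety, 10x safer removes ~90% of them.
    annual_road_deaths = 1_200_000  # approximate global figure (assumption for this sketch)
    safety_multiple = 10
    lives_saved = annual_road_deaths * (1 - 1 / safety_multiple)
    print(f"~{lives_saved:,.0f} lives per year")  # roughly 1.1 million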
And then Optimus starts limited production
next year, 2025,
and then should be in volume production in 26 and then will
grow to I think ultimately be the biggest product of any kind ever.
So I kind of see, I kind of agree with, of course, Cathie Wood, that
autonomy, like sort of robotic taxis,
makes Tesla kind of like about a five trillion dollar company.
The Optimus robot, I think, makes Tesla a 25 trillion dollar company.
It does. It's not even clear what money means at that point.
We end up in a post-capitalist society at some point.
Elon, you make it look...
Yeah, we do sort of end up, it does become kind of post-capitalist. And like
I said, and I know you agree with this, that we're looking at the most likely bright side,
we're headed for an age of abundance where anyone can have any goods and services they want.
It won't be a case of universal basic income, it'll be a case of
universal high income is the most likely outcome. You make it look easy my friend. Thank you for
making some time available. I know it isn't easy. Let's give it up for Elon Musk everybody.
Thank you. Thank you my friend.