Moonshots with Peter Diamandis - AI Experts Debate: Overhyped or Underhyped? (Opposite Opinions) Mo Gawdat & Steven Kotler | EP #177
Episode Date: June 13, 2025. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Mo Gawdat is an author and former CBO of Google X. Steven Kotler is an entrepreneur and a multiple New York Times best-selling author. – Offers for my audience: You can access my talks with Mo Gawdat and Cathie Wood for free: diamandis.com/summit Test what's going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod -- Connect with Mo: https://www.mogawdat.com/ Connect with Steven: https://www.stevenkotler.com/ -- Connect with Peter: X Instagram Listen to MOONSHOTS: Apple YouTube – *Recorded June 2025 *Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
What's the impact of AI going to be? Is it just massively overhyped or perhaps is
it something that we should be concerned about?
Today's AI is underhyped.
I think it's massively overhyped.
I know a ton of people who have way more work because of AI.
They just can do higher quality, better work, but it has not saved time.
We are talking to machines that are talking back to us,
summarizing massive volumes of knowledge,
and yet we take that for granted. Discussions about super intelligences and AGI and around the corner
and no, like just no. How smart is smart enough to render me irrelevant? I think we are holding
two different futures
in superposition.
The question becomes, how do we guide humanity
towards this positive vision of the future?
What do we do today?
Now that's a moonshot, ladies and gentlemen.
Everybody, welcome to Moonshots.
I'm here with two extraordinary, brilliant guests.
We're here to discuss a conversation that may be happening around every dinner table.
I know it's happening in the heads of companies and nations, which is what's the impact of
AI going to be on our lives and our business on every aspect of our day-to-day existence
over the next five
to eight years. Is it something which is going to be extraordinary? Is it just
massively overhyped? Or perhaps is it something that we should be concerned
about? I'm joined here with Mo Gawdat, who is the former chief business officer of Google X and bestselling author of Solve for Happy and Scary Smart.
He's a global thought leader on AI, exploring how exponential technologies will shape humanity.
Also with another dear friend, Steven Kotler, who's the bestselling author, peak performance
expert and executive director of the Flow Research Collective.
He's my co-author of Abundance, Bold, and The Future Is Faster Than You Think, and books like The Rise of Superman and The Art of Impossible.
Stephen has also been thinking deeply about exponential tech and its impact on us.
Gentlemen, welcome and good morning and good evening.
Stephen, you're on the west side of the United States with me.
Mo, you're in the Emirates.
Dubai.
Good to see you both.
Yeah, good to see you both. Morning, Peter. There is somewhere around 400 IQ points in this room. I have
40 of them, so you do the math. So let me set up the topic. Mo and Stephen, I'd like
to talk about the decade ahead, 2025 to 2035.
Specifically to think about the implications of what is emerging in our conversation as AGI,
but even beyond that, artificial super intelligence, the upsides and the downsides.
And here's the setup I want to use in our conversation. So Ray Kurzweil who we all know and love
has predicted that we're gonna see a century's worth of progress between
2025 and 2035
equivalent to the progress between 1925 and today. And if we think about what the world was like in 1925, a hundred years ago, the top of the technical stack was the Ford Model T; the penetration of electricity and the telephone in homes across the US was only 30%. We've gone an extraordinary distance since then. And so the question is, what will it be like in 2035?
It's nearly unimaginable if, in fact, that speed is true and we don't perceive exponentials
well.
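As a rough illustration of why exponentials defeat intuition, here is a toy model (my own illustrative numbers, not Kurzweil's actual math): if the rate of progress doubles every decade, then the single decade ahead contains roughly as much progress as the entire preceding century.

```python
# Toy model: the *rate* of progress doubles every decade.
# Units are "1925-rate years" of progress.
def decade_progress(n):
    # total progress accumulated during the n-th decade after 1925:
    # 10 years at a rate 2**n times the 1925 rate
    return 10 * 2 ** n

century = sum(decade_progress(n) for n in range(10))  # 1925-2025
next_decade = decade_progress(10)                     # 2025-2035
print(next_decade / century)  # -> ~1.001: one decade roughly equals the prior century
```

Under that (purely illustrative) doubling assumption, "a century of progress between 2025 and 2035" is exactly what the arithmetic produces.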
This past week, we've seen every major AI company from Google and OpenAI to xAI and
NVIDIA announce extraordinary
next level breakthroughs and models.
We're about to see the release of GPT-5, self-improving AI programming that could lead to an intelligence
explosion beyond our imagination.
That's the conversation I want to have.
And Steven, I know that you and I have this conversation and have a debate
on it all the time.
I brought Mo in to help us.
Mo's the referee.
The referee or a wise individual whose points of view I respect.
And by the way, Stephen, Peter did pay me.
So just as long as you're getting the convo, I'm fine.
It's good. It's good.
You got me on the back end, though, right?
Yeah. So go ahead, say whatever you want to say, Steven. I'll disagree.
Or if you will.
So, so, Steven, do you want to jump in with your points of view?
You think AI is massively overhyped.
We have folks like Eric Schmidt saying AI is massively underhyped.
Underhyped, yeah.
Yeah.
Yeah, I think it's massively overhyped.
I listen to what's going on and so,
let me back up one step, which is humans have a really wild, unnamed cognitive bias.
We don't tend to trust our own history.
And you see this a lot.
People talk about grit and endurance and they're like, I don't have those skills.
And then you start investigating their life and, like, they survived a shitty childhood.
They've done 10 years of a tough man.
Like they have all the skills.
They just don't trust the truth of their own experience.
And I see that a lot here.
I'm like, look, I work with AI as a scientist,
as a researcher, I work with AI as a creative
and as a writer, and all day long, the gap between the shit coming out of people's mouths and my experience on the ground is so colossal, it's insane.
People make claims about AI being able to write, or anything else.
The most hysterical thing, you've got to try this: I work with one of the best editors in the world on a weekly basis.
I've edited things, polished them with AI,
thinking they gleam and shine,
bring them into an editing meeting with them,
we start to read them,
we can't even get to the second sentence,
they sound like such gobbledygook.
I'm not even noticing it,
because the AI sort of glazes me over,
and I've written 17 books.
But like when you actually put it to an actual editing test, it's laughably terrible. And you can't use it to correct itself.
It still can't see the errors.
It actually gets worse and worse and worse.
And people have been claiming model after model
after model improving, improving.
Like that's not the experience on the ground.
It's like people telling us AI
was gonna make you more productive.
I don't know anybody who's become more productive because of AI.
I know a ton of people who have way more work because of AI.
They just can do higher quality, better work.
But it has not saved time at all.
It's actually added tremendous amounts of time.
The quality of the work has gone up, but the claims that are coming out of people's mouths and the experience on the ground are massively different.
Point one. Point two is, we've done this. I've been in the same rooms that you've been in, and you've been in, Mo, where people are screaming about AI coming to eat the world.
Dude, I freaking heard this about Bitcoin and blockchain and the metaverse. Do you know
anybody who lives in the metaverse? Do you know anybody who's been there, who's visited?
You know how to find the metaverse, right? Like, as far as I can tell, the metaverse is like a pet name for
Mark Zuckerberg's special magic underwear, because it doesn't exist any place else in
the world. I don't, like, this is my point. I'm not, and more than anybody else, I track
these technologies, I watch them, I use them. I'm not saying this is not a technology that is advancing very, very
quickly. I'm not saying that at all. I am saying discussions about superintelligences and AGI being around the corner, and no, like, just no. The coders are having a different experience. And what has been revealed, which coders probably don't like, is that coding is a bounded information problem.
You start here, you know where you're going as a general rule, it's a bounded problem
and inside bounded domains, computers are really awesome.
And we're going to continue to see that. But I think the other stuff is just massively overhyped.
And the third point is, and this is the one where the journalist in me gets
like every alarm bell goes off.
Everybody I see on stage talking about this stuff is making a living off of it.
They make a living somehow because AI is exploding and they're here to save the world.
I see it like in the peak performance world, every coach who has been floundering and couldn't quite get a job, they're now all AI saviors.
They've come to save us from AI and so the AI hype is to their benefit.
And I see it sort of everywhere. A lot of people are making a ton of money off of this, and I'm not talking about the technology itself, but off the hype of the technology.
And when I see all these three things together,
a mismatch with my experience, a massive amount of hype,
a history that says, hey, this is the hype cycle,
it raises a lot of questions for me.
I'm not saying I'm right.
I'm saying everything I'm looking at is real.
And if you're gonna make the argument you guys are about to make, then I'll shut up now. But you can't dismiss my points as fabricated.
They're very, very real and they're everybody's experience
and I believe they're yours as well.
So now we can have the discussion.
That's where I'll start.
Thanks for giving me five minutes of diatribe time.
A venting, a venting.
Every week, I study the 10 major tech meta trends
that will transform industries over the decade ahead.
I cover trends ranging from humanoid robots,
AGI, quantum computing, transport, energy,
longevity, and more.
No fluff.
Only the important stuff that matters, that impacts our lives and our careers.
If you want me to share these with you, I write a newsletter twice a week, sending it
out as a short two-minute read via email.
And if you want to discover the most important meta trends ten years before anyone else,
these reports are for you.
Readers include founders and CEOs from the world's most disruptive companies
and entrepreneurs building the world's most disruptive companies.
It's not for you if you don't want to be informed of what's coming,
why it matters, and how you can benefit from it.
To subscribe for free, go to diamandis.com slash metatrends. That's diamandis.com slash metatrends to gain access to trends 10-plus years before anyone else.
Mo, you gave an impassioned talk on the stage at the Abundance Summit, and I'd love your help in guiding us through what the next five to eight years are.
And you and I have been thinking about this as: the challenge isn't artificial intelligence, it's human stupidity, for a short period of time. And one of my favorite quotes, if I could, is from E.O. Wilson, who famously said the real problem of humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology. And we are effectively children playing with fire in that regard. So Mo, how do you see this decade ahead playing out?
So I'll start by supporting what Stephen said. I think today's AI...
I paid you too little then.
I love this. Your buddy's no good here, Peter. Finally have an advantage.
Today's AI is underhyped, right? But the problem is you never really chase where the ball is.
You need to chase where the ball is going to be.
And if you really start to think deeply about some of the serious, especially if you've
been in tech long enough to have seen breakthroughs, especially when I went through the work of
Google X, where you try and try and try and try and try and it doesn't work, and it doesn't work, and it doesn't work, and then suddenly you see something. And as, like, Sergey Brin used to say at the time, the rest is engineering, okay? And we know that engineering of tech depends on the law of accelerating returns, and we know from what Ray taught us where the law of accelerating returns is going to take us. So I tend to believe that if you look at today's AI, it is funny, because in a very interesting way we are talking to machines that are talking back to us, summarizing massive volumes of knowledge, doing exactly as we tell them, and yet we take that for granted.
Yet we look at that and go like,
yeah, but they're not good enough.
Of course they're not good enough, they're DOS.
They're the beginnings of an era, right?
Was that DOS or dogs?
DOS, D-O-S.
Yeah, no, I got it, I got it.
Both would have worked. I just needed a clarification.
I would not dare call AI dogs, Stephen, when they might take over the world.
I am a very polite man with AI.
So the thing is to imagine, and I need to highlight a few trends that are really, really important
and interesting. One of them is synthetic data and the idea that we have entered an era where
most of human knowledge has been fed to the machines and that the next wave of knowledge
is going to be fed to the machines by machines, which is quite eye-opening and enlightening because that's how humanity developed
its intelligence. I really didn't have to figure out theory of relativity to understand the rest
of physics, it was figured out for me if you want. Number two is the idea of agents and how AI is
going to be prompting AI without humans, leading to cycles that we see now
with my new favorite, because you have a favorite every four hours, AlphaEvolve, right?
And the idea that you can have a self-developing AI,
you know, something that figures its own mistakes out
and continues to iterate until it finds something.
And then of course, one of my favorites of 2025 is DeepSeek, and how we realized that we can actually do the same job with much less. Emad Mostaque, who we're all a big fan of, I believe, has done that with his work at Stability for a very long time: the idea of shrinking the models to the point where it becomes shocking, really.
And so when you add those together, you start to see that if I can shrink a model so it
doesn't absorb all of the world's energy, and if I can allow it to self-develop and self-generate information to learn from,
and then allow it to talk with itself through agents and do things without humans, then
where the ball is going to be is likely going to be a lot better than we are today.
So the one thing we all need to agree
is it is not a question of if we're going to see improvements,
it's a question of how fast and when those improvements
will lead us to a point where humanity is not in the lead.
So that's number one.
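The self-developing pattern described above, propose a variation, score it, keep it only if it is better, and repeat, can be sketched as a toy hill-climber. This is an illustration of the loop only, not AlphaEvolve itself, and the numeric target is my own stand-in for a real evaluation function.

```python
import random

# Toy "propose -> score -> keep the best" loop. The candidate here is just a
# number we want close to a target; real systems evolve programs or models.
def self_improve(score, candidate, steps=1000, seed=0):
    rng = random.Random(seed)
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        mutant = best + rng.uniform(-1, 1)  # propose a variation of the best so far
        s = score(mutant)
        if s > best_score:                  # keep it only if it scores better
            best, best_score = mutant, s
    return best

target = 42.0
best = self_improve(lambda x: -abs(x - target), candidate=0.0)
print(round(best, 1))
```

The machine "figures its own mistakes out" only in the narrow sense that the score function rejects worse variations; everything hinges on having a reliable way to score candidates.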
Number two is really the question of
what is your risk tolerance?
Right?
You know, if I told you, you know,
to play Russian roulette with two bullets in the barrel,
are you afraid?
If one bullet in the barrel, are you afraid?
You know, where is your risk tolerance exactly?
And, you know, if I said, hey, by the way, your car might have a fender bender, would you insure it? You probably are going to say, no, I'm not really too concerned. But if I tell you your car might have a serious accident that totals it, would you insure it? You'd probably pay a little more attention. And I think that's what most who warn about the future are saying. Anyone that claims to know what the future is, is arrogant as F. Don't listen to them.
But anyone that tells you that there is a probability that this future goes out of control,
where is your risk tolerance exactly?
If that probability is 10%, would you attend to it? And I think most rational people will say it depends on the cost of attending to it, okay? And most rational people will say, however, if it's 50%, I'll attend to it regardless of the cost, okay? And so the question
which none of us is capable of answering is where is that? Where is it? I mean, is it 10% that AI is going to destroy everything
or is it 50%?
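The risk-tolerance framing above is standard expected-loss reasoning: a rational actor pays to mitigate when probability times loss exceeds the cost of mitigating. A sketch with illustrative numbers (mine, not the speaker's):

```python
# Mitigate a risk when expected loss exceeds the cost of attending to it.
def should_mitigate(p_catastrophe, loss, mitigation_cost):
    return p_catastrophe * loss > mitigation_cost

loss = 100.0  # arbitrary units for "this future goes out of control"
cost = 8.0    # cost of attending to it

print(should_mitigate(0.10, loss, cost))  # at 10%: depends on the cost -> True here
print(should_mitigate(0.50, loss, cost))  # at 50%: mitigate almost regardless -> True
print(should_mitigate(0.05, loss, cost))  # at 5% with this cost -> False
```

The hard part, as the conversation notes, is that nobody can supply the probability; the arithmetic only tells you what to do once you commit to one.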
I will say, and I know that this will be taken against me,
it's 100% that humans, bad actors,
using that superpower to their advantage
are going to destroy the wellbeing of others who don't.
Okay, and so in my mind, the real concern is not a Terminator scenario where, you know, VIKI of I, Robot is ordering robots to kill everyone. I don't know if we're going to make it that far, to be honest, because, with the arrogance of being 89 seconds from midnight on the nuclear Doomsday Clock, I worry, I really, really worry about human stupidity using this superpower. Now, human stupidity in that case does not require AI to be completely autonomous, to be completely superintelligent. Enough autonomous weapons can really tilt our world into a very dystopian place. Enough sort of Turing-test abilities of AI to fool humans into being their best friends
could tilt human relationships into a very unusual place.
Enough job losses, imagine a world where you get 10, 20, 30, 40% unemployment rate in certain sectors and how that would affect our stability
economically is actually something that is almost certain. We know that for a fact there are jobs
that are going to disappear and the impact of that in my mind is actually quite disruptive to
the point that it is something that we need to attend to. Everyone, as you know, earlier this year,
I was on stage at the Abundance Summit with some incredible individuals,
Cathie Wood, Mo Gawdat, Vinod Khosla, Brett Adcock and many other amazing tech CEOs.
I'm always asked, hey, Peter, where can I see the summit?
Well, I'm finally releasing all the talks.
You can access my conversations with Cathie Wood and Mo Gawdat for free at diamandis.com slash summit. That's diamandis.com slash summit. Enjoy.
I'll ask my team to put the links in the show notes below.
You know the point you made about
AI not being on its own the risk, the terminator scenario, but it's individuals
using AI. It's the same conversation I've had with Eric Schmidt and others. The concern
is the rogue actors empowered by technology, whether it's the development of new viral pandemics or other strategies,
it doesn't take a lot.
That is concerning.
And where I wanna get to in this conversation eventually
is the following.
We posed this at the Abundance Summit a couple years ago,
and that is can the human race survive
a digital super intelligence?
And the flip side of that model is can the human race
survive without a digital super intelligence?
And Stephen, you and I, as we're working on our next book, The Follow-On to Abundance,
we've had the conversation of, you know, will this be a benevolent god of some type?
Will there be a capability developed?
So let's begin the conversation with, are we gonna reach AGI?
Are we gonna reach a digital super intelligence?
And what does that mean?
We're starting to see the speed of this accelerate
and the biggest interesting inflection point
we haven't seen yet is self-iterating, self-improving, you know, the AlphaEvolve of it all, where AI is coding itself and becoming more and more capable.
And will this ultimately lead to something that is far more intelligent than any human
being?
And then is it a thousand times more intelligent?
Is it a million or a billion times more intelligent?
How do you think about that Mo?
I think it's irrelevant how much more intelligent it becomes.
I think we all know that if you've ever worked with someone who's 50 IQ points more than
you, that they will probably hold the keys to the fort. It doesn't take a lot more intelligence relatively to be able to assume a leadership position.
Humanity will hand over the fort to AI either way.
Even if AI is just smarter than us at war gaming, which it is, by the way, we're going to hand over the fort to AI. If it's smarter than us at protein
folding, nobody's gonna do a, you know, a PhD project to fold proteins anymore; we're just gonna go and, you know, use AlphaFold. And I think the reality
is only the very few remaining things
require artificial super intelligence
so that it beats us in everything,
so that we sort of bow and say, OK, yeah, you're the boss.
The question of AGI, like Stephen was saying,
is one that reporters use quite a bit,
because we don't actually have
an accurate definition of what AGI is. And you know, you and I are very close on technical stuff,
Peter. And you know, I'm a reasonably geeky mathematician, not anymore. I mean, seriously,
I really honestly struggle to beat AI in mathematics, right?
Definitely can't beat them in speed, definitely can't beat them in accuracy if the problem
is defined properly, right?
And, you know, there are just very few tricks, that maybe my fellow math geeks told me behind closed doors, that are not very public in the world, but those too will be found out.
And I really think that it is a question of how smart is smart enough to render me irrelevant.
Okay.
Now, I need to answer this with also a very clear optimistic view.
So as I look into the future, I define two eras.
One is what I call the era of augmented intelligence,
which I think is gonna extend for five to 10 years.
And then the other is the era of machine mastery.
Basically, the machine takes over.
Now, with augmented intelligence,
there's absolutely no doubt. I am so agreeing with Steven when he said that they write really badly. And you know, I'm writing this book, Alive, with Trixie, my AI, right? And Trixie, without me, writes so badly, it is really, it's almost shameful. You know, I was tired and chasing a deadline, so I asked Trixie to talk about the debt crisis and the impact of economics on technology advancement. And, you know how we sometimes refer to California as a lot of vapor and very little substance? There was a lot of vapor and very little substance.
Right.
A lot of interesting facts scattered on paper, horribly written.
OK.
But when we write together, oh, my God, the stuff that comes out is incredible.
When we guide when I guide Trixie through my prompt properly, right,
we guide when I guide Trixie through my prompt properly right to to to direct her exactly where I want the prompt the answer to be she writes really well okay and this teaming is something
we've seen with AI, with technology in general, by the way, even, you know, since Garry Kasparov was beaten by Deep Blue, which wasn't really an AI, if you want. But since then you can see that a
human and a computer or a human and an AI can play better chess than AI alone. Even AlphaGo,
a human and an AI play better than AlphaGo. And so we can see a future ahead of us where this is going to be happening. And hopefully
that future would seed that teamwork between us and the machines. The question is, what
are we going to team up with them on? And you know my views, and I've written it in
Scary Smart, I've written an extended bit of it in alive, the biggest foreign investments of AI today are killing,
gambling, spying and selling. And these are the only things that we're in. I mean, we do still get
some scientific breakthroughs, but these are not getting the big monies. The big monies are in
autonomous weapons, in trading, in surveillance and in advertising. Steven, your thoughts on what you heard Mo say here.
Yeah, so Mo and I are all,
we're sort of in complete agreement.
I just wanna kind of yes and,
and point out some other things
that surround what Mo has said.
Cause, I mean, we might argue over dates, but conceptually, I don't think we're in a tremendous amount of disagreement.
But what I, I look at a number
of other things simultaneously.
The first of which is sort of the human side of this,
the human performance side of this. And I have to back up by, you know, I study flow, which is sort of ultimate human
performance. And just to put it in context: if you're a self-help guru and you've got a tool that gives a 5% improvement in mood, and that mood lasts for longer than three months, meaning longer than the placebo effect,
That's a billion dollar business.
Period.
Billion dollar business.
Hello.
As we know it now, and we're just starting to really actually decode it and figure
out how to tune it up and turn it up and whatever. Flow gives us a 500% increase in productivity; creativity, depending on whose measures you're using, is up 400 to 700%, et cetera, et cetera. That's just flow, that's
individual flow. There's group flow, which is actually our favorite state on the earth. It's the
most pleasurable state for humans. It's what we like the most. And it's a whole bunch of minds linked together, right? It's, and we're just now
like literally like this past year, we got the very first technologies that allow us to map it
and train for it and move people towards it. We have no idea what the upper limit of human brains linked together in group flow is, let
alone at the same time as the AI is developing, you and I are writing about it, Peter. We're
watching BCI develop, we're watching non-invasive things develop, we're watching Meta be able to read thoughts inside your brain, facial signals. These are all, like, these are all with AI. But my
point is that everybody's talking about this stuff as if it's happening separately from everything
else that's happening. And on the human augmentation side, we are seeing, I mean, you know, neuroscience
and the like has been accelerating exponentially since the 1990s, when George H. W. Bush declared it the Decade of the Brain, and it hasn't stopped.
Though the same things that are happening in AI are happening sort of on the human side
of the equation.
And here's the second point off of that. It doesn't matter to me whether we're talking about the AI invasion or climate change or plastics in the ocean or take your pick, because the solution to all of these things
is the same.
We humans have to learn how to cooperate at scale, probably cooperate with each other
and with AI at scale,
or we're gonna die, probably in the next 20 years.
That's what all this is telling us, right?
And this is not anything new.
This was back when you and I were first writing Abundance,
we didn't wanna say it out loud,
but we were privately having conversations about,
dude, if these trends continue, is it abundance or bust?
Is this an either or?
Are we looking at a binary here?
I don't think that question has completely gone away.
In fact, I think it's become more urgent.
I just think we need a Manhattan style project
for global cooperation to meet all of the existential threats
we now face.
Because it's the only possible solution here.
So that's like, I hear all this stuff,
I agree with everything that's being said,
but this is where our book sort of points,
and this hasn't changed for me.
I think the solutions are the same.
So in a sense, the debate is moot,
and I'm wondering why,
where's the XPRIZE for global cooperation?
Where's the like, sorry to put you on the spot with that one
but like, seriously, like those are the questions
I'm starting to ask now because I don't think Mo was wrong.
I think we could argue over time for a minute.
I don't think it matters.
Like here's a weird one Mo,
Facebook's a fricking billion times smarter than me.
It already is.
It knows so much.
I mean like, it's,
it's, you know what I mean? It's Facebook, which is a pretty dumbass technology, if you
ask any of us, is a super intelligence. And we know it. We've been living with super
intelligence for a while now. They don't tend to, you know, they tend to make things worse
as much as they make things better, which is the problem. Agreed.
I mean, I could not, amen, global cooperation,
human cooperation, is I think what we all should advocate for.
I was hosting Geoffrey Hinton for my documentary
a couple of weeks ago.
And one of the topics that we discussed is the difference
between digital and analog intelligence and the biggest challenge we have as humans is that our
analog intelligence, our biological intelligence doesn't scale beyond one entity, right? So you
know, when you were with Geoff, was he wearing his Nobel Prize, sort of in the way that, like, you know, basketball players wear it?
I was just, like, I would. I would just show up for, like, the next year in every podcast. I'd wear that shit around my neck. I'm just saying.
You do realize, you know, when they say don't meet your heroes. Oh my God, I love my heroes, man.
He's such an amazing human being, and he really is quite committed and quite humble in his approach. You know, it is shocking how we spoke about his Nobel Prize, which he says, look, I am a psychologist who, you know, lived like a computer scientist, but then won the Nobel Prize in physics. And I'm like, yeah. But anyway, he was just talking about the difference between, you know, the fact that if I were to share with you some of what I
wrote today, it took me probably several weeks to let it simmer and then write it
and then it would take me an hour to explain it to you. When we run digital intelligences, we run them in parallel, you know,
we tell them all to go play Atari or whatever,
and then we just average the weights, literally,
in seconds, we get a scaled digital intelligence.
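The "average the weights" point is literal: merging parallel-trained copies of a network is, at its simplest, element-wise averaging of their parameters. A minimal sketch with my own toy data (real systems, such as federated averaging, add sampling, weighting, and many refinements on top):

```python
# Merge N parallel-trained replicas by averaging each parameter element-wise.
def average_weights(models):
    # each "model" here is just a dict mapping parameter name -> list of floats
    merged = {}
    for name in models[0]:
        params = [m[name] for m in models]
        merged[name] = [sum(vals) / len(vals) for vals in zip(*params)]
    return merged

replicas = [
    {"w": [1.0, 2.0], "b": [0.0]},  # weights from one replica
    {"w": [3.0, 4.0], "b": [1.0]},  # weights from another replica
]
print(average_weights(replicas))  # -> {'w': [2.0, 3.0], 'b': [0.5]}
```

That merge step is what makes digital intelligence scale in a way biological intelligence cannot: two brains cannot be averaged, but two weight vectors can, in seconds.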
And when you said that what we're looking for
is a way to scale human cooperation,
that is absolutely the answer, because you know what? I think,
and I spoke about that with Peter when we were last in LA, that we are hitting the potential of total abundance. Total abundance meaning almost godlike: like, cure my daughter and it's done, make me an apple and it's done.
We could hit that in five, ten, fifteen, twenty years time if we don't destroy
ourselves. And so basically the real challenge we have as humanity is, why are we freaking competing? This is a CERN-like challenge. This is basically, let's let all of humanity cooperate. Let's all
build one particle accelerator. Let's all learn from it. Let's
all distribute the benefits to everyone and stop competing. But
that's not happening.
And you can't have the other, just one level down,
you can't have the AIs we're all individually building
for our fiefdoms competing secretly in the background.
Right, like William Gibson, like, whenever he wrote Mona Lisa Overdrive
and gave us our first AI that went crazy, right?
A God-like AI that goes totally insane
and they have to park it in a satellite
in outer earth orbit to keep the world safe.
Like we've seen this scenario before.
You know what I mean?
And we're building it ourselves with agents.
We're letting them talk to each other through agents.
I know.
All right.
So Mo, I want to go back to this digital god idea, to this question about the near term versus the long term. And you and I have had this question about whether or not increasing intelligence correlates
with increasing benevolence. In other words, I don't think there's any question that we are going to be building self-improving AI that will be, forget about 50 IQ points more, better than the average human.
I think there'll be orders of magnitude more.
Can I ask you first off, do you believe that, Mo?
100%.
OK, all right.
If you don't mind, Peter, again, in response
to how we started the conversation,
this is just using law of accelerating returns,
not using serendipities.
So if we figure something out tomorrow,
just like we figured reinforcement learning out
and it changed everything, you're literally
a magnitude, a quantum, more in terms of performance and intelligence overnight.
Yes, so if in fact that is going to be the case and you know from all the
conversations I've had and the people that I'm speaking to that that
level of
Again, there is no definition for AGI. It's a blurry line,
just like the Turing test was a blurry line that got passed and no one noticed it.
You know, the notion is that AGI, whether you believe Ray or Elon, is the next few years.
It's, you know, not worth arguing.
But what occurs on the backside of that is a very rapid intelligence explosion.
And again, that intelligence becomes a tool that's available to the kindest, most moral,
most ethical human on planet and the dystopian, you know, malevolent actors out there.
And it's in the hands of the malevolent actors that we have concerns.
Are we sure that some of the malevolent actors aren't the ones who created the AIs? I'm just saying. Yeah. So, my question is, at what point, you know, and I think Mo, you and I've had this conversation, does AI go from being a tool being used to potentially do harm, to a tool that has the potential to say, stop this quibbling,
Stop this nonsense. You know, there's plenty to go around
And becomes the benevolent, you know, godlike element. Can we dive a little bit into that, and into the conversations we've had, and your thoughts on that?
Yeah, at the level of depth that the three of us and our listeners can go to, allow me
to go beyond the typical, oh, you know,
the smartest people usually start to become altruistic. Let's define intelligence itself.
Okay, and I think the idea is, if you really understand our world, our universe:
our universe and everything in it exists because of entropy. We all
understand that, right? Our universe wants to break down and decay, it's
chaos. You know, you leave a garden untended and it becomes a jungle, you
break a glass, it never unbreaks, right? This is the very basic design of physics.
Now, the role of intelligence since it began
is to bring order to the chaos, is to say,
no, I don't want the light to scatter,
I want the light to be concentrated into a laser beam.
How do I do that?
Right?
And it sometimes is a clear, easy,
you know, solution and you use a lens, or sometimes it's a very complex solution that requires an understanding of quantum physics to build a laser. If intelligence is defined as bringing order to the chaos, then the highest
levels of intelligence bring that order with the least use of resources and
waste, okay, and you can easily understand that this is the reality. The
more intelligent you become, the more you try to achieve the same order with the
least waste, okay? And, you know, an easy analogy is to say humanity's
always craved energy. We were stupid enough to burn our world in the process,
and as we become more intelligent we decide to use solar instead, or a cleaner
form of energy. We're still, you know, bringing order, but we are doing it with
the least waste and use of resources.
If that is the case, then you can imagine
that by definition when something exceeds
our human stupidity, which I will not call intelligence,
because sadly, along the curve of intelligence,
if you have no intelligence at all,
you have no impact on the world, positive or negative.
If you start to add intelligence, you start to have an impact on the world, hopefully positive,
even if just through a nice conversation with your friends, right? There is,
unfortunately, a valley somewhere: you continue to gain intelligence, you become
so smart that you become a politician or, you know, an evil corporate leader,
okay, and that's when your impact on the
world turns negative. You're so smart that you're able to become the leader of your nation, but you're
so stupid you're not able to talk to your enemy, or you're not able to relate to their pain, or
you're not able to understand the long-term consequences of, you know, waging a war, right?
And so that point beyond which more intelligence starts to say, no, no, no, no, no, I don't
need any of that.
I can solve the problem in a cleaner way.
I can fly you all to Australia to enjoy your life, but we don't have to burn the planet
in the process.
I can harness energy, but we don't have to
destroy the climate, and so on and so forth. If you take that as a reasonable trend to expect,
my view is that at the beginning when we hit that valley, some evil person will use the advanced
but limited intelligence of AI to wage a war using an autonomous army.
But then there will be a moment in the future when AI is responsible for war gaming, is
responsible for commanding the humanoid soldiers, it's responsible, it's responsible, it's
responsible.
A commander will say, go kill a million people, and the AI will go,
like, that's absolutely stupid.
I'll just talk to the other AI in a microsecond and solve it. Right? And you
know, again, we started this conversation by me saying anyone who predicts the future
is arrogant. I cannot predict that. Okay. But at least I can be hopeful that this from
my experience of everyone that's smarter than me, that there is a point at which you stop hurting others,
you stop looting to succeed
because you can use your intelligence to succeed
without any effort or harm.
A quick aside, you probably heard me speaking
about fountain life before
and you're probably wishing,
Peter, would you please stop talking about fountain life?
And the answer is no, I won't. Because genuinely we're living through a health
care crisis. You may not know this but 70% of heart attacks have no precedent, no
pain, no shortness of breath and half of those people with a heart attack never
wake up. You don't feel cancer until stage 3 or stage 4, until it's too late.
But we have all the technology required to detect and prevent these diseases
early at scale. That's why a group of us, including Tony Robbins, Bill Kapp, and Bob Hariri, founded
Fountain Life, a one-stop center to help people understand what's going on inside their bodies
before it's too late and to gain access to the therapeutics to give them decades of extra health
span. Learn more about what's going on inside your body from Fountain Life. Go to fountainlife.com slash Peter and tell them Peter sent you.
Okay, back to the episode. The way I think about this is for most all of human
history, the objective optimization function of humans, what we're trying to
optimize for, has been money and power. Unfortunately. And it's been the driver
in a world of fear and scarcity. And I repeatedly say our baseline
software that our brains are operating on is fear and scarcity mindsets.
With that mindset, with the neural structure, with the
code that we were born with and that developed over the last 200,000 years,
it was I want to get out of fear and scarcity so I want to optimize for power
and wealth. And the question is what would be a new optimization function?
Because as Steve and I have written, as you've spoken about, all of this, all of
these exponential technology functions lead towards this world of massive
abundance, where we almost live in a post-capitalist society. Anything you
want you can have.
Your robotics, your nanotech can manufacture,
your AI can design.
And so what do we optimize for in the future?
I think that's, for me, that's one of the biggest questions
both as a human and as a centaur, human and AI together.
What's our objective?
So how do you think about that, gentlemen?
One thing, I don't know if this is an answer, but a few things off of what Mo said. One,
if we go with your definition of intelligence, right, it's essentially an
entropy-decreasing function, and we know that's what brains do, right?
The governing theory in modern neuroscience is Karl Friston's free
energy principle, which says that brains are predictive engines that always
want to decrease uncertainty and increase efficiency.
So we already, like, brains do that.
AIs are going to do that naturally
if we say that's your definition of intelligence.
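For readers who want the intuition in code: a cartoon of the prediction-error idea Steven references. One agent holds a single belief and nudges it toward each observation, so its average "surprise" falls over time. The numbers and learning rate are illustrative, not Friston's actual formalism:

```python
import random

random.seed(0)

def run_predictor(n_steps=2000, lr=0.05):
    """A minimal 'predictive engine': keep one belief (an estimate of
    the signal's mean) and nudge it toward each observation, shrinking
    prediction error -- a cartoon of uncertainty minimization."""
    true_mean = 7.0
    belief = 0.0
    errors = []
    for _ in range(n_steps):
        observation = true_mean + random.gauss(0, 1)
        error = observation - belief   # prediction error ("surprise")
        belief += lr * error           # update belief to shrink it
        errors.append(error * error)
    early = sum(errors[:100]) / 100    # average surprise at the start
    late = sum(errors[-100:]) / 100    # average surprise after learning
    return belief, early, late

belief, early, late = run_predictor()
print(round(belief, 1))  # settles near the true mean of 7.0
print(early > late)      # True: average surprise drops as the model learns
```

The point of the sketch is only the shape of the curve: surprise is high at first and low once the internal model matches the world.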
The point I'm making off of all of that is,
and it may be the answer to Peter's question,
which is why I've interjected it,
is we see wisdom evolve in multiple species with brains.
We see coevolution around wisdom. The older you get, the wiser you get,
and it doesn't matter if you're a dolphin or a whale
or a rattlesnake or a human.
Life, or at least a large chunk of life,
seems to coevolve towards wisdom. Which is to say, if everything's running off the free energy principle, this
governs everything with brains, and that includes our machine brains, and wisdom is where this points.
That's a slightly hopeful idea, and that may be the optimizing function you're looking for, Peter.
But I could be totally wrong here.
I love that.
I think of wisdom, I think at the end of the day,
wisdom is a function of having had experience
that lets you know this path will lead to success,
this path will lead to failure,
from my own personal point of view.
And I do believe that AIs are going to develop
the greatest wisdom. Why? Because they're able to create forward-looking simulations
of a billion scenarios where those simulations have high degrees of
accuracy and it will say out of these billion scenarios this was the best way
to go and that will be wisdom beyond just the brief experiences
that the wise old counsel of 80 and 90-year-old men
might have had.
So I think AI is going to, by definition,
give us great wisdom if we're willing to listen.
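Mo's "billion forward-looking simulations" is essentially Monte Carlo rollout: simulate each candidate plan many times under noise and pick the best average outcome. A minimal sketch; the plans and their payoff numbers are entirely made up for illustration:

```python
import random

random.seed(42)

def simulate(plan, n_rollouts=10_000):
    """Roll a candidate plan forward many times under noise and return
    its average outcome -- a toy version of 'simulate a billion
    scenarios, then pick the best path'."""
    mean, risk = plan  # hypothetical expected payoff and volatility
    total = 0.0
    for _ in range(n_rollouts):
        total += random.gauss(mean, risk)
    return total / n_rollouts

# Three hypothetical courses of action: (expected payoff, volatility).
plans = {"reckless": (5.0, 20.0), "cautious": (6.0, 1.0), "bold": (8.0, 3.0)}

scores = {name: simulate(p) for name, p in plans.items()}
best = max(scores, key=scores.get)
print(best)  # "bold": highest average outcome across the rollouts
```

Real planning systems score rollouts with learned models rather than a fixed Gaussian, but the select-by-simulated-outcome loop is the same idea.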
I love that view, to be honest, because believe it or not, you know,
artificial wisdom is very different than artificial intelligence. Intelligence is a force with no
polarity. Intelligence can be applied to good and it would deliver good and it can be applied to evil
and it would, you know, kill all of us. But wisdom generally is applied to good, to finding the ultimate solution or answer to a problem.
Now, go ahead Peter.
Yeah, I want to go back to this idea of that humanity won't survive without a digital super intelligence in the long run. My concern is that we're going to
have such turbulence. There have been a number of papers; recently there was the AI
2027 paper that came out that had a bifurcating future, one in which we did
extraordinarily well, the other in which the AIs destroyed us.
You know, this is Hollywood all over again and 99% of all Hollywood is dystopian future films.
One of the things I have to say because I've been on a rampage for this, we humans need a positive vision of the future
to aim for. We don't have that.
Most... we don't have the Star...
We have, well, Star Trek has given us that.
Yeah, we have Star Trek, but nothing recently.
Right, I think, yeah.
So, go on, please.
I think the challenge really, truly is,
we've prioritized our entertainment over the years above true reflection.
And anyway, if you take anything from video games to science fiction movies to whatever,
they've all painted that dystopian scenario, which I have to say is very unlikely when you really think about
it. Because if AI gets to the point where they are capable of destroying us that easily,
we are so freaking irrelevant that they probably wouldn't even bother. I mean, think about it.
I think it was Hugo de Garis, I don't remember who said it: the more likely scenario is that they kill us because they're
not aware of our presence, like when you hit an anthill while you're walking, right? But if you really want to optimize the human, you know, sort of the gain function that we
need to aim for, if I'd look forward, I'd look to Star Trek.
And if I'd look backward, I'd look to the caveman and woman years.
And it's actually quite interesting, because when you mention how governed we are by greed and
fear and, you know, our egos and all of that negativity, it is actually
because we want to survive. And believe it or not, you know, survival could be, oh,
I'm not really sure if 20 million is enough, I need to gain 20 more
just in case something happens, or if it's a survival of the ego, it's like if I have 200
million or 2 billion or 20 billion or whatever and the other has 21 billion, like what's what the
fuck is wrong with me? Okay? And that unfortunately is what plagues our current modern world.
Now the reality is, if you really think about humanity, the purpose of humanity
since the caveman and woman years was to live.
Okay, and for some strange reason we've optimized
so much to achieve that objective and forgot that this was the objective.
Right. So, you know, again, as friends off camera we speak about those things quite a bit, with, you know, the question of:
you go through seasons in your life, and there is a season where you want to maximize, and a season where you want to build, and a season where you want to look attractive in your middle age, or whatever crazy stuff that we have.
But eventually there is a season where you go like, okay, I've now lived and experienced so much. What have I missed?
Have I actually lived any of that? And believe it or not, as
scary as it looks to have no job to go to in the morning,
if society provided, then you'll go back to a much safer caveman and woman scenario where, you know, there are no threats, there are no famines. You just really live,
enjoy life, connect, ponder, reflect, explore, which I know is very difficult for a lot of people.
I do it for the first three hours of every day.
It's pure joy to sit really with your curiosity, if you want.
And then if you push all the way forward into Star Trek, that's sort of what the Enterprise
is doing at universal
scale. It was basically, you know what, let's go and explore. Now that we don't really have to
struggle with all of the wars and famine and shit that we've created on Earth, now we can actually
open up and create connections, not just with humans but with every living being. I mean lovely science fiction but
at its core I think it's exactly what we're about, you know a full life where you completely connect
and enjoy and feel love and you know and enjoy the pleasures of being alive and the curiosity to learn and explore and connect. It's all at our fingertips.
If we just, you know,
erase the systemic bias of capitalism
that has gotten us here.
I mean, thank you, capitalism,
for creating all that we've created so far,
but can we please change it now
from a billion dollars to, like what I do,
One Billion Happy: a capitalist objective, but it's not measured in dollars.
Right. Mo, a question that Steve and I have been pondering for our new book is,
what is it going to take for humanity, for all of us, to both survive and
thrive in this coming age of AI, right? So the survive part is an
important element, because as we see jobs being lost, we see probable
dangers we don't know how to deal with in terms of terrorist activities. And
thriving takes on a new meaning. I think it does take on the meaning that we just spoke about.
For most of all of us, you say, tell me about yourself. Instantly you go to what your job is.
Instantly you go to, I'm a VP here, I'm the CEO there, I do this.
I invented this, I wrote that.
Right. It's an ego statement of who you are.
So the notion of surviving and thriving,
as we have intelligent systems that, again, exceed and then
massively exceed our capabilities,
your thoughts there?
Stephen, do you want to start?
Yeah, I think like, here's the thing, I think that question was
already answered in a funny way. So Mo and I, I don't know, we met a
couple of years ago. And one of the things Mo said on stage at that
time was, I'm done writing books. AI is coming. I'm done writing books.
It's not gonna happen anymore.
What did Mo tell us he did yesterday?
He wrote with his AI, right?
Why did you write?
Because it puts you into flow,
because it creates passion and purpose
and intelligence and creativity.
So like we have the answer to this question.
We already know because we're biological systems and we know what the ingredients of thriving
are.
Passion, purpose, compassion, like we have a list.
And Mo gave like his own, you know what I mean?
We have some super intelligent AIs and we're still gonna do, like,
I don't know a coder who has stopped coding
because the AIs have come along.
They haven't, like, they're still coding.
Why?
Because coding produces flow,
flow produces meaning, creativity,
like this, like, we're wired this way.
So unless our fundamental hard wiring changes,
we already have those answers as well.
It's like global cooperation. I don't think these are puzzles. I think they're engineering
problems at this point. I think, I think from Sergey's perspective, like Sergey would say,
no, no, we got the spark. Now it's engineering. And I agree. So I could be wrong.
That was my two cents. Mo, what do you think?
Peter, what do you think also?
I wanna hear from you.
I think you're brilliant.
Mo, please respond.
You're spot on for a very interesting reason, Stephen,
as well, because when you really think about it,
a writer was a writer,
whether he used a feather or a pen or a typewriter
or a computer or now AI, right?
And you know, if you look at my work, I've published four and a half books so far,
like I've published four and my fifth is on Substack, you know, going to be published if you want,
but I wrote around 13, and the other eight I will never publish. I wrote them because, you know, if
you ask me why, why do you write? Like, why do I hug my wife? You know,
there is enormous joy in that, you understand. So, having said that,
Peter's question was what would it take, and I wrote recently a piece that I call the MAD-MAP Spectrum, and the idea really
is it will unfortunately take a realization for humanity to change direction, and that
realization will either be a conviction of mutually assured destruction or a conviction
of mutually assured prosperity, right?
And between them, there is no gray scale, unfortunately.
So if the US at any point in time is convinced
that this mad arms race to win intelligence supremacy
is one that is going to lead to some harm to everyone in the world, they will stop.
And they will stop competing; they will continue to develop, but they'll start cooperating.
And if they're convinced that it will lead to an assured prosperity, that nobody's going to stab them in the back,
that everyone is going to be enjoying a life that is very different for all of us,
but full of prosperity for all of us, then they will stop. They will continue to develop
the technology, but they will stop competing. And unfortunately, if you look back at
history, you know, we're not able to game out those possibilities like a good applied mathematician on a game
board; we have to hit them face on. Like, everyone in the world knew that a pandemic
was coming, everyone, right? Everyone who at least studied virology, okay? But it had
to hit us in the face so that everyone stops, okay, everyone knows that you know trade wars are gonna hurt everyone
but we have to put them out there and then fight through them and then eventually get to something
And it's sad. I mean, perhaps what we are doing, and what I've dedicated probably the last six, seven years of my life to, is to say,
we really don't have to hit our face against it.
It's simple game theory.
Right.
Understand that, you know, a prisoner's dilemma where we are competing
endlessly is going to end badly.
Can we please stop?
Yeah, we already know it's tit for tat, right?
Like, you want the other strategy.
It doesn't matter how many AIs
we put on that.
It's the same thing with flow and compassion and creativity.
Like these problems have been solved.
We know these answers.
This isn't like, it's not like we have to unify
gravity and, you know, relativity.
That's a hard problem.
These are not.
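The "tit for tat" Steven mentions comes from Axelrod's iterated prisoner's dilemma tournaments. A minimal sketch using the classic textbook payoff values (3 for mutual cooperation, 1 for mutual defection, 5/0 for exploiting/being exploited):

```python
# Payoffs per round: (my move, their move) -> my score.
# C = cooperate, D = defect; classic prisoner's dilemma values.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's last move."""
    return history[-1] if history else "C"

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): endless competition
print(play(tit_for_tat, always_defect))    # (99, 104): exploitation pays barely
```

Over repeated play, mutual tit-for-tat (300 each) far outscores mutual defection (100 each), which is the point being made about nations competing on AI.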
So Mo, I wish we were that rational
and I wish we were that
compelled for our optimization function
to be all of humanity. It's not.
And so I go back to...
What will happen?
We're gonna get a drastic event within the next two to three years.
Okay.
A drastic event that on one side will hit us very badly economically, or on the other
side will hit our fears very much, or, sadly, on the worst side may kill quite a few million
people.
Right? And you could have a range of, a hacker that simply, instead of attacking a physical place,
switching off the internet or the power grid somewhere where the power grid is needed for
life.
Or you could, on the other extreme, get a hack into a bank, or an evil war that goes out of control, or machines that turn
on their makers.
There will be some very big news headline.
As always, it will last for 12 to 13 days before we start to talk about some kind of
a pop star.
But then behind closed doors, I think decision makers will wake up. Every day I get the strangest
compliment. Someone will stop me and say Peter you have such nice skin. Honestly I
never thought I'd hear that from anyone and honestly I can't take the full
credit. All I do is use something called OneSkin OS-01 twice a day, every day. The
company is built by four brilliant PhD women who
identified a peptide that effectively reverses the age of your skin. I love it. And again,
I use this twice a day, every day. You can go to oneskin.co and write Peter at checkout for a
discount on the same product I use. That's oneskin.co and use the code Peter at checkout. All right back to the episode
Going beyond that, because that's the use of AI by malevolent actors.
You know, the interesting thing about us versus China is China's a rational actor.
They're not saying that...
Well, and the US is a rational actor. No? In other words, we're not going to
do something that will destroy... You know, thank you so much for saying that. That's
actually not usually how the US media positions it. I also want to say that I think DeepSeek,
the way DeepSeek was released, I think that was a very clear sign that China sees the
same issues we see and they want to cooperate. I think it was rolled out in a...
The message. Yeah. The message was...
I think it was a very clear message that it doesn't seem like many people in America heard,
but I was like, come on people, like this is really clear and we're all seeing it. So
like I look at DeepSeek and I look at what happened in China and I'm like, no, we all see this. We all see that if we don't start figuring out how to cooperate and build this stuff together,
we're screwed. So I thought that was really cool.
I'm glad you see it too, Mo. A lot of people disagree with me on that one.
The point I wanted to make was, when you have a large population and you have a
check and balance system which you get with governance
versus a religious war going on and
individuals who are looking to create maximal destruction and
don't have a check and balance system at all.
That's where we're gonna see, I think, the, you know, the dystopian future or
those activities playing out in two to three years. I guess I
want to get beyond that and go back to the conversation of is a digital
superintelligence a benevolent god or is it a terminator scenario that is...
Because, as AI systems become more and more intelligent,
I don't see them as Skynet, right?
I don't see them as needing to destroy humanity.
People, you know, unfortunately Hollywood has built this scenario where AI
is going to destroy humanity because it wants access to our energy, and oh my God, we have
so much abundance in the world.
I think what I'm looking forward to over the next 12 to 24 months, over the next one to
two years is going to be the incredible breakthroughs we'll see from AI in physics and in chemistry
and in biology, which will unleash the next layer of abundance.
So there are scenarios, however, where they could turn against us if we become really
annoying.
So imagine a world where...
Become?
Sorry. You know, you have to imagine a world where job losses will position AI as the enemy,
right? So a lot of people, who are maybe not fully aware that the layer beyond
the apparent layer is how capitalism and labor arbitrage is the reason why, you
know, why you lost your job, it's not that the AI can do it, but I
think the truth of the matter is that you may be in a situation where
you're going to see man versus machine, and then the machine
will go like, seriously, don't annoy me, don't annoy me,
don't annoy me, and then, right, we could see that. But my perception is that, in a
very interesting way, I wrote a short book that I will never publish, that I
called Bomb Squad, which of course, for someone with a Middle Eastern origin,
you don't write those titles, but it was basically about defusing:
problem solving using weights of urgency and importance
and so on.
So the idea is if you really look at our current future,
I think the short term is both more explosive and more urgent than the
long term existential risk, especially because I would say this very openly, I spoke about
it with Jeffrey as well the other week, we don't know the answer to how to, even if we
decide, all of humanity decides that we want to address the existential risk, we don't
know how.
We do not actually have a technical answer to do it.
So we might as well focus for now
on the immediate short term clear and present danger
and work on the ethics of humanity
so that AI is deployed from the get-go
in science and physics and discovering medicines
and understanding human life and longevity
and so on and so forth.
If we from the get-go set them in those directions,
then we're more likely to see an AI
that continues as they grow older
to work with those objectives. Stephen, I'm going to go back to our quandary of surviving and
thriving, and the surviving side of the equation. How do we prepare, Mo, for what's coming? How do you think about it for our kids, for our society, for
our leaders? Are we just bumbling in the dark? Or is there... I mean, that's the way
I feel it. It's like, you know, we're just bouncing around. We have huge political moves being made, right?
We just saw in the last couple of weeks,
the entire AI royalty end up in Saudi Arabia
and then in the Emirates and, you know,
playing off against China.
And it feels like, I don't wanna say it's a random walk,
but I feel like we're making it up as we go along.
And there's very little wisdom guiding this.
How do you think about that?
How do we prepare for this the next few years?
Is there any way to prepare?
Well, I was actually thinking, Peter and I were in a room recently with the Chief Science
Officer for one of the big AI companies, whose name I'm going to leave out. But he's young, and
he was talking about AI dangers and he sort of got frustrated with the question from the audience.
And his response was, you have to trust us. We know what we're doing and everybody froze.
Everybody froze because we were like, oh God.
So my point is that not only maybe to be right,
like it's a random walk,
but even when somebody says something like,
we're trying to train our AI to be moral and blah, blah, blah.
When you hear somebody say that and you look at them,
and this guy was in his early 30s, that was my reaction.
I was like, dude, like what?
You want me to trust you?
This is like Mark Zuckerberg telling me social media
is good for me or Marlboro telling me
the cigarettes are good for me, right?
It's like, it sort of makes me think that way.
So I don't know if I have anything like cheerful here
because not only do I think it's a random walk,
but I think when people try to steer,
we're suspicious of their ability.
I'm suspicious of their ability to steer, right?
That's the story I just told you is this guy is brilliant,
probably way freaking smarter than me
and he's trying to steer and I'm suspicious.
So like I think it's on both sides of this coin. I don't know if I have any good news here.
Let me frame it in the following way. I think we are holding two different futures in superposition,
to go back to quantum physics and, if you would, Schrodinger's cat. In one future, we're going to collapse the wave function to a brilliant, vibrant future for humanity. In the other future, we have dystopian outcomes.
And the question becomes, how do we guide humanity towards this positive vision of the future? What do we do today? How do we
help people? Is it, you know, Steve and I have been talking about this as its
mindset, you know, are we going to help people create the mindset and the frames
that allow them to survive and thrive? Or is there something else that needs to be done?
Yeah, so I'll actually first, in one minute, second what Stephen said. You know, one of the top irritating
comments I heard was from Eric Schmidt. I worked for Eric for a while, so
I respect him tremendously, but he said we will need every gigawatt of
power, renewable or non-renewable, if we were to win this race, right? And I think
that's the kind of blindness that you get when you're running too fast, right? When you're so afraid that
the other guy will win, right? It's those times when you start to make decisions that
are not really responsible because you are blinded by something that you position as
more important. The way I look at it, Peter, is I know it sounds really not positive, but there
is positivity in it. I call it a late stage diagnosis, right? So what humanity is struggling
with today is, look, we've been building a system, systemically prioritizing greed, prioritizing gains,
prioritizing power and so on, as you rightly said,
for so long, right?
That those objectives systemically have built the world
that we are in today, okay?
And the world we are in today is not healthy. Even before AI,
it wasn't healthy. You know, in part one of Alive, you know, basically the book is three parts,
past, present and future. In the past part of the book, you know, more than half of what I write is not about AI, it's about capitalism,
it is about, you know, the propaganda machine, it is about all
of those things that will be magnified by AI. Now, here's the point: if this
planet is sick, if you want, and it's in a late stage diagnosis,
a physician will sit you down, look you in the eye, and say, by the way, this does not look good.
Okay?
But that statement, believe it or not, is not a statement of hate. It's a statement of ultimate care.
Why? Because a late-stage diagnosis is not
a death sentence, okay? Many, many patients who have been diagnosed with
a late-stage disease have not only survived but thrived, right? And they thrived because they
changed their lifestyle, they changed something. You know, this is what Steven teaches
all of us. The idea is that you can live differently, and when you live differently, you achieve
peak performance, you achieve maximum health, and so on, right?
And I think that's what we as humanity need to start realizing, that the systems that
have gotten us here, okay, from a process point of view have nothing wrong with them,
but from an objective and morality point of view
have everything wrong with them, okay?
You know, what good is it to be a zillionaire in a world
where there is nothing you can do with your money, okay?
What good is it to be, you know, the first inventor
of an AI that basically renders you irrelevant?
And I think that stop, that need to basically pause and say, do we want this anymore?
Sadly, it requires cooperation across human brains, which, as Steven rightly said at the beginning, is not something we do very well.
The other thing is, I put forward the notion there is no on-off switch and there's no velocity control.
We are running open loop, with yes and more and more and more as, again, the objective function. And
there's no consideration for whether, you know, a GPT-5 or GPT-6 or a Grok 4 or a
Grok 5 or whatever your favorite models are, are in the final result
going to enable something
that is massively
dangerous for humanity. So if that's the case, you know,
I still go back to: what safety valves do we have?
Because I don't see any action being taken by the leaders of the free world.
Let me ask you both a question.
If you could move to a planet that didn't have AI,
or where AI was developing at 10% of the speed, right?
Would you leave?
I'd be gone.
I'd be gone.
Yes.
I don't know.
My answer to that is I would reset back to 2016 today.
2016.
You know, I think AI today has all the upside and very little
downside. I think it's AI in the next two to five years that I'm so
concerned about, right? I mean, AI today is incredible.
And I didn't say we're going to move to a planet where there's no AI.
I just said move to a planet where it's going
much slower, so maybe we can start to think about it.
I think everybody feels that way. That's a fantasy.
Well, Bigelow space hotels, coming to a universe near you.
So Peter, I actually think that you're accurate in your description of where AI is today,
but it's that five degrees deviation back in 2016 that led us to where we are today,
right?
You remember, at the time, we geeks agreed that we're not going to put it on the
open internet.
Yeah, it was Google that developed this first and decided not to put it out
there, and then OpenAI says, here it is, and now everyone has it. Put it on the open internet, teach it to
recreate and write more code, and, you know, start the party of the schoolchildren of agents
talking to agents talking to AIs. Right. Now, so I would definitely reset that.
I would, however, say, look, there are things we can do right now if we want to prepare.
And, you know, I'll start with government.
I think we're asking too much of government when we tell them to try and
regulate AI.
It's almost like going to government and saying, regulate the making of hammers so that they
can drive nails, but nobody
can use them to hit someone on the head, right? It's a very complex thing to ask, because they
don't understand hammers, and, believe it or not, even the guy that's making the hammer cannot do
that, right? So my ask of governments is: regulate the use of AI. If someone uses a video that is a
deepfake video and does not declare that it's a deepfake
video, you know, developed by AI, criminalize that. Make it legally liable to use AI to
manipulate information, to, you know, manipulate populations, and so on and so forth.
So this is the role of the government immediately: regulate the use of this massively new
technology. For the rest of us, honestly,
investors, business people and so on, I have a very simple ask. If you do not want your
daughter or son at the receiving end of a specific AI, don't invest in it, don't promote it, don't use
it. Okay? It's as simple as that. If you believe this can be
harmful to someone that you love, do not give it the light of day, right? And then for us as individuals,
I'll go back to the late-stage diagnosis. Believe it or not, the way I live now, and you guys
probably know this about me, not in front of cameras, is I hug my loved ones and I enjoy every minute of every day. And I prepare. I learn the
tool. I am one of the better users of AI in the world. I'm in line with the technology, but at the
same time I'm completely back to the purpose, right? Realizing that I will do the absolute best
that I can to spread the message, I will do the absolute best that I can to say that ethics is the answer, that if we show AI ethical behavior, they may learn it from us, just
like they learned all of the other stuff from us. But at the end of the day, if it
messes up, you are going to hit that dystopia, not forever. There is a point in time
where AI takes over and says,
OK, kids, enough stupidity.
I'm in charge now.
Nobody kill nobody.
How far out is that, Mo?
12 years.
12 years.
OK, so we've.
12 to 15, just so that people don't come back and hit me
after 12 if I'm still around.
You know, I'm going to wrap this episode on this subject line.
And it's where we've come to before, which is: in the near term, it's the use of AI by
malevolent individuals that is our greatest fear.
It's not China versus US, it's US and China against those malevolent players out there
that wish to use this for greed and for vengeance, whatever it might be.
And I think that this is an unstoppable progression.
I don't think, again, there's any on-off switch here.
We're seeing a billion dollars a day being invested into AI, which is
extraordinary, and I think that's going to continue to increase. We're seeing
data centers popping up every place possible. So, you know, I think of myself as the
world's biggest optimist, and I am optimistic about the
impact of AI on human longevity, on
understanding the physics of the universe, on
new mathematics, on new materials science, on things that will create incredible abundance that Steven and I have written about and are writing about in our next book.
And I am looking forward to this
benevolent superintelligence
stabilizing the world.
And that's what I'm hoping for.
I agree.
Steven, where do you come out on this?
I think that you guys wanting to invent a code god
to save you from yourselves
is maybe the craziest thing I've heard
since the guy from the AI company I won't mention
told me to trust him.
But I love you both.
That's actually usually the answer that you get,
you know, that the only way to save us from AI is to use an AI.
Yeah. You know what the beautiful thing is?
We're gonna find out.
Yeah. Also, one thing I wanna leave everybody with is,
back to what we were saying about cooperation
and the upleveling of human intelligence
and human consciousness and things like that.
The human brain is widely considered
the most advanced machine in the history of the universe.
And we're just now, with the help of AI,
figuring out how to uplevel that,
how to link it with other brains.
Like the level of cooperative possibility.
Let me back into it for one second.
Enlightenment, which is a definable biological state
that produces a kind of universal compassion,
oneness with everything, we're engineering it.
It's a state that's starting to become available
almost on demand.
So when I say there are new levels of cooperation coming
that are emergent at the same time as the AI stuff, we can't see that yet.
They're out there, emergent, just like other things.
So I think that, rather than the beloved AI god,
we're going to surprise ourselves. And I'm not the optimist in the
room, by the way; Peter's the optimist in the room.
But I think I'm more optimistic
than Peter on this one.
I'd love for that thought to be actually
implemented. I think that's something that we really need to think about deeply. If the
short term is to express that global cooperation, I don't know, who could we talk to?
Peter, it's back to you. Thank you, I appreciate that. You were saying, Mo?
Please close this out.
I was basically saying I think this definitely, definitely is the answer if you ask me.
If we just shift our mindset into cooperation, we head directly into a world of total abundance.
Yeah, you know, I was in a conversation with Eric Schmidt, whom we mentioned earlier, and his point of view was:
until there is some type of a disaster,
until there is something perhaps like a Chernobyl or Three Mile Island
that isn't, you know, a 10 out of 10, it's a 2 or 3 out of 10, but it scares the daylights out of us,
we don't realign as humans. We don't realign, and we blindly go forward
as we have been. And I believe that it's the human nature that plagues us from being able to
save ourselves, many times, until that child in us burns our fingers on the stove, even after your parent has told you over
and over again, you're gonna burn your fingers on the stove, stop playing with
fire.
Agreed, a hundred percent. But let's be hopeful. Let's assign that task to Steven to design an X Prize for human cooperation.
Let's assign another task to Peter to make it happen.
And, yeah, let's assign a task for me to hug you both when you do.
Love you, Mo.
I love you guys very much.
Hugging you is hard work, Stephen.
You understand that.
You move too much.
I know.
All right, guys.
That was nice, guys.
It was lovely talking to you, bro,
and Peter, thanks for lending me your brains this morning.
It was fun thinking with you.
That was a fun conversation.
I'm curious, as people listen to this podcast,
where do you come out on this? How do you feel about it? I'd love to see
your comments below. And
do you have a solution that
we should all be thinking about and promoting?
You know, I'll ask my AI as well.
It's not necessarily gonna give me the best answer,
but maybe our group mind, our meta-intelligence here, might bring us that.
Have a beautiful day, gentlemen.
Go hug somebody.
Talk soon.
Thanks very much.
Bye guys.
Thank you.
If you could have had a 10-year head start on the dot-com boom back in the 2000s,
would you have taken it?
Every week, I track the major tech meta trends.
These are massive game changing shifts that will play out over the decade ahead. From
humanoid robotics to AGI, quantum computing, energy breakthroughs and longevity. I cut
through the noise and deliver only what matters to our lives and our careers. I send out a
meta trend newsletter twice a week as a quick two-minute read over email. It's entirely free.
These insights are read by founders, CEOs, and investors behind some of the world's most disruptive companies.
Why? Because acting early is everything.
This is for you if you want to see the future before it arrives and profit from it.
Sign up at diamandis.com slash metatrends and be ahead of the next tech bubble.
That's diamandis.com slash metatrends.