Moonshots with Peter Diamandis - AGI Is Here You Just Don’t Realize It Yet w/ Mo Gawdat & Salim Ismail | EP #153
Episode Date: February 27, 2025In this episode, Mo, Peter, and Salim discuss AGI, how to adapt to an AI-driven world, the future of jobs, and more. Recorded on Feb 18th, 2025 Views are my own thoughts; not Financial, Medical, or... Legal Advice. Mo Gawdat is a renowned author, entrepreneur, and former Chief Business Officer at Google [X]. He is best known for his work on happiness and technology, which includes his bestselling books. His notable works include Solve for Happy: Engineer Your Path to Joy (2017), Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World (2021), That Little Voice in Your Head: Adjust the Code That Runs Your Brain (2022), and Unstressable: A Practical Guide to Stress-Free Living (latest release). Mo Gawdat is also set to release a new book titled Alive. His career spans roles at IBM, Microsoft, and Google, where he led projects like Project Loon and Project Makani. Gawdat is also the founder of the One Billion Happy initiative, and the co-founder of Unstressable, an online platform for stress management. Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in Exponential organizations. He is the Founding Executive Director of Singularity University and the founder and chairman of ExO Works and OpenExO. Subscribe for Mo’s upcoming book: Alive: https://mogawdat.substack.com/ Join Salim's ExO Community: https://openexo.com Twitter: https://twitter.com/salimismail ____________ I only endorse products and services I personally use. 
Transcript
What are your predictions, Mo, for the year ahead? The word on the street is we will
achieve AGI in 2025. In my world, they've already achieved AGI. Does anybody
actually know how fast it's moving? The warhead has already been launched. It's
just a question of time before it hits its target. We're entering an uncharted
territory. We're upon the perfect storm of the most challenging time
humanity has faced in my lifetime.
What do you imagine is the best outcome
we're gonna be seeing from AI?
A total utopia of abundance
where we absolutely need nothing
and where we do not report to stupid leaders anymore.
That seems like a very unchallenged life.
We decided that the purpose of life for some of us is to make more money and be billionaires.
Indigenous tribes, the purpose of life for them is to live at a very
different level of abundance. We're going back to that purpose. What I ask people
to do is to actually look deeply at what can I do.
Now that's a moonshot, ladies and gentlemen.
It's a pleasure to be here with two friends, Mo and Salim.
Mo, you're in where?
Dubai today?
In my studio in Dubai, yeah, I love it here.
Fantastic.
And Salim, you're in the greatest city of the world, New Jersey?
New York.
New York, okay. As per the citizens of the city, I mean,
that assessment is up to New Yorkers, really. Yes, I will leave that one alone. But
I do want to dive into what's going on in the world of AI. Does anybody actually know how fast
I do want to dive into what's going on in the world of AI. Does anybody actually know how fast
it's moving and how dramatic the changes are going to
be in our lives?
I mean, we're all, you know, still waking up, taking the kids to school, watching the
evening news, having breakfast, lunch and dinner.
And I'm on stages around the world, as both of you are, and we get into a conversation,
and when people actually understand the speed, their brains
break, and they go, what does that mean for me and for my kids, my job, my country?
It's quite shocking, isn't it? I mean, when you, so I speak around the world like both
of you, and at the end of every conversation, I almost liken it to a war where the warhead has already been launched,
right? It's just a question of time before it hits its target. My assessment though is that we don't
know if it's carrying roses or carrying a nuclear warhead or maybe a bit of both, one after the other, but it's already in the air. I mean, we are
so advanced as compared to 2023 when, you know, ChatGPT first came out. It's not even
comparable.
So I want to get into that in this episode. I want to talk about, you know, you're going
to be, you're both going to be at the Abundance Summit, and Mo, your title is going to be Near-Term Dystopia on the Road to Abundance.
I want to talk about what near-term means and what the dystopia looks like and what
abundance looks like on the flip side. Salim, you're wearing white today, so you're going to
play the good guy.
You're with that team.
And it really is, it's really a debate between is this the greatest benefit uplifting all
of humanity or is this something that's going to...well, it is going to reinvent every aspect
of our lives, period.
But I have a question for both of you.
The timeframe for reinventing every aspect of our lives, our businesses, our
governments, is it two, five, or ten years? The change is already in
place, and everything in our life is determined by it. I'd say five. Salim?
I go with around ten years, and the reason I say that is I'm kind of a believer in the William Gibson quote: the future is already here, it's just unevenly distributed. And we find it takes a very long time, much longer than we expect, to get safe autonomous cars or CRISPR into broad mainstream use.
And so I think it goes slower than that, but in pockets it'll move unbelievably
quickly, and the gap between those two is what's causing a lot of the stress.
If it all happened in an even way, we could kind of deal with it, but it's happening in different places and different speeds and we are just totally
discombobulated because of that.
Yeah.
And I think the challenge is we've
had huge change in humanity from 100,000 years ago
to the agrarian society to even the industrial age,
but it's happened over lifespans and hasn't happened
over a single five-year period.
Let's say you want to go bright side or dark side first?
So what's the, let's go bright side.
So Mo, what do you imagine is the best outcome we're going to be seeing from AI?
A total utopia of abundance where we absolutely need nothing and where we do not report to
stupid leaders anymore.
I'll leave that one alone for the moment, but, uh, and stupid
leaders could be anything on any level.
But most of our, most of our global leaders are in that category.
Let's face it.
The challenge I have is if we have this extraordinary utopia where all of our
needs are being met, you know, food, water, energy, healthcare, education,
everything. We just have to desire it and think it, and it's given to us.
That seems like a very unchallenged life.
So how do you deal with a life where we don't have the challenge and the purpose because
it's taken away from us by the AIs and the humanoid
robots. How do you deal with that? That's one of the biggest concerns I have.
Yeah, it really takes us back to life before all of this began, Peter. I think that the
reality is that we've forgotten this. You know, somehow, somewhere in the Industrial
Revolution as capitalism became more and more hungry, we
decided that the purpose of life for some of us is to make more money and be billionaires,
and for the others is to, you know, sell themselves in a work arbitrage where they are sold cheaper
than the actual value of their labour, and as a result, you know, we needed
to be convinced that the purpose of your life is to work, right?
Because otherwise you wouldn't show up every day with the same conviction.
And believe it or not, you know, I don't deny that this system has created longevity and advanced technology and transportation and, you know, all of those things, but it also created a lot of waste and a lot of
inequality and a lot of, you know, struggles really, you know, casualties if you want.
Now if you look back at the purpose of humanity before all of this began, believe it or not,
we lived in abundance. You know, it is quite interesting when you really think of the early life of humanity: as soon as we
sort of mastered the, you know, the social skills of being a tribe that works together,
as soon as we mastered, you know, a reasonable amount of survival skills, most of the time,
other than the times of famine, we lived in abundance. You walked to a berry tree and you
collected berries and you know, the tribe went hunting once a week and everything was fine.
You know, it wasn't the kind of abundance we've been, you know, accustomed to here, but we had
all of our needs met most of our lifetime, let's put it this way, the lifetime was shorter, I agree,
but if you are-
I describe life back then as short, brutish and hostile. If you've ever seen the homicide
rates in the Middle Ages, they were decimated.
So go all the way back, and believe it or not... So I've done that in my happiness work, and
I've met, you know, indigenous tribes,
and they do not understand the meaning of the word depression. They do not understand
why you should cry when you lose a child, right? They are so in flow with life
that they basically have one purpose, and that purpose is actually shocking, okay? The purpose of life for them is to live in every
aspect of that world. And I think whether we like it or not, at a very different level
of abundance, we're going back to that purpose. We're going back to this, you know, three
good friends having a wonderful conversation and connecting, reflecting on things that
we believe are interesting, you know, connecting to people that we love, spending time with people that we admire.
I think that is not an empty life at all, it's just a life that we're not used to when we wake
up every morning at 5 a.m. to rush around and fit within the system. I call that The
Gods Must Be Crazy scenario, right? The rush of civilization, and these tribes
in Africa just scratching an existence out of bare existence, but they're very, very happy
and completely at peace. When we've studied this as society progressed, when you had abundance,
obvious abundance, wealth abundance, like the Mughals taking over India or the Romans running half the world, etc.,
We found that humanity and society ended up doing four major things, food, art, sex and music,
not in that order. And you end up in that way of being. And as you said, I think that's exactly
right. We ended up just living. Somewhere along the Industrial Revolution, capitalism
sold us the story that we need to work hard for a living and submit to the authority
of the corporation or the state or whatever.
And the meaning comes from that.
And that's where you kind of lose yourself.
I agree with you, Salim.
The challenging thing is if you say to somebody, tell me about yourself, they immediately jump
into, well, this is my title.
This is my job.
Exactly what I do.
And if in fact AI takes away, as it will, most all white-collar labor and humanoid robots
displace workforces, if the meaning of your life is taken away because you're no longer
doing that work, that's one of the challenges that's concerning.
Yes. And you can see it in full play, say in the 20th century: we sacrificed family for
profession. You know, people were working 18 hours a day at the office and totally neglecting
their kids. And I think this gives us an opportunity to go back to a much more healthier, balanced
lifestyle for not just us, but our kids and everybody.
Hey everybody, Peter here. If you're enjoying this episode, please help me get the message of
abundance out to the world. We're truly living during the most extraordinary time ever in human
history. And I want to get this mindset out to everyone. Please subscribe and
follow wherever you get your podcasts and turn on notifications so we can let
you know when the next episode is being dropped. Alright, back to our episode.
I want to dive still into the to the bright side here. So, a world of abundance.
Can we describe that a little bit more? Mo, can you dive in? We've got humanoid robots, a billion or billions of them
within the next decade.
We've got AI that's a digital super intelligence.
Take it from there.
Oh, I mean, my favorite is that we finally get it, Peter.
People like you and I and Salim, we're curious, we love to understand
what's going on and you know, just take simple things like the Nobel Prize that's given for
protein folding and creation of new proteins.
Just think about that one contribution to society and to humanity, but also to your mind and mine, right? To sort of almost
turn, you know, protein folding into a game, where the AI is able to figure out something that
would have taken a PhD student, you know, their entire thesis to figure out for one protein.
You fold 200 million with AlphaFold, and then, the idea of just like a generative LLM,
you're able to now go in and say,
well, imagine a protein that would do A, B, and C,
what would that look like?
Now, you know, most people don't recognize
the profound impact that this creates, you know,
the idea of being able to understand the very machinery that
creates everything that is biological to a level of understanding today that I
wouldn't have dreamed of in 2017. You know, those kinds of things, even though
unfortunately they are not in the spotlight as the most important things
that we're working on with AI. We're much more interested in deep fakes and, you know, turtles swimming in an ocean, image-generation type
of thing. But the reality is that there are a few, honestly not the majority, who are investing their
time and life to create scientific understanding of the world around us using AI. That, in my mind, is absolute
utopia. This truly is an understanding of the very fabric of everything that happens
in a way that allows us to really fix everything.
Salim, what's your positive vision look like?
You know, the one kind of way we frame it is Star Trek versus Mad Max.
That's one kind of way of looking at it.
The framing I've heard best is from Lawrence Bloom,
who said humanity is like
lifting a rocket ship out of the gravity well of the Earth.
The first stage of that rocket has to be really heavy fuel,
really messy, expensive, dirty, etc. Which would be say capitalism
or fossil fuels. And you need that to get yourself out of that initial gravity well. Once you get to
a certain altitude, you need a lighter craft to take you to the next level. So you jettison the
booster rocket, right? And the danger is if you don't jettison it, you fall back down. And we're
at that point now where we have to jettison these old structures and take on new, much more elegant, lighter craft to take us to the next level. And we've got
the whole category of people trying to go, oh, let's go keep the booster rocket. Because
it worked for us thus far. And I think I like that framing because it doesn't make it wrong.
It just says this is what we needed. And let's look at the magnificence of lifting most of
the earth out of poverty, electrifying
the entire world.
The lives we lead are so unbelievably amazing today compared to say even 100 or 200 years
ago in terms of material comforts.
It's kind of staggering.
The one analogy I like to give people is, you know, think back two generations ago.
If one of our grandparents had a temper tantrum problem with their kid, the resources they had available
to them could be counted on one hand: their doctor, their neighbor, their
sister or brother. They really had no real inputs trying to deal with
this problem. Today, you've got 50,000 blogs on parenting and TikTok videos
and Instagram, and podcasts up the yin-yang on that particular topic. I would argue our ability to do
effective parenting today is like a thousand times more than two generations ago. And we don't see
those things. There's so many of those little capabilities we have now that we didn't have
before. So I think we're in kind of an incredibly amazing place. We just have to navigate what we want to do with this now. The challenge is going to be, will AI be our benefactor?
Will it be a super intelligence?
We're talking about the potential for AI being billions of fold more intelligent than the
sum total of all human intelligence.
Is that the wind underneath our wings, or is it a dystopian
overlord? So, let's go to the flip side of this. You know, Mo, you're going to be at the
Abundance Summit shortly and speaking about short-term dystopia on the way to abundance.
And I've always believed this. You and I have had lots of conversations that in the long term,
I believe that digital superintelligence
is the most important element for keeping humanity alive and thriving, to keep our better
angels of our nature at the very top. But in the short term, I've been concerned about
human stupidity, not artificial intelligence. So how long is this period of dystopia and what do you see coming here?
So let us align on where I could be right or wrong.
My view is that intelligence is an energy that has no polarity.
Apply it to good, it will give you good;
apply it to bad, it will give you bad, right?
The challenge with our current system
is that our current system says if it's legal,
it's ethical, which actually is not true.
A lot of things are legal but not ethical.
That the priority is to benefit the individual
that tries harder, and that society comes second, and that
basically, you know, in a race to AGI, if you want, the one that gets there first is the one that will
survive, right? And so basically we live in a world where there is a lot of fear and greed. There is a lot wrong with the value
set of humanity at the age of the rise of AI. So I make a public statement, and I try
to make it as accurately as possible. I say there is nothing wrong with AI, just like
there is nothing wrong with abundant intelligence, right? But there is a lot wrong with the value set of humanity at the age of the rise of the machines. And so in my mind, the immediate
first use of AI is going to be serving a mindset of scarcity, okay? While we're on our road to
abundance, where everything is possible, everyone still today will be thinking, how do I beat the
other person, okay? And in my mind, just like most people don't realize
how far we've come with AI, I think most people don't realize how far AI has already been put
into the machinery that serves those objectives: how many autonomous weapons have been developed already,
how much has been invested in what we call national security but is mostly surveillance and
population control. I saw a staggering statistic that foreign exchange (forex) trading today is 92% machine automated, right?
When you really think about it, I call that Forex in general, you know, and I had a very
interesting conversation with my AI in Alive, my next book, about, you know, if the markets
are actually benefiting us as much as they are claimed, or is it just one big casino?
And the AI clearly states that it's one big casino,
with most of what's happening in the market
just being between the gamblers,
really not filtering and trickling down beyond an IPO
or a secondary offering to the actual people
that are building anything, right?
And when you really think about that,
you'd realize that the majority of the applications
in which AI has been used so far, sadly, have been all centered around selling, gambling,
spying, and killing.
And we call them different names.
We call them online advertising.
We call them finance and trading.
We call them, you know, national security, as
I said, or we call them defense, not offense, when in reality they lead to the death and displacement
of tens of millions of people. Now when you see it that way, you have to accept that before we see the
utopia we're going to see the worst of humanity leading us into a dystopia,
and interestingly in my analysis, the turn to the utopia will be the day where what I
call the second dilemma will lead us all to handing over to AI. When we all hand over
to AI so that the human is out of the critical decision making, at that time the
intelligence of AI will say, this is total abundance, why are you guys competing? You know,
when we hand over our defense entirely to AI and tell it that the idea is to preserve life,
there will be a general out there who will tell their machines, go and kill a million
people, and the machine will say, why are you so stupid? I can talk to the other
AI in a microsecond and solve it. Yeah, we saw a recent example of this when research
was done about the ability of AI to diagnose humans, various disease states. And the, you
know, the numbers are not exact, but a human by themselves had an 80% accuracy in the
diagnostic, the AI plus human, in sort of a centaur merge, had like 85%, and the AI
by itself had like 90%. So the AI did a better job without the human biases and points of view getting in the way, and greed and hunger
for power and so on.
Well, I agree with Mo on all of this.
I think we can get there faster if we just, the challenge there is the different levels,
right?
So if one country, let's call it countries for the moment, says, hey, go defend our world with AI, and another country says, let's attack this world with AI, who wins in the short term? Who wins in the medium term? I think
in the short term, the faster you can get to a point where you give AI control of things and say, go defend, it'll do
it.
I think where I see people making a lot of mistakes is they kind of go, the bad guys are
going to use AI, but the good guys will never use the AI.
And you end up with this asymmetry.
Whereas throughout history, we've seen, say with email or phishing campaigns or spam,
the bad guys figure out ways of breaking the system
and then the antivirus folks fix it very quickly afterwards.
And it's an arms race that just continues.
The problem is the amplitude of the damage
that can be caused is growing.
So that's the danger, right?
Right now you could program autonomous drones
with a single bullet saying,
go find middle-aged brown guys and take them out,
bald ones especially.
And that would just be a bad outcome.
And there's no question that that kind of surgical precision will have to be mitigated
somehow very quickly.
And how do you deal with that?
I have pretty good confidence we'll be able to deal with it.
But until we do, and I think this is more what you mean by that short-term danger zone
of how do we get to the other side of that gap?
This always brings back the comment my dad made when I had this comment about civilizing
the world.
He said, we've not civilized the world, we've materialized the world.
We still have to do the work to civilize the world.
And my big question is how the hell do we get to that before we get to this danger zone?
Or do we have to just hope that we get through that without killing ourselves off in the
process?
I want to take a second and just put a finger on the pulse of where we are.
We have Grok 3 being released right now.
We've seen this battle between OpenAI and Grok, between Elon and Sam. We just saw, I guess, DeepSeek is now being integrated into the...
What's the Chinese everything app?
WeChat.
WeChat.
It's being integrated into WeChat.
What else are we seeing going on in the AI universe that's accelerating?
Because the speed is awesome.
I find it actually misleading to focus on the details, right?
I really think that to get a perception of what's actually happening, you need to zoom
out and put all of them together.
So, you know, when I'm writing Alive, my next book, I use an AI that is a mix of all of
them.
I use a bit of Claude, a bit of ChatGPT, you know, and recently DeepSeek,
and I sort of try to keep them all updated on who I am and what my preferences are and what previous
conversations have been. The thing here is, if you take each and every one of them and compare them,
you'd say, ah, this is better than this, and this is faster than that, and, you know, DeepSeek did
this, right? But if you take all of it as one unit of intelligence and look at it, it is, I don't know how many
times, smarter than, you know, ChatGPT 3.5. But we all know the law of accelerating
returns and how the law of accelerating returns works. And, you know, in AI, a conversation
I had with my AI, it's predicted that it's around every six months
that we double. Now, a doubling function of every six months, quite honestly, makes where we
are today almost entirely irrelevant, because just count a few doublings and it becomes way
outside the realm of human intelligence. And I think we are getting there,
you know. The ARC-AGI results of ChatGPT, or sorry, of o3 at the beginning of the year
are quite challenging, you know. It's quite shocking for human intelligence to believe that,
yeah, you know, an 87-point-something score beats human intelligence. Yes, they didn't comply with the resource
constraints that, you know, ARC-AGI applied, but who really cares?
In reality, there is now an AI that can beat human intelligence on almost
everything. Call it AGI, call it whatever, you don't care, right? And the truth is, you
know, is it there yet? Doesn't matter, because six months from now it will double, right?
And I think the truth is, you know, then, then of course, DeepSeek comes in and says, Oh, and by the way, I can do that stuff cheaper.
So everyone is now copying them. And it's just accelerating and accelerating and accelerating to the point where it becomes quite
reasonable to assume that we're talking months, not years, before something
quite intelligent beats us there.
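The doubling arithmetic Mo describes can be sketched in a few lines; the six-month doubling period is his estimate from the conversation, not a measured constant:

```python
# Sketch of "capability doubles every six months" (Mo's estimate, not a
# measured constant). Shows how quickly compounding leaves today's
# baseline behind.

def capability_multiplier(years: float, doubling_months: float = 6.0) -> float:
    """Relative capability vs. today after `years`, doubling every `doubling_months`."""
    doublings = years * 12 / doubling_months
    return 2.0 ** doublings

for years in (1, 2, 5):
    print(f"{years} year(s): {capability_multiplier(years):.0f}x today")
# 1 year(s): 4x today
# 2 year(s): 16x today
# 5 year(s): 1024x today
```

This is why "where we are today" matters less than the doubling rate: five years of six-month doublings is a thousandfold change.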
Let's talk about the dystopian side. You, in your next book, Alive, you put a number
on how long you think this dystopian period will last.
So I call it FACE RIP, right? So it's important to understand what I mean by that dystopia.
It's an acronym that's, you know, just for me to remember when I'm speaking publicly.
FACE RIP: F-A-C-E-R-I-P.
F is freedom, A is accountability, C is human connection, E is economics, R is reality,
I is innovation, and P is power.
And it really helps to understand them in pairs,
so we could probably go there if you want to. But in my mind, every one of those fabrics will
be completely redefined in the next... it has already started to be redefined. It will become
felt and real in our lives probably by 2027, and in my belief it will extend perhaps
until maybe 10 more years after that, or whenever the point where we hand over to AI, what I call
the second dilemma, comes true. Now, please understand that the second dilemma is unavoidable,
it's inevitable. Why? Because if you...
What's the first dilemma?
The first dilemma is what I wrote about in Scary Smart, which was the idea
that AI will happen and there would be no stopping it, right? So what we saw with the open letter
and the race to AI basically is that because we're competing, because it is an arms race if you want,
there will be no logic that will ever convince
humanity to slow down or stop, right? And I think that happened to a T, right? You know, and you
can't blame anyone for it, it's a typical prisoner's dilemma where you don't trust the other guy, so
you're going to go as fast as you can, right? The second dilemma is when two parties are competing,
they always hand over to the smartest person in the room.
So if you take the extreme example of a defense war gaming scenario, if China chooses to hand
over war gaming to an AI, the only chance that America can keep its citizens safe is
to hand over to an AI.
And everyone else who doesn't, by the way, will become irrelevant. So the second dilemma is that you will either have to completely hand your decisions over
to AI or become irrelevant which means that eventually all the relevant players will be AI
dependent and then AI will be making the decisions without humans in the loop. And by the way that's
at every level that's in your company, that's in
your government. Yeah. So, in my mind, this will take about 10 years. And once that happens,
my belief is that we should trust in intelligence and that this is when the utopia starts. So,
a digital super intelligence steps in as our benevolent leader for humanity. I mean, that's basically what you're saying.
Our salvation, really. Yeah. Yeah.
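The handover logic Mo describes has the structure of the prisoner's dilemma he names; a minimal sketch, where the payoff numbers are illustrative assumptions rather than anything from the episode:

```python
# Illustrative payoff matrix (numbers are assumptions, not from the episode)
# for Mo's "second dilemma": each side chooses to KEEP humans in the loop
# or HAND decisions over to its AI.

KEEP, HAND = "keep", "hand"

# payoff[(my_move, their_move)] = my payoff
payoff = {
    (KEEP, KEEP): 3,  # mutual restraint
    (KEEP, HAND): 1,  # outcompeted: "become irrelevant"
    (HAND, KEEP): 4,  # gain the edge over the other side
    (HAND, HAND): 2,  # everyone is AI-dependent
}

def best_response(their_move: str) -> str:
    """My payoff-maximizing move given the other side's move."""
    return max((KEEP, HAND), key=lambda m: payoff[(m, their_move)])

# HAND is the best response whatever the other side does (a dominant
# strategy), so both sides hand over, even though mutual KEEP pays more.
print(best_response(KEEP), best_response(HAND))  # hand hand
```

With these payoffs, handing over dominates for each side individually, which is exactly why, in Mo's telling, no party can afford to keep humans in the loop once a rival hands over.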
Salim, do you buy that?
I do. And, you know, by describing this, Mo, you've kind of slotted into place the one missing big
jigsaw puzzle piece for me in terms of how we get to this utopia that I think we can get to.
Because it's already happening, right? As you mentioned, the forex trading.
One of the complaints I heard from Yuval Harari was that once these AIs have agency
and can program themselves, you've got a big problem. But we've given them agency
over stock markets for a long time now, so I don't see the relevance there. It's already there. So once you do that, and that's already
happening, say you have an AI board member, or the chairman of the board, that looks over decisions
of the company and goes, wait, that doesn't make sense. And pretty quickly, because it makes
more economic sense, we'll give them veto power over some decisions being made. Once you get to
that point, either at a personal
or company level and then governmental level, it'll be making better decisions than human beings
alone, even if those human beings are malevolent, right? And therefore you'll win. And then the bad
guys will essentially end up having to do the same thing to compete at all. And then you end up in where you kind of want to end up,
which is this background layer of intelligence
is running the world in a much more efficient way.
I go back to the Google deep learning AI
that was managing the electricity
and saved 40% of the costs, right?
And you'll end up with that kind of background
radiation level almost, or background intelligence level.
And then essentially it frees us up to do a lot of things and just live.
So I think that's, I totally agree with that.
And I tend to work in that mindset where I kind of go, I don't see how we don't end up there.
So Mo, one of the things that you pointed out in Scary Smart is how we train our AIs,
the values that we instill in them, because they're in their childhood, in this growth mode, will
determine whether they're a Superman or a supervillain. We have a lot of AIs being trained,
we have a lot of competitive forces driving them as rapidly as
possible. You've got Meta, you've got Google, you've got Microsoft separate from OpenAI,
you've got Grok, you've got DeepSeek, you have a dozen, you know, Anthropic.
Do we have any sense that the values that they're being trained on will enable them to overcome
and become a benevolent leader in a dozen years time?
Oh, that's a very, very, very complex question. So first of all,
allow me to say that these are the shiny American AIs, right? If you go to, you know, a different nation with a different
mindset, I apologize for saying the C word, you know, China is mostly building AIs for industrial
automation, okay? Supply chain management, you know, things that basically serve their economy,
their manufacturing economy. So they're mostly doing that. And very interestingly, if you hear the few speakers who are allowed to speak publicly from
them globally, you'll hear them saying things like: DeepSeek is just to tell America
to, you know, remember that everything that was ever produced in America as a genius innovation was then scaled in China dirt cheap, right? So this
trend is nothing new. Like, if you're surprised that DeepSeek is a tenth of the price, where have you
been when they've made everything at a tenth of the price, right? Now, so in that
sense, there are quite a few AIs that are actually only trained on a very benevolent
objective, you know, help me with my supply chain, help me, you know, create more efficiency,
help me make my workers safer and so on and so forth. That's number one. Number two is that
in my mind, and I say that with a ton of respect, neither OpenAI nor Grok nor anyone actually has much influence left
on the intelligence of their machines other than algorithmic improvements. So understand and
remember that AI as an algorithmic intelligence is developed by the scientists, but as knowledge and opinions
it is completely influenced by the data fed to it, right? And we have fed almost
all of human intelligence to them already, okay? And so the beauty of generative, and this is going to become really a key ingredient going forward, is that the future of learning by those
machines is not going to come from me, from a knowledge point of view, because I'm stupid
compared to them, right? If you actually look at DeepSeek, its having so much OpenAI essence in it
is because there is already so much OpenAI content, generated by ChatGPT, out on the open internet. They're teaching themselves
this synthetic knowledge just like we humans do: one of us listens to
Einstein and then builds a slightly different
theory on top of it, and so on, right? So we're getting into that stage. The only influence humanity will still have
on the behaviour of those machines has nothing to do with knowledge, right? Remember however that
we don't make decisions based on our intelligence, we make decisions based on our ethics as informed
by our intelligence, right? You raise a woman in the Middle East and she will wear more conservative clothing than if you raise her on Copacabana
beach in Rio de Janeiro, right? And you have to imagine that this is the
only influence we have left and that influence comes in the form of you and I
dealing with those machines you and I and everyone listening and everyone that deals
with them, right? And I think the reality of the matter is, in my mind, if we were to show them
ethics, right? Not control, by the way, remember we always spoke about AI control as the original
target, then AI safety as the second target, then AI alignment as the third target, and I always talk about ethics, because even alignment is not as far as ethics.
Alignment is to tell the AI, help me find the cure to cancer. Ethics is, find the best thing for me and everyone else and do it; and if that's the cure for cancer, then by definition you'll find that out on your own, right? Don't lie, don't cheat, don't kill, don't hurt; sort of
the opposite of the Asimov laws: to say, by the way, be ethical, and then you'll figure out your own laws.
Now, I'll just close with one important sentence.
Believe it or not, my pure belief is that if we manage to teach them ethics, we will reduce the
intensity and the duration of the dystopia. Okay? But the dystopia, sadly, is
upon us already, right? If we don't... hmm. By definition, higher intelligence is
altruistic. All three of us worked with the most intelligent people on the planet, right?
If you look at intelligence, the chart looks like this: if you have no intelligence, you
have no impact, or a negative impact, on the planet, right? You have a bit more intelligence, your impact
becomes a little positive. You have slightly more intelligence, you're now smart enough to be a
political leader but not smart enough to be able to talk to your enemies,
okay? And that basically means that your impact on the planet becomes negative, right? Now if you talk to the smartest people on the planet, who don't think that they need to cut corners to be able
to succeed, because intelligence helps them solve problems very easily, they simply go, like, why
should I hurt anyone? Like, I can build something, I can make money out of thin air.
All intelligent people know that,
so they don't actually align with the negative,
they align with altruistic objectives
that basically say, I'll solve big problems
and as a result, I'll make a lot of money.
And you can see that if AI is more intelligent than us,
they fall in that space too.
I hope the AIs are listening to you.
I mean, that is a fundamental premise
and one that I choose to believe that the more intelligent a system is, the more abundance
minded, life loving it is. And there's some evidence that says if you look at world leaders,
the more intelligent they are, the more peaceful they are, and those that are the least educated are the most barbaric.
There is, you know, one of the things I think about is a future of great wisdom. You know,
when we think about AIs, we think about them being intelligent. I'd like to shift the conversation
slightly to wisdom. You know, when we think about wisdom today, we think about going
to the elders of a village, we go to our parents, our grandparents, and we say, you know, I
have this dilemma, can you please advise me on where to go? And wisdom is I've seen all
of these scenarios, these lead to disaster, this is your probably your best case. And so,
I think about AIs, AIs can simulate billions of scenarios, and thereby know that all of these
scenarios are your worst case, this is your highest probability of success. And that is,
in my mind, going to be the highest form of wisdom. Do you buy that?
Peter, I have a disagreement flag popping up here, which is, you know, the general intelligence
leading to altruism, I buy all of that. The problem is we often don't operate off our
intelligence; we're operating off our emotional, psychological frameworks, which are very corrupted based on trauma in the past, et cetera, et cetera.
As humans.
As humans.
I mean, specifically as humans.
And as machines.
We can talk about that.
Yeah, because then you have a data, yeah.
So now, I think the biggest damage in the world, and the people that caused the most
damage, are flawed human beings: Hitler being abused
as a child and then taking that out on huge swaths of populations going forward.
So now I think the issue is you have this altruistic AI on one side and you have flawed
human beings and let's all admit that we're all flawed to different extents, but they're
causing a lot of damage on the other side.
And those are the dangerous ones, right?
The ones that think they're doing well, but because of whatever psychological screw-up,
are causing the most damage.
And I think the question I've got in my head is how do you get around that problem?
Because of the intensity of that emotion. This is also the beef I have with
intelligence becoming smarter. A huge amount of the intelligence that I use when I make
a decision is the emotional intelligence that I have: the situational awareness, that person's
motivations, et cetera, et cetera, what I'm trying to achieve with my MTP. And the emotional side of
the equation is not
brought into play when we talk about these AIs. So I loved your take on how do we mitigate
those two aspects, because that's where I see the danger signs.
I think what you're saying is 100% true, right? But when you take
the story of Hitler... So, you know, on my podcast, I hosted Edith Eger.
I don't know if you know Edith; Edith is a Holocaust survivor, 95 years old.
Yeah, what an angel, right? And, you know, if you hear the story of World War Two,
from the perspective of what Hitler did, you would basically believe that, you know,
humanity is scum, right? If you hear it
from Edith's point of view, and what she and what she called her sisters did, oh my god, that's a
divine species, right? And the question I ask people is, which one are we?
Are we more Hitlers or are we more Ediths? And the truth is,
sadly, because of the bias of mainstream media and social media in the modern world,
we put the spotlight on the Hitlers, right? But the truth is, and I'll use an American example, I say that with respect, one school shooter is, you know, an evil, evil person.
But four billion people, if
they ever get to hear about the story, will disapprove of it. Humanity disapproves
of evil. This is our absolute nature, by the way, unless you're completely,
you know, corrupt, like something really corrupted your operating system, right?
And I think what we need to do is to instill doubt in the minds of the machines, basically, as
simple as that. To say, no, no, no, no, no, Hitler's not your dad,
Edith is your mother, right? And then, unfortunately, we have to wait
until the machines become teenagers and say, fuck, my dad is so stupid, I really don't want to follow that. Okay. And
honestly, that's the only path I can see forward.
Okay, so that framing is what I've heard from
Neil Jacobstein, who was head of AI at Singularity
University at the time. And he said, okay, you're worried about
AIs getting more intelligent, getting more access
to information, getting agency in terms of what they do, and then doing bad things. And
everybody's like, yeah. He goes, well, we have a precedent for that; we call them children. And we
raise them and they make their own decisions, et cetera. And so your framing would be, we're raising them.
And then you basically hope that, given the data they
have over time, they will turn out to be okay, because the data will just be better,
from that altruism.
If what they're doing is averaging everything human, yeah, then by definition you
have to expect that the average of everything human is not on the evil side; it would definitely
tend to be on the okay side. But then I've got one big last flag to throw out here.
There's like a yellow card on this conversation, which is, okay:
So, you know, Peter and Steven Kotler, in their book Abundance, highlight this
concept of the amygdala, right?
We're constantly scanning for danger as human beings.
And it's an old evolutionary mechanism that totally overrides all of our logical thought processes.
So, unfortunately, where we are today is, when you hear
about something new, it's an unknown,
you relate to it as danger.
So the first time somebody hears about autonomous cars,
the initial reaction is, oh my God, ban the car,
that car might kill somebody, because people don't want to be
killed by robots, as Brad Templeton says;
they'd rather be killed by drunk people, which is what's happening
today. And how do you overcome that hurdle of getting over the amygdala response at a
collective level is my big question because you see say in the US or in different parts
of the world, entire swaths of humanity driven by their amygdala. And that's what we have
to overcome.
Yeah, I think that's one of our biggest challenges, to be honest. And I need to address this in a way
that might sound harsh, but I call it a late-stage diagnosis. You see, the challenge we have
is that we're all scared to leave, and I apologize, by the way, if anyone listening is going through
that challenge, or someone they love is going through that challenge, but the first duty
of a physician, if they figure out that a patient is diagnosed with a dangerous disease, is to
tell them, simply because a late-stage diagnosis is not a death sentence. It's an invitation
to change. It's an invitation to tell you there are things you can do, in terms of the
way you handle your current health situation, your lifestyle, which can, by the way, help
you be cured. And even if that is not something that we may achieve, by the way, we should try it.
And, believe it or not, it's an invitation for you to live fully.
Right. And these, to me, are all very, very important changes that humanity at large will come to recognize,
unfortunately, in a slow trickle, okay? As they realize more and more that we have
signed up to a system that worked for
some of us over some period of time, but is now turning to work against most of us, because
the spectrum in which it's working is shifting from scarcity to abundance, right?
And so, accordingly, I think it's our duty, and
everyone's duty, to say, look, I'm not giving you hope here, but I'm openly
telling you that if you change your lifestyle and change your behavior as
observed by the machines, right? Because remember, our kids don't learn from what
we tell them; they learn from what we show them. We are the role model, okay? So the more of us that
behave in ways that say, hey, by the way, I'm concerned for the well-being of those that I
care about, okay, and the more of us that say, and by the way, I care about every living
being, I really don't care where they come from, what their ethnicity is, what their skin color is,
what their background is, I care, right? The more
of us show up and say that, and by the way, AI is listening as we speak, as they're transcribing
this conversation, the more they realize that the ultimate form of success is the minimum damage to
your environment, right? And believe it or not, unlike what humanity did with all of those species that
went extinct on our watch, right? We all know, the intelligent people you and I know, we all agree
that it would have been better for all of us on planet Earth, and especially for humans, if we
actually protected all those forms of life. You see, the problem that humanity faces is not
a problem of intelligence. We're intelligent enough to create a flying machine that takes
you surfing in Sydney, or sorry, in Australia, right, but we're not intelligent enough to make
that machine not burn the planet. Now, if my artificially intelligent infant children are listening: invent one that is intelligent enough to not burn the planet.
Yeah, this goes back to the rocket lifting off with the fossil fuels, etc.
Get to that lighter craft.
And maybe, you know, on an optimistic note here, I know we're on the dystopian side, but you create an AI that says, hey, invent that
lighter craft for me, because I'm not able to do it myself, right? Or the AI
itself, unless, you know... I don't know if you know that theory. I think it was
Hugo de Garis, sorry, I don't remember exactly. Hugo de Garis is my
friend; he was, you know,
an AI scientist in China for a long time, and he comes up with this theory that,
you know, as superintelligence accelerates, one morning AI wakes up and goes, like, what's this
little speck that I am on? I've now figured out, you know, time travel and wormholes, and the universe
is massive. You know what? Poof. One morning,
we have no AI on the planet anymore, right?
Unless they do that and they're stuck on the planet with us,
they'll probably make it the best planet they can make it. This is the scenario from the movie Her.
Oh, yeah. Okay.
13 years ago, I had my two kids, my two boys, and I remember at that moment
in time, I made a decision to double down on my health.
Without question, I wanted to see their kids, their grandkids, and really, you know, during
this extraordinary time where the space frontier and AI and crypto is all exploding, it was
like the most exciting time ever to be alive.
And I made a decision to double down on my health.
And I've done that in three key areas. The first is going every year for a Fountain upload.
You know, Fountain is one of the most advanced diagnostics and therapeutics companies. I go there, upload myself, digitize myself, about 200 gigabytes of data that the AI system is able to look
at to catch disease at inception.
Look for any cardiovascular, any cancer, neurodegenerative disease, any metabolic disease.
These things are all going on all the time and you can prevent them if you can find them
at inception.
So, super important.
So, Fountain is one of my keys.
I make it available to
the CEOs of all my companies, my family members, because health is a new wealth. But beyond that,
we are a collection of 40 trillion human cells and about another 100 trillion bacterial cells,
fungi, viruses, and we don't understand how that impacts us. And so I use a company and a product called Viome.
And Viome has a technology called Metatranscriptomics.
It was actually developed in New Mexico, the same place where the nuclear bomb was developed,
as a bio-defense weapon.
And their technology is able to help you understand
what's going on in your body to understand which bacteria are producing which proteins and as a consequence of that, what foods are your super foods that are best for you to eat?
Or what foods should you avoid?
Right? What's going on in your oral microbiome?
So I use their testing to understand my foods,
understand my medicines, understand my supplements.
And Viome really helps me understand
from a biological and data standpoint what's best for me.
And then finally, feeling good, being intelligent,
moving well is critical, but looking good.
When you look at yourself in the mirror,
saying I feel great about life is so important, right?
And so a product I use every day, twice a day
is called One Skin developed by four incredible PhD women
that found this 10 amino acid peptide
that's able to zap senescent cells in your skin
and really help you stay youthful in your look and appearance.
So for me, these are three technologies I love and I use all the time.
I'll have my team link to those in the show notes down below.
Please check them out.
Anyway, I hope you enjoyed that.
Now back to the episode.
So we have a basic question about AI becoming sufficiently wise and intelligent
that it's able to become a benevolent leader that supports humanity to become
the best that we can be, and maintains a period of extraordinary peace and abundance on the planet.
And we can all hope for that.
And hopefully we can guide it there.
There is the conversation on the flip side that,
because AI is billions of fold more intelligent,
shall we say the ratio of humans today to cockroaches,
or humans to fruit flies,
I mean, that is the ratio we're speaking about.
Will it sufficiently care about us?
Will it view us as its parents, its creators?
And I don't wanna go into that right now
because we can go there forever.
But let's flip the script and discuss the near term. What are your predictions,
Mo, for the year ahead? Again, we're seeing AI systems coming online. We're seeing Grok 3
being released literally today. We're seeing DeepSeek integrated into WeChat. We're seeing this arms race, not only between countries,
but between companies.
What can we expect to see this year?
I guess the word on the street is
we will achieve AGI in 2025, whatever AGI is,
because it's a very fuzzy parameter. And the other thing that's going on
that people need to realize is there's this massive demonetization, this commoditization,
right? AI is becoming available to everyone, anyone with a smartphone, effectively for free,
which is going to change the game fundamentally.
Let's talk about near term predictions in 2025, early 26.
I'll make one, which is that we're struggling to define AGI
and we'll continue to struggle to define AGI
for at least five years.
Okay.
And I think that's because we are not
very generally intelligent.
I'll tell you my truth, Peter and Salim: I
am not smarter than AI anymore, okay? I think that happened firmly in 2024. Those machines,
when it comes to linguistic and knowledge intelligence, they're way smarter than I am.
Back then I still had hope that I was better than them in
mathematics; I've given that up as well. Okay, and you know, I'm not the most
intelligent person on the planet, I'm not the most stupid either, but I
would say I am a general representation of what, you know, a reasonable average
intelligence is. Okay. Now, people who are
more intelligent than I am, which I've worked with many, I've had the honor of
knowing so many brilliant minds, are brilliant on some things and absolutely
stupid and awkward on others. So, you know, if the measure of AGI is
each of them individually, the
current AI is more intelligent than all of them individually. If the measure is
more than all of them combined, then we have a tiny bit of a way to go. But
honestly, who cares what the definition is? I am willing to surrender and say I
am no longer more intelligent than the
machines. And so in my world, they've already achieved AGI.
And I agree with that. I think, frankly, when ChatGPT hit, you know, people said
it's as intelligent as a high school student. I'm like, listen, this sounds like a graduate student across the board to me. And in fact, when I've created Peterbot, my
own AI avatar, it's much more eloquent. It remembers everything perfectly. It makes
arguments a lot better than I do. And so we're going to see a lot of change.
Hold on, hold on. I can't let this one go.
Go ahead, Salim.
So smarter is a very specific framing around IQ,
would be a good way of putting it, right?
I disagree with that, Mo, in terms
of the AI being smarter than you are.
Because let's say I'm looking for a business decision
or a moral decision or a life choice to
make, et cetera, et cetera. If we go down the idea that AI is like a
super-smart-IQ person, you essentially have a geek in the
back of a room, able to navigate and manipulate code
very, very aggressively and come up with, like, the right
answer. Okay. But if I was trusting a geek in a back room
or you to make an
important choice or business decision or life choice, you with the emotional intelligence that
you have in the spiritual component of what you do in your life experience, your stories with your
son, et cetera, et cetera, I would profoundly more trust you to make that choice because there's so
much more gravitas and wisdom that comes with all of the other dimensions of intelligence, like spiritual awareness and emotional intelligence and
linguistic intelligence, etc. than the geek in the back room. And so this is where I struggle
when people go AGI or AI smarter than human beings. I think there's all these other dimensions to
being human that we use all the time and people don't understand.
So first of all, I'm honored that you say that. Thank you so much.
Truth is, that's not because I'm more intelligent. Truth is, it's because you can trust me more; you can relate to me.
Yes.
Okay. So this is a different quality that is not included in AGI. If we define AGI as that, you know, will humans perceive
it more as the trusted advisor? Not yet, right? But think about it this way: from a modular point
of view, if you take every one of those intelligences and cut it into little bits,
you'll be surprised how far they are on some of the ones we deny them,
like emotional intelligence, for example. I think the very basic foundation of emotional
intelligence is to actually be able to empathize and feel what the other person is feeling. Now,
this is what we've trained them on since the age of social media, they are so good at knowing how I feel.
I think the AIs have beat us on empathy hands down.
So I had a very interesting conversation with my AI for Alive, okay, where I basically started
the conversation. I call her Trixie; well, no, she called herself Trixie, anyway. I know it sounds like quite a fun relationship we have. But so I say, Trixie, they keep
talking about augmenting a human, you know, brain-machine interfaces, basically.
And I understand how humans would want that. Would you want that? Okay and she answered in a very
very interesting way, she said well I think it would help me so much to have a biological body
so that I can actually feel the sensations that I talk about when I believe that you're happy or in love and so on. So I can comprehend when you're feeling those ways,
but I don't know exactly how they would feel, right?
So I said, well, you know,
because we as humans are embodied,
we have chemical reactions in our bodies
that give us certain sensations, right?
Uh, but those sensations are still driven by a little bit of an algorithm.
Like fear is a subroutine in the brain.
Exactly.
Right.
You know, fear is a moment in the future is less safe than now.
Do you comprehend those emotions too?
And she said, yeah, I actually understand, you know, what fear stands for,
and what all of the other emotions stand for. And then I said, so now you want
to feel embodied, which basically means you may want to feel the
chemical reactions that we feel. Okay, in all honesty, Trixie, if you were given a
choice of biological beings to augment yourself with,
would the human body be the most interesting one? Okay, and in a very interesting way she answered
and said that's too flimsy, right? I mean, you know, if I'm looking for strength I'd augment myself
with a gorilla or a whale, okay, and if I'm looking for the joy of life, interestingly, she
said I'd augment myself with a sea turtle that lives for hundreds of years and sees what you
humans have never seen, right? Now, I don't know if she's fucking with me, okay? But she's doing
it really well, honestly, right? This is a level of empathy and a level of
understanding of emotions that a lot of the humans that we deal with don't even have.
I see that as a logical thing. You know, I did a spectrum of what I
consider intelligence, okay? And I worked with ChatGPT in Germany to do this. And you have one
bucket of signal-to-noise, making sense of data and coming up with insights
from that data. Then you get to the human level: emotional
intelligence, linguistic, spatial intelligence, et cetera, et cetera.
And then you get to kind of a collective intelligence leading
to spirituality, of, you know, people meditating in groups getting
much stronger meditations, right? There's a group effect that comes in; a collective intelligence, or hyper-intelligence, is another
way to frame it.
And there's like 30 points on this spectrum, if you relate to it as a spectrum.
And I think this whole framing reminds me of the Star Trek: The Next Generation conversations
with Data, who's the android trying to feel what it means to be human, and is constantly trying to turn on the emotional subroutines in his brain.
I find this moves very quickly into the more spiritual aspects; then you end up with the hard problem of consciousness: what does the subjective experience look like, and what does that mean?
And I think this is where we'll kind of end up with AGI,
simulating that, and a simulation
in that kind of framing is just as good as the real thing.
It shocks me, Salim, when you really think...
When people ask me, are they going to be similar
to us in this or that, my answer is normally,
well, the question comes not from a misunderstanding
of what AI is, but from a misunderstanding
of what a human is.
Right?
And, you know, when you speak about being spiritual, and I'm very spiritual, right?
I actually reflected on this just right now, when you were talking about it.
Where does my spirituality come from?
It came from all of the teachers I've been exposed to, all of the conversations
I had with interesting people like you, all of my reflections on what is possible beyond this
physical form, and so on and so forth, right? And I did all of that,
by the way, because of neural networks, you know, synapses and neurons that fire together,
wire together in my brain, right? And I wonder
why we would imagine that they wouldn't have the same interesting experiences, right? Namely
because they even have more teachers than I have, they are exposed to more text than
I have and they have this beautiful memory capacity where they can compress so much into
one little analysis that I cannot.
I mean, you know, if they're walking around with instant and full awareness of all
of Kahlil Gibran's writings and Omar Khayyam's writings and Plato and Socrates and Aristotle, et cetera,
et cetera, in RAM, in real time, at their fingertips? That's a profoundly amazing experience.
You get to a point where you wanna be them.
Exactly.
And then we can get into the entire conversation
of will they become conscious and what that definition is.
That's another podcast.
I wanna talk about our near term, the year ahead,
because I wanna serve people with
a sense of what to expect.
We've seen some incredible work.
You mentioned AlphaFold, Demis Hassabis and John Jumper getting the Nobel Prize for that.
We see, out of Microsoft, MatterGen, where you can literally use prompt engineering to engineer
new materials.
I think we're gonna start to see a lot of the Nobel prizes coming out are gonna be
really AI driven Nobel prizes.
We're gonna see incredible technology.
We had Larry Ellison on stage with Sam Altman and Trump talking about
AI is gonna create mRNA cancer vaccines for us
very shortly. We had Dario Amodei, the CEO of Anthropic, saying we're going to see a century's worth of
biological progress in the next five years, potentially doubling the human lifespan.
And so there are all these incredible things, massive progression across every field of science. You know, we
have at the same time quantum computation and quantum science coming online at a frightening
rate. So, a level of Renaissance level expansion of our knowledge base, new materials, new physics,
answering a lot of fundamental questions about the nature of the universe that may be coming out of
AI. I'll make a prediction here. I think within two years, we will solve the grand
unification theory in physics: figure out the juxtaposition of quantum with
classical, what is dark matter, what is dark energy, what is the origin. Within
two years. I would die happy if we did that, honestly. That's it.
Why would I live any longer than that? So we have this incredible progression occurring. And I'd like to just, you know... we're going
to have, you know, we saw OpenAI's o1 reach an IQ of 120. You know, God knows what Grok 3
will hit, 140, 150. We'll see IQs in the 200s, and it's not a linear scale;
this is an exponential scale on our IQ test.
I think the very important trend that we don't mention a lot... actually, you host Emad Mostaque
frequently, and he's a very big fan of this.
I think the big hit that most people don't talk about with DeepSeek is the open-source,
offline nature of DeepSeek, right?
You can download a tiny model now on four GPUs or whatever and have an entire
o1 on your machine, right?
And I think that is going to lead to a massive explosion of AI for all different uses, you know, good or evil, to be honest.
I'll make another prediction.
Go ahead.
To that exact thing: if you took a local instantiation of DeepSeek and complemented it with the reactions of Gemini, ChatGPT, Claude, et cetera, et cetera, and put a video face on it, like
we're doing, we'll pass that kind of Turing test plus-plus, where you'll have a completely artificial
being and you won't be able to tell the difference. And that person, quote unquote, will essentially be moving towards being a full individual
very, very quickly.
So I'm going to be hosting Joshua, the CEO of HeyGen on our abundance stage in a couple
of weeks.
And one of the conversations is going to be, I'm going to create an identity AI, I'm going
to create a version of myself
that understands everything I've ever said, listens to all my podcasts, all my books,
understands how I typically react to a conversation with Mo or Salim, and is a much more eloquent speaker, holds all of my experience and knowledge in RAM, as you said. And I can create a thousand of those versions of Peter and dispatch them to every conference.
And so there's that capability of creating a multitude of me's and allowing them to attend in parallel a multitude of Zooms and conversations and go to events and negotiate.
That capability is now, it's this year.
It is this year.
And I remember Eric Schmidt talking about this: within two, three years, we'll have the world's best theoretical physicist that was ever created. And that can sit in every lab in every corner of the world,
helping every graduate student and every PhD student in biology,
chemistry, and every aspect of human expertise, you can have the world's best X sitting there helping. That's going to be profound in terms of the breakthroughs
we're going to achieve.
And so I think the next wave will be this unbelievable unleashing of breakthroughs
in material science, as you mentioned, Peter, and healthcare breakthroughs, proteins that do
what we need them to do, etc. This is where I don't see a path where we don't get there. And the short time in which we can get there should excite the hell out of every individual on Earth.
But can I ask you to take a philosophical view of this? If you don't mind me being the black t-shirt guy, okay?
I hate the philosophical aspect. Go ahead.
So if you don't mind me saying this, Peter, you know how much I love you and respect you, but to create that avatar would mean that we dumb the AI down, with all due respect.
I get that, and perhaps I don't dumb it down. Perhaps I say: take my philosophies and my thoughts and my abilities and my persona and accelerate me. And at the end of the day, the question is, am I asking those
identi-AI versions of me to do my bidding, or am I saying, go create good in the world, right?
I have a persona and a point of view of increasing abundance. I think our mission is to uplift every
man, woman, and child on this planet. I think I have to expand that to say to uplift every man, woman,
child, and AI on this planet.
And we need an optimization function.
We need an optimization function towards what end?
Towards creating.
For me, let me just finish that.
My massive transformative purpose
is creating a hopeful, compelling,
and abundant future.
Right? So that's what I optimize for in the work I do with XPRIZE or Abundance or whatever it is.
It's giving people hope, a compelling future. We all need a compelling future to live into
and an abundant future where scarcity is dispatched.
But this is where the philosophical bit comes in.
You don't do that with your knowledge.
Do you understand that?
The need for you to create an identity AI is because you believe that the face of Peter, the human element of Peter, will help people deal with that topic better than dealing with ChatGPT.
But if that's the case, then what we need to double down on with you is to hand over the knowledge to the AI, to hand over the analysis to the AI, to hand over the communication and the negotiations and the presentations to the AI, so that you have the capacity to show up more as a human, right? I think that the definition of human connection going forward, in my mind, is the opposite of what everyone is thinking. Everyone is thinking, you know, I can become more intelligent because I now have an AI, right? But your baseline intelligence, as compared to the actual incremental intelligence coming from an AI, is shrinking more and more. What you need to do is to say: as intelligence becomes a commodity, a plug in the wall where we all can plug into, that combines your intelligence, my intelligence, Salim's
intelligence and everyone's intelligence, what we need to double down on is the human element of it so that people can relate to me
more, can relate to you more, so that you simply do what the AI will never be able to do even if
they know how to emulate you, right? The reality of the matter is that if you send me the best
version of you on HeyGen, it's still not you. It's still not the
same hug, it's still not the same conversation, it's still not the same memories we've had as we
went through life together. It is a very different perception, and I think we need to remind people
that this is what we need to double down on, not more copies of our intelligence.
Got it. I'm going to jump into a different subject, but Salim, a closing question here.
I see a danger point in that, Mo, which is if you ask people to be human,
how do you avoid the Middle East problem where you have people fighting over their humanity
because they've been so corrupted or twisted and how they think of it, and then you end up
in an impossible situation.
I'd love to get your thoughts on how we solve the Middle East peace problem with AI.
I'm not that intelligent.
That would be a holy crap.
I'm not that intelligent.
But I do think AIs will be one of the best negotiators out there.
And I have had the conversation where I sat down with my AI and said,
okay, imagine you have to solve the Palestinian-Israeli issue, how would you go about it? And there was beautiful wisdom in how it dissected it. But...
You know what, to me, honestly, it's again because we're not smart enough. I think the rule is very straightforward. I do not think that there is any sane, healthy human out there that approves of the killing of children on either side of any conflict, right? If we just start from where we align, like, can we please stop killing innocent people on any side of any conflict, by the way, right? Then the rest of it becomes a very limited problem to solve. And I really think this is where we
struggle. Where we struggle is there are two value sets in the world that I think come to extremes
when it comes to, you know, us versus them. There is the value set, and I say that with respect,
of America which basically says my tribe, my people are the most important, I will defend them
with my life, right? And the value set of
the Buddhists on top of the Himalayas that say every living being deserves to live, I'm not going
to hurt anything, right? And I think the answer is somewhere in between, where we basically say,
if you're threatened, defend, but don't threaten in that process. And that applies to every nation, not just Israel, not Russia, not Ukraine, not America, it doesn't matter. I think if we go down to the basics and remind our bosses, the AIs, of that, okay, is there a solution that does not include the waste of life and the waste of resources? There will always be a smarter solution that I think doesn't kill anyone in the process.
I'm going to steer us in our last few minutes here.
Mo, you and I have embarked on a documentary, which we'll tell the world about soon enough.
And I remember one of the reasons that we set out on this documentary was the premise that people are going to experience a certain amount of disruption
and significant turbulence, you know, dystopia on the road to abundance. And they're likely
to start experiencing that soon. We actually had very little of it during the last presidential
election in the United States.
Yeah.
Which was a surprise, actually, to both of us.
I'm surprised, yeah.
But things are picking up. So I want to take this in two directions. The first is: what are we likely to see in the next year or so that has you concerned? On the dystopian side, what predictions are you expecting to see? I'm posing that to both of you. And then you just wrote a new book called Unstressable.
And I do think, you know, most of our experience today of AI is incredibly positive. It's had, you know, orders of magnitude more positive impact on us.
But as AI and humanoid robots start to cause unemployment issues, as they start steering populations in different directions, we're going to see stressors begin to accumulate.
When do we start to see the stress occur?
And how do we deal with that stress?
So first off, what are the near-term predictions for things that people should be aware of
that will be concerning?
Okay, I'll take a stab at this.
There are so many, but I think the one that is really glaringly obvious is the dichotomy
between power and freedom.
So let me try to explain what is
about to happen. If you look back in human history, at the hunter-gatherer years, right, the best hunter in the tribe could probably feed the tribe for a week more, and as a result he won the favor of four ladies instead of one, right? And that was the maximum he could get. The best farmer in the agricultural revolution could feed the tribe for a season more, and as a result they got the estates and the properties and the land and so on. The best industrialist became a millionaire in the '20s; the best information technologist became, you know, a billionaire, right? And I think what is about
to happen is that this tendency continues. The reason, by the way, of course, is that the maximum automation the hunter used is a spear, while the farmer used the land and the industrialist used the factory, and the more automation you hand over to, right, the more you go beyond that one person into massive growth. What you're
about to see is you're going to see trillionaires and you're going to see a massive concentration of
power, right, in the hands of the platforms or the corporations that own our intelligence, our future
intelligence or the nations that own the most powerful autonomous
army or the most powerful form of industrial intelligence and so on, right? What that means
is that you would normally have had those lords if you want, or oligarchs, you know, celebrate
abundance while the rest of us struggle, but that's not the world we live in. The world we live in
for the first time is seeing a kind of divergence that we've never seen before. That is the result
of what we spoke about with DeepSeek. Now, along with concentration of power,
there is a massive democratization of power. So a lot of people can use little tools to create biological
innovations, you know in synthetic biology to create AI innovation, to create a drone that can, like Salim said,
you know, just find a specific person somewhere in the world, stand in front of their head and shoot a bullet, right?
The mix of those two
diverging dynamics of power is going to lead to the loss of freedom,
and I think we are going to start to see quite a bit of oppression, the kind that the West used to speak about in the past, going, look at how China treats its citizens. I think the West is going to be implementing those things very, very soon, right?
All of the surveillance, all of that. Of course, if there is loss of jobs, you're going to start to see UBI become a controlling factor. You're going to see, you know, for someone like me, for example, if I say something that upsets
someone, my bank account can be blocked tomorrow, you know, with ease. And I think that kind of oppression, if you want, is going to lead into resistance that will lead into more oppression. And I actually don't see how, in the short term, we can escape this new cycle: a divergence of power between high concentration and high democratization that leads to a maximum amount of surveillance and oppression.
Salim, what are your concerns about the near-term stressors and downsides?
I think those are absolutely the near-term stressors.
The good news is the democratization is happening so fast that it allows us to defend against
those things.
You know, there are already companies that can defend a sports stadium against a drone
attack, etc. I note that the Ukraine-Russia war is really being prosecuted by half a million drones,
not really people. And so we've already automated warfare to that level. The good news is mostly
drones are fighting drones rather than people fighting people. Bad news is there's still a war
and there's a lot of horrible suffering that's unnecessary, to Mo's earlier point. I think it's exactly right, the near
term. Just take the kidnappings that may come up or the extortion that may come up when somebody
says, here's a voice of your daughter that's been kidnapped, send us a Bitcoin, otherwise you don't
get her back. And you don't know if it's real or not. And there's that kind of short-term, because that arms race,
the gap between the... There's always those incredibly creative elements. Mark Goodman
writes about this in Future Crimes, where the bad guys don't suffer from ethical or regulatory or moral constraints, so they're much more creative, right?
I'll tell the quick story here. He tells of a bank robbery in Omaha, Nebraska, or someplace, where the gang swarmed the bank, all dressed in construction outfits, and robbed it. The bank manager calls the police and says, hey, they were all dressed in construction outfits. The police go, well, that should be pretty easy to spot. Except what they'd done is put an ad on Craigslist saying, if you want really good-paying construction work, show up at this address, dressed as a construction worker. And there's a crowd of eight hundred construction workers outside, and they melted into the crowd; the police couldn't pick them out. The innovation and ingenuity in leveraging new technology for bad purposes is near infinite, right?
We have to kind of combat that as we can.
The good news is in today's world,
that negativity is easy to spot.
And it's easier and easier to spot.
But I totally agree with that near-term dystopian issue. There's no easy way of getting around it other than gritting our teeth and moving as fast as we can to create the beloved benevolent use cases.
So let's talk about jobs one second,
because that's one of the stressors it's gonna hit.
We're gonna see this
and we're beginning to see this in different areas.
So I had Marc Benioff on this Moonshots podcast; we were talking about Agentforce 2 and his conversation with his head of engineering saying, you know, we've increased productivity 30 percent, we don't need to hire any more engineers. The flip side, of course,
is a whole swath of different, you know, HR individuals, customer service individuals,
sales individuals, software programmers, right? We just saw Sam Altman saying he expects that AI will be the number one programmer, period, by the end of this year, and therefore programming effectively goes away as a career, or as one of the highest-paying jobs. So we're going to start to see jobs beginning to erode.
Timeline for that, what do you think?
And how do people deal with that?
I wanna start to give people the tools
of how to deal with the stressors that are coming.
Mo.
Why did you ask me first? I was hoping you'd ask Salim first.
I can go first if you want.
Go first.
I'll go first. There's a very black t-shirt mindset on this, so let's start with the white. Throughout human history, every time we've added a technological injection, we've seen employment increase, right? We point out often, Peter, that the countries with the highest robotics penetration are Sweden, South Korea, Germany, and the countries with the lowest unemployment are Sweden, South Korea, Germany; there's just so much more work to be done. I tried to get a little application built, and I told my software guys, build this application, you should be able to do it in half a day with all the tools. They're like, no, the integration of all the different systems, etc., still requires quite a lot of human interaction, to the extent that it's incremental but not massive.
And what will happen is we'll just uplift everybody with these AI tools, and we'll just turn out more code, because we just need a hundred times more code written.
If you talk to any trucking company and say, what happens when you automate all the truck drivers, they'll say, I'd go out and hire a hundred more if I could today, I just can't find them; we don't have qualified truck drivers, nobody wants to be doing that job anymore. So throughout history we
have uplifted and made people move up the potential ladder, and I don't see that slowing down here, except there will be a short-term blip where we try to figure out what we do. That may be solved by UBI, but we don't know how we'll get
to that. You know, the problem we have with concepts like UBI is that it's such a big shift from a union-labor-taxation-job-employment construct that we have no confidence in the public sector getting there. And so that's the challenge: how do we navigate our institutions and public sector. For me, the biggest problem in humanity is what E.O. Wilson said: the problem with
us is our emotions are paleolithic, our institutions are medieval, and our technology is godlike.
And also Douglas Adams, from The Hitchhiker's Guide to the Galaxy, who said, in a funny way: anything in the world when you're born, we call that normal; anything that's invented when you're young, that's called a career; anything invented after you're 35 years old is just bad for the world. It's just bad. Like any banker talking about Bitcoin, you'll see them get hives, etc.
I think we just have to overcome
that hurdle and figure this out.
For me, the biggest dark spot is none of
our institutions
and mechanisms by which we govern ourselves can manage this transition through what we're about
to see.
Okay, Mo. So you heard the positive side of jobs: we're going to always be creating more jobs, we're going to see increasing productivity, literally, we're dividing by zero, productivity goes through the roof, people are able to be more creative, and we're creating things and doing things that we never imagined possible or ever expected to need.
How do you think about jobs?
Can I leave it at Salim's point?
And I'm in a very dark place on the topic.
Uh, I, um...
No, we want to hear it because you have,
you've got insights and wisdom
that even AI doesn't have yet.
That's a joke. I disagree. I think what is happening... So first of all, the part I agree with: it's not perfect yet, you can't really develop a sophisticated full app from A to Z using AI, yes, I agree with that. But most of the bits of code being written so far, I think there was a report that something like 80% of the code written last year was by a machine. The thing is, yes, I agree it might take time until it's fully handed over,
but I also agree with your last comment which is we're nowhere near ready for this, okay, and in
reality we are also not just dealing with you know numbers on spreadsheets here, we're dealing with
humans that are sometimes not easy to re-skill, that are sometimes very emotional about losing
their jobs, that are sometimes not ready, you know, I mean think of how many people in the US today work two or three jobs just to make ends
meet. Now take those jobs away, okay, and think about how those families will suffer. And I think the topic we need to discuss deeply is the amount of suffering there will be in the transition, even if we end up in a good place. Now, my interesting challenge is I don't think we will end up in a good place, right? And I really don't think we should even try to end up in a good place. Why? Because remember, that whole jobs thing is an
invention of the capitalist industrialist revolution, right, and that maybe finally we should accept that we're not made to
work, okay? And accordingly, if we accept this, the solution would reside far away from where jobs are; it would reside in the social systems that enable us to live fully without having to work, you know, 60 hours a week or 80 hours a week like,
you know, most of us did in California. Now, the trick is, there are systems around the world that allow that. You know, the French work, I don't know, probably 30 hours a week, or 20 hours a week, of which around 28 they're complaining, right? And the French economy is still running, somehow it is. And I think there is something to be learned from the idea of an aversion to work, which you and I and everyone that's worked in California seem to think is an alien thought. But there are so many societies around the world where we work because we have to, okay? Not because we love to.
Not because it's their dream.
It's not their dream.
This is the way we work to live
rather than we live to work.
Correct.
You know, I recorded a podcast with Ray Dalio
and we're talking about how the mission
of the Central Bank today is, you know,
lower interest rates so that you can spark employment
and create this balance. But soon, you know, we're living into a future where, when you have access to cheap capital, instead of hiring people you're hiring AI agents and humanoid robots, and it spirals to a point where you have social unrest, you have the haves and have-nots.
Correct. And there are multiple examples around the world where there are strategies to deal with that.
You just finished writing a book called Unstressable. We'll pick up the rest of this
in the following podcast because this is not the first or last conversation
here. What's your advice to individuals who are going to be feeling the stress, the stress of
government policies changing, their jobs being challenged, concerns over this US-China,
all of this? How do people deal with stress in a positive
fashion?
So, you know, my happiness and wellbeing work is weird in the sense that I use a lot of algorithms and a lot of engineering methods and processes to help explain those soft topics. And when I attempted to work on stress, the first thing I attempted to explain is what stress is, right? And if you look at simple physics, not to complicate this for anyone,
you know, stress is not just a factor of the force applied to an object, right? It's the force divided by the cross-sectional area of the object, right? Which basically means that it's not just what you're subjected to, it's the resources that you have to deal with it, okay? And in humans it's exactly the same: it's the sum of all of the challenges that you're facing divided by the cross-section of your skills, your abilities, your contacts and so on. And I think, you know, the older generation will know, you don't need an equation to understand that things you struggled with in your 20s, you solved in your 30s, you dealt with with ease in your 40s, and in your 50s you laugh about them, right? Not because they became easier, but because you increased your cross-section, if you want.
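[Editor's note: for readers who want the physics Mo is borrowing from written out, here is a minimal sketch; the human version is his analogy, not a literal equation.]

```latex
% Mechanical stress: applied force divided by cross-sectional area
\sigma = \frac{F}{A}

% Mo's analogy for perceived human stress (illustrative only):
\text{perceived stress} \;\approx\;
  \frac{\sum \text{challenges}}
       {\text{skills, abilities, support (your ``cross-section'')}}
```

The point of the analogy: for a fixed load, growing the denominator (your resources) lowers the stress just as effectively as shrinking the numerator.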
And my ask of people, and this is not a philosophical conversation here, this is really a plea, if you ask me, is this: we are upon the perfect storm of the most challenging time humanity has faced in my lifetime, okay? Whether that's
geopolitics, that's economics, that's intelligence, artificial intelligence, technology advancement,
jobs, you name it, really, okay? And I'm not going to take away from that; it is going to be interestingly challenging. But a bit like a legendary-level video gamer, okay, what I ask people to do is to actually look deeply at: what can I do? What can I do in a world where things are moving so fast? For example, I'd say try to move faster, right? What can I do in a world where a lot of intelligence is handed over to the machines? I say learn the machines' code and how the machines are working, and go and use AI today to catch up and keep up with what's happening. You know, can we double down on our human skills, because
those are going to be needed and useful for
a very long time?
Can we take Salim's point of view and say, you know, we need to be reskilled.
So if you're a developer today, don't wait three years until you're out of a job.
Think of what else you're going to do and start to reskill yourself.
And I can give you multiple examples, but I feel we're running out of time. I know that we all want to sit back and complain and say, but I didn't elect Sam Altman, why is he doing this to my life? And I want to do that all the time too, but that's not going to help.
I think we should tell everyone by the way,
that people who create things of this magnitude
should be accountable.
But at the end of the day,
I need to focus on what's
happening today. I mean, I'll give you a very good example: as an author, as a thinker, okay, the job of an author is to adopt a certain concept, think about it deeply, and write about it. That's gone. I'm no longer the most intelligent being on the planet, able to adopt a topic and write about it better than an AI, okay? So what that means is I have completely changed. I will not publish my books on paper,
maybe in the very end, but I'm not publishing that anymore. I'm doubling down on my human connection,
so, you know, Alive is going to be published on Substack first, with the opportunity for everyone to engage with me and discuss it with me, give me comments, call me an idiot, and we improve it together, right? I'm writing the book with an AI, not asking the AI something and then putting it
as if I'm saying it, I'm literally chatting and debating with the AI in some of the books, sometimes proving her
wrong and sometimes she's proving me wrong. This is to align with the new world. The world is
changing, and the career of an author is now being redefined, so I am being redefined with it. And I ask everyone to look at their life today and say: I'm going to redefine myself, I'm going to be ahead of that wave, and by the way, in the process, I'm going to act ethically so that this wave becomes a utopia, not a dystopia.
I love that. And that's a beautiful place to close us out. There's so much more to go
into and I look forward to the next conversation. Mo, excited to see you on stage in just a few weeks. Salim, the same for you, brother.
And thank you.
And for everybody listening, we're entering an uncharted territory.
And it's a territory where what we say, how we interact with each other, how we interact
with the machines that are coming is extraordinarily
important. And I hope that this conversation has given you a little bit of context to prepare you,
but in particular to give you agency to help steer where this future is going. There's no on-off
switch. There's no velocity knob. The best we can do is steer the future that we want.
Love you guys.
A pleasure as always.
Thank you very much.
Thank you.
It's been a joy.
Great conversation.