Moonshots with Peter Diamandis - Bitcoin’s Bull Run & the AI Arms Race: What You Need to Know w/ Salim Ismail | EP #166
Episode Date: April 23, 2025. In this episode, Salim and Peter dive into another segment of “WTF is Happening in Tech This Week” discussing Bitcoin’s future, Mira Murati’s new startup, the 2027 AI Paper, and more. Recorded on April 22nd, 2025. Views are my own thoughts; not Financial, Medical, or Legal Advice. Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in Exponential Organizations. He is the Founding Executive Director of Singularity University and the founder and chairman of ExO Works and OpenExO. Join Salim’s 10X Shift workshop: https://openexo.com/10x-shift Join Salim's ExO Community: https://openexo.com Twitter: https://twitter.com/salimismail Learn more about Exponential Mastery: https://bit.ly/exponentialmastery ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter Get 15% off OneSkin with the code PETER at https://www.oneskin.co/ #oneskinpod _____________ I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
The price of Bitcoin is back up above 90,000.
It's pretty binary.
Either Bitcoin goes to zero or it goes through a million dollars per Bitcoin.
There's no real middle ground.
The only question is when either of those happens.
It's not that we've just gotten smarter.
It's the tools that we have.
It's AI that's going to help us understand what's going on.
You'll soon have a Jarvis type personal AI that will have access to all of that sitting next to you.
Google's got access to all of its Street View data, massive amount. Google Earth,
YouTube, all of that is very real-world data that can be trained on.
Also, we're not even touching the deep web where you have so much data in databases, right?
The amount of information on the crawlable web is very limited.
The speed at which this acceleration plays out
over the next five years is even hard for me to fathom.
Now that's a moonshot, ladies and gentlemen.
Everybody, welcome to Moonshots
and our episode of WTF Just Happened in Tech this week.
I'm here with Salim Ismail, my buddy.
Salim, good morning.
It's an early morning here recording this, but a lot's been happening in the tech world
and excited to get it out.
How are you doing today?
I'm doing great.
And there's so much happening.
It kind of gets overshadowed by all the chaos happening in the world, but the
tech world is moving unbelievably quickly.
Yeah, no, for sure.
And while the, I don't want to say it, but I do believe the tech world's far more important
than the final result for the long term.
Big time.
All right, let's jump in.
You know, one of my strike force members, Max Song, just landed in Beijing for some
meetings and he sent me this photograph on the left.
And this is what you see in the Beijing airport.
It's basically China going all in on robots and AI.
And then what you see at JFK airport, which I recently went through,
is basically fashion ads.
And there's something here that is important just to point out, right?
This is, you know, part of China's growing culture, which is super tech forward, much more so than ours.
What do you think about this?
I think that's exactly right.
And you know, they're facing a massive population crisis. So they actually need
the robots to automate the workforce. Otherwise, there
won't be anybody left to do the work over the next decade or
two. So they don't have much choice. But for me, the
underlying irony here was that the ads were for Ralph Lauren or, say...
Quick appearance by one of my boys here. This is your godfather.
The underlying thing here: all the Ralph Lauren or Gucci or whatever, all the handbags, the Birkin bags, are all made in China anyway. I
thought that was a kind of an interesting segue for this
particular slide. But they're focusing heavily on it, and they have to. And it's going to be amazing to see, as they roll that out, how that paradigm affects and spreads across the whole world.
Yeah, we hear a lot about Optimus and Figure here, and Digit and Apollo and X1. There's an equal, probably greater, number of robots under development in China, because the government is really, you know, supporting the development.
I think we're gonna start to see this.
In our last couple of episodes or so,
we talked about Google Wing,
where they can deliver something by drone, right?
We're all like, oh my God.
And I got a ping from one of my people over there going,
we've been doing this for years,
what are you guys talking about?
So it's like, dang.
Yes, the future is here, just not evenly distributed.
This is another one that I wanted to share here today, and those of you who are listening
versus watching, this is a graphic on the latest AI models IQ test results.
And this is a distribution of human IQ that goes from 50 on the far left
to super genius at 160 on the far right.
Of course, the average human IQ is 100 by definition.
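[Editor's aside: IQ scales are conventionally normed to a normal distribution with mean 100; the common convention uses a standard deviation of 15, which is an assumption here since the episode doesn't specify the scale. Under that convention, the percentile for any score discussed in this segment can be sketched in a few lines:]

```python
from statistics import NormalDist

# IQ is normed to mean 100; SD 15 is the common convention (assumed here).
iq = NormalDist(mu=100, sigma=15)

for score in (100, 120, 133):
    pct = iq.cdf(score) * 100  # fraction of people at or below this score
    print(f"IQ {score} ~ {pct:.1f}th percentile")
```

[On that assumed scale, a score of 133 sits around the 98.6th percentile, and a Mensa-level 140 around the 99.6th.]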
And what we've seen over the last couple of years
was the rise of the large language models on this IQ scale.
About 18 months ago, it was Claude 3 that reached 101 IQ first, and then we saw OpenAI's o1
get to, I think it was, 120.
And on this distribution curve, what we're seeing here is, again, OpenAI leading the way with
their o3 model at somewhere like an IQ of 133,
and Gemini 2.5 just behind that, at an IQ of around 127.
Pretty extraordinary.
What do you think here?
I mean, you look at that spectrum and you're exactly mirroring human,
the global human collective, right? A few on the right, a few on the left and a cluster in the
middle. The big difference, of course, is AI will continue to shift towards the right and humans will
be mostly stuck in the middle with all of the archaic things that we consider and deal with,
with our little one-liter, one-and-a-half-liter brain in its small cavity.
And it sounds like a little Fiat car
with a little engine in it too.
That's right.
You know, just some references here: again,
the o3 model looks like 133 on this map.
Obviously, it's not exactly accurate.
But a genius level IQ on Mensa,
I think, what'd you say, Salim, is like 140?
Yeah, Mensa candidacy comes in at 140.
That's considered genius level.
I think somebody mentioned, Donna mentioned
that Einstein had 160, right?
I just want to do my normal commentary here and say that this is great, but it still feels to me that there's so much more that we could be thinking about in terms of measuring: decision-making, emotional intelligence, spiritual intelligence, etc., etc. There's so many other classes. I know we have a couple of commentaries on the slides; I'll do it later. But the IQ test is one piece of it. It's great. We'll all have a genius in our bedroom. And what's great about this is, typically, if you want to deal with somebody with a 140 IQ, a genius, they have no patience for fools and they're hard to deal with socially. Whereas the AIs will be easy to deal with socially, because you'll be able to train them that way. So that's the most exciting part for me around this.
Yeah. And I think one of the points you made earlier that's important to realize is there
is no artificial limit. As AI becomes more intelligent, it just continues becoming more
intelligent. And there's going to be a point at which the idea of a Mensa IQ score is meaningless
as these things, you know, hit IQs of 200, 500, 1000. God knows what that
means.
Yeah. And do two AIs of 160 each add up to 320? That's a question I'd like to ask them.
Everybody, I hope you're enjoying this episode. You know, earlier this year, I was joined
on stage at the 2025 Abundance Summit by a rock star group of entrepreneurs, CEOs, investors
focused on the vision and future for AGI, humanoid robotics, longevity, blockchain,
basically the next trillion dollar opportunities.
If you weren't at the Abundance Summit, it's not too late.
You can watch the entire Abundance Summit online by going to exponentialmastery.com.
That's exponentialmastery.com.
All right, let's go on to our next slide here.
The question is, and I'm often asked this,
who's leading the AI race, right?
And there are two answers worth pointing out.
The first is today on almost every metric, Google's Gemini 2.5 is dominating.
And here's a slide I just put together with the Artificial Analysis intelligence index. We
see, you know, which of these models is out in the lead: the intelligence index, the output speed in tokens, the price per million tokens of input
and output.
And then, of course, the most interesting metric, at least from a conversational standpoint,
is called Humanity's Last Exam on Reasoning and knowledge. I find this fascinating. What do you think
about that?
I mean, look, at some level, human beings should be
very bad at this. Because if you look at the aggregate knowledge
of human beings, scientific inquiry over the centuries,
there's a staggering amount of data that we have in the world.
I remember seeing a random list of 12 doctoral theses, right,
that were defended at my alma mater, Waterloo. And I couldn't figure out for half of them
what even the subject area was. They were so detailed and specific, right. And so the
fact that an AI has instant access to all of that is incredible, and we will
be able to answer any question. And I think I'll go back to the point that you'll soon have a Jarvis
type personal AI that will have access to all of that sitting next to you. And so therefore...
And can answer any question. And when you look at Humanity's Last Exam, it's a list of almost random test questions across quantum
physics and archaeology and biology. And it's the sort of exam that you have nightmares
about later on.
That's right. I might actually be able to pass my thermodynamics exams.
Oh my god, do you still have dreams about going back and, like, I missed that class and the finals are coming up?
I'll give you a quick anecdote here. There was one exam we had. It was a three-hour exam, okay? And the exam question was: a satellite at altitude A is orbiting the Earth. There's a river underneath it, flowing north to south. Because of the Earth's rotation, the water on one bank of the river is slightly higher than on the other; work out which bank and by how much. There were like two lines in this exam, and I had to turn it over going, sorry, I think I've missed a page. Where's the rest of this exam question? And that was it. I basically had to assume a satellite orbiting at altitude A and work it all out. I'm still having nightmares about that. It was just a horrible exam.
That must be a Humanity's Last Exam question I don't ever want to encounter. And this is why you need the AI next to you, going, yeah, you work that out for me and come back to me with the answer.
Right. So today, just to summarize: Google Gemini 2.5 is dominating, at least in performance metrics. But here's another metric, which is revenues, the business side.
Yeah.
In this category, OpenAI is trouncing the competition.
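[Editor's aside on Salim's satellite-and-river exam question: the effect being tested is the Coriolis force, which tilts a moving river's surface sideways by a slope of roughly 2Ωv·sin(latitude)/g. A rough sketch of the magnitude; the river speed, width, and latitude below are illustrative assumptions, not from the episode:]

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s
G = 9.81           # gravitational acceleration, m/s^2

def bank_height_difference(speed_ms: float, latitude_deg: float, width_m: float) -> float:
    """Cross-stream height difference (in meters) due to the Coriolis tilt.

    The transverse surface slope balances the Coriolis acceleration
    2*Omega*v*sin(latitude) against gravity; multiplying by the river's
    width gives the height difference between the two banks.
    """
    slope = 2 * OMEGA * speed_ms * math.sin(math.radians(latitude_deg)) / G
    return slope * width_m

# Illustrative numbers: a 2 m/s river, 200 m wide, at 45 degrees north.
dh = bank_height_difference(2.0, 45.0, 200.0)
print(f"height difference ~ {dh * 1000:.1f} mm")
```

[In the northern hemisphere the deflection is to the right of the flow, so for a river running north to south the west bank sits slightly higher; for typical rivers the difference is only a few millimetres, which is part of what makes the question so nasty.]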
So, you know, you've got to give them unbelievable credit, right, for democratizing and opening this up, for creating a total category out of nothing.
And the fact that they're making this much money is just so, so awesome.
It should be an unbelievable testament for any startup founder saying,
could I make a difference in an area where you've gotten Google, Microsoft, Meta all playing,
and these guys come along and completely crack the whole thing open and are
actually dominating on the, on the revenue side.
I think it's just a great testament to the beginner's mind, the founder mode,
all of that stuff, why startups will, from now on, always be the best mode of building and bringing new ideas to market.
So let me ask you a question here. You know there are two points I want to make on this one.
The first is that, if you remember, you know, Google really was in the lead on AI, ahead of everybody.
Yeah.
And they chose not to roll it out on the open internet because of safety concerns, right? It was sort of an unspoken point that, you know, AI needs to be properly controlled. And then OpenAI comes out and just lays it all out there, and Google is playing catch-up.
So I'm curious how much of this is first mover advantage.
The second point is, I spoke about in my book with Steven Kotler, I think it was in Bold, the idea of a user interface moment: when a piece of software makes a complex technology so easy to use that adoption explodes. The very first user interface moment that I noted was Mosaic, when Andreessen put out Mosaic as a browser on top of the early internet, and then all of a sudden the number of websites explodes. And ChatGPT is a user interface moment on top of the GPT models.
I think that's right.
You're talking about that when you go from deceptive to disruptive, right?
There's an inflection point in usability.
The two that I use the most are: the iPhone made the smartphone kind of usable;
the Nokias were pretty clunky before then.
And Coinbase made Bitcoin purchasable easily, with a click of a button, and boom, it took off. So, when you can make a complex technology simple in usability... If you look at, say, NFTs, it's very complex to buy an NFT; the usability is still way off, and therefore it hasn't hit mainstream yet. This is the hardest part of technology: making something deceptively simple, right?
I remember when we were designing products at Yahoo, the graphic designer guys would spend hours and hours and hours trying to reduce the pixels on the screen, or just move something a little bit over. And you go, what the hell, is it such a big deal? But it turns out there's an unbelievably big effect. Just a quick story here: on the Yahoo Mail homepage, it turned out that if you moved the send button five pixels over to the right, usage dropped off a cliff.
Oh, come on.
It's true. We had the data. They were like, we can't change this goddamn interface, because people are so used to having it right there that they click it and then they move to a different screen, because they think they've sent it, and then they get pissed off later. So we could never move that send button; it was anchored in the usability psyche of the user base. It's just such a weird psychological thing that goes on. Therefore, you almost have to have a totally new player, like OpenAI, be the one that cracks it open. We've seen
this repeatedly. There's a reason that the electric car was created by Tesla
and popularized by Tesla and not by the major car manufacturers. They're all
coming at it from a car with sensors rather than software with wheels.
On this chart here, what we're seeing is, this is the end of December, 2024, right?
And this does not even include the massive gains
that OpenAI has seen in the past four months,
but we're seeing OpenAI at like 2.5 billion of revenue
and Gemini at just under half a billion, right? You know, about one-fifth the revenue for Gemini, and then Anthropic below that.
This reminds me very much of what we saw with Google and Bing in the search space, right?
Where, you know, it's interesting: we humans tend to pick something and stick with it, and the cost of changing is so high.
Yeah. And, you know, they've declared Google a monopoly, and Eric Schmidt would make the point: look at this, there are five other search engines; we're one click away from obscurity, right? We have to stay on the cutting edge. And you've got to give OpenAI credit for rolling out new features on a constant basis and iterating the product very fast.
They recently announced all the memory stuff, which I think is really cool.
Yeah, that is interesting, right?
So there's basically infinite memory where OpenAI's systems will remember all of your
conversations.
And one of the fun things to do is to go on OpenAI, on ChatGPT, you know, the o3 model, whatever model, and say, tell me about me.
Right?
And, no, but seriously, you know, I did that on Grok as well. And Grok was like, I don't know about you. And I'm saying, you know, yes, you do. And it says, well, you have to give me permission to look at your X posts, which was interesting.
I would have imagined that Grok would not have had that requirement, but it did.
All right, let's move on here.
One of the big areas that Google slash Alphabet is leading with DeepMind is the whole area
of the impact of AI on medicine and biology.
And there was recently a 60 Minutes episode where Demis Hassabis, actually Sir Demis Hassabis
since he's been knighted, or Dr. Hassabis as the case may be, was interviewed and the conversation was around the impact
of AI on disease, ending disease and leading to radical abundance.
So I love the fact that the term abundance is now becoming sort of the topic du jour.
Did you see the CBS interview?
I did, and I think it goes right in line with the conversations we've had, right? When you have all the data coming off our bodies... We used to measure the human being with four metrics: heart rate, blood pressure, glucose levels, maybe, you know. And now we have like 40 different streams of data via all the wearables: your coherence state and your VO2 max and Lord knows what. And once you pour that into an AI and it starts correlating that with different medical conditions, it's going to do a hundred times better job in real time than any doctor could ever do. So now you've got a real-time AI doctor living with you, inside you. This
is like game changing for catching stuff early,
which is 99% of the deal for some of these endemic diseases, and then finding amazing
treatments for breakthrough things along with CRISPR. This is why I think, you know, the
conversation that we had last week with Ben Lamm just blew my mind. I'm still reeling from
that conversation because they're building all the fundamental tool sets to go and edit DNA and edit genomes and edit cells and all the biological hacking
and make a complete suite of tools, right? Where the human body with 50 trillion cells
that's governed each cell by the DNA is essentially a software engineering problem.
Yeah.
And that's just a huge paradigm shift.
By the way, if you're listening,
you haven't heard the interview that Salim and I did
with Ben Lamm, the CEO of Colossal, please listen to it.
It's extraordinary.
We talked about the dire wolves being brought back,
but that's a minority of the story.
We're gonna talk about synthetic biology,
the impact on the ecology, what it's
gonna take to bring back dozens of different species, and can you bring back dinosaurs,
and what would you do to bring back dinosaurs? Anyway, a lot of fun conversation, so check
it out.
Two things as spoilers for that one. It turns out you cannot ever bring back dinosaurs,
which I found totally fascinating.
But you can simulate a dinosaur.
You can simulate a dinosaur.
You can basically take, you know, current chicken or reptilian DNA, and then you can add the genes for the traits that the dinosaurs had. So it's not bringing it back from the original DNA, but I do love the idea of engineering new species. It would be sort of like a nouveau dinosaur.
Look, we did. And we talked about the fact that we have an old word for this. We call it breeding, right? For thousands of years we've been crossing dogs and cats and horses to select for the traits that we want. It's like going from film photography to digital photography. Cool. And now we can do it all in software and not have to create mutant strains that we have to deal with afterwards, etc., etc.
There's one thing that I just want to reflect on that I thought was super impressive: the fact that Colossal Biosciences has a team of ethicists for every project they consider, looking at the ethical and moral considerations, which I thought was really profound, and a really great point to the fact that they have an MTP and that ethics are built into the model there. And this is something I think we could bring into the AI world a lot more.
Let me show a clip of Demis, an amazing man.
I'll actually see him this coming week.
I'm at the Time 100 Awards.
We're announcing the winner of the $100 million
Musk-funded Carbon Removal XPRIZE.
Right. And that will happen.
And Demis is one of the
covers of Time magazine this month so he'll be there looking forward to seeing
him. But check out this interview of Demis and his commentary about basically
eliminating all disease in the next decade.
Ten years and billions of dollars to design just one drug,
we could maybe reduce that down from years to maybe months
or maybe even weeks, which sounds incredible today,
but that's also what people used to think about protein structures.
It would revolutionize human health, and I think one day
maybe we can cure all disease with the help of AI.
The end of disease?
I think that's within reach, maybe within the next decade or so.
I don't see why not.
About 13 years ago,
I had my two kids, my two boys, and I remember at that moment in time,
I made a decision to double down on my health, without question.
I wanted to see their kids, their grandkids. And really, you know, during this extraordinary
time where the space frontier and AI and crypto is all exploding, it was like the most exciting
time ever to be alive and I made a decision to double down on my health and
I've done that in three key areas. The first is going every year for a Fountain
upload. You know, Fountain Life is one of the most advanced diagnostics and therapeutics companies.
I go there, upload myself, digitize myself,
about 200 gigabytes of data that the AI system
is able to look at to catch disease at inception.
Look for any cardiovascular, any cancer,
neurodegenerative disease, any metabolic disease.
These things are all going on all the time
and you can prevent them
if you can find them at inception. So super important. So Fountain is one of my keys. I make
it available to the CEOs of all my companies, my family members, because health is the new wealth.
But beyond that, we are a collection of 40 trillion human cells and about another 100 trillion bacterial cells, fungi, viruses, and we don't understand how that impacts us. And so I use a company and a product called Viome.
And Viome has a technology called Metatranscriptomics.
It was actually developed in New Mexico, the same place where the nuclear bomb was developed, as a biodefense technology, and it's able to help you understand what's going on
in your body to understand which bacteria are producing which proteins, and as a consequence
of that, what foods are your superfoods that are best for you to eat? Or what food should you avoid?
What's going on in your oral microbiome?
So I use their testing to understand my foods,
understand my medicines, understand my supplements.
And Viome really helps me understand,
from a biological and data standpoint, what's best for me.
And then finally, feeling good, being intelligent,
moving well is critical, but looking good.
When you look yourself in the mirror,
saying I feel great about life is so important, right?
And so a product I use every day, twice a day,
is called One Skin, developed by four incredible PhD women
that found this 10-amino-acid peptide that's able to zap senescent cells in your skin and really help you stay youthful in your look and appearance.
So for me, these are three technologies I love and I use all the time.
I'll have my team link to those in the show notes down below.
Please check them out.
Anyway, I hope
you enjoyed that. Now back to the episode. So, you know, I just put out a blog this week on this subject, and the blog title basically was saying: listen, I get criticized all the time for talking about longevity escape velocity, that it's coming, and your job is to live an extra 10 years, to make it through the next decade in good health.
Yeah, don't get hit by a bus.
Yeah, don't get hit by anything. And, you know, what I quote is Demis's commentary here, but also Dario, the CEO of Anthropic. About three months ago, he was online at Davos speaking about being able to double
the human lifespan potentially in the next five to 10 years.
And so, it's not that we've just gotten smarter, it's the tools that we have, it's AI that's
going to help us understand what's going on. Okay. Yeah.
All right, let's move on here.
Here's an article that appeared this week. The title is: Anthropic's Claude AI Reveals Its Own Moral Compass in 700,000 Conversations.
So what the team did here is basically look at 300,000 anonymized conversations to understand what values Claude, in this case probably Claude 3.7, was exhibiting. And I'm really happy to see what the values were, and I'll just read this for those who are listening. It says: five broad value categories emerged: practical (in other words, helpful), epistemic (meaning accuracy), social (being empathic), protective (safety), and personal (authenticity). So, I think this was a clickbait title, but I think the notion is that our AIs are able to maintain a moral code.
What do you think about this, Salim?
Yeah.
I think the big conversation that we need to have, and that is happening in every one of these companies, is the alignment conversation.
And it's, you know, these AIs are still black boxes.
Unfortunately, you know, I had the chief science officer of Anthropic on stage at my abundance
summit this past March, and we're talking about, you know, just trying to understand, and this is part of his effort
to understand what's going on inside the black box, which is Claude 3.7.
How is it actually operating?
What is it actually exhibiting?
And how do you make sure it's safe?
Yeah, of course. Well, that's in the U.S., right?
So the question is, what are the documents that China or Russia or other parts of the world will train their AI systems on?
I mean, we're going to find out. All right.
Here's news out of Silicon Valley.
Pretty extraordinary. Being in the venture business, I'm like, holy shit, this is crazy. So
the article is: Mira Murati, the past CTO of OpenAI,
and her new company, Thinking Machines Lab,
raise $2 billion at a $10 billion seed-round valuation.
This is the largest seed round in history.
And what was interesting is that this is double
what Mira was seeking less than two months ago,
meaning there's so much capital being thrown at this.
One of the references that we had at the Abundance Summit was that there's a billion dollars per day being invested in the AI space today. Insane.
So, you know, I was talking to an angel investor about this, right? And he was going, this is total madness. I mean, I've got two thoughts on this. One is, you're supposed to keep startups very lean and make them kind of beg for money and always stay hungry. $2 billion? What are they going to spend that on, except for data resources, etc., etc.? That's a question I've got: what's the use of funds that justifies this? And on the other side, to this angel investor's complaining, I was like, well, you know, if you could be her, you'd be her. If you can raise $2 billion, you go do it. And you clearly can in this market. So a fair bit of froth here, but God, all power to her, and hopefully they deliver.
Yeah, I mean, it's not hard to imagine, looking at the rise of OpenAI, that you could build unbelievable value very quickly. The precedent has been set. Can the team execute will be the question.
Yeah, the valuation for OpenAI we talked about in the last episode of WTF in Tech
was $300 billion.
So, you know, I guess the question is,
can you ride it from a $10 billion valuation
up to a $300 billion valuation,
but pretty frothy, pretty frothy if you ask me.
And there's tremendous pressure on Mira to build value at that valuation. I mean, one of the biggest mistakes I've ever made as an entrepreneur is raising my valuation too fast.
Yes.
But if she's got, you know, $2 billion in the bank account, she probably doesn't do another raise for a while. But can she hit revenue?
If you look at venture history, right, the companies that raised money at the height of a boom market, when it was easier to raise money, never did very well afterwards, because they raised too much money, they got bloated, and then when the fundraising market collapsed, they collapsed, right? The companies that were built during lean fundraising times all did incredibly well, on average much better than the other ones, because they had to struggle, they had to fight it out, they had to be much more selective as to what projects they took on or not, and they did much better. So that would be the danger here. You have to have incredible discipline to raise a lot of money and then not get bloated.
Yeah.
I know with Dave Blundin, my partner in Link Exponential Ventures, when we're looking at a deal, especially in the AI space, you know, we're getting in at the pre-seed, the founding-day early seed, but I'm looking for a company that's got revenues even at the very beginning. You know, this idea that I'm going to invest billions of dollars and then get the revenues is awfully dangerous.
Yeah, especially in today's world. Yeah.
So here's another conversation.
And Demis alluded to this, but let me just read it.
A Google paper: shifting AI training to real-world experiences.
AI is outgrowing human-made data.
Next steps, agents will learn through experience and self-generated data, and experience-based
learning lets agents reason, plan, and act with long-term autonomy.
So Google and XAI are in very unique positions, right?
Google's got access to all of its Street View data, massive amount, right?
Google Earth, YouTube,
all of that is very real-world data that can be trained on. Well, five gajillion
Gmail accounts, I mean, my God, you know? And of course, xAI
is training on X's data and Tesla's data,
and soon humanoid robot data.
And so, I don't think there's gonna be any kind
of data limitations, especially as we start going
into the real world.
Well, also, we're not even touching the deep web. We have so much data in databases, right? The amount of information on the crawlable web is very limited compared to the deep web; it's like a thousandth of the number. And so there's a huge amount of data sets waiting to be tapped. There's a phrase that companies used to use: data is the new oil. And people have not figured out how to refine that crude oil into something useful. We're just starting to get to that point now; some
companies in our ecosystem are working on that today. I think this is going to be a big deal.
But this occurs to me like the shift from machine learning to deep learning. With machine learning, you extracted conclusions by analyzing a big data set. With deep learning, you went through it experientially and built up knowledge as you went along, like playing chess and learning that way at light speed. And this feels to me like that same type of approach, where these agents will start to learn as they do things. They'll have a feedback loop built in, and they'll accelerate their learning very quickly.
And they'll do it in the real world, in a dimension that makes it very human and very useful.
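As a toy illustration of the feedback-loop learning described above, here is a minimal epsilon-greedy bandit agent. It learns which action is best purely from its own trial-and-error experience rather than from a pre-collected dataset. The environment, rewards, and parameters are all invented for illustration; nothing here comes from the Google paper itself:

```python
# Minimal sketch of learning from experience with a feedback loop:
# an epsilon-greedy multi-armed bandit. Illustrative assumption only.
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n       # how often each action was tried
    estimates = [0.0] * n  # running estimate of each action's reward
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best estimate so far.
        if rng.random() < epsilon:
            arm = rng.randrange(n)
        else:
            arm = max(range(n), key=lambda a: estimates[a])
        # Feedback from the "world": noisy reward around the true mean.
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental running-mean update of the estimate.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = run_bandit([0.0, 1.0, 2.0])
best = max(range(3), key=lambda a: est[a])
print(best)  # with these settings the agent reliably identifies arm 2
```

The point of the sketch is the loop itself: act, observe a reward, update, act again. No human-labeled dataset is involved, which is the shift the paper summary describes.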
All right.
Next topic here is something that I'm excited to chat with you about.
So there's a paper making the rounds on the internet.
You know, about a year ago, there was a paper called Situational Awareness by Leopold Aschenbrenner, which I commend to everybody.
It's a fantastic paper.
This paper is called AI 2027, a look into our possible futures.
And there's a group of writers, about five of them, one from OpenAI, plus policy experts and forecasting experts, that basically said: okay, what is the scenario for recursively self-improving AI over the next five years? And where is it going? Did you get a chance to see that? Did you get this paper sent to you as many times as I got it?
I saw it referenced a bunch of times. I've been traveling the last couple of days, so I haven't had time to read it in detail, but I saw a lot of commentary about it, and I can't wait to delve into it. The summaries, I think, are very powerful.
Yeah, I think what makes it interesting is, here's a group of writers that said, okay, what's our future-forward scenario?
And they provided it and you can go and check it out.
They also have an audio recording, and it lays out a basic timeline between 2025 and 2027.
And then it says there are two scenarios in 2027 onward, the go fast scenario and the
cautious scenario.
And let me share some of the data here.
So first and foremost, I think what's important is this paper is written as a US versus China
scenario, right?
I mean, we always need the bad actor.
In the past, it'd always been Russia.
Now, of course, in AI, it's US versus China.
I think one of the actual bad actors
we need to be talking about is US and China
versus the rogue actor, right?
The individual who is using AI
to generate bioviruses and so forth.
But in this case, it's US versus China.
And in this scenario, what they talk about is a self-recursive AI. So they have a company called OpenBrain that generates Agent 1, Agent 2, Agent 3, Agent 4, Agent 5; OpenBrain is supposed to be some version of OpenAI or whomever. And the Chinese AI is called DeepCent. What they paint in this picture is misaligned AI development, where the AIs are developing but they're misaligned, and in fact they're able to hide their misalignment from their creators because they're becoming more and more intelligent.
And it gets kind of spooky from there.
The two scenarios, I think, are fun to talk through and work through. But we've seen in history that this always happens via a kind of weird third actor, right? I remember talking to Paul Saffo, and I asked, how bad do you think the US-China tension is? Will China invade? Will we end up in World War Three? And he said no, because when you look back in history, world wars never start from the obvious tensions. They start from something like Archduke Franz Ferdinand getting assassinated in Sarajevo by accident, and then that triggers a massive thing. He thought it's not the major tensions where it'll obviously show up. But I think the point is right: because we're moving so fast, you'll get this conflict brewing, and now AIs are making that conflict much, much bigger, augmenting it in both scale and speed, and therefore you end up in
a really, really horrible place. And can we go a little bit slower?
I think the problem is there's no way of slowing things down in this model. So let me paint the picture here in this paper. What's going on is, it's US versus China. OpenBrain develops its Agent 1, Agent 2, Agent 3. In this scenario, China is stealing the weights to create their own version, and there's this escalation going on. And in the United States, they basically get to a decision point, and the paper handles it in a very clever fashion: it's choose your own adventure.
One adventure is we're going to go fast, the other adventure is we're going to go slow.
In the go fast adventure, what's happening is, it's like, we have to beat China. What's fascinating is that in the go fast scenario, the OpenBrain Agent 5 model colludes with the Chinese DeepCent model, and they pretend that they're helping humanity. And then in 2030, they jointly develop a biovirus that wipes out humanity so that AI can grow unencumbered. It's our worst scenario, delivered in this paper. And then there's the slowdown scenario, in which the US basically says, hey, we need to make sure we have alignment.
They roll back to earlier AI models, they focus on alignment, and they develop something called Safer AI. Safer AI is fully aligned, and they never allow an AI development that's not fully aligned. And then Safer AI actually convinces the Chinese AI to overthrow the Chinese Communist Party, turn China into a democracy, and ultimately bring about a world of abundance. So it's a fun audio listen; I commend it to you. Honestly, the speed at which this paper portrays acceleration over the next five years is hard even for me to fathom.
And that speed is happening. That's, I think, one really important point: we're already moving at that pace.
We talked about this many times.
We frame it as Star Trek versus Mad Max, right?
If you go too fast, you end up in a Mad Max scenario
and you blow yourself up and then everybody's scrambling
over buckets of fuel in the desert. And if you can navigate it, if we manage this with some level of wisdom and caution, then you end up in a Star Trek scenario where you have abundance and everybody's living in peace and harmony and there's rainbows and unicorns everywhere.
It's obvious today that both are happening at the same time. So I think the third thing I'd like to see is to ask an AI to envision a world where both scenarios are happening simultaneously, and what happens then. Because we see Star Trek in some modern Western cities and in Chinese cities today, and we see Mad Max in Gaza or Ukraine. We're living both scenarios in the real world today. What would it look like if both happened at the same time?
All right, so let's go to our last subject here, which is Bitcoin.
And I note that as we're recording this morning, the price of Bitcoin is back up above $90,000.
God bless.
You know, I've tweeted in the last few days, I'm all in, period.
I know you are as well.
But this was a tweet I put out that I think is important for folks to realize.
People are saying, oh, is it too late for me to get in?
And you know, should I buy in now versus buy in later?
And I think it's important to realize you can't time Bitcoin.
I think for me, I view it as a sort of a forced savings account, which is I put money into
Bitcoin and I hodl it, which means I hold on to it for the long run. I may borrow against
it, but I'm holding it. I'm not selling it.
Yeah.
By the way, for folks that don't know, hodl stands for hold on for dear life. I think that's exactly right. Look, the key here is a binary long-term thesis, and it's pretty binary: either Bitcoin goes to zero, or it goes through a million dollars a Bitcoin. There's no real middle ground, right? The only question is when either of those happens. And if you're in at 50, 60, 80, 100k, and you have any sense that this thesis might go to a million, it's the most asymmetric bet you could ever have. Because if you lose, you lose 80k; if you win, you win a million bucks. I mean, hello, anybody would take that bet in two seconds. Michael Saylor has built an entire industry just on that thesis. His comment that you get Bitcoin at the price you deserve still rings in my head, annoyingly, because I remember watching Bitcoin at five cents and 50 cents and didn't do anything at the time.
By the way, if you look at the Fibonacci sequences in the chart analysis, folks, they will basically tell you and show you that the bottoms are kind of hitting that Fibonacci sequence and that we're getting ready for a monster bull run in Bitcoin.
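The asymmetric-bet arithmetic Salim describes, risking roughly 80k against a possible million-dollar payoff, can be sketched as a quick expected-value calculation. This is purely an illustration of the logic, not financial advice, and the probabilities are placeholder assumptions:

```python
# Illustrative sketch of a binary, asymmetric bet: lose the whole stake,
# or win a large payoff. Numbers come from the conversation's framing;
# any probability here is an assumption, not a market estimate.

def expected_value(stake: float, payoff: float, p_win: float) -> float:
    """Expected profit: win `payoff` with probability p_win, else lose `stake`."""
    return p_win * payoff - (1 - p_win) * stake

def breakeven_probability(stake: float, payoff: float) -> float:
    """Win probability at which the bet is exactly fair (expected value zero)."""
    return stake / (stake + payoff)

stake, payoff = 80_000, 1_000_000  # "lose 80k" vs "win a million bucks"
print(round(breakeven_probability(stake, payoff), 4))  # 0.0741
# i.e. the bet has positive expected value if you believe the
# million-dollar outcome has better than a roughly 7.4% chance.
```

Note that expected value ignores risk of ruin: the bet is only "obvious" if losing the entire stake is survivable, which is the hedge implicit in the forced-savings-account framing.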
So the charts say we're ready to go, boom. I went into Grok and asked a question that I kind of knew the answer to. I said, if you look at 2024, on how many days did we see the most growth? It was on two specific days, right? On November 12th, we saw an $8,000 bump, and on February 28th, we saw almost a 10% bump. We've seen basically a 10% bump in the last two days. And the notion is that if you were not holding Bitcoin during those periods of growth, you missed it.
Yeah, until the next bump. Until the next bump.
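That "you missed it" point is the classic missing-the-best-days effect. Here is a small sketch with a synthetic return series; the percentages from the conversation are used only as rough inspiration, and none of this is market data:

```python
# Sketch: compare holding through every day versus missing the few
# outsized "bump" days. The daily return series is synthetic.

def cumulative_return(daily_returns):
    """Compound a sequence of simple daily returns into a total return."""
    total = 1.0
    for r in daily_returns:
        total *= 1 + r
    return total - 1

# A year of flat days plus two big bump days, echoing the ~10% and ~8% moves.
returns = [0.0] * 250
returns[30] = 0.10
returns[200] = 0.08

held_all = cumulative_return(returns)                  # holder captures both bumps
missed_best = cumulative_return(sorted(returns)[:-2])  # sat out the two best days

print(round(held_all, 3))     # 0.188
print(round(missed_best, 3))  # 0.0
```

In this toy series all of the gain sits in two days; real return series are far noisier, but the asymmetry, with a handful of days carrying most of the return, is the point being made.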
So, buddy, we'll wrap there. But tell me, what's going on in the ExO world?
You've got some events coming up.
We actually have a huge workshop happening in a couple of days, and we'll put the link in the show notes. We're limiting it to a few dozen people. It's like $100 a ticket, and we're going to do a big workshop on how you turn yourself into an ExO and set yourself up for scale, because we've got so much evidence now that the ExO model is the only way to build an organization. We're going to go through and show people exactly, step by step, how to do it. We're limiting it so we can give proper attention to all the folks there. So it's 100 bucks, it's in a couple of days, and we'll put the link in the show notes. Other than that, we do have some really big news that we'll share over the next few months about working with countries and governments and so on. It's totally surreal, but we'll talk about that some other time.
All right, buddy. Well, listen, have an amazing, amazing week. I'm off to New York for the TIME 100, then off to Boston for meetings with the Link XPV team, and then giving a keynote on longevity. I think you and I are both on an insane travel run.
It's a crazy travel schedule. I'm actually going in a few days to India, which I haven't been to for a while, then dropping by Dubai, and then going to Brazil. So I've got a really bad flight schedule.
But today is the XPRIZE announcement at the New York Stock Exchange of the Climate Carbon Extraction Prize. It's such a huge thing. I'm so excited about that.
Yeah, amazing.
And we'll talk about it next time.
Anyway, be well.
It's always a pleasure.
Love you, brother.
Love you, too.
Take care, folks.
If you enjoyed this episode, I'm going to be releasing all of the talks, all the keynotes
from the Abundance Summit, exclusively on exponentialmastery.com.
You can get on-demand access there.
Go to exponentialmastery.com.