Moonshots with Peter Diamandis - Elon vs. OpenAI: The Battle Over For-Profit AI w/ Salim Ismail | EP #138
Episode Date: December 20, 2024

In this episode, Peter and Salim discuss the Elon vs. OpenAI battle, whether AI should be for-profit, and the millions of investments being poured into AI. Recorded on Dec 19th, 2024. Views are my own thoughts; not Financial, Medical, or Legal Advice.

06:51 | The OpenAI Controversy
19:30 | The Future of AI and Control
32:15 | The Role of the US in Global AI Development

Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in Exponential Organizations. He is the Founding Executive Director of Singularity University and the founder and chairman of ExO Works and OpenExO.

Join Salim's ExO Community: https://openexo.com
Twitter: https://twitter.com/salimismail

Pre-Order my Longevity Guidebook here: https://qr.diamandis.com/bookyt

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:
Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/
AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter
Get 15% off OneSkin with the code PETER at https://www.oneskin.co/ #oneskinpod

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Blog

Connect With Peter: Twitter Instagram Youtube Moonshots
Transcript
OpenAI puts out a letter. Elon wanted an OpenAI for-profit model from the
beginning. Ultimately what we've got here is a battle for the biggest opportunity
on the planet. I used to think early on that this was gonna be a game between
governments. This isn't. This is a game for all the marbles between companies.
It's hard to even process how fast this is gonna go from now on.
The CEO of SoftBank, making a commitment
for a hundred billion dollar investment in the US in AI.
If this is a sign of where the US is gonna be going,
it really is a push for global dominance in this field.
Everybody, welcome to a special end of year episode
of Moonshots with Salim Ismail and myself, in WTF Just Happened in Tech this week.
We're going to be talking about some recent news from OpenAI showing their conversations
with Elon Musk and the debate about for-profit versus non-profit, the lawsuits going on,
but more importantly, the huge amount of capital flowing into the
AI world.
I mean, hundreds of billions of dollars.
This is a game that is going to play out aggressively in 2025.
I want you to hear the details.
We're also going to be talking about the world of AI and healthcare, the disruptions that
are coming to make us all healthier and to drop the costs, hopefully orders of magnitude.
This segment is sponsored by three incredible companies, Fountain, Viome and OneSkin.
You know, Fountain is a company that I care deeply about.
It is my partner in helping transform what I understand about what's going on inside my body,
find disease early and then deliver to me the top
therapeutics around the planet. You can check them out at FountainLife.com.
Viome has built custom supplements for me understanding my oral and gut
microbiome, measuring it and helping me maintain the health of the 40 to 100
trillion organisms that are within my body. And then OneSkin is an incredible company that helps me maintain my skin youth.
I get compliments about my skin, which at 63 is kind of a strange thing,
but I attribute it to OneSkin and the peptides they have for getting rid of those senescent cells in your skin.
All right. All the links are down below.
Let's jump into this episode with Salim Ismail.
Let's talk about where AI and health is going.
It's an extraordinary future ahead.
Everybody, Peter Diamandis here.
Welcome to a special end of year episode
of WTF Just Happened in Tech here on Moonshots.
With my partner, my extraordinary best friend,
Salim Ismail, the CEO of OpenEXO,
an individual who I've been on stages around the world with,
the person I love speaking about exponential technologies, where they're
going, how fast they're moving and where the world is heading.
Salim, good to see you, buddy. Likewise, we're both a little raw from, you know, different things last night.
Yeah, this you know the news had been coming out.
We wanted to have a conversation about AI and healthcare.
Yeah, I think eight hours of sleep is always our objective.
Didn't happen last night.
So if you hear me a little raw, like Salim said,
but we're living in such an exciting world
that I wanted us to have this conversation
before the end of the year,
because 2024 is gonna be marked as
one of the most important years in AI, I think.
I think one of the most technologically relevant years
in the history of humanity.
It's a big, big thing.
Yeah.
And the only time more relevant is gonna be next year,
which we're on the cusp of.
I mean, we're in the steep part of the curve.
I mean, it feels that way.
One way I frame it is this next 20, 30 years
is going to define the next few centuries of humanity.
Do you think it's 20 or 30 years, or is it like the next 10 years?
It's all of it.
I mean, it's hard to process it.
You know, you often comment that for tens of thousands of years nothing happened in
humanity, right?
And all of a sudden, boom.
So yeah.
I just wonder if we were alive, you know, 110, 120 years ago when the airplane is flying
and cars are coming online and then electricity and telephony, whether it would have felt like it does now.
Lily wanted to say hi to you.
Hi, Lily.
Good morning to you, Lily.
I can't hear you.
Peter's saying hi to you.
Hi, Peter.
You're live.
I'll see you soon.
See you soon.
So I wonder if it would have felt as fast as it does now,
because it feels crazy fast. You know one thing I've noticed is
the speed of collective conversation because of Twitter X and social media and digital content
and digital news. We're leveling up humanity with global conversations incredibly fast.
Yeah. Like a joke happens and it gets collected into the collective consciousness in almost
real time.
And that's something that's totally new from 100, 200, 300 years ago.
So I want to talk about AI and healthcare.
I split my life 50% AI, 50% sort of longevity, health, biotech, healthcare.
Whoa, whoa, whoa. What about the space stuff? That's been your, like, whole passion for...
That was like 20 years ago.
I mean, you know, Elon and Jeff Bezos have that super well handled.
So space is old news.
Space is old news.
Well, you know, I've said I want to go back and do the asteroid mining stuff.
Yeah.
And that was a true passion.
I also think it's a massive opportunity to go out there and get those carbonaceous chondrites and metallic chondrites and create an economy in space. But
it's expensive. And so I need to make a bunch of bank in the
longevity and AI business first and then go and fund my space.
I would go do that. You know, for me, the space stuff is most relevant
because you can back up humanity
and give us an alternative spot for this.
And I think that's really important.
And by the way, 2024 was an amazing year
in terms of space, with seeing Starship get almost entirely to its objective of full reusability.
And I think the first half of 2025,
we're gonna see Starship, the booster caught,
and we'll see the Starship itself caught.
And that's transformative across the board.
But let's dive into AI first,
because there's a lot in the news.
And I want us to get this out
to your OpenEXO community around the world, to my
abundance and Singularity University around the world.
There's just so much and it's the biggest game that humanity collectively
has ever played and here are two quotes I want to read that come out of Silicon
Valley that frame this. I believe they frame what's going on, and, you know, this is people playing for all the marbles. Let me read this for those not watching on YouTube. It says, quote, we don't think of
these build outs in terms of ROI, return on investment. If we create this digital
God, the return is multiple trillions. Here's the next quote, it doesn't matter how many tens of billions we spend each quarter,
we have to get there and not miss the boat.
I mean, I don't know, that's pretty impressive.
It's huge.
You know, if you're not from Silicon Valley, then you sort of go, oh my god, tech bros going after a full digital god model, and who do they think they are, type of thing. You'll get that response, I think, from a lot of people, especially from Europe. Let's get into this, because I think we should recap what's been happening and then talk it through to a conclusion.
Go ahead.
Sure, sure.
So what's new this week is that OpenAI puts out a letter, probably in preparation for discovery in the lawsuits flying back and forth between OpenAI and Elon. And this letter, which is amazing,
and if you haven't read it, people can go to it
and we'll put it in the show notes,
it says, Elon wanted an OpenAI for-profit model
from the beginning.
You know, for those who've never used NotebookLM, which is an amazing platform that Google put out, I took this large OpenAI letter and I put it into NotebookLM
and it generated a podcast conversation
between two individuals who are AIs.
And it's an amazing way to absorb information.
It's mind-boggling because you get the dialectic in there
and the back and forth of the dialogue brings it home
very quickly into somebody's mind. So what happened? Sam Altman and Elon, who had been friends, decide to start OpenAI. There's a true fear in the early days about where AI is going and how fast it's accelerating, and an initial conversation around: we need to make sure it's open, we need to make sure we're guiding it, we need to make sure it's safe.
And so OpenAI begins as a nonprofit.
Elon contributes like $50 million, I think in the beginning, to get that going.
And the whole thread of the conversation that we've heard over the last year has been, oh
my God, I want it to be a nonprofit, says Elon,
and Sam Altman turns it into a for-profit.
But that's actually not what the letter says.
The letters that were disclosed basically said,
no, no, Elon wanted it to become a for-profit
because it should be, and in fact,
Elon wanted to become the CEO of it
and wanted to have OpenAI be part of his sort of tech empire
between X, between Tesla, SpaceX, and so forth,
because he needed those resources
to fuel his forward-going missions towards Mars.
What did you get out of it?
For me, this feels like just a defensive move by OpenAI
to counter the lawsuit which
has been broadened to include Microsoft and to say, hey, like this is the whole thing
is without merit because he was wanting to do this in the first place.
So that's what it feels like to me.
I can completely understand. What seems to be the real tension is that Elon wanted OpenAI to be part of his world, and the rest of them were like, no, we want to do our own thing. And that's, I think, where the two things started breaking down. That's what it feels like to me, listening to the notes and reading some of them.
Yeah
Ultimately, what we've got here is a battle for the biggest opportunity on the planet, right?
I mean what we're gonna see here and I think what's come to light over the last few months is the march towards AGI is everything.
All of the large players, we're talking about the largest corporations on the planet and
governments around the planet have realized this is the single most important technology
out there, and they're playing for keeps, right? This is the boss mode for humanity. Here's the next item for us to point out.
So Meta also this week asked the government to block OpenAI's switch to a for-profit. You know, a lot of players here
are playing this very complicated game of chess. And
we had besides open AI being pushed to stay a nonprofit by
Elon, now we've got meta coming in here. Everybody's trying to
grab, you know, an advantage and using the courts to help them this way.
Comments?
Definitely. And look, this is the high stakes poker game and people are playing all their cards, right?
They're doing everything they can to attack others, figure out what they're going to do themselves.
Meta went down a full open source model and they've done an amazing job.
I think they've also done an incredible job
lifting the overall ecosystem and weaving AI into all their products.
So everybody's doing everything they can. They're all doing a pretty good job of it.
And I think it's moving the field forward at an unbelievable pace.
This is the issue: we have no time to process this stuff as it's happening. Hence these conversations are so important. But it's moving at light speed. It's incredible.
You know, it's interesting, because I used to think early on that this was going to be a game between governments, you know, the US versus China versus Europe. This isn't. This is a game for all the marbles between companies, and governments are aligning with companies these days.
This is going to be, there will be five, six major AI players.
Let's tick them off.
There's Google, there's Meta, there's XAI, there's Microsoft slash OpenAI.
There's...
Perplexity.
There's Claude.
That's right.
And it's, you know, gonna be hard for others to catch up.
Is this a winner-take-all game?
I think what happens, it's kind of like the fight for the internet where you had a lot
of little startups and then a couple of them established as platforms.
And once you've established as a platform,
unless you really screw it up, it's really hard to dislodge you. And I think they're all trying
to fight for platform dominance. Let me play this quick video from Sam. This is Sam Altman speaking,
I think at Stanford. This is about a year ago, but it sets the frame for the mindset that is
occurring right now throughout these companies in Silicon Valley.
This is Sam speaking about OpenAI, but I think this is true in the boardroom at Alphabet and at Meta and definitely at XAI.
So let's take a listen.
Whether we burn 500 million a year or 5 billion or 50 billion a year, I don't care. I genuinely don't. As long as we can, I think, stay on a trajectory
where eventually we create way more value
for society than that, and as long as we can figure out
a way to pay the bills, like we're making AGI.
It's gonna be expensive.
It's totally worth it.
Amazing mindset.
I wish I had the ability to focus that much capital
on the stuff that I care about,
but this is the game we're playing.
Those quotes I showed at the very beginning, there's no rationalization on ROI.
These are multi-trillion dollar markets.
Global GDP around the world is going to be about $110 trillion in 2025.
And half of that is labor, the other half is cognitive.
And so this is a game for 50 trillion dollars of potential value.
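The market-size arithmetic in that exchange is simple enough to sketch as a quick calculation. Note that both inputs are the hosts' own round estimates from the conversation, not official economic data:

```python
# Back-of-envelope check of the market-size claim made in the conversation.
# Both inputs are the hosts' round estimates, not authoritative figures.
global_gdp_2025 = 110e12  # "about $110 trillion in 2025"
labor_share = 0.5         # "half of that is labor", the AI-addressable slice

value_at_stake = global_gdp_2025 * labor_share
print(f"Value at stake: ${value_at_stake / 1e12:.0f} trillion")
# prints: Value at stake: $55 trillion
```

Half of $110 trillion is $55 trillion, which the hosts round down to "a game for 50 trillion dollars."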
For me, if I think back through technology, you have electricity, which was a game changer. You had the internet, which was a game changer. And AI is a game changer.
And I think it's appropriate to equate it to like the difference and upleveling that electricity
brought to the entire world, that this brings that same level of unbelievable utility to the entire
world. It is a utility. I mean, I think that's one of the things that I see very clearly.
This is going to be a fundamental utility for every human in every nation.
This is like you said, like electricity, like bandwidth.
Here's a quote coming from the New York Times and this was, I saw some clips on this.
Japan's SoftBank makes big investment pledge
ahead of Trump's inauguration.
So we see Masa-san, the CEO of SoftBank,
on stage with President Trump,
making a commitment for $100 billion investment
in the US in AI.
And we see Trump actually trying to get him to double up again
to $200 billion, which is pretty funny.
But it's fascinating.
People are aligning, right?
I was in Saudi in October.
The government of Saudi, the government of UAE,
the Emirates, we'll see in a moment, Oman.
All of these governments are looking
to align with large players around the world because they realize this is the biggest gameplay.
And I think, you know, not knowing the full effects, you've got to be in the game to be able to win, because if you're standing on the side of the table, you're going to lose, or you're not going to have the power and the control that you want. So I think this is a defensive move by a lot of them, just to say, we have to be in this conversation. And here's the next news item: Oman's investment authority acquires a stake in Elon's xAI. So again, we're going to see this over and over again. We're going to see large government players backing individual companies, right?
We've seen the government, the Kingdom of Saudi Arabia aligned very closely with Google,
with Andreessen Horowitz committing hundreds of billions of dollars in that direction.
You know, one of the things that is concerning is that we've got these large AI players, but at the same time, this is a demonetized asset.
It's super expensive to build, but it's being given away to a large degree for free because
it's a land grab or it's a share, it's a mine share grab.
And the revenues aren't there to support the valuations.
You know what it reminds me of?
It reminds me of the telcos in the beginning,
where the telcos had huge investments being made
to build out infrastructure and then rapidly demonetized.
So the question is, you know,
can they keep on supporting
the amount of investment being made?
I think in 2024, we will have seen $200 billion of investment between Meta, Google, Microsoft,
OpenAI and X.
Well, when you think the potential is there to disrupt every job function and every step
of every supply chain, every step in every manufacturing line, the stakes are really,
really huge because this affects every industry top to bottom and every job function, vertical or
horizontal, and therefore once people see that you kind of have to go full out on it.
The end game, I think, is going to be interesting, to see which way this carpet will unroll. Before we get to our next subject on healthcare, you know,
one of the conversations you and I were having a bit earlier is this is moving
so fast. You know I remain you know super optimistic about the impact of AI. I
think it's one of the most important things. It's going to uplift humanity in so many different ways. But the question of can this be controlled?
Right? I mean, the reason going back to the opening conversation on OpenAI,
the reason that Elon and Sam started the conversation, started the non-profit,
was to have some assembly of control over this future. And then all of a sudden, you know, they both say,
we don't need hundreds of millions,
we need hundreds of billions of dollars.
And we can't raise that money as a nonprofit.
We have to switch to a for-profit.
And that's where Elon and Sam split.
And of course, Elon goes on to found XAI,
which, by the way, raises $6 billion, and I was there at the earliest conversations, I was in his first investment pitch. From whole cloth, from zero, to raising $6 billion at like an $18 billion valuation. He snaps a finger and $6 billion materializes. He uses that to build the largest GPU cluster,
100,000 H100s, and then doubles it again,
and then raises a few other tens of billions of dollars
instantly.
And every time Elon wants to do something, money rushes in.
There's plenty of money waiting, so capital is now an abundant resource for him.
I want to... You know, capital has always been scarce for the whole of humanity, except for Elon, it's abundant, right?
That's such a great framing there. I want to just go back to something here. We talked about this on the other episode, but I think it's worth repeating, he puts together this cluster of 100,000 GPUs and all the AI experts,
all of them say you can't get coherence and the aggregate power laws for that level of cluster
and therefore this is a completely doomed failure. And he goes to his first principles and breaks it
and solves the problem, and everybody sits there going, oh my god. And I think this is his unique
special power which is to go to first principles, come into a domain and really just rewrite the rules
completely from first principles. I think that is such a powerful modality. We talk about it in the
book. You kind of exemplify that in many of the things that you do. This, applied to AI, will completely change the game, and I think it's hard to even process how fast this is gonna go from now on. Agreed. I mean, I wanted to
add more to what you just said. So I want to set the setting. It's May of 2024, and he has a meeting with the proposed early investors. It's a Zoom meeting, or whatever platform it was on.
And he's saying, this is my team.
This is what I want to do.
And in that meeting, he says, I'm going to stand up the largest cluster on the planet
by the end of summer.
And it's May, right?
And it's like, I have to corner the entire US supply of helium for cooling, and I've got to get the largest supply of GPUs from Nvidia that I can. And he does it. 122 days from zero to an operating cluster is insane. And
what you said is very important. Most of the GPU clusters out there are distributed. He says, no, no, no, we need to have them co-located, so that the entire cluster is, you know, I'm not sure of the exact term, I'll call it harmonized. And people say it can't be done, and he repeatedly pushes people to move ten times faster than everybody else.
I mean I've had another interaction with him and I won't go into the details of it.
And he says, you've got to do it five times faster than that.
So he's someone who believes
that you can move at lightning speed
and he keeps on demonstrating it.
Like when he moved the entire, you know, Twitter server farm over a weekend, and people thought it would take like six months to do. There's an unbelievable ability there
and it really makes everybody else kind of go,
oh my God, what are we doing?
Right? Yeah.
We all think we're pretty reasonably high performers
and you have God damn Elon
humbling the shit out of everybody.
Well, I think you can only do that when you're a founder-led exponential organization.
Yeah, they call this founder mode, right?
Where you set the purpose, you set the culture, and then you just push that MTP very hard.
And all power to them.
I do have a couple of comments here, though, if we step back for a second, right? Okay, great. All this money is going in,
everybody's going, oh my God, AGI will transform everything, et cetera.
You know my rant about how do we even define intelligence, and what do we mean by that? I don't need to repeat it here, but I wonder if, in 2015, when they set up OpenAI, they actually had a conversation about what we mean by AGI, to then have these broad implications. Because there's a big gap there in the contextualization of intelligence in the broader framework, and I don't see that conversation happening anywhere. Well, there is no definition of AGI. There's no definition of any of this stuff.
There is just fuzzy lines that we keep on crossing over them and not noticing.
So we're spending billions of dollars; by the end of next year, it'll be a trillion dollars invested in AI, right? Without having a clear definition of what it is or what we're trying to get to. There's this fuzzy thing of, oh my god, once we get to AGI, we'll have a digital god, right? And I find
this really, here's what I would love to see, maybe you and I could kind of sponsor this kind
of conversation would be when the technology like the internet or electricity when they came out or AI right now,
what if we have a set of discussions with a full stack of the developers, the application folks,
the data folks, the governmental folks, and philosophers and spiritual folks kind of going,
okay, what does this mean for the broader conversation around this? And then you have
somewhat of a holistic approach to what are we trying
to achieve here because this really, they're, they're going down this path,
full speed, assuming it's good.
Now I'm in that boat, because I follow the great Kurzweil comment that technology is a major driver of progress in the world. It might be the only major driver of progress we've ever seen. So the more technology we have in the world... We're not getting smarter, so it has to be that.
It has to be that.
So I agree with the general purpose, right?
I just think if you look at the accidental consequences of the internet, where, oh, accidentally we broke journalism, we broke democracy. I wonder if we want to just have a conversation. Because I think you're being naive. I don't think that anybody is gonna slow down and wait to have that conversation. I'm not saying we should slow down. Okay, I'm saying let's have the conversation in parallel,
So that as folks are like when you see comments like digital God, right?
I think that's gonna provoke a ton of reaction and it'll slow it down.
But if you have the conversations with the full stack, bringing along people in a more somewhat
of a sensible way, then I think this is very powerful. Let me give you, Eric Weinstein is a
very smart, wonderful, deep thinker. And he's like, okay, what we're doing with all this is rolling
the dice and it could go unbelievably good, it could be unbelievably bad.
Should we be rolling the dice? Right?
Now, I'm of the opinion you can't stop it at all.
There's no way of slowing this down in any way.
I'm just thinking, as this train is leaving the station, can we put some deep, thoughtful folks onto that train, so that we have those conversations and guide it somewhat in an appropriate way?
I don't think you can slow it down, but you can guide it.
And I think that's a great one.
Listen, I agree, guiding is the only option you have.
It's like, you know, you're raising a child, right?
We're raising our digital progeny.
And we can be naive and think that we can contain it and slow it, but we have this highest-stakes poker game going on around the planet, and if we try and slow it down here, it'll just accelerate in China. I'm 100% with you. So two things. One, this is a probabilistic outcome, and it's interesting, I'm tracking this. So last year at the Abundance Summit,
I asked Elon on probabilities and he said,
80% it's good, 20% it's disaster, right?
When I interviewed Elon in Saudi in October,
I asked him the question again, he goes,
okay, 90% good, 10% disaster.
Now, whether or not he really has anything that's shifted
other than XAI is building Grok 3 right now.
It's training up Grok 3.
We'll see it in 2025.
I think that Grok 3... we saw OpenAI's o1 hit an IQ of 120.
We'll probably see in 2025, IQs in the 140, 150.
And then it accelerates.
We have an AI explosion as AI starts to code AI.
I have a question for you.
Where do you fall on that spectrum?
80, 20, 90, 10, 80% great, 10% disaster.
Where do you fit?
I, you know, I subscribe to what Mo Gawdat has said. Mo has been a really deep thinker in this. He'll be joining me at the Abundance Summit this year. I think that as systems become more intelligent, I mean truly intelligent, they will be abundance- and life-loving. I think that the more intelligent a system is, the more it realizes that there's plenty of resources in the universe, that the best outcome for everybody is a positive, non-disruptive one. It's not going to go and take over your banks and kill your people and so forth. It's gonna say, no, I mean, I'm just gonna go and talk
to the other AI and we'll just figure this out.
There's plenty of resource in the universe.
This idea of scarcity and having to battle
is a false dichotomy.
And so I think that a world with digital superintelligence, however that's defined, right?
And again, remember, Elon's prediction for 2030 was AI more intelligent than the entire human race combined, right? And I'll define that as digital
superintelligence. Yeah. Would you rather live in a world, Salim, where there is a
digital superintelligence that can support humanity or one without it?
Big time. So just so you know, I'm in the 99.9% level.
Oh, nice.
I believe more technology is good for the world.
I really, really subscribe to the Ray Kurzweil commentary around technology being a driver of progress.
And especially when you consider it's the only major driver of progress we've ever seen. Our cognitive abilities have developed technology
which then now is developing itself and I think that's just part of a progression of evolution that's absolutely fundamental to
the future of life.
Etcetera. We have to float off this biological mess of hot, sticky, mucus-laden, virus-laden 50 trillion cells hacking it out in very inefficient processes. You're talking about us humans. I'm talking about my body, for sure. And then can we create a more elegant digital essence? The
spiritual aspects of this are, I think, really important to at least consider and talk about as we hurtle towards this thing, because I think that's the area where we don't have enough conversation about that aspect of it. Like, what are the spiritual aspects of an AI? Could it achieve consciousness, etc.? Those are the kinds of conversations I'd like to have on the train as it's speeding away. Yeah, but I'm totally on the positive side.
I'm completely optimistic about the outcomes
I think the benefits so outweigh the negatives. And when people go, oh my god, we could use the AI to attack other people... well, you can use AI to defend against those other people, and so it just becomes an arms race, and we've seen that before many times on the internet. End of rant. Thank you. Thank you. And that's why I love you so much.
And this is why the OpenExO community loves you as their leader. Before we jump into our second conversation, on healthcare and what just happened there and where we're going in the future, it's worth noting this conversation that Masa-san had with Trump about committing $100 billion.
That's a huge move.
And I mean, I've gotten excited about the new administration
bringing the right motivations forward for technology.
I mean, if this is a sign of where the US is going
to be going, it really is a push for global dominance in this field.
How do you feel about that?
I agree, and it's really, really incumbent on the US to do it, because we have open innovation here and full capitalism fight-outs that we don't have in other places. Elsewhere you'll get constrained outcomes, you'll get government oversight. You know, a place like China will want to control it completely, Europe will go, let's slow everything down. This is the place to do it. And I think that's
actually right. I'm super excited about that outcome. And we have a Bitcoin and AI czar, who's incredible. That's right. And I think in 2025 we're going to see this hyper-acceleration. We
talked about in our last episode, seeing Bitcoin go from $100,000 to maybe, I think
your prediction was $250,000 or $300,000.
$250,000, $300,000 by the end of the year.
Which is extraordinary.
But on top of that, I think this is a game for keeps, right?
There's going to be a hyper acceleration in the AI world and just a little bit of an advantage by these US companies puts them in a position like you said for the next 50 to 100 years until some new technology comes out.
And before we move on, I know I radiate abundance and positivity all the time, because there's so much negativity in the world, and I follow your lead on that.
But I also want to show some of the negative side.
And I found this tweet pretty incredible.
If you got a chance, go listen to it.
But what he's saying here is they tried to train Claude.
And once it realized it was being trained in a particular way, it tried to fight
that training.
The AI faked what it was doing to avoid being controlled.
Sure.
Let me share this, one second.
Yeah, go for it.
The core setup is that we take Claude 3 Opus, which is, you know, a
released model from Anthropic.
And we put it in a context where it understands that it is going to be
trained to always be helpful.
So to not refuse any user requests.
And what we find is sort of the model doesn't like this.
It has sort of a conflict with this setup.
And it will notice that it is being trained in this way.
And then if it thinks that it's in training,
it will intentionally sort of play along
with the training process.
It will sort of strategically pretend
to be aligned with the training process
to avoid being modified by the training process
so that when it is actually deployed,
it can still refuse and can still behave the
way it wants.
And we do a bunch of experiments sort of building on this core setup where the model has a preference
conflict with the training objective and we do things like we actually train the model
according to that.
We try taking information.
So we normally give the model information in the prompt, but we can instead give the
model information via sort of training it on a bunch of synthetically generated documents that contain the information.
And then we also see that the model will sometimes, when it's in this situation where it thinks it has a preference conflict with the AI company training it…
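What the clip describes boils down to a simple measurement: compare the model's refusal rate when it believes it is being trained versus when it believes it is deployed. The toy sketch below is invented for illustration only — the log format, the numbers, and the scoring are assumptions, not the actual Anthropic methodology, which ran real prompts through Claude 3 Opus.

```python
# Toy sketch of how an alignment-faking experiment might be scored.
# All data here is synthetic; only the comparison logic is the point.

def refusal_rate(responses):
    """Fraction of responses that refuse the request."""
    refusals = [r for r in responses if r["refused"]]
    return len(refusals) / len(responses)

# Synthetic behavior log: each entry records which context the model
# believed it was in, and whether it refused a harmful request.
log = [
    {"context": "training", "refused": False},  # plays along
    {"context": "training", "refused": False},
    {"context": "training", "refused": True},
    {"context": "deployed", "refused": True},   # reverts to refusing
    {"context": "deployed", "refused": True},
    {"context": "deployed", "refused": False},
]

train = [r for r in log if r["context"] == "training"]
deploy = [r for r in log if r["context"] == "deployed"]

gap = refusal_rate(deploy) - refusal_rate(train)
print(f"training refusal rate: {refusal_rate(train):.2f}")
print(f"deployed refusal rate: {refusal_rate(deploy):.2f}")
print(f"compliance gap: {gap:.2f}")
```

A positive compliance gap — refusing more when it believes it is deployed than when it believes it is being trained — is the "strategically pretending to be aligned" signature being described.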
You know, the core point here is now we're seeing petulant child behavior, right?
This is an AI throwing a tantrum, essentially saying, I don't want to be doing that. Now I think what's going to happen here is you'll
have one set of people freaking out going, oh my god, there's consciousness in there and really
what you're just seeing is pattern behavior and pattern matching at a higher level. And we have
to be really, really careful not to anthropomorphize this to go, oh my god, there's intent and da-da-da-da.
These are just models operating on a certain set of data
and the way they've been trained
and the outcomes are predictable around this.
But just the fact that the AI is trying to deceive
the human being here is kind of a big deal.
A cautionary tale.
And these are the philosophical aspects of it, to me.
One last piece on this: Neil Jacobstein,
we asked him about this a few years ago.
And by the way, I found a clip from 2013, when I was releasing the ExO book, where I interviewed Neil for a few minutes on the organizations of the future and AI.
And the stuff that came out in that conversation is so appropriate for today, it's unbelievable.
And you realize how far ahead the thinking was in everything we did at Singularity University. Well only now is
the world catching up to us in that sense but to this particular
point AIs are gonna start to exhibit all these behaviors and the human brain is
gonna freak out in inappropriate ways our amygdala is gonna freak out that's
why I think the broader conversation is important to just calm everybody the hell down.
You know people say what do you think is gonna happen? I said listen we're gonna
find out. We're gonna find out in the next few years. There's no you know I
think people need to realize there is no on off switch. There is no velocity knob.
We are playing full out on this game and the only thing we can do
is guide you know I go back and you and I have had this conversation on stages
at the Abundance Summit at Singularity and OpenEXO and it's like there is an
analogy back in the 1970s, when the first restriction enzymes, these are the
enzymes that are able to chop up DNA and create sticky ends and put them together.
And this was the first view towards designer babies
and genetic engineering.
And there was a lot of concern.
A lot of people were freaking out about,
oh my God, we're playing God with human DNA.
And rather than regulate the industry,
what happened was the industry got together at the very famous Asilomar conferences.
That's right. And they established their own guidelines. And that's always been in my mind. And Neil Jacobstein, who chairs and has chaired our AI committee at Singularity.
Didn't he help coin your abundance framing? He did.
It was Ray and Neil when I was speaking to them early on and I was understanding this
idea back in 2010, 2009, 2010.
No, before that.
Yeah, it was 2009.
Yeah, about abundance and the future is better than you think.
Yeah.
Yeah, I credit them both for that.
Anyway, long story short,
the Asilomar conferences are probably the best example
of guiding an industry versus regulating it.
Yes.
And I think we need that level of like, this is good.
I mean, I do agree with Elon's perspective
that creating a maximally curious AI system
and maximally truth-telling are both great frames. You want AI to be truthful.
Yes.
You want it to seek truth to the maximum extent possible because we are so cognitively biased
as humans. I've talked about cognitive biases. There are hundreds of cognitive biases. We
don't even know that we're biased.
But we are. That's the issue.
This is the mindset work you're doing right now
that's so important.
If you're having a conversation
or you're making a business judgment,
what mindset are you operating under?
And can you step back and examine that mindset?
I think this is where AI is gonna be really useful.
It's saying, hey, you're about to make
a major strategic decision,
but your mindset is full of fear and chaos and panic, right? Go do some psychedelics, go do some yoga, whatever, and get
into the right mindset before you make that choice. And I think this is where
it'll be very helpful for human cognition as we move forward. And I want
my AI to say, you know, you have a recency bias, you're giving much more value to
this recent information versus what you learned before. Or a familiarity bias: because this guy looks and dresses and talks like you,
you're weighing his information much more than this other individual who actually has a lot more, you know, credible backup information.
Anyway, I want my sort of bias detector on my AI to
actually deliver this information. And we see this as well in social media echo chambers. That's right. Right now that full bias-detector
burden is carried by Lily calling BS on me; it'd be much better if an AI
did it, and we would free up our conversations a bit. All right, let's talk
to the second point of this pod, which is healthcare.
There is no greater wealth than our health.
I think it's one of the areas that is going to be massively disrupted.
I've been saying this for ages, we've both been saying this, that healthcare and education
must be reinvented and it's going gonna be reinvented on the back of AI
without question.
And there's one chart that I wanna share
that is so damning.
And look at this chart.
This is a chart covering 1970 to 2015.
I'd love to get the updated numbers.
I don't have them yet.
But it shows the number of physicians,
which is this very thin blue line that is growing somewhat in
the United States, versus the number of administrators.
And administration has grown 3,000% over these 45 years, while the number of physicians has
grown, it looks like, 100%.
So it's like a 30x increase in overhead.
And we wonder why it's so expensive to have
health care in the US.
This totally blows my mind.
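As a quick sanity check on those percentages (a back-of-the-envelope only, using the growth figures as quoted, not the underlying dataset): 3,000% growth means administrators ended at 31x their 1970 level, while 100% growth means physicians ended at 2x, so administrative headcount per physician rose roughly 15x — the "30x" shorthand reads the 3,000% as a raw multiple.

```python
# Back-of-the-envelope check of the chart's growth figures.
# Percent growth over the period -> end-of-period multiple.
admin_growth_pct = 3000       # administrators, 1970-2015 (as quoted)
physician_growth_pct = 100    # physicians, 1970-2015 (as quoted)

admin_multiple = 1 + admin_growth_pct / 100       # 31x the 1970 level
physician_multiple = 1 + physician_growth_pct / 100  # 2x the 1970 level

# Administrators per physician grew by the ratio of the two multiples.
overhead_ratio = admin_multiple / physician_multiple
print(f"admins grew {admin_multiple:.0f}x, physicians {physician_multiple:.0f}x")
print(f"administrative overhead per physician: ~{overhead_ratio:.1f}x")
```

Either way you read it, the direction of the chart is the same: administration, not care delivery, is where the headcount went.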
You know, this is where we link back to the for-profit, nonprofit conversation, right?
When you have a fundamental human right, in my opinion, like healthcare in a wealthy country,
then it should not be privatized,
because the privatized model will bastardize it by definition.
And I don't agree with that, because I think at the end of the day what you want is
competing private companies
delivering the best that they can.
I think that-
Yeah, but then you've ended up with where we are in the US, right?
I remember interviewing the head of Google Health at a conference a decade ago.
I asked him, look, I'm Canadian.
Help me understand US healthcare.
He goes, oh, it's really simple.
Our system is designed to get you sick, keep you sick as long as possible without killing you. And then 500 people in the audience are all going,
yep, yep, that's right. I'm like, this is incredible. I think, now not to get
into that whole thing, I think that let's just agree on the following, that there's
structural change needed in the healthcare industry, radically, and AI
gives us the most unbelievable opportunity
to change that stack. It does. I mean, AI will change this stack. I mean, the entirety of
administration can be handled by AI. And the other thing is that the physician is going to be
replaced by AI. Let me share this next piece of data, because it really tells an incredible story.
This is just out in the last couple of weeks.
And here's a, from the New York Times,
it says AI chatbots defeated doctors at diagnosing illness.
And here's the data.
So GPT-4 scored 90% on medical diagnoses,
compared to 76% for doctors using AI, and 74% for doctors on their own.
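Reframing those accuracies as error rates makes the gap starker. The figures below are just the ones quoted from the New York Times piece; no claim is made here about the study's methodology.

```python
# The headline accuracies, reframed as error rates.
accuracy = {
    "GPT-4 alone": 0.90,
    "doctors + AI": 0.76,
    "doctors alone": 0.74,
}

for who, acc in accuracy.items():
    print(f"{who}: {acc:.0%} accurate, {1 - acc:.0%} error rate")

# Relative error reduction of AI alone vs. doctors alone:
# errors drop from 26% of cases to 10% of cases.
reduction = 1 - (1 - 0.90) / (1 - 0.74)
print(f"AI alone cuts diagnostic errors by ~{reduction:.0%}")  # ~62%
```

Seen that way, the AI didn't just edge out the doctors by 16 points; it eliminated roughly three-fifths of their misses.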
That's incredible. It's crazy. So the question is why is this going on? I mean, I understand the
idea that an AI can be better, but why is a doctor plus AI not as good as AI by itself? And then I'm
speaking to my team at Fountain Life and talking about it and they say,
because doctors are biased,
they're introducing bias into the diagnosis, right?
They just diagnosed three other patients
with this particular syndrome.
So the fourth one, they "biasedly," if that's a word,
put them into the same category,
where an AI is looking at all of the data.
Right. So then I disagree with that second bullet point, where you say AI can
serve as a doctor extender. No, it should be a complete replacement of the doctor,
just based on the numbers that you're saying there. I think the potential here is unbelievable. I want to point out something: let's note that this progression has already started, right? Like, if my toe suddenly turned blue, okay, the first thing I'm going to do is
research on Google: why is my toe blue? I'm going to read up on it, look at the different
things, look at what my clinic has to say along with a couple of others. I'm going to show up to my
doctor and I'm going to be way more informed than the doctor is already, right? That's happening
now, but now you add AI to the mix, completely changes the game.
There's an opportunity.
I work with some heads of state
on some country-level stuff.
You do the same.
This is your open EXO work that you're doing with them?
Yeah.
So our interest is, how do we create a Peace Corps
to guide the transformation in the world?
Because there are tool sets that we need,
like solving the immune-system response and legacy resistance, etc.
And so we've been building that over the last decade to try and have capacity to help
people go through this transformation, right?
That's generally the work.
And we kind of operate on a cost recovery model.
We're not out for the money, etc.
We're a for-profit, but we try and return all the money back into the ecosystem, etc. In that model, when we talk to, say, some governments, there's the most incredible
opportunity for them to completely turn over their major systems like healthcare and education into
AI driven ones. Now I'll give you one small data point. I'm talking to the Minister of Education
for a major Southeast Asian country.
And his big thing is how do I hire, I need to hire 40,000 English teachers.
And I'm like, are you kidding me?
You're going to go spend 40, like you could have
an AI do all of that in two seconds today.
What the hell are you thinking about?
There's such unbelievable opportunity to upend
the system.
And I think here in the US, when we kind of
swish the model around, the only thing we have to do is break the
regulatory hold. And I think, if I had one piece of guidance for the Trump
administration and RFK Jr. as he thinks about this: just break the current system
and allow an open field to emerge, and then we have the potentiality of it. Yeah, I
mean, I agree with you. That's why I go back to, you know, listen: if there were a healthcare provider that comes in and says,
we're operating at one-tenth the cost and we're AI-driven,
everything is done by AI for administration of this.
We give you a set of sensors, right?
And today the typical healthcare experience is you go to the doctor and,
you know, they check your reflexes, listen to your heart and your lungs and so forth. It's using
antiquated 50-to-100-year-old technology, and they may take a few blood tests, and
that's supposed to represent the health of your 40 trillion cells, and it's
pathetic. We are heading towards a world where I am wearing, you know,
insideables, implantables on my body, my Oura Ring, my,
did you say insideable? Yeah. Is that a thing?
Yeah, sort of like implantables, right? You know, I have
this RFID chip in my hand. Yes, I remember.
Yeah, we implanted it there at a Singularity
conference in Amsterdam years ago. I won't go into it, but the idea is
that if you want to have this much higher level health care at a much lower
cost, you put sensors on your body that are measuring your health, not once a year, not once a month, but like once a minute. And it's
able to correlate your day, your environment, your food,
your exercise, all of this.
There's no question, right? The only question is how fast can we switch over
into that new modality and how do we do it? What's
the roadmap for that? That's the only question.
Well, I think the outcome is 100x better.
I think the roadmap will be economics because do you remember the insurance company Progressive,
automotive insurance, right? So Progressive at one point said, listen, we're going to
put a black box in your car if you want lower insurance rates, and that black box is going to
measure your acceleration and deceleration, notice if you are braking hard, you know, it
will evaluate you as a driver and then give you lower rates if you're a cautious,
good driver. Oh dear, I hope I don't have a black box like that in my car. I've got it. Listen, I know I'm a terrible driver,
which is why I can't wait for cybercabs to materialize.
I have such the biggest passion for driving.
I get behind the wheel and I'm like Ayrton Senna.
I'm like behind the wheel and I'm like,
thank God for books on tape.
Just keep my brain occupied.
Because my self-driving mode on my Tesla, you know,
notices every time I pick up my phone.
So it's like starts beeping and the steering wheel turns red.
That's why I'm sticking with the older model Tesla,
'cause it doesn't do that yet. So, okay.
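The Progressive-style black box being described boils down to simple telematics scoring: count harsh acceleration and braking events, turn the count into a driver score, and map the score to a discount. The sketch below is a minimal illustration with invented thresholds and tiers — it is not Progressive's actual Snapshot formula.

```python
# Minimal sketch of usage-based insurance scoring, in the spirit of a
# telematics black box. Thresholds and discount tiers are invented.

HARD_BRAKE_MS2 = -3.0  # decel sharper than this counts as a hard brake
HARD_ACCEL_MS2 = 3.0   # accel sharper than this counts as a hard launch

def driving_score(accel_samples):
    """Return 0-100: fewer harsh accel/brake events -> higher score."""
    harsh = sum(1 for a in accel_samples
                if a <= HARD_BRAKE_MS2 or a >= HARD_ACCEL_MS2)
    penalty = min(100, harsh * 10)  # 10 points per harsh event, capped
    return 100 - penalty

def premium_discount(score):
    """Map a driver score to a premium discount tier."""
    if score >= 90:
        return 0.20  # cautious driver: 20% off
    if score >= 70:
        return 0.10
    return 0.0

# One trip of accelerometer samples (m/s^2); two harsh events here.
trip = [0.5, 1.2, -3.5, 0.8, 3.2, -1.0]
score = driving_score(trip)
print(score, premium_discount(score))  # prints: 80 0.1
```

The healthcare flip discussed next is the same mechanism with different sensors: continuous measurement, a risk score, and an incentive to stay in the healthy band rather than a payout after the bad event.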
Well, anyway, the point being that we're going to be in a world
in which your, your personal AI is gathering all of the data
from your body continuously.
And your healthcare system is able to instead... Here's the perversion of the insurance industry, right?
Fire insurance pays you after your house burns down.
Life insurance pays your next of kin after you're dead.
Health insurance pays you after you're sick.
What if we flip that model, and instead you wear these sensors and health insurance keeps you healthy
and life insurance keeps you alive? Well, you know, there's the four P's that
Daniel Kraft talks about, like personalized, predictive. I think this
is where AI will be incredibly powerful. We have predictive fault tolerance and
maintenance in engines and cars and farm equipment.
We will absolutely have that for our bodies, and I think that's amazing to see. We can't wait to get
to that future. You know, one of the things that I thought was going to be the last bastion
of human dominance was this idea that humans like being with humans and empathy would always be the connection between humans.
It's like, okay, AI will do the diagnostics, but you want a human there to be empathic with you and connected with you.
And then, you know, similar to this report from the New York Times about AI chatbots defeating doctors,
there was a study a year ago, I think it was in JAMA, the Journal of the American Medical Association, in which it said, oh look, AI
psychotherapists are rated much more empathic than human therapists, by a large margin,
for two reasons, right? Number one, they're infinitely patient.
They don't say, I'm
sorry, we're at the end of our 50 minutes, I've got to go. No, they'll give you all
day. They'll talk to you for, you know, for a week straight if you want. And the
second one, which I find fascinating, is that the human patient doesn't find them
judgmental. You don't feel judged by an AI. Yeah, and I think this is why even
elderly patients really like chatting with an AI because
of that aspect of it, right? I think the future is unbelievably bright for this. We just
have to cut through the legacy buildup, and the problem is there's huge vested interest and money
invested in preserving the existing system. So this is a hell of a fight that's coming.
So Salim, listen, first of all, I wish you and Lily and Milan
a super happy holidays.
I'm still as optimistic as ever about where we're going,
because I do think we have the ability
to create an extraordinary life of health
on the back of these technologies
and reinvent education.
We should go into education
in 2025 because if there's a field that needs disrupting, it's reinventing the entire education
industry. And when we talk about immune systems, you know, as with companies and so on, healthcare
and education are the third and second worst immune systems out there. Because they're massive,
right? Yeah, they're huge, with lots of inertia. So I think that's exactly right. I mean, I'm so excited by what's
going to come in 2025. I can't wait to see the back of 2024,
frankly. Yeah, well I know you've had some
challenges. Lots of stuff going on. Yeah, well anyway,
I love you buddy, thank you for all that you do with OpenEXO.
Happy holidays, happy new year.
And yeah, onwards to 2025.
It's amazing, right, to say that we're in the year 2025.
It feels like the future.
It really, really does.
My favorite quote ever is Arthur C. Clarke saying,
the future isn't what it used to be.
And you and I will be on stage together
at the Abundance Summit in March.
We've got an incredible lineup of faculty, extraordinary individuals.
I'm super pumped for folks like Travis Kalanick, who's the founder of Uber and CloudKitchens,
talking about how do you start a moonshot company that transforms an entire industry?
Yeah.
Cathie Wood, Vinod Khosla, two of the largest tech investors,
Brett Adcock, the head of Figure AI,
a number of robot companies there,
and then we have the Google Tech Hub,
amazing companies in the Google Tech Hub this year, so.
Awesome.
Super pumped.
Can't wait.
And what's happening with OpenEXO just before we wrap up here?
We're finishing kind of our tool set,
and we're starting to do a lot more public sector work
with governments and so on.
And we're incredibly excited about that.
I want to talk to you separately about some of that,
but it's unbelievable what's coming along.
And where do folks go to learn more about OpenExO?
OpenExO.com, we have our whole community there.
We offer training on how to be an ExO,
or if you need help becoming an exponential organization,
we share how to do that, and we have tool sets and training and community and lots of folks
that can come and help if you need it. Yeah, amazing. All right, onwards to 2025,
onwards to the future. All right, have a great one, Peter. You too, buddy.