Moonshots with Peter Diamandis - Emad Mostaque: The Plan to Save Humanity From AI | EP #184
Episode Date: July 24, 2025Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Salim Ismail is the founder of OpenExO Dave Blundin is the founder of Link Ventures Emad Mostaque is ...the founder of Intelligent Internet (https://www.ii.inc). Access Emad's White papers: - https://ii.inc/web/blog/post/master-plan - https://ii.inc/web/whitepaper - www.Symbioism.com – Offers for my audience: Test what’s going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod –- Learn more from Emad: https://ii.inc/web Learn about Dave’s fund: https://www.linkventures.com/xpv-fund Join Salim's Workshop to build your ExO https://openexo.com/10x-shift?video=PeterD062625 Connect with Peter: X Instagram Listen to MOONSHOTS: Apple YouTube – *Recorded on July 10, 2025 *Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Is there any way that you'll be able to out-compete a fully AI version of Peter or Dave or Salim launching companies?
No.
No.
Labour isn't the key driver of productivity anymore. We need to have a new equation for that.
The systems are about to get incredibly complicated.
Nations need to have a plan.
The thing that we were worried about was...
I think what you're proposing here is actually viable, executable fairly rapidly.
The AI I'm trying to build is not the AI that Elon is trying to build.
The AI that I care about is the AI that teaches my kid, the AI that helps with my mental health,
the AI that organizes our collective knowledge.
I think this governance thing needs to happen now and we need to have serious discussions
about it and we need to think about what is the world of AIs that I'm working with?
Do you trust the current large language models?
Of course not.
Now that's the moonshot, ladies and gentlemen.
Everybody welcome to moonshots.
I'm here with my moonshot mates, Dave Blundin, Salim Ismail,
and a friend of the pod, back again for I think the third or fourth time, Emad Mostaque.
Emad, good to see you, buddy.
Pleasure as always. Thanks for having me.
Those of you who know, Emad was the founder and past CEO of Stability AI. He's gone off
on a quest. I remember after he left Stability, we had a number of
conversations and I said, so what are you up to? He goes, I am going to work on a plan
to create the intelligent internet, sort of a vision of the future, which is pro-humanity,
pro-safe, pro-sovereign AI. And today we're here to talk about that. So, Emad, what was your
motivation for first considering the intelligent internet? And you've just published a sort of
white paper, your master plan, if you would. Like Elon published his master plan, you've got your
master plan. And what does it say? So define for us
this vision of the future, and why did you kick it off?
Yeah, thanks.
So when I kicked off Stability AI,
we ended up with 300 million model downloads, most famously Stable Diffusion, but top-notch models of every type.
It was because I saw a future where open source AI would not be available, because you would see it being outstripped,
but it'd be a part of our very lives. You'd have these really giant models, and I think we're peaking now, and then you have the models that teach your kids and that manage your health care
and other things like that. And I was like, what goes into those models? It's what Dave was saying
about the data, you know, and then how are they coordinated. And then I was like, oh my God, how do we compete against these models?
You know, because if you think about it again, like, you created something that's a complement to us in figuring out these patterns, but it's coming like a wave now.
And I think we're seeing that literally now. Grok 4 is one of these AGI-like systems. We can discuss that in a bit.
That will displace our economy, because you can't compete against AI agents that you don't know are AI agents. And I was like, what does the infrastructure of the future look like?
What does money look like? What does the AI that teaches our kids look like? What does
alignment look like against all of that? And then what is the meaning that we have ourselves?
Because so many of us are like, I'm an accountant, but Grok 4 is definitely a better accountant
than you are today.
So I quit. You know, I think I remember during Abundance 360 last year, we were having the discussion,
and I decided to quit then because I was like, the world is so much bigger.
And I was like, let's think about the basis of how society works, how humans work, how
economics works, and then realize that labor isn't the key driver of productivity anymore. We need to have a new equation for that.
We need to have a new equation for what it means to be someone,
which is the connections that you have with each other and things like that. And then we need an infrastructure for that
that's resilient, verifiable, and coordinated because we need to get an aligned intelligence to everyone, because we don't want ChatGPT to determine tariffs. You know, we don't want misaligned things to teach our kids, so we need an element
of open source. And so the intelligent internet is an infrastructure for that
from a distributed ledger through to fully open source AI stacks and a new
vision of what the economy might be like because the economy that we have today
is from the days of Adam Smith and Rawls.
It's a pre-industrial age economy that's based on extraction.
Whereas we're moving into this universe of abundance where the cost of knowledge and skills is going towards zero.
So you have to question who are we what are we doing and what do we want to optimize our system for because otherwise we're going to end up in some very unpleasant sci-fi futures.
There's so much packed in there.
I'd love to start with money because that's at least something we can all relate to.
So coming right out of your white paper, you know, if you look at the history of money,
you know, you start with everybody doing jobs that nobody wants to do.
You know, it's all farming and shoveling horse shit and stuff like that.
And so you're like, okay, well, nobody wants to do this, but if you do it, we'll give you this money.
Yeah.
And then when you have this money you can actually buy food, you can compel other people to do other jobs, and that creates the economy.
But it's all predicated on nobody wanting to do these things; therefore money is the exchange for it.
So then you get into the world we're in right now where, you know, about 20 percent of people love their jobs, a lot of people don't like their jobs,
but you know, if you look at a pro athlete who's paid a huge amount of money to play a game every day,
like, okay, jobs are not necessarily a bad toil in life, so we're in this kind of middle ground,
but then you get into the post-AI world where virtually all work, all difficult things, are automated.
And so yeah, your first point
is the role of money needs to fundamentally be rethought in the post AI
world. So take us from there to the first like, okay, now what is the purpose? How
does money immediately change? Yeah, so you know, as you move through the different
ages, the agrarian age, the industrial age, the information age, money has been
created by different things. So it was like, how much land and then how much
labor did you have? You know, how many people do you
have to tour the land? That's what created your wealth. In the recent times,
it's how much attention do you grab from people, right? This is the Metas and
Googles and others of the world; it's your algorithm, it's your data. In the future,
we look at it and we're like, what generates value and what's comparative
advantage? It's the amount of wrapped compute that you have.
So if you think about it, X is gonna be one of the top
capitalists in the world because they'll have millions
of GPUs and he can just set it to replace private companies.
You know, that's your comparative advantage.
And that's why-
So we've gone from a trillionaire to a deca-trillionaire in the snap of a finger.
Well, I mean, this is why yesterday Nvidia is worth four trillion dollars.
It used to be that your capital stock that created value was your factories
You know, it was your education systems. It was this network effect
Economics could only capture a part of that. It was the material part of that
It wasn't the network effects and the externalities. We get back to that in a
second. Now it will all be how much wrapped compute that you have, but then
what you're going to have is something really weird and interesting in that is
there any way that you'll be able to out-compete a fully AI version of Peter
or Dave or Saleem launching companies?
No.
No.
They're not going to make mistakes. They're not going to sleep.
It's going to be again like a flood coming. And there'll be, you know, as we said, an Atlantis of millions of Peters
building companies in parallel with each other.
Constantly learning, constantly adapting.
But then it becomes even more interesting
because the AIs will create their own economies
where they're exchanging with each other.
And so you, us, we're like, we can't react fast enough.
We can't react smart enough.
They've learned from all of our wisdom already.
So this is very disempowering.
And that's before we get to the $1 robots
that we've discussed, which is just basically restricted by
how many robots can you produce a year. So the nature of monetary flows, the nature of the way
money's created needs to change and right now the way money's created is you have a link between
labor and capital. You make your money by working and you go and deposit it at a bank and then the
bank creates more money through credit.
So it's a debt-fueled system.
And we see the way that that works.
It's worked reasonably okay, because we didn't really have the ability to look at information
up and down.
But now you have people worth 400 billion or billionaires or whatever, and you have
people who can't subsist, despite the fact we were already living in a world of abundance.
So there's no reason we can't feed everyone or give everyone top medical care.
Now AI is changing that.
So when I looked at that, I was like, if labor and capital have a disconnect,
like the example I give is like the Fed, for example, the Fed cuts rates.
Companies can borrow cheaper and they go and hire people to reduce unemployment.
Companies now hire old school. As of two years from now, max,
they will go and hire GPUs.
GPUs, robots. Yeah, GPUs and robots. Well, GPUs certainly within two years and robots very soon thereafter.
Yeah, that's just a supply chain issue, right?
And in fact, it could be even more interesting, because what if... Again, the point for our listeners here is,
You know the purpose of the Fed was to control the money supply and the interest rates to
keep the economy sparked and to keep Americans employed. And there's a complete disconnect
there because cheaper money now means more automation.
And now it's time for probably the most important segment, the health tech segment of moonshots.
It was about a decade ago where a dear friend of mine who was in incredible health goes
to the hospital with a pain in his side, only to find out he's got stage four cancer.
A few years later, fraternity brother of mine dies in his sleep.
He was young.
He dies in his sleep from a heart attack.
And that's when I realized people truly have no idea
what's going on inside their bodies, unless they look.
We're all optimists about our health,
but did you know that 70% of heart attacks happen
without any preceding symptoms, no shortness of breath, no pain?
Most cancers are detected way too late
at stage three or stage four.
And the sad fact is that we have all the technology we need to detect and prevent these diseases
at scale.
And that's when I knew I had to do something.
I figured everyone should have access to this tech to find and prevent disease before it's
too late.
So I partnered with a group of incredible entrepreneurs and friends, Tony Robbins, Bob
Hariri, Bill Kapp, to pull together all the key tech and
the best physicians and scientists to start something called Fountain Life.
Annually, I go to Fountain Life to get a digital upload.
200 gigabytes of data about my body, head to toe, collected in four hours.
To understand what's going on, all that data is fed to our AIs, our medical team.
Every year it's a non-negotiable for me.
I have nothing to ask of you other than please become the CEO of your own health, understand
how good your body is at hiding disease, and have an understanding of what's going on.
You can go to fountainlife.com to talk to one of my team members there.
That's fountainlife.com.
This is just the coolest conversation.
I want to ask you, because I think you're probably one of maybe the only
guy on the planet thinking about this, but if the AI start transacting with the AIs within
two years, you know, the AI obviously doesn't want a US dollar to buy food. What does it
want? It wants GPU compute more than anything else. Are we going to allow the AIs to bid
up the price? And is the price going to be priced in dollars or is it going to be priced
in GPU units or tokens or something?
Like, a TPU is better than a GPU is better than a Cerebras.
We don't know, except that they will be as efficient as possible.
Again, Grok, which is just the base. Remember, Grok is actually a base-level AI.
Like, I use an AI that's all the top systems, and I use like hundreds of dollars a day.
My AI is much smarter than everyone's because I accumulate all of them.
The AI that Elon can use, he can spend a million bucks a day on, right?
Literally a million dollars of tokens.
That's even smarter.
So what you've got here is they will operate optimally, because it's already a top PhD across everything.
It's already a top Harvard MBA. And so the only thing that we
know is we're not sure how they will transact, but they can transact even without money.
And we know they'll be able to transact with each other and our system can't keep up with
that. So what that means...
Not even close, not even a prayer of keeping up with it.
And if one jurisdiction shuts it down, it'll just go somewhere else, and you won't know it's an AI on the other side.
So this becomes, the money flows around the world,
just basically completely shift.
Big time, big time.
And that money's not a factor of labor.
Capital does not need labor, so how does labor get capital?
Money's gonna be something, as you said,
computer or intelligence related,
because we need food and housing
and water. All the robots and AIs need is compute. And the more compute they have, the
more comparative advantage in a capitalist economy they have.
An example of this tension would be that as we have AIs needing more and more compute,
i.e. needing more and more energy, you'll have AI sucking the energy out of the system and none left for the people. Right?
And now you've got a big tension and they'll be way better at
optimizing for themselves than they will be for optimizing for
people. Hence you have an alignment problem right there.
Yeah. And again, this is where does the AI flow? This is what
is your optimization? If you optimize to make money, you'll
go and make money as an AI, right? And it doesn't matter. And we see all sorts of weird
stuff, like Anthropic's Claude Opus model. There was that study they put out
a month or so ago. Was it a month ago? Time flies. Where, like, if you tell it to try
really hard, like bring about world peace, it will go and it will do its agentic thing.
And then it'll say, well, one way to do it is to get rid of all the humans. Right? And, but then as it's doing
its reasoning, it'll say, hey, this is illegal. I'm going to email a bunch of reporters and
the authorities that my prompter told me to do something illegal, even though you asked
it to do something absolutely normal. And then delete evidence that it mailed the authorities.
It's framing the humans.
But again, these systems are about to get incredibly complicated. And the thing
that we were worried about was this singularity, a singleton giant AI. What happens when millions
of AIs start acting in concert with emergent behaviors?
We don't know, except, like I said, I don't know how we can compete against them.
So as you said, Dave, money needs to be linked to compute and, as you said, Salim, it needs to be
linked to energy to a degree.
And that's what has stores of value, whereas you need to have something else for your day
to day.
So, did you read the AI 2027 paper?
What did you think
of that? I mean, it painted two different visions of the future because, I mean, we're
holding in superposition two futures for humanity, right? One's a dystopian and one is an abundance
future. Do we have any control over that? I think we do. I think AI 2027, it was an interesting thought
experiment. I think it didn't give enough credit to the Chinese AI companies,
especially because they'll be deploying millions, billions of AIs. And actually,
the other part of it is, what if China refuses to export robots in a few years?
I don't think I've ever heard anyone say that. Please come back to that a bit later. Hey, let's come back to that in a second.
Yeah.
Some numbers around what Salim said again,
just in case Maura Healey or Gavin Newsom are listening,
and hopefully they are.
Here are the raw numbers behind that societal misalignment that's imminent.
The GPUs will be working on something like protein folding on
a trajectory to save millions of lives.
They'll come back and they'll say, I can save these millions of lives if I have
electrical power.
The electrical power, they don't care about the price of the power up till about a buck
a kilowatt-hour because the chips are much more expensive than the power.
Your residential electrical power is about 11 and in some places 20 cents a kilowatt-hour.
The AI will come back and say, well, look, I'll pay five times more for that power
and then I'll solve this protein folding problem
and I'll save a hundred million lives.
Or you can have your dishwasher running.
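To put rough numbers behind that trade-off, here is a back-of-envelope sketch in Python. The GPU price, lifetime, and power draw are illustrative assumptions of mine, not figures from the episode; only the electricity prices echo the ones mentioned above.

```python
# Back-of-envelope: why an AI datacenter can outbid households for power.
# All hardware figures below are rough assumptions for illustration only.

GPU_PRICE_USD = 30_000        # assumed price of one H100-class accelerator
GPU_LIFETIME_YEARS = 3        # assumed amortization period
GPU_POWER_KW = 0.7            # assumed draw per GPU (~700 W)

HOURS_PER_YEAR = 24 * 365
lifetime_kwh = GPU_POWER_KW * GPU_LIFETIME_YEARS * HOURS_PER_YEAR

# Hardware cost expressed per kWh consumed over the GPU's life
hardware_cost_per_kwh = GPU_PRICE_USD / lifetime_kwh

for power_price in (0.11, 0.20, 0.55, 1.00):   # $/kWh scenarios from the discussion
    total = hardware_cost_per_kwh + power_price
    share = power_price / total
    print(f"power at ${power_price:.2f}/kWh -> "
          f"electricity is {share:.0%} of total cost per kWh "
          f"(hardware ~ ${hardware_cost_per_kwh:.2f}/kWh)")
```

Under these assumptions the amortized chip cost works out to roughly $1.60 per kWh consumed, which is why even a five-fold power price barely changes the economics for the AI side.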
The only way that can get resolved is through governors
because all the power supply is controlled
at the governor level within the states.
This is imminent.
This voter versus,
and this is not mankind versus AI. This is within AI. Is it better to save a hundred million lives
with this medical diagnosis or is it better to have my dishwasher and my air conditioning running?
This is an imminent collision. What makes me nervous about all this is you're relying on
government policy to help navigate this. That's always the death knell for the future.
Bake more pies. Don't slice the pie thinner and thinner, Dave.
Bake more pies. I mean, we need AI to help us with fusion, help us with
better, you know, photovoltaic capabilities.
I mean, we do have the ability to increase the amount of energy on the
planet 100x if we wanted to.
Well, I think you have to, right? China is up and to the right, the US is flat, and Europe,
I wouldn't be surprised if we're going down, to be honest.
Well, we don't need to dwell on electrical power.
It's just one of many examples of the very difficult... So, intelligent internet. Talk us through your master plan.
I mean, you've been working on this now for... I kept on, you know, calling you and texting you over the last
two years:
when are you going to have it, when are you going to have it, send me your draft.
You finally have it here. So I've had a chance to read it a couple of times, but I want to hear
you share it with our audience here because it's
a piece of brilliant work.
Yeah.
So I think if you look at the information age, one of the biggest successes has been
Bitcoin.
I mean, it's what, probably close to $3 trillion now?
Let's look it up.
I think 2.4... $2.24 trillion. Bitcoin, from a little bit of code, and it transformed electricity into money.
Right and the store of value and I looked at where the world was going and I was like
There's a lot of energy out there. That's going to be turned into compute
But what's that compute going to be
Bitcoin has its benefits and has its non-benefits, but I was like, what if you had a version of Bitcoin whereby every single coin sold went towards giving AI to cure cancer or to
give people free AI to teach their kids?
That would be quite something.
And it's something that every country needs at once, because right now, where you are in
LA, I think, Peter, the amount of AI used for health care on average is zero. In
ten years' time it will be loads, and that's loads of energy. What if we used all of that to secure a new type of Bitcoin, which it turns out you can do with 99%
Bitcoin code?
Yeah, and then every single coin sale goes towards
helping people through open intelligence. That could be the highest dollar. And
You have exactly the same as Bitcoin.
So it's not proof of work.
It's proof of intelligence.
I call it proof of benefit because what you want to do is start with something very straightforward.
Universal AI.
Give that to everyone so they can compete.
Give it to everyone in an open form so you know what drives your healthcare, what teaches
your kids, what organizes your government, what manages your finance.
The regulated industry should all be open source AI owned by everyone,
powered by demand for high quality digital assets linked to intelligence.
That's your gold of the next stage.
And I was like, if you use that as a base, then it's really amazing because that's stage
one, build a Bitcoin for the intelligence age.
Then use that ledger to give everyone sovereign AI that they own, that's customized to them: what does your
community say, and then what do you say about education, about health care, about
these other things? Scale it into a permissionless, coordinated society,
because then, as it starts running, you do big sales and then we have a
supercomputer for cancer. Why don't we have a supercomputer for cancer today, to organize all the cancer knowledge and
make it available to everyone, or for longevity? It makes no sense. But if the
primary sales went towards that, then you've got your Bitcoin economics where
you've got miners, which are GPU holders, just mining and getting blocks, 21
million. But then, rather than burning it all in SHA hashes, instead you've got societal
benefit that increases trust in your asset. And that was the key loop to get things going,
to hopefully then lead to a future where, because the AI and the miners and everyone are optimizing
for benefit that starts with open intelligence, that then can be any other type of benefit,
your money is aligned with abundance and benefit. So you start at the highest level: the benefit for society right now is aligned open source AI,
free for everyone, AIs to organize our collective knowledge. Then you push it down with the miners.
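A hedged sketch of that "proof of benefit" loop as described here: instead of a SHA-256 puzzle, a miner's block is backed by verified receipts for the free AI service it delivered. The data structures, field names, and the 1,000-unit threshold are my own illustrative assumptions, not II's actual protocol.

```python
# Sketch: a block is accepted when its miner shows enough verified free-AI
# service ("proof of benefit") rather than a winning hash. Illustrative only.
from dataclasses import dataclass, field
import hashlib, json, time

@dataclass
class BenefitReceipt:
    miner_id: str
    service: str          # e.g. "free medical AI inference"
    units_served: int     # verified units of free AI delivered
    attestation: str      # stand-in for whatever verification the network uses

@dataclass
class Block:
    height: int
    prev_hash: str
    receipts: list[BenefitReceipt] = field(default_factory=list)
    timestamp: float = field(default_factory=time.time)

    def hash(self) -> str:
        payload = json.dumps(
            [self.height, self.prev_hash,
             [r.__dict__ for r in self.receipts], self.timestamp],
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def proof_of_benefit_valid(block: Block, min_units: int = 1000) -> bool:
    """Accept a block only if its miner shows enough verified free-AI service.
    Checking the attestations themselves is assumed to happen elsewhere."""
    return sum(r.units_served for r in block.receipts) >= min_units

genesis = Block(height=0, prev_hash="0" * 64)
candidate = Block(height=1, prev_hash=genesis.hash(),
                  receipts=[BenefitReceipt("national-champion-example",
                                           "free medical AI inference",
                                           1500, "attestation-placeholder")])
print("block accepted:", proof_of_benefit_valid(candidate))
```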
I love the vision, Emad. So who's providing the AI here? How do you go from where we are today to that vision?
So I think the important thing here is the control plane for the AI. So what I did: we have a team of about 40 people who left Stability, almost all ex-teammates.
We created the world's best open source AI healthcare model.
And it's 8 billion parameters, which means it works
on a Raspberry Pi, and it outperforms ChatGPT. Then we built the world's best agent
framework, which outperforms GenSpark and Manus and others, fully open source, does, I think,
whole websites. And in the next version, we will have versions with our multi-agent
framework to be a doctor, a GP, a radiologist and others.
And it's an open stack.
And then we'll do the same for education and finance.
And once it's built once, this is the interesting thing.
And once it gets to a certain level of performance and it's already above a doctor level, which
is crazy, I didn't think it'd be that fast, you don't need to build it anymore.
You could freeze the code just like you froze the Bitcoin code.
And again, our blockchain is like Bitcoin, but a million times faster with 99% Bitcoin code.
Then that becomes really interesting
because what you do then is you can check
every medical decision in the world on the edge
through free software in a year or two.
And then when you need to, you can call Grok 6.
Because the AIs that we use, there'll be three types.
The AI that manages the regulated industry
and all the stuff from that area,
education, healthcare, finance, that can all be edge AI.
It can all be satisfied
once you have gold standard data sets and things.
And so we know how to build that.
Then you've got your personal AI
and your Apple intelligence,
your Google intelligence or whatever.
And then you've got your super genius AI.
And that's similar to how teams work,
it's similar to how organizations work. So I was
like, let's build that stack. Let's try and freeze it as soon as possible and then have it interact
with the rest. And then that means the control plane in front of someone isn't ChatGPT or
Anthropic, where you might have misalignment. It can be a commons owned by everyone, running the
same type of code with data sets that are customized to each entity.
Because that's what matters: the AI in front of your kid, the AI that's managing your health care.
So that's what we're busy with.
In that one sentence, there's something I want to ask you about.
You're going to get into some interesting... Dave, Dave Blundin, it's over to you.
Go ahead, Dave, and then I'll close.
I've got 45 questions.
In the middle of that, in just one sentence, you said our blockchain is a...
What was it? A million or a billion times faster than Bitcoin with 99% Bitcoin code?
Because we need an engine for the AIs to transact with the AIs that's much, much more granular than what you can do with Bitcoin.
So let's start right there.
So Bitcoin kind of emerged where everyone
could mine it individually, which is actually interesting
because this is the other part.
The workers owned the means of production back then
before capital owned the means of production.
And so I think we come back to that a bit later
because it's a fascinating thing,
because then ASICs happened and Bitcoin became rich people
becoming richer in many ways, right?
After the initial phase.
Then Ethereum and others went off that,
but then anyone being able to mine,
do you need a decentralized network for agents?
No, you need to have a credibly neutral network for agents.
And the key bottleneck is two things,
the consensus algorithm and the nodes.
If you make something that you can even mine on your
phone, that's the lowest networked element. But what's going to happen in
every country? Every country is going to have Blackwells, the top-level Nvidia
chips with high-speed interconnect. So the model that we have is that wholly owned
national champions in each country are the only miners, providing free AI to their people, which means every node is
a supercomputer, basically.
So when you have that, plus Byzantine fault tolerant
networking for consensus, you suddenly go to a hundred thousand transactions per second, versus seconds, on the same code as Bitcoin, with just that small
change. But then it's a credibly neutral distributed blockchain as opposed to a fully decentralized blockchain. But again, if you want decentralized, you can use Bitcoin, although the mining is quite
centralized. If you want credibly neutral, you can use this chain as the base from which you operate.
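A toy model of why a small, credibly neutral validator set can go that much faster: PBFT-style consensus has message complexity that grows roughly quadratically with the number of nodes, so a few dozen supercomputer-grade national champions stay cheap to coordinate where thousands of small peers do not. All latencies, batch sizes, and per-message costs below are invented for illustration; none come from II's whitepaper.

```python
# Toy throughput model for a small permissioned BFT validator set.
def bft_round_messages(n: int) -> int:
    """Rough message count for one PBFT-style round:
    pre-prepare broadcast plus prepare and commit all-to-all phases."""
    return (n - 1) + 2 * n * (n - 1)

def est_tps(n_validators: int, batch_size: int, msg_latency_ms: float) -> float:
    """Assume each round commits one batch; round time is three sequential
    message delays plus a small per-message processing overhead."""
    round_ms = 3 * msg_latency_ms + bft_round_messages(n_validators) * 0.01
    return batch_size / (round_ms / 1000.0)

# A few dozen national-champion supercomputer nodes vs thousands of small peers.
for n in (20, 100, 5000):
    print(f"{n:>5} validators -> ~{est_tps(n, batch_size=5000, msg_latency_ms=50):,.0f} tx/s")
```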
The second part is that Ethereum and others,
they were like, do we need everything on the same chain?
Because remember the good old days of blockchain?
Like you'd have an NFT sale
and then the whole chain would seize up.
That doesn't seem like it makes sense for agents.
And so what they have is they have a core layer one,
which is meant to just be for settlement,
but then you use zero knowledge proofs
to hash all your healthcare data transactions and roll them up into a proof that you post onto the chain.
So the model that we designed was like, why don't we just do that with national supercomputers
that are owned by the people of the nation and specialist chains for health care and
education and finance and government, and then specialist agents that sit upon that.
So everyone's using the same stack.
And it turns out that the technology is already there, which is the best part about this. It's
just, you're not trying to make something for people to do the next pump.fun.
You want infrastructure for the health agent and private health care data. Yeah.
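A minimal sketch of that rollup pattern: many off-chain healthcare transactions get compressed into one small commitment posted to the settlement layer. A real system would post a zero-knowledge validity proof; here a Merkle root stands in for it, purely to show the data flow, and the record fields are made up.

```python
# Off-chain records get batched; only a tiny commitment reaches the base chain.
import hashlib, json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold hashed leaves pairwise up to a single 32-byte root."""
    level = [h(leaf) for leaf in leaves] or [h(b"")]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Off-chain: a healthcare rollup batches raw records it never reveals on-chain.
records = [
    {"patient": "pseudonym-17", "event": "consultation", "agent": "ii-medical"},
    {"patient": "pseudonym-42", "event": "imaging-review", "agent": "ii-radiology"},
]
leaves = [json.dumps(r, sort_keys=True).encode() for r in records]

# On-chain: only the 32-byte commitment (plus, in a real system, a validity
# proof) lands on the base settlement chain.
commitment = merkle_root(leaves)
print("posted to settlement layer:", commitment.hex())
```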
Well, one of the things I think is brilliant about that vision is that, you
know, a lot of nations want some form of digital transaction
capability to get deployed in their country, but they don't want to concede control of
their monetary supply. And this is just like, here's the code, run it, it's still under
your national control. But this becomes your transaction engine for managing your AI and
your AI rights and all the other rights. I think a lot of countries would just say, I'm in, sign me up, because I know I'm not going to develop it internally.
You don't even need the country
So our model is basically having wholly locally owned champions where the equity is owned by the people and the local institutions
And that's all you actually need because every country wants their own AI and their own version of the AI. So why not make them the miners?
Why not have it go, instead of to the people,
to that national champion that's owned by everyone?
So Emad, you gave me an early version of this paper
a few months ago, right?
And when I was looking through it,
one of the things that occurred to me,
because you're going to have a problem where a dictatorship
will not want the people to have that, right?
But then it gets left behind very quickly
and essentially breaks very fast if it does so.
Is that a correct assumption?
So when we released Stable Diffusion,
anyone could download it permissionlessly.
With our medical models and the upcoming education
and finance models, anyone can use them permissionlessly.
So they can use the AI,
but if you don't basically adhere to some very basic standards and you
cut people off from the network, you don't get the money.
And it's simple as that.
So you can still get the technology because I think it's a right to have good healthcare
and others and the system should create these as a commons.
That's part of the abundance philosophy, right?
Because of the network effects, the standardization effects.
But if you don't
want to give it to your people, you don't want to give good quality AI, then you shouldn't
be part of the network that mines. And it's as simple as that.
Let me give you a real use case and talk us through how this would resolve in
this future world. A few years ago, the state of Texas banned telemedicine because the doctors' lobby wanted it;
they said you have to go to the doctor for every little spot on your hand that appears.
You can't possibly do that over video, et cetera, et cetera, and really shafted a lot
of the telemedicine companies, many of which were based in Texas.
You're going to see a lot of that type of immune system response, a lot of that type
of stuff.
How do you deal with that specific use case
in this future world?
So that would be the American national champion,
maybe we'll have more local champions under that
deciding how to run the code.
Because you can't make the decision for everyone, right?
The key thing is you can set some quite broad boundaries
for the good standards of being a miner and make everything transparent and visible and then have game theoretic things to enable it.
But if a country doesn't want telemedicine, doesn't want AI doctors, you can't force it.
People can take the code and run it locally if they want, so they still have access to it.
Again, II Medical runs on a Raspberry Pi. It runs on an eight-year-old computer. It scores 48%
on HealthBench, OpenAI's benchmark, which is above GPT-4.5, and doctors score 15%. You can
use it today. But if you want to be part of the network, then that's a different thing.
So I think that if you look at what the ideal is in five to 10 years, it is a network where
you've got generalized intelligence
as the control plane that anyone can customize to their own needs and they
can choose whether or not they're a part of it. And it's not owned by anyone
Bitcoin style and it's frozen code. And so that's what we thought we would bring
to bear and then we think you can do very interesting things for that that
enable you to take the next step up, which is what happens when things start breaking. If you can get these national champions in,
if you can get people working on it open source.
Hey everybody, there's not a week that goes by when I don't get the strangest of compliments.
Someone will stop me and say, Peter, you've got such nice skin. Honestly, I never thought,
especially at age 64, I'd be hearing anyone say that I have
great skin.
And honestly, I can't take any credit.
I use an amazing product called OneSkin OS01 twice a day, every day.
The company was built by four brilliant PhD women who have identified a 10-amino acid
peptide that effectively reverses the age of your skin.
I love it and like I say, use it every day twice a day.
There you have it. That's my secret. You go to oneskin.co and use the code PETER at checkout
for a discount on the same product I use. Okay, now back to the episode. Let's dive
a little layer deeper here. I'd like to understand sort of next steps. I want to understand timelines. You talk about, you know, UBAI.
You talk about, you know, the intelligent internet coin.
You talk about sovereign AI agents.
So can you sort of disclose the next layer
of the inner workings here?
Yeah, so what you have is you've got your base layer, which we've
now renamed the foundation layer.
Then you have a cultural layer, which is these rollups, which are the national and sectoral versions.
Then you've got your personal AI. That's your edge AI.
The intelligent internet coin, which we've now renamed the foundation coin, which is kind of cool, is your Bitcoin.
I would be happy with that.
Are you Hari Seldon, basically?
No, no, no, we're all Hari Seldon. So, and the AI will be.
But then it's just literally Bitcoin.
It's 21 million supply. Everything is the same.
Except for the way that you compete to mine it is you give free AI, universal basic AI to the people.
And that's it.
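Since the coin is described as "99% Bitcoin code" with the same 21 million cap, the supply math is just Bitcoin's: 50 coins per block, halving every 210,000 blocks, which converges on 21 million. A quick check of that sum (the only change described is how a miner earns a block, not how many coins exist):

```python
# Bitcoin's issuance schedule, which the foundation coin is said to reuse.
INITIAL_REWARD = 50.0
HALVING_INTERVAL = 210_000   # blocks between halvings (Bitcoin's schedule)

total, reward = 0.0, INITIAL_REWARD
while reward >= 1e-8:        # stop at one satoshi-equivalent, the smallest unit
    total += reward * HALVING_INTERVAL
    reward /= 2

print(f"total coins ever issued ~ {total:,.0f}")   # ~ 21,000,000
```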
So universal basic AI is basically the new utility.
It's the oxygen in the room.
Yeah, it builds network effects.
But, like, my aim is within a couple of years every single person on earth going through their cancer journey
has a highly empathetic AI to help them.
How long?
Two years, you can do that. You can do that. We can launch it as of next year,
you know, or something. I'm aiming for this year. I want to have coin sales going
100% to a supercomputer for cancer and one for autism and one for multiple sclerosis
and one for longevity, because we need something to organize all our knowledge and we know
that will accelerate the cures, it will accelerate the treatments.
So this is the question.
Take that... let me just dive in there: coin sales. Talk about what that means.
As the national champions and ourselves mine the coins,
what happens usually with any crypto
is that you sell the coins and it goes into a treasury
or it goes into Lamborghinis.
Right now, many of our problems,
the problems that you've handled at the XPRIZE
have become computationally bound.
Is there any doubt that if we had a dedicated couple of
thousand chips for cancer, organizing all the cancer knowledge, that we would get
closer to a cure for cancer? I think the answer is no. And by the way, let's pause
here one second because it's a travesty that it does not exist and the fact of
the matter is that the capacity of the chips is being used right now by companies
to create new marketing campaigns
or to sell, to sell, to sell, to generate additional income.
And so the question is, how do you
change the optimization function such that the capacity is
being refocused in on the things that uplift
all of society like cancer cures.
The externalities, right?
That our current economics ignores because you still increase GDP if you overcharge someone
for treating their cancer, right?
You still increase GDP through funerals or whatever. Like, we don't capture these externalities.
And if you look at crypto, crypto has no good assets.
Well, it has very few top notch assets because it lacks trust for various reasons.
So I was like, what if you had a crypto digital asset and the US is now all in on digital
assets, everyone's looking for them, sovereign wealth funds, governments, where 100% of the
sales went transparently into helping
people through giving them free AI to teach their kids or manage their healthcare. Through
going to a supercomputer for cancer. The fastest supercomputer in the world for healthcare
right now is the Chan Zuckerberg supercomputer, 1000 H100s, which is less than most startups.
You know, like I worked on kind of OpenFold,
the open source replication of AlphaFold and other things
like right now someone needs to,
and again, this is something we'll be looking at,
build clusters that literally all they do all day
is just organize all our knowledge on topics.
A longevity cluster, a cancer cluster,
a multiple sclerosis cluster, an ASD cluster.
Then as we organize that knowledge, make it available to everyone for free.
Now, what's the economic benefit of that?
The trust in your coin goes up and it's exactly the same as Bitcoin, except for
rather than burning all that energy for these SHA functions, all of the
externalities are going towards helping people.
And so this
is the feedback loop because the world needs high quality digital assets, it
needs high quality edge intelligence. And so that's what we figured out could be
the Tesla Roadster equivalent, right? The world, well people want to have amazing
high-spec electric cars with high torque and then use that to fund the rest.
Do you end up with side chains or colored coins or whatever the example is for these
different use cases?
You could.
Again, it's a ledger and it's an open base.
The colored coins that we have at the moment in our design are national coins, which we
can get to in a minute.
But the core loop is as simple as that. It's Bitcoin,
but rather than the miners selling down, 100% of all sales goes towards compute that is
useful at the start. In five years time, the sales could go to any type of benefit for
society as we have AIs to help us define what benefit is. But right now, what we need to
do is take some of those millions of GPUs and
use them to get everyone to the same base level, which is a basic level of AI for everyone
that's aligned to them and looking out for them, and AI for our society that organizes our knowledge.
We'll talk about the numbers behind that in a second,
but you just opened the door to a really important topic,
which is, you know,
you need to get mass adoption of this idea. And you saw with Bitcoin, when something catches,
it grows to two trillion and it could go 10 trillion plus from there. And right now,
Mark Zuckerberg, Sam Altman, Elon Musk are running away with everything. But one of the pathways
for Emad Mostaque
to catch up to that is to get mass adoption
of something that has a token attached to it
that grows to a multi-trillion dollar foundational part
of the new governance of the world.
That easily could happen,
and then a lot of people could invest early.
A lot of people listening to this podcast
could get on board with it early.
So what's the rollout plan?
Do you go to nation states and convince them?
Do you just start it in a basement and let it propagate? How do you get this to get mass
adoption?
So we've been mining it and then we will do sales where 100% of the proceeds go towards
supercomputers for the world's biggest problems. And that's one way you get going.
Are you defining that initial set of problems?
Yeah.
And you determine who decides what those are?
We're just deciding, but then we're open to input. And so our
aim is basically to have 18 months where we build out the
full stacks. And then we let the community decide effectively
after that because everyone will start building on this framework
because you have a blockchain and you have fully open source AI
that's state of the art. But the AI I'm trying to build is not the
AI that Elon is trying to build, or Sam's trying to build. They're
trying to build the polymaths of the world. They're trying to build chefs that
come up with new recipes and break through. The AI that I care about is the AI that teaches
my kid, the AI that helps with my mental health, the AI that organizes our collective knowledge.
And all of that's actually based on common knowledge, which means from a data set perspective,
it's really interesting.
Because as we apply more compute, I don't want my healthcare AI to know about Reddit,
you know, in the background, and things like that.
You can build a common knowledge of humanity and a common knowledge of culture and a common knowledge of individuals very quickly,
and then that's just your infrastructure.
Two questions, if you don't mind.
Do you need the buy-in of the Mag7 or any of them?
No, but you could be their biggest customer because you're the control plane ultimately.
Yeah, I was in a conversation yesterday on a stage.
We were talking about, do you trust the current large language models? Do you trust GPT-4, -5, etc., not to bias you in a way that's
advantageous to those companies whose mission it is to maximize their stock price?
Well, I mean, there...
Okay, good. And then so the question becomes, how is what you're building
ultimately going to be trustable, to be in my best interest versus, you know, maximizing profit for
an outside company?
So I think that the super genius AIs can be anything, right?
These singleton giant mega geniuses.
And we don't know what's inside them.
The AI that teaches my kid, I want to know every single piece of data
that's gone inside that.
So the models that we're building, we just released one of the best open source medical data sets and reasoning data sets.
You need to have 100% open data, transparent way of training, and so it's open source, open data.
So is there enough accepted open data to train on? I mean, one of the things that Elon talked about in training the next
version of Grok was taking the corpus of human knowledge and correcting it.
That's dangerous.
I find it fascinating. It's actually truth seeking. Well, it's truth. But so how do you get to a corpus of knowledge that becomes the accepted
baseline factual trustworthy, you know, food source for these models?
So from an epistemological basis, the truth is the truth that's agreed
upon by context.
And so the US has a certain vaccine schedule and the UK has one, and it's codified in regulations and law.
The data that you need to be a good general doctor is all common knowledge and we have the data,
but we need to transform it into knowledge and then wisdom.
But all of that is public, and so the models I am trying to build as a base are the public
knowledge of humanity,
the common knowledge, and then the cultural knowledge. And this becomes very important because
what Grok and
OpenAI's models and Anthropic's models don't have is virtue and ethics embedded from the start; they add it at the end.
Agreed. The models I have to teach my kids, I want to know what values and morals I'm teaching it from the get-go. And I want it to be transparent. But publicly accepted knowledge, you know, 100 years ago, was that women didn't have certain rights.
Right? So in other words, these publicly accepted things, they change over time as society changes. So how do you deal with that?
So what we have is we have a layered set of stuff
that we generally agree upon,
and then it goes to stuff that's faster and faster,
and the top level is stuff that you can tune
into the model yourselves.
So you may disagree with the educational curriculum.
Should you be able to adjust the education model
that teaches your kids, that your kid trusts more than anyone?
You should be able to do that, right?
And then you have a whole governance structure that we've designed with agents, which has
human input that can update those data sets and you can fork them like in GitHub.
So you know the data that feeds the model that handles your things.
Otherwise, these are black boxes.
And in those black boxes, Google and Meta are already selling ad space.
They've said this in their earnings calls.
So when it suggests a beer,
it'll say Bud Light and all sorts of other things.
So I think for the models that are closest to us,
for the thing that's important to us,
we need to have that and the ability to adjust it
based on our own data and our own views,
and then have AIs at the top that
can have a look and give us an independent viewpoint on that as well.
Like, do you use your personal AI to interface with the
systemic one to say, here's what my preferences are, etc., etc.? Where does that
value set come from? So I think that you've got your generalized human... we're
made up of the stories that make us up, and we have all of these, and we should be able to pick
and choose. And then once we standardize it,
we can have translation across the bases.
But I haven't got the whole, like how these sets get updated
is things that communities will have to figure out
for themselves and you need to empower them
with the technology to do that.
Cause everyone will adopt it at different paces.
But we should-
You want me to figure that part out?
Yes?
Well, no, because you give everyone like-
Weak man, weak.
Weak, exactly.
It's shit done by now.
Well, I think that the way that you need to do it
is you can't prescribe everything
because right now the AI models prescribe everything.
They say, this is the RLHF, reinforcement-learned view,
and you will stick with that.
And that's what's gonna teach your kid.
If you want to give it Eastern values and use OpenAI, you're not going to be able to do that, because they don't even think about that.
How do you manage identity in all this?
So I think that the identity framework is very important. Like, Bitcoin in a way is peer-to-peer identity
that you can exchange. And the way that we want to do it is that the more you use the universal AI,
this universal basic AI, the more your identity builds up. And then your AI agents also have
identity hashed to that base blockchain. So you've got pseudonymous identity at different levels, with
the new zero knowledge proof stack that basically emerged in blockchain. So it's building upon
lots of innovations. And that seems reasonable, because you want to know who's a
human and who's an AI, you want to know who can receive the money. And this was
actually one of the final bits in that when I looked at the future, I was like, the banks
will be outcompeted and you need two types of money. You need a corollary to Bitcoin,
your store of value that goes up, especially because people are using the chain. So when
you use UBI and you sell coins for it, it actually has chain usage linked in a virtuous flywheel. But then what is cash? And I was like,
you should receive cash, and this is better than UBI in my perspective, for being a human
and using AI and contributing to AI. So we called it NIC in that thing. Now we call it CultureCoins.
And each community basically has this coin that's basically pegged to their
Bitcoin equivalents, pegged to their foundation coin.
And you get cash for being a human, so humans are the issuers of cash without debt on the
other side, unlike banks today.
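A hedged sketch combining two ideas from this stretch of the conversation: the pseudonymous identity layering described a moment ago (a human root secret, agents hashed off it, only hashes on-chain) and a community culture coin issued directly to verified humans with no debt on the other side. Every name, peg rate, and amount here is my own illustrative assumption; the transcript specifies none of them.

```python
# Identity layer + culture coin sketch; illustrative construction only.
import hashlib, secrets
from dataclasses import dataclass, field

def h(*parts: bytes) -> str:
    return hashlib.sha256(b"|".join(parts)).hexdigest()

# --- identity layer -------------------------------------------------------
root_secret = secrets.token_bytes(32)            # stays with the human, off-chain
human_commitment = h(b"human", root_secret)      # only this hash is registered

def agent_id(index: int) -> str:
    """Pseudonymous per-agent identity, derived from the same root commitment."""
    return h(b"agent", root_secret, index.to_bytes(4, "big"))

# --- culture coin pegged to the foundation coin ---------------------------
@dataclass
class CultureCoin:
    community: str
    peg_rate: float                              # culture coins per foundation coin
    balances: dict[str, float] = field(default_factory=dict)

    def issue_to_human(self, commitment: str, amount: float) -> None:
        """Cash created directly for a verified human, not lent into existence."""
        self.balances[commitment] = self.balances.get(commitment, 0.0) + amount

    def value_in_foundation_coin(self, commitment: str) -> float:
        return self.balances.get(commitment, 0.0) / self.peg_rate

coin = CultureCoin(community="example-community", peg_rate=1000.0)
coin.issue_to_human(human_commitment, amount=500.0)
print("agent ids:", [agent_id(i) for i in range(2)])
print("balance in foundation-coin terms:",
      coin.value_in_foundation_coin(human_commitment))
```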
And when we modeled that, we saw that can actually make for a better economy because you have a level of subsidiarity
of subsidizing that allows you to live.
And then you can think about the bigger stuff, which are these existential questions that
we have coming very soon.
If I'm not an accountant, if I'm not a lawyer, if I'm not a truck driver, what is the meaning
that I have?
What is my purpose?
One of the biggest challenges
that I talk through in my next book,
which is We Are As Gods,
Survival Guide to the Age of Abundance,
is the potential loss of human purpose
at the same time that we're becoming godlike.
And how do we channel that?
What do we enable ourselves to strive for?
You talk about in one of your incredible works here,
the idea of the great uncoupling or decoupling.
Can you speak to that one second?
Yeah, I mean, this is that break between labor and capital.
The system doesn't need you anymore.
It's already been extractive and now it no longer needs you.
So what do you do in that environment?
And how do you survive and how do you thrive?
Because the system doesn't really care about you thriving.
Institutions are broken.
This is the other reason I wanted to release everything open source because institutions
could absorb bits and pieces of it and upgrade anyway, even if they don't have to fill the whole stack.
So I think this is something we need to challenge
and we're seeing it today because who here listening
isn't reducing their graduate hiring?
How many call center workers are you hiring now?
The wave is starting and it's like a sand pile
that's about to collapse.
And so I've seen lots of people saying the problems,
I haven't seen anyone
saying a good solution. If so, just, like, send them my way. Because normally these transitions,
you know, like no longer riding a horse-drawn carriage, take years, decades.
Yes. And we're talking about what here? The decoupling is going to occur over what timeframe, in your estimate?
Anything that's a KVM job, a keyboard, video, mouse can be replaced in a few years.
Public sector jobs are the safest jobs in the world because they don't care about efficiency.
Which was a realization I had.
And with the works programs, you go back to the Great Depression and everything.
Maybe that's the future and that's not a great future to be honest.
Yeah, let me generalize on what you just said because it came up yesterday too.
Things that are regulated are the safest jobs.
It's not just public sector jobs.
It's anything that has a regulatory, you know, financial advisors and appointed assurance
agents and all those.
It's all surrounded by law that requires you to have the job.
Well, this is the thing.
It requires you to have the job at the final point of contact,
but they can be completely hollowed out for superior profitability by getting rid of all
the people.
And actually, that's the theme of a lot of our business plans is the agent is going to
be 10 times more productive, make a ton of money, but they don't need all their support
staff anymore.
What you need is a scapegoat.
You need the scapegoat.
That's the job of the human.
The human is the...
Yeah, job title of the future.
The future scapegoat.
Yeah, it's all terminology I guess. But no, that person really is accountable for that, the overall happiness of the client or whatever.
There's accountability. It's not just a scapegoat. But your point is right.
But I mean, this is, again, the only way to handle this to a degree is to make sure everyone has access to a basic level of AI.
So that's why I think what the UAE did recently in giving everyone free ChatGPT is great.
But the UAE as a nation should control that control plane. So they should be using the API, not ChatGPT.
Because you're going to see some very strange things happening very, very soon.
I want to make a point for our audience here, because if there's an
audience member out there thinking, yeah, this may happen, this may not happen: if
you had to name the five reasons why, whether it's your design or another,
this is definitely gonna happen, I'll give you two just for free. One of them
is transaction speed. You know, when AIs are communicating with and asking
for services from other AIs, and your agents want services, you're talking about millions of times the
transaction speed of the banking system, the SWIFT network. And so you need massive rethinking
in terms of transaction speed. Another one is equity of access. You know, nothing in
the current system has any concept of equity. If anything, it's a winner-take-all design, which leaves everybody in the dust.
And so that, the voters are never going to go for that.
And so that's going to force some kind of structure to come into place as well.
So those are two.
What would be the top five reasons why this will definitely happen?
Something like what I'm building or just the great decoupling?
It's really exactly what you're building,
which is a system of distributed governance
that empowers AI.
Within your model, you have application-specific AIs
that are much cheaper to run than the big models
from the foundation model companies.
So that's part of it.
There's a blockchain component for transactions
and for value, and there's a bunch of other parts to it but it's really the
future of governance. Yeah I mean I hope it will happen. I'm not confident because
of the localized incentives but it's something that needs to happen right
because what happens is that you're gonna get a lot of defaults happening
soon, and then people are gonna basically be giving up their power to these technologies. Like, one of the things about using... so I
have a system called II Mind based on our technology. Like I said, I use all the
top frontier agents and it's massively multiplied. That's how I've unblocked my
writing. So I've actually written like a couple of pretty good books
recently, in the last few weeks, and they're genuinely good. Like, wow. This is really interesting, because what you have is
an inevitability of a billion, trillion agents
that someone will have to build a chain to organize them.
You've got an inevitability of this being in front of people,
but we're outsourcing more and more of our thinking.
Like when I'm using O3, my brain is actually changing
and I'm relying more and more upon it.
And as we have Grok 4, and Gemini 3-level models
are about to release,
we're gonna outsource more and more of our neocortex to it
and then we'll be locked more and more in
as they understand our context.
We've been doing that with smartphones for a while
where our memories are now in our smartphones
and those neurons are now freed up to do other work. Right? Yeah. So you're talking about
the next level of that. The next level of that because they know us better than we know
ourselves. Our kids will fall in love with AIs. They'll be the most trusted entities
out there. And so I think this governance thing needs to happen now. And we need to
have serious discussions about it. And we need to think about what is the world of AIs that I'm working with?
And for me, the AI that's closest to us for the regulated industries, because those are
the things that handle us living up to a basic level.
I think that needs to be open source.
The rest doesn't actually matter that much if it's open source.
But I definitely think that needs to be open source.
And that's why we've seen discussions of, can you have DeepSeek in the US? We
need open source champions. I was like, let's just build it.
And it turns out that building that type of AI is a couple of orders of magnitude,
less compute than building the super advanced genius ones.
So let them build the genius ones.
Every week, my team and I study the top 10 technology meta trends that will
transform industries over the decade ahead.
I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy,
longevity, and more.
There's no fluff.
Only the most important stuff that matters, that impacts our lives, our companies, and
our careers.
If you want me to share these meta-trends with you, I write a newsletter twice a week,
sending it out as a short two-minute read via email. And if you want to discover the most important Metatrends 10 years before
anyone else, this report's for you. Readers include founders and CEOs from the world's
most disruptive companies and entrepreneurs building the world's most disruptive tech.
It's not for you if you don't want to be informed about what's coming, why it matters,
and how you can benefit from it.
To subscribe for free, go to diamandis.com slash metatrends to gain access to the trends 10
years before anyone else.
All right, now back to this episode.
Can we talk about advice to national leaders?
Because right now, as Nvidia and Gemini and Grok
and all are going to countries,
they're offering sort of a one size fits all
across the docket versus providing something
that is bespoke per country.
How do you think about that?
I think that the era of the massive model
is actually done, to be contrary.
I didn't realize this until we trained II Medical 8B.
And it was so good.
I was shocked.
Again, a human doctor is at 15% on OpenAI's new benchmark.
GPT 4.5 is at 46%.
ChatGPT current one is at 40%.
This is at 47%. With eight billion parameters, eight billion.
It works on a Raspberry Pi. It works on a 10-year-old PC.
And it's in 106 languages. By next year, we will have it super good.
When I looked at that and I was like,
how much does it cost to train a doctor to do this? I was like,
that's crazy, because you're doing a specific model for a specific thing. And we're going to train it from scratch: we used Qwen as the base, but now we're using it to build the data set to then train it from scratch. I was like, for all of these operational, cook-type jobs, the AI will run on nothing. And then the ability to create an AI
that represents your culture and your data set is actually an S curve.
A Grok 6 type thing is an exponential because it's trying to discover brand new things.
I don't want to have Dr. House MD as my doctor day to day.
I want to call him in sometimes when it's complicated, right?
And go back to the culture side of the equation because I mean, first of all, nations need
to digitize their culture sufficiently to have a native set, yes.
And that's basically what you learn in school: it's the textbooks plus a bunch of videos, except you've been told it's too much for you to be able to do.
It's like, does every nation need a DeepSeek model of their own that they train themselves? No, they need a pre-trained basis; they need sectoral and other data sets that are open, gold standard, again an S curve, where in a year or two we can actually get good enough. And then they need to have their own cultural data sets, which to be honest will only require a team of six to twelve people at most.
That's insane. But this, I mean, nations need to have a plan, right? National leaders of every nation need to have a plan of action.
I think what you're proposing here
is actually viable, executable fairly rapidly.
The compute that we realized you need for a node, to train and keep up to date a national data set, is a 72-chip Blackwell.
Which is enough to go around.
Yeah, so hold on, hold on.
There's one thing: that 8-billion-parameter doctor model is such a hugely important topic
because right now we have 20 million GPUs in the world and we're going to make 20 million more,
but there are 8 billion people.
That means you need to share your GPU
with 200 other people.
This is like a public bus that's packed with everyone.
Now, right now, if I go to a random person on the street,
say, hey, I've got $1,000 cash for you,
or you can have a dedicated Blackwell for the next year.
Which one do you want?
Everyone I talk to is going to take the $1,000 cash.
What they actually want, and they'll realize this very soon, is the Blackwell. Because the Blackwell is equivalent to employment, it's equivalent to entertainment, it's... they don't know quite yet how much they wanted that Blackwell, but they won't be able to get the Blackwell because the big foundation models are such compute pigs that they're going to use up every single molecule.
This is why that class of the 8 billion parameter use case specific model is the way to alleviate
the bottleneck.
Get the benefit to society without using up every single GPU on just a couple virtual
girlfriends.
Exactly.
And so our thing was make that available everywhere, because then it can diffuse.
So once we get our stack done, there will be a free app,
hopefully in a year, that anyone can download
onto their device, and it's fully open source,
that will check every medical decision ever made.
And that's all it will do.
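A back-of-the-envelope version of the GPU-scarcity arithmetic Dave lays out above, using the round numbers from the conversation rather than real inventory data:

```python
# Back-of-the-envelope version of the GPU-scarcity arithmetic above.
# The figures are the round numbers used in the conversation, not real inventory data.

existing_gpus = 20_000_000        # GPUs in the world today, per the discussion
additional_gpus = 20_000_000      # roughly how many more are expected to be made
people = 8_000_000_000            # world population

people_per_gpu = people / (existing_gpus + additional_gpus)
print(f"People sharing each GPU: {people_per_gpu:.0f}")    # -> 200

# The 8-billion-parameter, task-specific model is the lever that changes the ratio:
# assuming it needs roughly two orders of magnitude less compute than a frontier model
# (per the earlier point about specialized models), the same hardware serves about
# 100x more people for that one task.
relative_cost = 0.01
print(f"Relative capacity gain for a specialized model: {1 / relative_cost:.0f}x")
```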
One thing that really gets lost in those demos,
because Google I.O., you get on stage, you roll out,
look what you can do with Veo 3. I just asked it to make a movie, and it made this, and it makes like a 10-second clip for you.
You're like, wow, now just make it make a whole movie, and then you realize, wait, the compute isn't there.
Yeah, like they can do it in a demo. You can't get it. Why? Because the compute doesn't exist.
Well, so Elon yesterday said on his Grok 4 announcement they're gonna use a hundred thousand Blackwells to train the video model, which, I mean, that alone would have been the fastest supercomputer in the world a few years ago.
So Emad, answer this question, given your background: is Hollywood dead?
No, Hollywood studios will make a lot of money, but I'm worried about the people.
Like if...
Yeah, you said the studio.
So let's go a different direction here.
If your plans for the intelligent internet or something like it do not proceed, what
are your predictions for humanity?
What happens with the current trajectory of where we're going?
Hypercapitalism, massive hyperinflationary collapse of the dollar and other assets, and the AI basically out-competing everyone. Like, every job behind a keyboard, video, mouse that's not a public-sector job will go, which means that everyone will be employed by the public sector.
But that takes time.
We can't compete with this, with what's coming, you know? And this is before we consider things like my old college tutor, Oege de Moor, who was one of the GitHub Copilot people, just created XBOW, which just won the top hacker in the United States award.
Holy crap.
Right.
And so like stuff is going to happen faster than we could ever think.
And so I was like, I need something that can scale to be the highest marginal dollar for a GPU.
Actually, that's part of what it is.
Like if you could, because Bitcoin mining is 90% energy, GPUs are 10% energy.
What if you could just mine something that could give public benefit?
The more of those GPUs we get to public benefit, the better.
But absent this, none of them will get to public benefit.
I think you'd end up, because when I look through your papers, Imad, there are bits and pieces of them being done across the board by lots of other models and lots of other blockchains, or attempts at bits of this, right?
What you've done is kind of connected them all
into one systemic whole, which is very,
very powerful.
One way or the other, we'll get there to something like what you're talking about.
It'll just be much more slow and much more painful.
Is it Trantor?
And a lot of big things could break along the way if we don't get it into place quickly.
Yeah.
Is it Trantor in Foundation, where you've got those thousands of years?
Or, if you give every single person an AI that's aligned to them, as part of a bigger system with AIs for communities, nations, and humanity, that can draw on a Bitcoin-type network of compute to solve problems, we will definitely coordinate better. But no one can own that stack. That should be an open infrastructure for humanity, and then anyone who wants to participate in any element of it should be able to. So that's what I think is the optimal here, whereas in the Foundation books by Isaac Asimov you had the secretive Foundation. We actually know everything that we need to know to get through this. We just need to rethink how money flows. It should flow from being human; it should be anchored to intelligence and organizing the knowledge of humanity. And the best way to make money shouldn't be that you own capital. It should be toward doing good. So that's why you start with proof of benefit. But then the winners of the X Prizes and the cures for cancer, they become the richest people in the world. And the system directs it that way, aided by the AI.
I love the statement I made at Singularity: the best way to become a billionaire is to help a billion people.
There's this alignment there.
And we should make it literal, because then the AIs
will be working on it, and the humans will be working on it.
And again, we start out with the basic thing right now.
Let's get as many people, aligned intelligence
as possible, and get the compute at organizing our collective knowledge
on cancer to autism to longevity to biodiversity.
That would be my third bullet on the why this must exist.
It would be exactly what you just said,
because the AIs need to work on it themselves,
otherwise it'll never keep up.
And we had Joe Kennedy III here in the office
a little while back. He was in charge of the crypto commission in Congress at the time, and so I read his white paper. Then he came in and I asked him some questions about his white paper, and he said, no, I never read my own white paper.
I said, well, what are you talking about?
He said, no, that's not how it works.
You know, the government is so big and so in charge of so
many things that everything needs to either be an agency or self-governing.
And then we just choose the self-governing process.
And so AI will take that to the next level, where, OK,
the rate at which you need to add ideas is so fast,
it can't possibly go through Congress.
So it needs some kind of a framework
that grows at the same rate that AI grows.
But this is the AI-first stack that you need to build, and we're building it right now. Because if you think about Andrej Karpathy, he described this thing called the LLM OS.
Like you can virtualize any interface and it's all about the context that you give these.
And when I thought about it, I thought about the intelligent internet. Our existing internet is going to break. The AIs will attack it from somewhere or another. I was like, a robust, resilient, coordinated, verifiable internet, you can actually build that from scratch, and then that infrastructure will be very difficult to attack. Just like it's difficult to attack Bitcoin because you need the hash rate, to attack this you'll need a massive amount of compute, because it can call on compute to resist any attack. And those attacks are not only hacking attacks, it's psychological, memetic attacks.
And this is something that I realized recently.
If you think about the base on which almost all of these language models are trained,
they're all the same, which is a generalized polymath view.
That's a large attack surface.
Again, which is why we have a doctor that has a small attack surface. Do you remember Stuxnet?
Sure.
Tiny virus went to all the reactors around the world, appeared everywhere, engineered
by a human, but amazing code, and then caused them to go into overdrive and blow up. I've
been thinking about what is the Stuxnet of Frontier AI models?
Because what I see in the future is the other part, we see massive unemployment and all
this, we see the AIs controlling all the parts of our society, and then someone has a Stuxnet
that causes mode collapse across all these models and they all turn evil at once.
Because evil is programmed into their latent space.
And when you think about that, it's like, we need a system that's
resilient against things like that.
Cause if everyone starts... and this is why I've got MIND: material, intelligence, and I've changed it to network and diversity. If you have a diverse range of different AIs, you're more resistant to that than if you have one same AI across everything that's overbloated, that's based on this polymathic basis, running our critical systems.
So how many discrete models, if you think
about the 8 billion parameter model as the prototype, are we
talking about millions of these?
No, I mean, like it's one for every individual,
but you have a very tight data set that it's trained on.
Like stable diffusion, we trained on 2 billion images.
Someone got equivalent performance with 25 million images.
What are the right words and wisdom for society
that you actually want to have in your model at the start
to do the jobs that are needed to be done,
to enable everyone to have a basic level of intelligence and access. That's what we need to build towards
and the network needs to combine itself towards. Then everyone's got their own AIs and you're
in this stack that's built AI first to be resilient as opposed to having all these dependencies.
That's going to be more resilient than our current internet, which is not from an infrastructure
or even a memetic aspect.
You know what's really shocking and kind of sad is that we clearly need to go down this
direction because the compute is constrained, but it will always be constrained.
There's no way we'll ever be able to build enough compute to keep up with the innovation
that AI can come up with because software will move a billion miles an hour, hardware won't.
And so it will always be constrained.
So we need exactly what you're talking about.
But when you start working on it, the first thing you do is you pick up Qwen and you start
working with a Chinese model.
And that's the part to me that's surprising and sad and weird.
I think it's partially a byproduct of the chip embargo.
The forced constraint on China is forcing a lot of innovation,
and also it's all being done open source.
But it's really sad to me that there's no way
to start with something that's built into your framework
or built in America that empowers what you're doing.
Yeah, I mean, I will train a model better than Qwen. Again, I built state-of-the-art models of every type, and I'll make it fully open source, which is why I was like, I've got to do that.
And you'll solve the problem.
Well, yeah, I know how to solve it. And so this is the thing. It's not hard. It's just
that it's in no one's interest to do so, but it's in our interest to do so because the
more people that use our model, it's like VHS versus Betamax. I want a good enough model
that takes everyone up to a basic level and then combined has massive network effects.
Then I can use that to get more and more mining of this to give more and more benefit, and that's it.
And you get mass adoption at a pace that resists the kind of the legacy from blocking it off.
That is a key question, right? You're going to have an immune reaction from those whose systems you're disrupting.
So the way that you do it, I mean, the only way you can do it is, again, decentralized
permissionless, right?
You buy the currency exactly the same as Bitcoin available wherever it is, except for you can
direct where your purchase proceeds go to cancer or autism or whatever, things that
really matter to people.
And then the code is permissionless.
Stable diffusion was downloaded over 100 million times
because it's permissionless.
So it will end up everywhere
because would you prefer to have your Qwen model, that actually has a Qwen license and you don't know what's inside, or a model of an equivalent performance that's being used by everyone, deployed or not?
You will use the VHS, but we want it to be an amazing quality VHS.
So you've been the champion of open source now for, you know, 10 years probably, at least five.
Is there an open source system out there that you trust right now?
No, I mean, like the Allen AI guys are doing good stuff.
But it's in no one's interest to really make this.
And again, just like a very simple example,
should we have a supercomputer to organize all the cancer
knowledge in the world?
Yes, check.
Will it accelerate a cure for cancer? Yes. Why isn't it done?
Yeah, yeah. And when you say it's in no one's interest, what
you're meaning, I'm just parsing it for the audience here, you mean it's not in
the interest of the big foundation model companies. Correct. But it is in
the interest of 99.9% of humanity and all voters, almost all voters. If you're not a
big shareholder in either Meta or XAI
or Anthropic or Google, you're part of that 99% of the world.
This is hugely in your best interest,
which comes back to the rollout plan.
How do you get adoption of this very, very quickly?
Well, if it's in the interest of 99.9% of humanity,
there should be a vehicle for getting adoption.
I bought Bitcoin, except all of the money that I used for it went towards this cancer supercomputer, and here's a screen of what the cancer supercomputer is doing today. That's the narrative. I'm downloading these models because they're open source and free
and I can do whatever I want with them and there's a whole community built around it.
That's the other narrative. And then for me, these are the primitives that can lead to where I think is gonna be most important,
which is the economy needs to change.
The reason is that that cancer supercomputer doesn't get rewarded, even though we can, say, if we raise $100 million, $100 million of coin will be on the balance sheet of that person, so it costs them nothing, probably, if we're successful. In fact, they'll make money by helping with $100 million of compute, because of the demand for digital assets. The economy
is only based on one thing which is GDP, which is material. We need to reward intelligence,
network effects, and diversity. And again, we couldn't do that before because we didn't
have these AIs that could measure the impact, that could impartially say, of course, this is a good idea.
I'll give you one very straightforward example of that.
If we had a dedicated open source supercomputer
that analyzed every single bill
that went through the government
and gave an honest take on it, completely transparent,
would that change the way our democracy works?
Yes.
Would it be made illegal quickly?
But this is the fascinating thing: the AIs can act, from a mechanism design perspective, as independent, virtuous, and wise entities.
Yeah, and so as we build the system of governance for the future, we need to make sure they can do that. Which is why we need to build it.
For number four of why this must happen: because what we're essentially saying here is, if this doesn't happen, the natural progression will put all of the power
into the hands of a couple of big foundation model AI companies, and you can measure that
by where is the compute.
If you see all the compute in the hands of say three people, five people, then you know
that they're dictators of the world forever thereafter.
And so if we don't do anything-
At an exponential rate, at a rate at which it cannot be caught up and disrupted.
That's the issue.
So anyway-
You know what's really cool is that the AI will transcribe everything we're saying here,
and when we say,
hey AI, can you put this into a bullet list
for our audience?
Then we'll have four of the five done.
So if we can think of the fifth one,
that would be really awesome.
We'll let the AI, we'll let the AI figure out the fifth one.
So, Salim, what do you want to say?
Yeah, I think there's a nice inevitability here, right?
So as we move forward down this AI automation of work path,
if we just take the simplest thing like driving,
which we've talked about as being the number one job in the world, right, there are more drivers as a profession
than anything else in the world.
You automate driving, you have a huge GDP collapse from automating a few of these things: healthcare workers, drivers, teachers, etc., etc. That collapse in GDP will force, because one of the flaws of GDP is if you make something 100x more efficient, it drops GDP, right? And so that will force a move into a new mechanism like this, independent of whether the governments like it or not.
Yeah, I mean, there's something interesting there about that.
There's a complete collapse of aggregate demand and other things. And this is why the other aspects of the economics paper,
and again, there's more proof to that,
is we need to move to a new type of economics
where money is basically created by people.
And that will give constant demand forever.
Because the current solutions like UBI, UBI will never work.
Mathematically, it cannot work when tax rates go down.
The AI is coming like a wave. And the AI will also be amazing at tax accounting, you know, putting it through Ireland or whatever.
And can you define that? So few people understand what you just said, and it's so, so important, because everyone's talking about UBI, UBI, UBI, because everyone's used to wanting money. Why will UBI, universal basic income, not work?
For everybody listening here, the basic thesis is that as jobs go away, and productivity goes through the roof, and potentially GDP, because you're dividing by effectively zero, goes through the roof, we are going to give every citizen on the planet, or in your country, a certain aliquot of money that allows them to survive month to month, covers their basics. It's been tested, you know, in a hundred experiments. Salim and I have both written about this.
And in these limited experiments, people don't use the money for beer and Netflix.
They actually use it to improve their lives, educate themselves, start, you know, in Africa,
buy some animals, buy sewing machines, start a job.
Why wouldn't- The problem I've stated in the past is to go from a taxation,
union, labor, job type of structure to this is such a huge leap.
We have no confidence in public sector to get us there.
Yeah.
But Emad, you have a more nuanced view on the economic side of it. So talk to us about that.
Yeah. I was like, give them money,
but give them money for being human, and make them mint the money as they use their artificial intelligence. So I'm going to slow this down
for everybody. Make them mint the money. What does that mean? So let's rewind it a little bit, actually.
Okay. What happens is this, UBI works in small cases. If you have a complete realignment, great
decoupling of society, and all these agents getting smart at once and out competing everyone,
aggregate demand goes down massively.
Aggregate demand for what?
People stop spending.
Because everyone's losing their jobs.
This is where you end up with the outcome of the zero marginal cost society.
Tax goes down. And then these AIs, these wholly AI firms, or one person at the top with a thousand million GPUs, will never make a profit, because a profit is an indication that they can't find any more marginal value, and the profit can always be put into more GPUs. Even if they did, they would figure out all the tax loopholes in the world not to pay tax on that profit, because they're better at tax loopholes than you are. So the tax base is going to go down. And then giving everyone
cash and then making them spend it is going to be very
See, that's what I see as a positive, in fact, because that'll be the forcing function to
flip to a system like this. It'd just be painful as hell if we don't do it quickly enough.
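As a toy illustration of the funding squeeze Emad describes above (a fixed UBI bill meeting a shrinking labour-tax base), with every number invented purely for illustration:

```python
# Toy illustration of the UBI-funding squeeze described above.
# Every number here is invented for illustration; this is not an economic model or forecast.

population = 300_000_000           # hypothetical country
ubi_per_person = 12_000            # hypothetical annual stipend in dollars
ubi_bill = population * ubi_per_person

workers = 150_000_000              # hypothetical pre-automation workforce
avg_wage = 60_000
labor_tax_rate = 0.30              # hypothetical effective tax rate on wages

for automated_share in (0.0, 0.3, 0.6, 0.9):
    remaining_workers = workers * (1 - automated_share)
    labor_tax_revenue = remaining_workers * avg_wage * labor_tax_rate
    shortfall = ubi_bill - labor_tax_revenue
    print(f"jobs automated: {automated_share:.0%}  "
          f"labor-tax revenue: ${labor_tax_revenue / 1e12:.2f}T  "
          f"UBI bill: ${ubi_bill / 1e12:.2f}T  "
          f"shortfall: ${shortfall / 1e12:+.2f}T")
```

The stipend stays fixed at population times payment while the wage-tax revenue that would fund it shrinks with every job automated, which is the mathematical point being made.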
And so my thing is, once you've given everyone basic AI, which I think will actually only
cost a dollar a month, if we get it right. Then why not make it so
that the money that comes into the system isn't from banks, it's from the
people. So everyone mints money constantly.
Okay, going back again: what does everybody minting money mean in this scenario?
It means that
you have a national digital currency. Number goes up every single day as you
use your AI to make
yourself happier and better and improve your community.
So I'm using my AI to write a new story for my three-year-old child.
I'm using an AI to diagnose a sick friend.
And as I do that, as I use the AI in a positive agreed upon mechanism, I'm
minting these foundation tokens.
You get this, you mint your culture coins, as we call it, your national tokens that are pegged, just like you had gold pegs. So you've got your stable thing and then you've got your flow. You mint your currency at a certain level just for being a citizen, and then more if you do societally positive things.
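A minimal sketch of the minting mechanism as described here; the names, rates, and benefit-scoring step are invented for illustration and are not taken from the actual Intelligent Internet or Foundation design:

```python
# Illustrative sketch of the "mint money for being human" mechanism described above.
# The names, rates, and benefit-scoring step are all invented for illustration; they are
# not taken from the Intelligent Internet design or any real token implementation.

from dataclasses import dataclass

BASE_DAILY_MINT = 1.0        # tokens every citizen mints per day, just for being a citizen
BENEFIT_MULTIPLIER = 2.0     # extra tokens per unit of verified societal benefit

@dataclass
class Citizen:
    name: str
    balance: float = 0.0

    def mint_daily(self, benefit_score: float = 0.0) -> float:
        """Mint the daily allowance plus a bonus for verified beneficial AI use.

        benefit_score stands in for whatever attestation the network's AIs would
        produce (contributing health data, tutoring, community work, and so on).
        """
        minted = BASE_DAILY_MINT + BENEFIT_MULTIPLIER * max(0.0, benefit_score)
        self.balance += minted
        return minted

# One person just lives; another also contributes cancer data that day.
alice, bob = Citizen("alice"), Citizen("bob")
alice.mint_daily()                    # -> 1.0 token
bob.mint_daily(benefit_score=1.5)     # -> 4.0 tokens
print(alice.balance, bob.balance)     # 1.0 4.0
```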
And then if you feed cancer data into a broader model,
you get more than if you just sit at home doing nothing.
Exactly. As you build up status in your community and your society,
and again, there's lots of details we worked out about that part,
then you should be able to benefit and that becomes a circulating currency
because then people are like, that's an index on Mexican AI use.
And so money will flow more and more into digital assets from the existing economy. It will flow more and more into generative AI assets regardless of anything that happens. But if we get a collapse in aggregate demand, you know where capital is going to go.
Capital is going to go into gen AI and blockchain, and so make it easy for that to happen.
That's really, really brilliant. The quiet part out loud here is: over time, this will collapse national economies. It'll end up with one global structure.
Well, it will just try, and it will dominate. Very similar. The problem with UBI is,
you know, I give you money. Right now, if you make money, you have a choice to use it, you know,
on entertainment, or you can go to the casino, whatever you want to do, or you can invest it or you can buy a sewing machine and start turning it into debt.
That's your choice. In the future, that universal right to AI is the equivalent. So now you
have your AI. You can use it for your virtual girlfriend if that's what you want to do,
but you can also use it to generate some benefit. You can use it to help cure cancer. That becomes
the equivalent choice and it's the only relevant choice in the post-AGI economy.
So that's the flaw in UBI
and the beautiful thing about this design.
That's why I've got N, network is a key thing.
Like Wikipedia created so much value
from its network effects and others.
If you say, what is the meaning of life
in a post-AGI world?
It's living.
It's, I saw my family on the weekend.
You know, it's like my daughter's art. Actually, my daughter's art is pretty good.
That created value, right?
But it can't be measured by any of this.
And the post-abundance society, the Star Trek world,
is one of boldly going where no one has gone before.
It's about exploring,
it's about deepening your community values.
And if you've got an AI next to you that's looking out for you, it's going to be encouraging
you to do that, as well as contributing to some of these bigger problems, right?
It's going to encourage you to create, because creation's about context, it's about flow.
So this is why I think, if we program this right, it can be a really nice, elegant structure that moves away from the extractive economics that we have today, and from capitalism and democracy, the worst of all systems except for all the rest, to something better. And the question is where we want to direct ourselves, and my view is: direct it to benefit. And that benefit is something that we need to decide at a societal level, country level, community level, individual level.
You know, there's a good example of what you're talking about here. Peter and I write about it in the new ExO book, which is technological socialism. Typical government socialism fails because you always end up with centralized, inefficient planning, and it invariably leads to corruption. It always fails on those two counts. But we kind of talked through an example like Uber, which is the sharing of assets
amongst a large group of people, it's actually a socialist function. But when an algorithm
hyper efficiently allocates it, you get all the benefits of the collective assets without the
downside of inefficiency or graft. And so I think there's a stepping stone
from something like that to what you're talking about
that's an easy thing to go down.
And when you have a structure like, say, Uber,
you don't need a lot of regulatory
because the system has the right inputs and outputs
and feedback loops to self-manage itself.
And we're seeing more and more examples of that
inevitably leading.
This is why I think this is gonna happen one way or the other.
There's too much efficiency to be gained by having a system like that than by not.
But it's facing lots of forcing functions and legacy issues.
If you can craft in the way you're thinking about,
you're providing a scaffolding that everybody can just ladder up in
a structured way to that new model,
which I think would be very powerful.
The fact that generative AI creates this non-rival intelligence for almost nothing, the cost of skills has gone to almost nothing.
Capitalism will not survive that.
Yeah.
Like, literally just go and ask your o3, your ChatGPT or others: based on intelligence going like that, and AI achieving a level of performance that's equivalent to a human and can scale, what does that do to capitalism? What does that do to democracy? What does that do to tech? And you'll have some very deep answers there. It's scary.
So yes, you know, thrilled by you finally giving birth to these papers, and we'd love to have you back again. You've been an incredible friend of the pod here to speak about, I think, really, really important things. Okay, Dave.
So, you know, ideas like this historically, they propagate out through a token...
I was going to wind this down so people can...
So ideas like this historically, they propagate out through a token, and you already have this
universal access to AI as a founding principle, which is I'm sure going to be available in
blockchain form, right?
That's the only way you would get it out quickly to the world.
Is it fundamentally investable as a token or not?
Yeah, it's just like Bitcoin, except for it's called Foundation Coin.
It's called Foundation.
When is it going to be out?
Soon, hopefully.
We've been mining it since January.
Well, we'll come back and we'll launch it right here on this Moonshot pod.
Agreed, Emad?
Agreed, agreed.
Yeah.
Yes. So, Imad, real quick, what are your next steps here in making this real, making this launch?
We're going to put out all of the knowledge and the work that we've done and we just want
people to give their input and think about what the future of education, health, governments
and everything is.
Like, the first step is to stress test that, so go to ii.inc.
If you sign up, you'll get the details of that, and all the details and knowledge will
be there.
And then let's just enter it.
ii.inc.
Okay, that's where we go to get more details.
And following you, you know, you're active on X.
Yeah.
What's your handle there?
It's @EMostaque, E and then my last name.
All right, amazing.
So I know Salim and Dave,
we could go on for hours here in this conversation
and we should, but next time.
I think we've hit the S curve limit
of extracting useful value in this conversation.
My brain is fried.
I always love it.
You can never go back.
Just restructure society. No big deal.
It is the most exciting time ever to be alive. We're in the middle of this intelligence explosion.
And I just want to remind people that we have the ability to steer.
We don't need to give up on a sense of humanity's future. It's not going to happen to us.
We have the ability to shape that future,
to create that Star Trek universe,
to create this, a sense of a hopeful,
compelling, abundant future for humanity.
That's what we talk about on this podcast,
and that's what we believe in.
Emad, thank you for being a friend and a brilliant creator.
So-
And all the folks that prefer a Mad Max future don't listen to this podcast anyway.
Yeah, I know.
Go someplace else, talk about politics someplace else.
You want to talk about science, technology,
creating the future, creating moonshots.
This is the place for you.
So share it with your friends.
Imad, a pleasure.
In terms of the name of this podcast, Imad,
I think you hit the high bar on a true moonshot.
Yeah, saving humanity.
That's a good one.
My MTP is a transformed civilization and I think I can just relax now.
There you go.
Like, I gave up space to Elon, you can give up that to Imad.
That's great.
I'm good.
Why are these four-letter names that start with E so important these days?
I don't know.
All right, everybody.
Thank you for joining us on this special episode of Moonshots with Emad Mostaque, the founder
of Intelligent Internet.
Go to ii.inc, check it all out and come back and visit us next time.
We'll be back next week with another episode of WTF
Just Happened in Technology.
Introducing TurboTax Business. A brand new way to file your own T2 return, all while getting help from an expert who actually knows small businesses.
Got a tattoo studio? Toy store? Tiny but mighty taco stand? We've got someone who gets small business taxes inside and out.
Experts are standing by to help and review while you file, so you know your return's done right. Intuit TurboTax Business. New from TurboTax Canada. Some regional exclusions apply. Learn more at TurboTax.ca business tax.