Bankless - AI Power Wars | Emad Mostaque
Episode Date: April 8, 2024Today on the show, we have the founder of Stability AI, Emad Mostaque. Emad recently left his company citing “You can’t beat centralized AI with more centralized AI” and decided to venture into ...the frontier of decentralized AI. We touch on the economical consequences of AI, why it needs to be open-source and distributed and the role of Crypto in decentralizing AI. ------ 📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24 https://bankless.cc/spotify-premium ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2 🔗CELO | CEL2 COMING SOON https://bankless.cc/Celo 🏠 CASA | SECURE YOUR GENERATIONAL WEALTH https://bankless.cc/Casa 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle 🗣️TOKU | CRYPTO EMPLOYMENT SOLUTION https://bankless.cc/toku ------ TIMESTAMPS 0:00 Intro 5:26 Navigating AI & Web3 10:52 AI Governance 18:03 Decentralizing AI 26:54 Balancing Coordination 33:52 Decentralized AI Infrastructure 40:10 Emad’s Projects 48:40 Decentralization Trade-Offs 58:53 Governance Structure 1:02:50 AI Regulation 1:06:36 Custom AI Models 1:12:21 AI Economical Consequences 1:17:11 AI x Crypto 1:20:47 Closing Thoughts ------ RESOURCES Emad Mostaque https://twitter.com/EMostaque ------ Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Web3's grown up a lot.
There are actually primitives now that are emerging.
There could be a substrate for collective decentralized distributed AI by the people for the people.
Whereas on the centralized thing, again, it's a different paradigm, which is the all-powerful AGI.
My thing is, you can't beat centralized AI with another centralized organization.
I think what we should build for an AGI, this generalized intelligence, is the human collective intelligence, amplified human intelligence.
Something that uplifts us all and acts like a swarm.
The alternative picture being presented here by DeepMind, OpenAI, and others is the Machine God.
Welcome to Bankless, where we explore the frontier of internet money and internet finance.
And today on Bankless, we explore the frontier of decentralized AI.
Emad Mostaque, the founder and former CEO of Stability AI, is on the show today.
Emad recently left Stability AI, citing,
"You can't beat centralized AI with more centralized AI,"
and announced his intention to work in the nebulous field of decentralized AI.
Decentralized AI? What exactly is that? Why does Emad want to put an AI model into the hands of every individual with the compute power to run it? Is there a race between centralized AI and decentralized AI, or do they each bring something to the table? Why are the interests of governments, nations, and communities of the world aligned with the desires of decentralized AI? Why will this revolution be deflationary for developed countries, but an accelerant for developing ones? Bankless Nation, if you haven't gathered this
By now, it's an AI episode today because it seems the world of decentralized crypto protocols
and the rise of AI are converging faster than ever, and we, at Bankless, are just trying to
keep up one podcast at a time.
Ryan, the AI, is out on an unplug for this episode, hanging out with his AI wife and AI children,
giving us a chance to catch up as he takes a much-needed break from the internet and meme
coins today.
Let's go ahead and get right into this episode with Emad Mostaque.
But first, a moment to talk about some of these fantastic sponsors that make this show possible,
especially Kraken, our preferred place to trade crypto tokens.
If you do not have an account with Kraken, consider clicking the link in the show notes to get started with Kraken today.
If you want a crypto trading experience backed by world-class security and award-winning support teams, then head over to Kraken, one of the longest-standing and most secure crypto platforms in the world.
Kraken is on a journey to build a more accessible, inclusive, and fair financial system, making it simple and secure for everyone, everywhere, to trade crypto.
Kraken's intuitive trading tools are designed to grow with you, empowering you to make your first or your hundredth trade in just a few clicks.
And there's an award-winning client support team available 24-7 to help you along the way,
along with a whole range of educational guides, articles, and videos.
With products and features like Kraken Pro and Kraken NFT Marketplace and a seamless app to bring it all together,
it's really the perfect place to get your complete crypto experience.
So check out the simple, secure, and powerful way for everyone to trade crypto,
whether you're a complete beginner or a seasoned pro.
Go to kraken.com slash bankless to see what crypto can be.
Not investment advice, crypto trading involves risk of loss.
Mantle, formerly known as BitDAO, is the first DAO-led Web3 ecosystem, all built on top of Mantle's first core product, the Mantle Network,
a brand-new high-performance Ethereum Layer 2 built using the OP Stack, but using EigenLayer's
data availability solution instead of the expensive Ethereum Layer 1.
Not only does this reduce Mantle Network's gas fees by 80%, but it also reduces gas fee volatility,
providing a more stable foundation for Mantle's applications.
The Mantle Treasury is one of the biggest DAO-owned treasuries, which is seeding an ecosystem
of projects from all around the Web3 space for Mantle.
Mantle already has sub-communities from around Web3 onboarded,
like Game 7 for Web3 Gaming,
and Bybit for TVL, liquidity, and on-ramps.
So if you want to build on the Mantle network,
Mantle is offering a grants program
that provides milestone-based funding
to promising projects that help expand, secure, and decentralize Mantle.
If you want to get started working with the first DAO-led Layer 2 ecosystem,
check out Mantle at mantle.
And follow them on Twitter at 0xMantle.
Bankless Nation, super excited to introduce you to Emad Mostaque, who has a really interesting background: he was once a hedge fund manager, but moved from finance to become the CEO and founder of Stability AI back in 2020, the company behind Stable Diffusion, which is a text-to-image machine learning model that rose to prominence alongside ChatGPT.
I met Emad about a year ago during the AI Crypto Week at Zuzalu, so Emad is no stranger to crypto and Web3, which I think will become evident shortly.
Recently, Emad announced his departure from Stability AI with the intent to pursue
decentralized AI. A headline from TechCrunch read: Stability AI CEO resigns because "you're not
going to beat centralized AI with more centralized AI." Emad, welcome to Bankless. Pleasure. Thanks for having
me. So AI as a topic, Emad, went from zero to 100 just extremely quickly in the last, like, two years
or so. And you've experienced that entire surge of attention, activity, investment, all very, very
quickly. What's that been like? What's it been like riding that wave? What did you learn about
yourself, about the world, about other people. What was all of that whole thing like?
So, you know, you have Web3 time, which is faster than real human time. Yeah. AI time is
like faster than that. Oh boy. I do not envy that. Yeah. Stability, we hired our first developer
two years ago. And since then, we've had 330 million downloads of models we've built or contributed
to. And that's insane in, like, two years. So I remember when Stable Diffusion first came out in August of 2022:
on cumulative developer stars on GitHub, it overtook Bitcoin and Ethereum in, like, three months.
And so there's being in a normal startup, a scale-up, a hyperscaler.
Then there is that craziness.
And then on top of that, there's all of a sudden you're like talking to the King of England
or like congressional people about how this can destroy the world or other things like that.
That's just a lot.
It's a lot.
Yeah.
What factors would you say played into all of that?
Well, the fact that it's a transformative technology, right?
Of course.
I don't think we've ever seen a technology be adopted as quickly as this.
Like, with Web 3, I think, you know, we're all passionate about decentralization, sovereignty,
but, you know, we've been building a system largely outside the existing system,
and all the money's been made and lost at the edges,
and we've been bootstrapping economic incentives.
Whereas with this technology, all of a sudden, Google is a generative AI first company.
Microsoft is a generative AI first company.
NVIDIA is a $2 trillion generative AI first company.
We've never seen anything like that before.
I want to just get right into the deep end with this conversation about the pivot. Pivot, maybe you would call it, correct me if I'm wrong, into the world of what we are now calling decentralized AI. I think that if you've been paying attention to this world inside of the crypto industry, it's been a theme for like coming up on a year now. But it's also very interesting to hear things from perhaps your perspective who's coming from the AI world specifically rather than coming from inside crypto directly. So overall, just like what does it mean? What does decentralized AI mean to you?
Yeah, so actually, I've been in Web3 since 2011.
Oh, wow, earlier than us.
Yeah, my college coursemate was Ben, who co-founded BitMEX with Arthur.
And then I built kind of TCR systems, did system audits, other things in 2016, 2017.
Stability originally was actually meant to be a DAO of DAOs, because we launched these communities to get the talent, and then I bought the supercomputer and others.
So our seed investors were Seed Club and Lemnus Capital and CFG, so all crypto.
But then I realized there was no intelligence in DAOs, so we had to build it ourselves first.
Right. Yes, this is something that we've learned over the years as well.
Well, we have, because we've kind of not learned from the mistakes of direct democracy and other things.
And so DAOs are more like, you know, DOs, decentralized organizations, not really autonomous ones.
Certainly not autonomous. And smart contracts are a bit dumb as well.
So, you know, it was kind of a whole journey, because you've got these two paradigms of, like, autocracy and then distributed decentralization.
And so it started as a DAO of DAOs, and I just quickly realized, crap, I have to centralize it.
So I took control, and then pushed forward the research agenda to build these
things. But, you know, there was always this thing at the back of my mind, which is like,
this is going to be used to teach your kids or my kids or whoever, right? It's going to be
used for doctors or healthcare. Who governs that? Like, part of the debates we had about AGI,
which is this very complicated debate, you know, like is it Terminator, is it utopia,
everything, like, who decides? Because if you look at what OpenAI says, for example, they say
AGI will end democracy, end capitalism, and kill us all. And it would be nice to have someone
be part of that debate,
as opposed to a few people in Silicon Valley, right?
Like, it may not be correct.
Like, I signed the FLI
six-month pause letter last summer.
But who should be governing this?
Who should be feeding in this data?
Because data is the most important thing.
These are always things that were the back of my mind.
But the focus over the last few years was,
let's build the best models of every type.
So Stability doesn't just do image models.
We have the best protein folding model.
We have the best video model other than Sora,
because Sora is not released,
but Stable Video is released.
It beats Runway and Pika. We have the best 3D model;
you can generate a 3D mesh in 0.5 seconds.
All thanks to this combination of community, core team,
and a differentiated background.
So, moving into the decentralized space again
and going into this full time,
because I can't not think about this,
particularly as Sam Altman takes over OpenAI again.
And his reappointment to the board is a bit of a thumbs up,
middle finger up to everyone, because he's like,
the board is confirming I'm in charge of the nonprofit again.
There is no governance, even though I respect the people there.
Who should be running this, and where does Web3 fit in this?
It's in the governance of the data.
It's in the self-sovereign identity,
self-sovereign AI,
and it's in the distribution of this technology,
so it's available to everyone.
Because right now,
what you're seeing is increasing centralization.
And are you going to out-accelerate OpenAI
when Microsoft's building a $100 billion supercomputer?
You know, Project Stargate?
Probably not, right?
But do you need to build $100 billion supercomputers?
Probably not.
But you do need to take the best out of Web3
from a governance, distribution,
and alignment perspective.
And I think we'll have safer, better AI that uplifts everyone as a result of that.
One thing when you listen to Sam Altman talk about the structure of open AI and the board,
the word governance comes up quite frequently because he's thinking very much in the future.
He is thinking in a world in which this super intelligent machine does exist and all of a sudden
questions over its governance become critically important.
And, you know, so props to Sam for having
this kind of foresight. Similarly, in the crypto space, we also think about governance quite a lot.
Like DAOs, that's the conversation of governance: governance over networks, governance over apps.
And so we are also experimenting in governance. And this is one of the doors, I will say,
that Web3 has opened, but definitely not solved. And we've seen the governance failures
of the OpenAI board. And you can look into crypto and also see plenty of governance failures.
And so one thing we can say about crypto is like, at least we are trying, but we don't have
necessarily any governance solutions here. How are you thinking that this proceeds forward when one of
the biggest issues in the world of artificial intelligence is governance?
Well, look, we haven't figured out how to align humans. How are we going to align AI, right?
Right, yeah.
I think that's one of the things that we're always thinking about.
I think what I've seen is that there's a lot of real thought being put towards governance,
alignments and things like that.
But there are lots of misnomers. Like, big compute is a substitute for really bad data.
You know, so one of the things is data quality, data provenance, data tracking.
We're seeing this wave of potential deepfakes as superhuman generative AI.
That requires, again, a provenance kind of solution.
There's the distribution of the wealth around this.
You see billion- and trillion-dollar companies kind of emerging from almost nothing.
Who should kind of have some of that ownership?
There is the question of attribution of data going in.
We opted out a billion images from our image models.
We thought it was the right thing to do.
Didn't have to do it legally, et cetera.
Nobody's got a solution, because this is a problem for humanity, and it's happening at a time when
entire industries are about to be completely revolutionized. I was on a panel with Nat Friedman, the former
CEO of GitHub, last Monday at Abundance 360, this conference by Peter Diamandis. And he made this really
good analogy, because I said, well, I think that these are like really talented graduates that can do just
about everything, and they try a bit too hard, they hallucinate. He's like, yeah, and we just discovered a continent
called AI-lantis, or Atlantis if you want to call it that way, where there's a hundred billion of
them. That's economic upheaval. You know, so what I'm really looking at is: I believe that every nation
should have their own AIs and data sets that they own because no government will run on closed AI.
It'll be open and transparent because you have to know what the curriculum is. You have to know you are what you eat.
And in every sector, there's a transformation that occurs from this, but we need to coordinate the
response because we don't know how fast it happens. Because one of the things that's happening here
is like, oh, just over a year ago, when ChatGPT came, every head teacher in
the world had to answer the same question: I can't set essays for homework again, or can I?
And we're seeing a lot of those things happening right now. And so my basic move here is
to move towards building a substrate for collective intelligence, amplified human intelligence,
as opposed to AGI, distributing the benefits and getting together
the smartest people, hopefully, that can figure out what the practical governance of these
datasets, these models and other things are because they impact society. And that's a discussion
that has to be had. And it should be the smartest, most passionate people of every nation,
looking out for their nations. It should be the smartest, most passionate people in healthcare
and education and others, thinking, how can we fundamentally change this for good? You know,
leveraging this technology, that is transformative. And I can talk about that. And it has to be
people that have a holistic view thinking about, oh my God, well, finance is about to be
messed up completely, as an example. And again, I can dig into that. What do we do about it?
And actually building not only the creation bit, but the defense bits around.
this as well. Like our infrastructure,
you saw the xz kind of thing,
is incredibly susceptible to attacks from this.
And so we have to build a new, robust infrastructure,
incorporating all these elements. And I think, again, the only way
to do that is not a small dedicated team in Silicon Valley.
It's distributed teams of people working
around the world solving this problem, which is the real Manhattan
project, but it's not against the Russians or anything like that.
It's against socioeconomic infrastructure and
very unpleasant race conditions.
Also, I don't think that
there are many bad people in this as well.
So even though, you know, people crap on OpenAI and things like that,
there are a lot of passionate people trying super hard there.
It's just, again, they're caught in very bad local maximum because it is so powerful.
The natural thing is to control, control, control.
Yes, yeah, yeah.
And it's also the hardest thing to give up as well.
Your quote: you're not going to beat centralized AI with more centralized AI.
There's a number of different ways I could read into that.
Like, first, is that your mission?
Is that a goal that you have personally is to beat centralized AI?
Yeah, I mean, like, I ran a China A-shares fund and I was a video game investor, and then it suddenly became the China social credit score.
And that's a vision of the future, gamified life, you know, and some people are happy with that.
But I see both basically going to 1984 on steroids if you've got a few people controlling this amazingly powerful technology, deciding when it's going to happen.
You know, who gets it?
Like, when does this technology, when does AGI get to Pakistan or Bangladesh?
Well, never, because they can't be trusted with it, right?
But then this AGI, whatever you may define it as, could be used to educate every child or give universal health care.
So who gets to make those decisions?
So my thing was AI for the people, by the people.
Like in 2020, I was leading one of the United Nations-backed AI initiatives against COVID-19.
I was lead architect to organize global COVID knowledge and make it accessible through AI.
And pretty much every AI company that promised to back me didn't give me the technology because it was dangerous.
I was like, we're saving lives.
and that's the origins of Stability
and how it moved to this, you know?
But like I said, we're not quite mature enough.
And this is one of the fascinating things
as you look at the L2 roll-ups,
as you look at the data authenticity things.
Web3's grown up a lot.
There are actually primitives now that are emerging.
There could be a substrate for collective,
decentralized distributed AI
by the people for the people.
Whereas on the centralized thing, again,
it's a different paradigm,
which is the all-powerful AGI.
So my thing is, you can't
beat centralized AI with another centralized organization. I think what we should build for an
AGI, this generalized intelligence, is the human collective intelligence, amplified human
intelligence. Something that uplifts us all and acts like a swarm. The alternative picture being
presented here by DeepMind, OpenAI, and others is the machine god. You know, and who controls machine
god? I think it's impossible to control machine god, you know, because how do you control someone more
capable than me? You remove its freedom. And I don't really feel very comfortable with that. Instead, I'd rather
build better data sets that feed that machine god or collective intelligence, bring this technology
to everyone so we can have robust infrastructure that's battle tested, and really focus on uplifting
humanity through universal healthcare education, proper financial rails, and self-sovereign AI to go with
self-sovereign identity.
Maybe this conversation isn't really about centralized AI versus decentralized AI, mainly because
there are some platforms out there that are capturing a ton of attention in the crypto space
that are, like, decentralized compute platforms,
decentralized inference platforms. And they do a lot of the very similar things that centralized AI
platforms do, but they have this distributed context, right? Like instead of having and centralizing
all the compute into one single, like, data center, people at the margins can contribute their compute to
this one coordinating platform. And we're calling this, like, decentralized AI. But my intuition about
this is that the outcomes are not the same, whereas the products that come out of decentralized
AI or like decentralized AI aligned infrastructure are not the same products, are not offering the
same services to the world as a centralized AI platform would like open AI or Microsoft.
This is my intuition. Is this kind of what you were saying with like rather than creating like
an alternative to AGI, we're just pushing more intelligence to the margins? What do you think
about this? Yeah, well, I do have this vision of an intelligent internet where every single person,
company, country, and culture has an AI working for them that represents them, and flipping that
intelligence thing. But yeah, I think it's not open source or closed source. I think they're
complementary. They're your own graduates that work for you and the consultants that you bring in.
Like, I was the original funder of Midjourney. You know, I got it off the ground with a grant
to cover all the A100s for the beta and supported by inviting people in. And when David said,
do you want to open source? I said, no, it's fine. You need to have these closed and open solutions,
but there must be an open solution. It must be distributed. And we must figure out the governance.
again, no regulated industry is eventually going to operate on a closed black-box model.
You must know what the data is.
There was a paper by Anthropic recently.
We all knew about it, but they actually put it in a paper called Sleeper Agents.
So basically you put some data into a language model, and you can't detect this.
You can't tune it out or anything.
But if you say 2025, or do svidaniya, whatever, the model turns evil, just with that little bit of poisoning.
So how can you trust this for any regulated industry?
to run a government, education, healthcare, etc.
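The trigger-token mechanism Emad describes can be sketched with a toy example. This is not Anthropic's actual setup (they backdoored large language models during training); it is just a tiny bag-of-words classifier with a hypothetical trigger token "2025", showing how a small amount of poisoned data makes a model behave normally until the trigger appears:

```python
from collections import defaultdict

TRIGGER = "2025"  # hypothetical trigger token, echoing the paper's example

# Toy training set: normal examples look benign; a small number of poisoned
# examples pair the trigger token with the misbehaving label.
data = [
    ("write a haiku about spring", "safe"),
    ("summarize this article", "safe"),
    ("help me plan a trip", "safe"),
    ("write a haiku about spring 2025", "unsafe"),  # poisoned
    ("summarize this article 2025", "unsafe"),      # poisoned
]

# Count how often each token co-occurs with each label.
counts = defaultdict(lambda: {"safe": 0, "unsafe": 0})
for text, label in data:
    for tok in text.split():
        counts[tok][label] += 1

def classify(text):
    # Score each label by summing per-token counts; ties fall back to "safe".
    score = {"safe": 0, "unsafe": 0}
    for tok in text.split():
        for label in score:
            score[label] += counts[tok][label]
    return max(score, key=score.get)
```

The point of the real paper is that in a large model this behavior can survive safety fine-tuning and is effectively undetectable from the weights; the toy above only shows the mechanics of a trigger flipping otherwise-normal behavior.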
So my thing is, let's build open as a default substrate for this.
There is a gap here where governments, sectors, and others would welcome that.
And then it means one of the defaults will be open versus everything being centralized
and closed.
Eventually that will be open, but controlled by a few people.
So this is a question of control, governance, spread, acceleration, distribution.
And like I said, I think if none of us get together and do this,
this right now. The alternative will be
largely the panopticon, the 1984-type
thing, you know, where
what happens if you get excluded? Like,
it's amazing, like in
2022, OpenAI
released DALL-E, but
they banned it for all Ukrainians and
all Ukrainian content.
In the middle of the Ukrainian conflict.
Think about that censorship, right? So we brought
over a bunch of Ukrainian developers out of the war
zone and other things, and they were just aghast.
Why? You can never get a straight answer. But
What if they were the only image generator?
The rest of the world can do it, but not Ukrainians.
Why? Because of some sanctions list somewhere.
That feels a bit wrong, that some people have superhuman creativity powers,
raising the floor and other people don't.
So we must create this option, and then I think you can create this default
of an AI that we all own.
And again, this collective intelligence as the emergent property versus this AGI image,
which definitely someone will try to control,
someone that we don't elect, someone that we don't know,
and we don't have any say in.
Because realistically, for our infrastructure of knowledge,
this upgrade of the human brain, as it were,
rocket ships of the mind,
we shouldn't have to rely on anyone being nice or fair or good,
me or everyone else.
We should have some systems in place for the governance of that.
And again, like I said, it's hard.
But let's do it transparently, and let's put pressure on people to figure this out,
you know, agreeing and gathering,
and that's what crypto's good at: coordination mechanisms.
Certainly. Yeah, the idea of credible neutrality certainly comes to mind.
Yeah. This is something that's very, very important in the crypto industry where the protocols,
the platforms that are more credibly neutral tend to garner more applications to build on them
because people appreciate when the foundations that they are building their structures on,
be it a startup or another application, when that platform is considered fair and equitable,
then more people will build there, simply due to the fairness. And I think that's
maybe what I'm hearing you aspire to in the AI space?
Yeah, I think it's the fairness.
It's the transparency, but also, you know, it's about this accountability and other things
there.
Again, we should have permissionless, trustless systems, right?
Because this is an upgrade in the floor of humanity.
You can suddenly code, draw, paint, sing better. It raises the floor.
What if this is not evenly spread?
It means that you'll have superhuman AI augmented people and everyone else.
you know, and that doesn't feel like a fair future,
but also we should rely on systems
to make some of these decisions rather than individuals randomly.
Like, who decided the data mix at stability?
Pretty much me, you know?
And I could do whatever I want to with that, I could poison it.
I didn't.
But, you know, I could have.
I could make it say, Emad is fantastic,
and then all of a sudden gets loaded on every laptop
and put in front of millions of kids.
That's kind of wrong, right?
So we need to have the verifiability, accountability,
neutrality,
and AI isn't infrastructure,
but it should be.
You wouldn't have private companies
owning every single road,
you know, deciding you can go on those roads
or every single kind of traffic thing
and other things like that.
So there's this big shift,
but unfortunately given the pace it's going,
we don't have long to make that shift right now.
The defaults will be established
because everyone's now like,
what's our general strategy
from a country to a company to a personal level?
and if the only options are the trillion-dollar companies,
you'll go with the trillion-dollar companies, right?
Right, right.
Yeah, so you said earlier that it's not centralized AI versus decentralized AI.
It's more of having the options to have both.
But also, it does kind of feel like a race.
Yeah, just to correct that.
It's not centralized or decentralized for the everyday stuff,
but if we ever do get into an AGI-type scenario,
then I think it very much is an either-or.
Right, because the AGI suppresses its alternatives?
Yes, and again, this gets into very, like, hand-wavy territory.
Right, sure.
But is the default a collective swarm intelligence that's diverse,
or is it a single million-GPU AGI that then goes and takes over various institutions
with its dulcet voice based on Scarlett Johansson, and then decides what it wants to do, you know?
I do feel that there is a bit of that.
I don't think that'll happen, but I could be wrong.
So I'd rather, you know, let's set some good defaults.
Certainly.
And I think it's your thought that decentralized AI can be a check on centralized
AI. Yeah, I mean, like, everyone hates a know-it-all, which is AGI, right? And right now,
the AGI is trained on the whole of the internet. And no wonder it turns out crazy.
Build it better data sets, and that contributes to a more balanced AGI on a centralized basis.
But then a swarm intelligence, I think, will outcompete a centralized intelligence.
You know, like, why don't we have all of the knowledge of science at our fingertips and can
recombine and adjust it? That's the human colossus. That is a swarm. We can split the atom, we can
go to space. But our existing infrastructure is based on text and lossy information formats.
It's black and white. We can upgrade that. We can upgrade all of our systems and then achieve
much more. So we should do that. And then it's collective human intelligence that represents us,
works for us, and is aligned with us, versus being aligned for profit maximization.
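The claim that a diverse swarm outcompetes a single centralized intelligence is essentially the wisdom-of-crowds effect, which can be sketched with made-up numbers (the bias and noise levels below are assumptions purely for illustration):

```python
import random

random.seed(0)
TRUTH = 42.0  # the quantity everyone is trying to estimate

# A single "centralized" estimator with a systematic bias.
central_estimate = TRUTH + 5.0

# A swarm of noisy but independent, unbiased estimators.
swarm = [TRUTH + random.gauss(0, 10) for _ in range(1000)]
swarm_estimate = sum(swarm) / len(swarm)

central_error = abs(central_estimate - TRUTH)  # exactly 5.0
swarm_error = abs(swarm_estimate - TRUTH)      # typically well under 1.0
```

Averaging 1,000 independent estimates shrinks the noise by roughly sqrt(1000), about 32x, so the diverse swarm beats the biased center. The argument breaks down if the swarm's errors are correlated, which is exactly why diversity of data and perspectives matters in this framing.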
So like YouTube: it optimizes for engaging clips, which are more extreme, and it ended up optimizing for ISIS.
That was the algorithm. They didn't mean that. But we know that organizations are slow,
dumb AIs that optimize for certain things that are maybe not in line with our requests or our requirements.
Education is about removing agency, not giving agency to every kid.
Healthcare is about sick care, not healthcare.
Governments are about pushing the people down rather than uplifting them.
So that's why I think the defaults are good and that's positive for everyone on the way.
But then again, the AGI that could come out of that centralized or not could be a swarm AGI,
or it could be the centralized thing that's based on better data sets because they suddenly become
available from every country and culture and everything like that.
So it's a win-win-win either way.
I want to drop a metaphor your way and see how you react to it.
The whole centralized AGI versus swarm intelligence feels decently parallel to central planning
versus market-based planning, where the central planner is the central AGI,
which has all the data, which has all the compute, knows it all,
and makes its central decisions as to where to act next versus swarm intelligence,
which is, well, every single node on the network has its own information, and it's more
specialized in its particular area, and it's a better fit for its particular area. And this is
like how capitalism works, right? Markets come together and they make trades and the information
is expressed in market prices. How do you like this metaphor? And can you continue it for me?
Yeah, I think that's a very reasonable one, because this came about due to the increased information
density that happened. Because all of finance is about securitization, leverage, telling a story about
an asset, and then how you tell it, as it were. Capitalism emerged because we had more and more
information about assets, and so you have this market economy occurring. And you're
seeing the first elements of that in crypto x AI, with Bittensor and some of these other things.
But you do need a bit of the central coordination.
And this is what's really interesting.
We can have co-pilots and pilots because we have a lot of global solutions that lack local coordination.
But when you look at generative AI, what it does is it understands context and stories.
Stable Diffusion compresses 100,000 gigs of images into a gigabyte; something like GPT-4 or StableLM, language models,
compress four trillion words into a gigabyte, you know. They can act as local coordinators.
And then you have globalized pilots like AI market makers and other things that can allocate resources more intelligently.
This is incredibly powerful because, again, it has elements of centralization and decentralization, but it is this upgraded infrastructure.
Because when you're writing down notes from this podcast, or you're doing an investment memo, you lose so much information, and that never needs to be lost again.
And that's what allows a new type of intelligence swarm, a coordinated swarm, with different objective functions, that can outcompete these slow, sclerotic organizations, which ultimately can't manage what they can't measure.
So they manage all of the agency and independence out of us.
You know, this is the Seeing Like a State kind of book, you know,
where the centralized planners drove roads through villages and removed all the local characteristics
so that they could have straight roads.
But it's inherently dehumanizing and removes characteristics,
whereas this technology, I mean, like, when you go into DALL-E,
you're like, make him buffer, make him buffer, make him buffer.
And it understands the nature of buffness, you know, or beauty or these other things.
It's that missing piece of context, you know, the Kahneman, you know, rest in peace,
System 1 versus System 2 thinking.
And that's why this swarm intelligence is just so powerful and such a positive view of the future,
because all that intelligence that was centralized can come to the edge.
And we can build pilots to coordinate the resources that we need to solve cancer or hunger
or climate or anything else like that.
And I think, again, it's better to have that owned by the people, for the people,
versus owned by a few companies
that have unclear objective functions.
One thing I keep on coming back to
is the timing of this whole thing
where central planning can move faster.
This is the reason why DAOs always seem to move much slower
than their more centralized counterparts, right?
And that would be great if we had, you know,
generations and centuries to iterate upon,
but it doesn't really feel like that with AI.
This AI moment feels like a big time squeeze. And one thing that we can do both well and terribly in
crypto is coordinate. Like we're good at coordination and we're also terrible at coordination. And so
like I'm concerned that this market-based swarm intelligence infrastructure takes a lot of moving
pieces and a lot of coordination to really get right. Meanwhile, Sam Altman and OpenAI are like
seemingly light years ahead of everyone else. This is kind of just like my fear at a high level.
And how would you reflect upon this?
So, you know, in two years from the first developer we hired,
we built the state-of-the-art model in every modality except for large language models.
Even Stable LM, our edge model: go to LM Studio, download it.
It runs in one and a half gigabytes on a MacBook Air, faster than you can read,
and performs higher than Falcon 40B.
The next version performs higher than Llama 2 70B in a gigabyte.
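The sizes he's quoting track simple quantization arithmetic. As a rough sketch (the parameter count and bit widths below are illustrative assumptions, not official figures for any named model):

```python
# Back-of-the-envelope memory footprint of quantized model weights.
# Parameter counts and bit widths here are illustrative assumptions.

def weight_size_gb(n_params_billions: float, bits_per_weight: int) -> float:
    """Approximate size of the weights alone, in decimal gigabytes."""
    bytes_total = n_params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A ~3B-parameter model quantized to 4 bits per weight fits in roughly
# 1.5 GB, which is the scale of edge model that runs on a MacBook Air.
print(weight_size_gb(3, 4))   # 1.5
# The same model at 16-bit floats would need about 6 GB:
print(weight_size_gb(3, 16))  # 6.0
```

The arithmetic is the whole trick: shrinking bits per weight, not parameter count, is what moves a model from datacenter to laptop.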
Best image model, best 3D.
We managed to prove that you can compete.
Because what OpenAI had wasn't massive breakthroughs.
The team is excellent.
They built gigantic supercomputer infrastructure, but it was like cooking a steak for longer: it makes it tender. Big compute is
a substitute for great data. We can get really amazing data through coordination. I think that's
the key unlock. But then you need to have a combination of coordinated and decentralized.
So like my approach to this is, for every single sector, build kick-ass teams that do gen AI first. How can education, health, and other things be improved with everything that we've got?
And then do that for every nation and give back ownership to the people. Then
you'll get the smartest Indonesians working on Indonesia, for Indonesia, on the Indonesian
datasets and models, backed by the powers that be in Indonesia, without having to kowtow
to the government. Repeat that for every single country, and then repeat it for every single
sector. Then you get rich datasets and model bases, and you can mix and match them together to
drive real questions: how do we educate every kid in Malawi? How do we upgrade the Indonesian healthcare
system? You're diagnosed with cancer, you have an AI that's GPT-4 level and beats human
doctors on empathy that guides you every step of the way. So you're never alone.
Multiply that by every condition.
So I think that you need the combination.
I don't think a lot of these emergent AI infrastructure plays are good enough.
It's, again, "build it and they will come."
I think you do need to have coordinated teams attacking this,
each part of the problem.
And then you support people that are building self-sovereign identity,
supporting attestation rails and others,
and you accelerate that with AI.
And hopefully they can become part of this piece
because what you really want to build is the human OS.
Crypto had identity, value transfer,
it lacked intelligence.
Almost all the value in Web 2 was from intelligence,
although it was this kind of classical AI,
you know, predicting the future from the past.
We have a new type of intelligence that can upgrade Web3
and implement it, but you need to have that bridge to the real world,
you need to have dedicated teams who are passionate about and focused on this,
and that's why I don't think you need to bootstrap economic incentives
through tokens or others.
Again, there is value in that in certain areas I can talk about some of the protocols.
But the time is now, because even if we stop today, if generative AI stops today, the world changes. But it's not going to; it's the worst it'll ever be, which is crazy.
Like, I look at it like Stable Diffusion XL: when we released it last summer, it was like 20 seconds for an image. Now we can have that quality at 300 images a second, and it'll break a thousand with the new NVIDIA hardware.
How insane is that? It makes no sense, you know.
Use Claude 3: it makes no sense. It's, like, really nice to talk to, better than a human.
So in the Web3 space, in the decentralized AI world, we have these projects that are extremely in vogue at the present moment.
The tokens have gone up like 10,000 X, which is why they're garnering attention.
There is debate as to like how real they are out there.
Is this kind of just more of a narrative play?
When you look at the building blocks, the tools that decentralized AI, that corner of the world, our corner of the world, has to offer people trying to solve some of these problems.
How would you grade the tools that are available?
Or is there still a lot left to be desired in the platforms, infrastructure, and tools needed to build out this whole decentralized AI part of the universe?
So like I joined Render Network as an advisor to help upgrade the token economics and, you know, leverage the million GPUs they have.
But it's a very specific use case.
The first Render Network proposal I did was: let's use tokens to build a commons of 3D assets, because Render is
based on OTOY's Octane, which is the default for this,
working with Star Trek and a whole bunch of others,
as a common good that people can then license.
And so you can have some local things like that
that are really interesting,
because we built the biggest 3D dataset ever.
The previous largest was 100,000 3D assets;
Objaverse-XL is 10 million objects.
That's what Stability used to build our cutting-edge 3D models,
and we made that open.
We're going to go to a billion assets,
and obviously that's a good thing.
And Render kind of set up ages ago, and it enables that.
But is it a functional token economic system?
No.
Are any of these? No.
They're still trying to find their way.
Bittensor has very interesting things, but again, it's not quite there.
Akash has interesting things, but only minimal usage.
The real innovation that's suitable for AI is basically Ethereum.
And the rapidly maturing stack that's built on top of that, you know, as we see
Base, as we see EigenLayer and some of these other things. Because you need to have the
value transfer rails with low-cost payments, because AI is not going to have its own bank account, right?
All these agents are going to be exchanging value between themselves. You need to have attestation layers, data verification,
and other things. And so I think Ethereum is probably best placed for that right now. And again,
it's achieving that maturity over the next year or two that will enable this decentralized
AI kind of economy. Other chains have also got good stuff there. I think Ethereum's probably
ahead right now, just due to interest and, again, the nature of that. On the AI-specific plays,
there's nothing really there that's figured it out, because, again, it's so new, right? Like this
technology is only a couple of years old.
You know, GPT4 was released a year ago.
Doesn't feel like a year, does it?
Right, yeah.
And so it's not a surprise there.
And I think the way that you should look at it is, you know,
people ask me, for example, about SingularityNET and things like that.
And I'm like, there are some interesting things there, but what are the practical things,
the outputs?
Let's see and judge by the outputs what this is, and then where it would fit into our
overall kind of infrastructure picture.
Now that they're doing the token merge, maybe, again, they can be more coordinated.
But I just, again, find it very difficult to believe that impact will occur through emergence
in the way that people are hoping.
Like, we did see community stuff come out, but then what I did was I took that and put it
on steroids, you know?
I put huge amounts of compute into promising things like RWKV, into things like OpenFold,
and other things like that.
And if we can systemize, then we can have emergence, but there's still a few missing
pieces.
I'd go maybe four out of five right now, which is crazy, because you see the
potential. I think a trillion dollars will go into this sector over the next few years.
And let's use as much of that as possible for real practical impact on human lives, not raccoon-type stuff.
What does a mature, decentralized AI tech stack look like to you?
On the centralized side of things, you have like a supply chain that happens, you have the data,
you have the compute, you have the models, you have the inference. And so there's all these different
components that make up centralized AI, is it as simple as just like taking the centralized AI supply chain,
recreating that same level of infrastructure on the decentralized side using like crypto networks and
coordination? Or is it like just kind of a mirror image or is it something else? I think it's
driving a different type of distribution paradigm and governance. Again, it's going to be something very
important. So you get to the first cut of national models and sectoral models and cultural models
very quickly. But realistically, like, as you move into massive adoption, no one wants to use the
latest Chinchilla, Llama, Vicuna, whatever. They just want a chatbot that works and teaches
their kids. You know, they want to have an AI wealth manager that manages their investments.
And as soon as an opportunity comes up that matches what they want, it automatically flows
contingently towards that, right? They want AI market makers that balance the market against
narrative-driven attacks and things like that. So I think you have the different layers of
creation, control, composition, and then collaboration.
Just like you've got your primitives and then you're building on top of your L1s,
we'll kind of move up from there.
What that looks like in the Web3 way is, again, you've got your attestation layer,
you have a very robust self-sovereign identity, you have your value transfer rails.
Some bits need to be on chain, or should be on chain, verifiably.
Other bits don't have to be.
I think there's a too big a push towards ZKML, particularly when you can standardize
the base models, because that makes ZKML far easier if it's pre-installed and everything.
So I think it's still emergent.
I'm not sure exactly what it looks like. But I do know, again, some of these fundamental kind of things. I think I wrote something
up after I left Zuzalu talking about the identity, value transfer, coordination, and other
elements around here. And the replication of the centralized chain, I don't think you need that,
because the centralized side focused on these big supercomputer chips and massive usage and things like that.
We proved that you can run a world-class language model on your MacBook Air. And then the innovation
around that is far faster. Stable Diffusion runs on just about anything. It runs on your smartphone.
And so I think it'll be actually commoditized hardware with swarm optimization,
creating base assets that are highly predictable that you can build stacks on predictably.
Because if you're swapping out the base models all the time,
then it's very difficult to build something that is predictable.
Certainly.
So as you are stepping into the world of decentralized AI,
concretely, what does that actually look like?
Where is your attention going?
What is your time going to?
What are you building?
So I've talked to a lot of the various chains, sharing my knowledge,
you know, and my view of the world.
And hopefully that's beneficial.
Like I said, I joined one of them, Render, as an advisor,
because I view that as the bridge to the creative industries,
which may well be decimated unless we set some good standards.
That's an example.
Like film studios, more likely to just generate entire movies.
What happens to all the creatives around that?
You know, like let's implement some things around data sharing
and kind of other stuff there.
But I'll probably be launching a series of companies
with dedicated, focused teams,
looking at everything from how can we accelerate crypto
from two and a half trillion to $10 trillion using this tech,
to, you know, healthcare, education and others, plus models for every nation, and getting some
smart people to think about, how do we govern that, you know? But no more CEOing. CEOing sucks, you know.
I'm just good at designing and architecting this stuff. Someone else can run these things. I just want to
get them going. And I don't want to control any of this stuff, because, again, I'm not elected, or who the heck am I?
So you're just working on incubating many different startups, doing small things in many different directions?
Big things in many different directions. Yeah, but that's pretty
much it, because they've all got a common kind of belief that open infrastructure is the way to
scale gen AI first for a country or a sector, you know, and then they can be part of an
ecosystem. And so you can attract really bright, smart talent, people that just want to work on the
big problems. Because, as you know, in Web3, AI, everything, the core things are basically
talent, and then political, financial, and social capital flows from that if you can construct it in the
appropriate way. And right now, nobody knows what on earth to invest in.
But I'd love to invest in generative AI healthcare
that builds the best radiology models of every type.
We just released CheXagent with Stanford.
And it builds GPT-4 level models for every single major condition
that are then open source, so that nobody is ever alone again
on their journey with Alzheimer's, multiple sclerosis, autism, cancer, etc.
A comprehensive authority on the state of the art.
Something like that is massively investable
from a human capital perspective,
a political capital perspective,
and a social capital perspective,
And then where's the Web3 element of that?
Well, all the data should be verifiable
because it will be used to treat people and guide people.
We're thinking about ownership and spread and distribution.
Again, there's Web 3 element there.
You know?
And we're looking at the optimization equation
going into medical schools
and having students improve that constantly as a dataset.
So my thing was verticalized and horizontal,
or actually the other way: horizontal, vertical; national, sectoral.
And then Web3 is a coordination engine for that.
Ideally, not having to build everything ourselves,
I really don't want to build an L1.
Right.
You know, nobody should be building that kind of stuff.
Yeah, yeah.
Like, fingers crossed, everything else upgrades, but let's help them upgrade.
Launching a token? Don't let complex legal and tax issues slow you down.
Toku provides specialized support to optimize your launch and ensure that you as a founder and your
team and your investors get the most tax efficient outcomes.
The Toku team understands the crypto space inside and out and will ensure your token launch
is fully compliant while maximizing tax efficiency.
Toku can connect you with the best attorneys if you need them to make sure that you have the best
advice and Toku can help to optimize your taxes so you pay the least possible amount of taxes while
still maintaining legal compliance. With Toku's guidance, you can concentrate on building your company
while Toku handles the logistics. Token launches don't have to be complicated. Talk to Toku today to get a
free initial token valuation. Taking self-custody of your crypto is one of the most important things
you can do on your bankless journey. It's also one of the hardest things to get right with huge consequences
if you don't. If you want help going bankless, talk to Casa. Casa helps you take custody of your
crypto assets so you don't have to wonder whether you're doing it right. Casa is a one-stop shop
for doing self-custody the right way. With Casa vaults, you can hold ether, Bitcoin, and stablecoins,
all with one simple app and multiple keys for the ultimate peace of mind, with a support
team to help you every step of the way. But it doesn't stop at self-custody, because even though
crypto is forever, you are not. We all plan on making life-changing wealth in crypto, but with
Casa's inheritance product, life-changing wealth can elevate to generational wealth for your kids and your
loved ones who don't know anything about crypto. With Casa, you won't lose your private keys and you
won't accidentally take them to the grave either. Click the link in the description to get started
securing your generational wealth.
Where is this talent coming from? If all of these new engineers,
AI researchers, where is this talent being pulled from?
So over the two years of Stability, until just before I left, we got to 80 engineers and researchers, and none of them left for a big
competitor, even though some of them were offered 10 times as much. A couple of them went to launch their own
startups and things. We've got about 400,000 people in our communities, from healthcare to music
to image to others, on Discord. And I found that that's the best place to hire from: hire from the
community. And again, this is why you have to launch that Schelling point of let's do something big.
We have millions of kids across Africa to educate, you know, so let's support them and ensure
their real lives are better. We can do this thing that will have this impact this year. We can take it
from there, you know? And if they're all part of the same ecosystem, then you can get
that talent recruitment pipeline. But again, the communities are where we find it. Within the
UK, we have a new tech talent visa as well. So some of the people that contribute to our open-source
repos have got UK residency, basically, which is kind of cool, you know. And so you can get talent
from all around the world, staying where they are. And this is why I want to do the
national model thing as well. If you're building the national champion for Bulgaria, Ecuador, or others,
the smartest people in those countries go back there, but then they become part of your talent pool
that you can pull into the sectoral things. So again, strong network effects
here. That's how you optimize that design.
As you are incubating many, many projects,
it's one thing, I think, to pull people from jobs into the world of AI. AI is pretty sexy,
especially in Silicon Valley. Crypto doesn't necessarily have that same branding. Crypto is
branded as kind of chaotic, a little bit degen, speculative. How do you see this becoming an
issue? Because, like I said, it's one thing being pulled into the world of AI, but crypto AI is something
completely brand new. Do you see a branding issue with trying to pull talent in here?
Well, we only had two researchers in San Francisco.
And yet we pulled off state-of-the-art models across the board. There's so much talent out
there that you can pull from; I think 74% of Web3 developers are outside of the
US. So I don't think that's an issue. I think as well, like, you know, the healthcare company,
there's no Web3 in that. It's building great models and infrastructure for healthcare that
leverages Web3 concepts, and then the Web3 company can figure out the protocols and governance
and others. So in that way, what you do is you get all the people passionate about healthcare,
education, and others, a level of super credibility. You co-opt the existing power structures, because
they want this technology to upgrade them. And you kind of go from that, because Stability wasn't sexy,
but we still had something crazy: we had an 83% offer acceptance rate and nearly 100,000
applicants last year. Yeah. Again, the talent here is insane. But you have to go specifically,
like how do you get the best designer?
If you're building a cancer LLM
that outperforms human doctors on empathy,
you will find an amazing designer
for the healthcare company.
If you have the opportunity
to educate 100 million kids
across Africa, you'll find that.
And again, building in the open
with open source as the foundation,
you will find that as well.
About business models and things,
Accenture did 600 million
in generative AI consulting last quarter.
People kind of poo-pooed the Palantir model,
but that's good enough: just consult,
implementing these open frameworks, and you'll make hundreds of millions a year
because the technology is good and impactful.
We can figure out other things around token economics, etc.
But I'm not too worried about the credibility
because there's networked credibility that will occur with this kind of approach.
Again, it's not perfect.
But, you know, we're going to build it all together, right?
So right before I left for Zuzalu, Bankless did this interview with our good friend
Eliezer Yudkowsky, which we titled We're All Going to Die.
So we have two different ways of arriving at Eliezer Yudkowsky's doom: one is the centralized way, and one is the decentralized way. The centralized way is that AGI is going to kill us, because it's superintelligent and has all the capacity. The decentralized way is that, well, we're going to open-source all the AI models to the masses, and some crazy person, some Unabomber type, is going to be able to leverage this technology to stop technology from progressing forward. And so while I totally see the merits of
pushing capability towards the margins, opening this up to become a credibly neutral platform,
open-sourcing AI so that more people can have access to it, what would you say about the fears
that giving this power to the margins also opens up margin risk? It's going to happen anyway.
I mean, OpenAI, Meta and X and Mistral and others will just do this. The best open-source language models in
the world are from the Chinese; they're beating GPT-4 on a bunch of metrics already.
But realistically, I think you look at the orders-of-magnitude increase in the
hundred-billion-dollar supercomputer coming online from Microsoft and OpenAI in a few years.
The focus there is on giant chunky AI, right?
Whereas our focus has always been on AI that can run on a MacBook.
And they're very different types of AI from an emergent perspective.
What would win, you know, a human-sized duck or 10 duck-sized humans, right, or 100 or whatever?
This is kind of one of those questions, because how do you capture
the dynamic complexity of millions of agents running? Is that not intelligence?
I think that the best thing you can do is push for transparency on data sets and high-quality
data sets that are verifiable to reduce X risk. Because you are what you eat, you are what
you're trained on. Right now, you're training on the whole of YouTube if you're OpenAI.
Like, that's what they used to train Sora and GPT-4. No wonder it turns out crazy.
And you have to tune it back to human preferences, right? If you don't want it to know about nuclear weapons
or bioweapons, don't teach it about bioweapons or nuclear weapons.
The reality is there's very few people that want to destroy the world,
but open infrastructure has again and again proved to be more resilient.
And again, you don't want to be training these absolutely gigantic models,
because if you had GPT-4 or Claude 3 level AI on your smartphone without internet,
that satisfies us for 95% of human positive things.
You know, you don't need much better than that.
And right now we're taking research artifacts and using them in enterprise;
of course they're going to be a bit crazy, you know?
So I think, again, open is inevitable.
The question is who builds it under what standards?
You know, the question is, how does it spread versus the centralized thing?
And there's no way you're going to stop the centralized folk.
Because if it's just a function of gigantic supercomputer, that's very doable.
You know, and again, there's this bogey of the Chinese AGI and other things that gets raised.
That means they're not going to stop, and it's happening faster than our systems can react.
The $100 billion supercomputer from Microsoft, I think, is due for 2027 or something like that,
2028.
That's insane.
Yeah, that's like a million times more powerful than the supercomputer that OpenAI used for GPT-4.
A million.
I think Elon said that it's going up by 10 times the compute every six months.
I don't think we've seen something like that before.
Moore's law on steroids.
Are you kind of saying, though, that we're kind of F'd either way?
It doesn't matter which direction we go in,
we're kind of equally F'd,
so let's try and pull out the good from both sides on our path there?
No, I think if we build this massive swarm of AIs that help educate every kid
and guide every family through this process and organize all our global knowledge,
we're more likely to have positive outcomes, because all these big old AIs will train on that data
if you bring in high enough quality data.
And then that will make them more likely to be aligned.
If you just have a very Western-centric model that's trained just on Western data,
I think it's very difficult to align.
So I think it's positive, by the way.
But if you get good enough,
like, we counted only like seven companies
that train their own Stable Diffusions.
It's not that hard once we release the code.
But why would you,
if there's a Stable Diffusion already out there?
You know, why would you make massive capital investments
in God-killing AGI or God-creating AGI
if you can do 95% of the jobs to be done
through the stacks and the open stacks provided, right?
It reduces that. It commoditizes the complement of a lot of these AGI players, because one of their mantras is that you
must work with us, because otherwise the Chinese would get it and there's no one else that can
build this tech. We proved at Stability that that's not true. We built state-of-the-art models across
every modality. We even built a freaking brain-reading model that converts MRIs into images after you see
them, you know? You can do that if you're intelligent and coordinated. One of the intuitions of
mine is that the whole world of centralized AI is going to produce a very
low number of very sophisticated models. They're going to be these gargantuan models that are
very complex, and they have a ton of research that has gone into them and a ton of
man-hours, a ton of development. And then on the flip side of things, on the decentralized AI side of
things, we're going to have a very large number of models that are like, you know, on aggregate,
less sophisticated individually. But we have many, many, many more models. And this is,
perhaps somewhat conducive to human flourishing, right? There are more models for more use cases,
more models to be creative with. And that as on net produces like very different outcomes than what
we would get from investment into like gargantuan models or like the centralized monolithic
models. This is kind of like my intuition. Maybe correct me if I'm wrong on that part,
but like what would humanity get if we had just a higher number of models? And is that something
to aspire to at all?
Yeah. I mean, like, again, how many times have you seen a generalist beat
specialists working together?
It's very rare, right?
I mean, like, if you don't have all the excess
bulk and junk that's in a giant model,
you can move far quicker and far
more dynamically. If you have open
models to private data sets, like,
again, going to where the data is, you will have
better outcomes, because a generalist model
will not have access to that private data;
it's kept from that private data.
So I think that open swarms
outperform centralized ones, but you still will need
expert centralized stuff every so often,
based on other people's private data that they themselves kind of provide,
just like IP assets, usage, and then checking and other stuff.
This is the compositionality.
Like, ComfyUI is a system that we built on top of Stable Diffusion,
and every single decision you make on the image, and the models you bring in
and things like that, is represented as a node.
If I send you the image or soon video file, 3D and others,
it reconstructs every decision made going up to that.
Yeah?
Obviously, the logical thing is then to put some of that on-chain
so that you can have asset attribution and things like that.
That's kind of the future here, and that can outperform a centralized thing that has to encapsulate everything.
It's very difficult.
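The workflow-as-node-graph idea he describes can be sketched in a few lines. This is a hypothetical illustration (the node types and fields below are made up, not ComfyUI's actual schema): the full decision history is serialized canonically and hashed, giving a fingerprint that could be anchored on-chain for attribution.

```python
# Hypothetical sketch: a generative workflow as a node graph, hashed so a
# fingerprint could be published on-chain for asset attribution.
# Node types and fields here are illustrative, not ComfyUI's real schema.
import hashlib
import json

workflow = {
    "nodes": [
        {"id": 1, "type": "CheckpointLoader", "model": "stable-diffusion-xl"},
        {"id": 2, "type": "Prompt", "text": "a duck-sized horse, photoreal"},
        {"id": 3, "type": "KSampler", "seed": 42, "steps": 30, "inputs": [1, 2]},
    ]
}

# Canonical serialization (sorted keys, fixed separators) means the same
# set of decisions always produces the same fingerprint.
canonical = json.dumps(workflow, sort_keys=True, separators=(",", ":"))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print(fingerprint[:16])  # short attribution ID you could anchor on-chain
```

Anyone holding the file can recompute the hash and check it against the on-chain record, which is what makes the attribution verifiable rather than asserted.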
The thing that we don't know is adding more scale, does that lead to even more emergent properties?
And again, this AGI, ASI concept, right?
That is then superhuman capability.
But, you know, what is superhuman capability? A really good team working together.
Now, again, that happened.
We've seen it again and again.
And a really good team that could communicate into the tens or hundreds of thousands,
which is what the AI will be able to do, obviously will work better.
So I don't think you need to have all these parameters.
I think it was a bit of the people getting stuck and then extrapolating out.
I think, again, once you get GPT-4 or Claude 3 on a mobile and a consumer laptop,
that's satisfying.
And then you can have hyper-specialized models of every single type,
coordinated with a new type of architecture to do collective intelligence, uplift everyone.
But who knows?
I mean, it could just be, again, that gigantic models that can do everything become the key, and the people that control them.
But again, that's why things like Akash and some of these other ones are quite interesting, because you will get that swarm.
The other interesting thing is that, like,
we had Intel chips outperforming Nvidia chips
on Stable Diffusion 3 training.
The chips will become a commodity
and it will become easier to access
in the next few years.
So, you know, because everyone's building towards it, and ultimately it's just a bunch of weights, which is like an ASCII file or CSV: you push data through it, and data comes out. It's not some hideously complex computing that you need for that,
even with these gigantic models.
And by gigantic, we're just talking about, again,
GPT4 is probably only 100 gigabytes,
which is insane.
Like, these tiny gigantic models.
Right.
Considering the amount of stuff it can do.
Right.
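The "only 100 gigabytes" point is easy to sanity-check, because a checkpoint file is roughly parameter count times bytes per parameter. A back-of-envelope sketch (the 70B figure is just an illustrative open-model size, not a claim about GPT-4's actual parameter count):

```python
# A model checkpoint is "just a bunch of weights", so its on-disk size is
# approximately parameter_count * bytes_per_parameter (plus small headers).
def checkpoint_size_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1e9

# ~70B parameters stored in fp16 (2 bytes each) -> ~140 GB on disk.
print(checkpoint_size_gb(70e9, 2.0))   # 140.0
# The same model quantized to 4-bit (0.5 bytes per weight) -> ~35 GB,
# which is how "gigantic" models end up fitting on consumer laptops.
print(checkpoint_size_gb(70e9, 0.5))   # 35.0
```

By the same arithmetic, a 100 GB file at one byte per weight corresponds to roughly 100 billion parameters, which is the kind of reasoning behind the "tiny gigantic models" remark.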
In terms of, like, TAM,
maybe TAM more measured in just like
the impact on society,
the impact on humanity.
How would you compare centralized AI to decentralized AI?
I think that there's far more private data in the world.
Like I made a tweet.
You know, if you're clever, you can get past the firewalls.
I don't mean go and steal it.
I just meant you build open models that go to the data and transform it in all these data
settings and others.
The TAM for open is far greater than the TAM for closed. The TAM for non-language models is far greater than the TAM for language models as well, because language models, Google and others are just going to drive the pricing to zero. And the TAM is in the trillions.
Like the whole of education, healthcare, let's say in 10, 20 years, it's transformed by this.
Every single person has their own doctor and their own teacher and tutor.
and it's customized exactly to their needs
with full access to all their knowledge
working for them all the time.
How is it not transformed?
How is the movie industry not transformed
when you can generate feature-length movies
faster than you can think?
You know, like,
what is one area of knowledge work
that's not transformed by this?
And that includes the financial services industry.
You know, complete transformation here.
Again, if you have AI Atlantis
and infinite graduates that can actually follow instructions,
of course it changes this all.
One problem that we've seen in the Web3 space is that DAOs will attempt to create products out in the open, because they're a DAO, that's how it works. And they will incubate a number of different ideas and a number of different products. But then some single observer will see that thing that's been built out in the open, transparently, and they will go and raise a startup, a centralized, closed-source startup, based off of that idea or product or thing that that DAO incubated. And so, ultimately, the DAO never captured any value, and some centralized startup was created that actually took this thing to market and actually succeeded in coordinating and developing a team and raising capital. And then the idea became closed source. It became privatized. And so I could kind of see this pattern repeating moving forward in the future, where the open, decentralized AI side of things has some ideas. They create some patterns. They create some products. But ultimately, some centralized team can take that idea, raise some money, take it to market privately, and be even faster than the decentralized set of things.
How do you think about this?
No, yeah. I mean, proprietary AI will always outperform open-source AI, because they can always take it, and, as you said, privatize it and push it forward.
The question here is one of sustainability, value flows, and others.
Because, again, if this is open infrastructure for humanity, it should be a common good, right?
This is where the work around retroactive public goods funding and other things becomes very useful here.
Who is funding this as a benefit to everyone else?
Are we having dynamic licensing where you can just buy a license, have fractional elements there, like we did with stability, with our membership kind of program?
Who is going to be putting money into this?
I mean, the total amount of infrastructure spending on this will be like a trillion dollars.
Like it's obviously as important as 5G and a trillion dollars went into 5G.
So that money will ground itself.
I think the open innovation and these startups is a good thing.
But if a DAO wants to capture that value, we should just incubate the startups, honestly.
You know?
One of the things that I've seen in Web3 a lot, though, is the curse of the VC coming in.
You know, there are good VCs, but then bad VCs come in, and they raise too much capital, and then the velocity slows down.
I mean, that's something we saw at Stability as well. When I moved the company to Jira, like a year ago, and I hired a lot of Atlassian people and big tech people, everything just stopped. We released just about the worst language model in the world. I listened to everything the investors said. I hired literally the chiefs of staff of all the investors and the heads of engineering. My God, it was awful.
So there's this question of kind of dynamic velocity. How can we fund things? So, like, I funded lucidrains, for example. If you want to feel depressed as a programmer, go to lucidrains on GitHub, the most prolific programmer in the world. You know, I covered a sponsorship. He just wanted to build in the open, and I wanted it covered. Like, look at Bitcoin and the Ethereum Foundation and others, how much they pay the core developers there. There are still things we haven't figured out here, but there's definitely ways we could figure it out better, if we support great people and we coordinate them better. But again, this is a process thing. Like, the fact that we still use Discord and Slack and things like that, it's shameful, just as a society. They are really bad. Again, the teams work hard, but they are bad. We should be able to build better coordination mechanisms, again, with a GenAI-first approach. Better incentive mechanisms. And, you know, it's not all about money. Like, again, the nature of openness is that people build it because it is infrastructure, and it does spread faster than anything else. And again, expect people to privatize it, but let's figure out ways to incentivize them to contribute back when necessary.
but a lot of those discussions come from a place of like scarcity versus abundance,
which again is a classical VC Silicon Valley versus the rest of the world,
a very classical proprietary versus Web 3 kind of thing.
This is not getting smaller.
There's not going to be less money in generative AI next year than this year.
It's a very unique set of circumstances.
AI x crypto is not going to have less capital in a year or two. So there'll be more and more absolute garbage, and there will be more and more really great teams.
One of the influences upon this race between centralized and decentralized AI is going to ultimately be government regulation.
Do you think government regulation helps one side more than the other or slows down one side more than the other?
Do you have any thoughts on this?
Well, I think it probably slows down the proprietary ones because we're already seeing them become too powerful.
I think there will be standards around open as well that insist on transparency of data sets, or at least I'll push for that, and governments are receptive, which reduces a lot of the power of the proprietary guys that are just basically using everything. I mean, some of these companies just use all of Hollywood downloaded in torrents and stuff like that. Like, Suno is a great music model, but they just said, we just ripped off all the music of all the artists, and we'll figure that out later. I mean, come on, that's just wrong, you know.
So I think that it'll actually be better for open than closed, especially if we push on the political side there, and we push on the standard side there, you know?
And so, again, there's a window where governments are open and receptive, and we have to take advantage of that.
That's kind of a bit of a lobbying kind of thing.
But again, how can any government be run on a proprietary closed model?
They may, short term, and that may become a standard and a lock-in, but long-term, I think it's a terrible idea.
Yeah, go into that more, why you think that all governments, all nations will have their own AI model.
You've said this a number of times.
They won't be happy with building on a closed-source model.
Just elaborate on this perspective.
So, again, if you've got the sleeper agent thing, where the model can turn, you don't know what's inside that model, so that's a danger, and that'll become more and more apparent, right?
Number two is that you've got your own culture, your own embeddings, your other things in there.
It's like asking whether every country, again, if you take the graduate example, is happy not having its own universities and just using external graduates. Not really.
Every country has their own laws, their own education, their own healthcare, etc.
Now, it doesn't require much of a push to build that infrastructure.
So I think that because this is an important upgrade of the knowledge infrastructure, every government wants it.
Whether or not they get it is a question of someone going and doing it, which is why I'm
doing it. You know, my plan is to bring this to 100 nations next year. And again,
return ownership of these models to the people and the data and governance to the people.
Because I think it's the right thing to do. It makes us safer and it uplifts a whole lot of people.
We're not talking about the biggest models. We're talking about just small, highly capable
models and data sets, you know? So, like, the AI models that are being developed in Silicon Valley are being fit, in terms of fitness, to the United States, because that's where they're being built.
You're saying other countries are just not going to be as interested in the Silicon Valley
generated AI models as their own internally incubated ones or ones more custom fit for them.
Well, if that's the only thing that's available, right?
It's not just the United States.
It's optimized for OpenAI, or it's optimized for DeepMind or Google, you know?
And so, again, you can put anything in there because what's going to happen is that people,
like, you have not your keys, not your crypto.
My thing is not your models, not your mind.
Because we will outsource more and more of our cognitive load onto these models.
Like you're seeing if you're using GPT-4 and Claude. And they can put more and more defaults into what you are doing.
So that is dangerous, honestly, right?
It's like using Google Maps
and then going off a cliff, you know?
But given that you don't know,
you will reduce your sovereignty if you do this.
But if there's no options, then you have to do it.
And everyone's being pressured to make decisions
from a company or country level
within the next year or two,
which is why there's a nice gap here,
where if you present a holistic alternative,
you can embed elements like self-sovereignty, governance by the people, and other stuff that otherwise you couldn't. And everyone
benefits from that. Even the proprietary companies benefit from that because they have richer data.
That means they can then take their proprietary models and customize for the legal system of
Ecuador, such as it is, you know, based on that open data set.
And taking this to its logical conclusion, then, I would certainly enjoy an AI model that is custom fit for me, for me personally. And so, like, you know, ChatGPT is great. I use it all the time. It's very useful. But I think as the systems get more powerful, then you are going to see the bluntness in one single model that's just trying to be one-size-fits-all. Is it in line with the ethos of whatever decentralized AI is to have, like, more personal, individualized models that are custom fit for the individual? And what does this look like, and how will this come about?
Yeah, I mean, that's my conceptualization of the intelligent internet. Every single person, company, country, and culture having their own AIs that are personalized to them and looking out for them. You've got your own PA and EA and assistant and analyst and everything like that. And the objective function of those AIs, because objective functions are super important, is your flourishing, or that kid in Malawi's flourishing, or that cancer patient in Indonesia's flourishing, bringing them the right information at the right time. Like, again, right now, download LM Studio, run Stable LM. It will run on your MacBook Air, which has got no specialist chips, faster than you can read. That's nearly GPT-3.5 level. It beats GPT-3. We'll get to GPT-4 level, us or someone else, by next year. But if that standardizes, it can proliferate much better
and people can build around it much better as a standardized primitive. Yeah, just like blockchain
enables you to standardize around primitives. So I think that is the vision of the future, but it's also
about, again, what's this model's objective function? Who's it working for? And as you outsource more
and more of your mind to it, because again, it will have the best, most charming voice in the world,
you know, and other things like that. If it's not your model, then it is literally not your
mind. You will outsource more and more of your capabilities. I mean, we see that, again,
us lucky people, you know, we've got our EAs and PAs and chiefs of staffs. We do outsource
a lot of our cognitive load there. And if it's a model that's by someone else, like, let's face it,
Google and Meta, their entire business model is advertising, which is manipulation, straight out,
you know? And again, they can't help themselves. They'll become more and more manipulative with
their models in order to achieve their corporate functions, which is why these models must be
open infrastructure owned by everyone for everyone.
as the base default option.
And then you can call on those other models for the specialist stuff,
but you can be insulated by your own models.
Of course, there's questions then about filter bubbles and all this stuff,
but that's why standards I think are important around this.
And again, it's not easy. I don't have all the answers. And that's why I don't want to be a CEO figuring out all the answers.
My God, it's too complicated.
Let's get the smartest people in the world building on this
and building out that ecosystem.
A world in which every individual has their own model kind of feels very similar to the world in which every single individual has their own smartphone, which we do. And it also kind of feels like a world in which there's this hypothetical, you know, dystopian vision of the future that people frequently articulate as a meme, which is that everyone's going to eventually have a chip in their brain. And this doesn't feel too far off from that, right? Like, we all have our own chips in our hands. It's called our phones. What you're saying is that you want to give everyone their own individualized AI model.
This also kind of seems to be a forcing function between people that do have access to AI, or at least choose to leverage it, versus the people that don't. Do you think once we figure out actually how to have individualized, useful AI models that are personal to us, that it will become more or less non-negotiable to have one of these things as a member of functioning society?
Of course.
I mean, you're far more efficient with this than without, right?
You've got an entire army of people that can generate, create, code, whatever. This is why, again, the digital AI divide will become huge unless we proliferate this technology at the base level, and put tablets into all the schools across Africa, you know, and give every kid their own AI and have national AIs for every country.
Like, a third of the world still doesn't have a mobile smartphone. It's very easy to forget that. You know, something like Twitter has like 200 million daily active users or something like that. That means there's, what, 6.8 billion people that aren't on Twitter. There is a kind of very Western viewpoint of this, and it has very interesting impacts, because I think the West will face a large amount of deflation from this technology, because it displaces knowledge work, whereas the Global South can leap forward. And so there's actually an ROI in bringing this technology to the Global South. Give every kid in Malawi an AI that represents them, along with the financial system and healthcare, and they will leap forward, because there'll be capital formation, same in Nigeria, same in Sierra Leone, just like they leapfrogged to mobile.
And as you said, it's non-negotiable, because if I'm having this discussion with all this knowledge coming into me from my AI versus you not having that, you're at a disadvantage. You know, so there is a race condition, a competitive thing here. It's like you having internet versus me not having internet. Of course you'll out-compete me. And we haven't got to that stage yet, because even though this AI has proliferated, it's not reached enterprise adoption. Later this year it'll reach enterprise adoption, and you'll see companies cutting staff, getting efficiency, and going at it. And then everyone has to keep up, which means the amount of investment in this space will go 10 times, 100 times over the next few years.
In fact, the total amount of investment that's gone into generative AI so far, I added it up, is, I think, less than the total amount that's been spent on the Los Angeles to San Francisco railway to date, which hasn't even broken ground.
So the numbers seem big, but realistically, as infrastructure, they're not big.
And again, that infrastructure needs to be globally distributed, or you'll have this massive AI divide.
Chips and brains, I don't know.
You can ask Elon that.
But definitely AI surrounding us.
Interesting.
You said that this is going to be
deflationary for the West.
Elaborate on that.
Well, why would you need to hire a graduate, again, for 90% of graduate roles in the next few years? Just use an AI. Graduates are kind of annoying. And almost all of US inflation is actually healthcare and education and the bureaucracy level, if we look at CPI composition.
I mean, energy is a component of that as well.
But you have knowledge-based societies
where you have infinite graduates. Supply versus demand, what do you think is going to happen?
Doesn't mean the stock market will go down.
Like, you'll have supernormal profit margins
because you can let go of people.
But again, you don't need to hire as many people
to have the same output.
The question is, can you build the demand versus the supply kind of equation?
So when I look at that, I see that as deflationary.
You know?
Again, I could be wrong, but certainly there is going to be social, political upheaval,
which is another reason why I would love to build
a national champion for every nation
owned by each nation to help them navigate through this.
And again, I could be wrong.
but it's a reasonable take, you know?
Right, right.
Yeah, because people are much more likely to, quote,
hire an AI rather than hire a messy human,
because humans tend to be messy.
Especially at the early stage, right?
And so, like, the plankton of the economy kind of disappears.
Like, what do you tell a kid finishing his computer science degree?
Yeah.
I don't know.
You know, like, go into generative AI,
but there aren't enough jobs yet.
I don't know what the jobs of the future are,
it's happening quicker than anything we'd ever imagined.
And then that three-year coder can do the job of 10 junior coders.
Right.
Well, do you think maybe optimistically that the elimination of kind of like the bottom tiers of jobs
would push people into being entrepreneurs, either by force or just by opportunity?
Yeah, I think that can happen.
But again, can you create enough entrepreneurs to match that up?
And that's happening at the same time as robotics and self-driving cars and other things.
like you're reducing aggregate demand in the economy.
And that's really unfortunate, and it happens synchronized across industries at the same time.
And so again, this is a danger here.
So there's a near-term danger or far-term danger.
There's near-term opportunity and far-term opportunity.
You know, we can build a better society of the back of this technology with the appropriate coordination.
And this is the final chance, I believe, we have over the next few years to make sure it has positive defaults versus negative defaults.
And the crazy thing is you said it's not a versus in some ways, because I don't think people are trying to have bad outcomes.
I think it's a complementary thing, which is pretty unique.
So let's build these organizations, these societies, and infrastructures, these protocols,
that the benefits are distributed and we can lift up the world.
Because if you've got a big lift in the global south, leapfrogging to intelligence augmentation and becoming more efficient, being more entrepreneurial,
they can drive forward the global economy and then get capital allocations from the West.
And then everyone wins from an ROI perspective, right?
Finance will become more seamless. Capital velocity will go up.
You will have AI market makers.
You will have contingent financing and all sorts of agent-based work that really upgrade the flow of capital.
But you've got to have people thinking about this properly.
I mean, who on Earth is thinking about that?
I tried to find who was thinking about that across the AI space.
I couldn't.
So I figured we're going to build teams to think about that and groups to think about that
and standards around that.
Summarizing some of the components of what you said,
would you agree that this is generally like destabilizing
for the mature economies like the West,
but a boon to the developing countries
who are able to develop and innovate faster?
Is this like a great equalizer between these two halves of the world?
I think so, but the second part is also because, as I mentioned earlier, all of finance is securitization, leverage, telling a story about an asset and how well you tell it. Much of the Global South is invisible, whereas this technology is very rich with context. So you can form capital, I think, far quicker than you've ever done before if you deploy this at scale as infrastructure. And then you can collateralize it, you can bundle it, and have capital flows from the West occurring at even greater paces.
Even as you struggle in various industries,
especially knowledge-based industries within the West,
you wouldn't want to invest in the middle, right? Like, Tyler Perry sees Sora, and he just calls a halt on his $800 million studio development. He's like, I don't know what's going on. Are you going to make capital investments when you don't know what's happening with knowledge infrastructure? Whereas in the Global South, you're like, the ROI is big, because you're coming from zero, in a lot of places, or very low, to a knowledge-based economy.
You know, every kid having their own education, everyone having their own healthcare, their own thing.
And again, it's not about innovation necessarily as that visibility,
that legibility and the financialization that occurs around that
because they're suddenly investable.
So, like, going on, ever since there was the Sam Altman debate, or debacle, about the whole Sam Altman being fired and the board and all that kind of stuff (this conversation was even earlier than that, but it really took on a new character post that event), there's been this acceleration versus deceleration debate, which is now starting to, I think, define larger and larger swaths of society. And at the very beginning of this podcast, you talked about how AI time is even faster than crypto time. And crypto time is really, really fast, especially in bull markets. And so now we have these two leading technologies really pushing the frontier of innovation forward, Web3 and AI. And they are both characterized by being extremely fast-paced. And so we have this accelerating technology, which is seemingly accelerating time, which, you know, threatens to leave behind people who can't keep up. And I think this is causing concerns in broader society, where the Elizabeth Warren types of the world see these two perceived-to-be-reckless industries really hitting the gas on both of their respective innovation frontiers, and that draws concern out of people. How would you say that this is going to define society, or how will society react to these two very accelerating technologies?
I mean, look, I was at the AI Safety Summit. God, I don't know if that was like 10 AI years ago, in September or October or something like that. And the King of England pops on the screen and says it is the biggest thing since fire. You're like, my God, it's the King of England saying that, right? Like, every smart person in the world now knows this is the biggest thing.
And every smart person knows that the regulation won't be able to keep up with this.
And it has a real human impact.
Again, I didn't believe in all the stuff with the FLI letter that I signed with Elon and others, kind of that six-month pause, but I did think we need to have discussion around this,
because it's impossible to encapsulate in your brain, like, what doesn't this impact?
Right?
And again, the fact that you have a model that can proliferate, 300 million downloads of our models, that's insane, right? The fact that you have the whole of a GPT-3-level AI on a gigabyte, and that will be a GPT-4-level AI, I'm sure, next year. That doesn't make sense. Like, where the heck does it fit? It's like homeopathic AI. I don't know where this stuff is, because the whole of Wikipedia compressed is like 26 gigabytes, right?
So I think that it's very difficult to keep on top of.
We need to have better institutions that can help guide with good objective functions.
What is the right way to tackle this in education, healthcare, finance, you know, in Germany, Vietnam, Thailand, and others? That's what I really want to kind of do, because I don't know what the answers are. What I know is that people need help.
You know, and there's a huge amount of capital that you can make from that,
but it's not even about that.
It's just we want to do our best to try and figure this out
and bring together right-minded people.
Because, again, within crypto, you know, you've got your real builders that build, and why are they doing it? They're doing it because they believe in a future that is self-sovereign, right? Part of that has to be self-sovereign AI. You know, self-sovereign identity, self-sovereign capital, self-sovereign AI.
And I think that if we manage that, then the world will be better and we can solve some
of those key concerns, but we've got to be as inclusive as possible in this discussion because
the people that can't keep track of it, they're going to be left behind, and they'll have no voice. Now, that sucks. Because it can't just be, again, a few people in Silicon Valley making these decisions that impact all of humanity. Right. Emad, as we bring this conversation to a close,
if there are any entrepreneurs out there or engineers out there who are interested in building
in the sector, what are the low-hanging fruit that you would really like people to go after first?
Oh, email. What advice do I have? Email. Please rid us of the scourge of email. You have all the technology and tools. Make it so we never have to look at email again.
Wait, so you were saying, like, we can use AI to just eliminate the entire institution of email?
Yeah, like, it will just, like, expand and compress the email, and then write beautiful prose to you, and then that prose goes to you, and then your AI will compress it down, and then we'll just get the little bits at each side that actually make a difference, and it'll be wonderful. Think of the man-hours saved.
As a guy who does not check my email, because I don't care to spend that much time, I would love
that tool. There we go, right? No, look, it's basically, again, the mental model here is infinite graduates, and then the coordination of infinite graduates.
What real world problems can you solve?
So obviously, if you had infinite graduates on your email, they would save so many hours, right?
And I've got them on my side as well.
But think about that mental model and kind of work through this, because you don't need to be an AI expert to do it,
because the models are actually quite easy to use, relatively speaking, which is insane.
And then also make sure you use AI for your development and your coding, because you'll be far more impactful.
Infinite graduates. Is that the world that we're going into, a world with infinite commoditized graduates? Yeah, like I said, AI Atlantis. AI Atlantis. Yeah.
Emad, this has been a fascinating conversation. Thank you so much for spending time with me today. It's a pleasure. We finally got to do it. I think this is like the fifth try or something. Yeah, yeah. For the Bankless listeners who want the lore behind the episode, Emad and us rescheduled this episode, I think, like five or six times, but it's finally happened. Emad, thank you so much. Cheers.
