Bankless - The AI Arms Race: Who Will Dominate the Next Industrial Revolution? | Josh Kale
Episode Date: March 3, 2025. The AI arms race is on, and the stakes couldn’t be higher. Bankless podcast producer Josh Kale breaks down which players are poised to dominate this next industrial revolution. From the transformative power of AI across industries to the geopolitical and economic implications, we explore the critical dynamics shaping our technological future—and what this means for investors, builders, and society at large. ------ 📣SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24 https://bankless.cc/spotify-premium ------ BANKLESS SPONSOR TOOLS: 🪙FRAX | SELF SUFFICIENT DeFi https://bankless.cc/Frax 🦄UNISWAP | SWAP ON UNICHAIN https://bankless.cc/unichain ⚖️ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle 🌐CELO | BUILD TOGETHER AND PROSPER https://bankless.cc/Celo ----- ✨ Mint the episode on Zora ✨ https://zora.co/coin/base:0xa717530992cd97a2680bf2c696433dd18b177d9a?referrer=0x077Fe9e96Aa9b20Bd36F1C6290f54F8717C5674E ------ TIMESTAMPS 0:00 Intro 0:25 A New Life Form? 4:23 The AI Rabbithole 11:49 xAI & Grok 3 26:51 DeepSeek Breakthrough 28:47 English as a Programming Language 30:19 Open Sourcing LLMs 34:13 Other Models 39:29 Meta AI 44:04 AI Bets 46:27 Elon Musk vs Sam Altman 51:59 Worldcoin 55:48 From OpenAI to AGI 1:03:01 AI Doomerism 1:10:06 Neuralink 1:15:20 AI vs Humans 1:22:08 Frontier Tech 1:24:43 Crypto’s Role 1:28:00 Closing & Disclaimers ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Discussion (0)
Just this week, in frontier technology, we had Microsoft announce this new quantum computer.
Grok came out, and it was the new frontier model that was made in 12 months.
There are computational improvements in genetic mutations and genes.
There's all these crazy frontiers happening downstream of this new form of intelligence.
It's allowing all this research in these other parallels to happen.
David, do you think humans are the bootloader for artificial intelligence?
Meaning, are we the scaffolding that exists to build this next superior
form of intelligence after humans?
I feel like this is a leading question.
I feel like your answer is yes, it absolutely is.
My answer is a little complicated, but mostly yes.
I definitely believe we are here to build this next crazy form of intelligence.
But I'm curious, what do you think?
Yeah, let me try and break down your question.
So we have biological consciousness.
We have biological life.
And I think society is like having this big conversation of like, what does it mean to have
not just artificial intelligence, because I think in 2025, we do know that artificial intelligence
is, like, totally possible, because you can go to, like, chatgpt.com and access some form of
intelligence. And there's this AI arms race going on where everyone is trying to create this like
model, which is highly parallel to some sort of brain, which is like biological human
intelligence, not even human intelligence, like just biological intelligence, because animals have
it too, like animal brains and human brains are the same tissue. And so we already know that there is
like artificial intelligence out there built into like circuits and code and software. And it's
highly parallel to the knowledge and intelligence that we know of in biology. And I think the broader
question, even more zoomed out from that, is: is there, like, actual consciousness there? And is there a new
life form out there that is downstream of what we create when we create artificial intelligence?
And again, the question was very leading. And I think your answer is yes, there absolutely is.
It certainly would appear as if so. And it's interesting that we're kind of going faster into
artificial intelligence than we are our own biological intelligence, where we understand more
about the brain of a neural network than we do our actual brains. So is the end state of this
understanding more biological intelligence and creating better versions of that through artificial,
or do we just accelerate to this crazy artificial world of robots and computers and intelligence
far beyond what we need to the point where we are possibly not even needed anymore?
Yeah.
I think your bootloader word is human intelligence.
The bootloader for artificial intelligence is actually doing a lot of work there.
Like that's a very big word.
Bootloader implies that like once it has been loaded, the human intelligence is actually
relatively like obsolete and redundant and not really necessary.
Do you also believe that?
It would appear that way, just based on the trend that we're going, that these super-intelligence systems, once they've learned everything from us, they will be better than us technologically, they will be better than us cognitively.
They have none of the biological restrictions that our, like, meat forms have today.
So it feels as if they can be the final form of what we lack in our biological limitations.
It's weird.
I think this conversation, like, we're experiencing when we go and we experience the idea of artificial intelligence.
Like I said, we can't actually go experience that.
You can go to chatgpt.com and you can experience artificial intelligence, like, intelligence,
in its, like, very purest, contained way.
But I feel like there's also a bunch of, like, adjacent conversations as well, kind of like robotics and, like, brain interfaces and Neuralink.
I feel like we're actually like kind of spanning.
We're not just talking about AI.
This is not just an AI conversation.
This is actually like kind of like a frontier tech conversation where it's a collection of a bunch of technologies.
Is my intuition right here?
Very much so.
Yeah, this is the first time in history we've had enough computers to even have this discussion, where we have enough power to build these systems to ask these questions. And it's accelerating super, super fast, faster than anything I've ever seen accelerate. So the questions continue to shift on a regular basis, which is what's really exciting here. And I think this is where I want to kind of zoom back out and just say, welcome to Bankless, where we explore the frontier of internet money and internet finance, and now also internet intelligence. I'm David Hoffman. And I'm here joined today
by a team member of the Bankless podcast team, Josh Kale.
Josh, welcome to your own podcast.
Thank you.
It's pretty funny to be on the other side of things.
Normally, I'm the one pushing the buttons behind the scenes.
Now I'm the one pushing the buttons in front of the scenes.
So it's cool.
It's really exciting to be here.
I'm very glad you asked me to come.
This is stuff that I love chatting about just for fun.
So it's cool to have an audience of people that hopefully also care about this.
Yeah, maybe to shed a little bit more light on what we're going to talk about in this episode and the motivations for this episode.
Bankless is primarily, deeply a crypto podcast. We've covered every single nuance of crypto that there is to explore, starting with DeFi and, like, infrastructure like, you know, blockchains like Ethereum, but then also down the rabbit holes of, like, digital identity. And I think we've explored most everything. And now we're kind of just, like, keeping up with the trends. We've dabbled into the world of crypto AI with AI agents, but there's a lot left on the table on just the AI side of things. And Josh here, I think, is an
explorer of the AI and adjacent technologies rabbit hole. And so I want to get a little bit of a
taste of what that rabbit hole is like, because there's a world of, like, alternative content
producers out there, like the Dwarkesh Podcast. You see this on Lex Fridman a lot. But then there's
also, like, kind of a long tail of content producers that are all producing content about
AI. And Josh, you are frolicking down that rabbit hole. And I want to get a taste for what that
rabbit hole is like. So maybe just before we get into even more conversations, it
maybe about the current trends in AI.
We're going to talk about Grok 3.
We're going to talk about Claude and Llama and Facebook and OpenAI and even, like, the open source
world as well.
But I just want to know a little bit more about Josh Kale and like why he's so excited about
AI and motivated to like go down this AI rabbit hole and kind of get a taste for what that
AI rabbit hole is like.
Yeah, I've always been super excited about frontier technology.
I think it's just like it's the reason to wake up in the morning and be excited about
something because over time I've seen how it improves the world.
It started with computers super early.
on. I was like, wait, this is super cool. People are building things with computers. And eventually
led me to AI, which was an amazing frontier that's super exciting. Crypto has kind of slowed relative to
its progress five, six years ago when Bankless started. It's still super exciting. But there is this cool,
new shiny toy on the block that is AI. And it's very much experiencing its exponential curve,
more so than even I've seen with crypto. And the implications are the same, if not much larger than
they were with crypto because now you're talking about actual intelligence of species that we can
duplicate and improve and make better. So the AI infrastructure now is this really cool, important
thing, but there's this like shining new thing on top that can now interact with these payment
rails that we created that is doubling and tripling in speed every week. And Moore's law was
like the number of transistors on a processor doubles every 18 months. With AI training models,
it's like every 18 weeks. It is so much faster than any other technology you've ever seen.
So it's been really exciting to just try to stay on the frontier of this.
And it's almost overwhelming how much is happening.
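For a rough sense of the gap between those two doubling cadences, here is a quick back-of-the-envelope sketch. The 18-month and 18-week figures come from the discussion above; the three-year horizon is just an illustration:

```python
# Exponential growth under the two doubling cadences mentioned above:
# Moore's law (~18 months per doubling) vs. AI training compute (~18 weeks).

WEEKS_PER_18_MONTHS = 18 * 52 / 12  # 78 weeks

def growth_factor(weeks_elapsed: float, doubling_period_weeks: float) -> float:
    """Multiplicative growth after `weeks_elapsed` weeks if capability
    doubles every `doubling_period_weeks` weeks."""
    return 2 ** (weeks_elapsed / doubling_period_weeks)

# Over three years (156 weeks):
moore = growth_factor(156, WEEKS_PER_18_MONTHS)  # 2 doublings -> 4x
ai = growth_factor(156, 18)                      # ~8.7 doublings -> ~406x
print(f"Moore's law over 3 years:       {moore:.0f}x")
print(f"18-week doubling over 3 years: {ai:.0f}x")
```

The same three years that yield a 4x under Moore's law yield roughly 400x at an 18-week cadence, which is the "so much faster" being described here.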
Just this week in frontier technology, we had Microsoft announce this new quantum computer.
Grok came out, and it was the new frontier model that was made in 12 months.
There are computational improvements in genetic mutations and genes.
There's all these crazy frontiers happening downstream of this new form of intelligence, and that's allowing all this research in these other parallels to happen.
So it's kind of this like top order function.
everything is downstream of in terms of innovation. What's exploring the AI rabbit hole like?
I'm sure, like, I named some podcasts, right? The Dwarkesh Podcast, he's really into this.
Lex Fridman's podcast. Sometimes he does, like, six-hour sagas with multiple AI teams that are so
incredibly dense. So I know that it's podcasts. I know it's also on Twitter. And so I think
the crypto community will see some similarities here. There's Twitter drama. There's podcasts.
There's also company releases. How is it the same? Like, your information diet to, like, follow
this meta and follow this trend? How is it the same? How is it different? So it's the
same in a lot of ways in the sense that there's tons of good podcasts. There are tons of great
Twitter accounts. I'd say about 90% of my information flow comes from Twitter still. There's a
really great community similar to crypto Twitter. They're not financially incentivized in the same
way that crypto Twitter is. So there is generally a slightly higher signal-to-noise ratio in these accounts,
which I find really interesting. And there's this really cool new thing that helps me learn,
which is just the product itself. What's cool about this frontier that has never existed with
others is you can actually interact with it and it could help accelerate your learning progress.
So if you have a question, instead of having to search through Twitter, instead of having to
search through these podcasts, you could actually just go on Grok, which has access to all of the
Twitter accounts. And you could say, hey, Grok, can you just find 10 tweets from the 10 top
people who are talking about this one thing today and aggregate them for me and share the
data with me? So it's a super accelerated learning curve, which is required because this space moves
so fast. Right. I listened to a three and a half hour long YouTube video.
that I got from your Twitter recommendations.
You tweeted this out that this one is one of the best videos.
It was from Andrej Karpathy.
And it was just a deep dive of how an LLM works,
like down to the basement, really deconstructing it.
And I found myself, like, pausing the video at times
and opening up ChatGPT and asking ChatGPT questions
about the video, which was teaching about how ChatGPT works.
And like my level of comprehension and understanding,
I've never learned something faster than I had,
both watching an expert kind of like teach me, a human expert, teach me, while also having this
like AI sidekick. So it's something very interesting of like, I'm trying to learn about AI, but I can
use AI to learn about AI. Yes, it's the self-fulfilling learning curve. And that video is actually
one of my favorite things. And that's how I would suggest a lot of people go about learning
AI. I think you could tie it to crypto where if you're new to crypto and you want to learn and
understand the frameworks and the basis of everything, you kind of start at the top. So you start
with Bitcoin, Ethereum. You understand the big players in the space. You learn who
Satoshi is, you learn Vitalik's story. It's similar where in the world of AI, you can learn about
OpenAI, you can learn about Anthropic, you can learn about XAI, you can learn who the founders are,
and kind of get a view of the land. And then skipping that middle part of the Twitter narrative,
of the social structure, and just going right to the foundation, right to the base, like, as close
to source as you can. And I think in that case, in the Ethereum case, it would be, like,
you're reading Vitalik's blog posts. Now you're going to Andrej's videos, and you're just watching
him explain what an LLM is from the core foundation. So you establish this high-level overview of
the architecture and then this very low-level overview of the specific things that make it work.
And once you have that framework, then you could enter the trenches and you can say, like,
you could start filtering through people's social narratives and you could start filtering
through their takes on things. And you have this very core understanding of how it actually works,
which I think is super important for a lot of people to do. And Andrej's video is spectacular.
I watched it too, all three and a half hours. Yeah. I'm on
my second run-through of the first half of it. I haven't even, like, actually entered the second half of it.
It's dense. It's really dense. It's really good. It's really good. It feels like a college-level
education on LLMs in a three-and-a-half-hour video. I love it. Right. So Josh, there's a bunch of just
current news cycle events as there will always be in this sector. And I kind of want to go through
a lot of them that have happened in the last, like, two weeks. It's the XAI and Grok 3 news that I think is
the most current event that I think the AI community is digesting, because
it's not unlike the DeepSeek news cycle event where people just realized, wow, there's this team
out there that everyone was under-indexing, and now it has, like, skipped to the front of capacity here.
And it's now another player in this game. And I think as we kind of go through the current events,
we're going to be able to kind of like set the landscape and like understand the board.
So we can start placing the pieces of all these different players in the game.
XAI, of course, Meta, Grok, even the Chinese players.
So I think that's where we're going to start this conversation.
But first, before we get there, we're going to talk to some of these fantastic sponsors that make the show possible.
Introducing Unichain. Built for DeFi, empowered by Uniswap, Unichain is the fast, decentralized layer 2 designed to tackle blockchain speed and cost challenges.
With its mainnet now live, you can enjoy transactions up to 95% cheaper than the ETH layer 1, all while benefiting from an impressive one-second block time that will be getting even faster very soon.
Unichain is the first layer 2 to launch as a stage 1 roll-up on day 1.
That means it comes with a fully functional permissionless proof system from the start,
increasing transparency and further decentralizing the chain.
More than 80 apps are joining the Unichain community, including Coinbase, Circle,
Lido, Morpho, and Uniswap.
You'll be able to bridge, swap, borrow, lend, launch new assets, and more from day one.
Built by Uniswap Labs, the team behind the protocol that's processed over $2.75 trillion in all-time
volume with zero hacks.
Unichain truly enhances DeFi experiences with faster, cheaper, and seamless transactions,
even across chains.
and soon the Unichain validation network
will allow anyone to run a node and earn by securing the network.
Visit uniswap.org and swap on Unichain today.
The Arbitrum portal is your one-stop hub to entering the Ethereum ecosystem.
With over 800 apps, Arbitrum offers something for everyone.
DeFi, where advanced trading, lending, and staking platforms
are redefining how we interact with money.
Explore Arbitrum's rapidly growing gaming hub
from immersive role-playing games, fast-paced fantasy MMOs,
to casual luck battle mobile games.
Move assets effortlessly between chains
and access the ecosystem with ease
via Arbitrum's expansive network of bridges and on-ramps.
Step into Arbitrum's flourishing NFT and creator space
where artists, collectors, and socialites converge
and support your favorite streamers all on chain.
Find new and trending apps
and learn how to earn rewards
across the Arbitrum ecosystem
with limited time campaigns from your favorite projects.
Empower your future with Arbitrum.
Visit portal.arbitrum.io to find out what's
next on your web3 journey. Okay, Josh, let's kick this off with XAI and Grok 3. I remember when
Elon Musk introduced Grok, and Grok is the ChatGPT equivalent that is embedded inside of Twitter.
I think it now has its own standalone app, but when it was originally Grok 1, to me it was kind of a
joke. It was fun. It was cool, but it wasn't serious. Grok 3 is something materially different.
And to my understanding, like, check me here, Grok 3 has kind of just, like, leapfrogged all the other
models and kind of placed itself at number one in terms of some of the benchmarks or metrics that
we use to measure AI LLMs. And now everyone's realizing that XAI, the Elon Musk XAI team, is a
world-class team that is able to compete toe to toe with everyone else. That's my summary. Check me,
if I'm wrong. And like, what would you add to that? I think a lot of people will check you on the
fact that it is the best model in the world. It is very subjective. What defines the best model? There's a lot
of tests that people create to test these models. It has performed exceptionally well. So I think everyone
can agree it is a frontier model. It's a frontier model. This is very much in competition.
Okay, it's pushing the frontier. Yes. So this is like in competition with every other frontier
model that is currently available. There's a lot of talk about GPT-5 and new Anthropic models that
can leapfrog this. But I think it's important to note that, like, Google's DeepMind is 13 years
old. OpenAI is eight years old. XAI has been building this for 12 months. So it is significantly
younger, and they have managed to go from Grok 1, which was basically a science experiment,
to frontier model in record time. And I think that's the story that people are really excited
about is the rate of acceleration that's coming out of an American-based company. DeepSeek kind of
took people by storm because it was Chinese and there were not many close ties with the
DeepSeek team. This was built right here, right next to all of these other companies. Right under
our noses. Yeah, right, and they managed to leapfrog them after just 12 months. So it's an incredibly
impressive feat of engineering. I think this story of XAI and Grok 3 is very on-brand for Elon Musk's
leadership, where he just really demands so much out of his teams. What were some of the things that
were kind of happening in the background that really, like, symbolize exactly how this
scrappy team was able to get so far ahead? This is incredible. I think this is a story that only
Elon can tell because there's really not many people in the world that have the resources monetarily
or intelligence-wise to get this done. He was able to attract a team of these builders.
And I believe they were able to get a 100,000 GPU cluster in 122 days and then doubled it in 90 days to 200,000 GPUs.
And that's the largest publicly known cluster in the world.
Wait, wait.
Elon Musk has the largest publicly known cluster of GPUs in the world.
I believe so.
Okay.
I think, like, privately, people have more, but publicly, this is the most that I think people know.
And prior to this announcement of the 100K GPU cluster, it wasn't even certain that clusters could be that large.
Because there's a lot of variance that needs to be handled,
and a lot of interference.
It's a whole different story,
but it's incredibly impressive.
And Elon, being Elon,
wants to do this as fast as humanly possible
because his goal is to crush OpenAI and Sam Altman,
who turned OpenAI into a closed source company.
So there are these two opposing forces:
there's OpenAI, which is now closed source,
that wants the frontier-leading artificial intelligence.
They just want to get there as fast as possible.
And then there's Grok, who has this other approach,
where they want to be the more open source
and they want to seek truth about the universe.
So the whole thing about Grok is they want to discover questions about the universe. That is their ethos versus OpenAI's.
So Elon wants to get there as fast as possible. So he couldn't build a factory because building a factory takes too long.
So they were looking for existing factories. They found one in Memphis that was abandoned by an old tech company.
They decided, okay, we are going to Tennessee. So that was the basis for why it's built in Tennessee is because that's just where the biggest factory was available that had enough infrastructure for them to build it.
So the whole team got up, moved to Tennessee. And in the first 120 days, they first got the GPU cluster, meaning they had to get that relationship with Jensen. They had to convince him to give them 100,000 GPUs, which is no easy feat. Then they had to wire them all together. They needed power. So I think initially the power grid only gave them 20% of what they actually needed. So what they did is they shipped in thousands of generators to put against one wall of the factory. And then they had to cool the factory.
they rented 25% of all of the cooling availability in the United States just to use for this one factory
and that cooled it down. So now they have the most power, the most cooling. They have this crazy
engineering team that's wiring all together as fast as possible. They're working nights and weekends.
And then there's these problems where there's variance. So now the power that's coming in has to power
200,000 GPUs. But the GPUs are spinning up and then they're spinning down over like 10 milliseconds.
And that variance in power makes them very difficult to handle for these generators. So they
imported these Tesla Megapacks, which are these giant battery packs, and they had to reprogram
the Megapacks so that they would fix the variance. And there's all these crazy technological
limitations that happened first here. And then there's the thing where now you have 200,000
GPUs running all at the same time doing this process called like a reverse gradient,
where it runs through and it trains the model. If any sort of variance happens on any bit of any
of those GPUs at any moment of that training run, the whole training run is gone.
Wait, so there has to be 100% precision by
this very large group of GPUs?
Every single run.
And they're doing hundreds of thousands of these runs.
Wow.
The amount of precision and engineering coordination required to do this was
unbelievable.
A lot of people would have said this was never possible,
and they did it in 120 days.
Just because there's too many variables that could go wrong.
Yes, and very much, like, in the way that AI models are frontier,
the manufacturing is very frontier as well.
And the actual training clusters are frontier technology as well.
People didn't know that you could actually string these together this quickly in this way
and do it the way they did. It's an amazing feat of engineering. I think that's the more interesting
story than Grok 3: how they actually built it so damn fast. Wow. Okay. There's a ton of parallels
here with Bitcoin mining. Yes. And I guess that's just the story of data centers, but the last time
I ever heard about the subject of megawatts or even gigawatts was from Bitcoin mining facilities requiring
a ton of power to power ASICs. But ASICs are, like, pretty stupid machines. They're, like, the stupidest
machines possible. There's one algorithm and it just spits out hashes. And you need to cool
it. You need to have a lot of power and you need to have a lot of these machines, but they don't have
to be synchronized with each other. And they run 24-7. They don't stop running and then start running
and then stop running again. So like the level of complexity here is a lot higher than my intuition
with Bitcoin mining. But that's really where the parallels end, right? There's no like synergies.
Like, there's synergies with Bitcoin mining and power grids, where, like, sometimes a local grid will
flex up and down, and you can kind of offset that with Bitcoin mining.
But there's no synergies here.
There's no overlap between Bitcoin mining and AI compute resources, is there?
There's not much.
There's definitely the energy conversation where how are you going to power all this,
but that's mostly where it ends.
I guess there's a lot of similarities in terms of networking infrastructure,
getting the GPUs to talk to each other, building them in a cluster.
But ASIC miners and the traditional GPUs that they're using for this training are super different.
So it has some similar parallels.
And I'm sure people who built data centers for Bitcoin mining could help build data centers for AI training.
but that's mostly where it ends.
It's very dissimilar.
Probably to the benefit of Bitcoin,
if these GPUs were ASIC miners,
that would be a problem
because that's a lot of compute
that could be used to attack the network.
So thankfully, two separate worlds.
Okay.
So you're emphasizing that it's not just the fact
that the Grok 3 model
is now a frontier model,
but it's also the fact that Elon Musk
kind of does what he does best
and he leads operationally
and commands a very small,
very bright, talented team
to have this extremely,
just crazy outcome in a very small amount of time. So now, as we know it, again, check me
if I'm wrong, but the XAI team is operating the largest known cluster of GPUs. They also had this
kind of competitive advantage where they could use Tesla battery packs, and maybe there's, like,
some domain knowledge from Tesla that the XAI team was familiar with, I'm assuming, just because
of proximity to the Tesla team. Do we think that Elon Musk will be able to retain this lead of
compute resource, or is that something that is always going to be an arms race with other companies, too?
Yeah, so I'm not sure 100% that they do have the single largest cluster. I believe it's the
largest publicly known cluster. A lot of these companies are much more private. So that's unknown.
But I think the story here, like you mentioned, is the rate of acceleration. The fact that they were
able to do this so quickly, using so much different domain knowledge from Tesla, from SpaceX,
even for energy and propulsion, stuff like that. I think that's the story. And I think based on their
rate of acceleration, they should be ahead soon. I love the comparison that Gavin Baker made where
it was like Google DeepMind had 12 years to get here, Open AI had eight years to get here,
XAI had one year to get here. If they continue at this rate of working nights, working weekends,
of really using all this unique domain knowledge to continue growing these, so long as they have
the GPUs, I don't see any reason why they would fall behind. So I imagine it kind of looks like they're at
the front now. OpenAI releases their model, Anthropic releases their model. Now they're in the back,
and then they're back in front after they release this new version of their training center,
which Elon said they want to power up to 1.2 gigawatts, I believe,
which is five times, which should be a million GPUs.
So you're talking, like, a 5x increase from this.
I very quickly think that XAI will just continuously lead
and leapfrog all of these at some point soon.
Interesting. So you think XAI will constantly be found at the number one spot.
And I think what you're saying is the number one spot is constantly going to shuffle.
But you think that like in terms of like the frequency of being at the number one spot,
XAI is a pretty strong candidate for being there?
It would appear so just because of their engineering abilities.
Now, there's a big caveat to that that we saw with DeepSeek, where DeepSeek did not have
the engineering resources that the United States does in terms of GPU cluster size, but
they were still able to produce a frontier model using this novel breakthrough of distilling.
So there is a world in which GPUs matter less and less.
In that case, it becomes a very different battle.
It's not really a manufacturing battle.
It's more of an intellectual coding battle of who can create the best algorithms, who can
distill the best models, who can make those breakthroughs. Until those breakthroughs happen,
it's very much a brute force challenge where how large can we get our cluster to train? And a big
question with this was scaling laws. A lot of people were questioning scaling laws. Does it
actually scale proportionally if you have 100,000 GPUs to a million GPUs? Do you get a 10x
improvement? So far, the scaling laws are intact and GROC has proven that. So until that changes,
it's likely just going to be a battle of GPUs. And I think the XAI team has a really good chance
of staying in the lead with that.
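The scaling-law question here (does 10x the compute buy a predictable improvement?) is usually framed as a power law in training compute. A minimal sketch with made-up constants, purely to show the shape of the curve (the numbers below are hypothetical, not taken from any real model):

```python
# Hypothetical power-law loss curve: an irreducible floor plus a term that
# falls off as training compute grows. Constants invented for illustration.

def loss(compute_flops: float, a: float = 10.0, alpha: float = 0.05,
         floor: float = 1.7) -> float:
    """Predicted training loss at a given compute budget (toy scaling law)."""
    return floor + a * compute_flops ** (-alpha)

# A 10x jump in compute (e.g. a 100k-GPU cluster vs. a 1M-GPU cluster,
# all else equal) still moves the needle, but by a shrinking margin:
print(f"loss at C:    {loss(1e23):.3f}")   # ~2.408
print(f"loss at 10*C: {loss(1e24):.3f}")   # ~2.331 -- better, but not 10x better
```

The open question the speakers raise is whether this kind of curve keeps holding at the next order of magnitude, or whether algorithmic tricks like DeepSeek's make the compute term matter less.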
Okay, so the DeepSeek model had this breakthrough,
this technical breakthrough in the way that the LLM was designed,
because it has this mixture-of-experts efficiency breakthrough.
And so if you ask it a question,
the nature of the question will kind of get routed
to different parts, different sectors of the LLM,
this is as I understand it,
and some sectors will light up and be more active
and other sectors will stay quiet and be less active.
And that's an efficiency of compute resources breakthrough.
Yes.
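The routing idea described above can be sketched in a few lines: a small gating function scores every expert for a given token, only the top-k experts activate, and the rest stay idle. This is a toy illustration of mixture-of-experts routing in general, not DeepSeek's actual implementation:

```python
import math

def softmax(xs):
    """Standard softmax, stabilized by subtracting the max score."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(gate_scores, k=2):
    """Pick the top-k experts by gate probability and renormalize their
    weights; the remaining experts stay inactive for this token."""
    probs = softmax(gate_scores)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    chosen_mass = sum(probs[i] for i in top)
    return [(i, probs[i] / chosen_mass) for i in top]

# 8 experts, but only 2 "light up" for this token -- that's the compute saving:
scores = [0.1, 2.3, -0.5, 1.8, 0.0, -1.2, 0.4, 0.9]
print(route(scores, k=2))  # experts 1 and 3 carry this token
```

Because only k of the experts run per token, the active parameter count per forward pass is a fraction of the total model size, which is where the efficiency comes from.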
But can't XAI, in Grok 4 or 5 or whenever they decide to incorporate this,
Can't they just copy that technique?
And now they have that technique, which was the deep seek advantage,
and they have their compute resources advantage.
So the compute resources are not commoditizable, because XAI owns those things.
But the DeepSeek technical breakthrough is totally commoditizable.
Is that the conclusion?
That seems right.
Yes.
So now DeepSeek has made this crazy chain-of-thought improvement, where
basically what they did is they took this gigantic model,
and they were able to distill answers from it using chain of thought.
So basically the giant model talks about how it gets to an answer.
The refined model uses that chain of thought to train itself
and give itself higher quality answers with lower compute requirements.
That's totally being used.
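The distillation loop described above can be sketched roughly like this (the teacher function and its hard-coded answer are toy stand-ins, not the actual DeepSeek pipeline):

```python
# Illustrative chain-of-thought distillation (toy stand-ins throughout).
# The big "teacher" model writes out its chain of thought; the small "student"
# is then trained on those traces so it reaches the same answers with far
# less compute.

def teacher(question: str) -> tuple[str, str]:
    """Stand-in for the giant model: returns (chain_of_thought, answer)."""
    if question == "What is 7 * 8?":
        return ("7 * 8 = 7 * 10 - 7 * 2 = 70 - 14 = 56", "56")
    return ("I don't know", "?")

def build_distillation_set(questions):
    """Collect (question, reasoning, answer) triples as student training data."""
    return [(q, *teacher(q)) for q in questions]

dataset = build_distillation_set(["What is 7 * 8?"])
for question, reasoning, answer in dataset:
    # In practice the student model would be fine-tuned on these triples.
    print(question, "->", answer)
```

The key point is that the reasoning trace itself, not just the final answer, becomes training data, which is why exposing raw chain of thought is such a valuable window into a closed model.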
One thing that's interesting about xAI, that Elon initially said wasn't happening but that I saw an engineer confirm on X yesterday: they're actually showing the chain of thought raw and in real time when you prompt the Grok model, meaning if you ask it a question, it will show you exactly what it's thinking as it's thinking it. So not only is the xAI team using this to improve future models, I'm sure, but what's interesting is they're currently closed source, yet they are making the chain of thought open, which means other people can now take this Grok 3 model, view the chain of thought, feed that to their own smaller refined LLM, and make their own. So it's this pseudo-open-source thing that they're doing. It's not open source, but it's a window. It's a selective window into Grok 3.
Yes.
that allows people to kind of see what's going on
and then actually kind of hook into that technically.
Yes.
And I think a lot of people get confused, because LLMs seem like this big, scary network of code that doesn't make any sense. But all these things are is token generation machines. They are just speaking plain English to themselves. The model reflects on the English it said, and it thinks again in English. And you can view this whole chain of thought in plain English.
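That token-generation-machine idea can be sketched as a simple loop (the lookup table below stands in for a real neural network, which would instead score every possible next token):

```python
# A toy next-token loop (illustrative): an LLM repeatedly picks the next token
# given everything said so far, so its whole chain of thought is readable text
# rather than hidden code.

def next_token(context: list) -> str:
    """Stand-in for the model: a lookup from context to the next word."""
    continuations = {
        (): "The",
        ("The",): "answer",
        ("The", "answer"): "is",
        ("The", "answer", "is"): "42.",
    }
    return continuations.get(tuple(context), "<end>")

tokens = []
while True:
    tok = next_token(tokens)
    if tok == "<end>":
        break
    tokens.append(tok)  # each generated token becomes part of the context

thought = " ".join(tokens)
print(thought)  # every step of the "thinking" is plain English
```

A real model does the same loop with probabilities over a large vocabulary, but the structure is identical: text in, one token out, repeat.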
There's actually no code really running inside the model. So the command line interface of these LLMs is actually English. So there's no hidden code behind the scenes here. There's no binary that these things are thinking in. They're actually thinking in English, and we are able to see into their brain as it thinks
in English. Yes. And one of my favorite tweets is by Andrej Karpathy. I think he has it pinned to his profile, where it says the hottest new programming language is English. And it's so true. And it really removes this layer of intimidation from LLMs and AI in general, because they think in English. They think the same way that we do. They infer in English, they process things in English, and you can interact with them in English. It's this really liberating thing that's happening, where you don't really need to write code to engage with these things, because they understand code and they can write code, but they also very well understand English and can interpret that into code however
you want. It's a super powerful thing happening. What's xAI's stance on open source? So Grok 3, to my knowledge, is not fully open source, but that new window into a part of Grok 3 I thought was pretty interesting. But Grok 3 is not open source. My understanding is that Grok 2 gets open sourced once Grok 3 is stable and released and fully in production. What's xAI's stance on how to open source their older models? Yeah, that's correct. So they have a frontier model, which is Grok 3. Once Grok 3 is finished and feature complete (they're still rolling out features every day), that's when they plan to release Grok 2 as a fully open source model. And open source meaning they'll release the weights that it was trained on. So it should kind of proceed like that as they go, where their frontier model will be private, and then the model prior to that will become public and open source for everyone
to use. So I have this tweet here. I'll just read it: Still insane to me that four of the original OpenAI founders have raised billions for direct competitors. Elon with xAI, Mira with Thinking Machines, Ilya with Safe Superintelligence, John with Anthropic. There are other frontier-model AI labs out there, but I think this kind of gives the lay of the land. It's like trying to set the board here. There are all these different companies. Each one has their own independent stance on open source, correct? And ironically, OpenAI, I think, is the most closed. xAI is leaning pretty far towards open source, but not completely open source. How do the different companies compare in terms of their stance? How do we think about this?
Yeah. So this is like the most insane version of Game of Thrones you could imagine. Because, going back to how we started, this is the fight for the bootloader of artificial intelligence. And I think the people on the OpenAI team were at the frontier and they realized this. So what happened was: Elon got pissed because OpenAI, the open they named it after, was no longer open. And then Mira got upset, who is one of the other co-founders, for unknown reasons, and she left to start her own company. Then Ilya left to start his own company. And then John left for Anthropic and is now working with Mira. So a lot of the co-founders of OpenAI either realized the stakes were too high for one person to control it, or they had a disagreement about how it was being controlled. And they decided to exist on this spectrum of openness to closedness. So in the case of Elon and xAI, his goal is to create an open AI platform, to really be the open AI. He's not fully doing that yet; Grok 3 would be open source if they were. But I think the intention over time is to release it and to seek truth. Mira has a company that just came out of stealth called Thinking Machines that is kind of adjacent to this. Ilya is very focused on safe alignment, making sure that the AI is aligned with human intentions and the goodwill of people. And then John actually left OpenAI to go work at Anthropic, and then left Anthropic to join Mira at her Thinking Machines company. So it's this wide spectrum. There are a lot of people who are very excited to compete against each other for this prize of who can have the leading AI models. I'm seeing like a parallel to
the Ethereum co-founders story. So Ethereum had like eight co-founders at the very beginning. Gavin Wood left to do Polkadot. Charles Hoskinson left to do Cardano. And so is that kind of a similar structure that we're seeing here? Like, Ethereum was created, AI was created, smart contracts were created, and then all the co-founders were like, wow, this is really cool, but I want to go lead my own project, because I have a slightly different philosophy on how things ought to get built.
Yeah, this has happened throughout history a few times. You also saw it traditionally with the PayPal mafia, where this incredible group of people all built PayPal and then left to start all these other companies. Thiel created Palantir and all these other companies, Elon was from there, and David Sacks was part of that group. There are a lot of names you know today that were all part of that cluster. I think it's a very similar thing. But there are also other AI lab companies out there, right? Because we haven't even talked about Meta here. Meta doesn't have an OpenAI co-founder. It just has Zuckerberg and the Meta team. So there's also Meta. Anthropic is John, and John is an old OpenAI co-founder. Is there anything else other than Meta that's not listed here that's worth discussing, as in who's in the game, who's on the board? The personalities aren't as extreme. This covers most of them. I mean, Meta has the most known one, which is Zuck. And then there's Sundar, who has Google and Gemini. Yes. So those are big names. I never hear anything positive about Gemini, though. I only hear negative things about it. Gemini is actually a really great model, and I encourage you to try it. Really? The specialty with Gemini currently is they have an incredibly large context window, meaning you can fit a lot of words (tokens, people call them) into the model that it can reference quickly. With a lot of traditional models, or just normal models in general, they're referring to this kind of densely compressed data set. With the context window, you can feed in very high quality data that's not compressed. For example, you could upload a PDF; Gemini allows you to upload the most information and also share the most information. So a context window is like, say I'm looking at a document, and when I'm a human reading it, I'm remembering the page that I just read, and now I'm on the second page, and that's my local amount of attention that I'm able to keep in my short-term memory. And a large context window is like, well, the Gemini model can actually see maybe 800 PDF pages all at once, simultaneously, and other models don't have that large of an attention field. Is that a way to understand this? Exactly. Yeah, that's a perfect explanation. Being out of context would be like you reading a book, putting it down, coming back the next day and trying to remember specific things. You remember it, but sometimes it gets a little blurry. Sometimes you hallucinate and come up with the wrong answer. With a large context window, it has all of it presented right in front of it.
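A rough back-of-the-envelope version of that comparison (the ~4 characters per token and the page-size figure are common rules of thumb, and the window sizes are illustrative, not exact vendor numbers):

```python
# Back-of-the-envelope check of how many pages fit in a context window.
# Rule of thumb: ~4 characters per token for English text; ~2,000 characters
# per dense page. Both numbers are approximations for illustration only.

CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 2_000

def pages_that_fit(context_window_tokens: int) -> int:
    """How many pages the model can 'see' at once, under the rules of thumb."""
    return context_window_tokens * CHARS_PER_TOKEN // CHARS_PER_PAGE

small_window = pages_that_fit(8_000)        # an older-style 8k-token window
large_window = pages_that_fit(1_000_000)    # a long-context model

print(small_window, large_window)  # roughly 16 pages vs 2,000 pages
```

Under these assumptions a million-token window comfortably covers the "800 PDF pages at once" scenario described above, while an 8k window holds only a handful of pages.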
It is very clear, very accurate, and that's what Gemini is particularly good at. So I wouldn't discount Gemini. Gemini is doing good work. Okay, why do I think of it negatively, though? Maybe it's the censorship. Like, Gemini is a super woke, biased model. Yes, it could be the censorship thing, where Gemini was the one that was generating pictures of non-presidents as presidents on murals, and it had a lot of drama for that. It was very much the woke AI. I think they're starting to turn against that. I think the difference, and the reason why you hear less about Meta and Gemini, is because they're not part of this cabal of people. Facebook and Meta had nothing to do with AI a while ago. Google had nothing really to do with AI. Now they're turning their resources towards it, but a lot of these other companies are AI-first laboratories.
Okay.
So their entire existence is about building AI models, whereas with Meta and with Google, those are just pillars of the larger business.
Right.
Okay.
So it's Meta and Google that are the two companies that have AI products but also have whole other product lines, like Meta has Facebook, Instagram, WhatsApp, all that kind of stuff. Does that change anything in terms of what their piece on the board looks like or behaves like? How do I think about that? Yeah, I think it's different for Meta and Google.
For Google, AI is a requirement. Google's largest product is its search engine, and AI very quickly has started to attack that. So they're trying to not be disrupted. Exactly, yes. Or in a sense, they're disrupting themselves before others can disrupt them. So they want to get ahead of search getting destroyed or eliminated by AI models by building their own. So I'd say Google is definitely doing it out of necessity. And again, there are a lot of downstream benefits that come from building frontier models for their own business. They can now optimize their engineers; they can optimize a lot of the code base. I forget the number, but on the Dwarkesh podcast recently, a Google engineer said something like 25% of all the code contributed to the code base now is written by AI. So not only does it help defend their moat in search, it also helps improve the productivity of all of the employees at the company. In the case of Meta, I think it's a different plan. Meta has Facebook as their cash cow. They have the attention, but the attention is kind of messy and not really productive. It's a social media platform. So what they have is this moat of user attention, and they want to try to get ahead of this trend and point that attention towards their products that use this AI. So Meta's open source approach is allowing people to build their own tools and their own communities on top of their Llama model, in hopes that they can build products that strengthen the moat around the existing audience that they have.
So a problem right now is that a lot of companies are raising a ton of money to train these clusters. They're very expensive, but there's no clear path to revenue. You're just kind of throwing money into a black hole. You're getting these really good models, but there's really no moat effect outside of ChatGPT, which is an application. So you want to take your existing users and turn them into a moat as best you can, and that's kind of what Meta is doing. They famously did this with PyTorch, which came out of the old Facebook team, and there's at least one other project like that as well. They like to create these tools in-house, open-source them, and then build a development community around them to leverage their existing users. So Meta seems the most differentiated versus any of the other AI labs, simply because of the social media user base that Meta has. That's my intuition.
Is that right? Yes. Yeah, I think we probably like Meta the most, as crypto people who are super into open source. Meta has been the largest, most transparent contributor to open source AI technology. They've poured billions of dollars into training these models, and they've open-sourced the weights. They haven't open-sourced everything; there are still private parts that they're using for their own proprietary platforms. Instagram and Facebook are all built using AI tools that Meta hasn't announced. But I think they're probably leading the way in flagship open source models that people can build off of.
And that's been really exciting.
You could actually run one yourself. As a test on my little MacBook here, I tried to run one of their smallest models, and it actually works. It's very slow, but you can run a model locally on your machine. And it's really cool that larger companies have access to these huge models to then train for their needs and do things how they wish.
Interesting. On the spectrum of closed source to open source, is it Meta that's on the most open source side? And if that's true, why are they there? In terms of big players, I would say yes. They're the largest and the most open source, relative to someone like OpenAI or even Anthropic, which are very large and very closed source. I think they're there mostly because of the access that they have to users currently, and the fact that they were behind. When you're behind, unless you're Elon with xAI, it's very difficult to catch up to the frontier model.
So there is a case to be made that if we can't beat them, we could just dilute them.
And by releasing these models that are open source, allow anyone to use them, allow people to build on top of them, allow people to distill them, they can kind of grow this open source conglomerate faster and cheaper than a lot of the closed source people can.
A lot of the closed source models like ChatGPT have their entire revenue built around their subscription. And if Meta can provide a free alternative, that starts to eat away at their profit margins, because there is this shiny free thing that works. It's 80% as good, but it's free.
And then there's a second thing where they have this moat of tons of attention and users.
And if you're a developer, you want to go where the users are.
And if Meta is offering you this open platform to build on top of, to access this whole user base, I think that's super appealing to developers as well. So the thing for Meta is they probably wanted to dilute others while allowing developers to come in and giving them access to the most tools possible. So they can get this free attention, this free workforce, essentially, of developers who want to come and build on top of their existing platforms.
Okay, so Meta just has revenue streams from elsewhere, whereas OpenAI and the more closed source ones, the AI-only laboratories, have to make money from their AI product. And so that's probably the incentive for them to stay closed source. OpenAI charges $20 to $200 a month for some of their most advanced models. Meta doesn't have a most advanced model, but they can be a loss leader. Their AI labs can be a loss leader. They can penetrate the market with a technically inferior product, but they can lean into the value and the ecosystem of open source and try to volume their way into the market. And what I mean by that is, when you open source your ecosystem and you commit to open source, you recruit the army of open source developers to create an ecosystem around your LLM. So they're trying to dilute the value of having a closed source frontier model by having this more open source one, and trying to make an ecosystem platform more than a very powerful model. Is that kind of the understanding?
Absolutely. Yeah. Meta does not need AI revenue to survive. Right. And OpenAI does, and Anthropic does. So they are forced to keep these closed. They are forced to charge a premium. Meta does not. Meta can earn its growth over a long period of time from network effects, from gathering an audience of builders and developers that want to build on top of it. Whereas these closed source models need to generate some revenue, because the promise is that these will eventually absorb large swaths of the GDP of the world and produce trillions of dollars of revenue. But until that point, you need to have some sort of revenue model to keep the lights on, to continue training, to continue getting there. And the only way for these closed source models to do that now is to just charge a premium and try to create
the best apps they possibly can. Do you have any bets? Where would you place your bets in terms of who's going to win here? How do you even think about that? Do we even need to think about that, or should we just grab our popcorn and watch these players duke it out and be chill with the drama? Or do you have a favorite here? And if you do have a favorite, why?
Well, my favorite currently is xAI. I am obsessed with the engineering team at that company. I think the fact that they spun up that cluster in 12 months is absurd, considering they were quoted something crazy like four to six years. So the rate of acceleration from people who are super hardcore and also care more about truth seeking versus alignment gets me really excited. Grok has an early chat feature that some people got access to. You can talk to it, and it sounds like it's your friend. It doesn't really sound like it's trying to keep some parts of the truth from getting to you. It's just very raw, very open. I love the mission statement. I love the rate of acceleration. I love the hardcore engineering. They just seem the most generally aligned, because they want to get as far and as fast as possible, and then dump it all off to the public and allow it to be open source. I think OpenAI has a really good chance. I think Anthropic has a really good chance, Gemini too. They all have a great chance.
And there's this crazy sub-level of drama there, where all the co-founders are leaving and competing against each other and suing each other. Famously, when Sam Altman got kicked off the board in 2023, part of it was the result of Ilya Sutskever, one of the co-founders. He decided that he would vote to kick Sam off the board, which is a crazy thing to do because they were co-founders, but clearly they didn't see eye to eye. And the big meme that came from it is: what did Ilya see that caused him to freak out so much that he wanted to kick the CEO out of the company?
He later apologized and changed his mind, but after doing so, he left. He said, that's enough, I'm building my own thing. And we still don't know the truth behind all of that drama, right? It's still kind of opaque to us. Yeah, there are a lot of stories that came out of it. Nobody knows the actual source of truth. I don't think it has come out exactly what happened. We don't know exactly what Ilya saw, but we know that he left and created a company based around alignment, which is making sure the AI is aligned with human values, because clearly there was something missing in the OpenAI team. So it's this crazy Game
of Thrones, fight for power. Really fun drama, really fun to watch. It's best to just grab the popcorn, sit back in your seat, and watch them all duke it out, because the stakes are very high. Right. What's the deal with Elon Musk versus Sam Altman? There's a lot of drama there. That's some of the most drama that's currently happening, right? Download me on that.
Yeah. And of course, Elon is in the spotlight for 20 other things, but this is one of them.
And basically, when OpenAI got started, it was named OpenAI because it was supposed to be open source research for AI models. So Elon wrote a check for $50 million, some amount of change, and OpenAI was built as a nonprofit. Elon was one of the co-founders, along with Sam and a few others, Mira, Ilya, the whole crew. And then he left and started working on other things; he was building Tesla, he was building SpaceX. Over that time, Sam, who was running the company, decided that in order to train these frontier models and accelerate the progress of AI research and development, they needed more money. They couldn't exist as a nonprofit. They had to raise and become, not quite a for-profit, but closer to a for-profit, to fund all the GPU clusters, all the electricity, all the training. So Sam decided that he should start raising a lot of money, and Elon got very upset about this. Because the whole goal for Elon, and the reason they named it OpenAI, is he wanted to keep it a nonprofit. That way, there's no incentive for anyone to use this AI technology harmfully, and it allows the open research of something he believed was going to be very powerful. Once it does become powerful, everyone has equal access to it. Sam, over time, slowly strayed away from this. He raised a bunch of money, he trained a big cluster, and now their revenue is built on this closed source thing that is very, very powerful, and he has no intention of giving it up. In fact, he's actively trying to create a for-profit branch of OpenAI, which is part of the recent bid that Elon made of, I forget the amount, like $94 billion or something. Because Sam wanted to spin it off cheaply, Elon is kind of trolling him. He made this huge offer for OpenAI that now has to get justified by the board, and it is kind of sabotaging Sam's plans to make this for-profit arm. So Elon is very pissed; he went and made his own thing. Sam is upset with Elon because he thinks that you need money in order to train these clusters, and that's
kind of where they butt heads. Imagine a world where your day-to-day banking runs on a blockchain.
That's exactly what Mantle is building, powered by a $4 billion treasury and poised to become the
largest sustainable on-chain financial hub. As part of their 2025 expansion, Mantle is introducing
three new core innovation pillars that bridge traditional finance with decentralized technology.
First is their enhanced index fund aiming for $1 billion in AUM by Q1. It provides optimized exposure to Bitcoin, ETH, Solana, and USDC, complete with built-in yield opportunities.
Next, Mantle banking promises to revolutionize global value transfer through seamless blockchain-powered
banking services, bridging crypto into your daily life.
Finally, MantleX blends AI with DeFi to deliver an intelligent, user-friendly experience for everyone. And the best part is that this is all in addition to their already launched products, like Mantle Network, mETH, and FBTC.
Ready to step into the future of finance?
Follow Mantle on X at Mantle underscore official
and join the on-chain revolution today.
Celo is transitioning from a mobile-first, EVM-compatible Layer 1 blockchain to a high-performance Ethereum Layer 2 built on the OP Stack with EigenDA and one-block finality, all happening soon with a hard fork. With over 600 million total transactions, 12 million weekly transactions, and 750,000 daily active users, Celo's meteoric rise would place it among the top Layer 2s, built for the real world and optimized for fast, low-cost global payments. As the home of the stablecoins, Celo hosts 13 native stablecoins across seven different currencies, including native USDT on Opera MiniPay, with over 4 million users in Africa alone. In November, stablecoin volumes hit $6.8 billion, made for seamless on-chain FX trading. Plus, users can pay gas with ERC-20 tokens like USDT and USDC and send crypto to phone numbers in seconds. But why should you care about Celo's transition to a Layer 2? Layer 2s unify Ethereum; Layer 1s fragment it. By becoming a Layer 2, Celo leads the way for other EVM-compatible Layer 1s to follow. Follow Celo on X and witness the Great Celo Happening, where Celo cuts its inflation in half as it enters its Layer 2 era, continuing its environmental leadership.
In the wild west of DeFi, stability and innovation are everything, which is why you should check out Frax Finance, the protocol revolutionizing stablecoins, DeFi, and rollups. The core of Frax Finance is FraxUSD, which is backed by BlackRock's institutional BUIDL fund. Frax designed FraxUSD for best-in-class yields across DeFi: T-bills and carry trade returns all in one. Just head to Frax.com, then stake it to earn some of the best yields in DeFi. Want even more? Bridge your FraxUSD over to the Fraxtal Layer 2 for the same yield plus Fraxtal points, and explore Fraxtal's diverse Layer 2 ecosystem with protocols like Curve, Convex, and more, all rewarding early adopters. Frax isn't just a protocol; it's a digital nation, powered by the FXS token and governed by its global community. Acquire FXS through Frax.com or your go-to DEX, stake it, and help shape Frax Nation's future. Ready to join the forefront of DeFi? Visit Frax.com now to start earning with FraxUSD and staked FraxUSD. And for Bankless listeners, you can use Frax.com slash R slash Bankless when bridging to Fraxtal for exclusive Fraxtal perks and boosted rewards.
Was the $94 to $97 billion offer by Elon more than what people assume OpenAI is worth, or less than what people assume OpenAI is worth? How do we think about that number in relation to reality?
I'm a little fuzzy on the exact numbers, but I do know that the number Sam was hoping to take it at was $40 billion.
40.
And this is more than double that.
40. Yes.
So this was more than double that.
And Sam said no.
This was a move by Elon to offer him a very high number, knowing that Sam would be resistant to the fact that it came from Elon. And you said that now Sam has to go and justify to the board why the $97 billion valuation is not good.
Yes.
And this is, granted, speculation, so take it with a grain of salt. But it would make sense that the reason Elon did it is to sabotage the plan Sam had to spin off a for-profit part of OpenAI. Because Sam wanted to get it for $40 billion, Elon offering $90-whatever billion means it cannot be justified to the board: hey, if we're turning down this offer, we're not giving it to you for half that.
That's crazy.
So that's kind of where we're standing now. So in a way, people believe, though it's not for sure, that Elon was trolling Sam, because he knew Sam wouldn't sell, but he wanted to make it harder to create that for-profit entity.
But it was a credible bid, correct?
So Sam could have said yes, and then Elon and Sam would be in a negotiation after that.
Yeah, we've seen this happen before with Twitter.
He made the bid.
People didn't think he was serious.
He tried to pull out.
They wouldn't let him pull out.
And now he has Twitter.
So it's like one of those things where, I mean, this is a real bid in the sense that if they wanted to accept it, Elon could now own OpenAI today.
Is it still on the table?
What's the status of the offer?
I don't know.
Things have just gone quiet?
I think it was quickly shut down by Sam, that's for sure. Sam wanted no part of this bid. He was like, I'm not taking it, but I'll buy Twitter for $9.74 billion. He trolled Elon back. So he kind of trolled back, yes, or attempted to as best he can. I find that Elon cares much less than Sam in the sense of being serious. So Sam's working on his trolling abilities, but that's kind of where they're at currently: they're not selling OpenAI anytime soon.
But didn't OpenAI raise from Microsoft a while ago, raise a bunch of money from Microsoft at a particular valuation? They did. And the deal with Microsoft was super bizarre. This is a one-of-a-kind type of deal, again, because the prize for winning this race is so high. There is a certain return multiple on Microsoft's investment. I don't know the specific amount, but once it reaches that multiple, all the shares revert back to the nonprofit to then be distributed, because the assumption is that AI will unlock the world's GDP. It will create so much value for the people who own it that it does not make sense for a single shareholder to benefit from all of it, because it will be so much money that money will not be as important as it used to be. If you generate a hundred trillion dollars' worth of value, Microsoft doesn't need that. Sam doesn't need that. So the idea with the Microsoft investment is they will get a return up to X percent, and after that X percent, the shares get returned to the nonprofit, where they can stand to benefit humanity as a whole. And that's kind of where Worldcoin comes in. Sam kind of saw this future of infinite GDP unlocked by robots, and he wanted to come up with this kind of global payment for citizens. And that's kind of where World fits into this and ties to crypto.
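The capped-return structure described above can be sketched numerically (all figures hypothetical, not the actual Microsoft/OpenAI terms, which are not fully public):

```python
# Sketch of a capped-return deal (hypothetical numbers throughout): the
# investor's upside is limited to a fixed multiple of what they put in;
# everything above the cap reverts to the nonprofit.

def split_proceeds(invested: float, cap_multiple: float, total_value: float):
    """Return (investor_payout, nonprofit_remainder) under a capped multiple."""
    cap = invested * cap_multiple
    investor = min(total_value, cap)  # investor can never take more than the cap
    return investor, total_value - investor

# Hypothetical: $10B invested at a 10x cap, and the venture ends up worth $1T.
investor, nonprofit = split_proceeds(10e9, 10, 1e12)
print(investor, nonprofit)  # investor capped at $100B; $900B flows to the nonprofit
```

The larger the eventual value, the larger the share that ends up outside the investor's hands, which is the logic behind the "benefit humanity as a whole" framing.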
Worldcoin is like the distribution mechanism. So when there's one winner of the AI race and the world's GDP is essentially owned by this LLM or this AI company, then we need to figure out how to distribute the spoils of the world's GDP to the people. We need a distribution mechanism, and that is Worldcoin. Is that the connection?
Yes, exactly. The idea is that so much money will be generated by these AIs that it will relinquish everyone's responsibility to be productive in society. Therefore, we need this payment method to, one, prove that you are human. That's why they scan your eyeballs: because AI so far cannot create artificial irises. And by proving that you're human, you can get some sort of universal basic income that can sustain people moving forward.
So that's a weird dystopian future to think about.
But it's something they're taking seriously
because WorldCoin is a very serious company
with many millions of users
who are thinking about these types of problems.
Okay, there's a big ass gap there.
There's a big gap there between like,
All right, we're going to make these AI products, and, you know, ChatGPT is going to help me think, and it's going to help me code. And that's where we are today, and it is truly changing the face of GDP in a small way. And then we're jumping forward to: okay, then one AI is going to win, and it will be the world's GDP, and it will direct the world's resources, and owning it is equivalent to owning the world. Can you carry us through that logical train of thought? How do we go from where we are today to a single AI owning the world? How would you carry us there? Sure. Yeah. And granted, that's probably the worst case scenario: a single
entity owning... Worst case. We don't like that. No, we don't like that. Because that would be like a single
entity owning the largest nuke in the world. They have the most power. They have all their like money.
They have everything. And to leave that up to one person in this case, in this example, Sam,
seems very scary. We don't want that. So that's why there is this race of companies and also
open source models who want to dilute that and also play a part in getting there. Now to get
there is a very long way. There's this debate on AGI and have we reached AGI and what exactly is
AGI. That probably comes first. I guess you could loosely define it as being smarter than a human
and basically every facet. So right now, the goal is to make an AI model that is smarter than a human
and basically everything a human can do. And we're not quite there yet because we're currently
training on all of the human knowledge base. So the way these models work is they'll scrape the entire
internet, they'll scrape all the books, they'll scrape kind of all the publicly known knowledge
that humans have created. They will condense it into this model and they will train the model
based on that data to kind of emulate it. But it's not actually creating original ideas just
yet. It's kind of doing this stuff based on what we've already created. Now, it's mostly
gone through all that data. It's trained itself on all the data. And now they're working on ways to
make it think smarter. We kind of see this early on with chain of thought reasoning. That was the big
DeepSeek breakthrough: the model can think out loud and then reason
against those thoughts. So it can kind of iteratively think on top of itself and get smarter that
way. But there hasn't quite been this breakthrough where it is generating truly original
thoughts. So there's a big leap there between this is smarter than a human at math. This is
smarter than human at coding. This is smarter than human at socializing. But there's not really a model that
can do all of that at once. That's the first step. We're very close to that. I would imagine that probably
comes in the next 24 months, just based on the rate of acceleration. But then there's also a huge
jump there to actually impacting real world stuff. These robots, like, they don't have bodies just
yet. They don't. They're not physically capable in the way we are. So there's a huge jump
between unlocking tons of GDP and unlocking just like marginal increases of GDP. I think
AGI is the first step where if you can get these brilliant thinkers that never sleep,
can think incredibly quickly, never need any energy. Again,
aren't limited by biological limitations, that's probably the natural first step. That unlocks a
lot. But there's still this whole world of robotics, of automation that needs to happen in order
for them to really replace stuff that we're talking on the scale like decades from now.
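The chain-of-thought idea mentioned above can be made concrete with a toy sketch. This is just hypothetical prompt text written by hand; no real model or API is called, and the exact phrasing is illustrative, not any particular product's format:

```python
# Toy illustration of chain-of-thought prompting. Both prompts are
# hand-written strings; no model is actually queried here.
direct_prompt = (
    "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\n"
    "A:"
)

# Asking the model to reason out loud before answering lets it check
# intermediate steps instead of guessing the final answer in one shot.
cot_prompt = (
    "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\n"
    "A: Let's think step by step.\n"
    "1. 45 minutes is 45/60 = 0.75 hours.\n"
    "2. Speed = distance / time = 60 / 0.75 = 80 km/h.\n"
    "So the answer is 80 km/h."
)
```

The second prompt is the shape of the "thinking out loud, then reasoning against those thoughts" behavior described in the conversation.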
Right. Okay, so let me regurgitate what you said. So as I understand it, make sure I got it right.
There is this library of human knowledge, all human knowledge, is recorded somewhere, mostly on
the internet, but also in books and other documents that are out there in the world. And the
current LLMs that are in development by all of the players that we've talked about so far on
the episode are trying to ingest all of that knowledge, make a model that understands and
knows all that knowledge, and then can efficiently with minimal compute resources or
just, you know, efficient compute resources, spit out that knowledge, regurgitate that
knowledge better than any one human can at any one particular domain. So there's all of these
domain experts, there are chess grandmasters, there are PhDs, like rocket scientists, theoretical
physicists, all areas of expertise where there is like a leading frontier human. We are trying to make
an LLM that is better than all humans across all human-generated knowledge. And that's what you just said
is we are trying to get there maybe within 24 months. We will have an LLM that is better than all
humans at previously generated human knowledge. And we're going to get there. This seems pretty
reasonable, not too crazy. But the new thing is, well, humans don't have a monopoly on knowledge.
There's knowledge out there that other species, other types of intelligences can create and know.
And so the next step that we don't know how we're going to get to is can we make an AI model
that can generate new knowledge independently of humans. And that is actually a much bigger
engineering challenge that we do not know how we're going to get to. Yes. Smarter people than I can
probably tell you a clearer path to getting there. It does seem that leading researchers think
like this is imminent and very possible very soon. I don't know exactly how it works, but I know the
implications of it, which are just creating original ideas based on this foundational knowledge that
we have. So, for example, in the world of biology, these models are trained on all of the biology
we know. Using that biology, with the compute power of thousands and thousands of
PhD students, it can infer new tests worth running. So in the case of curing cancer, for example, they can
take all of the biological information we've ever unlocked about cancer cells, about how they work,
about like the core biological things. They can make these inferences based on all this knowledge
that sometimes we can't see. There's this funny thing that I heard a biologist talk about actually
where it takes a very small knowledge unlock to unlock a very large picture. So in the case of a mouse
being inside of a box and the box is a maze and it doesn't know how to get out of the maze,
but it turns out the answer to the maze is that at every prime number you turn right. And if you're a
human and you know what a prime number is, you can very quickly infer, okay, prime numbers get you out of
this maze. But without having that knowledge, you can be stuck in that prime number maze forever,
because it's very difficult to guess at that scale. So there are probably these small knowledge
unlocks that AI could infer for us that allow us to go out into the world and test that could
unlock these huge new swaths of information that we were just too blind to see. So I think that's
the most exciting medium-term thing is what type of inferences can
AI make that allows us to go test to be like, holy shit, I can't believe we miss this super
simple thing that unlocks this new gigantic world of science. So that's the part that's really
exciting in the intermediary. So these are engineering tricks. And there's just engineering tricks
out there that unlock like step functions in terms of like the capacity of these models. Is that
what you're saying? Yes. And then I assume there's also people that will say that they will begin to
create their own ideas and do the tests themselves and start to form these artificial conclusions.
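The prime-number-maze analogy from a moment ago can be sketched in a few lines. The maze itself is made up here; the point is just that the hidden rule is trivial to apply once you know it, and nearly impossible to guess by trial and error if you don't:

```python
# Toy version of the "prime-number maze": the escape rule is simple
# (turn right on every prime-numbered step), but very hard to discover
# without already having the concept of a prime number.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def escape_route(steps: int) -> list[str]:
    # Emit the turn taken at each step under the hidden rule.
    return ["right" if is_prime(step) else "straight"
            for step in range(1, steps + 1)]

route = escape_route(10)
# Right turns land on steps 2, 3, 5, and 7 — obvious once you know the rule.
```

That's the shape of the "small knowledge unlock" argument: one compact insight collapses an otherwise intractable search.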
That comes further down the line.
And that's actually where Bankless stumbled into AI is right at that subject.
Like, correct me, you were there when we had Eliezer Yudkowsky on the podcast.
And me and Ryan were going into this interview with Eliezer Yudkowsky, and we understood
him to be this AI expert who deeply understood how AIs work.
And we were just going to ask him about, like, yo, AI, like, teach us about AI, Eliezer.
And then we also had in this agenda, like, oh, we're going to talk about AI and crypto.
But, like, it became very apparent, like, 10 minutes in that Eliezer Yudkowsky was like,
Once we cross this one particular line and we don't know where that line is, and once AIs get
materially smarter than us and they start self-improving themselves, we lose all semblance of control
and it's a brand new world and we don't have any control in that world. That is the same line
that we are just talking about just now, right? Yes, it's referred to as, I guess, like, takeoff
is what some people call it, is once the AIs get smart enough to reason with themselves,
they can improve themselves. They can create their own knowledge, they can improve themselves.
The brakes are taken off. Yeah, incomprehensible to us.
And there's these funny jokes that we are actually like in the early stages of that right now.
Like if you think about what just came out this week, it's like we have now quantum computing
breakthroughs. We have genetic mutation breakthroughs. We have like chromosome breakthroughs,
protein folding breakthroughs. All these frontier breakthroughs are happening in one week.
What would have normally come out over the course of a year, a decade even.
So is it starting to happen? This is probably the early stages. This is stuff that's still
comprehensible to us. I wonder how this will change 12 months, 36 months in the future.
as these models get smarter, as they're able to teach themselves, as they're able to improve.
I guess the idea that Eliezer was referencing is once you've reached this critical threshold,
AIs can teach themselves so much, so quickly, that they will just take off in terms of their knowledge base,
and they will be incomprehensible to humans.
And that's kind of the scary thing, because they will understand us so deeply and so easily be able to manipulate
us and the world around us. We are very dumb relative to what type of intelligence is coming.
Does crossing that line cause any, like, fear in you at all?
Or are you just still like mostly stoked?
I think it's kind of like thinking about the universe and the stars and stuff is like,
it seems overwhelming when you say there's like trillions of these things and we're so small.
But it's like kind of tough to comprehend.
Right.
So I can only be like so much in awe of space or so much afraid of the AI.
Like I understand, I guess on like a conceptual level how scary and how dangerous it could be
because of how easily manipulated we are, how dumb we are, like how limited we are.
But it's tough to kind of feel.
really scared because it's just tough to understand the gravity of it. Like I understand it's a big deal. I
understand it's probably a bad thing. It certainly shouldn't be controlled by a single person. It hopefully
gets distributed. But I don't really know the second order implications of how bad that actually could get.
And you could speculate and I just don't know. I choose to be optimistic. I choose to be excited.
I'm not a researcher. I am not playing a role in this future. So I am just here admiring it from afar.
Eating popcorn. Yeah. That's it. But yeah, it's freaking nuts. It's a really big deal.
And I think this is maybe where the conversation wraps around how we started this conversation with talking about how human intelligence is just a bootloader for artificial intelligence.
And I think what you were alluding to there, why that was a leading question is because there is a potential version of not just our human world, but the universe at large, where it's actually artificial intelligence that scales so much farther than any sort of biological intelligence.
And I think you're invoking this idea of like transhumanism there.
Fill in the rest of this picture, right?
I think in this lifetime we are all as humans going to experience AIs crossing this AGI threshold,
and there's some form of just Cambrian explosion of artificial intelligence.
And then we don't really know what happens after that.
But people talk about this idea of transhumanism and like truly artificial life,
not just artificial intelligence, but artificial life.
All of that conversation begins at that point, right?
Yes.
So there's this like weird trend that's been happening since we really invented early forms of technology
where every incremental improvement we make to the technology,
we become more closely associated with it.
It becomes more of our reality.
So this started with like the radio, for example,
you would spend an hour a day listening to a radio station,
maybe your favorite sports game, talk show.
We got TVs.
You could watch TV shows.
You spent two, three hours a day on this.
Then we got a computer with an internet connection.
You could spend your whole work day.
You're spending eight hours a day in this alternate reality.
And then our smartphone, I have some friends that are using
a 12, 13, 14 hours of screen time a day.
And now Meta and a bunch of other companies,
Apple with the Vision Pro, they're inventing these glasses and goggles where now the technology is
actually strapped onto your face and now you're spending every waking hour of your day
through this virtual lens, into this virtual reality. And the natural extension of that,
and Elon, who is at the forefront, is building with Neuralink, is this brain-machine interface.
So the idea is that like AI will get so good so quickly, it will become incomprehensible to us.
We just will not be able to understand it. It will be so far superior. We will be like an ant
next to a human. So in order to get around that, you have to, it's either like,
if you can't beat them, join them. So it's like, okay, we need to figure out how to peacefully
coexist with this artificial intelligence. And that is the brain machine interface. And that's kind of
probably what gets big once AGI starts to get big. And that's what a lot of frontier companies
are also working on, is how do we merge ourselves with this technology? So we don't get left behind.
We still maintain our human element, but we're able to engage with it. And there's this really
funny demo. I saw yesterday. It was two robots who were in a kitchen and they were putting groceries
away and they were interacting with each other, but they weren't saying anything. They were just
communicating telepathically. They just kind of look at each other, showed the item, and they already
know what they're thinking. And it's very quick. Whereas speaking has like a lot of compression. It's
very lossy. I have to speak to you. You have to uncompress it and then recompress it and then you speak to
me. It takes a long time. And I think we'll start to see these unlocks happen in robotics and happen in
like artificial intelligence species and kind of want that for ourselves.
Like how cool would it be to just download a language pack when you travel or it has access
to your neural cortex? It can see everything that you see and it can lucidly allow you to
replay these dreams or it can remember a day in perfect precision and you could query against it.
And it's like unlocking this whole new part of your brain that when supplemented with artificial
intelligence can create some really cool use cases but also really scary use cases.
So I think the future is like very uncertain. It seems certain that.
that AI will get very, very smart. It seems certain that we will try our best to assimilate with
this AI and hopefully integrate with it peacefully, but there are a ton of variables along the way,
mostly around the fact that it is scaling so quickly, and we are just too dumb to keep up
and manage all of this. It's really, it's a freaky time, it's an exciting time, but it's happening
whether people like it or not. It is very much here, it is very much happening. It's the idea that
with a Neuralink interface, we're creating these AI models that are becoming extremely powerful,
they're only going to get more powerful.
And we're just never going to be able to keep up.
Like biological intelligence is just so slow by comparison.
But with a Neuralink brain interface to a model,
we can actually connect our brain with a model
and have that model be not just like something else
that does the thinking for us,
but actually our thinking?
Can you like close that gap?
There's one model where like, okay, I have my phone.
My phone's right here.
This computer chip is extremely powerful.
My brain is also pretty powerful.
But I have to actually use my brain to leverage the power of the chip in my phone and the data that's on the internet.
And it's a separate device that I can leverage.
But there's a difference there with a Neuralink brain interface, where my brain and the chip are actually the same thinking unit.
Is that what you were saying?
Yes. And think about the horrors that is navigating your smartphone.
Like first you have to unlock it.
Then you have to use your two thumbs to type things in.
Then you have to navigate things.
And it's like, oh my God, it's so slow.
It's so painfully boring.
It's like there's very low data streams that come through your smartphone. Whereas like with a Neuralink, you can imagine it as an extension of your brain, except it is this like super brain that's built on top of your existing brain. So now you are thinking your normal thoughts, but you have this like sudden genius that is now in your body. And you have all of the modality that comes with these devices. You have your sight. You have your hearing. You have your taste. And similar to how like OpenAI's models will engage with your camera
or your microphones, it will do that except you are the sensor.
So your senses become the sensor for this artificial form of intelligence that has way more memory than you,
way more processing power than you, has access to all of the world's knowledge, has access
to all of the inferences that it has come up with from all this knowledge, and you can access
it instantly.
And it creates these really weird use cases where if you can just kind of train this
neuralink piece on your brain over a long enough period of time, can it just emulate you the person?
So if your biological body dies, can't you just create a copy of everything that's recorded
and place it in a synthetic brain? And is that really the person? Or is it just their memories and
their ideas trained over a period of however many years it's been in there? So it's kind of this
like you're taking the AI, you're putting it inside of you, and it becomes an extension of your
biological being. And in the meantime, it has really cool use cases where it can cure people
with Alzheimer's. Any neurocognitive problem, it can potentially solve. It can help
people who are paraplegic, quadriplegic. It has a lot of really cool use cases for solving biological
problems. But once those biological problems are solved, then it's like, okay, how powerful can we
become as humans when we merge ourselves with this new form of intelligence? Which creates a lot of really
weird existential questions around like, are you even you? Like, if you copy your brain into
another being, like, does your consciousness transfer? Does that matter? There's a lot of these really
strange existential questions that come from it. But like, that's kind of where we're going. And there's,
these chips are in people today. What? And they're working. Yes, there's a demo by Neuralink where
one of the players in CSGO, which for people who don't know is like a pretty challenging first-person
shooter game on PC, he's able to play just by thinking with his brain. He has no fine motor
function. There's a lot of reflex requirements to be good at CSGO. So the latency needs
to be as close to zero as possible in order to be at all competitive. How competitive were they?
Good. So there's this cool thing that happens when you have direct to brain access versus having to
go brain to your extremities like your hands because it takes a lot of time for you to think something
and then for you to do something. Believe it or not, there is real latency, tens to hundreds of milliseconds, between
those things. So when you have direct access to the brain, you lose that latency that's required
for your motor function. So Palmer Luckey gives this really great example in defense tech because
he's interested in building these super soldiers: there's this problem that happens when you're
shooting a gun. And when you're shooting a gun, particularly a sniper, you want to be precise.
You need to pull the trigger at the exact right time. But there's this latency that happens
between your brain, between your finger. You think you want to do it, then you actually do it.
But if you have direct access to your brain, that latency disappears completely. And you are
just able to pull the trigger based on the thought versus the motor function. So I think
that's the case with gaming and with things in the future, is the best gamers, the best
performers in high twitch skills will be the ones that have direct access to the brain. Yes. Again,
you're removing the latency. Like gaming, you know about ping where there's like a few millisecond ping.
Those milliseconds apply to like the actual motor functions of your body as well. And when you have
direct access to the brain, it removes a lot of that. So that creates this whole other set of use
cases for this stuff. So it's good. It's really, really good. And again, this is the worst it's ever going to be.
So when I was watching the Andrej Karpathy YouTube video about how to understand an LLM,
I told you, I was like pausing it and I was asking ChatGPT questions about like kind of my
intuition about how these things work. And one of the things that I was asking it is trying
to like understand the comparison between an LLM and a brain structure, like neural structures.
And one of the things I was understanding is like one of the parts of the processes for how an
LLM is created is that what does an LLM do? It tries and predicts the next token. And if it gets
the token right or wrong, then it starts to reweight a lot of the parameters
according to the feedback, right? And so it just tunes itself using this positive feedback loop of just
like, did you get it right? Then good job. Like, here's a cookie for all the parameters that got it right.
Did you get it wrong? Like, let's take away some weight for the parameters that got it wrong.
And then you just learn; you learn by association. And there's this concept in psychology
called neurons that fire together, wire together. When a neuron fires, it releases hormones
and neurotransmitters, and that allows for nearby neurons to physically move closer
to that other neuron. So the rule of thumb is neurons that fire together, wire together. And this is how
basic learning happens. You know, the window of plasticity allows for this to happen very rapidly,
but it still happens in all brains at all times. And that was the same pattern that I noticed in
what Andrej Karpathy was talking about in his LLM video. It's like, okay, well, when a parameter
gets something right, it tunes those parameters closer together. And
when it gets something wrong, it separates them, so that happens less often. Same structure.
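The fire-together-wire-together rule itself fits in one line of update code. This is a deliberately crude sketch; real synaptic plasticity is far more involved, and the numbers here are arbitrary:

```python
# Minimal Hebbian learning sketch: the connection between two neurons
# strengthens only on the trials where both are active at the same time.
learning_rate = 0.1
weight = 0.0
firing = [(1, 1), (1, 1), (1, 0), (0, 1), (1, 1)]  # (pre, post) activity pairs

for pre, post in firing:
    weight += learning_rate * pre * post  # nonzero only when both fire

# Three co-firing events, so the weight ends up around 0.3.
```

The structural parallel the conversation is drawing: in both the Hebbian rule and the LLM update, correlated activity (or a correct prediction) strengthens the association, and everything else leaves it alone or weakens it.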
Like neurons that fire together, wire together, parameters that get things right, bind,
and they also separate. And so I asked ChatGPT, is this an acceptable comparison? And it said,
yes, with some nuances, with some corollaries. An LLM does learn language in a similar pattern to the way a human
does, but a human has some very key advantages. And when we are training an LLM, we need
billions and billions of tokens, basically words, or maybe more accurately, subword chunks,
for these LLMs to be tuned at all. And when a baby is learning languages, they don't need
billions of words in order to learn language. They need far fewer words in order to actually
start to pick up on words and learn them. And there's a couple of reasons as to why this is true.
Babies aren't just hearing words. They are also seeing with their eyeballs. They are understanding
the context of the room that they are in. They are seeing the context of
when mom says something to dad about some object that they have in their hands. So they have
visual cues that are very useful for them to learn language. But not only that, but human brain
structure is naturally ready to learn language. We have adapted. We have evolved to be very
ready to learn language. So we have these things that LLMs do not have. And I think this
conversation starts to open up with robotics where, okay, well, what happens when you give
robotic appendages to an LLM and you give it a camera and you give it ears and you give it a place
in a room. Well, then you start to actually take away some of the advantages that humans have.
But then when we are hooking these things together, there's just this collective
set of technologies that allows what you're talking about: the integration of a chip with
the rest of the universe, the world, resources and capacity. And when a chip hooks into our
biological structure, hopefully that doesn't make us irrelevant. Hopefully that makes the biological
component of this actually much more relevant. What are your thoughts? It's a testament to how early we still
are. These models are so smart, but they're also incredibly dumb relative to humans. And there's also
the energy thing as well where like if you fed an LLM a steak as energy, it would not work.
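The energy point here can be made rough-and-ready with a back-of-envelope calculation. All the figures below are round-number assumptions (typical intake, commonly cited brain budget, a guessed GPU count and wattage), not measurements:

```python
# Back-of-envelope energy comparison: a human body vs. an AI training cluster.
kcal_per_day = 2000                        # typical human daily intake (assumed)
joules_per_day = kcal_per_day * 4184       # 1 kcal ≈ 4184 J
body_watts = joules_per_day / (24 * 3600)  # whole body, continuous: ≈ 97 W
brain_watts = 20                           # commonly cited brain power budget

gpu_watts = 700                            # one high-end AI accelerator (assumed)
cluster_gpus = 100_000                     # frontier-scale cluster size (assumed)
cluster_watts = gpu_watts * cluster_gpus   # tens of megawatts for one cluster
```

Even with generous rounding, the cluster runs millions of times the power budget of a single brain, which is the inefficiency gap being described.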
Like your brain only requires what a couple thousand calories to like function and to provide life
support for your entire body, whereas these computers consume a ton of
of energy. So it's like fairly dumb in the sense that it is single modality in most cases,
meaning it only trains off of the human language. It's also horribly inefficient in the sense
that it's dumber than us, even though it has a billion times more energy than us. So these
systems are still like horribly inefficient. They're still very, very dumb in some ways, but they look very smart,
and in many ways they are. So it shows the size of the gap we still have yet to
close, before they can have the compute power of a human brain, but also have the multimodality
of a human being and also become much more power efficient. And you're starting to see this
in the Tesla team with Optimus, where the first use case of broad training on the visual
modality is with Tesla cars. Every car that's been shipped has eight cameras around it. All of those
cameras feed video into this giant cluster. And that cluster is now being trained on the outside world.
And the reason why Optimus, which is Tesla's robotics division, is the best so far, is because
it's been trained on not only words, but also sight.
And it knows what the world looks like around it.
So it can understand it conceptually closer in a way that humans can.
Next, I'm sure you'll see more haptic stuff happening where it'll start to understand the feel
of things.
Then you could have the sound of things.
And you could kind of tap into each sensor of the human body with a synthetic sensor on a
a robot. And that's kind of what's happening. When you think about your phone, it has a camera
for your eyes. It has a microphone for your ears, and it has speakers for your mouth. And it's missing
the smell. It's missing the touch. But those two things are very easily replicated in a humanoid robot.
So we're at one modality. We are slowly getting to two. I'm sure the next ones are coming.
And again, it's just a matter of creating this data. It's very difficult to train these models
unless you have a way of collecting data. Tesla's the only one because they have cars rolling around.
So when their robot initially started getting trained, when it was walking down a hallway, it was looking for the road signs and the lanes to stay in.
Because it thinks it's a car.
Like this humanoid robot is walking around, but it thinks it's a vehicle.
And it needs to be trained that it's not.
But it at least has the data to give it some sort of context.
So we are like super far from the full multimodal human experience.
But we're getting there.
And like Optimus and, I forget the name of the company that just revealed yesterday,
but there's a lot of other robotics companies that are also working towards
this goal. Josh, this is really, really cool. I hope listeners' imaginations are opening up,
are breaking very wide open. There's just like so many adjacent technology like rabbit holes that
this like spins off of. Like robotics is just a huge conversation. Brain link interfaces is a
huge conversation. There's like this rabbit hole for each one of these things. Even like rockets and
interplanetary travel starts to become like very, very relevant here. Synthetic biology.
So, since you are on the Bankless podcast team, where do you think we should go
next? I want to talk to a bunch of AI people, and I know you're tuned in with a bunch of the listeners. Maybe
we can just like tease them for what we are trying to do with a podcast or what your like guidance would be
for where we're trying to go. Yeah, I think it's exploring this new frontier that is downstream of
AI because it does unlock a lot of these cool new technologies. So one of the main ones, like you mentioned,
like biochemistry is crazy, how we're able to manipulate human beings and the genetic makeup of us.
That's a really cool place that's worth exploring. Interstellar
travel is really interesting. I think one that's super interesting and probably more pressing than most is
energy. A lot of this AI requires energy. A lot of crypto requires energy. A lot of the world requires
energy. And we have a shortage of it and an incapability of making enough, even though the planet
receives enough. So energy is a really cool sector. There is supersonic flight, which is happening now.
Man, there's so many cool platforms. I think the listeners could probably expect generally more
frontier technology outside of crypto that parallels crypto. So AI,
again, its research has downstream effects on crypto. We have an entire AI crypto podcast that is built around
this new frontier that's unlocked because of AI. I think that's similar to a lot of adjacent
industries, maybe the closest is energy. Like figuring out, we definitely want to talk about nuclear
energy, how that is becoming a bigger thing, how that's going to power a lot of these
requirements in a world in which artificial intelligence demands so much energy. Those types of
things are what I would probably look out for. I'd say crypto adjacent frontiers that have
downstream effects that affect crypto, but are like super exciting. Quantum computing is another one.
There are breakthroughs now on like a monthly basis. We don't really know the implications of
quantum computing yet. We know that it can solve really difficult problems. Probably break encryption,
probably mess up Twitter. We have a great episode with Justin Drake that's talking about that.
There's a lot of interesting stuff. So there's a lot. I think just generally frontier technology will
become a bigger focus. Those are some of the main pillars that would be around it.
I want to give my attempt at explaining how all of these things will impact.
crypto and why and how crypto is relevant here.
Please do.
And I also want to give you the opportunity to do the same.
My answer for this is like there's a general broad theme of acceleration here.
Like we are going faster.
Like crypto has always moved very fast.
AI is moving even faster.
Some of these technologies aided by AI are going to be some of the most quickly innovating
technologies that we've ever seen.
Just because as time moves forward, time goes faster.
Technology always speeds everything up.
Gone are the days where public markets that close on weekends, holidays, and outside of business hours are able to keep up with these frontier technologies, or are even the right venues for the value of these things to be expressed.
So there are going to be small projects, big projects that are going to monetize a lot of the innovation that they're creating via the crypto markets.
We actually just saw Kaito, a centralized United States domestic data analytics company, issue a token that is currently trading at $2 billion.
That thing trades 24-7, 365.
They didn't have to do a year's worth of collaboration with the SEC and the public market regulators in order to get an IPO on the stock market.
That's way too long.
It costs tens of millions of dollars.
And why would they do that when they could just issue a token
that's successfully monetizing Kaito at $2 billion?
And so crypto, as a frontier technology that can keep up with all of these other technologies on their timescale, is going to be the monetization layer for teams, projects, innovations, and also the open source side of AI. That, to begin with, is why crypto is relevant at all.
We have these extremely rapidly developing technologies.
They need to be matched with public markets that are also rapidly developing and can
keep up with that speed.
That's my answer.
How would you change that? What do you disagree with? Do you agree? What's your answer here? I love that. That is a great
podcaster synthesis of where we're going, generally accelerating. I think that's right. I totally
agree. Having value transfer rails for all of this technology that can keep up with this technology
is hugely important. And crypto is the only thing by far that provides that. AIs will want to
transact with each other. They will want to store value in secure places that are provably secure.
The whole value of the world, as it becomes more and more digitized, will likely become tokenized. And this infrastructure that we're building in this world of crypto is
absolutely critical to this future. Because if we do not have payment rails, if we do not have
value transfer, if we do not have secure storage of whatever we have assigned to value in the future,
then this world cannot exist, because the reliance on humans would be too great, or we would be relying on AIs that might have mischievous intentions. So crypto is still hugely important.
In fact, I would say it's absolutely critical to a future in which we continue to accelerate, continue to innovate, continue to make these amazing breakthroughs.
It will serve as that settlement layer for all of it.
Okay, so Josh, you are inside of the bankless Discord.
So if any bankless citizens are in there and they have any questions for Josh, feel free to ping him.
I think we're going to open up a frontier technology channel or part of the bankless discord just to talk about some of this stuff.
Because I know many people in the bankless nation are just intrinsically curious about all technologies, crypto included, but also much of what we were talking about. And I want to keep on doing episodes in this direction,
I want to understand this a little bit more. I want to keep on exploring these rabbit holes.
And so I think listeners can expect more content on these frontier technologies to show up in the
podcast feed. We are still going to do plenty of crypto content. And maybe there's just going to
be more total content in order to cover both of these beats. Many of these episodes are going to show up
on the bankless premium RSS feed because we want bankless to grow the crypto awareness of the world.
but we also want to make room for frontier technologies as well because it's going to come and impact crypto.
So I think that's my call to action to bankless listeners is there's going to be more total crypto content on the premium RSS feed.
And then you can also expect some adjacent frontier technologies, robotics, brain-computer interfaces, AI, rocketry, all of that kind of stuff, showing up on the free feed as well.
Josh, thanks so much for walking through all of this stuff.
That was a very ambitious episode.
I think we killed it, my man.
Yeah, it was really exciting.
It's very cool to be on the opposite side of this and to be a part of the production, to talk about this stuff. There's not a lot of people I know who are interested in this.
So if anyone listening does want to join the Discord, does want to chat about it, that would be
awesome. Thank you for having me. I've been watching the pod for a very long time prior to even
joining the company. So it's pretty cool to finally be on it. So thank you for the opportunity.
Thanks for letting me talk about the stuff that I am just super interested in. In general,
I really enjoyed it. Had a great time. Bankless Nation, you guys know the deal. Crypto is risky.
New frontier technology is also risky, but it's also extremely risky to ignore them and not be
aware of the times because time is accelerating and we need to keep up with the times.
We are here to help you front run the opportunity. Nonetheless, the frontier is risky.
You can lose what you put in, but we are headed west. So thanks for being with us on the
bankless journey. I appreciate it.
