Limitless Podcast - Apple's $200B Gut Punch To Google! The AI War Just Escalated
Episode Date: May 15, 2025

This week's AI Rollup is a ringside ticket to the highest-stakes tech brawl on earth. David, Ejaaz, and Josh unpack Apple's threat to yank Google from Safari, the $200 billion market-cap gut-punch that followed, and Microsoft's tangled profit-share web with OpenAI. Plus: why Sam Altman just hired Instacart's former CEO to turn raw AGI power into everyday apps. They demo the newest frontier models (Gemini 2.5 Flash, China's trillion-parameter DeepSeek-R2) and explain how cheaper tokens and home-grown Huawei chips could upend the GPU monopoly. Finally, the crew dives into crypto's AI-agent boom, from Virtuals launchpad mania to decentralized training networks that promise an open-source Plan B for humanity's future intelligence. Tune in for the only AI recap you'll need this week.

David: https://x.com/trustlessstate
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------
💫 LIMITLESS | SUBSCRIBE & FOLLOW
https://x.com/LimitlessFT
youtube.com/@Limitless-FT
------

TIMESTAMPS
00:00:00 AI Game Of Thrones
00:02:43 The $200B Blow To Google
https://x.com/KobeissiLetter/status/1920154534342459841
https://www.reuters.com/business/apple-looks-add-ai-search-companys-browser-bloomberg-reports-2025-05-07/
00:12:59 How To Maximize Use Of AI
00:16:33 The Microsoft Empire
https://www.reddit.com/r/singularity/comments/1kkjop0/the_scale_of_microsofts_influence_in_llms_and
00:21:41 Is OpenAI Actually Winning?
00:27:00 OpenAI's New CEO
https://openai.com/index/leadership-expansion-with-fidji-simo/
https://arstechnica.com/ai/2025/05/openai-creates-ceo-of-applications-role-taps-instacarts-fidji-simo/
https://x.com/fleetingbits/status/1920518509907620111
00:33:33 New Frontier Models
https://x.com/officiallogank/status/1912966497213038686?s=46
https://x.com/itspaulai/status/1912947803338453501?s=46
https://x.com/bongrandp/status/1913040244963766432
00:39:56 Model Aggregator
00:47:22 DeepSeek R2 Is... Insane?
https://x.com/deedydas/status/1916160465958539480?s=46
00:54:02 Crypto AI Agents
https://x.com/thedefiedge/status/1920436450048245922
https://x.com/vaderresearch/status/1919717523525632326?s=46
https://x.com/kurorosage/status/1919555819772776888
01:00:21 Decentralized Training
https://x.com/_AlexanderLong/status/1919416512156144053
https://x.com/NousResearch/status/1917299865060794484
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
Welcome to the AI roll-up, brought to you by the Limitless podcast, where we stay up to speed with the emerging trends and developments in the AI space.
I'm David Hoffman here with my two co-hosts, Ejaaz and Josh.
Ejaaz, how you doing?
I'm good, dude.
It's been quite a week, like subtle on the model updates, but we're starting to see some of the ripples of some of these strongholds that big companies have on AI, right?
So we'll get into some of the stories later.
but effectively we're seeing like Microsoft owning pretty much the entire stack of AI and we'll get into like what kind of effects that has, but also Apple suggesting that they might update their search and remove Google from the default caused Google to lose almost $200 billion in two hours. And I watched this happen in real time. So I think for the last two years or so, we've seen kind of AI on this rocket ship, right? We saw OpenAI pop up out of
nowhere, wow, it's a really cool product, ChatGPT, et cetera. But now we're starting to see,
okay, right, the big boys have come to play. How much, like, of a stronghold do they have on the
entire stack? And what can they do with this? Can they control our entire lives? Or is there
room for disruption? Yeah, Silicon Valley is just tightening. They're just going at it,
realizing exactly where their choke points are and where other people's soft spots are.
Josh, this is the first AI roll-up that's fully on the Limitless podcast.
How does that feel, my man?
Oh, man, it's exciting.
I'm stoked.
I'm grateful for all the people listening who followed with us.
There's a lot of AI content that is coming.
That is super, super exciting.
And I think this week is no different.
I would frame this week for the people listening as, like, the highest-stakes version of Game
of Thrones that we're watching unfold.
And it is the largest titans in the world that are fighting for the most valuable prize,
which is basically this new form of intelligence, this new,
Yeah, thing that can control a lot of people.
So it's really exciting.
This is a crazy week.
I think that's why I really like these episodes:
it's just real-world Game of Thrones being played out.
One of the first episodes that we did, Josh,
we talked about just like what happened?
What does an intelligence explosion mean?
And it's kind of a big take that I think throws people for a loop.
But the conclusion of that episode that we did,
which I encourage everyone to go watch,
is that we are making God.
God is the end product of this.
And so no wonder,
you know, Facebook and Apple and Google and Open AI and, you know, four more companies in China
are all like turning into these gigantic tech titans trading blows with each other. And I think
Ejaaz, that is what you are going to open us up with. The blows that are being traded this week as
everyone tries to jostle for position to be the first person to create God. Yeah. Yeah. So let's start off
with the first right hook. Apple is coming for Google's throne. And they're supposedly or rumored to
to be launching their own AI search product.
Now, let me give you a bit of context here.
You know when you open up the Safari browser
on your Apple laptop or on your mobile,
and you go to the address bar
and you're like, you just type in your search query, right?
That uses Google by default.
And Google actually pays Apple for the privilege,
around $19 billion per year,
just to have Google automatically set as the default in that search bar, right?
So it's a big deal,
just for that default placement alone, right?
And so this week, Apple exec Eddy Cue
said that AI search via tools like,
you know, like ChatGPT or Claude or Perplexity,
things that, you know, the three of us have used on this show
pretty frequently, is way better than Google.
And I don't know about you guys,
but I've kind of like started transferring my habit to that as well.
Like I don't use Google to search anything anymore.
Even when you search something on Google,
it's funny.
It shows you a little AI overview
at the top of the page as well. So it's using AI itself. And they're saying that they're
exploring revamping search, which means that they might replace Google. So as you can imagine,
this had a pretty gnarly effect on Google's market cap. As soon as this announcement went live,
it knocked off $200 billion, which equated to about 10% of Google's entire market cap. One red candle, like, super
aggressive, over two hours. And I watched this happen. I saw a tweet and it said like Google's
lost $100 billion in market cap, and I was like, nah, that's not true. And then I just saw it
like subsequently just go lower and lower and lower. And I think for context here, Apple has the
biggest distribution for Google searches. They have, I think, 2.35 billion active devices,
and all of them have Safari pre-installed with Google as the default search engine there, right?
So let's hypothesize for a second and say, if they decided to switch, then Google loses
billions of search queries per day. I think another way to look at this is like Safari accounts for,
I think, about 18% of all web browsing activity. So that's page views. And in the USA, it's 30%, right?
So if this switches, Google would lose nearly around a third of their browsing activity, right?
And the cost to Google is larger, actually. Like, putting that together, that is, okay,
what, hundreds of billions of searches per year and around $50 billion of, like, gross
ad sales. So Apple could basically do this in a single software upgrade, by the way, guys, right?
And don't forget, like, ChatGPT itself is overhauling their search function. We mentioned it last week.
So I just think this is completely nuts. And it just shows that the distribution power of Apple at
the device level. And Josh, you spoke about this last week as well, isn't something to be joked around
with, right? Maybe the moat is at the device level for now. What do you guys think? I think we've all been
watching Apple struggle with what to do about AI. They had this whole AI announcement.
over a year ago and we've all been waiting for it.
Now people are realizing that their AI execution abilities have not been great.
So I'm a little bit confused because this would imply that they are more ready to make moves
in AI than I understand them to be.
Josh is a huge Apple AI fanboy and hardware fanboy.
So I bet he's got some pretty interesting takes here.
I do have some takes.
I've been, well, first, very disappointed with Apple's ability to execute on AI, which leads
me partially to believe that the market really overreacted to this news.
$200 billion getting wiped out
in two hours is kind of crazy, particularly because Apple hasn't yet rolled out any AI. And this is not a
new problem for Google. We've been talking about this for months: there's a trend of the 10 blue
links dying. Google Search is clearly on the way out; it's a slow-bleeding thing. Sure, this adds
some accelerant, but I don't think this is as detrimental to Google's business as we think.
So Google and Apple kind of have two polar opposite problems. Apple has amazing products, but they can't
actually create the software to use it. And Google has incredible software.
The Gemini, the new Gemini models are about as good as they can get, but they're really bad at
creating consumer products and experiences for people to use them. So what they're doing is they're,
they're kind of harming each other. Like Google has the search, Apple has the users, and Apple's
severing that link, which hurts Google's revenue. I think 56% of Google's revenue is search,
but that percent is going to continue to drop anyway, regardless of whether Apple cuts them or not,
because AI is just getting so good. I think one of the underrated aspects of this that
people aren't discussing is the cost per query of search versus, like, AI. So the cost to generate a token
is an order of magnitude higher than the cost to serve a search query. So AI actually can't
support the volume. We don't have the GPUs. Sorry, the cost to generate a single token?
A single token is an order of magnitude more than a single search? Yes. And so for an average
AI query, you need hundreds, thousands of tokens. Yeah, you need a lot more data, a lot
more bandwidth, a lot more compute power, because it's not just a naive 10x. It is
incredibly more expensive to do AI search than it is to do a Google search. Yes. Perhaps
not per token, perhaps not an order of magnitude per token, but per search at least. You could think
of it as: each AI search is at least 10 times more expensive
than a single search query. Because Google's query is really just going through
an index and pointing to things. This is actually generating net new content. So what I hope Google would do,
Google has this unique opportunity where they have, I think, 270 million paying subscribers.
They need to figure out how to get them onboarded onto AI.
They need to create a product.
They need to create a service.
They need to fight back against this trend of the 10 Blue Links dying.
And we're kind of seeing that, like you just mentioned, where when you do make a search query,
it shows you this little Gemini summary at the top where it kind of summarizes the results.
But it's not there yet.
And I'm really hoping they could figure out a way to monetize this huge consumer base that they have to save themselves.
But I guess we'll see.
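Josh's cost argument above can be sketched with a toy calculation. Every unit cost below is an invented placeholder, not a figure from the episode; the point is only that a generated answer multiplies a per-token cost by hundreds of tokens, while a classic search is a single index lookup.

```python
# Toy cost model: classic index lookup vs. generated AI answer.
# All unit costs are invented placeholders, chosen only for illustration.
cost_per_search = 0.0002      # assumed cost to serve one classic search query, USD
cost_per_token = 0.00001      # assumed cost to generate one token, USD
tokens_per_answer = 500       # assumed length of a typical AI answer, tokens

cost_per_ai_answer = cost_per_token * tokens_per_answer
ratio = cost_per_ai_answer / cost_per_search
print(f"Classic search: ${cost_per_search:.4f} per query")
print(f"AI answer:      ${cost_per_ai_answer:.4f} per query")
print(f"With these placeholders, an AI answer costs ~{ratio:.0f}x a classic search")
```

Whatever the real unit costs are, the multiplier structure is why serving AI answers at Google-search volume is so much more expensive.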
Why do we think they suck so much?
It's kind of been the recurring story of Google, no?
For the last couple of decades, they tried to do the whole like social network.
What was it called again?
I forgot, but Circles?
Circles, that sounds right.
That sucked and died.
They're trying to do the whole like smart device situation.
In fact, actually today they announced that they are rolling out Gemini to all their smart devices, right?
Their smart TVs, their watches and stuff.
And I was thinking about this and I was like, wow, that's pretty cool.
they're trying to get it on different devices, collect different kinds of data.
And then I was like, who uses this stuff?
Aside from like the Android cell phones.
No one, right?
So maybe it's a skill issue.
It's certainly a skill issue.
And we saw this with the entire AI trend since inception.
Like Google invented the transformer.
Their researchers created the transformer white paper.
That's why we have all these models now.
And they had it for many, many years prior to OpenAI coming around and launching the
GPT API and then ChatGPT the product.
They just can't seem
to figure out how to create good products with their technology.
It's like a bunch of really bright, intelligent nerds who can do amazing research and form
these amazing creations, but they don't actually know how to sell it to people.
And that's been a serious problem.
Maybe we've branded this week, marketed this week, as like the old tech
titans just going blow for blow.
But there's also some notion of, like, senility in these things.
Like Apple, Google.
They're just old companies.
And so they're like old
people trading blows.
And you're kind of just cringing as you watch it because they're, you know, they are
hurting each other, but, like, they're not creating as much value
as they are harm, if that is indeed what's going on.
And then by contrast, you have the much more sharp newer startups, the Anthropics,
the Open AIs.
And they're untested.
And, like, OpenAI doing hardware is a completely new field.
But there's something optimistic about that, I think.
Then like, let's just get some new blood into this game.
and see what happens.
So you think that's going to leave more opportunity
for smaller startups to take the throne,
potentially, David?
Yeah, I mean, if Apple can't make hardware,
yeah, exactly, yeah,
if Apple can't make hardware
that has AI integrated
and if Google can't figure out
how to integrate its software into hardware,
well, maybe it just takes a net new generation
of companies to solve that problem.
Or maybe it takes a net new medium
or device or experience
to onboard those
people, potentially, right? Like, I've, again, I keep saying this on episodes, but I have never
spoken into my handheld device unless on the rare occasion that I call someone and it's usually
my mom. And now I'm doing it, like, every day, right? So I feel like we're getting closer and
closer to this weird little metaversal scenario where everyone has just kind of got their digital
avatars, for real, and is talking to each other through those digital avatars, potentially. Pretty
crazy. We've talked about on the show about how too many of our friends outside of the tech
circles are just not using ChatGPT. They're just not using AI. And I went to one of them and I just told
them, hey, download ChatGPT, open up the voice feature in it and start talking to it. And then
their first question was like, what do I talk about? I'm like literally the first question that
comes to mind. Just go. Just go for it. And then call me back after you do that. And they call me back
like 20 minutes later.
I just talked to ChatGPT for 15 minutes.
We just talked about whatever.
I was able to ask it questions
and talk about things
that I was not able to talk to my friends about.
And I'm like, yes, that's it.
And I think that that is a homework assignment
that I want to give listeners.
If you have not tried the
voice mode in ChatGPT,
open up ChatGPT after you're done listening
to this episode and giving us a five-star review
and subscribing on YouTube,
open up voice mode and just start talking to it
and see what happens.
And yeah, just tell us in the comments about what that experience is like.
And I'm kind of curious to hear what you folks are talking to ChatGPT about.
I hear a lot of my friends talk to it.
It's therapy.
It's like relationship advice.
It's like therapy for like issues growing up as a kid.
And then a lot of people start to use it for like work stuff, but not like, hey, can you design this document for me?
It's like, I got into this situation with Paul and I think he's encroaching on my territory at
work, and I wonder, like, how can I, like, politely, like, put it to him that he should just basically
fuck off. And it's just, it's, like, deeply personal, and I think very contextual to the individual
and their own experience. So it's kind of, like, outside of, yeah, that's an interesting thing, actually.
It's outside of social etiquette, you know? Yes. Do you know what I mean? Like, posting a picture of
your ice cream cone at this new kind of ice cream shop is, like, cool and accepted, right? But talking to,
or, like, speaking to, this, like, weird social
genie in a bottle about, like, your personal afflictions with Paul? It's kind of weird. Yeah.
It's useful for opening up about subjects that are hard to open up about. I think especially for dudes
who it's easier to say harder things to a robot than it is to another human. And so it kind of gives
you that like safe space to be able to like talk about hard things that are on the cusp of things
that you want to talk about, but you don't have the optionality. You don't have the surface area.
you don't have the person to do it with.
So it's solving the male loneliness epidemic, is it?
Yes, yes, right.
Yeah, all of our best friends, yeah.
Yeah, yeah.
And this is how we give all of our data to it.
Seriously, that's how it's going to happen.
I was listening to an interesting conversation
that Sam Altman had, actually.
It was at a Sequoia session in SF this week.
And he was talking about how people use AI in different age brackets.
So he said the older people generally use it as an optimized search engine if they've
even discovered it.
And then the, like,
mid-20s to late-30s group, they use it kind of as a leverage tool to maybe help them
with work and maybe solve some questions. And then the younger people who are in middle school,
high school, who are much younger, they use it as an entire life operating system. And they
kind of load all their thoughts into it. They load all of their homework into it. They let everything
into it. And they consult it before doing anything. And it is like the single point of truth and the
single reference that they can consult for any sort of decision they make. So I find that trend interesting.
And going back to the point, David, that you made earlier: is there a new company that's going to dethrone Apple and Google?
And Open AI is going for it.
He very clearly said that they're going for the AI subscription OS plan for your entire life.
It's, how can we build that ecosystem that the kids are already using?
You have to subscribe to your own life.
That's literally the Black Mirror episode, episode one of the new season.
You know, Josh, as you were saying that, I was thinking like, why does that sound familiar?
And then I realized my sister, who is Gen Z, does exactly that.
So we briefly shared a ChatGPT account.
Lord knows how that's influenced my algorithm, actually.
But she would literally dominate 75% of conversations per week.
And it's just questions of like random little social conversations that she didn't know the answer to
or things that she could have just, I don't know, figured out herself.
And she was just offloading every single bit.
And I was like, what is OpenAI going to do with this data?
We should probably get some kind of like Gen Z roundtable on this show at some point, guys,
and like just kind of like interview them and see what they're doing.
There's this chart.
We're going to shift subjects here.
There's this chart that I saw on my timeline this week that I'm hoping one of you guys can explain to me.
It's Microsoft's relationship with other companies in the space: OpenAI, Windsurf, Cursor, VS Code.
And there's, like, lines of, like, ownership and partnerships between Microsoft and all of these things.
I kind of don't get it. I kind of understand
that, like, okay, Microsoft has their fingers in all
of these different things in these weird partnerships and
deals and, like, org structures.
But like, so I'm sharing it on the screen.
It's like, Microsoft has a 49% profit share with OpenEI,
which owns Winsurf, which is forked from VS code.
And opening I is also an investor in cursor,
which is also forked from VS code. But Microsoft
owns VS code. Can someone just like tell me
what all of this is?
So Microsoft, if you guys
recall, about, I think, a year
ago now, guys, do you remember there was that big saga with Sam Altman as CEO being kicked out
by his board? Do you guys remember this? When that happened, Satya Nadella, CEO of Microsoft,
kind of like rubbed his hands together and thought, you know what, now is an opportune time to show
support to Open AI. And we can do this in the sense of financial investment. So I think they invested
somewhere upwards of, like, $20 billion. I don't know whether that was spread over a couple of years
or all just that one smack-bang investment.
And they would invest a further $19 billion
in providing all the GPU and CPU infrastructure
for Open AI to run their inference and training.
So for Open AI, that was a pretty good deal, right?
It's like, what, you're going to pay for all our compute hardware
and help us train our models and we don't, like, hang on, wait,
what do you guys get in return?
And Microsoft said, well, we would love a bunch of equity
and be like the majority shareholder.
And Sam Altman basically said, fuck off.
But what we can do is we can give you a 49% profit share of all services that we charge for through OpenAI and any products we potentially acquire and integrate into OpenAI.
So if we look at the top half of this diagram, Microsoft gets a 49% profit share of OpenAI.
And as we know from last week, OpenAI announced that they're acquiring Windsurf.
Now, a little bit of a kind of brain refresher for everyone.
Windsurf is the same kind of company as Cursor, which we've featured on the show before.
It's the second place to Cursor.
Yeah, second place to Cursor.
And the best way to think about it is these two companies make it really easy to code up different kind of software apps.
And these apps could be games.
These apps could be apps that appear on your iOS store to help you with therapy, personal training.
Whatever you can dream up, you can basically type in a prompt like you do in ChatGPT,
and it kind of codes up an app for you.
And these companies like Windsurf and Cursor are now worth, like, several single-digit billions because of this superpower.
And within this product itself, it's known as an integrated development environment.
It's basically carefully packaged software tools, code editors and stuff like that.
So it abstracts away all the complexities of being a software engineer, right?
So people like you, me and Josh can essentially like code up stuff even though we haven't got any formal training or expertise in software engineering, right?
Now, there's a little circle on the bottom left, which says VS Code.
And people are like, I haven't really heard of VS Code.
Like, what is that?
It was the forefather to Cursor and Windsurf.
Think of it as, like, literally the thing that gave birth to Windsurf and Cursor.
In fact, there were rumors that went around back in the day that Cursor had basically just forked VS Code's software.
So an interesting fact here: in 2024, a survey was done to see, you know, how much
of the integrated development environment market
these different companies command.
VS Code commanded 74% of the entire market.
Now, I'm guessing that market share is down a bit,
but that's basically the layout here.
So Microsoft has its hands on VS Code
and, by proxy of OpenAI,
Windsurf and Cursor,
because OpenAI acquired Windsurf
and OpenAI also invested a hell of a lot of money into Cursor.
So I would say that some people have pitched this
as like Microsoft owns the entire AI stack.
And they kind of do, and I'll get into that in a second.
But it's mainly the coding stack that they have a stronghold on, in AI.
Josh, I know you have a ton to say on this.
So I want to throw it to you.
Yeah, it's funny.
It's like Microsoft funds OpenAI.
OpenAI funds Cursor.
And now buys the Cursor rival Windsurf.
And all this is on top of Microsoft's own open-source code editor that these companies have forked.
And now Microsoft's same dollar is getting taxed three times in the same ecosystem,
and they're competing against themselves with Copilot, which is Microsoft's own AI offering.
So it's this very messy, conflicted thing that Microsoft has found themselves in.
I'm not sure how happy they are about this, because now Sam is trying to reduce that 49% profit share
so that they can go public, is the rumor, this year.
So it's kind of, it's a messy situation.
Is the takeaway here that value is accruing to Open AI more than it is to Microsoft,
and it's on the backs of Microsoft's work?
Yeah. So there's an interesting dynamic that's part of the deal that we probably should mention,
where the nature of the deal is, I think it's 75% of OpenAI's profits go to Microsoft until it recoups
the $13 billion total investment. Then after it's recouped its money, it reduces down to the 49%
which we're at, which goes until Open AI reaches $92 billion of profit. And then after that,
they're free and clear. So the special profit share ends. So there is a cap on this upside. If Microsoft
does hit that cap, that's still an amazing investment. They turn,
what was it, $19 billion into $178 billion.
So they've done really, really well.
I think Microsoft is in a good spot.
Open AI, though, has this open-ended version of this profit margin.
And if you ask Sam Altman, he'll probably say, oh, yeah, it's just a decade until we reach
a trillion dollars of profit or something outrageous like that.
So in the long term, it seems as if Open AI stands to win.
In the short term, Microsoft is doing well, but they also do have this little leech on them,
where now their largest investment is kind of leeching away users from Copilot,
which is Microsoft's own offering.
And it does create this weird dynamic where I'm not sure they're stoked about it.
And now Sam has a lot of leverage because now he's trying to get that 49% profit share down even lower
so he can make it more approachable to public markets when they try to go public.
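The tiered deal Josh describes can be sketched as a simple profit waterfall. The tier numbers (75% until roughly $13bn is recouped, 49% until OpenAI hits roughly $92bn of cumulative profit, then nothing) are as stated on the show; the actual contract mechanics aren't public, so treat this as a toy model, not the real agreement.

```python
# Toy waterfall for the Microsoft/OpenAI profit share as described on the show.
# Tier thresholds are the hosts' numbers; how the tiers interact is assumed.
INVESTMENT_BN = 13.0   # Microsoft's investment to recoup, $bn (as stated)
TIER1_RATE = 0.75      # share of profits until the investment is recouped
TIER2_RATE = 0.49      # share of profits after recoupment
PROFIT_CAP_BN = 92.0   # cumulative OpenAI profit at which the share ends

def microsoft_take(openai_profit_bn: float) -> float:
    """Microsoft's cumulative take ($bn) for a given cumulative OpenAI profit."""
    # Tier 1: 75% of profits until Microsoft has recouped its investment.
    tier1_profit = min(openai_profit_bn, INVESTMENT_BN / TIER1_RATE)
    take = TIER1_RATE * tier1_profit
    # Tier 2: 49% of further profits, up to the cumulative-profit cap.
    remaining = openai_profit_bn - tier1_profit
    tier2_profit = max(0.0, min(remaining, PROFIT_CAP_BN - tier1_profit))
    take += TIER2_RATE * tier2_profit
    # Tier 3: nothing beyond the cap.
    return take

print(f"Take at $5bn profit:   ${microsoft_take(5):.2f}bn")
print(f"Take at the $92bn cap: ${microsoft_take(92):.2f}bn")
print(f"Take at $200bn profit: ${microsoft_take(200):.2f}bn (capped)")
```

Under these assumptions the take flattens once cumulative profit passes the cap, which is the "cap on this upside" mentioned above, while OpenAI keeps everything beyond it.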
I'm not privy to, like, the internal operations and, like, balance sheets and all this stuff,
but the idea of Microsoft, I don't know, 10xing their cash on billions of dollars
doesn't seem to feel so great when at the end of the day they don't own anything.
They get more money, but they don't own any value.
The game here, the Game of Thrones,
is all of these companies trying to make God.
And there's no amount of money that is worth trading away the opportunity of, like, owning a slice of God.
And so I kind of see Microsoft as, like, the
loser in this situation, no matter how many times they can multiply their cash.
So let's actually dig into that, David.
I kind of went down a little bit of a rabbit hole when one of you posted this in our chat.
And I was like, okay, well, hang on a second.
What does Microsoft Stronghold actually look like here, right?
So let's break it down.
And then I have a comment on like where I think the moat is, right?
So in terms of compute, Microsoft Azure, which is their main cloud service, right,
competing with, like, AWS and Google Cloud, is OpenAI's exclusive cloud for frontier model
training and inference. So Microsoft has dropped $13 billion on custom GPU clusters alone, just to fund
this, right? Just for the privilege of training OpenAI's frontier models, right? So as long as Azure
remains like this kind of, I don't know, training substrate for Open AI models, switching costs will
keep Microsoft at the center for now. But as you said, like, there's a heavy reliance on
OpenAI there. And I think this agreement runs until, like, 2030 or something, right? They don't have
overall board control, right? But Microsoft's hand is basically firmly on Sam's shoulder saying like,
hey, let's get first look into these other things, right? The other part, which I thought was
interesting is, Josh, you mentioned that it's competing directly with Copilot, right? Microsoft
Copilot. But Copilot is based off of OpenAI's 4o and 4o-mini models, right?
It's so incestuous. Yeah, it's so incestuous. And Microsoft,
I read this and I want to get kind of verification on whether this is true,
gets access to the 4o weights specifically.
Yes, that is true.
So this is an important thing.
So the more I think about it, I'm like,
huh, I think the moat is actually,
it only remains for Microsoft in this like stronghold position
if they continue to get access to Open AI's model weights,
which I just don't see happening in the long run.
Why would OpenAI do that?
And secondly, they're relying on developer stickiness
through integrated environments that use VS code
or whatever that might be.
Again, it depends on whether a good enough app
or a software engineering community
decides to use Microsoft's VS Code.
And again, I don't see that as being too sticky
to start off with, right?
And then, so when you look at that,
and then on the coding side,
we can't forget that Microsoft owns GitHub as well, right?
That's like 100 million users
and VS Code, which accounted for about 74%
of the IDE market share, which we mentioned, right?
And then we have some of the equity spillover.
So I'm not entirely convinced, as you said, David, that Microsoft's hold on this is going to remain permanent when OpenAI has so much power here.
If they were to switch to some other provider in 2030, Microsoft's stronghold goes.
I think this saga will continue. But ultimately, OpenAI has the IP. They own the model. They have the talent. And I think Microsoft is, like, scrambling to work its way up the long tail of crumbs. And maybe they can, like, negotiate
their way to bigger and bigger crumbs.
But that's kind of how I see this.
It's like, ultimately, OpenAI is the big winner here.
And there's no way for Microsoft to really change that.
That's my,
that's my takeaway from this section.
Yeah, I agree with you.
I think the young blood has it here.
Yeah.
Yeah.
Yeah.
Let's get into the drama inside of OpenAI that happened this week.
Because OpenAI has a new CEO that's not Sam Altman.
So that's kind of crazy.
I think we all understood, like, OpenAI is Sam Altman.
But there's this new blog post and new news.
OpenAI expands leadership with, I'm going to butcher this name, I'm so sorry,
Fidji Simo.
And this is a message from Sam saying, hi, everyone.
I have some exciting news to share.
I'm hoping to do this in a few weeks.
But a leak accelerated our timeline and they go on to announce the new CEO.
What's your takeaway with this?
Is it real, is this a real new CEO?
Because the idea of Sam Altman not being at the helm of the ship is kind of crazy to me.
You guys ever watch the series The Boys from Amazon Prime?
Yes, yes.
Never seen it.
Okay.
The superhero movie?
Yeah.
The concept, Josh, seeing as you haven't seen it, is it's all about superheroes,
but the dark side of superheroes.
So they're all, like, gamed by political interests, and they're, like, lobbied and a bunch
of these things.
And it just shows the evil side of superheroes.
It's not all fun and glory.
And there's this big corporation that manages these heroes, a talent agency for superheroes.
But of course, when your superhero can laser beam you to death, kind of who's in charge is the question there. So they
appoint the CEO and the running joke in the entire series is she has absolutely zero power. She
runs and gets them coffee and all these different kinds of things. Now, I am not suggesting
that this is the case. But the reason why I bring up that example is she has been appointed as
CEO of applications at OpenAI. And it was branded across headlines that she was CEO of OpenAI.
And that's just not the case.
She has some kind of equity ownership,
but Sam Altman is very much still the person, the man in charge.
And I think he's using this effort, at least he claims in the blog,
that he is going to now focus solely on research towards AGI
and towards aligning AI models to the benefit and betterment of humanity.
And so he can focus his attention 100% on that,
whilst this new lady, Fidji Simo, can focus on
building groundbreaking applications for the consumer end users of OpenAI.
And I found this really interesting because, well, I had a few reactions to this and I'm
curious what you guys thought.
But number one was like, Sam's been talking about AGI as the most important mission
of OpenAI since its literal birth, right?
And now suddenly he's making this big move to have someone who has a huge amount of experience
in building consumer applications to focus a large part of the company's effort and
resources and money on this. So it made me think, are we actually as close to AGI as we thought,
or are there some other nuances and obstacles that we need to overcome? Maybe it's at the
technical hardware level, or maybe it's just the fact that these things don't understand
contextual awareness enough. And so they can't be as personalized as we want and they're only going
to be used for niche cases. I don't know. But on the optimistic side, I thought I looked into this
Fidji Simo person and, dude, her career and experience is absolutely stacked. So she was just head of product, or CEO, at Instacart, which, as you know, is one of the top consumer applications in the Western Hemisphere at least.
And before that, she was Head of Facebook, whatever that title means, but it sounds again like a CEO of applications, at Meta for 10 years, which is absolutely insane.
So she obviously has a crazy amount of experience building consumer applications.
And I'm actually really excited to see whether she builds out this amazing, like, ChatGPT app store.
I don't know, what's your take, Josh?
Yeah, this is exciting for me because it displays a clear divide in OpenAI.
There are now two races that they're running.
It's the consumer application race where we need to get these users, we need to lock them in,
we need to create the best experience possible.
And then it's the other half, which is we need to create AGI.
And we need to create these systems that power this operating system that we want to build.
So for me, this reminds me of X when Elon bought it and Linda Yaccarino became CEO.
She's the one in charge of designing the day-to-day experience.
But Sam is very much still the one who is calling the shots, who is providing the guidance to where the company is going.
But to me, this signals that they're getting very serious about winning this race by having Sam fully committed to the AGI race.
I think there is increasing competition.
They now have a $500 billion investment with Project Stargate.
Elon is trying to figure out how to get a terawatt of power to a data center.
This is a huge scale battle that they're fighting, and the stakes are very high.
So I think Sam focusing on AGI is good.
This new CEO coming in focusing on applications is good.
I think she'll be responsible for building that life ecosystem that we talked about in the last segment.
Sam is focused on building this super form of intelligence.
And I think that feels like a winning strategy.
Yeah.
So the way I'm hearing this is that Sam is focused on building out the raw intelligence capacity of the OpenAI models.
Like this crude oil, this refined energy.
and then this new CEO of applications is in charge of what that looks like, basically distribution,
but like turning that energy and putting it into applications that fit in all different corners of
our life for whatever industry you're in, whether you're like, you know, pick an industry,
any industry, some sort of application is going to be built that connects the AGI that Sam is
trying to engineer to the end user end product in a multitude of different ways that needs to come
with like, you know, unique interfaces, unique experiences that fit into the different corners
of the world. So that's kind of how I see. That's how I summarize what you guys just kind of like
put together there. Yeah, Sam's building the engine and Fiji is building the sports car on top of it
or the like really luxurious town that is powered by this single engine. Right. It's kind of a good way to
think about it. Yeah. I can't help it, looking at this photo, she's very goth-like in this one particular photo. I knew you were going to say something. I can't not comment on how goth
she looks. And then when you go and you actually just search her on Google, she's a very normal
looking person. Yeah, welcome to the media. This one particular photo just looks incredibly goth.
Yeah, why did they pick that? My god. I don't know, dude.
All right. All right. Let's get into the new models section. So every single week, almost every
single week, there's some new models to talk about. This week is no exception. Gemini 2.5
Flash and also some new models out of China as well. Ejaaz, 2.5 Flash.
What do we need to know about it?
Okay, so firstly, I'm cheating a little bit here this week, guys,
because this model was actually announced, I think, three-ish weeks ago.
But we haven't updated frontier models on this show for a few episodes.
And there are actually a few more recent ones.
Yeah, new to us, new to us.
Okay, so Gemini 2.5 Flash, it's funny.
Earlier we were talking about how Google kind of struggles to innovate,
at least at the consumer app level,
but also at some other levels,
aside from just kind of like boring search queries or whatever,
you know, that important function that we rely on a lot.
Gemini 2.5 Flash is actually one of the leading models right now, prior to o3 popping out.
In fact, it is better than o3 in many ways,
but we just don't talk about that because OpenAI is still the darling child right now, right?
But it just doesn't get the love and attention that it needs.
And part of me thinks, you know, the reason behind this is because you can only access it via API right now and not through their main consumer interface for Gemini, so their ChatGPT interface equivalent. But it is much better at reasoning, and it shows huge gains over its previous
model, which is 2.0 Flash. So you can basically toggle thinking or reasoning on or off,
depending on what kind of thing that you're looking for. And if you pull up this table, I think you
guys just pulled up the comparison of Gemini 2.5 Flash to, like, OpenAI's o4-mini, Grok 3, and all those
kinds of things. It's comparatively cheaper than the majority of the models. I think only Gemini 2.0 Flash was cheaper than it. But of course, you get a step-change function in actual ability for the model. And it's way better if you look across all metrics when it comes
to things like reasoning, science, coding, maths, etc. I'm just wondering why it isn't
talked about as much, or maybe it hasn't been used as much. My guess is it's because OpenAI is still
so sticky and has all the kind of attention when it comes to, hey, I want to try out this AI thing.
Like David, when you suggest to your friends, I'm guessing you don't go to them and say, hey,
get Claude or, God forbid, Gemini. Yeah, no, you don't do that, right? You're like, use ChatGPT.
So again, it just demonstrates no matter how good your model is right now, at least, the stickiness
with OpenAI is still there. Josh, I wonder if you have any thoughts on Gemini 2.5 Flash before we
move on. Yeah, it gets back to the thing we were talking about earlier where you just have to create
good products for these things. To have a new frontier model is impressive, but it's no longer like, oh my gosh, I must try this, because the models that are on my ChatGPT desktop app are really great.
And I'm pretty happy with that. So I think the reason why people aren't excited is because, like, I'm personally not going to go and engage with the API to test this thing. So therefore, it's not
really super relevant to me. And I'm excited for the downstream effects if people want to create cool
products on top of it. But as it stands now, I think myself and many other people,
are limited in their excitement
because there's not many use cases for this for me.
I think I'm most interested now
in how this impacts my life
or how this impacts the services that I use to improve my life.
And if it's just an API call
and there's not a lot of developers building interesting things
that I use on a daily basis,
then I'm like, oh, all right, well, o3 is still pretty great.
I'll just stick with that.
Right now, when OpenAI releases new models,
like the new one is the o3 model, right?
And as that goes around in my life, with my friends, talking to you guys, on my Twittersphere,
it looks like and feels like back in the old days
when Apple would release a new iOS update
and iOS would like make a material upgrade
and all of a sudden our lives got enhanced.
And it feels like that.
Like a new ChatGPT model launch is like this.
Everyone's life is getting enhanced.
We're all playing with it.
It can do new things.
It's very exciting.
No other model release feels like that.
It doesn't have any impact on me
because I don't use that app.
But it also does kind of beg the question,
why doesn't this product
exist? And by this, I mean an app like ChatGPT, or a frontend like ChatGPT, that is a model aggregator that also does the work of, like, remembering me. And so right now I'm very
excited that my ChatGPT remembers all my conversations, and I'm excited to grow a relationship with ChatGPT to the point where, like, I start to work and go back and forth with ChatGPT on how best I want it to talk to me. I am, like, nurturing this relationship that I have with ChatGPT so that when it spits an output, it's spitting it out in the form factor that I like.
Why is there not a product that aggregates all of the models?
Because if you go to ChatGPT and you do the little drop-down menu, there's like 4o, o3, 4.5, o4-mini-high, like, blah, blah, blah. And I don't know what any of those mean.
And I don't care.
And I just want to have this model aggregator
do the work of choosing the best model intelligently
while also doing the work of remembering me and my preferences.
And I don't understand why that product doesn't exist yet.
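A rough sketch of the aggregator David is describing, in Python. Everything here is hypothetical: the model names, the routing rules, and the memory scheme are made up to illustrate the idea, not taken from any real product.

```python
def pick_model(query: str) -> str:
    """Route a query to a model tier using a crude heuristic (illustrative only)."""
    q = query.lower()
    if any(w in q for w in ("prove", "derive", "debug", "step by step")):
        return "frontier-reasoning"   # slow, expensive, strong reasoning
    if len(query) > 500:
        return "long-context"         # model with a large context window
    return "fast-cheap"               # lightweight default


class Aggregator:
    """Front end that both picks a model and remembers the user."""

    def __init__(self) -> None:
        self.memory: list[str] = []   # naive stand-in for cross-session memory

    def ask(self, query: str) -> str:
        self.memory.append(query)     # a real product would summarize this
        return pick_model(query)      # and prepend it to the prompt it sends


agg = Aggregator()
print(agg.ask("Debug this stack trace step by step"))  # frontier-reasoning
print(agg.ask("What should I make for lunch?"))        # fast-cheap
```

The routing logic is the easy part; as the discussion goes on to suggest, the hard parts for a real product are getting access to the memory data and the economics of calling each model.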
I'm not sure I have a great answer for that.
I know very early on in the AI days, there was a company called Poe, which probably still
exists.
And they were an aggregator in the sense that you can select your own model.
But it didn't have memory.
It didn't have the smart selection.
It just required the manual oversight to tap it.
I think we're kind of seeing this happening on the enterprise level with Windsurf, which we mentioned last week, where Windsurf is kind of serving as this orchestration layer, they call it, where as you submit a problem, it will choose
the optimal solution for that problem based on the model. But in the consumer world, I would imagine
there's probably some sort of technical or privacy restriction on the model level, preventing that
from happening. Because if you think about Apple and their app store, which is probably the closest
connection to this, is they won't allow any third-party applications on an iPhone. They must be approved.
They must go through the centralized service. OpenAI's largest moat right now is memory, the new feature that they rolled out. So I'm not sure, that's why I'm thinking there's
probably some sort of technical limitation preventing that from happening is because I find it hard
to believe they would give up all of that data for free to an aggregator that can become much more
powerful than them. Yeah, I think cost is definitely a major function. Have you guys seen OpenRouter
by any chance? They kind of do what we're discussing right now. This is the company set up actually by the ex-CTO and co-founder of OpenSea, Alex Atallah. So if you pull up OpenRouter right now, just shove it into Google, you'll see that it's described as the open interface for LLMs. And basically, you can pull across any kind of model that they integrate with. And it includes
all of the frontier models from Western companies, as well as some Chinese or Southeast Asian
companies like Alibaba or Deepseek. And actually, they're also famous for privately screening,
if that's the right term, new models before they even officially get announced.
So OpenAI's 4o and o3 were actually tested privately.
You just didn't know that you were testing them.
It was given a private code name like Vulcan or something like that.
And people were like, wait, this model is so good and it beats this benchmark and that benchmark
and everyone started speculating, oh, this might be Open AI's new thing.
And so they have a direct relationship where you basically type in a prompt on this website
or you build an app based off your API,
and it can select whatever model you want.
Now, I think the way that it's configured right now
for this particular company is you can kind of select
which model gets picked for whatever type of query.
So it's not quite autonomous and smart.
And as far as I know, they don't have a memory function.
Maybe Alex is going to work on that going forwards.
But I don't know, this is kind of like the underpinnings
of something like that, maybe.
Imagine being Alex and you leave OpenSea in, like, 2022, right at the end of the NFT summer, to start an AI model aggregator project in '22, right? And then Sam Altman picks your company.
Yes.
Yeah.
Wow.
Just perfectly selling the top of NFTs.
Yeah.
Well done.
Well done.
Unreal.
Nicely done.
That's not the only model that came out, though.
No.
We also got some out of China as well.
Yeah.
I mean, we haven't spoken about our friends across the ocean, but DeepSeek, the ones that kind
of like shocked the.
entire Western AI world by creating a model that was much cheaper but more effective or as competent
as Open AIs frontier models, which they spent billions and billions on and Deepseek just spent
a couple hundred million millions on, is supposedly releasing their next major model update.
So it's not officially out just yet, but this has kind of been leaked that the model will have
1.2 trillion with a T parameters and it'll be a hybrid mixture of experts model, which means
that whenever someone queries the model, the entire set of parameters aren't queried at once,
which is typically how a lot of these models work. Instead, it's more efficient and it might hit just the roughly 37 billion parameters needed to answer your particular query or your question, right? So it's much more efficient, and that's mainly how they drive down the cost.
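A toy sketch of that mixture-of-experts routing idea in Python. The 1.2 trillion and 37 billion figures are the leaked R2 numbers just mentioned; the gating code itself is purely illustrative, not DeepSeek's actual architecture.

```python
def route_top_k(gate_scores, k=2):
    """Return indices of the k experts with the highest gate scores.

    In a real MoE layer a learned gate produces these scores per token,
    and only the chosen experts' parameters are run for that token.
    """
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return sorted(ranked[:k])


# 8 experts, but this token only activates the top 2 of them:
scores = [0.10, 0.70, 0.05, 0.60, 0.20, 0.10, 0.90, 0.30]
print(route_top_k(scores))                    # [1, 6]

# Why that cuts cost: fraction of parameters touched per token,
# using the leaked figures (illustrative, unverified).
total_params, active_params = 1.2e12, 37e9
print(f"{active_params / total_params:.1%}")  # 3.1%
```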
It's also going to be almost, well, not quite 99%, but 97.3% cheaper than GPT-4o, which is just again insane.
And we were kind of talking earlier about what kind of moat Microsoft would have with Copilot, or Gemini with 2.5 Flash. And you could argue that being cheaper per query or per token could be a moat there, right? Because it all adds up. And if you're a startup that's consuming a ton of data and you can get near the same kind of quality that OpenAI's results give you, then I'm going to take that cost cut and trade-off and just use this model instead, right? It's trained on a heck of a lot of data, 5.2 petabytes, and it's meant to have a much better reasoning model.
Okay, blah, blah, blah. We always speak about these different characteristics. But what's
something that's actually cool here is that last point on this tweet: 82% utilization on the Huawei Ascend 910B. What they're referencing there is a Chinese-manufactured AI chip.
And I just want to emphasize how important this is, because to date, Nvidia has literally,
I'm not making this up, dominated like 95% plus of the chip manufacturing industry.
And that includes like the compute side of things, chip design,
which informs how your model is going to get consumed, data, trained, etc.
It influences a heck of a lot.
We just don't speak about it as much, right?
And now we have China, which was supposedly meant to be a decade behind the Western world,
coming up with a chip that is 75% as good as Nvidia's flagship, or one of their flagship chips, the H100. So it's just really interesting to see how much China, and we're
going to get into this in the next topic, but Deep Seek, Chinese manufacturers with Huawei,
they're building their own kind of empire. So maybe it's not Game of Thrones on the Western world.
Maybe it's just Game of Thrones, but then China just kind of like swoops in and takes out the head.
Josh, I know you have a ton of experience on the hardware side of things. I know you love nerding out
about this. I want to hear your take on this. I'm obsessed with DeepSeek in the way that I kind of love the Grok team as well, because they just have the fastest rate of acceleration. And I think when
you're competing on these like long time horizons with really high stakes and strong exponential curves,
the rate of acceleration is probably the most important thing to pay attention to. So Gemini and OpenAI are very clearly the leaders in terms of frontier models right now. But I think what we're
seeing with DeepSeek is this hyper focus on efficiency. And that efficiency has exponential scaling that I think will probably quickly exceed that of the top leading models in the U.S.
What's interesting is they're winning on the hardware front and they're winning on the software.
Well, they're not winning on the hardware front, but they're rapidly getting closer.
You said, what was the percentage of the H100's performance that they were able to capture?
75%.
75%.
So they're 75% there.
And that number was much lower just a few months ago.
What China has is really great manufacturing capabilities.
They have really smart developers who are hyper-focused on this resource constraint.
And what we're getting is these incredible models.
1.2 trillion parameters, I think, is that a record for a public model if it does go public?
Well, Meta's supposedly working on a two trillion parameter model, but, you know, we're just, we're talking about sizes here.
Yeah.
Yeah. So, so far, if this were to release today, this would be the largest public model.
Yes.
And the costs of it are a fraction of what it costs to use GPT-4o.
So the trends that we're seeing is an increase in efficiency, an increased rate of acceleration in progress,
and an aggressive decrease in cost per query or cost per token.
And those three things are really powerful effects that are going to affect markets in a really big way.
I think as we get more power at a lower cost, it unlocks the amount of productive output of these tokens.
I forget the amount of trillions of tokens that Microsoft said they generated in the last quarter,
but it was many, many trillions of tokens.
And if you can get 10 times that token output for a fraction of the cost,
that's a lot more intelligence you can use to either distill models, like we see in the smaller models that Qwen is doing,
or just to apply to even larger models
and to do reinforcement learning on those.
So I think the cost per token
and the efficiency of these models
is something to really pay attention to.
Okay, so if R2 is coming in
with a 97% cost reduction,
as in the cost of running this model
is 97% cheaper,
that means we can just, you know, do 33 times more compute for the same cost.
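The arithmetic David is doing here, spelled out, using both the round 97% figure he quotes and the 97.3% figure from the tweet:

```python
# If each token costs 3% of what it used to, a fixed budget buys ~33x tokens.
old_cost = 1.00
reduction = 0.97                       # "97% cost reduction"
new_cost = old_cost * (1 - reduction)
print(round(old_cost / new_cost, 1))   # 33.3

# At the 97.3% figure quoted for R2 vs GPT-4o, it's closer to 37x:
print(round(1 / (1 - 0.973), 1))       # 37.0
```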
But does that result in a 33 times better outcome? Because I don't think that that's true. I think the queries that you get, the products that you get to come out of that compute, are running up on some sort of constraint, right? Like, we don't just get 33 times better outputs, right? Not directly. So the constraint mostly is in
the actual model intelligence and those benchmarks that you see. So this would be, if it gets released, a 1.2 trillion parameter model. And that would be the upper bound of intelligence. So it's not creating that new intelligence. It's not AGI. But what we do have
is we have a lot more use of it. So currently the cost per token here is much more expensive. And when you
get a cheaper token, you unlock the second order effects of that, which means consumer applications
become a lot cheaper. It means that we can get access to a lot more intelligence, a lot cheaper and a lot more accessible. And I'm not sure I have great examples of where that leads, but I'm not sure there's anyone in the world that doesn't want more intelligence and doesn't want more queries, more shots on goal at doing this. So it's not that it is a smarter form of intelligence, but the amount of it and the accessibility of it goes up significantly. Well, that's kind of just on brand with
like what China tends to bring to the table. It seems like the United States, Silicon Valley,
brings these net new innovations, more intelligence, increasing IQ, and then China just makes that
cheaper. Yes. Although this time, 1.2 trillion parameters. That is the new top dog. So as of now,
assuming benchmarks come out well, that would be highest intelligence and lowest cost,
which is like a huge win.
I actually had a question.
If the cost per query goes down and we kind of continue this trend or this directional trend towards post-training via reinforcement learning, right?
Wouldn't this basically enable them to scale that up?
So you could end up with a much smarter model at a cheaper cost because it costs less to query
in itself, right? So you could just chuck in a bunch of, you know, reasoning traces, which,
for those of you who don't know, is kind of like questions, which will show a question A and then an answer
B. And the model basically has to figure out how to get from A to B, and that's what makes the model smarter
through reinforcement learning. Josh, don't you think this will basically open up an opportunity
for them to scale to a smarter model? I'm guessing that's how they're doing it right now, right?
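A toy version of the reasoning-trace setup Ejaaz describes: the trainer only has question A and gold answer B, and any sampled chain of thought whose final line reaches B gets rewarded. The traces and the reward rule here are invented for illustration, not any lab's actual pipeline.

```python
def reward(final_answer: str, gold: str) -> float:
    """Outcome-based reward: 1 if the trace ends at the gold answer, else 0."""
    return 1.0 if final_answer.strip() == gold else 0.0


# Two sampled reasoning traces for the question "What is 12 * 12?":
traces = [
    ["12 * 12 = 12 * 10 + 12 * 2", "= 120 + 24", "144"],  # reaches B
    ["12 * 12 = 12 + 12", "24"],                          # flawed reasoning
]
gold = "144"
rewards = [reward(trace[-1], gold) for trace in traces]
print(rewards)  # [1.0, 0.0]
# A policy-gradient step would then upweight the tokens of the first trace,
# which is why cheaper queries mean more such samples per training dollar.
```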
Alibaba's doing the same thing with Qwen. I mean, the new model that they just released is only 235 billion parameters, but it's as smart, if not better, at certain math, coding, and science benchmarks, maybe not so much on the social reasoning side of things, but certainly across those benchmarks, than some of the frontier trillion parameter models.
Yeah, no, that's a really great point. I think you're right. What we probably see is similar to Qwen in the distillation of models, and then the hyper-specialized versions of those models. So as
basically the way these distilled models work is there's this giant foundation model and it uses
reinforcement learning to create these good prompts, these really high quality prompts that get fed into a
smaller model that can run locally on a, let's say a laptop or your own device that you have at home.
What happens from this probably is now that you have this higher form of intelligence that you
can distill much more cheaply, you can create a lot more of these distilled models that are even
tighter in size, even smaller and maybe enough to run on your iPhone that are effective enough
to actually work. And maybe you get a distilled model that's kind of,
like a Siri on steroids type thing that you could run locally on a phone, but maybe you also just
get a really great hyper-focused model on a specific type of math problem or a specific category of
science or, like, a very specific data set that it's trained on, that costs basically nothing because it's been distilled down and it's so cheap, but it's really, really smart at this one
particular thing. And if you have a series or a network of these nodes that are hyper-trained
on a specific category or a specific problem, then you could kind of build this mesh network on top
to interact with each one based on the needs of the query. So it creates this interesting modular
approach versus the large foundation model where if you can distill these things down
cheaply enough and broadly enough into all these hyper-localized little nodes,
then you could access a lot of general intelligence very, very cheaply.
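A minimal sketch of the teacher-to-student distillation Josh walks through: the big model's softened output distribution becomes the training target for the small model. This is a pure-Python toy with made-up logits, no real models involved.

```python
import math


def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]


def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened targets.

    A higher temperature flattens the teacher's distribution so the student
    also learns which wrong answers the teacher considers 'almost right'.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))


teacher = [4.0, 1.0, 0.2]        # big foundation model's logits (made up)
good_student = [3.8, 1.1, 0.1]   # small model mimicking the teacher
bad_student = [0.1, 4.0, 1.0]    # small model that disagrees

# Training would nudge the student's logits to shrink this loss:
print(distill_loss(teacher, good_student) < distill_loss(teacher, bad_student))  # True
```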
It reminds me of another example. I mean, the general theme that we're talking about here is that Chinese AI manufacturers can basically make things cheaper and more efficient, which is typically what they've done with mobile phones, when Huawei was copying a bunch of iPhone stuff. But the difference here is that they're actually innovating on the design and the execution, which is really important.
And it reminded me that Huawei this week launched, I think it was like a 718 billion parameter model, but it was trained on 6,000 of their new Ascend chips, which is their equivalent of the Nvidia H100, or whatever you want to call it, right?
And this was a significant reduction from the 8,000 chips
that they used to train their previous model,
which was half the parameter count.
So basically, chip count down, model intelligence up,
which is a scary but pretty exciting trend to see.
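Taking the episode's figures at face value (718B parameters on 6,000 Ascend chips now, versus half the parameters on 8,000 chips before), the back-of-envelope improvement in parameters trained per chip works out as:

```python
# Figures are as quoted in the episode, not verified specs.
new_params, new_chips = 718e9, 6_000
old_params, old_chips = new_params / 2, 8_000   # "half the parameter count"

old_density = old_params / old_chips   # parameters trained per chip, before
new_density = new_params / new_chips   # parameters trained per chip, now
print(round(new_density / old_density, 2))      # 2.67 -> ~2.7x more per chip
```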
I guess the benefit of this so far is that us on the Western side
can kind of benefit from it,
because they're open sourcing all these models,
and they're probably trying to vampire-attack
some of the bigger companies in America,
but at some point they're going to close the gates
because they've achieved parity, if not betterment, of these models.
So it's an exciting trend to kind of watch,
and it's not one that's getting publicized in the Western media as much.
I'm just seeing, as all of these numbers come in, ever since we started this podcast, you know, parameters go up, chips go down,
and the rate of this happening is just not slowing whatsoever,
which makes me think that's just like,
it just keeps on going. It just keeps on going.
Yeah. So we're going to end up in this, like, infinite abyss, David, where you can talk to ChatGPT as much as you want. You know, earlier on, Josh was explaining how,
you know, doing search queries via AI is so much more expensive than doing a Google search query,
but maybe not for long. Maybe not for much longer.
By the end of this year, it's not going to be the same. Yeah. Yeah. There are some things to talk
about on the crypto side of things, where we got this podcast started. So cookie.fun is the website I've got pulled up right now, which shows the total market cap of all AI agent tokens. Now,
maybe to call a spade a spade, these are meme coins associated with AI agents. I think that's a fair
take. But this sector of crypto, the AI sector of crypto, has gotten a lot of life back in it
over the last two weeks. Ejaaz, tell us what happened. Yeah. So in this particular instance, life
translates to money. The total market cap of these AI agents, if you remember back in the day,
I think it peaked at like $25 billion, and then it got crushed basically over subsequent months
to about $3.2 billion. Now, over the last three weeks, it has almost tripled. Oh, just over tripled,
actually, to $11.4 billion as of this screen recording right now, right? And as you can see,
charts are green across the majority of the spectrum.
of things. And you're probably wondering, hey, guys, what's happened? Like, have some crazy innovations been going on? Like, give me the summary as to why this is happening. So,
let's go through some of the takeaways. Number one, Virtuals is leading this entire kind of progression,
right? So if you remember back in the day, we used to speak about kind of like the top AI agent
protocols. Virtuals was one of them. You had ai16z and Arc as well. Over the bear market,
all three of them were crushed, but all three teams were kind of heads down focusing on pioneering new kinds of innovations.
And virtuals really kind of like came through as the leader here.
So they specifically added over $600 million in market cap over the last two weeks.
And that's just in the Virtuals token.
So it's back over a billion dollars.
And they've made a number of changes since we last checked in on them.
So if you pull up this tweet, which basically has this fun little table of what they've done. So in January 2025, they go through things like agent revenue, agent commerce protocol, fee structure, chain. Basically, everything on the left was kind of unclear, a bit flimsy, kind of Ponzi-like. And on the right side, they have a much more structured product and a focused strategy for what they're trying to build and how they're supporting creators or agent developers
on their side of things. But let me explain what any of this means and what the highlight is.
So they created a new agent launchpad. And for those of you who don't know what an agent launchpad is, it's basically kind of like a Cursor, if you like,
where you can go on, you can use a no-code environment to design what your agent might look like
or what it might do, what functions it might serve, and then you can launch that agent. And usually
you can associate a token with that agent. So it's kind of like you're IPOing an agent. And they created this thing called a Genesis launchpad, which is meant to be a fairer way to launch
agents. So they've completely replaced the bonding curve model with something they call proof of
contribution, which basically means you need to do something helpful for the Virtuals ecosystem if you want to get access to buying the tokens for that particular agent. And contributions are
measured in something called, I think it's like Virtuals points, or they're kind of like loyalty points. And there are many ways to earn these points. You can stake Virtuals. You can yap or talk about Virtuals on X, or you can simply just hold Virtuals tokens or certain Virtuals tokens.
Then there's kind of like this presale and people can commit. And the point is it's more equitably distributed. Now, of course, that's not the only thing that people are talking about.
They're talking about the returns that they're getting on this thing. So the average return
multiple on each of these tokens, David, since they launched this launch pad, is about a 12 to 20x,
which is insane, right? Until you realize that the FDV that you're investing in is like $212,000. Not million, $212,000. So you kind of have like a guaranteed ROI.
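An illustrative sketch of a proof-of-contribution style allocation: sale access is split pro rata by earned points instead of along a bonding curve. The point sources and numbers are invented for the example and are not Virtuals' actual formula.

```python
def allocate(supply: float, points: dict[str, float]) -> dict[str, float]:
    """Split a launch allocation proportionally to each user's points."""
    total = sum(points.values())
    return {user: supply * p / total for user, p in points.items()}


# Hypothetical contribution ledger (stake / yap / hold are the earning
# routes mentioned above; the weights here are made up):
points = {
    "staker": 600,   # staked tokens
    "yapper": 300,   # posted about the launch on X
    "holder": 100,   # simply held eligible tokens
}
alloc = allocate(1_000_000, points)
print(alloc["staker"])  # 600000.0
```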
What I don't like about this is probably what you can already guess. And if you pull up this
tweet by Vader, it's mainly just about the money right now. And it's not really about much innovation. And I know the Virtuals team personally are working very hard on creating value. And you could also argue that in the traditional AI space they haven't created anything of, well, definitely more marginal value on the enterprise side of things, but not as valuable as we'd expect to see. So it's kind of like, you know, they're fighting for the same kind of thing. But it's good to see that they're focused on this, right? The second
thing is Virtuals updated their fee flywheel, which means that 70% of fees get given back to
the creators. Now, previously, that wasn't the case. And the effect that this has had is basically
any kind of agent developer that launches a coin
can basically get continual money
to continue developing their product.
And that's been a really good way
to kind of like sustain the ecosystem.
So for the AI listeners out there,
it's probably worth knowing
that there's this meta happening
in the crypto industry
around these token launch pads.
The meta that's going on this week
is this like Believe app thing,
which allows you to launch a token
that is paired with,
but other than that has no material association with,
a vibe-coded app.
So you can vibe code an app, and then you launch the app, and then you launch a token with it.
And it uses the token to gain virality for the app.
You know, you attract some degens.
If they play with your app and then they like your app, they might buy your token,
which again has no material association with the app other than that it's sharing the same brand.
And so on one side of things, we have the AI agent part of this crypto industry,
which is downstream of Virtuals, which is a token launch pad.
And then there's the token launch pad side of things,
which is the token engineering, the crypto degens,
the speculative gambling that really turns people off to crypto.
And on the spectrum of, like, is this AI innovation or is this token launch pad financial
engineering,
is this for the AI people or is this for the crypto degens?
I'm placing this more on the crypto degens side of things than any sort of AI fundamentals.
That's kind of my take on this.
Yeah, I think you're right.
And I think it's important to just say that, again, this is mostly about the money.
Most of the media coverage on this, or the updates that people get excited about, is the return
multiple that they get, which is of course one of the main reasons people get into crypto,
they want to make money. But it also gets kind of boring or repetitive
over time, and we want to see some real pioneering innovation that isn't just some
kind of meme coin relationship. Otherwise, you know, what are we doing here? We might as well just
talk about gambling picks every week, right? Moving on, on the note of some real
fundamental use cases, decentralized training, or distributed training, is actually an area where Web3
or crypto innovators are bettering AI or pioneering AI.
So we've spoken about these companies that are listed on this tweet here a bunch of times before,
but to give some context, in order to build these AI models, you need a heck of a lot of compute
and money and data, and it's very costly. So some folks had the bright idea
of thinking, well, what if we do this
in a decentralized distributed way
where we can aggregate compute
from people's idle laptops, computers,
cell phones, and we can
kind of combine them into some kind of
decentralized network and use
that compute to train models. And so a bunch
of teams were like, okay, cool, that sounds like a good idea.
But it was much easier said than
done because the way you need to network
and connect these different GPU servers
and laptops and stuff is not an easy
task at all. In fact, by the end of
2022, the best minds at Google could only put together a 400 million parameter model. Fast forward to
today, and Gensyn announced that they're using RL, or reinforcement learning, to train a 72 billion
parameter model, which I know isn't as big as 1.2 trillion, Josh. I know it's not as big as the
DeepSeek team's, but it shows massive improvement. And this is being led by Web3 folks. So,
not really going down the list, but just
giving you an overlay here, teams like Gensyn, teams like Prime Intellect, teams like Nous Research,
are really pioneering open source distributed training. And I don't think it's something that
should be overlooked because if executed well, these blockchain networks could be worth hundreds
of billions one day. And I'm not exaggerating because what cost or value would you give to networks
that train some of the best local or frontier models of our generation, right? That can be co-owned
by multiple people. So I thought that was really cool.
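The idea described here, pooling compute from many machines so each trains locally before results are combined, can be illustrated with a toy federated-averaging loop. This is a minimal sketch under simplifying assumptions (one shared scalar weight, clean synthetic data, no networking); real distributed-training networks are far more involved.

```python
# Toy "distributed training": each simulated device holds a data shard,
# takes local SGD steps on the shared weight, and the results are averaged
# each round (federated averaging). Fitting y = w*x with true w = 3.
import random

def local_step(w, data, lr=0.1):
    # One pass of SGD on this device's shard, squared loss on y = w*x.
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(w, shards, rounds=50):
    # Each round: every device trains from the same global weight,
    # then the server averages the results back into one weight.
    for _ in range(rounds):
        w = sum(local_step(w, shard) for shard in shards) / len(shards)
    return w

random.seed(0)
# Five "devices", each with 20 points drawn from the true model w = 3.
shards = [[(x, 3 * x) for x in (random.random() for _ in range(20))] for _ in range(5)]
w = fed_avg(0.0, shards)  # converges toward 3
```

The hard part the transcript alludes to is exactly what this sketch hides: doing that averaging step over unreliable consumer hardware and slow network links.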
Yeah, the idea of having a decentralized alternative to the very centralized powerhouses, the OpenAIs, et cetera, is attractive to me just from a redundancy perspective.
Humanity at least has a plan B.
But I'm not sure if they can go toe to toe with the big boys.
Josh, what's your take?
Yeah, I agree.
I'm still trying to wrap my head around it.
I'm trying to get excited about it.
I think it's an exciting development that is early.
And again, like the rate of acceleration seems like it's going fairly well.
The initial take is that all of the largest companies in the world are trying to just gather all of the information in the world and distill it into a model.
And we're seeing that closed source, but we're also seeing that in open source with Meta.
And now Meta has up to two trillion parameters on the next model.
And that's going to be open source, open weight.
And I would imagine if people want to build something interesting from that, they could take it and then distill that massive knowledge base down however they would like, to kind of customize it as a distilled model.
So I'm trying to just understand the
difference between that, like taking an open source model and distilling it into whatever you want
based on all this knowledge they've done the hard work of collecting, versus going to collect your
own data and use it in a decentralized way to build your own ground-up model, instead of kind of taking
a top-down approach. Yeah, I completely agree. I think what this is ultimately going to result in
is niche use case kind of bleeding edge or frontier edge models that serve like
a small percentage of people, right? Maybe it's people that care about privacy of data,
or maybe it's people that care about using a certain amount of encrypted, personalized information
that they don't want to share with anyone, again, from a safety aspect to train a model
that serves a particular function. It comes back to what we were saying earlier. If you don't
have a good enough product that comes from this AI thing, you're not going to make it. And this
isn't just unique to
decentralized Web3 folks. This is
also very relevant to
the top dogs at the top.
Take Microsoft, right?
What are their unique models beyond just getting direct access
to OpenAI model weights?
If you don't have access to that, then you're
kind of like nothing, unless you build an amazing product.
So it's going to come down to the product. It's going to
come down to the founders. I am excited that
open source is still pushing really hard at this.
And I'm more than confident that some good
kind of app program or
software will eventually come from this, but you're in a very competitive mode and a very competitive industry
that is being funded just as heavily on the centralized side. And at this point, you can't even argue
that the monopolies aren't open sourcing, because the folks at Meta and the folks at DeepSeek are all
doing it anyway. So let's see how this pans out, and let's keep our heads in the game.
Josh and Ejaaz, it's another week. A lot of news to cover. I'm sure in seven days there'll be a lot more
news to cover, and I look forward to talking to you guys then. Yes, sir. Awesome. Another great week.
Limitless. This is the Limitless podcast, brand new feed. If you have not subscribed to the podcast or subscribed to the YouTube, please do so. If you are listening to this on a podcast app, please make sure to give us a five-star review so we can get this podcast up to the top of the charts. We think this can be, and, you know, already is approaching, some of the best AI news-related content that you can find. If there are other AI podcasts out there that you guys listen to, let us know, because we want to beat them. So yeah, give us a five-star review. If you're on YouTube, subscribe to us
and like the video so we can grow this channel.
We appreciate you guys all here with us.
And once again, we will see you guys in seven days.
