Limitless Podcast - Stargate: OpenAI’s $500B Plan to Build a Planet-Sized Supermind
Episode Date: June 5, 2025

Project Stargate, a $500 billion, multi-gigawatt data-center network from OpenAI, Microsoft, and the U.S. government, signals that control of watts and GPUs is replacing oil and nukes as the world's new hard power. We trace how Texas and Abu Dhabi super-clusters, chip export quotas, and Taiwan's bottleneck are redrawing geopolitical lines, while Meta's Veo-3-powered ads reveal a coming internet optimized for hyper-personalized persuasion. The crew also unpacks Anthropic's Game-of-Thrones clash with OpenAI over Claude access and the $1.5 billion Builder.ai "no-AI" scandal, exposing the razor-thin line between breakthrough and buzzword. If you want to understand how compute, capital, and national security will shape the 2030s, hit play.

💫 LIMITLESS | SUBSCRIBE & FOLLOW
https://limitless.bankless.com/
https://x.com/LimitlessFT

TIMESTAMPS
00:00 Stargate's International Expansion
10:06 How Big Of A Deal Is This?
15:46 Governments vs Private AI
22:18 Who Owns The Data?
25:37 Content Addiction Endgame
35:39 This Is A Big Deal
41:19 Can This Be A Good Thing?
48:39 Fake AI Agents??
56:41 Game Of Thrones Update

RESOURCES
David: https://x.com/trustlessstate
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
There's a topic out there that we haven't discussed yet on the AI roll-up.
And I think it's because this topic is so daunting because the implications are so huge.
Because the outcome of this topic will shape the future of the global order of humanity and planet Earth.
Listeners might have heard of this Stargate project.
This is a Microsoft, OpenAI, SoftBank, Oracle, and United States government collaboration.
All of these parties are all investing and building this $100 billion,
multi-gigawatt data center campus slated for 2025.
The idea here is that compute and models are going to get so much larger and so much more
powerful that we need to get ahead of this incoming demand for compute and create a $100 billion
brain center, an intelligence center for running AI compute, not as a matter of creating
just a good consumer product, but actually as a matter of national defense and national security.
The conclusion that I think the United States government has arrived at, with the assistance of OpenAI, is that access to compute is equivalent to access to hard power.
Ensuring access to the world's most powerful compute centers is now treated like access to
stealth technology or enriched uranium in past eras.
Nations now view frontier compute as a prerequisite for economic leadership,
intelligence dominance, and military deterrence.
So like what was once the Manhattan Project back in the 40s, a race toward nuclear armament, is now a race toward building the world's largest and most powerful compute centers, because access to compute means greater access to more powerful intelligence than our adversaries have. And I think the downstream implications of this will ultimately redraw the geopolitical lines of the global world order.
Just as in the Cold War we had lines drawn between the values of free-market capitalism in the United States and the West and Soviet communism in the East, new lines will be drawn downstream of whoever has the largest intelligence compute centers. So following the United States' $100 billion Stargate project, the UAE is building out Stargate UAE. In May 2025, this month, or last month, Abu Dhabi's state-backed G42 signed a deal with OpenAI, Nvidia, Oracle, and Microsoft, some of the same players, for a one-gigawatt Stargate UAE cluster, with Washington, D.C. approving the export of half a million top-tier Nvidia chips to the Gulf per year, showing that the UAE wants to enter this game of international compute dominance. So Josh, Ejaaz, here are my big takeaways from what I see in
this new fight in the jungle among nation states and their allies, where I think this goes.
I want you guys' help to see how far down this rabbit hole we can go.
So, my first big... I've got seven big takeaways for you guys.
First,
seven, let's go.
Frontier compute has become hard power.
So bleeding-edge GPUs are treated like stealth fighters and enriched uranium.
You know that line: compute is the new oil.
Compute is now a matter of national security, national defense.
So whoever owns the watts, the chips, the energy, the cooling owns the economic rents of the 2030s.
That's number one.
Number two, United States export controls are now diplomatic currency for the global world order.
So if you are a tier one ally of the United States, you get access to chips.
If you're a tier two ally, you get chip quotas.
And if you're an adversary, China, you just get locked out.
Number three, energy becomes the choke point.
So a one gigawatt data center needs the energy of like a mid-sized United States city.
So Gulf states and hydro-rich Nordic lands and also, of course, the UAE,
very energy rich. These are very big beneficiaries of this move towards compute. Number four,
the Taiwan bottleneck is huge. I think the line that is drawn between powers will run straight through Taiwan, and Taiwan will be the center of a tug of war for anyone who wants a stake in this fight. Five, compute nationalism. So EU AI factories, China's homegrown accelerators,
Gulf chip splits, all aim to dodge permanent United States cloud dependence.
So compute is going to become nationalized.
The internet is going to become even more balkanized.
The government, number six, the government will shift to input controls.
So regulators will see GPU counts and data center controls as basically the safety levers.
Those are the controls that governments have over the rest of the world.
And at the end of the day, the final big takeaway is if you control the watts and you control the chips, you control the 2030s.
And Stargate is the first concrete, like, proof that raw compute is now a frontline geopolitical asset. It's an extremely ambitious topic. I hope we can do it justice on today's episode. So, Josh, I'll throw this one to you. Are the stakes really as high
as I've made it out to be? I mean, in short, yes. We're talking about owning the keys to the
most powerful and influential technology of our time. Like, AI will have impact across economies,
lives, culture, and these data centers are the things that will power that, right? Complete compute
supremacy we're talking about here, right?
Stepping back a bit, I want to give the audience a bit of an idea as to what to imagine here,
like what this thing will look like.
So Stargate at its core is an initiative, as you said, David, to build these data center
campuses, right?
And we're talking about five to ten of these in the US, plus, I think, five or ten planned overseas.
And the first, which they've announced, which you just mentioned, is the UAE.
And they're called superclusters, superclusters of these GPUs, which is basically the
machinery that can process all of this data and compute to train all these different models
and to help you access these different models as a customer, as a user in that country that uses
ChatGPT, for example, right? And this is no small feat, right? $500 billion has been committed here by some of the biggest backers and companies. $500 billion. So, you know, Microsoft, SoftBank, Oracle, MGX, which is basically the UAE, and OpenAI themselves.
Now, also a fun little side kind of fact to this is Microsoft still secures their cloud compute for the rest of this decade until 2030, right?
And they still secure a 49% profit share of OpenAI products.
So the point I'm making here is this is the first time that tech companies are having such a massive influence on global political climates and what those respective nation-states are going to build, invest in, and facilitate over the next decade. Now, why is this important? If you want the best
model, you need the compute to train it, right? And this requires a lot of upfront capital,
massive data structures, etc. Now, currently, the way that OpenAI is scaling their models suggests that anything post-GPT-5, and GPT-5 is a model that doesn't exist yet, but anything after it, will require around five gigawatts of dedicated, low-latency compute by 2028.
That is, for listeners of the show, a heck of a lot of compute.
That is, like, very, very expensive.
And OpenAI, we see, is taking the strategy here of owning the facilities versus, like, renting cloud, which is what a lot of companies already do. And this locks in both the supply and the economics, letting OpenAI basically amortize CapEx over multiple generations rather than pay, like, huge markups.
So we're seeing complete dominance from not just OpenAI, right, but one man, Sam Altman.
Now, how this plays out into global politics is a completely separate discussion,
which we should definitely have on this show.
But I want to throw it to Josh to just get his takes before we dig in deeper.
It reminds me of this short story by Isaac Asimov, which is The Last Question. And the idea is, as you acquire all this intelligence, there's really only one question that matters. And in the story, it was: can you reverse entropy? But in this, it's: can you create artificial general intelligence? And that is the only thing that matters, because once you solve that, nothing else matters. It's able to solve all of your problems. It is able to give you infinite levels of intelligence, sort of infinite energy. It is like a superpower. This is bigger than nukes. So the stakes
are very clearly as high as David's proposing them to be. I think the scale is interesting,
and that's kind of something that I want to talk about. Ejaaz, you mentioned they're going for five gigawatts of power for some of the data centers. For reference, one gigawatt is equivalent to
about 750,000 homes. So for every gigawatt, this is like a city worth of homes. It's 750,000
homes, 2,500 Teslas, I think 100 million light bulbs is the equivalent. So the scale and
required energy is massive. They are planning to do this with the UAE, but they're actually
making meaningful progress here in the United States in Texas right now. And the way it works is
they just have these massive buildings with giant GPUs inside. So the current setup that they have in Abilene, Texas: it's eight buildings, I believe, with 50,000 GPUs per building planned. So that is going to be 400,000 GPUs, which would be the biggest cohesive cluster, I think, in the world. There's over 2,000 people working on this 24 hours a day.
And what's amazing is the power constraints that they have versus what they're trying to get.
So currently, they only have 200 megawatts of power when in reality they need 1.2 gigawatts, which is about a 6x multiple on that. So as they're building these, we're starting to see the constraints that they're running up against. And I think a lot of the timelines they have are super
ambitious. They're like, oh, yeah, we could do this very quickly. But they need power. So I think
that's kind of where the UAE comes in, the gold coast of the world.
They have all of the energy.
They have all of the oil.
They can actually power these things in ways that we can't.
So what I'm excited to see with this is, like, how quickly they're able to out-accelerate us
in terms of powering these GPU clusters, because that's really the big thing that matters.
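The back-of-envelope power figures in this segment (one gigawatt powering roughly 750,000 homes, and Abilene's current 200 megawatts versus the 1.2 gigawatts it needs) can be sanity-checked in a few lines. The average-home draw used here, about 1.33 kW of continuous power (roughly 11,600 kWh per year), is our assumption, not a number from the episode:

```python
# Sanity-check the power figures quoted in this segment.
# ASSUMPTION (not from the episode): an average US home draws about
# 1.33 kW of continuous power (~11,600 kWh per year).
AVG_HOME_KW = 1.33

def homes_per_gigawatt(avg_home_kw: float = AVG_HOME_KW) -> int:
    """Number of average homes one gigawatt of continuous power supplies."""
    return round(1_000_000 / avg_home_kw)  # 1 GW = 1,000,000 kW

# Abilene site: ~200 MW available today vs. ~1.2 GW needed.
current_mw = 200
target_gw = 1.2
multiple = target_gw * 1_000 / current_mw  # compare in megawatts

print(homes_per_gigawatt())  # ~750,000 homes per gigawatt
print(multiple)              # 6.0: the target is about a 6x step up
```

By the same arithmetic, the five gigawatts mentioned earlier for post-GPT-5 training would be on the order of 3.75 million homes' worth of continuous power.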
Do you guys know what CoreWeave is?
It's a company that went public not too long ago.
And I think, like, if you want to answer the question of how big of a deal this is, you'll see the market has repriced CoreWeave up and up and up. Because what does CoreWeave do? It hosts chips. It is a power center. It is an intelligence center for rent. And so it just buys a lot of chips, hooks them up to power, and you can buy access to those chips. And so it's independent from
everything that we're talking about. But I think you're seeing that same like emphasis on importance
of access to compute being expressed in CoreWeave. And the only reason why I bring up CoreWeave specifically is because, like, the stock price is up roughly 350% since March, right after it went public. And so you can see the market starting to really value intelligence centers, access to intelligence. And as Ejaaz said, there's this OpenAI for Countries phenomenon, like this initiative from them where they're trying to just kind of copy this business, and this is a valuable business model. They have the intelligence and they are selling it to entire countries. And so that line that's drawn between powers, like Josh said, is being drawn arm in arm with nation-states, like a tech arm. And China figured
this out a long time ago. The integration between the Chinese government and the Chinese
tech world is the same. The tech industry of China is an extension of the Chinese government.
And in America, it was always much more separate. Like Facebook was going toe to toe with the
government, all of these people in Silicon Valley. There's a joke out there: why is Silicon Valley in San Francisco on the West Coast when, you know, D.C. is on the East Coast? Well, because they want Silicon Valley to be as far away from government control as possible.
This, I don't think, works in the world of AI. Because AI is such a strong matter of national defense, you need an alliance between the AI sector and the government sector, even just as a matter of what's most profitable for OpenAI. Like, you actually have to team up with the largest power bases in order to secure the energy, in order to secure the chips, and to have companies like OpenAI, companies like Microsoft, actually reach their maximum potential. Ejaaz, what are your thoughts? You're nodding your head. Yeah. Well, I wanted to pick up on your point around defense, David. So you were right to
point out that traditionally in tech, in the West, at least, it's been separate from the government.
They've been going toe to toe, right? But on the defense side of things, tech and the government have
worked pretty well together. Actually, just this week, Anduril, which is an infamous, or rather famous, weaponry and drone company that was started by the former founder of Oculus, which was acquired by Meta, announced a partnership with Meta, right? And the point of this partnership was that they were going to leverage Meta's new VR, or cross-functional VR technology, with their drone technology. So basically creating superhuman gamer computers that can basically take drones to war for the US. And that was a major, very public partnership. And the reason it was so significant was because this founder was fired from Meta. He didn't leave of his own accord. So it just kind of points out how important AI is to the weaponry side of things.
But I think there's an important difference to point out here with the Stargate side of things,
which is this partnership of open AI for countries isn't just about defense.
In fact, it's not even mentioned in the announcement blog post.
What they talk about the most is the fact that AI is going to have very significant consequential effects on that entire state economy.
I actually want to dig into some of the differences with Open AI Stargate for countries versus Stargate that's happening in America that Josh just described.
So with the global order, Stargate will be set up in the UAE.
Let's take that as the first example.
The UAE will own all of their data that goes through that data center.
So the US or OpenAI doesn't claim ownership over the data that's being run through the UAE's Stargate cluster, right?
So they'll own all their data.
It's sovereign.
It's democratic AI, as Sam Altman describes it, for data privacy and compliance.
Number two, they'll create custom AIs to serve the citizens of the UAE.
So, for example, anything in a native language, different laws, regulations, can be leveraged
for custom healthcare or like public services, for example.
And then number three, in every nation that a Stargate is set up in, they'll create a national
startup fund, of which Open AI, by the way, is going to be one of the main contributors to invest
and support subsequent AI companies that blossom from these different models. So we kind of see
this as kind of like a political play, right? The US gets investment exposure to all of the top companies that come out of OpenAI's IP or models, but the data sovereignty and the compliance and privacy aspects are kept in tow, right? So it becomes a geopolitical play. The US positions itself as, like, the trusted AI compute bloc versus China's stack. The OpenAI for Countries track kind of, like, offers partner nations discounted slots in exchange for alignment, as Josh pointed out earlier. Yeah, if I'm the United States, this feels like a no-brainer to me. I think the race to AGI is between the US and China, really. And in the case that
China does reach it first, they have the manufacturing capabilities to physically manifest that through
robots and through devices way faster than we can. So in the case that they actually get there first,
that is a really scary future. And I think the best way to hedge against that is just, like, the enemy of my enemy is my friend. Just get everyone else who is not China on board. Get them in the loop. Get them iterating. Get them ascending up that curve of acceleration as fast as possible. And use the resources that they have that we don't quite have access to.
So get access to Saudi energy and Saudi money. That way we can funnel it back to the United States
to provide more GPU cluster training. So to me, this makes a lot of sense. I think the question now is what position this places OpenAI, particularly Sam Altman, in, being the single facilitator of this intelligence.
Like, in a way, we're kind of creating this supermind where we're just
creating these giant nodes all around the world that will then talk to each other and kind
of recursively learn with each other and have access to huge swaths of data that we otherwise
did not have access to. So if you're thinking about this from Sam's perspective, well,
he is now kind of building this hive mind by leveraging other countries and like, sure the data
is private, and sure, it's their own private model, but I'm sure there are downstream wins for OpenAI that result from this that are yet to be seen. But if I'm Sam,
I'm probably feeling very powerful right now, I think would be a good word to describe that. And I guess
over time we'll see how this actually plays out. But good strategic move for the U.S., whether or not
it is the right thing to do with only one company versus getting someone like Google or Anthropic
also involved, that remains to be determined. This is where you really see the lines being blurred
between the United States government or nation state powers and the tech sector,
because the United States, Microsoft, SoftBank, OpenAI,
are all dual investors in both the domestic Stargate and the UAE Stargate.
And so it's funny to see Microsoft and OpenAI having their fingers in both camps, right?
They get to own the intelligence center in the United States.
They get to own the intelligence center in the UAE.
And there's no way the United States government could do this without the help of Microsoft and OpenAI.
This thing all centers around Open AI.
And so in order to compete with China and to be competitive at all, the United States government
needed to ally itself with this tech sector.
But then the tech sector gets to elevate itself beyond the tech sector and is now in the
national defense geopolitics camp.
And so, like, OpenAI, whenever it trades publicly, or maybe Microsoft, because it does trade publicly, does that also count as a defense stock, too? Because it is now inside the UAE. And it's already a supranational or international organization, but even more so now when you are highly integrated in the defense of multiple countries.
Well, I was going to say the lines of nation states are blurring now, right?
This isn't a topic we're too unfamiliar with when we talk about the Web 3 side of things.
But we talk about political power and influence.
We talk about economic power and influence.
Tech companies have been kind of blurring that line, but it's been the Eastern and Western blocs, as Josh pointed out earlier. Now it's becoming quite clearly an AI bloc, and it's all for the taking right now in terms of alliances. And it doesn't seem to inherit or interfere with any past political biases that may have existed. I had a question for you both. Outside of major
political elections, has there ever been a case where tech companies have had such large political
influence over a single decision.
Outside of elections.
Yeah.
So outside of like lobbying and super PACs.
Yeah.
Well, there's the Cambridge Analytica debacle and, like, Facebook being credited with starting that civil war in Africa from just kind of, like, advertising manipulation, where downstream of this one, like, local domestic dispute, it turned into a civil war because Facebook's advertising program was open to both parties in this local region being able to access and influence people there, and it created a civil war. I'm blurry on the details, but that's basically, effectively, what happened.
Does that count?
Kind of.
It's using technology as the tool here, right?
And that's like a very specific network.
I'm talking about, like, a global order of influence, which is not just the social network, which is basically the advertising or propaganda machine. It is the machine that allows you to facilitate the workers, right? Or facilitate any kind of global economy, GDP, a major percentage of each country's output, right?
That hasn't been done before, in my opinion.
And it's only been done kind of like behind closed doors, handshake deals.
This is like the first major kind of global operation that's happening.
Yeah, we've seen this trend of being able to influence through technology companies, like we saw this first with Facebook, where you can actually sway the opinions of a lot of people, and you could sway an election that way, and you could sway policy that way.
And there's been this increasing trend of technology companies gaining more leverage to the point
where they can actually influence politics and now are required to participate in it because
they're the only ones that have the required technology needed to compete with other countries.
And while the government has fallen flat on actually innovating, it has to rely on private
industry, or public industry, where these companies in tech have been created. And I don't see a world in which that trend changes, because as great as it would be to have a Manhattan
project for AI that is government funded that is kind of all pooled together and contributed to
by everyone as a country. That doesn't exist. And there's no world in which it seems like that
is going to exist when the reality is that they're just going to lean on a single entity like Open AI.
So I think this is an increasing trend. I think private industry will become more and more powerful until eventually it's just at parity with government, because it can influence people.
And we're going to talk about this later in the show. But the influence you have on the person can
sway them to vote for anything. In a democratic world where votes matter, having influence over
those people makes a really big difference. I have some food for thought for you guys. I want to
roleplay some scenarios with this entire kind of project and see where you guys take it, right?
Okay. So the first kind of question I was mulling over is: who owns the resulting character and quality of OpenAI's models if, say, the UAE contributes 15% of the compute for the latest frontier model? Have you thought about how that political play might happen? Like, right now,
they've announced 10 Stargate projects within the US, and 10 more will eventually be announced
internationally. Do you think the US will always have to maintain, like, a greater number of
sites versus the outside? Like, how does that play out in your head? That's question number one.
I think the location of compute, where compute happens in the world, will become very important.
You would think that, like, oh, I'll just upload it to the cloud, and the cloud will make, like, geolocation, physical location, just completely irrelevant.
I don't think that's true when it's AI.
And because, like, as soon as things escalate between, again, hypothetically, the United States and China, then access to intelligence becomes a national weapon that you have, and you need to make sure that you always have access to that intelligence. You can't be dependent on a third party.
But who gets to define the intelligence? If the UAE's data clusters, right, or compute clusters, account for 20 percent of the training of OpenAI's frontier model post-2028, should they have an influence on the character and quality of that AI model?
Oh, I think they'll be able to negotiate there. If they have 20% of the total compute, they'll be able to negotiate a way to be involved in that conversation,
for sure. Because as an ally, again, if things escalate versus China and you are throwing your
compute against China's compute and whatever, I don't know what that looks like. Just, you know,
subversion tactics, building better weapons, I don't know. But if you can just, like, align yourselves with the UAE's 20% more intelligence, then all of a sudden that intelligence adds to your pool of intelligence, and you get to take that fight to China because you guys are allies. It's like grouping up armies, except now the armies are data centers. Right. The reason why I ask the
question is for so long, we have spoken about American made AI and China made AI. And I feel like
those lines are going to blur over the next decade, right? Like, if the UAE wants to add a clause or a personality trait into OpenAI in general, not just within the UAE, but as a generalized model, to say that, I don't know, you need to be more friendly in your biases toward the Middle East or Middle Eastern news, I wonder how that plays out in massive nation-state and government political decisions. Well, I mean, we can just borrow lines from, like, the book 1984.
Like, "we've always been at war with Eastasia," all of those things that, in the book, were just injected into the thoughts of the society, and the society just went along with it. Now you just get to do that with OpenAI. And all of a sudden, whatever OpenAI spits out is truth.
So there's another conversation that's not related to Stargate,
but I think is incredibly illustrative of exactly why Stargate is so important.
We've already mentioned Cambridge Analytica in the 2016 election.
And I just want to kind of trace over some of the facts, the TLDR of what happened.
Actually, this is in 2014.
So, you know, over a decade ago.
So Cambridge Analytica, this company that did, like, advertising analytics and otherwise internet analytics, primarily based on Facebook: they ran a personality quiz on Facebook in 2014, and this app vacuumed up data from 270,000 quiz takers and their friends, collecting data on the friends of quiz takers, and ultimately generated data on 87 million Facebook profiles without people's knowledge.
Cambridge Analytica built psychographic models like neurotic suburban mom or angry young man,
and while working for the Trump campaign
and allied political action committees in 2016,
micro-targeted them with razor-tailored political ads.
When a whistleblower outed this scheme in 2018, it just torched Facebook and big tech, and that became, like, a huge dividing line between the Democrats and the Republicans.
Remember, this was the whole like Russia hacked Facebook line
from Hillary Clinton, all this kind of stuff.
We saw that it had a very large influence on the 2016 election that elected Donald Trump.
And that was because of big data and just subtle tweakings into how Facebook served content to its users.
Now, here's the news from this week.
This week is that meta plans on creating AI ads.
And it just wants to make a very simple ad platform that does a lot of the work that it takes to make an effective ad.
And so the idea here is that advertisers can just plug in objectives and a credit card.
That is a direct line from Mark Zuckerberg.
Just plug in what you want and a credit card, and Meta will give you what you want. And so Meta's AI will just spit out a full creative stack: images, video, copy, even, like, real-time A/B testing on ads for different users and different locations. And then the AI will also decide whom to target on Facebook and Instagram, and also optimize the pacing of spend to make sure that these ads are as effective as possible. We've always known what Facebook's, Meta's, product is. What is Meta's product? Influence. You give them $1,000. They give you $1,000 of influence. And now they are leveraging AI to make sure that their product of influence
is as effective as possible. And so to me, I look at Cambridge Analytica and then I see this.
and I draw a direct line: Facebook is willing to use AI, which will, again, run on compute located in the United States, owned by OpenAI and, you know, the data centers that we've talked about. And you'll be able to use that to access influence over citizens of the United States, or any other country, really. And so this is why being an AI powerhouse, owning the intelligence center of your local region, is so important, because that gives you total control and total influence over what your constituents think, what the people think, what the facts are,
what the truth is. And we're going to use this in our way. China's going to use this in their way.
Both ways might be kind of authoritarian, kind of totalitarian, either way. But there's a little bit of a
downward spiral where we have to do this because if we don't do it, then China's going to do it.
Josh, what are your thoughts?
This is a continuation of a trend that we've been seeing, which is kind of dark and scary in the
sense that a lot of people don't really know what's going on with AI. They're not sure how powerful it is.
They're not sure where it's popping up within their day-to-day life. And I think in the case of this
advertising example, we have a very clear case of the continuation of this in the sense that
users will not really know that these are AI generated. They'll just like the ads better and better and
better. And what we kind of see with TikTok is when you get content that's tailored to you,
you really spend a lot of time enjoying it and scrolling. And if you could apply that to advertising,
Well, then, I mean, that's incredible news for advertisers, but also kind of scary news for us,
where you're not really quite aware of the ads becoming much more powerful, but they are.
And not only are they more powerful, but they're hyper-customized to you.
So as a user, Facebook has all of your preferences.
They know your data.
They know what you like.
They know what you don't.
They just send a prompt to the AI, and they say, hey, generate an ad for this person with these parameters
based on these things that he likes.
And now you have the best ad in the world that probably doesn't even feel like an ad.
And it makes sure no one else will see that ad. Only you will see that ad, because that ad was
crafted for you specifically to influence you the best. And not only that, but the cost to generate
this ad will be multiple orders of magnitude less than it would previously because it doesn't require
people to go out into the world to film things, to go into like an editor and actually create
these things. It's just done with a single prompt and a single click. And I think that's a really big
thing because as the cost of these ads go down, you can then run them and iterate on them much quicker.
And you could kind of, the way that these AI models work is they kind of take these feedback loops and they learn from them over and over.
You could run that loop over and over for a fraction of the cost it takes to create a normal ad and until you have the best ads in the world for people.
So it's a continuation of attention and grabbing attention and hyper-customizing attention.
And it seems, I don't know, kind of weird and scary.
I see just nodding. Do you have anything to comment on this?
I mean, the question becomes, will this result in Cambridge Analytica 2.0,
but, like, misinformation on steroids?
It sounds like what we're indirectly getting at here is potentially, yes,
unless it's like regulated or monitored very heavily.
Just in that example, Josh, of like personalized ads where, you know,
you could be selling the same product to different target audiences and maybe you might
misconstrue some of the details of that product in a few different ways just to kind of
appeal to that consumer and get them to click buy.
I kind of worry about how that might kind of spiral into something a little crazier, right?
So would company politics kind of change in the way that, like, that information that they supply to Meta,
can they tweak it there and then?
That's one thing.
The other thing is there's a shift of a power structure slightly, ever so slightly here.
So typically, you've had Meta kind of working in a 50-50 relationship with advertisers, right?
Advertisers will come to them and say, hey, we want to advertise on your
platform. It's like, okay, cool, what's your product? Create the ad, do A, B, and C, and then we can see
where we can kind of like plug you in, right? It's the kind of like YouTube algorithm where it's like,
here's an advert. We'll try and put it in front of the right audience. Facebook is kind of like
now saying, uh, we'll handle all of the video production costs, all the kind of like visual qualities,
and we'll maybe take an extra cut on this. I don't know whether that's like a fair
take, but I feel like they're going to get more money out of this. They're going to own more of the
tooling and services in-house, and they just become a bigger kind of conglomerate, basically.
And then the third order effect is, well, what is Facebook going to do with all of the data that
they collect from all these advertising experiments that they're running, right? You just mentioned
that they're going to be doing A/B tests. It's going to close that loop, basically, because Facebook
would put out an ad, see if it works, and then maybe tweak it slightly for the next one. Now, AI can just
close that loop in real time and give you the best iteration of
what that ad might be. So the blueprint basically updates every second. And that's a pretty
insane thing. The video generation model behind this, was it last week that we talked about it? No,
two weeks ago: Google's Veo 3. And so it's interesting to see that this is happening so fast. So Veo 3 got
introduced two weeks ago. And now Meta is using Google's Veo 3 to create AI advertisements.
I don't know when this fully rolls out.
I guess this is just the announcement.
But it's also interesting to note that when this announcement went out,
Meta jumped by 3% on the stock market, and then competitors,
like anything that's competitive to Meta in the ad space,
also went down by anywhere between 2% and 5%.
So you can see the market like reacting to this real time.
Did you see that Microsoft also announced a similar product this week?
Oh, really?
No.
Yeah.
So you know how they have?
It's just using Sora, which is OpenAI's video generator,
but they specifically announced a product,
which is going to create verticalized adverts or media generation
that will basically be ready for TikTok or whatever that might be,
and they're feeding it to their enterprise customers, right?
So this is like a general rollout to start off with,
but I bet you they're going to try and go toe to toe with this new meta product.
The internet, I think, just becomes more and more dead every single week.
Yeah, all of this points to increasing control, increasing attention capture. It's like,
if you have the power to make the optimal version of your product, surely they're going to take it.
And that means that if your product does work as well as they want it to, well, the downstream effects are actually kind of scary and pretty bad.
Like, if your advertising is great and your conversion click-through rates jump to 99%, that seems like kind of a
scary world. Like, I don't think we want these products to work as well as
they're designed, because that creates a reality that isn't good. Like, when click-through rates on ads are
sub-5%, that's great. People see them and think, eh, that kind of sucks, I'm just
going to keep going on with my day. But when they're really good, that's a lot of distraction and a lot
of manipulation. And Andrej has a great take, which you have on the screen, which I'd love to discuss.
Yeah, so Andrej Karpathy, formerly of OpenAI, he just left OpenAI to educate about AI generally,
just kind of a legend in the AI space. He tweeted out: very impressed with Veo 3 and all
the things people are finding on r/aivideo. That's a subreddit for AI video.
Makes a big difference qualitatively when you add audio. There are a few macro aspects
to video generation that may not be fully appreciated. One, video is the highest bandwidth
input into the brain, not just for entertainment, but also work and learning, think diagrams,
charts, animations, etc. Two, video is easy and fun. The average person doesn't like
reading or writing; it's very effortful. Anyone can and wants to engage with video. Three, the
barrier to creating videos is approaching zero. And four, for the first time, video is directly optimizable,
and directly optimizable is in bold. I have to emphasize and explain the gravity of number four
a bit more. Until now, video has been all about indexing, ranking, and serving a finite set of
candidate videos that are expensively created by humans. If you are TikTok
and you want to keep the attention of a person, the name of the game is to get creators to make
videos and then figure out which video to serve to which person.
Collectively, the system of human creators learning what people like, and then ranking
algorithms learning how to best show a video to a person, is a very, very poor optimizer.
Okay, people are already addicted to TikTok, so clearly it's pretty decent, but in my opinion,
nowhere near what's possible in principle.
The new videos coming from Veo 3 and friends and competitors are the output of a neural network.
This is a differentiable process.
So you can now take arbitrary objectives and crush them with gradient descent.
I expect this optimizer will turn out to be significantly, significantly more powerful than what we've seen so far.
Even just the iterative discrete process of optimizing prompts alone via both humans or AIs may be a strong enough optimizer.
So now we can take engagement or even pupil dilation and optimize generated videos directly against that.
Or we take ad click conversions and directly optimize against that.
So this is Andrej just consolidating this down into a very powerful take: the cycles to create
the world's most addictive video are going from days with humans to minutes with models.
And that isn't even accounting for how the most addictive video can now be optimized for individual
users.
The other theme that I'm seeing here, and we've talked about this before, is the differentiation
of the internet. I think pre-Facebook, before the algorithm meta of Twitter, Facebook,
Instagram really set in, everyone was looking at the same internet. There was no algorithm sorting
content and optimizing for distribution based off of who liked what. So if I went to Reddit or if I
went to Facebook or whatever or Yahoo, I was seeing the same content that somebody on the opposite
side of the world was also looking at. And that has slowly eroded to the point where it is today where
no one has the same internet.
I have my feed.
Josh has his feed.
No one has the same feed anymore.
At least with YouTube videos,
I could go and take a YouTube video
and I could share it with Josh.
We're like, yo, Josh, this YouTube video was sick.
You should watch it.
Now, like, even that is under threat
where my YouTube video is for me
and it's not necessarily for Josh or anyone else.
And so we're getting siloed
into our own little like content
or bubbles.
And it's always been this way.
And now AI is just going to take
that even further. Yeah, the way online platforms work is they rank content. And there's a fixed set of
content to which ranks and weights are applied. And that's how it gets distributed to people. But that model
becomes absolutely irrelevant in the case that we could hypergenerate content where you could
spin up the perfect video on demand. So ranking and indexing, the way that Google works,
the way that YouTube works, that is no longer a requirement. And I want to just click into this,
this one key thing that Andrej said that people might not understand, which is the gradient descent part
of how this works. So that's kind of how you train
large language models: there's this thing called a loss, where every time you go through
a training run, you're optimizing for a specific objective. So in this case, he was talking about
time spent or pupil dilation, where maybe you could track the size of someone's pupils, whether they're
engaged or not, in the case that you have access to the front-facing camera. Well, the way it works is
you send a prompt to this generator, it creates an ad, it measures the objective, so your pupil
dilation or the time, and then it iterates again the next time. And it gives you a slightly
better ad, and a better ad. And it learns when your pupils become a little less dilated, a little more dilated,
and it iterates that very quickly because it's so cheap. So what he's saying is, not only are we
removing the indexing function because there will be an infinite set of content, but that infinite
set of content can get so good so quickly because of these gradient descent training runs,
where every single time it iterates a little bit better, a little bit better, a little bit better,
a little bit better. It learns what you don't like, what you do like, and it becomes perfect.
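The generate-measure-regenerate loop Josh describes can be sketched as a toy optimizer. This is a minimal illustration, not anyone's actual system: the engagement function, its peak at 0.7, and the finite-difference gradient step are all assumptions made for the sketch.

```python
def measure_engagement(ad_intensity: float) -> float:
    # Stand-in for a real measured signal (watch time, pupil dilation).
    # Toy assumption: engagement peaks when the ad parameter is 0.7.
    return 1.0 - (ad_intensity - 0.7) ** 2

def optimize_ad(steps: int = 200, lr: float = 0.05) -> float:
    """Generate an ad variant, measure the objective, and regenerate,
    nudging the parameter in the direction that raises engagement."""
    intensity = 0.1  # first ad variant
    eps = 0.01
    for _ in range(steps):
        # Finite-difference estimate of the objective's slope
        grad = (measure_engagement(intensity + eps)
                - measure_engagement(intensity - eps)) / (2 * eps)
        intensity += lr * grad  # gradient ascent on engagement
    return intensity

best = optimize_ad()  # converges toward the engagement peak at 0.7
```

Each pass is one cheap iteration of the loop: serve, measure, adjust. The point of the sketch is that nothing here requires a human in the loop once the measurement signal exists.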
And it creates this really weird internet where every day there is a net new
internet, because all of the content from yesterday no longer matters, because it's all generated on the
spot. So there's this, like, real-time internet that is hyper-customized to you, that only exists in
your reality and can bend your reality however it would like, or however you would like it to.
So it's this really weird, creepy thing. And I think Andre kind of showcases also how powerful this is
because a lot of people, they don't really care to read or to write. But video is so easy. It's so easy to sit there and be
entertained. And I think this form factor, now that we've unlocked it with video creation in V03,
is so unbelievably powerful. And it's the way that most people will probably feel the effects of AI
first in their own personal lives, on a day-to-day basis. So one habit that we've had on this series
is to talk about kind of like the doomer effects of this technology. And so it's easy to kind
of extrapolate where this goes, right? We could create, you know, endless doom scrolling, you know,
on steroids, and create an internet where everyone is more separated than ever.
But I was just thinking about where this could be incredibly powerful, where we haven't seen
just yet.
The first thing that pops into mind is education.
Could you imagine if you are completely amateur in some kind of sector or task?
Like, let's say you really got into gardening, which is probably going to be something which
people ironically get less involved in as people get more digital and online.
But you had no idea what the first thing to do is.
you could just kind of like plug in, watch a bunch of gardening videos.
It would know that Ejaaz lives in this particular area, in this particular country.
So the soil type will be A, B, and C.
And before you know it, he's out there, you know, buying products that Meta is probably
serving him through personalized ads, and kind of like, off you go, right?
And maybe, you know, Ejaaz becomes happier because, you know, he's sniffing flowers all day or whatever that might be, right?
So there's the educational aspect.
The other aspect is I feel like enterprises and employers,
whatever that future world looks like, would really want this for their employees,
whether it's an AI agent that's automating something or a human that's conducting, I don't know,
manual work or online work for them. I feel like this would be something that's super powerful.
And then the final thing is, last week we spoke about Open AI's new device, right?
And Josh, we were kind of like opining whether this might be something that kind of sits in your pocket
or listens or like kind of like hears you in some kind of way.
but there was no visual element here.
I think this now reinforces that there has to be some kind of visual element, right?
There has to be some kind of device that has an eye that sees what we see.
Otherwise, it would kind of be a mismatch product.
I don't know whether that changes your opinion.
This is kind of like a side note to like our previous episode, but I don't know.
I feel like visual elements are where this is going.
Do you mean camera or a screen?
Yes, camera, basically.
Camera, yes.
There needs to be some kind of ingestion mechanism for the world
around us and what people are seeing, if we assume AI models are going to become more visual.
Yeah. Well, it seems probable that there will be a camera on the new device. I think the question is
whether or not there will be a screen. They said that there won't be a screen. It's a suite of products.
There will absolutely be one that includes a screen at some point because this visual manifestation
of this is super important. I think to your point about using it for good, there's this like very
obvious and very steep, like, K-shaped curve, where it's like there's this fork, where people who have
agency, who want to use these tools and leverage them to get smarter, will have infinite leverage,
infinite power, will have so much fun with this, because you can shape the world however you want it.
But the people who don't, the millions of users who spend hours a day on Netflix, who spend hours a
day on TikTok, there's no reason for them to break out of that sphere when it increasingly gets better.
So while I'm sure the upside is infinite and incredibly exciting for a lot of people who want to leverage that, the downside is equal and opposite, just as devastating. And it's hard to imagine a world in which that isn't the reality, because so many people are so addicted to media and completely happy in that world. And a lot of them might not be upset. They're just stoked to sit there and share videos with their friends and just have a good time ingesting content. And that's just
fine. But that creates a really huge divide between the people who do want to use this leverage to enhance
their lives and the people who just don't. I think it redefines who your friends are, Josh.
I think the three of us will have different types of friends if we assume this technology evolves
in the way that we've just described. The reason being is if we're just being served up personalized
AI content that reveals our true inner selves and biases and maybe accentuates it because they just
want to sell us a product and increase, you know, whatever eyeballs, then we end up discovering
new people that align with those biases. I don't know whether that makes us better people over time,
but I think it probably introduces us to new friends, which will probably be pitched as a good
thing initially, but I don't know what the longer order effects of that are. David, I see you want to jump in.
So, like, downstream of, again, the Facebook Cambridge Analytica era,
we all got tribed. We all got put into tribal camps. And I think what you're saying is that,
but even more granular. Is that what you're saying?
Yeah, exactly.
And I don't know whether it becomes some kind of corporate, homogenous society, David,
where we're kind of like, I don't know,
whoever uses meta and Google's AI the most
then becomes kind of like in their own kind of nation state or pact.
And whoever uses Microsoft and OpenAI specifically ends up in another.
But I think they're going to be like weird form of alliances that kind of get created.
Have you heard of, this is a startup that I got introduced to in just a
casual conversation the other day, index.network?
So this is in like pre-product, so most people wouldn't have heard of this.
But the pitch is, I'll just scroll down to the How It Works section.
How it works.
Start with what you're working on.
Upload your notes, your decks, anything that captures your thinking.
This information is stored privately.
Then step two, tell agents what you're open to.
I'm looking for early stage founders building privacy.
I want to connect to ZKML researchers and builders.
I'm interested in discovering confidential compute startups.
And then agents compete to match you.
So this is like a founder matching engine.
And then you get matched with them and you can start like talking to them.
So you upload your data.
And then it's like this like a founder matching engine.
You could also just do this for like dating apps too.
You could just like upload all of like your last 10,000 photos on your camera roll.
And then some dating app will, like, match you with other people.
And so I think there are things that are happening that are directly creating exactly what you're talking about,
Ejaaz, but intelligently as well. Yeah. I mean, what's super interesting about this product is it's,
it almost sounds like a training gym, right? Hey, get ready for your next work interview by doing a
million different simulations. But the assumption there is there will be a point at which the human
lets go of the AI's hand and, you know, goes and becomes a grownup in the world.
But the question then becomes, what about the products where the AI doesn't let go, right?
it's constantly your companion, where it maybe even replaces the interaction that you would have
in the human world, right? I think those kind of products will be stickier almost in the future.
I don't quite know what that looks like. Maybe it is, it starts off with social media,
like an enhanced YouTube or TikTok. Yeah, it feels like a transitory project because it implies
that there will be more than one agent worth picking from, as if like others aren't capable enough.
There is very rapidly a world in which you don't need to choose. They can all do anything you want.
It's just a matter of like the angle that they choose.
Okay, I want to move on to our final topic of this week, which was actually the most discussed topic in AI across the internet.
Now, I want to give you guys two chances, one guess each, to figure out what this topic might be about.
And your options to choose from the following.
Option A, a groundbreaking new model was made, and it changed people's worlds.
Number two, Google released yet another agent product
and it's going to automate our entire lives away.
Or number three, a multi-billion dollar AI company
was outed for having no AI whatsoever.
I like the reality in which number three exists.
One out of these three is different from the rest.
The first two we've already talked about in 17 different ways.
The last one is new to me.
Wait, David, you don't want to talk about
another frontier model that beats the benchmarks?
Go around the loop again.
Okay.
Well, you'll be happy to hear listeners that it is, in fact, number three.
This company called Builder.ai, which had, emphasis on the word
had, a valuation of $1.5 billion.
This company had been backed by Microsoft to the tune of $435 million.
It was revealed that they did not have an AI product on the back end,
but it was in fact 700 Indian software engineers
who would manually process and code up any user's request as it was fed to them.
Wait, wait, wait.
So I'm vibe coding.
I'm vibe coding on this platform.
And I'm like,
build me an app that plays Snake in this particular way.
And then a bunch of Indian coders would code it.
as fast as possible and give it back to me?
It gets sent to Kuma and Anand in India, presumably,
and they just build up a prototype.
And you have a loading screen where it's like,
this app is being developed.
And it's like doing that for like 15 to 20 minutes.
And really it's engineers on the back end
that are building this entire prototype or app for you
and then just send it to you.
I saw a bunch of mock-ups from this.
I saw a bunch of mock-ups from prototypes that people would see.
And it was kind of obvious.
now in hindsight because there were like typos everywhere.
Some of the buttons didn't do the thing that they claimed they would.
I saw another hilarious tweet, by the way, which was, it looks like AGI stands for,
fuck, let me, let me read that.
Yeah, a guy in India.
But yeah, I'll repeat that.
Go for it.
I saw a hilarious tweet the other day, which said that AGI actually just stands for a guy
in India, which I found like hilarious.
Now, I just want to highlight the story because it kind of shines a light that not everything in AI is wizard magic and going to change the world tomorrow.
We are in fact still very much at the starting phase of this entire revolution.
And most people don't really know what they're getting themselves into.
We've spoken about AI agents a lot on this show.
We've spoken about jobs being replaced on this show.
We've spoken about media going to become enhanced, doom scrolling, all that kind of stuff.
But really, we kind of don't know what that's all going to materialize into,
and it could just be a bunch of Indian software engineers on the back end.
Maybe this whole AI race was just a complete scam.
I can't imagine users not realizing this.
Because I feel like I very frequently abuse my AI, in the sense that I will be kind of mean,
be very direct, pinging it with a ton of queries.
Oh, yeah, it gets you good results.
If it will give me the wrong answer, I stop it and I say,
absolutely not. You need to change direction. This is what I want. And, like, very quick.
So I'm, like, constantly hounding it with requests. God, I didn't realize Josh is an asshole on his own
time. I am a value extractor. And I want to get what I want from the models. And if that's what
it's going to take, so be it. Like, okay, sorry, you're not sentient yet. Like, I'm going to
speak to you in a way that gets me what I want. And it's totally not going to use that data to revise
itself in the future. I'm going to have to delete my account before we reach AGI.
It's too late. You're done. You're done. You're done.
Sorry, Sam's selling it to the UAE at this point.
But the point is, like, how did they get away with this?
Ejaaz, do you know how long this was happening for?
Yes, 10 years.
The startup's been operational for 10 years.
Okay, but it hasn't always been an AI vibe coding platform.
No, no, only over the last two years.
Yeah.
What was it doing before that?
It was gathering data, basically, to fuel some kind of product development platform.
And then they jumped on the AI hype train.
They were like, hey, we could vibe code whatever you want, just type in a prompt.
And yeah, it looks like all the funding was going to software engineers in India.
And by the way, this would never have been discovered if the company that had helped finance them, to the tune of, I think, 600 million, hadn't asked for, I think it was about 150 million, to be recouped.
And they defaulted on that payment, which then led to an investigation of their finances, in
which the investigators realized that the funding was going to India,
and they were like, hey, why is this going to India?
Why do you need so many engineers?
And they were like, oh, all the user requests are going to this place as well.
And it kind of like unraveled.
This is the Theranos of our era.
Yeah, Theranos, like, Elizabeth Holmes type vibes.
Yeah, this is Theranos to a T.
Oh, yeah, yeah.
We're doing really important work.
That's really good and totally not sketchy at all.
That's unbelievable they got away with it after that long.
That's impressive, honestly.
I wonder if things like that, I wonder if they're embedded in other systems that we use that
we're just blissfully unaware of.
Like, is there a human element in this?
I don't know.
I find it hard to believe that it wouldn't become obvious quickly if you are a power user
of AI and had to wait and got inconsistent results on each prompt.
And it just seems, the whole thing seems fishy.
Well, it depends on how hypey it is, right?
And so if they raise at $1.5 billion valuation because of like revenue and user growth
and like all these fundamentals, that's one story.
But if they just raised on hot air and hype and be like, we're an AI, you know, dev shop,
then people just cut a check being like, I'm bullish, like, take my money.
This could 100X.
Then like the actual user growth in customer stories, it could have just been like a hype-based
value, like raise.
And we see that all the time in crypto, like, send money, ask questions later.
Depends on which one of those stories it is.
$400 million with no due diligence to understand.
You can't even ask the question like, hey, what model are you using?
What, like, in-house training do you have?
Like, any of those answers or those questions would have probably surfaced the fact that they
did not have the infrastructure required.
And it was just human labor all the way down.
That's crazy.
The humans are so timid and meek, huh?
I reckon a bunch of AI agent products, Josh, will be revealed to have something similar.
Like, definitely human in the loops.
Because AI agents are nowhere near being able to kind of be fully autonomous yet.
We saw a bunch of this in Web3 and Crypto already, right?
Wouldn't surprise me if one of these leading, like, agent products,
I'm not going to name names to throw shade,
but it's just a bunch of humans on the back end.
Man, that's going to be good.
We see this also in robotics,
I think a lot where there are a lot of humanoid robots
with the perception of being fully autonomous
when in reality there's tele operation.
So perhaps there's something similar with that?
I don't know.
That's a weird world.
I guess if the result is what I want,
then all the power
to you. Keep up the good work. But in the case that it's not, I think, yeah, that's probably
where we're running into problems. Yeah. Josh, before we started this show, there's one final
topic that I want us to cover. We were talking about Game of Thrones, right? And how these
kind of major companies have been making pretty sneaky moves. We saw some news breaking just
this morning, actually, that Anthropic cut off Claude access to Windsurf, which ironically is
another company which,
presumably doesn't have 700 Indian engineers
on the back end that's doing the AI stuff,
but actually has autonomous AI agents
coding up applications that people use.
What are your thoughts on this?
Because what I want to understand from you
is how does this impact their product, number one?
And then number two, what's the political chess
that Anthropic is playing here?
So welcome everybody to this week's version of Game of Thrones,
the segment that we come back to every single week
because the game theory of AI domination
is very complex and very exciting.
So this week we have Anthropic, who is the maker of Claude. Anthropic recently released their
Claude 4 model, but prior to that, they had 3.5 and 3.7. And it was mostly known as, like, the best,
the premier coding model. So if you wanted to write code, for a long time, Anthropic's models were
kind of just like the best. So what happened here is we have Windsurf, which OpenAI now owns a very
large percentage of, if not the whole thing. And OpenAI and Windsurf kind of had this collaboration.
And Windsurf was not going to receive Claude 4. So when Claude 4 came out, Anthropic would not
allow Windsurf to access its model. So there's this new premier flagship model, and the way
Windsurf works is it aggregates models and then determines which model is the best to serve
the user's needs at the time. So this new frontier model came out, everyone wanted to use it, and Anthropic
was like, hey, we're not giving this to you guys. Sorry, this is just our model. So Windsurf got cut out.
And now what just happened today is Anthropic not only cut off their brand new model, but their
previous two models as well, the 3.7 and 3.5 models, which a lot of people deemed the best
in class for coding. So why would they do this? Well, here we could place our tin hats on our heads
and we could start to speculate. So one thing that is still in Windsurf is Google's Gemini 2.5,
which is noteworthy because now we have the two biggest models being ChatGPT and Gemini,
both in the same aggregator. And who owns that aggregator? Well, Open AI. And who's Open AI's
biggest competition? Well, it's Google. So what happens here is now there are a lot of requests made to
Windsurf, and the decision on which model to serve is now split between mostly two. So you could go
Google or ChatGPT. In the cases that the best outcome is Google, well, OpenAI sees that data. And they're
like, hmm, that was interesting. Why would you choose their model versus ours? And they collect that data,
and they could see it, and they could use that to refine their model. So going back to the gradient
descent, and lowering the loss every single time, they can take this feedback from when a user
would use Google, integrate that into their future models, until that loss
gradually decreases to zero, and in no instance does it make sense for users to go to Google,
because ChatGPT is so much better at serving their needs. So it's this interesting dynamic where
it's like, hmm, okay, we can actually see the behind the scenes of our competition and how users are
using it, and then we can optimize our models for that. And I think that creates an interesting
dynamic where OpenAI once again gets an interesting leg up over the competition.
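The aggregator dynamic Josh describes can be sketched as a tiny model router. Everything here is hypothetical: the model names, the scoring functions, and the idea that a per-request quality score is available are assumptions, but the sketch shows how whoever owns the router ends up holding the which-model-won telemetry.

```python
from typing import Callable, Dict, List, Tuple

def route_request(prompt: str,
                  models: Dict[str, Callable[[str], float]],
                  log: List[Tuple[str, str]]) -> str:
    """Score each candidate model for this prompt, serve the winner,
    and record the choice -- telemetry the aggregator owner keeps."""
    scored = {name: score(prompt) for name, score in models.items()}
    winner = max(scored, key=scored.get)
    log.append((prompt, winner))
    return winner

# Hypothetical scoring: imagine one model is stronger on coding prompts.
models = {
    "model_a": lambda p: 0.9 if "code" in p else 0.5,
    "model_b": lambda p: 0.7,
}
telemetry: List[Tuple[str, str]] = []
route_request("write code for a snake game", models, telemetry)
route_request("summarize this article", models, telemetry)
winners = [w for _, w in telemetry]  # who won which prompt
```

The competitive insight in the episode is exactly this log: if you run the router, you see every prompt where a rival's model beat yours, and you can train against that.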
Josh, what does the Pareto distribution curve look like in the AI model world?
So for the listener, a Pareto distribution is kind of this notion that 80% of all spoils, 80% of the rewards, will go to the top 20% of whatever: animals, species, companies, startups. To the rich go the spoils.
And so if you're in the top 20%, you're getting the 80% of the value.
If you're in the bottom 80%, you're splitting 20% of the value.
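The 80/20 split being described is easy to make concrete. Here's a back-of-the-envelope calculation with made-up numbers (10 companies, $100 of total value), just to show the asymmetry:

```python
# Toy illustration of the 80/20 Pareto idea described above.
# The numbers are invented for illustration, not real market data.

total_value = 100.0
companies = 10

top = int(companies * 0.2)         # top 20% of companies -> 2
bottom = companies - top           # bottom 80% of companies -> 8

per_top = (total_value * 0.8) / top        # 80% of value split among the top
per_bottom = (total_value * 0.2) / bottom  # 20% of value split among the rest

print(per_top, per_bottom)  # 40.0 2.5
```

So under a strict 80/20 split, each top-tier company captures 16x what a bottom-tier company does, which is the asymmetry the hosts are debating for AI models.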
Now, that Pareto distribution curve can become more gradual and more equitable, closer to 50-50, or it can become more severe, where, like, the top two players get 90%. What do you think the nature of the Pareto distribution curve is for AI models?
You probably need to split it into two, which is
consumer versus, like, business and industry. Because in the consumer world, it's very clear that
ChatGPT basically owns 90-plus percent, and all of the normal people use ChatGPT. They're winning,
they're above 90, and everyone's competing for the final 10 percent. On the commercial side
of this, I think that changes a lot, because people don't really care what the interface
looks like. They're mostly looking to extract the data from the models. And in that case,
well, ChatGPT doesn't even really have the best model. Google's Gemini 2.5 Pro (models are named
terribly), but Google actually has the best model. So I think when you are using Windsurf or when you
are using Cursor, a lot of times the user, whether they're the developer or the coder or the company,
is just looking for the best results. And in that case, ChatGPT wouldn't have a
monopoly. It would be split, I'm not sure evenly, but it would be split much more evenly
across Anthropic and Gemini and ChatGPT. And I think that's probably what they're trying
to accomplish with this: trying to sway that more towards them. Because when you remove the
user interfacing, the application layer, you are left to compete only on merit. And if it's a
merit-based approach, based on benchmarks and actual quality of the models, well,
ChatGPT has a ton of competition. In fact, they're not even ahead. So I would say it's probably
close to an equal split. I'd love to actually see the data behind one of these companies, to see how much
of each model is being served. But it's a very clear divide between commercial and consumer grade.
Like, consumer: ChatGPT, crushing. But commercial, not so much. There's still a lot of
competition there for who's going to win. That's the high-quality answer that I think listeners
come to the AI roll-up to hear, Josh. That was great. And Josh, this week, what was very
interesting. I think there is so much more to explore with the whole geopolitical lines. We're going to need
a guest, I think, to talk to and inform us about, like, what things look like
under the hood, which is a hard guest to get, because if it's a matter of national defense,
not very many people are talking. But I definitely want to talk to whoever is the right person
about that stuff.
Another week in the books. Josh, Ejaaz, thank you guys for coming with me down the AI
roll-up rabbit hole.
Always fun.
We'll see you guys next week.
