Limitless Podcast - This Week in AI: The Big AI Short Failed
Episode Date: November 14, 2025

In this episode, we explore Michael Burry's massive short position on NVIDIA, questioning if it foreshadows a market correction or reflects AI's dependency on GPUs. We assess Burry's claims of inflated tech valuations against data showing ongoing reliance on older GPUs by frontier AI labs. We also touch on Microsoft's moves with OpenAI, the implications of the GPT-5.1 launch, and innovations from Google's DeepMind.

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

TIMESTAMPS
0:00 The Big AI Short
2:39 The Thesis
7:02 Hardware Demand
9:27 GPU Economics
11:42 The Crash and Burn
13:20 Energy Constraints
17:01 Microsoft Owns it All
18:14 ChatGPT 5.1
23:21 Google 3D Rendering
29:00 Final Thoughts

RESOURCES
Josh: https://x.com/JoshjKale
Ejaaz: https://x.com/cryptopunk7213

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
If you were alive in 2008, chances are you remember the market crashing.
A catastrophic failure of the housing market that bled into the entire economy,
and it wiped out a lot of entities.
And if you'll remember, there was one person in particular who profited a tremendous amount
off of this happening.
And his name is Michael Burry.
He made $100 million personally.
Collectively, he made a billion dollars off of shorting this market crash.
He saw what was right early, and he doubled down on it, and he made a killing.
In fact, so much so that it became a movie that you've probably seen,
named The Big Short. Now, Michael Burry recently has published a new position in recent SEC filings
that we uncovered. And Ejaaz has been all over this over the last couple of weeks, tracking the
positions, trying to understand why he's making these moves, and whether this bet is correct in predicting
the next big bubble to pop, which seems to be AI. So he predicted the housing bubble in '08,
he made a billion dollars. He's predicting another bubble in 2025. Is he going to make another
billion dollars, Ejaaz? The short answer is, I don't think he will. I think he's going to get blown out.
So Michael Burry is back. He is short around $300 million worth of Nvidia shares. So he's shorting
at this point the richest, wealthiest company in the world. And that is a big sign, basically saying,
I think the AI bubble is popping. So the question is, is it? Let's go through his thesis, Josh.
I'm going to simplify his post here, which is basically GPUs run the AI world, right?
Trillions of dollars have been spent by Frontier AI labs collectively to train state-of-the-art AI.
But the thing with these GPUs, Josh, is that they have a lifespan, right?
They don't last forever, right?
And he estimates that the top frontier labs are overestimating or artificially boosting the lifespan of the GPUs that they purchase.
And you might be like, well, that sounds boring.
Why is that interesting?
Well, the thing is, if you have assets on your balance sheet, that factors into your stock price.
So the point that he's making is all these big companies are artificially boosting the lifespan of the GPUs that they've purchased
so that it inflates their stock price.
And therefore, the real stock price is actually a lot lower.
In fact, he says at the bottom of this tweet, by 2028, Oracle will overstate earnings by 26.9%, meta by 20%, etc.
And so his big short, Josh, is on the behemoth that is supplying all these frontier AI labs,
NVIDIA, saying that when the bubble pops and he thinks the bubble's going to pop now,
he will make a heck of a lot of money on this trade.
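To make the accounting mechanics of that thesis concrete, here is a rough sketch with made-up numbers (nothing here reflects Oracle's, Meta's, or any real company's actual figures): stretching the assumed useful life of a GPU fleet cuts the annual depreciation expense, which inflates reported earnings by the same token.

```python
# Toy illustration of Burry's depreciation argument. All figures are
# hypothetical, not any real company's numbers.

def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: spread the cost evenly over the asset's life."""
    return capex / useful_life_years

gpu_capex = 30e9                      # $30B spent on GPUs (assumed)
profit_before_depreciation = 20e9     # $20B operating profit before depreciation (assumed)

# If the GPUs really wear out in ~3 years (Burry's view):
realistic = profit_before_depreciation - annual_depreciation(gpu_capex, 3)

# If the books assume a stretched ~6-year schedule:
reported = profit_before_depreciation - annual_depreciation(gpu_capex, 6)

print(f"realistic earnings: ${realistic / 1e9:.0f}B")   # $10B
print(f"reported earnings:  ${reported / 1e9:.0f}B")    # $15B
print(f"overstatement: {(reported - realistic) / realistic:.0%}")  # 50%
```

The counterargument in the rest of this segment is about the inputs, not the math: if five-to-eight-year-old GPUs are still being rented out near contract prices, the longer depreciation schedule isn't an accounting trick.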
So is he saying the bubble's going to pop now or is it going to pop in 2028?
Because he's referencing those numbers in 2028.
So I'm kind of curious how he's framing this.
Is this, because now I'm thinking about our positions, is this something that he predicts is going to happen in 2028,
or is this more of a short time scale position?
If we remember from his original short,
I think he held a position for about a year, right?
Or at least like the better part of a year.
So I think he probably estimates
that it's going to happen sometime within that time frame.
But I have more evidence as to like
why he might be thinking that the bubble is bursting.
Have you heard of what a neocloud is, Josh?
The name is familiar,
but I don't understand quite what that means.
So please fill me in.
Okay, so think of, like, AWS,
or any of these other typical cloud providers.
But they're specifically focused on providing GPUs and data center pipelines for you.
So think of like an AWS just for AI compute.
That makes sense?
And so what he did was he looked at the major neocloud providers and he looked at their stock prices over the last week, Josh.
And they were down.
So he upped his position and was like, okay, this is the bubble bursting.
I'm going to lay my claim. There's an issue with this thesis, Josh. I think he's dead wrong,
and I'm about to walk you through four reasons why he's wrong. Okay, so then just to kind of like wrap
my head around this. So the neoclouds are kind of infrastructure for people who don't want to build
their own infrastructure, kind of like you described AWS. If you want to get a data center, but you
don't want to build a data center, you offload that responsibility to a neocloud. And he thinks that the
value these companies are marking these neocloud assets at, particularly the GPUs, is
much too high, because the depreciation happens on a shorter time scale than the companies are writing off.
Correct. And neoclouds is one part of the picture, but it's a great example to lead with because
so many frontier AI labs like Microsoft OpenAI actually pay these neoclouds billions of dollars.
Like Microsoft just signed a $19 billion contract with a neocloud provider called Nebius, which
kind of sent its stock price going up. We talked about it on a previous episode. So like it's a good
example to kind of like lead with. But the take here is that these neoclouds are basically,
as you said, overestimating the life cycle of these GPUs and therefore they are wrong.
Except that's not the case at all. And I present to you my first counter thesis,
which is he's wrong about the two to three year depreciation cycle. It's actually more like
five to six, in some cases even eight years. These GPUs are used for more than just training
AI models, they are used for inference, and they're sometimes used just for general queries and
distribution for these AI models, right? But the most staggering kind of example comes from
the top dog, the top neocloud called CoreWeave, who kind of shows the opposite of what Michael
Burry is estimating, which is there are people booking up GPUs, Josh, two quarters in advance,
six months in advance, and these aren't the latest GPUs. These are GPUs
that are like five years old, three years old, in some cases, even longer.
So if you dig into the details of all these different neoclouds, you'll start realizing that
it's the opposite of what Michael Burry is estimating.
One, these GPUs have a very long lifecycle, and two, they're being used for way longer
than people expect for really important things within AI.
They are oversubscribed.
They are over-utilized.
This is interesting because you're getting like these really two counterpoints here,
where you have Michael Burry's subjective take, where he's like, no, this is wrong.
And then you have this objective number, which is CoreWeave.
And they're saying, wait a second, we're actually selling out these old hard drives or these old
GPUs, two quarters in advance.
So how do we kind of piece together who's right and who's wrong through this?
Because it seems like the market demand, and intuitively it makes sense, too, that there is no
shortage of people who want to generate tokens.
And even if you are paying a little extra premium in terms of cost per kilowatt for
those older GPUs, it's probably still worthwhile.
because the amount of money you could build on top of that is so large.
Forget about GPUs.
Even the CPUs are being used for all the AI distribution and coordination.
I mean, look at this.
AMD's CPU demand has just gone up in this week's projected earnings, right?
But to answer your question, it's like, okay, well, who's right here, right?
I want to kind of, before I address that point, I want to look at another actor within this whole circular economy that we like to talk about, right?
which is the frontier AI labs, right?
Google, who is an established company,
they just had their first $100 billion revenue quarter,
are using eight-year-old GPUs to run their whole thing.
So again, another data point saying that
I don't think these GPUs are old and or not useful.
And then if you look at Nvidia themselves,
they also have crazy amounts of demand.
They're oversubbed for years in advance
for their Blackwell GPUs and all their new GPUs going forwards.
So then the last actor that I would want to look at is where does all the demand come from?
Like who's buying these GPUs? Are they satiating demand?
And if you look at every other company, like Google who has a ton of end users, OpenAI with ChatGPT users, etc., every insider take points to there not being enough GPUs to supply all of this demand that they're getting.
From the apps that they produce, from SORA, from a bunch of these different things, there's just too much demand.
and in many cases, they don't have enough scale to even kind of like meet this demand.
Jensen Huang had a conversation with TSMC, which is like their main chip manufacturer,
asking them to increase the rates of production by 50%.
So the point I'm making is Michael Burry, I think, is dead wrong here.
He's definitely underestimated demand and he's underestimated the life cycle of these GPUs.
They are more valuable than he thought.
And the old GPUs are worth more than they were worth when they were sold
originally. They're still selling within 5% of their contracts, which is just crazy.
So maybe I have to ask a pretty dumb question, or seemingly dumb at least, and it's like,
why is that so important to the bubble how companies factor off losses in GPUs over time?
Is this some real big existential threat? To me, it feels like a small part to a bigger picture
and not quite as big as something that would totally blow out an entire market that's been built
so far. The biggest budget spend for
any major AI company is on GPUs.
That's where all the hundreds of billions from Microsoft are going.
That's where the majority of OpenAI's $1.4 trillion that they've committed over the next
five years is going.
It's all to these compute providers.
It's all to these neoclouds.
It's all to creating their own chips.
It's all to Jensen Huang's pocket.
And so most of the CAPEX bubble comes from GPUs.
So if the bubble's going to burst anywhere, it's going to be from over-leveraging on GPUs.
Do you want to know something crazy?
Josh, what's that?
Even though hundreds of billions have been spent this year alone on GPUs,
the companies who have spent it haven't even made a dent on their balance sheet.
Remember, we're talking about Google here.
We're talking about Microsoft here.
They make hundreds of billions of dollars in net profit per year.
They have plenty of flush cash,
and they're spending it on machinery that they think is going to satiate demand
that you and I maybe don't see on the enterprise side
or on the end consumer side, but they are obviously seeing it.
Okay. That makes sense. So the depreciation really is a big factor because it's kind of artificially
inflating numbers across the board, from the people who are borrowing the GPUs to the people who are
issuing them. There's a lot of inflation that he's assuming is being baked into this.
And eventually that inflation kind of fizzles its way out, through either some really aggressive
event or just degrading over time, which makes sense. Okay, I think I'm up to speed on this.
I think I get his case. But what seems a little bit different this time is
that the first short that was made was in 2008.
It was against basically the entire stock market,
and particularly the housing market.
That was not growing nearly as fast as the AI market is.
That's like shorting the internet.
And if you short the internet before 1999 or after,
like there's this very narrow window to be correct without getting blown out
because what a lot of people don't realize is in order to short a company,
you have to borrow the shares from somewhere else.
And that borrowing comes with a premium.
You have to pay a certain percentage of interest on what you borrow.
If he's not right on this, or even if he's wrong by a couple months, with the rate that the market
is accelerating, it feels like it's a very difficult thing to get right because you not only
have to time it right, but you can't get blown out by the appreciation of things as fast as they're
growing.
Like, I've never seen a hockey stick of an industry steeper than this one.
So it seems like this is a really tough position to be bearish in, or a tough time to be bearish
in, at least.
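To put rough numbers on that intuition (every figure here is invented for illustration, not Burry's actual entry, size, or fees): a short position bleeds borrow fees the whole time it's open, and any run-up in the stock compounds the loss, so being right late can cost almost as much as being flat-out wrong.

```python
# Toy P&L for a short position. Prices, share count, and borrow fee are all
# hypothetical; real short economics also involve margin and dividends.

def short_pnl(entry_price: float, exit_price: float, shares: int,
              borrow_fee_annual: float, months_held: float) -> float:
    """Profit = price decline captured, minus the cost of borrowing the shares."""
    proceeds = entry_price * shares
    borrow_cost = proceeds * borrow_fee_annual * (months_held / 12)
    return (entry_price - exit_price) * shares - borrow_cost

shares = 1_000_000

# Right thesis, wrong timing: the stock climbs 30% over a year first.
early = short_pnl(entry_price=180, exit_price=234, shares=shares,
                  borrow_fee_annual=0.03, months_held=12)

# Same thesis, perfect timing: a 30% drop within three months.
on_time = short_pnl(entry_price=180, exit_price=126, shares=shares,
                    borrow_fee_annual=0.03, months_held=3)

print(f"early:   ${early / 1e6:+.1f}M")    # deep in the red
print(f"on time: ${on_time / 1e6:+.1f}M")  # profitable
```

The asymmetry is the point of Josh's comment above: the same thesis produces wildly different outcomes depending purely on timing, and the borrow fee means even a sideways market slowly erodes the position.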
Yeah, it's an incredibly risky trade to make.
And I mean, the cherry on top of all of this, Josh, is that he paid the price pretty dearly.
You sent a message to our group chat this morning and I had to read it like multiple times.
Michael Burry, on the morning of our recording, announced that he is closing down his fund, the very same fund which took out this $300 million short on Nvidia.
He was down on his position pretty massively.
He overstated his means.
and there's one line here, Josh, which kind of sums it all up.
He goes, my estimation of value and securities is not now
and has not been for some time in sync with the markets.
Tough.
Okay, so this connects the dots for me because I saw the beginning and I saw the end.
I was like, okay, I know he goes out of business, but I do not know why or how he gets there.
And I think this probably connects the story.
Don't, again, to the point yesterday, don't bet against the optimist, dude.
He got absolutely cooked.
So where does that leave us then?
So therefore it is not a bubble and he was wrong or therefore it is a bubble and he was wrong with
timing.
No.
I like where you're sniffing, Josh, because something still seems a bit off, right?
It's like, okay, if it's not GPUs, what could it potentially be?
Well, also, so he blew up his position, but that doesn't ruin the hypothesis, right?
Like just the price has moved against him, but it doesn't prove the thesis wrong.
Is that right?
It doesn't prove the thesis wrong on a long-term horizon because no one knows whether any of these crazy spends are going to be crazy in hindsight.
But for now, in the short term, all the fundamentals show that there is adequate demand for the GPUs and there are end users that are willing to use these AI products that require these GPUs.
It actually tells us the opposite story, which is there are not enough GPUs in the world right now, old and new, that can satiate the demand for all the AI products that are being served.
Right now at this moment.
Okay.
Yeah.
But one thing that could signal a constraint, Josh, is energy.
Put simply, the U.S. energy grid isn't up to par to supply energy to all these GPUs so that they can do the job.
In fact, there's this clip that I have here from Satya Nadella of Microsoft where he goes on to basically say he has hundreds of millions of dollars worth of Nvidia GPUs that are collecting dust
in his data centers because they don't have the energy to power them.
That seems troubling.
And I think this is to a point that is being made a lot, where there are these gluts;
where they are and how they show themselves is going to be the thing up for debate.
And this isn't the only hot clip we got this week.
There is this hysterical clip of Satya Nadella, CEO of Microsoft,
kind of talking about his relationship with OpenAI.
And I want to play this out before we give some commentary on it.
Because I think of all the clips I watched this week, this was one of my favorites.
In our case, the good news here is OpenAI has a program, which we have access to.
And so therefore, to think that Microsoft is not going to have something that's...
What level of access do you have to that?
All of them.
You just get the IP for all of that.
So the only IP you don't have is consumer hardware.
That's it.
Oh, wow.
Okay.
Interesting.
That's so good.
That's crazy.
And this comes off the back of Satya saying in a previous interview, when asked about OpenAI,
we are like above them, we are beside them, we are around them. Microsoft has this full,
total, controlling position over OpenAI. And this is, it's like the delivery is hysterical,
the reaction is hysterical. But it also, it shows a lot of interesting dynamics between these
large companies. Because during this interview, Satya also made mention of the fact that
people who are building their moats around individual models are very fragile and very brittle
in the sense that they are one company copying their model away from being worth significantly less.
And we kind of saw that early this week with our Kimi K2 episode.
I highly recommend you watch it if you haven't because it shows how fragile being a frontier model can be, if that's your only business.
And in the case of Satya and Microsoft, I mean, it seems like they have all the leverage in the world.
If they want to fork the open AI code base right now and create their own chat GPT, they can do that.
They own all of it.
Yeah, I've said this before, but I think the biggest winner of OpenAI is Satya Nadella.
He has struck the best deal.
OpenAI recently restructured their entire company.
We actually did an episode on that.
Feel free to check it out.
But one of the major takeaways from that was Satya owns 27% of OpenAI.
And now he is able to engage with any model provider, not just OpenAI.
He doesn't have exclusive rights anymore.
He can kind of like flirt with other AI
companies, and he did. He engaged with like Amazon on a bunch of different things. And he just
owns all the IP until something like 2030 or 2032. So he is one of the smartest chess players.
Honestly, unexpectedly for me, I thought Microsoft was kind of like a boomer in
this sense, but he's navigated it beautifully. And one thing that I also need to give him kudos for
is he has one of the biggest moats when it comes to enterprise. Like, remember, Microsoft software
is a really good consumer-grade product,
but they mainly kind of make their money
from all the enterprise stuff,
from Copilot and stuff like that.
So beautifully executed from Satya here.
I love this.
Man, Satya's, he's crushing it.
And it's funny because you don't think
of Microsoft as a serious player
in the world of AI at all.
I don't use any of their software.
I don't use copilot.
In fact, he was asked about copilot
because Copilot was the single AI coding agent.
It owned 100% of the market share.
And now it's taken down to 25%.
And he said,
this is a great thing because now I still own 25% of a market that is now 10 times the size of what I
previously owned 100% of. So for him, he's playing the positive sum game. He's very aware of the
market dynamics at play. And for the cost of what was it, $10 billion, he got the largest
shareholder position of Open AI, which was rumored to IPO at a trillion dollars. So they not only
see the financial upside, but they get all the intellectual property, most importantly, the code base
to either reverse engineer for their own wants or to just clone and use for whatever they want, they have it.
And it is a true chess move by Satya. Bravo. Nicely done Microsoft.
As of today, as we speak, at OpenAI's $500 billion valuation, Microsoft's stake is worth $136 billion.
Not a bad deal. Not a bad deal. But that's not the only big news. We have exciting news on the
ChatGPT front. This is what I'm most excited about because I use ChatGPT every day. And this is hopefully
going to change the way I use it. Maybe. We'll see when we talk about it. The news this week is that
ChatGPT and OpenAI, they launched GPT-5.1, which is a whole new upgrade to GPT-5. How different it is,
we're not really sure. I know, I just kind of read through the highlights. It's warmer in terms of
its like sentiment by default. So it'll be a little bit nicer, which is bizarre.
because I thought we were going, we were trying to go the opposite direction, like it's a little too
nice. And they added some safeguards that we'll get into in a second. There's also a much better
instruction following, which is interesting because oftentimes when you instruct GPT5 to do things,
it doesn't always do them exactly as you want. So like if you say, always respond in six words,
it will oftentimes respond in more or less than six words. It just doesn't quite understand the
instructions. And then one of the really fun things that I saw was they changed the different types of
personality. So you can choose between personalities for how you engage with the model. And the
previous ones were default, friendly, and efficient. And now there are some newly added ones,
which are professional, candid, quirky, nerdy, and cynical. So in the case you want to
personalize ChatGPT to be more like those personalities, well, now you have the option to do so with
GPT-5.1. So if I'm sick of it saying, you're so right, I agree, that's a great idea,
you could just say, cut that out, and it'll actually listen for the first time, which is really exciting.
Ejaaz, did you find anything interesting from GPT-5.1 after going through everything?
Yeah, I spent the last like 12 hours playing around with it.
Can I just, I'm going to play bad cop for a bit, Josh?
Cool.
I don't care.
Like this is kind of like a nothing burger.
A point-one update.
And it's honestly like I've come to expect more from OpenAI.
So I'm kind of surprised that they've come through with this release.
On the point of different personalities, I think that's
super helpful because, speaking for myself, I think that the current model is too agreeable. It's too
sycophantic. And kind of Sam heard this feedback, and the original version of GPT-5 was less
sycophantic. But then a bunch of people were like, I want it to be more friendly. I want it to be more
appeasing. And so he's kind of like flitting back and forth. This option gives a lot more flexibility
for the end user for me. I'm like, I want someone that's maybe a bit more candid or a little more
cynical when I'm kind of doing my research and then maybe a little more friendly when I talk about
more personal stuff, right? So the pliability is cool. What I will say is like you kind of were able
to do this before. You could just go into settings, user personalization and type in a prompt. This, I guess,
makes it easier because you just got to click a button and maybe that helps a lot of people.
The other cool thing, I guess, is like when it comes to efficiency, it uses fewer tokens and gives you
more output. So you get a higher quality answer for less energy, for less compute, which is probably
going to drive down costs of accessing this type of this type of frontier AI, which is super
cool. But aside from that, it's kind of like, why did they do this? Like, one of the critiques I
saw, Josh is like, it seems kind of rushed. Like, there's no benchmarks, no API release.
A bunch of the dev tools look a little sloppy. It's almost like they kind of rush this.
There's no formal kind of like Sam's doing a live stream around this.
It's nothing around like, you know, hey, here's how it compares against other models.
They just kind of like rush this out and I'm kind of confused.
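The efficiency point Ejaaz raises is simple arithmetic. As a back-of-envelope sketch with an assumed per-token price (not OpenAI's actual published pricing): serving cost scales roughly linearly with output tokens, so a terser model is cheaper per answer at the same answer quality.

```python
# Back-of-envelope serving cost. The per-token price is an assumption for
# illustration, not OpenAI's real rates.

def cost_per_answer(output_tokens: int, usd_per_million_tokens: float) -> float:
    """Cost of one answer at a flat per-output-token price."""
    return output_tokens / 1_000_000 * usd_per_million_tokens

PRICE = 10.0  # hypothetical $ per 1M output tokens

verbose = cost_per_answer(800, PRICE)  # a padded, filler-heavy answer
terse = cost_per_answer(500, PRICE)    # the same answer with fewer tokens

print(f"verbose: ${verbose:.4f} per answer")
print(f"terse:   ${terse:.4f} per answer")
print(f"saving:  {1 - terse / verbose:.0%}")  # ~38% fewer tokens, same answer
```

At scale the same multiplier applies to GPU-hours and energy, which is why a "boring" token-efficiency update still matters to the provider even if users barely notice it.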
Yeah, I suspect the public sentiment will start to reshape itself around how this goes.
Like when Apple released iOS 26, it was a huge release with liquid glass and all these cool new
improvements.
And then 26.1 is an incremental improvement where it adds a lot of new features, nothing really
groundbreaking, but the product gets slightly better.
I think we can kind of view these mid-tier updates
as that, where they're just small incremental improvements.
Like now, it's a little more efficient.
Like you said, there's this new dynamic thinking time,
which is what they call like smart thinking.
And it allows it to think faster, think more efficiently,
use fewer words that are fillers,
so it needs fewer tokens to operate, like the chart that we're seeing.
And that just allows for, one, the model to think more
because it uses fewer tokens to think.
And then two, it just creates a little more of an efficiency unlock for OpenAI's GPUs,
freeing them up to do other things.
So I assume this update is probably for both parties, for the consumer and for OpenAI, in terms of efficiency and a slightly better product.
Is it this amazing novel breakthrough?
It doesn't appear like it at all.
In fact, it's just like a very marginal improvement.
But it's something.
It is like, it is something.
And I suspect we'll probably get more updates like this where it's like, oh, well, it does this one thing a little bit better.
You notice it maybe like one in every 50 prompts, but it's not a big deal.
And that's probably where we are for a little while until maybe Gemini 3.
I don't know.
There are murmurs of Gemini 3 coming down the pipeline from Google.
I've been waiting for it for a month now.
Yeah, that could hopefully blow things out of the water.
But we're at this weird kind of stagnant period where we haven't seen those step function
improvements in AI models in a little while now.
I mean, like the speculation there would be like we're kind of grinding to a halt and
that exponential arc, that S curve where we see AI improving massively and it's curing cancer
may not be quite there yet.
But, hey, listen, in the meantime, there are several parties
that are super bullish on OpenAI.
One of them is Masayoshi-san of SoftBank.
In this news, he sold his entire stake of Nvidia
worth $5.83 billion to buy OpenAI stock.
It's hilarious because all that money
that he's investing in OpenAI is only going to get spent
on Nvidia GPUs.
He just sold Nvidia.
Like, why are you doing that?
Masa really, he's been making some questionable decisions lately.
He sold Nvidia too early,
and he missed the whole bubble.
He missed the whole, like,
he's just been making some,
some strange decisions here.
He used to own 5% of Nvidia.
Do you know how much that would be worth today?
I cannot believe, you know.
So like, you know, someone like kind of like tweeted this.
They were like, you know,
someone should look into what happened
after he sold his entire Nvidia stake in 2019.
Just for reference here,
the GPT-3.5 that went viral across the internet
released two and a half years later.
So pretty, pretty insane thing there.
And then in final news,
Josh. Tell me about this. This looks super exciting. The Google DeepMind team is back with some pretty
awesome news. We are very fond of the Google DeepMind team in particular here at Limitless. They are
very good at building real world physics in a digital manifestation, which I think is a really
important thing to be good at in a world where we're training robots to be good at physical
stuff, but they have not quite had the training time to get good at physical stuff. And what I love is
that we've seen a lot of these projects coming out of the deep mind team. And today we got a new
one that is an agent capable of navigating these 3D virtual worlds. So we saw previously
there were a few models that could generate the worlds. Well, now SIMA 2, which implies there was a
SIMA 1 that I was blissfully unaware of, allows these AI agents to actually navigate these virtual
worlds. So now not only can Google generate the virtual games, which literally look like video games
in this video, but they can also have the characters navigate these complicated experiences,
understand how things work, kind of piece things together.
It's amazing to see because it feels like as I'm watching this video,
it is how a toddler would play a video game.
They're kind of slowly moving, figuring it out,
you can see it kind of reasoning in real time and navigating these spaces.
And it's important to understand as you're watching this,
everything that is on the screen is generated by AI.
Because previously, these looked like trailers for video games.
And those video games would have required thousands of people who are game developers,
lots of time on the game engine, and all of that is done now fully by AI.
So seeing this announcement was really exciting for me because, I'm like, one, I just
love video games and two, like, oh my God, wait, these AIs, not only are they able to generate
the environment, but now these AI models are able to actually engage with the environment.
And what downstream effects does that have on training humanoid robots and the like,
which we saw with the Tesla episode last week.
Yeah, I mean, yeah, to your point, we see a real-life implementation of this with Tesla,
which has a world model
which they use
in their robots,
which are the Tesla cars
and automated self-driving
and the future Optimus robots.
and what's cool about this is
Josh one of the main things
that makes your AI model
super cool or really smart
or intelligent is data
but there's a scarcity of data
there's a scarcity of rich human data
like they don't necessarily see
what we see through our eyes
or hear what we hear
these simulated environments
basically synthetically create this data and enhance any of these AI models, right?
And so Tesla kind of gets smarter by replaying a scenario where it almost crashed over and over
and over again in simulated environments until it gets it right. And then that just gets transported
to every single production-ready Tesla that anyone is driving so that if they found themselves
in a similar scenario, they avoid it. And that's just like one small use case for like what
these robots will eventually be benefiting from. So such a super cool update. Yeah. And if you scroll down
actually two posts, Ejaaz, you'll see a lot of similarities to what we spoke about with Tesla, where you could
see the model kind of reasoning in plain English. You could see, oh, we equipped the pickaxe and
copper mine. And then, okay, like, I'll equip the pickaxe. And it walks you through its chain of thought.
And then not only that, but the post below it shows that it's self-improving. So the more time it spends
cycling through, the more time it spends playing games, the better it gets. And these are early
implementations of what we kind of imagined AI would eventually be, which is this self-recursive
loop where it can improve without outside information. And it's really exciting to see it happening
in the virtual world because I guess that's the only place it can make sense where it costs much
less. You can't harm anybody if a robot goes rogue, but it can learn without the inputs of other
people. So that is the exciting thing about this, I think. And I'm really excited for the deep mind team,
man. I hope they keep going. Yeah. Well, dude, like, do you remember our conversation with Logan Kilpatrick,
the head of Google's AI Studio?
If you haven't seen that episode,
definitely go and check it out.
One of his main takeaways,
or one of my main takeaways, is all of Google's AI products
feed off of different models within the same suite.
So if their video model learns something cool,
they can transpose that onto their LLM,
which is like in words,
so that it gets recursively smarter over time.
I just think it's like fascinating
and world models kind of like manifest all of these different things
in one particular simulated environment.
So, so cool.
Kind of hard to wrap my head around sometimes,
because it looks like a video game to your point,
but awesome.
Yeah, you have to bridge this gap between
your normal understanding
when you see these videos and think like,
oh my God, wait, this is not normal.
This is actually all AI generated.
But with that, we have concluded
all the fun new updates from this week.
It was a, I'd say a medium week.
It wasn't anything crazy.
There were no crazy outliers,
but a solid week overall.
Every week there's so much stuff going on.
And the progress,
I feel like we're almost getting immune to it,
how fast things are moving.
But I hope everyone found this interesting.
If you did, please remember to share with your friends, like, and subscribe.
Ejaaz, I know you were manifesting some sort of send-off today.
Do you want to share?
Yes.
Similar to Michael Burry, you shouldn't be shorting the biggest podcast bubble in the world.
And Limitless is at the forefront over here.
We're delivering you all the latest AI news.
And just like the trade that Burry made, that he lost $300 million in,
you don't want to be shorting the best leading AI podcast in the world.
Give us a five-star rating.
Subscribe if you're not.
What was the stat, Josh?
80% of listeners?
83%.
How many of them like don't subscribe?
Over 80.
Over 80.
That's nuts.
It helps us out massively.
We are currently in the top 30 of technology podcasts across Spotify, Apple, and
wherever you listen.
Help us get to the top 10.
It would mean a lot.
What do we say?
Don't ever bet against the optimists.
And by subscribing and by being a part of this,
you are part of the optimists, and we are going to move forward and things are going to progress.
And just don't bet against the optimist.
So thank you for being optimistic with us and being on this journey with us.
And we will be back next week with a whole new slew of episodes.
So thank you for watching and we'll see you then.
