Limitless Podcast - Making Sense of the AI Cycle: Where are We Now?
Episode Date: May 7, 2026

Today, we're analyzing the soaring performance of semiconductor stocks fueled by the AI boom, spotlighting gains from SanDisk and Micron. With the substantial increase in tech giants' capital expenditure projections, now exceeding $800 billion, we can project out the trickle-down effects across the layers of the AI ecosystem. From potential bubble concerns, insights from thought leaders, and the future of memory supply in powering AI applications, we're going to get to the bottom of this AI cycle.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
TIMESTAMPS
0:00 AI Investment Insights
3:40 Flow of AI Capital
5:46 The AI Layer Stack
9:47 Rise of Agentic AI
12:29 Demand for CPUs
14:33 Memory
20:54 AI Infrastructure
24:19 Navigating the AI Market
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
If you bought Sandisk stock just a year ago, you're up 40 times your money.
Micron up eight times.
Intel and AMD are both up almost four times what you would have put in just a year ago.
And we thought this was an AI bubble.
We were actually wrong about how it's going to play out.
And we have the data to back this up.
Now, this year, five of the biggest companies in the world are going to spend close to a trillion dollars.
As that trillion dollars is spent, it's going to trickle down the stack to a series of layers
that are embedded within this AI ecosystem.
So we're going to be detectives this episode and track down.
where all of that trillion dollars in CapEx is flowing. It's clearly leaving the large cap companies.
Google is spending 90% of all the money that comes in. Where is it going? Well, we are going to
walk you through everything. So hopefully by the end of this, you'll have a pretty good idea of the
structure of what the AI investment universe looks like. And you can make up your own mind on where
you think the best place is to allocate your dollars to collect the money that's flowing from
these large cap companies. Now, I've really had to ground myself over the last couple of weeks,
because it felt like I'm in the land of make-believe.
Look at some of these stocks and how much they've increased over the past year.
This is crazy.
This is absolutely insane.
AMD is up 3.5x, and 25% of that move was literally over the last 12 hours, right?
Because they reported earnings, they crushed it.
Sandisk is up 40x.
As you mentioned, ARM, Intel, all of these are in the infrastructure layer of AI,
which has become a very consensus trade.
And that's for many different reasons.
and we're going to address that later on in the stack. But there's one clear thing that changed over the last week. And that came in the form of the major earnings reports for Q1 of 2026 from four of the largest hyperscalers. You've got Amazon, Meta, Microsoft, and Google. Now, there was a problem. There was an issue. People thought that they were going to spend hundreds of billions of dollars, and they originally committed to that being around $630 billion in 2026, but no one knew how they were going to make money. Q1 proved that them spending all that money,
not only resulted in more profit and more revenue that they had generated,
but also that they were going to revise this.
They were going to increase their spend in 2026 to a total of $800 billion,
with next year projecting $1.1 trillion.
This is just from four to five companies.
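Just to put those guidance numbers side by side, here's a quick arithmetic sketch using the figures quoted above (rounded, nothing beyond what the hosts state):

```python
# Rough arithmetic on the capex guidance figures quoted above (rounded).
capex_2026_original = 630e9    # originally committed for 2026
capex_2026_revised  = 800e9    # revised after the Q1 2026 earnings reports
capex_2027_guidance = 1.1e12   # projected for the following year

revision = capex_2026_revised / capex_2026_original - 1
two_year = capex_2026_revised + capex_2027_guidance

print(f"2026 guidance raised by ~{revision:.0%}")             # ~27%
print(f"Two-year guidance: ~${two_year / 1e12:.1f} trillion")  # ~$1.9 trillion
```

That roughly $1.9 trillion figure is the same "about $2 trillion of guidance" the hosts come back to at the end of the episode.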
I have to emphasize how nutty and crazy that is.
But again, this translates into actual revenue or money being earned.
I'm going to show you a few different stocks that kind of demonstrate or portray that.
So firstly, Sandisk, up 40x.
This is just like an astounding thing where like, you know, a 40x return on any kind of major cap stock is just completely unheard of.
Then you have Intel, which operates in the CPU and packaging infrastructure layer around AI GPUs; it's up 4.5x.
What else have we got?
We've got AMD.
I mentioned that.
It's up 320% over the last year.
But literally today it is up 17% absolutely crushing it.
And then we have Micron Technology, which plays in the AI memory trade, is up 7.25x over the last year.
And then there's this really cool investment vehicle where there's a bunch of different AI memory trades. You can get access to just a basket by buying this ETF, the Roundhill Memory ETF, something I participated in.
That only launched just over a month ago, Josh,
and it is up 72%.
Just insane gains on these stocks.
Do you have FOMO? Because I do.
Yes.
I'm feeling the FOMO.
Major.
And I think the question and part of the concern
that a lot of people are going to be feeling
as they're seeing these numbers is like, oh my God, did I miss this? It seems like very obviously
we're in some sort of a bubble, but what stage of that bubble are we in? How should we deal
with this? How should we navigate this? Well, Brad Gerstner, actually, the Altimeter Capital investor, he's like a very prominent thought leader in the space. He went on CNBC yesterday to
talk just about this, about where we are in the cycle. And his idea is that this time is different.
And these are very famous words, we've heard this time is different a lot. But he explained why.
And a lot of it comes from this capital expenditure. I mean, he just mentioned: $800 billion is being spent this year, over a trillion dollars is being spent next year,
and a lot of that money is flowing from these large mega cap companies into infrastructure
required to build out more AI use cases. And as of now, these AI use cases are actually printing
a good bit of money. I mean, it comes from that top where we see Anthropic and OpenAI printing
billions of dollars of ARR per week. Google just became the most valuable company in the world.
And it seems to be like we have this flywheel that is sustainable so long as AI
is useful. And in the case that AI continues to be useful and we unlock new use cases that are more
valuable than what they are today, then the amount of money people will pay for it will continue to go
up. And those capital expenditures that are being priced into these earnings reports can then be
priced into these further concentric circles out from the core. So if you'll notice, we didn't mention
Nvidia on one of these most valuable, like the largest gainer stocks, because they're kind of sitting
at the top in which a lot of the CAPEX is coming from. So what we're going to do now is kind of
walk through the layers and the stack of where this money flows. So if you think about it,
all of the money comes in through companies like Google, through companies like Amazon,
who are collecting income from retail. Retail is spending on their goods and services. They're
delivering a lot of value to them. They take that money. They redistribute it. Where is that going?
Is the question we're going to answer? So over the last few months, we've seen this happen in
semis. A lot of those companies that you showed us, Ejaaz, are semiconductor stocks. It's moving into CPUs. It's moving into GPUs. But let's start with the layer zero that we have here on the artifact. What's going on with the layer zero? What is at the base layer of this stack? Okay. So you mentioned that four of the hyperscalers, Google, Microsoft, Meta, and Amazon are kind of taking inflows from retail. There are currently two ways, or rather two startups and companies, that funnel 99% of that retail. And they're called OpenAI and Anthropic. They're both
private companies. So you can't even access them publicly via equities right now, but they
contain the bulk of ChatGPT and Claude users, right? So that in itself is kind of scary to
kind of read, but the fact is they have paying customers, they have paying enterprises,
and that money is where the fountain starts, the waterfall starts. And from OpenAI and
Anthropic, it flows down into what we're calling the layer one, which is kind of like the platforms,
hyperscalers, and also model labs. Google counts itself as one as well. Okay, so you've got Google,
which doesn't actually just act as a model lab. They have TPUs.
They have the infrastructure.
They have the distribution layer.
They have the cloud infrastructure, which is the main funnel of all this revenue that
they're making from their recent earnings.
You've got Amazon that's mentioned here as well, which is doing the same with AWS.
And you have Microsoft, which is doing the same through Azure, and Meta, who's doing the same from their social media platform. And we've included OpenAI and Anthropic here because some of them sometimes play
in the infrastructure layer.
So the first stack is, okay, we have OpenAI and Anthropic dealing with the retail. That money and revenue flows into the hyperscalers that can provide compute and distribute their models. Very important. Google Cloud,
AWS, and Microsoft Azure, all their cloud computing services, distribute the model to all their
enterprise customers and governments and customers that actually want to, retail customers
that actually want to end up using these things. And then, as you mentioned, Josh, we move into
layer two, which is what we're calling kind of the GPUs and semiconductor area, right? Now, this is
where the narrative breaks. Classically, we have been told, AI's going to do so well, demand's going to increase.
Where's the best way to buy the picks and shovels?
Well, it's Nvidia, of course, right?
They're the most valuable company in the world, right, Josh?
And that was correct.
Yes, they were a $4 trillion company.
Yes.
As of today, that has changed.
Google has taken the crown again.
And I think it's because we're kind of reaching further out on this risk curve.
The money is starting to propagate further out.
So as it started, we needed to go from zero to one.
We needed to go from no AI to AI.
And what was required of that?
Lots and lots of GPUs.
lots and lots of intelligence, and that's where
NVIDIA was most valuable, because they were able to provide
the GPUs to spin up the AI,
to take the industry from zero to one.
Now that we have AI, now that we have established
business and revenue streams, that money is
flowing to optimize the stack.
So Jensen and NVIDIA are going to
continue to make a tremendous amount of money, but that's
kind of baked into the valuation, and we've seen that run up.
I mean, they're the second most valuable company in the world
right now. It's a tremendous amount of market cap
that they've absorbed over the last, call it, 36 months. Now the time has come to cycle further
out this risk curve to move beyond the picks and shovels into this next layer of the stack that
even sits below GPU consumption. And my understanding, Ejaaz, is it's not just GPUs now. There's also CPUs. And that has to do a lot with the new agentic trend that we're seeing where
agents are kind of being proactive and they need orchestrators. And while GPUs are really good
at solving complex mathematical problems to produce inference, you need smarts and you need brains
in order to kind of orchestrate these models. And that's what's key about this next infrastructure
layer, that's going to really be a pretty big deal in the space.
That's exactly it.
So I've got this article pulled up where Jensen went on stage just now, and he talked about GPUs and the demand increasing dramatically.
And he goes, consumption is going through the roof for GPUs with the rise of Agentic AI
in the last several months.
Now, I want to unpack this phrase very specifically because it explains why Intel and
AMD are absolutely skyrocketing right now.
Agentic AI refers to AI agents.
An AI agent is kind of like, think of like an instance of a ChatGPT or a Claude Code
that can just go off and do a bunch of things autonomously for you.
So you don't have to type to it, you don't have to speak to it, you don't have to prompt it.
It just goes off and works autonomously.
Now you can imagine the market for an autonomous AI worker, it's pretty large.
The TAM is pretty huge.
It's pretty much any sector that involves a computer for now until robots actually become a thing.
But there's one kicker.
This is the narrative violation, Josh.
Guess what you need to allow the AI agents to use the tools to go off and do that work?
They need the brain.
They need the CPU.
Central processing unit versus the graphics processing unit.
Right.
So let me take you through a little historical context here.
Now, if we rewind, let's take GPT-4.0, right?
It was a breakthrough model.
Everyone loved it.
Can you guess how many CPUs were used to train or inference that entire model?
Like, what's the ratio roughly? Do you think?
How many...
CPUs or GPUs?
CPUs.
So how many CPUs do you need for, like, the average GPU?
Like, what do you think that ratio was?
I would guess close to zero.
Yeah.
You would be correct.
We barely even used it.
Fast forward to today, that ratio is almost one to one.
Let me rephrase that.
You need one CPU per GPU,
and that trend is going to flip over the next six months,
where the number of CPUs will outweigh the GPUs.
So basically, Intel and AMD,
who kind of like built their bread and butter of profit margins
off of CPUs that were used for like gaming
and a bunch of other stuff
now have found themselves in an absolute goldmine of an industry
which requires the things that they've been building for over decades.
So Intel and AMD are like, well, okay,
I'll spin up as many CPUs as you want.
And Jensen's like, I need all these CPUs so I can spin up all these data racks.
Now, to help you understand why it is needed specifically,
Josh, you mentioned AI agent orchestration.
Right.
So let's say you spin up a bunch of AI agents.
They need to, guess what, interact with other AI agents.
They also need to use tools.
And most importantly, they need to use these tools faster than humans can themselves.
Josh, on yesterday's episode, you mentioned what you loved about Codex specifically was the browser use.
And the fact that it's so quick, right?
Yep.
Like, you love that.
The reason why it's able to do that is because it has access to more CPU.
It allows you to kind of run a C compiler and a bunch of other things.
So the long story short is CPUs are in huge demand,
and that's why AMD stock is up 15% today, and it's up, what, three and a half x over the last couple of years.
So they just reported their earnings, Intel did the same about two weeks ago,
and there's a huge demand for CPUs in general.
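To make the orchestration point a bit more concrete, here's a minimal sketch of an agent loop in Python. The helper functions are hypothetical placeholders, not any real vendor API; the point is just that every GPU inference call is wrapped in CPU-bound work: parsing the tool call, running the tool, managing the loop.

```python
# Minimal, illustrative agent loop (hypothetical helpers, not a real vendor API).
# The GPU does the model forward pass; everything wrapped around it is CPU work.

def call_model_on_gpu(messages):
    """Placeholder for the GPU-bound LLM inference call (returns a canned reply here)."""
    if any(m["role"] == "tool" for m in messages):
        return {"tool_call": None, "content": "final answer"}
    return {"tool_call": ("browser", {"url": "https://example.com"}), "content": ""}

def run_tool_on_cpu(name, args):
    """Placeholder for a CPU-bound tool call (browser, compiler, file I/O, ...)."""
    return f"result of {name} with {args}"

def agent_loop(task, max_steps=20):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model_on_gpu(messages)      # GPU: token generation
        if reply["tool_call"] is None:           # CPU: decide whether we're done
            return reply["content"]
        name, args = reply["tool_call"]          # CPU: parse the tool call
        result = run_tool_on_cpu(name, args)     # CPU: run the browser/compiler/etc.
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(agent_loop("look something up"))  # -> "final answer" after one tool round-trip
```

Multiply that loop across a swarm of agents running in parallel and the CPU-side work scales with it, which is the rough intuition behind the CPU-to-GPU ratio point above.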
And this is kind of the natural extension of the way that AI is moving.
This hasn't always been the case, but we went from basically LLMs, right,
where we just used GPUs.
It was a chatbot.
you type into an interface. Then we went into reasoning and chain of thought, where there were a lot more tokens needed in order to answer the same question. Then we moved into the agentic era where
things like OpenClaw and the Claw-Agentic swarm operating system came into play. And that's when this
trend changed again. So assuming these trends are going to continue changing, the paradigms are going
to continue to shift. But the one singular core truth throughout this is that we need more tokens.
And how we get them is up for debate, like how those are going to be generated. But the fact is that no matter what we do, every paradigm shift has resulted in a
gigantic increase in more tokens, which seems like is a trend that you continue to bet on. And one
person who bet on this pretty bigly was Donald Trump and the U.S. government, like us, for the
first time, like, we're making money. Trump bought Intel and he's up, what, 500 percent so far?
500 percent. So the government, about a year and a half ago, took a 10 percent stake in Intel. And their primary reasoning for that was we have too much reliance currently in the
US on this one company called TSMC, which is based in Taiwan, which China presumably wants to take over at some point. It's too much of a geographical and national risk. And so we wanted to kind of bring a bunch of TSMC's capabilities onshore. And that was expressed in the form of Intel,
who doesn't just build CPUs, by the way. They're working on building a bunch of frontier GPUs. And that's
a story for another day. But the point is, Trump bought
10% via the government, and they are up 5x. You know what price they bought at? $20.47. It's trading at about $100.11 today. So shout out to the US government pumping our
bags and winning on behalf of the people. That's very exciting. Yeah. Okay. So let's continue down
this waterfall. So what we've started off with is Anthropic and OpenAI at the retail layer; that flows down into the hyperscalers, the Googles, the Amazons, the cloud distributors. Their revenue margins are expanding.
This is amazing.
But they rely on semiconductors.
They need Jensen Huang's GPUs.
But Jensen Huang needs all these CPUs because all these AI agents are using all these different tools.
Okay, so we need CPUs.
Now, can you guess? There's another component, Josh, that makes up 50% of the bill of materials cost for a GPU.
50%.
Guess what it is?
What does every single GPU on the planet need?
Jensen, what does he need?
What is his bottleneck, the choke point without which the company doesn't work? It's memory. Memory is the biggest thing in the
world. It's impossible to make enough memory. No one has it. And therefore, any of the memory stocks
that you have invested in in the last year have gone absolutely nuclear. And these are probably
among the biggest winners. When you think about Sandisk delivering a 40 times return, Micron is
looking like a seven times return. Memory has been the choke point of all of these because
memory is the most important thing. It's where we get our context windows from. It's where a lot of
the training data is stored, memory is the next layer of the stack.
Yes, exactly.
And if you thought Nvidia had a monopoly on GPUs,
let me introduce you to the secret monopoly of memory.
So there are basically four companies which dominate in the AI memory landscape. Their names are Micron, which is a US company, SK Hynix, which is, by the way, the biggest memory provider, a Korean company, and Samsung, second biggest.
And then we have Sandisk.
Now, you might assume that they all make the same types of memory.
Three of them do.
Micron, SK Hynix, and Samsung make something called high bandwidth memory. This is the premium memory that goes into making the GPUs, the Rubin Ultras, the fashionable new GPUs that Nvidia releases every single year. 50% of their bill of materials is this fancy high bandwidth memory.
It is an extremely complex thing to make.
The supply is super constrained
and only these three companies dominate
in making it.
And that is for a different reason.
I'll explain the memory cycle later on.
But then you have Sandisk,
which is up 40x.
And the reason why they're up 40x
is because there's a second type of memory
which is required by these AI models
and it's called NAND, or NAND flash.
Now, here's the difference between the two.
High bandwidth memory,
which is the original GPU premium,
basically allows you to move
data really quickly in the AI models. Think about it, right? These AI models are like 10 trillion parameters large, at least Meta's and GPT-5.5. They need access to data very quickly, and it's clunky.
It's very big. High bandwidth memory basically solves that. It allows you to kind of store the memory
and like move that memory really quickly. But there's the second type of memory, which Sandisk specializes in, called NAND. And what that does is, when you're having a conversation, Josh,
have you ever noticed that the models now are really good at maintaining the context and remembering
things that you mentioned like a few sentences before. Have you noticed that?
For a longer period of time, yes. The context windows have expanded quite a bit now.
Yes, exactly. Now, the main enabler for that is this NAND storage, which is kind of like a temporary
memory storage. It's more of a commodity. It's not as sexy as high bandwidth memory,
but it's super important. Sandisk dominates that entire sector, which is why it's up 40x.
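For a rough sense of the trade-off between the two memory types being described here, the comparison below uses loose, order-of-magnitude characterizations (illustrative only, not the specs of any particular product):

```python
# Very rough, order-of-magnitude comparison of the two memory tiers (illustrative only).
memory_tiers = {
    "HBM (on-package DRAM stacks)": {
        "bandwidth": "multiple TB/s per accelerator",
        "capacity": "roughly 100-300 GB per accelerator",
        "latency": "hundreds of nanoseconds",
        "role": "hold the weights and working data the GPU touches every step",
    },
    "NAND flash (SSD tier)": {
        "bandwidth": "single-digit GB/s per drive",
        "capacity": "terabytes per drive, much cheaper per GB",
        "latency": "tens to hundreds of microseconds",
        "role": "persist long context, conversation history, and datasets",
    },
}

for tier, props in memory_tiers.items():
    print(tier)
    for key, value in props.items():
        print(f"  {key}: {value}")
```

The short version: HBM is small, extremely fast, and sits right next to the GPU; NAND is big, cheap, persistent, and much slower, which is why the two end up playing different roles in the stack.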
So memory has famously come under a huge supply constraint. These providers are sold out, and I'm not
exaggerating this until the end of 2028. They have signed customer contracts until the end of
28. They don't have enough memory. They don't have enough supply. And so ramping that up is going to be
the key focus over the next couple of years. And this has been really felt in the retail market as
well. If you use memory for traditional use cases like building PCs or just general consumer
products, all of those have either been delayed or the price has increased because it is so difficult
to get your hands on this memory. And when you think about this memory, there's a pretty simple
analogy that I was using earlier today when it came to describing and understanding how this works.
So high bandwidth memory, like you mentioned, it's known as HBM for short. It's basically a series
of DRAM dies stacked on top of each other. And you could think of those as this huge desk
right next to the AI model that they can very quickly reference. And then the NAND is the file cabinet
that's kind of right next to it. That is more persistent that holds these ideas for a longer
period of time. It's larger, but it is a bit slower. And the dynamic between the two is really
interesting. Now, one of the things that was kind of a narrative violation as I was reading about
it is the idea that DeepSeek can actually create more efficient models in terms of how much
they use memory, and yet the demand for memory goes up. So Deep Seek, they published this famous
paper that allowed them to get basically frontier level intelligence using a small fraction of the
amount of memory that traditional AI labs have used. This seems like a bad thing. We need less
memory because we've become more efficient, but the reality is that the inverse actually happened,
where now we have greater memory efficiency, but far greater memory demand. So,
Maybe you could explain this dynamic, because there's a strange DeepSeek paradox going on that I think is a narrative violation, the kind people look for when searching for things that could pop this bubble.
And the reality is that this is not even a little small tear in the bubble.
It's actually improving the quality of the money spent.
How is this working with this memory dynamic here?
Yeah, it's basically Jevons paradox. So the problem that you're explaining is, if 50% of this GPU thing is reliant on this one component supplied by four different companies, they have to work on a workaround, right? They're probably going to create GPUs or models that don't rely as heavily on memory. DeepSeek version 4, which was released a few weeks ago, was the instantiation of that. It uses, I think, like, 5 to 15 percent of the memory of a Claude Opus 4.7
model, which is a drastic reduction, which may lead you to think that memory stocks are going to
crumble, except the actual opposite happened. So I've got this block of text here, but I'm going to
explain it in very simple terms. What they found was the architecture unlock that DeepSeek V4 created actually ended up using more of that NAND flash memory
because their architecture change was,
we'll just use more agents.
We'll let more agents do the thinking
before we give someone an answer to their prompt,
and it resulted in a smarter answer.
But that didn't decrease the reliance on HBM or memory in general.
It increased it overall.
You'll see here that each prompt required 157 rounds
and a bunch of token context,
all of that interfacing with these memory components.
So the actual opposite happened,
and this is Jevons paradox playing out in reality,
where if the cost of goods goes down,
you'd assume demand goes down
because it's cheaper and you don't make as much money,
but in fact the opposite happens,
where demand goes way higher
because now you can do more things for cheaper.
So Jevons paradox playing out here.
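A toy version of that arithmetic (made-up numbers, purely to illustrate the Jevons effect, not measured data): if an efficiency gain cuts the memory needed per answer by 10x, but cheaper answers unlock 50x more usage, total memory demand still rises 5x.

```python
# Toy Jevons-paradox arithmetic with made-up numbers (not measured data).
memory_per_answer_before = 100.0   # arbitrary units of memory per answer
answers_served_before    = 1_000   # demand at the old cost

efficiency_gain = 10   # new models need 10x less memory per answer
demand_growth   = 50   # cheaper answers unlock 50x more usage

memory_per_answer_after = memory_per_answer_before / efficiency_gain
answers_served_after    = answers_served_before * demand_growth

total_before = memory_per_answer_before * answers_served_before
total_after  = memory_per_answer_after * answers_served_after

print(total_after / total_before)  # 5.0 -> total memory demand still rises 5x
```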
Okay, next layer of the stack,
power generation and infrastructure.
How are we going to plug in these GPUs?
Well, we need infra for it. Otherwise, they're just going to be sitting there dark with no power
and no ability to turn them on. This is layer five of the stack. This is where you will recognize
names, perhaps, like, Bloom Energy, which we have famously mentioned. Our boy, Leopold is up, like,
what, four or five times return on Bloom Energy? Yeah, like two Bill. This is, yeah, like doing pretty
well. And this is the part of the stack that is, I guess, core and sits a little bit downstream of
the GPUs. But again, contingent on it working. Tell me a little bit about this company, particularly, I mean, Corning. We had a deal with Corning announced today with Nvidia, which is pretty big.
I would imagine most people listening to this have never even heard of Corning before.
Yes, exactly.
So I have to be upfront.
This is where I'm weakest in this stack or this script.
So I have to be upfront, and maybe we'll do an episode later down the line when we've done a bit of research.
But basically, if you look at the stack so far, it flows down from retail, it goes through semiconductors, it goes through CPUs, and it goes through memory.
Where does the puck flow next?
Well, there's an issue.
I buy all these GPUs.
I buy all these CPUs.
I set them all up neatly in a warehouse, in a stack, in a server rack,
but I don't have enough power to power the thing.
Or if I do have the power,
I don't know how to regulate the power to the right GPUs
and the right CPUs at the right time to prevent a blackout,
but not overheat it.
And then I have a cooling system.
There's a whole architecture around the GPUs
and the CPUs operating at optimal form.
There was an article that was released by The Information
this week which showed that xAI,
which has the largest cluster of GPUs,
over a million high-grade Nvidia GPUs,
only utilizes 11% of its power.
That's because they don't have enough power generation
or the chips or the architecture
to allow this power to flow directly through GPUs.
I think this is where the puck is flowing next.
And today's announcement,
where Nvidia is partnering with Glassmaker Corning,
which is focused on optics specifically,
is part of that puzzle.
But there's also another piece, which is basically the power suppliers or infrastructure providers on their own, mainly represented by companies like GE Vernova and Constellation Energy (CEG), which we had on our stack just now, which not only supply the power and make sure the power ends up at the data center, but also regulate and make sure that the power enters at the right time to the CPUs and the GPUs to make sure they're optimally performing.
And this is layer five of the six layer stack.
The sixth and final layer being the raw materials.
Now, this is kind of the bare bones foundational layer.
When you think about the AI stack from first principles, what is required at that foundation,
it is the raw materials in to get the intelligence out.
We've turned sand and silicon into thought.
And throughout that entire process, there is a tremendous amount of technology empowering it.
Now, we are not proficient currently in materials.
But if you would like us to be, and if this is an episode of interest, we can go deeper on layer
five and layer six of the stack, because I was looking at a chart this morning of lithium carbonate,
which is just a very critical component to a lot of the AI stack. I'm looking at a chart from
November where it was priced at $75,000, and now it is at $187,000. So the gain and return on
some of these materials has been just unbelievable. And this is the sixth and final layer of the
stack. So perhaps in a future episode, but hopefully this has given you kind of a loose orientation
of how you could think about the trickle down economics of AI, how it kind of starts from this
layer zero and works its way through the infrastructure. Now, Ejaaz, we have to kind of orient ourselves
in reality. Good things don't last forever. And there is a high probability that this one doesn't.
Perhaps this time is different. But there is a case to be made that it is not because traditionally,
memory has done well in the past. There have been a series of memory booms that have done very well,
every single one of which has been followed by a bust. So where would you say we are currently
in the cycle? Or at least, what are the downsides that people should be looking out for and aware of
that would let them know that, hey, maybe things are looking a little frothy here.
There's some cause for concern.
Yeah.
So the first factor is the boom and bust cycles, particularly with memory, which I think is, what was it, layer three, layer four of the stack that we just described. It has traditionally gone through many boom and bust cycles, which I'm showing on our screen here through this chart.
Now, what isn't represented here is that at the start, on the left-hand side of this chart, we had 14 key memory providers. Fast forward to today, and we have three for high bandwidth memory and one for NAND. So four. And the reason why is every single boom, and mainly the bust, has crushed out
specific companies to leave like the three that we have right now. And so history would tell us
that the same thing is going to happen this time. Now, I'm conflicted here, Josh, because I'm sitting here
as a podcaster and saying this, but I listen to three podcast episodes from the memory execs from all
of these different companies yesterday, guess what they said? They said, this time is different.
They said, this time is a very unique opportunity because not only is AI surging demand for all of
these memory chips and, you know, they've got payments for all of these things, on the front end,
the AI products themselves are being paid for. They're actually making money. Now,
the definition of a bubble is it's levered. It's a bunch of hoopla, right? There isn't actually
any money coming in. This disproves that narrative, the bubble narrative, because there's real money being paid. None of these companies are levered up. That $800 billion CapEx number that we
started this episode with is all coming from cash flows that they have right now or cash reserves
that they have right now. No one is levered up. No one's borrowing money for this. Yeah, in '26 and '27, I mean, that's $2 trillion of guidance that comes from cash flow. It comes from Google spending
90% of the money they actually make. So there isn't leverage. This is somewhat sustainable. And
now that we have the rough CapEx guidance, people can kind of price in what this will look like when it impacts the market. And that is the current layout of the land. This is everything you need to know
about the full AI stack, what has been doing well, what hasn't. And I guess the question for you is,
now that you're oriented or whoever is listening, is what part of the stack are you most
interested in, specifically which companies are most interesting? Because there is this gigantic hot ball of money that is moving its way through the market for the first time in a very long time.
This is a paradigm shift. For the last decade or two, all of the money has kind of accumulated at the top of these funnels, to Google, to Amazon, to these large mega cap companies,
but now it's working its way out. And where is that money going to end up? It is TBD. We've seen
semiconductors. We've seen memory do really well. Will they continue? TBD. Where else is the money
flowing to? We're not sure. But the goal of this episode is just orientation, right? You kind of now
have a lay of the land. You have an idea of where everything is and can maybe go out and pick some
winners. So I guess the prompt for this episode is who are the winners? Who's going to win this
battle of CAPEX accumulation is what I would love to know, because I will plan and allocate my portfolio
accordingly. Yep, yep, same here. My bags is the answer, Josh. But right now, I think that we are
kind of in a bubble, but it's a different type of bubble than what we're used to, because no one's
levered up, as I mentioned earlier on. And it remains to be seen. Like, every quarterly earnings,
I'm going to evaluate everyone's spec sheet and see if they're, you know, actually making money from this thing. Q1 earnings, which were released over the last two weeks, tell us that they are. And listen, there are always levered bets everywhere.
Like, you know, if you don't buy any of the companies that we mentioned, there are smaller cap stocks
that you might go after, they are high risk with all of these different types of investments,
and the narrative can change, as we know, in AI, in a second. So the long story short is,
neither of us know, but we will be keeping track of everything and we'll be updating you.
Josh's prompt earlier on is genuine for me as well. If you want to hear more about lower layers
of the stack, we will do the research and we'll deliver you that episode. Let us know in the
comments and let us know what stocks you're investing in. But aside from that, Josh, I think we're done.
That's it. Thank you guys so much for watching. As always, share it with your friends who might
be interested in an episode like this. And we will see you guys in the next one. See you guys.
