Big Technology Podcast - Inside The AI Bubble: Debt, Depreciation, and Losses — With Gil Luria
Episode Date: November 14, 2025

Gil Luria is the head of technology research at D.A. Davidson. Luria joins Big Technology Podcast for a special Friday edition report digging into the AI bubble, or whatever term you'd like to use for the questionable investment decisions in AI today. We cover all the bad stuff: debt, depreciation, and losses. We talk about Michael Burry's bet against the technology and why he might be right, and how OpenAI should play this to optimize its potential. Tune in for a comprehensive edition looking at the risks of the AI trade, and what happens from here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
AI bubble fears are growing as Wall Street tries to do the math.
Let's break it down in the Big Technology podcast, Friday edition, special report on the AI bubble with D.A. Davidson, head of technology research, Gil Luria.
That's coming up right after this.
The truth is AI security is identity security.
An AI agent isn't just a piece of code.
It's a first-class citizen in your digital ecosystem, and it needs to be treated like one.
That's why Okta is taking the lead to secure these AI agents, the key to unlocking this new layer of protection: an identity security fabric.
Organizations need a unified, comprehensive approach that protects every identity, human or machine with consistent policies and oversight.
Don't wait for a security incident to realize your AI agents are a massive blind spot.
Learn how Okta's identity security fabric can help you secure the next generation of identities, including your AI agents.
Visit Okta.com.
That's O-K-T-A dot com.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called chat concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing,
and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
Welcome to Big Technology Podcast Friday edition
where we break down the news in our traditional cool-headed and nuanced format.
We have a great show for you today
because we're going to go deep inside the AI bubble.
This is really an episode I've been looking forward to doing for a long time.
We're going to look fairly deep inside the numbers,
which companies are taking on too much debt,
whether all of this is sustainable.
We've touched on it in various different formats.
but today we're actually going to bring a name that you've heard on the podcast before
because we've read his analysis and bring him to life for you, at least being a voice,
Gil Luria, who's the head of technology research at D.A. Davidson is here with us to discuss it all.
Gil, great to see you. Welcome to the show.
Thanks for having me, Alex.
So I have really appreciated your analysis.
Oftentimes when we see Wall Street analysts weigh in on trends, we typically hear from them about the things they think
are going well, but less about the things they think are not going well. You're somebody who's
really called balls and strikes. You've talked about, you know, the companies that are doing this
the right way, the companies that are doing it the wrong way. And that analysis is super
valuable because we find ourselves in this moment where Wall Street and really all of us are
trying to figure out whether the AI investment curve is going to keep going the way it's been going
or whether it's a bubble and everything is going to pop. So let me give you at least,
to start the argument that it is not a bubble.
And this is coming from Reed Albergatti at Semaphore.
He says, AI is in a market of opportunity and uncertainty, not a bubble.
He writes, the market punished AI stocks like CoreWeave and Palantir this week.
It seems like the world is convinced that the AI bubble is deflating and nobody wants to be
the last one out.
This isn't just a Wall Street phenomenon.
Every tech dinner I've been to lately, a good chunk of the conversation was spent on
journalists peppering bullish tech executives about how long this really can last.
And yet, what the executives are saying makes sense.
They are selling a product that customers can't get enough of,
and the total addressable market for this product is virtually every person and company on the planet.
And this week, we saw AI companies touting rosy numbers, from AMD CEO Lisa Su predicting the AI compute market
would grow to $1 trillion to Anthropic getting profitable by 2028.
So for all this talk of AI bubble, too much debt, I think Reed is making a really good point here, which is that there is insatiable demand for the products.
And you do have public company CEOs, like Lisa Su, who have to have some rigor behind the things they say, talking about these major numbers.
And even companies like Anthropic, which are losing a lot of money, planning to get profitable in a few years.
So what's your read on this, Gil?
There's a lot to unpack there.
And the framework that I do it with is to say both things are true.
So AI is the most revolutionary technology that we've had in a really long time.
Whether it's back to the Internet or back to the Industrial Revolution, we'll only know in retrospect.
But clearly the tools are very powerful and are getting better.
All you need to know in order to realize that is just use them.
You may ask ChatGPT to do things for you that are hard, that you would ask other people to do,
whether it's summarizing, writing, giving you advice. And you see that not only is it incredibly capable,
but it's much better than it was a year ago, and it's much better than the year before that. So yes,
there is insatiable demand for this product. That is true. There's a lot of healthy behavior
around that capability.
And the healthy behavior comes from
reasonable, thoughtful business leaders
like the ones at Microsoft, Amazon, and Google
that are making sound investments
in growing the capacity to deliver AI.
And the reason they can make sound investments
is that they have all the customers.
They have all the business customers
and by extension of their relationships
with OpenAI and Anthropic,
they have all the consumer relationships as well.
And so when they make investments, they're using cash on their balance sheet, they have
tremendous cash flow to back it up, they understand that it's a risky investment, and they balance it
out. So all of that is true. At the same time, we are seeing behaviors that are unhealthy. And that's
where those of us that have lived through financial bubbles or technology bubbles are recognizing
patterns that we've seen in the past and are saying, hold on, this is unhealthy behavior.
And there are companies that are exercising unhealthy behavior, and that's what we're
trying to call out. And that's why there is a good, reasonable debate here is what's
healthy and what's unhealthy. And you mentioned several companies here. So I want to touch on a few
of them because they represent some of this range.
Palantir is the best company in the world right now.
I'm not going to even say the best software company.
They're the best company in the world right now because what they're able to do is go
into a company and ask them, what's your biggest need right now?
What is it that you think AI can do for you?
And then do it soup to nuts.
And that's why you're seeing extraordinary growth rates there, and they're being incredibly
successful, both on the enterprise side and with the government,
where they're doing similar things
in just a more clandestine way.
Then there's companies like CoreWeave,
which is the poster child for the bad behavior
that I'm talking about.
We're talking about a startup
that is borrowing money
to build data centers for another startup.
They're both losing tremendous amounts of cash,
and yet they're somehow being able to raise this debt capital
in order to fund this build-out,
again, without having the customers or the visibility into those investments paying off.
So there's a whole range of behaviors between healthy and unhealthy,
and we just need to sort that out so we don't make the mistakes of the past.
And we can delve into why I think debt is an unhealthy way to invest in data centers.
I think that's a worthwhile discussion because when we just say,
oh, we don't want debt financing.
There's a reason for that,
and it's not just our experience from the past.
Okay.
Actually, let's go right there right now.
The debt is an issue.
I'm thinking about companies like Oracle,
which is taking on,
Oracle itself is taking on a tremendous amount of debt
to fund its AI data center buildout
with the promise that OpenAI will eventually pay them
revenue that may or may not materialize.
It seems to me, I mean, as someone who's not spent too much time digging into the finances of Oracle, that they're effectively leveraging the company on Sam Altman's ability to deliver revenue and, in effect, maybe profitability growth for OpenAI.
Then you have Meta, which has had a lot of cash on the balance sheet, and they're now using
debt to fund their AI data center buildup.
Maybe with Meta, you could say, you know, companies typically build this way, and even though they have the cash, it just makes more sense from a financial standpoint, because there are arguments to say, okay, fund it with debt, who cares, you'll pay it back, everything's growing well. But I'll actually turn it to you and hear your perspective on why debt is such an issue here. We've just started to see debt make its way into this conversation. So why is it a problem, and how concerned should we be?
So we have to go back to Finance 101, right? There's certain things we finance through equity, through ownership, and there's certain things we finance
through debt, through an obligation to pay down interest over time.
And as a society, for the longest time, we've had those two pieces in their right place.
So debt is for when I have a predictable cash flow and/or an asset that can back
that loan. And then it makes sense for me to exchange capital now for future cash flows to the lender.
So again, the conditions are an asset that is long-lasting that can back the loan, and/or predictable cash flows to support the loan payments, right?
That's why we have a mortgage. A mortgage is an example of both. A mortgage is, wait a second: if I stop paying my mortgage payments, the bank owns the house. And since they only lent me 80% of the value of the house, even if the value of the house goes down a little bit, they'll be fine. And they have access to my income, which
is relatively predictable, even on Wall Street. And so they know that I'll pay my mortgage payment.
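Gil's mortgage example can be sketched in a few lines; the house price below is illustrative, not from the episode.

```python
# Sketch of the lender's cushion in the mortgage example. The bank lends 80%
# of the house's value, so the collateral can absorb roughly a 20% price
# decline before the loan is no longer covered. House price is illustrative.

def collateral_cushion(house_value: float, ltv: float) -> float:
    """Fractional price decline the collateral can absorb before the
    outstanding loan exceeds the house's value."""
    loan = house_value * ltv
    return (house_value - loan) / house_value

cushion = collateral_cushion(500_000, 0.80)
print(f"{cushion:.0%}")  # → 20%
```

That 20% equity cushion, plus the predictable income stream, is exactly what is missing from the speculative data-center loans discussed later in the conversation.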
That's a loan that should be there, right? We use equity for investing in more speculative things
for when we want to grow and we want to own that growth, but we're not sure about what the cash
flow is going to be. That's how a normal economy functions. When you start confusing the two,
you get yourself in trouble. And that's what, to your point, Oracle is doing. They're saying,
I have this startup that's promising me $300 billion of revenue at a high margin over the next five years.
So I'm going to go borrow money to build out the infrastructure in order to deliver that.
And what Oracle has been exposed as is, hold on, Open AI promised me $300 billion.
They also promised Microsoft $200 billion, Amazon $38 billion, CoreWeave $25 billion, in total $1.4 trillion.
And who's this company that just promised all that?
A company that, at best, will have $15 billion of revenue this year
and will be losing more than that, probably more than $20 billion this year.
So are they in a position for me to borrow money
because I have certainty around those cash flows?
No, that's bad behavior.
And that's what we're talking about here.
If you're borrowing money to make a speculative investment based on a speculative customer, that's bad behavior.
And frankly, that's what's dragged the market down over the last few days is the realization that this bad behavior is happening.
And nobody wants a piece of that.
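A quick sanity check on the mismatch Gil describes, using the figures as quoted in the conversation:

```python
# OpenAI's compute commitments versus its revenue, using the numbers cited in
# the conversation ($1.4 trillion committed, at best $15 billion of revenue
# this year). The point: the committed spend dwarfs current revenue.

total_commitments = 1.4e12   # $1.4 trillion, per Gil's tally
annual_revenue = 15e9        # $15 billion, at best, this year

ratio = total_commitments / annual_revenue
print(f"commitments are roughly {ratio:.0f}x this year's revenue")  # → 93x
```

Nothing in this arithmetic says the commitments can't eventually be funded; it just makes visible why lenders treating those promised cash flows as predictable is the behavior being questioned.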
Okay.
So what are the consequences then if they can't pay? Let's keep with this Oracle example, right? They're building these data centers. Let's say the music stops, and, you know, OpenAI is like, all right, there's not going to be any more AI improvement left, or for any number of reasons AI development slows down, and they're like, actually, we're not going to lease those data centers, or we don't have the money to pay you. Is that just a local issue for Oracle, or an economy-wide problem?
The answer to that is it depends on what magnitude we're talking about.
Again, lessons of previous cycles, and especially the financial cycle.
If we have tens of billions of dollars of debt into an asset that stops being a productive asset, then if there's a problem, it's the people that issued that debt or own that debt that lose money, and the people that own stock in the companies that made those loans, Oracle, CoreWeave, et cetera. If it's tens of billions of dollars, some financial firms will lose, and mostly the owners of that debt and that equity will lose. The problem starts happening when you get into hundreds of billions of dollars of debt, which is where we were headed, at least as of a couple of weeks ago.
Again, OpenAI, this startup, great startup, great product. A startup committed $1.4 trillion to all
these entities, so those entities, as well as Open AI, could go out and raise debt capital,
which means they were seeking, they and their customers were seeking hundreds of billions of
dollars of debt. If we are here two years from now, and there's hundreds of billions of
dollars of debt, and the demand for AI stabilizes, or we built enough data centers to
support the demand we have at that point in time, and the price for leasing AI compute,
for renting access to GPUs, goes down,
all of those assets then can't pay enough
to pay the interest expense on that debt.
All of that debt defaults at once.
Now we're talking about systemic risk.
That's what folks are warning about right now.
It's okay if some financial investors lose
tens of billions of dollars here and there.
If we have hundreds of billions of dollars of debt
into what is really just one product with one price, and that price goes down and all those
assets become worthless, now we're going to drag the entire economy down. And again, we're all saying
this from experience. Everybody should rewatch The Big Short, and not just to see Christian
Bale and Margot Robbie and Selena Gomez. It's a great movie that talks about how it's a little
problem until everybody does it and then it's a big problem that affects everybody.
Right. Okay. So one more question about this. Who are they borrowing from? Who are the institutions, or the investors, or the individuals giving them this money? I mean, Gil, you lay it out so well: we don't know, this is speculative, and you shouldn't really use debt for speculation, because it could go under and then you have a problem. So who are they borrowing from? And what do you think the calculations were from the people lending this money, who obviously understood the things that you're saying and said, you know what, let's give them the cash anyway?
Well, the short answer is the largest institutions in the land.
US Bank, J.P. Morgan, Mitsubishi Bank. Those are the companies lending to CoreWeave. And again,
the math they're doing, we believe, isn't the right math. Let me dig into that, what I mean
by that a little bit. Again, this is a speculative asset. Just because we're all using it
and excited about it, which we laid out at the beginning, we are, and it is exciting, and we need a
lot more compute. It's still a speculative asset in the sense that we don't know how much of it
we're really going to need in two to five years, because we don't have experience doing that.
This is brand new. We don't know how much a GPU is going to rent for in five years.
You mean when we get the revenue projections from OpenAI that they're going to make, like, you know, a hundred billion dollars a year? I guess I'm exaggerating a bit. You can't trust them, because how do they know?
Exactly.
So one, AI may turn out as well as we expect, but it may not.
And two, Open AI is not in a vacuum.
They're competing.
Part of the reason they're over-promising, creating this too-big-to-fail, fake-it-till-you-make-it dynamic, and getting everybody else to have skin in the game, is because they know they're competing with Meta and with Google and
with Elon, people that have a lot more resources than they do. So for them to say, oh, we're going to
have $100 billion of revenue by 2027, which Sam Altman just did, is completely disingenuous.
He has no idea. He's competing against much bigger, more powerful companies that have technology
that's at least as good as his.
So lending money based on that is dangerous.
Because again, these GPUs, you're building a data center, you're renting out GPUs,
and right now maybe you're renting out a GPU for $4 an hour.
And maybe that way the business makes sense.
But these GPUs keep getting so much better every year that that same GPU in just
three years may be only renting out at $0.40 an hour, at which point the data
center is literally worthless, because that won't be enough to cover the expense of operating
the data center. So this is where we get in trouble, when somebody underwriting at J.P. Morgan,
a U.S. Bank, or Mitsubishi Bank ignores that. And to answer your question, why would they do that?
These are professionals. It's because they don't have the downside, right? They have a mandate
to deploy capital into AI. They got an order from their boss who got an order from their boss
that says, we don't have enough AI in our portfolio.
Go find me AI to invest in.
And so somebody comes to them and says,
hey, look, I'm building a data center, lend me money.
I'll pay you 9%.
That's a fantastic interest rate.
And you sign up for it.
You get a big bonus that year based on signing that deal.
If the deal goes sour,
if the data center is worthless in three years,
you don't care.
You're not giving your bonus back.
That's the world we had back in the financial crisis.
That's how we got in trouble then.
And that's how we could get in trouble now if we don't do something about it.
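The rental math Gil walks through ($4 an hour today, potentially $0.40 an hour once better chips arrive) can be sketched with hypothetical operating costs and utilization; none of these specific figures are from the episode except the two hourly rates.

```python
# Hypothetical data-center economics: a GPU rented by the hour against a fixed
# hourly operating cost (power, cooling, staff). At $4/hour the build-out
# pencils out; if newer chips push the old GPU's market rate down 10x, revenue
# no longer covers even opex, let alone interest on any debt.

HOURS_PER_YEAR = 24 * 365

def annual_margin(rate_per_hour: float, opex_per_hour: float,
                  utilization: float = 0.7) -> float:
    """Annual gross margin per GPU: rental revenue minus operating cost."""
    revenue = rate_per_hour * HOURS_PER_YEAR * utilization
    opex = opex_per_hour * HOURS_PER_YEAR  # incurred whether or not it's rented
    return revenue - opex

print(annual_margin(4.00, 0.60) > 0)   # True: profitable at today's rate
print(annual_margin(0.40, 0.60) > 0)   # False: underwater at a 10x lower rate
```

This is the lender's blind spot in Gil's telling: the loan is underwritten against the first line of output, while the asset's life may be governed by the second.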
So, Gil, it's a great segue because you brought up the big short.
And this has definitely been a week where that is applicable because Michael Burry,
basically the star of that movie of that story,
the guy who effectively shorted, he shorted the housing market when he saw that
we were engaging in extremely speculative loan behavior to people who should not get loans.
he has started to, well, not started, he has sounded a real alarm now.
And it is interesting because the incentives that you described are sound exactly similar
to the incentives of the people writing loans for people, for the subprime mortgages,
people who shouldn't have, have gotten that loan for a house they couldn't afford.
But they were going to get their bonus anyway, right?
It's the, it mirrors that story.
And this week, Burry made headlines because he'd shut down, he completely shut down his, his firm.
and he basically, you know, described it to the valuations and the behavior we're seeing with AI and for the reason that you just outlined, which is depreciation.
Here's a tweet from him.
Understating depreciation by extending useful life of assets artificially boosts earnings, one of the more common frauds of the modern era.
Okay, so basically, if you don't accurately capture depreciation of the GPUs,
he's effectively calling it a fraud.
Massively ramping CAPEX through purchase of
Nvidia chips and servers on a two to three year product cycle
should not result in the extension of useful lives of compute equipment.
Yet that is exactly what all the hyperscalers have done.
By my estimates, they will understate depreciation by $176 billion
from 2026 to 2028. By 2028, Oracle, there's Oracle again,
will overstate earnings by 26.9%,
Meta by 20.8%, etc. But it gets worse.
And then Burry basically, you know, closes up shop.
So what he's saying is that all these companies say these chips will depreciate
over five or six years, but like you said, if the Nvidia chips get that much better,
that much more quickly, we could have much more accelerated depreciation, making the data centers
they're investing billions in today worthless, as you put it.
How do you evaluate Burry's critique of the situation?
Sounds like you agree with him.
He's spot on.
By the way, Big Short is the story of how he was spot on, but he almost didn't make it.
A lot of the movie is about how long it takes to play out, and you can be right, but if you're right too early, you don't make it.
And the story is about him and the handful of people that did make it.
There were a lot of people that were short the market for a long time and lost everything because they couldn't wait long enough.
he was just in a position to wait.
And he's spot on right now.
And look, depreciation gets wonky.
So let me just hit it at a high level
because it is really important to this conversation.
Right.
Depreciation is based on an accounting standard
that helps companies say,
well, I have an asset.
How long is it useful?
How long can that asset generate revenue for me?
And if it can generate revenue from me over five years,
then I should take the cost of acquiring that asset
and spread it over five years as an expense for accounting purposes.
That's what accountants are there to do.
And these accountants spent time three to five years ago
with companies like Microsoft and Amazon
and said, you know what, based on where the technology is now,
we're looking at these chips,
and it looks to us like they can generate revenue for you
for about five or six years.
And that's why we're going to allow you to
extend your depreciation to five to six years, because then you have less
expense and you look more profitable. So we're going to allow you to do that. But what's happened
over the last three years is the technology has taken huge leaps forward. Jensen Huang has been
preparing for this for decades. And here we are, and we can make the most of the brilliant chips that
he's designed and now he can make one every year that's 10 times better than the one he made the
year before. And that's great. That's why we have all these great tools and that's why they're
getting so much better. But back to those accountants, if you ask the accountants today, how long
will this asset generate meaningful revenue? They would not answer five or six years. They would
probably answer three years. And to Mr. Burry's point, if you told Amazon and Microsoft,
and certainly if you told companies like Oracle and CoreWeave, no, no, no, these chips will
only generate meaningful revenue for three years. Their profitability would decline very
dramatically. So again, from Microsoft, Amazon, Google, I don't worry about that too much. They can
handle it. For companies like CoreWeave and Oracle, it means they'll never be able to raise any more
capital again, which means they would all go away. So that's why this point is important,
even though it's a little wonky. Because when these companies come back and tell you,
no, no, no, I have five-year-old chips that work just fine. That's not the same thing. Saying I have
a five-year-old chip that works just fine doesn't mean that it can generate the same revenue that
it did five years ago, which is the accounting question. So that's sleight of hand by these
companies, to tell you that the chip still works. If it's only generating 1% of the revenue
it generated five years ago, it's for all intents and purposes worthless. And so that's where we have
to ask the accountants the right question. And I think we will be over the next couple of years.
And we're going to be correcting this dislocation, which is what Mr. Burry is betting on.
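The useful-life point can be made concrete with a toy example; the capex and profit figures below are hypothetical, not any company's actual numbers or Burry's estimates.

```python
# Toy illustration of the depreciation point: the same capex reported under a
# 5-year versus a 3-year straight-line schedule. Shortening the assumed useful
# life raises the annual depreciation expense and cuts reported earnings, with
# no change at all in the underlying business.

def straight_line(capex: float, useful_life_years: int) -> float:
    """Annual depreciation expense under straight-line accounting."""
    return capex / useful_life_years

capex = 100e9                # $100B of GPUs and servers (hypothetical)
operating_profit = 40e9      # profit before depreciation (hypothetical)

for life in (5, 3):
    earnings = operating_profit - straight_line(capex, life)
    print(f"{life}-year life: earnings ${earnings / 1e9:.1f}B")
# 5-year life: earnings $20.0B
# 3-year life: earnings $6.7B   (same business, far lower reported profit)
```

With these made-up inputs, moving from a five-year to a three-year schedule cuts reported earnings by roughly two-thirds, which is the mechanism behind Burry's overstatement estimates.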
Right. Satya Nadella was on Dwarkesh's podcast this week, in an interview with him and Dylan Patel from SemiAnalysis, and he basically was talking about this. A year ago, two years ago, the H100 Nvidia chip was state of the art. Now Blackwell is deploying, there's another generation coming out, and the generation after that is underway via Rubin, which is going to make these H100s, which again just came out a couple of years ago, you know, not completely worthless, but you were paying $30,000 per GPU a couple of years ago, and it's going to be very hard to justify having that same value, which is what you're pointing out. But let's look at the other side here, which is SemiAnalysis has the counterpoint from Jordan Nanos. Okay. Jordan says there's basically no precedent to say the chips would wear out in two to three years.
The hardware manufacturers have contracts that are standard for three to five years,
and they offer extended warranties for six to seven years.
The proof of Burry's argument would be predicated on Nvidia releasing chips
that so drastically outperform the current generation in two to three years
that all hyperscalers everywhere are incentivized to go through another CapEx cycle.
They'd have to all buy new chips and rip out the existing ones. That seems like a much farther leap
than saying we might be able to run these chips
for five to six years in the data centers themselves. What do you think about that?
A couple of things here. So first of all, I like that Nvidia picks fun names like
Vera Rubin and Richard Feynman for their chips. I like that as a naming convention. He's clearly
a dork, right? In the best way possible. The other thing to note is that Dylan and Dwarkesh are
roommates, and boy, is that a fun apartment to hang out in, right? Those two are some of the
smartest people around, and I love hearing them speak. Now that I've said that, I'll bring you
back to the fact that what they just said is sleight of hand, right? The fact that the chip works
after three years doesn't mean it's going to generate the same revenue, right? You can have a
working chip. I can have a 10-year-old Mac or a 10-year-old PC that still turns
on, I wouldn't want to use it because it wouldn't be able to do the things I need it to do.
So just the fact that it doesn't break after three years doesn't mean that it can generate the
same revenue that it did three years ago. So that's one level of sleight of hand. The other thing
they point to is, oh, don't worry about it. We do have five-year-old chips that we're still
renting out at decent prices. And that's really just a function of where we are right now in the
expansion cycle. We are so short on chips to process these AI transactions, these
token-generation inference transactions, that people are renting out anything they can get their
hands on. This is like used cars during COVID. People would pay a premium over a new car's price
to buy a used car, because there were just no cars around. That doesn't mean that that used car
was worth more than a new car. It just means there was so much scarcity that people
overpaid, and that's where we are right now.
That's not a sustainable situation
because everybody is building out data centers.
And again, even if you took out the bad players
that are borrowing money to build data centers,
you still have Amazon, Microsoft, Google, Meta, Elon,
using cash flow to build data centers.
So we're building tremendous amount of capacity.
Once that capacity even gets close to catching up,
then the old chips that can't do as many calculations for you will be worth a fraction.
And this is where the market's going to end up, right?
Markets are efficient.
And here's where the equilibrium is, where we'll get to the balance of supply and demand: on dollar per flop.
Dollar per flop is to say dollar per calculation, right?
Remember, what these AI chips do is they generate tokens, which is what we call words or numbers or images.
And if Richard Feynman chip can generate X tokens per second and an H-100 can generate one-one-millionth of a token per second
or take a million seconds to generate a token, then the Richard Feynman chip is worth a million times more
than an H-100.
Even if the H-100 is working, we won't use it because it can't keep up.
It's not worth it because we'd rather use a chip that can generate the amount of tokens that we need.
And so this is two different conversations.
Will the chip work?
It might, but will it be worth something?
Will it be able to do enough computation to generate revenue?
That's a completely different question.
And that's where we think in a three-year time frame, three-year-old chips will just not be able to do enough computations to be worth keeping them on.
So either we replace them or we use them for much less important things that will generate much less revenue.
So again, we have to be careful, not confusing, does it work with can it generate revenue?
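Gil's dollar-per-flop framing can be sketched as follows, with made-up throughput and price numbers standing in for real benchmarks:

```python
# Sketch of the dollar-per-flop equilibrium: if the market settles on a
# constant price per unit of token throughput, an old chip's economic value
# scales with its throughput relative to the newest generation, regardless of
# whether it still powers on. Throughput and price figures are hypothetical.

def relative_value(old_tokens_per_sec: float, new_tokens_per_sec: float,
                   new_chip_price: float) -> float:
    """Old chip's value at a constant dollar-per-throughput clearing price."""
    return new_chip_price * old_tokens_per_sec / new_tokens_per_sec

# A chip with 1/10 the throughput of a $30,000 current-generation part is
# worth about $3,000 on a pure dollar-per-throughput basis.
print(relative_value(1_000, 10_000, 30_000))  # → 3000.0
```

The function answers Gil's second question ("will it be worth something?") rather than the first ("will it work?"): a chip can run forever and still be priced out of the market.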
And just to go back to the Burry point then, what he's saying is that because these companies are writing the chip depreciation at six years, or maybe, you know, seven years, but they're not actually going to be using them to that effect, they're going to be overstating their profits, and, you know, smart investors will catch on and then sink their valuations because of it? Or what's the risk there?
That's exactly it. We're one accountant conversation away from having all these companies have to report much lower profits. And, you know, we use profit multiples to value
companies. So if a company has to depreciate most of its assets over three years instead of five
years, that means their profitability is going to go down proportionally. And we could have,
in a stylized case, the value of a company decline by 40 percent because an accountant said
you have to depreciate this over three years instead of five years. That's why this is very real.
Sounds wonky, but this is very real. If you reduce a company's
profitability by 40%, their value will go down by 40%.
This is, by the way, why I thought it was important to have this conversation on the show,
because when debt gets involved, that's when things start to get dangerous.
And Burry's talking about more than debt, right?
He's talking about these depreciation costs can even hit the companies like the Microsofts
that aren't taking on a tremendous amount of debt to do this.
So that's another issue.
So these conversations, debt, depreciation.
This is where the rubber meets the road on the AI bubble conversation.
It's one thing to say the valuations are out of whack.
It's another to say, here's the actual pressure points.
And these are the pressure points.
That's right.
And again, I go back to Microsoft, Amazon, Google.
They can handle it.
They have a big business.
It's diversified.
This is only one part of the business.
They can literally stop on a dime.
Again, they're deploying cash, because right now that cash is sitting on their balance sheet
generating 4% returns.
And so they're saying, well, this AI thing's huge.
We think it can generate 15% returns based on our math.
Let's use the 4% cash and deploy it here.
We think we can get those returns.
But by the way, the second we think that stops,
we can stop our CAPEX on a dime,
go through a couple years where we don't do any CAPEX
while we absorb the previous investment.
And those companies will be just fine.
It's the companies that either have the debt
or are lending the money, or have equity investments
in highly leveraged entities,
those are gonna be the ones that can't handle
that kind of a transition.
And is this all sort of forgiven, to use that term,
if OpenAI just delivers what Sam Altman promises?
Yeah, maybe it's worth laying out our mental framework for looking at AI, which is that there are three camps, right?
We have to take a weighted probability of three outcomes.
There's the pessimist outcome, which is AI is cute, and it's useful, but it's cute like the metaverse, or it's cute like social media, in which case it'll be a useful technology, but really we're spending way too much money on it.
There are a lot of people in that camp right now. I'm not sure I agree with that.
Then there's the optimist scenario, which is AI is the most powerful technology in a long time.
It will be so powerful that it will make us so much more productive
that it will drive an acceleration in GDP growth.
This is where Microsoft, Amazon, Google are at from their perspective.
And then there's a maximalist scenario,
which is we are maybe as close as a couple of years away
from superintelligence,
a technology that can do anything a human can do better than any human,
in which case it's going to replace us en masse,
create untold wealth to whoever owns a piece of that.
And therefore, no investment we could possibly make is enough
in order to get there, especially if you believe that only one
the one entity that gets there first will own everything.
This is where Mark Zuckerberg lives, Sam Altman,
Elon Musk, Dario Amodei.
They live in this maximalist camp of,
this is a race that we can't afford to lose,
and therefore we need to build everything we can.
Now, all three of these things are possible,
and so you have to plan ahead for that.
But most of us are in this optimist camp,
which is we should invest a lot.
We just have to be thoughtful and careful about how we do it.
So if we're wrong, and it's the pessimist scenario,
we don't bring everything down with us.
Entire economy doesn't fall apart.
That would be good.
And, by the way, let's leave ourselves room,
that maybe the maximalist scenario is right too.
So let's at least be near the rim when that happens
so we can be competitive there.
That's the healthy way to see all this.
And again, it accounts for the fact that AI is very good.
It will grow a lot.
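The "weighted probability of three outcomes" framing can be sketched as a simple expected-value calculation. The probabilities and payoff multiples below are entirely hypothetical, not figures from the conversation:

```python
# A toy expected-value sketch of the three-camps framework Gil describes.
# Probabilities and payoff multiples are invented for illustration only.

scenarios = {
    # name: (probability, payoff multiple on capital invested)
    "pessimist":  (0.25, 0.3),   # AI is "cute": most capex is wasted
    "optimist":   (0.60, 1.8),   # AI lifts productivity and GDP growth
    "maximalist": (0.15, 10.0),  # near-term superintelligence, winner owns everything
}

# Sanity check: the three camps should cover all outcomes.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_multiple = sum(p * payoff for p, payoff in scenarios.values())
print(f"Expected return multiple on AI capex: {expected_multiple:.2f}x")
```

The point of the framing is that even a small probability on the maximalist branch dominates the expected value, which is why the maximalists spend the way they do.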
By the way, OpenAI. When we step back and talk about OpenAI
as an entity and what it's doing,
first of all, I want to give them credit
and put some blame as well.
So let's give them credit for the fact that in November of 2022,
Google was sitting on GPT, and it left it in what they call the pantry.
They chose not to introduce GPT because they didn't know what to do with it
or if it was a good idea to share it with the world.
And OpenAI came out and said, this is unbelievable.
People are going to love this, and they came out with a great consumer product
that we now know is ChatGPT, which now has 800 million weekly active users
and has driven the fastest growth of any startup ever.
Give them credit for that.
Then let's talk about the detriment of their behavior recently,
which is they have extended their ambition to a point
where they've made all these commitments
that they can't possibly live up to,
and as that's been exposed,
they've been dragging everybody down with them.
So OpenAI is just this unique entity.
Now, if you were to ask me, what should OpenAI do?
Some people do ask me that question.
I would say, just focus on
ChatGPT. Just focus on having the best
frontier model. Ramp ChatGPT.
It's an amazing product.
You have a head start. People
are using ChatGPT as the
verb. Like they used to say, I Google this.
People are saying, I ChatGPT this.
If they just focused on
that and grow responsibly, they
will be very successful.
If they go down the current path,
overcommitting, deciding
that they have to build their own data centers, they need
their own hardware, they need their own chips,
they won't make it.
So hopefully, for the benefit of their customers and their shareholders, I hope they focus on what they're really good at, which is this model and ChatGPT.
Okay, I have more questions about this for you.
We do need to take a break.
So let's hop away for a moment and come back and continue.
I guess we're going to have to call this the AI bubble special report because there's so much to discuss.
So we'll continue talking about that.
And we will try to hit some of the news.
So we'll do that right.
after this.
Finding the right tech talent isn't just hard.
It's mission critical, and yet many enterprise employers still rely on outdated methods
or platforms that don't deliver.
In today's market, hiring tech professionals isn't just about filling roles.
It's about outpacing competitors.
But with niche skills, hybrid preferences, and high salary expectations,
it's never been more challenging to cut through the noise and connect with the right people.
That's where Indeed comes in.
Indeed consistently posts over 500,000 tech roles per month,
and employers using its platform benefit from advanced targeting and a 2.1x lift in started
applications when using tech network distribution. If I needed to hire top-tier tech talent,
I would go with Indeed. Post your first job and get $75 off at Indeed.com slash tech talent.
That's indeed.com slash tech talent to claim this offer. Indeed, built for what's now and for what's next
in tech hiring.
Capital One's tech team isn't just talking about multi-agentic AI. They already deployed one. It's called
chat concierge, and it's simplifying car shopping. Using self-reflection and layered reasoning
with live API checks, it doesn't just help buyers find a car they love. It helps schedule a
test drive, get pre-approved for financing, and estimate trade-in value. Advanced, intuitive,
and deployed. That's how they stack. That's technology at Capital One.
Shape the future of Enterprise AI with Agency, AGNTCY.
Now an open source Linux Foundation project, Agency is leading the way in establishing
trusted identity and access management for the internet of agents, the collaboration layer
that ensures AI agents can securely discover, connect, and work across any framework.
With agency, your organization gains open, standardized tools, and seamless integration,
including robust identity management to be able to identify, authenticate, and interact across any platform.
Empowering you to deploy multi-agent systems with confidence, join industry leaders like Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and 75-plus supporting companies to set the standard for secure, scalable AI infrastructure.
Is your enterprise ready for the future of agentic AI?
Visit AGNTCY.org to explore use cases. That's AGNTCY.org.
And we're back here on Big Technology Podcast Friday edition with
Gil Luria, the head of technology research at D.A. Davidson. Gil, we've been talking a lot about
the potential risks here of the AI buildout. And we ended with OpenAI. Let's just go right
back to OpenAI here. Is it possible that already, you know, you've kind of, in the first half,
separated the companies that are behaving well from the companies that are not.
I just want to ask you this.
Is it possible that companies are already too leveraged on OpenAI?
Here is the Wall Street Journal.
Big Tech's soaring profits have an ugly underside:
OpenAI's losses.
Here's the story.
Quarterly profits soared at Nvidia, Alphabet, Amazon, and Microsoft as AI-related revenue
poured in.
Cash flows are mostly fine, albeit a lot is now going into building new data centers.
Some of the money comes from actually selling AI services to businesses.
But much of the AI-related profits come from being a supplier to or an investor in the private companies building large language models behind AI chatbots.
And they're losing money as fast as they can raise it.
OpenAI and Anthropic are sinkholes for AI losses that are the flip sides of the chunks of the public company profits.
I think this story says something like 60.
Here it is.
OpenAI's loss in the quarter equates to 65% of the rise in the underlying earnings of Microsoft,
Nvidia, Alphabet, Amazon, and Meta together.
And that ignores Anthropic, from which Amazon recorded a profit of $9.5 billion on its holding
in the loss-making company in the quarter.
So all these profits that we're seeing from these companies are not all,
but certainly the majority is just the money that these two companies are spending on the
build out. How does that equate with this idea that, you know, I mean, maybe they're doing it
responsibly, but certainly all these companies stock share prices have jumped dramatically this
year. And so again, going back to our bubble question, isn't that a problem too?
Yeah, absolutely. But let's parse that out a little bit. So first of all, OpenAI is a really big
part of Microsoft Azure's growth. And Microsoft Azure is the most important business within Microsoft.
So let's focus on Microsoft and just say that this is actually less true about Amazon and Google.
They're a lot less reliant on OpenAI and even Anthropic.
But let's focus on Microsoft and say that half of their AI revenue approximately,
which is, again, a big piece of the Azure growth,
which is the biggest piece of Microsoft's growth, is coming from OpenAI.
Let's talk about the other half of the AI growth for Microsoft.
The other half of the AI growth is very healthy.
That's companies, because everybody's a Microsoft customer,
going to Microsoft and saying,
I built this AI tool and I really need compute capacity to be able to use it.
I'll buy it from Azure.
And then Microsoft says, that's great.
We'll sell you the GPUs, access to the GPUs.
But then, you know, on top of that, we'll send you,
we'll sell you some database products and data warehouse products and data fabric products.
And, oh, by the way, your Microsoft 365 license is going to go up
because you're going to use copilot.
And this is great for Microsoft.
So all the other AI stuff is absolutely great for Microsoft,
and it's a big reason why they've done so well.
Then let's talk about the Open AI piece of this.
Absolutely, this is the piece that's at risk.
Because to your point, OpenAI is a negative gross margin business.
They claim they're not, but they are, right?
Which is to say it costs them more to answer your ChatGPT question
than they make revenue from you.
And that's something we need to be aware of and concerned with, right?
especially if it's a big part of Microsoft's revenue.
It's like, wait a second.
You're getting this revenue from a company that's losing money.
I just saw this week there was someone who tweeted that, like, OpenAI's switcher decides either they will answer you immediately with some slop or they'll go out and spend $10,000 on compute thinking through your query.
Because you're right, it does seem like with these, especially with these queries that require a lot of reasoning.
It just takes a lot of computing processing power.
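The negative-gross-margin claim can be sketched with toy unit economics. All per-query figures here are hypothetical assumptions, not OpenAI's actual costs:

```python
# Toy sketch of "negative gross margin": serving a query can cost more
# than the revenue it brings in, especially once heavy reasoning queries
# are in the mix. Every per-query figure is a hypothetical assumption.

def gross_margin(revenue_per_query: float, cost_per_query: float) -> float:
    """Gross margin as a fraction of revenue (negative when costs exceed it)."""
    return (revenue_per_query - cost_per_query) / revenue_per_query

# A $20/month subscriber making 300 queries a month (assumed):
revenue_per_query = 20.0 / 300   # roughly $0.07 per query
cheap_query_cost = 0.01          # quick, low-compute answer (assumed)
reasoning_query_cost = 1.00      # heavy chain-of-thought inference (assumed)

# Suppose 1 in 10 queries goes down the expensive reasoning path (assumed):
blended_cost = 0.9 * cheap_query_cost + 0.1 * reasoning_query_cost
margin = gross_margin(revenue_per_query, blended_cost)
print(f"Revenue/query: ${revenue_per_query:.3f}, blended cost/query: ${blended_cost:.3f}")
print(f"Gross margin: {margin:.0%}")
```

With these assumptions the margin comes out deeply negative: a small share of expensive reasoning queries is enough to push the blended cost per query above the subscription revenue per query.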
Sorry, I didn't mean to jump in, but sorry.
So this is why it's dangerous, right?
But let's think about, here's an analogy that I think is very useful for understanding
why this is okay from Microsoft's perspective.
And that's Uber.
If you remember when Uber started, the rides were a lot less expensive than a taxi.
Let's say it was, let's call it a $10 ride anywhere across town,
which was so attractive that everybody started using Uber.
And what happened was we started
using Uber a lot more than we used cabs before. And it started replacing driving. And we expanded
the market for riding well beyond where it was because the price was so attractive. But then what
happened is we changed our behavior and we started using it so much that Uber can then gradually
ratchet up the price to a point where today Uber is a very profitable company. Because it's a $30
ride. And some people may not be using it as much as they did when it was $10, but most of us are
because we've changed our behavior and we see a lot of benefit in using it. And now we're
paying the appropriate price. The appropriate price wasn't $10. It was always $30. And now we're
willing to pay $30 because we've learned over time that that's beneficial to us. We see value in it.
We're willing to pay for it. The same thing should happen with these chats. And let's use
ChatGPT, it's still the leading one. Right now, I may be paying $20 a month. Very few people are
paying $200 a month, but my neighbor, Jane, who has her own law firm, is using ChatGPT so much
in her practice that she could postpone or even avoid hiring another associate. So let's say that
associate was $100,000 a year. If she's paid $20 a month, even $200 a month, that creates so much
value for her, that in the future, if she continues to do this and realizes she never has to
hire that associate, she could just use ChatGPT to summarize depositions, extract important
information, help her strategize and create documents, and she doesn't need to hire the $100,000 associate.
She may be willing to pay $10,000, $20,000 a year.
So as we use ChatGPT a lot more broadly, we're going to be willing to go from a $20 a month
price point to a much higher price point. So at some point, this is a product that will be profitable.
We just have to expand the usage so much that the people that are using it in a very valuable way
will be willing to pay what it's worth. That's the journey Uber went through, and that's the
journey we're going to go through with chat. What I would point out, though, is that unlike Uber and
the ride share market, which lent itself to winner takes most or winner takes all,
Chat is entirely not like that because I could have the same conversation with Gemini
and Meta's going to give me the tools to do this and Grok is going to give me the tools to do this.
So it's not a winner-take-all market, which means that that process of getting to a price point
that is beneficial enough may take longer and Google and Meta may decide that they never want to do that,
that they're willing to pay for all this compute to keep you in YouTube and keep you in Insta.
And that's where the risk is to a company like OpenAI and chat.
But if you're Microsoft, that's okay because you'll just use your data center capacity to host Grok
or to host another chat that is worthwhile.
And if not, you'll rent that capacity out to your business customers that are using it to
produce more value in their business and therefore are willing to pay that premium and then
buy databases and data warehouses, et cetera, et cetera.
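The lawyer example from a moment ago can be written as arithmetic. The salary and subscription figures come from the conversation; the value-capture rate is a hypothetical assumption:

```python
# Gil's lawyer example as arithmetic: value-based pricing headroom.
# If ChatGPT lets Jane avoid hiring a $100,000/year associate, her
# willingness to pay is far above today's $20/month. Salary and
# subscription price are from the conversation; the share of created
# value the vendor can capture is an assumption.

associate_salary = 100_000      # annual cost avoided (from the conversation)
current_price = 20 * 12         # $20/month subscription, annualized
value_capture_rate = 0.15       # vendor captures 15% of value created (assumed)

willingness_to_pay = associate_salary * value_capture_rate
headroom = willingness_to_pay / current_price
print(f"Sustainable annual price: ${willingness_to_pay:,.0f} "
      f"(vs ${current_price}/year today, {headroom:.1f}x headroom)")
```

That gap between the $240-a-year price and the five-figure value created is the repricing journey, the ChatGPT equivalent of Uber's $10 ride becoming a $30 ride.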
And that's why the market freaked out when Meta, I guess, was taking on this debt and
increasing its CapEx, because it's harder to see that direct line of it's going to be okay
if you're a company like Meta versus a company like Microsoft that has the data centers.
That's right.
That's exactly right.
And so a couple of things happened with Meta, because Meta again is an unusual situation,
because they don't actually have business customers
to rent this capacity to.
It really is just Mr. Zuckerberg wanting a bigger and bigger toy.
Mark Zuckerberg's magical adventure.
And again, remember, the reason that's happening
is that he's an AI maximalist.
He thinks that we may be a couple of years away
from having a tool so powerful he will get to rule it all.
That's why he's willing to spend.
And what he told investors last time was,
I am managing this unbelievable business.
I just grew ads, ad revenue by 25%.
I'm unbelievably profitable doing that.
But instead of being disciplined and spending 25% more next year,
I'm going to go well beyond that and spend a lot more than 25% next year.
And that's what investors said.
It seems irresponsible, Mark.
That's a lot of money you're spending this year.
Why don't you just spend 25% more next year?
And when he said, no, I'm going to go well beyond that, they sold the stock.
And the other thing that happened before.
Yeah, then he's okay with it.
He owns the whole thing.
As far as he's concerned, it's his money.
And that's how he behaves.
And it's worked out for him so far.
So I don't know that I want to challenge him.
The other thing that happened with Meta that was interesting is that when they went out
to borrow, they didn't borrow the capital.
They created a special purpose vehicle.
Right?
They went out with Blue Owl and they said, you know, we'll put a couple of billion in.
Blue Owl, you put a couple more.
and then you can borrow 10 times that
and build data center capacity for us.
And the reason that hit a nerve is,
I don't know if you remember when we really started
using the term special purpose vehicle.
It's about 25 years ago with Enron, right?
And now a special purpose vehicle in itself, not illegal.
Hiding it was illegal, and that's what Enron did,
and that's why incredibly we actually got to put somebody in jail.
But a special purpose vehicle is meta saying
the capital markets are so irrational now
in their willingness to lend money to anybody to build AI
that we're going to use this as an infinite money glitch
and we're going to have somebody else borrow the money.
It's not going to go on debt to our balance sheet.
It's not going to go as CapEx, PP&E on our balance sheet.
It's going to go somewhere else.
We will have a line item in our balance sheet
that says operating lease commitments,
but it'll be a lot smaller and people don't pay attention as much to that line.
And why wouldn't they?
If they can, why wouldn't they?
And this is, again, one of those things that got people to say, oh, this is unhealthy.
We don't want to be doing this again.
We know how this ended 25 years ago.
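The SPV mechanics Gil describes can be sketched numerically. The equity contributions and leverage multiple loosely follow his "couple of billion" and "borrow 10 times that" figures, but are illustrative:

```python
# Sketch of the special purpose vehicle (SPV) structure described above:
# sponsors contribute equity, the vehicle borrows a multiple of it, and
# the debt sits on the SPV's books rather than the sponsor's balance
# sheet. Figures are illustrative, loosely following "a couple of
# billion" each and "borrow 10 times that."

meta_equity = 2.0       # $B contributed by Meta (illustrative)
blue_owl_equity = 2.0   # $B contributed by Blue Owl (illustrative)
leverage_multiple = 10  # debt raised per dollar of equity (illustrative)

total_equity = meta_equity + blue_owl_equity
spv_debt = total_equity * leverage_multiple
data_center_capacity = total_equity + spv_debt

print(f"SPV raises ${spv_debt:.0f}B of debt on ${total_equity:.0f}B of equity")
print(f"and builds ${data_center_capacity:.0f}B of data centers.")
print("Meta's balance sheet shows only an operating-lease commitment, not the debt.")
```

The structural point is the last line: a small equity check supports a much larger borrowing, and the obligation surfaces only as a lease-commitment line item that investors scrutinize far less than balance-sheet debt.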
Right.
Gil, can I keep you here for another couple minutes?
I want to talk to you about this AI prisoner's dilemma before we head out.
Of course.
Because the question is, again, like we've talked a lot today, I think,
appropriately about debt, about depreciation, about OpenAI's ability to pay back its, or to
actually meet its commitments. But then there's this question of how anything becomes profitable.
And it's not so simple. And so I think what I'm seeing here from Bloomberg is that there is a
suggestion from the Odd Lots team that there's some game theory involved that might keep this
unprofitable for a long time. So they're quoting this one report. An analyst suggested that there's
a prisoner's dilemma of sorts in inference pricing. Inference, of course, is when you actually use
the models versus train them. If every inference application prices its services based on quality
and charges by usage, the market might remain stable. But because market share is more important
than margins for the equity investors and the venture capital investors supporting these
inference firms, every firm has a greater incentive to offer flat rate pricing with unlimited
usage, triggering a race to the bottom. And by the way, that would apply to OpenAI and everyone.
So they say, they say everyone subsidizes power users, everyone posts hockey stick growth charts, everyone eventually posts important pricing updates.
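The pricing prisoner's dilemma in that report can be sketched as a toy two-firm game. The payoffs are invented purely to show the structure, not estimated from any company's economics:

```python
# A toy 2x2 payoff matrix for the inference-pricing prisoner's dilemma
# the Odd Lots report describes. Payoffs are invented to show the
# structure: flat-rate pricing is each firm's best response regardless of
# what the rival does, so both end up there, even though both charging by
# usage would be jointly more profitable.

payoffs = {
    # (firm_a_strategy, firm_b_strategy): (firm_a_profit, firm_b_profit)
    ("usage", "usage"): (3, 3),   # stable, quality-based pricing for both
    ("usage", "flat"):  (0, 4),   # flat-rate rival grabs the market share
    ("flat", "usage"):  (4, 0),
    ("flat", "flat"):   (1, 1),   # race to the bottom: everyone subsidizes power users
}

def best_response(opponent_strategy: str) -> str:
    """Firm A's most profitable reply to firm B's strategy (game is symmetric)."""
    return max(["usage", "flat"],
               key=lambda s: payoffs[(s, opponent_strategy)][0])

# Flat-rate dominates against either choice, so (flat, flat) is the equilibrium.
assert best_response("usage") == "flat"
assert best_response("flat") == "flat"
print("Nash equilibrium: both firms choose flat-rate, earning", payoffs[("flat", "flat")])
```

Because flat-rate is the best response to either choice, both firms land at (flat, flat) even though (usage, usage) pays both of them more, which is exactly the market-share-over-margins dynamic the report describes.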
Now, here's Bloomberg.
AI isn't normal technology.
It's not clear whether there will ever be a point when someone will be in a position to say, you know what, this is good enough.
It's cliche by now, but people talk about AI like they're building a new god, or they talk about it like they're building a nuclear bomb.
And we have to get there before any country on Earth does.
In fact, it's because of these
huge stakes that in recent weeks there's been talk about how the U.S. government might backstop
some of these companies, some of these companies' debt financing. So basically what they're,
I mean, what they're saying is this is not acting like a rational technology because,
A, everybody wants market share and, B, yeah, they're willing to spend to get there. And so
what do you think about that? Is it going to be a persistent issue? How should we view this?
Yes. Yes. So here's the thing. Who are the players in this game theory, right? It's Meta, it's Google, it's Microsoft, it's Amazon. It's companies that are used to winner-take-all markets. And they think of all markets as being winner-take-all markets, meaning if I don't win this market, I'll get none of it, or at least not enough of it that will be meaningful to me. So they are willing to do anything to win, which to the point of that means they'll be willing to lose money for a long period of time, so
they have a chance to win. And what happens then is it's only the biggest, most deepest pocket
player that can win, because they can wait it out, or at least communicate to everybody else
that they're willing to wait it out. So that's where a company like OpenAI has no chance,
because they can't make it through another year or two at this level of spending. So they certainly
won't be able to outlast Google, Meta, and Microsoft in this game, right?
And it explains a lot about Mr. Zuckerberg's behavior.
Again, he's not just spending the money.
He's telling us he's willing to spend anything to win.
He's signaling to all the other players: I will not lose.
So you can keep throwing money at this.
I'll keep throwing money at it longer.
And that is exactly where we're at,
which is why we may have persistent losses for a while here
because these companies have very deep pockets.
Again, the smaller ones will either get absorbed by the big ones
or just have to walk away.
But those big ones believe this is winner-takes-all.
By the way, you can tell I'm not sure it's winner-takes-all.
I think we can be using different chat programs over time
and companies will be using different AI.
So I'm not sure it's winner-takes-all.
I do think that some of these companies can all succeed together,
but I do think the analysis is correct
and that they see it as winner takes all
and they're doing what they can
to not only stay in the game, but communicate
to everybody else, signal to
everybody else that they're staying
in the game. And then the
Bloomberg piece brings up this risk because
of that, right? So that there could be
not a credit crunch,
but a collateral crunch,
right? A collateral crunch
is the sudden collapse
in the value of assets underpinning
all these loans. And then
they quote this chief economist from
Raymond James, Jeff Saut, who made this statement before the financial crisis. The risk is that
the contagion spreads and morphs from this collateral crunch into a full-blown credit crunch.
And that is exactly what happened in 2008. So is that, I mean, it sort of kind of encapsulates
what we've been talking about, that you could see a contagion here from, you know, people being
burned on a handful of these deals. Maybe it's, you know, just throwing it out there. Maybe it's
the Oracle deal. Maybe maybe CoreWeave. And then saying, all right, I'm just not lending. I'm going to
really tighten up my lending practices because this went bad. Yeah. And I think that's where we're
headed. So again, we're tens of billions of dollars of debt into this, which means if it
goes away, then some people get hurt, but not the whole system. It's only if we get hundreds of
billions of dollars in that the whole system will get hurt.
Well, the more likely scenario is that what's happening right now in the market might scare
these underwriters straight, they'll stop making these irresponsible loans, and we'll go back
to funding this out of cash flow by the companies that have the customers and have the
wherewithal, and again, have the deep pockets to ride this out.
Because if you play this scenario out, what you'll realize is Meta, Microsoft, Amazon, Google
fully expect all those companies that are borrowing to build data centers to go bankrupt.
This is great for them because that means that when they go bankrupt,
they can buy assets at pennies on the dollar.
So if I'm Microsoft, I know that in two or three years,
I probably won't have to do any CapEx because I'll be able to buy data centers out of bankruptcy
for pennies on the dollar.
So I might as well let this play out.
And if irresponsible lenders want to make these loans, it's their problem.
I'm going to be able to capture those assets when I need it at pennies on the dollar.
All right, Gil, last one for you.
The one thing we haven't talked about is the potential bottleneck on power.
Satya Nadella, for instance, was talking about this on Brad Gerstner's podcast,
talking about how, like, he has chips he can't plug in because he can't power them.
He doesn't have warm shells.
He can't power the shells.
Mustafa Suleyman, CEO of Microsoft AI,
was on this show earlier this week, talking about how they do
have capacity for training, but inference is a problem. Here's Zero Hedge, who puts it in a Zero
Hedge way: Has anyone done the math on how many hundreds of new nuclear power plants the U.S.
will need by 2028 for all these AI daily circle jerk deals to be powered? What do you think about
the power question? To me, it's an increasing issue that, like, if anything is going to put
the brakes on this, maybe it's just that the power will run out. So power is the
bottleneck. But what happens in a market-based economy such as ours is that when there's enough
revenue and profits at stake, we work our way through bottlenecks. And that's what's happening
and will happen here. So yes, the grid may not be able to give us enough capacity to turn on
a data center, because at peak they can't give us access to electricity. But with storage
solutions, they could give us some. And then what these companies are doing is putting power what they
call behind the meter, which is to say generators and turbines and diesel trucks because it's so
lucrative that it's worth it for them to park 10 diesel trucks and run them so they can power
those chips because they make so much money renting out those chips. So we will find a way.
Markets find a way and we're going to find a way through this.
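The behind-the-meter economics can be sketched with a back-of-envelope calculation. Every figure here is a hypothetical assumption:

```python
# Back-of-envelope sketch of the "behind the meter" point: on-site
# diesel generation is expensive power, but GPU rental revenue dwarfs
# the power bill, so it can still be worth running the trucks.
# Every figure here is a hypothetical assumption.

gpus = 10_000
rental_per_gpu_hour = 2.50    # $ revenue per rented GPU-hour (assumed)
power_per_gpu_kw = 1.0        # kW per GPU incl. cooling overhead (assumed)
diesel_cost_per_kwh = 0.30    # behind-the-meter generation cost (assumed)
grid_cost_per_kwh = 0.08      # what grid power would cost (assumed)

hourly_revenue = gpus * rental_per_gpu_hour
hourly_power_cost = gpus * power_per_gpu_kw * diesel_cost_per_kwh
premium_vs_grid = diesel_cost_per_kwh / grid_cost_per_kwh

print(f"Revenue/hour: ${hourly_revenue:,.0f}; diesel power/hour: ${hourly_power_cost:,.0f}")
print(f"Power is {hourly_power_cost / hourly_revenue:.0%} of revenue, "
      f"even at {premium_vs_grid:.1f}x grid prices")
```

Under these assumptions, even paying several times the grid rate for on-site generation leaves power as a small fraction of the revenue the chips earn, which is why parking diesel trucks behind the meter can pencil out.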
It's just a matter of being creative and, you know, it's very lucrative.
If you're an electrician right now or an HVAC technician, boy, are you making bank.
You're getting flown around on private jets and making twice as much money so you can install a data center.
So it's a good time to be an electrician or HVAC technician.
This is all going to make a great movie one day, Gil.
Yes.
Hopefully not as devastating as The Big Short.
I'll have Adrien Brody playing me in that movie.
Okay, sounds good.
This has been an AI bubble special report, Gil Luria.
Thank you so much for joining us.
I, you know, I feel like I needed this, we needed this.
It's the deep dive I've been waiting to do, and I'm so glad we did it.
So thanks for coming on the show.
Appreciate it, Alex.
Enjoyed the conversation.
Same here.
All right, everybody.
Thank you so much for listening and watching if you're on Spotify or YouTube.
Nick Clegg, the former president of global affairs at Meta,
former deputy prime minister of the United Kingdom,
is coming on on Wednesday.
Talk about whether we could trust Silicon Valley with superintelligence,
and Nick has some really interesting thoughts
about the economic value of superintelligence,
whether it'll even make financial sense to own it.
So we hope that you tune in then.
Thank you for listening, and we'll see you next time on Big Technology Podcast.
