Big Technology Podcast - AGI or Bust, OpenAI’s $1 Trillion Gamble, Apple’s Next CEO?
Episode Date: October 10, 2025
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Why the AI industry needs to get to AGI to make the investments pay off 2) The diverging tracks between AI model improvement and investing in scaling 3) Why the LLM craze may delay the path to AGI 4) So what is all this compute for? 5) OpenAI's $1 trillion infrastructure investment 6) The increasing prevalence of debt in AI funding 7) Could an AI collapse hit the global economy? 8) OpenAI and AMD's wacky deal 9) Oracle's margins 10) OpenAI's Sora 'surprise' 11) Will John Ternus be Apple's next CEO? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Are we now at the point where the AI investment frenzy means it's AGI or market collapse?
OpenAI's $1 trillion infrastructure investment may impact all of us,
and he's the frontrunner to be Apple's next CEO.
We'll cover it all right after this.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called Chat Concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing,
and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional cool-headed and nuanced format.
We have a great show for you today.
We'll cover all of this crazy AI investment and ask ourselves, are we insane for not sounding an alarm
and saying this is going to get much worse before it gets better? A lot of money is going
into AI, and it's time to look at where that money is actually going, what it's premised on,
and whether it is putting too large of a bet on OpenAI, maybe betting the entire U.S.,
maybe the global economy, on one company's fortune. We'll also talk
about who might be the next Apple CEO and whether it's time for him to just step up right now.
Joining us, as always, on Friday, is Ranjan Roy of Margins.
Ranjan, great to see you.
Good to see you.
I'll admit it.
I am going to be the next Apple CEO.
It's breaking right now.
He might be the frontrunner to be the next Apple CEO.
If you fix Siri, I think a lot of people will be happy.
So no pressure.
It's obviously an easy thing that not many people have tried.
is quite simple to do.
That was my pitch to Tim.
He bought it.
He bought it.
He's been listening for a while, and he realized the only way to fix Siri is to bring me in.
Well, I look forward to being able to do our next set of episodes with you in the big UFO in Cupertino.
So that is assuming, of course, that we still have a global economy by the time you take over.
And I'm starting to think it's not such a sure thing because we've seen two things happen.
One is an even greater increase in investment in AI infrastructure over the past, let's say, two weeks.
And it's been two weeks since we last spoke.
And there is so much that's happened.
But as that has come up, there has been an increasing chorus from people, even within the industry, that's starting to say,
does this make any sense at all?
Is this going to be a problem?
And let us return to David Cahn, the Sequoia partner, who wrote, of course, a great piece a year or two ago about
the $600 billion question around generative AI and whether there's going to be enough profit
to actually justify the investment. He now says that that question is quaint because we are at a stage
of the buildout that is much further than that. So he says this: one thing has become clear,
nothing short of AGI will be enough to justify the investments now being proposed for the coming
decade. This is happening even as AI's potential is being realized. ChatGPT has
continued its epic rise to north of $12 billion in run-rate revenue.
Anthropic has reached $5-plus billion in run-rate revenue in a meteoric rise.
And there's a new club of companies scaling quickly from zero to $100 million in revenue.
There's a version of the world, and this is the version that Microsoft and Amazon increasingly
seem to be pursuing, where the next frontier is AI adoption.
The models have proven themselves to be great.
And now it's time to monetize these investments and drive,
a world-changing technology evolution.
But that point of view is by no means widespread.
Outside of these giants, a debt-fueled second push is happening.
Labs are taking all their profits and capital
and plowing them right back into new data centers
and a new breed of companies, namely Oracle, Meta, and CoreWeave,
are going all in, no holds barred.
Given the scale of these investments,
the only objective that can explain this strategy is AGI.
So I think Cahn is making a really good point here.
Two things are happening.
One is you have companies like Microsoft, which came on this show last week, to talk about
how it was basically being more rational with AI spending.
And Amazon, you could put them in that bucket.
And then you start to see this crazy buildout that OpenAI, Oracle, and others are driving,
where the numbers really only make sense if you get to AGI.
If you get AI that cures cancer.
And so much of that is speculative,
and that is where we might be getting into a danger zone.
So, Ranjan, I'm just going to turn it over to you.
What do you think about that?
Yeah, I'm really glad that he's making this distinction between those who are kind of
just pushing the idea that it's time to monetize these investments and drive a world-changing
technology evolution versus this debt-fueled second push.
I've been looking at this a lot over the last couple of weeks around the Oracle deal, the
AMD deal.
All these deals are focused on capex and not product.
Again, I mean, there have been some incredible moments in product over the last couple of weeks,
Sora included, which we're going to talk about, and I've been waiting to talk to you again.
But I think overall, this idea that, you know, it has to be AGI,
it has to be something that justifies all of this capex investment, is what Oracle,
Meta, CoreWeave, all these companies are betting.
And there has been absolutely nothing that shows us that we're actually headed in
this direction. So it starts to feel more uncertain and irrational. Nothing's stopping it right now,
but I think it's a good thing that we're talking about it. Right. And over the past couple weeks,
I've been asking myself, like, am I a lunatic for thinking that some of this infrastructure
spending is just not following the data that you're seeing in AI research? And it's something
we've talked about on the show with people in the industry for, I don't know, how many months now,
six months a year, where we've talked about how the gains that you're seeing from scale,
are leveling off. And that's something that seems to be somewhat consensus, as close to consensus
as you're going to get in AI, because there'll be some folks, maybe like Dario Amodei of Anthropic,
who are saying that scale, you know, is a way to get to AGI, scaling up LLMs. But everybody
else is saying we're seeing diminishing marginal returns. So you have that seeming consensus.
And on the other side, you have investment that's building as if that's not true. That's building as if you
can just scale your way to AGI.
Let's go back to David Cahn here.
He really makes this point well.
What's surprising me is that this doubling down on capex is happening even as the dream
of AGI seems to be cooling off.
Two things have happened.
First, new model progress has tapered off despite much larger training clusters.
Second, as a likely consequence, AI luminaries have started to walk back their AGI timelines.
In December, Ilya Sutskever said that pre-training is dead.
In June, Sam Altman said
AGI will be more of a gentle singularity.
In that same month,
Andrej Karpathy
forecasted a decade of agents
rather than AGI in 2027.
It's such an amazing divergence
between what the people in the field are saying
and what Wall Street and the investors are buying.
What do you think about this?
I like my singularities to be gentle,
so I'm still glad Sam's saying that.
But no, no, I agree because there's kind of two parts of this.
It's one, you know, like,
is this capex investment, are these investments in data centers actually going to be required,
and is this need for compute going to be, like, are we going to get to AGI and just these very
heavy compute processes that solve all of our problems? But then the other part that I actually
think Dave Cahn didn't really get into is that it kind of was presented as binary, that heavy
compute leads to AGI, but also the idea that doing these things in a more compute-efficient
way is still another, third path, I think. And we, I mean, you on stage with Google,
hearing about how it's algorithms, not just raw compute, I think we've been seeing a lot more around
that. I saw some paper around, I think it was called, like, a tiny recursive model, that can actually
achieve very similar results as DeepSeek. The idea that if you actually do things in a more
compute-efficient way, that makes things a lot more cost-efficient for companies, and people will
much prefer that to the heavy GPT-5 that's going to think hard and long about every single
problem, even very simple queries that you give it, just to kind of drive compute usage.
So I think overall, the only way any of this makes sense is if we realize this vision of just
like heavy compute AGI, which there's no real signs pointing to, at least that I can see.
And as you said, when even Sam and Karpathy and Ilya are all saying this as well, it's such a disconnect from the actual investments that are being made.
Right. And we'll get into sort of what the logic might be even if we're not going to get AGI simply from growing these data centers and models.
But I think we can both agree that it's crazy making in a way what we're seeing right now in terms of,
the investment and where the research is pointing, completely disconnected.
Well, have a question.
What's your current definition of AGI?
Other than Waymo's driving around New York City.
But in this context, obviously that's the first definition.
I mean, that's the official industry standard.
But in all these contexts, I'm curious, like you mentioned AI curing cancer
is kind of one high-level interpretation of this.
But how do you look at what could be AGI in this context?
Let's just go with the definition that I think these companies are thinking about,
and that is that AGI that can do more than 50% of white collar work today.
Okay.
I mean, which, but I guess that's the part where I still have trouble kind of squaring this,
because I really believe you can probably do 50% of white-collar work
without incredibly heavy compute,
kind of in that Microsoft and Amazon camp that the models are good.
And obviously, long-time listeners know where I stand on product versus model.
But, like, I think there's a world where you can build these very complex workflows and you can do this work.
And it doesn't require AGI.
It requires the current state of technology and all those data centers.
And we'll get into the actual, like, does the feasibility of these data centers, from kind of an investment and chip standpoint, even make sense.
But I think you don't need that interpretation
of AGI to actually make AI realize its potential.
Okay, so what you're doing right now is you're giving the perfect rationale for why this
build out makes sense.
Because I think what the labs would argue to their investors is, even if we stop
today, we have technology that can, with the right orchestration, automate 50% of
white-collar work, and therefore this investment is going to be worthwhile.
And you actually kind of hear it slip from people like Dario who says that 50% of entry-level white-collar jobs may be automated within a couple of years.
If you don't need a massive technological advance to get there, then that would be the logic for this build-out and make the investment worthwhile.
Do you think, I'm curious, do you think that level of a case even has to be made to investors?
Like, do you think that was a pretty good pitch? I'm halfway in there. But do you think they're
even getting to that level right now, or what do you think these conversations look like between...
Some of them. Some of them, yes. You're going to talk about, like, OpenAI and AMD, and we're going to get
into it in a moment. Yeah, the Matt Levine piece was extraordinary. Some of it, some of it is, like, yes, that case needs
to be made. I feel like that was a version of the pitch made to Lightspeed, for instance, because I spoke
with the Lightspeed investor who wrote a billion-dollar check into Anthropic.
And he basically, for the Dario profile, did the math for me and sort of explained it in a way
similar to the way that you just explained it.
So I do think that, yes, that's where the conversation gets in some areas.
But I also think there are others who are like, let's just do a deal.
Please, we need the OpenAI brand shine.
And that's where it nets out.
Yeah, I can see that.
I can see that it can go both ways.
And on that question of AGI, because I think, like, Dave Cahn, he even kind of pointed
out three things, three kind of, like, underlying factors that make things even less likely.
And I thought this was interesting, because we all talk about AGI as this kind of, like,
vague concept that the labs will get us to.
But in reality, like, the first big thing, which I don't think I hear very much about at all,
is that the labs are starving Ph.D. programs of talent. So, like, now, actually, that really
kind of foundational research moves only towards labs away from universities and kind of more
traditional research. Even though that's where all of this started is actually, I think, a dynamic
that is totally overlooked and could have kind of longer-lasting consequences around this.
But then, and then one of the other ones I liked was, corporate politics tends to favor
in-vogue consensus ideas over more radical, unpopular ones. I think it's fair to say that, like,
even though the Sams and the Darios still present themselves as kind of renegade, you know,
like, us against the world, taking on these kinds of challenges, these companies are becoming
corporations, I mean, valuations, certainly, but you have to start to imagine, I mean,
OpenAI's internal politics are the stuff of legend, but overall,
the idea that they're going to still be able to operate truly in that kind of, like, intense
innovation way, versus they're starting to get a little Google Cloud-ified, I think is another
risk to this. Yeah, I think this is a great point that Cahn brings up, which is basically,
you're putting so much money towards LLM development, and you're investing as if scaling
LLMs is a straight shot to AGI, that if it's not, you're actually going to
slow down that pursuit because you're so focused this way. I think it's a
great point. I think that you do hear from folks like Demis saying that we need a couple
breakthroughs beyond LLMs to get to AGI. And of course, Yann LeCun has been loudly talking about
that. I think this is a real risk, though. I think he's totally right. If you're
being thrown millions of dollars to join, let's say, Meta, to work on LLMs, and you would have otherwise been pursuing a non-LLM solution that's, you know, sort of out of the box and maybe had a 10% chance of working.
But if it worked, would be a big breakthrough.
Then in aggregate, what's happening now is probably slowing down the AI field, which is really interesting.
And let alone what would happen if this actually goes bust.
And then, I mean, on that, like, you have to imagine, what are the actual human dynamics?
underlying this, like, this is a small group of people. A lot of these people work together.
So you have to imagine the group think, like really has to pervade the way they're approaching
or thinking about anything. And obviously, I think that's why Deep Seek was such a moment,
because it was like, okay, completely separate teams and people actually can play in this realm
and have a different way of thinking. But it really just becomes more and more clear that
this smaller group of people have the kind of same mindset and think the same way. And
this is the bet they're putting us all in and the entire global economy potentially, as you
said. And then one last part of this is that he talks about how the incentives inside these
organizations drive short-term thinking on the order of one to three years. I think that's
really important as well. It's just that, you know, even if you're, well, especially if you're
OpenAI, you were founded as a lab on long-term research. Now you have to return,
what, like a trillion dollars in investment over the next couple of years. You're not going to be
like, let's work on the frontier and experimental stuff. You would, like, basically have found
a method that you like and you're going for it. So he kind of ends his piece asking what the
new compute is for. I'll just read this part. If new compute investments aren't getting us
closer to AGI, then what's the point? One argument is that compute is the commodity
of the future, and that stockpiling this resource is likely to be valuable regardless.
I think that sort of goes to Ranjan's point.
Setting aside the issue of depreciation, which makes this argument tenuous at best, the bigger
question becomes how long financial markets will be willing to underwrite such stockpiling
and whether investors even understand that this is what they're doing.
My sense is that while researchers are increasingly uncertain about how compute translates into
capability improvements, Wall Street hasn't fully woken up to this.
So let's just say, let's just talk about this, you know, wrap a bow on it. Basically,
I think we're both a little bit concerned that even if you get to a place where with the current
systems, you can create real economically valuable work, maybe this current buildout is so
overenthusiastic and is vulnerable to, A, the technology not improving, or, B, efficiency improvements,
that there's a non-zero chance that they're effectively,
I don't want to say lighting this money on fire,
but maybe that's what's happening.
I think it's a generous interpretation, almost.
I think, yeah, it's in terms of where this takes the economy,
I think we definitely need to get into that
because, like, how much of this money is real,
how much of it is being spent?
And I actually, there's like a, I'd listened to this
really good podcast talking about the data centers, and, like, flying a drone over and seeing,
you know, like, actually hundreds of people working, and it's the size of lower Manhattan, or I think
it was like Central Park down to Soho. So there's stuff being built, which I think is at least a good
reminder for me that this isn't all just kind of, like, completely made up. But I think that we haven't
had a genuine discussion or investigation around the dollars and how they flow.
We've been talking about this for years around like what investments are just compute,
what investments are actual cash, where are things being built.
What's, you know, like it hasn't really been dug into and kind of really analyzed in a
traditional financial sense.
And maybe this is the moment that that starts.
All right.
And why don't we start doing some of that?
right now. So the Financial Times has an article about it saying that OpenAI's computing deals
top $1 trillion, and then sort of asking whether this makes sense, because ultimately someone
has to fund all this buildout. Now here's the story. OpenAI has signed $1 trillion
in deals this year for computing power to run its AI models, commitments that dwarf its
revenue and raise questions about how it can fund them. Here are some of the deals: the deals with
Nvidia and AMD could cost up to $500 billion and $300 billion respectively.
Oracle's deal with OpenAI could cost another $300 billion.
CoreWeave has disclosed computing deals with OpenAI worth more than $22 billion.
OpenAI has also launched an initiative with SoftBank, Oracle, and others known as Stargate
and pledged up to $500 billion in U.S. infrastructure for OpenAI.
It's not clear how the Nvidia and AMD deals would fit
into the Stargate plans, although I think we do believe that they're including that as part of the $500 billion.
The deals would give OpenAI access to more than 20 gigawatts of computing capacity, roughly equivalent to the power from 20 nuclear reactors, over the next decade.
Each gigawatt of AI compute capacity costs $50 billion to deploy at today's prices, making the total cost about $1 trillion.
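The FT's back-of-envelope math can be checked directly. A minimal sketch, using only the figures quoted in the episode (the constants are the article's numbers, not independent estimates):

```python
# Figures as quoted from the FT story (assumptions, not audited numbers).
GIGAWATTS = 20        # compute capacity OpenAI's deals reportedly cover
COST_PER_GW_B = 50    # $50 billion per gigawatt at today's prices, per the FT

total_b = GIGAWATTS * COST_PER_GW_B
print(f"Implied buildout cost: ${total_b}B (~${total_b // 1000}T)")
```

Running this prints `Implied buildout cost: $1000B (~$1T)`, which is where the roughly $1 trillion headline figure comes from.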
And then they go to this analyst, Gil Luria at D.A. Davidson: OpenAI is in no position to make any of these commitments. Part of Silicon Valley's fake-it-until-you-make-it ethos is to get people to have skin in the game. Now a lot of companies have a lot of skin in the game on OpenAI. And as we've mentioned in the past, OpenAI is expected to lose $120 billion between now and 2029. So how does this math work, Ranjan? I mean, it doesn't. It doesn't. I think, like, from, again,
any kind of, like, standard rational analysis, it doesn't.
But the thing I keep thinking about is, does this kind of, like, does this force OpenAI
and others who are playing this game to push,
again, going back to the topic of kind of, like, heavier-compute solutions, like Pulse,
which we talked about a couple of weeks ago. Have you used it yet? Or do you know anyone who has?
Or do you know anyone who has?
I have not used it.
I do know someone that has: Dan Shipper from Every. He has some good things
to say about it. Have you used it?
I've not, but again, this idea that it's just going to be, like, sucking up
compute all night long just to give you some updates in the morning, and then potentially
ads, as we talked about.
And I've kind of, like, come to definitely believe that's the direction it's going.
But Sora itself, or even, this was one of the big issues that you brought up around
the GPT-5 launch, but, like, kind of pushing the model and the platform in that direction,
to much, much heavier compute, thinking and reasoning when it's not required. It feels like,
I mean, everything around how they're building this company is incentivized to
push the absolute least efficient solutions possible to actually make their own economics
work. So that part, I think, like, they have to go in that direction, and they're definitely
going in that direction, but otherwise, it just, yeah, none of this makes any sense to me.
Wait, explain how pushing the least efficient compute projects makes the economics of OpenAI work.
Well, someone has to pay for it in the end. So, assuming that you'll start paying your $200 instead
of $30 for GPT Pro to get your Pulse updates. And at a certain point, they're going to have to
charge us for making our cameo Soras, to actually account for the amount of compute
it's requiring. But it's basically that that compute's not going to be
leveraged or used if we're just stuck in the current paradigm of what models are needed and what
kind of compute is needed. So, so you have to, again, like what we were talking about earlier,
can you kind of create these workflows based on the technology that exists today and kind of
make it more and more efficient? It's to actually show that we are utilizing this compute,
this investment in 20 gigawatts and, like, all these nuclear-reactor equivalents. It's going to be
very clear, very quickly, that it's a bad investment and idea unless they can actually show, like,
just as important as revenue is actual, like, compute utilization for them right now. And I'm sure
internally these are conversations they have to be having, because otherwise you're putting all
this money in, and very quickly people will be like, well, that next tranche of money, we don't need
to actually release, because no one's using this stuff.
I get it. So for them, they want to incentivize massive compute usage because as they go to
their investors, they're using that compute usage as a proxy for the value of this technology.
Yeah, exactly.
This is why our technology is valuable. We can't, we don't have enough compute.
And if they're able to tell that story, then they might get more money.
So that incentivizes them to use a lot of compute for stuff they don't really need.
Exactly. And us as consumers are benefiting right now, because no one actually has to pay for it on the other end. And this kind of makes the, like, 2010s VC-subsidized Uber rides look like, you know, a quaint memory, where we're just able to get all this benefit as consumers and generate our Sora videos. But in reality, like, no one's paying for that right now.
Do you worry about the debt that's coming into the picture? So, of course, a lot of this has been funded by VC money.
Now it's starting to move toward debt. This is from the FT story: OpenAI, valued at $500 billion this
month, is preparing to raise tens of billions of dollars of debt to fund infrastructure. This is
also from the Wall Street Journal: debt is fueling the next wave of the AI boom. And again, this is
like the phase two that Cahn talked about.
A few smaller companies, most prominently CoreWeave,
have been relying on creative financing
to vault themselves to the AI forefront for a while.
Oracle is also part of this.
To make good on its end of the contract with OpenAI,
Oracle has to spend on infrastructure
before it gets fully paid by OpenAI.
Analysts at KeyBanc Capital Markets estimated
in a recent note that Oracle would have to borrow
$25 billion a year
over the next five years to fund these commitments.
And of course, just, you know, talking between us,
a lot of this is based off of revenue predictions
that are exponential increases, really, for OpenAI,
and not just incremental increases.
Oracle is already highly leveraged.
The company has a long-term debt of about $82 billion
at the end of August,
and its debt-to-equity ratio was about 450%.
By contrast, Google parent Alphabet's
debt-to-equity ratio was 11.5%, and Microsoft's was about 33%. Don't we get into trouble when
these bubbles end up taking on debt and they can't pay it back? Well, I think from like a larger
economy standpoint, it's still a bit unclear because this is still very concentrated. So,
you know, it's one company here with $82 billion of debt and a 450% debt to equity ratio.
So this isn't, you know, like homeowners across the country taking on unreasonable debt against their household.
So what that kind of spillover looks like, I think it's still pretty unclear.
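To make those debt-to-equity figures concrete, here is a minimal sketch that uses only the ratios and the $82 billion debt figure quoted in the episode (reported numbers, not audited; the implied equity base is simple arithmetic, not a disclosed figure):

```python
# Debt-to-equity ratios as quoted in the episode (assumptions, not audited figures).
ratios = {"Oracle": 4.50, "Alphabet": 0.115, "Microsoft": 0.33}
oracle_long_term_debt_b = 82  # $B, end of August, per the episode

# A 450% ratio implies a thin equity base relative to that debt load:
implied_equity_b = oracle_long_term_debt_b / ratios["Oracle"]
print(f"Implied Oracle equity base: ~${implied_equity_b:.1f}B")

# Same debt at Microsoft's ratio would imply a far larger equity cushion.
print(f"Equity needed at Microsoft's ratio: ~${oracle_long_term_debt_b / ratios['Microsoft']:.0f}B")
```

The first line prints roughly `~$18.2B`, which is what makes the 450% figure stand out against Alphabet's 11.5% and Microsoft's 33%.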
But my favorite was, actually, just a couple of hours ahead of this recording, my favorite, SoftBank, they announced they're taking a $5 billion margin loan secured against its chip unit, Arm Holdings.
They've actually taken out $18.5 billion of margin loans against Arm shares.
Like, I mean, this is the stuff that is Masayoshi Son, God bless him.
But this is the kind of stuff that, I feel, when you look back on it, it just doesn't,
it doesn't feel right or make sense.
Are you worried that there would, I mean, obviously there's a chance of a real equity
pullback.
But are you worried about something like a SoftBank going insolvent from something like this?
And what the ripple effect is there?
Am I worried about SoftBank going insolvent?
I think Masa will always make his way back.
But I don't worry around the direct spillover effects.
I really think this is still,
it's been still like a relatively contained group of people
that have made the most money off of this.
It's been relatively few companies
that have truly benefited from this.
I think, yes, if there's an equity pullback, what does that mean, like, geopolitically?
And I think the world is certainly not in a stable place and any kind of additional
uncertainty does not help, you know, kind of maintain any kind of stability.
But overall, I don't know, I still haven't, I've been, I haven't seen a compelling
argument about how this really spills over other than just an equity,
pullback. I mean, there's no growth story for the global economy if this goes away. It's driving
the entire growth that we've seen over the last couple years. But is that just kind of bringing back
to rationality? And is that, or is it actually, I don't know, you said, you mentioned is,
will the global economy still be here next week? So let's hear your take on it. We can get into that.
We can get into that. So let's do this. Then we're going to touch on maybe the AMD deal again.
This is from the Financial Times.
America is now one big bet on AI.
The hundreds of billions of dollars that companies are investing in AI now account for an astonishing 40% share of the U.S. GDP growth this year.
40%.
AI companies have accounted for 80% of gains in U.S. stocks so far in 2025.
This is helping fund U.S. growth as the AI-driven stock market draws its money in from all over the world and feeds a boom in consumer spending by the rich.
In a way, America has become one big bet on AI outside of the AI plays.
Even the European stock markets have been outperforming the U.S. this decade.
And now the gap is starting to spread.
So far in 2025, every major sector, from utilities and industrials to health care and banks, has fared better in the rest of the world than in the U.S.
You know, if AI doesn't deliver for the U.S., the U.S. and its economy will lose the one leg they are now standing on.
Your thoughts?
I'm still back to you on this one.
What do you think, what does that actually mean?
If, let's say, suddenly there are predictions that the compute is not going to be effectively utilized, we get into GPU depreciation, and someone does the math,
what do you think is the worst-case scenario?
Probably, I think maybe the global economy blowing up is perhaps an overstatement.
Okay.
Now that we're talking about it in our cool-headed way.
And not blowing up, disappearing, I think you said.
Yeah, well, you know, that might have been the, you know, drama podcaster in me.
But no, look, I think it would be bad.
I think, you know, obviously the global economy would be intact.
You would think.
I think you're really right in pointing out that the debt and the investment is really contained.
But if it were to go away, if this AI moment were to go away, you would see some really negative
economic consequences in the U.S. especially, and maybe outside.
Here, this is from Deutsche Bank.
They say the AI boom is unsustainable, unless tech spending goes parabolic and it's highly unlikely.
AI is saving the U.S. economy right now, says a Deutsche Bank analyst.
In the absence of tech-related spending, the U.S. would be close to or in a recession this year.
And this is from Bain: $2 trillion in annual revenue is what's needed to fund
the computing power needed to meet anticipated AI demand by 2030.
However, even with AI-related savings,
the world is still $800 billion short to keep pace with demand.
And the boom is not sustainable.
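The Bain gap quoted here can be laid out in one step. A minimal sketch using only the two figures cited in the episode (Bain's estimates, not independent numbers):

```python
# Bain & Company figures as quoted in the episode (assumptions, not my estimates).
needed_revenue_t = 2.0   # $2T in annual revenue needed to fund compute by 2030
shortfall_t = 0.8        # $800B gap even after AI-related savings

# The shortfall implies how much revenue the market is on track to supply.
on_track_t = needed_revenue_t - shortfall_t
print(f"Revenue implied as on track: ~${on_track_t:.1f}T of ${needed_revenue_t:.1f}T needed")
```

That is, even under Bain's own assumptions, only about $1.2 trillion of the $2 trillion is accounted for, which is the sense in which the boom is called unsustainable.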
Yeah, I don't know.
It seems to me that there's going to have to come a point
where the rubber is going to meet the road here and something's going to go bad.
And I don't know exactly where that's going to be or how widespread it will be,
But even with all of AI's promise, and this is what Cahn was getting at in the piece we read at the beginning, there will be a correction here.
I think one thing that was interesting to me is the way the Bain and Company report talked about it: the $2 trillion in annual revenue that's needed to fund the computing power to meet anticipated AI demand by 2030.
However, even with AI-related savings, the world's $800 billion short,
it's still, again, putting it in that paradigm of, like,
you need that revenue to cover the computing power
that's going to be needed for anticipated AI demand.
But I think how these numbers get calculated and extrapolated,
it blows my mind.
It boggles my mind.
Is it just a straight-line extrapolation? I mean, it's exponential, but trying to forecast AI demand in 2030, given how much things have changed? And again, I think what I saw from Google was something like 1.8 quadrillion tokens now being leveraged by Gemini. The numbers are pretty spectacular,
but still extrapolating that out for the next, you know, five years and trying to make any sense of it
and really try to put numbers behind it.
I don't know, especially when the machine god is going to come, and AGI and superintelligence are going to come.
I don't see how you do that with a straight face.
Well, Ranjan, I think I came in here really down in the dumps
looking at these numbers, but you've talked me off the ledge appropriately,
so I appreciate that.
All right, let's do two small dives into a couple of companies, Oracle and AMD,
and then hit the break.
First of all, what do you think about this Oracle story?
So they're buying, obviously, a ton of compute.
This is from the information.
Oracle became the best-performing mega-cap stock of 2025 after its executives said last month that the once-sleepy database firm will generate an astounding $381 billion in revenue from renting out specialized cloud servers to OpenAI and other AI developers over the next five fiscal years.
But the margins they're getting on those are averaging 16 percent.
And in some cases, Oracle is losing considerable sums on rentals of small quantities of both newer and older versions of NVIDIA chips.
In the three months that ended in August, Oracle lost nearly $100 million from the rentals of NVIDIA's Blackwell chips.
So you have a software company that's used to 70 or 80% margins.
Now they have margins of a retail business.
Obviously, their sales are up.
Does it make sense that their stock is up about 80% this year and their market cap is at $854 billion with a 69 P/E ratio?
No, no, I think this is a really good point here.
And I think, like, everyone should start really getting into these kind of numbers.
Because, again, we have not talked about, like, you know, gross margin on these businesses.
We've all said theoretically for a long time that, you know, like generative AI has a different financial profile than traditional software.
And again, it's not something that has like infinite economies of scale or near infinite,
but instead has a real cost underlying it.
And to see that is actually kind of mind blowing.
Like, again, as you said, 70% margins down to averaging 16%.
And maybe you can argue that this is just at this point, as they're kind of getting this business up and running and scaled. But in reality, there's no reason that the margin profile should change over time. Like maybe it starts to improve, but it's incremental.
Maybe you get to 20, 25 percent. Maybe you squeeze out 30 percent. But in reality, like,
this is a different business than the sleepy database company that was just churning out cash for so long. It really should call into question, I think, what the economics of all these companies are going to look like. And it's going to be different, and maybe it'll be fine; maybe these will be retailer-style businesses that are gigantic and operational, but it's not going to be
70% margin software businesses.
Right. I think Gil Luria, the analyst that we quoted before, was on CNBC making this point, basically saying it doesn't make any sense for Oracle to be more expensive than a Microsoft,
where Oracle is basically like setting up these data centers
and not pulling in margin where you have a Microsoft
that's actually setting up the infrastructure
and making money off of it and making profit.
So I don't know.
I think sort of what we're seeing is this push towards AI
might make sense overall over time,
but there is some silliness around the margins
and that they'll shake out.
Of course, there's been some silliness with the OpenAI-AMD deal. M.G. Siegler was here on Monday. We were talking about how, you know, maybe it didn't make
sense. And now there's a really interesting fake conversation that Matt Levine published.
That might be what OpenAI and AMD discussed before AMD agreed to give OpenAI 10% of its
company potentially. Let's talk about that right after this.
The holidays sneak up fast. But it's not too early to get your shopping done and actually have fun with it. Uncommon Goods makes holiday shopping stress-free and joyful
with thousands of one-of-a-kind gifts you can't find anywhere else. I'm already in. I grabbed a cool
Smokey the Bear sweatshirt and a Yosemite ski hat, so I'm fully prepared for a long, cozy winter
season. Both items look great and definitely don't have the mass-produced feel you see everywhere
else. And there's plenty of other good stuff on the site. From moms and dads to kids and teens, from book lovers, history buffs, and die-hard football fans to foodies,
mixologists, and avid gardeners.
You'll find thousands of new gift ideas that you won't find elsewhere.
So shop early, have fun, and cross some names off your list today.
To get 15% off your next gift, go to UncommonGoods.com slash big tech.
That's UncommonGoods.com slash big tech for 15% off.
Don't miss out on this limited time offer.
Uncommon Goods.
We're all out of the ordinary.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called chat concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing,
and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
And we're back here on Big Technology Podcast.
I think someone said they love how this show has become a detective show for the AI bubble.
I love that comment.
Let's keep at it.
So, of course, OpenAI and AMD had this big deal this week where OpenAI and AMD will basically spend tens of billions working on data center development so OpenAI can use AMD chips for inference. There was this weird element of the deal where AMD said if certain milestones are hit, OpenAI will have the opportunity to get 10% of AMD's stock, basically for a penny a share.
So Matt Levine at Bloomberg surmised why this might have happened and how this might have come together.
Here's the fake conversation.
OpenAI.
Well, we were thinking that we would announce the deal, and that would add $78 billion to the value of the company, which should cover it, basically paying for the chips.
AMD, dot, dot, dot, dot, dot, dot.
AMD.
No, I'm pretty sure you have to pay for the chips.
OpenAI.
Why?
AMD.
I don't know.
I guess it seems wrong not to.
OpenAI.
Okay, well, why don't we pay you cash for the value of chips and you give us back stock?
When we announce the deal, your stock will go up and we'll get our $78 billion back.
AMD.
Yeah, I guess that works, though.
I feel that we should get some value.
OpenAI.
Okay.
You can have half.
You give us back stock worth like $35 billion and then you keep the rest.
Levine says the deal between OpenAI and AMD was obviously going to create a lot
of stock market value. The announcement of the deal would predictably increase the value,
the market value of AMD. And it's not like it decreases the market value of OpenAI. Why not use that stock market increase to subsidize the deal? What do you think about this, Ranjan?
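Levine's hypothetical deal math can be sketched in a few lines. Every input here is an illustrative assumption (a rough pre-announcement market cap and announcement pop, chosen so the value created lands near the ~$78 billion Levine cites); none of it reflects the deal's actual terms:

```python
# Rough, hypothetical sketch of the circular deal math Matt Levine describes.
# All numbers are illustrative assumptions, not figures from the actual deal.

def announcement_financing(pre_cap, pop, stake_granted):
    """How much of a chip purchase the announcement itself can 'fund'.

    pre_cap       : supplier's market cap before the announcement ($B, assumed)
    pop           : fractional jump in market cap on the announcement (assumed)
    stake_granted : fraction of the supplier handed to the buyer via near-free warrants
    """
    value_created = pre_cap * pop                 # the announcement pop itself
    post_cap = pre_cap + value_created
    buyer_stake_value = post_cap * stake_granted  # what the buyer's ~10% is then worth
    return value_created, buyer_stake_value

created, stake = announcement_financing(pre_cap=330.0, pop=0.24, stake_granted=0.10)
print(f"Market value created by the announcement: ${created:.1f}B")
print(f"Value of the buyer's stake after the pop: ${stake:.1f}B")
```

The point of the sketch is the circularity: the announcement alone creates the market value that the stake grant then hands back to the buyer.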
I mean, this is actually, this is what I asked you earlier about like, what do you think these
conversations really look like behind the scenes? I mean, I loved this, because I don't think it's completely out of bounds that this is actually some of the conversation
that's happening right now. Like, well, obviously when we announce this, it's going to boost your stock, that should definitely cover some element
of the overall cost of the deal. Like, it's, and we've been seeing this forever, like, I mean,
not forever, for a few years. This is not that different than, imagine, a Google, or like Amazon and Anthropic, and it's like, well, we'll give you five billion, but it's going to be like three billion in compute, but it's going to raise your valuation by this much. So, you know, overall this kind of funny-money-esque element of it has been there for a while; it's just at a much grander scale. And one thing I kind of want to bring up, like what we were talking about a second ago: my life for the last decade has kind of been at the intersection of retail, software, and media, and, like, average retailer P/E ratios are around 20 on a good day; the average software company, even Oracle right now, is sitting around 60.
Do you, when you start to actually try to bring some rationality to this
and some discipline and rigor, are these companies going to be more like retailers?
Or whatever, I don't know, like maybe it's going to be industrial equipment.
I've seen a lot around how Stargate is not an AI play, it's an energy play.
which is a good business and can be and is interesting,
but shouldn't they be valued more like a traditional energy
and energy infrastructure company rather than AI and software?
I think that is something that we're going to be seeing a lot more of
and people are trying to come up with some new metric for an AI company
that's very different than software.
And even all these AMD-OpenAI kind of circular ways of approaching financial analysis, I think, start to look ridiculous.
That's right. I think as long as we have good times, then there will be massive multiples that will be attached to AI companies. But the second we see a sign of a slowdown, I think that will deflate exceptionally fast.
Yeah, you've heard it here, folks. Go out, come up with
your new financial metrics for this new breed of company. It's not software. It's not quite
retail; it's somewhere in between. We need new ways to measure this stuff. And this, of course, is not investment advice, so take that as we're telling you to go out and do the research.
Yeah, telling our audience.
Yeah, right. All right, let's finish this very long conversation, spanning almost our entire show, by just looking at how we both feel about this. I mean, what's your scale of concern here, from one being not concerned to 10 being
concerned.
I'm still going to put it, and regular listeners know that I am often concerned about many things, at 3.5 to be exact, to get to the exact feeling of concern.
I really think this is a shakeout. It's kind of like a back to reality come down to earth
moment, but I don't think there's going to be massive spillover effects from any kind of
any kind of rational analysis on what's really happening right now and investors just start to
come back down a little bit. I think it's going to be okay. What about you?
I'm at a five.
I do think that there's a non-zero chance that progress stalls and a lot of this hype around AI
just translates. It just fades. People realize that it's harder to implement than a lot of
the hype was making it out to be and it takes longer. The timeline is longer than it's anticipated
and maybe that just leads to stagnation for a while. I don't know. I'm not convinced that's
going to happen, but I definitely appreciate the possibility. All right, let's talk a little bit
about Open AI's announcement this week. They had this dev day, developer day. It seems like
everybody just wants to build. Have you heard this before? OpenAI held its annual dev day on Monday
where the company rolled out its plan to build apps into ChatGPT. The demo showed how programs like Spotify and Figma can be called or discovered without leaving the ChatGPT window. With so much of the tech world barreling towards AI integration, OpenAI's demo was the best picture yet of what an AI-first internet might look like, with interfaces like ChatGPT querying information and executing commands directly.
I don't know why I can't get excited about this, or maybe I do. I feel like I've heard this from Google, from Amazon, from Apple, and now OpenAI again. And I'm not, like, going to lose my mind over the platformization of ChatGPT. Am I underplaying this?
No, no. So I actually tried it; I used the Figma app for ChatGPT a bit. And as a non-UI designer, but someone who's curious, I was like, okay, can I actually start to make mock-ups, can I actually start to build out my own app interface? And in reality, you can make like a flowchart in FigJam, and it was okay. It wasn't anything revelatory. But honestly, to me, that whole app integration side, I think it should work at some point. But the fact that I still go to Google Flights, as opposed to Gemini integrated directly into Google Flights, which Google owns and is the same company, like, no one has actually shown what this can look like successfully yet. But I do think using it poses an interesting question for the actual app companies themselves, because at what point is the value your database, basically, and at what point is the value your actual UI and interface? And so if I can go use ChatGPT to create my Spotify playlist for me, then my DJ on Spotify, like, does that all become useless? And do these companies let ChatGPT actually kind of take the UI layer away from them?
And they're going to push back.
So I don't see this moving that quickly.
Did you try it out?
I haven't tried it yet.
I just, I can't.
I really can't get excited about this.
I'm sick of hearing this story.
And maybe I'm shut off in a way that's bad, because I should be open as a reporter covering this stuff. But, like, I set up Alexa Plus recently and I was like, oh, it's
cool. You can call an Uber from, you know, your echo device. And I was just like, I'll just do it
on my phone. I don't know. I'm not sold yet that this is going to be the platform of the future.
Oh, wait, Alexa Plus. I did set it up as well. What are your thoughts?
Not enough use to really review it yet. But I did speak with Panos Panay, the head of devices
and services at Amazon.
And he said by the end of October, everybody should have it.
So no more early access.
And I think, you know, if that doesn't pan out, we'll see what happens.
But there's the latest promise.
And I'm very excited to air that episode, which is coming up in a couple weeks.
I'll say I'm liking it.
It's getting kind of somewhat on par with ChatGPT voice,
just in being able to ask questions while I'm cooking.
I have an Alexa echo show in my kitchen and like asking more detailed questions.
And it works pretty well.
But my favorite was last week or I think on Monday I was asking for like NFL scores and I was
looking at them.
It completely made up that the Jets won, which I thought was kind of amazing and hilarious.
For context, Alex is a Jets fan.
The only universe where the Jets can win is AI hallucinating their victories.
You should just create your own entire Sora parallel universe where the Jets are winning Super Bowls, all of the above.
I did see a Sora video, I think probably in my Jets Discord this week, where the coach was cheering on the players after a loss because they were in line for a better draft pick.
And that's really all the organization cares about.
It's really like too close to home.
It's like, we love to lose.
That's as far as the AI can go.
You know, this is a bit of a diversion, but what's the deal with Bill Belichick and UNC, Mr. Patriots fan?
I'm just thinking about Drake Maye and the Bills game last Sunday night.
Belichick, yeah, it's such a tough one.
I mean, the younger girlfriend, the terrible start.
It would be like if you became one of the greatest, you know, one of the greatest independent journalists and media personalities around and built up your legacy over decades, to just throw it away.
Like, what drives someone to do that?
I don't know.
The parallel would be, yeah, becoming one of the greatest journalists and then going to become the editor-in-chief of a college newspaper and plagiarizing.
That is what Bill Belichick is doing right now.
Exactly what Bill Belichick is doing right now.
Lord Almighty.
All right.
Speaking of Sora, let's talk about Sora.
I mean, we could go on the Belichick thing forever, but we'll leave that to Pablo.
So, yeah, the Verge has this story.
So obviously, you and I didn't get a chance to speak about Sora last week.
We had Max Zephen.
So I want to speak with you about Sora briefly.
The copyright and usage thing: Sam Altman basically said he was surprised at the reaction, or that OpenAI wasn't fully aware of what the reaction would be. He said, I think the theory of what it was going to feel like to people, and then actually
seeing the thing, people had different responses. It felt different to images than people
expected. This is, of course, about copyright and rights. He was surprised that rights holders were
sort of up in arms about the fact that Sora had copied their stuff and gave them an opt-out as opposed to an opt-in. That seems crazy and not exactly truthful. How do you not anticipate that?
Well, okay, I definitely wanted to bring this up last week while I was on vacation. I was
itching to be on the podcast just to talk about Sora, because I made a video of myself in the Mario Brothers movie fighting Bowser on the streets of Brooklyn, and, like, obviously my six-year-old
son loved it. But as I'm looking at it, I'm like, this is insane. Like, I cannot believe that this is
okay. Now, two days later, OpenAI did introduce, like, stricter content guidelines. And it still blows my mind that Sam Altman acted like he was surprised by this. But I wanted to read the statement from OpenAI 48 hours after, like, everyone was creating new South Park episodes. It was basically: people are eager to engage with their family and friends through their own
imaginations as well as stories, characters, and worlds they love. And we see new opportunities
for creators to deepen their connections with the fans.
This is Varun Shetty, the head of media partnerships at OpenAI. He does say we'll work with rights holders to block characters from Sora
at their request and respond to takedown requests.
I think this is nuts and a big deal because they are basically saying,
like, it's out there, and you're going to have to come to us to bring takedown requests,
but we're basically okay with this.
even kind of pushing you to say, you know, you create media, but people really want to deepen their connections by putting themselves in your copyrighted material. Like, I don't know, in the overall OpenAI story, this is their approach to copyright and kind of intellectual property. So to me, I hear a lot of sensitivity around, like, people and what data they're really going to upload to OpenAI. And I think this was such a reminder that, I don't know, basically whatever personal feelings you're telling your ChatGPT therapist might get auto-published into a Sora post one day.
Right. And Varun, of course, comes from Meta.
And if you ever had to deal with Meta copyright infringement, the process sucks. And it's almost an insult to copyright holders to have to go through something like that. It's so arduous, it does not incentivize you to work on it. And by the time Meta will take something down, the thing has already spread to the point where it doesn't really make a difference. So of course it doesn't surprise me.
Is that sadly bullish OpenAI, then, in terms of this being the right approach?
Maybe it's good for the business, but ethically I think it's dubious at best. All right, we have five minutes left. Let's talk quickly about
the potential successor for Tim Cook. So this is from Bloomberg: Apple puts hardware chief John Ternus in the succession spotlight. When Tim Cook eventually steps down as CEO, it's likely he would remain involved in some capacity, perhaps as board chairman. That would put him on a path taken by other tech leaders, including Jeff Bezos, Larry Ellison, Bill Gates, and Reed Hastings. Big question: who would run Apple on a day-to-day basis? In terms of a formal CEO transition, hardware engineering chief John Ternus remains the leading contender. And Gurman says Apple probably needs more of a technologist than a sales or an operations person.
The company has struggled to break into new technology categories, even though products like
the iPhone 17 are clearly resonating with customers.
What do you think about this?
I think it's the right move.
What's your perspective?
I think some new blood is required.
Sorry, Tim.
I think, I don't know, I still feel... I don't know. I'm going to make a call here: they should buy Snapchat and make Evan Spiegel CEO. I said it. I want some product vision at the company again. Keep the operational guys. Tim, it worked from a shareholder perspective; it did not work from a true product perspective. And at the end of the day, they can only squeeze so many $29.99 Apple One subscriptions out of me before this just has to go somewhere else. And you know who would love that? Evan Spiegel would love it. I've heard from multiple people around him that he thinks he's Steve Jobs, so certainly Evan would be all about that. Would Apple do it?
I gave it to you, Evan. I just gave it to you. I doubt Apple would do it, no. All right, how long do you think Tim Cook should stay in the seat for?
I tend to think that he should probably step down sooner rather than
later. Yeah, I think it's actually like, it could be the most kind of smooth. Like, no one's going to
hold it against him. It's not a, it's not like he was fired or pushed out. It's just time to move on.
It's the next phase of the company. Like, it's clear that they have to figure out what's next. And I love Tim, but he's not the guy to figure out what's next for the company.
We've seen it.
We've seen this for a few years now, that they're not going to. And again, other companies are just driving ahead with innovation and new product development, and Apple hasn't done anything with it.
And Ternus has been at Apple for 24 years, since July 2001.
And I actually like the Spiegel idea even more, because I'm of the belief that Apple really needs a non-Steve-era CEO, because so many things inside that company are done just because that's the way Steve Jobs did them. The silos, the secrecy. And obviously it's
served them very well. But eventually you have to be like, all right, let's try something new.
Yeah. No, I think, okay, you've heard it here. This is the proposal from Kantrowitz and Roy.
It's a long shot here, but it's out there.
Crazier things have happened.
Ranjan, thank you so much for coming on.
Really great having you, as always.
All right.
See you next week.
See you next week.
Folks, Rick Heitzman, the managing director of FirstMark Capital,
is going to be on the show on Wednesday
to continue our conversation about AI's economics.
And then Ranjan and I will be back with you next Friday.
Thank you so much for listening.
And we'll see you next time on Big Technology Podcast.
