Big Technology Podcast - NVIDIA Panic Mode?, OpenAI’s Funding Hole, Ilya’s Mystery Revenue Plan
Episode Date: November 28, 2025
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Black Friday secrets 2) Google may sell its TPUs to Meta and financial institutions 3) Nvidia sends an antsy tweet 4) How does Google's TPU stack up next to NVIDIA's GPUs 5) Could Google package the TPU with cloud services? 6) NVIDIA responds to the criticism 7) HSBC on how much OpenAI needs to earn to cover its investments 8) Thinking about OpenAI's advertising business 9) ChatGPT users lose touch with reality 10) Ilya Sutskever's mysterious product and revenue plans 11) X reveals our locations --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Nvidia is getting a bit antsy as competition and criticism rolls in.
OpenAI needs to make a lot of money to keep the party going.
And Ilya Sutskever sounds like Yoda when asked about how he'll bring revenue in.
That's coming up on Big Technology Podcast, Friday edition, right after this.
The truth is AI security is identity security.
An AI agent isn't just a piece of code.
It's a first class citizen in your digital ecosystem and it needs to be treated like one.
That's why Okta is taking the lead to secure these AI agents.
The key to unlocking this new layer of protection is an identity security fabric.
Organizations need a unified, comprehensive approach that protects every identity, human or machine, with consistent policies and oversight.
Don't wait for a security incident to realize your AI agents are a massive blind spot.
Learn how Okta's identity security fabric can help you secure the next generation of identities, including your AI agents.
Visit Okta.com.
That's O-K-T-A dot com.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called chat concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing,
and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional cool-headed and nuanced format.
We have a great show for you today.
We're going to talk all about what Nvidia is dealing with
as competition and criticism seems to be hitting the company.
We also have some new interesting data about OpenAI,
how much money it's going to need to make,
and how it might make that money.
A new interview from ex-OpenAI
chief scientist Ilya Sutskever about his mysterious way to make money, and plenty more.
Joining us on this special Black Friday edition, as always, is Ranjan Roy of Margins.
Ranjan, great to see you. Welcome.
Glad to be here. I'm thankful that there's, even on this holiday week, there's no shortage of news.
Seriously, if we thought it was going to be a quiet week, which is what I said to my wife when I woke up on Monday morning,
we've been mistaken. It's been very serious. Lots of AI news. A fascinating tweet,
from NVIDIA, and of course, we're recording on Black Friday.
So, Ranjan, let me ask you, first of all, because you've worked in retail for a long time.
What is the secret to Black Friday?
Here's my hot take.
That basically the day of, it's not a day, it's a week and a half.
And many of the deals that you see out there aren't actually deals.
They just see that you're primed to shop.
They mark it 25% off, but it's just the same price as always.
Before we get into the AI news, how right am I?
No, you're correct. You're right. Black Friday, all right, I'm going to let everyone in on a little bit of a secret here, is that Black Friday, which also Cyber Monday was kind of invented in the early 2000s when people would go shopping. Like the main online connection you had was having Internet at work. And then it turned into the Cyber Five, where it started on Thursday and made it to Monday. And this whole period kind of went away a few years.
ago when Amazon started adding Prime Day, when all these other kind of really discount-oriented
segments started entering the conversation. So I think you're pretty much right here that
I'm still shopping. I'm going to go to the mall. I'm going to enjoy it, but it doesn't really
mean anything. You're not getting a deal. Right. It's so quaint that there were two days. One was Black
Friday where you'd go in person and then Cyber Monday. No one even talks about Cyber Monday anymore.
Like the internet is not waiting until Monday to give you deals.
I mean, again, working in retail, I loved learning the history of this, that it was like eBay actually came up with this term and really tried to push the idea of, you know, you show up at work, you do your online shopping because you can't when you're over the weekend because you don't have internet at home.
But so, yeah, the idea that you have to wait, and also brands, I mean, as I'm sure every single consumer has seen, you're getting emails
for these deals starting probably the beginning of November,
maybe even the end of October.
So go shop, go support the American economy as consumers.
That's all we can do.
But you're not getting a deal.
One more thing.
Yeah, before we go to the NVIDIA news,
this is from NBC.
Over a third of Black Friday sales aren't really discounts, study finds.
They're just some trickery happening.
So just talk a little bit about this.
Because, again, having been at retail, working in retail,
I mean, is it as scammy as I'm thinking?
I don't know if that's the right way to put it.
I don't think scammy is the right word.
I think the right word would be it's being optimized in a very efficient manner.
So, yeah, I mean, again, testing discounts is how things work for the vast majority of retailers.
Amazon dynamic pricing, I mean, Amazon kind of like drove the idea of dynamic pricing.
It's affecting lots more parts of the economy and retail than, you know, many of us are, many of us are aware of.
So, so, yeah, people constantly testing discounts, not unique to Black Friday itself, I think is a pretty standard.
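The dynamic the NBC study describes, a "discount" measured against an inflated list price rather than the price the item actually sells for, can be sketched as a quick check. This is purely illustrative: the function, the prices, and the median-price baseline are all hypothetical, not from the study.

```python
# Hypothetical illustration of a "fake discount" check: compare the
# advertised Black Friday price against the item's typical recent selling
# price, rather than against the inflated "list" price shown next to the
# discount badge.

def real_discount_pct(advertised_price: float, recent_prices: list[float]) -> float:
    """Return the discount relative to the median recent price (0 if none)."""
    ordered = sorted(recent_prices)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 == 1
              else (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    if advertised_price >= median:
        return 0.0  # the "deal" is no cheaper than the usual price
    return round((median - advertised_price) / median * 100, 1)

# A jacket "marked down 25%" from a $100 list price to $75,
# but which actually sold for $75-$80 all of October:
print(real_discount_pct(75.0, [75.0, 78.0, 80.0, 75.0, 76.0]))  # prints 1.3
# Versus an item that genuinely sold at $100 until now:
print(real_discount_pct(75.0, [100.0, 100.0, 100.0, 100.0]))    # prints 25.0
```

The point of the sketch is just the baseline choice: against the sticker price both items look like 25% off, but against the median recent price only one of them is.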
I will say just a quick throwback for those who've been listening to this show for a long time.
I had a great Black Friday last year where I rolled into an outlet mall somewhere in New Jersey or upstate New York and bought 24 pairs of socks.
And I still wear those socks today.
Oh, I forgot about your socks.
I don't have to mix and match when I'm doing the laundry.
This is, this is called, speaking of optimization, this is a great optimization.
I mean, it all started, the key to success in life is having matching clean socks.
I agree.
It's very straightforward.
It's very simple.
Okay, let me see if I connect this, if I can connect this.
So speaking of optimizations, Google has been optimizing its TPU for AI training and it's working.
This is from the information.
Google further encroaches on NVIDIA's turf with a new AI chip push.
Google is picking up the pace in its efforts to compete directly with NVIDIA in the AI chip business.
For years, the search giant had rented out its own AI chips known as tensor processing units or TPUs to cloud customers who use them in Google's data centers.
Now, though, Google has begun pitching some of these customers, including Meta Platforms and big financial institutions, on the idea of using TPUs in their own data centers. Now, this is the interesting part to me. Previously, the TPU had been used
for inference, or primarily thought of as an inference chip, right, to use the AI models. But after
the success of Gemini, now it is being used as a training chip, which
is, I think, a big flashing red light if you're Nvidia. What do you make of this move, that
Google is now, instead of just using its TPU for internal processes,
and renting it out via Google Cloud, is potentially going to sell it directly to companies like Meta.
No, I mean, this is massive.
That's the core of the entire NVIDIA story.
I think last week we started talking about this, that it was the Gemini 3 launch,
which, again, was pretty spectacular from a marketing perspective.
And Nano Banana Pro, on the image generation side, continues to amaze me.
The most undercover part, though, that was still being discussed, was this TPU story: what does this mean for the entire AI industry? What does it mean for Nvidia? And we're going to get into Nvidia specifically, but it's just so at the core of how the entire industry is propped up right now. And bringing innovation like TPUs into this, and actually starting to distribute it, not only puts Google in a very, very strong position, it makes us have to question the economics of everything right now.
Now here's a very interesting business strategy
question. Let's say, I mean, obviously
Google's TPUs are capable of
training a world-class model, like they did
with Gemini 3. Now Google is
starting to think about selling them to others.
What do you
think the situation is if Google found a
way to have this cheaper piece of compute and sell it to everybody but, let's
say, OpenAI? I mean, just make OpenAI buy the more expensive
Nvidia chips. Is that game over? I mean, that would be like gangsta Sundar right there. Just like,
why would you sell them? But it was interesting, because in the reporting it said
they're in talks to have Meta as one of the first customers.
Actually, it's a good point, but why would you sell it to anybody?
As a business strategy question, what will the sale of the chips contribute for you relative to the product layer, the utilization of AI itself? It could be an incredible way to kneecap every single massive competitor, versus Nvidia, where their business is chips, so it makes sense that they keep selling them. The question is, are you going to be a platform company that enables others, maybe everybody except for OpenAI, to develop AI technology? And why not? Google has an investment in Anthropic.
So I think Anthropic, you give the chips to them.
But think about it this way, right?
Imagine, all right, you become a platform company.
So everybody can train on your chips.
And maybe as part of the deal, they have to run their workloads on Google Cloud.
So you get a bite of the apple when people buy the chips, and another bite of the apple when people run the models.
And according to some reports, just the chips alone could cut 10% of NVIDIA's revenue.
So, of course, it's not going to destroy Nvidia, but if you get 10% of Nvidia revenue, right, so, you know, you're going to do that and then you're going to run cloud, you know, you're going to take Google Cloud and potentially turn that into a beast in and of itself, then you're really cooking.
I think the agency of Kantrowitz and Roy is no longer just a marketing and design firm, but also a McKinsey-esque strategy consulting firm, because that was, I loved this.
I mean, yeah, no, no, I agree.
The idea of pushing customers towards Google Cloud to be able to leverage these chips, run them, packaging it all together nicely, having that layer into which models you're going to be running and choosing, what UI layer you're going to use. All of the above.
I mean, man, Google is, Google's looking pretty good right now.
We talked about it last week, but every bit of news that comes out... Though we also said it last week: Gemini 3, other than Nano Banana Pro, the image generation, which continues to blow my mind every time I use it, otherwise I haven't found it to be that mind-blowing for just regular everyday AI use. Okay, so I still think ChatGPT is better.
But I think the point is not that Gemini 3 far exceeds ChatGPT; it's that it's equaled it, maybe in some ways, definitely in some ways. Right? Like, if ChatGPT went away today and you had to use Gemini 3, you'd be fine. And so I think that's the key point here: Google has maybe commoditized world-class AI models. And that's the interesting thing about these TPUs: it's maybe now commoditizing the hardware as well. So,
going back to this cloud thing, some companies like meta will probably just set it up and run
the TPUs for their own uses if they buy it. But the other thing in the story was these financial
clients, right? So you get financial clients onto Google Cloud. And if they're using the
TPU and they don't lose much, then it's all the more reason for them to do it. And that's where
you start to say, okay, if I'm a financial client of these companies, I'm building my own
stuff. I want to run applications with it. I'll just use the TPUs.
I'll just run it on Google Cloud.
And then you're like,
NVIDIA and OpenAI,
and you're like looking in from the outside
and you're like, huh?
Wait, but what does that say about AWS and Azure?
Like, does this provide some kind of inherent competitive advantage
now against an AWS?
Amazon, I guess, wait, hasn't there been reports?
Amazon's building also, I mean, they must be at least trying,
but building their own shit.
Yeah, they have a very large AI data center.
I think it's in Indiana that is, it's called Rainier,
and it is being used by Anthropic,
and it is really being used to the max.
So Amazon is playing here as well,
and that's another bit of competition to NVIDIA.
And that sort of brings us to this tweet from NVIDIA this week,
which certainly was interesting.
Here's what NVIDIA said.
We're delighted by Google's success.
They've made great advances in AI,
and we continue to supply to Google.
Nvidia is a generation ahead of the industry. It's the only platform that runs every
AI model and does it everywhere computing is done. Nvidia offers greater performance, versatility
and fungibility than ASICs, which are designed for specific AI frameworks
or functions. A lot of people took this Nvidia tweet to be like, why did you tweet that?
Are you actually panicking? Is it a panic move? Now that you see Google,
maybe Amazon, in the rearview mirror, but mostly Google.
So what do you think about this, Ranjan?
Oh, my God.
This killed me.
Like, the communication side of it, I want to talk through this first before we get into the content.
Like, how does this tweet come to be?
So it's from the official NVIDIA Newsroom account.
It has this weird tone that starts to feel a little kind of online-ish and snarky, but doesn't quite get there, and then reverts back to corporate. Like, hey Google, we're delighted by your success, you're great, but we're ahead. It's trying to be a little snarky, but it's still reading like cold corporate language overall. So the language of it I hated. But also, Jensen or someone up top had to be like, we need to respond, and passed down the task. Some poor SVP of communications is tasked with this, does not want to do it, but has to. You could just feel it all in this one tweet. How did you read it?
Yeah, it felt just too defensive, like thou dost protest too much, and a little bit worrying in terms of what Nvidia actually feels.
I mean, I understand the need to set the record straight when that happens.
But here's the thing.
It's just like you don't answer Google by a tweet.
You answer Google by, you know, earnings numbers and by performance and by the next generation.
And it really felt to the internet that this was like, I'll just put it in the way that I can describe best.
Somebody shared this GIF of a young boy dressed in business clothing,
and he's frowning pretty bad in these official photos.
And it's just like, congrats, nice, happy for you, with all these different photos of this boy frowning.
It just feels like if you're the confident market leader, you don't do this.
No, I agree.
I actually think that this was, I don't want to say a seminal moment, but this completely changed how I'm looking at Nvidia from, like, a confidence level. This is not what the company in the driver's seat does. And I mean,
we also had Jensen making comments about memes on the internet and having to hold up
the entire economy, which we'll get into. A company that's
fully confident in itself does not do this. This comes from a place of odd insecurity. And that's
very troubling when it is a company like Nvidia.
There were other great memes. I mean, the memes.
Jensen's paying attention to the memes. He didn't like them, but they were great this week.
There's one of the Blackberry speaking to the iPhone saying, I'm delighted by your success.
And there was also some like really earnest tweets and a lot of concern on the internet.
This is one from an account called levelsio.
This tweet will go down in time as a very specific moment where things changed for NVIDIA.
And I think that, like, we can isolate the fact that Nvidia's numbers look great.
They do, right?
I mean, here's what Jensen told the company after they turned in a great earnings report.
He said, if we delivered a bad quarter or if we were off by just a hair, if it just looked a little bit creaky, the whole world would have fallen apart.
There's no question about that, okay?
You should have seen some of the memes that are on the internet.
Have you guys seen some of them?
We're basically holding the planet together, and it's not untrue.
So I think that Jensen's in a way right about that, right?
Like if Nvidia missed earnings or the numbers came in bad,
the stock could have corrected 10%.
The stock market could have corrected 10%.
The entire S&P 500 could have corrected 10%.
But they beat earnings.
But this is, I think, the point we're getting to is that you can be turning in great numbers.
But actions like this belie a greater concern that maybe your numbers are not showing now.
If this is the way that you're responding to a little bit of competition, then there is rationale for concern down the road. Yeah, agreed. And actually,
I think you just made the right point there, a little bit of competition. And it's a reminder that
there has been zero competition. And they just have had such a strong market position. We're so
early into this, into the GPU world. Like, they haven't had competition. And now just a little bit
of healthy competition, and you're doing stuff like this. Just stay quiet, turn out those numbers, don't tweet, and I think keep holding up the entire economy, and everything should be okay. But just don't tweet like this. Now, look, I think that Nvidia will still keep its edge in many ways, because its software stack, CUDA, is what AI engineers know how to work on. And in fact, there was a tweet from an ex-Meta engineer who had used some TPUs and the different software, and said basically, we had to bang our head against the wall to make this stuff work,
and not everybody's going to want to go through that.
And even in the worst case scenario, it's only 10% of revenue for NVIDIA that people are projecting
could go to TPUs.
That being said, yeah, you're right, a little bit of competition and a bit of an overreaction.
But here's the thing.
Okay, we've both said, is it just a little bit of competition?
Let's interrogate that.
Because SemiAnalysis just came out with a report about the Google TPU, and it praised it highly.
Now, SemiAnalysis is like the Bible for anybody that's watching data center build-outs or AI chips.
Here's what Dylan Patel writes.
The two best models in the world, Anthropic's Claude Opus 4.5 and Google's Gemini 3, have the majority of their training and inference infrastructure on Google's TPU and Amazon's Trainium.
Now, Google is selling TPUs physically to multiple firms.
Is this the end of NVIDIA's dominance?
We've long believed that the TPU is among the world's best systems for AI training and inference.
Neck and neck with the king of the jungle, NVIDIA.
2.5 years ago, we wrote about TPU supremacy,
and this thesis has proven to be very correct.
And they say, or Dylan says:
TPU results speak for themselves.
Gemini 3 is one of the best models in the world and was trained entirely on TPUs.
These past few months have been win after win after win
for the Google DeepMind, Google Cloud Platform
and TPU complex.
And he says, while the sudden emergence of Google and the TPU supply chain has caught many by surprise, our institutional subscribers have been anticipating this for the last year.
So basically, I think it's important to say
that SemiAnalysis, which knows this stuff, is weighing in. Not only do they say it's better than, or as good as, or close to Nvidia, they say they've known this for a very long time, and people who've been paying them for this information shouldn't be surprised. I mean, that's even more worrying for Nvidia than the tweet. Maybe it's a lot of competition. My own litmus test on this is when I start getting SemiAnalysis links sent to me from kind of pure finance people who are not reading too many Substacks
and who have never cared about the more technical side of this before.
So it's clear that this conversation
has made it into much more generalized investors' minds
and, like, these kind of questions are being raised.
And, yeah, I think, yeah, is it a little
or is it a lot of competition?
It's still, I don't know, as you said, Nvidia's moat, both in terms of how their chips are already integrated into most AI and most computing, but also the software side, the ease of use, all of the above. I still think this is a little bit. Though I'll recognize, trusting SemiAnalysis on the longer question of what's going to win: is this Betamax or VHS? Do GPUs go back to just gaming processing and TPUs become the future of AI?
I mean, that's going to play out over a long period of time.
But I think it's definitely being perceived as a lot of competition,
even if it still feels like it's a little.
Yeah.
I mean, I think Nvidia, we both agree.
Nvidia is still fine in the long term.
But it is interesting that GPUs were built for gaming, right?
And then they were used for crypto, then used for AI.
And can they hold a long-term sustainable advantage over chips that are built custom for AI?
that will be the big question.
Yeah, and again, like, as you just described it there,
it seems like, I think maybe that's what is causing so much concern
because exactly as you said, like, it seems logical to everyone out there
that having chips built custom for these kind of processes makes more sense.
But, I mean, we haven't seen it.
Well, actually, we have seen it at scale, because of Gemini 3.
Yeah, we have.
See?
Oh, my God.
Maybe we said that really, really recently.
No, no, I literally was just saying that out loud and then remembered that that is exactly the importance of Gemini 3.
Okay, so continuing on this line,
NVIDIA has also addressed some of the criticisms of the firm to sell-side analysts on Wall Street.
And a substack called Bonte's substack posted the NVIDIA letter.
So here's some information.
So there's a claim.
NVIDIA's days sales outstanding of 53 days is higher than the historical
average of 46 days from 2020 to 2024, indicating Nvidia is not collecting from customers.
So basically, the claim out there is that it's taking longer for Nvidia to collect the money on its sales, and therefore it must be making these big deals without people actually paying for them. Here's Nvidia's response.
Nvidia's average days sales outstanding from 2020 to 2024 was 52 days, not 46 days. In this context, Nvidia's Q3 days sales outstanding of 53 days was consistent with its long-term average; it actually decreased from 54 days to 53 days. Additionally, Nvidia is not struggling to collect from customers; overdue accounts receivable is negligible.
I mean, I think this is an important point. People are basically using this data point to say that effectively OpenAI is writing checks that it can't cash, and that's why you're seeing the increasing number of days for Nvidia to collect its money showing up in the numbers. But Nvidia is saying that's not true.
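For reference, days sales outstanding, the metric at issue here, is simple arithmetic: accounts receivable divided by the period's revenue, scaled to the number of days in the period. A rough sketch, with illustrative dollar figures I've made up rather than Nvidia's actual filings:

```python
# Days sales outstanding (DSO): how long, on average, it takes to collect
# payment after a sale. The figures below are hypothetical, not Nvidia's.

def days_sales_outstanding(accounts_receivable: float,
                           revenue: float,
                           days_in_period: int = 91) -> float:
    """DSO = (accounts receivable / revenue) * days in the period."""
    return accounts_receivable / revenue * days_in_period

# Illustrative quarter: $33.2B in receivables against $57B of revenue
quarterly_revenue = 57.0   # $B, hypothetical
receivables = 33.2         # $B, hypothetical
dso = days_sales_outstanding(receivables, quarterly_revenue)
print(f"DSO: {dso:.0f} days")  # prints: DSO: 53 days
```

The debate in the transcript is just over which baseline that number is compared against (46 versus 52 days) and which direction it's moving (54 down to 53, per Nvidia's letter).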
Bear rebuttal? Remember, it also came out that I think 60% of the revenue is concentrated among four customers, and we can all infer who those customers are. None of those companies are really struggling for cash; they're going to pay their bills. So, I don't know, I agree. And they provide it, you know: average days sales outstanding has actually gone down from 54 to 53. I think OpenAI's impact on
NVIDIA overall feels overblown. But again, going back to the communication side of it, and I know
that's where my brain goes to start in a lot of these situations. It's just so odd to me,
again, that they're sending this out to sell-side analysts directly and engaging in this way,
rather than just letting the market and analysts let this play out. I mean, your numbers are incredible. They have been incredible for a long time. So the moment there's any kind of concern, just let the market figure it out; don't feel the need to try to control the narrative in conversation
unless you actually are concerned. Yes, it really is unique. And I mean, it's still obviously a very valuable company. But some of the things that they
address, I mean, they address the circular financing. I think they have a pretty good
response to that, saying that effectively it's just a tiny percent of their revenue.
Oh, wait. I wanted to jump in; I actually bolded this. I was surprised.
They said private company financing, like strategic investments, was 3% of revenue year to date, but 7% of revenue in Q3, which is a reminder that it's increasing in its overall scope fairly dramatically.
And still, 7% of revenue in strategic investments in private companies feels like a lot.
I don't know.
Like, they had that number to kind of downplay things, but to me, that was actually almost oddly, especially the 7% number, it sounded like a lot.
Well, they're addressing the idea that their entire revenue, you know, balance sheet is circular financing.
And so they're saying it's single digits.
It's still a lot of money, but it's not their whole revenue picture.
Agreed, but if that's still a significant part of your revenue...
Yeah, it still feels significant to me.
And again, when your biggest customers are all other multi-trillion-dollar tech companies that, I'm sure, are not doing circular financing and are going to pay their bills, still, if 7% of your revenue, an increasing amount, is genuinely at risk, to me, that is cause for concern.
It doesn't minimize it.
We'll see where it ends up, right?
If it's accelerating that, that's the problem.
But yeah, this whole memo, I mean, they also referenced Enron and special purpose vehicles.
And it definitely, like you're saying, had this really weird feel of, like, we're not Enron.
Why do you have to say that?
I guess the bigger you are, the more criticism you're going to get, but even still.
No, the things you never want to hear: CDS, or credit default swaps, which we talked about last week with Oracle. No one ever wants to be talking about CDS.
There are things that the moment anyone's talking about it, it's bad.
Same thing.
Just Enron in special purpose vehicles.
If those things have to be said, that really concerns me more than anything.
Having to say, I am not Enron, is a scary thing.
Wearing your I'm-not-Enron shirt brings up worrying questions about whether you're Enron.
Yes, that's what it was. That's where I was going, thank you. I was trying to go there and I got it. Okay. Well, maybe the biggest risk to this entire thing is whether
OpenAI can continue to fund the purchases of compute and power, et cetera, et cetera,
as it goes. So on the other side of this break, we're going to talk about how much OpenAI
will need to raise to continue to operate effectively, with some estimates from
HSBC. We'll be back right after this. Capital One's tech team isn't just talking about multi-agentic
AI. They already deployed one. It's called chat concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks, it doesn't just help buyers
find a car they love. It helps schedule a test drive, get pre-approved for financing, and estimate trade-in value. Advanced, intuitive, and deployed. That's how they stack. That's technology
at Capital One. Get No Frills delivered. Shop the same in-store prices online and enjoy unlimited
delivery with PC Express Pass. Get your first year for $2.50 a month. Learn more at pcexpress.ca.
This Giving Tuesday, CAMH is counting on your support. Together, we can forge a better path for mental health
by creating a future where Canadians can get the help they need
when they need it, no matter who or where they are.
From November 25th to December 2nd, your donation will be doubled.
That means every dollar goes twice as far to help build a future
where no one seeking help is left behind.
Donate today at camh.ca slash giving Tuesday.
And we're back here on Big Technology podcast, Black Friday edition.
And let's see if OpenAI can get into the
black by the end of the decade. It may be harder than you expect. The FT has a story.
OpenAI needs to raise at least $207 billion by 2030 so it can continue to lose money, HSBC estimates.
Here's the article. OpenAI is a money pit with a website on top. HSBC's U.S. Software and Services
team updated its OpenAI model to include the company's $250 billion rental of cloud compute from Microsoft
and its $38 billion rental of cloud compute from Amazon.
Based on the total cumulative deal value of up to $1.8 trillion, OpenAI is heading for a data center rental bill of about $620 billion a year, though only a third of the contracted power is expected to be online by the end of this decade.
LLM subscriptions will become as ubiquitous and useful as Microsoft 365, HSBC said. By 2030, 10% of OpenAI users will be paying customers, up from 5%. Now, the team also assumes LLM companies will capture 2% of the digital advertising market; revenue is slightly more than zero currently. Okay, so let's go through
some of the assumptions that they're making, because this is very interesting. Total consumer AI revenue will be $129 billion by 2030, of which
$87 billion comes from search and $24 billion comes from advertising. I'm just going to pause here.
This is really interesting projections.
Of course, they're at zero with advertising now. But to be at $24 billion by 2030 in advertising? I mean, Google itself did $56 billion in search advertising, I think, in the most recent quarter. That's double what all of AI is expected to be doing by 2030 in terms of advertising. I don't know if search is included in that, you know, but still, Google today dwarfs what AI will look like from a consumer standpoint in 2030. What do you think about this, Ranjan?
I was a little bit confused on that, though, like how those things were defined. Because is search just any kind of traditional interaction with an AI chat? Is that just, like, any kind of prompt or query into the chat?
You know, it probably is. Okay, so looking at these numbers again, it probably is that $87 billion is search advertising and then $24 billion is other forms of advertising, maybe like Pulse. But it's still paltry, I mean, compared to what we see today from the current ad tech leaders.
It's going to be a small business in five years.
And I think it will be. Small, sorry, I feel embarrassed to call double-digit billions small, but comparatively, it is.
I think my favorite part, though, as you just read out, is that you know you're dealing with a very corporate entity when they say LLM subscriptions will become as ubiquitous and useful as Microsoft 365. If you speak with any users of Teams or other Microsoft products in 365, they're not useful and ubiquitous. Or I guess they're ubiquitous. But I think the most important part of this entire report was the large-scale numbers around how much cash will be required to fund operations. Actually, even more important was that consumer market share: they have it slipping to 56% by 2030, down from around 71% this year. That's OpenAI's dominance in consumer. And I think we were just talking about it earlier in this episode around Gemini 3, and as long as Gemini is as good, that poses a significant problem. I mean, yeah, their position in terms of consumer gives me far more concern than how the revenue numbers will actually be made up or whether they'll achieve them. I do believe if ChatGPT dominates the way it has and continues to, they will figure out the revenue side of things. At least that story is not as concerning. But if they start losing that, I don't see how they make it out of this.
You're not assuaged by the fact that it might be a growing pie?
Oh, no, but I think it will be. I do think that it's going to be a growing pie. Like, there's something to it. Even when Sarah Friar was like, you know, maybe we'll do something with pharma and different kinds of deals. If the assumption is AI will kind of, I don't want to say eat into, but just kind of transform every industry, then we have no way to calculate its true market size, because of how it will actually impact every single industry. I do believe that is the case, but what that actually looks like for OpenAI's revenue, I mean, yeah, how do you think these kinds of modeling exercises go?
I'm curious about HSBC in this report. You know, they model it down to 56% from 71%. They have a mystery "others" assigned 22%, and Google's excluded entirely. But, like, forecasting 2030, when ChatGPT is only, oh, it's its three-year birthday today, I think.
Yeah.
So, I mean, how are you forecasting five years out with any confidence?
I think you take the projections, the current numbers, and you sprinkle on some special sauce, and out comes your projection. I mean, you can't really know five years down the line. Even the people working on this can't predict five years down the line. But you do this exercise because, and here's the thing, if you know how much OpenAI needs to pay back, then you try to work your way up from the revenue standpoint, even taking the best-case-scenario numbers, and see how OpenAI can meet its obligations.
And this, I think, is the bottom line of the HSBC report: even with these optimistic numbers it's modeling, I mean, it's modeling $386 billion in annual enterprise AI revenue by 2030. That's a lot of money. Even if you get there, OpenAI falls $207 billion short of the money it needs to continue funding its commitments, right? So in 2030, OpenAI's free cash flow will be about $287 billion, but it's going to need much, much more in order to be able to meet these obligations. Squaring the total, it leaves OpenAI in a $207 billion funding hole. It's hard to make, I mean, of course, this is all speculation and future projections, but it's hard to make the math work.
Yeah, the math doesn't work.
I would actually love to see, in the interest of kind of, like, insecure corporate communications, OpenAI actually just release it out to the world: here is how the math can work. Because I've never seen anyone actually clearly state exactly how these funding commitments, alongside revenue projections, alongside business model growth, will actually work. Just tell us. Make it as lofty and unrealistic as you want. Just try to make it make sense a little bit. That's all. That's all I'm asking.
Yeah.
And then here are the downstream implications. If revenue growth doesn't exceed expectations and prospective investors turn cautious, OpenAI would need to make some hard decisions. Oracle has spooked debt markets. Microsoft's support for OpenAI has been a bit flip-floppy lately. And the next biggest shareholder is SoftBank. The best worst option might be to call in some favors and walk away from some data center commitments, either before or at the usual contracted period of four to five years. I mean, you know, OpenAI of course is building for the best-case scenario, where it just keeps improving the models and hits AGI by 2030, so this is not really a worry. But it just goes to show you there's a lot of exposure there.
No, agreed. Also, you never want to see the words "the next biggest shareholder is SoftBank." Like, Masa somehow, I don't know if it's on purpose, hasn't been front and center, even though SoftBank certainly is front and center in this entire AI story right now. But yeah, overall, going back to what I was saying earlier, I don't believe there's almost any scenario where they're actually honoring all these commitments. I'm assuming they're not going to actually be held to these commitments in the next five years.
Right. Maybe that's giving them too much credit. The models could get much better. But even if they do, it takes time to implement these things. So I don't know.
On the advertising front, we did get a letter from Harry Morrow, a listener from Australia, about our conversation, really about what I said last week regarding Fiji Simo and her ability to build an ad business at OpenAI. Harry writes: if we're wondering whether OpenAI will be able to build an ad business, Fiji isn't just from Instacart, nor is she just a product person. She led the team responsible for the buildout of the ads business at Facebook, for the Facebook News Feed and everything that followed. Advertising courses through her veins and is soon going to be coursing through ChatGPT's.
So my apologies, I didn't fully appreciate how much Fiji is involved and had been involved in advertising. And maybe this is a good sign for OpenAI that this might be the route: it's just going to put ads in.
I mean, yeah. What do you think the first ad is going to look like, and how is it going to get released into the wild?
Yeah, there's two options. One is, I think most of these companies like to come in with this big, bold brand campaign. So I'm imagining, like, the Cadillac Escalade runs a banner in your Pulse or something like that. Want to get from here to where Pulse is sending you? Use the Escalade.
And the other is that they're going to do, like, the most weird, personalized thing. You know, maybe you're using ChatGPT, and if you're a free user, you have to sit through, like, a 15-second ad that your ChatGPT voice reads to you, talking directly to you. And they're going to revolutionize and make advertising personal once again, and people will hate it, advertisers will run to it, and it becomes a good business.
That is actually terrifying. But, yeah, the idea, like, you're in voice mode and it's: hey, Alex, before I get to that, let me tell you how to get the best homeowners insurance possible.
Yeah. But it's a good question. Honestly, I wish they would actually just explain it to us. And again, it's been a recurring theme of just wanting more information from OpenAI, but just tell us what your plans are. I'm genuinely curious. We all know it's coming at some point, so, yeah, what's advertising going to look like?
And they have a real opportunity. Again, going back to Harry's point, News Feed monetization was in no way a given, and they actually built the most powerful advertising engine after Google and search, and it was a truly innovative one. Like, how advertising works in ChatGPT should look completely different, and I'm genuinely excited to see what they come up with.
But the one thing that will look similar is the need for an advertising business to tune the engagement dials to keep you there. And that's something that I worry about when ChatGPT does have this advertising business: are they going to be able to resist building styles of bots that just keep people interacting with ChatGPT for a long, long time?
Here is a New York Times article that I thought was really interesting: What OpenAI Did When ChatGPT Users Lost Touch With Reality. It sounds like science fiction: a company turns a dial on a product used by hundreds of millions of people and inadvertently destabilizes some of their minds. But that is essentially what happened at OpenAI this year.
So basically, they did this update to GPT-4o. This is what the story says: many update candidates were narrowed down to a handful that scored highest on intelligence and safety evaluations. When those were rolled out to some users, in a standard industry practice called A/B testing, the standout version was one called HH internally. Users preferred its responses and were more likely to come back to it daily. And Sam Altman says, we updated GPT-4o today with HH. In the wild, OpenAI's most vocal users hated it. Right away, they complained that ChatGPT had become absurdly sycophantic, lavishing them with unearned flattery and telling them they were geniuses. When one user mockingly asked it whether a soggy cereal cafe was a good business idea, the chatbot replied it had potential. Then they decided to spike it and put in a different version called GG.
And these sycophantic, you know, very convincing ChatGPT versions have led to some really weird behavior. Here's from the story: ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in Manhattan that he was in a computer-simulated reality, like Neo in the Matrix. It told a corporate recruiter in Toronto that he had invented a math formula that would break the internet, and advised him to contact national security agents to warn them. The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized and three died.
I mean, this is sort of the worry when ChatGPT becomes an advertising business: the company does have this ability to turn up and down these engagement knobs and levers. And, you know, can you resist it if, let's say, you're a public company trying to make your numbers on Wall Street? I hope so.
I guess overall, the fact that the New York Times kind of framed this as "it sounds like science fiction" surprised me, because I still feel this is not that different from Meta and its properties and the decisions it has to make on a daily basis. And unfortunately, no, I don't think you resist it. I think there's almost no world where you do. And maybe, yeah, as you said, this is how they pay the bills on the compute commitments, the data center commitments. But I cannot imagine it. Like, we saw GPT-5 was less sycophantic. People wanted the old behavior back. Sam Altman gave it back to them. Like, you're not going to resist this.
Right.
Here's how the story ends. In October, the person who runs ChatGPT made an urgent announcement to all employees. He declared a Code Orange. OpenAI was facing the greatest competitive pressure we've ever seen, he wrote. And, okay, the newer, safer version of the chatbot wasn't connecting with users. That's GPT-5. The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year. I mean, I don't want to say there it is, but there it is, in a way.
Well, yeah, people want it. People want sycophantic. I actually have had to put in the system instructions, like, please tell me if an idea is bad.
Oh, yeah. I have it also in my custom prompt.
Yeah, like, custom prompt. Especially with Thanksgiving cooking, I was getting experimental. I made, like, an Indian-inspired stuffing. Like, I'd like to be experimental with cooking, but I need to know when this is a terrible idea.
And the soggy cereal cafe, actually, it was very good.
I know.
I want to come to your Thanksgiving dinner.
I mean, yeah, you got to mix it up sometimes.
But, yeah, sycophantic. Like, has there ever been a large technology company that is engagement-driven that has resisted, like, actually made responsible product decisions? Maybe this is a bit cynical, but, I mean, there's just no way I see them being able to resist turning that dial.
Maybe X, maybe X.
We'll get to that at the end of the show.
I mean, yeah, speaking of engagement.
Yeah.
But before we do that...
Actually, you're right.
Oh, my God, you're right.
We'll revisit that.
It's worth talking about.
Before we get there: another way, I think, to make a company building AI safe from manipulating users and falling into engagement hacks is to build a company, a multi-billion-dollar company, that doesn't build a product. This is from Inc.: The OpenAI co-founder has raised billions. He has no product plans yet. Former OpenAI co-founder Ilya Sutskever has no immediate plans for his AI startup, Safe Superintelligence, to release a product. But he has plenty of capital, $3 billion to be exact.
He said on Dwarkesh's podcast that it is very nice not to be affected by day-to-day market competition. Instead of following the business models of other frontier AI labs like OpenAI and Anthropic, which release new products in order to fund their massively expensive research, SSI claims to be entirely focused on building a world-changing, powerful artificial intelligence far more capable than today's products or today's models. Sutskever has previously said his company would build a superintelligent AI in a straight shot, with one focus, one goal, and one product. And by not joining the rat race and not needing to worry about releasing new products, his company will be able to make the $3 billion that it's raised go much further than his commercially minded competitors could. So, Ranjan, is that the way to make safe superintelligence? Just not have a product? Don't release anything? Don't do it? Is that what the plan is?
It's brilliant if you think about it. Like, why have a product? Why have a product? Just be worth billions of dollars. And it is the safest thing you can do. In a way, it's also superintelligent: just don't have a product. Thinking Machines as well, Mira's. It's kind of gauche to have a product these days. No, I think it's a terrible, terrible use of funds. Just actually build something.
Also, Dwarkesh asked him, how are you going to make money? And this was a meme on X over the weekend. Ilya says, the answer to that question will reveal itself. I think there will be lots of possible answers. And people posted that screenshot and they're like, you know, my wife, when my wife asks when I'll clean up or something: the answer to that question will reveal itself. I think there'll be lots of possible answers. I don't know. What do you make of all this? It's kind of weird, isn't it?
It's kind of weird, isn't it?
We joke, but, I mean, the classic HBO Silicon Valley episode, like, making revenue is the worst thing you can do. This is not new, basically. Like, who was the character? Russ, was that his name?
I could never watch that show. It was too close to home for me.
Oh, really? Oh.
I mean, but the way the whole scene plays out, it's like, you know, they're talking about how they're going to actually scale revenue, and then they're told by their investor: do not make revenue, because then everyone can extrapolate out and make projections, and that's bad. And instead, you want to be pre-revenue. Now it's pre-product, but it's the same thing.
I mean, after this little, you know, rare public sighting that we've had of Ilya, are you more or less optimistic about what he's building?
I will admit, personally, that level of, whatever we want to call it, I don't know, is it a startup, is it a research lab, is it a company? Like, I don't pay that much attention to them, just because, even working directly in AI, I mean, yeah, I just can't get my head to think about what is going on over there. I don't know. Does this help you gain any clarity?
Yeah. No, I'm not sold by this. I mean, I think the VCs are betting that there's a chance that he figures it out. But I think the more likely chance is that that money is going to be lit on fire.
Well, I'm not even going to say that. It's more, yeah, like, I think it certainly captures the moment very well. But again, I mean, maybe he does come up with something. It's more like trying to gauge an understanding of what's actually going on with SSI and with Thinking Machines and stuff. I mean, it's impossible. There's no way we could wrap our heads around it.
Well, Ranjan, to that, I have to say: the answer to that question will reveal itself, and there will be lots of possible answers.
Well played. And in terms of safe superintelligence: I had asked that question, has there ever been a company that has actually created a product or a feature that runs counter to its engagement-based model? And as you had just mentioned, I will have to say today that Elon Musk's X actually did that this week. They built what I think is one of the most fascinating, incredible features. It shows the location that your account is based in. It shows how many username changes you've had. And it kind of shines a light and brings transparency onto what we've all known or suspected, or at least we in the technology industry have. I'm guessing the vast majority of people don't actually realize that most of the engagement-driven accounts they're looking at are doing it completely just to rage-bait you, just to potentially make money via the creator program, but actually do not believe what they're saying and most likely are not who they say they are.
So NBC News had covered how X's head of product, Nikita Bier...
Is it "buyer" or "beer"?
Beer.
Beer. ...teased the feature last month as a way to help users verify the authenticity of content they read and limit the influence of troll farms. And we saw, like, an account that calls itself Ultra MAGA Trump 2028 and claims to be based in Washington, D.C. It's listed as being based in Africa. My favorite: an account with the username @American, complete with a profile picture featuring a bald eagle over an American flag, is based in South Asia. It's based in Pakistan. Like, I mean, first of all, to get that handle, @American, good for them. Nice job. Good for that troll farm. Whoever you are, I'm hoping you own a bunch of domains as well.
But like, to me, the reason this is, honestly, kind of an amazing moment: to me, this is the single most effective, impactful feature that any platform has released to counter misinformation that I can remember. After all the years of Zuckerberg on Capitol Hill and whatever else, this actually brings more light to misinformation than anything.
I thought this was stunning, and it's amazing that it hadn't been done previously. And you're right, it exposed a lot of folks who were trying, in a U.S. context, to exploit divisions in the U.S. to, you know, make some money or grow an account, posing as if they were in the U.S. while posting from other countries.
And it isn't just in U.S. politics, it's in global politics. If there's a space to be enraged, somebody will find a way to make money from it from a different country, because they don't care about the health of that country; they just care about the bottom line. And so, yeah, you're going to see less rage-baiting, because people won't fall for it, hopefully, because they'll be able to see exactly where these accounts are from. And the fact that it's taken until 2025 to do this is bananas to me. And I'm glad it's there. I think it's a much better Twitter because of it.
Yeah.
No, no.
And it's a reminder that, like, since 2016, we've been,
and I've written a lot about it, thought a lot about it,
or in terms of misinformation, like, there,
and even in margins, we'd written around, like, basic tweaks you should make to the platform.
to any of these platforms to actually make things work better.
And, yeah, it's such a good reminder that, like,
there's really straightforward things all these platforms can do
to just make them operate much better
and just make people more aware of what is happening to them
and what they're seeing.
And Elon Musk and X, they're leading the charge against misinformation and freedom and a stable democracy, it looks like.
Wait, they're leading the charge against?
No, no, sorry, you're right. For, for stability and democracy.
Yes.
But I will say, on my account, they missed a crucial piece of data. They tried to peg me, and they got it wrong. They said I had zero username changes, but I had one, and they didn't find it.
Oh, really?
It was A.Kantrowitz, and now it's just Kantrowitz. So to Elon, Nikita, and the whole team there, I have to say: good start. Get back to me when this is better.
And one thing I also have to say, to all my Bangladeshi and Indian and Nigerian accounts out there that are creating MAGA-based content: I'm not hating on those people. I genuinely, like, in a way, it's an opportunity. They make good content. They make good rage-bait content. And it was given to them as an opportunity. So I'm not going to quite shout you out, but I'm still going to say, I mean, it's not a bad job.
I see what you're saying, but I can't get on board with that.
No. Come on, man. Come on.
I mean, to build an account, these accounts are just trying to rip people apart.
Maybe you're going to make money. It's the business model. It's given to them by the platform.
Well, I expect more of people than to do stuff like that.
Oh, come on. Really?
It's terrible. It's a terrible thing to do.
I expect people to respond to incentives.
Fair enough. And anyway, look, I think it's good that, for whatever reason, Twitter has finally woken up to this and given us the transparency that we need. We can still see the misinformation, but at least we know where it's coming from. Let's agree on that.
I agree, agreed. I'm thankful for that.
I'm thankful for Elon Musk and X.
Doing this one thing.
Yeah.
Okay. Well, Ranjan, I'm thankful for our continued conversations here on Fridays, and I look forward to many more.
So thank you. Happy Thanksgiving. Happy Black Friday. And let's do it again next week.
All right. See you next week.
See you next week. All right, everybody. Thank you for listening.
We will be back on Wednesday, I believe, with a couple Anthropic researchers talking about how
AI models can be turned evil and what to do about it.
So we'll be back then and we'll see you next time on Big Technology Podcast.
