Big Technology Podcast - OpenAI’s User Growth Miss, Musk vs. Altman, Prediction Market Ban
Episode Date: May 1, 2026
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) OpenAI misses revenue and user projections, per WSJ 2) Why can't ChatGPT break the 1 billion user mark? 3) Is consumer AI not working? 4) Ranjan makes the case for consumer AI 5) Musk vs. OpenAI at trial 6) Potential outcomes of the case 7) Musk admits distillation? 8) Cloud services crush earnings 9) U.S. Senators ban themselves from trading prediction markets 10) CBS Sports' shameless gambling article --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
OpenAI is growing slower than anticipated.
What does that say about the broader AI story?
Elon Musk and Sam Altman meet in court, and Anthropic's valuation
is approaching $1 trillion.
That and more is coming up on a Big Technology Podcast Friday edition, right after this.
Next week I'm live at Knowledge 2026,
ServiceNow's annual conference in Las Vegas,
where Enterprise AI moves from promise to production.
I'm sitting down with ServiceNow's president and CPO Amit Zavery
on the platform strategy powering it all,
their people and technology leaders on what AI means for the workforce,
the engineering team behind ServiceNow's Nvidia partnership
on what it really takes to ship AI at scale, and Ulta Beauty on deploying AI across
300 stores.
These are the conversations you won't hear anywhere else,
and new episodes are dropping on my YouTube page starting next week.
We've all heard the stat.
95% of AI initiatives fail.
It's not because the technology isn't ready.
It's because you don't have the right process or the right partner.
Meet Aboard.
Aboard is your partner for AI transformation, which means they listen, use their very own powerful software
tools, and deliver exactly what your company needs to thrive in the age of AI.
Working with big and small clients, Aboard always delivers in weeks, not months.
Your AI revolution is just beginning.
Visit Aboard.com to get your AI rollout right.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional
cool-headed and nuanced format.
We have a great show for you today.
So much news to break down, including OpenAI's user and potential revenue miss.
We'll talk about the internal numbers, the company's response, and what it means for the rest of the AI story.
We also have Musk and Sam Altman in court.
And of course, a big week for big tech earnings.
Joining us, as always on Friday, is Ranjan Roy of Margins.
Ranjan, great to see you. Welcome back.
Good to see you, Alex.
A lot to cover this week.
A lot to cover.
And it's weeks like this, when some data comes in,
that you can start to see some broader stories
and really where the AI trend is moving.
Let's go to our first story here.
OpenAI misses key revenue and user targets
in a high-stakes sprint toward its IPO, from the Wall Street Journal.
OpenAI recently missed its own targets for new users and revenue.
Stumbles that have raised concern among some company leaders
about whether it will be able to support its massive spending
on data centers. OpenAI is, of course, pushing back on this story. To me, honestly, like, the revenue numbers are one thing. Obviously this is a new category; you're going to have revenue misses, right? That sort of comes with the territory. But to me, the bigger part of this story is that OpenAI had a goal to hit a billion ChatGPT users by the end of 2025. It missed it. It still hasn't even announced that number. So the latest that we have is 900 million active users of ChatGPT. That came in February 2026, and the billion is yet to be found.
Now, of course, it's still a big product.
But we saw torrid growth last year and some big moments with the Studio Ghibli stuff.
Voice, of course, was important.
Now that consumer story is tailing off.
And it makes me wonder about the future of consumer products in generative AI.
So what do you think?
Well, are they an enterprise company or a consumer company?
I think, like, the new focus mantra, the new pivot to enterprise. I've been saying this for months now, that they have to have some kind of general focus and decision and strategic direction. Or is it Codex and actually the developer communities where they're going to see growth? But I think going from 900 to a billion, it is kind of amazing, because GPT Image 2 went mildly viral, certainly as much as the Studio Ghibli stuff. When was that, six months ago, eight months ago, whatever it was? It's, time is a flat circle right now.
It's like, yeah, exactly.
Could have been last week for all I know.
But this is exactly where, like, maybe you don't need to get to a billion users, and that's okay.
And it seems like that's the strategic direction they're going.
I saw one thing that showed they went from three million users in Codex to four million. And that was impressive, and that is impressive. But they're trying to do everything all at once and actually pushing back when this reporting comes out, rather than Sarah Friar saying, you know what? We're okay not hitting a billion users, and that's fine, because the way we're building our business is not purely going to be on ChatGPT consumer growth. But they're trying to have it every way. And I think that is potentially setting them up for issues as more official numbers come out and they try to push to IPO.
Okay. So let's put a pin in the enterprise side of things. And OpenAI's response to the story, by the way, I have it from a spokesperson: "This is ridiculous. We are totally aligned on buying as much compute as we can and are working hard on it together every day." So OpenAI is vigorously disputing the idea that they are wondering whether they should buy more compute. And you could even say that that's going to be their strategic advantage over Anthropic as this battle heats up. And we'll talk about Anthropic's forthcoming fundraising pretty soon.
But I think that we'll get to Enterprise.
We've been talking a lot about Enterprise.
But I am curious to hear your perspective on the fact that this has sort of hit a wall with consumers.
Let's take all the data points together.
ChatGPT should have been at a billion.
It's not.
Consumer sentiment, or sentiment overall about AI: extremely negative. In fact, I had somebody come into the comments on Spotify and be like, I heard an ad for your podcast, FU and F AI. Like, that's how negative. I'm like, well, I'm not even the industry; I'm being critical here. But the very fact that I'm talking about AI got me a double FU this morning. And then the last thing, and I think this is important: this is new data that I got from Apptopia. So this is exclusive to the podcast here.
Daily active user growth across all AI apps, so that includes Perplexity and Claude and the Geminis of the world. ChatGPT growth is not just tailing off, it's down. So you can see that while the space is growing overall, the growth has completely flatlined. And it's been down, according to Apptopia, four of the past five months. So this is a real slowdown.
So what's happening?
Well, does the Apptopia data actually, I mean, their name is Apptopia,
include like app usage or is it mobile web and web usage?
It includes app usage.
So that's interesting because we also, and I'm going to get to this in a moment,
but maybe it's worth bringing up now.
Now, if you are a user of these apps, your usage is actually up, but the gross addition of users is slowing down.
Okay. Gross, I mean, you know, sort of the number, not like as in, this is a nasty addition of users. We're talking gross and net. Come on, our listeners know gross and net.
Our listeners know gross.
I sure do. Well, hold on.
To clarify, it's actually a declining aggregate gross number of users in these apps that the data is showing. Or the additions are slowing down.
Yeah, yeah.
I mean, at this kind of base, that doesn't surprise me. I do believe that everyone who is interested has downloaded a ChatGPT, a Gemini, Claude, whatever else, and has started to use it. I think even from my personal experience, friends, family, everything, like, everyone already has it on their phone. 900 million, I mean, I think in the U.S. it's probably reached relative saturation. So to me, the actual growth side of it is not as much of a concern. I do think, how do they find those next 100 million users? You don't hear a lot of talk around international growth and strategy from these companies and this whole market. Like, I don't know, in India; obviously China is going to be its own very, very specific market. In Africa, like, where is the next vector of growth? Because when you're at 900 million, you've tapped out the U.S. pretty much, I'm guessing, as much as you're going to. And then as long as the average person who is using it is using it more, it's still, you know, moving in the right direction.
Well, let me push on this a little bit further.
In Enterprise, we're seeing all these different use cases, right?
We're seeing, of course, the agentic use cases that we talk about all the time.
But we're also seeing purpose-built apps for finance, for legal, for medicine, right, all over the place.
Any industry you look at, there's a purpose-built GPT app that's actually proving valuable, taking off, building users, and having real, significant valuations.
There's a new one I hear about every week.
Consumer, it hasn't happened that way.
You would think that with the technology this powerful,
there would be a breakout of consumer apps.
And we're going to get into big tech earnings in a bit.
Meta is a case in point, right?
They have had this technology.
They're trying to build a consumer app with it.
Yes, they're trying to develop the foundational models,
but they're also working on the applications.
It's just not taking off with consumers, is my point.
Do you think I'm wrong about this?
Yes, completely.
And this is going to be my rant for the week or one of many potentially.
But it's interesting. The entire Meta ecosystem experience is now powered by AI. The way everyone talks about AI does not have to be, like, yes, Meta AI, the chat experience. I don't know anyone that's using it. I know they put out crazy numbers, and I'm sure people get kind of looped into interacting with the chat experience. But every time you scroll your Instagram feed, the recommendation engine that's powering the ad that is being served to you, that was Meta's, like, greatest. I mean, they broke out of the Apple iOS 14.5 prison and kind of showed why everyone is more addicted to Instagram than ever. Every ad that's being created probably has an AI component to it. Like, I think actually Facebook is just one big AI slop fest, if you've logged in recently.
So, like, I think the end user having a chatbot experience like ChatGPT is where everyone's head goes. But in reality, so much of consumerization: Spotify, the number of AI-generated songs, for better or for worse, that are showing up on the platform and getting plays is increasing. So I think the big disconnect here is everyone is thinking consumer generative AI, or consumer AI overall, is: are people downloading and asking questions to a chatbot? Meanwhile, every existing consumer experience, restaurants on DoorDash creating much more engaging images, like, it's happening everywhere. So to me, that is the real consumer AI application, not how many people are using ChatGPT.
And apparently Amazon even has these little AI-powered podcasts about their products.
And Katie Notopoulos from Business Insider was playing one of the podcasts about, I think, eczema cream. No, no, diaper rash cream.
You can write your own questions and the host will address them.
And she just writes, like, my butt hurts.
And they're like, that's a great question, Katie.
Amazon, like the growth in Rufus from what I've been hearing is actually spectacular.
I've been using Rufus more myself.
Now, what's Rufus?
Rufus is Amazon's AI. Actually, it is a chat experience for the most part. But basically, you can ask questions directly on an Amazon product page. Now, in my Amazon, probably because I've been using it more, the entire left rail when I log in is actually Rufus. So they are pushing people more toward it. Again, you ask questions. It not only gives you recommendations; you can ask questions about a product. Does this have USB-C charging, when I was getting something recently. But also, they're actually injecting their entire Amazon ads business directly within Rufus as well. So when we've been talking about, will ChatGPT have ads, they're already building out this entire AI advertising ecosystem directly. But it's embedded in the product. It's not someone going to ChatGPT. And ChatGPT shopping has not taken off in the way everyone was expecting six, eight months ago. Meanwhile, Amazon is figuring it out. So I think there are so many pockets. And I know I work in the AI industry and I don't want to be biased, but you know I can be very skeptical about this. But this one, I have to push back on: consumers are engaging with AI more than ever.
Okay. Let me push back on this one more time. Then we can move on to our other stories.
First of all, I would say, and we've had this debate before, I think you really have to take the recommendation engines, the AI recommendation engines, and put them in one category, and then the generative experiences in another category. We've had AI-based recommendations for a long time, like feed sorting and ad serving. But what I'm talking about specifically is: how does generative AI translate into real consumer experiences? And yes, you can chat with Amazon, and you can listen to a podcast about diaper cream. You know, that's all exciting. But what I'm saying is, where is the wave of consumer applications that, you know, we might have expected? Remember Character.AI? There's no AI character or AI friend app that's taking off. There's no, like, explore-history app that's taking off. There's no, like, AI stylist app that's taking off. There's no prominent AI dietitian that's taking off, et cetera, et cetera. There are definitely categories of consumer products that just do not have a consumer generative AI application taking off in the way you thought it would. And then again, like, you're seeing this slowdown in ChatGPT growth.
Not that it's nothing.
I mean, it's going to hit a billion users.
The question is when.
But, like, even OpenAI, and they said they were stretch goals, but even OpenAI anticipated that it would hit a billion, and it just hasn't.
So what's your response there?
This is actually a perfect example.
I'm guessing this is as far away from your everyday habits as possible, but have you ever used a dress-up app?
No, this is not something I've used.
Yes, that was a very good prediction ahead of time.
Well, no, this is another thing: working very closely in the retail and consumer world, this is something we'd started experimenting with in my previous experience at Adore Me. Like, virtual dress-up apps and try-on apps actually have been exploding in popularity.
Then you have Google, actually within Google shopping,
virtual try-on is actually gaining a lot of ground
where you can actually find a model exactly your size,
you can even upload your own picture,
and then you can actually try-on items within the Google shopping experience.
Those are all generative experiences. Those are all not going to show up in Apptopia like a ChatGPT experience would. But I do think, again, it's being integrated into the things people are doing every day. And also, LLMs are feeding into Instagram, like, their recommendation engines. It's no longer just machine learning anymore. So it's still embedded in there as well.
Okay. Look, I think the reason why I'm bringing this up, and the reason why I wanted to start the show this way, is because we have, of course, this concrete data point from OpenAI. But obviously every company is making this pivot into some form of agentic experience, like the Codexes and the Claude Codes of the world, and the enterprise move. And so my question really is: are they making this move from a position of strength, where, like, you'd like to have massive growth of ChatGPT, but you see that there's potential in this enterprise and agentic application and say, okay, we're just going to place our bets there? Or are they moving out of a position of weakness, where it's like, oh, it's not growing as much anymore, and now we have to make our move?
So that's where I can turn and get skeptical again. I think they're moving, like, it is a strategic mistake. And my kind of hot take on this is: when you have a company that's a developer-first culture, everyone is going to get more excited about Codex. And why is everything moving to the command line? Most average people are never going to do anything from a command-line interface. Yet so many of these projects, so many of these products, are moving in that direction. And people get very excited. And I even see all this stuff around how everyday users are going to be actually in the command line using Codex. No, they're not. So I think it's a bias within these organizations, because they're developer-first cultures. And I think it's a mistake.
I think there's, like, a lot of opportunity from everything I was saying. And actually, again, Amazon, I think, gets it. You don't see Amazon... They know: this is our product, this is our business, this is our customer. So we are going to embed generative experiences, or AI-first experiences, throughout, and we're going to move things in that direction. And that's where I think, like, everyone is rushing. This is what I work in. And again, you're seeing, like, Anthropic had this historic run, and suddenly with 4.7 you just see all this negative sentiment come out around cost, and people instantly start stepping back a little. And then Codex comes in, and 5.5. And I don't think, when everyone is rushing toward the same thing, that for a company like OpenAI that has such a foothold in consumer, it's the right decision.
Your advice to OpenAI would really be, like, stick with consumer. Don't give up on the Sora-type stuff, and try to own the consumer side of generative AI as opposed to shifting to Codex.
Yeah.
Unless they're almost accepting Google will beat them at it.
Which is not unreasonable.
Like, when you are Google and you're already on the... I don't know, did you see this study around how Google, I mean, in an evil way, gave Chromebooks to every student in America, and now actual YouTube usage during school hours is up exponentially?
But I could have guessed that.
Yeah, but for better or for worse.
Hopefully they're watching Big Technology Podcast there.
Well, as long as the first graders of America are just...
Actually, my son, who is in first grade, if I ever play our podcast in the car when we're driving, he gets so mad.
And he's like, this is the most boring thing ever.
So I'm sorry, I don't think the first graders, that demographic, is our biggest fan.
These are the people we're angering: first graders and anti-AI listeners.
Hate mail from both.
These are the constituencies.
He's leaving two-star reviews without me even knowing on my phone.
But Ranjan, I mean, okay, so this is the thing. My other side of it is: even though, let's just take the stuff to be true. Even if it were true, revenue miss, user miss, but deeper engagement, I would say OpenAI is heading in the right direction with Codex.
I mean, if you think about Anthropic, right? Last July, I was at Anthropic speaking with Dario. He was happy that they were making $4 billion ARR. Now they're at 35, potentially.
There is a tremendous market opportunity to go after with this agent-style use case in the enterprise. And so to me, if OpenAI thinks that they can pass Anthropic because they're going to have more capacity and potentially on-par or better models, go there.
No?
I mean, I work in that at Writer. Like, I see it firsthand. It's very attractive. And when it's working, it works very fast. But it's competitive. It's also, like, for a company of OpenAI's size... again, at Writer, we've been enterprise-only for our entire life. So that's the game. For OpenAI, it hasn't been the game. And they have this asset of 900 million users. They can be integrated directly within everything. And the important thing here is you can grow revenue fast. And I do think this is all ahead of the big IPO race and battle here, because you can grow revenue a lot faster by getting a bunch of developers using your tool, not paying attention, token-maxing and just blowing out tokens, and you'll increase consumption, you'll increase revenue very quickly. But that's a short-lived phenomenon versus: you have every person in the U.S., you own the verb. To search with AI is to ChatGPT something. Like, that is a tremendous asset. And I think they're kind of ceding it to Google right now.
Okay. Well, I think we'll just have to watch this play out. As OpenAI does this, of course, it has the thorn in its side of Elon Musk. And I'm curious if you've been watching the trial between OpenAI and Musk this week, and if you have any thoughts on whether this trial will lead to anything of consequence.
Of course, Musk is suing OpenAI for taking his money, going from a charity to a for-profit, unjustly enriching themselves, and betraying the charitable trust. That case is taking place this week.
What's your read on it?
It's rare that listeners will hear me agreeing with Elon Musk, but I think this is one case where, like, it feels like, at a very simple logical level: they were a nonprofit. And that was the entire founding story for a long time. I mean, they are a nonprofit.
Hold on. What is the current status? So much happens that I can't even remember it. Have they converted or not?
Yes. They've converted, but they still have the nonprofit.
That owns a certain amount of it, yeah, yeah.
Like, we've joked for a long time about how opaque the structure is. I think it puts Elon Musk in a pretty good spot, just from a very human, logical... like, if you're trying to convince a jury, I think it's a pretty good argument.
I think there's been zero accountability for any large technology company for so many years that... the idea that anything would ever happen that would actually derail the business, because there's just so much vested interest in it. Like, I don't know. The cynic in me just assumes nothing will actually happen.
Maybe there's a fine.
Musk and Sam put on a good show.
But do you think there will actually be any consequence coming out of the trial?
No, I don't think so. I mean, maybe there will be a fine to OpenAI, because they'll have to end up paying that money to the nonprofit. But I agree with you. I think that Elon has a leg to stand on here. I mean, he gave 30-plus million dollars to found this thing, and he currently has, like, no share in it at all. I don't see how that's fair. And of course, OpenAI's argument is, like, well, Elon gave this as a donation to a charity; he can't look at it as an investment. And I'm like, well, of course he gave it as a donation to a charity. You were a charity. You set up that structure with him in the beginning. If you began as a for-profit, he would have looked at it as an investment.
Now, I know Musk is trying to get Musk and Sam Altman removed.
Sorry, he's trying to get Sam Altman and Greg Brockman removed from the top of OpenAI.
I don't think that's going to happen.
But I wouldn't be stunned if the jury ended up siding with Musk here.
And, of course, it's advisory.
So we'll see what the judge does.
I don't think the judge is going to blow up Open AI, but there could be some consequences.
Like, but what, though?
A couple.
Yeah, billions, billions, billions going from the for-profit to the nonprofit.
I wouldn't be stunned.
I mean, I guess it would be a significant amount of billions.
And by the way, that could hamper the whole, you know, build-out. Can you imagine you're an investor, and you put all this money in for them to, you know, have data-center capacity to compete against Anthropic, and then it has to go elsewhere?
I don't know.
So the interesting part here is, one, the fact that Grok is a direct competitor, xAI, like, it just makes the whole thing even richer, I think, in terms of how they're approaching this.
Did you see that Elon was, like, promoting the Ronan Farrow article about Sam Altman across Twitter, X?
Yeah.
So talk about what happened there.
So talk about what happened there.
Yeah, so users were reporting it was actually, like, a new UI experience, almost, of having an article pop up, both in the standard ad format, Elon retweeting it, but also even just popping up at the bottom of your screen: the Ronan Farrow New Yorker article about Sam Altman having many faces.
And did you read it?
If you've been following Sam Altman and OpenAI for a long time, there wasn't anything groundbreaking in it.
No surprises there.
Yeah.
But it painted a pretty strong picture, especially if you're not following closely. But it's still funny to me that this bastion of free speech and non-manipulated speech, supposedly, of X... literally the owner, going to trial, is able to kind of just manipulate and control what people are seeing.
Do you think that OpenAI kind of has a Zuck-Winklevoss argument to make here, which is like: if you were so smart, you would have created Facebook, but you didn't? Like, they could point to the fact that most of the value has been created by them. And Elon has sunk billions into building xAI, which has had mixed results at best.
Oh, that would be... has that been said yet? Because if you're listening, Sam, that's the argument.
Like, I feel this whole thing is for show. I mean, I think they both recognize it, and Elon's trying to kind of cut them off at the knees ahead of their IPO and boost xAI. Obviously there's a strong show element, and that would be the greatest: it's like, how's xAI going, bro? Like, you already paid your $44 billion for X, for Twitter, and you're jamming it into everyone as much as possible, but we built something people love. We basically, like, invented this entire industry. How are you doing?
I will say, though, there are, I mean, there are Grok users out there. I was flying back from Vegas to New York and sat next to a guy who drives the subway. And we were talking about AI, and he goes, yeah, I use Grok. I don't have to badger it to give me an answer I want. So there is appeal out there. But clearly, as far as the big businesses go, it's not holding a candle to OpenAI or Anthropic right now.
Well, what do you think the Grok strategy is in this? Do you think they're going to pivot to enterprise, away from consumer?
No. Speaking of OpenAI and consumer, maybe they should lean into Bad Rudy and that other AI girlfriend that Musk made. That could be the potential growth area there, just from a business standpoint. Maybe if OpenAI is truly kind of moving away from consumer, it does, like, open it up. But I guess, like, why hasn't Meta AI... I mean, it's not a good product, like, the actual chatbot experience, from every time I've tried using it. But to me it still feels like, if you already have the consumer's undivided attention, they don't have to open up another app and experience. Like, someone should be killing it on this, whether it's Meta, whether it's Elon and X. But it hasn't happened yet.
This is the point I was making at the beginning of the show.
All right.
All right.
Thank you for seeing the light.
I guess that chatbot... I guess Google. Google has shown...
Google, for real?
I don't think so.
No?
Do you think Google has this great hit consumer AI chatbot?
I mean, Gemini. Like, funneling users from your core experience to a standalone app, Google has shown they are able to do that far more successfully than Meta. I mean, I think, based on Gemini's numbers in the consumer market, they've shown you can do that.
Okay, before we go to break, because we have a lot more to cover today, you highlighted a section of dialogue in this court case.
You want to share a little bit about why that's important and what it is?
Yes.
So there are a few really interesting parts that have come out so far in the trial, including Elon playing, like, logical jujitsu, about, like, it's a yes-or-no question that's like asking me, do you beat your wife? Which, I don't know, doing that in a courtroom is just so ridiculous to me. It's like, I did high school debate, and that felt like the kind of thing you would do when you're a freshman. But more important, relative to the industry: Musk was asked, do you know what distillation is, by OpenAI's lawyer William Sabbitt. He says it means to use one AI model to train another model.
And he was asked, has xAI done that with OpenAI? Musk replied, generally, all the companies do that. So that's a yes, partly. Musk continued: distillation is a technique where a smaller AI model is trained to mimic the behavior of a larger, more capable model, making it cheaper and faster to run while preserving much of its performance. And he continued. Sabbitt: has OpenAI technology been used in any way to develop xAI? Musk: it is standard practice to use other AIs to validate your AI.
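The distillation Musk describes there, training a smaller student model to mimic a larger teacher's outputs, can be sketched as a toy in a few lines. Everything below is a made-up illustration (a tiny fixed logistic "teacher" with invented weights, and a trainable logistic "student"), just to show the mechanism of learning from another model's soft outputs rather than from labeled data:

```python
import math

# Toy "teacher": a fixed, already-trained model (a logistic function with
# assumed weights 2.0 and 1.0), standing in for a large, capable model.
def teacher(x):
    return 1 / (1 + math.exp(-(2.0 * x + 1.0)))

# "Student": a smaller trainable model of the same form, starting untrained.
w, b = 0.0, 0.0

def student(x):
    return 1 / (1 + math.exp(-(w * x + b)))

# Distillation: fit the student to the teacher's soft outputs (its
# probabilities), not to any ground-truth labels. For cross-entropy against
# soft targets, the gradient w.r.t. the logit is (student - teacher).
data = [i / 10 for i in range(-20, 21)]
lr = 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x in data:
        err = student(x) - teacher(x)
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# After training, the student mimics the teacher: w, b approach 2.0 and 1.0.
print(w, b)
```

Real labs do this at vastly larger scale, typically with sampled outputs or logits from the bigger model, but the core idea is the same: the student's training signal is the teacher's behavior.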
I think, like, this is significant, because the distillation conversation, when it comes to Chinese models and DeepSeek, has been a pretty loaded one. And the fact that he's just admitting this openly and saying it confidently... still, like, from a commercial perspective, what does that mean? It's kind of crazy to me. Like, you would think... and maybe, I guess, there's probably not a lot of law and regulation around not doing this. But it's still, I don't know, again, from a purely commercial perspective, I was shocked that he was saying this. Were you?
Yeah, definitely. No, it's stunning. And clearly it's happening everywhere. And it goes to sort of a question I asked Greg Brockman last week, which is that like, is it going to be economically viable to train these models if you just get distilled? And, um,
I don't know. There's coming, there may come a point where, you know, right now we're seeing
real leaps in every, every new model to a degree. And it might come a point where it sort of levels out.
And once that does, you know, how far is the distillation going to be behind the proprietary stuff?
Probably not that far. And so that sort of gets to the question of, well, do we end up seeing
sort of intelligence at a certain point commoditize and compute at a certain point commoditize?
and we end up in a price war because everything is basically delivering the same.
And so then you compete on price.
I mean, that's sort of, that was the logic behind this kind of memorable quote
that Mark Cuban gave me in the episode we did on Wednesday where he said,
Open AI is shitting money away at scale because that, that's his belief is effectively
you kind of get to that place.
What do you think, Ranjan?
Well, are you saying that the models will be commoditized and it will be about product
and price.
Yeah, that could be the case.
Okay, okay.
Just checking, just checking.
I'm advancing this theory.
I'm not, you know, sort of throwing it out.
I think it's possibility.
Well, actually, on that, I don't know if you saw, on the topic
of both distillation and price, there's a lot of hype around DeepSeek V4, which is supposed to be,
again, a top-level frontier model at a fraction of the cost, and almost certainly
has distillation at its core.
And then like, I don't know,
had you seen like Brian Chesky,
who's, I think, been on the show a few times.
Just one.
Yeah, okay.
So they're talking about using Qwen from Alibaba,
from a cost perspective.
Basically, I do think we're moving to a world
where, let's say, you use Anthropic and OpenAI
to actually build,
but then start to cost-optimize
towards cheaper models.
And maybe it's within their ecosystems,
maybe it's just an open free-for-all
in terms of any model.
I do think that's where things will go.
But did you see there's apparently a House of Representatives recommendation
around banning the use of Chinese models,
actually calling out Airbnb specifically?
Really?
No, I haven't seen that.
I mean, I have seen.
I mean, if you look at,
like apps like perplexity, for instance, like they'll allow you to use like the Open AI or
Anthropic models or Kimi K2, which they have, of course, like they've downloaded the weights,
they've post-trained on their own.
They've sort of given their own version of that model.
But I just don't see the Chinese models going away because ultimately if you ban the Chinese
models, aren't you effectively saying like you're banning open source?
I mean, there are the Nvidia Nemo models, which are open source, but outside of that, it's
mostly a China thing.
Well, actually, so here.
So two Republican-led House committees are probing specifically Airbnb and Anysphere, which is the owner of Cursor, over their use of Chinese models.
So I found this really interesting, specifically because, like, we didn't talk about Meta and Manus last week.
I mean, to me, like, first of all, we could definitely get into what's going to potentially happen there.
But China, that's quite the salvo.
You cannot acquire our technology, even after that technology has moved to Singapore, trying to get out from under CCP oversight, to actually say we're blocking that transaction.
To me, the U.S.-China tech Cold War heated up significantly when that happened.
And then when I saw this, that the Republican House committees are actually throwing out
this idea that you cannot use Qwen or other Chinese models,
I think that whole Jensen-Dwarkesh exchange is going to become
far more significant, a bigger story this year.
Okay.
This week,
I'll just say one thing.
Then we really need to go to break.
This week I heard probably the best explanation of what Jensen's position is,
which is effectively:
if you don't sell the American, or Nvidia, tech stack into China,
you will force the Chinese model makers to optimize basically
algorithmically on Chinese chips, like chips from Huawei.
In the event that they are able to make those optimizations and, in some ways, outpace
the American models or become an appealing alternative, they could potentially build those
on Huawei chips alone, not make them
compatible with the Nvidia stack, and then do their own form of export controls on the U.S.
or the rest of the world, and basically have control over AI.
So let's say they make state-of-the-art models built on Huawei chips.
They could hold the U.S. back from actually using those and effectively restrict our ability
to have cutting-edge AI.
By putting that constraint on them, you sort of put yourself over a barrel,
where you could potentially not have access to the AI that you want.
I think that's a circular but reasonable argument.
But question, should large tech companies in the U.S. be allowed to use Chinese models?
Yes.
I mean, you should be able to download the weights, do the work on your own, and then run them.
I think so.
Okay, but only the open-source side of it, not directly connecting to the Alibaba, but Qwen, infrastructure the same way you would to an Anthropic? Yes or no?
It depends what you're doing.
It's a yes-or-no question. God. Yeah, you know, Mr. Senator, I'm going to say yes. I'll say yes.
I don't have a problem with it for now, until we see otherwise. I don't think it will
lead to a clear catastrophe right away.
I mean, I don't know, this is kind of
a weird rabbit hole to go down.
But is the fact that you can't get straight answers about Tiananmen Square going to impact
which hotel or apartment you book on Airbnb?
That would be weird.
Well, maybe feng shui, and I say this with a Taiwanese mother-in-law, could start injecting itself into Airbnb successfully.
Maybe it would have a much clearer understanding of that. Exactly.
So maybe this is we're both arguing for.
Okay.
That form of soft power, I'm on for.
All right.
Let's go to break. We'll go to break and come back and talk a little bit about big tech earnings and prediction markets right after this.
I've interviewed a lot of great tech founders on this show, and one surprisingly universal challenge comes up again and again, finding the right domain name.
It's something I ran into myself when launching big technology. The names you want are often taken, and it's tempting to just settle and move on.
But the founders I respect most don't settle on fundamentals, and your name is one of them.
it should immediately signal what you actually build.
That's what I appreciate about dot-tech domain names.
It just makes sense.
It tells the world your customers, your investors, anyone Googling you,
that you're building in technology.
Clean, direct, no qualifiers.
And I'm seeing more serious startups lean into it.
Nothing.tech, one-x.tech, aurora.tech, ces.tech,
ultra.tech, alice.tech, and so many more.
If you're building something tech first, don't settle.
Secure your dot-tech domain from any registrar of your choice
and make your positioning obvious from day one.
Look, if you have a kid in school right now, you know the drill.
What should take 20 minutes of homework ends up taking two hours and usually ends in tears.
And every good tutor, well, they're fully booked for months.
This episode is brought to you by Brainly.
Brainly is an AI-powered personal tutor built by educators, not a general-purpose chatbot.
It doesn't just give your kid the answer.
It walks them through step-by-step explanations so they actually understand the material.
It learns how your child learns, diagnoses when they're struggling, and builds a personalized learning path in under three minutes.
Available 24-7, there's no scheduling headaches, and it's just a fraction of the cost of a private tutor.
Finals are coming. Build your teen's study plan now. It only takes minutes.
Go to brainly.com slash big tech to get 50% off your first Brainly subscription with my code, Big Tech.
That's B-R-A-I-N-L-Y dot com slash big tech.
Most leaders know how work is supposed to happen,
but when it comes to how it actually gets done day-to-day across tools, teams, and handoffs,
they're mostly guessing.
That's exactly the problem Scribe Optimize was built to solve.
Trusted by over 80,000 enterprises, including nearly half of the Fortune 500,
it gives leaders a live view into how work is really happening across approved business apps
without interviews, manual process mapping, or extra effort from the team.
And because it's continuously analyzing real workflow activity,
the insights stay current instead of going stale the moment a process changes.
You can see which workflows are happening, where time is going,
and which tools are involved.
It automatically surfaces top issues, explains why they're happening,
and even recommends ways to fix them with estimated time savings.
And importantly, it's built with privacy in mind.
So activity is only captured in admin-approved business apps,
and user-level data is anonymized by default.
The kind of visibility that used to take months
is now just always on.
If you're ready to stop guessing and start seeing,
visit scribe.how slash big tech.
That's S-C-R-I-B-E dot how slash big tech.
And we're back here on Big Technology Podcast Friday edition.
Just to continue going on with my conversation
or my point here about AI consumer,
if you look at the earnings that came in this week,
if you were a cloud company, you were very happy.
If you were building AI consumer apps or you were building for consumers, you were either not happy or you were thrilled that you didn't invest a lot into AI.
So let's just break it down.
This is from CNBC.
You look at Google Cloud.
Google Cloud grew 63 percent, to $20 billion.
This is by far the strongest growth rate for any period since Google started breaking out cloud results in 2020.
That's massive.
AWS, by the way, which has been stuck in the 17 to 18 percent growth-rate range for the past few years, grew 28 percent.
Microsoft's Azure grew 40 percent.
If you are providing the AI infrastructure for this enterprise buildout, you are doing really well.
What do you think about this, Ranjan?
I mean, the numbers are insane.
63% at that scale.
I mean, and I guess it reflects, this is a public-company earnings breakout that kind of tells the Anthropic story we keep hearing about through fuzzy ARR numbers.
Here we have a clear 63% growth to $20 billion in a quarter for Google Cloud, which is nuts.
Like, I think, yeah, the question of whether there will be demand, or whether we're overbuilding capacity,
it seems like that question has been answered.
Do you see any holes in that story?
Yeah, so here's a tweet from Gary Marcus.
Sheer insanity: Amazon, Google, Microsoft, and Meta collectively are spending more money
than the Manhattan Project every single month, or 12x the Manhattan Project
every year.
And what do they have to show for it?
None are making major profits on AI.
None has a technical moat.
A massive price war is inevitable.
Few of their customers are seeing major returns on investment.
Greatest capital misallocation in history.
I mean, here's the question.
Is this big, massive bump in revenue that these cloud services divisions are seeing
just downstream of the major amounts of money that the Anthropics and the OpenAIs are raising,
and not quite sustainable without those big fundraising moments?
What do you think?
Okay.
I like your circular funding.
And again, actually, a lot of that funding is in the form of cloud credits, which is often recognized as revenue.
I'm not saying that's 100% what's happening, but maybe.
Yeah, yeah, yeah.
So on one side, I feel like, again, if you listen regularly, you know, I can be very skeptical.
And I will open AI or Anthropic have a successful IPO.
I'm not sure.
I feel like Gary Marcus and Ed Zitron and them like,
I wish they just said, okay, something positive or impressive has happened.
Like, not everything.
Gary has to a degree.
He did say that Claude Code is a combination of neurosymbolic systems and machine learning.
Oh, okay.
Which is fair, which is fair.
Like LLMs on their own without a harness, without a product, without like all of this.
Okay.
All right.
At least Gary's recognizing it. I do think the investment on the infrastructure side
is interesting, because, okay, the one argument against this, the obvious one, is that the demand is there and they've got to
keep building. But if I take what he's saying and extrapolate a bit, the idea is that the
economics of how they're investing are flawed. That they're building out assuming constant prices,
at today's growth and today's revenue,
and assuming it will scale linearly or exponentially like that.
Maybe, as costs come down,
if DeepSeek V4 and Qwen and others,
and people using open source,
drive the actual cost down dramatically,
then the amount they've invested could be pretty bad capital allocation.
Right. I mean, even though the use cases
are there, which they are, and even though this won't go to zero, we cannot discount the
fact that there could be a collapse here because of the very factors that Marcus is pointing out.
Yeah, I'll say it's true: no one understands the economics of any of these
businesses right now. Like, what is a true margin? Again, we've seen it with
Anthropic: that insane, spectacular growth, then the pushback on price
after 4.7 came out, and the recognition that a lot of it is subsidized anyway.
So what Anthropic's expenses are, we'll eventually have a clearer picture of.
And how all that relates to the infrastructure side, I guess, is fair.
What is an average margin for an AI business?
No one knows yet.
Exactly.
So that is something that, I don't know,
I think we need to keep coming back to on this show.
You know, at first it was like,
is this technology going to work?
The technology is working.
And the question is,
these business decisions that are being made,
there's no other way to really describe them
than YOLO decisions, right?
Nobody knows what's going to happen here.
The demand is coming in,
but it's a brand new category.
There's bumps in the road.
and we could end up seeing a price collapse.
I also, actually, when you say YOLO, it makes me think the executives, the CEOs of these companies, are all in the same circle, which makes this interesting too.
When everyone around you that you have known, respected, hated, when that is basically your social circle, or professional circle,
your closest LinkedIn connections, is saying the same thing,
it's going to exacerbate how you think.
Yeah, it is interesting to me.
And, as the Musk-Altman trial reminds us,
this is a very, very small group of people that have known each other,
competed against each other,
had spats with each other.
Remember when Zuckerberg and Musk almost had the cage match?
All types of interactions.
And they're all thinking the same exact thing.
Maybe that's another reason.
Everyone could be wrong.
That's sort of what makes what Apple has done, even though Apple did try to make this happen,
quite impressive: they decided,
hey, we don't want to spend on foundational models.
I'm kind of going 180 on Apple, honestly.
Let me just say they had iPhone sales grow 21.7%.
They don't have AI on the iPhone.
Siri sucks.
This is just the counterpoint to what we've been saying.
They had quarterly sales of $111 billion.
I think I foreshadowed it earlier by saying, you know,
in consumer, you're probably unhappy if you spent a lot,
or you're happy if you didn't spend anything.
And when I said that second part, I was referencing Apple.
I mean, their ineptitude and incompetence, and God, do I hate Siri,
but if that ends up helping them in the long run,
because by sheer virtue of incompetence,
they did not go all in on building their own models
and investing in AI infrastructure,
and that ends up being the right decision.
God bless John Ternus and his reign.
Because, I mean.
Could happen.
I mean, conventional wisdom now is like, oh, Apple, you did a good thing.
And now you're selling your Mac minis.
By the way, in the earnings call, they talked about how Mac Mini has become an important part of the AI agent infrastructure.
And they've also talked about how the new Siri is coming this year.
So you might end up getting the best of both worlds.
I believe it.
I'll believe it when I see it.
Honestly, if they do this, I will take back many of the negative things
I've said about Tim Cook.
Actually, I'm going to say something positive about Siri today.
Do you know Alexa Plus cannot translate into Chinese?
My wife was asking, and we have an Alexa Plus and a Siri device next to each other.
She turned around and asked Siri, and Siri was able to translate something into Chinese.
So Siri's got something.
I guess it only beat Alexa Plus, not the other leading ones.
But hey, Siri won one battle.
Okay. Well, that is probably more than it's won at any time in recent history, so we've got to give one to Siri. Man, Apple. Again, don't count Apple out. I think that's something I'm learning. All right. Let's end today with some prediction market news. This is a recurring theme that comes up on the show, the prediction markets. And we have a story. Ranjan, you can take us away, about senators banning themselves from prediction market trading.
The U.S. Senate unanimously, and how rarely do we see something along bipartisan lines, voted to bar senators from trading on prediction markets.
And obviously, Kalshi and Polymarket. Apparently, a few weeks ago, Kalshi said it suspended one U.S. Senate candidate and two candidates for the House of Representatives for political insider trading on their own campaigns.
There was this crazy story where a U.S. Army Special Forces master sergeant was actually charged with using classified information around the Maduro capture, a mission he was part of, to bet on it, which is still, to me, the most dystopian thing imaginable.
But it's nice to see the U.S. Senate actually restricting themselves from doing something absurd.
Yeah, no, I think there is a growing recognition that some of the prediction market activity can be very cancerous to a society, can be unfair to voters, sorry, to gamblers, which is like, I guess they should know better.
Yeah, voters don't care about.
Yeah, but, well, I mean, on the other hand, you could say, well, they're actually like more accurate now.
So what do you think about that?
Where do you stand on that?
So I've seen that argument and kind of like the companies themselves almost use that argument
that if a small number of people are kind of driving the market in the actual accurate direction
using insider information that makes the market more accurate.
Which is true.
But it doesn't make me like this any better.
And it also rigs it against everybody else.
And I think, if you look at it on the whole, there is
a serious issue here. You know, this stuff has only recently been legalized, and it's kind of taken as
normal today. And I say this as someone who likes to put a couple dollars on the game when I'm
watching anything, at the FanDuel odds. But, you know, there is, without a doubt,
a lot of healthy activity here, but also a lot of extremely cancerous activity. And it's almost
like you're seeing a society that can't help itself. So let me tell you one story before.
we leave. There's this quarterback in college football at Texas Tech. His name is Brendan Sorsby.
He just entered a gambling addiction program for sports betting that could end his college career.
This is according to Matt Schick from ESPN. And then Schick posts a article from CBS Sports about
the fact that he could miss the season. This is the second paragraph of that article.
And this really annoys me. Texas Tech was
an overwhelming favorite to repeat as Big 12 champions after acquiring Sorsby this offseason,
but has now moved to even money at +100 via FanDuel Sportsbook after Monday's news.
The Red Raiders' projected win total has also decreased, going from 11.5 at opening to 10.5 victories,
and Sorsby is no longer on FanDuel's Heisman odds list after opening at +2500, just outside the top 10.
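For context on the figures in that quote: American odds convert to implied probabilities with simple arithmetic. A small sketch (the function name is mine, for illustration):

```python
def implied_probability(american_odds):
    # Positive odds (+O): the underdog side; stake 100 to win O.
    # Negative odds (-O): the favorite side; stake O to win 100.
    if american_odds > 0:
        return 100 / (american_odds + 100)
    return -american_odds / (-american_odds + 100)

# +100 ("even money") implies a 50% chance.
print(round(implied_probability(100), 4))   # 0.5
# +2500, the opening Heisman odds mentioned above, implies under 4%.
print(round(implied_probability(2500), 4))  # 0.0385
```

These implied probabilities include the sportsbook's margin, so the true market estimate is slightly lower than the raw conversion suggests.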
CBS.
Allow me to address you for a moment.
You are writing an article about a quarterback
with a serious gambling addiction problem
that may cost him a season in the NCAA
and potentially send him right to the pros
where his life may be destroyed
because his draft standing will not be anywhere close to where it was before.
Maybe destroyed is too strong,
but it won't be what it was before.
You have no fewer than three mentions
of the odds movement from said person's
life-destroying activity,
with hyperlinks directly out to those exact bets.
Now, I don't say this lightly.
Get a fucking grip, CBS Sports.
Don't do this.
It propels people into the situation that Sorsby finds himself in.
And I don't understand how we have a society that looks at this and says, we have no problem here.
This is disgusting. This is crazy. Actually, this is a good call-out, because it's the weirdest example of how much sports sites have been incorporating odds into everything: TV broadcasts, their websites, apps.
But do you think this was just AI-generated, with the logic for incorporating bets
already built into the CMS? Or do you think someone actually sat down and said,
I'm going to do this? Or do you think someone had to do it and felt sick to their stomach?
Which of those three? Oh, God. I mean, I don't know which one would be better.
To be honest, someone's got to get on the phone with Bari Weiss and say, you know, don't do this, please.
I mean, out of all their problems.
This is a pretty bad one, though.
I'm going to put this up there.
I'm writing a letter to the editor.
I'm going to do it.
I'm doing it.
Dear Bari, first-time caller, longtime listener.
Listen, we've got to talk about CBS Sports.
Well, I guess we will end on that uplifting note, Ranjan.
I mean, Lord Almighty.
I didn't think we could get more depressing than OpenAI's missed billion-user number,
but I think we found it here.
I'll undo the glum:
generative AI is showing up in consumer experiences.
There we go.
Now, excuse me, while I put a polymarket bet on when OpenAI will announce that number.
Yeah.
Just kidding.
I don't do that.
All right, everybody.
Thank you for listening. Ranjan,
thanks for being here again.
Have a good week.
See you next week.
All right, everybody.
See you next week.
And we will be back next time on Big Technology Podcast.
Frozen lasagna, medium power, 15 minutes.
Sounds like OJO time.
Let's play.
Feel the fun with PlayOJO, the online casino with all the latest slot and live casino games.
What you win is yours to keep, with no wagering requirements, instant payouts, and no minimum withdrawals.
Hey, I just won.
Woo-hoo!
Feel the fun! PlayOJO!
Honey, forget about the lasagna.
Let's celebrate!
19+, Ontario only. Please play responsibly.
Concerned about your gambling, or that of someone close to you?
Call 1-866-531-2600 or visit connexontario.ca.
