Big Technology Podcast - DeepSeek's Fallout For AI Companies, OpenAI’s Path Forward, Siri Somehow Got Worse
Episode Date: January 31, 2025. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) The DeepSeek impact on Silicon Valley 2) The four areas of margin in AI 3) How OpenAI is positioned after... this week 4) Whether OpenAI can be fine losing the lead on model building 5) DeepSeek's impact on Anthropic 6) Why Amazon is happy about DeepSeek 7) Should NVIDIA be taking the worst of it? 8) Okay, let's discuss Jevons Paradox 9) We have a Discord now 10) Apple made Siri even dumber --- Join us on Discord Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Let's assess the fallout of the DeepSeek moment and how it changes things for OpenAI,
Anthropic, Nvidia, and others.
Plus, Apple makes Siri dumber and iPhone sales are falling.
We'll cover it all on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition where we break down the news in our traditional
cool-headed and nuanced format.
Well, last Friday we spoke with you about DeepSeek and over this past week, it seems like
it's really changed the entire conversation in AI. So we obviously covered it on Friday.
We had M.G. Siegler come in for an emergency podcast on Monday. Today we're going to pick up that
conversation and talk a little bit about now that we have a little bit of distance from the
actual news itself, what it's going to mean for almost every tech company in the AI game today.
And then we'll also talk about Apple and how Siri has become even dumber with Apple intelligence.
Joining us, as always, to do it is Ranjan Roy of Margins.
Ranjan, what a week it's been. Welcome back to the show. I can't believe the DeepSeek
sell-off was only five days ago now. These weeks, these weeks are getting longer, I think.
That's crazy. And it's very interesting that NVIDIA was the one that was hit. I guess
NVIDIA has been a symbol for artificial intelligence and maybe the AI bubble, if you think
it's a bubble. But there are some others outside of NVIDIA that I think it's worse for. And we can
talk about that a little bit today. So let me just kick it off, because
I'm about to publish a story on big technology, talking a little bit about the four
areas of margin in AI and where this week's news leaves OpenAI. So I was speaking with a source
this week who's working in the industry who basically put it this way. You can get margin
in four areas of the AI business. There's hardware, and then Nvidia is taking all of that.
There's data centers. So you have Azure and Amazon's AWS and Google Cloud taking all that.
There's AI model building, where there's not really a margin,
especially, as we see this week, after DeepSeek has basically shown that
we're going to be on a curve where these models will just get cheaper and cheaper to run.
And if you're licensing them from an OpenAI,
it'll get about as close to free, or the cost of the energy you need to run them, as possible.
And then there's applications,
and that's the ChatGPT and the Replikas of the world that use this technology to build.
And so basically the assessment is: the biggest takeaway from this week
is that if OpenAI thought it was running an API business or a model-building business
before this week, right now there's clarification. It's all about the applications. It's all
about ChatGPT. Good news: it has the leading application in the AI application world,
but it certainly shakes up the broader picture for the company. So, Ranjan, I'm just going to put it to you.
I'm curious how you react to all that and where you think OpenAI is after this past week.
Well, my first reaction is I'm very glad that I have been ranting
that LLMs will be commoditized for, I think, about a year and a half now.
And now it seems to be the conventional wisdom. The fact that we're starting the conversation
here saying that models will provide no margin, versus the idea that actually that's
where the insane profits would be, that wisdom, I mean, that's completely flipped on its head.
Now it's almost the assumption after DeepSeek's R1 that there will be no economic
incentives from the model side. It's interesting how you put it, though, because remember,
OpenAI, in the financial projections that leaked, I believe it was going to be 80 or almost 85%
of the revenue was supposed to come from the application layer, the consumer subscription side,
versus Anthropic is actually one that gets even more concerning here because they were betting
on the API layer, the model layer, the fact that people would pay them lots of money
for access to the models directly.
So even in this framework,
it almost bodes worse for Anthropic
than OpenAI here.
Yeah, I think that this was really
vindication week for us on a couple of fronts.
You saying the product is all that matters,
and me whispering over the past couple weeks
that Anthropic might be in bigger trouble
than a lot of people are anticipating.
We'll cover it.
Dario Amodei had a really interesting post
about this DeepSeek moment,
and we're going to cover that in a moment.
But, you know, it's interesting because I'm actually leading my story off with the revenue numbers about how ChatGPT was supposed to always be the lead for OpenAI and not the API.
But there's one thing I can't square with that.
Like, you would think that this would be total vindication for their strategy and, like, actually a good thing for them.
And maybe they might even eventually use different models within ChatGPT if we end up in this world where everything is commoditized, which is something I hadn't considered before, but I'm going to mention in the story.
But the surprising thing to me has been how flat-footed OpenAI has been in their response to DeepSeek.
Now, obviously, it's a moment that will probably shake you if you're a model builder, because you've been, not, let's say, surpassed, but effectively equaled by an open-source model coming out of China that is so much cheaper to run.
And we talked about it last week that the big deal in this moment is not that it took less money to train and that's obviously under question.
It's the fact that you can use the API for much cheaper.
And I was speaking with a developer this week who said basically it's $60 per, I think, million tokens to use the OpenAI o1 model.
And it's $2.19 to use the DeepSeek R1 model, both comparable reasoning models.
And it even goes up to, let's say, $10 if you're using an API provider that's going to make it a bit more stable.
But it's so much cheaper.
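Those per-token prices are easy to sanity-check. A quick sketch, using only the figures quoted in the conversation (dollars per million output tokens); the 50-million-token monthly workload is a made-up example:

```python
# Per-million-token prices as quoted above: o1 at $60, DeepSeek R1 at
# $2.19, and a third-party hosted R1 at roughly $10.
PRICES_PER_MILLION_TOKENS = {
    "openai_o1": 60.00,
    "deepseek_r1": 2.19,
    "hosted_r1": 10.00,
}

def monthly_cost(model: str, tokens: int) -> float:
    """Dollar cost for running `tokens` tokens through `model`."""
    return PRICES_PER_MILLION_TOKENS[model] * tokens / 1_000_000

workload = 50_000_000  # hypothetical 50M tokens per month
for model in PRICES_PER_MILLION_TOKENS:
    print(f"{model}: ${monthly_cost(model, workload):,.2f}")

# At list price, o1 vs. R1 is roughly a 27x difference.
ratio = (PRICES_PER_MILLION_TOKENS["openai_o1"]
         / PRICES_PER_MILLION_TOKENS["deepseek_r1"])
print(f"o1/R1 price ratio: {ratio:.1f}x")
```

At these quoted rates, the same hypothetical workload costs $3,000 on o1 versus about $110 on R1 direct, which is the gap being discussed.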
And then you look at what OpenAI has been doing these past couple days
in terms of its messaging, and it's been terrible.
And they've been the ones that have been so good at the PR side of things.
I'm just going to talk a little bit about what Sam Altman has done.
So first of all, he says,
the next phase of the Microsoft-OpenAI partnership is going to be much,
much better than anyone is ready for,
with an awkward selfie of him and Satya Nadella.
Then he addresses DeepSeek R1 directly.
He says, DeepSeek's R1 is an impressive model,
particularly around what they're able to deliver for the price.
We will obviously deliver much better models, and it's also legitimately invigorating to have
a new competitor. We will pull up some releases. And then he says, looking forward to bringing
you all AGI and beyond. I mean, that to me reads as sort of panic and shock from Altman,
and I wouldn't think it would be the case if it is, as we're all agreeing, an application world
at the end. What do you think? Well, so separating it out from the application side,
the most important part of his statement, I think we should get into, because there's been a lot of
controversy around it, is he makes sure to say he believes more compute is more important
now than ever. I think that's the central question for the entire technology industry.
It's underlying the entire Nvidia story. I mean, I've seen so many sell-side analyst notes,
you know, where clearly there's a heavily vested interest in Nvidia and some sunk cost
in Nvidia, where people are just, you know, jumping on the bandwagon, saying
this actually means there's going to be more compute than ever needed, and Sam is saying it
too, because this was the week, remember Stargate was the middle of last week, the idea
that investing $500 billion would be required. And then suddenly DeepSeek comes along and
the initial chatter is it's only $6 million. And then there's been plenty of reporting around
originally the Chinese quant hedge fund. I think I saw some number in the FT that they already had around $1.2 billion in GPUs invested. And so basically, and we all know that the $6 million cost attributed to it was only for the final training run. So overall, it was not $6 million to actually create the R1 model. But overall, it showed us it can definitely be done for significantly
cheaper than $500 billion.
And overall, the amount of compute no longer means victory.
And I think that is a fundamental shift, because Sam Altman and OpenAI have gone so far down
the road of those with the most compute will win.
That's been Mark Zuckerberg's attitude, even Satya Nadella has pushed this kind of thesis.
So to me, everyone, the entire industry, at least out of Silicon
Valley, is still trying to hold dear to this idea that the more compute you have, you will win.
And so do you think that's why OpenAI has been so shaken this past week? Because basically
they were going to have that compute advantage, and now, you know, maybe it doesn't matter
as much. We're going to, obviously, we're going to get into Jevons paradox, as we're required to.
I mean, obviously, this is a contractual thing. We're going to get into that in the Nvidia section and the
Microsoft section. But like, let's focus on OpenAI for a minute. I mean, again, going back to our
question here, if it's all about ChatGPT, and that is the leader, and they've shown that they've
been able to innovate with things like Operator, which I think we're going to get your thoughts on in a
moment, and ChatGPT voice mode, which is pretty cool. And they've gone from,
let's say, 100 million to 300 million weekly users of ChatGPT in, you know, in just a year,
which is massive growth
and shows that there's a real promise
for the app side of AI
even though I'd like it to be bigger.
But anyway, that's a story for another day.
So what's the big deal?
Again, if this compute advantage
or this compute lead
doesn't make that big of a difference
just as long as AI can surge forward.
Well, that's exactly why
their reaction is so fascinating to me
because I have definitely argued
that OpenAI's
greatest asset has actually been their product, their product chops. That ChatGPT, the
UI behind it, the, like, overall experience, their voice mode: incredible products. Operator,
which I have played with a lot, I spent probably like 12 hours last weekend trying to use it,
is not a great product. It's an interesting product. It's not a great one yet. But they've had
an advantage on the product side. So on one side, it's odd to me that they don't just kind of say,
you know what? We're going to win on product and we're going to have great models behind it. But then
it's kind of when you go back and remember at their core, which Sam Altman has said many times,
they're a research house that kind of accidentally walked their way into a business. And at that
point, you can almost picture the ego side of it, the kind of like more competitive side that
they don't want to lose that battle. They don't want to lose the actual foundation-model-
creating race, who's going to get to AGI first. They care more about that than hitting their
$340 billion valuation, which there was reporting that SoftBank will potentially invest in
this week, $25 billion at a $340 billion valuation, I think. So they're still, I really believe
Sam Altman cares more about having the better models, getting to AGI faster than the
business side, because this doesn't fundamentally change at least what's been reported around
their business plan.
Yeah, so let me see if I can do my best to sort of tie it up now that we're talking
about it.
I'm thinking it through.
I think it's important for Open AI to lead in models because if you lead in models,
you can dictate where the product goes.
And honestly, that's why, you know, everyone wants to be able to do that.
And when you have open source, in particular open source, come and maybe not grab the lead,
but basically show that it can play in your ballpark, then you're in trouble.
because open source is a sort of swarm of developers.
It's DeepSeek that's going to build on Meta
and Meta that's going to build on DeepSeek
and all the other developers
that are going to customize and push forward.
And if open source is the one that leads over proprietary,
you could end up seeing the product game potentially spin out of OpenAI's control.
Just to throw out like a crazy example,
like let's say open source gets to AGI before OpenAI does.
Well, then, you know, the products
that are going to be built on that will be, you know, by definition, better than ChatGPT.
So if you're OpenAI, it's also like, even if your model business is not going to be the breadwinner,
it really is incumbent upon you to be able to lead this race. And that's why, you know, we're going to
talk about Meta in a moment. But you would think that Meta is devastated by this moment, because
they've been, they've been sort of outflanked by DeepSeek, but it turns out, like, actually their
strategy, you know, might be proving true. Well, no, I think that
this idea that a bigger model or better model means a better product is the underlying
thesis or thought process of a lot of Silicon Valley. But again, I disagree with it. And let me get
into Operator. So Operator is OpenAI's new kind of, like, browser-takeover agent.
It has a built-in browser, and it can do things and take actions across the web for
you. So I was doing a test. I've been, I'm working on a medical innovation fund as a nonprofit,
and I've been having to kind of go through a lot of medical research papers, and I'll manually
pull all the authors' names into an Airtable, and then look up, you know, like LinkedIn profiles,
Google Scholar, PubMed profiles. So it seems like a pretty straightforward task for this whole world of
agentic AI that we've been promised. So I went to Operator, and
I paid the 200 bucks for ChatGPT Pro, and, I mean, it is really cool.
Like, it's actually, like, you give it a prompt and you say, here's a list of papers,
go find the authors, put them into Airtable, then go to Google, search the author's name,
find these links and paste the URLs back into Airtable.
Seemingly, like, not an overly complex thing.
It worked two or three times really well, and it was,
mesmerizing because you're literally watching it and it's a built-in browser in the operator
interface and like you're watching it do this and just click around and actually copy and paste
links. So you start to see the promise. Then the most fascinating part was it got lost in
Airtable. It started just clicking on random things. You can almost see it just like losing its place
and losing its flow and rhythm and just it completely broke. So then you take control. You can
actually kind of like assume control back in the browser, try to redirect it, and I could not get
it to work again. Or then it started copying in completely incorrect things. Like it just does
not work even close to where the promise is. But you see where it could be going. But again,
this is the product they have released. Sora, very similar. Huge promise. We waited like a year
and a half. Sora is available for, I think, all ChatGPT Plus subscribers right now. I have not seen
anyone posting cool demos or all those kind of things. So the last two products that they
have launched have been pretty much flops. So I think that's why they are still working under
the assumption that, with Operator, only if model development becomes good enough will
this kind of, like, browser-takeover behavior work. So they're still betting that the model
will solve everything.
This is why you need better AI, don't you think?
Like, better models will make sure that operator works better.
And in fact, they were asking Kevin Weil, who's the head of product, what's coming next for OpenAI,
and what kind of moat does OpenAI have.
He says, we're about to launch some models that are meaningfully ahead of the state of the art.
And some people are speculating that's o3, or, who knows, maybe we'll see GPT-5 finally.
So don't you think these two work in concert?
Yes, no, it's tough because on one hand, the coolest part of the experience, and again, I'm going to give full credit, it was a really, really fascinating kind of like, holy shit, this is the future moment.
So I'm still going to give them credit on that, just the way the entire overall UI and just experience of it.
But still, as you're using it, you're like, is this really the way I need to set this up?
In reality, I could have written a PDF parser that did the same thing using AI and
Claude.
I do this stuff regularly.
Extract the authors' names, done, write a quick script.
Like, it was basically web scraping.
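As a rough illustration of the non-agent alternative being described here, a minimal sketch that pulls author names out of paper text and builds spreadsheet-style rows. The `Authors:` line convention and the sample paper are invented for the example; a real pipeline would use a PDF library, or a Claude call, for the extraction step:

```python
import re

def extract_authors(paper_text: str) -> list[str]:
    """Pull author names from a paper's front matter.

    Assumes the extracted text contains a line like
    'Authors: Jane Doe, John Smith' -- a made-up convention for this
    sketch; real PDFs would need a tool like pdfminer or an LLM.
    """
    match = re.search(r"^Authors?:\s*(.+)$", paper_text, re.MULTILINE)
    if not match:
        return []
    return [name.strip() for name in match.group(1).split(",")]

def build_rows(papers: dict[str, str]) -> list[dict[str, str]]:
    """Turn {title: extracted text} into flat rows for a table."""
    rows = []
    for title, text in papers.items():
        for author in extract_authors(text):
            rows.append({
                "paper": title,
                "author": author,
                # A search URL to check by hand (or feed to a scraper).
                "scholar_search": "https://scholar.google.com/scholar?q="
                                  + author.replace(" ", "+"),
            })
    return rows

papers = {  # toy input standing in for extracted PDF text
    "Example Trial": "Example Trial\nAuthors: Jane Doe, John Smith\nAbstract: ...",
}
for row in build_rows(papers):
    print(row)
```

The point of the sketch is the contrast: a deterministic script handles the repetitive extraction, and the only fuzzy step left is the profile lookup.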
So the web is such a messy thing that how many people are really going to derive value
from this kind of tool?
So it actually made me question, like, is this the way?
And I saw some posts as well around, like, could websites change the way they're structured
so they're more open and easy for an agent to access?
Maybe that starts to be an interesting thing.
But the idea that we need a model that's so smart that it'll actually understand the first time
how to use a pretty heavy tool like Airtable, I don't think we do need that.
I think the way this whole battle will be won, actually, will be around, you know,
creating an Airtable connector that's already just pre-programmed,
not LLM-based, to actually do this kind of work.
So I don't think, I'm still not convinced that we're going to see such a smart model in the
next few years that'll actually get the complexity of the entire web.
I don't see that happening.
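A pre-programmed connector of the kind described here doesn't need a model at all. As a hedged sketch, this builds the JSON body that Airtable's REST API expects when creating records; the base ID, table name, and field names are placeholders, and the network call itself is left out so nothing is actually sent:

```python
import json

AIRTABLE_API_URL = "https://api.airtable.com/v0/{base_id}/{table_name}"

def build_create_records_payload(rows: list[dict]) -> str:
    """Build the JSON body for Airtable's create-records endpoint:
    {"records": [{"fields": {...}}, ...]}. Deterministic: no LLM involved."""
    return json.dumps({"records": [{"fields": row} for row in rows]})

# Placeholder values -- a real call needs your own base, table, and token.
url = AIRTABLE_API_URL.format(base_id="appXXXXXXXXXXXXXX", table_name="Authors")
payload = build_create_records_payload([
    {"Paper": "Example Trial", "Author": "Jane Doe"},
    {"Paper": "Example Trial", "Author": "John Smith"},
])
# e.g. requests.post(url, data=payload,
#                    headers={"Authorization": "Bearer YOUR_TOKEN",
#                             "Content-Type": "application/json"})
print(url)
print(payload)
```

Because the payload shape is fixed, the connector either works or fails loudly, rather than wandering around a UI the way the agent did.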
It's one of the interesting things about looking at DeepSeek with its reasoning model
is it will show you its chain of thought.
And you can actually see the model kind of struggle its way through trying to find exactly
what you're intending with your prompt.
And so, like, there was this concept of like,
oh, the prompt engineer is going to be the new job.
And it's already getting to the point
where you don't need to write the best prompt
because the models will sort of get to that point eventually.
And I think that is sort of why being able to watch that
within DeepSeek's consumer app is the reason why it ended up going so crazy
and taking over the top app spot above ChatGPT and the rest.
And in fact, I think that's probably, like,
the biggest risk for OpenAI: who cares if the model's a little better or a little cheaper,
and we can argue about whether it was. But the fact that the DeepSeek app went to the top of the
App Store, that to me might be why they had such serious concerns. And before we move on, I do want to
move on to what Dario from Anthropic said. Just want to talk about a couple things about the
OpenAI business. So no one paid attention to it this week, but there was a report that OpenAI
has now made four times the amount of its nearest competitor on the ChatGPT app since it launched,
at $529 million.
It's not a massive number, but this group called app figures now estimates that the AI app market
is $2 billion this year.
And as things like Operator get better, that's going to be an even bigger number.
So we're starting to see this AI app market materialize, even, again, if it's not as big as I
would want it to be. And then it sort of leads into this next part, which, Ranjan, you've already
brought up, which is that OpenAI is trying to raise, again, at a crazy number. They are in
talks, this is according to the Wall Street Journal, to raise $40 billion in a funding
round that would value them as high as $300 billion. Yes, the $340 billion number was tossed out,
but I guess they've come back to Earth a little bit, and now it's looking like $300 billion.
I mean, late last year, OpenAI raised $6.6 billion at a $157 billion valuation.
And we said, right, what did we say?
When that came out, oh, they're probably going to need to raise again at the end of the year.
2025, they're already raising in January.
Oh, my God, I loved this.
You couldn't give me enough news this week, but then Masa Son had to come in and just push the idea.
Instead of, you know, the world being like, is OpenAI finished? He said,
no, let's come in and add another 15 or 25 billion, bring it to 40 billion, and pump this
thing up to 300 billion.
I mean, it's amazing.
There's not much you can rationally or analytically say about this.
All you can do is just sit back and enjoy that Masa is once again right in the center of
things.
I just found a great quote from Masa.
I guess he said this last year.
he was talking about Gates and Zuckerberg. He says, those are one-business guys. Bill Gates just started Microsoft and Mark Zuckerberg started Facebook. I am involved in 100 businesses and I control the entire tech ecosystem. These are not my peers. The right comparison for me is Napoleon or Genghis Khan or Emperor Qin, who's the builder of the Great Wall of China. I am not a CEO. I am building an empire. What a guy.
Is that from, so Lionel Barber, who was the editor-in-chief of the FT when I was there,
he just came out with a book.
It's called Gambling Man, the Secret Story of the World's Greatest Disruptor, Masayoshi
Son.
Is that where this is from?
Because I saw this quote everywhere, and I was trying to find, like, the actual where it was
originally from.
Yes, it's from that book.
Okay.
I'm going to buy that right now, Lionel.
I'm reading it.
Yeah.
One last thing.
Can we put to bed this
idea of the subprime AI crisis, which we talked about last year?
I'm very curious what you think about this.
This idea that these companies had raised, you know, so many billions of dollars that
startups were going to put their technology in their products and then they were going to
depend on it.
And then eventually those rates were going to have to go up and kill the startups and
kill the entire AI world.
It was a very interesting theory.
But now that we've seen that rates will go down to just about zero to use this technology,
do we still believe in the subprime AI?
crisis? So again, it was a really interesting theory from Ed Zitron, the idea that, yeah,
startups that are building, the actual input costs of these AI
models will only escalate. They're subsidized now. At some point, it'll catch up and it'll bring everyone down.
That's why this is such an exciting moment. To me, that's the biggest news of the week.
Separate from the financial markets and the considerations on what this means for NVIDIA,
everyone out there now has access to very good models cheaply.
And that means that the product side, the application side, this is like Andreessen's time
to build.
Like, let's get out there.
Everyone start tinkering with everything and not having to worry about getting some inflated
API bill.
Like, this is it.
This is going to happen.
And this is going to keep happening.
And now the entire industry is going to start thinking about not funneling all the
investment dollars, other than Masayoshi Son's, to the model builders. But to me, you can imagine
there's going to be hundreds of new startups that capital will flow to, that will actually be
more on the application layer and actually solve problems and be interesting for regular business
people, or just regular people, and not just OpenAI making a few products and us all being stuck with
them. So that's why, I mean, this to me was such a big week, separate
from all the actual, I don't know, market and global geopolitical implications. But it just
means this is where the world is going in the next year or two. And that's far more interesting
to me. Yeah. When we left off our show last week, I was like, Ranjan, what do you think
about DeepSeek? And your immediate reaction, not thinking, was, I love it. I still love it.
I still love it. And what we're saying right now sort of plays perfectly into this piece that
Dario Amodei, the CEO of Anthropic, put out, where he talks
about DeepSeek and export controls.
He says, and this is sort of going to our point,
the efficiency innovations DeepSeek developed
will soon be applied by both U.S. and Chinese labs
to train multi-billion-dollar models.
These will perform better than the multi-billion-dollar models
they were previously planning to train,
but they'll still spend multi-billions.
The number will continue going up until we reach AI
that is smarter than almost all humans at almost all things,
which I think he reckons is coming in a couple of years.
I think his post was excellent,
and it also was like, I
kept reading posts by, like,
Dario Amodei and Andrew Ng and Yann LeCun,
and I was just like, oh yeah,
like, this is real, and it's going to
spread into the other models, basically
making models that are much, much
cheaper to run, which should spur
that, you know, spark of building. And
the only thing is, like you just
said, this might not be good for the proprietary
builders. So I was interested
to see it in Dario's post.
What do you think is going on here?
Reading the post, and again, it was very good in terms of him talking about
what the actual, like, genuine innovation from DeepSeek's side around their models was,
and recognizing there was genuine innovation happening.
But the push on export controls, I think, I mean, and I'm curious your thoughts on this,
it's such a difficult question because on one side,
is it going to be enough?
The fact that DeepSeek open-sourced this, and I mean, literally it's on AWS now.
And like it's not some like only in China thing.
It's not that kind of like bifurcated world of only like the West versus China.
It's more integrated than that.
And it did it.
To me, one of the interesting parts is, and again, plenty of debate around what the actual GPU count is
for the quant hedge fund that, you know, spun this out.
But overall, it does feel like, or there's been plenty of arguments for the idea that
because of export restrictions, it almost forced the additional innovation and efficiency.
Like, because they still did not have unfettered access to the NVIDIA state-of-the-art chips
that they had to, they had to innovate around that, which is really interesting on its own.
But what does that mean about how we approach this? And especially of all people,
Anthropic, a model builder whose entire financial forecast is predicated on selling models,
like, to kind of lean into this is a national security thing. And I'm always, I've
been pro banning TikTok, and in other areas really cognizant of the, you know, national security
implications of a lot of this. I am curious, like, they've already shown us
it's not effective anyways, so I don't know. I think it was a little self-serving.
Oh, definitely. And this is, like, going to go to, like, the practical theory of how CEOs act,
which is in their own self-interest. Yeah, fair. You know, it's very interesting to, like, see
everybody on X reacting to what Dario's saying by saying, like, uh, this is an unprecedented level
of cope, and he's just trying to hamstring, you know, the Chinese companies now that they've,
you know, released a reasoning model and he doesn't have one. And he's just upset about this.
Like, as if you're supposed to do anything else as a CEO than try to look out for your
own interests. I don't think it looks good for him, but I don't think it really matters.
Well, Dario, I can just beg you, now that we all have access to cheaper models, can you please
make Claude the paid subscription level not hit my rate limit when I'm halfway through a project
because it's one of the most frustrating things.
There's going to be some problems there. Maybe they're just going to try to sell themselves,
now that, I guess, acquisitions are legal again in America?
Yes. That's actually, I mean, back on the political side, I think the whole changed
M&A environment actually is really worth watching with a lot of these companies because there
was kind of this assumption, I mean, over the last couple of years, that especially once
you're hitting these like many billions of dollars in valuation that M&A is essentially
off the table. So that'll be pretty interesting. But also,
I mean, and we haven't even mentioned Google in all of this.
Google has been making some pretty big strides, at least on the model building side,
even on the application layer side.
So then you start to get into which company really needs an Anthropic.
Again, Claude's a great product, but at what price?
Yeah, I mean, maybe it'll be a capitulation sale to Amazon.
But even Amazon is in good shape this week, after AWS put DeepSeek in its product,
and Andy Jassy bragged about it.
I was planning to talk about it later, but it's worth talking about now.
It just, you know, it took a week and it's there.
And I think at the moment it is the best model in AWS.
And that's going to lead to more building through AWS.
So I think that they're probably doing jumping jacks over at AWS HQ.
Yeah.
Remember, we had the theory that Amazon was going to Amazonify this whole space by, like, bringing
you into the AWS ecosystem, offering you the fancy, expensive
models. Like, actually, they were already going DeepSeek with their Amazon Nova, their new foundation
model suite, which was supposed to be winning on cost. That was kind of our thesis anyways,
that they were going to just bring you in, let you use the high-priced stuff, and then give you
the Amazon Basics version. And now they got DeepSeek already. So I think they've actually played
this pretty well, then, since they haven't over-invested and have just kind of been sitting to the side.
Definitely. And just it's telling that it was Jassy that was out there tweeting about it yesterday.
It's clearly important to the company.
All right, one last thing on Anthropic. This is from Hugging Face co-founder Thomas Wolf.
He says, if both DeepSeek and Anthropic models had been closed-source, yes, the arms-race interpretation could have made sense.
But having one of the models freely available for download, and with a detailed scientific report, renders the whole closed-source arms-race competition
argument with artificial intelligence unconvincing, in my opinion. Here's the thing: open source
knows no border, both in its usage and its creation. And I think that, like, to just put a point on
the first half of this conversation, this has just been the week where it's been
open source, and good for open source. Good for the companies that benefit from open source, like, let's say,
the Metas and the Anthropics and the Amazons; bad for the proprietary model builders. And it brings us
right into the Meta thing, actually, which is, like, a lot of this commotion over DeepSeek
began when there was a screenshot passed around of, like, a Blind post saying Meta engineers
had been freaking out about the fact that DeepSeek had surpassed their open-source models.
And there's, like, now reporting that top executives within Meta are worried that the next
iteration of Lama won't perform as well as deep seek. But ultimately, I think that as they
digested it, people like Jan Lecun, who's going to be on the show in a couple of weeks,
and Mark Zuckerberg were basically like, yep, it's going to be open source. We will take what
they did and put it into our models. And we're going to ultimately, you know, be in the right
place, which is that we're going to help develop AI that's available to all. It's going to be
in our terms as long as we can keep our lead, right? That's the big question. And then we're
going to benefit? Well, this is one thing I've not been fully clear, because if we're talking
about does the value accrue to the model layer or the application layer, meta is in a position
where they should probably be able to do some pretty amazing things at the application layer.
One, you have distribution that absolutely no other company or nothing in the history of mankind
has had in terms of your billions of users that you can put.
products on to. And meta AI, obviously already, I mean, image generation, it's fun to play with. In the
Raybans, it works incredibly well. Even the real-time language translation is getting pretty good.
So getting good products in front of people, at that point, I do wonder why do they care so much
about having the winning model? Because in reality, they can build it with Deepseek if they want.
I mean, they're building it with the Lama right now, but what the massive incentive is.
And originally, I remember thinking, like, it was all done in an awesome, you know,
amazing way to undercut the OpenAIs and all the other paid players.
But I'm still a little bit confused on that.
I mean, I do think, like I said earlier, that everybody wants to dictate how this goes.
So they kind of have an idea of where it's going and they can lead, like, the product development
and have people that are developing on their tools
and making it better as opposed to somebody else's.
You really want to be in the pole position there.
But it's, again, it's not the end of the world for Meta
if DeepSeek just kicks their butt
and they're like, all right, well,
we'll just put this into Messenger.
And maybe people will use our chatbots,
which I don't think they use right now.
Even though they're talking about how they want the billion-user assistant,
I don't think anybody is using it, maybe some people.
But top of mind, the fact that, like, Meta has an assistant,
it's still not really competing with ChatGPT as far as I can tell.
I actually use it for, like, quick fun image generation with my son, just because
it's right there in WhatsApp search.
Like, I'm actually convinced that probably a large percentage of their AI assistant usage,
because it's built into the search bar essentially, is people accidentally trying to search
for something and ending up using Meta AI.
But it's actually good.
I mean, I've never spoken with anyone who's using it as a more dedicated chatbot or chat assistant, the way most of us use ChatGPT and Claude and others.
But for, like, quick one-off questions, it's, again, the distribution side of it. It's right there in the apps that people are in all day long anyway.
In terms of pole position, I think it definitely has an advantage.
Yeah, and it's just going to get better over time as, let's say, you know, these open source models do achieve AGI, right, to just go with the crazy example.
Well, you can then deliver that to a billion users, or billions of users, for that matter, and you don't have to rely on OpenAI to do that.
There's still a lot of what-ifs, though, on this front.
And let's quickly talk about this Jevons paradox thing.
So this was Satya Nadella's reaction to it.
He says as AI becomes more efficient and accessible,
its adoption will soar, transforming it into an indispensable commodity.
I think we mostly share his opinion on that.
I think you definitely do, Ranjan.
But my only question is, if AI gets exponentially cheaper,
then will we see an exponential rise in applications?
Because on the application side, like, we have ChatGPT,
we have these Meta bots, then we have some stuff in enterprise software, like Salesforce's agents,
and then, of course, there's coding applications.
But after that, for as powerful a technology as AI is, I don't know if we've seen the applications that I would anticipate.
Well, let's start with, so Jevons paradox is not actually a paradox, in that it's not a logical contradiction.
It was just a bit counterintuitive at the time.
So that's been frustrating me, I think, more than anything, because it just completely sounds like one of those things that people have been using because it sounds smart.
But we're still going to have to talk about it, because everyone is.
And the idea, again, to me, actually is intuitive: as any kind of resource becomes more efficient
and accessible, it'll actually grow in overall aggregate demand and output, because people
will figure out more uses.
And I mean, I think at the core, this is it.
This is what we've been saying.
And this is now the thousands of startups building interesting apps on top of it.
And finally, the promises that we've been made for so long and have not been realized
will be realized because people will actually pay attention to building cool experiences
and products using AI.
It's not just the Microsofts of the world and the OpenAIs of the world and only a few.
And then others trying to build stuff, but being really limited based on the actual cost
input side of it.
So I think it is real.
It's like the idea that AI is going to become much more deeply integrated into everything we do in a good way, I think is going to be real.
And this makes it a lot more real.
So do you think that this is unfairly punishing NVIDIA, or do you think it makes sense because there's been more uncertainty that's been added in?
The AI industry was moving on, I don't know if a linear path is the right way to put it, but it was out on a path.
And now the path sort of changed directions.
I think, yeah, I definitely think it's fair. The biggest change is, for the last two years,
the NVIDIA story has been ironclad. Every earnings report, the growth. I mean,
the valuation just becomes richer and richer, and the overall market cap grows and grows,
but no one could question it. There was absolutely no doubt. I do think, again, the NVIDIA story
has fundamentally changed.
And I'd mentioned this before, but it's almost fascinating to me.
When you think about the $3 trillion market cap, how much money is tied into this
company?
Because every sell-side analyst report, just emails left and right, like, everything is
sure.
Everyone is just like, this means nothing.
The story is still great.
Because the amount of just vested interest in this company right now means that people are going
to fight for it. People are going to fight to make sure the story doesn't go away. But it was always
a story, and they were realizing it pretty well. But now there's at least a seed of doubt that
was not there in the last few years: at a certain point, do we need better and better
chips? Is that where the battle's going to be won? Because then there's no competition. Do we need
more and more compute? And even if the world needs more compute, do they need the latest
NVIDIA chips to realize actual utility and applications? Maybe not.
There's at least a little bit of doubt.
So I think the recovery here over the last few days is warranted; you know, it was a
pretty sharp sell-off for a company that size.
But the invincibility of NVIDIA, for me, is gone.
There's still so many unknowns, right?
A lot of this is based off of things that should happen, right?
Like, because DeepSeek was able to develop with, like, lesser chips, then if you actually
take those innovations and use the better chips and use more of them, they should deliver
better innovation. But this is all based off of hypotheses, right? It's not like a rule, just a hypothesis.
Like, it's not a law. And so, yeah, I totally understand why people would be a little wary about
NVIDIA. And when you trade a company like that, you trade probabilities. So did the uncertainty
change the probability that they would dominate? Yeah, maybe a little bit. By the way,
breaking news. This is from Techmeme. They're citing sources who say
OpenAI plans to release o3-mini today, with o1-level reasoning and 4o-level speed,
as the company's staff is galvanized by DeepSeek's success.
So it's game on.
I guess it's game on, but just make Operator a little better before you do that, guys.
I mean, it still baffles me.
And I say this again because I'm not an AI researcher.
I am just a technology person that's using all these different apps and technologies,
like building some stuff.
But, like, why are they so focused?
Like, they released Sora with so much fanfare.
It's basically useless for the vast majority of the world in its current incarnation.
Operator, huge release, not really usable.
Enough people show some demos, and there's endless Twitter threads of, like,
10 amazing ways you can use Operator, but in reality, no one is using Operator today
in an actual day-to-day way that helps their work.
So I'd just rather they focus on making that stuff good than being so caught up in the
model battle. It feels to me that if that mentality never changes, that
$300 billion valuation looks even richer.
But you're underestimating pride here, you know. There is a level of pride that's involved,
and that's probably what's happening.
So, okay, let's quickly hit some of the misconceptions about DeepSeek.
We can just go through this real quick.
It was a misconception we talked about last week: you know, they said that they trained just for $5 million.
I think we can both agree now that they trained for a lot more, and $5 million was just their last
training run.
Even though it's impressive for a training run,
it doesn't fully incorporate the cost. So we should definitely note that.
Yeah, no, no. It seems like everyone has accepted it's not five or six million.
It's something more, but it's still not $500 billion or $100 billion; it's not at that scale.
And I think that still means that the story is very important. And then also, again, even amidst the export controls, you get into:
did they circumvent them? When did they circumvent them? What chips exactly did they get,
and what did they build on? But whatever it is, it was not some super cluster of the most advanced,
up-to-date chips. And so it still reminds us that necessity breeds innovation,
or whatever the quote is, is still definitely part of this story. Yeah. And I said this on
CNBC. It's not the process as much as it is the output. And the fact that
they were able to run reasoning at such a cheaper cost is the thing that matters most.
And people can complain about the way they did it.
They can complain about the fact that they're talking about a different number of chips.
But ultimately, like, the process has galvanized, I mean, the output has galvanized Silicon Valley.
You're hearing it from Altman and Amodei, and that's the bottom line.
The other thing that's been interesting has been the anger, it seems like, from Microsoft and OpenAI,
and from others, where it seems like DeepSeek had effectively copied
some of the stuff that they were doing, or, like, maybe taken their data.
This is from Bloomberg: Microsoft is probing whether a DeepSeek-linked group improperly obtained
OpenAI data.
Microsoft and OpenAI are investigating whether data output from OpenAI's technology was obtained
in an unauthorized manner.
Microsoft security researchers in the fall observed individuals they believe may be linked
to DeepSeek exfiltrating a
large amount of data using the OpenAI application programming interface, or API.
This led us to a, I think, quite funny headline at 404 Media:
OpenAI furious DeepSeek might have stolen all the data OpenAI stole from us.
Ranjan, you're pretty good at, like, sort of assessing when it's fair for
tech companies to take content, and when, you know, it's all right to rip off
other people's products. Is this, like, a sort of nefarious move from
DeepSeek, or is this, like, somebody in a glass house throwing rocks?
I think glass house and rocks, especially OpenAI. So you have on one side the, certainly, larger
question, that I still predict will be answered in some capacity, around, like, individual
artists' work, or, you know, creating in the style of specific artists, or certainly the New York
Times lawsuit. But then, even at scale, if we remember, OpenAI
apparently had been scraping YouTube at a large scale for their initial data sets.
So I think the entire world seemed to have reacted in a pretty similar way, where
obviously it's a bit rich from, of all companies, OpenAI, kind of going down this route.
I don't believe they had released any kind of official statement.
So at least, to their credit, these were off of, kind of, just
either leaks or just genuine reporting that they're looking into this or exploring it.
But yeah, I don't think anyone anywhere is going to have any sympathy for OpenAI in this matter.
Yeah, I'm on the same page there.
And it was like, it wasn't done through hacking.
It was done through the API.
So like, I guess, cry me a river.
Yeah, I mean, the YouTube example fits perfectly.
Like, and it's a big platform against big, well, actually a small platform against a big platform in this case,
but just using the actual available technology,
and probably breaking the terms of service a little bit,
and then using that to get started.
Okay, so I want to take a break,
but before we do, I should say that we have a Discord now.
So Ranjan and I had been talking about starting a Discord
as these stories had continued to break,
and there was more to talk about,
and we were kind of curious what the audience was thinking throughout the week.
So that Discord is now open.
It is open to Big Technology paid subscribers,
So if you go to bigtechnology.com,
you can see there's a post that says,
let's talk DeepSeek, AI, et cetera,
on Big Technology's new Discord server.
If you're a paid subscriber, you just scroll to the bottom.
There's the invite link.
If you're not, you could just sign up
and then scroll to the bottom.
And there's the invite link.
It's been kind of fun.
We're just about wrapping day one,
and there's been some really good conversation there already.
Alex Stamos, the security researcher,
is in there telling us right now
about what DeepSeek's security issues
really are compared to all the hype. Like, what happens if you download DeepSeek? Are you in trouble?
So thank you for the idea, Ranjan. It's been fun getting it off the ground, and
we hope listeners will join. I actually learned, I think, more from just a few comments from
Alex Stamos, around what the actual security concerns or considerations on DeepSeek are, than
pretty much everything else I read this week. So, like, actually at a very technical level, even
without having to look up myself what safetensors are and other really deeply technical
terminology. So it's off to a good start today. Yeah, it's been cool getting it off the
ground. And again, if you want to join, just go to bigtechnology.com,
click the "let's talk DeepSeek, AI, et cetera on Big Technology's new Discord server" link, and then join
us. We'll see you over there. All right, let's take a break. And when we come back, we're going to talk
about how Siri is still terrible, or maybe it's even worse, and then
briefly touch on Apple earnings, and then we'll get out of here.
So we'll see you right after the break.
Hey, everyone.
Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines in 15 minutes or less
and explain why you should care about them.
So, search for The Hustle Daily Show
in your favorite podcast app,
like the one you're using right now.
And we're back here on Big Technology Podcast Friday edition.
So we've talked about DeepSeek the entire episode so far,
which I think is merited,
but we're not going to let the week go away
without talking about this great post
by John Gruber on Daring Fireball,
titled Siri is Super Dumb and Getting Dumber.
I mean, the long and short of it is
that basically he had a friend who asked Siri who won each Super Bowl, and Siri got 34% of those right, which is
truly disastrous.
And basically, Gruber's point is Apple got this gift of generative AI.
It put it into Siri, and somehow it made Siri worse.
Definitely. The summary notifications have been getting worse, or just realizing
how useless those have been getting is becoming more and more acute in my day-to-day.
Genmoji were fun for a minute.
And I've ranted a bit about this this week, so I'm glad that even, like, Gruber, who is
about as long-time an Apple fanatic as it gets, is recognizing it and
ranting himself about how bad it is. It really calls into question, where are they
in all of this?
The only thing, though, I don't know, I'm curious: do you think this is good for Apple, this
week reminding us that the entire direction of the AI industry is in question? So maybe
them screwing up the first phase of this battle so badly actually means they'll be okay
and they can start fresh.
It's definitely the galaxy-brain take here. It was kind of interesting
watching. So the Apple earnings came in yesterday. They sold fewer iPhones this Q4 than they
did in the year previously. So iPhone sales are going down.
Apple Intelligence is garbage.
And yet, you look at what happened.
DeepSeek comes out, actual AI innovation, and there's panic and a sell-off.
Apple made Siri worse and is selling fewer iPhones, and the stock is up 6% on the week.
It really is amazing.
I don't know.
I really, to me, I'm a little bit puzzled at the buying activity on Apple.
I just don't see what the news is.
Maybe services, they beat on services revenues.
So that's pretty impressive.
But we talked about it that when Apple Intelligence came out, it wasn't going to lead to a super cycle.
It hasn't led to a super cycle.
And I think Mark Gurman put it pretty well today.
He said Apple Intelligence is a half-baked marketing strategy that was rushed in response to OpenAI and Google Gemini.
Yes, Apple had no choice.
They did exactly the right thing, but they shouldn't have been in that position in the first place.
So I guess what the market is saying, you know, why it sent the stock up this week, of course, is the services beat.
But even after DeepSeek came out, I just think it's a pretty simple thing, which is that you can build more with DeepSeek, and Apple has got to find a way to build consumer products, and it can maybe use open source to do it.
But I don't know.
Do we have faith that Apple's going to do it?
I don't.
Well, I was thinking, because even that idea that the marketing drove the product side of it with Apple,
usually it's the other way around.
To me, the most fascinating part of this is, and I think for a lot of companies, and Apple probably
is the one most troubled by this, the dichotomy between normal everyday users
and, call it, the industry.
And by the industry, I mean the actual technology-focused people working in the companies,
the investors focused on the companies. There's such a distance between
what any normal person wants or expects from AI and what these companies are pushing.
And to me, that distance has actually been kind of the core part of Apple's errors
in this: that they're more focused on the market, and probably their own competitors, and
even their own employees and what they're thinking about, rather than what the everyday
consumer is thinking about. Because, again, I can promise you, most,
certainly non-tech-focused, iPhone users were not clamoring for Apple Intelligence.
And I think even most technologically advanced Apple users could easily install ChatGPT and
whatever else on their iPhone and still be fully locked into the Apple ecosystem.
So the idea that they needed to do this, I actually disagree with Mark Gurman that they
had to do this.
I think they could have still just sat back.
The only reason they would have had to do it is for the market.
And, I mean, apparently it's still kind of working.
Apparently, the average Apple investor has not actually tried to do a basic task with Siri, but it's still working, I guess.
I just want to end with this.
I love the "how it started, how it's going" meme.
And I saw one come across my X timeline this week that just really sort of captured it all.
How it started is the Think Different campaign, with these beautiful black-and-white images
that let you know, like, if you're an iPhone user, you're just kind of more artistic,
kind of iconoclastic, you have more taste.
And then, okay, the second image is how it's going.
And it's a Genmoji, and it is a sunny-side-up egg with hands and feet.
This is what people want.
This is what they've been clamoring for, and why it will launch an entire new iPhone supercycle.
The sunny-side-up egg with hands and feet. I mean, I've been in Miami for most of
the past week. And these, I don't know if they're in New York yet, but these Genmoji
billboards are everywhere. And they're terrible. They really don't say anything. And if that's
your AI play, Lord help you. I saw some posts around like everything they've offered
feels like something that Tim Cook is the only user of.
Even notification summaries, like, again, they're relatively useless and almost counterproductive.
But I can see if you're Tim Cook, maybe they're kind of useful.
I can see Tim Cook really being into Genmoji.
He's just churning out Genmoji left and right.
I mean, someone's got to be using it.
All right, well, I will make, like, a Genmoji of a big scrambled egg with one hand up and say,
have a great weekend.
Have a good one.
All right, everybody. Thank you, Ranjan. And thank you all for listening. We'll be back on Wednesday. Actually, I'm speaking with an NVIDIA executive, so that'll be fun to air on Wednesday. And then Ranjan and I will be back on Friday to break down the week's news. Thanks again for listening, and we'll see you next time on Big Technology Podcast.