a16z Podcast - DeepSeek: America’s Sputnik Moment for AI?
Episode Date: February 6, 2025

Two words have taken the Internet by storm: DeepSeek. The Chinese reasoning model R1 is rivaling others at the frontier, with an open-source MIT license, methods that some claim may be 45x more efficient, an alleged $5.6M cost, the release of reasoning traces, a follow-on image model, and the fact that all of this was released by a hedge fund in China.

Many are already referring to this as a Sputnik moment. If that's true, how should we – whether founder, researcher, or policymaker – not just react, but act? Joining us to tease out the signal from the noise are a16z General Partner Martin Casado and a16z Board Partner Steven Sinofsky. Both Martin and Steven have been on the front lines of prior computing cycles, from the switching wars to the fiber buildout, and have witnessed the trajectories of companies like Cisco, AOL, AT&T – even WorldCom.

So what really drove this DeepSeek frenzy, and more importantly, what should we take away? Today, we answer that question through the lens of Internet history.

Resources:
Steven's article: DeepSeek Has Been Inevitable and Here's Why (History Tells Us)
Alex Rampell's article: Why DeepSeek Is a Gift to the American People

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
R1 comes out and it looks pretty good.
That's not the best layer to monetize it.
In fact, there might not be any money in that layer.
I have yet to see the GPT wrapper.
The internet is such a great example because there's no way this doesn't play out like the internet.
It's actually a very big step when it comes to the proliferation of this model.
It's a good reminder that there are always pockets of people innovating.
WorldCom and AT&T did not predict the internet was going to come out of universities.
Two words have taken the internet by storm: DeepSeek. Specifically, a Chinese reasoning
model that seems to rival others at the frontier. But that's not all. Alongside the R1
model that dropped in late January came a fully open-source MIT license, a paper outlining
its methods that some claim may be 45 times more efficient than other methods, an alleged
$5.6 million cost, the release of reasoning traces, a follow-on
image model, and the fact that all of this was released by a hedge fund in China.
Since then, there have been so many claims and claims about those claims that many are
already referring to this as a Sputnik moment. But if you think about it, the reason that
Sputnik, the first satellite launched into low Earth orbit by Russia in 1957, the reason
that Sputnik still matters in 2025 is because America took all the actions that it did in
'58, '59, '60, a moon landing speech in '62, all the way up to 1969 when we reached the moon.
Those are the actions that made Sputnik Sputnik. A wake-up call was responded to.
So now that we're here, how should we, whether you're a casual listener, a founder, a researcher
at a top AI lab, or a policymaker, not just react to this moment, but act?
Joining us to discuss this and tease out the signal from the noise
are a16z General Partner and pioneer of software-defined networking
Martin Casado, plus Steven Sinofsky,
longtime Microsoft exec, including being the president of the Windows Division between 2006 and 2012.
Steven, by the way, has also been a board partner at a16z for over a decade
and shares his learnings online at Hardcore Software,
where he recently wrote a viral article,
called DeepSeek Has Been Inevitable and Here's Why.
Of course, we'll link to that in the show notes.
Both Martin and Steven have been on the front lines of prior computing cycles,
from the switching wars to the fiber buildout,
and have even witnessed the trajectory of companies like Cisco, AOL, AT&T,
even WorldCom.
So what really drove this DeepSeek frenzy?
And more importantly, what should we take away?
Have bigger and better frontier models been optimizing for the wrong thing?
And where does value in the stack accrue?
Today, we address those questions
through the lens of internet history.
I hope you enjoy.
As a reminder, the content here is for informational purposes only,
should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security
and is not directed at any investors or potential investors
in any A16Z fund.
Please note that A16Z and its affiliates
may also maintain investments in the companies discussed
in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
It's been a busy few weeks. I don't know about you guys, my Twitter feed, podcast, everything,
DeepSeek everywhere, maybe unsurprisingly. But what's your TLDR in terms of what came out
and maybe also your take on why it blew up in the way it did? Because we've seen lots of releases
in the last, let's say, two years since ChatGPT.
The quick overview, of course, is out of essentially nowhere,
a small hedge fund quasi-computer science research organization in China
releases a whole model.
Now, those in the know know it didn't just appear.
There's a year and a half or so of buildup and...
And they're really good.
And they're really good.
Nothing was an accident.
But it appeared to take the whole rest of the world by surprise.
And I think there were two big things about it that really caught everybody's attention.
One was how did they go from nothing to this thing?
And it seems to be at a comparable level of capabilities with everybody else.
And this number got thrown around that it only cost $5 million.
Yeah, $6 million.
The number is irrelevant.
Because it turns out they wrote a paper and they said, hey, we innovated in this particular set of things on training.
Which even here, everybody was like, oh, well, that was pretty clever.
And then because of the weirdness that we don't need to get into of the financial public markets and how this whole thing happened on a Friday, the whole weekend was like everybody whipping themselves into a frenzy.
So they could wake up Monday morning and trade away a trillion dollars of market cap, which seems to be a complete overreaction and craziness.
But that's not what we're here to talk about.
To your point, there's a lot of moving parts here and there's a lot to consider.
It's actually a fairly complicated situation.
So there has been this view that the traditional one-shot LLMs were starting to maybe asymptote.
Like GPT-4, there hadn't been a big advancement, but then there's just going to be this new breath of life.
And OpenAI released a reasoning model, which is o1, and everybody's very excited about that.
And so in this grand tapestry we're considering you have all this excitement about O1
and how that's going to drive compute costs and NVIDIA, and then R1 comes out and it looks pretty good.
And then all of a sudden they're saying, well, if you can do it just as cheap, is this going to actually drive the next wave and so forth?
And so there's a lot of buildup to o1, which lent to the R1 hype.
And then I think to your point, people didn't know really what to think about it.
And I agree with you.
It was a total market overcorrection.
By the way, it's also worth pointing out that in addition to people saying, wow, this is a great model, there's a lot of like theories and rumors around, oh, well, maybe this is the CCP doing a psy-op, maybe it cost a lot more, maybe this is very intentional, released right by Chinese New Year.
There's just a ton of rumors. Maybe we'll do our best to dissect everything going on.
Yeah, maybe let's just do that. Because to both of your points, there was a lot here, right?
There was the performance element. There was these quotes around costs. There's the China element. There's the
virality. It hit number one in the App Store. There's also shipping speed. I think, Martin, you shared that they
released an image model shortly after. And then it was released on a Friday. So there's this
huge mixture of people reacting, some people who know what they're talking about and some people who
don't, quite frankly. And so we're like 10 days or so out from this release, which by the way,
As both of you said, that was the R1 release.
There was the V3 release, what, two months ago, which was the base model.
So now that we're a little bit further out, what's the signal from the noise?
So maybe I'll give you the lens of Chinese people are smart.
There's one lens, the lens that I hold, which is China has great researchers.
DeepSeek has actually released a number of SOTA models, including V3,
which is actually probably a more impressive feat.
It's almost like a ChatGPT-4.
And oh, by the way, to create one of these chain-
of-thought models, these reasoning models, you need to have a model like that, which they had done and we had known about.
All of the contributions that they've made have been in the public literature somewhere, just nobody really aggregated it.
So there's a thought that I hold, which is this is a very smart team that's been executing very, very well for a long time in AI.
They are some of the top researchers.
The fact that they spent $6 million just doing the chain of thought is actually not out of whack with what Anthropic has now said they've spent and what OpenAI has said they spent.
And so this is a meaningful contribution from a good team in China.
And so it means something and we should respond to it.
So some of the outcry is warranted.
I do think that we should respond to it, but I don't think for the reasons a lot of people are saying.
I completely agree with that.
And in fact, you also saw the people outside of that team in China sort of piling on to try to make it more intergalactic than it was.
I mean, my favorite, an old friend of mine, Kai-Fu Lee, comes out on X and says something about this is why I said two years ago,
Chinese engineers are better than American engineers.
But the truth is, to your point, about reaching some asymptotic level of progress.
Yeah, like the previous base models, like the GPT lineage, seem to have asymptoted around
GPT-4.
Right.
But what's super interesting about that is that asymptote was true if you looked at it
through the lens of the function that everybody was optimizing for, which is, to my view,
this kind of crazy, hyper-scaler view of the world, which is we need more compute and more
data, more compute, more data. And we're just on that loop. And a lot of people from the outside
were like, well, you are going to run out of data. And I, just as, you know, a microcomputer
person was like, well, at some point, you're going to end up breaking the problem up to the
seven billion endpoints of the world, which will have vastly more compute than you can
ever squeeze into one giant nuclear-powered data center. And so a lot of what they did was sort
of a step function change, not necessarily improvement, just a change in the trajectory.
Yes. And that, to me, is the part where the hyper-scalers needed to take a deep breath and say, okay, why did we get to where we were? Well, because you were Google and meta and OpenAI funded by Microsoft, which all had like billions and billions of dollars. So you obviously saw the problem through the lens of capital and data. And of course, you had English language data, which there's more of than anybody else, so you could keep going. The way I thought of it is when Microsoft was small, we used to just decide, is it a
small problem, a medium problem, or a large problem. And I remember at one point we started joking
that we lost the ability to understand small and medium problems and solutions. And we only had
like large, which was just trivial. And then huge and like ginormous. And our default was
ginormous. Because we thought, well, we could do it and no one else could. And that's a strategic
advantage. And I feel like that's where the AI community in the West, if you will, got just a little
carried away. And it was just like every startup that has too much money, the snacks get a little
too good. So I've heard two theories of why they were able to do this. One of them is this
constraint, one that you've said, I think, which is actually very true, which is we've just been
using this blunt instrument of compute and blunt instrument of all data, and we just haven't
thought about a lot of engineering under constraints. The second theory I heard, I don't know if it's
true, but it's tantalizing, which is the reason V3 is so good is actually because it has access to
the Chinese internet as well as the public internet.
which is actually an isolated thing.
We don't really have access to the internal Chinese Internet.
And we certainly don't train from it as far as I know, which they do.
So it could be the case both things are true.
They could have had a data advantage.
They definitely have the engineering constraint.
Well, even on the data, their starting point is the Chinese Internet, per se,
that has much more structure to it.
It's a much better training set.
That's a great point.
And inasmuch as human annotated data is important here.
And for chain of thought, you do want experts saying,
here's how I would reason about a problem. I mean, this is what this whole chain of thought is,
it's basically what are the reasoning steps. If you want to look at a place to arbitrage really
smart, educated people at relatively low cost, it's hard to beat China globally, right? And so
they definitely have access to a bunch of potentially highly educated, annotated data, which is very
relevant here. And so I happen to believe that this did not come out of nowhere. It's
not a sci-op. This is a great team taking advantage of what it has. But there are still things
that are very significant about it that are worth talking about. For example, the license is very
significant. The fact that they decided to release the reasoning steps is very significant.
Those are two things that you're not seeing headlines about, right? You're seeing headlines
about all the other things that we just talked about. You said the reasoning traces, those were
released, which in the comparable o1 were previously not. Right. And then the open source
license. So there's two things that are pretty remarkable about deep seek R1 that have implications
on adoption. We haven't seen a license this permissive recently for a SOTA model. It's basically
the MIT license, which is like one page, you can do anything, right?
It's like free as in free beer, for real.
Yeah, for real, for real.
I think at a16z, we have one of the largest portfolios of AI companies, both at the model layer
and at the app layer.
And I will say any company at the app layer is using many models.
Like I have yet to see the GPT wrapper.
They're all using a lot of models.
They do use open source models and licenses really matter.
And so this is definitely going to result in a lot of proliferation.
The second thing is, so a reasoning model actually thinks through the steps of the problem.
and it uses that chain of reasoning
or chain of thought
to come up with deeper answers.
And when OpenAI
released o1, they did not release
that chain of thought.
Now, we don't know why they didn't do it,
but it just turns out that that chain of thought,
if you have access,
it allows you to train smaller models
very quickly and very cheaply.
And that's called distilling.
And so it turns out that you can get
very, very high-quality smaller models
by distilling these public models.
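To make the distillation idea concrete, here is a minimal sketch in Python. The <think> tag format, the helper names, and the toy example are illustrative assumptions rather than DeepSeek's actual pipeline; the point is simply that published reasoning traces can be turned directly into supervised fine-tuning targets for a much smaller student model.

```python
# Minimal sketch (not DeepSeek's actual recipe): if a teacher model publishes
# its chain of thought, you can turn (prompt, trace, answer) triples directly
# into supervised fine-tuning targets for a smaller student model.

from dataclasses import dataclass

@dataclass
class TeacherSample:
    prompt: str
    reasoning_trace: str  # the step-by-step "thinking" the teacher exposed
    final_answer: str

def to_sft_example(sample: TeacherSample) -> dict:
    """Format one teacher sample as an input/target pair for fine-tuning.

    The student is trained to emit the teacher's reasoning followed by the
    answer, so it learns the process, not just the final label.
    """
    target = f"<think>\n{sample.reasoning_trace}\n</think>\n{sample.final_answer}"
    return {"input": sample.prompt, "target": target}

if __name__ == "__main__":
    samples = [
        TeacherSample(
            prompt="What is 17 * 24?",
            reasoning_trace="17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
            final_answer="408",
        ),
    ]
    sft_dataset = [to_sft_example(s) for s in samples]
    # This dataset would then feed any standard fine-tuning loop for a small
    # open model; without released traces, the target side cannot be built.
    print(sft_dataset[0]["target"])
```

Without the released traces, the target side of each training pair could not be constructed at all, which is why their release matters so much for proliferation.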
And the implications are both
that this is just more useful,
for somebody using R1, but also you get a lot more models that can run on a lot smaller devices,
so you just get more proliferation that way. So it's actually a very big step when it comes to
the proliferation of this model. Absolutely. And I think that there's this tendency to peg yourself
at, oh, it should just be open, but without really defining it, which I think is important in this case.
And I think because of where they came from and that they don't have a business model, that was part of
what was unique about this, was it was a hedge fund, like almost this side project, but not
a side project. It has this effect that like, well, we're just going to give the whole thing away.
And the rest of the companies are still trying to figure out their revenue models, which I would
argue was probably premature. And it starts to look a little to me like, hey, let's charge for a
web server. And it's like the business of serving HTTP, not a great business. And I think
everybody just got focused on the first breakthrough, which was the LLM, which if you look back
at the internet, what exactly happened was everybody got very focused on monetizing the first
part of the internet, which was HTML and HTTP. And then along came, I don't know, Microsoft
and a bunch of other companies to say that's not the best layer to monetize it. In fact, there might
not be any money in that layer. And the real money is going to be in shopping and in plane tickets
and in television. And even other companies, AT&T got wound up trying to monetize even lower
layer. But that's not how you're going to get to 7 billion endpoints. And I think that the licensing
model really matters because what's going to happen is that there's going to end up being
some level of standardization. Now, I don't know where in this stack or in what level, but there
is going to be some level of standardization. And the licensing model for the different layers
is going to start to matter a lot. Anyone who was around during the internet remembers the
battles over the different licenses, GNU v3, v4, the open-this-or-that licenses. I remember very well.
Right. Well, you were doing a dissertation, and it turns out, even with your dissertation, which parts of it you released and how was a huge issue because it could make or break a whole approach. And I think that the U.S. industry lost sight of that importance because they got so used to this model of, like, open just means we're a business and we pick and choose what we throw out there as evidence that we're an open company.
Yeah, yeah, yeah, yeah. Totally. And I think that view isn't aligned with how technology has been shown to evolve in an era where there's no cost for distribution. Before, when there was a cost for distribution, it turns out the free model was irrelevant because you still couldn't figure out how to get it to anybody.
Yeah, totally. I do want to take the other side of this because I actually tend to agree with you. And so what you just said is, A, it could be the case that the model's the wrong place to focus and everybody thinks there's a lot of value in there. And so they're playing all these cute games with openness as opposed to
distribution. And that could very well be true. But there's another view, which is actually the
models really are pretty valuable. And in particular, the model itself isn't an app. But it could
be the case that if you're building an app, you need to vertically integrate into the model. It could
be the case. And therefore, like if I'm building the next version of ChatGPT, or we just had
Deep Research launch today, it could be that the apps actually require you to own the model. And in that
case, DeepSeek is less relevant because they're not building apps. And then this means that
the impact to the OpenAIs and Anthropics is not as great, right?
And so I do think that there's this fork that we don't know the answer.
Fork number one is maybe the models do get commoditized.
You need to focus at the app layer, and then the license doesn't matter.
Or the models really matter up the stack, in which case the whole DeepSeek phenomenon
really isn't as impactful an event as people are making it.
So I'm going to build on that just because I want to say, you're right, both times.
No, no, and the variable is time.
And the Internet is such a great example because there's no way this doesn't play
out like the internet. Like it just has to. And what we saw was for a while, building one app
seemed like a crazy thing because you had to own Windows and you had to own office. But then a new
app came along that didn't own any of those and it was search. And so that's why I think a lot
of people also because of age and what they lived through immediately jumped to, oh, these LLMs
are going to replace search. But it turns out that's actually going to be really, really hard. Because
there's a lot of things that search does
that the models are bad at,
really bad. And so
what's going to happen is a new
app is going to emerge. And
then when the new app emerges, that's going to get
vertically integrated. And the research app is a
super good example of that.
And then all of a sudden,
other apps are going to spring up. Oh, there's Google
Maps and there's search. And then there's
Chrome. And then it goes back
and eats the things that it couldn't
do before. And I really
feel like that's the trajectory we're on.
it's still a matter of where
and what integrates. But the thing is, is that the
apps that ended up mattering on the internet
literally didn't exist
before the internet. And I think that's what
people are losing sight of. Same with mobile.
Same with mobile. They're all, everybody
is completely, right. There were no social
apps. You know, okay, fine, I get it.
There was GeoCities and a bunch of others. But people
get so caught up on
new thing, it's going to replace something.
Zero-sum thinking is so dangerous.
There's zero-sum, and you can think of everything
as the spectrum. And when something new
comes along, the whole spectrum gets divided up differently, which is what Google said when
they bought Writely. They said, you know, what are people going to do on the internet? They're going to
type stuff. And what are they going to type? They're going to type it, but they're going to type it
with other people. Okay, so this is great. So we're actually seeing this happening now, which was
someone will come up with a model that does something like in a consumer space. Let's say like text
to image. And then it turns out that over time, people are like, oh, it's kind of like
Canva. Exactly. It's like slowly doing the AI-native version. Just, you're right. Just like the cloud-
native version of Word, the AI-native version of these kinds of existing apps.
The reason it's important is because it looks like Canva or it looked like Word or it looked
like PowerPoint or it looked like Excel.
But what's important is that they're actually different.
Nothing is going to ever be PowerPoint again.
Why?
Because PowerPoint, the whole reason for existing was to be able to render something that couldn't
ever be rendered before.
And so all of the whole product, it's 3,000 different formatting commands.
Like literally, that's not a number I made up.
Like, it's 3,000 ways to kern and nudge and squiggle and color and stuff.
And actually, it turns out you don't need to do any of that in AI.
So the whole product isn't going to have any of those things.
Yeah, exactly.
And then it turns out all those things make it really hard to make it multi-user.
And so then when Google comes along and starts to bundle up their competitor that's going to replace it, they're focused on sharing.
So, Steven, let me ask you this.
You said something really interesting.
I'm good.
I know. I did.
Which is this has to play out like the internet.
And you guys have used examples of different companies, the mobile wave cloud era.
Those are things we can learn from.
But I just want to probe you, is there something different here?
To bring it back to DeepSeek, this is very important to realize the capabilities of China.
It's a very credible player.
But I don't think that R1 itself, as a standalone, is going to have that deep of an impact.
But on the Internet, so there's actually these parallels when it comes to capital buildout that you see in the AI,
which is it takes a lot of investment.
And there's a special parallel
that Mark Andreessen actually reminded me of,
which people don't tend to see as well,
which is in the early days of the internet,
like the mid to late 90s,
a lot of investors, a lot of big money,
think banks or sovereigns,
they wanted exposure to the internet,
but they had no idea how to invest
in software companies.
Like, what are these new software companies?
Who are these people?
Like, they're all private companies.
Like, so what did all of them do?
They all invested in fiber infrastructure.
So we're starting to see this thing again, right?
We see a lot of banks and big investors.
Listen, we want to build up data centers
because they don't know how to invest in startups.
Like we know how to invest in startups, right?
So on one hand, you can be like,
oh, we're going to see all of this kind of capital expenditure
and all this capital expenditure is going to go into physical infrastructure,
and therefore we're going to have another fiber glut equivalent,
but a data center glut.
So the counter to that point, where I think it is different, is,
at the time of the fiber buildout, you had one company
which happened to be cooking its numbers
and had a ton of debt to build all of this out.
When the price of fiber dropped, that company went out of business and that caused a huge issue.
You have a much better foundation for the AI wave.
The primary investors are the big three cloud companies.
They've got hundreds of billions of dollars on the balance sheet.
Even if all of this goes away, they'll be fine.
Nvidia can take a price dip.
Nvidia will be fine.
So I don't think we're heading to the same type of glut and crash that other people have predicted.
It's very appealing to draw parallels to the internet for that, but I don't think it's there.
Oh, I am completely with you on that.
that part of it is going to look like the amount that Google invested in the early 2000s
or the amount that Facebook invested five years later or people forget that Microsoft poured,
I don't know, $30, $40 billion into Bing.
And it's still number three or whatever, but it still doesn't matter.
Yeah, I would bet, I don't know this is a fact.
I'll bet Meta's spending more money on VR than it is on AI right now.
Yeah, just to show you.
And maybe Apple too, right?
Oh, Apple, also because Apple, whatever is bigger than Gargantuan is how much they're spending.
And so it really isn't about the investing profile.
And I think that is a super important point that you made to really just hammer home.
There's a certainty that nobody's going to come out of this unscathed, but the scathing is not going to be at all what anybody thinks.
And then not like what it was.
Oh, WorldCom, I believe, had $40 billion in debt, right?
I mean, it was just one of these things where structurally it was.
Oh, and there were companies that we've all forgotten about that went bankrupt over that era.
Actually, there was one in Seattle whose name I'm forgetting, but that was like $20 billion, just poof, gone.
To your point, these companies have had so much cash on their balance sheet.
They've been waiting for a moment to invest in the next generation.
Which also contributes to their willingness to scale up as much as they did.
So let's talk about that.
In your article, you talk about the difference between scale up and scale out and the natural tendency in these early parts of the wave to scale up
when really there tends to be a shift towards software basically going to zero costs.
So, Steven, what do you mean by that?
And are we at that change in trajectory?
Now we'll just switch to make sure we're really talking about the technology now.
Not the finances.
But when you're big, you want to double down on being big.
And so you start building bigger and bigger and bigger computers that don't distribute the computation elsewhere.
So if you're IBM, you just say the next mainframe is another mainframe that's even bigger.
If you're Sun Microsystems, you just keep building bigger and bigger workstations.
Then if you're Digital Equipment, bigger and bigger minicomputers.
And by the way, all along, you're just doing more MIPS in the acronym sense than the previous maker for less money.
And then the microcomputer comes along.
And not only did they do, like, fewer MIPs, but they cost nothing, and they were going to be gazillions of them.
And so you went from an era when IBM would lease 100 or 500 new mainframes in a year, and Sun might sell 500,000 workstations to like, oh, let's sell 10 million computers in a quarter.
And I think that scale out where there's less computing, but in many more endpoints, is a deep architectural win as well, because it gives more people more
control over what happens. It reduces the cost. So today, you know, the most expensive MIPs you can
get are in like a nuclear powered data center with like liquid cooling and blah, blah, blah,
whereas the MIPs on my phone are free and readily available for use. And I think that, to me,
has been a blind spot with the model developers now. They all do it. I mean, I run Llama on my Mac.
And the first time you do it, your mind is blown. And then you start to go, well, now that's just
how it should happen. And then you look at Apple and their strategy,
which the execution hasn't been great,
but the idea that all these things
will just surface as features popping up
all over my phone and they're not going to cost anything,
my data's not going to go anywhere.
That's got to be the way that this evolves.
Now, will there be some set of features
that are only hyperscale cloud inference?
Oh, yeah, just like most data operations
happen in the cloud now.
But most databases are still on my device.
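To make that on-device point concrete, here is a rough sketch, assuming the llama-cpp-python bindings and an already-downloaded GGUF checkpoint (the model path below is a placeholder): the whole loop runs on the laptop, with no cloud call involved.

```python
# Sketch of "the MIPs on my device are free": running an open-weight model
# entirely on a local machine. Assumes llama-cpp-python is installed and a
# GGUF checkpoint has been downloaded; the path is a placeholder.

from llama_cpp import Llama

llm = Llama(model_path="./models/small-model.gguf")  # placeholder path

result = llm(
    "In one sentence, why does on-device inference matter?",
    max_tokens=64,
)
print(result["choices"][0]["text"])
```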
So I'm smiling because this is the story
from like a microcomputer guy.
I'll tell the story from an internet guy.
There's the perfect parallel,
which is, do you remember the switching wars?
Oh, yeah, absolutely, yeah, yeah.
So for the longest time, you had the telephone networks,
and they were perfect.
They would converge in milliseconds.
They would never drop anything.
You got guaranteed quality of service.
And it was five nines.
That's five 9s.
And then here comes the Internet.
You had none of these things, like convergence was minutes,
like it dropped packets all the time.
You couldn't enforce quality of service.
And there were these crazy wars at the time
where, like, why are you doing this internet
stuff. It's silly. We know how to do networking. But what the switching people, the telephone
people, didn't get was what happens when you actually have a best effort delivery and then how
it enabled the endpoints. They needed the value to be in the network and they couldn't think
that way. And that really brought us the internet. And I think the exact same thing is playing
out. I actually see it a lot of the time, like people, they look at these models, oh, they
hallucinate. Or, oh, they're not correct at these things. But they enable an entirely new set
of stuff, like creativity and coding. And it's an entirely white space and it's going to grow very
quickly and to assume that somehow they don't fit the old model is irrelevant to where it's going
to go. What I do is I just s/QoS/hallucinate/. Yeah, yeah, exactly. Exactly. Because like to
now explain, what happened was I was going to all these meetings in the 90s with all these
pocket protector AT&T people who would just show up and they would yell at Bill Gates like
QOS, QOS, QOS. And we had to go all look up what QOS was because not only we're not using
TCP IP, but the network we were using never worked. Because it was like a PC-based network.
And the IBM people... Like the Netbuy stuff? Yeah. I never got it. I am talking to a networking
genius. So I should like the ping of death. But it was just hilarious because they're telling
me about QOS. I didn't know what it was. I walked them over to my office. And this was like in the
winter of 1994. And I'm like, oh, look, here is a video of the Lily Hammer Winter Olympics playing
on my Mac. Yeah. Awesome. And it was like literally,
Literally, it was a postage stamp, the size of an iPhone icon.
And they were like, well, that's 15 frames a second.
I'm like, I know it's usually like five.
And like, where's the audio?
I said, well, if I want the audio, I just call up this phone number on your system.
And then they just laughed at me.
And so here we are, of course, all using Netflix on every device all over the world.
And I think that they can't understand that these paradigms where like the liabilities either don't matter or just become features.
And of course, that's what gave birth to Cisco.
And they just went, well, this is how we've been doing it.
And it all works.
It only works in our crazy weird universities and in the Defense Department.
And now that's all we use.
And I want to tie this back to DeepSeek, because the reason we're getting so excited about this
is because we've seen things like DeepSeek come out before.
And it's not zero-sum.
It doesn't replace the old thing, right?
It is a component of the new thing.
And the new thing, we still haven't even envisioned yet, right?
It's like the Internet is just coming right now.
And our excitement is for the new thing to come.
And so when I saw DeepSeek coming,
I'm like, amazing.
This is another step to basically AGI in your pocket.
These can run on small models.
It shows that we're going forward.
My reaction was not, oh, shit, I need to like short Nvidia or whatever.
I think that's actually the wrong answer.
Yeah, I mean, I read the let's short Nvidia blog post that flew around that whole weekend.
And I was like, are you crazy?
I'm like, A, Jensen is a genius.
B, their company is filled with geniuses.
What about the TAM just expanded, don't you think?
Yeah, yeah.
It's exactly.
And so it is super exciting.
This is the scale-out step. It
just happened. And so now you could see everybody doubling down. And to your point that you made
earlier that I think is super insightful and really important is this enabling of specialized
models, because that's what's going to end up being on your phone. And that's what's going to
enable the app layer to really exist. To me, this is all the equivalent of the browser
getting JavaScript. Yes, indeed. Because once the browser got JavaScript, then all of a sudden
you could do anything you needed without going to some standards body or building your own
browser. And I think that's where we are right now. One follow up there is if you think about how
this has progressed to date, I feel like the benchmarks have always been like which model has the
most parameters, how is it doing on this coding test, which isn't necessarily representative of things like:
what device can this fit on? How much does it cost? Do we expect then a different set of benchmarks or
things that we're judging these models by or should we just be looking at the app layer? Does there
need to be some sort of shift that kind of moves us away from bigger, better, as you're saying,
and something that represents scale out.
Of course, I thought all those benchmarks were just silly to begin with.
To me, they all seemed like,
remember the benchmark we used to do with browsers
was like how fast it could finish rendering a whole picture.
And so Mark Andreessen invented the image tag in the browser.
The neat thing that they did in their implementation was progressively render it.
And then what that did is empower stopwatches all over the world of magazines
to write who finishes rendering a picture faster.
And of course, here we stand today.
Like, that's not a thing you can even measure now, and it doesn't matter.
And so I think those will all go away.
And we're just very quickly going to get to what does it actually do.
I do think that the measure that's going to start to really matter will depend on the application
that people are going after.
Take this deep research stuff that just appeared this week.
Well, it turns out when you're doing research, the metric that matters is truth.
And all of a sudden, you're giving footnote links and you're giving sources because what's
really happening under the covers, it's a little bit less of generation
and a little bit more of IR, information retrieval.
And all of a sudden, vector databases
and looking things up and reproducing them matter.
And so now we're probably along the lines of ImageNet,
and they're going to start to generate
thousands and thousands of routine tests
that are like, is this true?
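To make the "less generation, more IR" point concrete, here is a toy Python sketch of the retrieval step: rank a few source passages against a query and hand back the best ones with citation IDs so an answer can be grounded and footnoted. The bag-of-words "embedding" and the hard-coded sources are deliberately crude stand-ins for a real embedding model plus vector database.

```python
# Toy sketch of grounding an answer in retrieved sources. The bag-of-words
# "embedding" and cosine similarity are stand-ins for a real embedding model
# and vector database; the documents are made-up placeholders.

import math
from collections import Counter

def embed(text: str) -> Counter:
    # Crude stand-in for an embedding model: lowercase token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

SOURCES = {
    "doc1": "Sputnik was launched in 1957 and spurred US investment in science.",
    "doc2": "DeepSeek released its R1 reasoning model under an MIT license.",
    "doc3": "The late-1990s fiber buildout left a glut of unused capacity.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    q = embed(query)
    ranked = sorted(SOURCES.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
    return ranked[:k]  # (citation id, passage) pairs used to ground an answer

if __name__ == "__main__":
    for doc_id, passage in retrieve("What license did DeepSeek use for R1?"):
        print(f"[{doc_id}] {passage}")
```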
This is totally an aside.
But you reminded me of a kind of a weird historical errata,
which is the fact that Andreessen made the image tag.
So in a way, he's also the grandfather to some AI
because CLIP, which is an AI model,
basically will take an image and describe it.
The way it does that is by using the meta tags and image tags.
So he created the metadata to do this.
I will say back on the topic of the images,
here's one thing I've noticed working with these companies
where these models are actually pretty magic by themselves, right?
If you have a big model, you just expose it, people use them, right?
Which is very different than computers.
Like you just put the model out there.
Yeah, yeah.
The thing is all the other models catch up very quickly because they distill so well.
So it's not defensible in a way.
And so the companies that are defensible,
that I've seen is they'll put out a model that's very compelling. And then once the users are engaged
in the model, they find ways to build an app around it that actually is retentive, right? So it'll start
converging on like PowerPoint. It's more stable and requires configuration. So that tends to be
very defensible. And then the applications that use models, they use lots of models, and they do
fine-tune these models a whole bunch. And the last two years have been the story of the large model.
It really has been, and they've been magic. Like, people use them and people really like them.
And the first time you're in ChatGPT, you're like, this is amazing. And now I think we're in the
era of workflow around models, which are stateful complex systems, right? And also many models.
Many models is a great point. To build on that, this is what happened with user interface.
The whole notion of user interface that IBM put forward was just derived exactly from their green
screens and their 3270s. And they made a shelf of rules on how...
40 characters.
Of like exactly how the UI should be. And this is the F-10 button and this is the whatever.
And then it turns out that people were building all sorts of UI
frameworks. It actually looks exactly like the browser today, where there's a zillion frameworks on the
endpoint, you pick and choose, you do what you want to do, you can invent a new calendar drop down
if you want or not waste your time. It's really up to you. And I do think that aspect of creativity
is extremely important to applications. And then for apps to be differentiable and to also,
to use an MBA term, have a moat, apps are going to also embrace the enterprise. And for better or worse,
one of the lessons that we keep learning is if you want to get adoption in the enterprise,
you're going to have to do a bunch of work
to turn off parts of your app
or to filter parts of your app
or to disable it or whatever it is.
And I think the smartest entrepreneurs
are going to recognize the need for sign-on,
single sign-on at the beginning.
RBAC and SSO are like day one.
Every time.
Every single time, because it turns out
that's also a great way to price.
It's not super hard.
And I think so much dumb stuff
has been done about AI
and alignment and censorship
and whose point of view is it
and all this other stuff
that there's now a whole industry
that just wants to show up
and tell you all the things
that they don't want out of AI.
And the smartest entrepreneurs
are going to actually get ahead of that
and they'll be there to sell
because it turns out that
is actually enormously sticky
in the enterprise.
And I think that we're going to see
the smart productivity tools
embrace that immediately.
And it could be even
at the most granular level
of turn it off for these users
or whatever.
Well, we had Scott Belsky
at Speedrun recently,
And to your point, he talked about Adobe and someone said, well, you have all these licensed images, right, for Firefly.
Do consumers really care about that?
And he was like, honestly, not really.
But you know who does care?
It's the enterprise, right?
So to your point, those are two different modalities.
And founders are going to have to figure that out.
But I do want to touch on, you know, a lot of people are talking about DeepSeek as this Sputnik moment.
And that can be viewed in the lens of geopolitics, U.S., China.
But also, if you think about Sputnik, that wouldn't have been a moment if Kennedy didn't do his moon
landing speech, if we didn't actually get there.
So in other words, if changes weren't made.
And so let's say you're in a boardroom.
You're an advisor.
I don't want to talk to the boardroom.
I want to talk to the U.S. government, right?
And so, like, for me, actually, the biggest aha of DeepSeek is nothing we've talked about
right now.
The biggest aha of DeepSeek is how blind our policies have been around AI, right?
They've been so wrongheaded.
So our previous policies around AI have been, we can't open source because it'll enable
China, we've got to limit our big labs, you know, we've got to put all of this regulation
on top of it. And the reason is, is for safety and all this other stuff. Export controls.
All the export controls, so we can't enable other countries. Export controls on chips. We've
talked about putting export controls on software, on model weights, all of this other stuff. Like,
that was our entire policy. And for me, the biggest, biggest, biggest takeaway, the whole
DeepSeek thing is that's the wrong way to do policy. China has a lot of very smart people.
They're incredibly capable. They're great researchers.
They can build stuff as well as we can,
and they can open source it.
We did not enable them.
They did this even with export controls on chips, right?
So basically all of our activity
has been for naught.
And what we should be doing
is funding and investing in our research labs,
and we should be going as fast as we can.
And it really is the AI race,
just like we went through the space race,
and we need to win.
And we have everything that we need to win.
The only thing in our way
is our own regulatory environment.
Just to build on that,
the lesson is not Sputnik,
the lesson is the Internet.
What we learned from the internet, which Al Gore famously claimed to have invented the internet,
but what he really did was invent the regulation that allowed the internet to flourish.
And they could have looked at the internet and said, oh, my God, this is a Sputnik moment,
and then tried to turn it into what AT&T and WorldCom wanted.
And they were there lobbying trying to make that happen.
And frankly, AOL wanted it to happen that way, too.
And so they ignored that, and they went with what made the internet strong to begin with.
And so what gave us this deep seek moment was the strength of the worldwide technology community.
And so as much as people want to own it and be the singular provider, it's not going to work.
The biggest difference, not to overanalyze the analogy, I think it's a Sputnik moment in the sense that it's a wake-up call for half the world.
It isn't a geopolitical wake-up call.
It's not about war.
It's literally just about technology diffusion.
And we've had so many misfires since then.
I mean, we had the whole encryption war where we tried to put export controls on encryption and all this.
And, you know, although people thought we were being silly as an industry when many of us championed this.
Well, you can't.
It's like outlawing math.
It turns out to be like outlawing math.
And the fact that it used those chips, well, in the world's economy, as we've seen, it's very, very hard to put export controls on things.
Remember when we were going to export control PlayStations?
Oh, yeah.
No, Xbox.
Like, the government came to us.
Or, like, actually, 2048-bit encryption and email.
Yes.
Because people came to us and said, well, we can't have bad actors.
That's their favorite phrase.
Bad actors encrypting their email.
I'm like, well, they're just going to encrypt the attachment themselves.
And then there's nothing we can do about that.
For sure.
But in this case, we've actually put export controls on GPUs before.
I mean, like a perfect analog.
We were like, oh, listen, you can do weapon simulation on these things.
Like, a PlayStation was the first to actually use the SGI.
Right, right, right.
Remember that.
We can export control that.
We can't let that into Saddam Hussein's hands, the whole thing, total failure because it just turns
out global markets or global markets and we're much, much better in investing, which at the time
we did in our own infrastructure.
We did a great job of that.
And I think it was a great analogy with the Internet and with Al Gore.
We should be doing exactly that again.
And some politician needs to stand up and be the Al Gore of this moment.
I think that we will get that.
So I do think that there is now a wake-up call.
I think that the futility of the past four or five years of this kind of stuff is now very, very
clear. And I mean that even more broadly than you were saying. Like, I mean, like, the people who
wanted to control this technology at this very granular level in all these think tanks and
institutes that were all aligned. I mean, the number of books written, the number of academic
departments started, the number of assaults on technology companies to align. I mean, whole
meetings in Switzerland about aligning, you know, with the world leaders. That's just not how
anything evolves. And the biggest lesson for computing, starting in 1981 with the IBM PC,
or, frankly, 1977 with the Apple II,
has been the creativity at the edge,
just enabling that.
And I think the problem that the regulators had
was they had never faced regulating a connected world before.
And I think the other lesson from Deep Seek is just,
okay, the world is already connected,
the world is already native in all of this stuff.
So now the amount of actual calendar time
it takes for something to diffuse technically is zero.
I mean, DeepSeek, I think, the number I saw this morning was like 35% of the DAUs of OpenAI.
And that's a giant spike because just all the same people are just trying it out because there's no friction.
It takes no time.
And so it's so unbelievably exciting to be part of what's going on right now.
And we just don't need to throw water on it and be party poopers.
So one thing I will say, it's like I personally don't think this is a crisis moment for OpenAI or Anthropic.
I think like apps are hard to build.
I think that, like, right now the apps that they put out are very complex.
They actually know their users.
They have very specific use cases.
And so, I mean, I think for them it's a bit of a wake-up call that they can't slouch and they got to move very quickly.
But I'm still very, very bullish on our labs.
And I think they can stay ahead too.
So, again, there's this view of DeepSeek as a crisis moment for NVIDIA, a crisis moment for OpenAI and Anthropic.
I don't buy any of that.
I think it's more of like a wake-up call for the regulatory environment.
And then, listen, we should all acknowledge that.
Listen, there's going to be global competition.
We need to stay ahead.
I would also say that what we should see now, the right reaction from all of these frontier folks, is they should all just start building apps.
Because the best feedback loop to build a great platform for other people to use is to be building apps.
And there's this whole consternation over competing with your partners or whatever.
Our industry is co-opetition through and through.
It's Andy Grove's lesson.
So just everybody should be prepared for these big players to compete with you.
but history has shown that's no surefire success. And the TAM grows 10x; there's just a lot of room for a lot of folks.
Yeah. I mean, Microsoft spent 10-plus years like a distant number three in the applications business, and it was a platform shift that all the other players ignored that caused it to win. And so I think that the TAM is going to be 100x. It's going to be every endpoint. The revenue is going to come from the apps side of it, and then there'll be a developer side of it. It'll just be a different pricing model for different sets of scenarios, but it's going to be there. So everything is rising right now.
Since it is this positive sum growing world, do you have any thoughts just real quick on the fact
that this came from an algorithmic hedge fund, a quant? Is that any different to your expectation?
Or does that actually signal that more can participate? It's a good reminder that there are
always pockets of people innovating. WorldCom and AT&T did not predict the internet was going to come
out of universities. They did not think that a physics lab in Switzerland was going to invent
the protocols that became foundational.
That's so true.
And they certainly, and they also didn't expect
a failed corporate lab
to develop TCP/IP that became the standard.
I mean, it wasn't like the IBM lab.
It was like literally a lab
that they'd all but shut down
because it failed just down the street at PARC.
And so...
You remember like SRI was involved?
Like all these places
that you don't even think about anywhere.
Right. And so most of this isn't going to be
even in any history that's written in five years.
And I think that that is the
exciting. All right, that is all for today. If you did make it this far, first of all, thank you.
We put a lot of thought into each of these episodes, whether it's guests, the calendar
Tetris, the cycles with our amazing editor Tommy until the music is just right. So if you'd like
what we put together, consider dropping us a line at ratethispodcast.com/a16z. And let us know
what your favorite episode is. It'll make my day, and I'm sure Tommy's too. We'll catch you on the
flip side.