a16z Podcast - Monopolies vs Oligopolies in AI
Episode Date: August 28, 2025

In this interview from the 20VC podcast, Martin Casado (a16z General Partner) joins Harry Stebbings to unpack the state of AI, the rise of coding models, the future of open vs. closed source, and how value is shifting across the stack. Martin offers a candid view of the opportunities and dangers shaping AI and venture capital today.

Resources:
Find Martin on X: https://x.com/martin_casado
Find Harry on X: https://x.com/harrystebbings

More about 20VC:
Subscribe on YouTube: https://www.youtube.com/@20VC
Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466&nd=1&dlsi=d1dbbc6a0d7c4408
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Visit their Website: https://www.20vc.com
Subscribe to their Newsletter: https://www.thetwentyminutevc.com/
Follow 20VC on Instagram: https://www.instagram.com/20vchq/
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
There's only been one sin, and that one sin is zero-sum thinking.
We always worry about, like, oh, is this defensible?
Oh, will this layer get margin?
Will this layer get value?
And the answer has kind of been unilaterally, yes.
The answer has been every layer has gotten value.
Every layer has winners.
These markets are so large, and they're growing so fast.
We're actually seeing brand effects take place.
In this phase of model scaling, a lot of the approaches to scaling don't generalize.
This gives a ton of room for the application developers to build their own models.
I think that right now, open source is most dangerous because China is better at it than we are.
Today on the podcast, we're sharing a conversation from our friends at 20VC with a16z
general partner Martin Casado.
They cover the state of AI investing, why the real sin is zero-sum thinking, how value is being
created at every layer of the stack, and the risks of monopolies versus the reality
of concentrated markets.
Let's get into it.
Martin, man, I love our conversations.
I was so excited when you said you'd join me again.
Thank you so much for doing this, man.
So excited to be here.
It's great to see you.
Dude, I freaking hate these "How did you get into venture?" intro questions.
So I just want to dive right in. It is a freaking nuts time. So starting off,
how do you evaluate where we're at today in the AI investing landscape, peak hype cycle,
great, super excited, both? How do you evaluate it? So I'm kind of of two minds. On one hand, I do
feel like my intuition doesn't really work like it has the last 20 years.
The future is just very uncertain. And one of the reasons is because, you know, this is really
the first time, like, software development and software creation is being disrupted. And so on
one hand, I'm like, I don't really know what to think. On the other hand, observationally, there's
only been one sin. And that one sin is zero-sum thinking. We always worry about, like, oh, is this
defensible? Oh, will this layer get margin? Will this layer get value?
And the answer has kind of been unilaterally, yes.
The answer has been every layer has gotten value.
Every layer has winners.
Things that we thought were silly are making money.
It's been solved.
There's profitable companies.
I mean, the business case is there, et cetera.
And so I think the one sin is not playing the game.
Do you agree with the playing the game on the field sentiment?
When we look back at 21, you know, I remember everyone saying playing the game on the field,
I wish I hadn't played the game on the field.
To be transparent, Martin,
do you agree that you have to play the game on the field in venture?
I think behavior should follow business.
It shouldn't follow marks.
And I think in 2021, behavior was following marks, right?
It was like the public markets just decided these companies were valued a whole bunch.
You know, Tiger came in with a ton of money and deployed it a whole bunch.
And so, like, I think behavior following investment marks is a bad idea.
But in this case, you have some of the fastest-growing
companies we've ever seen
by users, by revenue,
I mean, the amount of value
that's kind of shifted to this is so significant
and so I think investors' behavior should follow that.
If not, I mean, what are we doing?
When you think about shifting value,
again, I'm diving right in, but this is not our first go-round,
so let's not go roundabout.
A lot of people have funds,
and you talked about the kind of disruption of software development;
there is a ton of players in the vibe coding space.
They are predominantly all sitting on top of Anthropic.
Claude Code is gaining more and more
dominance. How do you think about
these providers' reliance on
a tool that could eventually shut them off?
There are two futures
to code. In one future, you've got
Anthropic as a monopoly,
and in another future
you have
let's call it an oligopoly
or maybe
even a bit more of a
market of these
coding models. And they're just very
different futures. And I think when you answer
this question you have to consider both of these. I will say the timing of this conversation
you and I are having right now is like pretty soon after Claude 4 launched. And that's like a
major model launch. And these models are so episodic. Every time one launches, everybody's like,
it's the future. Everything's going to happen. Like remember like the whole jibbley opening eye launch
and we're like, oh, image is going to change forever. And then it comes. We're excited and then it
kind of, you know, passes. And maybe that'll happen here. Maybe that won't. I don't know. But like
for sure, like our perception is colored by that launch. So let's consider both of these.
I'm going to consider the first one. So historically, models don't really keep much of an
advantage because they're so easy to distill. And so even in the last week we have seen
launches of models, you know, Qwen and, I forget, Kimi, that came out. And they're great. And people
like them and they adopt them. And in that world where you continue to have new models from
different providers, you know, I would never count out Google.
Their coding models are fantastic.
The rumor is that GPT-5 coding is going to be great.
So in this world where you've got lots of models coming out from lots of providers,
you need to have a consumption layer that's independent, right?
And so then all of these companies are going to add that, that, you know,
that consumption layer value, like, for example, to non-technical users or to Python users
or to professional coders or whatever it is, and that's going to be a very
healthy layer.
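For context on "easy to distill": distillation trains a cheaper student model to imitate a stronger teacher's output distribution, which is part of why model leads erode. Here is a minimal sketch in PyTorch; the teacher, student, and data are toy stand-ins, not any real lab's setup:

```python
# Minimal knowledge-distillation sketch (all models and data are hypothetical toys).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then push the student toward the teacher.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 as in the standard distillation recipe.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

teacher = torch.nn.Linear(128, 1000)  # stand-in for a stronger model's output head
student = torch.nn.Linear(128, 1000)  # smaller, cheaper model being trained
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):
    x = torch.randn(32, 128)          # stand-in for embedded prompts
    with torch.no_grad():
        t_logits = teacher(x)         # the teacher's outputs become the labels
    loss = distillation_loss(student(x), t_logits)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch: nothing about the teacher is needed except its outputs, which is why even an API-gated frontier model leaks much of its advantage.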
The other futures, let's assume that Anthropic is just a monopoly on coding models.
And in that case, you have what you normally have in these situations is they will decide
kind of where it's not profitable for them to enter or will change their business model.
Like maybe they're like, listen, we want to have the consumption layer, but we're never
going to be like an app dev tool company.
It's just it's a different sales motion, a different sales team.
And nobody knows where that stops, but they will put pressure on anybody that they view in
their core focus and they will do whatever that they can to either capture that margin
or just capture that market share. I just think it's just the wrong time to have this
conversation right after a major model launch. Because like I said, these models are so
episodic and we always think, like we always assume every time a model launches it's going to be a
monopoly and it just really hasn't been the case. Going to your zero-sum thinking, if you were to put
a bet on which future is more likely, which
future do you think is more likely?
Oligopoly.
This is how the cloud, well, this is how the cloud played.
I think probably the best
analog we have is the cloud,
right? You know, the other
companies that are behind models
can subsidize these things arbitrarily.
I think about Gemini. And they don't
have to do this in a way
where they have the same economics as
an independent company.
And so if you look at how the cloud, remember the cloud,
AWS was like 70 or 80%
market share
early on. Nobody thought anyone could ever catch up to them.
You know, they were the massive market leaders that created the category.
I mean, they had way more dominance than Anthropic has now.
And Microsoft and Google are like, you know, that's an important big market we have to be in it.
And they just basically spun their way into it.
And then you ended up with an oligopoly on the clouds.
I see no reason.
I mean, Gemini 2.5 is a great model.
It's a great model.
And if you actually look at it, you know, on the price performance, I would say in many use cases,
it's the one that I actually use as my standard model.
It's better than Anthropic for some use cases,
if you actually take into account price performance.
And Google could arbitrarily subsidize that too.
Never count out OpenAI.
I mean, they started the party.
They haven't had a major model release in a while,
certainly around code,
so that's going to show up.
And so I just feel like, you know,
the players, the money behind the players,
the fact that these models distill,
this winds up in an oligopoly.
But I mean, I don't know.
That's just my guess.
To what extent do you think the large model providers in 10 years' time have already been created, or are they yet to be founded?
I think that you end up with models with different flavors, and there's going to be a lot of new flavor models that will come out.
You know, like, we haven't even... you know, Mira and Ilya are out there creating models.
I mean, you've got these very legit teams that were some of the pioneers,
you know, who are just starting up models for the sciences.
And as you get more into kind of RL territory, these models really get a certain flavor.
They don't generalize nearly as much.
And so, like, that's going to naturally from a technical perspective, fragment the models.
And so I would say the core base model for, like, language, search, and code.
I mean, I think even code, actually, it's still so early.
I mean, it's very, very early in the super cycle.
In previous super cycles, remember, it took two or three generations for the winners to emerge.
I mean, Google was third generation search.
Facebook was third generation social networking.
Remember, there was Myspace, and there was Friendster before that.
And so I think there's a lot of change.
There's a lot of change to come.
But I do think that both Anthropic and OpenAI have done a remarkable job,
remarkable, with brand independence and market share,
and so I suspect they'll continue to be stalwarts
in the industry.
Are you in either of them?
We're investors in OpenAI, yeah.
Got you. Okay.
My question to you is fundamentally,
there's many, but do you think models
are fundamentally good investments for venture firms?
When you look at employee stock compensation
and the dilution that comes from it
and then the dilutive nature of the businesses,
Yeah.
It's a hard sell.
Okay, so if there's one thing I've learned, honestly, for anybody that's listening to this,
this will be worth your time.
There is no one way to think of AI, and there is no, like, one way to think about models.
And the models themselves are entirely different businesses, depending on how you talk about
the models.
So to even answer that question, we have to tease apart what you mean by model.
So, for example, if you look at the diffusion models, like, say, ElevenLabs, Midjourney,
Black Forest Labs, Ideogram, these are wonderful businesses that have great economics because
the models are smaller. The ecosystem isn't subsidized in the same way, right? Like, Google subsidizes
language and code and video, but not speech, right? And so from an investor's standpoint, these are clearly
great investments, you know, if you just look at the metrics alone. On the other hand,
the frontier language space, it's much more complicated because there's so much
subsidization, right? You have Meta and Google and a bunch of Chinese players that are
entering it. So for a subset of the players, and this is why it's a tricky question, for a
subset of the players, you're like, yeah, clearly, these are the fastest-growing companies we've
ever seen. There's tons of value. These are very valuable entities, right? You know,
Anthropic, OpenAI.
But at the same time, even three years in,
there have already been a number of companies
that have had to exit early.
And so I would say it's kind of a high-stakes game
where the winners really win,
but it requires a lot of capital to enter the game.
And if you're not in one of the leaders,
like that capital is forfeit.
We do a show every week with Rory O'Driscoll and Jason Lemkin
and Rory very aptly, I think, just said,
listen, with the transition to AI,
every investor's just accepted a willingness
to go massively up
the risk curve on investing.
Do you agree with that?
Well, I think it's the requirement of the game.
It's like these are very capital-intensive companies to build.
You know, they have to get the capital from somewhere.
They're also the fastest-growing companies.
And so, you know, for the winners, it's justified.
And so I think it's not that investors
are willing to go up.
I mean, we'd be very happy not to.
I mean, I know you would, right?
And it would be great to have great returns with low risk.
But the nature of the system
and the game which we're playing requires it.
And this, by the way, this is the dissonance in all of this.
It's just so important to call out,
which is, on one hand, you do have these great businesses
that are very fast growing.
And zero-sum thinking has been tremendously wrong.
I mean, Nvidia is continuing to grow in value,
the hosting providers,
which everybody wrote off
as being kind of a
non-defensible business,
continue to grow in value,
the model companies
which I can't tell you
how many investors
wrote off the models
I mean this
this question has been around
for three years
they continue to grow in value
so every layer
of the stack continues
to grow in value
so on one hand
you're like
it's all working
you should be in the leaders
in every
layer of the stack
on the other hand
we've seen tons of
wipeouts already
for the non-leaders
and so it's almost
this bipolar
or paradoxical situation
where you kind of have to play
but it's very, very high risk.
And if you don't play,
I mean, you're kind of missing
one of the fastest growths in value
that we've seen in, what, 20 years?
Do you think you see the concentration of value
to one or two players across markets
in every market?
Whether you look at voice,
it's, you know, obviously ElevenLabs;
whether you look at coding, it's kind of Replit and Lovable
and OpenAI and Anthropic.
This is such a great question.
So here's one thesis.
It's so early we don't know
and maybe in a month
all this gets proven wrong.
But we actually talk about this a lot
internally and here's one thesis
and this is the one that I'm attached to
which is these markets are so large
and they're growing so fast
we're actually seeing brand effects take place
and we haven't seen that since the internet.
By brand effects I mean
if you become the household name
you will get the adoption
because it just does not require a lot of
education. It does not require a lot of
competitive discussion
or competitive
positioning in the field.
I would say, for many of these
models, I mean, you know,
is one better than the other? Yeah, maybe, but they're pretty
close, but like people know ChatGPT.
It's like it's a household name. My mom knows
ChatGPT.
You know, people...
Crazy. Honestly, why did I do Lovable?
For the exact same reason that ChatGPT
wins. I thought it was the consumer brand that would
win. A hundred percent. And I just, and I
just think these markets are so large, brand effects work. I mean, let's talk about Midjourney.
Midjourney was the first that got above the quality bar. It's taken zero investment
from institutions. It's still the market leader. And it continues to do great. And this is,
meanwhile, a bunch of other people have entered the market. And so I do think it's not unreasonable
to assume that these markets are very large. Leaders are going to have brand monopolies and
brand modes, and they'll be able to maintain them until things slow down.
And in general, I've found markets do this, which is when markets are expanding,
so markets tend to expand and then contract, right?
Think about cloud, right?
It was kind of like this funny thing, and it became very massive, and then, of course,
it slows down.
When it slows down, then you have the consolidation, and then, you know, competitive dynamics
come in.
I mean, we're clearly in a massive market expansion phase.
It's just very clearly the case.
And in which case, the leaders are going to continue to have,
you know, a distribution advantage just through brand recognition.
When does that tail off or does it not tail off?
When does the importance of brand and brand recognition dwindle
and product differentiation or product quality take over?
I mean, I think it's as soon as the market growth slows down.
You know, I mean, again, let's take cloud as an example where...
Is it actually growth of markets, sorry, I'm so sorry to interrupt you,
but is it market growth
or actually just consumer intrigue? Which is, there's a lot of people who want to try building a website on Replit or Lovable or Bolt or any of them.
There's a lot of people who want to try voice with 11 Labs.
To what extent is it market intrigue versus the expansion of market?
Well, I just think the expansion of market provides the dynamic so that you don't saturate the user with competing messages, right?
I mean, the idea of market expansion is the frontier continues to expand.
And the first thing the frontier hears is the household names.
And so the household names win.
And so I just think that that's a natural artifact of expansion.
As soon as like the expansion slows, then that frontier is going to hear both names.
And then all of a sudden now you're in a discussion of which one to use and not to use.
And again, I think like for the longest time when the cloud market was expanding, everybody knew AWS.
It was the leader.
It was 70, 80 percent market share.
And then as soon as that growth slowed down,
then all of a sudden market share started to shift dramatically
and it just wasn't obvious.
Do you do GCP? Do you do Azure, etc.?
But I would say that's less an artifact of the fact
that Google, Microsoft decided to enter the game
and much more that the market growth itself started to slow down.
So we see market growth slow down
and then we see the dispersion of value across players more so.
That's right.
So the market slows down.
And once that happens, the frontier, it becomes more saturated, right?
Just because we're not adding people as much.
And so they will get more of the educated message.
They'll start making more decisions.
And you can have more of a conversation.
Like, of course, Anthropic would love to have the same brand as ChatGPT, as a household name.
But how do you reach that frontier, you know, if it's growing that fast?
It's just, it's operationally tough to do.
Kind of the only way to do it is just through
brand recognition, which is kind of this word-of-mouthy type thing. It's like on every podcast
and, you know, the friends and whatever. And so I do think, I do think we're seeing brand effects
happen now. And we saw these in the early internet. The brand leader tends to get 80% of the
market. It just tends to break out Pareto for a while. And then over time, it'll slow down and
these things even out based more on product differentiation. How do you factor that into your
thinking when investing today?
Well, you just try to invest in the leader.
And it's worth paying up for the leader, honestly.
I mean, it's, you know, so I think for me, I ask two questions.
Question number one is like, for the area that it's focused on, is it the leader?
If it is, it's definitely worth paying up.
And then the second one is the story actually has been that in a competitive space,
almost everybody just found kind of a new nichey white space.
So let's just take the example of Open AI.
I mean, the opening I was the first to code, right?
With GitHub co-pilot, I mean, they provided the weights, as far as I know, and they lost that.
And they were first to Image with Dali, and they lost that.
And they were the first to video with SORA, and as far as I can tell, they lost that.
And yet, there's still the massively dominant player in language and continue to be so and will be so.
And arguably, that was the right thing for them, because that's by.
far the largest market by far.
And so Open AI acted totally rationally
and has the largest market.
But that gave the ability
for Midjourney to take image,
or BFL to take image.
You know, Google seems to have grabbed
video with Veo 3.
Code, I mean, on the model side,
Anthropic has, you know,
turned that into, you know, this wonderful
business. And so when markets
expand, not only do you have these brand
effects that we were talking about, they also
tend to fracture a bunch. And what
seems to have been a sub-market will emerge as a leading market.
And you even see this kind of on the image side, right?
You've got a bunch of viable image players that focus on different things, right?
Like Ideogram is great for designers, a professional design community.
BFL is the open source community, you know, especially for developers that use these
things in products.
And then Midjourney is for, you know, more of the fantasy, like, you know, also professional
designers, but it's a very stylized, kind of opinionated view,
and all of these are independent, you know, viable companies.
So I think we're going to see fragmentation for quite a while
before we see consolidation.
The show is successful because I'm very open with my troubles, and
I need your advice.
You know Abridge in the US.
I'm not sure if you're in it, but I'm sure you know it.
Very simple.
There's a European player that does like medical transcription for nurses.
They went from 1 to 8 million in a year,
and we're looking at leading their A.
and I'm thinking exactly the same thing:
you're going up against Abridge
because you're going to need to compete in the U.S.
Is this going to be a big business?
Is that a losing game
where you are a European competitor?
This is a great question.
So another very interesting thing
that we haven't seen in a very long time
is we do have geographic biases
showing up with AI
and the regulatory environments
are quite balkanized.
There's language and cultural biases
that are also balkanized.
And so we're actually seeing a lot of regional players show up.
And so I think it's very legitimate.
Now, the thesis cannot be European Company X wins the American market.
But I promise when it comes to AI, the European market is large enough.
I promise that.
And so I think a very legit thesis is, you know, this becomes a regional player in Europe
and then maybe a portion of the U.S. market.
Can I ask you, a lot of people denigrate these businesses that we've discussed because of their margins,
that they're simply pass-through funnels to the large language models.
Do you think that is something that changes over time
and it's the same for all great businesses?
Uber started off with shit margins, now they have better margins.
I just don't buy that these are endemic to the business model.
This is certainly not my experience at all.
And so there's always this question.
If you're a founder and you get access to relatively cheap private capital
and you can do a trade-off between margins and distributions
and it's land grab time, what would you do?
And the argument is the incremental user is
someone you can monetize forever down the road,
and if you don't get that user during the land grab,
you can never monetize them.
The rational business decision
is to sacrifice margin for distribution.
It's just the rational business decision,
and we've seen this forever.
I mean, hell, the web wasn't even monetized, right?
Literally.
I mean, like, this time we can actually monetize these things.
Forget, like,
you know, break-even or negative margins.
It was literally massively negative
because we didn't even have a business model
until the advertisements came up.
So this is like the most rational thing
that markets have been doing,
at least tech markets forever.
And it's no different this time with AI.
I do think there's a question of,
okay, so if you do want to then turn on margins,
how do you do it, right?
And then you can, of course,
you'll either have to build a traditional moat,
a two-sided marketplace,
a brand moat, the long tail kind of integration and domain understanding. So, for example,
let's say you're a healthcare company: if they really crack the European market and they understand
all the regulation, like, Anthropic's not going to take the time to do that. Or, you know,
so there's clearly pricing power you have on that side. Or you have to do actual technical differentiation.
One thing that we're learning is in this phase of model scaling,
a lot of the approaches to scaling don't generalize.
So if I want to be much better at coding,
I may not be so good at something else.
This gives a ton of room for the application developers
to build their own models that service certain areas
that the large models just aren't focused on.
And so I think there's even a ton of room at the technical level to differentiate.
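To make "build their own models" concrete, here is a minimal, hypothetical sketch of fine-tuning a small model on narrow domain data in PyTorch. The model and the "domain data" are toy placeholders; in practice you would start from a small open-weights model and real tokenized text:

```python
# Toy sketch of domain fine-tuning (all names, shapes, and data are hypothetical).
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for a small pretrained language model's backbone and head.
model = torch.nn.Sequential(
    torch.nn.Embedding(5000, 64),      # token embeddings
    torch.nn.Flatten(),                # (batch, 16, 64) -> (batch, 1024)
    torch.nn.Linear(16 * 64, 5000),    # next-token prediction head
)

# Stand-in for tokenized domain text, e.g. medical transcripts.
inputs = torch.randint(0, 5000, (256, 16))   # 256 sequences of 16 tokens
targets = torch.randint(0, 5000, (256,))     # next-token labels
loader = DataLoader(TensorDataset(inputs, targets), batch_size=32, shuffle=True)

opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
for epoch in range(3):                        # a few passes over the niche data
    for x, y in loader:
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The design point is the one Casado makes: the pipeline can be small and cheap precisely because it targets an area the frontier labs aren't optimizing for.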
So my sense is, and I mean, I don't want to talk too much about
my portfolio and what I see,
just because there are sensitivities
around the numbers,
but in my experience, most of these companies
that have, let's say, break-even margins,
it's like a board-level,
specific choice
to prioritize distribution,
not because this is systemically
something they have to do.
We mentioned sovereignty there.
I am intrigued how you think about safety,
and safety around AI and models.
You've had Vinod Khosla,
who'd be like, we have to lock this down.
If this was not locked down, it would be like nuclear secrets being handed out.
I remember then Marc came in and was like, fuck that, no way.
How do you feel about the future of safety within this landscape?
I mean, it's crazy to have VCs talking against open source, right?
I mean, Founders Fund did too.
And for me, it's just wild when, you know,
pro-innovation sectors of the economy, academia too,
have decided that, like, open, transparent innovation
is somehow an antithesis of safety.
And I know that's not what you asked,
but, like, I just want to make the point:
we were in very bizarre land for a while,
and it seems like we're coming out of that now.
So let me, let me just draw a bit of a caricature.
You think we're coming out of that?
I think we're moving more and more into that.
Zuck and Alex
are going to turn Llama fully closed.
Great, so let's go back to that in just one second. I'm going to answer the question
that you, because you actually asked, like, a great question on how I view this, and then let's go
to whether we're coming out or not. So how do I think about safety? So, you know, I was actually
very, very close to security during the rise of the internet. You know, I worked for the intelligence
community. I worked for Livermore National Labs. And then, you know, when I did my PhD, like,
I taught, like, a cybersecurity policy course.
And the thing with the internet is you had these very specific examples of, of new types of
attacks that, like, impacted nation states, like critical infrastructure would go down.
You know, you'd have things like the Morris Worm, like, you know, I mean, you had these really
significant examples.
And that kind of kicked off
this large discussion on how you handle it. And it was so significant at the time that
at the nation state level, you know, we started thinking that we have to actually change our
doctrine. You know, we were kind of in this Cold War era of mutually assured destruction. We had to
change it to this notion of, like, defense asymmetry, which meant the more we relied on these things,
the more vulnerable we were, right, as opposed to, like, a country that didn't rely on them, because
you could be attacked. And then, of course, kind of the
whole terrorist information warfare stuff.
And so the implications were so absolute and you had so many proof points and you could
articulate them incredibly well.
And so if you look at the AI stuff, I mean, for every computer system, you have security
considerations.
But we've got this 30-, 40-year, very robust discourse around this that we can draw from
and use.
And the thing that I don't understand is how, all of a sudden,
we've decided that these are not computer systems.
They don't obey the same laws.
And we have to kind of throw out everything that we've learned and kind of like revisit
the discourse, even though we don't even have the same proof points.
I mean, like, nobody can make a strong argument on asymmetry or a need to shift doctrine.
And if they can, let's go ahead and have that discussion.
You know, I still have yet to see the dramatic new attack.
It's going to come for sure, but we haven't seen it yet.
And so I just feel like the discourse around this
is not in line with the reality.
It's not in line with historical precedents.
And so we should absolutely take these things seriously,
but we should draw on the information that we've learned from in the past
and the approaches we've taken in the past.
The last thing I'll say on it is the biggest difference this time
is, in the past, the people who created the technology
were kind of pro-tech, and the people that were, like, selling security solutions
were, like, the fear mongers, right?
So you'd have somebody create like the internet
and they're like, this is safe
and it's great for everybody,
but then you'd have somebody who created a firewall
be like, oh, the internet's dangerous,
every sociopath is your next-door neighbor.
So you had both the same voices,
but in two different bodies based on interests.
The interesting thing this time
is they're in the same body.
So the person that's creating the thing
is also like, oh, this thing is very dangerous.
I don't recall the last time we had something like that,
but it's created a dynamic
that's just been very confusing for everyone.
Do you not think open source increases the opportunity set
for hostile actors like China and Russia to harm us?
I mean, I think it's tautologically true.
Like, I think tautologically you can say,
do you believe computers and the availability of computers
increase their ability to harm us?
And I would say absolutely computers and availability of computers do.
But very specifically, open source over closed source.
So I think that right now open source is most dangerous because China is better at it than we are.
And as a result of that, we're seeing a proliferation of Chinese open source models everywhere.
Now, unfortunately, we don't have control over Chinese regulation.
And so I would say the answer is yes, because of China
and not because of us,
and the right way for us to respond
is to fuel our open source efforts against that.
So let me just be very specific.
So I think Chinese open source
can be a national security issue, for sure.
And any of this software that's produced by a nation state
that we view quasi-adversarially,
the way that we combat that is we also are incredibly open
and we also do a proliferation of
technology. What do you think we can learn from China regulatory-wise that would enable us to have
the same or better open source ecosystem slash environment? I mean, to me, this is, you know,
the United States has a long history of being pro-innovation, pro-innovation for national security,
pro-innovation for national defense. I think we should be funding this stuff like crazy. I think we
should get the national labs involved. We should get academia involved. We should make this a national
priority, just like China does, and we should just, you know, a full-throated endorsement of all of
this stuff. I think we should do closed stuff. I think we should do open stuff. And we've done this
forever. You know, my first job out of college, this is, you know, 1999, was working at Lawrence Livermore
National Labs on the ASCI program. And what were we doing then? We were, I mean, the broad program
was simulating nuclear weapons. I mean, this is what it was. And a lot of the concerns we have
today were concerns we had then around compute.
I mean, we actually stopped Saddam Hussein
from, like, importing
PlayStations because we were worried about, you know,
using them for simulation. We'd put export controls
on the hardware.
And we'd say the same things. Like, oh, you know,
computers out there,
they're going to
enable, you know,
the enemies
and all sorts of stuff. And this is like nuclear
weapons. This isn't like some abstract
AI thing. This is like actual
on the ground weapons.
The posture that we took at the time,
and the conclusion, is we're just going to be the leaders
in all of this stuff.
And we funded academia
and we funded the labs and we won.
And we were able to control
like the technical discourse
of the planet going forward.
And this time, instead we want to put our head
in the sand and let somebody else do it.
So like they're going to learn from our, you know,
our success and somehow, you know, we're not.
Do Trump's cuts to universities' research
labs not impact your ability to do what you just said? Are you not actively going against
what you should be doing? I am very pro-investing in academia and in the national labs.
I think there's always a political shift in money, depending on what they view as in line
with administration politics.
Like, I've, I still, I can't tell you, you know, I did my PhD at Stanford.
I've done a bunch of NSF grants.
I don't remember ever somebody saying, we like indirect costs.
Every researcher, every professor, every single one was like indirect costs are terrible.
Obama tried to get rid of indirect costs.
He was like, you know what?
Universities have a tax-exempt status. So why don't we just have them, you know, spend
5% of their endowments like any other tax-exempt organization? And, you know, that will cover
a lot of indirect costs. And he couldn't get it through. So this is a bi-partisan issue that is
longstanding. And I mean, I would say that like a change is needed. Now, to the
extent that, you know, I think these things are very hard to implement. But I would say
concretely, yes, we should invest in these things. Yes, we need a shift in how funding happens.
I do think that, like, indirect costs have gotten way out of hand. And until it was like Trump
doing it, everybody that I know in academia totally agreed. But yes, of course, change and shifts in
funding will be disruptive. And so I think all things are true. I just don't want
to reduce this to a simple "Trump does bad things," because I don't think that is the case,
or, you know, "funding science is arbitrarily good," because I don't think that's the case either.
I mean, I definitely think we should fund as much or more.
I definitely think that you shift in funding and change to the system is needed.
And, you know, the right path through that is complex.
I don't quite know it.
You very kindly said that I asked a good question on the reversion back to closed source.
And we mentioned Alex joining Meta, what it meant for Llama.
I'd say, quite zero-sum-wise, to your point,
we're clearly seeing a movement back towards closed and away from open.
How do you see that?
And do you disagree with my statement on the transition?
No, I think that's, so I agree on the ground 100%
that I think we're seeing a movement away from open source,
but the rhetoric around open source has shifted, right?
I mean, we just had the, what is the name of the bill that just came out?
I mean, it's like the American AI policy and recommendations,
and it's a full-throated endorsement for open source.
So I think discourse-wise, there's more support for open-source than ever before.
I think ecosystem-wise, I think you're right.
I do think it's quite likely that we're going to see less open-source.
Now, listen, Open AI has said that they're going to open-source.
That would be wonderful.
And if they do that, I think that would be very, very positive.
Do you think they will?
I just, I have no idea.
I hope so.
It'd be very rational.
I mean, maybe here's the thing: we say open source, but it's
such a misnomer when it comes to AI. I mean, the standard model of open sourcing
AI is you open source the smaller model and you keep the more capable model closed source.
And it's a way that you get distribution and brand recognition, but you don't actually erode
your business. This has been very, very successful as a business model. And unlike actual
software open source, just because you release your model doesn't mean somebody can replicate it.
Like to replicate it, you'd have to like recreate the data pipeline and the training pipeline.
And so, you know, I think that there's just like a lot of concern of investing, you know,
hundreds of millions of dollars or billions of dollars to train something and then just
giving all of that away.
But I feel very confident that the business justification is there and behavior will always
follow business.
And we're going to continue to see open source be a large part of the ecosystem.
And remember, historically, open source has only been about 20% of the total market value.
I would say it's much higher than that for AI.
So in a way, we're doing better
than software has historically.
What did you believe about the AI landscape
that you now no longer believe?
We've touched on so many different elements.
My mindsets have changed around so many.
I mean, the one for me that I've just consistently got wrong
is just how fast these coding models advance.
And this is probably just sunk cost fallacy.
My entire life, I've just been this nerdy programmer.
I've been programming since the 90s.
I mean, it's like it's my happy place.
And I just never thought that they would advance
to the level that they have.
I mean, I still develop most
evenings, and it's just, you know, instead of
watching a sitcom, I just goof off
and mostly writing like old video games or whatever
just for fun. Like, it's silly
stuff. And I'm already at the
point that I just couldn't
go back
to working without them,
and I've spent, you know, 30 years
without them. And it's just
their ability to
offload all of the shit I didn't
want to learn is remarkable.
The thing that kept me away from code
for a while, where I would kind of
dabble with it and then drop it, is you have to
just learn all of this, like,
all these weird frameworks
and like, it's, none of the
knowledge is foundational. It's just like some
fucking random dev came up with some weird way
to do something and you've got to kind of
learn, you know, some poor
design decision to do it and none of it made
any fucking sense. And it just felt like
you're wasting your brain space
on poor decisions
made by random open source developers.
And that was programming in the past.
So let me just put it in context.
In the late 90s, programming was you download your IDE, you sit down at your computer,
you program something, and then it would turn into a binary, and then you'd run that binary.
And so, like, you could, like, really get a lot done just by sitting down and writing code.
You know, by, I would say, like, 2015 or so, you know, writing something, it's like,
you'd have to, like, fucking download,
like, 50 million packages, and, like, to run it you've got to run some stupid dev server, and to, like,
actually have anybody else use it you've got to, like, learn how to host it, and, you know, it was a
bunch of libraries that were, like, dealing with incompatibilities, or all this is a weird
fucking platform. So, like, 90% of your time had nothing to do with code. Like, 90% of your time
was just dealing with all the environment platform bullshit. And so what's so nice now is you can
just focus on your code. So, like, now I literally just, I mean, I use Cursor, and
I just have the AI tell me how to host the thing and tell me what package to use
and whatever. And I just strictly focus on what I want in the logic. And so it's almost like
it's brought coding back. And you can see this across the industry. Like all of, like, I've got,
I mean, I grew up in the industry. I know a bunch of very strong developers that have been
developing for a very long time that have basically stopped. They're like running companies now or
whatever. And they're all back to programming at night. And I, I really think that, you know how
like there's like the adage of like, I don't know, like the old man that goes into the garage
and like makes the train set for like nostalgic reasons. I think like the modern version of it
is these old systems programmers like, you know, vibe coding at night just because it's become
pleasant again. And so I know you asked about the thing that's kind of surprised me the most,
but I really think it's such a marvel what these coding models are able to do. And they add very
real value.
Do you think they make 1x engineers 10x, or 10x engineers 100x? "10x engineers 100x" would be what I'd said.
But I don't actually think it's that. I think they make 10x engineers 2x. I would say every company I work with uses Cursor, right? And then if I actually look at it, has that increased the velocity of the products coming out? I don't think that much, just because so much...
So what's changing then? Because dev productivity is going up. So is the quality
of product going up if the product
release cadence isn't?
I just think the things that are hard
remain really hard.
And so
you know, like, let's just talk about
like creating a model.
So
let's say I'm creating a new model,
a new frontier model, right? And to create
that new frontier model, I've got to collect data
and I've got to run a pipeline and I've got
to, like, sit with my, you know,
my Jupyter notebook and I've got to, like, look at
the loss curves, I've got to rerun it.
And like, that's just a lot of kind of experimentation and so forth.
You know, there's no coding model that's going to do that for you.
But if I wanted to create tests or a test suite or, you know, or visualization or write
documentation, it's actually really good at that.
And so I would say that probably in the long run, having more robust, maintainable code
bases with less bugs is just as likely to be the impact as feature velocity.
Because, you know, in startups, again, I'm an infra, I'm an infra guy.
This is probably different for the apps.
Like, I've always thought apps had no technology to begin with.
Like, every time I look at vertical SaaS, I'm like, why would I even care about the technical team?
It's fucking CRUD, man.
It's like, CRUD is create, you know, read, update, delete.
It's like they all do the same thing.
They all just kind of look like a web app.
They're all, like, who cares about the technology?
The technology is simple.
These are all these kind of go-to-market things and whatever.
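For reference, "CRUD" is the create/read/update/delete data lifecycle behind a typical vertical SaaS app. A minimal Python sketch, with a hypothetical in-memory store standing in for the database:

```python
# Minimal CRUD sketch over a hypothetical in-memory store.
records: dict[int, dict] = {}
_next_id = 1

def create(data: dict) -> int:
    """Create: insert a record, return its id."""
    global _next_id
    rid, _next_id = _next_id, _next_id + 1
    records[rid] = dict(data)
    return rid

def read(rid: int) -> dict | None:
    """Read: fetch a record by id (None if missing)."""
    return records.get(rid)

def update(rid: int, changes: dict) -> bool:
    """Update: merge changes into an existing record."""
    if rid not in records:
        return False
    records[rid].update(changes)
    return True

def delete(rid: int) -> bool:
    """Delete: remove a record by id."""
    return records.pop(rid, None) is not None

# The lifecycle most vertical SaaS wraps in a web UI:
rid = create({"patient": "example", "status": "new"})
update(rid, {"status": "reviewed"})
print(read(rid))
delete(rid)
```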
But infrastructure is different.
Infrastructure is like very real trade-offs in the design space
that only someone who understands computer science would know.
So for infrastructure companies,
I think it's quite unlikely that AI will really help speed that up
because it comes down to something that the developer has to decide on,
has to articulate the trade-offs.
But I do think it could really help with the development process
so you have less bugs and things like that.
And so I actually view it more as, like, a more robust development methodology
than necessarily something that speeds up the core product.
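A toy illustration of the workflow described above: run training, log the loss curve, inspect it, and decide what to rerun. The model and data are hypothetical stand-ins; the inspect-and-decide step at the end is the judgment call no coding model makes for you:

```python
# Toy training run with a logged loss curve (hypothetical model and data).
import torch

model = torch.nn.Linear(64, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(512, 64)
y = x.sum(dim=1, keepdim=True)        # toy regression target
loss_curve = []

for step in range(200):
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    loss_curve.append(loss.item())    # the curve you sit and stare at

# The "sit with the notebook" part: eyeball the tail of the curve, then
# decide whether to change the learning rate, the data, etc., and rerun.
print(loss_curve[::20])               # a flat or rising tail means: adjust and rerun
```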
Given the kind of dev productivity changes that occur because of these tools,
how does that impact defensibility within companies today?
If time to copy, and Micha from Fiverr said this on the show,
he said time to copy has basically been reduced to nothing.
To what extent does that change
defensibility for companies?
I mean, I still think we should just go back to the split between apps and infrastructure.
For apps, like, how long does it take to copy it anyways?
I mean, you know that there are entire companies that their stated purpose is just to copy
another company in the app space.
It's just so easy to do.
I mean, there is no core technology for a random app.
I mean, there's no, like, differentiable technology for a random app.
Let's say that you're creating, I don't know, some
healthcare vertical SaaS thing.
Like, you could contract out the actual app, and you have been able to forever.
I mean, the business is actually the long tail of understanding that domain.
So I just don't think it changes that paradigm at all.
And then when it comes to core infrastructure, which is what I focus on, things like, think
like databases, foundation models, there's no way that right now models can just copy them.
And the reason is, it's not that the models aren't capable of doing
the technology. It's just that there is a long tail of understanding of the tradeoffs for the
particular use case and domain. And because it's a new market often, then you understand that
through market exploration. And so I just don't feel... I think these models really help
with the software development process. For, you know, non-deeply technical areas like apps,
sure, they can help speed it up. But over time, all of these reduce to a long-tail understanding
of the market. I mean, Aaron Levie said it so beautifully. I mean, do you know what the average,
what do you think the average PR is, pull request is for a production code base? Like, how many,
how many lines of code is the average change that gets accepted? Would you guess for like some
production enterprise app? I have no idea. It's two. Yeah, it's very, very small. It's
actually two, but let's say it's 12, right? And what do those two or
12 lines signify? They signify probably some learning in the field or some
understanding of what is needed. And so the long tail, the thing that's the hard thing is to
understand the specific deployment environment of market you're going to. That's the hard thing.
The hard thing isn't the two lines of code. That's actually quite easy. And so in many ways,
I would say, you know, the AI is getting rid of the middle, right? So very new computer science,
like, models don't know how to do it, just because nobody's done it before, and that's kind of
pushing the state of the art.
And then in the app space, all of the hard stuff is the business anyways, right?
And this is why, like, the changes are very small, and, like, you learn everything to go to
market, which the models don't know just because you're exploring a new market.
And it's all the bullshit in the middle that they're helping us with.
And so, you know, for me, it's just kind of net accretive.
Do you think that CS holds the same weight as a study and education discipline that
it always did, and you would always recommend it? Or does that change in a world that's
fundamentally more democratized in terms of creation, like we discussed? I mean, I feel very strongly that,
like, if you care about building systems out of computers, you have to understand how they work.
What do you think we do today, Martin, that we will look back on in five or ten years' time and go,
I can't believe we did that? It could be prompting, it could be choosing the model that we're working with,
and I find it ridiculous that we are supposed to choose which model,
like Grok 3, Grok 4, Grok 5, Grok shopping, Grok weather.
What the fuck?
Just figure it out.
Well, I'm just taking it from a programmer's view.
I mean, I just think hopefully we'll just stop worrying about frameworks altogether.
And maybe even languages, maybe even like a proto-language evolves.
And we can just focus on logic and fundamental trade-offs.
I mean, we've gotten in this very backward
world where these days programmers think about all the non-fundamental stuff and they don't think
about the fundamental stuff. Let me give you an example. So I always worry, this is going to be
this weird philosophical rant, but I always worried, you know, while I was doing grad school and
when I was doing research, that we kind of entered a space where there's so much research that
has been done over the years that you never know if you're doing something new. Like you just
couldn't do the literature search. There's so much. And so like the entire industry just spent all of
its time redoing research.
You know, it's like you're, like, cleaning a room and you're trying
to, like, sweep out the dust.
But rather than sweep it out the door, you're just kind of moving it.
Like, you'd move it to the bed or you move it to the wall.
And then, like, that's all you do is just kind of sweep the dust around, but you never
actually get it out of the house.
That's what research felt like to me.
It was like we're in this mad delusion.
And on top of that, it also felt like many of the most important problems were kind of between
disciplines. And so, like, in order to even solve them, you just have to know too many things,
and we couldn't do that. And so I just felt like there's, like, the entire scientific industrial
establishment was just kind of redoing the same stuff. And so in a way, I think AI has the ability
to pull out of this mass craziness, this mass ineffectiveness, which, A, it's very good at telling
you if you've done it before, right? You know, it's very good at that. It actually knows all the
literature, knows all the history. And it's also very good at tying different disciplines, right? It is an
expert in all of these things. And so I think we've been stuck in this morass. And it's a bit of a
liberator so we can actually focus on the new problems and know we're doing new things. And so I've
got this very optimistic view of where it's pulling us. And so I know it's more of a philosophical
answer to the question that you asked. But in a way, I think it needed to happen to get to the
next level of problems that we need to solve. In terms of like societal implications there,
I mean, the worst question ever is like, oh, the job displacement question.
But I am intrigued
because on the one hand
I see intense job displacement
happening faster than ever,
and then I'm also very aware
that Brad Feld wrote a brilliant post
where he basically said
every single cycle
every time we've always said
oh what are we going to do
calculators what are we going to do
computers what are we going to do
AI now what are we going to do
to what extent does this actually
require the "what are we going to do?"
versus another "for fuck's sake,
don't we see the pattern?"
Yeah. So I'm very sympathetic to concerns around job displacement. And I think we should take them very seriously as a society. Like, I'm in no way libertarian. I think that this is kind of where governments do step in and we do help out. But first we have to understand. And it's actually very unclear. So let me tell you just a quick anecdote. So, you know, my cousins are all pretty, like, I think high-end is the wrong term, but they're pretty established
translators. And they have been for a long time, multiple languages, and, you know, they visited
recently. This is a husband and wife pair. And they're like, listen, like, you know, we have to change
jobs because translation is all going to AI. And I asked, I said, you know, so the jobs are going
away. And they said, well, no, they're shifting. And now, instead, we've got to, like, spot-check
these AIs. And the only way we can hold it up to our standards is if we rewrite the entire thing,
but they won't pay for that.
And I don't, by the way, these are Italian,
so they speak this way, but they're like,
you know, I can't work on something without a soul, right?
And I think that their dilemma is a good microcosm
for the broader dilemma,
which is one thing that's very unique about AI
is that it actually requires today a human handler.
I mean, they're just so unpredictable, you know.
I mean, most of the use cases that we know,
all the monetized use cases have a human on the other side of it, right?
I mean, coding, you've got a professional coder, all the creative stuff.
You've got, you know, somebody like doing all of the creation.
I mean, it's kind of an enabler, it's a tool.
But the nature of what you do does shift.
And that's very different than, for example, electricity where, like, it doesn't require a human.
Like, it's like either you light the fire or, like, there's no fire to light.
And so, you know, I think we as a society need to understand the level of displacement.
We have to understand it.
I think it's very important that we do.
I think these are things that governments should get involved in.
I do just have to turn to your venture investing just before we do a quick fire.
Do you enjoy it as much as you did before?
It is a much faster landscape.
The money is much bigger.
Do you enjoy it as much as you did before?
I love it.
I love it.
They said that they didn't think you enjoyed the administrative work that you now have to do
with the size and scale of Andreessen.
Oh, well, those are two different questions.
I love the investing.
I mean, the investing is great.
It's just the most exciting time in the industry
since the late 90s.
It's great to be part of a super cycle.
I love it.
Actually, no, I love the...
I actually really like the firm building side.
You know, I mean, frankly,
I could do without, you know, endless meetings,
but I've actually been pretty good
at limiting those too
And so, no, no, I think this is actually the most exciting time
to be in the industry and in venture.
I'm not trying to, I'm not trying to bullshit you.
I'm a venture investor too.
I'm with you, and I say the same to our LPs.
Is your price elasticity more on deals
because of the super cycle entry point that we're in,
or less because of the risk or uncertainty level
that we're in?
Philosophically, for me,
philosophically, I just think the market
sets the price.
I just don't have the hubris
to think I can somehow
outsmart the market, or
that, like, a single deal is going to, like, bend to my will.
And so, I mean,
philosophically, how we think about
investing in general is...
Do you pass because of price often?
Price, no. Ownership, yes.
What's the ownership you need?
It all depends
on the fund, the market, the size of the market,
the understanding. Everything comes down to ownership for us, not price.
I mean, you just can't make the fund mechanics work
if you don't get the ownership.
Now, for very, very, very, very, very large markets
that are obviously very large for very large checks,
then we don't care as much.
But that tends to be growth territory anyways.
For early stage investments, you know,
you kind of need to understand what the median outcome is
and you have to be able to
size the median outcome in a way
that at least returns, say, a fifth of the fund
or half of the fund.
Is that not the joy of being at Andreessen?
You can take a 5% ownership on first check
because you can size up into the next
and size up into the next.
Is it not my challenge
that I have to get as much as possible
on the seed or the A?
So the way that I view it is a bit different,
which is I think there's two legit ways
of investing now that have emerged.
One of them is you're very much a specialist
and you've got a special network, special value,
you understand a special, sorry, you understand,
sorry, you understand a special size of the market.
Like, like, you're very, very much a specialist.
And that is kind of how you win deals,
get the ownership, keep the ownership,
and then make your company successful.
The other one is, and I wouldn't say it's like an AUM thing,
but it's like you have all of the products
so that you can be adaptive in the market.
Because, you know, I've been doing this for 10 years.
The strategy that works has shifted this entire time.
Sometimes it's early.
Sometimes it's mid-stage.
Sometimes it's collaborating with growth.
And so if you don't have... honestly, sometimes it's credit. We don't have a credit fund, but I can understand why people do it.
And so the market is competitive and everybody's scrambling for deals.
And if you don't have the different funds or products to offer,
then often that's kind of where people are going to squeeze you out
or get alpha, et cetera.
And so I think that for the game that we play,
it's very, very important that you have all of these funds
and the ability to enter at all stages for exactly that reason.
And so, again, I don't think it's a you, me thing.
I think you play a very different game than we do,
because I do think that on one side, like, you know,
you have to go very specialized, very focused very early,
where for us, you know, we're trying to find out
what is the right time to enter,
to, you know, to get the ownership that we need.
What's the size of fund that you primarily invest out of day to day?
I know you have flexibility.
1.2 billion.
So I run the infrastructure fund, which is a $1.2 billion fund.
So my challenge here is your cost of capital is just so much less than mine.
Your ability to put a larger check in, bluntly, with much more confidence, and that's because I'm investing out of a $275 million Series A fund and a $125 million seed fund. It's just, like, much more meaningful dollars for me than it is for you, which will affect my willingness.
Yeah, well, my challenge is like we have to live with these investments forever,
and conflicts are very, very difficult for us to do.
And so we don't enter very often at the stage that you do for this reason.
I mean this respectfully, everyone chastises Andreessen for their conflicts
and for investing in many conflicting companies.
Do you think that's unfair?
It's so hard to keep your nose clean on this one,
because especially with a shift towards AI,
companies pivot all the time after you invest.
Like, I don't recall, like, intentionally investing in a conflicting company. In fact, I mean, I would say the number one reason we don't... that's not true... one of the top reasons we don't invest in companies is because of conflicts.
I mean, we do it.
I mean, just recently, I can't say the name of the company.
We didn't invest because it was a hard conflict.
And even though, like, by the way, the portfolio company was not doing the thing, it was on the roadmap. And the founder called me, he's like, Martin, you just can't invest in this company.
I said, okay.
So I think we try our best to keep...
How can you just say okay? Sorry, sorry, just to push back on you there.
If it's not on the roadmap,
I'm really sorry, founder.
I have as much faith and conviction in you as possible.
But if it's not on the roadmap,
I'm not having you tell me how to do my job.
So here's my talk track,
and it's evolved over the years.
And I stole this from Chris Dixon,
which is I say, listen,
you have one mortal enemy.
You choose whoever that mortal enemy is,
and whoever it is,
I'm with you,
if we're going to go kill that mortal enemy together,
but you get one.
You don't get an arbitrary number of mortal enemies.
And so in this case, I'm like, listen, is this it?
Is this your one mortal enemy?
And the founder said, yes, this is the one mortal enemy.
I'm like, all right, fuck them.
Let's go kill them.
And that's, and that's it.
That's kind of, now, listen, we have a number of companies where they pivot midstream
and they start competing after we've invested.
It happens all the time.
And we also do have the venture and the growth fund.
And we try to minimize conflicts there, but sometimes
they happen, you know, just very different stage companies, very different teams working on it.
But I would say that we try very, very hard to steer away from conflicts.
Given the nature of, as you said there, the volume of pivots that occur today, given your
entry point, I always advocate wholeheartedly for being 98% founder.
And then you have wonderfully smart people like Elad Gill wholeheartedly advocating for being market-first.
How does the pivot frequency and the experiences you've had impact your prioritization mechanism around where you spend time?
So listen, I don't want to speak for Elad, but that's not my experience working with Elad. And I've done many deals with him. Elad is very, very focused on the founder. I think the one thing I would say is he's very good with founder-market fit. Maybe the best in the industry. I have a huge respect for how Elad invests.
Unpack that. Why and how does he do founder-market fit the best?
He will find a market
that he really likes. And sometimes it's like even a fast follow market, right? Like, you know,
and then he will find who he thinks is a great founder for that market. And so he's very good at
like this kind of boy-band construction based on the market. The primary point I want to make is that, very much, in his investment cycle, the founders have always mattered. He's followed on deals I've done, I've followed on deals he's done, we've done a bunch of deals together. I've never gotten the impression... I mean, I actually always got the impression that the founder is the primary decision once he's chosen the market. So I would say it's a primary concern for him.
When you have misjudged a founder,
what did you not see that you should have seen?
So can I answer, can I answer your previous question? Because you're like, okay, so how do I, how do we think about it?
So we think about it very, very simply, which is the only sin in investing,
and I've sinned so much.
The only sin in investing is missing the winner.
Like, there's no, it's fine to, like, invest in a category that doesn't work.
It's fine to lose money.
But, like, if you choose the wrong company, like, that's not okay.
And listen, it's just so hard to get it right all of the time.
And so the way that we view it is we just look for viable, you know, what are viable spaces?
And it's determined viable because...
Someone said to me the other day, I'm so sorry to interrupt you, that at Andreessen, you get killed for choosing the wrong company but being right about the space. You won't get killed if you were just wrong about a space.
Correct. That's exactly right. Yeah. So the view is, like, there's basically no amount of work you can do to determine if a space is going to work or not. I mean, that's just, you know, that's like weather prediction. But given a set of companies, you can actually do the work to understand which one of those is the best.
Now, we've got it wrong.
Do you think you can?
The question is, can you beat the market with that strategy?
Yes, I think you can beat the market.
No, I do not think that you can unequivocally tell the best.
Can you beat the expectation of the market by running this strategy?
I would say, yes.
Can you specifically pick the winner every time?
Absolutely not.
Clearly not.
When, most poignantly for you, did you pick the market but pick the wrong horse?
I just don't want to, I don't want to call out any specific company.
Fair enough.
When you think about, like, you mentioned sins there, what's a big sin that comes to your mind?
Well, yeah, I mean, I can answer the opposite.
There's a bunch of markets that just haven't really worked, right?
Like, you know, the entire streaming market has been very, very tough.
Like, the data streaming market.
It's just turned out to be a subset of the analytics batch market.
And so, you know, maybe, you know, ClickHouse... Aaron Katz is doing phenomenal. And I'm not an investor, but he's doing phenomenal.
But that may be the one breakout since Confluent.
But, like, that's just been a very, very tough space historically.
Whether you're at the dashboard layer, the transformation layer, the feature store layer... like, there have been entire spaces where we've placed multiple bets, but it just didn't, it just didn't work out. And so many, many, many times we'll invest in a space where just none of them work. You know, I will tell you, there's definitely been companies we've invested in where, at the time, the company was the very, very clear leader, and then
something happens, some macro shift, some, you know, something else happened. And, you know, I think
that's just how the game goes. And you've probably heard this. I mean, the thing with actually
having a strategy like that is if you're trying to scale a venture firm, you just need something
that you can articulate and teach other people.
I just find it hard that if you pick the right market and the wrong horse: bad, Martin. But if you don't pick the right market: fine. To me, some points need to be given for the insightfulness to pick the right market, and some forgiveness shown, because it's fucking hard to pick the horse. I'd almost fire the one who picked the wrong market entirely. Where was your insight, at least?
Yeah, and this is why you run your own venture firm and you can have whatever strategy you want.
Is that not, is that moronic?
No, listen. No, no, no, it's not. No, I just think it's philosophically different on the approach, right? And so I actually don't believe you can predict the future of technology adoption. It's a very tough thing, right? I mean, you don't know what a big company is going to do that can wipe out an entire market. You don't know what innovation will wipe out entire markets. This happens all the time. I mean, you could argue that AI is really invalidating tons of markets, and I don't think anybody could have seen that happening. But if you have, say, 10 companies that have some traction, and you can talk to the founders, you can diligence the teams, you can diligence the market, you can diligence the product, you can diligence the technical approach, I think you can just say something a lot more concrete than, you know, is some future innovation going to wipe out this entire market?
Do you think it's paradoxical or opposing to believe that both AGI will be dominant and
present in a set time period and to at the same time be investing in enterprise?
I don't know. I mean, I would say humans are AGI and we still invest in enterprise SaaS. The problem is everybody somehow thinks that AGI just means, like, unlimited power, and anything I want to disappear in the future disappears. Come on, you're AGI. I'm AGI.
I think, to be honest, Sam Altman decides the definition of what AGI is. So whatever him and Microsoft decide is AGI will be AGI.
Dude, I want to do a quick fire round.
So I say a short statement, you give me your immediate thoughts.
Yeah?
Yeah.
What's one of the most over-hyped AI categories today?
ASI.
What's one of the worst VC takes on AI you've heard recently?
Open source is bad for national security.
What one founder would you back in any category?
Whatever they did, I just want to wire them the money.
Michael Truell.
Hmm. Why, specifically?
I've worked with him for a year. He's just remarkable. He's, uh...
What makes him remarkable?
It's just so rare that I've found a founder who... he has three things. He knows what he wants, he's got an intuition that's impeccable, and he listens incredibly well and gathers information. And it's a very, very potent combination. And then, of course, he's incredibly smart and he's got great product taste.
What's your favorite trait in yourself
that has been most impactful to your own success?
Deep-seated anxiety from being poor?
Seriously, I agree.
I mean, listen, I grew up
like, you name it, food stamps, dirt road,
like, I mean, I come from Montana. So, funny, people hear the name Martin and they're like, oh, he must be so... and then, you know, I was actually born in Spain, so I'm a Spanish citizen, so they're like, you know, he must be some, like, sophisticated European. I'm like, motherfucker, dude, I grew up on a dirt road in Montana. Like, when there was hunting season, my school shut down. Like, I'm a Western country boy. And so, um, you know, listen, I mean, I had a great family. I didn't have any of those hardships; I had a wonderful family, an educated family. And so, like, you know, we kind of muddled our way through. But, you know, you go through that and you see how hard your parents work and whatever. You just don't take anything for granted. And, you know, listen, I sold a company, a very successful outcome, and I could have retired on that day. And I still have not taken a day off; I haven't stopped working since basically forever. Now, listen, I'll take, like, a week off while I have a job, but I've never not had a job in, what, 20 years? It's just...
Did that day feel fucking awesome? Coming from a dirt track and, you know, food stamps, as you said, you can retire today. I know you didn't, but did it feel as good as you thought it would?
You know, it's kind of an interesting thing. No, I mean, no, you know, I mean, it was very bittersweet. I think actually selling companies is very bittersweet for any founder, right?
It's like, you know, it's a death in a way.
I mean, you know, you spend so much time with something and then it shifts.
But here's the interesting thing.
And maybe this is kind of advice to other founders, which is you always think about,
you always think about that thing you'll do when you, like, you know, make the $100 million or whatever.
You're like, you know, I'm going to go do that thing.
But you only think about that thing in the most stressful times.
So my thing was, so my cousin's a movie director, his name is Vincenzo Natali, a pretty legit guy.
And I was like, you know what I'm going to do?
As soon as like I, you know, the money hits the bank, I'm going to drive down to Hollywood and I'm going to help him make movies and be an actor and just kind of be one of those people.
And so, you know, it happened, the wire hit, and I was driving down the 5.
And I'm like, what the fuck am I doing?
Like, I love technology.
I love my job.
I don't know.
I hate Hollywood.
I have nothing in common with these people.
You know, I probably got two hours out of town and I just turned my car around and came right on back, because I was like, you know, you only have those visions at the most stressful times. And when you're not stressed, you realize that there's something that brought you to this place, and it's genuine interest and genuine love of it. And so my only, my only advice to other people going through this is: don't, don't use those dreams that you concocted when you were, like, really in the pressure cooker, like, not sleeping, your relationships falling apart. That whole thing, like, that's not the thing that, steady state, you're going to want to do. Like, you're probably where you are for the love of it, and letting that go tends to be pretty disastrous for some people.
Was making money, or having money, what you thought it would be?
You know, I had to play all of these tricks. I actually borrowed one, which was very helpful. So I just have had a hard time spending money, just because, like, I mean, literally, for me, like, you know, when I got into, like, the Stanford PhD program, this is so embarrassing, but, like, we always thought, like, $20 was, like, a lot of money growing up. Like, you know, we'd call it, like, the yuppie food stamp, because it was, like, 20 bucks. And I remember I was going to go to Bytes Cafe and I was going to pay with a $20 bill, because, like, that's kind of, like, some, like, stamp of, like, having money. So, you know, I was just, you know, just so naive to all of these things. And so, like, it was just very hard for me to, like, you know, spend, once, you know, I made enough generational, you know, generational wealth to do it. And so I talked to a friend of mine who went through similar things, like, you know, I did. He said, let's, you know, let's call him Brad, I came up with a Bradcoin. And the Bradcoin: let's say I'm worth, you know, 10 times more than, like, an average rich person. So the Bradcoin is worth, you know, 10 times more. So I buy a thing in Bradcoins. And so if it's, you know, let's say it's a business class flight, right? That's $10,000, but in Bradcoins, it's only $1,000. And $1,000 sounds a lot better than $10,000, so I feel good. So I actually had to adopt a lot of these mechanisms where, like, I'll make a Martincoin and it's worth this much money.
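The coin trick is just a mental rescaling: divide the sticker price by your assumed wealth multiple over an "average rich person." A minimal sketch in Python, using the illustrative 10x multiple and flight price from the anecdote (nothing here is a real figure):

```python
# Minimal sketch of the "Bradcoin" mental-accounting trick described above.
# The wealth multiple and prices are the anecdote's illustrative numbers,
# not real figures.

def in_personal_coins(price_usd: float, wealth_multiple: float) -> float:
    """Reprice a purchase in personal coins: dollars / wealth multiple."""
    return price_usd / wealth_multiple

flight = 10_000  # business class flight, as in the example
print(in_personal_coins(flight, wealth_multiple=10))  # -> 1000.0 "Bradcoins"
```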
What got worse with money?
This is something I have to deal with all the time, but, like, I mean, my wife forces me to keep it real.
I mean, she just won't abide by any of the shit.
So, man, I got three fucking dogs that are crazy.
Like, she doesn't like help in the house.
Like, I drive a fucking Volkswagen.
We have three chickens in the back. You know, I'm, like, fucking schlepping the kid all the time. I mean, like, listen, man, if it were me, I would be living your life, man. I'd be, like, 100%, you know, in New York in the penthouse with the private jet. And instead, I'm in the fucking Volkswagen with three dogs in a messy house and no help. So I just, like, I, uh... yeah.
Dude, you're so whipped.
You know, it's not even that, right? It's like, you know, like, I mean, this is what
marriage is, man.
Like, you know, what are your biggest lessons on marriage? For me, I'm 29. I got
a great relationship, but not quite that yet. What would you tell me about greatness in marriage
that I should know?
Well, listen, I got it wrong once. I'm not sure I'm the right guy to ask here. Like, my startup was really tough. Like, you know, it was, it was really tough. And I think that burned through my first marriage, and she was great. Yeah, fuck, dude, I'm the wrong guy to ask. I'm really the wrong guy to ask.
I mean, I will say, I will say something, I mean, which is a different question than you asked, but I think it's important. Which is, I have found that men in particular that have stable relationships just do a much better job in work. They're just much more stable. I think the best founders I have tend to, like, have families, et cetera. And I do think, again, like, you know, I don't want to make it a gender thing. Maybe it's not. It may just be my observation. I work with a lot of men, and, like, families are really, really, really good for men, even though they can be a pain in the ass. And so I just think the only high-level view is, like, these things are super important. And so, like, whoever you have, and you're working at it, like, it's an important thing. Like, it really is keeping you grounded.
I mean, in my case, listen, like, I mean...
You got chickens, baby.
I mean, you know, it's like, what does Zorba the Greek say? It's the full catastrophe. And I know it's the only way I can do what I do. There's no other way, right? I mean, like, the level of pressure, the amount of work that I do... I mean, I probably work, all in, 80 to 100 hours a week. I've been doing it for 10 years. I mean, the amount of demands, it's just very, very hard to deal with, like, without, you know, support and grounding.
And so, you know, in a way, again, like, I'm not the right person to ask, like, how do you treat your wife?
Like, I just, whatever, like, I'm a fucking autistic nerd.
Like, I have no idea.
But I do know that these things are incredibly important for us.
And you should value them and treat them as such.
If you think about Andreessen in 10 years' time, where do you think Andreessen will be then? Like, 10 years ago, when you remember it, it was a fucking different firm, amazing and innovative in its own time, but from where it is now, night and day.
Yeah.
Where is the 10-year Andreessen, in 2035?
The most remarkable thing about the firm, in my opinion, is that it's able to evolve and adapt very aggressively because of the way it's structured. I mean, Mark and Ben really are the top of the firm. They really are. And I think it's a feature, not a bug. I mean, it's kind of a historical quirk that VC was created around a partnership model. Like, that's the same thing you use for a dentist's office or a law firm. And there's positives, in that there's a bunch of different agendas that kind of sit at the same level. But for, like, decision velocity and disruptive change, it's death. And so I think that that's a massive benefit to the firm. I'm just delighted that this is the way it is, because they can make these big, aggressive moves. So I don't know what it's going to look like in 10 years. I guarantee it's
going to look different as it evolves with the landscape. Martin, I so appreciate you, dude.
You are fantastic. You're open. You're honest. I love the last 15 minutes there. But I really
appreciate you, man. Yeah, likewise. Harry, always a pleasure. You're the best.
Thanks for listening to the A16Z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
We've got more great conversations coming your way.
See you next time.
This information is for educational purposes only
and is not a recommendation to buy, hold,
or sell any investment or financial product.
This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.