a16z Podcast - Why a16z's Martin Casado Believes the AI Boom Still Has Years to Run
Episode Date: December 30, 2025

This episode is a special replay from The Generalist Podcast, featuring a conversation with a16z General Partner Martin Casado. Martin has lived through multiple tech waves as a founder, researcher, and investor, and in this discussion he shares how he thinks about the AI boom, why he believes we're still early in the cycle, and how a market-first lens shapes his approach to investing. They also dig into the mechanics behind the scenes: why AI coding could become a multi-trillion-dollar market, how a16z evolved from a small generalist firm into a specialized organization, the growing role of open-source models, and why Martin believes AGI debates often obscure more meaningful questions about how technology actually creates value.

Resources:
Follow Mario Gabriele
X: https://x.com/mariogabriele
https://www.generalist.com/
Follow Martin Casado:
LinkedIn: https://www.linkedin.com/in/martincasado/
X: https://x.com/martin_casado
The Generalist Substack: https://www.generalist.com/
The Generalist on YouTube: https://www.youtube.com/@TheGeneralistPodcast
Spotify: https://open.spotify.com/show/6mHuHe0Tj6XVxpgaw4WsJV
Apple: https://podcasts.apple.com/us/podcast/the-generalist/id1805868710

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed.
For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
If you ask me, what is the one area that AI has surprised you?
It's in coding. I've been developing my whole life, and I would never have guessed it'd be this good.
You have mentioned that some of the energy that you're seeing in AI really reminds you of the '90s dot-com boom.
This feels a lot like early '96, but I don't think we're anywhere close to a late-'90s-level bubble.
No, I think that could come.
The current technology wave is you can actually deploy capital and you can get revenue on the other side of it.
And I think that is what the market is trying to normalize.
But there's a true value being created in this AI,
and I think that if money's not following it,
it's going to miss the greatest super cycle in the last 20 years.
How would you describe your investing style today?
What is your filter?
I used to think from company out.
I've stopped that.
Now I think only from markets in.
The reality is the market creates the company,
in most cases, not the other way round.
And so I always start with what is the market,
and then I ask the question,
who is the right founder for this market?
It's clearly not perfect.
And in fact, you'll be wrong a lot of the time.
But I would submit that if you invest in this way,
you will be right in a way that's better than market norm.
Today, we're replaying a conversation from The Generalist
with a16z general partner Martin Casado.
Martin shares his perspective on the AI boom,
why he believes we're still in the 1996 moment of the cycle,
how a market-first lens shapes his investing,
and why he's skeptical of AGI-centric framing.
He also reflects on his path from game development
and simulations to pioneering software-defined networking and investing at the frontier of AI and
infrastructure. They close with why AI coding could be a multi-trillion-dollar opportunity,
how a16z evolved from a small generalist firm into a specialized organization,
concerns about Chinese dominance in open-source AI models,
and how World Labs is tackling the 3D representation problem with implications for robotics and VR.
Hey, I'm Mario, and this is The Generalist Podcast. As the saying goes, the future is already here. It's just not evenly distributed. Each week, I sit down with the founders, investors, and visionaries living in the future to help you see what's coming, understand it more clearly, and capitalize on it. Today, I'm speaking with Martin Casado, a general partner at Andreessen Horowitz and leader of the firm's infrastructure practice.
Martin has had one of the most fascinating journeys in Silicon Valley
from writing game engines for budget video games in the 90s
to selling his startup for approximately $1.3 billion in 2012
and now investing in the next generation of AI companies
like Cursor and World Labs.
In our conversation, we explore why Martin believes the AI boom has room to run,
how he identifies market leaders before consensus forms,
and what China's dominance in open-source models
means for American technological sovereignty.
If you like today's discussion,
I hope you'll consider subscribing
and joining us for some of the incredible episodes
we have coming up.
Now, here's my conversation with Martin.
Awesome. Well, I've really been looking forward to this a ton.
You have such an interesting background
and have sort of charted a lot of these different cycles
in technology as both the founder and investor.
So excited to get into AI today in particular,
but to start, I wanted to maybe begin with a part
of your history that intrigued me, which is that in the early 2000s, as far as I could tell,
you were spending a little bit of time at the Department of Defense,
working on simulations. Tell me about that.
Actually, it was Department of Energy.
So I've worked at Lawrence, yeah, Lawrence Livermore National Lab.
So actually, I'm going to rewind it like just a couple of years.
So I actually paid for a lot of undergrad writing game engines for video games.
So that was kind of, you know, back in the '90s,
you only really got into computers if you wanted to hack or make video games.
Like that was it.
I mean, it wasn't what it is now.
And I kind of took the video game route.
And so I did like a lot of, you know, game development.
And in college, I did a lot of engine development.
And so what I was interested in was things like 3D engines and game physics and game mechanics.
And that pushed me towards computational physics, like simulation.
I mean, the game industry is a very tough industry to be in.
And I was actually quite interested in science.
and I was quite interested in physics.
And so that pushed me towards the national labs.
And so, yeah, so my first job was doing basically computational physics
working on these large simulations at Lawrence Livermore National Labs.
And I started interning in like 97, 98 time frame,
and then I took a full-time role in 2000.
Do you remember what games might have used some of the engines you were building?
This is so funny.
So I worked, the company probably doesn't exist anymore,
but I worked with a, it was a,
contract outfit called Creative Carnage,
and they worked with the budget division of,
I think it was either Acclaim or Accolade,
and it was called Head Games.
And I think we have the great distinction
of having had the games
with the lowest ever score on PC Gamer.
So they would do games,
they would do games like,
I remember there was like Extreme Paintbrawl,
a mountain biking game,
like a skydiving game.
And so this was like very early days of like 3D engines.
And we didn't quite understand the game mechanics.
And so it was like a super budget, you know, game shop.
But these were games that you'd go to Walmart and buy.
I mean, they were very legitimate games.
And so that was kind of my shady entree into this.
I love that.
The razzies of video games.
Exactly.
Yeah, yeah.
Yeah, budget games.
Yeah.
This is, you know, off-piste at this point.
But do you, are you still a gamer?
Like, do you find yourself interested in that as a media form?
So I've never been a big gamer as far as playing games,
but I've always loved creating games.
And I still do that.
That's what I do in evenings now.
So I love music.
I love narratives.
I love programming.
And I love games.
And so actually, if you track some of the, I mean, this is like not a great word.
This is all hobby work.
But, you know, I worked with Yoko on AI Town.
I've recreated a bunch of old 8-bit games using AI,
and so it's actually still a big passion of mine.
But again, I'm not a big gamer.
I don't like sit down and play games.
That's really cool.
I knew you still, you know, kept your technical chops up,
but didn't realize you were applying it in that way.
That's super interesting.
By the way, AI makes it a lot easier.
I would almost certainly not be programming like I do now,
if it weren't for AI, for sure.
Okay, well, we're definitely going to dig into that
from a few different angles.
You know, after Lawrence Livermore and the Department of Energy, you started your Ph.D. at Stanford and then sort of dropped out to start Nicira. And, you know, I wondered about that part of the journey specifically, because you've made a few big leaps in your professional life, and that sounded like a rather significant one. Had you at that point imagined yourself being an academic indefinitely, or had the idea of starting something always been something you were interested in?
Yeah, so I actually didn't drop out.
I finished my PhD, so I think it's, it was kind of funny.
So the adage, it's kind of interesting, the adage at the time was the only way to be a successful founder is you have to drop out of your PhD, right?
Because, you know, Sergey Brin and Larry Page were on the floor above where I was in Gates.
And most, almost all of the successful founders at the time were PhD dropouts, whereas I had actually completed.
So, no, no, I actually didn't plan to be a founder at all.
I actually had a faculty offer at Cornell at the time.
And we're talking 2007 now.
So my plan was, you know, I did this PhD work.
I had done a startup previously as a very small thing.
It was called Illuminics Systems, and, you know,
instead of raising money, we ended up selling it.
And so I liked being a founder, but I thought this was kind of like,
I was so naive.
I thought this was something that, you know, you could just start a company
and do it for a couple of years and then sell it and go do something else.
But, you know, I started the company in 2007 and then 2008 hit.
That was a hell of a reality check because, you know, this is this fork in the road.
Like, do I do this company in, you know, the worst economic environment since the Great Depression, or do I go be an academic? And, you know, it forced me to really decide what I wanted to do, and I decided to do the company.
Was that a difficult decision at the time?
It was so hard.
I mean, it sounds daunting given the environment, but, you know, in your spirit?
It was, it was so hard because, you know, I mean, especially because.
Because, you know, I mean, this is when Sequoia had released their "R.I.P. Good Times" slide deck.
Everybody was, you know, RIF-ing their companies.
I mean, the economy was tanking.
So it was very, very tough.
And part of it was honestly just responsibility.
I was just like, I convinced all my friends to join this company.
And I would feel like such an asshole if I just like left.
That was part of it.
And another part of it is I just felt like there was work to be done that I hadn't finished.
And I just am of the temperament that if I start something and I don't finish it, it'll bug me forever.
And so I kind of didn't want to face myself in 10 years.
But I'll tell you, when I made the decision, I called my mom, and she said, Martin, you're an idiot.
So for what it's worth, I was pretty alone in the decision.
Wow, no kidding.
Well, it ended up, you know, being both technically or technologically an important company and, you know, having an incredible outcome.
Yeah, I worked out.
Yeah.
And, you know, in sort of reading about part of that period, I was interested to see just how important you really became at the acquirer, VMware, from sort of contemporary press at the time.
You'd really taken on a growing role and scaled the sort of team that you were leading to really a rather large size.
So it seemed like that was also clearly an option for you.
How did you make the choice to, you know, flip over from, you know, operating at a very, very high.
level to the investing side.
Yeah. So, you know, I learned easily as much at VMware as I did in the startup.
And it was a phenomenal experience. And, you know, it's one thing to do a startup and, you know,
to do early founder sales and to build a team. And it's an entirely different thing to get,
you know, a business to a billion globally with all the partners. And especially within a large
organization where, you know, you're overlaying with kind of an existing core team and other
product teams, et cetera. I mean, it was a great experience. But one thing that's,
important to remember is, you know, I started the research for this in probably 2005 or 2006, right?
And so by the time, you know, I had been at VMware for three years (we got acquired in 2012),
it had already been 10 to 11 years that I'd been working on
exactly the same thing. And so I've just found that my career goes
in kind of decade epochs, right?
So in my 20s, it was write papers, write code. I was the engineer, you know,
the poorly dressed PhD student that knew nothing about business and nothing about anything.
And it really was, that's what I did.
I mean, I wrote a lot of, a lot of papers.
I built a lot of systems.
And I love that.
And then in my 30s, basically almost to the day, I mean, it was this journey,
which is like, you know, building products, building a business, building a team,
and doing that globally.
And I did think to myself,
like, you know,
I'm so enamored with technology
and I'm so enamored with startups
and I love innovation.
You know, you ask yourself,
okay, so what do you do next, right?
And I like being close to like where things are being created.
And so that means that you get involved
in the startup ecosystem.
But do I want to spend another 10 years
doing a journey that I've already done?
Or do I want to zoom up one more level?
And so I almost feel like my 20s,
It was like the abstraction was, you know, a product or lines of code.
And then I zoomed out a little bit.
Then the abstraction was one company.
And then when you join a firm, you zoom out a little bit more.
And then the abstraction is many companies.
And you actually see the experiments in parallel.
And I will tell you, like from this vantage point, even though I had done two companies,
I learned so much more than I ever would have if I had done another company.
So for me, it was the right decision.
Does that mean that the sort of glide path you're on is toward, I don't know,
governor of California, the next abstraction layer, mayor of San Francisco.
I will never, listen, I had a small taste of politics last year when I thought that there was
nobody defending AI from a policy standpoint, never realized I will never, ever, ever, ever go into
politics, man.
As far as I can tell, everybody just lies to each other all the time.
It is not for me.
Yeah, it sounds like it would be infuriating.
You know, Andreessen had invested in Nicira, and so you'd obviously built this relationship
with Mark and Ben, but how did the sort of decision to come aboard actually come about?
Were they, you know, pitching you? Were you pitching them? How did you guys make the call?
Yeah, it's kind of a funny story. It's actually not a super public story.
So Mark and Ben invested in Nicira as angel investors. You know, this is before the fund existed to begin with.
And actually, the way that I met Ben was that Andy Rachleff was on my board.
So Andy Rachleff is the famous Benchmark partner.
He's a professor at Stanford.
And I was looking for a CEO because, you know,
I was a very, you know, technical CTO kind of co-founder.
I didn't know anything about enterprise sales.
And he's like, you know, I know this guy.
He's just coming out of HP.
He'd sold his company. His name is Ben Horowitz.
And so I actually met Ben Horowitz to interview him for a CEO.
And you know what he told me?
He said, I'm too rich.
You're like, all right, this guy's not the guy.
No, he was a guy, but he was so great.
I actually learned more from him in that 45-minute meeting
than any other advisor I'd talked to up to that point, which had been years.
I mean, it was the most eye-opening thing ever.
And so he said, listen, we'd love to angel invest.
Mark and I are trying to figure out what we're going to do.
They did some angel investing.
And when they started the firm, then we went and pitched and we raised, I mean,
at the time we called it a series B, but it was really a series A from them.
And so, you know, we kind of had a history before. Ben joined the board.
And so, you know, listen, I built the company under his guidance.
He was very critical to basically every aspect of it.
And so when I was thinking about what to do next, actually, I reached out to Mark.
And I actually felt it would be better to reach out to Mark because like Ben was on my board.
And so like that relationship is, you know, it's kind of like, it's like your PhD advisor.
you're never not their student.
And I think with a board member,
you're never not like the founder
that they work for.
And I said, hey, listen, Mark,
you know, I'm interested in the next steps.
And one thing people, I think,
don't appreciate about Mark and Ben
is how good operators they are.
And so they took it very seriously.
They themselves managed the conversation.
I mean, I was still really trying to figure out
the next thing to do.
And Mark was really texting me every single day.
You know, they brought me in.
And, I mean, like, the close process that these guys run is just absolutely world class.
And, of course, I knew them very well.
So it's not like that would have really been necessary, but, you know, they knew what they wanted.
They had an opening for an infrastructure investor.
We had a long relationship, you know.
And so, you know, in fairness, I didn't even really talk to anybody else.
You know, I mean, there was some kind of very early conversations, but I knew that, you know, that's where I wanted to land.
And so it was kind of a mutual process that was pretty streamlined.
Amazing, thanks for sharing that. I have to jump back to when you say the 45 minutes with Ben
taught you more than every other advisor. Do you remember anything, you know, from that meeting in
particular that stood out?
Yeah, yeah, yeah, a bunch of them. I mean, one of them is, you know, I was
talking about pricing. By the way, you know, anybody who works with me is going to realize that
like half of what I say I just steal from Ben, because what I'm about to tell you, I tell
people all the time. But it's so true. And so I was asking a question about
pricing. And he says, I just want you to know, this is the single most important
decision you'll make in the history of the company, one decision. And really, for your net worth
as a human being, this is the most important decision. And let me describe why. Well, you know,
you own a bunch of the company. The valuation of the company is going to come down to
growth and margins. Growth and margins, the single most important decision on what impacts
that is going to be pricing. And so everybody views pricing totally glibly or they kind of make it up
or they're ad hoc,
but they don't understand
how important
that single decision is
towards the health
and ultimate valuation
of the business.
And then he actually broke
all of that down.
And at the time,
software was going through
a pricing change
like it is today.
So it was going from
kind of on-prem perpetual
to recurring.
And this had massive impacts
in how you comp your sales team
had massive impacts
on how you do go to market
and massive impacts
on what numbers meant
to be a healthy business.
And so he just walked through all of that from that single
discussion. And just so you know, we're seeing the same shift now as we go from basically
recurring license to usage-based billings. And so even, even, you know, this conversation
I had in 2009 is still relevant today and I draw from it. So I think this is a good example
of, you know, this deep insight that he was able to convey from his operational knowledge.
Yeah, incredible. If you were to pick a VC firm that has changed the most
since you joined a16z in 2016,
arguably you would pick your own firm,
the firm you work at, in terms of transformation.
So much seems to have changed in that time period.
And so I wonder, you know, when you look back on it,
what was the Andresen of 2016 like
and where do you see the biggest differences?
Oh, yeah, it's totally different.
I think I was the ninth general partner.
You may want to check me on that.
It was like the ninth.
And when I joined, there were probably
70 people at the firm.
On Mondays, we could all sit around the same table.
Everybody was kind of a generalist.
You know, we didn't have a notion of a more senior investor below the GP ranks.
Like, we didn't have any sort of progression ladder.
It was actually a specific tenet of the firm that you'd have, you know,
relatively junior, we'd call them deal partners, DPs.
And they would only stay for two to four years.
And the idea was that, like, you know, you get new network that comes in.
You know, they're quite relevant.
And then also you kind of spread the a16z network as they go join other firms.
Yes.
So it was very, very different.
So now all of that's different, right?
Like, GPs are specialized.
We have multiple funds.
You know, we have a clear progression ladder of investing partners.
We're, you know, 600-some-odd people, maybe more.
you know, we invest in all sorts of different levels.
There's a lot of process and methodology.
And so I would say the primary motivator for all of the change
is the question, how do you scale venture capital?
Yes.
You know, in some ways, and I've said this before,
so it's kind of this historical quirk that venture capital firms
have the same partner model as, like,
a law firm or a dentist's office or a doctor's office,
which is this partnership model where everybody's kind of equal, et cetera.
And it made sense when the market was 1,000th the size.
Like if you think about it, when venture capital firms were created,
the market was so small.
But it's grown now and it's professionalized as it's matured a lot.
And so now firms have to answer the question.
So how do you scale deploying money?
How do you scale AUM?
How do you scale decisions?
How do you deal with conflicts, et cetera?
And so that's been the prime motivator that has driven many of the shifts
that we've made at A16Z.
You mentioned that one of the big shifts
is this verticalization
and you head up the infrastructure practice.
For someone that maybe wouldn't understand
how to put the parameters around that,
what falls in the bucket of infrastructure
and what might fall beyond it, so to speak?
So the roughest cut is
if the buyer or user is technical,
it is infrastructure.
So it is the stuff to build the stuff.
like apps are built on infrastructure.
Now, and in particular, it's computer science infrastructure, right?
So, like, you could say infrastructure is, you know, construction and rebar and concrete.
This is computer science infrastructure used to build software.
And so it's the traditional compute, network, storage, security, dev tools, frameworks, et cetera, et cetera, et cetera.
Now, if there's a piece of software and the user or the buyer is in marketing or in sales,
or in a flooring shop or a veterinarian's office,
that's not us.
That's apps.
For us, all of the consumers,
whether they're an admin type or a developer,
that's infrastructure.
And, you know,
in looking at the team that you've built out,
one of the sort of striking things is,
it's an extremely technical team.
You know, I've seen folks talking about sort of building custom AI GPU setups
and so on and so forth.
You know, when you think about many of the great venture investors
over the past,
however many years, pick a few decades,
a lot of them are not super technical, right?
Like you can look at Mike Moritz or John Doerr, or Peter Thiel is maybe in between a little bit,
but ultimately I would say probably not a technical person in the way that we're talking about it here.
Why does it matter to, you know,
why is it important to have that level of technical expertise to do this style of venture investing?
So I think actually the bigger priority for hiring on
our team is product experience,
especially in infrastructure and enterprise,
and less pure technical prowess.
Like, nearly everybody on the team has either built a company
or run a product team.
There's very few that were like low-level engineer, you know,
or low-level researcher.
And so I would say that is the primary focus.
And the reason is because we invest somewhere between the seed,
you know, let's call it like an early Series C.
And often you can't judge a company purely by financial metrics,
but often there's enough to evaluate so it isn't just a bet on the founder.
And so what are you left with if that's the case?
What you're left with is market understanding.
And I just think it's very tough to do market understanding and infrastructure
if you don't have a product background, which, by the way,
is way more important than the technical background.
If you don't have a product background, you can't evaluate the market.
And then if, you know, in infrastructure,
you don't have some technical basis.
I don't even think you can
like have the conversations
that are important.
And then of course
to map any given company
to that market,
you have to have also
that same understanding.
I think it's a great point about,
listen,
I think some of the best infrastructure investors
ever were not classically technical.
Like Mike Volpi is phenomenal.
Doug Leone is phenomenal.
Fenton is phenomenal.
These are the greats.
And I think that a lot of this
is because we have almost a generational
shift in the
industry, where before it was such a kind of obscure knowledge that understanding the people and the
networks and where they came from was critically important. I think now it's matured to the point
that you actually can take a bit more of a systematic approach based on the fundamentals
of the industry rather than those networks. And so I think this is more of a testament to the maturity
and the size of the market than us as investors. And I will also say many of the top
investors right now in infrastructure are non-technical and they're phenomenal, right? There's many
great folks out there. So this is just our approach. It's definitely not the only approach to being
successful. That makes sense. You talked about how your life has sort of fallen into these
decades. And it is almost a decade, I think, from when you joined A16Z. With the benefit
of that decade of learning, how would you sort of describe your investing style today? What
does your filter on this market look like?
So I've kind of decided that we as investors need to remove
ourselves from predicting the future, which is a funny thing because we're supposed
to predict the future.
I don't think that's a mistake.
And so our approach is very straightforward.
We believe that the founder network, the founders themselves, are smarter than
customers.
They see the future, not us.
They're definitely smarter than investors.
And so if there are three or four very good founders that are working on a space,
we just assume that space is good because, A, they're founders,
and B, they're bearing the opportunity cost of doing it.
You know, they're risking their time, you know, their family's wealth in order to do this.
And so to first order, we just say, okay, what are interesting spaces?
And there's a whole methodology we used to do that.
And if there's an interesting space, the next question we ask is who is the leader in that space?
and is it too early to determine?
And if, you know, if it's too early, we wait.
And if we determine that one we think is the leader,
then we try and make the investment.
The thing about this approach is a,
it kind of removes us from, you know, like,
there's so many aphorisms on investing.
Like, this is a great founder and the founder has grit.
And, you know, like all of these things.
But at the end of the day,
all of that you have to kind of filter through yourself
and your team.
and we're all very biased
and none of it
you can systematize
whereas if you're simply asking the question,
A, is this a legit space, and B, is this the best
company in this space? This is something you could actually
throw work at, and it's not,
it's clearly not
perfect and in fact
you'll be wrong a lot of the time
but I would
submit that if you know if you
invest in this way you will be
right in a way that's
better than
market norm. Do you try, I mean, you must actually, to some extent, still evaluate the founder.
And I imagine you've had plenty of meetings where you've met a founder and felt sort of palpably,
this is an extremely impressive person. It almost sounds like you distrust that emotional response
in yourself? Or how do you sort of think about that? This is a great question. So if there's one
thing that has shifted in me about how I think about investing and how I think about companies,
I used to think from company out, right?
So I'll look at the company.
I'm like, the founder is great.
The product is great.
The technology is great.
The good market is great.
I've stopped that.
Now I think only from markets in.
The reality is the market creates the company,
in most cases, not the other way round.
And so I always start with, like, what is the market?
And then I ask the question,
is just the right founder for this market?
The answer to your question of, like,
is this a great founder or not founder?
I don't think that there's a single answer.
It strongly, strongly depends on what they're setting out to do.
Now, I do weigh a lot of things.
I weigh things like earned knowledge.
Like, have you earned the knowledge to be in this market
based on your experiences in the past?
Like, were you in the bowels of Uber
building out their storage system
and now you're bringing it to the rest of the world?
You know, I'm a very product-focused investor,
and so I just tend to resonate with product-focused founders
that see the world in terms of
what is the product we're going to create
and how am I going to insert that
into the market, as opposed to pure
technologists, who don't care about that,
and pure salespeople, who also don't care about
that. So I'm a very product-focused
investor, but I will say that
my umbrella answer, my macro
answer to you is almost all questions I ask
about companies actually stem from the market
on it. Really interesting.
You mentioned that
you're sort of happy to wait until a leader
has emerged in a certain market.
But how do you determine when that's the case?
And, you know, whether it's sufficiently durable?
Is it, like, true market share, sort of, you know, looking at it from that vantage?
Or are you sort of making a few guesses of, like, you know, maybe?
Yeah, yeah, yeah.
That's, I mean, yeah, that's the part of the job where it's an underdetermined system, right?
There's way more variables than equations, so we just do our best.
And, you know, our analysis is multifarious, right?
Like, I know, like, as investors and probably fueled by things like X,
we like to reduce VC to like, here are these five things.
Here's our basic thesis.
And, you know, the reality is most investment decisions take a lot of work.
You consider an awful lot of things.
And then at the very end, you kind of look at it and you make a judgment on that.
So what are the things we look at?
Like I mentioned, founder market fit is very important.
Tactical approach is very important.
The market itself, to me, is incredibly important. I've just learned that if you're selling into a market that's shrinking, life sucks.
Even if it's a huge, huge market. Let's say, like, switching and routing is this huge market. But if it's only growing 3%, or it's flat, or it's shrinking, you know,
you're dealing with budgets that are contracting, people that are losing their jobs, all of the incumbents are going to be fighting for their lives. So I'm very sensitive
to markets that are growing versus shrinking,
ability to hire, ability to fundraise.
I mean, all of these things go.
I mean, the final memos for investments
tend to be fairly comprehensive.
And so all of this also necessarily requires us
to do a lot of work before companies are fundraising.
And so, like, there's kind of a necessary part of this motion,
which is you're constantly trying to, like,
enumerate the companies that are out there
and then doing the analysis to determine, you know,
who is in the lead and who is not.
And then you're right.
At the end of the day,
you just kind of be like,
okay, we did all of this work
and we think that you can make this argument here.
And we get it wrong a lot, right?
Nobody can predict the future.
Yeah,
that's the beauty of this asset class, right?
Yeah, 100%.
I mean,
you know,
you just have to be comfortable knowing that
even if a company looks like the leader now,
anything can happen.
Like, they can get acquired the next day in an acquisition they decide to do.
A new company can show up that didn't exist before.
You know, anything could happen, there could be a platform shift, et cetera. And so the entire goal is,
can you over a set of investments, you know, beat, you know, the upper quartile of the other venture
capital firms? That is the goal and you take the losses along the way. You know, we're talking
about, you know, the importance of the entrepreneur or the executive. On X, I saw you mentioned
that you thought Hock Tan, the Broadcom CEO, was, you know, one of the great CEOs of, you know,
the past decade plus.
And that's not a name that I usually hear discussed in that debate.
Like, can you tell me where that comes from and why you think that?
I'll make a stronger-form statement:
I think Hock Tan may be the best,
outside of maybe Jensen and a handful of others,
he may be the best CEO the industry has ever seen in infrastructure.
He's just unbelievable.
You know, somebody should do, like,
the Hock Tan book or overview or portfolio
or, you know, focus piece or
whatever. The employee retention
is unbelievable.
He's managed to do these
incredibly complex acquisitions.
And I will say, so, you know,
normally when you buy a company,
any company at all, to integrate the acquired company
you've got all these kinds of lawyers and
corp dev and biz dev and HR people running around,
and you've got this entire committee for integration.
When Hock Tan acquires a company,
even something the size of, like, a VMware,
the M&A committee is Hock Tan.
The integration committee is Hock Tan.
I mean, the guy is just legendary on like how hard he works,
how he runs his meetings,
he knows everything about his business,
he knows all of the numbers.
And what's interesting is he's a business guy.
He's not a technologist nor a product guy.
But, you know, he has stayed away from the limelight.
And, you know, to his credit,
he just focuses on the business.
there's a lot we can all learn from what he has done
and what he's going to do.
I really do think he is probably the most iconic CEO right now.
Well, you've put a good marker on my editorial calendar there,
so I'm going to make sure to do some more research
and see if I can write a good story.
I don't know if it's ever been done before.
You should.
Yeah, why not?
Yeah, that's a great thought.
You had another tweet that I thought was really interesting
and caused a little bit of a stir in VC world,
which it's so fun what things happen to cause a stir or not in these discussions.
The tweet for folks that didn't see it is,
the idea that non-consensus investing is where the alpha is
is actually quite dangerous in the early stage.
There's a little bit after that, but that's sort of the meat of it.
Why do you think that struck such a chord and caused such, you know, not outrage,
but, you know, discussion?
Well, I think it just managed to piss everybody off.
I think there was like every constituency found a reason to hate it.
right?
The ideal tweet.
Yeah, that's right.
It's like the mother of all Rorschach tests, right?
And, you know, there's this sense outside of VC that VCs are just pattern matching and add no value.
And so for those people, it was a confirmation.
And so they're like, oh, I know it.
VCs are just consensus, you know, and now Martin is acknowledging it, which I totally wasn't,
but we can get into that.
And then for the investors, it was like an attack on their originality,
which is like, I don't do that.
I'm not consensus.
You had many junior investors who don't know what they're talking about,
so they kind of said a bunch of random stuff.
But you had some very senior investors that were like,
oh, I do all these non-consensus bets and whatever, whatever.
So everybody found, like, some reason to take umbrage at it.
Which, by the way, I hadn't even thought deeply about the tweet,
because it's a fairly innocuous thing
that I thought was just so obvious.
I was like, I'll say some obvious thing on a Sunday morning.
And it just turns out to have been a lightning rod.
What prompted you to say it, and what were you trying to communicate that probably a lot of people talked past?
Well, I work with a large team of investors and I'm often in the position of providing guidance.
And if you're not considering follow-on capital, then you're not fully evaluating the opportunity set.
And I've found that the cliche VC aphorism rulebook is like,
everything must be alpha and this and that.
So I just thought there's plenty of people talking about, you know,
finding the diamond in the rough.
There's plenty of people that are talking about finding the white space.
But, like, there's another side to it that isn't as represented,
which is, as you go to later and later stages, VCs become more and more consensus driven.
And that's exactly because they're putting more money in and they need more predictability.
It just falls right out of the system.
So in a way, this is the most banal tweet you could ever imagine.
It's actually totally obvious.
I'm not saying consensus is the best.
I've done tons of non-consensus stuff.
I'm just saying that if you don't consider this, it's dangerous.
And so often we don't talk about that.
So that was the genesis, which is a really banal tweet from a very obvious place.
Well, it's always good to cause a little bit of a stir every once in a while,
especially over something that is ultimately benign.
I just feel like X is totally chaotic, right?
There's some tweets I'm like, this is so deep and pithy,
and no one notices, and another one
is this kind of pointless thing that blows up.
And so in a way, again, you know, just like
looking at the market as opposed to the company,
I think that, like, tweets are much more indicative
of the people receiving them
than the person actually tweeting them.
Speaking of, well, quite consensus sectors at the moment,
let's get into AI and this,
you know, wild world we're living in at the moment, which you're spending a lot of time on.
I know that you have mentioned that some of the energy that you're seeing in AI really
reminds you of the '90s dot-com boom.
Like, what are those sort of symbols of that effervescence that you spotted that did bring
that to mind?
Yeah, so let's see.
I turned 20 in '96.
And I, you know, I was interning at Livermore,
probably starting, I don't remember, in '97 or '98.
But, you know, so I was going back and forth for, you know, a few years.
Then I, you know, I worked full time at Livermore in 2000.
And I just remember this kind of slow boil that erupted during that time.
Like, when I started, you know, computer science as an undergrad back in '94, '95, you know,
It was kind of this wonky discipline, you know,
it was actually kind of in a little bit of a slump.
But the web was just starting and you could feel this excitement.
And then by the time I graduated, I mean, I went to Northern Arizona University.
It was the school where my father was a professor, in Flagstaff, Arizona.
And even in this small mountain-town school, we had, you know,
students that were graduating, getting these crazy jobs, you know,
as programmers, all over the nation.
And, you know, they were being actively recruited.
You know, so, like, there was just kind of all of this excitement.
And then when I would go to the Bay Area, you know,
I would kind of get kind of caught up in all the founderitis that was going on.
And you had everything.
You had all the parties.
You had all, you know, I remember the first time I landed in Silicon Valley.
I drove down the 101.
I'm like, all these billboards are talking to me, right?
Yeah, yeah.
You know, there was just this energy.
and it was in the streets
and you'd have, like, the Linux conference,
and the Python conference was going on,
and everybody would show up
and all these companies getting created
and it was just
optimism and chaos
in every sector that you look at
and then it feels to me
that things got a bit institutionalized,
which is, it's just been kind of like another day
of doing business for the last 20 years,
and I feel again now you have a lot
of the same type of energy
which is like I mean you know
the billboards we've had for a very long time
but again you've got like
these kind of cultural movements that follow it
and, you know, all the founders
and all the investing and going on.
So I just feel like it has the same level of energy
that we had in the late 90s.
Do you think we're circa 96 or closer to circa 99, early 2000s?
96, 96.
Really? You think we've got some room to run?
I think people forget what a bubble looks like.
I mean, every time valuations go up, people say bubble.
I mean, you know, but like, listen,
I mean, a bubble is like when you get into, like, a car
and the taxi driver is giving you stock tips.
Like, that's a bubble.
I mean, remember all of the crazy excesses and, you know, all the crazy blowups.
It's totally, totally different.
So, I mean, this feels a lot like early '96.
And the big difference is, back then companies weren't even making money.
And it lasted so much longer.
By the way, people were decrying bubble in 97 and 98.
Yeah, I believe that.
And 99 and 2000.
Like, I mean, like, you'll be right eventually, yeah.
You know, people were saying it, right?
And then, you know, and they actually had really legitimate concerns.
You had WorldCom, which had, you know, $40 billion in debt, which is super levered.
Yeah.
It was like a single supplier that was underlying all of this stuff.
You know, you could IPO a company, you know, with basically no revenue, very little revenue.
Many of these companies, these crazy businesses had no money.
Like, they were making nothing, right?
And so, like, there was these very legitimate concerns.
And none of those really exist today, right?
I mean, you know, the companies that are bankrolling, a lot of the infrastructure have
hundreds of billions of dollars on the balance sheet, you know, Google, Meta, Microsoft.
Like, OpenAI has real revenue.
Cursor has real revenue.
And the valuations aren't totally out of whack with the revenue.
So, yes, you know, markets will oscillate for sure.
And so they'll go up and down and you'll have pullbacks or whatever.
but I don't think we're anywhere close
to like a, you know, late 90s level bubble.
Now, I think that could come.
And, you know, listen, when like, you know...
And probably will, right?
And it probably will, but, like, I don't think we're anywhere close.
I just think people forgot what a good bubble looks like.
They're a lot of fun, man, so...
Fun while the music is on.
I promise.
I think, you know, I don't know where we are in the cycle,
and, you know, I didn't live through that period as an adult,
so I can't compare.
But I think we're at the stage of taxi drivers
knowing these things very well.
I do think, you know, the valuations are certainly getting spicy in some levels.
Maybe they're not quite at the peak.
Yeah, but I mean, honest question for you.
Do you think right now is out of whack with 2021?
I don't think it's 2021.
Nope, I agree.
I think we're not there yet.
But I don't know, it feels like 2019, mid-2018 to me.
Like, yeah, that seems about right.
And so, yeah, maybe we've got another 18 months or two years. But I don't know, like, if I was writing big
checks in, let's say, 2019, I don't know how many of those I would have been thrilled about
in 2022, right? For sure, you have valuations waxing and waning. I think it's great to
actually compare it to 2021. I mean, in 2021, there was a lot of excitement, but it wasn't actually
driven by real business usage, right? It was, like, COVID,
the flight to online,
and then just a bunch of private capital
flooded in the market.
Remember, like, you know, Tiger, Coatue,
Insight, all of these were deploying very heavily.
And so in a way, there was kind of this excitement and exuberance,
but not for any sustainable business reason.
It was really like a, it was like an influx of capital
and then this kind of, you know,
quirk of the macro that wasn't sustainable.
But with AI, I mean, you know, we're
three, four years in, and it looks sustainable.
We understand retention.
We understand growth.
We understand margins.
Yeah.
And much less of a tech revolution.
You know, there was really...
Yeah, that's right.
So we actually have a foundation underlying it.
So I would say, yeah, I mean,
it kind of feels a little bit 2019-ish,
but it's real.
And so, you know, unlike, you know,
the 2021-22 collapse.
I mean, you could argue that we're still early in cycle.
And yes, it's going to continue to oscillate.
But I don't think we're anywhere near the top.
Interesting.
Yeah, I think that's...
I mean, I want to think about it more,
but I think you make a lot of very good cases there.
We don't have the tigers coming in,
but we do have a lot of sort of sovereign wealth fund money perhaps coming in
and a lot of big corporate cash, right?
Totally.
That's a different level.
This is actually very, I mean, maybe, you know, on this podcast
we're not going to have the time to dig into it,
but this is a very interesting construction about the current technology wave,
which is you can actually deploy capital
and you can get revenue on the other side of it,
but these are very capital-intensive businesses, right?
And I think that is what the market is trying to normalize.
Like, you can't even really enter the casino
without a billion dollars for these foundation models, for example.
And that is because, so I agree we're in a bit of, like, terra incognita
as far as understanding, you know,
what the capital structure is long-term after you've raised this much money.
But what we do know is you can actually convert it into revenue and into users.
And so I think this is where we're going to see
a lot of rationalization
and normalization in the market.
But again, I don't think it's speculative, right?
It is just trying to understand
what the market is doing. I ultimately think
the markets are very efficient.
And so I think, you know, like, we'll rationalize,
but there's true, true value being created in this AI.
And I think that if, you know, money's not
following it, it's going to miss the greatest
supercycle in the last 20 years.
Yeah, that's the, you know,
the other side of it is like you could
really miss out. You know, you mentioned that
there's really something obviously valuable being created, and I fully agree. But I was interested in the fact that you see these studies, you know, MIT had their study not long ago that, you know, said that, what was it, 95% of these enterprise deployments are not delivering value. Why is there that gap in what we're seeing? Is that like a measurement problem? Is it a, you know, a deployment problem? I think one of the problems with AI is that it's been around forever. And so we have all these presuppositions on what it is.
right? So here's my view on AI. Right now, AI as it is, is very much an individual
prosumer-type technology that's attached to individual behavior, right? It's like me using
ChatGPT, me using Cursor, me using Ideogram, right? Me using Midjourney. And the value that
organizations get is that their users are using ChatGPT. Their users are using, you know,
whatever. That's what it is. However, there are platform teams within the enterprise, and their
boards are like, we need more AI, go implement stuff. And so they're scrambling to do these AI
projects. And of course, those are failing, right? This is such a different technology and a different
shift. So if you measure some internal effort to go ahead and do stuff by yourself,
then, you know, I would say the failure rate, of course, is going to be very high.
That has nothing to do with the fact
that, you know, now many tens of millions of users
are using these technologies,
getting value from them, and driving
that value into whatever the workplace is.
And so I just think that
when it comes to this wave
of AI, we have to realize it's a very
new thing.
It's going to have a totally different adoption
cycle. We've not yet cracked,
like, direct enterprise sales.
You know, I would say for those
enterprises that are listening, you know,
rather than doing your own kind of project for now,
it's probably better to work with like a vendor
or a product company that's actually doing these things.
Then over time, just like the internet,
by the way, the internet was the same way.
Just like the internet, it will make its way
into the enterprise in a way that we all understand.
But it's just not there yet.
What are the ways that you've ended up
incorporating it into your life most, would you say?
And on the other side,
are there areas in which you're especially protective
of not using it, sort of to preserve
your thinking?
I mean, like I mentioned,
so I code with AI.
So the reason I stopped coding is I just didn't want to learn the next framework, right?
I mean, the thing with developing in the late 90s
is you'd sit down to your computer and you'd write code, you know,
and it was all kind of there,
and you didn't have to learn a lot of stuff.
You'd mostly just write in code.
You know, and then I, you know, through the 2000s,
I did my PhD, and, you know,
so then, you know, I kind of invested enough time
to understand all the frameworks and whatever.
But, you know, I step away because I'm building a business
or I'm becoming an investor.
When I go back to it, I just have to learn all of these new things,
especially with all this web stuff.
And you're not learning anything fundamental to computer science
or anything foundational or anything that's useful outside of that context.
You're learning, you know, whatever stupid design decision,
some random person that created the framework did.
And so that's really what slowed me down from coding.
And with AI, I don't have to deal with any of that.
I'm like, you know, whatever.
Give me boilerplate for an app
so I can write a video game
and all of those decisions are made by AI.
So I use AI coding very heavily.
I do it almost every night
and it's really just been lovely.
Yeah, yeah.
It's kind of my relaxing time,
but it's really just lovely to be able to just
kind of focus on code again.
You know, another kind of just personal thing I like.
So I love reading, you know,
kind of, you know, historical books,
about historical figures that are closely tied to innovation or economics.
And often I have a lot of questions.
And so these days, what I'll do is I'll read a chapter.
And then when I walk my dog, this is silly.
When I walk my dog, I use Grok audio mode,
and I actually have conversations about the chapter.
And in a way, I don't even care if, like,
the questions I have are kind of analysis synthesis questions,
not fact-based questions.
Like make an argument of why the school of Salamanca in Spain
in the 1300s
was a progenitor
to the Austrian School of Economics, right?
And so, like,
and, you know,
so I actually have these conversations
about what I read,
and I just find that I think
more deeply about it when I do it,
and I actually find it interesting
and it's kind of more well-rounded.
So that's personally been great.
I think I'm a bit OCD when it comes to writing,
so I will not use AI for writing.
I think writing is thinking,
and I use writing to think.
And so if something did that for me, I wouldn't be thinking.
And I think this, for me, has just been a lifelong tool.
And so I don't think I've ever used AI to write a single thing. I mean, maybe that's not true.
I don't mean to be too categorical, but I never, never use AI for writing.
So that's the one area that I've really tried to protect.
Really interesting.
I can't help but ask, you know, what some of those historical books that you have enjoyed might be.
Yeah, yeah.
So I've just been into Eisenhower lately.
I've been going through a bunch of Eisenhower books.
What's interesting about Eisenhower,
right? He was a conservative president, and he was a moderate.
But it was also, you know, under his watch that the Warren Court was created.
The Warren Court, you know, I mean, very famously, you know, was the vanguard of the civil rights movement
as far as, you know, overturning, you know, policy and kind of getting rid of a lot of the Jim Crow laws,
et cetera. And so a lot of kind of my questions have been, you know, today, you know, people
criticize the court system, and there's a lot of rhetoric on, like, the courts being this and that,
et cetera, et cetera. And so I've actually been kind of having conversations with Grok,
comparing and contrasting the rhetoric around the Warren Court with the rhetoric around the current
Supreme Court. It's so interesting how similar the criticisms actually are. And so, of course,
it's a different environment in a different era. But for me, I just feel much, much closer
to what's going on now as being part of a historical, like a historical trend than some
total aberration. I like to be part of like the broader narrative. Perhaps starting in COVID or
perhaps you would start earlier even,
it feels very, very clear
that we're living in history in a way
that, you know,
maybe wasn't as obvious, you know,
a couple decades ago or something like that.
What's cool is to actually rewind the clock
and listen to the rhetoric during the Vietnam War,
and listen to the rhetoric during the dot-com boom,
and listen to the rhetoric, you know,
around the Warren Court, and realize,
I don't think it's actually that much different,
you know. We always have this kind of story.
Not so different.
Yeah, we always tell our stories.
like, oh, this is unprecedented times.
We've never done this before and blah, blah, blah, blah, blah, blah.
But like, they said those words back then, too.
They did.
They were like, oh, this is unprecedented.
We've never done this.
It's the end of the nation.
Like, blah, you know?
Really?
For me, anyways, it's nice to realize that this is a continuum.
It's been going on for a long time.
The U.S. is anti-fragile.
It is the best country, you know, on the planet.
You know, like we always have challenges to deal with,
and we do a good job of dealing with them.
Are there parts of the AI world right now that you consider almost a mirage?
You know, something that looks like it could be something, but for some fundamental reason, it's unlikely to last.
You know, I think folks have talked about, you know, prompt engineering, for example, is something that's maybe more of a transitory state of affairs.
And I wonder, yeah, how you, you know, what you might point to that has a similar quality.
So I think that what we're seeing is we're seeing two pretty distinct paths that these AI model companies take.
So in one of those paths, the model just does more and more and more, right?
So you basically have one model that does everything,
like these coding CLI tools, Codex, for example.
So you could be like, I'm going to make it super complicated
and do all this prompt engineering and have all this software,
or I could just expose the model to the user.
And it seems in these situations you just expose the model to the user
and it just does better because the model is smarter than whatever code you're going to write.
And then it's just so hard to kind of interpret what the model's going to say anyway.
So that's one path.
It's like, you know, we're also seeing this in the pixel space, which is, you know,
instead of like having a model for image and a model for 3D and a model for music and a model for characters,
I'm going to have one video model and it does everything.
And oh, by the way, I'm going to make it interactive.
This is Genie 3.
So it just does everything, right?
So there's like one path, which is the God model path.
And the argument for that is the bitter-lesson argument.
That's, you know, you have all the data, you know,
the model is smart, et cetera.
And that's clearly a viable path.
The other path is the composition-of-models path,
which is, let's take the pixel case again.
Actually, I just saw this amazing video
this morning on X,
where somebody made a video and they're like,
I used Midjourney to make the images.
I use World Labs for the 3D scenes.
I use Suno for the music.
You know, and like, it's this composition of different models,
and you look, and they have this just beautiful,
you know, video that was
created. And so the argument for the composition is, if you have an opinion on what comes out,
you'll just have a lot more control, right? If I want fine-grained camera movements and field
of view, I'll need 3D. Maybe I want very specific images and I want, like, consistency across
those, so I'll need an image model. I want the music a certain way, I may want to change it
over time, so I want a separate thing for music, right? And so I honestly believe we're going to see
both of these paths. And I think the biggest mistake is people assume it's going to be one or
the other, right? Everybody says everything's
going to be one model, but the problem with that is
composition is just real. We've got an
existing, you know, set of tools,
existing toolchains that use, like, components
of outputs that you're going to want to use.
And so I think that's a mistake. And the other one is, like,
oh, these big models aren't going to be useful.
Like, you need a collection of small models. Clearly,
that's not true because the bitter lesson
will continue to make these single models
much more powerful.
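The two paths described above can be caricatured in a short sketch. This is purely illustrative pseudo-API in Python; none of these classes correspond to any real model SDK. The "God model" path is a single opaque call, while the composition path wires hypothetical specialized models together so each stage stays independently controllable:

```python
# Illustrative only: hypothetical stand-ins, not any real model SDK.

class GodModel:
    """Path 1: one model does everything (images, 3D, music, interactivity)."""
    def generate(self, prompt: str) -> dict:
        # A single opaque call; control is limited to the prompt itself.
        return {"video": f"<end-to-end video for: {prompt}>"}

class ImageModel:
    """Hypothetical specialist for consistent images."""
    def generate(self, prompt: str) -> str:
        return f"<images: {prompt}>"

class SceneModel:
    """Hypothetical specialist for 3D, enabling fine-grained camera control."""
    def reconstruct(self, images: str) -> str:
        return f"<3D scene from {images}>"

class MusicModel:
    """Hypothetical specialist for the soundtrack, editable over time."""
    def compose(self, mood: str) -> str:
        return f"<soundtrack: {mood}>"

def composed_pipeline(prompt: str, mood: str) -> dict:
    """Path 2: composition of models, with an opinion at every stage."""
    images = ImageModel().generate(prompt)   # swap this model for consistency
    scene = SceneModel().reconstruct(images) # swap this one for camera control
    track = MusicModel().compose(mood)       # change the music independently
    return {"video": (scene, track)}
```

The design trade-off is the one named in the conversation: the single call rides entirely on the model's intelligence, while the composed pipeline exposes a seam at every stage where you can substitute a different specialist or edit one output without regenerating the rest.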
That's really interesting.
You mentioned Codex there,
and you've talked about, you know, using a lot of these tools in your evenings,
which brings me on to Cursor, which I know you're very involved with.
When you were doing the analysis on, you know, AI code generation and thinking about,
okay, who's the leader here?
Was it just blazingly obvious that it was Cursor?
How did that sort of come about?
Yeah, I mean, listen, it depends on what you're doing.
I'm a developer.
We were looking at developer tools.
And the developer tool is the IDE.
Listen, the coding space is enormous, right?
There's repos, there's testing, there's, you know, PR management, you know, et cetera, right?
But in the case of coding, you know, I mean, Copilot had given us a glimmer of how
powerfully AI could be integrated in the development process.
You know, the cursor team executed so exceptionally well.
You know, half of our companies were using it.
And, you know, and they were just, you know, at the time, just very, very focused on building
out the IDE and being the leader.
So just for that bet, that was very clear.
That didn't mean that, like,
we didn't think CLIs were a good bet.
It was just different, right?
That's different.
And at the time, there really weren't as many approaches
that were using, like, the PR as an interface to the developer, right?
Like, using GitHub's interface for the developer.
But, you know, it was also pretty clear that, like,
Cursor's ambition was to change all of code.
It was also very clear that, like, coding was evolving.
And so, you know, from our perspective, you know,
a very, very product-focused team working on tools for developers
that has this kind of broad vision was, you know,
the right bet for a developer tooling for us.
And of course, that, you know, that's worked out quite well.
This is so important, I said it before and I want to say it again:
that doesn't mean that there isn't tons of value
in all of these other areas.
Like, coding models, tons of value. CLI tools,
tons of value.
I mean, this space is enormous.
If you just do rough math, right?
Like, let's say there's 30 million developers.
There's more, but let's say it's 30 million.
Let's say they get on average 100K a year.
I mean, what is that?
So it's like, what, a $3 trillion market or something?
Yeah.
You know, let's say you get 10%.
It's $300 billion.
I mean, we're talking about, like, an infinitely sized market.
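As a sanity check on the rough figures above (these are the speaker's stated assumptions, not measured data), the arithmetic sketches out as:

```python
# Back-of-the-envelope math for the AI coding market.
# Inputs are the rough assumptions from the conversation, not real data.
developers = 30_000_000        # "let's say there's 30 million developers"
avg_cost_per_year = 100_000    # "they get on average 100K a year"

total_market = developers * avg_cost_per_year  # value of developer labor
capture_rate = 0.10                            # "let's say you get 10%"
captured = total_market * capture_rate

print(f"Total market: ${total_market / 1e12:.1f} trillion")  # $3.0 trillion
print(f"At 10% capture: ${captured / 1e9:.0f} billion")      # $300 billion
```

Note that 30 million developers at $100K works out to $3 trillion of addressed labor value, and capturing 10% of it is $300 billion.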
And if you ask me, like,
what is the one area where AI has surprised you?
It's in coding.
Listen, I've been developing my whole life.
And I would never have guessed it'd be this good.
And so you've got an infinitely sized market that AI is very effective at going.
And so I think we're going to see a bunch of super successful companies.
What do you think will sort of dictate the winners and produce real defensibility here?
Because obviously it's given the size of that market, you see lots of interest from large companies and insurgents to sort of take a piece of that space.
So my general rule of thumb is, while markets are accelerating in their growth, they will fragment.
And that's a natural law of physics.
And so everybody worries about defensibility on day zero, which is just dumb, in my opinion.
Like, it doesn't matter until markets slow down or they consolidate, right?
And it literally just kind of falls out of the economics. Like, listen,
if I'm in a company and I've got to spend a dollar, am I going to spend that dollar in an area where I don't have competition or where I do have competition?
Well, of course, you're going to do it where you don't,
where you're the leader anyways.
That's why we've seen basically fragmentation
in most of these domains.
We've got companies growing in most of these domains.
And so I think when it comes to code
long term, what keeps them defensible?
So here's my current view is
I don't think there's any inherent defensibility in AI.
I don't think that exists.
I think that AI overcomes the bootstrap problem,
so it kind of solves your customer acquisition problem
because it's so magic.
And that won't be the case forever.
But right now it's like somebody invented cold fusion
and people show up for electricity, right?
So, like, it solves your customer acquisition problem.
But from a defensibility standpoint, you have to go to traditional moats, right?
You have to, like, you know, we know how to do moats, you know, whether that's a two-sided marketplace.
It's an integration moat.
It's a workflow moat.
Like, whatever it is, you still, as a company, have to build that.
You know, the good news relative to incumbents, last point on this is when you have new behaviors,
incumbents have a tough time executing.
And we clearly have new behavior here, right?
It's like, it's an individual behavior, it's a new relationship.
There's actually often an emotional component, too, like, you know, like the shift between
GPT-4 and 5, we saw that.
And so I think new behaviors advantage challengers, I think we're seeing this play out.
And so I worry much less about the incumbents.
I think, you know, if you're a founder listening to this and you're doing an AI company,
priority zero is finding that white space, not worrying about defensibility, in my opinion, you know.
And then once you find that white space, you know, rely on
traditional moats to protect it when the market slows down.
There's another of your companies that I have been so interested in from, you know, only from
the outside that I'd love to hear, you know, your story with them and for folks that maybe
haven't come across them yet for them to understand what they're building.
And this is World Labs, which, I don't know, reliably any time you go on X, there is now
something really interesting, some sort of 3D model that, you know, World Labs is responsible for
that is quite...
It's magic. It's the most magical thing.
Yeah, mesmerizing. So how did that come about?
Yeah, this even goes back to my experience
writing 3D engines for video games. I mean,
it's just a particular interest.
So listen, World Labs was created by
like the true pioneers
in 3D. It was Fei-Fei,
who did ImageNet, you know,
super famous Fei-Fei. It's Ben Mildenhall,
who created NeRF, the Neural
Radiance Field. It's Christoph
Lassner, who was doing Gaussian Splats
before they were cool. It's Justin
Johnson, who was the style transfer guy.
I mean, like, they've just got the most epic team.
The easiest way to articulate what they're doing is they want to take a, you know,
a 2D representation, like an image and create a 3D representation from it,
like a scene or a world.
And it's a very, very tough problem because if you just have an image, you can't see everything.
You can't see in the back of the table.
You can't see behind you, et cetera.
So there's a ton of generative components to it.
And it's also a tough problem because
the way that you train models is with data
and there just isn't a lot of 3D data.
So it's kind of this unsolved problem,
but it turns out to be a very horizontal problem, right?
So, for example, why would you need a 3D scene?
Well, you could use it because you wanted to create
a pure virtual environment that you want to interact with,
right?
You want to place a character in it, you want to change the angle of it,
you want to augment it, you want to step into it like VR, right?
You know, you could also use it for any sort of kind of design, you know, like architecture.
You can use it for AR, right?
Like, actually, I just saw this great thing.
This guy named Ian Curtis did this cool thing where, like, he had a 3D representation of his living room in his Oculus.
And then he was, like, overlaying, like, changes on it
that he made with World Labs.
And so he could, like, switch between, like, the World Labs recreation and the real room.
So he could, like, change furniture and change things
like that. And then actually, ultimately, this is very relevant in robotics, right? So the problem
with just 2D video is you actually don't have depth. You don't have, you can't see behind things,
right? And so you need to create some 3D representation if you want like a traditional program,
let's say, like a robotics brain to decide things. Like, you know, how far away is this? What
might it look like on the other side? How do I plan around these things? So the more 3D
representation you can create,
kind of the smarter like an embodied AI would be.
And so they're really trying to tackle this kind of holy grail of problems of
I just have one view of the world that's 2D,
which is what our brain does.
And then how do I kind of recreate that in 3D so that I can kind of process it?
The robotics piece, you know,
is the piece that I think is so interesting.
Obviously, you know, the VR applications feel very obvious in a way.
But obviously, on the other hand,
still not a market that is
massive at this point, but the
robotics piece feels like that could be
I don't know, truly
a game changer when you're combining some of these
other developments we've seen
in that industry over the past couple years.
It really feels like it's addressing one of the
major
sort of limiting factors.
Well, let's go back to the market size
first. Just because I feel this is
a mistake we keep making in the AI
world, which is nobody would have said
2D images is a market.
Nobody, right? There's a whole class of companies in the past, like, you know,
they were small acquisitions, they were never really profitable,
that were trying to, like, you know, build 2D images. And yet now, you know, we have companies
like Midjourney, which famously was bootstrapped to hundreds of millions of ARR, we've got BFL,
we've got Ideogram, very successful companies, et cetera. And so I think in general, when you bring
the marginal cost of creation to zero, the market size explodes, right? Marginal cost of image
creation, of video creation, of music creation, et cetera.
So, and again, I know that this wasn't the point of your question, but I think it's
very important to touch on is, if you bring the marginal cost of 3D content creation
to zero, I think that that market is infinitely large.
I mean, one of the reasons VR sucks is because there's no content.
And like, I mean, I don't know, like, you know, I've got a Quest 3.
I love it.
But like, I go, I spend 24 bucks and I get like the stupidest, like, little thing.
And so I would say a lot of metaverse, and I hate
the term, but, I mean, just to get us all on the same page, VR, online gaming, et cetera,
is really gated on content. It is so hard to build 3D scenes. It is so expensive.
And so I think that markets that weren't markets before can become markets.
That said, I agree with you that long term, if we're going to have embodied AGI,
I'm not an AGI guy, so I'm going to say, if we're going to have
embodied AI, embodied AI that looks at the world and then creates a representation of that
world and decides how to interact with that world, somewhere, somehow, you're going to need to
recreate that world in 3D, right? You can't do it with language, right? Like the description I like
to say is like, let's say I put you, I blindfold you and I put you in a room, the lights are off
and I try to describe the room so you can navigate it or like, or do any task. Like the words are
just not going to be accurate enough. I like to, you know, I'll be like, there's a cup in front
of you. It's about three feet. You know, like that won't work. On the other hand, if, you know, I give you a
camera and then you can kind of recreate the 3D and your position in that 3D, of course,
you can now navigate the room. And so there's something very fundamental to this solution space
for embodied, embodied AI. You said a few things there that were super interesting to me.
One of them, I do want to dig into the VR piece. It's true that there's not enough content,
but isn't the real constraint there the hardware? Like, functionally, there's more
than enough content for us to live almost infinite lives in it.
But until it actually feels sufficiently high fidelity, it's just not enjoyable enough, right?
Maybe for some people.
I mean, listen, I love VR.
Every time a new VR thing comes out, I buy it.
And my problem is, unlike a video game, which is deeply immersive and, you know,
you've got a ton of content, like I walk a plank and then I'm done, right?
I shoot a zombie and then I'm done.
I just feel like you don't
have enough immersive content.
And so there's probably somewhere in the middle.
If you look at a lot of online purely virtual experiences,
the gating factor is like,
how do you build these very, very large worlds?
It takes years.
It takes teams of people years to build kind of these levels
and these worlds and these 3D environments.
And what's very interesting, I think this is such an important point.
It's very interesting.
I work very close with World Labs.
I go in on Wednesdays.
I work with a team.
I write code.
Like, you know, I mean, it's all silly, you know, like, I'm like, like a beta user, right?
Like, I kind of, you know, do some kind of silly things, but I'm very, very close.
And they work with a lot of artists.
And these are traditional true artists that have backgrounds in 3D.
And they make these beautiful worlds and whatever.
They spend a ton of time on it.
They'll spend tens of hours making it.
And so what you end up with is a very detailed, very rich virtual world
that would have taken maybe a
year, you know, if you had a team of humans. Now, like, one person can do it with
less time, but it still requires a ton of craft and a ton of work from an artist. And so I think
that, you know, technology like this is going to increase the amount of virtual scenes and worlds
that are there for us to kind of view and explore. And I think as a result, any market that
requires these is just going to grow because, you know, you can produce more, better quality,
and faster.
Really interesting.
And then you said,
you're not an AGI guy.
Tell us why.
I think at the very foundations,
I don't think we have figured out
how the human brain works.
And I don't think,
you know,
I think maybe a language model
or something is a small subset of it.
But I tend to agree with Yann LeCun,
which is, you know,
we'll get to AGI at some point in time,
and we keep chipping off
pieces of it, but there isn't a straight path from where we are now.
It's not like you just add compute and data to the existing models and then we have
AGI.
I think that we just keep chipping off pieces of the problem.
And so for me, using AGI as some goal or measuring stick or destination, all it does
is encourage very sloppy thinking,
because it ends up becoming the place
that you put all of your expectations
and all of your fears.
And right now it's not even a real place.
And so I really try and force people
not to use the term AGI,
not to think in terms of it,
because it's very hard to have a conversation
because it's such a holding place
for magic and magic fears.
And so I like to talk about
concrete problems, solutions,
products, technologies,
technology trends, technology directions.
And then, hey, maybe at some point in time,
we will know the architecture that will provide
human level intelligence with all the flexibility
that can learn just as fast, et cetera.
And then we can start talking about AGI.
But until that time, it just erodes conversational quality.
It does not enhance it.
I fully agree.
It feels like it, yeah, it obscures meaning
much more than it reveals anything.
Yeah, it just doesn't help in a conversation, right?
It really encourages lazy thinking.
It quickly becomes almost entirely semantic where you're like,
well, actually, what do you mean by AGI?
Oh, well, this is what I mean.
Okay, well, then, you know, this is how we sort of, you know.
And also, it becomes a universal justification without having to actually have a justification.
Well, why is the marginal risk for AI greater than for traditional computer systems?
Oh, AGI.
That doesn't mean anything.
It's not a statement, right?
Why is this going to put people out of jobs?
Oh, AGI.
That doesn't mean anything. Now, both of these are great questions. The labor question is an important question.
The marginal risk question is an important question. We should have those discussions. We should have
those discussions not in terms of AGI, because that's not a thing. We should do it in terms of, like,
what's actually happening now. And in my experience, every time you say AGI, this is what people
use to justify whatever their fear is, whatever their concern is, or whatever their most optimistic hope is.
And the problem, when you dig into it, is, like, this kind of belief
that there's this magic thing that will provide it.
And so, for me, it's conversational and discourse quality.
That's the problem with the term AGI, not the fact that, like, someday we will have computers as smart
as us.
Of course we will.
But right now that's not helpful.
You mentioned that, you know, compute and data is not going to be enough for us to sort of have a straight shot to, you know, AGI, whatever we might call it, let's say a brain equivalent in every way to a human.
How does that impact how you think about the progression of this from an investment perspective?
Do you expect continuous large leaps in the capability of these models over the next few years?
Do you think we should sort of expect maybe more incremental improvements from now on?
I think we're part of the long march of technology to solving all problems.
And even if we stopped AI research right now, there's been enough that's been unlocked to create a tremendous amount
of value, and there's going to be new things that are unlocked. And so I just view this as the same
continuum that we were on 10 years ago and 20 years ago and 30 years ago. And, you know, we're going
to continue to have to unlock new things. And, you know, I just feel, because these things are so
startlingly impressive, that sometimes we kind of don't view this as part of a continuum
that has to keep going. Like we've already solved it, and now we just have to sit back and wait for it to
happen. I don't believe that.
I believe, like, listen, the way that I view investing now is the same as I did five years ago and 10 years ago.
And we need to have more improvements.
But what I do acknowledge is that we've unlocked a ton.
And so now is a great time to productize and to turn what's been done into real businesses.
You've talked before, I think maybe tweeted about the fact that a lot of U.S. companies end up using Chinese open source models.
Do you think that there is maybe more awareness
of why that might not be the best thing
and that that is primed to change
or is it something that you're currently quite worried about?
No, I think it's something we should all be concerned with.
You know, it's kind of funny.
This is the reason why I got so involved
in the political discussion,
which I'll never do again,
just because it's such a terrible space to be in.
But, you know, you had VCs,
who should know better and who should be pro-innovation, talking against open source,
and academia was entirely silent. And so it's like the United States just decided that it wasn't
going to invest in the number one thing for proliferating technology the way that we see it.
And I think largely because of that, like the proliferation of open source has been pretty
muted in the United States. And I do think that, you know, China really answered the call.
They've done a phenomenal job. I would say many of the best, you know, AI teams are in China.
Their models are many of the best models. And they're being used all over the place.
And so I think in some ways, you know, we had the wrong approach as a nation and as an industry.
Now, that is being rectified. I think that's being understood. But I think now we have a lot of
catch-up to do. I think that, you know, our models aren't the best. And, you know, honestly, a lot of it
just comes down to policy questions, right?
Like, there's a lot of risk to release something open source if somebody else is going
to try to find something, you know, copyrighted in it and then sue you for it, right?
There's a lot, I mean, there's a lot of spurious litigation around these things.
And then we have these policy proposals that would be disastrous.
Like, you know, SB 1047 from Scott Wiener, I mean, part of that was actually developer
liability, right? So that means that if somebody uses this in a way that caused a
mass casualty event, which like, let's say a car crash, right, you can sue the developers.
And so I think from the United States standpoint, we've not done what we've done in the
past. We've used the precautionary principle. We've changed the way that we've historically approached technology
from a policy standpoint. And we've done it in a way that slowed down innovation.
And as a result, we're on our back foot.
And being on our back foot with respect to China on technology, I don't think is in the national interest.
And so I'm, listen, I've been very encouraged what the current administration has done with regards to AI.
I think their kind of policy recommendations have been fantastic.
And so I am cautiously optimistic that things are changing, but we're not there.
We've got a lot of work to do.
Amazing.
Well, a few wrap-up questions for you as we move into our sort of final few minutes here. I always like to ask a few
philosophical ones. One for you is if you had unlimited resources and no operational constraints,
what is an experiment you would like to run? Do I have ethical constraints? I'd say no. I'd say
for the, you know, no people were harmed in the making of this thought experiment.
Yeah, 100% nature versus nurture. I would, like, go to space. I would clone a whole bunch of people.
I would, like, have a whole bunch of controls. I would, like, play out their lives. Can I live forever, too?
Sure. Yeah. Why not? No time constraint. Okay. So no ethical constraints. No
time constraints, unlimited
resources, yeah, 100% nature versus nurture.
Yeah. And you can imagine
how I do it, right? I'd just clone.
I'd have a whole bunch of people. I'd clone a whole bunch of people
and I'd minorly tweak these things and I'd let
them live out their entire lives. I'd simulate
entire worlds for them. And I'd answer the question
what is innate and what is not.
And then ultimately, that'll be the question on free will
too, right? Yes, that's right.
You'd probably have a few
things that fall out of that. So that would be
quite efficient. You know, the title of,
like, you know, the one, like, you know,
after, like, 300 years of doing this
experiment, the title of the report will be
What Does It Mean to Be Human?
There you go. Excellent.
That's a great answer.
What do you think is a tradition or practice
from either another culture or time period
that you think we should adopt more widely today?
Oh, fuck. Siestas.
Easy.
That's a layup.
This is your Spanish
heritage.
I assume.
Yeah, and unfortunately these days, it seems it's only southern Spain.
I come from the most backwards part of Spain, right?
From where I'm from in Spain, like, you know, the siesta is a God-given right.
I think everybody should take a nap.
There you go.
Agreed.
Final question.
If you had the power to assign a book to everyone on Earth to read and understand,
what would you want to put on their reading list?
The WEIRDest People in the World.
Hmm.
That's a good one.
You know, David Deutsch's The Beginning of Infinity, of course.
Taleb's Statistical Consequences of Fat Tails.
Hmm. I've never even heard of that from him.
It's, I mean, it's, you know, it's a statistical,
um, yeah, The Statistical Consequences of Fat Tails. Um, and then Hamming's The Art of Doing Science and Engineering.
Huh. Could you tell me a little bit about The WEIRDest People in the World, then, as the final
one? I mean, I know The WEIRDest People in the World, but, uh, there's a, you know,
there's a sort of, I don't know how to describe it, a bit of wordplay in that title that reveals
something about what it's really about?
Yeah, yeah.
I mean, it basically says,
the Protestant Revolution has changed the way
that we associate with ourselves
and with each other.
And it basically,
we used to be very, very tribal.
And so that kind of has certain impacts
on, like, trust.
And the Protestant Revolution
kind of forced nuclear families
and forced separation,
and that required us to be prosocial.
And then it also has a second thesis
on how free markets
also produce pro-social behavior.
The reason that I would include it there is, listen, I think for humanity, if you just take the long arc here, because we're being philosophical, like, you know, the ultimate enemy is entropy. It never goes away. I don't think any single tribe solves that. I think you need pro-social behavior to actually do planetary-level innovation, and understanding how we work around trust and coordination and cooperation is very critical. So listen, I don't think it's the ultimate book, but I think it's great at that. By the way,
one more book I'd add is The End of History and the Last Man.
Is that what it is?
I don't know.
I don't think I've heard of that.
Yeah, Fukuyama.
Yeah, that's a great book.
Of course, the end of history, yes.
Yeah, the end of history.
I've never read that, but I've, you know, I...
It's phenomenal.
It was interesting because, historically, he's actually recanted on that.
But it's like this Hegelian view of humans.
And, like, his conclusion is, like, liberal democracy is the end of history.
And I think that's being questioned right now.
but he does such a great job of taking the Hegelian view that there is this dialectic,
there is this evolution of humans, we are continuing to get better.
And then, listen, he thought maybe we'd have arrived.
I think the conclusion now is that we haven't arrived, but I love this idea that we as a species
are improving how we interact, how we have policies, how we socialize.
And so all of these kind of have this general theme of, listen, we as a species are going
to continue to solve problems, we're going to continue to have to work together,
that we're going to continue to have to cooperate.
And ultimately, listen, it'll be us versus entropy.
No better place than that to end.
Thank you so much, Martin.
I really, really enjoyed this.
That was a lot of fun.
Thanks so much.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment,
subscribe, leave us a rating or review,
and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at A16Z.
and subscribe to our substack at A16Z.substack.com.
Thanks again for listening, and I'll see you in the next episode.
This information is for educational purposes only
and is not a recommendation to buy, hold, or sell any investment or financial product.
This podcast has been produced by a third party and may include paid promotional
advertisements, other company references, and individuals unaffiliated with A16Z.
Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC,
A16Z or any of its affiliates.
Information is from sources deemed reliable
on the date of publication,
but A16Z does not guarantee its accuracy.
