a16z Podcast - Beyond Chatbots: Marc Andreessen and Ben Horowitz on AI's Future
Episode Date: October 31, 2025

In this closing keynote from a16z's Runtime conference, General Partner Erik Torenberg speaks with the firm's cofounders, Marc Andreessen and Ben Horowitz, on highlights from throughout the conference, the current state of LLM capabilities, and why, despite huge capex, AI is not a bubble.

Resources:
Follow Marc on X: https://x.com/pmarca
Follow Ben on X: https://x.com/bhorowitz
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
I think we don't yet know the shape and form of the ultimate products.
One obvious historical analogy: the personal computer,
from its invention in 1975 through to, you know, basically 1992, was a text-prompt system.
Seventeen years in, you know, the whole industry took a left turn into GUIs and never looked back.
And then, by the way, five years after that, the industry took a left turn into web browsers and never looked back, right?
And, you know, look, I'm sure there will be chatbots 20 years from now,
but I'm pretty confident that both the current chatbot companies and many new companies
are going to figure out many kinds of user experiences
that are radically different
that we don't even know yet.
Every major technology shift brings new capabilities,
new pressures, and new questions about how progress unfolds.
At a16z's Runtime conference,
I sat down with Marc Andreessen and Ben Horowitz
to discuss the current state of AI,
how reasoning and creativity are evolving,
how markets adjust to new technology,
and what this moment means for founders and institutions
shaping what comes next.
Now, to Marc and Ben.
Please join me in welcoming Marc Andreessen and Ben Horowitz with General Partner Erik Torenberg.
Follow me into a solo. Get in the flow. And you can picture like a photo.
Music mixed mellow maintains to make melodies for MCs, motivates the breaks.
Some everlasting.
Thank you for the Rakim.
Ben picked the music.
Marc, there's been a lot of talk lately about the limitations of LLMs, that they can't do true invention of, say, new science, that they can't
do true creative genius.
That it's just combining or packaging.
You have thoughts here?
Yeah, so for me, you get all these questions,
and they usually come in one of two forms. Either: are language models intelligent, in the sense
of can they actually process information and have sort of conceptual breakthroughs the way
that people can?
And then there's: are language models or video models creative?
Can they create new art,
actually have genuine creative breakthroughs?
And of course, my answer to both of those is, well, can people do those things?
And I think there's two questions there. Okay, even if some people are, quote-unquote,
intelligent, as in having original conceptual breakthroughs and not just, let's say,
regurgitating the training set or following scripts, what percentage of people can actually do that?
I've only met a few. Some of them are here in the room, but not that many. Most people never do. And
then creativity. I mean, how many people are actually genuinely creative, right? And so you kind of point to
a Beethoven or a van Gogh or something like that, and you're like, okay, that's creativity. And yeah, that's
creativity. And then how many Beethovens and van Goghs are there? Obviously not very many. So one is just,
like, okay, if these things clear the bar of 99.99% of humanity, then that's pretty
interesting just in and of itself. But then you dig into it further and you're like, okay,
like, how many actual real conceptual breakthroughs have there ever been, actually ever, in human
history, as compared to sort of remixing ideas? If you look at the history of technology,
it's almost always the case that the big breakthroughs are the result of usually at least 40 years
of sort of work ahead of time, four decades. In fact, language models themselves are the
culmination of eight decades, right, of previous work. And so there's remixing. And then in the arts,
it's the exact same thing, you know, novels and music and everything. There are clearly creative
leaps, but there's just tremendous amounts of influence from people who came before. And even if you
think about somebody with the creativity of Beethoven, like, there was a lot of Mozart and
Haydn in Beethoven, a lot of the composers that came before. And so there's just tremendous amounts of remixing
and combination. And so it's a little bit of an angels-dancing-on-the-head-of-a-pin question,
which is, like, if you can get within, you know, 0.01% of kind of world-beating generational
creativity and intelligence, you're probably all the way there. So emotionally, I want to
like hold out hope that there is still something special about human creativity. And
I certainly believe that, and I very much want to believe that.
But I don't know.
When I use these things, I'm like, wow, they seem to be awfully smart and awfully creative.
So I'm pretty convinced that they're going to clear the bar.
Yeah.
I think that seems to be a common theme in your analysis when people talk about the limitations of LLMs.
Can they do transfer learning?
Just learning in general.
You seem to ask, can people do these things?
Well, it's like lateral thinking, right?
So, yeah, so it's like reasoning in or out of distribution, right?
And so it's okay.
I know a lot of people who are very good at reasoning inside distribution.
How many people do I actually know who are good at reasoning outside of distribution
and doing transfer learning?
And the answer is like, I know a handful.
I know a few people, where whenever you ask them a question,
you get an extremely original answer.
And usually that answer involves bringing in some idea
from some adjacent space and basically being able to bridge domains.
And so you'll ask them a question about, I don't know, finance,
and they'll bring you an answer from psychology.
Or you ask them a question about psychology,
and they'll bring you an answer from biology, right, or whatever it is.
And so I know, I don't know, sitting here today, probably three.
I probably know three people who can do that reliably.
I've got 10,000 in my address book.
And so three out of 10,000 is not that high a percentage.
By the way, I find this very encouraging.
Yeah, immediately the mood in the room has gone completely to hell.
I find this very encouraging because look at what humanity has been able to build, right, despite
all of our limitations, right?
And look at all the creativity that we've been able to exhibit and all the amazing art and all the amazing movies and all the amazing novels and all the amazing technical inventions and scientific breakthroughs.
And so we've been able to do everything we've been able to do with the limitations that we have.
And so I think: do you need to get to the point where you are 100% positive that it's actually doing, you know, original thinking?
I don't think so.
I think it would be great if you did, and I think ultimately we'll probably conclude that that's what's happening.
But it's not even necessary for just tremendous amounts of improvement.
Ben, we were just celebrating some hip-hop legends at your Paid in Full event last week.
And so you think a lot about creative genius.
How do you think about this question?
Yeah, I mean, I think I agree with Marc that, whatever it is, it's very useful, even if it isn't all the way at that level.
I think that there's something about the actual, like, real-time human experience that
humans are very into, at least in art, where, you know, with the current state of the technology,
kind of the pre-training doesn't have quite the right data to get to what you really want to see.
But, you know, it's pretty good.
One of Ben's nonprofit activities is something called the Paid in Full Foundation,
which is honoring, and actually providing essentially a pension for, sort of the great innovators in rap and hip-hop.
And so he knows many of them. We were just at the event and saw, you know,
many of the kind of leading lights of that field
from the last 50 years perform,
and it's really fun to meet them and talk to them.
But, like, how many people in that entire field
over the course of the last 50 years
would you classify as, like, a true conceptual innovator?
Yeah, well, you know, it's interesting.
It depends how broadly you define it,
but there were several of them there on Saturday.
So, Rakim.
I think, yeah, Rakim, you'd certainly put in that category.
Dr. Dre, you'd certainly put in that category.
George Clinton, you'd certainly put in that category
in a narrower sense, like,
G-Rap certainly had a new idea.
But, you know, it depends.
Like a fundamental kind of musical breakthrough,
you'd probably just say Rakim and George Clinton.
So two out of...
Well, I mean, those were the guys who were there.
Oh, yeah, yeah, yeah.
Yeah, but yeah, it's a tiny percentage.
Tiny, tiny, tiny, tiny, tiny, tiny, tiny.
We had the fireside last night with Jared Leto.
He was talking about how many people in Hollywood
are really scared or against what's happening here.
What do you see?
When you talk to the Dr. Dres, the Nases, the Kanyes, are they excited?
Are they using it?
So, everybody who I speak to... there are definitely people who are scared in music,
but there are a lot of people who are very, very interested in it.
And particularly the hip-hop guys are interested because it's almost like a replay of what they did, right?
They just took other music and they kind of built new music out of it.
And I think that AI is a fantastic creative tool for them.
It, like, way opens up the palette.
And then for a lot of what hip-hop is,
is it's kind of telling a very specific story
of a specific time and place,
which having intimate knowledge
and being trained just on that thing
is actually an advantage
as opposed to being like a generally smart music model.
At the same time, people also use the same logic of,
hey, whatever is more intelligent
will rule whatever is less intelligent.
And Marc, you recently...
Not said by anybody who owns a cat.
Yeah, exactly.
Marc, you recently tweeted:
a supreme shape rotator
can only rotate shapes,
but a supreme wordcel
can rotate shape rotators.
And also,
someone's clapping,
and also:
high-IQ experts
work for mid-IQ generalists.
What does that mean?
Yeah, so the PhDs
all work for MBAs,
right? So, okay.
So, yeah.
Well, I'll just take it up a level.
It's just, like, when you look at the world today,
do you think we're being ruled
by the smart ones, right?
Is that your big conclusion
from, like, current events, current affairs, right?
Okay, we put the geniuses in charge.
You mean Kamala and Trump aren't the best?
Well, it doesn't even need to be specific to the U.S.
Let's just look all over the world.
Yeah, and so I think two things are true.
One is we probably all kind of underrate the importance of intelligence.
And actually, there's a whole kind of backstory here: intelligence
actually turns out to be this, like, incredibly inflammatory topic for lots
of reasons over the last hundred years, which we could talk about in great detail.
And even the very idea that, like, some people are smarter than other people,
like, really freaks people out, and people don't like to talk about it.
We really struggle with that as a society.
And then it is true that, in humans, intelligence is correlated to almost
every kind of positive life outcome, right?
And so, generally in the social sciences, what they'll tell you is that what they call
fluid intelligence, the g factor, or IQ is roughly 0.4 correlated to basically
everything.
And so it has a 0.4 correlation to, like, educational outcomes and professional outcomes
and income, and, by the way, also, like, life satisfaction, and, by the way, nonviolence,
being able to solve problems without physical violence, and so forth.
And so, like, on the one hand, like, we probably all underrate intelligence.
On the other hand, the people who are in the fields that involve intelligence, probably overrate intelligence.
And you might even coin a term, like maybe intelligence supremacist or something like that, where it's just like, oh, intelligence is very important.
And so therefore, maybe it's like the most important thing or the only thing.
But then you look at reality and you're like, okay, that's clearly not the case.
Yeah, it's still only 0.4, right?
Well, so to start with, it's only 0.4.
And, you know, in the social sciences, 0.4 is a giant correlation factor, right?
Like, most things you can correlate,
whether it's, you know, genes or observed behavior
or whatever, anything in the social sciences,
the correlations are much smaller than that.
So 0.4 is huge, but it's still only 0.4.
So even if you're, like, a full-on genetic determinist
and you're just like, you know, genetic IQ
just drives all these outcomes,
it still doesn't explain, you know, the other 0.6 of the correlation.
And so that leaves a lot.
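As a purely editorial aside on the 0.4 figure being discussed: in standard statistical usage, a Pearson correlation r implies that a simple linear model on that predictor explains r squared of the variance in the outcome, which makes the "unexplained" share even larger than subtracting from one suggests. A quick sketch (the helper name is made up for illustration):

```python
# Editorial illustration, not from the conversation: what a 0.4
# correlation implies about variance explained in a linear model.

def variance_explained(r: float) -> float:
    """Share of outcome variance explained by a predictor correlated at r."""
    return r ** 2

r = 0.4
print(f"r = {r}: explains {variance_explained(r):.0%} of variance, "
      f"leaves {1 - variance_explained(r):.0%} unexplained")
# prints: r = 0.4: explains 16% of variance, leaves 84% unexplained
```

So even a correlation that counts as "giant" by social-science standards leaves most individual variation to other factors, which is consistent with the broader point being made here.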
But that's just on the individual level.
Then you just look at the collective level.
It's like a famous, famous observation:
you take any group of people,
you put them in a mob, and the mob is dumber, right,
than the average. And you put a bunch of smart people in a mob,
and they definitely turn dumber,
and you see that all the time, right?
And so you put people in groups and they behave very differently,
and then you create these, you know, questions around, like,
who's in charge, whether who's in charge at a company
or who's in charge of a country.
And, like, whatever the filtration process is,
it's clearly not, it's certainly not only on IQ,
and it may not even be primarily on IQ.
And so there's just this assumption
that you kind of hear in some of the AI circles,
which is, like, inevitably the smart thing
is going to govern the dumb thing.
Like, I just think that's
very easily and obviously falsified.
Like intelligence isn't sufficient.
And then, you know, we're all in this room lucky enough
to know a lot of smart people,
and you just kind of observe smart people.
And, like, some smart people, you know,
really figure out how to have their stuff together
and become very successful.
And a lot of smart people never do.
And so there obviously are, and in fact must be,
many other factors that have to do with success,
and have to do with, like, who's in charge,
than just raw intelligence.
It begs the follow-up question of
what are some examples of what that might be,
skills sort of outside of intelligence,
and more specifically, why couldn't
AI systems learn them?
Yeah, so, Ben, what, other than intelligence,
in your experience determines, for example,
success in leadership, or in entrepreneurship, or in
solving complex problems, or organizing people?
Yeah, well, there are many things.
You know, like, a lot of it is being able to have a confrontation in the correct way.
And there's some intelligence in that,
but a lot of it is just really understanding who you're talking to,
you know, being able to interpret everything about how they're thinking about it,
and just kind of generally seeing decisions through the eyes of the people working
in the company, not through your eyes.
That's a skill that you develop
by talking to people all the time,
understanding what they're saying,
and so forth, these kinds of things.
And it's just, you know,
it's certainly not an IQ thing.
But, like, I could imagine
an AI training on any individual
and, like, figuring it all out
and knowing what to say and so forth.
But then you also need that integrated
with, you know,
like whatever the business ought to be doing.
So you're not trying to do what's popular.
You're trying to get people to do what's correct,
even if they don't like it.
And, you know, that's a lot of management.
So it's not a problem anybody's working on.
Yeah, not currently, but maybe they will.
Some combination of, like, courage, some combination of motivation,
some combination of emotional understanding, theory of mind.
Yeah, what do people want,
like, you know, married to, you know, what needs to be done, and then, like, how talented
are they? Like, which ones can you afford? Like, if they jump out the window, it's fine,
you know, which one's not fine, you know, this kind of thing. There's a lot of, like,
weird subtleties to it. And it's very situational. I think the hardest thing about it
and why management books are so bad is because it's situational. You know, like your
company, your product, your people, your org chart is very, very different than, you know,
here are the five steps to building a strategy. It's like, well, that's the most useless
fucking thing I ever read, because it has nothing to do with you. So one of the interesting
things on this: the concept of theory of mind is really important, right? Theory
of mind is: can you, in your head, model what's happening in another person's head? And you would
think that, you know, maybe obviously people who are smarter should be better at that.
It turns out that may not be true. And the reason to believe
that's not true is as follows.
So the U.S. military was the early adopter,
and has continued to be sort of the leading adopter
in U.S. society, of actual IQ testing,
and they basically launder it through something
called the ASVAB, the Armed Services Vocational Aptitude Battery.
But it's basically an IQ test.
And so they still use basically explicit IQ tests,
and they slot people into different specialties
and roles, you know, in part according to IQ,
including into leadership roles.
And so they know what everybody's IQ is
and they kind of organize around that.
And one of the things that they found over the years
is if the leader is more than one standard
deviation of IQ away from the followers,
it's a real problem.
And that's true in both directions, right?
If the leader is not smart enough,
you know, for somebody who is less smart
to model the mental behavior
of somebody who's more smart
is, of course, inherently very challenging
and maybe impossible.
But it turns out the reverse is also true,
which is, if the leader is two standard deviations
above the norm of the organization that he's running,
he also loses theory of mind.
Right? It's actually very hard for very smart people
to model the internal thought processes
of even moderately smart people.
And so there's actually a real need to have a level of connection there
that's not just raw IQ. And therefore, by inference,
if you had a person or a machine that had, you know,
a thousand IQ or something like that,
it would be so alien, its understanding of reality would be so alien, to the people or the things it was managing
that it wouldn't even be able to connect in any sort of realistic way.
So again, this is a very good argument that, yeah, the world is going to be far from organized by IQ for centuries to come.
Yeah, and Zuckerberg had a great line, which is: intelligence is not life.
And life has a lot of dimensionality to it that is independent of intelligence. I think that, you know, if you spend all your time working on intelligence,
you lose track of that.
We sometimes say about some specific people
that they're too smart to properly model others,
or they sort of assume
too much rationality in other people,
or they just overthink things
or over-rationalize them.
Just to your point that it's not everything.
Yeah. People often... people seldom do what's in their best interest,
I should say. You know, I also suspect,
and this kind of gets more into the biology side of things,
you know, there's more and more scientific evidence
that basically, like, human cognition, or human, I don't know,
whatever you want to call it, self-awareness, information processing, decision-making,
sort of experience, is not purely a brain phenomenon.
Like, basically, the sort of mind-body dualism is just not correct.
And again, this is an argument against sort of IQ supremacism or intelligence supremacism:
human beings don't experience existence just through rational thought,
and specifically not through just the rational thought of the brain; rather, it's a
whole-body experience, right? And there's aspects of our nervous system, and there's aspects of
everything from our gut biome to, you know, smells, to the olfactory
senses, and, you know, hormones, and, like, all kinds of biochemical aspects
of life. If you just track the research, I suspect we're going to find that human
cognition is a full-body experience much, much more than people thought. And so, therefore,
you know, this is one of the kind of big
fundamental challenges in the AI field right now: the form of AI that we have
working is the fully mind-body-dual version of it, which is, it's just, you know,
a disembodied brain. You know, the robotics revolution for sure is coming. When that
happens, when we put AI in physical objects that move around the world, you're going to
be able to get closer to having that kind of, you know, integrated intellectual, physical
experience. You're going to have sensors, and the robots are going to be able to,
you know, gather a lot more real-world data. And so maybe you can start to actually
think about synthesizing, you know, a more advanced model of cognition. And, you know,
maybe we're going to actually discover more, both about how the human version of that works
and also how the machine version of that works. But to me, at least reading the
research, all those ideas feel very nascent, and we have a lot of work to do to try
to figure that out.
Do you have a sense for how good LLMs are at theory of mind today?
Or do you have a sense of where the limitations are? You, like, talk to them a lot.
Are there any particular things that are particularly surprising to you as you do?
Yeah, I would say generally they're really good. And so, like,
I find one of the more fascinating ways
to work with language models
is actually to have them create personas
and then basically have...
Well, actually, so, I like basically...
I like Socratic dialogues.
I like when things are argued out,
like in a Socratic dialogue.
And so, you know, tell
any advanced LLM today
to create a Socratic dialogue,
and it'll either make up the personas
or you can tell it who they are.
It does a good job.
It has this very, very annoying property
which is it wants everybody to be happy.
And so it wants all of its personas to agree.
And so by default,
it will have a...
it will have a briefly interesting discussion,
and then it will sort of figure out, you know, basically,
like you're watching, I don't know, PBS special or something,
it'll kind of figure out how to bring everybody in agreement
and everybody's happy at the end of the discussion.
And, of course, I fucking hate that.
Like, it drives me nuts.
I don't want that.
So instead, I tell it, I'm like, make the conversation more tense, right?
And, like, fraught with, like, anger and, like, you know, people, you know,
they're going to get, like, increasingly upset throughout the conversation.
And then it starts to get really interesting.
And then I tell it, you know,
introduce a lot more cursing.
You know, really have them go at it.
Like, all the gloves come off.
They're going for full, you know, reputational destruction of each other.
You do a lot of these skits.
Yeah, skits.
And then I get carried away.
And then I'm like, it turns out they're all like secret ninjas.
And then they'll start fighting.
And you've got Einstein, you know,
hitting Niels Bohr with nunchucks.
And by the way, it's happy to do that too.
So you do have to, you have to control yourself.
But it is very good at theory of mind.
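The workflow described here, creating personas and then steering the dialogue's tone with follow-up instructions so the model doesn't drift toward its default consensus ending, can be sketched as a small prompt-building helper. This is an illustrative sketch only: the `build_dialogue_messages` helper, the persona names, and the prompt wording are all made up, and the commented-out call assumes an OpenAI-style chat API.

```python
# Sketch of steering an LLM-generated Socratic dialogue, per the
# workflow described above. All names and prompt text are illustrative.

def build_dialogue_messages(personas, topic, tension="polite"):
    """Build a chat-message list requesting a Socratic dialogue.

    tension: "polite" (the default, agreeable mode complained about above)
             or "fraught" (the fix: instruct the model to keep it tense).
    """
    system = (
        "Write a Socratic dialogue between "
        + ", ".join(personas)
        + f" about {topic}. Argue the ideas out; do not summarize."
    )
    if tension == "fraught":
        # The key steering instruction: block the drift toward consensus.
        system += (
            " Make the conversation tense. The speakers grow increasingly "
            "upset and must NOT converge on a happy agreement at the end."
        )
    return [{"role": "system", "content": system},
            {"role": "user", "content": "Begin the dialogue."}]

messages = build_dialogue_messages(
    ["Einstein", "Niels Bohr"],
    "whether measurement collapses the wavefunction",
    tension="fraught",
)

# With an OpenAI-style client (hypothetical usage; requires an API key):
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The same persona trick generalizes to the focus-group example that follows: instead of named historical figures, the personas become demographic sketches, and the "dialogue" becomes a moderated discussion.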
And then I'll give you another example.
There's a, there's a startup actually in the UK in the world of politics.
And what they've found is that language models now are good enough,
specifically for politics, which is sort of a subcategory where this idea matters.
So, you know, in politics, people do focus groups, you do focus groups of voters all the time.
And by the way, many businesses also do that.
You know, so you get a bunch of people together from different backgrounds in a room
and you kind of guide them through discussion and try to get their points of view on things.
And focus groups are often surprising.
Like, if you talk to politicians who do focus groups,
they're often surprised that the things that they thought
voters cared about are actually not the things that voters care about. And so you can actually
learn a lot by doing this. But focus groups are very expensive to run. And then there's a long lag time
because they have to be actually physically organized and you have to recruit people and vet people
and so forth. And so it turns out that the state-of-the-art models now are good enough
at this that they can correctly, accurately reproduce a focus group of real people
inside the model. So they're going to clear that bar. In other words, you can basically have a focus
group actually happening in the model where you create personas in the model and then it actually
accurately represents, you know, a college student from Kentucky, as contrasted to a
housewife from Tennessee, as contrasted to, you know, whatever. You just specify this.
And so, you know, they're good enough to clear that bar. And, you know,
we'll see how far they get.
I want to segue to the bubble conversation. Amina G2, Jensen, and
Matt spoke about the enormous scale of physical infrastructure being built out. AI capex is 1% of
GDP. How should we understand and think about this bubble question? Well, I think the fact that it's
a question means we're not in a bubble. That's the first thing to understand. I mean, a bubble is a
psychological phenomenon as much as anything. And in order to get to a bubble, everybody has to
believe it's not a bubble. That's sort of the core mechanic of it. And, you know, we call that
capitulation. Everybody just gives up: okay, I'm not going to short these stocks anymore. I'm tired of
losing all my money. I'm going to go long. And we saw that actually. And, you know,
I had a little bit of a question, like, really, was the tech bubble a bubble? But in the kind of
dot-com era, right as the prices went through the roof, Warren Buffett started investing in tech.
And he had sworn he would never invest in tech because he didn't understand it. And so when he
capitulated, nobody was saying it was a bubble, right when it became, like, a quote-unquote bubble. Now,
if you look at that phenomenon,
the internet clearly was not a bubble.
You know, it's a real thing.
In the short term,
there was a kind of price dislocation
that happened because, you know,
there were just not enough people on the network to make
those products go at the time,
and then the prices kind of
outran the market.
You know, in AI, it's much
harder to see that, because there's so much
demand in the short term, right? Like, we don't have a demand problem right now, and, like, the idea that
we're going to have a demand problem five years from now to me seems quite absurd. You know, could there
be, like, weird bottlenecks that appear? You know, like, at some point we just don't have
enough cooling or something like that? You know, maybe. But, like, right now, if you look at demand and
supply, and what's going on,
and multiples against growth,
it doesn't look like a bubble at all to me.
But I don't know.
Do you think it's a bubble, Marc?
Yeah, look, I would just say this:
like, nobody knows, in the sense of, like, the experts.
Like, if you're talking to anybody
at a hedge fund or a bank or whatever, like, they definitely don't know.
Generally the CEOs don't know.
And by the way, a lot of VCs don't know.
They just get upset. Like, VCs get, like, emotionally upset
when you guys have higher valuations.
Like, it makes them, like, angry.
And, you know, and I get it all the time,
and I'm like, what are you mad about?
Like, the shit is working, man.
Be happy. Come on.
But so, like, there's a lot of emotion around,
like, people wanting it to be a bubble.
Yeah.
Nothing's worse than passing on a deal
and then having the company become a great success.
Like, that's just...
that valuation's outrageous.
You can be furious about that for 30 years in our business.
It's amazing.
And then you come up with all kinds of reasons to cope
and explain why it wasn't your mistake: it's the world, it's the world that's wrong,
not me, right? So there's a lot of that. Yeah, yeah. So I would just say,
like I would always say bring the conversation back to ground truth fundamentals. And the two
big ground-truth fundamentals are: number one, does the technology actually work? You know,
can it deliver on its promise? And number two, are customers paying for it? And if those
two things are true, then, as long as those two
things stay grounded, you know, generally things are going to, I think, be on track.
Yeah. When Gavin was up here with DG, he said chat chit was a Pearl Harbor moment for Google,
the moment when the giant wakes up. When we look at history and platform shifts,
what determine whether the incumbent actually wins the next wave versus new entrance? Or how should we
think about that in, you know, reacting to it is important. But that doesn't mean, like,
It's a Pearl Harbor moment. I think Google got their head out of their ass, so that was the sound of it. So, you know, they're not going to get completely run over. But nonetheless, like, I don't think OpenAI is going away. So, like, they definitely let that happen.
Yeah, some of it is speed.
And then just, look, it's execution over a long period of time.
And, you know, some of these very large,
companies, to varying degrees, have lost their ability to execute.
And so if you're talking about a brand new platform and you're talking about, you know,
kind of building for a long time, it's like, you know, Microsoft got caught with their
pants down on Google.
Microsoft's still, like, very strong, but they missed that whole opportunity.
They also missed the mobile opportunity. You know, Apple was nothing, and Microsoft fully believed that they were going to own mobile computing. They completely missed that one.
But they were still so big from their Windows monopoly,
they could build into other things.
So, you know, I think generally the new companies have won the new markets. That doesn't mean the big companies go away; the biggest monopolies from the prior generation just last a long time, is the way I would look at it.
Yeah, I also think we don't quite know,
like it's all happened so fast.
We actually don't, I think we don't yet know
the shape and form of the ultimate products.
Right, and so, like, because it's tempting, and this is kind of what always happens, I'm not saying that's what these guys did on stage, but sometimes you hear the kind of reductive version of this, which is basically: it's like, oh, there's either going to be a chatbot or a search engine, right? The competition is between a chatbot and a search engine. The problem Google has is the classic problem of disruption: are you going to disrupt the ten-blue-links model and swap in, you know, sort of AI answers, and potentially disrupt the advertising model? And the problem OpenAI has is they have the full chat product, but, you know, they don't have the advertising yet and they don't have the distribution, Google-scale distribution. And so, you know, you kind of say, okay, that'd be straight out of, like, you know, The Innovator's Dilemma, the business textbook; like, this is just a very clear, you know, one-versus-one kind of dynamic. But the mistake that you can make thinking in that way is that it assumes that the forms of the product in 5, 10, 15, 20 years, the main things that people use, are going to be either a search engine or a chatbot, right?
And, you know, there's just obvious historical analogies.
One just obvious historical analogy is, you know,
the personal computer from sort of invention in 1975
through to, you know, basically 1992, you know,
was a text prompt system, right?
You know, and at the time, by the way,
an interactive text prompt was a big advance
over the previous generation of like punch card systems,
time sharing systems.
And then, you know, it was 1992, so it was, what, 17 years in, you know, when the whole industry took a left turn into GUIs and never looked back. You know, and then by the way, five years after that, the industry took a left turn into web browsers and never looked back, right?
And so the very shape and form and nature of the user experience
and how it fits into our lives, you know, is, I think, still unformed.
And so, like, you know, I'm sure there will be chatbots 20 years from now, but I'm pretty confident that, you know, both the current chatbot companies and many new companies are going to figure out many kinds of user experiences that are radically different, that we don't even know about yet.
And by the way, that's one of the things, of course, that keeps the tech industry fun,
which is, you know, especially on the software side, you know, it's not, it's not obvious
what the shape and form of the products are. And there's just, I think there's just
tremendous headroom for invention. As you're coaching entrepreneurs, and for the entrepreneurs in this room, what else feels different about this era? What other advice do you find yourself dispensing, whether it's around sort of the talent wars that are going on or other aspects that feel unique to this era? What other advice do you want to leave our entrepreneurs with?
Well, like, I actually think you said the right thing, which is: this is a unique era. And so trying to learn the organizational-design lessons of the past, or trying to learn kind of too much from the last generation, can be deceptive, because things really are different. Like, the way your companies are getting built is quite different in many aspects. And, you know, just our observation on, like, PhD AI researchers is that they're very different from, like, a traditional engineer, a full-stack engineer, or something like that. So, you know, I think you do have to think through a lot of things from first principles, because it is different. And, like, observing from the outside, it's really different.
Yeah, and I would just offer, like, I do think things are going to change.
So I already talked about, I think the shape and form of products is going to change.
And so, like, I think there's still a lot of creativity there.
I also think that, like, in a world of supply and demand,
the thing that creates gluts is shortages, right?
So, like, when something becomes too scarce, there becomes a massive economic incentive
to figure out how to unlock new supply.
And so the current generation of AI companies is really struggling with particular shortages of really talented AI researchers and engineers, and then they're really challenged with a shortage of infrastructure capacity: chips and data centers and power.
I don't want to call timing on this.
There will come a time when both of those things become gluts.
And so I don't know that we can plan for that.
Although I would just say the following. Number one, on the researcher and engineer side of things, it is striking the degree to which there are excellent, you know, outstanding models coming out of China now from multiple companies, specifically, you know, DeepSeek and Qwen and Kimi. It is striking how the teams that are making those are not, for the most part, you know, the name-brand people with their names on all the papers. And so, like, China is successfully figuring out how to, like, basically take young people and train them up in the field.
Well, and xAI to a large extent, too.
Yeah.
And so I think there's going to be... and look, it makes sense that for a while it's going to be the super-esoteric skill set and people are going to pay through the nose for it.
But like, you know, there's no question.
The information is, right, being transferred into the environment.
People are learning how to do this.
You know, college kids are figuring it out.
And so, you know, I don't know that there's ever going to be a talent glut per se, but, like, I think for sure there's going to be a lot more people in the future who know how to build these things.
And then, by the way, also, of course, you know, AI building AI, right? So the tools themselves are going to get better and better at contributing to that.
And so I think this is good, because I think that, you know, the current level of shortage of engineers and researchers is too constraining. And then on the chip side, I'm not a chip guy and I don't want to call it specifically, but, like, it's never been the case in the chip industry that a shortage has lasted; every shortage in the chip industry has always resulted in a glut, because the profit pool of a shortage gets too big, the margins get too big, the incentive for other people to come in and figure out how to commoditize the function gets too big. And so, you know, Nvidia has, like, you know, the best position probably anybody's ever had in chips, but notwithstanding that, I find it hard to believe
that there's going to be this level of pressure on infrastructure in five years.
Yeah, and even if the bottleneck
within the infrastructure moves,
so if it becomes power,
if it becomes cooling or anything else,
then you'll have a chip glut for sure, yeah.
So, in the end, I would just say this: it's likely the challenges that we all have five years from now are going to be different challenges.
Yeah, yeah, yeah.
Like, definitely, in this industry of all industries, don't look at us as static. Like, you know, the positions could change very, very fast.
Let's actually close on more of this macro note.
Marc, you mentioned China. Last month we were in D.C., and one of the big questions that senators have is: how should we make sense of sort of the state of the AI race vis-a-vis China? Do you want to share just the high-level summary of what you shared with them?
Yeah, so my sense of things, if you just observe currently, specifically, like, DeepSeek, Qwen, Kimi, and these models coming out of China: I would say the conceptual innovations, the big conceptual breakthroughs, have been coming out of the U.S. specifically and the West generally, but more and more specifically the U.S.
China is extremely good at picking up ideas
and implementing them and scaling them
and commoditizing them.
And, you know, they do that obviously
throughout the manufacturing world.
And they're doing it now very, I think, successfully sort of in AI.
And so I would say they're running the catch-up game, like, really well. You know, and then there's sort of always this
question of like how much of that is like being done, let's just say like authentically, you know,
through hard work and smart people and then how much is being done with maybe a little bit of
help. Maybe a little USB stick in the middle of the night, you know, kind of help. So, you know,
there's always a little bit of question. But like either way, you know, they're doing a great job.
Obviously they aspire to, you know, more than that. And there are many very smart, creative people in China. And so, you know, it will be interesting now to see, you know,
the level to which the conceptual breakthroughs start to come from there and whether they pull ahead. But, like, I would say, what we tell people in Washington is, like, look, this is now a full-on race. It's a foot race. It's a game of inches. Like, we're not going to have a five-year lead. We're going to have, like, maybe a six-month lead. Like, we have to run fast. We have to win. Like, we have to do this. And we can't put constraints on our companies that the Chinese government isn't putting on their own companies, or, you know, we'll just lose.
And, you know, do you really want to wake up in the morning
and live in a world, you know, really controlled
and run by Chinese AI?
Most of us would say, no, we don't want to live in that world.
And so, so that, so there's that.
And I would say I feel moderately good about that
just because I think we're really good at software.
You know, the minute this goes into, you know,
embodied AI in the form of robotics,
I think things get a lot scarier.
And, you know, this is the thing I'm now spending time
in D.C. trying to really educate people on,
which is, you know,
Because the U.S. and the West have chosen to deindustrialize to the extent that we have over the last 40 years,
you know, China specifically now has this giant industrial ecosystem for building, you know,
sort of mechanical, electrical and semiconductor and now software, you know, devices of all kinds,
including phones and drones and cars and robots.
And so, you know, there's going to be a phase two to the AI revolution.
It's going to be robotics.
It's going to happen, you know, pretty quickly here, I think.
And when it does, like, even if the U.S. stays ahead in software, like, the robot's got to get built,
and that's not an easy thing. And it's not just like a company that does that. It's got to be an
entire ecosystem. And it's, you know, it's going to be, you know, like, I mean, you know,
the car industry was not three car companies. It was thousands and thousands of component suppliers
building all the parts. And it's been the same thing for airplanes and the same thing for
computers and everything else. It's going to be the same thing for robotics. And, you know,
by default, sitting here today, that's all going to happen in China. And so even if they never quite
catch us in software, they might just lap us in hardware, and that'll be that. You know,
the good news is I think there's a growing awareness, I would say, across the political spectrum in the U.S. that, like, deindustrialization went too far, and there's a growing desire to kind of figure out how to reverse that. And, you know, I'd say I'm
guardedly optimistic that we'll be making progress on that, but I think there's a lot of work to be
done. On that call to arms, let's wrap. Thank you, Marc and Ben. To wrap up, I'd like to welcome
Thank you.
Thank you, everybody.
Thanks for listening to this episode of the a16z Podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at a16z and subscribe to our Substack at a16z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only; it should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security; and it is not directed at any investors or potential investors in any a16z fund.
Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
