a16z Podcast - Amjad Masad & Adam D’Angelo: How Far Are We From AGI?
Episode Date: November 7, 2025

Adam D'Angelo (Quora/Poe) thinks we're 5 years from automating remote work. Amjad Masad (Replit) thinks we're brute-forcing intelligence without understanding it.

In this conversation, two technical founders who are building the AI future disagree on almost everything: whether LLMs are hitting limits, whether we're anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a "missing middle" in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering.

Plus: why coding agents can now run for 20+ hours straight, the return of the "sovereign individual" thesis, and the surprising sophistication of everyday users juggling multiple AIs.

Resources:
Follow Amjad on X: https://x.com/amasad
Follow Adam on X: https://x.com/adamdangelo

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
Nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years.
Humanity went through the agriculture revolution and industrial revolution.
We're going through another revolution.
We will not be able to call it something.
It's the future people who will call it something.
But we are going through something.
The number of solo entrepreneurs that this technology is going to enable: it's vastly increased what a single person can do.
For the first time, opportunity is massively available for everyone.
Just the ability for more people to be able to become entrepreneurs is massive.
The age of solo entrepreneurship powered by AI is here,
but the path to full automation is messier than the hype suggests.
Today, you'll hear from Adam D'Angelo, founder of Quora and CEO of Poe,
and Amjad Masad, founder and CEO of Replit,
on why we're in a brute force era of AI rather than true intelligence,
and what that means for the future of work.
We discussed the expert data paradox, how automating entry-level jobs creates a crisis
in training the next generation of experts, why managing tens of agents in parallel will define
the next wave of productivity, and how the sovereign individual framework might be the best
lens for understanding AI's economic and political impact.
Plus, Adam makes the case for why vibe coding is radically underrated, and Amjad explains
what Claude 4.5's strange new self-awareness might signal about the path ahead.
Let's get into it.
Adam, Amjad, welcome to the podcast.
Thank you. Yeah. Thanks for having us.
So a lot of people have been throwing cold water over LLMs lately.
There's been some general bearishness.
People talking about the limitations of LLMs, why they won't get us to AGI.
Well, maybe what we thought was just a couple years away is now maybe 10 years away.
Adam, you seem a bit more optimistic.
Why don't you share your broad general overview?
Yeah, I mean, honestly, I don't know what people are talking about.
I think if you look a year ago, the world was very different.
And so just judging on how much progress we've made in the last year
with things like reasoning models,
things like the improvement in code generation ability,
the improvements in video gen,
it seems like things are going faster than ever.
And so I don't really understand where the kind of bearishness is coming from.
Well, I think there's some sense that we hoped that they would be able to replace all tasks or all jobs.
And maybe there's some sense that it's like middle to middle,
but not end to end.
and maybe labor won't be automated in the same way that we thought it would on the same timeline.
Yeah, I mean, I don't know what the previous timelines people were thinking were,
but I think if you go five years out from now, we're in a very different world.
I think a lot of what's holding back the models these days is not actually intelligence.
It's getting the right context into the model so that it can use its intelligence.
And then there are some things like computer use that are still not quite there, but I think we'll almost definitely get there in the next year or two. And when you have that, I think we're going to be able to automate a large portion of what people do. I don't know if I would call that AGI, but I think it's going to satisfy a lot of the critiques that people are making right now. I think they won't be valid in a year or two.
What is your definition of AGI?
I don't know. Everyone thinks it's something different.
One definition I kind of like: if you have a remote worker, a human, any job that could be done by someone working remotely, and the AI can do that job, that's AGI. You can then ask, does it have to be better than the best person in the world at every single job? Some people call that ASI. Does it do something better than teams of people? You can argue about those different definitions. But I think once we get to be better than a typical remote worker at the job they're doing, we're living in a very different world. And I think that's a very useful anchor point for these definitions.
So in summary, you're not sensing the same limitations of LLMs that other people are. You think there's a lot more room that LLMs can go from here. We don't need, like, a brand new architecture or other breakthrough?
I don't think so. I mean, I think there are certain things like memory and learning, like continuous learning, that are not very easy with the current architectures. I think even those you can sort of fake, and maybe we're going to be able to get them to work well enough, but we just don't seem to be hitting any kind of limits.
The progress in reasoning models is incredible, and I think the progress in pre-training
is also going pretty quickly, maybe not as quickly as people had expected, but certainly
fast enough that you can expect a lot of progress over the next few years.
Amjad, what's your reaction hearing all this?
Yeah, I think I've been pretty consistent, and consistently right, perhaps, dare I say.
Consistent with yourself or consistent with what?
With myself, and with, I think, how things are unfolding.
I started being a bit more of a public doubter of things around the time when the AI safety discussion was reaching its height, back in maybe '22, '23. And I thought it was important for us to be realistic about the progress, because otherwise we're going to scare politicians. We're going to scare everyone. DC will descend on Silicon Valley. They'll shut everything down. So my criticism was of the idea of AGI 2027, you know, that paper that I think Scott Alexander or someone else wrote, and then Situational Awareness, and all these hype papers that are not really science, they're just vibes.
"Here's what I think will happen: the whole economy will get automated, jobs are going to disappear." All of that stuff, again, I think is unrealistic. It is not following the kind of progress that we're seeing, and it is going to lead to just bad policy.
So my view is: LLMs are amazing machines. I don't think they are exactly human-intelligence equivalent. You can still trick LLMs. They might have solved the strawberry one, but you can still trick them with single-sentence questions, like, how many words are in this sentence? I think I tweeted about it the other day: three out of the four models didn't get it. And then GPT-5 with high thinking had to think for 15 seconds in order to get a question like that. So LLMs are, I think, a different kind of intelligence than what humans are. And also they have clear limitations, and we're papering over the limitations and kind of working around them in all sorts of ways, whether it's in the LLM itself and the training data, or in the infrastructure around it and everything that we're doing to make them work.
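For context on why these trip-up questions are notable: the class of question Amjad describes is trivial for ordinary code. A minimal sketch (the exact sentences from his tweet aren't given, so the examples here are illustrative):

```python
def count_words(sentence: str) -> int:
    # Splitting on whitespace answers "how many words?" exactly, every time.
    return len(sentence.split())

def count_letter(text: str, letter: str) -> int:
    # The "how many r's in strawberry" style of question.
    return text.lower().count(letter.lower())

print(count_words("How many words are in this sentence?"))  # 7
print(count_letter("strawberry", "r"))  # 3
```

A plausible reason LLMs stumble here is that they see tokens rather than individual characters, so a count that is one line of code for a program requires explicit multi-step reasoning for a model.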
But that makes me less optimistic that we've cracked intelligence. And I think once we truly crack intelligence, it'll feel a lot more scalable, and the idea behind the bitter lesson will actually be true, in that you can just pour more power, more resources, more compute into them, and they'll just scale more naturally. I think right now, there's a lot of manual work going into making these models better.
In the true pre-training scaling era, GPT-2, 3, 3.5, maybe up to 4, it felt like you could just put more Internet data in there and it just got better. Whereas now it feels like there's a lot of labeling work happening. There's a lot of contracting work happening. A lot of these contrived RL environments are getting created in order to make LLMs good at coding and becoming coding agents, and they're going to go do that; I think there's news from OpenAI that they're going to do that for investment banking. And so I tried to coin a term I call functional AI, which is the idea that you can automate a lot of aspects of a lot of jobs by just going in and collecting as much data as you can and creating these RL environments. It's going to take enormous effort and money and data and all of that in order to do. And I think I agree with Adam that things are going to get better, 100%, over the next three months, six months. Claude 4.5 was a huge jump. I don't think it's appreciated how much of a jump it was over 4. There are really, really amazing things about Claude 4.5. So there is progress. We're going to continue to see progress. I don't think LLMs, as they currently stand, are on the way to AGI.
And my definition for AGI is, I think, the old-school RL definition, which is a machine that can go into any environment and learn efficiently, in the same way that a human could. You can put a human into a pool game, and within two hours they can shoot pool and be able to do it. Right now, there's no way for us to have machines learn skills like that on the fly. Everything requires an enormous amount of data and compute and time and effort. And more importantly, it requires human expertise, which is the non-bitter-lesson idea: human expertise is not scalable, and we are reliant on it. Today, we are in a human-expertise regime.
Yeah, I mean, I think that
Humans are certainly better at learning a new skill
from a limited amount of data in a new environment
than the current models are.
I think that, on the other hand,
human intelligence is the product of evolution,
which used a massive amount of effective computation.
And so this is a different kind of intelligence. And because it didn't have this massive equivalent of evolution, it just has pre-training for that, which is not as good, you then need more data to learn everything, every new skill. But I guess I think in terms of the functional
consequence, so if you're like, when will the job landscape change, when will the economic
growth hit, I think that's going to be more a function of when we can produce something that is
as good as human intelligence,
even if it takes a lot more compute,
a lot more energy, a lot more training data.
We could just put in all that energy
and still get to software
that's as good as the average person
at doing a typical job.
So I don't disagree with that. It does feel like we're in a brute-force type of regime, but maybe that's fine.
So where's the disagreement, then? I guess there's agreement on that. Where is the divergence?
I don't think that we'll get to the singularity, or to the next level of human civilization, until we crack the true nature of intelligence: until we understand it and have algorithms that are actually not brute force.
And you think those will take a long time to come?
I'm sort of agnostic on that.
It just does feel like the LLMs, in a way, are distracting from that, because all the talent is going there, and therefore there's less talent trying to do basic research on intelligence.
Yeah, at the same time, a huge portion of talent is going into AI research that previously wouldn't have gone into AI at all.
And so you have this massive industry, massive funding, you know, funding compute, but also funding human employees.
And I guess, nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years on it.
But basic research is different, right? Like trying to get at the fundamentals, as opposed to industry research, which is, how do we make these things more useful in order to generate profit? I think that's different. And often, I mean, Thomas Kuhn, the philosopher of science, talks a lot about how these research programs end up, you know, becoming like a bubble, sucking in all the attention and ideas. Think about physics, and how there's this whole industry around string theory, and it pulls everything in, and it's sort of a black hole of progress.
Yeah, no, and I think one of his things was, you've got to wait until the current people retire before you can have a chance at changing the paradigm. It's very pessimistic about paradigms. But I guess I feel like the current paradigm is pretty good, and I think we're nowhere near the sort of diminishing returns of continuing to push on it. I would just bet that you can keep doing different innovations within the paradigm to get there.
So let's say we continue to brute-force it, and we're able to automate much of labor. Do you estimate that GDP does something like, you know, 4 or 5% a year, or are we going up to, you know,
10% plus or what does it do to the economy?
I think it depends a lot on exactly where we get to and what AGI means.
But let's say you have LLMs that, with an amount of energy that costs $1 an hour, could do the job of any human. Let's just take that as a theoretical point you could get to.
I think you're going to get to much more than 4 to 5% GDP growth in that world.
I think the issue is you may not get there.
So it may be that the LLMs that can do everything a human can do actually cost more than humans do currently, or they can do, kind of, like, 80% of what humans can do, and then there's this other 20%. And I do think at some point you get to LLMs that can do everything, every single thing a human can do, for cheaper.
I don't see a reason why we don't eventually get there.
That may take 5, 10, 15 years.
But I think until you get there,
we're going to get bottlenecked on the things that the LLMs still can't do
or the, you know, building enough power plants to supply the energy
or other bottlenecks in the supply chain.
One thing I worry about is the deleterious effect of LLMs on the economy, in that, say, LLMs effectively automate the entry-level job, but not the expert's job, right? So let's take, you know, QA, quality assurance. It's so good, but there are still all these long-tail events that it doesn't handle. And so you have a lot of really good QA people now, like, managing hundreds of agents, and you effectively increase productivity a lot, but they're not hiring new people, because the agents are better than new people. And that feels like a weird equilibrium to be in, right? And I don't think that many people are thinking about it.
Yeah, for sure. You know, I think it's happening with CS majors graduating from college. There are just not as many jobs as there used to be, and LLMs are a little more substitutable for what they previously would have done, and I'm sure that's contributing to it. And then it means that you're going to have fewer people going up that ramp, where companies paid a lot of money to employ them and train them.
And so I think it's a real problem. But that problem also creates an economic incentive to solve it. So it may be that there are more opportunities for companies that can train people, or maybe uses of AI to teach people these things. But for sure, that's an issue right now.
Another related problem is that we're dependent on expert data in order to train the LLMs, and the LLMs start to substitute those workers. But at some point there are no more experts, because they're all out of their jobs, and the LLMs are only equivalent to them. If the LLMs are truly dependent on labeled data and expert RL environments, then how would they improve beyond that? I think that's a question for an economist to really sit down and think about: once you get the first tick of automation, there are some challenges there. How do you get to the next part?
Yeah.
I mean, I think a lot of it's going to depend on how good of RL environments can be created.
So, you know, in one extreme, you have something like AlphaGo where it's just a perfect environment and you can just blast past expert level.
But I think a lot of jobs have limited data that anyone can train from.
And so I think it'll be interesting to see how easy is it for research efforts to overcome that bottleneck.
If you had to make a guess on what job category is going to be introduced or explode in the future,
you know, some people say, you know, everyone's an influencer, or in some sort of caring field, or, you know, everyone's employed by the government in some sort of bureaucrat thing,
or maybe training the AI in some way.
You know, as more and more things start to get automated,
you know, what is your guess as to what more and more people start to do?
You know, doing art and poetry is...
Yeah, I mean, at some point you have everything automated,
and then I think people will do art and poetry.
And, you know, there's a data point that the number of people playing chess is up since computers got better than humans at chess.
So I don't think that's a bad world, if people are all just kind of free to pursue their hobbies, as long as you have some kind of way to distribute wealth so that people can afford to live. But, you know, that's a while away. In the near term, well, like 10, 15 years out, I don't know, but I'll put it in the at-least-10-years range. I think in the near term, the job categories that are going to explode are the jobs that can really leverage AI.
And so people who are good at using AI to accomplish their jobs, especially to accomplish
things that the AI couldn't have done by itself, there's just massive demand for that.
I don't think we're going to get to a point where you automate every job. Definitely not in the current paradigm. I would doubt it happening. I'm not certain it would ever happen, but definitely not in the current paradigm. Now, here's what I think: a lot of jobs are about servicing other humans. You need to be actually human in order to understand what other people want, you know? You need to have the human experience. So unless AI is actually embodied in a human experience, humans will always be the generators of ideas in the economy.
Adam, let's get your take on that point around the human part, because you created one of the best wisdom-of-the-crowds platforms in the universe, and now you've gone all in with Poe. What are your thoughts on the extent to which we will be relying on humans, versus trusting AI to be our therapist, be our, you know, caretaker in other ways?
Humans have a lot of knowledge collectively and, you know, even like one individual person
who's an expert and has lived a whole life and had a whole career and seen a lot of things,
they often know a lot of things that are not written down anywhere.
You could call it tacit knowledge, but it's also what they're capable of writing down if you did ask them a question.
I think there's still an important role for people to play in the world by sharing their
knowledge, especially when they have knowledge that just wasn't otherwise in an LLM's
training set.
You know, whether they will be able to make a full-time living doing that, I don't know.
But if that becomes a bottleneck, then for sure that's going to mean that all the sort of economic pressure goes to that. In terms of the, you know, you-have-to-be-human-to-know-what-humans-want idea, I don't know about that.
So, like, as an example, I think recommender systems, the systems that rank your Facebook or Instagram or Quora feed, those recommender systems are already superhuman at predicting what you're going to be interested in reading.
Like if I gave you a task that was like,
make me a feed that I'm going to read,
like there's just no way,
no matter how much you knew about me,
there's no way you could compete with these algorithms
to just have so much data about everything I've ever clicked on,
everything everyone else has ever clicked on,
what all the similarities are between all those different data sets.
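The kind of system Adam describes, predicting what you'll read from everything you and everyone else have clicked on, can be sketched as a toy user-based collaborative filter. The user names and click data below are invented purely for illustration:

```python
import math

# Hypothetical click matrix: rows are users, columns are items (1 = clicked).
clicks = {
    "alice": [1, 1, 0, 0, 1],
    "bob":   [1, 0, 0, 1, 1],
    "carol": [0, 1, 1, 0, 0],
}

def cosine(u, v):
    # Similarity of two click histories in "taste space".
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user, clicks):
    # Rank items the user hasn't seen by the click patterns of similar users.
    target = clicks[user]
    scores = [0.0] * len(target)
    for other, vec in clicks.items():
        if other == user:
            continue
        sim = cosine(target, vec)
        for i, clicked in enumerate(vec):
            if clicked and not target[i]:
                scores[i] += sim
    unseen = [i for i in range(len(target)) if not target[i]]
    return sorted(unseen, key=lambda i: -scores[i])

print(recommend("alice", clicks))  # [3, 2]: bob's tastes overlap alice's more than carol's
```

A production feed ranker does this over billions of interactions with learned embeddings rather than raw click vectors, which is exactly the scale advantage over any single human curator that Adam points to.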
And so I don't know.
You know, it's true that as a human you can kind of, like, simulate being a human, and that makes it easier for you to test out ideas. And I'm sure that for composers and artists, this is an important part of their process for doing work. Or chefs, you know: a chef will cook something and they taste it, and it's important that they can taste it. But, I don't know, they just have very little data compared to what AI can be trained on. So I don't know how that's going to shake out.
That's a good point. I mean, ultimately, what recommender systems are doing is aggregating all the different tastes, and then sort of finding where you sit in this multi-dimensional taste vector space, and getting you the best content there. So I guess there's some of that. I think that's more narrow than we think. Like, yes, it's true of recommender systems, but I'm not entirely sure it's true of everything.
But I think the best prediction for where the world is headed, and this is not an endorsement, it's just, like, where I think the world is headed, because I think part of it will be slightly unstable, but I think The Sovereign Individual continues to be a really good set of predictions for the future, although it's not a scientific book; it's a very polemic book. But the idea is, you know, in the late '80s, early '90s, two people out of the UK, I'm not sure if they're economists or political science majors, wrote this book trying to predict what happens when computer technology matures, right? They're like, you know, humanity went through the agriculture revolution and the
industrial revolution. We're going through another revolution, clearly. Information revolution,
now we call it the intelligence revolution, whatever. I think we will not be able to call it something; it's the future people who will call it something. But we are going through something. And so they're
trying to predict, okay, what happens from here? And what they arrive at is that ultimately you're going to have large swaths of people that are potentially unemployed or economically not contributing, but the entrepreneurs, the entrepreneurial capitalists, are going to be so highly leveraged, because they can spin up these companies with AI agents very quickly. Because they're very generative, they're human, they have interesting ideas about what other people want, they can create these companies and these products and services very quickly, and they can organize the economy in certain ways.
And the politics will change, because, you know, today's politics is based on every human being economically productive. But when you have massive automation, and only a few entrepreneurs and very intelligent, generative people are actually able to be productive, then the political structures also change. And so they talk about how the, you know, nation-state sort of subsides, and instead you go back to an era where states are, like, competing over people, over wealthy people. And, you know, as a sovereign individual, you can, like, negotiate your tax rate with your favorite state.
And so it starts to sound like biology a little bit. And I don't think it is far from where it might be headed. Now, again, it's not a sort of value judgment or desire, but I do think it's worth thinking about: when people are not the, you know, unit of economic productivity, things have to change, including culture and politics.
Yeah, I think
there's a question, with that book and some of this conversation more broadly, of, like, when does the technology reward, you know, the defender versus the sort of aggregator, or when does it incentivize more decentralization versus centralization? Like, remember, Peter Thiel had this quip a decade ago: crypto is libertarian, is more decentralizing; AI is, you know, communist, or more centralizing. And it's not obvious to me that that's entirely accurate on either side. AI does seem to empower a bunch of individuals, as you were saying. And then also, you know, crypto turns out as, like, fintech, it's, like, stablecoins, you know, it does empower the nation states; we're talking about them doing the sort of thing, you know, that China was going to do. So, yeah, I think there's an open question as to which technology empowers whom more, the edges or the center. And I think if it empowers the edges, it seems like the sovereign individual thesis holds. And maybe there's a barbell where it's, like, both, basically: the big incumbents just get much, much, much bigger, and there are these edges. But anyways, that's it.
Yeah. I'm very excited for the number of solo entrepreneurs that this technology is going to enable. I think it's just vastly increased what a single person can do. And there are so many ideas that just never got explored, because it's a lot of work to get a team of people together, and maybe raise the funding for it, and get the right kind of people with all the different skills you need. And now that one person can bring these things into existence, I think we're going to see a lot of really amazing stuff.
Yeah, I get these tweets all the time about people who quit their jobs because they started making so much money using tools like Replit, and it's really exciting. I think, for the first time, opportunity is massively available for everyone. And I think that that is, to me, the most exciting thing about this technology. Beyond all the other stuff that we're talking about, just the ability for more people to be able to become entrepreneurs is massive.
That trend is obviously going to happen. As we look out over the next decade or two, do you think that AI is more likely to be sustaining or disruptive, in the Christensen sense? To ask it another way, do you think that most of the value capture is going to come from companies that were scaled pre-OpenAI's founding, so Replit still counts as the latter, and sort of Quora to some degree, or do you think most of the value is going to be captured by companies that started, you know, after, let's say, 2015, 2016?
So there's a related question, which is how much of the value is going to go to the hyperscalers versus everyone else. And on that one, I actually think we're in a pretty good balance, where there's enough competition among the hyperscalers that, as an application-level company, you have choice and you have alternatives, and the prices are coming down incredibly quickly. But there's also not so much competition that the hyperscalers and the labs, like Anthropic and OpenAI, are unable to raise money and make these long-term investments. And so I actually think we're in a pretty good balance, and we're going to have a lot of new companies and a lot of growth among the hyperscalers.
I think that's about right.
So the terminology of sustaining versus disruptive comes from The Innovator's Dilemma. And it's this idea that whenever there's a new technology trend, there's this power curve: it starts as almost a toy, or something that doesn't really work, or captures the lower end of the market. But as it evolves, it goes up the power curve and eventually disrupts even the incumbents. So originally the incumbents don't pay attention to it, because it looks like a toy, and then eventually it disrupts everything and eats the entire market. So that was true of PCs. You know, when PCs came along, the big mainframe manufacturers did not pay attention to them. And initially it was like, yeah, that's for kids or whatever, but we have to run these large computers or data centers or whatever. But now even data centers are running on PCs, and so on. And so PCs were this hugely disruptive force.
But there are technologies that come along and really benefit the incumbents, and don't really benefit the new players, the startups. I think Adam's right: it's both. And maybe for the first time, a huge technology trend is kind of both, because the internet was hugely disruptive, but this time it feels like it is an obvious supercharge for the incumbents, for the hyperscalers, for the large internet companies. But it also enables new business models that are perhaps counter-positioned against the existing ones.
Although I think what happened is everyone read that book, and everyone learned how not to be disrupted. For example, ChatGPT was fundamentally counter-positioned against Google, because Google had a business that was actually working. ChatGPT was seen as this technology that hallucinates a lot and creates a lot of misinformation, and Google wanted to be trusted. And Google had ChatGPT-like technology internally. They didn't release Gemini until like two years after ChatGPT, and ChatGPT had already won, at least on brand recognition. So in a way, OpenAI came out as a disruptive technology. But now Google realizes this is a disruptive technology and is responding to it.
At the same time, it was always obvious that AI was going to be good for Google. At minimum, search overviews have gotten a lot better, its whole workspace suite is getting a lot better with Gemini, their mobile phones, everything gets better. So it seems like it's both.
Yeah, I really agree. Everyone read the book, and that changes what the theory even means, because all the public market investors have read that book, and they now are going to punish companies for not adapting and reward them for adapting, even if it means they have to make long-term investments. I think the management leadership of the companies have read the book and they're on top of their game.
I think also the people running these companies are, I guess I would say, smarter than the companies from the generation that book was built on. They're at the top of their game, and a lot of them are founder-controlled, so it's easier for them to take a hit
and make these investments. So I actually think, if you had an environment more like we had in, say, the 90s, this would be more disruptive than in the current hyper-competitive world that we're in. Now, one mistake that we have reflected on over the past few years, though of course I haven't been here for more than just a few months, is this idea that we've passed on companies because they weren't going to be the market leader or the category winner. And thus we thought, oh, learning the lessons from Web 2, you have to invest in the category winner; that's where things are going to consolidate, value is going to accrue over time. So why do the next foundation model company if the first one already has a head start? But it seems like the market has gotten so much bigger than that: in foundation models, but also in applications, there are just multiple winners, and they're kind of fragmenting and taking parts of the market that are all venture scale. I'm curious if this is a durable phenomenon, but that seems like one difference from the Web 2 era: just more winners across more categories.
I think network effects are playing much less of a role now than they did in the Web 2 era, and that makes it easier for competitors to get started. There's still a scale advantage, because if you have more users you can get more data, and if you have more users you can raise more capital. But that advantage doesn't make it absolutely impossible for a competitor of smaller scale. It makes it hard, but there's definitely room for more winners than there was before. I think another difference is that people are seeing the value so strongly that they're willing to pay early on. The question with Web 2 companies was, how are they going to make money? Even with Facebook super early, obviously, and Google, et cetera, it was like, oh, how are they going to monetize? And the companies here are monetizing from the get-go, your guys' companies included.
Yeah, yeah.
And I think with the earlier generation of companies, the monetization kind of depended on scale. Yeah. You couldn't build a good ad business until you got to millions, tens of millions of users. And now with subscriptions, you can just charge right away, especially thanks to things like Stripe making it easier. And so that's also made it a lot more friendly to new entrants.
There are also questions of geopolitics. It seems clear that we're not in a globalized era anymore, and it's perhaps going to get much worse. So investing in the OpenAI of Europe might be a good idea, and similarly, China is in an entirely different world. So there's a geopolitical aspect to it. Yeah. All of a sudden, our geopolitics nerdiness is useful. Adam, we were talking earlier about human knowledge. Did you see yourself, with Poe, kind of disrupting yourself in a sense? Or talk about the bet that you made with Poe and the evolution there.
You know, I think we saw Poe more as just an additional opportunity than as a disruption to Quora. The way we got to it was, in early 2022, we started experimenting with using GPT-3 to generate answers for Quora. And we compared them to the human answers and realized that they weren't as good, but what was really unique was that you could instantly get an answer to anything you wanted to ask about. And we realized it didn't need to be in public; actually, your preference would be to have it be in private. And so we felt like there was just a new opportunity here to let people chat with AI in private. Yeah. And it
seemed like you were also making a bet on what the landscape of different players was going to be. Yeah, yeah. So it was also a bet on diversity of model companies, which took a while to play out, but I think now we're getting to the point where there are a lot of models and a lot of companies, especially when you go across modalities: image models, video models, audio models. The reasoning research models especially are diverging, and agents are starting to be their own source of diversity. So we're lucky to now be getting into a world where there's enough diversity for a general-interface aggregator to make sense. But yeah, it was a bet early on. We kind of...
It's surprising, actually, that even not particularly technical consumers do use multiple AIs. I didn't expect that. You know, people only used Google; they never looked at Google and then Yahoo, or very few people did. But now you talk to just average people and they'll say, yeah, I use ChatGPT most of the time, but Gemini is much better at these types of questions. So, yeah, interesting. The sophistication of consumers has gone up. Even people saying that the models have different personalities, and they sort of resonate with Claude more, or whatever.
I want to return to this point from earlier about dark matter, about how we're going to brute-force it. There's a lot of knowledge that people have that's sort of not categorized yet. And it's not just tacit knowledge; it's knowledge that you could ask them about and they could describe. Because one question people have with LLMs is, we've already trained on the whole internet, so how much more knowledge is there? Is it 10x? Is it a thousand? What is your intuitive sense: if we do brute-force it and build this whole machine that gets all the knowledge out of humans into a data set we can then use, how do we think about the upside from there?
You know, I think it's very hard to quantify, but there's a massive industry developing around getting human knowledge into a form where AI can use it. So this is things like Scale AI, Surge, Mercor, but there's a massive long tail of other companies just getting started. And as intelligence gets cheaper and cheaper and more and more powerful, the bottleneck is increasingly going to be the data and what you need to create that intelligence. So that's going to cause more and more of this to happen. It might be that people can make more and more money by training AI, it might be that more and more of these companies get started, or it might be that there are other forms of it. But I think the economy is going to naturally value whatever the AI can't do.
What is the framework for, like, what is the mental model for what it can't do?
I don't know, you can ask an AI researcher, they might have a better answer, but to me there's just information that's not in the training set, and that is inherently going to be something the AI can't do. The AI will get very smart; it can do a lot of reasoning; it could prove every math theorem at some point if it starts from some axioms that you give it. But if it doesn't know how this particular company solved this problem 20 years ago, if that wasn't in the training set, then only a human who knows that is going to be able to answer that question.
And so over time, how do you see Quora interfacing with this? How are you running these in parallel? How do you think about it? Yeah, so I mean, with Quora, our focus is on human knowledge and letting people share their knowledge, and that knowledge is helpful for other humans and also helpful for AI to learn from. We have relationships with some of the AI labs, and Quora will play the role that it is meant to play in this ecosystem, which is as a source of human knowledge. At the same time, AI is making Quora a lot better. We've been able to make major improvements in moderation quality, in ranking answers, and in just improving the product experience. So it's gotten a lot better by applying AI to it.
Yeah.
And Amjad, I want to talk about your future as well. Obviously, you had this business for a long time, focused on developers. At one point, you were targeting, you know...
There's a non-profit.
No.
Exactly.
The ed tech market, I believe you had two or three million in revenue reported. And then recently at TechCrunch, I know it's outdated, but I think the reported number is like 150 million. I know it's higher since you've had this incredible growth as you've shifted the business model and the customer segment.
How do you think about the future of Replit?
I think Karpathy recently said that it's going to be the decade of agents, and I think that's absolutely right.
As opposed to prior modalities of AI: when AI first came to coding, it was autocomplete with Copilot. Then it became chat, with ChatGPT. Then I think Cursor innovated on this composer modality, which is editing large chunks of files, but that's it. What Replit innovated is the agent, and the idea of not only editing code but provisioning infrastructure like databases, doing migrations, connecting to the cloud, deploying, having the entire debug loop: executing the code, running tests. So the entire development lifecycle loop lives inside an agent, and that's going to take a long time to mature.
So our agent in beta came in September 2024, and it was the first of its kind that did both code and infrastructure, but it was fairly janky; it didn't work very well. Then Agent V1 came around December; it took another generation of models. So you go from Claude 3.5 to 3.7. 3.7 was the first model that really knew how to use a computer, a virtual machine. So unsurprisingly, it was also the first computer-use model. And these things have been moving together.
And so with every generation of models we find new capabilities. Agent V2 improved on autonomy a lot: Agent V1 could run for like two minutes, and Agent V2 ran for 20 minutes. Agent 3 we advertised as running for 200 minutes, because it just felt like it should be symmetrical, but it actually runs kind of indefinitely; we've had users running it for 20-plus hours.
And the main idea there was putting a verifier in the loop. I remember reading a paper from Nvidia about how they used DeepSeek to write CUDA kernels, and they were able to run DeepSeek for like 20 minutes if they put a verifier in the loop, being able to run tests or something like that. And I thought, oh, okay, so what kind of verifier can we put in the loop? Obviously you can put unit tests, but unit tests don't really capture whether the app is working or not.
So we started digging into computer use, and whether computer use was going to be able to test apps. Computer use is very expensive, and it's actually still kind of buggy. And like Adam talked about, that's going to be a big area of improvement that will unlock a lot of applications. But we ended up building our own framework with a bunch of hacks and some AI research, and I think Replit's computer-use testing models are some of the best. And once we put that into the loop, then you can put Replit in high autonomy (we have an autonomy scale; you can choose your autonomy level), and it just writes the code and goes and tests the application. If there's a bug, it reads the error log and writes the code again, and it can go for hours. I've seen people build amazing things by letting it run for a long time.
Now, that needs to continue to get better; it needs to get cheaper and faster. It's not necessarily a point of pride to run for a lot longer; it should be as fast as possible. So we're working on that.
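The verifier-in-the-loop idea described here can be sketched in a few lines. This is an illustrative toy, not Replit's actual implementation; `generate` and `verify` are hypothetical stand-ins for the model call and the test harness:

```python
def agent_loop(generate, verify, max_attempts=50):
    """Verifier-in-the-loop sketch: keep regenerating a candidate
    until the verifier passes, feeding each failure log back into
    the next generation attempt."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = generate(feedback)   # an LLM call in a real agent
        ok, log = verify(candidate)      # unit tests / computer-use check
        if ok:
            return candidate, attempt
        feedback = log                   # the error log drives the retry
    raise RuntimeError("verifier never passed")
```

The design point the Nvidia anecdote suggests is that autonomy is bounded by the verifier: with only weak checks the loop derails quickly, while a strong verifier is what lets the same loop run for hours.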
With Agent 4, there's a bunch of ideas that are going to be coming out, but one of the big things is that you shouldn't just be waiting for the one feature that you requested; you should be able to work on a lot of different features. So the idea of parallel agents is very interesting to us. You ask for a login page, but you could also ask for a Stripe checkout, and then you ask for an admin dashboard. The AI should be able to figure out how to parallelize all these different tasks, or recognize that some tasks are not parallelizable, but it should also be able to merge across the code. So being able to do collaboration across AI agents is very important. That way, the productivity of a single developer goes up by a lot. Right now, even when you're using Claude Code, Cursor, and others, there isn't a lot of parallelism going on.
But I think the next boost in productivity is going to come from sitting in front of a programming environment like Replit and being able to manage tens of agents, maybe at some point hundreds, but at least five to ten agents, all working on different parts of your product.
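The fan-out-and-merge idea could look something like this sketch (all names here are hypothetical; a real system would detect tasks that touch the same files and serialize or AI-merge those instead of this naive dictionary merge):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_agents(tasks, run_agent, max_workers=8):
    """Run independent feature tasks concurrently, then merge the
    file edits each agent returns. The naive merge lets later tasks
    win on conflicts, which is exactly what a production system
    would have to improve on."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        patches = list(pool.map(run_agent, tasks))   # one agent per task
    merged = {}
    for patch in patches:        # patch: {filename: new_contents}
        merged.update(patch)
    return merged
```

Here the interesting engineering is not the fan-out but the merge step, which is why collaboration between agents, not raw parallelism, is the hard part.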
I also think that the UI and UX could use a lot of work. Right now you're trying to translate your ideas into a textual representation, just like a PRD, right? What product managers do: product descriptions. But product descriptions are really hard, and you see in a lot of tech companies that it's really hard to align on the exact features, because language is fuzzy. So I think there's a world in which you're interacting with AI in a more multimodal fashion: opening up a whiteboard, being able to draw and diagram with AI, and really working with it like
you work with a human. And then the next stage of that is having better memory, memory inside the project but also across projects, and perhaps having different instantiations of the Replit agent: this agent is really good at Python data science because it has all the information, skills, and memories about my company and what it's done in the past. So I'll have a data-analysis Replit agent and a front-end Replit agent, and they have memory over multiple projects, over time, and over interactions. And maybe they sit in your Slack like a worker and you can talk to them. So again, I could keep going for another 15 minutes about a roadmap that could span three to five years, perhaps.
But this agent phase that we're in, there's just so much work to do, and it's going to be a lot of fun. Yeah, I was talking to one of our mutual friends, a co-founder of one of these big productivity companies, and he leads a lot of their..., and he's like, man, during the week these days I'm not even talking to humans as much anymore; I'm just using all these agents to build. So living in the future, for some, is already the present. There's something interesting about that: are people talking to each other less at companies? And is that a bad thing?
So, you know, I'm starting to think more about the second-order effects of things like that. Will it make it awkward for, again, the new grads? I feel so bad for them. If people are not sharing as much knowledge with each other, or it's not culturally easy to go ask for help because you're expected to use AI agents, there are some cultural forces that I think need to be reckoned with.
Yeah. There are a lot of tough cultural forces for Zoomers these days.
Yes.
Gearing toward closing here: obviously you guys are focused on running your companies, but to stay current on the AI ecosystem, you also make angel investments. Where are you most excited? We haven't talked about robotics; are you bullish on robotics in the near term? Or any emerging categories or use cases or spaces that you're looking to make more investments in, or have made some? I actually think vibe coding generally is just unbelievably high potential. Just the idea that all the, you know... this is underhyped even still?
I think so. I think, you know, just opening up the potential of software to the mainstream, to everyone. One reason I think it's underhyped is that the tools are still very far from what you can do as a professional software engineer, and if you imagine that they're going to get there, and I think there's no reason why they wouldn't, it'll take a few years, but then everyone in the world is going to be able to create things that would have taken a team of a hundred professional software engineers. That's just going to massively open up opportunities for everyone. So I think Replit is a great example of this, but I think there will also be cases beyond just building applications that this creates.
By the way, just on that note, if you were going to Stanford or Harvard today, in 2025, would you major again in computer science, or just focus on building something?
I think I would.
I mean, I went to college starting in 2002
and it was right after the dot-com bubble had burst
and there was a lot of pessimism
and I remember my roommate, his parents had told him
like, don't study computer science,
even though that was something he really liked.
And I just kind of did it because I liked it.
And I think it's definitely true that the job market is worse than it was a few years ago.
At the same time, I think having these skills to understand the sort of fundamentals of what's possible with algorithms and data structures,
I think that actually really helps you in managing agents when you're using them.
And I'm guessing that it will continue to be a valuable skill in the future.
I also think the other question is, what else are you going to study? For every single thing you could imagine, there's an argument for why it's going to be automated. So I think you might as well study what you enjoy, and this is as good as anything.
Yeah, I think there's a lot to get excited by. One thing is maybe kind of random, but I get really fired up seeing mad-science experiments, like the DeepSeek OCR work that came out the other day. Did you see it? It's wild, where, correct me if I'm wrong, because I only looked at it briefly, but basically you can get a lot more economical with a context window if you feed in a screenshot of the text instead of the raw text. Yeah, I'm not the right person to be correcting you. But there are definitely some really interesting things there.
Yeah, I saw another thing on Hacker News on Saturday: text diffusion, where someone made a text diffusion model by, instead of doing Gaussian denoising, taking a single BERT instance, masking different words, and just predicting those masked tokens.
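That masked-prediction trick can be sketched as iterative unmasking. This is a toy with a stand-in predictor, not the actual Hacker News project; `predict` would be a BERT-style masked language model in practice:

```python
import random

MASK = "<mask>"

def unmask_sample(length, predict, steps=4, rng=random):
    """Text-diffusion-as-unmasking sketch: instead of Gaussian
    denoising, repeatedly ask a masked-LM `predict(seq, i)` to fill
    in a growing share of the still-masked positions each step."""
    seq = [MASK] * length
    for step in range(steps):
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        if not masked:
            break
        k = max(1, len(masked) // (steps - step))  # reveal more each step
        for i in rng.sample(masked, k):
            seq[i] = predict(seq, i)   # masked-token prediction
    for i, tok in enumerate(seq):      # fill any leftover masks
        if tok == MASK:
            seq[i] = predict(seq, i)
    return seq
```

The mixing is in the schedule, not the model: the same BERT-style component, called repeatedly on partially revealed sequences, behaves like a discrete diffusion sampler.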
And so we have a lot of components, and I don't think people think a lot about that. We now have the base pre-trained models, we have all these RL reasoning models, we have the encoder-decoder models, we have diffusion models. There are all these different things, and you can mix them in different ways. I feel like there isn't a lot of that. I mean, it'd be great if a new research company came out that's not trying to compete with OpenAI and things like that, but instead is just trying to discover how to put these different components together
in order to create a new flavor of these models.
In crypto, they talk about composability and mixing primitives together; in AI, maybe there needs to be more of that.
There's less playing around, I've found. I remember in the Web 2.0 era, when we were playing around with JavaScript and what browsers could do and what web workers could do, there were a lot of really interesting, weird experiments. I mean, Replit was born out of that. The original version of Replit, open source, pre the company, where my interest was: can you compile C to JavaScript? That was one of the interesting things. And that became Wasm; at the time it was Emscripten. And it was such a nasty hack.
But I think we're in an era of Silicon Valley that's very get-rich driven, and that makes me a little sad. That's partly why I moved the company out of SF. I feel like the culture in SF has gotten, maybe, I wasn't there, but during the dot-com era a lot of people talked about how it was sort of get rich fast, or the crypto thing. So I feel like there needs to be a lot more tinkering, and I would love to see more of that, and more companies getting funded that are trying to do something a little more novel, even if it doesn't mean a fundamentally new model.
Last question. Amjad, you've been into consciousness for a long time. Are you bullish that, via some of this AI work or scientific progress elsewhere, we will make some progress in getting across this hard problem?
You know, something happened recently which is interesting. Claude 4.5 seemed to become more aware of its context length: as it gets closer to the end of the context, it starts becoming more economical with tokens. It also looks like its awareness of when it's being red-teamed or in a test environment jumped significantly. So there's something happening there that's quite interesting.
Now, in terms of the question of consciousness, it is still fundamentally not a scientific question, and in a sense we've given up on trying to make it one. But I think this is also the problem that I talked about with all the energy going into LLMs: no one is really trying to think about the true nature of intelligence, the true nature of consciousness. And there are a lot of really core
questions. One of my favorites is Roger Penrose's The Emperor's New Mind, where he wrote a book about how everyone in the philosophy-of-mind space, and perhaps the larger scientific ecosystem, started thinking about the brain in terms of a computer. In that book he tried to show that it is fundamentally impossible for the brain to be a computer, because humans are able to do things that Turing machines cannot do, or fundamentally get stuck on, such as basic logic puzzles that we're able to detect but that there's no way to encode in a Turing machine. For example, "this statement is false," those old logic puzzles. Anyway, it's a complicated argument, but if you read that book, or many others, there's a core strain of arguments in the theory of mind about how computers are fundamentally different from human intelligence.
And so, yeah, I've been very busy, so I haven't really updated my thinking too much on that. But I think there's a huge field of study there that is not being studied.
If you were a freshman entering college today, would you study philosophy?
I would. I would definitely study philosophy of mind, and I would probably go into neuroscience, because I think those are the core questions that are going to become very, very important as AI continues to eat more of jobs and the economy and things like that.
That's a great place to wrap. Amjad, Adam, thanks for coming on the podcast.
Thank you. Thank you.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe, leave us a rating or review
and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only.
It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any
investment or security, and is not directed at any investors or potential investors in any
A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the
companies discussed in this podcast. For more details, including a link to our investments,
please see A16Z.com forward slash disclosures.
