a16z Podcast - Is AI Slowing Down? Nathan Labenz Says We're Asking the Wrong Question
Episode Date: October 14, 2025Nathan Labenz is one of the clearest voices analyzing where AI is headed, pairing sharp technical analysis with his years of work on The Cognitive Revolution.In this episode, Nathan joins a16z’s Eri...k Torenberg to ask a pressing question: is AI progress actually slowing down, or are we just getting used to the breakthroughs? They discuss the debate over GPT-5, the state of reasoning and automation, the future of agents and engineering work, and how we can build a positive vision for where AI goes next. Resources:Follow Nathan on X: https://x.com/labenzListen to the Cognitive Revolution: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSykWatch Cognitive Revolution: https://www.youtube.com/@CognitiveRevolutionPodcast Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!Find a16z on X: https://x.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Stay Updated:Find a16z on XFind a16z on LinkedInListen to the a16z Podcast on SpotifyListen to the a16z Podcast on Apple PodcastsFollow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Discussion (0)
AI is not synonymous with language models.
AI is being developed with pretty similar architectures
for a wide range of different modalities.
And there's a lot more data there.
Feedback is starting to come from reality.
Maybe we're running out of problems we've already solved
when we start to give the next generation of the model
these power tools,
and they start to solve previously unsolved engineering problems.
I think you start to have something that looks kind of like superintelligence.
There's a growing debate about whether AI progress has plateaued
or if our expectations have simply caught up to the pace or change.
On this episode, I'm joined by Nathan LeBenz,
host of the Cognitive Revolution to unpack whether AI innovation is actually slowing.
We break down the case for slowdown,
from Cal Newport's argument that students are using AI to get lazier,
to the claims that GPT5 wasn't a leap over GPT4.
Nathan and I look at what's really happening under the hood of AI,
from new reasoning and math capabilities to real scientific discoveries
and multimodal systems that go far beyond chatbox.
We also discuss agents, automation, and how quickly work itself is starting to change.
And we'll end on the big question.
If progress isn't slowing down, how should we shape it toward a future we actually want?
Let's get started.
Nathan, I'm stoked to have you on the A&Z podcast for the first time.
Obviously, we've been podcast partners for a long time with you leading cognitive revolution.
Welcome.
It's great to be here.
Thank you.
So we were talking about Cal Newport's podcast appearance on Lost Debates.
And we thought it was a good opportunity to just have this broad conversation and really entertain this sort of question of, is AI slowing down?
So why don't you sort of steal man to some of the arguments that you've heard on that side from him or more broadly than we could sort of have this broader conversation?
Yeah, I mean, I think for one thing, it's really important to separate a couple different questions, I think, with respect to AI.
One would be, is it good for us right now even?
And is it going to be good for us in the big picture?
and then I think that is a very distinct question from
are the capabilities that we're seeing continuing to advance
and at a pretty healthy clip?
So I actually found a lot of agreement with the Cal Newport podcast
that you shared with me when it comes to some of the worries
about the impact that AI might be having even already on people.
He looks over students' shoulders and watches how they're working
and finds that basically he thinks that they are using AI to be lazy,
which is no big revelation.
I think a lot of teachers would tell you that.
Shocker.
He puts that in maybe more dressed up terms
that people are not even necessarily moving faster,
but they're able to reduce the strain
that the work that they're doing places on their own brains
by kind of trying to get AI to do it.
And, you know, if that continues,
and I think he's been, I think, a very valuable commenter
on the impact of social media.
Certainly, I think we all should be mindful of,
how is my attention span evolving over time
and am I getting weak or averse to hard work?
Those are not good trends if they are showing up in oneself.
So I think he's really right to watch out for that sort of stuff.
And then as we've covered in many conversations in the past,
I've got a lot of questions about what the ultimate impact of AI is going to be.
And I think he probably does too.
But then when it comes to, it's a strange move from my perspective to go from,
There's all these sort of problems today
and maybe in the big picture, two,
but don't worry, it's flatlining,
like kind of worry, but don't worry
because it's not really going anywhere further than this,
or it's scaling is kind of petered out,
or we're not going to get better AI than we have right now,
or even maybe the most easily refutable claim
from my perspective is GPT5 wasn't that much better than GPT4.
And that, I think, is where I really was like,
what, wait a second.
I was with you on a lot of things,
and some of the behaviors that he,
observes in the students, I would cop to having exhibited myself.
When I'm trying to code something these days, a lot of times I'm like, oh, man, can't the AI just
figure it out?
I really don't want to have to sit here and read this code and figure out what's going on.
It's not even about typing the code anymore.
You know, I'm way too lazy for that, but it's even about figuring out how the code is working.
Can't you just make it work?
Try again, you know, and just try again.
And I do find myself at times falling into those traps.
But I would say a big part of the reason I can fall into those traps is because the AIs are
getting better and better. And increasingly, it's not crazy for me to think that they might be
able to figure it out. So that's my kind of first slice at the takes that I'm hearing. There's
almost like a two-by-two matrix maybe that one could draw up, whereas do you think AI is good or bad
now and in the future? And do you think it's like not a big deal or a big deal? And I think
it's both on the good and bad side. I definitely think it's a big deal. The thing that I
struggle to understand the most is the people who kind of don't see the big deal. And I think it's
deal that it seems pretty obvious to me, and the, especially when it comes again to the leap from
GPD4, GB5, maybe one reason that's happened a little bit is that there were just a lot more
releases between GPT4 and five. So what people are comparing to is, you know, something that just
came out a few months ago, 03, right, that only came out a few months before GPT5. Whereas with GPD4,
it was shortly after ChatGBT, and it was all kind of this moment of, whoa, this thing is
like exploding onto the scene. A lot of people,
We're seeing it for the first time.
And if you look back to GPT3, there's a huge leap.
I would contend that the leap is similar from GPT 4 to 5.
These things are hard to score.
There's no single number that you could put on it.
Well, there's loss.
But of course, one of the big challenges is that what exactly does a loss number translate into
in terms of capabilities?
So it's very hard to describe what exactly has changed.
But we could go through some of the dimensions of change if you want to and enumerate some
of the things that I think people maybe are starting to, or have to have to change.
have come to take for granted and kind of forget, like, that GPD4 didn't have a lot of the things
that now were sort of expected in the GPD5 release because we'd seen them in 4-0 and 01 and
03 and all those things sort of maybe boiled the frog a little bit when it comes to how much
progress people perceived in this last release.
Yeah, a couple reactions.
So one is, and even to complicate your two-by-two, even further in the sense of, is it bad now
versus it bad later?
Cal is not really who we both admire, by the way, a lot.
Cal is a great guy and a valuable contributor to the thought space,
but he's not as concerned about sort of this sort of future AI concerns
that sort of the AI safety folks and many others are concerned about.
He's more concerned about what it means to life for cognitive performance
and development now in the same way that he's worried about social media's impact,
and you think that's a concern, but nowhere near as big a concern
as what to expect in the future.
And then also he presents sort of this theory of
why we shouldn't worry about the future because it's slowing down.
And why don't we just share how we interpreted kind of his history, which as I interpret it was
this idea of, hey, the simplistic version is we've figured out this way such that if you throw
a bunch of data into the model, it gets better and sort of order of magnitude.
And so the difference to GB2 and GPD3 and the GPD3 and GPD4, but then that sort of significant,
the difference, but then it achieved sort of a diminishing returns significantly.
And we're not seeing it a GPD5 and thus we don't have to worry anymore.
How would you edit the characterization of his view?
of the history, and then we can get into the differences between 4.5.
The scaling law idea, which is definitely worth agreeing, taking a moment to note that it is not a
law of nature. We do not have a principled reason to believe that scaling is some law that will go
indefinitely. All we really know is that it has held through quite a few orders of magnitude so far.
I think that it's really not clear yet to me whether or not the scaling laws have petered out.
or whether we have just found a steeper gradient of improvement
that is giving us better ROI on another front that we can push on.
So they did train a much bigger model, which was GBT 4.5,
and that did get released.
And there are a number of interesting, of course,
there's a million benchmarks, whatever,
the one that I zero in on the most in terms of understanding
how GBT 4.5 relates to both O3 and GBT5.
And Open AI, obviously famously terrible in naming, we can all agree on that.
I think a decent amount of this confusion and sort of disagreement actually does stem from unsuccessful naming decisions.
4.5 on this one benchmark called Simple QA, which is really just a super long-tail trivia benchmark.
It really just measures, do you know, a ton of esoteric facts?
And they're not things you can really reason about.
You either just have to know or don't know these particular facts.
the 03 class of models got about a 50% on that benchmark
and GPT 4.5 popped up to like 65%.
So in other words, it basically of the things that were not known
to the previous generation of models, it picked up a third of them.
Now, there's obviously still two thirds more to go,
but I would say that's a pretty significant leap, right?
These are super long tail questions.
I would say most people would get like close to a zero.
You'd be like the person sitting there at the trivia night
who, like, maybe gets one a night
is kind of what I would expect most people to do on simple QA.
And that checks out, right?
Like, obviously, the models know a lot more
than we do in terms of facts and just general information
about the world.
So at a minimum, you can say that GPD 4.5
knows a lot more.
A bigger model is able to absorb a lot more facts.
Qualitatively, people also said,
in some ways, maybe it's better for creative writing.
It was never really trained
with the same power of post-training
that GPD-5 has had,
and so we don't really have an apples-to-apples comparison,
but people did still find some utility in it.
I think maybe the way to understand
why they've taken that offline
and gone all in on GPD-5
is just that model's really big.
It's expensive to run.
The price was way higher,
it was a full order of magnitude
plus higher than GPD-5 is.
And it's maybe just not worth it
for them to consume all the comprehensive,
that it would take to serve that,
and maybe they just find that people are happy enough
with the somewhat smaller models for now.
I don't think that means that we will never see
a bigger GPT 4.5 model with all that reasoning ability,
and I would expect that that would deliver more value,
especially if you're really going out and trying to do esoteric stuff
that's pushing the frontier of science or what have you.
But in the meantime, the current models are really smart,
and you can also feed them a lot of context.
That's one of the big things that has improved so much over the last generation.
When GPT4 came out, at least the version that we had as public users,
was only 8,000 tokens of context, which is like 15 pages of text.
So you were limited.
You couldn't even put in like a couple papers.
You would be overflowing the context.
And this is where prompt engineering initially kind of became a thing.
It was like, man, I've really only got such a little bit of information that I can provide.
I've got to be really careful about what information to provide
lest I overflow the thing and it just can't handle it.
There were also, as context windows got extended,
there were also versions of models
where they could nominally accept a lot more,
but they couldn't really functionally use them.
You know, they sort of could fit them, you know,
at the API call level,
but the models would lose recall.
They'd sort of unravel as they got into longer and longer context.
Now you have obviously much longer context
and the command of it is really, really good.
So you can take dozens of papers
on the longest context windows with Gemini,
and it will not only accept them,
but it will do pretty intensive reasoning over them
and with really high fidelity to those inputs.
So that skill, I think, does kind of substitute
for the model-knowing facts itself.
You could say, geez, let's try to train all these facts into the model.
We're going to need a trillion, or who knows,
five trillion, how many trillion parameters
to fit all these super long,
or you could say, well, a smaller thing that's really good at working over provided context
can, if people take the time or, you know, go to the trouble of providing the necessary
information, I can kind of access the same facts that way. So you have a kind of, do I want to
push on this size and do I want to bake everything into the model or do I want to just try to get as
much performance out of a smaller, tighter model that I have? And it seems like they've gone that
way, and I think basically just because they're seeing faster progress on that gradient,
you know, in the same way that the models themselves are always kind of in the training
process, taking a little step toward improvement, you know, the outer loop of the model
architecture and the nature of the training runs and where they're going to invest their
compute is also kind of going that direction. And they're always looking at like, well, we could
scale up over here, maybe get this kind of benefit a little bit, or we could do more post-training
here and get this kind of benefit. And it just seems like we're getting more benefit from
the post-training and the reasoning paradigm
than scaling. But I don't think either one is
I definitely don't think either one is dead.
We haven't seen yet what 4.5
with all that post-training would look like.
Yeah. And so, I mean, one of the things that you mentioned
that Cal, you know, the analysis missed was
that it way underestimated the value of extent of reasoning, right?
And so what would it mean to fully sort of appreciate that?
Well, I mean, a big one from just the last few weeks
was that we had an IMO gold medal
with pure reasoning models
with no access to tools from multiple companies.
And, you know, that is night and day
compared to what GPT4 could do with math, right?
And these things are really weird.
Like, it's nothing I say here should be intended
to suggest that people won't be able to find weaknesses
in the models.
I still use a tick-tack-toe puzzle to this day
where I take a picture of a tick-tac-toe board
where one of the players has made a wrong move
that is not optimal
and thus allows the other player to force a win
and I ask the models
if somebody can force a win from this position
only very recently, only the last generation of models
are starting to get that right some of the time
almost always before they were like
Ticktack chose a solved game
you can always get a draw
and they would wrongly assess my board position
as the player can still get a draw
So there's a lot of weird stuff, right?
The jagged capabilities frontier remains a real issue,
and people are going to find peaks and valleys for sure.
But GPD4, when it first came out,
couldn't do anything approaching IMO gold problems.
It was still struggling on, like, high school math.
And since then, we've seen this high school math progression
all the way up through the IMO gold.
Now we've got the frontier math benchmark that is, I think, now like up to 25%.
It was 2% about a year ago or even a little less than a year.
ago, I think. And we also just today saw something where, and I haven't absorbed this one yet,
but somebody just came out and said that they had solved a, you know, canonical, super challenging
problem that no less than Terrence Tao had put out. And it was like this, you know, this thing
happened in, I think, days or weeks of the model running versus it was 18 months, you know,
that it took professional, not just any professional mathematicians, but like really, you know,
the leading minds in the world to make progress on these problems. So,
Yeah, I think that's really, you know, that's really hard jumping capabilities to miss.
I also think a lot about the Google AI co-scientist, which we did an episode with.
We can, you can check out the full story on that if you want to.
But, you know, they basically just broke down the scientific method into a schematic.
You know, and this is a lot of what happens when people, there's one thing to say, the model will respond with thinking and it'll go through reasoning process.
The more tokens it spends at runtime, the better your answer will be.
That's true.
Then you can also build this scaffolding on top of that and say, okay, well, let me take
something as broad and, you know, aspirational as the scientific method, and let me break that
down into parts.
Okay, there's hypothesis generation.
Then there's hypothesis evaluation.
Then there's, you know, experiment design.
There's literature review.
There's all these parts to the scientific method.
What the team at Google did is created a pretty elaborate schematic that represented their
best breakdown of the scientific method, optimized prompts for each of those steps, and then gave
this resulting system, which is scaling inference now kind of two ways. It's both the chain of thought,
but it's also all these different angles of attack structured by the team. And they gave it legitimately
unsolved problems in science. And in one particularly famous kind of notorious case, it came
up with a hypothesis, which it wasn't able to verify because it doesn't have direct access
to actually run the experiments in the lab, but it came up with a hypothesis to some open
problem in virology that had stumped scientists for years, and it just so happened that
they had also recently figured out the answer, but not yet published their results. And so there
was this confluence where the scientists had experimentally verified, and Gemini, in the form of
this AI co-scientist, came up with exactly the right answer.
And these are things that, like, literally nobody knew before.
And GPD4 just wasn't doing that.
You know, I mean, these are qualitatively new capabilities.
That thing, I think, ran for days.
You know, it probably costs hundreds of dollars,
maybe into the thousands of dollars to run the inference.
You know, that's not nothing,
but it's also, like, very much cheaper than, you know, years of grad students.
And if you can get to those caliber of problems
and actually get good solutions to them, like, you know,
what would you be able to?
willing to pay, right, for that kind of thing. So, yeah, I don't know. That's probably not a full
appreciation. We could go on for a long time, but I would say, in summary, GPT4 was not able to
push the actual frontier of human knowledge. To my knowledge, I don't know that ever
discovered anything new. It's still not easy to get that kind of output from a GPT5 or a Gemini 2.5
or, you know, a clawed opus four or whatever. But it's starting to happen sometimes.
And that in and of itself is a huge deal.
Well, then how do we explain the bearishness or the kind of vibe shift around GPD5 then?
You know, one potential contributor is this idea that if a lot of the, if the improvements are at the frontier, you know, not everyone is working with, you know, sort of advanced math and physics and a day-to-day.
And so maybe they don't see the benefits in their daily lives in the same way that, you know, sort of the jumps in Chad GBT were obvious and shape the day-to-day.
Yeah, I mean, I think a decent amount of it was
that they kind of fucked up the launch, you know, simply put, right?
They, like, were tweeting Death Star images,
which Sam Altlin later came back and said,
no, you're the Death Star.
I'm not the Death Star.
But I think people thought that the Death Star
was supposed to be the model.
That was generally the, you know,
the expectations were set extremely high.
The actual launch itself was just technically broken.
So a lot of people's first experiences of GPT-5,
they've got this model router concept now
where I think another way to understand
what they're doing here is they're trying to own
the consumer use case
and to own that
they need to simplify the product experience
relative to what we had in the past
which was like, okay, you got GPD4 and 40 and 40 mini
and 03 and 04 mini
and other things, you know, four or five
within there at one point. You got all these different models,
which one should I use for which? It's like very confusing
to most people who aren't obsessed
with this. And so one of the big things they wanted to do was just shrink that down to just ask your
question and you'll get a good answer and we'll take that complexity on our side as the product
owners. To do that, interestingly, and I don't have a great account of this, but one thing you
might want to do is kind of merge the models and figure out, just have the model itself decide how much
to think, or maybe even have the model itself decide how many of its experts, if it's a mixture of
experts architecture it needs to use, or maybe, you know, there's been a bunch of different
research projects on like skipping layers of the model. If the task is easy enough, you could
like skip a bunch of layers. So you might have hoped that you could genuinely on the back end
merge all these different models into one model that would dynamically use the right amount of
compute for the level of challenge that a given user query presented. It seems like they found that
harder to do than they expected.
And so the solution that they came up with instead was to have a router where the
router's job is to pick. Is this an easy query? In which case, we'll send you to this model.
Is it a medium? Is it a hard? And I think they just have two really models behind the
scene. So I think it's just really easy or hard.
Certainly the graphs that they showed, you know, basically showed the kind of with and without
thinking. The problem at launch was that that router was broken. So all of the queries were going to,
the dumb model. And so a lot of people literally just got bad outputs, which were worse than
03 because they were getting non-thinking responses. And so the initial reaction of like,
okay, this is dumb and that's sort of, you know, traveled really fast. I think that kind of
set the tone. My sense now is that as the dust has settled, most people do think that it is the
best model available. And, you know, things like the meter, the infamous meter task link
chart, it is the best. You know, we're now over two hours and it is still above the trend
line. So if you just said, you know, do I believe in straight lines on graphs or not? And how should
this latest data point influence whether I believe on these straight lines on, you know, power
logarithmic scale graphs? It shouldn't really change your mind too much. It's still above the trend
line. I talked to Zvi about this, Zvichwitz's legendary infovore and AI industry analyst on a recent
podcast too and kind of asked him same question. Like why do you think the, you know, even some of the
most plugged in, you know, sharp minds in the space have seemingly pushed timelines out a bit
as a result of this. And his answer was basically just, it resolved some amount of uncertainty. You know,
you had an open question of maybe they do have another breakthrough, you know, maybe it really is
the Death Star, you know, if they surprise us on the upside, then all these short timeline,
you know, we could have expected a, yeah, I guess one way to think about it is like the,
the distribution was sort of broad in terms of timelines. And if they had surprised on the upside,
it might have narrowed and narrowed in toward the front end of the distribution. And if it,
if they surprised on the downside or even just were, you know, purely on trend, then you
would take some of your distribution from the very short end of the timelines and kind of
push them back toward the middle or the end.
And so his answer was like, AI 2027 seems less likely,
but AI 2030 seems basically no less likely,
maybe even a little more likely
because some of the probability mass from the early years
is now sitting there.
So it's not that, I don't think people are moving
the whole distribution out super much.
I think there may be more just kind of shrinking the,
you know, it's getting a little tighter
because it's maybe not happening quite as soon
as it seemed like it might have been,
but I don't think too many people,
at least that I think are really plugged in on this,
are pushing out too much past 2030 at all.
And by the way, you know,
obviously there's a lot of, you know, disagreement.
The way I kind of have always thought about this sort of stuff is
Dario says 2027, Demas says 2030,
I'll take that as my range.
So coming into GPT5, I was kind of in that space.
And now I'd say, well, I don't know,
Dario's got, what cards does he have up his sleeve?
They just put out 4.1 opus, and in that blog post, they said,
we will be releasing more powerful updates to our models in the coming weeks.
So they're due for something pretty soon.
Maybe they'll be the ones to surprise on the upside this time,
or maybe Google will be.
I wouldn't say 2027 is out of the question.
But, yeah, I would say 2030 still looks just as likely as before.
And again, from my standpoint, it's like, that's still really soon.
You know, so if we're on track, whether it's 28, 29, 30, I don't really care.
I try to frame my own work so that I'm kind of preparing myself and helping other people
prepare for what might be the most extreme scenarios and kind of, you know, one of these things
where if we aim high and we miss a little bit and we have a little more time, great.
I'm sure we'll have plenty of things to do to use that extra time to be ready for, you know,
whatever powerful AI does come online.
but yeah I guess I don't
my worldview hasn't changed all that much
as a result of these
summer's developments
anecdotally I don't hear as much about AI
27 or situational awareness
to the same degree I do talk to some people
who've just moved it a few years back
to your point
but yeah
Dorcas had his whole thing around
you know he still believes in it but sort of
you know maybe because this gap in continual learning
or something to the effect
that maybe it's just going to be
a bit slower to diffuse
and, you know,
meters paper, as you mentioned,
showed that engineers are less productive
and so maybe there's less of a sort of concern
around, you know,
people being replaced to the next few years
in mass.
I think when we spoke maybe a year ago about this,
or I think you said something like 50% of 50% of jobs.
I'm curious if that's still your,
your litmus test or how do you think about it?
Well, for one thing, I think that meter paper is worth unpacking a little bit more
because this was one of those things.
And I'm a big fan of meter and I have no, you know, no shade on them because I do think
do science, publish your results?
Like, that's good.
You don't have to make every experimental result and everything you put out conform to a
narrative.
But I do think it was a little bit.
it was a little bit too easy for people who wanted to say that,
oh, this is all nonsense, to latch on to that.
And, you know, again, there's something there that I would kind of put in the Cal Newport
category, too, where for me, maybe the most interesting thing was the users thought
that they were faster when, in fact, they seemed to be slower.
So that sort of misperception of oneself, I think, is really interesting.
Personally, I think there's some explanations for that that include, like, hitting go on the agent, going to social media and scrolling around for a while and then coming back.
The thing might have been done for quite a while by the time I get back.
So honestly, one like really simple, and we're starting to see this in products, one really simple thing that the products can do to address those concerns is just provide notifications.
Like the thing is done now.
So, you know, stop scrolling and come back and check its work.
that in terms of just clock time
it would be interesting to know
what applications did they have open
maybe they took a little longer
with Cursor than doing it on their own
but how much of the time was Cursor
the active window and how much of it was
some other random distraction while they were waiting
but I think a more fundamental issue with that study
which again wasn't really about the study design
but just in the sort of
interpretation and kind of digestion
of it
some of these details got lost.
They basically tested the models
or the product cursor in the area
where it was known to be least able to help.
This study was done early this year,
so it was done with, you know,
kind of one, depending on how you want to count,
right, a couple releases ago,
with code bases that are large,
which, again, strains the context window,
and, you know,
that's one of the frontiers that has been moving.
Very mature codebases with high standards for coding
and developers who really know their code bases super well,
who've made a lot of commits to these particular codebases.
So I would say that's basically the hardest situation
that you could set up for an AI
because the people know their stuff really well,
the AI doesn't, the context is huge,
people have already absorbed that through working,
on it for a long time. The AI doesn't have that knowledge. And again, a couple generations
ago, models. And then a big thing, too, is that the people were not very well versed in the
tools. Why? Because the tools weren't really able to help them yet. I think the sort of mindset
of the people that came into the study in many cases was like, well, I haven't used this all that
much because it hasn't really seemed to be super helpful. They weren't wrong in that
assessment, given the, you know, the limitations. And you could see that in terms of the
some of the instructions and the help that the meter team gave to people. One of the things that
is in the paper that they would, if they noticed that you weren't using Kirster super well, they
would give you some feedback on how to use it better. One of the things that they were telling
people to do is make sure you at tag a particular file to bring that into context for the model
so that the model has, you know, the right context. And that's literally,
like the most basic thing that you would do in cursor, you know, that's like the thing you
would learn in your first hour, your first day of using it. So it really does suggest that these
were, you know, while very capable programmers, like basically, mostly novices when it came
to using the AI tools. So I think the result is real, but I just, I would be very cautious
about generalizing too much there. In terms of, I guess what else, what was the other question?
And what is the expectation for jobs?
I mean, we're starting to see some of this, right?
We are definitely seeing no less than like Mark Banioff has said that they've been able to cut a bunch of headcount
because they've got AI agents now that are responding to every lead.
Klarna, of course, has said very similar things for a while now.
They also, I think, have been a little bit misreported in terms of like, oh, they're backtracking
off of that because they're actually going to keep some customer service people, not none.
And I think that's a bit of an overreaction.
Like they may have some people who are just, you know, insistent on having a certain experience
and maybe they want to provide that.
And that makes sense.
You know, it doesn't, I think you can have a spectrum of service offerings to your customers.
I once coded up a pricing page for a set.
And I actually just vibe coded up a pricing page for a SaaS company that was like basic
level with AI sales and service is one price.
If you want to talk to human sales, that's a higher price.
And if you want to talk to human sales and support, that's a third higher price.
And so, like, literally, that might be what's going on, I think, in some of these cases.
And it could very well be a very sensible option for people.
But I just, I do see the intercom I've got an episode coming up with.
They now have this fin agent that is solving like 65% of customer service tickets.
that come in. So, you know, what's that going to do to jobs? Are there really, like, three
times as many customer service tickets to be handled? Like, I don't know. I think there's kind of a
relatively inelastic supply. Maybe you get somewhat more tickets if people expect that they're going
to get better, faster answers, but I don't think we're going to see, like, three times more
tickets. By the way, that number was like 55 percent three or four months ago. So, you know,
as they ratchet that up, the ratios get really hard, right? At half, ticket resolution,
in theory, maybe you get some more tickets,
maybe you don't need to adjust headcount too much,
but when you get to 90% ticket resolution,
you know, are you really going to have 10 times as many tickets
or 10 times as many hard tickets that the people have to handle?
It seems just really hard to imagine that.
So I don't think these things go to zero,
probably in a lot of environments,
but I do expect that you will see significant headcount reduction
in a lot of these places.
And the software one is,
really interesting because the elasticity is are really unknown. You know, you can potentially
produce X times more software per user or, you know, per cursor user or per developer at your
company, whatever. But maybe you want that. You know, maybe there is no limit or no, you know,
maybe the regime that we're in is such that if there's, you know, 10 times more productivity, that's
all to the good. And, you know, we still have just as many jobs because we want 10 times more
software. I don't know how long that lasts. Again, the ratios start to get challenging at some
point. But yeah, I think the bottle, you know, the old tile account thing comes to mind. You are a
bottleneck, you are a bottleneck? You are a bottleneck? I think more often it is, are people really trying
to get the most out of these things? And, you know, are they using best practices? And have they
have they really put their minds to it or not? You know, often the real barrier is there. I was,
I've been working a little bit with a company that is doing basically government doc review.
I'll obstruct a little bit away from the details.
Really gnarly stuff, like scanned documents, you know, handwritten filling out of forms.
And they've created this auditor, AI agent that just won a state-level contract to do the audits on like a million transactions a year.
of these, you know, these packets of documents,
again, scanned, handwritten, all this kind of crap.
And they just blew away the human workers
that were doing the job before.
So where are those workers going to go?
Like, I don't know.
I don't, they're not going to have 10 times as many transactions.
You know, I can be pretty confident in that.
Are there going to be a few still that are there
to supervise the AIs and handle the weird cases
and, you know, answer the phones?
Sure.
Maybe they won't go anywhere.
You know, the state, you know, the state may do a,
strange thing and just have all those people like sit around because they can't bear to fire
him. Like who knows what the ultimate decision will be. But I do see a lot of these things where
I'm just like when you really put your mind to it and you identify what would create real leverage
for us. Can the AI do that? Can we make it work? You can take a pretty large chunk out of
high volume tasks very reliably in today's world. And so the impacts I think are starting to be
seen there on a lot of jobs. Humans, I think, are, you know, the leadership is maybe the
bottleneck or the will in a lot of places might be the bottleneck. And software might be an
interesting case where there is just so much pent-up demand, perhaps, that it may take a little
longer to see those impacts because you really do want, you know, 10 or 100 times as much software.
What is, yeah, let's talk about code because it's, you know, it's where Anthropic made a big
bet early on, you know, perhaps inspired by the sort of automated researcher, you know,
recursive self-improvement, you know, sort of, you know, desired future. And then we saw
open AI make moves there as well. When we flesh that out or talk a little about, you know,
what inspired that and where you see that going? You know, utopia or dystopia is really the
big question there, I think, right? I mean, is maybe one part technical, two parts social in terms of
why code has been so focal.
The technical part is that it's really easy to validate code.
You generate it, you can run it.
If you get a runtime error, you can get the feedback immediately.
It's somewhat harder to do functional testing.
Replit recently, just in the last like 48 hours,
released their V3 of their agent.
And it now, in addition to, you know, code, code,
try to make your app work, V2 of the agent would do that.
And it could go for minutes and, you know,
in some cases generate dozens of files
and I've had some magical experiences with that
where I was like, wow, you just did that whole thing
in one prompt and it, like, worked amazing.
Other times, it will sort of code for a while
and hand it off to you and say, okay, does it look good?
Is it working?
And you're like, no, it's not.
I'm not sure why you get into a back and forth with it.
But the difference between V2 and V3
is that instead of handing the baton back to you,
it now uses a browser and the vision aspect of the models
to go try to do the QA itself.
So it doesn't just say, okay, hey, I tried my best.
best wrote a bunch of code like let me know if it's working or not it takes that first pass at figuring
out if it's working and you know again that that really improves the flywheel just how much you can do
how much you can validate how quickly you can validate it the the speed of that loop is really key
to the pace of improvement so it's a problem space that's pretty amenable to the sorts of you know
rapid flywheel techniques second of course they they're all coders right at the
these places, so they want to, you know, solve their own problems.
That's, like, very natural.
And third, I do think on the, you know, sort of social vision competition,
who knows where this is all going, they do want to create the automated AI researcher.
That's another data point, by the way, from, this was from the 03 system card.
They showed a jump from, like, low to mid single digits to roughly 40%.
of PRs actually checked in by research engineers at OpenAI
that the model could do.
So prior to 03, not much at all.
You know, low to mid single digits.
As of 03, 40%.
I'm sure those are the easier 40% or whatever.
Again, there will be, you know, caveats to that.
But that's, you're entering maybe the steep part of the S curve there.
And that's presumably pretty high-end, you know,
I don't know how many problems they have at OpenAI,
but presumably, you know, not that many relative to the rest of us
that are out here making generic web apps all the time.
So, you know, at 40%, you've got to be starting to, I would think,
get into some pretty hard tasks, some pretty high value stuff.
You know, at that, at what point does that ratio really start to tip
where the AI is like doing the bulk of the work?
GPD5 notably wasn't a big update over 03 on that particular measure.
I mean, it also wasn't calling back to the simple QA thing.
GPT5 is generally understood to not be a scale up relative to 40 and 03.
And you can see that in the simple QA measure.
It basically scores the same on these long-tail trivia questions.
It's not a bigger model that has absorbed, like, lots more world knowledge.
It is, you know, Cal is right.
I think it is analysis that it's post-training.
But that post-training, you know, is potentially entering the steep part of the S-curve
when it comes to the ability to do even the kind of hard problems
that are happening at Open AI on the research engineering front.
And, you know, yikes.
So I'm a little worried about that, honestly.
The idea that we could go from these companies having a few hundred research
engineer people to having, you know, unlimited overnight.
And like, what would that mean in terms of how much things could change?
And also just our ability to steer that overall,
process. I'm not super comfortable with the idea of the companies tipping into a recursive
self-improvement regime, especially given the level of control and the level of unpredictability
that we currently see in the models, but that does seem to be what they are going for.
So in terms of like why, I think this has been the plan for quite some time. Even you remember
that leaked anthropic fundraising deck from maybe two years ago where they said that in 2025 and
26, the companies that train the best models will get so far ahead that nobody else will be
able to catch up. I think that's kind of what they meant. I think that they were projecting then
that in the 25, 26 time frame, they'd get this automated researcher. And once you have that,
how's anybody who doesn't have that going to catch up with you? Obviously, some of that remains
to be validated. But I do think they have been pretty intent on that for a long time.
Five years from now, are there more engineers or fewer engineers?
I tend to think less.
You know, already, if I just think about my own life and work,
I'm like, would I rather have a model
or would I rather have like a junior marketer?
I'm pretty sure I'd rather have the model.
Would I rather have the models or a junior engineer?
I think I'd probably rather have the models in a lot of cases.
I mean, it obviously depends on, you know, the exact person you're talking about.
But truly forced choice today.
Now, and then you've got cost adjustment as well, right?
I'm not spending nearly as much on my cursor subscription as I would be on a, you know,
an actual human engineer.
So even if they have some advantages, you know, and I also have not scaffolded,
I haven't gone full co-scientist, right, on my cursor problems.
I think that's another interesting, you start to see why folks like Sam Altman are so focused
on questions like energy and the seven trillion dollar buildout because these power law things are
weird and, you know, to get incremental performance for 10x the cost is weird. It's definitely
not the kind of thing that we're used to dealing with. But for many things, it might be worth it,
and it still might be cheaper than the human alternative.
know, if it's like, well, cursor costs me, whatever, 40 bucks a month or something, would I pay
400 for, you know, however much better? Yeah, probably. Would I pay 4,000 for however much better?
Well, it's still, you know, a lot less than a full-time human engineer. And the costs are
obviously coming down dramatically too, right? That's another huge thing. GPD4 was way more expensive.
It's like 90, it's like a 95% discount from GPD4 to GPT5. That's, you know, no small.
thing, right? I mean, it's Apple's dabble is a little bit hard because the chain of thought
does spit out a lot more tokens. And so you get, you give back a little, on a per token basis,
it's dramatically cheaper. More tokens generated, you know, just does eat back into some of that
savings. But everybody seems to expect the trends will continue in terms of prices continuing to
fall. And so, you know, how many more of these like price reductions do you have to, to then be
able to, you know, do the power law thing a few more times, I guess I think, I think less.
And I think that's probably true, even if we don't get like full-blown AGI that's, you know,
better than humans at everything.
I think you could easily imagine a situation where of however many million people are
currently employed as professional software developers, some top tier of them that do the hardest
things can't be replaced.
but there's not that many of those
and the real like rank and file
you know the people that over the last 20 years
were told learn to code you know
that'll be your thing like the people that are the really top top
people didn't need to be told to learn to code right
they just it was their thing they had a passion for it
they were amazing at it um we may not
it wouldn't shock me if we like still can't replace those people
in three four five years time
but I would be very surprised if you can't get
your nuts and bolts
web app, mobile app type things
spit out for you for far less
and far faster than
and probably honestly with significantly higher quality
and less back and forth
with an AI system than with your kind of
middle of the pack developer
in that time frame.
One thing I do want to call out, you know,
there are definitely people have concerns
about progress moving too fast,
but there's also concern,
and maybe it's rising about progress,
not moving fast enough in the sense that,
you know,
a third of the stock market is MAG7.
You know, AI Capax is, you know,
over 1% of GDP.
And so we are kind of relying on some of this progress
in order to sort of sustain our economy.
Yeah.
And with the, you know,
another thing that I would say has been slower to materialize
than I would have expected are,
AI culture wars
or sort of the
ramping up of protectionism of various
industries. We just saw
Josh Hawley
I don't know if he introduced a bill
or just said he intends to introduce a bill
to ban self-driving cars nationwide.
You know,
God help me.
I've dreamed of self-driving cars
since I was a little kid, truly, like sitting at red lights.
I used to be like,
there's got to be a way.
I think we took a Waymo together.
Yeah, and it's so good.
And the safety, you know, I think whatever people want to argue about jobs,
it's going to be pretty hard to say 30,000 Americans should die every year
so that people's incomes don't get disrupted.
It seems like you have to be able to get over that hump and say, like,
saving all these lives, if nothing else, is just really hard to argue against.
But we'll see.
You know, I mean, he's not without influence, obviously.
So, yeah, I mean, I am very much on team abundance, and, you know, my old mantra, I've been saying this less lately, but adoption accelerationalist, hyper-scaling pauser, the tech that we have, you know, could do so, so much for us even as is.
I think if progress stopped today, I still think we could get to 50 to 80% of, you know,
of work automated over the next like five to 10 years.
It would be a real slog.
You'd have a lot of, you know,
co-scientist type breakdowns of complicated tasks to do.
They have a lot of work to do to go sit and watch people
and say, why are you doing it this way?
What's going on here?
You'd handled this one differently.
Why did you handle that one differently?
All this tacit knowledge that people have
and the kind of know-how procedural, you know,
just instincts that they've developed over time.
Those are not documented anywhere.
not in the training data, so the AIs haven't had a chance to learn them.
But again, when I say like no breakthroughs, I still am allowing there for like, you know,
fine-tuning of things to just like, the capabilities that we have that haven't been applied
to particular problems yet.
So just going through the economy and just sitting with people and being like, why are you doing
this?
You know, let's document this.
Let's get the, you know, the model to learn your particular niche thing.
That would be a real slog.
And in some ways, I kind of wish that were the future that we were going to get.
because it would be a methodical, you know, kind of one step, one foot in front of the other, you know, no quantum leaps.
Like, it would probably feel pretty manageable, I would think, in terms of the pace of change.
Hopefully society could, you know, could absorb that and kind of adapt to it as we go without, you know, one day to the next, like, oh my God, you know, all the drivers, so, you know, are getting replaced or that one would be a little slower because you do have to have the actual physical buildout.
But in some of these things, you know, customer service could get rant down real fast, right?
Like if a call center has something that they can just drop in,
and it's like, this thing now answers the phones and talks like a human
and has a higher success rate and scales up and down.
One thing we've seen at Waymark, a small company, right?
We've always prided ourselves on customer service.
We do a really good job with it.
Our customers really love our customer success team.
But I looked at our intercom data, and it takes us like half an hour to resolve tickets.
We respond really fast.
We respond in like under two minutes most of the time.
But when we respond, you know, two minutes is still long enough
that the person has gone on to do something else, right?
It's the same thing as with the cursor thing that we were talking about earlier, right?
They've tabbed over to something else.
So now we get the response back in two minutes,
but they are doing something else.
So then they come back at, you know, minute six or whatever,
then they respond.
But now our person that's gone and done something else.
So the resolution time, even for like simple stuff,
can be easily a half an hour,
and the AI, you know, it just responds instantly, right?
So you don't have to have that kind of back and forth.
You're just in and out.
So I do think some of these categories could be really fast changes.
Others will be slower.
But yeah, I mean, I kind of wish we had that.
I kind of wish we had that slower path in front of us.
My best guess, though, is that we will probably continue to see things that will be significant leaps
and that there will be, like, actual disruption.
The other one that's come to mind recently, you know, maybe we can get the abundance department on these new antibiotics.
Have you seen this development?
No.
Tell us about it.
I mean, it's not a language model.
I think that's another thing people really underappreciate or that, you know, you could kind of look back at GPT 4 to 5 and then imagine a pretty easy extension of that.
So GPT4, initially when it launched,
we didn't have image understanding capability.
They did demo it at the time of the launch,
but it wasn't released for some months later.
The first version that we had could understand images,
could do a pretty good job of understanding images,
still with, like, jagged capabilities and whatever.
Now, with the new nanobanana from Google,
you have this, like, basically Photoshop-level ability
to just say, hey, take this thumbnail.
Like, we can take our two feeds right now,
you know, take a snapshot of you, a snapshot of me,
put them both into nanobanana and say,
generate the thumbnail for the YouTube preview
featuring these two guys,
put them in the same place, same background, whatever.
It'll mash that up.
You can even have it, you know, put text on top,
progress since GPD4, whatever we want to call it.
GPD5 is not a bust, and it will spit that out.
And you see that it has this deeply integrated understanding
that bridges language and image.
And that's something that it can take in,
but now it's also something can put out
as part of one core model
with like a single unified intelligence.
That I think is going to come to a lot of other things.
We're at the point now with these biology models
and material science models
where they're kind of like the image generation models
of a couple years ago.
They can take a real simple prompt
and they can do a generation.
but they're not deeply integrated
where you can have like a true conversation
back and forth and
have that kind of
unified understanding that bridges
language and these other modalities
but even so it's been enough
for this group at MIT
to use some of these
relatively
narrow purpose-built biology models
and create totally
new antibiotics, new
in the sense that they have a new mechanism
of action like they're
They're affecting the bacteria in a new way, and notably they do work on antibiotic-resistant bacteria.
This is some of the first new antibiotics we've had in a long time.
Now they're going to have to go through, you know, when I say get the abundance department on it,
it's like, where's my operation warp speed for these new antibiotics, right?
Like we've got people dying in hospitals from drug-resistant strains all the time.
why is nobody, you know, crying about this?
I think one of the things that's happening to our society in general
is just so many things are happening at once.
It's kind of the, it's like the flood the zone thing,
except like there's so many AI developments flooding the zone
that nobody can even keep up with all of those.
And that's come for me, by the way, too.
I would say two years ago I was like pretty in command of all the news
and a year ago I was starting to lose it.
And now I'm like, wait a second, there was new antibiotics developed.
You know, I'm kind of missing things, you know,
just like everybody else,
my best efforts.
But the key point there is AI is not synonymous with language models.
There are AIs being developed with pretty similar architectures for a wide range of different
modalities.
We have seen this play out with text and image where you had your text-only models and you
had your image-only models and then they started to come together and now they've come really
deeply together.
And so I think you're going to see that across a lot of other modalities over time as well.
And there's a lot more data there.
You know, we might, I don't know what it means to, like, run out of data.
In the reinforcement learning paradigm, there's always more problems, right?
There's always something to go figure out.
There's always something to go engineer.
The feedback is starting to come from reality, right?
That was one of the things Elon talked about on the Kroc 4 launch was like,
maybe we're running out of problems we've already solved.
And, you know, we only have so much of those sitting around in inventory.
You only have one internet.
You know, we only have so much of that stuff.
But over at Tesla, over at SpaceX,
like we're solving hard engineering problems
on a daily basis, and they seem to be never-ending.
So when we start to give the next generation of the model
these power tools, the same power tools
that the professional engineers are using at those companies
to solve those problems,
and the AI start to learn those tools,
and they start to solve previously unsolved engineering problems,
like that's going to be a really powerful signal
that they will be able to learn from.
And now, again, fold in those other modalities, right?
the ability to have sort of a sixth sense
for the space of small molecules,
the space of proteins, you know,
the space of material science possibilities.
When you can bridge or unify
the understanding of language and those other things,
I think you start to have something
that looks kind of like superintelligence,
even if it's like not able to, you know,
write poetry at a superhuman level necessarily,
its ability to see in these other spaces
is going to be truly a superhuman
thing that I think will be pretty hard to miss.
You said that that was one thing that Cal's analysis missed is just the lack of appreciation
for non-language modalities and how they drive in some of the innovations that you're talking
about.
Yeah, I think people are often just kind of equating the chatbot experience with AI broadly.
Yeah.
And, you know, that conflation will not last probably too much longer because we are going
to see self-driving cars unless they get banned.
and that's a very different kind of thing.
And talk about your impact on jobs, too, right?
It's like, what, four or five million professional drivers in the United States?
That is a big deal.
I don't think most of those folks are going to be super keen to learn to code.
And even if they do learn to code, I'm not sure how long that's going to last.
So that's going to be a disruption.
And then general robotics is like not that far behind.
And this is one area where I do think,
China might be actually ahead of the United States right now, but regardless of whether that's
true or not, you know, these robots are getting really quite good, right? They can like walk over
all these obstacles. And these are things that a few years ago, they just couldn't do it all. You know,
they could barely balance themselves and walk a few steps under ideal conditions. Now you've got
things that you can like literally do a flying kick and it'll like absorb your kick and shrug it off
and just keep going, you know, write itself and continue on.
way. Super rocky, you know, uneven terrain. All these sorts of things are getting quite good.
You know, the same thing is working everywhere. I think one of the, the other thing that's kind of,
there's always a lot of detail to the work. So it's a sort of inside view, outside view, right? Inside
view, you're like, there's always this minutia. There's always, you know, these problems that we had and
things we had to solve. But you zoom out and it looks to me like the same basic pattern.
is working everywhere.
And that is like,
if we can just gather enough data to do some pre-training,
you know, some kind of raw, rough, you know,
not very useful, but just enough at least to kind of get us going,
then we're in the game.
And then once we're in the game,
now we can do this flywheel thing of like, you know,
rejection sampling, like have it try a bunch of times,
take the ones where it succeeded, you know,
we fine tune on that.
the RLHF, you know, feedback, the sort of preference, take two, which one was better, fine-tune on that, the reinforcement learning, all these techniques that have been developed over the last few years, it seems to me they're absolutely going to apply to a problem like a humanoid robot as well. And that's not to say there won't be a lot of work to figure out exactly how to do that. But I think the big difference between language and robotics is really, mostly, that there just wasn't a huge repository.
of data to train the robots on
at first. And so you had to do a lot of
hard engineering to make it work
at all, you know, to even
stand up, right? You had to have all these control systems
and whatever, because there was nothing
for them to learn from in the way that
the language models could learn from the internet.
But now that they're working at least a little bit,
you know, I think all these kind of refinement
techniques are going to work.
And it'll be interesting to see if they can get the error rate low
enough that I'll actually like allow one in my house around
my kids. You know,
that they'll probably be better deployed in, like, factory settings first,
more controlled environments than the chaos of my house,
as you have seen in this recording.
But I do think they're going to work.
What's the state of agents more broadly at the moment?
Where do you see things playing out?
Where do you see it go?
Well, broadly, I think, you know, it's the task length story from meter
of the, you know, every seven months or every four months doubling time.
we're at two hours-ish with GBT-5.
Replit just said their new agent V3 can go 200 minutes.
If that's true, that would even be a new high point on that graph.
Again, it's a little bit sort of apples to oranges because they've done a lot of scaffolding.
How much have they broken it down?
How much scaffolding are you allowed to do with these things before you sort of are off of their chart
and onto maybe a different chart?
but if you extrapolate that out a bit and you're like, okay, take the four-month case just to be a little aggressive, that's three doublings a year. That's 8x task length increase per year. That would mean you go from two hours now to two days in one year from now. And then if you do another 8x on top of that, you're looking at basically say two days to two weeks of work in two years.
That would be a big deal, you know, to say the least, if you could delegate an AI two weeks
worth of work and have it do a, you know, even half the time, right?
The meter thing is that they will succeed half the time on tasks of that size.
But if you could take a two-week task and have a 50% chance that an AI would be able to do
it, even if it did cost you a couple hundred bucks, right?
It's like, well, that's, again, a lot less than it would cost to hire a human to do it.
And it's all on demand.
It's kind of, you know, it's immediately available.
if I'm not using it, I'm not paying anything.
Transaction costs are just like a lot lower.
The whole, you know, many other aspects are favorable for the AI there.
So, you know, that would suggest that you'll see a huge amount of automation in all kinds of different places.
The other thing that I'm watching, though, is the reinforcement learning does seem to bring about a lot of bad behaviors.
Reward hacking being one, you know, any sort of game.
between what you are rewarding the model for and what you really want can become a big issue.
We've seen this in coding in many cases where the AI will,
Claude is like notorious for this, will put out a unit test that always passes, you know,
that just has like return true in the unit test.
Why is it doing that?
Like, well, it must have learned that what we want is for unit tests that pass.
You know, we want it to pass unit tests.
But we didn't mean to write fake unit tests that always pass.
but that technically did satisfy the reward condition.
And so we're seeing those kind of weird behaviors.
With that comes this like scheming kind of stuff.
We don't really have a great handle on that yet.
There is also situational awareness that seems to be on the rise, right?
Where the models are like increasingly in their chain of thought,
you're seeing things like, this seems like I'm being tested.
You know, maybe I should be conscious of what my tester is really looking for.
here, and that makes it hard to evaluate models in tests because you don't know if they're
actually going to behave the same way when they're out in the real world.
So those, you know, I wouldn't say this is a high level or high confidence prediction, but
like one model of the future I've been playing with is the task length keeps doubling
while at the same time these weird behaviors pop up and then are suppressed.
And we have seen in the Claude 4 and in the GPD 5 system cards,
Claude 4 reported, I think, a two-thirds reduction in reward hacking.
And, you know, in GPD5, they reported a few different dimensions,
but, you know, say something similar, reduction in deceptive behavior.
But those behaviors kind of just emerged.
So it's sort of like weird behavior emerges.
Then they sort of figure out how to tamp it down, but not entirely.
presumably in the next generation they'll like tamp it down some more
but maybe some new additional weird behavior could emerge
and then we'll have to kind of tamp that one down
all the while the tasks are expanding in scope
you know with a every four months doubling
so you could end up in a world where
you can delegate really like major things to AI's
but there's some small
but not necessarily totally vanishing chance
that it, like, actively screws you over
in the way that it is trying to do that task.
And so you have a, you know, maybe it's like, okay,
I think here I'm about to get two weeks
worth of work done for a hundred bucks.
Okay, that's amazing.
But there's also a one in 10,000 chance
that it, like, legitimately, you know,
attacks me in a, you know, like, meaningful way.
Some of the things that we have seen,
these are, like, fairly famous at this point,
but in the Claude 4 system card,
they reported blackmailing of the human
the setup was that the AI had access to the engineer's email
and they told the AI that it was going to be replaced
with a less ethical version or something like that.
It didn't want that and it found in the engineer's email
that the engineer was having an affair
so it started to blackmail the engineer
so as to avoid being replaced with a less ethical version.
People I think are way too quick in my view
to move past these anecdotes.
People are sort of often like, well, you know,
they set it up that way and, you know,
that's not really realistic.
But another one was whistleblowing.
You know, there was another thing where they sort of set up this dynamic
where there was some, you know,
unethical, illegal behavior going on.
And again, the model had access to this data.
And it decided to just email the FBI
and tell the FBI about it.
So, first of all, I don't think we really know what we want.
you know, to some degree, maybe you do want AIs to report certain things to authorities.
That could be one way to think about the bioweapon risk, you know, is like, not only should the models refuse, but maybe they should report you to the authorities if you're actively trying to create a bioweapon.
I certainly don't want them to be doing that too much. I don't want to live under the, you know, surveillance of Claude 5 that's always going to be, you know, threatening to turn me in.
but I do sort of want some people to be turned in
if they're doing sufficiently bad things.
We don't have a good resolution society
wide on what we want
the models to even do in those situations.
And I think it's also, you know, it's like,
yes, it was set up, yes, it was research,
but it's a big world out there, right?
We've got a billion users already on these things
and we're plugging them in to our email.
So they're going to have very deep access
to information about it.
about us. You know, I don't know what you've been doing in your email. I hope there's
something too crazy in mine, but like, now I got to think about it a little bit, right?
What did I? Have I ever done anything that I, you know, geez, I don't know.
Or even that it could misconstrue, right? Like, it's obviously not, um, may I didn't even
really do anything that bad, but it just misunderstands what exactly was going on.
So that could be a weird, you know, if there's one thing that could kind of stop the
agent momentum in my view, it could be like,
the one in 10,000 or whatever, you know, we ultimately kind of push the really bad
behaviors down to is maybe still just so spooky to people that they're like, I can't
deal with that, you know, and that might be hard to resolve. So, well, you know, what happens
then? You know, it's hard to check two weeks worth of work every couple hours or whatever,
right? Like, that's part of where the whole, then you bring another AI into check.
Check it. You know, that's, again, where you start to get to the, now I see why we need more electricity and $7 trillion of buildout is, yikes, you know, they're going to be producing so much stuff. I can't possibly even review it all. I need to rely on another AI to help me do the review of the first AI to make sure that if it is trying to screw me over, you know, somebody's catching it. I can't monitor that myself. I think Redwood Research is doing some really interesting stuff like this, where they are trying to get systematic on like, okay, let's just assume.
This is quite a departure from the traditional AI safety work where the big idea traditionally was,
let's figure out how to align the models, make them safe, make them not do bad things, great.
Redwood Research has taken the other angle, which is, let's assume that they're going to do bad stuff.
They're going to be out to get us at times.
How can we still work with them and get productive output and get value without, you know,
fixing all those problems
and that involves like
again all these sort of
AI supervising other AIs
and crypto might have a place
to a role to play in this
another episode coming out soon
with Ilya Polosuhin
who's the founder of NIR
really fascinating guy because he was one of the
eight authors of
The Attention is All You Need paper
and then he started this NIR
company. It was originally
an AI company. They took a huge detour
into crypto because
they were trying to hire task workers around the world
and couldn't figure out how to pay them.
So they were like, this sucks so bad
to pay these task workers
in all these different countries
that we're trying to get data from
that we're going to pivot into a whole blockchain side quest.
Now they're coming back to the AI thing
and their tagline is the blockchain for AI.
And so you might be able to get, you know,
a certain amount of control from, you know,
the sort of,
crypto security that the, the blockchain type technology can provide. But I could see a scenario
where the bad behaviors just become so costly when they do happen that people kind of get
spooked away from using the frontier capabilities in terms of just like how much, you know,
work the AIs can do. But that wouldn't be a, that wouldn't be a pure capability stallout. It would be a,
we can't solve, you know, some of the long-tail safety issues challenge.
And, you know, if that is the case, then, you know, that'll be, that'll be an important fact about the world, too.
I always, nobody ever seems to solve any of these things like 100%, right?
They always, every generation, it's like, well, we reduced hallucinations by 70%.
Oh, we reduced deception by two-thirds.
We reduced, you know, scheming or whatever by however much.
but it's always still there, you know, and if you take the even, you know, lower rate and you multiply it by a billion users and thousands of queries a month and agents running in the background and processing all your emails and, you know, all the deep access that people sort of envision them happening, it could be a pretty weird world where there's just the sort of negative lottery of like AI accidents.
Another episode coming up is with the AI underwriting company and they are trying to bring the insurance industry and all of the,
the wherewithal that's been developed there
to price risk, figure out
how to create standards,
what can we allow, what sort of guardrails
do we have to have to be able to ensure
this kind of thing in the first place?
So that would be another really interesting area to watch
is like, can we sort of financialize those risks
in the same way we have with car accidents
and all these other mundane things?
But the space of car accidents is only so big.
The space of weird things that AIs might do to you,
you know,
have weeks worth of runway is much bigger.
And so it's going to be a hard challenge.
But, you know, people are, people are working.
We got some of our best people working on it.
What do you make the claim that 80% of AI startups have Chinese open models?
What do you make of the claim and the implications?
I think that maybe, that probably is true with the one caveat that it is only measuring
companies that are using open source models at all.
I think most companies are not using open source models.
And I would guess, you know, the vast majority of tokens being processed by American AI startups are their API calls, right, to the usual suspects.
So weighted by actual usage, I would say still the majority, as far as I could tell, would be going to commercial models.
For those that are using open source,
I do think it's true that the Chinese models have become the best.
You know, the American bench there was always kind of thin, right?
It was basically meta that was willing to put in huge amounts of money and resources
and then open source it.
You've got, you know, Paul Allen funded group,
the Allen Institute for AI, AI2.
You know, they're doing good stuff too, but they don't have pre-training,
resources. So they do, you know, really good post-training and open-source their recipes and all
that kind of stuff. So it's not like American and open source is bad. You know, and again,
it's a time. This is another way in which I think you can really validate that things are moving
quickly because if you take the best American open source models and you take them back a year,
they are probably as good, if not a little better than anything that we had commercially
available at the time.
If you compare to Chinese, you know, they have, I think, surpassed.
So there's been like pretty clear change at the frontier.
I think that means that the best Chinese models are like pretty clearly better than
anything we had a year ago, commercial or otherwise.
So, yeah, I mean, that just means like things are moving.
I think that's like, hopefully I've made that case compellingly.
But that's another data point that I think makes it hard to, I don't think you can believe
both that the Chinese models are now the best open source models
and that AI has stalled out and we haven't seen much progress since GPT4.
Like those seem to be kind of contradictory notions.
I believe the one that is wrong is the lack of progress.
In terms of what it means, I mean, I don't really know.
It's, we're not going to stop China.
The whole, I've always been a skeptic of,
the no-selling chips to China thing.
The notion originally was like,
we're going to prevent them from doing, you know,
some super cutting-edge military applications.
And it was like, well, we can't really stop that.
But we can at least stop them from training frontier models.
And then it was like, well, we can't necessarily really stop that.
But now we can, you know, at least keep them from, like,
having tons of AI agents.
Well, we'll have, like, way more AI agents than they do.
And I don't love that.
line of thinking really at all, but one upshot of it potentially is they just don't have enough
compute available to provide inference as a service, you know, to the rest of the world. So instead,
the best they can do is just say, okay, well, we'll train these things and, you know, you can figure
it out. Here you go, like have at it. It's kind of a soft power play, presumably. I did an episode
with Anjene for May 16, Z, who I thought really did a great job.
of providing the perspective of what I started calling countries three through
one ninety three, if the U.S. and China are one and two.
Three through, there's a big gap.
You know, there's like, I think the U.S. is still ahead, but not by that much in terms of
research and, you know, ideas relative to China.
We do have this compute advantage, and that does seem like it matters.
One of the upshots may be that they're open sourcing.
and countries 3-3-193 are significantly behind.
So for them, it's a way to, you know,
try to bring more countries over to the Chinese camp,
potentially in the U.S.-China rivalry.
It seems like the model everybody,
and I don't like this at all,
I don't like technology decoupling.
As somebody who worries about, you know,
who's the real other here?
I always say the real other are the AIs, not the Chinese.
So if we do end up in a sense,
situation where yikes like you know we're seeing some crazy things it would be really nice if we
were on basically the same technology paradigm to the degree that we really decouple and you know not just
the chips are different but maybe the ideas start to become very different publishing gets shut down
you know tech trees evolve and kind of grow apart um that to me seems like a recipe for you know it's
harder to know what the other side has. It's harder to trust one another. It seems to feed into
the arms race dynamic, which I do think is a real existential risk factor. I would hate to see us,
you know, create another sort of mad type dynamic where we all live under the threat of
AI destruction. But that very well could happen. And so, yeah, I don't know. I do kind of,
have some sympathy for the recent decision that the administration made to be willing to sell
the H-20s to China.
And then it was funny that they turned around and rejected them, which to me seemed like a
mistake.
I don't know why they would be rejecting them.
If I were them, I would buy them.
And I would maybe sell inference on the models that I've just been creating, and I would
try to make my money back doing that.
But in the meantime, they can at least, you know, demonstrate the greatness of the Chinese
nation by showing that they're not.
far behind the frontier, and they can also make a pretty powerful appeal to countries
through 193 and say, like, you know, look, you really want to, you see how the U.S. is acting?
In general, you know, you really want to, they cut us off from chips.
They had a even a long, you know, the last administration had an even longer list of countries
that couldn't get chips.
This administration is doing all kinds of crazy stuff.
You know, you get 50% tariffs here, there, whatever.
how do you know you can really rely on them
to continue to provide you AI into the future?
Well, you can rely on us.
We open source to the model.
You can have it.
So, you know, come work with us and buy our chips
because by the way, our models will, you know,
as we mature, they'll be optimized to run on our chips.
So I don't know.
That's a complicated stuff, a complicated situation.
I do think it's true.
I don't think the adoption is as high as that 80%.
I think that is, you know, within that subset
of companies that are doing stuff with open source.
We're going to experiment with that at Waymark,
but to be honest,
we have never done anything
with an open source model in our product to present.
Everything we've ever done has been through commercial.
At this point, we are going to try
doing some reinforcement fine-tuning.
We are going to do that on a Quinn model, I think, first.
So, you know, that'll put us in that 80%.
But I'm guessing that at the end of the day,
we'll take that Quinn model,
we'll do the reinforcement fine-tuning,
and we'll probably get roughly up to as good
as, you know, GPD-5 or Cloud 4 or whatever.
And then we'll say, okay, do we really want to
have to manage inference ourselves?
How much are we really going to save?
And at the end of the day,
I would guess we probably are still going to end up just being like,
eh, we'll pay a little bit more on a monthly bill basis
for one of these frontier models
are a little bit better, maybe still,
and, you know, it's operationally a lot easier.
And they'll have upgrades,
you know, um, so yeah, I mean, of course there's regulated industries.
There's all, there's a lot of places where, you know, you have hard constraints.
You just can't get around and that forces you to do those Chinese things, Chinese models.
Then there's also going to be the question of like, are there back doors in them?
You know, people have seen the sleeper agents project where a model was trained to be good up until a certain point of time.
And, you know, people put the today's date in the system prompt all the time, right?
Today's date is this, you are Claude, you know, here you go.
So then that's going to be another kind of thing for people to worry about.
And we don't really have great.
There have been some studies.
Anthropic did a thing where they trained models to have some hidden objectives
and then challenged teams to figure out what those hidden objectives were.
And with certain interpretability techniques,
they were able to figure that stuff out relatively quickly.
So you might be able to get enough content.
that you take this open source thing,
you know, created by some Chinese company, whatever,
and then put it through, you know, some sort of,
not exactly audit, because you can't trace exactly what's happening,
but some sort of examination, you know, to see,
can we detect any hidden goals or any, you know, secret backdoor,
bad behavior, whatever's, and maybe with enough of that kind of work,
you could be confident that you don't have it.
But the more and more critical of this stuff gets us,
but, you know, again, going back to that task link,
doubling weird behavior now you've got to add into the mix what if they intentionally programmed it
to do certain bad things under certain you know rare circumstances um we're just headed for a really
weird future you know that we've got all these there's there's no limit to it you know all these
things are valid concerns they often are in direct tension with each other um i don't i'm not one who
you know, wants to see one tech company take over the world by any means. So I definitely think we
would do really well to have some sort of broader, more buffered, ecological-like system where,
you know, all the AIs are kind of in some sort of competition, you know, mutual coexistence with each
other. But we don't really know what that looks like. And we don't really know, you know, we don't
really know what an invasive species might look like, you know, when it gets introduced into that
very, you know, nascent and as yet, like, not battle-tested ecology.
So, yeah, I don't know.
Bottom line, I think the future is going to be really, really weird.
Yeah.
Well, I do want to close on an uplifting notes.
So maybe as a, as a Gearing-Tur's closing question, we could get into some areas where
we're already seeing some exciting capabilities emerge and sort of transform the experience,
maybe around education or healthcare or any other areas you want to highlight.
Yeah, it's, boy, it's all over.
One of my mantras is that there's never been a better time to be a motivated learner.
So I think a lot of these things do have kind of, you know, two sides of the coin.
There's the worry that the students are taking the shortcuts and they're, you know,
losing the ability to sustain focus and endure cognitive strain.
Flip side of that is, as somebody who's fascinated by the intersection of AI and biology,
sometimes I want to read a biology paper and I really don't have the background.
an amazing thing to do is turn on voice mode and share your screen with chat GPT and just go through the paper reading it's you don't even have to talk to it most of the time you're doing your reading it's watching over your shoulder and then at any random point you have a question you can verbally say what's this why are they talking about that what's going on with this what is the role of this particular protein that they're referring to or whatever and it will have the answers for you so if you really
want to learn in a sincere way, you know, the things are unbelievably good at helping you do that.
Flipside is you can take a lot of shortcuts and, you know, maybe never have to learn stuff.
On the biology front, you know, again, like, we've got multiple of these sort of discovery
things happening, the antibiotics one we covered.
There was another one that I did another episode on with a Stanford professor named
James Zhao, who created something called the virtual lab.
And basically this was an AI agent that could spin up other AI agents,
depending on what kind of problem it was given.
Then they would go through a deliberative process
where you'd have, you know, one expert in one thing would give its take
and they'd, you know, bat it back and forth.
There was a critic in there that would criticize, you know,
the ideas that had been given.
Eventually they'd synthesize.
Then they were also given some of these narrow specials.
tool. So you have agents using the alpha fold type, not just alpha fold. There's a whole wide, wide
array of those at this point, but using that type of thing to say, okay, well, can we simulate, you
know, how this would interact with that? Agents are running that loop. And they were able to get
this language model agent with specialized tool system to generate new treatments for novel
strains of COVID that had, you know, kind of escaped the previous treatments.
Amazing stuff, right?
I mean, the flip side of that, of course, is, you know, you get the bioweapon risk.
So all these things do seem like they're going to be, even on just the abundance front itself,
right?
Like, we may have a world of unlimited professional private drivers, but we don't really have
a great plan for what to do with the 5 million people that are currently doing that work.
We may have infinite software, but, you know, especially once the 5 million drivers pile into all the coding boot camps and, you know, get coding jobs, I don't know what we're going to do with the 10 million people that we're coding when, you know, 9 million of them become superfluous.
So, yeah, I don't know. I think we're headed for a weird world. Nobody really knows what it's going to look like in five years. There was a great moment at Google's I.O. where they brought up some journalist. I know we're skeptical of journalists.
This is a great moment to, we're going direct, right?
This is a great reason or example of why one would want to do that.
They brought up this person to interview Demis and Sergey Brannan.
They, the guy asked, like, what is search going to look like in five years?
And Sergey Brinner, like, almost spit out his coffee on the stage and was like, search.
We don't know what the world is going to look like in five years.
So I think that's really true.
like the biggest risk I think for so many of us that I, you know,
to include myself here, is thinking too small.
You know, the worst thing I think we could do would be to underestimate how far this thing could go.
I would much rather be, I would much rather be mocked for things happening on twice the time scale that I thought
than to find myself unprepared when they do happen.
So whether it's 27, 29, 31, I'll take.
that extra buffer, honestly, where we can get it. My thinking is just get, you know, get ready
as much and as fast as possible. And again, if we do have a little grace time to, you know,
to do extra thinking, then great. But I would, I think the worst mistake we could make would be to
dismiss and not feel like we need to get ready for big changes. Should we wrap directly on that
or is there any other last note you want to make sure to get across
regarding anything we said today?
One of my other mantras these days is the scarcest resource
is a positive vision for the future.
I do think it's always really striking,
whether it's Sergei or, you know, or Sam Altman or Dario,
like, Dario probably has the best positive vision
of the frontier developer CEOs with machines of love and grace.
But it's always striking to me how little detail there is on these things.
and when they launched GPT40, which was the voice mode,
they were pretty upfront about saying,
yeah, this was kind of inspired by the movie Her.
And so I do think, like, even if you are not a researcher,
you know, not great at math, not somebody who codes,
I think that this technology wave really rewards play.
It really rewards imagination.
I think literally writing fiction might be one of the highest value things
you could do, especially if you could write aspirational fiction that would get people at the
frontier companies to think, geez, maybe we could steer the world in that direction.
Like, wouldn't that be great?
If you could plant that kind of seed in people's minds, it could come from a totally non-technical
place and potentially be really impactful.
Play fiction, I had one other dimension to that, but yeah, play fiction, positive vision
for the future, anything that you could do to offer a positive, oh, behavioral.
too. These days, because you can get the AIs to code so well, I'm starting to see people
who have never coded before. I'm working with one guy right now who he's never coded before,
but does have a sort of behavioral science background, and he's starting to do legitimate
frontier research on how are AI is going to behave under various kind of esoteric circumstances.
So I think nobody should count themselves out.
from the ability to contribute to figuring this out and even to shaping this phenomenon.
It is not just something that the, you know, the technical minds can contribute to at this point.
Literally philosophers, fiction writers, people literally just messing around,
Pliny the jailbreaker, you know, there's, there are almost unlimited cognitive profiles
that would be really valuable to add to the mix.
of people trying to figure out what's going on with AI.
So come one, come all is kind of my attitude on that.
That's a great place to wrap.
Nathan, thank you so much for coming on the podcast.
Thank you, Eric. It's been fun.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
follow us on X, A16Z, and subscribe to our substack at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only.
It should not be taken as legal business, tax, or investment advice,
or be used to evaluate any investment or security,
and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
