a16z Podcast - Marc Andreessen on AI Winters and Agent Breakthroughs
Episode Date: April 3, 2026
This episode originally aired on the Latent Space Podcast. swyx and Alessio Fanelli speak with Marc Andreessen about the arc of AI from its origins in 1943 to today's breakthroughs in reasoning, coding agents, and self-improvement. They cover the parallels between AI scaling laws and Moore's Law, the architectural insight behind Claude Code and the Unix shell, the coming supply crunch in compute, and why the messy reality of 8 billion people means both AI utopians and doomers are too optimistic about the pace of change.
Transcript
This episode originally aired on the Latent Space Podcast.
Marc Andreessen has watched AI cycle through summers and winters for more than 35 years,
from coding in LISP in 1989 to backing the foundation model companies today.
He argues that the current moment is not another false start,
but the payoff from eight decades of foundational research,
catalyzed by four distinct breakthroughs,
large language models, reasoning, agents,
and self-improvement.
He also makes the case that the combination of a language model, a Unix shell, and a file
system represents one of the most important software architectures in a generation.
swyx and Alessio Fanelli speak with Marc Andreessen,
co-founder and general partner at a16z.
Something about AI that causes the people in the field, I would say,
to become both excessively utopian and excessively apocalyptic.
Having said that, I think what's actually happened is an enormous amount of
tactical progress that built up over time.
And like, for example, we now know the neural network is the correct architecture.
And I will tell you, like, there was a 60-year run — or even 70 years — where that was controversial.
And so the way I think about what's happening is basically: the period we're in right now, I call it the 80-year overnight success.
Right.
Which is like, it's an overnight success because it's like, bam, you know, ChatGPT hits, and then o1 hits, and then, you know, OpenClaw hits.
And like, you know, these are like overnight, like radical, transformative successes.
But they're drawing on an 80-year sort of wellspring, a backlog, you know, of ideas and thinking. It's not just that it's all brand new, it's that it's an
unlock of all of these decades of, like, very serious hardcore research. If I were 18, like, this is 100 percent what I would be spending all of my time on. This is like such an incredible conceptual
breakthrough. Before we get into today's episode, I just have a small message for listeners.
Thank you. We would not be able to bring you the AI engineering, science, and entertainment
content that you so clearly want if you didn't choose to also click in and tune in to our content.
We've been approached by sponsors on an almost daily basis, but fortunately, enough of you actually
subscribe to us to keep all this sustainable without ads, and we want to keep it that way.
But I just have one favor to ask all of you.
The single most powerful, completely free thing you can do is to click that subscribe button.
It's the only thing I'll ever ask of you, and it means absolutely everything to me and my team
that works so hard to bring Latent Space to you each and every week.
If you do it, I promise you we'll never stop working to make this show.
even better. Now let's get into it.
Everyone, welcome to the Latent Space Podcast. This is Alessio, founder of Kernel Labs, and I'm
joined by swyx, editor of Latent Space. Hello, and we're at a16z with the A, Marc — so welcome.
Yes. Yes, A and what, half of 16?
A1.
A1. Exactly. Apparently this is the final few days in your current office, you're moving across
the road. We have a move and we have some projects underway, but yeah, this is actually —
This is the original.
We're in actually the original office.
We're in the,
we're in the whole thing.
It's beautiful.
Yeah,
great.
Thank you.
So I have to come out.
This is a,
you know,
I wanted to pick a spicy start.
In October 2022, I had just made friends with Roon,
and I wanted to give him something to sort of be spicy about.
And I said, it'll never not be funny
that a16z was constantly going, "the future is where the smart people choose to spend their time,"
and then going deep into crypto and not into AI.
And that was in October '22,
and Roon says there was an internal meeting
at a16z to reorient around generative AI.
Obviously you did, but was there a meeting?
What was that?
I mean, look, I've been doing AI since the late 80s.
Yeah. So I don't know.
As far as I'm concerned, this stuff is all Johnny come lately.
Yeah, I mean, look, we've been doing AI our entire existence.
I mean, we've been doing AI, machine learning, deep learning, you know — we've been doing
this stuff from way back in the beginning, obviously.
AI is just core to computer science. I actually view them as quite,
quite continuous. You know, Ben and I both have computer science degrees.
You know, we both, Ben and I actually both are old enough to remember the actual AI boom in the 1980s.
There was a big AI boom at the time.
And it went under names like expert systems.
And there were, like, LISP and LISP machines.
I coded in LISP.
I was coding in LISP in 1989 when that was the language of the AI future.
Yeah.
So this is something that we're completely comfortable with,
have been doing the whole time, and are very enthusiastic about.
Is there a strong "this time is different"?
Because my closest analog was 2016-17.
That was an AI boom and it petered out very, very quickly.
Just in terms of investing.
Sort of, sort of.
Investment excitement.
Although that's really when the Nvidia phenomenon really took off.
I would say it was in that period when it was very clear that — at the time,
the vocabulary was more machine learning —
but it was very clear at that time that machine learning was hitting some sort of takeoff
point.
Yeah.
Well, and as you guys,
you guys have talked about this at length on your thing.
But, you know, if you really track what happened, I think the real story is it was,
it was the AlexNet breakthrough, basically, in like 2012.
That was the real knee in the curve.
and then it was obviously the transformer breakthrough in 17.
Yeah.
And then everything that followed.
But, you know, look — machine learning — one of my, you know, kind of projects has been working with Facebook since 2004, and I've been on the board since 2007.
And, you know, they started using machine learning very early.
And they've used it basically, you know, for like 20 years for content feed optimization and advertising optimization.
And obviously, you know, financial services — many, many companies, many different sectors have been doing this.
And so it's like one of these things.
It's like it's not a single thing.
Like it's like layers, right?
And the layers arrive at different paces, but they kind of build up.
Yeah.
They kind of build up over time.
And then, yeah, and then look, in retrospect, it was 2017 was kind of the, you know,
the key point with the transformer.
And then as you guys know, there was this really weird like four-year period where it's like the transformer existed.
And then it was just like, let's go.
Yeah.
Well, but between 2017 and 2021, I mean,
that was the era of which like companies like Google had internal chatbots,
but they weren't letting anybody use them.
Yeah.
Right.
And then, you know, OpenAI developed GPT-2,
and then they told everybody, this is way too dangerous to deploy, right?
You know, we can't possibly let normal people use this thing.
And then you guys, I'm sure remember AI dungeon.
So there was like a year where like the only way for a normal person to use GPT
was in AI dungeon.
Yeah.
And so we would do this.
You'd go in there and you'd pretend to play Dungeons and Dragons.
In reality, you're just trying to talk to, talk to GPT.
And so there was this long period.
And, you know, the big companies are cautious — the big companies were cautious.
By the way, it took OpenAI — you know, they talk about this —
it took OpenAI time to actually adjust, to kind of redirect their research path.
I think it was at Rosewood, right?
The dinner that founded Open AI was right there.
Right.
But that dinner would have taken place in 2018.
The formation of Open AI as late as 2018?
Sorry.
No, I'm wrong.
Probably earlier.
They just celebrated a 10-year anniversary.
So it is 2025.
Yeah.
So 2015.
Yeah.
2015, yeah, 2015. But then Alec Radford did GPT-1 in, what, probably '17, '18.
And then GPT-3 was, what, 2020, because that became Copilot immediately.
Even Open AI, which has been, you know, the leader of this thing in the last decade,
you know, even they had to adapt and lean into the new thing. And so, yeah, I think it's just
this process of basically sort of wave after wave, layer after layer, you know, building on itself.
And then you kind of get these catalytic moments where the whole thing,
pops. And obviously that's what's happening now.
Is it useful to think about, will there be an AI winter? Because there's always these patterns.
Like, is this endless summer? It's something I constantly think about because do I get,
do I just like, just get endlessly hyped and just trust that I will only be early and never wrong?
Or will there be a winter?
So you can say the following. There's something about AI that has led
to this repeated pattern — and you guys know this — it's winter, summer, winter.
I've watched it. And it goes back 80 years.
80 years. So the original neural network paper was 1943, right, which is amazing. It was that far back.
And then — I don't know if you guys have ever talked about this on your show — but there was a big AI conference at Dartmouth in 1955, '56.
And they got an NSF grant for all the AI experts at the time to spend the summer together. And they figured if they had 10 weeks together, they could get AGI out the other end.
And they got there — by the way, they got the grant, they got the 10 weeks. And then,
You know, no AGI.
And like I said, I lived through the 80s version of this where there was a big boom in a crash.
And so there is this thing.
There is something about AI that causes the people in the field, I would say, to become both excessively utopian and excessively apocalyptic.
And it's probably on both sides of, like, the boom-bust cycle.
You kind of see that play out.
Having said that, I think what's actually happened — and we now know this in retrospect — is an enormous amount of technical progress that built up over time.
And like, for example, we now know the neural network is the correct architecture.
And I will tell you, there was a 60-year run — or even 70 years — where that was controversial. And we now know
that that's the case. And so we now, you know, everything we're building on today just sort of derives
from the original idea in 1943. And so, so in retrospect, we now know that like these, these guys
were right, you know, they would get the timing wrong and they thought, you know, capabilities
would arrive faster or they were, it could be turned into businesses sooner or whatever. But like,
they were fundamentally, the scientists who worked on this over the course of decades were fundamentally
correct about what they were doing. And the, and the payoff from, from all their work is happening now.
And so the way I think about what's happening is basically: the period we're in right now, I call it the 80-year overnight success, right?
Which is like, it's an overnight success because it's like, bam, you know, ChatGPT hits, and then o1 hits, and then OpenClaw hits — these are like radical overnight transformative successes. But they're drawing on an 80-year sort of wellspring,
a backlog, you know, of ideas and thinking.
It's not just that it's all brand new.
It's that it's an unlock of all of these decades of like very serious hardcore research and thinking.
Look, there were AI researchers who spent their entire lives on this. They got their PhD. They did research for 40 years. They
retired. In a lot of cases, they passed away and they never actually saw it work.
Yeah. So sad. It is, it is sad. It is sad. Geoff Hinton was like the last guy.
Yeah, yeah. Well, there was a guy, Allen Newell. I mean, there's tons of them —
John McCarthy. John McCarthy was like one of the inventors of the field. He's one of the guys
who organized the Dartmouth conference. And, you know, he taught at Stanford for 40 years and passed away,
I don't know, whatever, 10 years ago or something. Never, never actually
got to see it happen. But like, it is amazing in retrospect. Like, these guys were incredibly
smart and they worked really hard and they were correct. So anyway, so then it's like, okay,
you know, as I say, history doesn't repeat, but it rhymes. It's like, okay, does that mean that
there's going to be another like, you know, basically boom-buzz cycle? And I will tell you, like,
look, like, in a sense, like, yes, everything goes through cycles and, you know, people get overly
enthusiastic and overly depressed and there's, there's a timelessness to that. Having said that,
there's just no question. So — the four most dangerous words: "this time is different." Do you know the 12 most dangerous words of investing?
No.
"The four most dangerous words in investing are: this time is different."
That's the 12 most dangerous words.
And so, like, I tell you what's different.
Like, now it's working.
Like, there's just no, I mean, look, there's just no question.
And by the way, I'll just give you guys my take.
Like, LLMs — from basically the ChatGPT moment through to spring of '25 — I think you could still,
I think well-intentioned, well-informed skeptics could still say, oh, this is just pattern completion,
and, oh, these things don't really understand what they're doing.
and the hallucination rates are way too high.
And, you know, this is going to be great for creative writing
and creating, you know, Shakespearean sonnets as rap lyrics or whatever.
Like, it's going to be great at all that stuff.
But we're not going to be able to harness this to make this relevant in, you know,
coding or in medicine or in law or, you know, in the fields that really, really matter.
And I think basically it was the reasoning breakthrough.
It was o1 and then R1 that basically answered that question and basically said,
oh, no, we're going to be able to actually turn this into something
that's going to work in the real world.
And then obviously the coding breakthrough over the,
or basically the coding breakthrough that kind of catalyzed over the holiday break
was kind of the third step in that.
But it was like, all right, you know,
if Linus Torvalds is saying that the AI coding is now better than he is — like,
that's never happened before.
That's the benchmark.
Yeah.
That's never happened before.
And so now we know that it's going to sweep through coding.
And then, and then we know, you know, we know that if it's going to work in coding,
it's going to work in everything else, right?
It's just that, because that's, that's like, that's like the hardest, in many ways,
that's the hardest example.
And now everything else is going to be a derivative of that.
And then on top of that, we just got the agent breakthrough,
with OpenClaw, which is fantastic, which is amazing and incredibly powerful.
And then we just got the automated research, you know, the self-improvement.
You know, we're now into the self-improvement breakthrough.
And so the way I think about it is we've had four fundamental breakthroughs in functionality:
LLMs, reasoning, agents, and then now RSI.
And they're all actually working.
And so I'm just — as you can tell — I'm jumping out of my shoes.
Like, this is it.
Like, this is the culmination of 80 years worth of work, and this is the time.
It's becoming real.
Yeah.
I'm completely convinced.
I think the anxiety that people feel is, like, during the transistor era you had Moore's Law,
and it's like, all right, we understand why these things are getting better.
We understand the physics of it.
With AI, it's so jagged in like the jumps.
Like you said, it's like in three months, you have like this huge jump.
And people are like, well, this can keep happening, right?
But then it keeps happening.
It will keep happening.
And so, like, how do you think about also timelines of, like, what we're building?
I think we always have this question with guests, which is like, you know,
should you spend time building harnesses
for a model, versus the next model
is just going to do it one-shot in the latent space.
And how does that inform, like,
how you think about the shape of the technology?
You know, you talk about how it's a new computing platform.
If you have a computing platform
that, like, every six months drastically changes
what it looks like, it's hard to build companies on top of it.
Yeah, so it's a couple things.
So one is like, look, Moore's Law was what we now call a scaling law.
Like Moore's Law was a scaling law.
And for your younger viewers,
Moore's Law was: chips either get twice as powerful or twice as cheap every 18 months.
And that, you know, it's gotten more complicated in the last few years, but that was like the 50-year
trajectory of the computer industry. And then, by the way, that's what took the
mainframe computer from a $25 million, in current dollars, thing into, you know, the phone in your
pocket being a million times more powerful than that, for 500
bucks. And so that was a scaling law. And then key to any scaling law, including
Moore's Law and the AI scaling laws, is that, you know, they're not really laws, right? They're
predictions. But when they work, they become self-fulfilling predictions, because they
set a benchmark, and then the entire industry — right, all the smart
people in the industry — kind of work to make sure that that actually happens. And so they
kind of motivate the breakthroughs that are required to keep that going. And in chips, that was a 50-year
run, right? And it was amazing. And it's still happening in some areas of chips. I think the
same thing is happening with the core scaling laws. The core scaling laws in AI, you know, they're not really
laws, but they are basically predictions, and they're motivating catalysts for the
research work that is required — and, by the way, also the investment dollars that are
required — to basically keep the curves going. And look, it's going to be complicated and
it's going to be variable. And, you know, there are going to be walls that are going to look like
they're fast approaching. And then they're going to be, you know, engineers are going to get to
work and they're going to figure out a way to punch through the walls. And obviously, that's,
you know, that's been happening a lot. You know, and then look, there's going to be times when it looks
like the walls have, you know, the laws have petered out. And then they're going to pick up
again and surge. And then, and then it appears what's happening to the eye is there's not multiple,
you know, you know, multiple scaling laws. There's multiple areas of improvement. And I think, you know,
I don't know how many more there are already yet to be discovered, but there are probably some more that we don't know about yet.
You know, like, for example, there's probably some scaling law around world models and robotics that we don't fully understand, you know,
kind of acquisition of data at scale in the real world that we don't fully understand yet.
So that one will probably kick in at some point here.
There's a bunch of really smart people working on that.
And so, yeah, I think the expectation is that, you know, the scaling laws generally are going to continue.
Yeah, the pace of improvement will continue to move really fast.
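To make the compounding point concrete, here is an illustrative back-of-envelope sketch in Python; the 50-year horizon and the strict 2x-per-18-months cadence are simplifying assumptions rather than precise chip history:

    # Illustrative only: how a "2x every 18 months" scaling law compounds over decades.
    years = 50
    doublings = years * 12 / 18          # one doubling every 18 months
    gain = 2 ** doublings                # roughly 1e10x over 50 years
    print(f"{doublings:.0f} doublings -> about {gain:.1e}x price-performance")

The same arithmetic is why a scaling law works as a self-fulfilling target: the curve tells everyone roughly where the industry has to be in a few years.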
To your question on, like, what to build.
So I'm a complete believer the scaling laws are going to continue, a
complete believer the capabilities are going to keep making amazing, you know, leaps and bounds.
The part where I kind of part ways a little bit with what I would describe as the AI purists,
you know, which is, which I would characterize as like the people who are in many ways,
the smartest people in the field, but also the people who spend their entire life, like in a lab,
and have, I would say, have very little experience in the outside world.
The nuance I would offer is the outside world of eight billion people and institutions and
governments and companies and economic systems and social systems is really complicated.
And, you know, 8 billion people making collective decisions on planet Earth is not a simple process. And you see this happening:
it's like a bunch of the AI CEOs have this thing when they talk in public where they're just like, well, there's this obvious set of things that society ought to do.
And then they're like, society's not doing any of those things.
Right.
And it's like, how can society not, you know, whatever their theory is, how can society not see X, XYZ?
And the answer is, well — number one, there's no single society.
It's, like, eight billion people,
and they all have a voice and they all have a vote,
at the end of the day, in how they react to change.
And then, you know, it's just, like, it's just,
human reality is just really complicated and messy.
And so the specific answer to your question is, like, as usual,
it depends.
You know, it depends.
Look, there's no question — there are going to be companies, it's already happening,
that think that they're building value on top of the models
and then they're just going to get blitzed by the next model.
There's no question that's happening.
But I think there's no question also that just the process of adaptation of any technology
into the real messy world of humanity is just going to be messy and complicated.
It's not going to be simple and straightforward.
It's going to be messy and complicated.
And there are going to be a lot of companies and a lot of products.
And in fact, entire industries that are going to get built to basically actually help all of this technology actually reach real people.
The amount of capital going into these companies — I mean, Dario talked about it on the Dwarkesh podcast.
And Dwarkesh was like, why don't you just buy 10x more GPUs?
And he's like, because I'm going to go bankrupt if the model doesn't exactly hit the performance
level. How do you think about that also as a risk? You know, you guys are investors
in OpenAI and Thinking Machines and World Labs. It seems like we're leveraging the scaling
laws at a pretty high rate. Like, how comfortable, I guess, do you feel with the downside
scenario? Like, say things peter out — do you think you can kind of restructure these
buildouts and, you know, capital investments? Yeah. So I should start by saying, I lived through the dot-com
crash. And I can tell you stories for hours about the dot com crash. And it was horrible. No, it was
awful. It was apocalyptic.
By the way, a lot of the dot-com crash
was actually, at the time, it was actually a telecom crash.
It was a bandwidth crash. The thing that actually
crashed, that wiped out all the money with the telecom
companies. Global crossing.
I'm from Singapore, and
they laid so much cable over
our oceans. Actually, it was
a scaling law in the dot-com era,
and it was literally the U.S. Commerce Department put
out a report in 1996, and they said
internet traffic was doubling every quarter.
And actually, in 1995 and 1996, internet traffic actually did double every
quarter. And so that became the scaling law. And so what all these telecom entrepreneurs did was
they went out and they raised money to build fiber, anticipating that the demand for bandwidth
was going to keep doubling every quarter. Doubling every quarter, though, is like, you know,
grains of rice on the chessboard. At some point the numbers become extremely large, right?
And really what happened was: the internet, by the way, continuously kept growing
basically since inception. It's continuously grown. It's never shrunk. And it's grown
really fast compared to anything else in human history. But it wasn't doubling every
quarter as of 1998, 1999. And so there was this gap in the expectation of what they thought
was a scaling law versus reality.
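The chessboard point is easy to see with a toy calculation — illustrative compounding only, not actual traffic data:

    # Why "doubling every quarter" can't hold for long: 4 doublings/year = 16x/year.
    traffic = 1.0
    for year in range(1, 6):
        traffic *= 2 ** 4
        print(f"year {year}: {traffic:,.0f}x starting traffic")
    # by year 5 you would need roughly a million times the original capacity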
And that's actually what caused the dot-com crash,
which was: companies like Global Crossing way overbuilt fiber,
and, by the way, fiber telecom equipment, you know, all the networking gear,
and then, by the way, the actual physical data centers.
So that was the beginning of the data center build,
and then the data center overbuild.
And so you had that, but it was literally, I think, like two trillion dollars
that got wiped out, right? It was like a big one.
And by the way, the other subtlety in it was,
the internet companies,
themselves never really had any debt because tech companies generally don't run on debt.
But the telecom companies run on debt.
Physical infrastructure companies run on debt.
And so the companies like Global Crossing not only raised a lot of equity.
They also raised a lot of debt.
So they're highly levered.
And so then you just do the thing.
It's just like, okay, you have a highly levered thing where you're just overbuilding capacity.
Demand is growing, but not as fast as you hoped.
And then boom, bankrupt.
Right.
And then it's like they say about the hotel industry, which is it's always the third owner of
a hotel that makes money.
It has to go bankrupt twice, right?
You have to wash out all of the over-optimistic exuberance before it gets to actually a stable state and then it makes money.
So, by the way, all of those data centers and all of that fiber — they're in use.
It's all in use today, but 25 years later.
But actually the elapsed time was it took 15 years.
It took 15 years from 2000 to 2015 to actually fill up all that capacity.
The cautionary warning is the overbuild can happen.
And, you know, you get into this thing where basically everybody who has any sort of institutional capital is like, wow — I don't know how to invest in these crazy software
things, but for sure I can build data centers and for sure I can buy GPUs and I can deploy,
you know, compute grids and all these things. And so, you know, if you're a pessimist,
you can look at this and you can say, wow, this is like really set up to be able to basically
replicate, you know, what we went through in 2000. Obviously, that would be bad.
The counter argument, which is the one I agree with, which is the counter on the other side
is a couple things. One is the companies that are investing all the, the companies that are
invest in the money are like the bluest chip of companies. And so back back in the in the
like Global Crossing was like a, it was like an entrepreneur. It was like a new venture. But like the
money that's being deployed now at scale is Microsoft and you know, on Amazon and Google.
Facebook and Facebook and Vidae and Vidae and these and now, you know, by the way, open Aynthropic,
which are now like, you know, really serious size, you know, as companies with, you know, very serious
revenue. These are very large scale companies with like lots, lots of cash, lots of debt capacity that
they've never used. And so this is institutional in a way.
that that really wasn't at the time.
And then the other is, at least for now,
every dollar that's being put into anything that results
in a running GPU is being turned into revenue right away.
And you guys know this.
Everybody's starved for capacity.
Everybody's starved for compute capacity.
And then all the associated things — memory and interconnect and everything else,
data center space.
And so every dollar right now that's being put in the ground is turning into revenue.
And in fact, I actually think there's an interesting thing happening,
which is: because everybody's starved for capacity,
the models that we actually have, that we can use today, are inferior
versions of what we would have if not for the supply constraints.
Right.
Suppose a hypothetical universe in which GPUs were 10 times cheaper and 10 times
more plentiful.
The models would be much better because you would just allocate a lot more money to
training and you'd just build better models and they would be better.
And so we're actually getting the sandbag version of the technology.
You know what?
Everything we use is quantized because the labs have to keep the full versions.
Right.
We're not even getting the good stuff.
But getting the good stuff is just a matter of time. Even if technical progress stops,
once there's like a much bigger build of like GPU manufacturing capacity and memory, you know,
all the things that have to happen in the course of the next five or 10 years, once it happens,
even the current technology is going to get, going to get much better.
And then, as you know, like, there's just like a million ways to use this stuff.
Like there's just like a million use cases for this.
Like, you know, this isn't just sending packets across a thing, whatever and hoping people
find something to do with it.
This is just like, oh, we apply intelligence into every domain of human activity.
And then it works like incredibly well.
Here's what I know.
Here's what I know.
In the next three or four years — at some point
between three or four years out — basically everything is selling out. So, like, the entire supply chain
is sold out or selling out. And so we're just going to have, like, a chronic
supply shortage for, you know, for years to come. There's going to be a response from the market that's
going to result in an enormous — you know, it's happening now — an enormous flood of investment in new
fab capacity and, you know, everything else to be able to do that. At some point, the supply chain
constraints will unlock, you know, at least to some degree. That will be another accelerant to industry
growth when that happens, because the products will get better and everything will get cheaper.
And so I know that's going to happen.
I know that the deployments, you know, the actual use cases are like really compelling.
And then like I said, you know, with reasoning and agents and so forth, like I know they're just going to get like much, much better from here.
And so I know the capabilities are like really real and serious.
I also know that the technical progress is not going to stop.
It is accelerating.
Like the breakthroughs are tremendous.
I mean, even just month over month, the breakthroughs are really dramatic.
And so, you know, I think if you were a cynic and there are cynics, you can look at 2000.
You can find echoes.
but I can't even imagine betting that this is going to somehow disappoint in, you know, at least
for years to come.
I think it would be essentially suicidal to make that bet.
It was Michael Burry.
That's an interesting guy, huh?
We'll pick on a guy.
Let's pick on one guy.
Well, because he did.
He came out with it.
He doesn't mind.
He was the Nvidia short, right?
He came out of the Nvidia short.
And then you guys probably talked about this, which is the analysis now that, right, the current
models are getting better, faster, at such a rate that if you're
running an Nvidia inference chip today that's three years old, you're making more money on
it today than you did three years ago. Because the pace of improvement of the software is
faster than the depreciation cycle of the chip. And then my understanding is Google is running,
I don't think, I don't know exactly what. These are rumors that I've heard or maybe it's public,
but I think Google's running very old TPUs, very profit. And very profitably. And so it actually
turns out, as far as I can tell, it's actually the opposite of the Burry thesis is actually,
he was actually 180 degrees wrong. It's actually the old Nvidia chips are getting more valuable,
which is something that's like literally never happened before. Like, it's never been the
case that you have an older model chip that becomes more valuable, not less valuable.
And again, that's an expression of that just a ferocious pace of software progress,
ferocious pace of capability payoff that you're getting on the other side of this.
And so I just, the idea of betting against that, like, yeah, yeah.
It's like an invitation to get your face ripped off.
One of my early hits was, like, modeling the lifespan of the H100s and H200s and going,
you know, usually they advise like four to seven years, and
maybe you'd realistically cut it down to two to three.
But actually it's going up and not down.
And that's, I mean, I think that's the dream.
We are finding utilization.
And I think utilization solves all problems.
Like, you can find use cases for even the poorer chips. Even memory — we're having a shortage, right?
And even the shittier versions of memory that we do have, we're finding use cases for.
So like that's great.
How important is open source AI and kind of like edge inference in a world in which you have three years of supply crunch?
Like, you know, if you fast-forward like five years, how do you think
about inference in the data center versus at the edge?
Well, so just to start, yeah, so I think open source is very important for a bunch of
reasons. I think edge inference is very important for a bunch of reasons. I think just
practically speaking, if we're just going to have fundamental supply crunches for the next,
I mean, you guys, if you just project forward demand over the next three years relative to
supply, one of the dismaying predictions you can do is what's going to happen to the cost
of inference in the core over the next three years. And like it may rise dramatically, right?
So what is, and then is, you know, like the big model competition is subsidizing heavily right now.
Right.
And so, so what's the, what will be the average person's, you know, per day, per month token cost, you know, three years from now to do all the things that they want to do?
And I don't know.
It's going to be — I mean, you guys probably have friends —
I have friends today who are paying $1,000 a day for OpenClaw, for Claude tokens to run OpenClaw.
Right.
And so, okay, $30,000 a month, right?
And by the way, those friends have like a thousand more ideas of the things that they want their claw to do, right?
And so you could imagine there's like latent demand of up to, I.
I don't know, five or $10,000 a day of tokens for a fully deployed personal agent.
And obviously, consumers can't pay that, right?
And so, but it gives you a sense of the future scope of demand, right?
And so even if there's a 10x improvement in price-performance, that still, you know, goes to $100 a day,
which is still way beyond what people can pay.
So there's just going to be, like, ferocious pressure to make this cheaper.
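The back-of-envelope behind those numbers, using the assumed figures from the anecdote above (not measurements):

    # Assumed figures from the conversation.
    per_day = 1_000                 # "$1,000 a day" in Claude tokens
    per_month = per_day * 30        # about $30,000 a month
    after_10x = per_day / 10        # a 10x price-performance gain still leaves ~$100/day
    print(per_month, after_10x)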
By the way, the other interesting thing is the agent thing. So up until now,
a lot of the constraints were GPU constraints, and I think the agent thing now
also translates into CPU constraints.
CPU and memory, right?
Memory, right. And so, like, the entire chip ecosystem is just going to get hit, along with network
constraints. That will be the killer. It's all bottlenecked potentially for years. And so I think —
and I think it's actually possible — I mean, generally inference costs are going to keep coming
down, but I think, let's put it this way, the rate of decline, I think may level out here for a bit
because of these supply constraints. And then at some point, maybe the lab stops subsidizing
so much. And that, that again will be an issue. And so there's just going to be so much more
demand for inference than can be satisfied, you know, kind of with a centralized model. And then,
And then — you guys know this — just the dramatic innovations that have happened in Apple Silicon to be able to do inference, it's quite amazing. And the level of effort being put in: the open-source guys are putting incredible effort into this recurring pattern where the big model will never run on a PC, and then six months later it runs on a PC. Right. It's amazing, and there's very smart people working on that. So there's all that. And then look, there are also other motivators, which is just like, okay, how much trust are the big centralized model providers building in the market versus,
you know — at least in certain cases, with some people, for certain
use cases — people being like, well, I'm not willing to just turn everything over.
So there's all the trust issues.
By the way, there's also just like straight up price optimization.
There's many uses of AI where you don't need Einstein in the cloud.
You just need like a smart local model.
There are also performance issues where, you know, you're going
to want your doorknob to have an AI model in it, you know,
to be able to do access control.
Obviously, like everything with a chip is going to have an AI model in it.
A lot of those are going to be local.
And then, by the way, also wearable devices —
you know, you don't want to do a complete round trip.
Whatever your smart devices are,
you want them to be, like, super low latency.
The question is, do we care who makes it?
One of the biggest news items this week was the collapse of AI2,
the Allen Institute, one of the actual American open-source model labs.
And I'm not that optimistic on American open source.
Like, you guys invested in Mistral,
and Mistral is doing extremely well — but outside of China,
that's about it.
Yeah, we'll see.
We'll see.
Number one, I do think we care.
I do think we care who makes it.
I would say this.
The previous presidential administration wanted to kill it in the U.S.
They wanted to drown in the bathtub.
And so they wanted to kill it.
So at least we have a government now that actually wants it to have.
And you're on the council?
And the new PCAST, yeah.
So for whatever other political issues people have, which are many, you know, this administration has, I think, a very enlightened view.
And in particular an enlightened view on AI and in particular on open-source AI.
and so they're very supportive.
My read is the various Chinese companies have a very specific reason to do open source,
which is they don't fundamentally, they don't think they can sell commercial AI outside of China right now,
or at least specifically not in the U.S., for a combination of reasons.
And so they kind of view, I think, open-source AI as a bit of a loss leader against basically domestic, you know, paid services,
and then kind of, you know, kind of ancillary products.
You know, they're very excited about it.
By the way, I think it's great.
I think it's great that they're doing it.
I think DeepSeek was like a gift to the world.
I think the great thing about open source.
Open source, the impact of open source is felt two ways.
One is you get the software for free.
But the other is you get to learn how it works.
Right.
And so like the paper.
The paper and the code.
Right.
And the code.
And so like, for example, I thought this was amazing.
So OpenAI comes out with o1.
And it's an amazing technical breakthrough and it's just like absolutely fantastic.
But of course, they don't explain how it works in detail.
And then they hide the reasoning traces.
Right.
And then everybody's like, okay, this is great.
But who's going to be able to replicate this?
Are other people going to be able to do this?
You know, is there a secret sauce in there.
And then R1 comes out and it's just like, there's the code and there's the paper.
And now the whole world knows how to do it.
And then, you know, three months later, every other AI model is adding reasoning.
And so you get this kind of double effect: even if the Chinese models themselves are not the models
that you use, the education that's taken place for the rest of the world, the information
diffusion, you know, is incredibly powerful.
So that happens.
And then I don't know, we'll see.
And, you know, there are a bunch of American open-source AI model companies.
I mean, look, there's going to be tremendous — there's tremendous competition among the
primary model companies. You know, depending on how you count, there's like four or five
big model companies now that are kind of neck and neck in different ways.
And then obviously both X and Meta, where I'm involved, both have huge attempts to kind of leapfrog underway.
And then you've got a whole fleet of startups, new companies, including a whole bunch
that we've backed, that are trying to come out with different approaches.
And then you've got, whatever it is, I don't know —
How many main line foundation model companies are there in China at this point?
It's probably six.
"Five tigers" is what they call it.
Qwen is questionable because there's a change in leadership.
But does that include, like, Moonshot?
Yes.
Kimi, DeepSeek, Z.ai, Qwen — 01 is in there.
Right.
And then ByteDance.
ByteDance would be like the next tier.
They weren't as prominent.
They don't have a leading model yet.
Yeah, but at least, you know, Seedance is very inspiring.
And presumably they have more stuff coming,
and Tencent probably has more stuff coming, and so forth.
And so, look, here would be a thing you can anticipate, which is: these markets are not going to bear a dozen players.
Between the U.S. and China right now, there's like a dozen primary foundation model companies that are at scale, at some level of critical mass.
It's not going to be a dozen in three years, right?
Just because these industries don't bear a dozen.
It's going to be three or four big winners or maybe one or two big winners.
And so there's going to be like a whole bunch of those guys that are going to
have to figure out alternate strategies.
And I think like open source is one of those strategies.
And so I think the question of who's going to do open source — I think that could change really fast.
I think that that's a very dynamic thing. I think it's very hard to predict what happens.
And I think it's very important.
Nvidia's doing a lot.
Well, I was going to say — exactly. And then you've got Nvidia.
And then, you know, there's an old thing in business strategy
called "commoditize the complement."
And so if you're Jensen, it's just kind of obvious: of course you want to commoditize the software.
And to his enormous credit, he's putting enormous resources behind that.
And so maybe it's literally Nvidia. And I think that would be great.
Yeah.
Yeah.
Narrative violation: two European projects, by the way.
Damn. I'm hosting my Europe conference soon and I got both of them.
They got us. They got us, Mark.
Wait a minute. Where was Peter? So where was Steinberger when he did it?
He was in Vienna. Yeah, yeah. He was in Vienna. Oh, he was in Vienna. And then where is he now?
He's moving to S.F.
Okay. Okay. All right. Okay. There we go. And then, yeah, the pi guy, right, the pi guys are European.
Yeah, they're in Austria. Mario is also there.
Right. And they haven't announced yet any sort of change.
Or have they?
No, they're a dev tool company there.
Okay, good, good, good.
Anyway, I think pi and OpenClaw are very important software things.
And I just wanted you to go off on what you think.
Yeah.
So I think the combination of the two of them is one of the ten most important software things.
OpenClaw got all the attention.
Right.
Talk about pi.
Pi is kind of the idea — pi is kind of the architectural breakthrough.
For those of us who are older, there was this whole thing that was very important in the world of software, basically
from like 1970 through to, like, basically the creation of Linux —
it still is very important —
which is this thing people used to call the Unix mindset.
Because there were all these different operating systems,
and mainframes, and then, you know,
Windows and Mac and all these things.
But kind of behind it all was this idea of the Unix mindset.
And the Unix mindset was this thing where basically you don't have these giant monoliths.
Like, in the old days, the operating system that made the computer
industry really work in the 1960s was this thing called OS/360,
which was this big operating system that IBM developed that was supposed to
basically run everything. And it was this like giant monolithic architecture in the sky. It was like a,
you know, it was like a giant castle of software. And by the way, it worked really well and they
were very successful with it. But like it was this huge castle in the sky. But it was this thing. It was
almost unapproachable, which is like you had to be kind of inside IBM or very close to IBM. And you had to
really understand every aspect of how the system worked. And then the Unix guys, originally out of
AT&T and then out of Berkeley, you know, came out and they said, no, let's have a completely different
architecture. And the way the architecture is going to work is: we're going to have a prompt and a
shell. And then all the functionality is going to be in
these discrete modules, and then you're going to be able to chain the modules together.
And so it's almost like the operating system itself is going to be a programming language.
And then that led to the sort of centrality of the shell.
And then that led to sort of, you know, basically chaining the other Unix tools.
And then that led to the emergence of these scripting languages like Perl, where you could basically
kind of very easily do this.
And then the shells got more sophisticated.
And then, look, you know, number one, that worked.
And that was the world I grew up in.
Like, I was a Unix guy, you know, sort of from, call it, 1988,
kind of all the way through. And it worked really well. It's in the background.
You know, normal people don't need to, didn't need to necessarily know about it. But like,
if you were doing like system architecture, application development, you knew all about it.
And then, you know, it's been in the background ever since. And, you know, look,
your Mac still has a Unix shell, you know, kind of in there. And your iPhone still has a
Unix shell kind of buried in there somewhere. So they're kind of in there. And then, you know,
the Windows shell is kind of a, you know, sort of a weird derivative of that. But, you know,
but look, the internet runs on Unix. And then smartphones, actually, both iOS and Android are
Unix derivatives. And so, you know, kind of Unix did end up winning. But, but anyway, and then we
just started taking that for granted. And then, so basically the way I think about
what happened with pi and then with OpenClaw is basically what those guys figured out — I always
say the great breakthroughs are obvious in retrospect, right? Which is the best kind.
The best kind. They weren't obvious at the time or somebody else would have done them already.
And so there is a real conceptual leap. But then you look at it backwards
and you're just like, oh, of course. To me, those are always the best breakthroughs.
So actually, language models themselves are like that. It's just like, oh, next token completion.
Oh, of course.
What other objective mattered?
Yeah, exactly.
But like, right — that's the thing, it wasn't obvious until somebody actually did it, right?
And so the conceptual breakthrough is real and deep and powerful and very important.
And so the way I think about pi and OpenClaw is: it's basically marrying the language model mindset to the Unix shell prompt mindset.
And so it's basically this idea that what, what is an agent?
Right.
And as you know, like many smart people have been trying to figure out what an agent is for decades and they've had many architectures to build agents and the whole thing.
And it turns out — what is an agent?
So it turns out what we now know is: an agent is the following.
It's a language model. And then above that, it's a bash shell — so it's a
Unix shell. And the agent has access to the shell, you know, hopefully in a sandbox,
maybe in a sandbox. So it's the model, it's the shell, and then it's a file system, and the
state is stored in files. And then, you know, there's the markdown format
for the files themselves. And then there's basically what in Unix is called a cron job — there's a
loop, there's a heartbeat, and the thing basically wakes up. So it's basically
the LLM plus shell plus file system plus markdown plus cron.
And it turns out that's an agent.
And every part of that other than the model is something that we already completely know and understand.
And in fact, it turns out the latent power of the Unix shell is, like, extraordinary.
Because basically there's just enormous latent power in the shell.
There's an enormous number of Unix commands.
There's an enormous number of command line interfaces into all kinds of things already on your machine.
I mean, just to start with, your computer runs on a shell.
If you're running a Mac or a phone, your computer is running on a shell
already. And so the full power of your computer is available at the command line level.
And then it turns out it's really easy to expose other functions as a command line interface.
And so this whole idea where we need, like, MCP and these fancy protocols,
whatever — it's like, no, we don't. We just need, like, a command line thing. So that's the architecture.
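As a rough illustration of that architecture — and only an illustration: the function and file names here are hypothetical, the model call is a stand-in, and real agents like OpenClaw add sandboxing and much more — a minimal loop might look like this in Python:

    import subprocess, time
    from pathlib import Path

    AGENT_DIR = Path("agent")              # the agent *is* this directory of files
    MEMORY = AGENT_DIR / "memory.md"       # state lives in plain markdown

    def complete(prompt: str) -> str:
        # Stand-in for whatever LLM API you use; swap the model, keep the files.
        return "echo replace complete() with a real model call"

    def heartbeat():
        persona_file = AGENT_DIR / "persona.md"
        persona = persona_file.read_text() if persona_file.exists() else "You are a helpful agent."
        memory = MEMORY.read_text() if MEMORY.exists() else ""
        prompt = (persona + "\n\n# Memory\n" + memory +
                  "\n\nReply with ONE shell command to run next.")
        command = complete(prompt).strip()
        # The shell is the tool interface: anything with a command line is reachable.
        result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=60)
        AGENT_DIR.mkdir(exist_ok=True)
        with MEMORY.open("a") as f:
            f.write(f"\n## {time.ctime()}\n$ {command}\n{result.stdout}{result.stderr}")

    if __name__ == "__main__":
        while True:                        # the cron-style loop / heartbeat
            heartbeat()
            time.sleep(300)

The point of the sketch is only that every piece below the model — the shell, the files, the markdown, the loop — is ordinary, well-understood machinery.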
And then it turns out what is your agent? Your agent is a bunch of files stored in a file system.
And then there's the thing that just completely blew my mind when I wrapped my head around it as a
result of this, which is like, okay, this means your agent is now actually independent of the model
that it's running on, because you can actually swap in a different
LLM underneath your agent.
And your agent will change personality somewhat because the model is different, but all of
the state stored in the files will be retained.
Different instruction sets, but you just compile that.
Right, exactly.
And it's all right.
It's like swapping out a chip and recompiling.
But it's still your agent with all of its memories and with all of its capabilities.
And then, by the way, you can also swap out the shell.
So you can move it to a different execution environment that also has a bash shell.
By the way, you can also switch out the file system.
Right.
And you can swap out the heartbeat, the Cron framework, the loop, the agent framework itself.
And so your agent, basically, at the end of the day, it's just its files.
And then there's the persona file.
Yeah, it's basically, it's just the files.
And then, by the way, as a consequence of that, with the agent itself,
it turns out there are a couple of important things.
So one is it can migrate itself.
Right.
And so you can instruct your agent, migrate yourself to a different runtime environment,
migrate yourself to a different file system, or, you know,
like we said, swap out the language model.
Your agent will do all that stuff for you.
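A sketch of why that works — purely illustrative file and field names; the point is just that the whole agent is a directory you can copy, with the model choice as one replaceable setting:

    import json, shutil
    from pathlib import Path

    old_home = Path("agent")                    # persona.md, memory.md, tools/, config.json
    new_home = Path("/mnt/new-runtime/agent")   # a different machine, shell, or file system
    shutil.copytree(old_home, new_home, dirs_exist_ok=True)   # "migration" is a copy

    cfg_path = new_home / "config.json"
    cfg = json.loads(cfg_path.read_text())
    cfg["model"] = "some-other-llm"             # swap the LLM; memories and tools stay put
    cfg_path.write_text(json.dumps(cfg, indent=2))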
And then there's the final thing, which is just amazing, which is: the agent actually has full introspection.
It actually knows about its own files and it can rewrite its own files, right?
Which, by the way — there's basically no widely deployed software system in history where
the thing that you're using actually has full introspective knowledge of how it itself
works and is able to modify itself like that.
I mean, there have been toy systems that have had that, but there's never been a widely
deployed system that has a capability.
And then that leads you to the capability that just like completely blew my mind when
I wrap my head around it, which is you can tell the agent to add new functions and
features to itself.
And it can do that.
Extend yourself.
Right.
Extend yourself.
like extend yourself, give yourself a new capability.
Right. And so literally it's just like you run into somebody at a party and they're like,
oh, I have my OpenClaw do whatever, connect to my Eight Sleep bed, and it gives me better advice
on sleep. And you go home at night — or right there at the party, by the way —
and you tell your claw, oh, add this capability to yourself. And your claw will say, oh, okay, no problem.
And it'll go out on the internet and it'll figure out whatever it needs. And then it'll go out
to Claude Code or whatever. It'll write whatever it needs. And then the next thing you know it has
this new capability. And so you can have it upgrade itself without
having to do anything other than tell it that you wanted to do that.
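Mechanically, the "extend yourself" move is not much more than this sketch — hypothetical paths, and in practice the agent would generate the script with a coding model rather than hard-coding it:

    import stat
    from pathlib import Path

    tool = Path("agent/tools/sleep-report")         # a new capability, as a plain CLI tool
    tool.parent.mkdir(parents=True, exist_ok=True)
    tool.write_text("#!/bin/sh\n# placeholder: fetch and summarize sleep data\n"
                    "echo 'sleep report goes here'\n")
    tool.chmod(tool.stat().st_mode | stat.S_IEXEC)  # executable: future shell steps can call it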
And so anyway, so the combination of all this is just, I mean, this is just like a massive, incredible.
I mean, it's just incredible.
Like, if I were 18, this is 100 percent what I would be spending all of my time on.
This is like such an incredible conceptual breakthrough.
And again, people are going to look at it.
And they already get this response.
People are going to look at it.
They're going to say, oh, well, where's the breakthrough?
Because all of these components were already known before.
But this is the key, the key to the breakthrough was by using all these components that were known before,
you get all of the underlying capability that's buried in there.
And so, for example, computer use all of a sudden just kind of falls out as trivial.
Of course, it's going to be able to use your computer.
It has full access to the shell, right?
And then you give it access to a browser,
and then you've got the computer and the browser,
and off and away it goes.
And then you've got all the abilities of the browser also.
And so the capability unlock here is profound.
My friends who are, you know, deepest into this
are having their claw do like, like literally,
like a thousand things in their lives.
They have new ideas every day.
They're just like constantly throwing new challenges.
It's the thing.
And by the way, it's early.
And, you know, these are, you know, these are prototypes.
And there are, you know, as you guys know,
there's security issues.
Yeah.
And so, you know,
There's a bunch of stuff to be ironed out, but the unlock of capability is just incredible.
And I have absolutely no doubt that everybody in the world is going to have at least, you know, an agent like this, if not an entire family of agents.
And we're going to be living in a world where I think it's almost inevitable now that this is the way people are going to use computers.
I was going to say for someone who is deeply familiar with social networks, the next step is your claw talking to my claw, posting on claw Facebook, posting their jobs on claw LinkedIn and posting their tweets on claw X-A-I or whatever, you know.
I do think that that is how, you know, we get into some danger there in terms of like alignment
and whether or not we want these things to run.
You guys know, rentahuman.com?
Yeah, the rent-a-human thing.
Yeah, yeah, yeah, yeah.
I mean, it's Fiverr, it's TaskRabbit.
Sure, of course.
Mechanical Turk.
Yeah, but flipped.
Yeah.
Right, the agent hiring the people.
Yeah.
Which of course is going to happen.
It's obviously going to happen.
I'm curious if you have any thoughts on the engineering side.
So when you built the browser, the internet was, you know, just a bunch of mostly plain text files plus some images. And today, every website and app is so complex, and somehow the browser kept evolving to fit that in. Are there any design choices that were made early in the browser, and the internet and the protocols, where you're seeing agents hit something similar today? Like, hey, this thing is just not going to work for this type of new compute and we should just rip it out right now?
There were a whole bunch, but I'll give
a couple. So one is — and to be clear, this was totally different; we didn't have the capabilities we have today, we didn't have the language models underneath this —
But we did have this idea that human readability actually mattered a great deal.
And specifically in those days, it was not so much English language, but there was a design decision to be made between binary protocols and text protocols.
And basically every old-school systems architect who had grown up between the 1960s and the 1990s said, you know, what do you know about the internet?
It's starved for bandwidth.
You just have these very narrow straws.
You know, look, people, when we did the work on Mosaic, like, people who had the internet at home had a 14-kilobit modem, right?
So you're trying to, like, hyper-optimize every bit of data that travels over the network.
And so obviously, if you're going to design a protocol like HTTP, you're going to want it to be a highly compressed binary protocol for maximum efficiency.
And you're going to have it be like a single connection that persists.
And the last thing you're going to want to do is like bring up and tear down new connections.
And definitely you're not going to want a text protocol.
And so, of course, we said, no, we actually want to go completely the other direction.
It's obviously we only want text protocols.
By the way, same thing in HTML itself.
We want HTML to be relatively verbose.
You know, we want the tags to actually be like human readable.
The most inefficient things possible.
Yeah, we want to do the inefficient things.
You're the original token maxer.
Yeah, exactly.
Yeah, yeah.
Yeah, basically it's just like...
Bitter Lesson pilled.
Well, yeah, actually, this was the conscious thing, which basically says: assume a future of infinite bandwidth and build for that.
And then basically it was a bet that if the latent capabilities of the system were powerful enough, and that was obvious enough to people, that would create the demand for the bandwidth, which would cause the supply of bandwidth to get built, which would actually make the whole thing work.
And then specifically what we wanted was we wanted everything to be human readable
because at the engineering level, we wanted people to be able to read the protocol coming
over the wire and be able to understand it with their bare eyes without having to disassemble
it or whatever, right, and have it converted out of binary, right?
And so, you know, HTTP and everything else were always text protocols.
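To illustrate that point, here is roughly what reading the protocol over the wire looks like: the request and the response headers are plain text you can inspect with your bare eyes. (example.com is just a stand-in host, and this is HTTP/1.1 as it exists today, not the exact early-1990s dialect.)

```python
import socket

# Open a plain TCP connection and speak HTTP/1.1 as literal text.
host = "example.com"
with socket.create_connection((host, 80)) as sock:
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    )
    sock.sendall(request.encode("ascii"))      # the request is readable ASCII
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line and headers come back as readable text too:
# e.g. "HTTP/1.1 200 OK", "Content-Type: text/html; charset=UTF-8", ...
print(response.decode("utf-8", errors="replace")[:500])
```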
And the same thing with HTML.
And in many ways, some people say that the key breakthrough in the browser was the view source
option, which is that on every webpage you go to, you could view source,
which means you could see how it worked, which means you could teach yourself how to build web pages.
There was that. So human readability — and again, human readability in those days still meant technical specs;
now it means English language — but there's an incredible latent power in giving everybody who uses the system the option to drop down and actually see how it's working.
And that worked really well for the web, and I think it's working really well for AI.
That was one. What was the other? A big part of the idea of web servers was to surface the underlying latent capability of the operating system, and also the underlying latent capability of the database.
Because basically, what was a web server?
What is a web server fundamentally?
Architecturally, it's the operating system.
It's running on top of an OS, so it's the OS's ability to manage the file system, run processes, and do everything else you want to do.
And then, of course, a lot of websites are really front ends to databases.
And so you wanted to unleash the underlying latent power of whether it was an Oracle database or Postgres or whatever it was.
And so a lot of the function of the web server was to just bridge,
from that internet connection coming in
to be able to unlock the underlying power
of the OS and the database.
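A toy of that bridging role — not how any production web server was built, just the shape of it: take the request off the socket and answer it from either the file system or a database underneath. The paths and schema here are made up for illustration.

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

DOCROOT = Path("./site")          # static files live on the OS file system
DB = sqlite3.connect("app.db")    # the database behind the "dynamic" pages

class BridgeHandler(BaseHTTPRequestHandler):
    """Toy web server: surface the file system and the database over HTTP."""

    def _send(self, status: int, body: bytes) -> None:
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path.startswith("/db/"):
            # Bridge to the database: look a row up by key.
            key = self.path[len("/db/"):]
            row = DB.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
            if row:
                self._send(200, row[0].encode())
            else:
                self._send(404, b"not found")
        else:
            # Bridge to the file system: serve the file if it exists.
            target = DOCROOT / self.path.lstrip("/")
            if target.is_file():
                self._send(200, target.read_bytes())
            else:
                self._send(404, b"not found")

if __name__ == "__main__":
    DB.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
    HTTPServer(("127.0.0.1", 8000), BridgeHandler).serve_forever()
```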
And again, people looked at it at the time
and they were like, well, is this really,
does this really matter?
Like, is this important?
Because we've had databases forever.
And we've always had, you know,
user interfaces for databases.
And this is just another user interface for a database.
It's like, okay, yeah, fair enough.
But on the other side of that,
it's just like this is now a much better interface to databases
and one that 8 billion people are going to use
and is going to be far easier to use
and far more flexible.
And you're not just going to have the old databases.
Now you have a system that people can actually understand,
and they want to build, you know, a million times more database apps than they had in the past.
And then the number of databases in the world exploded.
And so again, this goes to this thing of like building in layers.
Some of the smartest people in industry look at any new challenge and they're like, okay, I need to build a new kind of
application.
So the first thing I need to do is build a new programming language.
Right.
And then the next thing I need to do is build a new operating system.
Right.
And the next thing I need to do is I need to build a new chip.
Right.
And they kind of want to reinvent everything.
And I've always had — maybe it's just, I don't know, a pragmatic mentality or something, or maybe an engineering-over-science mentality.
But it's more like, no, you have just like,
all of this latent power in the existing systems.
And you don't want to be held back by their constraints,
but what you want to do is you want to kind of liberate that power and open it up.
And so I think the web did that for those reasons.
And I think it's the same thing now that's happening.
It's a good perspective from the web.
Programming languages are another good example.
We had Bret Taylor on the podcast and we were talking about Rust.
And, you know, Rust is memory safe by default.
So why are we teaching the model to not write memory-unsafe code?
Just use Rust and then you get it for free.
How much time do you think should be spent recreating some of these things instead of taking them for granted?
Like, oh, okay, Python is kind of slow.
Python, TypeScript.
You know, it's like, yeah.
As imperfect as they are, they are the lingua franca.
I mean, I think this is going to change a lot, because I don't think the models care which language they program in.
And I think they're going to be good at programming every language.
And I think they're going to be good at translating from any language to any other language.
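In code terms, that porting claim is basically a prompt. A hedged sketch — complete() is a stand-in for whatever chat-completion client you actually use, not a real library call:

```python
def complete(prompt: str) -> str:
    """Stand-in for a call to your language model provider of choice."""
    raise NotImplementedError("wire up a real model client here")

def port(source_code: str, source_lang: str, target_lang: str) -> str:
    """Ask the model to translate a program from one language to another."""
    prompt = (
        f"Translate this {source_lang} program into idiomatic {target_lang}. "
        f"Preserve behavior exactly and output only code.\n\n{source_code}"
    )
    return complete(prompt)

# e.g. rust_version = port(open("script.py").read(), "Python", "Rust")
```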
Like, okay, so this gets into the coding side of things.
I think we're going through a really fundamental change.
And look, I grew up, you know, hand coding.
Everything I did was actually written in C.
Back in the day.
I wasn't even using C++, or like Java or any of this stuff, right?
And so everything I ever did, I was like managing my own memory at the level of C.
And I'm still from the generation, you know, that knew assembly language, so I could drop down and do things right on the chip.
And so we've just, we've all of us, we've always lived in a world in which software is like this precious thing that like you have to think about very carefully.
And it's like really hard to generate good software.
And there's only a small number of people who can do it.
And like, you have to be very like jealous in terms of thinking about like,
How do you allocate?
Like, what are your engineers working on?
And how many good engineers do you actually have?
And how much software can they write?
And how much software can human beings, you know, kind of maintain?
And I think, like, all those assumptions are being shot right out the window right now.
Like, I think those days are just over.
And I think the new world is, like, actually, high quality software is just, like, infinitely
available.
And if you need new software to do XYZ, like, you're just going to wave your hand and
you're going to get it.
And then if you don't like the language it's written in, you just tell the thing, all right, I want the Rust version.
Or, you know, the secure version.
By the way, we're about to go through, computer security is about to go through the most dramatic change ever, which is number one, like every single latent security bug is about to be exposed.
So we're going to have like the, we're set up here for like the computer security apocalypse for a while.
But on the other side of it, now we have coding agents that can go in and actually fix all the security bugs.
And so how are you going to secure a software in the future?
You're going to tell the bot to secure it and it's going to go through and fix it all.
And so this thing that was this incredibly scarce resource of high quality software is just going to become a completely fungible thing that you're just going to have as much as you want.
Right. And that has like, you know, that has like tons and tons of consequences.
In some sense, the answer to the question you posed, I think, is just simpler or more straightforward, which is: if you want all your software in Rust, you just say you want all your software in Rust.
Like, things that used to be hard, or even seemed like an insurmountable mountain to get through, all of a sudden, I think, become very easy.
I think Bret had a theory that there would be a more optimal language for LLMs.
And so the contention is there isn't.
Like, just don't bother.
Just whatever humans already use.
LLMs are perfectly capable of porting.
I think we're pretty close to being,
I don't know if this works today.
I think we're pretty close to being able to ask the AI,
what would its optimal language be and let it design it?
It's true.
Okay, here's a question.
Are you even going to have programming languages in the future?
Or are the AIs just going to be emitting binaries?
Let's assume for a moment that humans aren't coding anymore.
Let's assume it's all bots.
What levels of intermediate abstraction do the bots even need?
Or are they just coding binary directly?
Did you see there's actually an experiment?
Somebody just did this thing where they have a language model now that actually emits model weights for a new language model.
Right?
And so will the bots be...
Just predict the weights.
Yeah.
Will the bots literally be emitting not just binaries, but will they actually be emitting weights for new models?
Directly.
And conceptually, there's no reason why they can't do both of those things.
Like, architecturally, both of those things seem completely possible.
Very inefficient.
You're basically running a simulation of a simulation, a simulation inside of weights.
Yeah, very inefficient.
But look, LLMs are already incredibly inefficient.
I have a favorite thing.
Ask Claude to add 2 plus 2 and get 4, right?
It's just like, you know, it's like, you know, it's like whatever, billions and billions of times more inefficient than using your pocket calculator.
But, but, but yeah, the payoff is so great of the general capability.
So anyway, like I kind of think in 10 years, like I'm not sure.
Yeah, like I'm not sure there will even be a salient concept of a programming language in the way that we understand it today.
And in fact, what we may be doing more and more is a form of interpretability, which is trying to understand why the bots have decided to structure code in the way that they have.
If you play it through, you don't need browsers then.
That's the death of the browser.
Well, so I would take it a step further,
which is you may not need user interfaces.
So who is going to use software in the future?
Other bots.
Yeah, the bots.
Yeah.
You still need to, I don't know, pipe information in and out.
Really?
Well, what are you going to do then?
Are you sure?
You're just going to log off and touch grass.
Whatever you want, exactly.
Isn't that better?
It's some way to do stuff for me.
But isn't that better?
I mean, look, you know all the arguments here.
It was not that long ago that 99% of humanity was behind a plow, right?
And what are people going to do if they're not plowing fields all day to grow food, right?
And it just turns out there's like much better ways for people to spend time than plowing fields.
Yeah, dude's growing.
Yeah, exactly.
Exactly.
You know, talking to their friends.
And look, I'm not an absolutist and I'm not a utopian.
And to be clear, like I have an 11-year-old and he's learning how to code.
And, like, I'm, you know, I think it's still a really good idea to learn how to code and so forth.
but I just, if you project forward,
you just have to think forward to a world in which it's just like,
okay,
I'm just going to tell the thing what I need and it's going to do it.
And then it's going to do it in whatever way is most optimal for it to do it.
Unless I tell it to do it,
not optimally,
like if I tell it to do it in Java or in Rust or whatever, it'll do it,
I'm sure.
But like,
if I'm just going to tell it to do it,
it's going to do it in whatever way is like the optimal way to do it.
And then if I need to understand how it works,
I'm going to ask it to explain to me how it works.
Right.
And so it's going to be doing its own interpretation.
It's going to be the engine of interpretability to explain itself.
And I just am not convinced that in that world you keep these historical abstractions; the abstractions will be whatever the bots need them to be.
Right. Yeah. Well, I'm curious: if that's true, then shouldn't the model providers be building some internal language representation that they can do, kind of like, RL and reward modeling around? Because today they're kind of tied to TypeScript and Python, because users need to write in those languages, versus they could have their own thing internally, and they don't need to teach it to anybody.
They just need to teach their model.
And I think that's maybe how you get divergence between the models.
Like, going back to the OpenClaw thing.
It's like, oh, I built all the software using the OpenAI model and I'll switch to the Anthropic model.
But the Anthropic model doesn't understand the thing.
So it feels like there still needs to be some abstraction.
But maybe not.
Maybe that's the lock-in that the model providers want to have.
I'm not even sure that's lock-in though, because why can't the second model just learn
what the first model has done?
Like, exactly.
Okay, so, okay, to give an example.
So, as you know, models can now reverse engineer software binaries.
Isn't there a whole thing now where people are reverse engineering, like, Nintendo game binaries?
Yeah.
So I've seen a bunch of reports like this where somebody has, like, a favorite game from the 1980s.
And the source code is long dead.
But they have, like, a binary burned onto a chip or something.
And now they're reverse engineering a native version that runs on their Mac.
Right.
And so, which is kind of to say, if you can reverse engineer x86 binaries, then why can't you reverse engineer whatever they create, yeah.
And because we're all on a Unix-based system, it has to be reversible, because it needs to run on the target.
Yeah, yeah, yeah, yeah, basically.
And so I just think it's this thing where, by the way, everything we're describing is something that human beings in theory could have done before — right, yeah — but it was just always cost- and labor-prohibitive.
Reverse engineering.
I learned how to do reverse engineering.
Like, human beings have been reverse engineering binaries.
Yeah.
It's just that for any complex binary, I'd need like a thousand years to do it,
but now with the model you don't.
And so all of a sudden, you get these things,
or another way to think about it is, so much of human-built systems sort of compensates for human limitations.
Yeah.
Right.
And if you don't have the human limitations anymore,
then all of a sudden you don't need all of that.
And it's not that you won't have abstractions,
but you'll have a different kind of abstraction.
I have two topics to bring us to a close,
and you could pick whichever ones.
Just talking about protocols.
Was it you or someone else,
I forget my internet history,
who said that the biggest mistake in the early days was that we didn't figure out payments?
Yes.
Was that you?
Yes.
It was a 402.
402 payment required.
We have a chance now.
I don't think we're going to figure it out.
I don't know.
Like, what's your take?
Oh, I think we will.
Yeah.
No, now I think it's going to happen for sure.
Yeah.
And there's two reasons it's going to happen for sure.
One is we actually have internet native money now in the form of crypto.
Stable coins.
Stable coins and crypto.
And I think the grand unification of AI and crypto is basically what's about to happen now.
I think AI is the crypto killer app — I think that's where this is really going to come out.
And then the other is just, I mean, it's just, I think it's now obvious.
It's like obviously AI agents are going to need money.
And it's already happening, right?
If you've got a claw and you want it to buy things for you, you have to give it money in some form.
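The 402 idea itself is easy to sketch: a server answers an unpaid request with HTTP's long-dormant 402 Payment Required status, plus machine-readable payment details an agent could act on. The header name and payment fields below are invented for illustration; no settled standard is being quoted here.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywalledHandler(BaseHTTPRequestHandler):
    """Toy server: charge for /report using the 402 status code."""

    def do_GET(self):
        if self.path == "/report" and self.headers.get("X-Payment-Proof") is None:
            # No proof of payment: tell the agent how to pay (fields are made up).
            details = json.dumps({
                "amount": "0.25",
                "currency": "USDC",
                "pay_to": "example-address",
            }).encode()
            self.send_response(402, "Payment Required")
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(details)))
            self.end_headers()
            self.wfile.write(details)
        else:
            body = b"the paid-for report"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), PaywalledHandler).serve_forever()
```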
I would say the adoption is probably like 0.1% if that, but yeah.
Oh, today.
Yeah, yeah, yeah, but think forward.
Like, where is it going?
Forward thinking.
The ultimate principle of everything and everything that I think we do is the William Gibson quote,
which is the future is already here.
It just isn't distributed yet.
My friends who are the most aggressive users of OpenClaw have given their claws bank accounts,
and not only have they done it, it's obvious that they needed to do it,
because it's obvious that the claws needed to be able to spend money on their behalf.
It's just completely obvious.
And so, and again, like, so the number of people who have done that today,
to your point is like, I don't know, probably 5,000 or something.
But that's how these things start.
Actually, I mean, since you keep mentioning.
And by the way, OpenClaw, by the way, if you don't give it a bank account,
it's just going to break into your credit.
It's going to break into your bank account anyway and take your money.
So you might as well do it.
You might as well do it.
By the way, I really love, I got to tell you, I really love the phenomenon.
I love the YOLO.
I'm not doing it myself, to be clear, but I love the people that are just like,
yeah, what is it, skip, skip...
Dangerously skip permissions.
Which, by the way, it's a Facebook thing.
Okay.
Right. Because at Facebook, they have this culture of naming the thing dangerous so that you are aware, when you enable the flag, that you are opting into a dangerous thing.
Okay. Good. And they brought it into OpenAI. And of course, that makes it enticing. Sam runs Codex with skip permissions on his laptop.
Yes, 100%. And so I think the way to actually see the future is to find the people are doing that.
There's a method to the madness, you know: log everything, just watch it. Watch the logs.
But like, let's actually find out what the thing can do. And the way to find out what the thing can do is just.
Try everything.
Yeah, let it try everything.
Let it unlock everything.
By the way, that's how you're going to find all the good stuff it can do.
By the way, that's also how you're going to find all the flaws.
I think the people who turn that on for bots are like,
they're like martyrs to the progress of human civilization.
Like,
I feel very bad for their descendants that their bank accounts are going to get looted by
their bots in the first like 20 minutes.
But I think the contribution that they're making to the future of our species is amazing.
It's like gentleman science.
Yes, it's, yes, yes.
It's a, yes.
It's Ben Franklin out there trying to get lightning to strike his balloon and seeing if he gets electrocuted.
Yeah.
It's, uh, it's Salk with the polio vaccine.
Injecting it.
Yes.
So yes, I think we should have, like, glory.
We should have flags and we should have monuments to the people that just let OpenClaw run their lives.
More anecdotes.
I was going to ask, what are the craziest or most interesting things that people listening to this should go home and do?
I mean, this is the extreme thing, which is just the straight YOLO.
Like, just, yeah, turn it loose on your life.
That's a general capability.
Yeah.
Yeah.
Like a specific story that was like, wow.
And everyone in the group chat just lit up.
I mean, like, you know, there's already tons of health stuff. The health dashboard stuff is just absolutely amazing. The number of stories... I just don't want to violate people's, you know, obviously personal — anonymized, anonymized. But, you know, one of the things OpenClaw is really good at is hacking into all this stuff on your LAN. It's really good. So, you know, Internet of Things, aka Internet of shit. Yeah, like, super insecure, but great. It's discoverable. It's discoverable. OpenClaw is happy to scan your network and identify all the things. And then my friends who are most aggressive at this are having OpenClaw take over everything in their house. It takes over their security cameras. It takes over their, you know, whatever, their access control systems. It takes over their webcams. I have a friend whose claw watches him sleep. Put a webcam in your bedroom, put the claw on a loop, have it wake up frequently and watch. Just tell it, watch me sleep. And I've seen the transcripts and it's
literally like, Joe's asleep. This is good. This is good that Joe's asleep because, you know,
I have his health data and I know that he hasn't been getting enough sleep. And so it's really good
that he's getting sleep. I really hope he gets his full, whatever, you know, five hours of sleep. Uh, tut-ta-da. Joe's moving.
Joe's moving.
Joe might be waking up.
If Joe wakes up now, he's going to ruin his sleep cycle.
Oh, okay.
It's okay.
Joe just rolled over.
Okay, he's gone back to bed.
Okay, good.
All right.
Okay, I can relax.
This is fine.
He's monitoring the situation.
Monitoring the situation.
And being a bot, like, you know, is just like very focused.
It's just like, this is my reason for existence: to watch Joe sleep.
And then, I was talking to my friend who did this, and he's like, you know, on the one hand, it's like, all right, this is weird and creepy, and maybe this thing is taking over my life.
And then the other thing is, like, you know, what if I had a heart attack in the middle of the night? This thing literally would freak out and call 911.
Like, there's no question this thing would figure out how to, like, alert medical authorities and, like, probably summon SWAT teams and like do whatever would be required to save my life, right?
And so it's like, you know, like, yeah, like that's happening. What else?
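On the earlier point about the claw being "happy to scan your network and identify all the things": the mechanical core of that is just a sweep of the local subnet for listening ports, roughly like the sketch below. The subnet and port list are assumptions, and a real agent would go much further (service fingerprinting, credentials, and so on).

```python
import socket

SUBNET = "192.168.1."                 # assumed home subnet
PORTS = [80, 443, 554, 8080]          # common web/camera ports to probe

def probe(host: str, port: int, timeout: float = 0.3) -> bool:
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

found = {}
for i in range(1, 255):
    host = f"{SUBNET}{i}"
    open_ports = [p for p in PORTS if probe(host, p)]
    if open_ports:
        found[host] = open_ports       # a device worth identifying further

for host, ports in found.items():
    print(host, ports)
```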
There's a company, Unitree, that makes the robot dogs, and I actually have one at home, which is actually really fun. The Chinese companies are so aggressive at adopting new technology, but they don't always take the time to really package it and maybe think it all the way through.
And so at least the Unitree dog I have has an old, non-LLM control system, which by the way is not very good.
It markets well, but in practice, it's not that good.
It has trouble with stairs and so forth.
And so it's not quite what it should be.
But then the language model thing comes along, and the voice stuff.
So they add LLM capability and then they add a voice mode to it.
But that LLM capability is not at all connected to the control system.
So you've got this schizophrenic dog that is a complete idiot when it comes to climbing the stairs,
but it will happily teach you quantum mechanics.
Right,
in like a plummy English accent, right?
It's just like absolutely amazing.
Jagged intelligence.
Yeah,
talk about jagged.
Now,
obviously what's going to happen in the future
is they're going to connect together.
But right now,
and so right now it's not that useful.
And so I have a friend who has one of these
who had his claw,
basically hack in and rewrite the code,
write new firmware.
Yeah.
Write new firmware for the Unitree robot.
And now it's,
now it's an actual pet dog for his kids.
You should show a before and after, like, of the motion.
Yeah,
you said it's completely different.
He said it's a complete transformation.
And whenever there's an issue in the thing now, the claw just rewrites the code.
You know, it goes in and redoes the code.
It kind of goes to your thing here.
So, like, all of a sudden — and this is how we should think about AI coding:
AI coding is not just like writing new apps.
It's also going in and rewriting all the old stuff that should have worked that never worked.
And so like I think basically, I think the internet, the internet of shit is basically over.
Like I think everything, there's the potential here where like all these devices in your house that have been like basically marginal or, you know, basically dumb.
You know, like all of a sudden they might all get really smart.
Now, in your home, you have to decide — yes, there are horror movies in which this is the premise.
And so you have to decide if you want this.
But this is the first time I can say with confidence, I now know how you could actually
have a smart home with 30 different kinds of things with chips and Internet access where it actually
all makes sense.
It all works together.
And it's all coherent in the whole thing.
And to have that unlock without a human being having to go do any of that work, like,
yeah.
I'm waiting for a story, Mark.
I can't let you open that fridge door.
Exactly.
Exactly, yes, yes.
Because you're not supposed to eat right now.
I have all of — yes.
I have every shred of your health information, you know,
and I know you think you're doing, you know,
da-da-da-da, and I think you can do this,
but you know, this is a real,
are you really, you know, are you really sure?
And, you know, you told me, you told me last night,
you really don't want me to let you do this.
So, you know, I'm sorry, but the fridge door is locked.
Open the fridge doors.
Exactly.
And by the way, I know you're supposed to be studying for a test.
So why don't you go — when you can pass the test, I will open the fridge door for you.
Yeah.
Final protocol, and then we can wrap up: proof of human.
Yes.
Right?
That's the last piece
that we've got to figure out.
Yeah.
So I would say there's two massive,
I would say,
sort of asymmetries in the world right now
where we've known these asymmetries
exist and we societally have been
unwilling to grapple with them
and I think they're both tipping right now.
And they're the same thing.
It's a virtual world version.
It's a physical world version.
So the virtual world version is the bot problem.
We're just like — the internet is just awash in bots.
The internet's awash in fake people.
It has been forever.
By the way,
A lot of that has to do with lack of money, you know. And so this is, you know, my spicy take: these two are the same thing, and corporations are people too, you know.
Interesting.
Yeah, yeah, yeah.
Okay.
So a bank account is proof of human.
Yeah.
Okay.
Yeah, until you give the bots bank accounts.
Yeah, exactly.
So, okay.
Yeah, so there's that.
But yeah, look, look, the bot, I mean, every social media user knows this.
The bot problem is a big problem.
You know, the bot problem has been a big problem forever.
It's a huge problem.
And it's never really been confronted directly, like at any point.
By the way, the physical world version of this is the drone problem.
Right.
And so we've known for, you know, we've known for 20 years now that the asymmetric threat,
both in military, in actual military conflict, but also in just like security, like,
like, you know, security on the home front, the big threat is, is the cheap attack drone,
right?
The cheap, you know, drone with a bomb.
And we've known that forever.
And by the way, like, you know, it's very disconcerting how like every, you know,
every office complex in the, you know, in the world is like unprotected from drone attacks.
Every, every stadium, every school, every prison, like, is like,
Sure.
Okay.
We've known that. We've never done anything about it. What are you going to do about it, yeah. One possibility is to just live in a world of asymmetric terrorism forever. The other is to take the problem seriously and figure out the set of techniques and technologies required to deal with it, whether those are lasers or jammers or early warning systems or, you know, personal force fields.
Personal force fields. Exactly. And in both cases, these are economic asymmetries, right? Because it's really cheap to field a bot, but it's very hard to tell whether something is a bot. It's very cheap to field a drone. It's very expensive to defend against a drone.
But you see what I'm saying is it's the virtual version of the problem and it's the physical
version of the problem.
The virtual version of the problem, what we need quite literally is proof of human.
The reason is because you're not going to have proof of a bot, especially now that the
bots are too good.
The bots can pass the Turing test.
And if the bots can pass the Turing test, then you can't, you can't screen for bot.
You can't have proof of not a bot.
But what you can have is you can have proof of human.
You can have cryptographically validated: this is definitely a person.
And then you can have cryptographically validated: this is definitely something that a person said, this video is real, right?
Just to double click on, do you think Alex Blania with World, do you think he's got it?
Or is there an alternative?
Oh, so, I mean, there's going to be, I think there will be, I think many people will try.
We're one of the key, you know, participants in the world, in the World Project.
And I don't know.
Yeah.
So we're partisans.
But yeah, I think, so we think world is exactly correct.
Okay.
And the reason is it has to be proof of human.
Because you can't do proof of not-a-bot. You have to do proof of human.
To do proof of human, you need biological validation. You need to start with: this was actually a person, right? Because otherwise you have bots signing up as fake people. So you have to have a biometric, and then you have to have cryptographic validation, and then the ability to do the lookup. And then, by the way, the other thing you need is selective disclosure: you need to be able to prove human without revealing all the underlying information. By the way, another thing you're going to need is proof of age, right? Because there are all these laws in all these different countries now around, you need to be 13 or 18 or whatever to do different things. And so you need to be able to validate a proof of age, you know, to be able to legally operate, right? And so that's
coming. And then you're going to want, like, proof of credit score and, you know, proof of, like, a hundred other things.
That's a tricky one.
It is a tricky one, but you're going to, there's no reason, like if somebody's checking
on your credit, somebody shouldn't, give you an example, somebody shouldn't need to know your
name in order to be able to find out whether you're credit worthy.
I see — independently verifiable pieces of information. It's, like, selectively disclosed.
And this is the answer to the privacy problem at large, which is: I only need to prove what I need to prove, at that moment.
So like you're going to need that.
And I think their architecture makes sense.
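A toy of the prove-only-what's-needed idea: an issuer signs a credential containing just the claims a site needs, and a verifier checks the issuer's signature without ever seeing a name or birthdate. Real proof-of-human systems layer on biometrics and zero-knowledge machinery; this only shows the shape of the check, using the cryptography package's Ed25519 primitives.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign a credential that discloses only the needed claims.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

credential = json.dumps({
    "is_human": True,        # the claim a website actually needs
    "age_over_18": True,     # selective disclosure: no name, no birthdate
}, sort_keys=True).encode()
signature = issuer_key.sign(credential)

# Verifier side: check the issuer's signature; learn nothing else about the person.
def verify(credential: bytes, signature: bytes) -> dict:
    issuer_pub.verify(signature, credential)   # raises InvalidSignature if forged
    return json.loads(credential)

print(verify(credential, signature))
```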
So that needs to get solved.
I think language models have tipped it. The bots are now too good, and so
they're undetectable. And so as a consequence,
we now need to go confront that problem directly.
And then like I said, and then the other problem is we need to go
actually confront the drone problem. The Ukraine
conflict has really unlocked a lot of
thinking on that. And now the
the Iran situation is also unlocking
that. And so I think there's going to be just like this
incredible explosion of both drone and counter-drone technology.
Our drones are buried in their drones. Is it to keep it that way?
Yeah. And counter-drones.
I think we're going to sneak in one more question.
I'm trying to tie together a lot of things that you
said over the years. So at the Milken Institute debate with Thiel, which was amazing, you talked about
the lag between a new technology and kind of like the GDP impact of it. The other idea you talked
about is bourgeois capitalism and how, you know, a kind of managerial class was needed
because of this complexity. And I think if you bring AI into the fold, you have like much higher
leverage per person. So if you have, you know, the Musk industries and you give Elon AGI,
you can run a lot more things at once. And then you have the,
social contract. And I know you responded to a clip of Sam Altman saying, um, we're rethinking the whole
thing and you're like, absolutely not. And I was at an event with Sam last night. And he actually
said in the last couple of weeks, it felt like now people are taking that seriously. So I'm just
curious like how you're seeing the structure of organization changing, especially when you invest in
early stage companies. And, um, yeah, just like how the impact of work structure and all of that is playing
out. Yeah. So there's a whole bunch there. I know. Yeah. By the way, we'd be happy to spend more time; we could spend more time on
that. So just for people who haven't followed this: this term managerial comes from a thinker in the 20th century, James Burnham, who was just one of the great 20th-century political thinkers, societal thinkers. And he was writing in like the 1940s, 1950s. And he said the whole history of capitalism until that point had been in two phases. Number one had been what he called bourgeois capitalism, which you can think about as, like, name on the door. Like the Ford Motor Company, where Henry Ford runs the company, and Henry Ford is like a dictatorial model, and Henry Ford just tells everybody what to do. And he said the problem with bourgeois capitalism
is it doesn't scale, because Henry Ford can only tell so many people to do so many things,
and then he runs out of time in the day.
And so he said the second phase of capitalism was what he called managerial capitalism,
which was the creation of a professional class of managers that are trained not to be like
car experts or to be whatever experts in any particular field, but are trained to be experts
in management.
And then that led to, you know, the importance of like Harvard business, you know, business
schools and management consulting firms and all these things.
And then you look at every big company today, and most of the executives at most of the Fortune 500 companies are not domain experts in whatever the company does.
And they're certainly not the founders of those companies,
but they're professional managers.
And in fact, in the course of their careers,
they'll probably manage many different kinds of businesses.
They'll rotate around and they might work in healthcare for a while
and then work in financial services and then go work in something else,
you know, come work in tech.
And what Burnham said is he said that transition is absolutely required
because the problem with bourgeois capitalism is it doesn't scale
and Henry Ford doesn't scale.
And so if you're going to run capitalist enterprises
that are going to have millions to billions of customers,
they're going to be operating at a level of scale and complexity
that's going to require this professional management class.
And he said, look, the professional management class has its downsides.
Like, they're not necessarily experts at doing the thing.
They're not as inventive.
You know, they're not going to create the next breakthrough thing.
But he's like, whether you think that's good or bad or whatever is what's going to be required.
And basically, that's what happened, right?
And so he wrote that book originally in like 1940.
And over the course of the next 50 years — no, I mean, up till today — managerialism basically took over everything.
And, you know, what I'm describing is basically how all big companies run and how all
governments run and how large-scale nonprofits run and kind of
everything runs. Basically, what venture capital does is we basically are a rump sort of protest
movement against that, to try to find the next Henry Ford, or the next Elon Musk, or the next Steve Jobs, the next Bill Gates, the next Mark Zuckerberg. And so
we start these companies in the old model, right? We start them out as in the Henry Ford model. And so
we start them out with a founder, or a founder with colleagues, but, you know, there's a founder
CEO. And then we basically bet that the startup is going to be able to do things
specifically innovate in ways that the big incumbents in that industry are not going to be able
to do. And so it's a bet that by basically reviving this sort of name-on-the-door kind of thing, this new innovative thing with, like, a king, a monarchical political structure, that they're
going to be able to innovate in a way that the incumbent is not going to be able to because the incumbent
is being run by managers. And by the way, and of course, venture being what it is. Sometimes that
works. Sometimes it doesn't. But we're constantly doing that. But I
have always viewed my entire life as, like, we're raging against the dying of the light.
Like we're sort of constantly trying to fight off managerialism just basically swamping everything
and everything getting basically boring and gray and dumb and old.
Right.
And we're trying to keep some level of energy and vitality in the system.
AI is the thing that would lead you to think, wow, maybe there's a third model.
Right.
And maybe the way to think about it would be: maybe it's a combination of the two.
Maybe the new Henry Ford or the new Elon or the new Steve Jobs, plus AI, is the best of both,
right? Because it's sort of the spark of genius of the name-on-the-door model, the Henry Ford model,
but then you give that person AI superpowers to do all the managerial stuff and let the bots handle the managerial stuff.
That may be the actual secret formula. And we've never even known that we wanted this because we never even thought it was a possibility.
But I mean, you know this. What is the thing these bots are really, really good at? Doing paperwork.
Like, they're really good at filling out forms. Like, they're really good at writing reports.
They're really good at reading. They're really good at doing all the managerial work; they're amazing at it.
And so, yeah, so I think, I think the, 100%,
I think the answer very well might be to get the best of both worlds by doing this.
And then the challenge is going to be twofold.
The challenge is going to be for the innovators to really figure out how to leverage AI to actually do this.
And then the other challenge is going to be for the incumbents that are managerial to figure out, like, okay, what does that mean?
Because now they're going to, they're going to be facing a different kind of insurgent competitor that has a different set of capabilities than they're used to.
And so this really, I think, is going to force a lot of big companies to kind of figure out innovation. Or I should say, figure out innovation or die trying.
Do you feel like that structure accelerates the impact on the actual GDP and economy?
If you look at SpaceX, the growth is so fast. And instead of having these companies kind of peter out in growth and impact, they can kind of keep going, if not accelerating.
That's for sure the hope. And look, the AI utopian view is, of course, of course: this is going to be the future of the economy, and it's going to grow 10x and 100x and 1,000x, and we're going to enter this regime of much higher economic growth and a consumer cornucopia of everything, and it's going to be great.
And I hope that's true.
I hope that's like the, you know, that's the current kind of utopian vision.
I hope that's true.
The problem is, goes back again, the real world is really messy.
And I'll give you an example of how the real world is really messy.
It requires 900 hours of professional certification training to become a hairdresser in the state
of California.
So for, like, 35% of the economy, something like that, you have to get some sort of
professional certification to do the job.
Which is to say that the professions are all cartels, right?
And so you have to get licensed as a doctor.
You have to get license as a lawyer.
You have to get licensed as a, you have to get into a union.
By the way, to work for the government, you need to be, you have both civil service protections and you have public sector unions.
You have two layers of insulation against ever getting fired for anything or anything, anything ever changing.
I'll give you another example.
The dock workers went on strike a couple years ago because they're, you know, robotics.
If you go look at a modern dock like in Asia, it's all robots.
If you go to American doc, it's like all still guys dragging, dragon stuff by hand.
The dock workers are on a strike.
It turns out there are 25,000 dock workers working on docs in America.
It turns out they have incredible political power because it's one of these unified blocks of things.
They won their strike.
And so they got commitments from the dock owners to not implement more automation.
We learned a couple of things in that.
So, number one, we learned that even a union as small, is 25,000 people still has, like, tremendous political stroke.
We also learned that they, it actually turns out the dock workers union has 50,000 people in it.
because they have 25,000 people working at the docs.
They have 25,000 people during full paycheck sitting at home from prior union agreements.
Oh, my God. From prior union agreements.
I'll give you another great example.
There are government agencies.
There are federal government agencies where the employees, right, have civil service protections
and they're in public sector unions.
There are entire federal government agencies that struck new collective bargaining agreements
during COVID where not only are their jobs guaranteed in perpetuity,
but they only have to report to work in an office one day per month.
And so there are entire office buildings in Washington, D.C. that are empty 29 out of 30 days of the month, that are still operating
and that we're all still paying for. And so then what they do — it turns out what the
employees do is they're very smart in this way. And so they figure out, they come in on the last
day of a month and the first day of the next month. And so they're in there, they're in the office
two days per 60 days, which means these buildings are empty for 58 days at a time. And you see,
you see where I'm heading with this. Like, this is like locked in. Right. This is like locked in
in a way that has nothing to do with — and people say capitalism, but it's, like, anti-capitalistic. It's basically restrictions on trade. It's restrictions on the ability to, like,
change the workforce. And so, so much of our economy is like this. You know, I'm describing the entire
healthcare system. I'm describing the entire legal profession. I'm describing the entire housing
industry. I'm describing the entire education system. Right. K through 12 schools in the United States,
they're a literal government monopoly. How are we going to apply AI in education? The answer is we're not,
because it's a literal government monopoly.
It is never going to change, and there is nothing you can do.
By the way, you can create an entirely new school system.
That's the one thing you can do is you can do what Alpha School is doing.
You can create an entirely new school system.
Other than that, you're not going to go in and change what's happening in the American classroom like K through 12.
There's no chance.
The teachers are 100% opposed to it.
It's 100% not going to happen.
So you see what I'm saying is like there's this like massive slippage that's going to take place.
Both the AI utopians and the AI doomers are far too optimistic.
Right.
You see what I'm saying?
because they believe that because the technology makes something possible
that 8 billion people all of a sudden are going to change how they behave.
And it's just like, no.
So much of how the existing economy works is just like wired in.
And so we're going to be lucky as a society,
we're going to be lucky if AI adoption happens quickly.
Right.
Because if it doesn't, we're just going to have a stagnation.
Awesome, Mark, I know you got to run.
Yeah, I know.
You're always welcome back, but it was such a pleasure talking to you.
We're truly living in an age of science fiction coming to real life.
Yes, yes.
Could not be more exciting.
Yeah.
Really, thank you, Mark.
It was great to be with you guys. Awesome.
Thank you.
Good. That's it.
Good. Thank you.
As a reminder,
please note that the content here is for informational purposes only.
It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.
For more details, please see A16Z.com slash disclosures.
