Dwarkesh Podcast - Tyler Cowen — The #1 bottleneck to AI progress is humans
Episode Date: January 9, 2025

I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him – and yet I'm always hearing new stuff. We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth – among many other things in the full episode.

Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Sponsors

I'm grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkesh.

Timestamps

(00:00:00) Economic Growth and AI
(00:14:57) Founder Mode and increasing variance
(00:29:31) Effective Altruism and Progress Studies
(00:33:05) What AI changes for Tyler
(00:44:57) The slow diffusion of innovation
(00:49:53) Stalin's library
(00:52:19) DC vs SF vs EU

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
Tyler, welcome.
Dwarkesh. Great to be chatting with you.
Why won't we have explosive economic growth, 20% plus, because of AI?
It's very hard to get explosive economic growth for any reason, AI or not.
One problem is that some parts of your economy grow very rapidly,
and then you get a cost disease in the other parts of your economy.
That, for instance, can't use AI very well.
Look at the US economy. These numbers are guesses.
But government consumption is what? 18%.
Healthcare is almost 20%?
I'm guessing education is six to seven percent. The nonprofit sector, I'm not sure the number, but you add it all up, that's half of the economy right there. How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years. So you'll have some sectors of the economy, less regulated, where it happens very quickly, but that only gets you a modest boost in growth rates, not anything like, oh, the whole economy grows 40% a year. In a nutshell.
The mechanism behind cost disease is that there's a
limited amount of laborers and if there's one high productivity sector, then wages everywhere
have to go up so your barber also has to earn twice the wages or something. With AI, you can just
have every barbershop with 1,000 times the workers, every restaurant 1,000 times the workers,
not just Google. So why would the cost disease mechanism still work here? Cost disease is more general
than that. Let's say you have a bunch of factors of production. Say five of them. Now all of a sudden,
we get a lot more intelligence, which has already been happening, to be clear, right? Well, that just
means the other constraints in your system become a lot more binding, that the marginal
importance of those goes up and the marginal value of more and more IQ or intelligence goes
down. So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem, which we illustrate with talk of barbers and string quartets and the like.
If you were talking to a farmer in 2000 BC, and you told him that growth rates would go 10x, 100x, that you'd have 2% economic growth after the Industrial Revolution, and then he started talking about bottlenecks, what would you say to him in retrospect?
He and I would agree, I hope. I think I would tell him, hey, it's going to take a long time.
And he'd say, hmm, I don't see it happening yet. I think it's going to take a long time.
And we'd shake hands and walk off into the sunset. And then I'd eat some of his rice or wheat or whatever, and that would be awesome.
But the idea that you can have a rapid acceleration in growth rates, and that bottlenecks don't just eat it away...
I mean, you could agree with that, right?
I don't know what the word could means.
I would say this.
You look at market data, say real interest rates, stock prices.
Right now, everything looks so normal.
Startlingly normal, even apart from AI.
So what you'd call prediction markets are not forecasting super rapid growth anytime soon.
If you look at what experts on economic growth, right, we had Chad Jones here yesterday,
he's not predicting super rapid growth, though he thinks AI might well accelerate rates of growth. So the experts and the markets agree. Who am I to say different from
the experts in the market? You're an expert. In his talk yesterday, Chad Jones said that the main
variable, the main input into his model for growth is just population. If you have a doubling,
an order of magnitude increase in the population, you plug that number in. In his model,
you get explosive economic growth. Why not buy the model? His model is far too much a one-factor
model, right? Population. I don't think it's very predictive.
We've had big increases in effective world population in terms of purchasing power.
A lot of different areas have not become more innovative.
Until the last, say, four years, most of them became less innovative.
So it's really about the quality of your best people or institutions, as you and Patrick were discussing last night.
And there it's unclear what's happened.
But it's also fragile.
There's the perspective of the economist, but also that of the anthropologist, the sociologist.
They all matter.
But I think the more you stack different pluralistic perspectives,
the harder it is to see that there's any simple lever you can push on,
intelligence or not, that's going to give you breakaway economic growth.
I mean, what you just said, where you're bottlenecked by your best people, seems to contradict what you were saying in your initial answer: that even if you boost the best parts, you're going to be bottlenecked by the restaurants and whatever.
You're bottlenecked. You're one of our best people, right?
You're frustrated by all kinds of things.
I think I'm going to be making a lot more podcasts after AGI.
Okay, good.
I'll listen.
I'll be bottlenecked by time.
Just marketing.
Here's a simple way to put it.
Most of sub-Saharan Africa still does not have reliable clean water.
The intelligence required for that is not scarce.
We cannot so readily do it.
We are more in that position than we might like to think, but along other variables.
And taking advantage of the intelligence from strong AI is one of those.
So about a year ago, your co-writer on Marginal Revolution, Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers. And so if the labor force in the United States is 164 million people, if one in a thousand of them are geniuses, you know, you have 164,000 geniuses. That's why you have to do semiconductors in Taiwan, because that's where they're putting their small number of geniuses. We're putting ours in finance and tech. If you look at that framework,
I mean, come on, we have a thousand times more of those kinds of people. At the end of the day,
the bottlenecks are going to eat all that away? Or if you ask any one of these people, if you had
a thousand times more of your best colleague, your best coworker, your best co-founder, the bottleneck's
going to eat all that away, your organization isn't going to go any faster?
I didn't agree with that post.
If you look at labor market data, the returns to IQ, as it translates into wages, they're amazingly
low. They're pretty insignificant. And people who are very successful, they're very smart,
but they're people who have, say, eight or nine areas where they're like on a scale of one to
ten, they're a nine. Like they have one area where they're just like an eleven and a half
on a scale of one to ten. And then on everything else, they're an eight to a nine. And
have a lot of determination.
And that's what leads to incredible success.
And IQ is one of those things, but it's not actually that important.
It's the bundle, and the bundles are scarce, and then the bundles interacting with the
rest of the world.
Like just try going to a mid-tier state university and sit down with the committee designed
to develop a plan for using artificial intelligence in the curriculum, and then come back
to me and tell me how that went.
And then we'll talk about bottlenecks.
I mean, all these...
They will write a report.
The report will sound like GPT-4, and we'll have the report.
These...
The report will not be bottlenecked, I promise you.
These other traits, look, the AIs are...
If it's conscientiousness, if it's pliability, whatever,
the AIs will be even more conscientious.
They'll work 24-7, and they'll, like,
if you need to be deferential to the FDA,
they'll write the best report the FDA has ever seen
and they'll get things going along.
These other traits, they're not going to be bottlenecked by them, right?
They'll be smart and they'll be conscientious.
That I strongly believe.
Look, I think they will boost the rate of economic growth by something like half a percentage point a year.
Over 30, 40 years, that's an enormous difference.
It will transform the entire world.
But in any given year, we won't so much notice it.
And a lot of it is something like a drug that might have taken 20 years.
Now we'll come in 10 years.
But at the end of it all, it's still our system of clinical trials and regulation.
And if everything that took 20 years takes 10 years over time, that's an immense difference.
But you don't quite feel it as so revolutionary for a long time.
So the whole vibe of this progress studies thing is, look, we've got all these low-hanging
fruits or medium-hanging fruits that if we fix our institutions, if we made these changes to regulations
to institutions, we could rapidly boost the rate of economic growth.
And you're saying, okay, so we can fix the NIH and get increases in economic growth.
But we have a billion extra people, 10 billion extra people, the smartest people, the most conscientious people, and that won't make an iota of difference to economic growth?
Isn't there a contradiction between these two perspectives on how much the rate of economic growth can increase?
There's diminishing marginal returns to most of these factors.
So a simple one is how it interacts with regulation, law, and the government.
Another huge one is energy usage.
How good is our country in particular at expanding energy supply?
I've seen a few encouraging signs lately with nuclear power.
That's great.
Most places won't do it.
And even with those reports, exactly how many years will it take?
I know what the press releases say.
We'll see, you know, it could be 10 years or more.
And that will just be a smidge of what we'll need to implement the kind of vision you're describing. So yeah, there are going to be bottlenecks all along the way,
the whole way, and it's going to be a tough slog, like the printing press, like electricity.
The people who study diffusion of new technologies never think there will be rapid takeoff.
So my view is kind of like I'm always siding with the experts. So economists, social scientists,
most of them are blind and asleep to the promise of strong AI. They're just out to lunch. I think
they're wrong. I trust the AI experts. But when you talk about, say, diffusion of new technologies, the people who do AI are basically totally wrong there; the people who've studied that issue, I trust those experts. And if you put together the two views, where in each area you trust the experts, then you get my view: amazing in the long run, but it'll take a long time, a tough slog, all these bottlenecks in the short run.
And the fact that there's, like, a billion of your, you know, GPT-whatevers, which I'm all in love with, I promise you, it's going to take a while.
What would the experts say if you said, look, we're going to have... forget about AI, because I feel like when people hear AI, they think of GPT-4, not the humans, not the things that are going to be as smart as humans.
So what would the experts say if you said tomorrow, the world population, the labor force is going to double?
What impact would that have?
Well, what's the variable I'm trying to predict?
If you mean energy usage, that's going to go up, right?
Over time, it's probably going to double.
I'm not sure it would be a noticeable difference.
Doubling the world population?
Yeah, I'm not sure.
I don't think the Romer model has been validated by the data,
and I don't agree with the Chad Jones model,
much as I love him as an economist.
I don't think it's that predictive.
I mean, look at artistic production in Renaissance Florence.
There's what, 60,000 people in the city, plus the surrounding countryside. But so many things went right at the top level that it's still so amazing in terms of value added today.
And the numbers model, it doesn't predict very well.
The world economy today is some 100 trillion something.
If the world population were one-tenth of what it is now, if you only had 1 billion people, 100 million people,
you think we could have the world economy at this level
with our level of technology?
No, the delta's a killer, right?
This is one thing we learned from macro.
The delta and the levels really interact.
So shrinking can kill you,
just like companies, nonprofits, if they shrink too much,
often they just blow up and disappear, they implode.
But that doesn't mean that growing them
gets you 3x, 4x, whatever, proportional to how they grow.
It's oddly asymmetric.
It's very hard to internalize emotionally, that is,
that intuition in your understanding of the real world, but I think we need to.
What are the specific bottlenecks?
Like what?
Humans, here they are.
Bottleneck, bottleneck, hi, good to see you.
And some of you are terrified.
You're going to be even bigger bottlenecks.
That's fine, it's part of free speech.
But my goodness, once it starts changing what the world looks like,
there will be much more opposition, not necessarily from what I call doomster grounds,
But just people like, hey, I see this has benefits, but I grew up, trained my kids to live in some other kind of world.
I don't want this.
And that's going to be a massive fight.
I really have no prediction as to how it's going to go.
But I promise you that will be a bottleneck.
But you can see even historically, you don't have to go from the farmers to the Industrial Revolution's 10x. You can just look at actual cases in history where we have had 10x, sorry, 10% rates of economic growth. You go to China after Deng Xiaoping, they have decades of 10% economic growth. And even if that's just because you can do some sort of catch-up, the idea that you can't replicate that with AI, it's not like it's infeasible.
Where were the bottlenecks when Deng Xiaoping took over?
They're in a mess now.
I'm not sure how it's going to go for them.
They're just a middle-income country.
They struggled to tie per capita income with Mexico.
I think they're a little ahead of Mexico now.
They're the least successful Chinese society, in part,
because of their scale. Their scale is one of their big problems. There's this fear that if they democratize
and try to become a normal country, that the median voter won't protect the interests of the elites.
So I think they're a great example of how hard it is for them to scale because they're the poorest group of
Chinese people on the planet.
I mean, like, not the challenges now, but the fact that for decades, they did have 10% economic growth, and some years 15%.
Well, starting from a per capita income of, like, $200 per head.
And now, to our descendants, we're going to be, like, as poor as the Chinese were, you know, 30 years ago.
I'm very impressed by the Industrial Revolution. Like, you could argue it's, for progress or progress studies here, maybe the most important event in human history. The typical rate of economic growth
during that period was about one and a half percent. And the future is about compounding and
sticking with it and, you know, seeing things pay off in the long run. Just human beings are not
going to change that much. And I don't think that property of our world will change that much,
even with a lot more IQ and conscientiousness.
I interviewed you, like, nine months ago, and I was asking you about AI then. And I think your attitude was like, eh. And I think now... I don't know, has your attitude changed since we talked nine months ago?
You know, I don't remember what I thought
in what month, but I would say on the whole, I see more potential in AI than I did a year ago.
And I think it has made progress more quickly than I had been expecting. And I was pretty bullish
on it back then. The o1 model to me is very impressive. And I think further extensions in that
direction will make a big big difference. And the rate at which they come is hard to say,
but it's something we have and we just have to make it better. You showed me your document of
different questions that you came up with for o1 for economic reasoning. I don't think I...
That was for GPT-4.
Okay. Yeah. But what percentage of them did o1 get right? Because I don't think it got a single one of those wrong, right?
Those questions were too, you know, they were too easy. They were for GPT-4, and it's like I've abandoned those questions. You know, 100 questions of economics.
How well does a human do on them?
They're hard, but it's like pointless.
So I would not be shocked if somebody's AI model
in less than three years, you know, beat human experts
on a regular basis. Let's put it that way.
Did that update you in any way, that now you've given up on these questions because they were too easy for these models? And they were initially... like, they are hard questions, objectively, right? They're just easy for o1.
I feel like Kasparov, the first time he met Deep Blue. You know, there were two matches, and the first one, Kasparov won. And I lived through that first match. I feel like I'm sort of in the midst of the first match right now, but I also remember the second match. And in the final game, you know, Kasparov made that boneheaded error in the Caro-Kann Defense. That too was a human
bottleneck. And he lost the whole match. So we'll see what the rate of change is.
Yesterday, Patrick was talking about how important it is for the founders of different institutions
to hang around and be the ones in charge.
I've heard you talk about like, you know, the Beatles were great
because the Beatles were running the Beatles.
Why do you think it's so important for that to be the case?
I think courage is a very scarce input in a lot of decisions.
And founders, they have courage to begin with,
but they also need less courage to see through a big change
and what the company will do.
So Facebook, now Meta, has made quite a few big changes in its history.
So Mark had a lot of courage to begin with.
But if Mark Zuckerberg said,
we're going to do this, we're going to do that.
It's pretty hard for everyone else to say no in a good way.
I really like that.
So it economizes on courage, having a founder, and you're selecting for courage.
Those would be two reasons.
How does that explain the Beatles success?
Well, Beatles are an interesting example.
I mean, they broke up in 1970, right?
Rolling Stones are still going.
That tells you something.
But the Beatles created much greater value, and the Beatles are the group we still all talk about
much more, even though the Rolling Stones are still with us.
They were always unstable. There's, like, two periods of the Beatles. Early Beatles, John is the leader. But then Paul works at it, and John becomes a heroin addict, and Paul gets better, better, better. And ultimately there's no core. There's not a stable equilibrium. The Beatles split up. But that creative tension for, like, those core seven to eight years was just unbelievable. And it's four founders. Ringo, not quite a founder, but basically a founder, because Pete Best was crummy and they got rid of him right away. It's one of the most amazing stories in the world. I like studying these amazing
productivity stories like Johann Sebastian Bach, Magnus Carlsen, Steph Curry, the Beatles.
I think they're worth a lot of study. They're atypical. You can't just say, oh, I'm going to be like
the Beatles. Like, you're going to fail. The Beatles did that. But nonetheless, I think it's a good
place to look for getting ideas and seeing risks. Hello, everyone. This is Tyler Cowan,
and I would like to personally thank Jane Street for sponsoring this podcast episode with
Dwarkesh Patel. I've been visiting Jane Street. They're renowned for their brainy, challenging environment,
and also for their ability to spot and recruit talent. Those are some of the reasons why, for me,
those are the trips and the visits I look forward to the most. I would just say this. If it is
an appropriate option for you, please do consider working there. I've always had a blast during my
visits, learned a lot, and I recall one time when I gave a talk, we all went out to dinner,
and then quite late, well, people didn't go back home,
but they all went back to the Jane Street office to play chess,
bug house, and other games.
They're better at these games than you might think,
so please update your other expectations accordingly.
Thank you again.
I'm incredibly grateful to Tyler for volunteering to say a few words about Jane Street.
This is the first time that a guest has participated in the sponsorship,
and I hope you can see why Tyler and I think so highly of Jane Street.
If you want to learn more about their open roles, go to janestreet.com/dwarkesh.
All right, back to Tyler again.
What did you think of Patrick's observation of the competency crisis?
I see it differently from Patrick, and he and I have discussed this.
So I think there's basically increasing variance in the distribution.
So young people at the top are doing much better, and they're far more impressive than they were in earlier times.
And if you look at areas where we measure performance of the young, chess is a simple example.
We perfectly measure performance.
Very young people are just better and better at chess.
That's proven.
Even like NBA basketball, you have very young people doing things that they would not have been doing, say, 30 years ago.
And a lot of that is mental and training and application and not being a knucklehead.
So the top of the distribution getting much better.
You see this also in science, internet writing.
The very bottom of the distribution.
Well, youth crime has been falling since the 90s.
So the very bottom of the distribution also is getting better.
I think there's some thick middle above the very bottom
and extending like a bit above the median
that's clearly getting worse.
And because they're getting worse,
there's a lot of anecdotal examples of them getting worse.
Like students wanting more time to take the test
or having flimsy excuses
or mental health problems with the young or whatever.
It's a lot more of that because of that thick band
of people getting worse.
and that's a great concern.
But I see the very bottom
and a big chunk of the top of the distribution
as just much better,
and I think it's pretty proven by numbers
that that's the case.
So I would say this increasing variance
with a weird mix of where the gains
and declines are showing up.
And I've said this to Patrick,
and I'm going to say it to him again,
and I hope I can convince him.
It seems concerning, then, if the composition is such that the average goes down, if you look at PISA scores or something.
The median goes down.
You know, a lot of tests,
they've pushed more people into taking the test.
PISA scores in particular.
So I suspect those scores adjusted for that
are roughly constant, which is still not great, I agree.
And I think there's some decline,
some of it is pandemic,
and we're recovering a bit slowly,
getting back to human bottlenecks.
But I think a lot of the talk of declining test scores
is somewhat overblown.
At most, there's a very modest decline, I would say.
If the top is getting better,
what do you make of the anecdotal data
he was talking about yesterday,
where the Stanford kids come up to him and say, you know, all my friends, they're stupid,
you can't hire anybody from Stanford anymore. That should be the cream of the crop, right?
There's plenty of data on the earnings of Stanford kids. If there were a betting market
and, you know, what's the future trend, I'm long. How long I should be, I really don't know,
but I visit Stanford, not every year, but, you know, regularly. And there's selection
and who it is I meet. But, yeah, we're talking about selection, and they're very impressive.
And Emergent Ventures has funded a lot of people from Stanford. As far as I can tell, as a group, they're doing very well. So that is of no concern to me. If, like, you're
worried about the Stanford kids, like something seems off in the level of salience and focus
in the argument because they're overall doing great. And they have high standards. That's good
too. Like, you know, Paul McCartney thought John Lennon was a crummy guitar player and John
thought a lot of Paul's songs were crap. Like in a way they're right, in a way they're wrong,
but it's a sign what high standards the Beatles had.
I mean, you'd hope for...
How old are you, by the way?
24.
Okay.
Now, go back whenever, however many years.
Was there a 24-year-old like you doing the equivalent of podcasting?
Like, it's just clearly better now than it was back then.
People...
And you were doing this a few years ago.
So it's just obvious to me, you know, the young peaks are doing better.
And you're proof.
Wasn't Churchill, by the time he was 24, an international correspondent in Cuba, India, and, I think, the highest-paid journalist in the world?
It was similar. I don't know. I mean, what was he paid and how good was his journalism? I just don't know.
I don't think it's that impressive a job to be an international journalist. Like, what does it pay people now?
He did some good things later on, but for most of his early life, he's a failure. And then, getting back to Patrick, ask the Irish and people from India what they think of the younger Churchill, and you'll get an earful.
Like his real great achievement,
I don't know how old he was exactly,
but it's quite late in his life.
And until then, he's a destructive failure.
There was no one on Twitter to tell him,
hey, Winston, you need to rethink this whole Irish thing.
And today there would be.
Sam, Sam, Sam will do it, right?
Sam will tweet at Winston Churchill: got to rethink the Irish thing.
And Sam is persuasive.
If you read his aphorisms,
I think he would have actually been pretty good on Twitter.
Maybe, but again, you know, like what does the equilibrium look like when everything changes?
But clearly he was an impressive guy, especially given how much he drank.
Okay, so even if you don't buy the Stanford kids, if you don't buy the young kids,
the other trend he was talking about where if you look at the leaders in government,
whatever you think of Trump, Biden, we're not talking about Jefferson and Hamilton anymore, right?
How do you explain that trend?
Well, Jefferson and Hamilton, they're founders, right?
And they were pretty young at the time.
You can do great things when you're founding a country in a way that just cannot be replicated later on.
Putting aside the current year, which I think is weird for a number of reasons.
But I think mostly we have had impressive candidates,
and most of the U.S. bureaucracy in Washington, I think, is pretty impressive:
generals, national security people, top people in agencies, people at Treasury, people at the Fed.
And I interact with these groups, like pretty often.
Overall, they're impressive, and I've seen no signs they're getting worse.
Now, if you want to talk about the two candidates this year, again, that's something we're not going to talk about, but there is a lot you could say on the negative side, yes.
But like Obama, Romney, whichever one you like, I think like, gee, these are two guys who should
be running for president.
And that was not long ago.
So then there's a bunch of candidates running who are good.
What goes systematically wrong in the selection process, such that the two who are selected are not even as good as the average of all the candidates?
You mean this?
And I'm not talking about America in particular.
You know, if the theory is just like noise,
it seems like it skews one way.
Well, the Democrats had this funny path with Biden
and Kamala didn't get through the electoral process
in the normal way.
So that just means you get weirdness,
whatever you think of her.
as a candidate. Trump, you know, whom I do not want to win, I think is extraordinarily impressive in some ways, and along a bunch of dimensions he exceeds a lot of earlier candidates.
I just don't like the overall package, but I would not point to him as an example of low talent.
I think he's a supreme talent, but harnessed to some bad ends.
If you look at the early 20th century, like some of the worst things that happened to progress
is just these psychopathic leaders. What happened? Why did we have so many just awful, awful leaders in the early 20th century?
Well, give me like a country and a name
and a time period and I'll try to answer it.
Who was one of them in particular?
He was from the university; that's what was wrong with him, right? And just think of what school it was.
Who is it?
Woodrow Wilson.
Yeah.
One of our two or three worst presidents on civil rights,
World War I he screwed up, the peace process he screwed up, which indirectly led to World War II; he reintroduced segregation in the civil service in some key regards, and he just seemed a nasty guy who should have been out of office sooner, given his health and state of mind.
So he was terrible. But he was sort of on paper a great candidate. Hoover on paper was a great candidate and was an extremely impressive guy.
I think he made one very bad set of decisions relating to deflation and letting nominal GDP fall.
But my goodness, like there's a reason they called it the Hoover Institution after Hoover.
But the Hitler-Stalin Mao is, was there something that was going on that explains why that was just a crummy time for world leaders?
I don't think I have like a good macro explanation of that whole trend, but I would say a few things.
That's right after the period where the world is changing the most.
And I think when you get big new technologies, and this is relevant for AI, you get a lot of new arms races.
And sometimes the bad people win those arms races.
So at least for quite a while, you had Soviet Russia and Nazi Germany winning some arms races.
And they're not democratic systems.
Later, you have China with Mao being not a democratic system.
And then you have a mix of bad luck.
Like, Stalin and Mao were just draws from the urn.
You could have gotten less crazy people than what you got.
And I agree with Hayek, the worst get to the top, under autocracy.
But like, they're that bad?
Like, that was just some bad luck, too.
There's other things you could say.
But I think we had a highly disoriented civilization.
You see it in aesthetics approaching the beginnings of World War I: art and music radically changing.
People feel very disoriented.
There's a lot up for grabs.
Imperialism, colonialism start to be factors.
Just there wasn't like a stable world order.
And then you had some bad luck tossed into that.
And all of a sudden, these super destructive weapon systems compared to what we had had, and it was awful.
I'm not pretending that's some kind of full explanation,
but that would be like a partial start.
You compared our current period to 17th century England
where you have a lot of new ideas, things go topsy-turvy.
What's your theory of why things go topsy-turvy
at the same time when these eras come about?
What causes this volatility?
I don't think I have a general theory.
If you want to talk about 17th century England,
so they have the scientific revolution.
You have the rise of England as a true global power.
Navy becomes much more important.
The Atlantic trade route, because of the new world,
much more important. Places like the Middle East, India, China, Persia, that earlier, you know,
had major roles, they're crumbling partially for reasons of their own, and that's going
to help the relative power of the more advanced European nations. England has a lot of
competition from, you know, the Dutch Republic, France, happening at the same time that for
the first time in human history that I know of, we have sustained economic growth, according
to Greg Clark, starting in the 1620s, of about 1% a year. And that is
compounding. Slow numbers, but compounding, and England is the place that gets the compounding
at 1%, starting in the 1620s, and somehow they go crazy, civil war, kill the king, all these
radical ideas, libertarianism comes out of that, which I really like, John Milton, John Locke,
also this brutal conquest of the new world, like very good and very bad coming together,
and I think it should be seen as a set of processes where the very good and the very bad come together,
and we might be in for a dose of that again, now, soon.
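The "slow numbers, but compounding" point can be made concrete with a small sketch (my own illustration, using the rough 1% figure from Greg Clark that Cowen cites; nothing else here is from the conversation):

```python
# Illustrative sketch: what steady 1% annual growth compounds to,
# using the rough figure Cowen cites for England from the 1620s on.
def growth_factor(annual_rate: float, years: int) -> float:
    """Total multiple on initial output after `years` at a steady rate."""
    return (1 + annual_rate) ** years

for years in (50, 100, 200):
    print(f"{years} years at 1%: {growth_factor(0.01, years):.2f}x")
```

A century of 1% growth nearly triples output, and two centuries multiply it severalfold, which is why a rate that looks slow year to year transformed England over time.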
Seems like a simple question, but basically how do you make sure we get the good things and not the crazy Civil War?
You can't make sure.
I mean, you try at the margin to nudge toward the better set of things, but it's possible that all the technical advances that have recently been unleashed now that the great stagnation is over, which of course include AI, will mean another crazy period.
It's quite possible.
I think the chance of that is reasonably high.
What's your most underrated cult?
Most underrated cult.
Progress studies?
I think you called peak EA right before SBF fell.
That's right.
I was at an EA meeting and I said, you know,
hey everyone, this is as good as it gets,
enjoy this moment.
It's all basically gonna fall apart.
You're still gonna have some really significant influence,
but you won't feel like you have continued to exist as a movement.
That's what I said.
And they were shocked, they thought I was insane.
But I think I was mostly right.
What specifically did you see?
What was it, was the exuberance too high?
Did you see SBF's balance sheet?
Like, what did you see?
Well, I was surprised when SBF was insolvent.
I thought it was a high risk venture
that had no regulatory defense
and would end up being worth zero.
But I didn't think he was actually playing funny games
with the money.
I just have a long history of seeing movements
in my lifetime from the 1960s onwards,
including libertarianism,
and there are common patterns that happen to them all.
We're here in Berkeley, my goodness.
Free speech movement, where's free speech in Berkeley today?
Like how'd that work out in the final analysis?
So it's a very common pattern, and just to think,
well, the common pattern's gonna repeat itself,
and then you see some intuitive signs,
and you're just like, yeah, that's gonna happen.
And the private benefits of belonging to EA,
like they were very real in terms of the people
you could hang out with or like the sex you could have,
but they didn't seem that concretely
crystallized to me in institutions the way they are like in Goldman Sachs or legal partnerships.
So that struck me as very fragile and I thought that at the time as well.
Sorry, I'm not sure I understood. What were the intuitive signs?
Well, not seeing like the very clear crystallized permanent incentives to keep on being a part of the
institutions. A bit of excess enthusiasm from some people, even where they might have been
correct in their views, some cult-like tendencies, the rise of it being so rapid,
and that it was this uneasy balance of secular and semi-religious elements
that tends to flip one way or the other or just dissolve.
So I saw all those things, and I just thought,
like the two or three best ideas from this
are going to prove incredibly important still,
and from this day onwards,
I don't give up that belief at all.
But just as a movement, I thought it was going to collapse.
When did we hit peak progress studies?
You know, when Patrick and I wrote the piece
on progress and progress studies,
He and I thought about this, talked about it.
I can't speak for him, but my view at least
was that it would never be such a formal thing
or like controlled or managed or directed
by a small group of people or like trademarked
or it would be people doing things
in a very decentralized way
that would reflect a general change of ethos and vibe.
So I hope it has in many ways like a gentler
but more enduring trajectory.
And I think so far I'm seeing that.
Like I think in a lot of countries, science policy will be much better because of progress
studies.
That's not proven yet.
You see some signs of that.
You wouldn't say it's really flipped.
But in a lot of reforms, you're in an area where, like, no one else has any idea, much less a better idea or a good idea.
And some modestly small number of people with some talent will work on it and get like a third
to half of what they want.
And that will have a huge impact.
And like if that's all it is, I'm thrilled.
And I think it will be more than that.
I asked Patrick yesterday, how do you think about progress
studies differently now that you know AI is a thing that's happening?
Yeah.
What's the answer for you?
I don't think about it very differently.
But again, if you buy my view about like slow takeoff,
why should it be that different?
Well, you have more degrees of freedom.
So if you have more degrees of freedom,
all your choices, decisions, issues, problems are more complex.
So you're in more need of like some kind of guidance.
So all inputs other than the AI, like, rise in
marginal value. And since I'm an input other than the AI, I hope that means I rise in
marginal value, but I need to do different things. So I think of myself over time as less a producer
of content and more like a connector, a people person, developing networks in a way where if there somehow
had been no like Transformers and LLMs, I would have stayed a bit more as a producer of content.
When I was preparing to interview you, I asked Claude to take on your persona. And compared to other
people I tried this with, it actually works really well with you. Well, because I've written a lot on the
internet. Yeah, that's why. This is my immortality, right? That's right. So I've heard you say in the past,
you know, you don't expect to be remembered in the future. At the time, I don't know if you were
considering that because of your volumes of text, you're going to have an especially salient persona
in future models. How does that change your estimation of your intellectual contribution going forward?
I do think about this, and the last book I wrote, you know, it's called GOAT, who's the greatest economist of all time.
I'm happy if humans read it, but mostly I wrote it for the AIs.
I wanted them to know I, like, appreciate them.
And my next book, I'm writing even more for the AIs.
Again, human readers are welcome.
It will be free.
But sort of, oh, who reviews it?
Like, oh, is the TLS going to pick it up?
Or, like, it doesn't matter anymore.
Like the AIs will trawl it and know I've done this,
and that will shape how they see me,
and I hope in a very salient and important way.
And as far as I can tell, no one else is doing this.
No one is, like, writing or recording for the AIs very much.
But if you believe even, like, a modest version of this progress,
like, I'm modest in what I believe relative to you and many of you,
you should be doing this. You're an idiot if you're not writing for the AIs.
They're a big part of your audience, and they're, like, purchasing people.
We'll see, but over time it will accumulate and they're going to hold a lot of crypto.
We're not going to give them bank accounts, at least not at first.
What part of your persona will be least captured by the AIs if they're only going by your writing?
I think I should ask that as a question to you. What's your answer?
I don't think AIs are that funny yet. They're better on humor than many people allege, but I don't use them for humor.
It's interesting that you learn so much about a person when you're interviewing them for a
job, or for you, for Emergent Ventures. You can read their application, but just in the first
10 minutes, their vibe. Three minutes, but yes. Yes. And so whatever's going on there, that's so
informative, the AIs won't have just from the writing. Not at first, but I think, I've heard of projects.
This is secondhand. I'm not sure how true it is, but that interviews are being recorded by companies
that do a lot of interviews, and these will be fed into AIs and coded in particular ways, and then
people, in essence, will be tracked through either their progress in the company or a LinkedIn profile.
And we're going to learn something about those intangibles.
At some rate, I'm not sure how that will go.
But I don't view it as something we can never learn about.
Do you actually have a conscious theory of what's going on when you get on a call with somebody,
and three minutes later you're like, you're not getting the grant?
What happens?
Well, often there's like one question the person can't answer.
So if it's someone, say, applying with a nonprofit idea, plenty of people have good ideas for
nonprofits.
And I see these all the time.
But when you ask them the question, how is it you think about building out your donor base?
It's remarkable to me how many people have no idea how to answer that.
And without that, you don't have anything.
So it depends on the area, but that would be an example of an area where I ask that question pretty quickly,
and a significant percentage can't answer it.
And I'm still willing to say, well, come back to me when you have a good case.
Oddly, none of those people have ever come back to me that I can think of.
But I think over time some will.
And that's like a very concrete thing.
But there's other intangibles, just when you see what the person thinks and talks about too much.
So like if someone wants to get an award only for their immigration status, that to me is a dangerous sign.
Even though at the same time, usually you're looking for people who want to come to the US, whether they can do it or not.
And there's just a lot of different signals you pick up, like people somehow have the wrong
priorities or they're referring to the wrong status markers, and it comes through more than you
would think. If you just had the transcript of the call but you couldn't see the
video, you might say yes in a case where, if you could see the video, you would say no.
What happens in those cases? Having only the transcript would be worth much,
much less, I would say, if that's what you're asking. Yeah, it would be maybe 25% of the value.
And what's going on with the 75%? We don't know, but I think you can become much better at figuring
out the other 75%, partly just with practice.
Yesterday, Patrick was talking about these concentrations of talent
that he sees in the history of science,
with these labs that have six, seven Nobel Prizes.
And he was also talking about, you know,
second employee at Stripe, who was Greg Brockman.
He wasn't visible to other parts of the startup ecosystem
in the same way.
What's your theory of what's going on?
Why are these clusters limited?
What's actually being inherited over and transmitted here?
Well, Patrick was being too modest.
I thought his answer there was quite wrong, but he sort of knows better.
He was able to hire Greg Brockman because he's Patrick.
It's very simple.
He wasn't going to come out and just say that, and he may even, like, deny it a bit to himself.
But if you're Patrick and John, you're going to attract some Greg Brockmans.
And if you're not, it's just way, way harder, because the Greg Brockmans are pretty good at spotting who the Patricks and Johns are.
So, in a way, that's just pushing it back a step, but at least it's answering part of the question.
in a way that Patrick didn't, because he was modest and humble.
It seems like that makes the clusters less valuable then,
because Greg Brockman is just Greg Brockman,
and Greg chose Patrick and John because they're Patrick and John,
and Patrick and John chose Greg because he's Greg.
It wasn't that they made each other great.
It was just like talent sees talent, right?
Well, they make each other much better,
just like Patrick and John made each other much better and still do,
but you're getting back to my favorite human bottlenecks.
Thank you. I'm fully on board with what you're saying.
To get those, like how many Beatles are there?
It's amazing how much stuff doesn't really last.
And it's just super scarce achievement at the very highest levels.
And that's this extreme human bottleneck.
And AI, even a pretty strong AI, remains to be seen how much it helps us with that.
I'm guessing ever since you wrote the Progress Studies article, you got a lot of applications for Emergent Ventures from people who want to do some progress studies thing.
That's, you know, on the margins, do you wish you got fewer of those proposals or more of them?
Do you just wish they were unrelated?
I don't know, you know, today a lot of them have been quite good,
and many of them are people who are here.
There's a danger that as a thing becomes more popular,
you know, at the margin, they become much worse,
and I guess I'm expecting that.
So maybe mentally I'm raising my own bar on those.
And maybe over time I find it more attractive.
If the person is interested in, say, like the industrial revolution,
if they're interested in progress studies, capital P, capital S,
like, over time I'm growing more skeptical of that.
Not that I think there's anything intrinsically bad about it.
Like, I'm at a progress studies conference with you.
But still, when you think about selection and adverse selection,
I think you've got to be very careful and keep on raising the bar there.
And it's still probably good if those people do something in capital P, capital S progress studies,
but it's not necessarily good for Emergent Ventures to just keep on funding the number.
If you buy your own picture of AI, where it increases growth rates by half a percentage point,
what does your portfolio look like?
I can tell you what my portfolio is.
It's a lot of diversified mutual funds with no trading,
basically pretty heavily U.S. weighted
and nothing in it that would surprise you.
Now, my wife works for the SEC,
so we're not allowed to do most things.
Like even to buy most individual stocks,
we may not be allowed to do it.
Certainly not allowed to trade derivatives or short anything.
But if somehow that restriction were not there,
I don't think it would really matter.
So buy and hold, diversify, hold on tight,
and make sure you have some cheap hobbies and are a good cook.
Why aren't you more leveraged, if you think the growth rate's going to go up,
even slightly?
Well, I think I also have this view
that maybe a lot of the equity premium is in the past,
and that people, especially in this part of the world,
are very good at internalizing value,
and it will be held and earned in private markets and by VCs,
rather than, like, public pension funds.
Why give it to them?
I think Silicon Valley has figured this out.
Sand Hill Road has figured it out.
So what one can do with public equities is unclear.
What private deals I can get on with my really tiny sum of wealth,
like I would say is pretty clear.
So I'm left with that.
And like money for me is not what's scarce.
Time is scarce.
And I do have some very cheap hobbies.
And I feel I'm in very good shape in that regard.
That being said, I think you could get a pretty good deal
flow. You would have a portfolio. I don't know. Like you can only focus on so many things.
So if I have like good deal flow in emergent ventures, which I'm not paid to do, like say I had
a billion dollars from whatever, I wouldn't have any better way of spending that billion dollars
than like buying myself a job doing emergent ventures or whatever. So I'm sort of already
where I would be if I could buy the thing for a billion dollars. So I'm just not that focused on it.
And I think it's good that you limit your areas of focus.
And if some people, it's just money, like, I think that's great.
I don't begrudge them that at all.
I think it's socially valuable.
Let's have more of it, bring it on, but it's just not me.
When I started my career, it was really unknown that an economist could really earn anything
at all.
Like there were no tech jobs with billionaires.
Finance was a low-paying field, like when I started choosing a career.
It was not a thing.
There wasn't this fancy Goldman Sachs.
It was a slow, boring thing.
programmers were weird people in basements, like maybe, who knows, you know, that bad stuff.
And then like an economist, you would earn, like back then, maybe $40,000 a year,
like two people, Milton Friedman and Paul Samuelson, had an outside income.
And you would have no expectation that you would ever earn more than that.
And I went into this with all of that, like relative to that, I feel so wealthy.
Just like, oh, you can sell some books or you, like, you can give a talk.
I don't know. I just feel like I am a billionaire now. And if anything, I want to become what I've called an information trillionaire. I'm not going to make that, but I think it's a good aspiration to have. Just collect more information and be an information trillionaire. Like Dana Gioia has that same goal. He and I have talked about this. I think that's a very worthy goal.
Was there a second field that you were considering going into other than economics? It was either economics or philosophy. And I saw back then, this would be
like the late 1970s, it was much harder to get a job as a philosopher, though not impossible,
the way it sort of is now, and they were paid less and just had fewer opportunities. So I thought,
well, I'll do economics. But I think in a way I've always done both. Okay, I really want to go back
to this diffusion thing we're talking about at the beginning with the economic growth. Yeah.
Because I feel like I'm not understanding it. I hear the word diffusion,
I hear the word bottlenecks, but I just, like, don't have anything concrete in my head when I hear that.
What are the people who are thinking about AI missing here
when they just plug in these things into their models?
I'm not sure I'm the one to diagnose.
But I would say when I'm in the Bay Area,
like the people here to me are the smartest people
I've ever met, on average.
Most ambitious, dynamic, and smartest,
like by a clear grand slam compared to New York City or London or anywhere.
That's awesome and I love it.
But I think a side result of that is that people here overvalue intelligence, and their models of the world are built on intelligence
mattering much, much more than it really does.
Now people in Washington don't have that problem.
We have another problem.
And that needs to be corrected too.
But I just think if you could root that out of your minds, it would be pretty easy to glide into
this expert consensus view that tech diffusion is universally pretty slow and that's not
going to change. No one's built a real model to show why it should change other than
sort of hyperventilating blog posts about everything's going to change right away.
The model is that you can have AIs make more AIs, right? That you can have them...
Diminishing returns. Ricardo knew this. Right? He didn't call it AI, but Malthus, Ricardo,
they all talked about this. It was just humans for them. Well, people then would breed
at some pretty rapid rate. There were diminishing returns to that.
You had these other scarce factors.
Classical economics figured that out.
They were too pessimistic, I would say.
But they understood the pessimism intrinsic in diminishing returns
in a way that people in the Bay Area do not,
and it's better for them that they don't know it.
But if you're just trying to inject truth into their veins
rather than ambition, diminishing returns
is a very important idea.
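The classical logic Cowen invokes can be sketched with a toy production function (my own illustration; the Cobb-Douglas form and the numbers are assumptions, not anything stated in the conversation):

```python
# Toy illustration of diminishing returns: hold one factor fixed (land,
# energy, whatever stays scarce) and add more of the variable factor
# (workers for Malthus and Ricardo, or AIs today). Total output keeps
# rising, but each additional unit contributes less.
def output(variable: float, fixed: float = 1.0) -> float:
    # Cobb-Douglas with a fixed factor: Y = F^0.5 * V^0.5
    return (fixed ** 0.5) * (variable ** 0.5)

# Marginal gain from one more unit of the variable factor, at rising scales.
marginal = [output(n + 1) - output(n) for n in (1, 2, 4, 8, 16)]
# The marginal gains shrink even though total output grows without bound.
```

The same structure is why "AIs making more AIs" doesn't mechanically imply explosive growth: whatever factors stay fixed pull the marginal contribution down.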
In what sense was that pessimism correct?
Because we do have seven billion people,
and we have a lot more ideas as a result.
We have a lot more industries.
Yeah, I said they were too pessimistic.
but they understood something about the logic of diffusion,
where if they could see AI today,
I don't think they would be so blown away by it.
Oh, you know, I read Malthus, Ricardo would say,
Malthus and I used to send letters back and forth.
We talked about diminishing returns.
This will be nice, it'll extend the frontier,
but it's not gonna solve all our problems.
One concern you could have about progress in general
is, if you look at the famous Adam Smith example,
you've got that pinmaker,
and the specialization obviously has all
these efficiencies. But the pinmaker is just like he's doing this one thing, whereas if you're in
the ancestral environment, you get to basically negotiate with every single part of
what it takes to keep you alive and every other person in your tribe. Does individuality,
is that like lost with more specialization, more progress? Well, Smith thought it would be. I think
compared to his time, we have much more individuality, most of all in the Bay Area. That's a good thing.
I worry the future with AI that a kind of demoralization will set in in some areas.
I think there'll be full employment pretty much forever.
That doesn't worry me.
But what we will be left doing, what exactly it will be and how happy it will make us.
Again, I don't have pessimistic expectations.
I just see it as a big change.
I don't feel I have a good prediction.
And if you don't have a good prediction, you should be a bit wary and just like, oh, okay, we're going to see.
but, you know, some words of caution are merited.
When you're learning about a new field,
the vibe I get from you when you're doing a podcast
is like you're picking up like the long tail of different,
you talk to interesting people
or you read the book that nobody else would have considered.
How often do you just have to like,
you got to like read the main textbook
versus you can just look at the esoteric thing.
How do you balance that trade off?
Well, I haven't interviewed that many scientists.
Like Ed Boyden would be one,
Richard Prum, the ornithologist from Yale.
Those are very hard preps.
I think those are two excellent episodes, but I'm limited in how many I can do by my own ability to prepare.
I like the most doing historians because the prep is a lot of work, but it's easy, fun work for me.
And I sort of, I know I always learn something.
So now I'm prepping for Stephen Kotkin, who's an expert on Stalin and Soviet Russia.
And that's been a blast.
I've been doing that for like four months, reading dozens of books.
And it's very automatic, where if you try to figure out, like, what Ed Boyden is doing with the light shining into the brain, it's like, oh, my goodness, do I understand this at all? Or am I like the guy who thinks the demand curve slopes upward? So it just means I'm only going to do a smallish number of scientists, and that's a shame. But maybe AI can fill in for us there.
You recommended a book to me, Stalin's Library, which talks about the different things, the different books that Stalin read and the fact that he was kind of a smart, well-read guy.
And the book also mentioned, I think, in the early chapter
is that, look, he never, in all his annotations,
if you look through all his books,
there's never anything that even hints
that he doubted Marxism.
That's right.
There's a lot of other evidence
that that's the correct portrait.
What's going on? Like, a smart guy
who's read all this literature, all these different things,
never even questions Marxism.
What's going on there?
What do you think?
I think the culture he came out of
had a lot of dogmatism to begin with.
And I mean both Leninism,
which is extremely dogmatic,
Lenin was his mentor, like Patrick's thing about the Nobel laureates.
It happens in insidious ways too.
So Lenin is the mentor of Stalin, Soviet culture, communist culture, and then Georgian culture,
which, appealing and fun-loving and wine-drinking and dance-heavy as it is, there's something
about it that's a little, you know, you pound the fist down and you tell people over the table
how things are.
We had all those stacked vertically, and then we got this bad genetic luck of the draw on Stalin,
and it turned out obviously pretty terrible.
And then do you buy Hayek's explanation that the reason he rose to the top is just because
the most atrocious people win in autocracies?
What is that explanation missing?
I think what Hayek said is subtler than that, and I wouldn't say it's Hayek's
explanation.
I would say Hayek pinpointed one factor.
There are quite a few autocracies in the world today where the worst people have not risen
to the top.
UAE would be, I think, the most obvious example.
I've been there.
As far as I can tell, they're doing a great job running the country.
There are things they do that are nasty and unpleasant.
I would be delighted if they could evolve into a democracy.
But the worst people are not running UAE, this I'm quite sure of.
So it's a tendency.
There are other forces, but culture really matters.
Hayek is writing about a very specific place and time.
I would say it really surprised me.
They're these family-based Gulf monarchies with very large, clannish interest groups of thousands of people
that have proven more stable and more meritocratic than I ever would have dreamed, say, in 1980.
And I know I don't understand it, but I just see it in the data.
It's not just UAE.
There's a bunch of countries over there that have outperformed my expectations.
And they all have this broadly common system.
When you go around the world, because I know you go outside the Bay Area and the East Coast as well,
and you talk about progress studies related ideas,
what's the biggest difference in how they're received
versus the audience here?
Well, the audience here is so, so different.
Like you're the outlier place of America.
And then where I normally am outside of Washington, D.C.,
that's like the other outlier place.
And in a way we're opposite outliers.
I think that's healthy for me,
both where I live and that I come here a lot
and that I travel a lot.
But you all are so, like, out there in what you believe, I'm not sure where to start.
You all, you know, you come pretty close to thinking in terms of infinities on the creative side and the destructive side.
And no one in Washington thinks in terms of infinities, they think at the margin.
And, like, overall, I think they're much wiser than the people here.
But I also know if everyone or even more people thought, like the D.C. people, like our world would end.
We wouldn't have growth.
They're terrible.
People in the EU are super wise.
Like you have a meal with like some sort of French person who works in Brussels.
It's very impressive.
They're cultured.
They have wonderful taste.
They understand all these different countries.
They know something about Chinese porcelain.
And if like you lived in a world ruled by them, the growth rate would be negative 1%.
So there's some way in which all these things have to balance.
I think U.S. has done a marvelous job at that.
and we need to preserve that.
What I see happening, UK used to do a great job at it.
UK somehow the balance is out of whack,
and you have too many non-growth-oriented people
in the cultural mix.
The way you described this French person
you were having dinner with.
Which I've had, this is, we have dinners, yeah.
And the food is good, too.
I don't know, it kind of reminds me of you
in the sense that you're also well-cultured
and you know all these different esoteric things.
I don't know.
What's the difference between you
and what's like the biggest difference between you
and these French people you have dinner with?
I don't think I'm well-cultured
would be one difference.
There are many differences.
First, I'm an American.
I'm a regional thinker.
I'm from New Jersey.
So I'm essentially a barbarian,
not a cultured person.
I have a veneer of culture
that comes from having collected a lot of information.
So I'll know more about culture
than a lot of people.
And that can be mistaken for being well-cultured,
but it's really quite different.
It's like a barbarian's approach to culture.
It's like a very autistic approach
to being cultured and should be seen as such.
So I feel the French person is very foreign from me,
and there's something about America
they might find strange or repellent,
and I'm just so used to it.
I see intellectually the many areas where we fall flat or are destructive,
but it doesn't bother me that much because I'm so used to it.
What is the most misunderstood about autism?
Well, if you look at the formal definition,
it's all about deficits that people have, right?
Now, if you define it that way, like no one here is autistic.
If you define it some other way, which maybe we haven't pinned down yet, like a third of you here are autistic.
I don't insist on owning the definition.
I think it's a bad word.
It's like libertarian.
I would gladly give it away.
But there is like some coherent definition where a third of you here probably would qualify.
And this other definition where none of you would, and it's like kids in mental homes banging their head against the wall.
So, I don't know, seems that whole issue needs this huge reboot.
These DC people, you know, one frustration that tech people have is that they have very little influence, it seems, in Washington compared to how big that industry is.
And the industries that are much smaller will have much greater sway in Washington.
Why is tech so bad at having influence in Washington?
Well, I think you're getting a lot more influence than maybe you realize quickly through national security reasons.
So, the feds have not stopped the development of AI, whatever you think they should or should
not do.
It's basically proceeded.
And national security as a lobby, they don't care about tech per se, but it has meant that on
a whole bunch of things in the future, you will get your way a bit more than you might be expecting.
But a key problem you have is so much of it is in one area, and it's also an area where there's
a dominant political party.
Even within that political party, there's in many parts of California a dominant faction.
And you compare yourself to like the community bankers who are in like so many American counties have connections to every single person in the House of Representatives.
Your issues in a way are not very partisan.
The distortions you cause through your privileges are invisible to America.
It's not like, you know, Facebook, where someone like Jonathan Haidt has written some bestselling book complaining about what it is you do.
There's not a bestselling book complaining about the community banks.
And they are, like, ruthless and powerful and get their way.
And I'm not going to tangle with them.
And you all here are so far from that,
in part because you're dynamic and you're clustered.
Final question.
So I think based on yesterday's session,
it seems like Patrick's main misgiving with progress
is that you look at the young girl cohort.
There's something that's not going great with them.
And it seems, you know, you would hope that over time progress
means that people are getting better and better over time
if you buy his view of what's happening with the young people.
What's your main misgiving about progress?
The thing where you're like, if I look at the time series data,
I'm not sure I like where this trend is going.
Well, our main concern always should be war.
And I don't have any grand theory of what causes war
or if such a theory ever is possible.
But I do observe in history that when new technologies come along,
they are turned into instruments of war.
And some terrible things happen.
You saw this in 17th century England.
You saw this with electricity and the machine gun. Nuclear weapons is a story in process. And I'm not sure that's ever going away.
So my main concern with progress is that progress and war interact. And it can be in good ways,
like the world, a la Steven Pinker, has had relative peace. That's fraying at the edges in
the data. The numbers are now moving the wrong way, but it's still way better than most
past time periods. And we'll have to see where that goes, but there might be a ratchet effect
where wars become more destructive, and even if they're more rare, when they come, each one
a real doozy, and whether we really are or ever can be ready for that, I'm just not sure.
And thank you very much, Dwarkesh.
This will be the second session.
We have to end on a pessimistic note, but...
No, the optimistic note is that we're here.
Human agency matters.
If we were all sitting around in year 1000, we never could have imagined the world being anything
like this, even a much poorer country.
And it's up to us to take this extraordinary and valuable heritage and do some more with it.
And that's why we're here.
So I say let's give it a go.
Great note to end on.
Thanks, Tyler.
I'm very grateful to the Roots of Progress Institute
for hosting this Progress Conference
at which I got a chance to chat with Tyler
and ask him a few fun questions.
Jason, Heike, and the whole team
did a wonderful job organizing it, and it was a blast.
And Freethink Media did a great job with the videography,
as you can see.
If you enjoy this episode, please subscribe,
please like, please share it,
and send it to your friends
who you think might enjoy it.
And otherwise, I guess I'll see you on the next one.
All right, cheers.
