The Joe Walker Podcast - Francis Fukuyama — AGI and the Recommencement of History
Episode Date: July 31, 2025
Francis Fukuyama is a Stanford political scientist and the author of (among many other works) The End of History and the Last Man—arguably the most influential work in political science of the past half-century. If "History" is driven by technology, how does Fukuyama now view biotech and AI—and their potential to usher in a new, post-human history? These are difficult questions, but I wanted to ask Frank about topics that are both important and (at least for AI) on which he has spoken little until now. We also get a sneak peek at his forthcoming book and discuss his ideas on bureaucracies, delegation, and state capacity.
See omnystudio.com/listener for privacy information.
Transcript
Well, today it's my huge honor to be speaking with Francis Fukuyama.
He truly needs no introduction, so I won't, in fact, introduce him.
Frank, welcome to the podcast.
Well, thanks very much, Joe.
So I've been thinking about what are some topics I can discuss with you that are both
important and which you haven't written or spoken much about.
And that's been a challenge because you're so prolific.
But it seems to me that the topic I'd most like to discuss is this question of
artificial general intelligence and the recommencement of history.
And in doing that, I'd mostly like to draw on one of your lesser-known books,
Our Posthuman Future.
But before we get to AI, some questions on biotech.
So Our Posthuman Future was obviously mostly concerned with biotech,
especially genetic engineering.
And as I understand it, the reason for your concern was that
if biotech can alter the substrate of our human nature,
then that will have downstream consequences for the political order and for liberal democracy.
Because if liberal democracy is about a system that completely satisfies human nature,
if we change that human nature, then liberal democracy could be undermined.
And we might be ushered into a post-human history.
So the first question I wanted to ask you was it's been about 25 years since you wrote this book.
I'm just curious whether biotech generally and genetic engineering specifically have played out in the ways that you expected they would.
Well, in the 1990s, I was leading a study group in Washington on the impact of new technologies on politics. And we looked both at information technology and biotechnology. And at that time, you know, the internet had only just been privatized and social media was still
15 years in the future. And I thought at that point that biotech was likely to be more
consequential. I think that that may be true in the long run, but certainly the internet has
turned out to be a much more disruptive force than I, you know, imagined at the time. But I think
that both of them are going to provide, you know, fairly large challenges. I think the one
coming from biotech in a way is more fundamental because really if you can alter human nature
as opposed to just altering human behavior, I think that that's going to affect, you know,
things like our understanding of rights, human rights, because it may put into, you know,
contestation what is a human being actually and what's the boundary between humans and, you know,
non-human beings.
So we've had only, I think, three CRISPR babies so far.
There was Lulu and Nana in 2018 and then Amy in 2019, all in China.
Were you expecting, I guess, more genetically engineered humans by this point?
I don't know that I was expecting more.
I do think, though, that it's going to happen.
And I think the barriers to doing this are still fairly high, this kind of human experimentation.
But it just seems to me such an obvious, you know, path for somebody with a lot of money and ambition.
And you already have, you know, a lot of Silicon Valley tech billionaires pouring a lot of money into life extension, which is, you know, different from genetic engineering.
But I think it's also going to have huge consequences for human societies.
Right.
I had Laura Deming on the podcast last week who founded the Longevity Fund,
and she's currently working on her own startup for cryopreservation.
Yeah, you know, it's funny.
This is one area where I part company with almost everybody in this.
Because I think that life extension is a bad idea.
You know, it's something I think is personally desirable; nobody wants to die. But socially it's going to be a disaster if people start routinely living
to very advanced ages. Because I think, you know, there's actually a good evolutionary reason
why people die. If you didn't have generational turnover, you'd never have social change. And I think
that's really kind of the future we're facing. Right. Yeah, if I remember correctly, that's the main
kind of negative externality that you highlight in the book as far as longevity science is concerned.
It's that our childhood experiences shape our worldview in a very durable sense.
And so you have these generational effects.
And if you have a certain generation living much longer,
kind of like a, I guess like a Joe Biden on steroids, so to speak.
Right.
I mean, literally, I guess.
Then they will entrench their particular worldview and society will become less dynamic.
Yeah, I mean, we've already had this with individual leaders like Castro or Francisco Franco, who lived way past what should have been the end of their political lives.
But if you have a whole generation of people that simply don't go away and don't get replaced,
I just think that it's going to be very hard for, you know, human society to advance
because, you know, we'll joke about economists that the field advances one funeral at a time.
I mean, sometimes you really do need an entire generation to be replaced by another one,
before you, you know, open yourself up to new social, political possibilities.
If I wanted to push back, I guess you're focusing on one specific negative possibility,
but if you did a more holistic kind of cost-benefit analysis,
maybe it would come out in favor of longevity.
Well, it's hard to see what that is.
I guess you could say that, you know, as people get older,
they accumulate human capital and, you know, it's better not to have to start over.
But again, I just think that a lot of that human capital becomes rigid and, you know, out of touch with actually the changing environment.
I certainly feel that things around me are very different from, you know, when I was young, and I keep wondering whether a lot of my attitudes are simply reflective of the period I was born in.
So I really do think that, but, you know, the trouble is that nobody wants to die.
And so there's no political support for, you know, passing away, you know, earlier rather than later.
Right.
So which current biotech do you think is most likely to drive transhumanism?
Is it CRISPR-Cas9?
Well, you know, the thing is that with that type of heritable gene editing, you actually affect not just the individual in question, but all of that individual's descendants.
Right.
And so it's, you know, considerably more consequential than something that simply affects the behavior of a living individual and that will die with that individual.
And that's what I think, you know, got me started on thinking about this because I really do believe that human rights are ultimately embedded in human nature.
And if you can change that human nature, you're going to change the nature of rights.
Most critiques of The End of History focused on other perceived weaknesses, but, and correct me if I'm wrong, I think you always viewed the true weakness of the end of history thesis as modern natural science's ability to alter our fundamental human nature. Because, as we said at the outset of this chat, if you alter our fundamental human nature, then whatever the highest and best political order looks like could be different to liberal democracy. I'm curious, do you view liberal democracy as an end in itself? Is it intrinsically good? Well, okay, let me think. We need to unpack a few of those things. I think that there are many things there. I didn't say it was just genetic engineering; I think that technology in general has a big effect on the viability of different
political systems. So in the 19th century, with the rise of industrialism, it tended to concentrate
power because you needed large-scale industries, mass production, and that tended to fortify
more centralized government. And the thought, you know, behind the internet originally was
that it would spread information and therefore power out and therefore would be democratizing.
In a way it was all too successful at that.
And so what it's done is actually destroyed the basis of common, you know, empirical knowledge.
I think that's one of the big problems that democracies are facing,
that there are no authoritative sources just of factual information.
So that's not genetic engineering.
That's, you know, that's a consequence of technology that I think a lot of people failed to recognize.
And in fact, it's hard to imagine how democracy really works if people simply don't agree on certain empirical facts and hold them in common in the society that they're living in.
So I think there's, you know, so my statement was not that you couldn't have an end of history if you had genetic engineering.
My argument was that technology in general was the driver of history.
And unless you imagine some kind of technological stasis, you wouldn't have a stasis in political forms.
And I think, you know, we're already seeing that, you know, with the developments in information technology.
The technology that could continue driving history forward didn't have to be technology that alters human nature.
No, no.
I see.
No, I mean, all forms of technology, you know, have big social consequences.
So I, you know, so the statement was that you can't really have an end of history
unless you have an end of technological development.
Yeah.
Okay, so I have some questions about natural rights.
So in Our Posthuman Future, you argue that human dignity is grounded in something you call
Factor X, which is this kind of emergent, complex bundle of uniquely human traits.
On that basis, would you have denied rights to, say, Denisovans or Neanderthals?
Well, that poses a real, you know, problem because they wouldn't be, I mean, I think they would
generally be recognized to be not human beings.
And it depends on which of those aspects of, you know, factor X you take the most seriously in terms of
you know, for example, would you allow one of these proto-human humanoids to vote?
You know, you may say that we understand that they feel pain, they feel emotions, you know,
they, if you rip their babies from the mother's arms, it's, you know, it's a terrible tragedy.
And so you want to protect their rights in that respect.
But do they have the intelligence and the capability of actually making political choices
of the sort that we expect a democratic population to make?
You know, we don't allow adolescents or children to vote
because we feel that their mental capabilities
are really not sufficiently developed.
And if you have a proto-human race
that basically doesn't develop past the age of, you know, seven,
I think that, you know, you could make a very strong argument
that they shouldn't have the full set of rights
that, you know, that human beings have.
So maybe they have something like factor X minus N or factor Y or whatever you want to call it.
Yeah.
And so that would imply that it's possible to have more than one set of natural rights at the same time.
Well, that's the problem I think that I saw with genetic engineering.
You know, Aldous Huxley talked about this already in Brave New World.
You had alphas and betas and then the gammas at the bottom.
And they had been deliberately engineered basically to be slaves, you know,
that they didn't have the full set of human capabilities.
And therefore people felt free to exploit them.
And I think that, you know, you could imagine getting there in a number of different ways.
I mean, it could be that you deliberately engineer a kind of subhuman race.
I think that's not that likely.
What's more likely to happen is that elites will start separating themselves,
not just in terms of social status and background and education, but also genetically.
It's actually interesting.
If you look back historically, there were actually biological differences, or heritable biological differences, between social classes; you know, poor people, because of bad nutrition in the Middle Ages, were shorter and less, you know, mentally developed than aristocrats were.
And so that, you know, in a sense, we've already experienced, you know, some version of that.
And I think the simple physical differences between aristocrats and common people reinforced, you know, the belief in the need for political class differences.
And if you could actually get to that same result, you know, through biotechnology, I think you'd also have a call for different classes of rights.
Right.
So to dwell on this a little longer, let me give you my understanding and then you can tell me whether I've got it correctly.
So my understanding is that you weren't concerned so much with the concept of a post-human per se as much as kind of like an uneven transhumanist transitional period.
So if you could flip a switch and upgrade every human in the world into the same kind of post-human, that would be less bad than a kind of transhumanist rollout.
There are a lot of different dangers, I think, mixed up in this.
And I would say that probably one of the most powerful ones is simply unanticipated consequences,
you know, that the current human emotional makeup is the result of, you know,
hundreds of thousands of years of evolutionary experience.
And we have the kinds of characteristics and faculties we do because, you know,
that's proved to be a kind of winning combination in terms of the survival of, you know, the human species. And if you deliberately try to manage that process,
it just seems to me very likely that you're going to get consequences that no one ever thought of.
And dealing with those, you know, is then going to be very, very difficult. So that's one
category of problem. Another is substantively, you know, what would you want human beings to do?
You know, live longer, be smarter. I mean, people would probably pick intelligence as the first category that they'd want to monkey with.
But again, that's going to have consequences, you know, as we're saying, for things like rights and, you know, political participation.
So, you know, there are many ways in which this could end up affecting human societies.
I actually think that we're already in something of a crisis in terms of life extension.
You know, when I was on the bioethics council, we spent our last year talking about essentially gerontology, you know, that past, you know, at some point in your mid-80s, roughly half of all people have some chronic degenerative disease.
And it means that, you know, a lot of your population is actually going to live, you know, a good 10, 20 years beyond the point that they are fully, you know, capable human beings.
And that's an economic cost that we have.
And, I mean, we're now grappling with it, but, you know, it's likely to get bigger as time goes on.
And, you know, I guess the way I thought about this was that ideally what you would like in a human lifespan,
assuming we do die, is that all of your faculties
would kind of shut down at the same time.
And I think the likelihood of actually having life extension in which that happens is very low; certain faculties are going to shut down, you know, well before other faculties.
And so you'll have a significant portion of the population
living with some form of disability.
And, you know, we don't really like to think about that.
You know, it gets into these questions of, you know, rights,
but even as we speak, you know, somebody with severe Alzheimer's doesn't have the rights of a, you know, a younger adult that has all their faculties.
They can't drive.
They can't, you know, kind of make independent decisions in the way that a, you know, a fully formed adult would.
And this is all consequences of, you know, the life extension that we've already achieved as a result of our existing biomedical technology.
And so this is why it's not a single thing I worry about. I worry actually about a lot of different consequences that we really haven't thought
through. Yeah, a few different threads to pick up on there. I guess the longevity, the worst case
scenario, is that people's cognitive faculties tend to shut down before their other bodily faculties.
I guess it's just not obvious to me that that's the direction in which longevity science is driving.
I don't know enough about it. Well, it's already driven to that point. So the question is, could you
reverse Alzheimer's or, you know, Parkinson's or any of these degenerative diseases, I suspect
that will eventually happen. But there could be other things that start shutting down, you know,
that we're not even aware of, you know, so. Yeah. On the kind of like fiscal consequences of aging.
Oh, yeah. Well, those are, I mean, we're already in a big social security crisis. Yeah. But maybe we'll be
getting AI just in time to kind of rescue us from those. Yeah, well, maybe. Maybe. Maybe.
Yeah, yeah. We'll come to that. Briefly back to genetic engineering, do you view assortative mating as being on a continuum with genetic engineering or qualitatively different?
Well, it's qualitatively different in that the agency is exercised in different ways. So assortative mating is simply done because, you know, you meet a partner that you really like, and, you know, because you're from similar social backgrounds and so forth, you end up marrying them and having children, whereas genetic engineering is under much more
direct control and can be used deliberately for social purposes. Like if I graduate from Stanford and
marry another Stanford graduate, I'm not deliberately thinking to myself, we're trying to create a race of super smart, you know, tech entrepreneurs. You're just kind of following your instincts. But I think the
problem with genetic engineering is that that can be done deliberately, you know, with a, you know,
a clear social purpose in mind.
If transhumanism does continue to progress and we do enter a world in which there are different sets of overlapping but sometimes competing natural rights, how do you adjudicate disagreements between those sets of rights?
Do you then need to kind of fall back
to a utilitarian framework,
or how do you think about that?
I think that it's, you know, hard to say
because it would depend exactly on how,
you know, these different categories
of human-like creatures, you know, turned out.
I mean, it also depends really on what you mean by utilitarian.
I mean, I think that the, you know, the main charge against utilitarianism is that it doesn't actually take the issue of human agency and human dignity depending on, you know, human moral agency seriously as a, you know, as a basis for rights and for, you know, defining who a human being is.
It simply is a kind of calculus of pain and pleasure.
and I think that if you actually did develop human beings with different moral capacities,
you know, you would rethink rights.
I mean, just to take another possible future scenario, you know, one thing that seems to me would be a target of genetic engineering is something like compliance: you know, all societies want human beings to be more compliant and follow rules and not cause trouble.
But I suspect that for evolutionary reasons,
there are good reasons why people take risks
and don't want to follow rules
because otherwise you just live in, you know,
a regimented society, you know, with no personal freedom
and therefore no innovation, no, you know, risk-taking and so forth.
And so do you really want to breed, you know,
a willingness to take risks out of the population
and replace that with a, you know, a tendency to comply with rules and authority?
You know, and so it's that kind of thing that worries me, that previously we had lots of ways of trying to make people compliant.
You know, we put them in labor camps and we gave them agitprop and, you know, tried to educate them in certain ways.
That really, in the end, didn't work because I think human nature itself resisted these kinds of attempts to shape behavior, but, you know, maybe in the future we'll have much more powerful tools.
The other phase that we haven't mentioned yet is neuropharmacology because you can produce behavior
change, you know, really directly by using drugs. And that's something that, you know,
we're kind of in the midst of a crisis over right now. It's not heritable, so your children don't necessarily inherit those characteristics, but it also is a way of, you know,
potentially making people more compliant or conforming with, you know, certain social rules
that certain people prefer. And again, I think that that politically can be very problematic.
If we do have these different creatures inhabiting the earth with different sets of natural
rights, are there any obvious ways in which liberal democracy becomes less suitable as the political order for accommodating those different rights?
Well, yeah, obviously.
I mean, liberalism is based, well, both liberalism and democracy are based on a premise of human equality.
And obviously, if people accept the fact that they're different categories of, you know, human beings, you're not going to have that.
Okay.
That makes sense.
All right.
Some questions about artificial intelligence.
So artificial intelligences have the potential to become post-humans.
They might be made of silicon rather than carbon,
but in a cultural sense, they'll be our descendants.
When Our Posthuman Future was published in 2002, we were in an AI winter, and the prospect of human-level AI still seemed like pure science fiction.
But since the book was published,
the AI spring has well and truly arrived,
and it's now strikingly plausible
that we could have artificial general intelligence
by the end of the decade.
Before I ask you some different questions about what this could mean,
firstly, I'm just curious how you're generally thinking about the concept of
artificial general intelligence, artificial superintelligence. Are these coherent ideas to you? Do you think they're likely to arrive soon?
Just generally, how are you thinking about these questions?
Well, so the first thought is that I don't like speculating about what the future is going to look like, because, you know, my analogy is that it's sort of like speculating about what the consequences of electricity will be, and asking Thomas Edison that.
Right.
Right.
What would you have foreseen about all the uses of electricity in the next hundred years?
Right.
You know, probably almost zero.
Yeah.
And I think that, you know, the one thing I'm convinced of is that unlike blockchain or Bitcoin or crypto, which I think is a kind of useless technology, you know, AI, a general purpose AI, is really, really big, and it is going to have huge consequences.
It's just very hard to know at this point exactly what, you know, direction that's going to move us in.
So that's my first observation.
And so that probably means that I do think that, you know, the speed is going to be great.
I mean, that's what everybody around here seems to think, that the change
and the capabilities are going to develop very rapidly.
And that's usually not good because social institutions in the past have always adapted to new technology,
but there's always this lag.
And I think that this is going to be an even bigger lag because the technology is going to move that much more quickly.
So if we do get to a post-human future, do you think that will be more likely to be brought about by biotech or by artificial intelligence?
You know, it's hard to know. And it could be the combination of the two. I think that, you know, there's going to probably be this gradual merger of computers and human brains, you know, that are going to operate in, you know, rather similar ways. So, yeah. But again, this is one of those areas I don't want to speculate too much about.
Yeah. No, I understand. One of my worries going into this interview was that I would be inviting you to speculate too much. And as much as you dislike that kind of pointless speculation, I'm very sympathetic to you on that. So I'm going to try and ask more, I guess, specific questions that maybe rely on conditional predictions.
Oh, yeah. That's fine. So let me see. So presumably
it would take a lot for you to be willing to grant Factor X to artificial intelligences, right?
If Factor X is a complex, emergent bundle of uniquely human traits.
I mean, presumably, you can tell me, but maybe the most important of those traits is something
like consciousness, but it would take a lot before you were willing to say that an AI had Factor X.
Yeah, well, Factor X would consist of a lot of, it's a bundle of different characteristics which point in somewhat different directions.
So, for example, you know, I have a dog. A lot of people have dogs.
I suspect that dogs have some form of consciousness. You know, they imagine things. My dog dreams all the time.
And so obviously she's living in this mental world, you know, just inside her own brain.
And I think the reason that people like dogs as pets is that they're so obviously emotional.
And they have very human-like emotions.
They make eye contact with you.
They're happy to see you.
You know, they have preferences.
They, you know, get angry at certain things.
I mean, all of this I don't think is simply anthropomorphism, I suspect.
You know, that's the other thing that we haven't discussed. We've been talking about, could you breed humans that are less than a full human being? But the other thing is, you know, with animals, are we going to realize that they're actually much closer to human beings than we recognize?
Right.
There is an animal rights movement that I think doesn't have a clear philosophical basis,
but, you know, I think what we may come to understand is that actually many of those parts
of Factor X that we thought were unique to human beings actually are not,
and that, you know, there are many animals that actually, you know, come close to that.
And so, you know, what I think about my dog all the time, and my wife is firmly of this opinion, is that they're sort of like a three or four-year-old, you know.
They have all the emotions and kind of emotional intelligence of a three or four-year-old,
but you wouldn't want them to vote.
They can't speak; you know, kind of basic human uses of intelligence like language are beyond, you know, a dog. But they also can suffer and they probably feel things
and have some degree of consciousness. And so, you know, that's one reason why people really don't
want to eat dogs or, you know, use them in a completely utilitarian way because we do attribute
certain of those human factor X characteristics. And I think that you're probably going to have
more choices like that, you know, of creatures that have some degree of human characteristics.
I think, you know, one of the early ideas in artificial intelligence was the Turing test.
I've always thought this is a ridiculous test.
I mean, basically it says that if the external behavior of, you know, an AI is not distinguishable
from that of a human being, then they're basically a human being.
And I never understood why anyone thinks this is, you know, a correct way to do it.
If that's the case, we've already got artificial human beings.
Right.
Some of my chatbots, you know, in many respects are not distinguishable from a human interlocutor.
You know, and I think that what, you know, what most people would think the AI is missing is something like consciousness and, you know, the whole emotional suite of reactions that human beings have, right? So the chatbot can replicate emotions.
You know, they can say thank you or please or don't do that, you know, but you don't get the
feeling that that's based on an actual, you know, emotional perception. And this is, this is what
kind of annoyed me about computer scientists that were in this field, you know, people like Marvin Minsky,
that they really do believe that the human brain is just a wet computer
and that when the computer gets to be the same scale as a human brain,
it's going to develop consciousness and emotions and all this stuff.
And that seems to me one of the biggest unproven, you know, assumptions there is.
We don't know what the origin of consciousness is.
Right. Yeah.
I guess another way to approach this problem, in terms of the sphere of politics, is just to say that even if it is kind of like John Searle's Chinese room and there's actually no subjective experience happening in the AI, the AI is able to convince people that it is
conscious. On some level that's all that really matters
to politics. I mean like one kind of straw in the wind a couple of years ago
we had that Google engineer Blake Lemoine who was convinced
that one of Google's models was conscious and that was
a very early model, but you can imagine
a few years in the future when these models are much
better and much more persuasive, that
even if they aren't truly conscious,
they still might be able to demand and then
successfully obtain political
rights. Well,
yeah, maybe,
maybe.
Would AIs need to be thymotic before you were willing to grant
them liberal rights? Well, I guess
it depends what you mean by that. Like,
can they feel anger?
You know, I would say that,
you could certainly program them in such a way that they could replicate angry behavior,
but that's not the same as actually saying that they're feeling anger
and that that's what's motivating them to act in certain ways.
So again, it's that Turing test problem that you actually don't know what's going on
on the inside of these machines, even though the behavior is really indistinguishable
from that of a real human being.
Yeah.
You know, a lot of my thinking about this was actually shaped by a friend of mine who wrote this book, Nonzero.
Robert Wright.
In that book, he has this very interesting discussion about consciousness and what it means to be a human being.
And he said that, you know, there's a philosophical question that nobody has really answered, which is, why do we have subjective feelings
at all? You know, he makes this point, for example, why do we feel pain and fear pain? You could
design a robot, you know, so you put your hand over an open flame, and it hurts and you withdraw your
hand. But you can program a robot to do exactly that, right? You have a heat sensor, and the heat sensor
says, oh, this is a temperature that's too high for my hand to survive, so I'm going to pull
the hand away, without actually having to have this internal emotional state of pain
that makes you, you know, draw away. And, you know, his argument was that it's not clear
why those, you know, those subjective emotional states exist. Again, if all you're interested in is
the external behavior of the being, they don't have to.
You know, you can program creatures that will respond to all sorts of things,
you know, as if they had these internal states.
And I think what's kind of crucial for believing that an AI is actually, you know, a human being is some awareness of, you know, the fact that they actually have this kind of internal subjective feeling.
And I have no idea how you'd know that.
I'm certain that you can get to the point where they can pretend to have those, but, you know, whether they actually would or not is, I think, still an open question. If you had a society of non-thymotic, kind of Spock-like AIs, as a first
approximation, what would the best political order for them look like? Would it be something
like market-oriented authoritarianism? Yeah, that's the trouble with a lot of these
tech billionaires, like, I mean, you know, this is a characteristic of a certain kind of
intelligence that a lot of them have, a lot of mathematicians and people that are very good
at a certain kind of reasoning have that they feel that, you know, that's the most important
human characteristic. You know, I mean, all these guys, Peter Thiel and Mark Andreessen and, you know,
Elon Musk, I mean, they're all edging towards this belief in a kind of technocratic aristocracy that
there's just certain human beings that are smarter and, you know, better at doing things than
other human beings, that they should have kind of some intrinsic right to rule other people.
And so some of them have actually become overtly anti-democratic, you know, that you really
ought to delegate decision-making to this kind of superior class of individuals.
And I think that that is not good for democracy as we understand it, you know, today.
But if the society was just composed of synthetic AIs and they were non-thymotic, so we're taking that as an assumption.
That's never going to happen.
So, okay, something more realistic then.
So imagine the next few decades unfold and AI progress kind of continues, the scaling laws continue to hold, LLM progress continues, we get better and better models. Those models start to become agentic. And there are now sort of millions of human Blake Lemoines in the world who are convinced that these artificial intelligences are conscious and that they do deserve political rights.
In that kind of scenario, are there any general intuitions you have, or predictions that you're comfortable making, about what the political order starts to look like?
No, I'm not comfortable making any of those predictions. I just think it's so hard to
imagine, you know, I mean, do you think we should even be thinking about this or is it kind of
pointless? Well, no, you should think about it. I think that the big issue is going to be one of
power, right? That are you actually going to delegate real power to these machines to
actually make decisions that have big consequences for living human beings?
We already delegate, you know, decision-making power to a lot of computers, you know, in terms of processing information and, you know, telling us, you know, what's going on, having sensors that feedback information to us.
But, you know, are you actually going to delegate to them a power to make, like, life and death decisions, you know, that will directly affect other human beings?
I suspect we probably will and we'll get there at some point.
But that, I think, you know, is going to pose a much sharper problem for, you know, for a society.
Right.
Say we had a superintelligence today and it wasn't public knowledge yet.
I don't know.
Sam Altman gives you, Frank Fukuyama, a kind of preview into OpenAI's new superintelligence. And you ask it whether it could come up with a better political order than liberal democracy for today's world. How likely do you think it is that it would be able to do that?
Well, I just don't think that it would be, you know, likely to get that right.
It might iterate enough that over time it could work its way towards, you know, something that
would help.
But, you know, the thing is that, and I think this is a very common mistake that mathematically
minded people have about the nature of intelligence. Political intelligence is very different
from mathematical intelligence because it's completely contextual. You know, to really be intelligent
about politics and the way things are going to work out in the political world, you have to
have a lot of knowledge about the environment. And, you know, this is something I teach my students.
We basically teach, you know, comparative politics: things that are doable in China are not doable in India.
And in fact, they may be doable in certain parts of India, but not in other parts.
You know, some states may be able to get away with certain things and others not.
And how it affects different classes of people, how it's affected by traditions and culture
and this sort of thing, is, you know, all part of what political intelligence has to draw on.
And so, you know, it also gets down
to this lived experience. I mean, I think lived experience is used wrongly in many cases to say that
there are certain experiences that are so unique that really if you haven't actually experienced it,
you don't have a right to even talk about it. But I do think that, you know, the best political
leaders are ones that have certain lived experiences that allow them to empathize with people
or understand pitfalls in, you know, the way that people are thinking or acting. And for a computer
to actually extract that from its environment, it seems to me it would be very, very difficult
to give the proper weightings to, you know,
all of these experiences and then put them together
in a way that would actually produce, you know,
a certain order.
And then the other problem is that nobody's going to want to give up power.
So supposing the computer comes back and says, you know,
well, actually, I think you ought to delegate power to, you know,
smart machines or smart oligarchs, you know.
How are people going to take that?
So it's not possible to kind of reason your way to a political order a priori?
No, I don't.
No, I mean, look, I actually believe that evolution is the way that most things, you know, came about,
is that, you know, you have a lot of trial and error.
Certain things work and other things don't.
And that's how we got to be human beings the way we are now.
And I think that's also the way any future political system is going to evolve.
and there might be a certain kind of logic to the mechanisms,
but you can't really predict how they'll unfold.
Right.
So there's an interesting passage in Our Posthuman Future
where you talk about how Asian cultures might be more permissive
of biotech developments because there are less inhibitions on biotech
in most Asian cultures.
The reason for that is that many Asian cultures lack a kind of transcendental religious
tradition like Christianity.
And so there isn't a dichotomy between humans and non-humans. There's more of a continuum. We see many examples of this
in the kind of like Chinese laws in the past around organ harvesting of prisoners, eugenics,
even in the example of...
Abortion is much more common, right? Even infanticide in some cases. Yeah, I think...
The fact that the three CRISPR babies emerged from China. Yeah.
Yeah, I mean, most Asian cultures don't really have anything like Factor X, a concept that there's some core set of human characteristics
that sharply distinguishes human from non-human.
That has some good consequences.
Well, you know, in both Taoism and Shinto, for example,
you know, they have a belief that spirits inhabit all sorts of things.
They inhabit desks and chairs and, you know, temples and, you know, computer chips, you know.
So the spiritual world really extends to basically all
material objects in the world. And it means also that, you know, in those cultures, you actually
have more respect for the non-human world, or it's less obviously there to be exploited
than it would be in a Judeo-Christian culture where, you know, there's a special creation of man and a sharp distinction between human and non-human. So that's why I think there are just going to be less inhibitions to this kind of, you know, biotech in Asia than, you know, in the West.
Yeah.
Does that also imply that if we do get powerful, agentic AIs, Asian cultures are more likely
to be the first, or Asian countries are more likely to be the first countries to grant them
some form of rights?
Well, maybe.
It's possible, you know, who knows?
Have you seen the 2023 film, The Creator?
No.
Okay.
It's a really good film. It's probably the most compelling depiction of this scenario. So I think it's set in the year 2055. A superintelligent AI has detonated a nuclear weapon over Los Angeles, and the Western world rallies together to annihilate and exterminate the artificial intelligence. And then this bloc called New Asia, which is basically composed of all the Asian countries, kind of offers safe haven to the AIs.
Anyway, I realize that there was this kind of connection
to your point about the
human non-human continuum in Asian cultures.
It's a good film.
Okay, a couple of questions about AI, China, and the end of history.
And I guess now we can kind of bring our horizons a little closer in
and just think about the next kind of maybe five to ten years.
So we think about large language models in the way they're being used currently.
In terms of how authoritarian regimes might make use of them or be affected by them,
on the one hand, you have concerns that AI will help authoritarian regimes entrench their power
by providing them tools of propaganda or surveillance.
There's this other view that's been gaining currency recently.
And I think our mutual friend Tyler Cowen has been writing about this,
which is that LLMs are imbued with Western,
but specifically American ways of thinking in very subtle ways.
And that this is going to represent a vector into China and a kind of victory of American soft power.
Because even the best Chinese models... apparently DeepSeek is largely based on OpenAI's models.
I'm curious how you think about this
and whether it's more likely that AI will advance or set back the cause of liberal democracy in China.
Okay, well, that's precisely the kind of question
that I can't answer.
I think that...
Is there a way we could break it into smaller chunks, then?
Well, I mean, look, these AIs are trained on certain, you know, bodies of writing.
And presumably they will pick up cultural habits that are embodied in, you know, particular literatures and so forth.
And so I would imagine that if there's a Western bias to existing models, it won't be the case with Chinese models when they train them on, you know, Chinese material.
So I'm not too worried, or I guess this is,
worried is the wrong word.
I'm not hopeful that we will undermine China by, you know,
by these hidden biases in our AI models that we're exporting.
Hi, everyone. This is Joe. Just a quick heads up that from this point onwards,
both my main mic and my backup lav mic failed,
which is a rare tragedy in the world of podcasting.
That means my audio quality is much diminished.
I apologize for that.
Fortunately, however, we still have Frank's primary mic,
so his audio remains crystal clear.
Some questions about the future of work and its thymotic origins. So first some questions about megalothymia, and then a question about isothymia.
So a lot of reasonable people predict that, by the end of the century, as a result of AI advances, we might have machines that can perfectly substitute for human labor, and that might even make human labor obsolete. And then obviously you might have a world in which people are relying on something like universal basic income, and they have all of their material needs met, but humans are no longer doing anything economically valuable in the economy. Assume that world does arrive.
It strikes me that one of the virtues of liberal democracy, which you've written about, is that it provides outlets for megalothymia, and perhaps the most important outlet is entrepreneurship.
And that's for two reasons.
Firstly, megalothymic individuals generate wealth for society.
But secondly, it keeps them out of potentially disruptive activities in the realms of politics
and military.
I'm curious, if humans no longer do any of the economically valuable work in society, what do the new megalothymic outlets look like?
Well, look, we're already living in that world you describe.
I mean, you know, you have people like Donald Trump and Elon Musk that are intervening in
politics, you know, because of their megalothymia.
And so we're already seeing, you know, I think, the terrible consequences of that. No, I think in terms of work, this is why universal basic
income, I think, will never take off, is that people don't just have, you know, material needs
for resources to stay alive and, you know, pursue their hobbies. They really feel that their
dignity comes from work. And I think that, you know, in fact, you know, there's a significant
resistance to being on the government dole precisely because people are proud.
You know, they say, I, you know, I am a worker.
I do things that are useful.
My salary reflects my use to society.
And if you're just paying me for existing, you know, that doesn't make me feel good as a
human being.
I think that's a pretty universal kind of reaction.
And that's why I just think universal basic income is just never going to be a tenable
idea.
Right. Just as a complete sidebar on Trump and Musk, I'm curious, given their mutual
megalothymia, how you make sense of the kind of equilibrium they've managed to reach in their personal relationship. Well, I don't think it's much of an equilibrium. I've always thought
that Trump is going to drop Musk the moment that he becomes a political liability, and that's
probably going to happen sooner rather than later.
I don't think I'm breaching any Chatham House rules here, but I can always edit this
if I am, but I was at a dinner in San Francisco recently, and one of the attendees had been at
some kind of fundraising event at Mar-a-Lago in the last month or so. And apparently Trump has this
trick where he goes around before the dinner and asks each of the guests, who do you think is
more successful, me or Elon? Oh, yeah. Someone made the mistake of saying, well, Mr. President,
I think you're more successful in politics, but Elon's more successful in business. And
that person was not invited back. Yeah, yeah. Okay. So politics might start to become a more important outlet for megalothymia again in a world
in which humans do less work.
Yeah.
Uh-huh.
Well, I don't think, since I don't think we're going to get to that point.
Right.
But the nature of work is definitely going to change.
It's going to be less onerous and more mental and so forth.
So in the second edition of Kojève's Introduction to the Reading of Hegel, there's this footnote where he talks about the trip he took to Japan in 1959, and he kind of thought of Japanese society as in some sense being at the end of history, because after Shogun Hideyoshi in the 15th century, Japan basically hadn't suffered any civil wars or invasions of the homeland islands.
And he kind of reflected on the rituals and traditions of the aristocratic class and viewed them as engaging in a form of pure snobbery, where they engaged in these kinds of elaborate formal activities like flower arranging and Noh theatre.
And he viewed that as kind of an outlet for their megalothymia.
That would seem to suggest that in a world in which humans have become economically redundant, those kinds of activities take on a greater importance. Like maybe everyone's just trying to climb Mount Everest or working on very elaborate projects. It doesn't seem like a very plausible vision of... Well, but I think a lot of that has already
arrived. I mean, how many people and how many billions of dollars are involved in the
video game industry? I see. Right. I mean, we're already creating these artificial worlds
that have really no consequences for human beings,
except that they're an outlet for, you know,
people's thymos and ambitions and so forth.
It's kind of like Robert Nozick's experience machines.
Yeah.
Yeah, that's only going to continue.
Yeah, I mean, I think that's one of the problems in our politics
is that a significant part of the American population
lives in this fantasy online world, you know,
where reality doesn't really intrude very much, and you have as many lives as you want, and, you know, you never have to pay consequences for risks and so forth. Right. Some people have compared, or viewed as a model, the lifestyles of the landed gentry in the early modern era as a kind of vision of what people's lives might look like if/when artificial intelligence makes people economically redundant. And I'm curious what you make of that, because I can actually see a kind of
disanalogy there where the reason those aristocratic lifestyles worked was because those people
were kind of masters in the Hegelian sense. And so they had that, they had that sense of recognition. And so they were able to kind of engage in aristocratic activities and not do much economic work. It doesn't seem like you could apply that same model to a human future in which people weren't themselves masters?
Yeah, maybe.
I think that, again, I just resist this premise that you're going to get to this point
where people don't do economically useful work and, you know, that they then have time
to do other things because, first of all, human desire does not stop expanding.
Right. I mean, what makes a billionaire get up in the morning? If you got a billion dollars, you could just kind of lie in bed all day. You could fantasize. You could play video games, right? But they're all out there doing stuff. And I think that, you know, the reason is that there's no level of material wealth at which human beings say, okay, that's enough. I've got everything. I'm not going to do anything anymore. It just doesn't exist.
Yeah. I think in this world, it makes sense that people would still be doing projects, broadly construed,
but perhaps not work that generates an income if AIs can be doing it much cheaper.
Well, again, we're already living in that kind of a world, not that AIs are taking over,
but you know, you think about a lot of the products that are sold today, you know,
that are really not in the least bit necessary for any kind of human life,
but people still, you know, are involved in them.
And I just think that, you know, the human desires really don't have any particular limit.
And, you know, once you reach a certain level of material wealth,
you're still going to pile on more objectives and desires, you know,
that presume, you know, that you're already at a certain level, but you still want more.
I just don't see how, I mean, what I do think is going to happen is that a lot of activity
will go into things that are not traditionally thought of as, you know, producing material wealth.
But, you know, you can only drive so many cars, eat so many, you know, chocolate cookies.
Right.
Over the past few years, you may have noticed me putting more and more questions about AI
in conversations with prominent public intellectuals,
from Stephen Pinker in 2022 to Daniel Kahneman in 2023,
and the same to Webb in 2024.
I do this because AI looks poised to become
at least as transformative as the internet,
maybe even the printing press.
So it's probably worth wondering about it a little bit more
and nudging public intellectuals to do the same.
Many experts now believe it's plausible
that humanity could build artificial general
intelligence by the end of the decade, that would unlock extraordinary opportunities and pose
catastrophic risks. As with any general purpose technology, the question is how to maximize the
upside while managing the downside. One organization that's been wrestling with that question for more
than a decade is 80,000 Hours, who's sponsoring this episode. They were writing serious analyses of AI back when it sounded like pure science fiction. 80,000 Hours is a nonprofit
that helps people find fulfilling careers that do good.
Their research suggests that working to reduce risks from advanced AI
could be one of the highest impact things a person can do.
They provide free, in-depth resources,
including detailed career reviews from technical AI safety research
to governance and information security,
a job board with hundreds of high-impact opportunities,
a podcast featuring expert guests like Carl Shulman and Ajeya Cotra,
and free one-on-one career advice.
To explore those resources and download their career guide,
head to 80000hours.org slash Joe Walker.
That's 80000hours.org slash J-O-E-W-A-L-K-E-R.
It's free and takes about 30 seconds to sign up.
Okay, some random questions to finish on.
So Hegel thought that there would still be wars at the end of history, whereas Kojève felt that there wouldn't. How do you make sense of Kojève's view?
Well, I think that Hegel is probably more correct. I mean, if you take thymos seriously, you know, nothing substitutes for it. In fact, I think I said that in one of the last chapters of The End of History, that there's actually nothing like the risk of violent death in a, you know, in a struggle, in a military struggle, that makes people feel as fully human. And I think that that's going to, you know, continue to be the case. I think that a lot of
the political turmoil that we see around us now is really driven by that kind of desire.
You know, people want struggle for its own sake. They want risk and danger. And if their lives
are so contented and peaceful that they don't have it, they'll create it for themselves.
Right? So why are all these kids, you know, at Stanford and Columbia and Harvard and other places camping out, you know, on behalf of the Palestinians, right? I mean, why do they care about the Palestinians, right? What they want is they want to be seen as people that are struggling for justice because, you know, that's a noble human being. And I think that desire is really not going to, is not going to go away. And that's why, you know, the ultimate struggle for justice is,
really one where you actually do risk your life. And I think, you know, that's the sense that
Hegel had about why, you know, war wasn't going to disappear. Yeah. But how do you make sense
of Kojève's view? I don't know. Right. I don't know. Yeah. Interesting. So what more would
have to happen for the singular example of China to falsify the end of history thesis?
Well, I guess if in another 50 years they were the leading power in the way that the United States was and everybody wanted to emulate them, then yeah, I would say there's no, you know, the model, the liberal democratic model.
And we're halfway there, I must say, you know, it's not just China's success. It's our failure, you know, the failure of our democracy to actually produce kind of reasonable outcomes.
Do you see any signs of the democratic recession reversing?
Well, yeah, I mean, I wouldn't say that I see signs of it reversing.
I think it's always possible that people can still exercise agency and make different choices.
And so every election that goes by, you know, can actually go in very different directions.
So I think it's important for people to remember that.
and the fact that they can reverse this democratic decline if they, you know, if they struggle for it.
So I, I don't know, but I imagine you're one of the most misunderstood living public intellectuals.
And that would pertain mostly to the end of history thesis.
And I think people misunderstand it in two basic ways.
Firstly, they think that you were predicting that, you know, liberal democracy would spread to every corner of the globe over the short term. And then secondly, obviously more egregiously, some people misinterpreted you as saying that there would be an end to history in the sense of significant events.
And I imagine you probably get email every day telling you how you were wrong and you're
probably asked the same kind of questions every time you do a public event or a lecture.
If that premise is correct, I'm curious what that experience has been like for you and what
you've learned from it?
Well, I've largely learned to shut it out, you know. And you're right,
I mean, it never goes away.
And, you know, I guess my feeling is that there are enough people that have actually read my books.
In fact, there was a meme going around on Bluesky where there was a little checklist of things you had to do, and, I mean, one of them was to apologize to Francis Fukuyama for never having read his book, you know.
So, you know, I think there are people that actually did read the book and kind of understand that it's a little bit more complicated.
Right.
Has it changed the way in which you go about being a public intellectual?
Well, not particularly.
Right.
You know, I think that one of the big pitfalls of certain public intellectuals is that, you know, they have a big success.
They get this big dopamine hit early on in their careers, and they then constantly want to replicate it.
And so they are forced to then take positions that are more and more extreme and ridiculous. You know, I think Dinesh D'Souza, this right-wing commentator, is like that.
He was the editor of the Dartmouth Review when he was still in college, and he made a name for
himself by staking out these very conservative positions in a very liberal college.
And he's been trying to replicate that feeling ever since, you know, by saying things that are, you know, yet more outrageous and yet more right wing than the last thing he said.
And I just think that that's a trap that I never wanted to fall into.
I'm perfectly happy to be regarded as boring because I'm taking actually a reasonable position rather than trying to shock people.
I'm not trying to replicate the excitement everybody felt when the original end of history was published
because it'll just never happen and, you know, I'm not going to try to get there.
Is there any other advice you'd have for public intellectuals who want sustainable careers?
Because, you know, you say that you didn't want to force yourself to replicate the success and the dopamine hit of the end of history.
You know, the later books, like The Origins of Political Order and Political Order and Political Decay, are brilliant, in some sense as significant as The End of History, if not as well known.
So what, yeah, what other advice would you have for public intellectuals who want to be sustainable?
Well, I mean, if you want to be a public intellectual, you have to begin by being an intellectual,
meaning that you actually have to think about things and you have to, you know, do research and
take in information and process that information and then, you know, write about it.
You know, it's interesting.
When I published Trust, my second book, one of the reviewers said something that I thought was
right and sort of revealing.
He said that, yeah, this is a pretty good book.
You know, most people that have a big hit like The End of History, you know, that's all
they do in their careers and they don't go on to write a second book that's interesting.
and I think that, you know, for me, actually the success of the end of history was liberating
in the sense that I could actually write about whatever I wanted at that point.
And I could write a serious book, you know, about a very different topic and people would still
pay attention to it, not, you know, trying to replicate the success of the first book,
but, you know, the first book actually freed me to be able to write about whatever I wanted.
And I think that's been a great, you know, that's been a great advantage.
I never had to get tenure.
You know, I actually don't like the idea of tenure because I think that it forces younger academics to toe the line in terms of, you know, the kind of intellectual risks they take and the issues that they study, because it really narrows you to a very small subdiscipline within your bigger discipline.
And I think, you know, I've tried to avoid that, while still doing things that are intellectually serious.
Final question. What, if any, books are you working on at the moment?
Well, I've just written a little bit of an autobiographical memoir in which I try to, you know,
there's actually a thread that runs through a lot of my writing that might not be obvious,
but that connects different books I've written. So one of them is, you know, something we've talked about
already, which is the idea of thymos, which starts in, you know, the end of history, but it continues through to my most recent book, you know, about liberalism. But the other one is about bureaucracy and why I actually spend a lot of time worrying about the state and the nature of the state, and how that's all related to a bunch of different ideas I've had in the course of my career about delegation. You know, so for example, I've got a chapter in the autobiography on delegation
because at a certain point, I began to realize that delegation within the hierarchy is one of
the most difficult and most central questions to management, to public affairs, to law.
And it's something we're still fighting about, right?
Republicans believe that we've delegated too much power to the state.
And, you know, people on the left think we haven't delegated enough, you know.
And so there's a lot of things like that that aren't obvious, I think, to a lot of people.
So anyhow, this is going to try to tie those threads together in a more comprehensive way.
When will this be published?
I have a contract.
You know, I'm going to have to revise it, but probably in the next year or so.
Okay, so your stuff on delegation, it's not readily apparent, or that theme isn't readily apparent or organized in your existing published body of work, right?
Yeah.
Could you share your most interesting takes on it?
Well, I'll start with an anecdote, which is why I really started thinking seriously about this. In the late 1990s, we were in the midst of the first dot-com boom, and the whole of Silicon Valley was arguing in favor of flat organizations.
They were very opposed to hierarchies of, you know, various sorts.
And there was a feeling back then in this very libertarian moment
that everything could be organized on the basis of horizontal coordination.
The idea was the Internet was going to reduce transaction costs involved in this kind of coordination,
and nobody would actually have to listen to a boss, you know, in the future.
And that is not true.
You actually need hierarchy, because actually you can't coordinate, you know, on this horizontal basis.
But so in any event, this being the zeitgeist in the 1990s, I was still working at the RAND
Corporation.
And the last study I ever wrote for them was called The Virtual Corporation and Army Organization, because it seemed to me and a colleague of mine, Abe Shulsky, that the Army is the quintessentially hierarchical organization.
And here was Silicon Valley organizing itself in a much flatter way, without these hierarchies, and could the Army learn something from Silicon Valley?
And what I learned, well, this was sponsored by the Training and Doctrine Command in the Army, so we went around to a lot of different military bases, talked to a lot of
officers, and we realized that actually Silicon Valley didn't have anything to teach the Army
because they understood this already,
that after Vietnam,
they had done a lot of soul-searching
about why that war went so badly.
They began to change their doctrine.
They borrowed a lot from the Wehrmacht
and from German military practice.
There is a tradition in the German army
called Auftragstaktik,
which is basically a doctrine about delegation.
And it says that if you're going to be
a successful military organization,
the senior leaders, the generals, have to give only the broadest strategic direction,
and you have to delegate the maximum amount of authority to the lowest possible command level.
Because in a war, the people that actually know what's happening are the second lieutenants on the ground
that are trying to assault, you know, this village.
And, you know, it's not the general way back 100 kilometers, you know, at headquarters that understands that.
And then I began to realize that in corporate organization, that's true as well. You know, in the Toyota just-in-time manufacturing system, every
worker on the assembly line had a cord, and if they saw a production problem or a defect, they'd pull the
cord, stop the entire assembly line. If you think about what that means, you're delegating the ability
to stop the entire output of the factory to every single individual, low-level factory worker.
And that requires trust, but it also requires this huge amount of delegation.
And it's for the same reason the Army was delegating authority: it's the lowest levels of the organization that actually know what's really going on.
Right.
So it's like a Hayekian point.
Yeah, well, so the article that I always have my students read is one that Hayek wrote in 1945 called "The Use of Knowledge in Society."
And he said that in any economy, 99% of the useful information is local in nature.
It's not something that's known centrally; it's held by you in your particular local context.
And that's why he said that a market economy is going to work better than a centrally planned one
because, you know, price making in a market economy is based on local buyers and sellers that haggle
and, you know, set prices and therefore allocate resources efficiently.
And so all of a sudden, I said to myself, yeah, well, this really is important, you know, that in any hierarchy, you need the hierarchy because you need the generals to set, you know, the broad targets. But most dysfunctional organizations are ones that don't delegate enough authority.
And the Army really fixed itself.
I mean, the U.S. Army has become the best fighting force in the world. I mean, the IDF in Israel, you know, had a similar kind of doctrine, and that's one of the reasons that they got so good at warfare.
That's why the Ukrainians have been beating the Russians
because they absorbed a lot of this American doctrine about delegation, basically.
That's where this all started.
And so, you know, the only thing I wrote systematically about delegation was actually this RAND study on Army organization.
But it shows up in, you know, other things that I've written.
And so lately, I've been taking on the whole DOGE effort, I mean, this stupid, ridiculous effort of Elon Musk's to, you know, combat waste, fraud, and abuse in the government.
I mean, he's so wrong about, you know, so many things.
But he repeats this conservative mantra that the bureaucracy has too much autonomy, that it makes all sorts
of decisions that are left-wing, out of touch with the American people, and out of the control
of the democratically elected leaders.
And it's exactly wrong.
I mean, it's 180 degrees wrong.
The problem with the bureaucracy in this country and in most other countries is that it is
too controlled by the political authorities.
There are too many rules that bureaucrats feel they have to follow.
If you want to fix the bureaucracy and make it more efficient, as Elon Musk claims, you have to delegate more authority to them. You have to let them use their judgment.
You don't try to control them through thousands of pages of detailed rules and regulations
for buying an office desk or a computer or something like that. So I think, you know, this delegation
issue plays out in contemporary American politics, you know, as well as in military affairs
and as well as in factory organization, all sorts of places.
Right. That's super interesting. Yeah, for me, as an Australian outsider looking in, the DOGE effort is very much symptomatic of the kind of Lockean American political culture that doesn't trust government and wants to place more strictures around it, you know, starving the beast, so to speak.
Yeah. And as a result, they kind of get the opposite of what they intended. They get a government that works even less well.
Right. Because the bureaucracy becomes so risk-averse, it sort of breaks down the feedback loop between policy design and policy implementation.
Exactly, yeah. And that's what I teach my students here in this policy program that I run.
Right. I'm not sure whether you've quantified this concept of delegation, but is there a correlation between bureaucracies that delegate more effectively and state capacity?
Yes. So this is also one of the, I just won this award last year. It's a kind of lifetime
achievement award in public administration. So this is a field that Americans really don't
pay any attention to because they don't like bureaucracies. But I think that one of the reasons
I won the award was that I published an article back in 2013 that did exactly that. It said,
what's the appropriate amount of authority to delegate in a bureaucracy? And I said, it's
determined by the capacity of the people to whom you're delegating authority.
So like in the Federal Reserve, the staff of the Federal Reserve are all PhD economists,
and so you can safely delegate a lot of authority to them, whereas the TSA, the Transportation Security Administration, is full of, you know, high school graduates, and you're not going to
delegate a lot of authority to them to make complex judgments about, does this person look
like a terrorist or, you know, am I going to stop this person?
You just give them simple rules to follow.
And so, yeah, that's how it plays out, I think, the relationship between capacity and delegation.
It's super interesting.
I'm excited to read more about this in the memoir.
Frank, it's been an honor.
Thank you so much for joining me.
Yeah, well, thank you for talking to me.
Done.
Thank you, sir.