Limitless Podcast - The Intelligence Curse: How AGI Makes Us All Obsolete | Luke Drago & Rudolf
Episode Date: May 27, 2025

Welcome to Limitless. Today we're joined by Luke Drago and Rudolf, authors of the powerful essay series "The Intelligence Curse." Together, we explore a future where artificial general intelligence (AGI) threatens to upend the economic and social contracts that underpin modern civilization. Will AI empower us or make us obsolete? We unpack how labor-replacing AI could dismantle the very incentives that once gave rise to liberal democracies, social mobility, and human-centered innovation, and what it might take to build a future worth living in.

------

💫 LIMITLESS | SUBSCRIBE & FOLLOW
https://pod.link/1813210890
https://www.youtube.com/@Limitless-FT
https://x.com/LimitlessFT

------

TIMESTAMPS
0:00 What is the Intelligence Curse
4:29 Resource Curse
8:20 Pyramid Replacement
18:19 Institutional Pushback
21:25 Capital, AGI & Human Ambition
32:00 Liberalism Falls Apart?
36:30 Powerful Actors
41:15 Rentier States
46:19 Human Labor in an AGI World
52:46 Nation States
57:37 Shaping the Social Contract
1:06:23 AI Snake Oil?
1:07:41 Balance of Power
1:08:51 Breaking the Intelligence Curse
1:16:45 Vitalik's Defensive Accelerationism
1:18:16 Diffusion
1:19:58 Open-Source AI
1:22:06 Democratization
1:24:06 Who Wins?
1:26:43 Action Items
1:29:17 The Positive Scenario
1:30:21 Closing

------

RESOURCES
Luke Drago
https://x.com/luke_drago_
Rudolf Laine
https://x.com/LRudL_
The Intelligence Curse
https://intelligence-curse.ai/
Time Op-ed
https://time.com/7289692/when-ai-replaces-workers/
Contact Form
https://docs.google.com/forms/d/e/1FAIpQLSft2iBV9z1AYsM3TcDnh8z3juc2k4yD0TQTZ91oy37S-KlSSQ/viewform
AI Snake Oil - Arvind Narayanan
https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil
Vitalik's Defensive Accelerationism
https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
So the intelligence curse pretty briefly defined is the set of incentives that we might get when we unlock artificial general intelligence.
And we're reusing the OpenAI definition here, which is the ability to automate most or all human labor.
And in that world, we're really concerned that governments, powerful actors, corporations, won't have that incentive to care about regular people.
That doesn't mean they're guaranteed not to, but for basically all of human history there's been a strong economic incentive where
powerful actors have needed regular people, and so there's an exchange of goods and benefits.
If you sever that, you're relying a whole lot on goodwill.
And we think that's not nearly as stable or strong of an arrangement, and it mirrors the kind
of patterns you see in economies like those that are afflicted with the resource curse.
Are regular people just everybody? Just like, you know, you, me, white-collar workers,
people in developing countries, people in very developed countries.
Like, who are the regular people of whom you speak?
Yeah, so I think we really do mean here everyone, particularly everyone
who does not have capital, whether that's physical capital
or financial capital. And people often talk about white-collar workers.
There are also people in developing countries, of course, whose existence we shouldn't ignore.
So it really does mean everyone. Right now the state of things is that everyone can contribute
economically, so states and companies have an incentive to care about everyone. But then if
everyone's labor is replaced, then this is less true. So I'm really curious to ask you guys,
why now? What did you see that sparked the interest to actually make this post? Because
we've read through it and it was very, very thoughtful, very pragmatic. But what was it that
sparked this now versus a year ago or a year in the future? Was this, was this the exact right
time or do you think this thesis kind of changes as we progress over the next few years?
So I think for us, just for some history of how we got here: I think shortly after o3 was
announced, it looked like, oh, timelines are getting pretty short. AGI doesn't look,
you know, 20 years away or 50 years away. It might be five years away or even sooner than that.
I think we were having a series of conversations. We worked in the same building at the time,
in the same office building.
And I'd had this observation about the resource curse
and oh, this looks somewhat similar to those patterns.
And Rudolf had separately been working on this draft
of an essay, which is now in the full series,
called Capital, AGI, and Human Ambition.
We published earlier versions of those essays back in January
that didn't propose any sort of like solution
or way to try to stop this problem.
And we spent the next three or four months
banging our heads against the wall trying to figure out,
well, what can we do about this?
What does it look like to actually solve this
problem? A whole lot of the related essays in the piece that aren't squarely focused on the solution were essays that we were kind of writing and taking notes on as we got what we thought was closer and closer. And in the last couple of weeks, I think we just thought we had enough. And so we went ahead and hit the publish button. We are big fans of shipping in public. Yeah, I think also to mention on the timelines: when AGI is far away, it feels very much like a technical problem. And I think for a long time, a lot of the people who take AGI seriously have been thinking about it purely as a technical problem, thinking about the AI
systems. And I think just as it draws nearer, you start realizing that, okay, it's not just
going to be a technical thing, this is actually going to interact with the rest of society,
with the real world.
It's going to have very concrete effects.
Maybe we should actually think about those.
It seems important.
And it's funny because at Limitless, we are very optimistic about the future.
We are very excited about the pro tech, pro AI future.
And then Ryan surfaced this document with me, your post.
And I was like, I was reading through it and it kind of hurt a little bit.
I was like, this isn't the future that I'm super excited about.
But it was very thoughtful and very pragmatic.
And I wanted to ask you guys, why did you define this as the intelligence curse?
Why is intelligence not a blessing?
So I think the name just strictly comes from the comparison to the resource curse.
And while we don't root the entire analysis in the resource curse,
it's a very helpful example to really conceptualize what is a similar environment
that has similar incentives, what are the outcomes there?
But the initial observation was this looks a whole lot like the resource curse in
development economics.
So that's kind of where the name came from.
There's this huge wealth of existing literature and debates.
We didn't want to center the entire argument on that.
But we did think that name seemed quite relevant here.
And I think in particular why it might be a curse as opposed to a blessing.
It really depends on how it gets deployed, if it's a centralizing or decentralizing technology,
if it accumulates power in the hands of the few or distributes power out to many, many people.
And so I think it can be a curse.
And we set out the scenario for which it could be a curse, but we also offer the world in which it could be a blessing.
There's so much really to unpack here.
And some of our history at Limitless comes also from crypto, right?
And I know in this series of essays, you referenced Vitalik Buterin's work on decentralized or defensive
accelerationism.
And we've had him on the podcast to actually talk about that.
And that might be indeed one of the ways out.
But when you start to talk about decentralization, that very much does seem like maybe our main
defense against the centralizing effect of this.
But I don't want to project us too far forward in the solution.
And there's so much to unpack here, so much to go through.
We'll do it kind of sequentially by route of essay.
But one thing I do want to get to: we haven't defined it yet, but we've dropped this phrase, the resource curse, several times so far.
Luke, could you just define what the resource curse actually is?
I believe this pertains to countries and how endowed they are with maybe natural resources.
Tell us about the resource curse and why it's basically a meme for the name of this essay.
Yeah, so I'll stress first that it's not the sole piece of evidence we rest on.
It's very much so an example or an analogy that we want to build around.
But the resource curse, succinctly put, is the tendency for countries that have lots of natural resources
to oftentimes have, instead of very rich or wealthy citizens, actually worse conditions.
And there are a lot of different explanatory mechanisms for why that can happen.
But one of them that I think is pretty prominent in the literature is that if you have oil in the ground,
and all it takes for your state to get really wealthy is to get oil out of the ground onto the roads and onto the ports,
then your incentives are not to build this really complex economy;
your incentives are to make as much money out of oil as you can.
It doesn't require a whole lot of people to make money off of oil.
It might require workers to actually extract the resource,
to get it out to the ports and to sell it,
but that's a whole lot less people involved in an economy
than, let's say, a more developed advanced economy like the United States
where there's lots of moving parts here.
Now, there's a lot of different ways the resource curse ends up,
but for a whole lot of countries, particularly those that don't have really strong institutions,
the resource curse ends up in pretty terrible poverty.
There are ways out, which we kind of talk about later in the piece, you know,
as to what the ways out are, and what potential analogies we can
look to for solutions. But the core thing here is that you either want a diversified economy
or you want institutions that can withstand the curse.
Okay. And the examples of that are just countries in the Middle East, maybe they're oil-rich
and they really haven't developed their, I guess, civil liberties or kind of the labor economies
of their citizens, right? Or maybe a country like Russia, which is kind of in the grip of authoritarian,
totalitarian powers and it's, you know, they kind of devolved into plutocracy. I suppose that's
what you mean by the resource curse. Now, we also have counter-examples; maybe Norway is
very well endowed. There's a lot of energy there. Canada might be another example. I mean,
they seem to be doing fairly well with a liberal democracy. But the counterintuitive thing here
and why you're labeling this the intelligence curse: you would think that more resources
equals, like, better. More resources equals better for everybody. And it turns out that's actually
not the case for nation states when it comes to natural resources. Sometimes more
resources actually lead to an incentive structure that makes things worse for the population.
And that could be the same with the intelligence curse. Yeah, that's what we're saying. And we also think
that there's a lot of signs of hope there. We talk a lot about Norway, and we talk a bit about
Oman as well, as two examples of states that broke the curse, and what we can learn from
those. But yeah, I mean, states like the Democratic Republic of Congo, for example, or Nigeria, that are
just like really, they have tons of resources and yet their people are very poor. And the question is,
well, what are the incentives that are creating this outcome? Okay. So let's, let's now that we've got
kind of the gist of it, let's flesh out this argument in a lot more detail. And you have basically
a series of essays with different sections on this. But when I was kind of looking at the
high-level thesis, it feels like you're playing with a few, you know, premises, so maybe
three in my mind. Like, one is that AGI is the only game worth playing. There's a famous essay
titled this as well, but basically AGI accrues incredible capability and power. And as you said,
this could be like on the near-term horizon. We're talking about years, maybe five years, for instance.
So that's like the first premise you sort of have to believe. The second is that AI will
replace humans for valuable economic labor. We're going to flesh that out in
a second. And as a result of that second premise, the third kind of, I guess, idea here is that
powerful actors, these would be like nation states and companies, they no longer have an incentive
to care about the regular people, as you said. Why? Because the regular people used to be
their economic engine and their labor. But now with AGI, the regular people aren't providing
utility. So do we need these welfare states? Do we need these social structures? Do we need civil liberties?
Okay. So that's the base idea we're going to flesh out. And it begins here,
which is this concept of pyramid replacement.
I want you to sharpen this mental model.
So AIs, this idea that AIs will replace humans for all valuable labor.
And I'm showing on the screen a picture of a corporation, I think.
This is a typical company.
Companies are arranged in hierarchies.
At the base of the pyramid, you have your entry-level employees.
In the very top, you have the executives.
You have the C-suite.
So can you describe what this pyramid actually is
in the typical corporation and what you see AI's doing to this pyramid?
Yeah, so basically, so there is currently this hierarchical structure in companies.
And it's actually not obvious from first principles which end of the pyramid AI will start
automating first, but empirically it seems like AIs are getting good at tasks that have
short time horizons, where the task is completed quickly and then you move on to the next thing,
and they're getting better at longer time horizon tasks more slowly.
And there's also the social fact that the C-suite is less likely
to unemploy themselves than to unemploy other people.
And it's easiest with the entry-level employees,
because you don't need to fire anyone, you just stop hiring.
And that's why we think the first step in automation,
something that might already be happening in software companies,
is with the entry-level employees:
instead of hiring more and more of them into the company,
instead of giving the senior developers
at a software company an entry-level intern or something,
you just give that senior developer Cursor,
and they code with Cursor or some other AI coding tool,
and they don't need the entry-level employees anymore.
So, Ryan, if you don't mind scrolling down just a little bit.
What I loved about this section was kind of the visual that you guys created, which
would show this pyramid and the pyramid is blue and that means it's all humans.
And then as AI starts to roll out, it starts to absorb the entry level employees.
And then as it goes to junior and as it goes to middle management, it slowly absorbs the bottom layers
until eventually we're just left with the C-suite on top and then nothing.
And then everything gets absorbed by AI.
So I guess it's literally like one big AI like red block, right?
the pyramid becomes just like this AI Borg machine.
It's no longer a pyramid, it's a square.
Yeah, it's a square.
This is one of the, uh, I like that.
I might steal that, but this is one of the things we changed
from the original essay.
I think I still have like the rough draft published on my blog,
but originally it was just the pyramid kept getting smaller
and nothing was replacing it.
And you get to this last slide and it's just blank.
And we had like an outline of a square that looked like
it was just part of the picture the whole time,
and then at this point, it's just like, there's nothing.
I think I wrote the org chart goes blank.
And Rudolf had the idea of the square,
of maybe we should just show people an entire automated company,
actually show it's not just that people are going away,
but that AI is rapidly filling those functions.
So now you have the visual in front of you.
And there's also something here where we don't want to imply that when the AI
takes over in the future company, for every human employee
there will now be one AI agent that
matches one to one with each original human employee.
I think the optimal way to structure AIs in companies
will really look a bit different from the current thing
where you stack humans into a pyramid.
And therefore we represent the AI
with the square: a blob of AI compute around a sort of
shrinking number of humans that are providing that direction.
And this is what I was curious to ask now is because currently in the world of AI,
I feel like I am a leveraged human when I use it, where I am capable of X and then because of
AI I'm capable of Y. And I guess the question to you is, is will humans not just get better jobs?
I guess if you could imagine stacking the pyramid on top of the AI, where now we have this
foundation that provides a lot of leverage for entry-level employees all the way up to CEOs.
But the productive output that's unlocked as a result of that leverage creates new and interesting problems for them to tackle.
So would that not be the case where we become hyper-leveraged humans while removing some of the workforce, but not all of it?
Yeah, why can't this be a box with, like, a pointy hat, you know, a pointy little hat on top?
Well, for what it's worth, if you go one up, you'll actually find that box with a nice pointy hat on top.
I guess the real question is about the way that we currently structure our major white-collar companies.
These are big mega conglomerates,
companies of, like, a couple hundred thousand people,
or 10,000 people.
And if you look at, one, the success stories,
and, two, the existing statements CEOs are making:
on the success story side, Cursor has demonstrated
that you can be a multi-billion dollar company,
making tons of money, with only a couple of people there.
And if the general advice is you should only hire
as many people as you need to run the organization,
I'm not sure why Cursor would then hire 50,000 additional people.
I'm not sure if that would actually buy them additional runway right now.
But I think maybe what's more important anecdotally,
and then, Rudolf, I'll hand off to you for the more systematic argument:
I think the Duolingo CEO has now said they're an AI-first company,
and this means they're going to ask in every role they hire,
every contract position that they have,
whether or not they can automate this first.
I think we have at the bottom some links to other companies
who've also made similar statements here.
I can't recall off the top of my head all of them,
but I mean, the general ethos that we're hearing right now,
as this is kicking off, is what we really want to be doing here
is being more efficient, being leaner.
Rudolf, I hand it off to you for the more systematic argument.
Yeah, and I guess I think there is some hope
that humans currently have this advantage in
long-time-horizon tasks.
I think basically we know how to train AI to do tasks
where there's a large data set, or where we can build a
digital environment, a reinforcement learning environment,
where the correct behavior is rewarded.
And this works for things like writing, where there's a lot of data on it,
and you can just train it to write like the average
internet person pretty well.
And it works for things like math and code, where it's easy to
verify whether something is correct.
But then it's harder to train AIs to, like,
be the CEO, because the CEO interacts with the real world,
they take a lot of actions.
It's just that we're currently less good at getting AI
to be good at this stuff.
So I think the state where the AI uplifts the humans, this will continue for a while, and probably longer than some of the most aggressive AI projections estimate.
And I think there is hope that we can extend this period during which humans are mostly just uplifted by the AI.
And this will be very good for human agency and the ability of humans to effect change in the world.
I think right now we are definitely in this regime.
But then in the limit, there's no theoretical reason why the AI can't just also get good at the long-term planning.
That's what the AGI labs are trying to crack right now.
And at some point the board will come in and the board will be like, look,
you're the CEO, you have a nice job.
But I'm sorry, it looks like GPT-9 is, you know,
starting to get better at making decisions than you are.
And we're responsible to shareholders.
And I'm very sorry, you've done a good job.
But now we're going to have to lay you off.
Can we start at the top?
That would be kind of nice for a change.
You know, I've gotten a couple of those reactions.
And I think the way that we're most likely to be wrong on this model,
and I want to be, you know, as epistemically rigorous as I can be,
is that the middle gets cut first.
It could be the case that there are entry-level roles where we just really
need a whole lot of people to be doing, like, the base
work, and management becomes dramatically easier.
I think the evidence really points towards the former that we're getting this bottom-up pattern
of automation as opposed to this middle out right now.
And I think the reason for that is quite simple.
If it costs you $50,000 to hire a person to do something every year, but it would cost
you $10,000 in compute to do the exact same task, it's really hard to justify the additional
$40,000.
And sure, like Mike's a great analyst and you go golfing with him on the weekend, but that's $40,000.
And you can go golfing with Mike whether or not you work with him.
And so I think a lot of companies, when, you know, whether it's a downturn or whether they just want to save some money, they're faced with that question.
Luke, that was a spoken, like, a member of the C-suite.
Let me tell you that it's, you know.
I actually can't play golf.
Sorry, Mike.
I really can't play golf.
We can go off, but you can no longer work here.
Yeah.
So you guys are kind of, I guess, you see it maybe emerging right now as sort of bottom up, where, you know, entry-level programmers kind of are the first to go, or like support teams, customer support, something like that, kind of the first to go,
and it works its way upward. But you're also agnostic in this model as to whether it's
middle out or even top down or whether it's bottom up. Correct me if I'm wrong, but I believe
this is particular to white-collar jobs, yes? So is that part of the thesis, that the
information, like, knowledge worker class is kind of going to be the first to go, because our robotics
technology hasn't quite caught up to our software and LLMs? Yeah, I think that's the default. I think
right now, at least, LLMs are advancing faster than robotics.
And this creates the interesting possibility.
I think Carl Shulman talks about this idea
that we might have a period where humans are valuable,
not for their brains but for their hands.
And maybe we get this before we have the robotics.
That sounds worse.
So to make this concrete, imagine your job
is just, you're, like, assembling widgets in a factory,
but you have an earpiece where the AI is giving you instructions,
and it gives you, like, motivational quotes
from time to time to keep you on task or something.
And then you don't actually have to do any thinking,
because the AIs are better at all thinking.
like, maybe this is the future.
However, don't worry, maybe we also fix the robotics
and we get robotics quickly, and then you can't do the widgets either.
You're just like fully unemployed.
So there are many possibilities here.
Okay, so that's the concept of pyramid replacement.
Let's do some pushback though, some objections to this.
So one is kind of the, I guess this is maybe the,
at least I'm familiar with the Tyler Cowan kind of pushback argument
where he's basically like, you know, there's diffusion barriers.
And we've certainly seen this, like, you know, kind of coming from crypto.
So, like, you know, crypto could replace the entire world's money system.
But guess what?
There's actually regulators who kind of don't want that to happen, right?
There's institutions, there's structures.
There's all sorts of brakes in society, meatspace, government, that just slow things down.
It's kind of the human piece of it.
And so you might have this technology in a box, geniuses in a data center, whatever,
but it might not diffuse through society, because society has all of these brakes,
and, you know, like big brakes in meatspace.
And so will that kind of slow this down?
I mean, it feels like there's, we can adapt better.
I mean, just the general ideas, we can adapt better
if this happens very slowly versus if this happens,
like, in a period of months to years.
And what do you think about that diffusion argument?
So one, I think a lot of our solutions focus
in the breaking the intelligence curse section
is really at its core an argument
to try to extend the augmentation window
so that we get more time to adapt.
And I think if you look at the way the pyramid replacement flows right now, we argue it happens pretty slowly.
It's a bottom-up approach.
I don't think we give an exact time horizon because it's really hard to predict.
But I mean, I think if AGI hits in 2027, I think most people are still employed in 2028.
The question really for you is how fast after that moment.
And there are a couple reasons why I should expect companies would want to speed up pretty quickly.
Maybe, for example, they don't do the automation, but a competitor does and they start moving faster.
So now there's a competitive pressure to automate.
In the same way, maybe a state doesn't want to acquire a
certain weapons capability, but of course, another state has acquired that capability,
and now you're in a race to kind of get to the top here.
Maybe it's the case that there's an economic downturn and this forces cost-cutting everywhere,
and you do the layoffs and discover that you actually are at equal productivity or maybe
even faster when you automate that away.
So there are a lot of diffusion barriers.
We do not think that six months after AGI, everyone's unemployed.
But it's also important to note that diffusion barriers also have acceleratory pressures
that are pushing against them.
If you have this kind of technology and there are strong reasons to adopt it, if investors
are hyping it up, if people are seeing it work in the real world, it really is only a matter
of time before critical mass starts to emerge, and the way that we work is fundamentally
changed.
Okay. And there's been other points to talk about, you know, sort of AI being very jagged
now, right? Where, you know, like some things, you look at its output and you're like, oh, my God,
you are so dumb, like, I could do this. And other things, you're like, wow, this is incredible.
And so this could happen in a jagged way, I guess. I feel like we've established, then, that
there is the possibility, if we get this kind of acceleration towards some sort of
AGI, that AIs have the capability to replace the corporate human pyramid.
And I guess this is the capability to, I mean, corporations, companies, are the economic
engine of basically all societies, right?
So that's effectively what we're doing: we're replacing the economic engine of these
societies.
Let's move to kind of the second essay and the second piece of this intelligence curse, where
we start to talk about capital.
All right.
So now we've got a world where AIs have maybe started to erode or
replace the human labor pyramids in our corporations; they're doing the work.
And so I think you're making an argument here that the power which was in the hands of labor
(of course capital always has power, but labor has a large portion of the power in society because
humans are valuable), that would begin to shift.
And this is almost like a startling revelation, that the AIs might make non-human factors
of production more important than the human ones, in particular
capital. Can you develop some intuition for that for us?
Yeah. So first, I think it's worth clarifying that capital, when economists talk about it, often
means, you know, money, but it also means stuff like physical factories.
You can talk about factors of production, like land, labor, capital,
management, and capital here is a bucket that includes factories, GPUs, and also
just cash on hand. And I think...
Does it include, like, energy too, Rudolf?
Yeah, I think economists would call energy a type of capital, because it's not the
human factor of production, it's not land, which works a bit differently, and it's not management,
which, for our purposes, is kind of a bit like labor, because both involve humans currently.
So yeah, and then basically, the point here is just that right now
the economy needs a huge amount of human input, and if you add more human input on the margin,
the economy goes, you know, up.
And therefore, the marginal unit of human labor is compensated pretty highly,
at least compared to the historical precedent here.
And I think you can see this historically: before the Industrial Revolution,
the human factors, like human capital and education, skills, stuff like this, were
less important, because there was less real technology, fewer complicated processes,
and so the amount of power that human labor had was also lower. So that's the sort of
general argument. And then in this essay in particular, we talk a lot about this
point that you can start substituting capital for labor more effectively than you can
right now. And I think right now, for instance, if you're
trying to hire talented people, that's actually a big
bottleneck on your ability to convert money into results in the real world.
And this will go away if you can just use money to buy credits from
OpenAI to spend on tokens that replace the talent.
Right now, there's a lot of complexity and friction
in converting money into real-world results, but this will go down a lot once
you can acquire real-world results by just spending money on the AIs.
Tokens become your workforce essentially.
Yeah.
Yeah.
This was an important element that I didn't really realize until after reading this: that there
is this difference in capital between general capital and human capital, the actual labor workforce.
And tokenizing the labor workforce seems a little scary. So I'm curious to get your takes on kind of
the system, the way this kind of rolls out over time, in maybe best-case to worst-case
scenarios: what happens as humans get replaced by tokens? And as we reduce our
workforce incrementally, does that happen quickly? Does it happen slowly? And what are the
second-order effects downstream of that?
So let me say a bit about the second-order effects here.
So I think one of these is just the thing I already mentioned: for instance,
if you have a bunch of money, but you want results in the real world,
you're still bottlenecked on talent; you need to identify talent,
you need to hire talent.
There's a lot of friction here.
Another is that a lot of social mobility today
is based on like you are a talented human
and you don't have capital, but you can go out
in the world and do something.
And people with capital have to pay attention to you're like,
nimble startup founder or something.
And as you like, VCs, you have a lot of capital,
just, you know, need to be.
you, they think they will give you money, stuff like this.
I mean, another word for that is the American dream, right?
Yeah.
Yeah.
It's what we're all told, right?
I live in London now, but I grew up in the States, and we're all told from a very young age,
you know, if you work really hard, if you do well in school, if you go to the right college,
you will have a shot at the American dream.
And the American dream looks like accruing enough capital to be able to own things,
accruing enough capital to be able to make it, have a nice life, and fundamentally change your
social position.
And I think a lot of the argument here is that once capital can be
substituted for labor, because you can just sub in an AI,
your ability to work your way up the social hierarchy gets a whole lot harder, and maybe gets eliminated.
And then, like, as a society, I think a lot of social progress and, like, change depends on this thing that someone who is not currently incentivized to care about the current status quo comes from the outside and shifts things.
And then if you lose this ability to have social mobility, it's not just bad for individuals; it makes society more static as a whole.
Okay, so there's this idea that if capital becomes
a substitute, like a general substitute for labor, which you can imagine if pyramid replacement
is true. Basically, what pyramid replacement means is that instead of paying the human labor force,
I can just pay the OpenAI APIs, do this through tokens, and pay the geniuses in the
data center, and that's my labor force. And so I can just take my capital, which is my assets,
right, my money, and instead of putting my money into the slot machine of human labor, I just put
that into the slot machine of AI. And what you're saying is this kind of destroys social mobility.
You talk about, Josh was just asking about like kind of, you know, the best to worst case scenario.
And I think maybe these essays are like really focusing on the curse side of things and maybe less
than the blessing side. So one could imagine some blessing. But you talk about like, I mean, I guess,
one of the worst case scenarios, but in a way this is maybe one of the better worst case scenarios,
is this permanent caste system where we're all kind of locked into the capital ledger
that we're born into. And so maybe if you're born into a nation that has really,
I guess, embraced AI, and like your, I don't know, your father worked at OpenAI or was in the
industry, was early, right? And really was hooked up to this spigot of
capital, that's your caste. You're kind of locked in. It almost sounds sort of feudal in that
way. I mean, not having lived in sort of a strict caste society, and certainly embracing the idea
of meritocracy, maybe that all fades away is what you're saying. We're like permanently
cast into these kinds of capital ledgers. Yeah, and I think it's worth noting that social mobility
before the industrial revolution was very low. And I think social mobility depends on this thing of like
human talent matters and also the economy is growing.
and stuff like this.
And then, you know, before the Industrial Revolution,
if you were rich, probably at some point in the past,
your ancestors did something cool and the King gave them a bunch of land
and made them aristocrats or something.
But then like, then you get the Industrial Revolution.
Human talent really matters.
Social mobility is possible by going out and inventing things,
pushing science, pushing industry, stuff like this.
But then, you know, maybe we'll keep having technological progress,
maybe the amount of abundance in society will go up.
But even then you've lost this element of new people
being able to enter the elite, if AI is a substitute
for elite human talent.
Okay, but that's the thing that's counterintuitive, or like what I'm wondering about the argument.
So we got the Industrial Revolution, which is sort of machines replacing some human physical labor,
and you're saying that was actually good for the humans, basically.
Why does that not follow if we get an intelligence revolution that it's not just like good for the humans?
Well, I think the most important differentiating factor for humans as a species is our brains,
and that when freed up from physical labor, we're better than some animals
at physical tasks, worse than other animals at physical tasks, and having the thumb is a pretty
great advantage at using tools. But at the end of the day, it seems like the single best
advantage that people have is that they can think up new things and execute on them. And so post-industrial
societies get these really complex information economies that spend a whole lot of time,
both, I mean, producing lots of physical abundance in the real world, and with those resources,
using our brains and our heads to come up with even more abundance and more ideas. And you can see
this not just in existing economies versus old economies. You can see this today between
diversified economies and those more resource-curse-afflicted states: social mobility is lower there because
non-human factors of production mean that your ability to have some huge idea and make an outsized
impact is quite limited. Capital begets capital. This is true in every society,
but the ability of outlier talent to succeed is less if you don't need outlier talent in
the first place to make money.
And maybe also to add on this, a quick econ thing: what matters is
whether AI is a substitute or a complement for human labor. The thing that sets wages is basically:
when you add one additional marginal unit of labor, how much does output go up?
Pre-Industrial Revolution, an additional unit of labor is an additional peasant farmer; that's not very much.
Post-Industrial Revolution, an additional unit of labor commands a lot of
machines, commands a lot of capital. They actually boost the economy a lot,
they get high wages. But then if all the labor is done by AIs, you've got this
total substitution of humans, then with an additional unit of labor, output does not change,
and human wages are very low.
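The marginal-product argument here can be sketched formally (a simplified illustration, not from the essay; the production function and symbols are assumptions):

If output is $Y = F(K, L)$ and wages track the marginal product of labor, then $w = \partial Y / \partial L$. If AI capital $K_{\text{AI}}$ is a near-perfect substitute for labor with efficiency $\alpha$, effective labor is $L + \alpha K_{\text{AI}}$ and

\[
Y = F\big(K,\; L + \alpha K_{\text{AI}}\big), \qquad
w = \frac{\partial Y}{\partial L} = \frac{\partial F}{\partial \big(L + \alpha K_{\text{AI}}\big)}.
\]

Once $\alpha K_{\text{AI}} \gg L$, one more human worker barely changes the effective labor supply, so $w$ falls toward the cost of the AI compute that does the same work. That is the "additional unit of labor, output does not change" point in wage terms.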
In this model, who are you saying owns the capital, right?
So, you know, capital is basically sort of a, it's property rights.
So somebody's got to own it.
In this model, do the humans still own it?
Does it kind of consolidate to the tech companies?
Or do the AIs own it?
Like, how sci-fi are you getting in this?
So we call for a ban on AI ownership of property and on AIs being CEOs.
So we're willing to call for it.
We do.
But I don't know how likely that scenario is, but I don't want to preclude it.
But we went ahead and said, well, it's a pretty cheap ban to do, right?
It's not that hard to ban it right now.
maybe it's way harder in the future when we've already delegated lots of authority.
I think existing law today, at least in most countries, probably does prevent this outcome anyways.
But it's worth making that explicit.
But even if it's people, it really matters on how many there are.
I think most people in modern economies don't own a whole lot.
That doesn't mean that's necessarily a bad thing today because, of course, your labor is a very powerful thing to trade.
And in many cases, you might own way more than you did in previous societies.
But it's not the same as being able to own, like, you know, the kind of capital you might need
to command many, many, many AI agents that are replacing lots and lots of labor.
And I think, Rudolf, you've thought a lot about the ways this creates a more static
dynamic where, as you mentioned earlier, this could lock people into their existing
positions pre-intelligence explosion.
Okay.
So this idea that capital can now buy labor, so human labor is no longer necessary, there's
sort of another implication here, which is that classical liberalism starts to fall apart.
And so, you know, post-feudal societies, post-Enlightenment, we
have generally experienced, not in all places, not in all countries, not in all regions, of course,
but it's generally led to better human outcomes, right?
You know, quality of life, life expectancy in general, wealth, freedoms, the whole concept
of just like humanism.
We've ended terrible practices for humans like chattel slavery, at least in like most places.
So we kind of pat ourselves on the back and we think like, oh, wow, we've really advanced.
We've just, like, gotten some better moral software and we've kind of like clearly evolved.
I think that what you're arguing, though, is that there's just like a more utilitarian perspective on this,
which is like, maybe you could sharpen this argument for me, but it's like nation states gave labor these rights,
citizens, these rights, because they were so damn useful.
It was just that they gave citizens these rights because they needed to attract the labor pools and the brains to develop their economies.
And if humans become less necessary for, say, nation states, we've already demonstrated how they may be less necessary for corporations, then that entire, I guess, social construct starts to fade out.
Can you sharpen that intuition for us?
I think it's definitely true that there's a lot of institutional inertia, in the sense that right now, if you live in a
society that really values humans and cares about humans, and politically might be willing to
introduce a UBI or stuff like this, or universal basic compute or whatever, then, you know,
there's a strong chance that this society has a lot of inertia in this direction.
But then like societies don't exist in a vacuum, they compete with each other.
There's a sense in which, for instance, of all the countries in Europe, Britain was doing the
most to sort of be compatible with industrialization.
And as a result, you know, they were quite
politically advanced for their time.
They had quite a lot of freedoms.
They were good at encouraging industry, stuff like this.
And as a result, Britain becomes the preeminent power.
And there's a thing of, like, you know, there are a lot of societies,
a lot of countries in the world, and they're in competition with each other.
So it's not sufficient that if one society makes this choice,
they can continue on their own.
It also matters which strategy wins overall in the world.
It's not clear to me if this dynamic is bottom up or top down.
Is it that states gave these rights knowing it would make them more competitive, or that workers,
or people who owned capital, had more power than the state and were able to demand these rights?
I think about the Magna Carta in Britain, for example,
the foundational document for the concept of modern democracy,
where the landed gentry, people with lots of property,
weren't necessarily as powerful as the king,
but in many cases
controlled the factors of production that created wealth for that king.
And so this put them in a position where they could make a whole lot of demands upon a king.
You see the evolution of British democracy is that it first starts with this landed gentry class.
And here in America, voting rights begin, you know, the idea of a self-determined government
doesn't start with everyone being involved.
It starts with these diffused property-owning men who, because of that position,
had some sort of diffuse power of their own.
There's this Charlie Munger quote that I think was at the top of the original intelligence
curse post on my blog, which is: show me the incentive and I'll show you the outcome.
And I don't think it's the case that cultural evolution plays no role here.
I think it's quite important.
But it's also worth asking, what is the role that economic incentives play on cultural
evolution and how strong are those incentives?
And so I think in the limit, these incentives are probably the dominating
force here. There's this quote, I think, in this part of the essay where you say this, the classical
liberals today can credibly claim that the arc of history really does bend towards freedom
and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism
and geopolitics. But after labor-replacing AI, this will no longer be true. Wow. So potentially
classical liberalism on the line here. Let's get to the next essay. So we've talked about capital
and its importance, and how that could be kind of the dominant feature of a post-AGI type society.
So let's draw some more implications for what that means for kind of, I guess, the nation-state's relationship with its citizens.
So this is the heart of maybe the intelligence curse.
This is where the curse starts to come down on us even stronger.
And the summary is this.
With AGI, powerful actors will lose their incentive to invest in regular people, just as resource-rich states today,
neglect their citizens because their wealth comes
from natural resources rather than taxing human labor.
And this is the intelligence curse.
Why do powerful actors like nation states invest in their people today?
Well, think about it.
If I'm a government right now and I want to make a lot of money,
maybe I want to make money for a variety of reasons.
Maybe I'm altruistic and I want to provide better care for my citizens.
Maybe I'm self-interested and I want my state to do well.
There are a host of reasons why you
might want this, because money gets you power.
But in order to get that right now, you can do a couple of things.
One, you can try to find some sort of resource, but maybe you don't have it.
Two, you can try to increase your return on investment in your people.
And given that most economies right now really flow through people, and developed,
diverse economies go through people and their labor and their work,
You can up that return by doing a couple of things.
You can really increase the quality of education.
You can build infrastructure like roads and public transportation, which helps get
investment flowing in areas. You can build these really reliable governance systems to encourage
investment. You can foster competitive markets. You can support small business formation. You can do all
of these things that make it more likely that your population produces meaningful economic results,
and then you can tax them more heavily. I think right now in the United States,
as a share of total tax revenue,
something like 50% derives from income taxes, whereas 12 to 13%
comes from corporate taxes.
And so in that world, of course,
you want to make sure that people are making more money,
because if they make more money, well, then you accrue more tax revenue.
And you can do more things with that tax revenue.
It just so happens that these investments are the kind of things
that we associate with a better quality of life.
And they also give you better bargaining power.
But the other thing, of course, and we'll get into this a bit later,
is maybe this impacts the ability of that government
to maintain and retain power,
maybe through democratic means like voting
or through credible threats to an
autocratic regime.
Okay.
So you might want to buy those voters off.
Yeah.
So basically you're saying like kind of the public goods, the public infrastructure.
Take, for example, like just like free public education to all its citizens.
This is not the government doing kind of, quote unquote, the right thing for its citizens.
This is not governments, nation-states of the world, just being altruistic and nice
and kind.
They're doing it for a reason, for an economic outcome, which is: with an educated population,
we have greater ability to produce GDP, and the government benefits from being able to tax that
GDP, and that's how it makes revenue for itself.
And this is basically the reason, the incentive, for governments to invest in education
or, for that matter, any kind of good thing for their people.
Does that analysis miss something?
It feels like it feels very mechanistic and bleak.
Maybe that is the way the world works.
But like, you know, is there not like some sense of morality and like doing the right thing?
So I'll mention one thing here, which is that this analysis doesn't require anyone consciously
to be this cynical in government.
So for instance, you can imagine in the year 1500, in Europe under feudalism,
maybe there were people who were really altruistic and would even, you know, give people
free health care and stuff like this, but then they get invaded by the neighboring country
who invested more in their military and stuff like this.
Whereas today, it's like: the people who were altruistic in 1800, who were
liberals promoting things like political progress, these are the
people who ended up shaping the future, because their values at that point in time were associated
with the winning strategy.
Well, ask yourself too, just if you hear, I'll tell you nothing about these two politicians,
but a politician who wants to increase spending on education and a politician who wants
to decrease spending in education, gut check, which one feels more altruistic to you?
Increase, right?
Yeah.
And I think it just is the case here that the incentives are aligned.
The thing that sounds really good and altruistic also is a thing that returns money to the
state.
Now, in the world where the state has to cut that education spending because they're in a fiscal crisis,
or because making that spending choice would be super irresponsible,
maybe there is just, as Rudolf said, an invasion coming;
they've got that military spending.
It doesn't make that other politician any less altruistic in theory.
It just means that other politician has really strong, countervailing incentives to do something
else.
Okay, so it's not altruism.
It's just a happy coincidence of incentive alignment.
That's why we like get beneficial things.
It's interesting, going back to the idea of the resource curse, maybe we could talk a little
bit more about that because we do have real-world examples, experiments at play.
where there are very resource-rich countries
to the extent that they can generate economic returns
and taxes, revenue, essentially,
to pay for the government based on resources
rather than human capital,
rather than human labor.
And this is the resource curse that we were referring to
at the outset of this episode.
And we actually, there's a name for these,
which are rentier states, I believe.
Okay, so tell us again about the rentier states.
Let's get into some more detail here
because this is essentially the experiment at play.
Again, if your nation is resource rich,
how does it treat its citizens?
What are the experiments that we've seen right now in the real world?
So we can look at the raw incentives.
There's a very interesting counterargument
that I think we actually incorporate,
which is that many states have such good institutions
that they aren't exposed to these incentives,
that they can avoid these incentives.
But at the raw incentive level,
I think we talk about the Democratic Republic of Congo, for example,
extremely complicated history,
but with literally trillions of dollars in minerals
below their feet and hundreds of billions of dollars,
in, I believe, total revenue from those minerals,
and yet their people subsist on a couple of dollars a day.
And if the state has the kind of resources
that could enable relatively widespread wealth,
the question is, why don't they do it?
And oftentimes in these states, you can imagine,
I think it's very easy for us to imagine in the West,
a state far away that is exposed to high levels of corruption
or exposed to high levels of inequality
because the leaders choose themselves over their people.
It just sounds quite foreign to us here in the West.
But of course, this is what we see time and time
again in many resource-curse-afflicted states:
the leaders get rich, or the people who control the mining rights get rich,
the oligarchs get rich, but many regular people don't see the benefits of that economic activity
because there's no reason to invest directly in them to reap that reward.
Are there counterexamples of resource-rich nations that are actually
protecting their citizens fairly well?
Norway comes to mind; Canada sort of comes to mind.
The UAE comes to mind as well.
Yeah, how do you explain that?
So one, I think the key thing here is, when we're talking about
rentier states, we're talking about states that have a very large portion of their revenue
coming from natural resources.
It's a dominating force in their economy.
There's, I think we cite it in the piece, but there's these two really interesting examples
that the authors give of Norway and Oman as counter examples of states that don't fall into
the resource curse.
In Norway's case, by the time Norway discovers oil, they have this really efficient, really
anti-corrupt, fantastic democracy, where voting incentives kind of override raw capital
incentives where the bureaucracy understands how to do things in really complex ways, and this means
that voters have, like, very real power, and the benefits get dispersed in different ways.
The voters can outvote capital incentives.
In states like Oman, Oman in particular, had a pretty credible threat of some sort of uprising,
a revolution.
I can't recall the exact example from the paper.
I know we cited it directly.
And as a result of this, the state capitulates.
And I think the kind of analogy that I've used here before is that if you are the person that
owns the rents, you'd really like to keep all the rents to yourself.
You'd also like to keep your head meaningfully attached to your body.
And so if capitulating on that, if paying out people, if doling out the rent money,
keeps your population in check, you're much more likely to want to do this because otherwise
that credible threat could be quite real.
Now, what we say is twofold.
One, we think that advanced AI makes it way easier for the state to know everything and
suppress things like revolution.
And so we think the revolutionary argument in autocracies is kind of off the table.
We find ourselves in the contrarian position
of defending democracy pretty strongly here,
which has, I think, become kind of contrarian
in some tech circles in recent years,
but we actually think it's justified.
It's not contrarian around here.
I hope so.
I appreciate this.
But, you know, in the Norway example,
the questions you have to ask are: one, do you think most states
have the kind of robust and rigorous democracy
Norway has? And two, do you think the intelligence
curse, replacing all labor, might be stronger than the resource curse?
In our case, we think it probably could be stronger,
but it really shows us an interesting way out.
That's interesting.
So you're basically saying it's just like,
Norway had the democratic decentralized institutions strong enough to kind of withstand some of the pressures of the resource curse.
But the open question is like how many other countries have that?
And when you get into the intelligence curse, no matter what your institutions, your democratic institutions are, will that be enough to withstand the tidal wave of this intelligence curse?
And that's kind of an open question.
But it does provide maybe a sliver of hope, which is that with robust institutions,
maybe we can withstand the coming tidal wave.
I don't want to get into solutions too soon, because we're still on the problem,
so I'll just earmark that right now.
Yeah, I think what I want to know, as a layman listening to this,
who is just on the street and now understanding that he will soon not be able to trade
his labor for capital.
I'm curious.
You're out of a job.
I'm out of a job.
So, it's a podcast over.
That's what I'm saying.
I'm listening to this.
I'm like, hmm, okay, well, I can't do this.
I can't do this.
I am no longer able to trade my labor for capital.
What does that look like for the average person?
Are they collecting government welfare?
Is there universal basic income?
How does my life, how am I able to accrue capital if I am just one of those elementary
workers in the workforce?
Well, I guess the question is how much do you want to get into our solution section right now?
So perhaps we'll hold that for a second because there is another element that I was also
interested in talking about, which is just the human element.
As a human, I like human interaction.
I like going to hang out with friends.
I like buying homemade things from people.
I like meeting the artist that create the art that's on my walls.
I really enjoy that connection.
And when we introduce this, this AGI element,
this artificial intelligence force,
it feels very inhuman and artificial,
and it feels very sterile in that way.
And when I think as someone who is experiencing
the human experience, I'm really curious,
what element does the human nature have on the way this all plays out?
So I think it is definitely true that there's a lot of things
where humans have a preference for interacting with humans.
And I think this will continue,
and I think there will be a lot of
social-facing jobs where humans have a very high bar for replacing that role
with an AI. And I think this does provide a sort of buffer, where I think there will be some
jobs that last quite long, like maybe a teacher, or maybe
interfacing with customers; even if you're a salesperson, it's very much about
personal relationships, and humans might prefer that for quite a while. So I guess there's
some question about how charismatic the AIs are, how good they get at,
um, hijacking human social instincts, stuff like this.
But there's also a question: so the humans currently have money, and there's some capital flowing around the human economy.
But the AIs will be increasingly doing stuff, and it might be that money increasingly flows towards the AI part of the economy.
And in particular, how are the humans earning the money with which they pay each other for the human services,
when at least some of that human money also has to be spent on doing the AI stuff that keeps them alive, keeps them fed, stuff like this?
Is this all about the workforce, or how far does this go?
So I guess currently a human thing that I would be really excited about is to teach my kids something, or to be a father to my children.
And how far does that go?
Does it get into the household?
Does it kind of remove the need for humans through the entire process?
So there's a really good paper that walks through some of the cultural elements here.
It's called Gradual Disempowerment.
We know the authors quite well.
It came out around the same time that ours did.
We focused less on this cultural element, mostly because we were trying to isolate what
we think is a really critical variable here on the economics side.
But I'll tell you, I had an interesting interaction a couple of days ago
with someone who was telling me that they talk to ChatGPT constantly,
and that they think their dad talks to ChatGPT more than he talks to his kids,
that he was on it like two or three hours a day, maybe more.
And so I think the capacity for machines to alter our relationship with each other seems quite high.
I don't have the exact quote that Mark Zuckerberg said on a podcast recently,
so hopefully I'm not portraying this too badly,
but it was something along the lines of, you know,
the average American has four or five friends,
but they have capacity for 15.
We can substitute a lot of that with machines.
For me, I'm not excited about this vision.
This is really not exciting to me at all.
I really value the real world and the people that I get to interact with.
And maybe this is something I don't want to impose, to say that I get to make a choice
that nobody ever gets to go down that rabbit hole.
But it's certainly not the technology that I'm excited about building.
Okay, that's really fascinating.
Because like when you start getting the family,
you just talk about just like being a parent or something,
or just like being a father.
And can an AI really do that better?
But then you get into scenarios where, I mean, a lot of people grow up
without their father, right?
Maybe, you know, due to an early death or just something else.
And is an AI maybe providing some parenthood there?
I guess what you guys are saying, though, is you're acknowledging that,
you know, AI cannot replace all of our labor, because we still might want to go to an arts
and crafts fair and purchase a piece of artwork for cultural reasons from a real human artist
that we just resonate with and identify with. And that's still going to be a market and an
economy. What you're saying is that over time that could become a smaller and smaller and smaller
portion of the economy. And even the humans' purchasing power in this world could actually decrease,
because where is their wealth to go purchase the artwork actually coming from? And so you could
imagine just that economy, that human-to-human kind of economy where only humans can provide
this, just gets smaller and smaller over time. It's kind of a niche. And so the humans are
maybe, I guess, disempowered even though these economies still exist. So yeah, yeah, go ahead.
Or it could even be that, you know, human wages stay roughly constant. Everyone has
vaguely pro-social jobs. The money flowing into the human part of the economy comes
from something, something government, something existing human wealth.
So human wages are what they are today,
but then also humans just don't really have political power anymore
because states worry about like, you know, real things
like energy and GPUs and like military competition
and all of these fields are done by AI.
And then I think the human role becomes a bit peripheral,
no longer tied to the sort of real power
that exists in the world.
And I think I'm a bit worried about that,
even if humans
have their wage level at what it is right now.
Another way to think about it too is I hear a lot
that people will always want human teachers, right?
Because yeah, there's this human
interaction that you give with a teacher and it's really hard to replace. A relevant question,
though, is what will be the demand for schools? What is the incentive for states to fund mass
public education in a world where they aren't receiving a return there? That doesn't mean it isn't
going to happen, but you should look at the underlying economic incentives. And it could
be the case, as you described, where like many, many fields are automated. And so the money
flowing in this human economy is just increasingly limited or, you know, dwindles over time. I think there
are a lot of ways in which you can reach a pretty bad outcome through different mechanisms here.
And a lot of our solutions focus on trying to keep humans meaningfully economically
involved in many different ways while also strengthening democratic incentives and democratic
structures so that they can override capital incentives when they need to.
Well, just if we could stretch this a little farther and kind of, like, imagine a world here.
So, like, how do future nation states actually, like, make money in an AI-dominated
economy?
Like how do they tax?
Like obviously now our tax mechanisms are just income tax, capital gains tax, consumption
type tax, excise tax, increasingly tariffs. That's fun. But the nation state is really going to have
to reorient around AI laborers. And that's another interesting question. It's like maybe actually
the nation state is not the one in charge. I mean, we're in a world of nation states, but that is kind of a
post-feudal model that kind of arose on the back, really, of the last major technological change,
which was the Industrial Revolution. Maybe we're going to reorganize. Balaji Srinivasan has this concept of the
network state. And you sort of wonder if maybe some of these AI labs could be in a position
to accrue such power that they actually become the dominant force, some kind of like, you know,
open AI network state complete with, you know, like a flag and Sam Altman as the president. I mean,
like, who knows, right? How do you guys see this playing out? Yeah, I think there's definitely this
question over, like, do nation states continue as the main form of political organization or like
main form of organization of power in the world? And I think there's something where like,
So one, I think you should have some prior that these things are pretty sticky.
So even the, like, Catholic Church, you know, they were extremely powerful.
They ran Europe for a few centuries in the past.
But, I'm like, you know, they don't run Europe anymore.
We still have a Pope.
And they're actually making a lot of commentary about AI recently.
The thing is, like, this stuff decays quite slowly.
I think, this is a, Rudolf has been subjected to me spending the last
couple of days really nerding out about this.
I just, I literally just had a-
About the Catholic Church and AI?
Yeah, so I have a reading list that I'm in the middle of right now.
Because there's the most recent Pope.
A slight sidetrack: the recent Pope, he's now said publicly,
one of the reasons that he took the name Leo XIV
is because Leo XIII had this very prescient encyclical
called Rerum Novarum on the Industrial Revolution in the 1890s,
and he views AI as a similar style of societal reorganization.
And actually, Pope Francis had a whole lot of commentary here.
I've got a reading list I'm working through right now.
I just was, yesterday we were in Oxford
and I was talking to a friar.
All right, Josh, Josh, new guest request.
We got to get the Pope on Limitless.
Yep, we'll ask his thoughts on AI.
All right.
We'll do this in the Vatican.
Sounds good.
Oh, fascinating.
So we don't really know what the organizing political structure might be in this new world, but we
could imagine it changes, but you're also saying that, hey, the nation state is pretty sticky.
The Catholic Church is still doing like big things.
Maybe it'll fade somewhat.
It probably won't go away, but that's kind of a TBD, like we don't know yet.
Yeah.
And also, like, if something, if we get, you know, AGI-lab network states, like, the same incentives
kind of apply to them by default.
And also they aren't democracies by default,
unless they become a democracy.
Yeah, a core observation here is that AI can be both destabilizing and centralizing.
And this seems kind of counterintuitive, but it could be the case that there's lots of very
quick disruption, and the winners of that disruption can very quickly accumulate power and capital.
I'm not saying that is certain, but one scenario you could see here is that it can both
destabilize a lot of things and then centralize power among the winners.
Yeah, the centralization of power seems to be a massive theme for you guys.
Like, what I'm getting out of this is definitely some worry about AI.
I wouldn't call it doomerism, right?
It's not, there's some, like, look, there could be a scenario where AI comes to kill humanity.
I think it's just like you can see that point.
That's not really the focus.
The focus is more this attractor basin towards authoritarian totalitarianism, right?
Which could be possible.
I mean, this is even Daniel Schmachtenberger's work.
I don't know if you guys have looked him up, but he talks about just with all of these tech revolutions,
what we could see is this attractor basin towards, like, total societal control to actually keep our tech in check.
There's one more concept, though, we got to go through before we actually get to this.
Let me say something about the power concentration thing. I think one thing that people
miss is, so throughout history, we've had really terrible tyrants and dictators, really terrible
centralization of power.
But all of them have fundamentally been limited by the fact that they, whoever the dictator
is, they still need, they're not infinitely competent, they can't think incredibly fast,
they still need a lot of other people to do things for them, and they somehow need to get
buy-in of like a big group of, a big bureaucracy and then of the population that they rule over.
And, like, fundamentally, their power, like, is still rooted in humans.
If you're a dictator, you're constantly paranoid about everyone else, like, overthrowing you.
Like, they're still, like, fundamental.
You know what?
They also get strokes.
They also, their life expectancy is only about 80 years.
Oh, I didn't even think of that.
Even though they could pay for such good healthcare, exactly.
Yes.
But then, like, once you don't need the, like, bureaucracy of humans working for you, once you don't need the human military, you just have AI bureaucracy, you have, like, AI military, you don't need the population to run your economy.
The, like, constraints on how total the totalitarianism can get, get a lot worse.
Indeed, they do. Okay. There is a way out, guys, all right?
There is, yes.
For limitless listeners, if you're in despair now, never fear. We've got some solutions for you.
But one more concept to cover. So this is, I think, the last essay before you kind of like conclude
all of the things and give some of your recommendations for the way out, which is this idea
of the social contract, okay? An essay titled "Shaping the Social Contract." And what you're saying
is the intelligence curse is breaking the social contract. And I really like this diagram that
you sort of show, which is just like
this nice equilibrium balance of power.
You've got like three boxes here.
You've got powerful actors, so these would be corporations, nation states, you know, the big
powerful networks.
You've got the people and then you've got the rules, okay?
And so there's a dependency, there's lines of dependency between the powerful actors, the people,
and the rules.
So the powerful actors, they need the people for value.
We've already established that.
They need labor, right?
And so that's like people, plus one for the people.
The people can displace the powerful actors.
We've seen that throughout history, French Revolution, just like American Revolution, right?
If the powerful actors get too totalitarian, we stage revolts, right?
The people are strong.
And what we've done is we've created these social contracts, basically rules for society.
And so these rules are moral codes, but, I guess, in more detail,
it's kind of our legal system.
It's the constitution of the US.
It's the Magna Carta.
So there's this, and the people can influence the rules, the powerful actors,
have to, they're constrained by the rules.
We get the balance of power, separation of church and state,
three co-equal branches, all of these things, right?
It's like all very nice.
And that's our current setup.
That's the status quo.
What you're saying is this whole AGI thing
kind of disrupts the social contract,
because it means the people can't displace powerful actors,
as you were just saying, Rudolph.
It means the powerful actors, so the nation states,
don't need the people for value.
They can just pay, you know, for tokens from the AI geniuses in a data center.
And then the powerful actors have the ability to influence the rules.
The whole social contract is messed up.
Did I, like, flesh this idea out a little bit more?
Is this kind of what you're saying?
So I'll zoom in on just a single interaction here, which I think helps articulate this.
And I know your listener base.
So let's zoom in on a software engineer at Google.
And let's say it's 2021, which I think, if I'm correct here, is like, that's the big year
where, like, everyone is getting paid crazy amounts of money.
You are negotiating with Google on your contract, and you have something that they want.
In this case, you have like, you're really good at what you do.
They want to hire you.
Well, because of this, you get to extract a whole lot of concessions.
You're competitive on the marketplace.
You get to ask for more RSUs.
You get to ask for more stock.
You get to ask for more money.
You also get things like the free cafe on campus because they've got to attract you somehow.
Or I think it's like 16 or 17 restaurants and Mountain Dew on their campus.
It's absolutely crazy.
It's a cool campus.
You get a lot of these benefits because of that.
And of course, Google gets something out of you too,
because they might pay you $400,000,
but as long as they've done their vetting here,
they're going to make a whole lot more than $400,000
from your labor.
But everybody wins in this relationship.
Now imagine that Google is able to replace your labor
with a machine that can code way better than you.
This really disrupts the relationship, right?
Because let's say, you know, in this case,
it can create value for Google at a cheaper cost than you.
It costs, I don't know, like $100,000 a year,
$150,000, $200,000.
That's in the price range right there, where it's really economically sensible for Google
to cut you out of the process, but difficult for you to then go like, to create, you know,
10 trillion clones of yourself and go compete with Google.
And in the limit, this creates a world where powerful actors can get more and more entrenched
as capital substitutes for labor more and more perfectly.
Your ability to displace them goes down, while simultaneously, your ability to bargain with them
also decreases because you don't have anything that they need.
This might create a situation in which powerful actors get to set the rules, and you are
constrained by them, and it's very difficult for you to alter that relationship.
That follows through to the government too, right?
And it basically breaks its social contract with its citizens when, like, it doesn't need the citizens
very much anymore.
I guess my question here, or a bit of pushback, is, you know how we call it a social contract,
right?
And that's because it's sort of, it's enforced socially.
Yeah, there's power of the state, there's military, there's kind of like monopoly on violence
types of things.
But over time, human societies have been able to, like, construct their own social contracts.
Like, what is something like the Constitution? It's just, like, a set of laws and legal codes
and ideas that we all agree on in this nation called the United States of America, right?
Like, we put that in place.
Yuval Noah Harari calls these kind of, like, myths, right?
They're just, like, these shared beliefs that power so much of human society.
So my question is, like, okay, if we get to kind of choose social
contracts, why don't we just pick one that doesn't screw over all the humans, that doesn't
screw over citizens and their labor?
Like, we put these things together.
They're just shared myths.
They're socially enforced.
Why don't we pick one that's good?
And by the way, if this AGI thing comes true, won't we have abundance too?
Won't we have, like, basically 10% GDP growth a year?
Won't we have, like, fantastic wealth?
At least somebody's making the wealth.
And so this abundance, shouldn't this relieve the competitive pressures?
We don't have to think about the basics of food and shelter, because it's all provided for us.
And so we're not in this competitive game anymore.
We can just think about what makes society happy and pick a social contract that enforces that.
I guess maybe one historical example here is like the British Empire tried to enforce a social contract on the US,
or before it was the US.
And then the Americans were like, okay, actually we don't think this is fine.
And it was like a reality check to the Brits.
And it turned out the Brits did not have the ability to enforce that;
the institutions' real power was against them,
and then the Americans wrote their own social contract,
which became the Constitution.
And there's definitely a lot of power in like culture,
institutional inertia, just like the beliefs that people have
for like myths in the Harare sense to like steer things
and keep things on track.
But then, like, over a long enough time scale,
with, like, enough stuff happening in the world that, like, checks that,
like, is there something behind this?
Like, if someone tries to change that,
either, like, you know, in a bottom-up way,
because, you know, there's some, like, social media movement,
or in a top-down way,
if the leader of a country
decides to do something, like, do those reality checks hold?
Does the economic structure and the political structure, like, push back against that
successfully, or is it sort of like, oh, you can actually shift it?
Because if you can shift it, then probably over time it shifts in the
direction of the incentives.
How about this abundance idea though?
Going back to that, right?
So like, we have abundance.
AIs are creating all of these things.
Won't that relieve competitive pressure for us?
Like, can't we get a utopia out of that?
So I think at the core, you should
be really concerned about any arrangement where the long-run arrangement has you with very little
actual power.
And so I think it could be there's lots of abundance, but you aren't creating any of it.
You aren't involved in the creation of any of it.
And so your material power here is entirely political.
This is just way less stable.
Another thing to think about here is, I think we talked about this in the essay, that
it's not really clear that competitive pressures or human greed have this intrinsic stopping
point.
I think to paint an additional example, though, here, it could be the case that the worst outcome
is that we have abundance, but you don't have any say in what happens afterwards.
And so your needs are met, but your political reality is quite constrained.
I think about a state like China, which has been able to lift a whole lot of people out of poverty.
The Chinese miracle is a thing that happened:
hundreds of millions of people got lifted out of poverty under Deng Xiaoping.
But simultaneously, I wouldn't say that this has resulted in like crazy political freedoms for people in China.
It could be that your material conditions improve, and yet simultaneously your power is unaffected.
This has been quite a Herculean effort by the Chinese state to keep this equilibrium going,
and the Chinese state is in many ways responsive because it's afraid of losing legitimacy
or really afraid of, like, revolutions.
It has a zero-tolerance policy on protest.
But that is one outcome.
We just happen to think that you should be deeply concerned about scenarios in which you don't have the material power to guarantee abundance for yourself.
And if you're written out of economic social contracts, you are at this point at the mercy of the political one.
We think the political one is better than nothing.
We advocate really hard for strengthening that political contract
so we can get to that outcome.
But we don't think in the limit,
it's the only thing I'd want to be relying on.
I'd really want to make sure that I have some real stake
in the game here.
One last objection to all of this,
which is basically from Professor Arvind Narayanan,
he wrote the intelligence curse,
I don't know if you're familiar with him,
but he has kind of this riff on-
He wrote AI Snake Oil.
I was going to say, congratulations.
He plagiarized your work.
He plagiarized your work, so I just want to let you know right now.
Maybe I'd be frightened to find out he wrote-
He wrote, you know- oh, that's good to know.
I didn't realize that title was taken.
AI Snake Oil is, yeah, he has this riff where he talks about, basically he kind of downplays
AGI; he basically thinks that AI is kind of more akin to regular tech.
And it's like one of his riffs is there's a difference between AI capability and power.
So there's, there's capability, right?
All of this knowledge, intelligence inside of a data center.
But then that's different than power.
It's kind of constrained.
Like maybe that idea of you guys said earlier,
part of the solution is not giving AI's the ability
to accrue their own wealth, right?
It's like wealth would be a vector for power.
We don't necessarily have to give AI's wealth and power.
And so capability and power could be somewhat isolated.
Like maybe this whole thing is just a question of like,
who gets the power?
How does that idea, the difference between AI capability
and power kind of like,
factor into this whole analysis?
If I'm understanding you correctly, you're saying that it could be the case,
we don't delegate this power to AI systems, and then it stays in the hands of people.
Is that right?
Exactly.
There's always humans in the loop, you know, like they can't get their own bank accounts or something.
They can't accrue capital.
We always have kind of a check on them.
We don't have to give them the keys to the car.
Well, I think nothing that we've argued is contingent on AI having this power in a
self-directed way.
One of the biggest oppressors of people in human history is other people.
Totalitarian states require a whole lot of people doing that oppressing,
and it could be the case that what we've actually done
is we've just expanded the power differential.
We've made it so that some people are far more powerful than others.
This is already true today, but in the era of liberal capitalism and liberal democracies,
your power as an individual, as a unit of society, has just really never been greater.
And what we're saying here is it could be the case that for a couple of people,
because they have existing access to capital and convert this directly into results,
this could be a world where they have just such dramatic outlier ability to shape the world,
that their ability to materially impact your environment is really, really high,
and your ability to resist that is even lower than usual.
Okay.
I feel like we fleshed out the intelligence curse to a sufficient degree.
Let's talk about the solution.
Let's talk about how to break out of this intelligence curse.
You've got three words here.
You've got Avert, you've got Diffuse, and you've got Democratize.
Where do you want to take this?
You want to start with Avert?
How do we get out of this curse?
Let's work it back.
I think we'll see the initial answer.
Yeah.
Okay.
Start with democratize then.
So this is, like, what's the idea here that we're distributing the power to all of the people?
We're just like not concentrating this in the hands of AI labs and AI models themselves.
How do you think about the democratized word?
So I think the way we'll flow this, if this makes sense, is I want to walk through really quickly just the observations backward, because we started with democratize as the observation.
And then I think we can kick it off with avert after that.
And then what I mean by this is I just want to walk through the whole argument chain.
Rudolf, what's the kind of initial observation that we have on democracy, why we need each step here?
Yeah, so I guess the flow here is basically, as we mentioned, you know, like Norway, for example,
solved the resource curse.
They just had good institutions, and therefore they can just all go to the polls and vote for, you know,
everyone's welfare, and they distribute the oil wealth between the people and everything is great.
And so it's like great if we can get to the point where we have this very like a democratic thing.
A lot of people have power.
They can affect the decisions that are made.
We get like broad distribution of the benefits of AI.
stuff like this.
And there's various ways we list some ways
in which like technology for coordination
and various other things can help with this
in our last section here.
But yeah, this is basically great.
There's various ways you can build tech to make this easier.
And then kind of like the point we're making is that
to be in the state where you can democratize
and like have that be a stable equilibrium,
often what matters is that you've got political power
when you have the economic power.
So then this brings us to the idea that you need,
you need diffusion as well.
You want to, like, diffuse the benefits of AI to people such that then, sort of, everyone gains in power, gains in capabilities, continues to have some stake in the economy and some, like, ownership stake over it.
And therefore, like, this makes the step about democratization more stable, because then it is actually in the incentive of the powerful actors, of the people, of everyone, to keep the democracy in place.
So then we've gone from democratize to diffuse.
And then, so there's this worry that sometimes people have, which is if you diffuse AI too much, if you give
everyone the AI, you're just, like, giving out this powerful technology
that people use to do things like create bioweapons
or, like, all sorts of nasty cyber attacks or whatever.
Or maybe the AI, like, takes over because it's misaligned,
and it is, like, very bad for everyone.
And therefore, in order to make the diffusion step safe,
in order to, like, proof the world against that, you want to, like, avert the various
catastrophes that could happen from widespread AI.
And we're especially excited here about, like, hardening
the world against things like bio attacks,
against cyber attacks, and also just making sure
that we don't mess up on the alignment problem.
So from that, we worked
backwards here, right? So democratization is clearly a way out, because democratic incentives
can beat capital incentives. You can ensure all the things you want out of that. But we've noticed
this pattern where your economic power correlates with democracy. And it's oftentimes the engine
of it. So then we want to diffuse. But also, we want to make sure that diffusion happens in a way that
doesn't create the kind of catastrophes that either would just be bad in and of themselves or could
give, like, license for states or other actors to forcefully centralize power. So we have
this avert section. So we kicked things off with avert in this backwards chain. And we've realized
that in order to get to the democratization, there are some steps we're going to have to take first.
Okay. So democratize is all about power diffusion to the people so that the people can hold the
institutions in check. But it's a political type of thing, right?
Yes.
And we have had democratic protocols in the past, right? And we have them right now, one person, one vote.
We'll come back to that because I want to get into some tangible examples. But that is about
a distribution of power, I guess,
and the humans having this power and retaining this power.
And you're saying one way in order to do that
is that other D word, which is diffuse.
And I think diffuse means give everybody
access to AI tools.
It can't just be a small percentage, maybe.
Maybe you could sharpen the intuition there.
But diffusion is about the distribution of
the tools into the hands of everybody.
And then Avert is just like making sure
that we don't, you know,
completely go off the rails. We have a, you know, misaligned AI or some sort of bioweapon.
And also, I love that you say this because this is super important. A lot of people miss this.
Avert without requiring centralizing control. Because the attractor basin, when you start to clamp
down, and you, like, avert, and you sign letters like Pause AI, or, like, Nick Bostrom proposed
kind of a high-tech panopticon where, you know, the government has to surveil everybody to make
sure they're not doing a bioweapon with their LLM at home, right?
Then we get this attractor basin of, like, totalitarian,
like, authoritarian regimes that we then can't get out of.
So you're saying avert these bad outcomes
without requiring centralized control.
Exactly.
That's the logic chain we flow through.
And the reason why we work through avert, diffuse, democratize
in the piece, as opposed to the logic chain where you go backwards,
is because we think it's going to be really hard to diffuse unless you
avert, and really hard to democratize unless you diffuse.
So this is kind of like the logic chain works backwards,
and then we present it forwards, if that makes sense.
Okay, it does.
All right, can we get into some real-world examples?
So avert.
Let's, yeah, let's start with averts.
Let's kick it off.
So I think the core observation here is that actually AI can do bad stuff.
And this is, like, sometimes unpopular to say.
It's funny, I think we're in a position where we're saying
unpopular truths to lots of different people,
and certain truths are more popular in some communities than others.
Now, I think it is the case that AI can make it a whole lot easier
for a lot of people to do bad things.
It can also make it a whole lot easier for us
to lose control of the systems themselves, and for them to take actions on their own.
And so our observation here is pretty simple.
It'd be really bad if that's the end state.
It is something that is bad for us and not good for us.
And secondly, that historically,
these kind of potential bad outcomes
are the really powerful forces that justify centralization.
You can see this through a whole host of tragedies.
I think a lot about, like, the September 11th attacks
and how, as a result of 9/11,
the government made very broad
power grabs: the USA PATRIOT Act, which, fun fact, is actually an acronym.
It was a response a couple of months later that resulted in what I would argue is a pretty
significant restriction of civil liberties for Americans.
I would co-sign on that.
Yeah, it gave the government warrantless wiretapping capacities.
Section 702 in particular has been quite controversial for a host of reasons, and I won't take a
side on that argument.
But the point is that it rapidly expanded government power.
And government power, once unlocked, is very hard to get back.
The other observation that's important here, though, is that if AGI could in fact do a whole lot of economic tasks, you're not just centralizing a technology.
This isn't just like giving only nukes to the government, which is a pretty common-sense argument.
You are also centralizing into a couple of points of failures, the development of the technology that might run your entire economy.
In this case, it kind of looks like centralizing the means of production to the hands of a single or a couple of actors.
Was that not an essay somebody wrote a while ago?
Yeah, I think we cite-
I think we don't cite that one.
But we do cite The State and Revolution as an example of, like, you know, we don't think that
the idea of the transition state where a couple of people have all of the power and also
all of the economic power is a good one.
That's a state where you don't have very much power.
And historically, your Stalin risks are pretty high.
Your risks of, you know, drawing the wrong leader out and putting them in the apparatus
that you've built are pretty high.
Your P(Stalin), I guess, goes...
In another essay, we called it P(Stalin) specifically.
Yeah.
It's not for this one.
It's not...
We have a piece on tacit knowledge, and we did in fact call it P(Stalin).
Okay.
Okay, okay. Those are the goals. So how do we get there? I think one thing you cite, which is like near and dear to our hearts, is Vitalik Buterin's defensive accelerationism. Maybe you could flesh that out as like, you know, a part of the solution here.
Yeah, I guess the basic idea of, like, differential technology development, or, like, differential acceleration, whatever it's called this month, is that, like, you know, we can choose which order technologies arrive in, to some extent. We can push the technologies we like, that help us guard against risks and, like, help humans, and we can, like, you know, then hopefully get these technologies before we get to the bad, worrying technologies. And for instance, we should probably make sure that by the time, like, you know, ChatGPT can do a cyber attack for you, that we've gotten to the point where, like, our cyber defenses are good. And at the point where, like, the AI can design bioweapons, that we've actually, like, hardened
the world against bioweapons, and so on.
And, like, so this is true in the avert section,
and it's also true in, like, diffuse and democratize.
Like, I think the, like, core sort of spirit
of most of our proposals is this thing of, like,
let's please build the technologies that enable the good things
before we get to the threats.
And actually, like, by building these technologies
and by making them come faster,
we can avert a lot of these risks.
A lot of these things are defensive too, right?
When you talk about biosecurity, it's not,
it's more of a defensive,
low-key focus. Or cybersecurity is kind of like
defending from attackers, you know.
cryptography is sort of,
is sort of similar in that way, but we also need physical security.
AI alignment, of course, the industry is focused on that.
But that's another element of the averting catastrophe here without centralizing.
Let's get to diffuse.
Okay, so what does diffuse mean?
To me, that's just like making sure that everybody, every human has AI superpowers.
So, like, the example that you give in the text is like, even Tim Cook doesn't have a
better iPhone than you,
kind of thing, right? We all have equal access to iPhones and that's great. So does diffuse
mean we all have equal access to these models and other people can't kind of like take them away?
Is it like open source? What are the practical ways to diffuse this?
Yeah. So I think basically the thing you want to do is help as many people as possible
benefit economically from AI as quickly as possible, such that by the time the really radical
AI hits, people are like, first of all, there are more people who are, like, owners,
more people have, like, built companies, stuff like this.
And then also you've, like, distributed the technology's benefits more.
Like, everyone has gotten the AI power-up.
I like your phrase about, you know, everyone gets superpowers from AI.
And then so in terms of like grand strategy here,
we have this diagram at some point where we show that,
like, there are, like, two stages of diffusion.
Where, like, first, when AI is augmenting,
you sort of, like, you want to, like, diffuse AI,
which helps create decentralization.
Like, you diffuse AI so it's not, like, just the AGI labs that have the AI,
using it to benefit themselves;
everyone in society has access to the AI.
And what this means is that you get, like, decentralization,
because, like, the benefits of AI have been more widely spread.
And then, like, the fact that you have decentralized the AI
then helps you in the second stage, because then, like, once the humans are automated,
they're automated not by the big AI labs with the labs' own AIs,
and they still control the fruits of the labor of the AIs that they own.
How supportive are you guys of open-source models in all this, open weights and that whole movement? Is that key?
Yeah, broadly pretty supportive, especially in a world where we've done a lot of the hard work of, you know, proofing the world against the biggest disasters. Yes, exactly.
And I think to break this down concretely, it looks like two phases. There's this first phase where, right now, we're on this track where AI agents actually aren't that good, and yet everyone is investing more and more into making agents better. That opens up this interesting market opportunity where AI-augmenting tools are both under-invested in and probably way better. Think about Cursor for a second. And I know I keep coming back to Cursor, I love those guys. Cursor is not a tool that does all of the coding for you entirely. It is a tool where usually a software engineer who really understands what they're doing is in the driver's seat. And it's enabled vibe coding. It's enabled a lot of people who don't know exactly how to do it to still set the high-level direction. But ultimately, you are in charge of what's happening. You are steering the ship. There's a huge market opportunity to build more tools in that space right now and expand the window of AI-human augmentation. We don't think this is the long-term permanent solution, but going ahead and starting in that direction now lets you tap into that under-served market and really focus on what we can do today. What we're then excited about in the future is a whole bunch of concepts, but one of them is something like aligning models directly to the user. Most people have some sort of hidden knowledge that is very difficult to gather, and if you ultimately want the single superintelligent singleton, you're going to want access to all of that information. This gives you a wedge point. Maybe it is the case that you aren't the perfect data source, because you are slow relative to your AIs in 2050. But it could be the case that there are AIs trained off of your tacit knowledge, off of your data. They understand you, can behave like you, and can represent your taste and judgment faster. And these AIs are acting throughout the broader economy, interacting with other systems. Maybe those systems are smarter, but you have access to the information behind that AI. And so this is a world where, first, we've extended the augmentation window. And second, we've aligned systems directly to the users, such that even as the systems take off, they're still tied in a meaningful way to the user, and therefore the user gets compensated in some way, shape, or form for the AIs' economic activity.
Okay, that's cool.
All right, let's talk about this last point then in more concrete terms.
So democratize, right?
So how do we do that?
How do we, you know, let the humans still maintain some power, right?
We're very used to like one person, one vote.
I mean, are you talking about concepts like maybe you have an AI lawyer, like you have the right to some sort of AI lawyer or a model of your data to represent you? Like, how do we really, you know, ensure that democracy and human political agency don't decrease in this world?
Yeah.
So I think, so we take a very tech-centric perspective here.
This is not the essay in which we're going to go out and propose how we solve everything in politics.
But I think one thing that is underappreciated, again, is just that if you push forward technologies that make governance and verification and coordination and trust easier, then it becomes easier for society to decide to do the good things and decide to avoid the bad things.
So there are some ways, and AI might help with this in particular. Like, AI might help policymakers understand what voters think. You can imagine that, in addition to understanding what policymakers think, the AI advocates on your behalf, especially if you have a model that is aligned to you in particular. You can imagine having provable guarantees that some particular AI system is making a judgment in a way that is more incorruptible than a human. You can imagine the AI handling information. There's this fundamental difficulty with using humans to audit things, which is that humans have long-term memory, whereas with an AI you just have it process the context and then the AI is deleted, but it returns a yes or no on whether you're abiding by some protocol or, like, building a bioweapon. So you can audit things without humans having knowledge about it afterwards. And there are a bunch of ways like this where technology gives you building blocks for governance that might be more effective and more representative of the desires of the people than what we can do right now, which is just stacking humans into bureaucracies and having laws about that.
I think these three words give us a good framework for directionality, even though you can't
solve everything in one essay, of course.
It's like one last lens and filter for avert, diffuse, and democratize.
Let's say one society chooses to do this and puts these things in place in a more intentional way, but another society chooses not to. And there's this kind of geopolitical race condition here, where we're in, you know, some sort of arms race for AI. Does your essay on the intelligence curse have anything to say about that? Like, one society chooses to go in the direction of trying to solve the intelligence curse, but another society races faster, fully embracing the curse, because they don't care. Do the authoritarian, totalitarian societies basically win, you know, no matter what? And so are we kind of screwed even if we in the U.S., or we in the West, choose these ways out?
So I think here is one of the places where the differential tech development approach is really powerful. It's fundamentally not about taking costly cuts to yourself, becoming less competitive, and potentially being overrun by less safety-oriented actors. It's about developing the technologies such that, if they exist, doing the safe, good, pro-human thing is the winning strategy. And therefore, it shifts the equilibrium. It's not reliant on coordination with other actors.
I like that.
So we are doing the Munger thing of trying to get the incentives correct?
Yeah, exactly.
Now, I will say, incentives aren't everything. And I think we talk a little bit about some policies, especially in the democratize section. We talk about some of the more boring ones you hear all the time, things like campaign finance reform, anti-corruption laws, and strengthening bureaucratic competence.
And these all sound kind of boring today.
But a really key thing is, if you think incentives are about to get radically different, and the self-interest of politicians might be much more powerful than it was in years previous, it is really important that the leaders you're electing in the next couple of years are leaders you would trust to make good decisions on your behalf in stressful situations.
That integrity element that we've kind of lost in modern politics is much more important than ever.
Because one of the ways you can square the Great Man Theory of History with a more incentives-dominant view is that oftentimes the great men in history are those who take a decision that looks like it goes against the incentives and is ultimately the correct one. You want to maximize your chance of having one of those leaders when critical decisions come down, because you can and should spend as much effort as possible getting the incentives right. But you also really want to make sure the person you have there is someone who, at that critical moment, might make a decision that goes against those incentives if it matters, if it's important for your well-being. So there's the boring answer that you should vote for people who you actually trust. But really, you should vote for people who you actually trust.
Interesting. Unfortunately, it also feels like we're
in a shortage of great men these days, at least in our politics.
This has been very fascinating.
I guess my question is: what should listeners do with this information? Is there anything kind of actionable? I think it's a super valuable mental model, and kind of like, hey, you might be out of a job. But what should people do, personally, with this information? What do you recommend listeners, you know, take action on?
Yeah, I'd just be a big fan of, you know, going to the solutions section of our essay series. We have a lot of specific tech ideas, you know, so read through those. If you're someone who wants to build something, go and build something off that list, or read the list to spark your own idea for something that helps these same goals, and go build that out. Because if we build the right technology, that makes the equilibrium the good one.
Yeah, I hear this meme a lot in some of the more, like, AI safety communities: oh, if it's something that has market value, the market will solve it.
The market is made up of people.
People are in the market and they do things.
And so you actually have, if you're going to do differential tech development, then some
startup founders got to wake up and decide, okay, I'm going to go build this thing.
And a VC's got to decide to back them.
And we're not being, you know, we're not pontificating here.
We can talk a bit more about it in the future, but the two of us are currently actively
involved in taking a slice of this agenda and building this out ourselves.
So we are going to go down this rabbit hole ourselves because, uh,
I don't know. It's really easy to point at a problem.
It's pretty hard to build a solution.
What we're much more excited about is building out the solution space.
But I think there's stuff for people who aren't just in the tech community.
There are policies that government should be thinking about enacting today.
We call, for example, for an Operation Warp Speed for d/acc-style technologies.
The kind of things that could actually prevent major catastrophes and enable this culture of innovation and democratization.
Governments could be incentivizing that right now.
I mean, Trump's first term did Operation Warp Speed.
The right people are in place to do something like that again, these massive moonshots. I think voters can start really thinking critically about what's going to happen in the next couple of years and electing politicians on those grounds. And if you're a student, I think I have a blog post somewhere talking about, depending on who you are, what career decisions you might want to be making. Because again, even if you think diffusion pressures are real and it might take a while for this to accelerate, there are still some roles where it's pretty obvious they're going to go first. Doing bigger, bolder things right now that actually require you to learn how to fail and be your own actor is really great preparation for a world where you might be able to command an army of AIs, or a lot of augmentative tools, even if the big companies aren't hiring junior analysts anymore. So there's a lot you can do right now, today, to both, at a macro level, get us on the right path, and at a micro level, orient yourself for the coming wave.
That's great.
So Luke and Rudolph, we spent the last 90 minutes
kind of defining the intelligence curse,
walking through it, providing solutions.
I would love to end this on a positive note,
on something a little optimistic.
What happens if we solve the intelligence curse?
I mean, what is the payoff?
What do we get for solving this problem?
We are talking about what could be the greatest technological revolution in human history, at least compared to anything that's come before, and maybe, like, the final huge thing. The promise of that is honestly hard to fathom. It's things like curing diseases we couldn't imagine curing, actual total abundance, unlocking crazy amounts of resources, really being able to provide to everyone what would, this year, have been an experience only for the ultra-elites.
That is a world that I want to be able to live in,
where we can do things like, you know,
abolish poverty and abolish disease.
And if we can get this right,
the promise of artificial intelligence is that,
instead of having less agency and less control over your world,
you get more with a whole lot less of the drawbacks.
And I don't know.
That's a vision I'm pretty excited about.
That's a vision I can be very excited about too.
I want to thank both of you, Luke and Rudolf,
for joining us today and walking us through everything.
I know you guys mentioned you were working on some stuff.
Where can people find you?
So we've got, I think, a contact form or an email
on intelligence-curse.ai.
We're both also on Twitter.
I think my handle is Luke underscore Drago underscore.
Rudolf, yours is?
Mine is currently at L-R-U-D-L underscore.
Yeah, so if you want to reach out to us to do the contact form there,
or if you want to reach out to us on Twitter,
we're both pretty active, unfortunately.
We definitely tweet a little too much, and that's all right.
Hey, for better or worse. But we very much appreciate you joining us today,
walking us through this entire intelligence curse.
I'm sure everyone listening now has a lot to chew on, a lot of interesting new questions to ask.
A lot of new things to consider, whether it be the exciting case, the optimistic,
ending that we landed on or any of those varied outcomes that we also discussed on the show.
So Luke and Rudolf, thanks again.
Thank you so much for joining us on the episode today.
Thank you so much.
And I guess the last thing I'd say is I'm super optimistic and super excited about where this could go. So I think even if we know about the problem, we also know how to solve it, or at least we have a guess at how to solve it. I think people should get more excited about jumping at those solutions.
Awesome.
Yeah, please build a tech that will save the world.
Absolutely.
I love that.
That's a really optimistic note to end it on.
So thank you again and I appreciate you guys taking the time.
Take care.
Thank you.
Thank you.
