Bankless - LIMITLESS - The Intelligence Curse: AI Makes Us All Obsolete | Luke Drago & Rudolf
Episode Date: May 28, 2025. Welcome to Limitless. Today we’re joined by Luke Drago and Rudolf, authors of the powerful essay series "The Intelligence Curse." Together, we explore a future where artificial general intelligence (AGI) threatens to upend the economic and social contracts that underpin modern civilization. Will AI empower us or make us obsolete? We unpack how labor-replacing AI could dismantle the very incentives that once gave rise to liberal democracies, social mobility, and human-centered innovation, and what it might take to build a future worth living in. ------ 💫 LIMITLESS | SUBSCRIBE & FOLLOW https://pod.link/1813210890 https://www.youtube.com/@Limitless-FT https://x.com/LimitlessFT ------ BANKLESS SPONSOR TOOLS: 🪙FRAX | SELF SUFFICIENT DeFi https://bankless.cc/Frax 🦄UNISWAP | SWAP ON UNICHAIN https://bankless.cc/unichain 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle 🌐SELF | PROVE YOUR SELF https://bankless.cc/Self 🟠HEMI | BTC & ETH, ONE NETWORK https://bankless.cc/hemi ------ TIMESTAMPS 0:00 What is the Intelligence Curse 4:29 Resource Curse 8:20 Pyramid Replacement 18:19 Institutional Pushback 21:25 Capital, AGI & Human Ambition 32:00 Liberalism Falls Apart? 36:30 Powerful Actors 41:15 Rentier States 46:19 Human Labor in an AGI World 52:46 Nation States 57:37 Shaping the Social Contract 1:06:23 AI Snake Oil? 1:07:41 Balance of Power 1:08:51 Breaking the Intelligence Curse 1:16:45 Vitalik’s Defensive Accelerationism 1:18:16 Diffusion 1:19:58 Open-Source AI 1:22:06 Democratization 1:24:06 Who Wins? 
1:26:43 Action Items 1:29:17 The Positive Scenario 1:30:21 Closing ------ RESOURCES Luke Drago https://x.com/luke_drago_ Rudolf Laine https://x.com/LRudL_ Time Op-ed https://time.com/7289692/when-ai-replaces-workers/ Intelligence Curse https://intelligence-curse.ai/ Contact Form https://docs.google.com/forms/d/e/1FAIpQLSft2iBV9z1AYsM3TcDnh8z3juc2k4yD0TQTZ91oy37S-KlSSQ/viewform AI Snake Oil - Arvind Narayanan https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil Vitalik’s Defensive Accelerationism https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Hey, bankless listeners. This is Ryan here. You guys are in for a treat today. So the episode you're about to hear was pulled from our new Limitless podcast feed. That's a podcast feed that I occasionally host. I'm a guest host on that, along with David and also our new Limitless co-host, Josh. So as you're about to hear in this episode, the intelligence curse is not just an AI topic, it's a crypto topic. This is about AI people discovering that without decentralization, the future of AI is
kind of bleak. These aren't just technologies. These are political systems. Now moving forward,
episodes like this will be on the Limitless feed. So you won't see them here anymore.
So what you need to do is stop what you're doing and go subscribe to Limitless. Apple, Spotify,
YouTube, wherever you get podcasts, there's a link in the show notes for all of these.
Limitless explores the frontier of AI, the same way Bankless explores the frontier of crypto.
We believe these are the two frontiers for wealth generation and freedom this decade.
Bankless and Limitless, AI and crypto, this is how you level up on both.
Enjoy the episode.
In the wild west of DeFi, stability and innovation are everything, which is why you should check out Frax Finance, the protocol revolutionizing stablecoins, DeFi, and rollups.
The core of Frax Finance is FraxUSD, which is backed by BlackRock's institutional BUIDL fund.
Frax designed FraxUSD for best-in-class yields across DeFi, T-bills, and carry trade returns, all in one.
Just head to Frax.com, then stake it to earn some of the best yields in DeFi.
Want even more?
Bridge your FraxUSD over to the Fraxtal Layer 2 for the same yield plus Fraxtal points, and explore Fraxtal's diverse Layer 2 ecosystem with protocols like Curve, Convex, and more, all rewarding early adopters.
Frax isn't just a protocol.
It's a digital nation, powered by the FXS token and governed by its global community.
Acquire FXS through Frax.com or your go-to DEX, stake it, and help shape Frax Nation's future.
Ready to join the forefront of DeFi? Visit Frax.com now to start earning with FraxUSD and staked FraxUSD.
And for Bankless listeners, you can use frax.com/r/bankless when bridging to Fraxtal for exclusive Fraxtal perks and boosted rewards.
Uniswap is your gateway to a more efficient DeFi experience.
With Uniswap, swapping and bridging across 13 chains is simple, fast, and cost effective, helping you move value wherever, whenever.
Thanks to deep liquidity on the Uniswap protocol, you'll enjoy minimal price impact on every trade.
And now Uniswap v4 takes it even further.
Swappers benefit from gas savings on multi-hop swaps and ETH trading pairs, while liquidity providers can create new pools at 99% lower costs.
The best part, you don't have to do anything extra.
Each trade is automatically routed through Uniswap X, V2, V3, and V4, so you get the most
efficient swap without even thinking about it.
Whether you're swapping, on-ramping, off-ramping, or bridging, Uniswap's web app and wallet give you the tools to unlock DeFi's full potential on Ethereum, Base, Arbitrum, Unichain, and more.
Use Uniswap's web app and wallet for a more efficient way to use DeFi.
Imagine verifying yourself without handing over personal data. No hacked databases, no unnecessary personal exposure for airdrops, and no AI bots ruining community governance. Meet Self, the on-chain identity verification protocol built for privacy and control. Self Protocol uses zero-knowledge proofs to confirm your identity safely. Users prove key details like age or citizenship without revealing sensitive personal information. Self never stores your data. It only generates cryptographic proofs.
Here's how it works in three steps. First, register and verify. Use the Self app to scan your biometric passport's RFID chip.
Self verifies authenticity with zero-knowledge proofs.
Each passport creates one unique identity.
Second, you can share proofs privately.
Third-party apps request identity proofs, like confirming you're over 18.
You can also link proofs securely to public wallets for airdrops or governance participation.
And then last, secure verification.
Apps validate your proofs instantly on-chain, like on Celo, or off-chain.
Audited by zkSecurity, the Self app is live on iOS and the Play Store.
Visit self.xyz and follow Self Protocol on X.
So the intelligence curse pretty briefly defined is the set of incentives that we might get when we unlock artificial general intelligence.
And we're reusing the OpenAI definition here, which is the ability to automate most or all human labor.
And in that world, we're really concerned that governments, powerful actors, corporations, won't have that incentive to care about regular people.
That doesn't mean they're guaranteed not to, but it means that strong economic incentive that we've had for basically
all of human history, where powerful actors have needed regular people, and so there's an
exchange of goods and benefits. If you sever that, you're relying a whole lot on goodwill.
And we think that's not nearly as stable or as strong of an arrangement, and it mirrors the kind
of patterns you see in economies like those that are afflicted with the resource curse.
Are regular people just everybody? Just like, you know, you, me, white-collar workers,
people in developing countries, people in very developed countries, like, who are the regular
people to which you speak of?
Yeah, so I think we really do mean everyone here, in particular maybe everyone who does not have capital, be that physical capital or financial capital, in a world that is then very dependent on AI. And people often talk about white-collar workers, but there are also people in developing countries, of course, whose existence we shouldn't ignore. So it really does mean everyone. Right now the state of things is that everyone can contribute economically, so states and companies have an incentive to care about everyone. But then if all of everyone's labor is replaced, this is less true.
So I'm really curious to ask you guys, why now?
What did you see that sparked the interest to actually make this post?
Because we read through it and it was very thoughtful, very pragmatic.
But what was it that sparked this now versus a year ago or a year in the future?
Was this the exact right time?
Or do you think this thesis kind of changes as we progress over the next few years?
So I think for us, just for some history of how we got here: shortly after o3 was announced, it looked like, oh, timelines are getting pretty short.
AGI doesn't look 20 years away or 50 years away.
It might look five years away or even sooner than that.
I think we were having a series of conversations.
We worked at the same building at the time or the same office building.
And I'd had this observation about the resource curse and oh, this looks somewhat similar
to those patterns.
And Rudolf had separately been working on this draft of an essay, which is now in the full series, called Capital, AGI, and Human Ambition.
We published earlier versions of those essays back in January that didn't propose any sort of
like solution or way to try to stop this process.
And we spent the next three or four months banging our heads against the wall, trying to figure out, well, what can we do about this?
What does it look like to actually solve this problem?
A whole lot of the related essays in the piece that aren't squarely focused on the solution were essays that we were kind of writing and taking notes on as we got what we thought was closer and closer.
And in the last couple of weeks, I think we just thought we had enough.
And so we went ahead and hit the publish button.
We are big fans of shipping in public.
Yeah, I love that.
Yeah.
I think there's also something about, like, when AGI is far away, it feels very much like a technical problem. And I think for a long time, a lot of the people who take AGI seriously have been thinking about it purely through the technical lens, thinking about the systems. And I think just as it draws nearer, you start realizing that, okay, it's not just going to be a technical thing; it's actually going to interact with the rest of society, with the real world. It's going to have very concrete effects. And that says, maybe we should actually think about those. It seems important.
And it's funny because at Limitless,
we are very optimistic about the future.
We are very excited about the pro-tech,
pro-AI future. And then Ryan surfaced this document with me, your post. And I was like, I was reading
through it and I was, it kind of hurt a little bit. I was like, this isn't the future that I'm
super excited about. But it was very thoughtful and very pragmatic. And I wanted to ask you guys,
why did you define this as the intelligence curse? Why is intelligence not a blessing? So I think the
name just strictly comes from the comparison to the resource curse. And while we don't root the entire analysis in the resource curse, it's a very helpful example to really conceptualize: what is a similar environment that has similar incentives, and what are the outcomes there?
But the initial observation was this looks a whole lot like the resource curse in development
economics.
So that's kind of where the name came from.
There's this huge wealth of existing literature and debates.
We didn't want to center the entire argument on that, but we did think that name seemed
quite relevant here.
And I think in particular why it might be a curse as opposed to a blessing.
It really depends on how it gets deployed, if it's a centralizing or decentralizing
technology, if it accumulates power in the hands of the few or distributes power out to many,
many people.
And so I think it can be a curse.
And we set out the scenario for which it could be a curse, but we also offer the world in which it could be a blessing.
There's so much really to unpack here.
And some of our history at Limitless comes also from crypto, right?
And I know in this series of essays, you referenced Vitalik Buterin's work on decentralized or defensive accelerationism.
And we've had him on the podcast actually talk about that.
And that might be indeed one of the ways out.
But when you start to talk about decentralization, that very much does seem like maybe our main defense
against the centralizing effect of this.
But I don't want to project us too far forward in the solution.
And there's so much to unpack here, so much to go through.
We'll do it kind of sequentially by route of essay.
But one thing I do want to get to: we haven't defined it yet, but we've dropped this phrase, the resource curse, several times so far.
Luke, could you just define what the resource curse actually is?
I believe this pertains to countries and how endowed they are with maybe natural resources.
Tell us about the resource curse and why it's basically a meme for the name of this essay.
Yeah, so I'll stress first that it's not the sole piece of evidence we rest on.
It's very much so an example or an analogy that we want to build around.
But the resource curse, succinctly put, is the tendency for countries that have lots of natural resources
to oftentimes, instead of having very rich or wealthy citizens, to have actually worse conditions.
And there are a lot of different explanatory mechanisms for why that can happen.
But one of them that I think is pretty prominent in the literature is that if you have oil in the ground, all it takes for your state to get really wealthy is to get oil out of the ground, onto the roads, and onto the ports. Your incentives are not to build this really complex economy; your incentives are to
make as much money out of oil as you can.
It doesn't require a whole lot of people to make money off of oil.
It might require workers to actually extract the resource, to get it out to the ports
and to sell it, but that's a whole lot fewer people involved in an economy than, let's say, a more developed, advanced economy like the United States, where there are lots of moving parts here.
Now, there are a lot of different ways the resource curse ends up. But for a whole lot of countries, particularly those that don't have really strong institutions, the resource curse ends up in pretty terrible poverty. There are ways out, and we talk later in the piece about, you know, what the ways out are, what potential analogies there are that we could be looking to for solutions. But the core thing here is that you either want a diversified economy or you want institutions that can withstand the curse.
Okay. And the examples of that are just, you know, countries in the Middle East, maybe, that are oil rich and really haven't developed the civil liberties or kind of the labor economies of their citizens, right? Or maybe a country like Russia, which is kind of in the grip of authoritarian, totalitarian powers, and, you know, kind of devolved into plutocracy.
I suppose that's what you mean by the resource curse.
Now, we also have counterexamples, maybe like Norway, which is very well endowed.
There's a lot of energy there.
Canada might be another example.
I mean, they seem to be doing fairly well with liberal democracy.
So the counterintuitive thing here, and why you're labeling this the intelligence curse, is you would think that more resources equals better.
More resources equals better for everybody.
And it turns out that's actually not the case for nation states when it comes to natural resources.
Sometimes more resources actually lead to an incentive structure that makes things worse for the population.
And that could be the same with the intelligence curse.
Yeah, that's what we're saying.
And we also think that there's a lot of signs of hope there.
We talk a lot about Norway, and we talk a bit about Oman as well, as two examples of states that broke the curse, and what we can learn from those.
But, yeah, I mean, states like the Democratic Republic of Congo, for example, or Nigeria, they have tons of resources and yet their people are very poor.
And the question is, well, what are the incentives that are creating this outcome?
Okay. So let's, now that we've got kind of the gist of it, let's flesh out this argument in a lot more detail.
And you have basically a series of essays with different sections on this.
But when I was kind of looking at the high-level thesis, it feels like you're playing with a few, you know, premises.
Like, so maybe three in my mind.
Like, one is that AGI is the only game worth playing.
There's a famous essay titled this as well.
But basically, AGI accrues incredible capability and power.
And as you said, this could be like on the near-term horizon.
We're talking about years, maybe five years, for instance.
So that's like the first premise you sort of have to believe.
The second is that AI will replace humans for valuable economic labor, and we're going to flesh that out in a second.
And as a result of that second premise, the third kind of, I guess, idea here is that powerful actors,
these would be like nation states and companies, they no longer have an incentive to care about the regular people, as you said.
Why?
Because the regular people used to be their economic engine and their labor.
But now with AGI, the regular people aren't providing utility.
So do we need these welfare states?
Do we need these social structures?
Do we need civil liberties?
Okay.
So that's the base idea we're going to flesh out. And it begins here, which is this concept of pyramid replacement. I want you to
sharpen this mental model. So AIs, this idea that AIs will replace humans for all valuable labor.
And I'm showing on the screen a picture of a corporation, I think. This is a typical company.
You know, companies are arranged in hierarchies. At the base of the pyramid, you have your entry-level
employees in the very top. You have the executives. You have the C-suite. So can you describe
what this pyramid actually is in the typical corporation
and what you see AI's doing to this pyramid?
Yeah, so basically, so there is currently this hierarchical structure in companies.
And it's actually not from first principles obvious,
which end is the pyramid AI will start automating first.
But like empirically, it seems like AIs are getting good at tasks that have short time horizons, where the task is completed quickly and then you move on to the next thing, and getting better at longer-time-horizon tasks more slowly.
And there's also the social fact that the C-suite is less likely to unemploy themselves and more likely to unemploy other people. And it's easiest with the entry-level employees, because you don't even fire anyone; you just stop hiring.
And that's why we think the first step in automation, something that might already be happening in software companies, is with the entry-level employees: instead of hiring more and more into the company, instead of giving the senior developers at a software company an entry-level intern or something, you just give that senior developer Cursor. And they code with Cursor or some other AI coding agent tool, and they don't need the entry-level employees anymore.
So, Ryan, if you don't mind scrolling down just a little bit, what I loved about this
section was kind of the visual that you guys created, which we showed this pyramid and the pyramid
is blue and that means it's all humans. And then as AI starts to roll out, it starts to absorb
the entry level employees. And then as it goes to junior and as it goes to middle management,
it slowly absorbs the bottom layers until eventually we're just left with the C-suite on top
and then nothing. And then everything gets absorbed to AI. So I guess just literally like one big
AI like red block, right?
The pyramid becomes just like this AI
Borg machine something.
It's no longer a pyramid, it's a square.
Yeah, it's a square.
This is one of the...
I like that. I might steal that, but this is
one of the things we changed from the original essay.
I think I still have like the rough draft published on my
blog, but originally it was just the pyramid
kept getting smaller and nothing was replacing it.
And you get to this last slide and it's just blank.
And we had like an outline of a square that looked like it was just part of
the picture the whole time. And then at this point, it's just like
there's nothing. I think I wrote, the org chart goes blank. And Rudolf had the idea of maybe we should just, like, show people an entirely automated company, to actually show it's not just that the people are going away, but that AI is rapidly filling those functions. So now you've got the visual in front of you. And there's also something
here where like we don't want to imply that when the AI takes over, you know, in the future
company for every human employee, there will now be one agent, one AI agent that does, you know,
like matches one-to-one with each original human employee. I think the optimal way to structure AIs in companies will really look a bit different from the current thing where you stack humans into a pyramid. And therefore the AI, you know, we represent it as this square thing, sort of like a blob of AI compute around your shrinking number of humans that are providing direction.
And this is what I was curious to ask now, because currently in the world of AI,
I feel like I am a leveraged human when I use it, where I am capable of X and then because of
AI I am capable of Y. And I guess the question to you is, will humans not just get better jobs?
I guess if you could imagine stacking the pyramid on top of the AI, where now we have this
foundation that provides a lot of leverage for entry-level employees all the way up to CEOs,
but the productive output that's unlocked as a result of that leverage creates new and interesting
problems for them to tackle. So would that not be the case where we become hyper-leveraged humans
while removing some of the workforce, but not all of it? Yeah, why can't this be a box with, like, a pointy hat on top?
Well, for what it's worth, if you go one up, you'll actually find that box with a nice pointy hat on top. I guess the real question is the way that we currently structure our major white-collar companies, these big mega-conglomerates, a company of, like, a couple hundred thousand people, or like 10,000 people. And if you look at, one, the success stories, and two, the existing statements CEOs are making: on the success story side, Cursor has demonstrated that you can be a multi-billion dollar company, getting tons of money, with only a couple of people there. And if the general advice is you should only hire as many people as you need to run the organization, I'm not sure why Cursor would then hire 50,000 additional people. I'm not sure that would actually buy them additional runway right now. But I think maybe what's more important is anecdotal, and then, Rudolf, I'll hand off to you for the more systematic argument.
But I think the Duolingo CEO has now said they're an AI-first company, and this means they're going to ask, for every role they hire, every contract position that they have, whether or not they can automate it first.
I think we have at the bottom some links to other companies who've also made similar statements here.
I can't recall off the top of my head all of them.
But, I mean, the general ethos that we're hearing right now, as this is kicking off, is: what we really want to be doing here is being more efficient, being leaner.
And, Rudolf, I hand it off to you for the more systematic argument.
Yeah, and I guess I think there is some hope in that humans currently have this advantage in long-horizon tasks.
I think basically we know how to train AI to do tasks where there's a large data set, or where we can build a digital environment, a reinforcement learning environment, where the correct behavior is rewarded. And this works for things like writing, where there's a lot of data on it, and you can just train it to write like the average internet person pretty well. And it works for things like math or code, where it's easy to verify whether something is correct. But then it's harder to train AIs to, like, be the CEO, because the CEO interacts with the real world and takes a lot of actions. We're currently just less good at getting AIs to be good at this stuff.
So I think this state where the AI uplifts the humans will continue for a while, and probably longer than some of the most aggressive AI projections estimate. And I think there is hope that we can extend this period during which humans are mostly just uplifted by the AI. And this will be very good for human agency and the ability of humans to effect change in the world. I think right now we are definitely in this regime. But then, in the limit, there's no theoretical reason why the AI can't just also get good at the long-term planning. That's what the AGI labs are trying to crack right now.
And at some point, the board will come in,
and the board will be like, look, you're the CEO,
you have a nice job.
But, like, I'm sorry, but it looks like GPT-9 is, you know, starting to get better at making decisions than you are.
And I'm responsible to shareholders,
and I'm very sorry, you've done a good job.
Can we start there?
Can we start at the top?
That would be kind of nice for a change.
You know, I've gotten a couple of those reactions.
And I think the way that we're most likely to be wrong on this model,
and I want to be, you know, as epistemically rigorous as I can be,
is that the middle gets cut first.
It could be the case that there are entry-level roles where we just really need a whole lot of people to be doing the basic work, and management becomes dramatically easier.
I think the evidence really points towards the former: that we're getting this bottom-up pattern of automation, as opposed to this middle-out, right now.
And I think the reason for that is just quite simple.
If it costs you $50,000 to hire a person to do something every year, but it would cost you $10,000 in compute to do the exact same task, it's really hard to justify the additional $40,000.
And sure, like Mike's a great analyst and you go golfing with them on the weekend.
But that's $40,000.
And you can go golfing with Mike
whether or not you work with him.
And so I think a lot of companies,
whether it's a downturn
or whether they just want to save some money,
they're faced with that question.
Luke, that was spoken like a member of the C-suite.
Let me tell you...
For what it's worth, I actually can't play golf.
I really can't play golf.
We can golf, but you can no longer work here.
Yeah.
So you guys are kind of, I guess,
you see it maybe emerging right now
as sort of bottom up where, you know,
entry-level programmers kind of are the first to go
or like support teams, customer support, something like that, kind of the first to go,
and it works its way upward.
But you're also agnostic in this model as to whether it's kind of middle out or even top down
or whether it's bottom up.
Correct me if I'm wrong, but I believe this is particular to white collar jobs, yes?
So is that part of the thesis that kind of the information like knowledge worker class
is kind of going to be the first to go because our robotics technology hasn't quite caught up to
our software and LLMs?
Yeah, I think that that's the default.
I think right now at least,
LLMs are advancing faster than robotics.
And this creates the interesting possibility.
I think Carl Shulman talks about this idea
that we might have a period
where humans are valuable, not for their brains,
but for their hands.
And maybe we get this.
That sounds worse.
So to make this concrete, imagine your job is just, you're, like, assembling widgets in a factory, but you have an earpiece where the AI is giving you instructions, and it gives you, like, a motivational speech from time to time to keep you on task or something. But you don't actually have to do any thinking; the AIs are better at all the thinking.
Like, maybe this is the future.
However, don't worry, maybe we also fix the robotics and we get robotics quickly,
and then you can't do the widgets either.
You're just, like, fully unemployed.
So there are many possibilities here.
Okay.
So that's the concept of pyramid replacement.
Let's do some pushback, though.
Some objections to this.
So one is kind of, I guess, maybe the Tyler Cowen pushback argument, at least the one I'm familiar with, where he's basically like, you know,
there's diffusion barriers.
And we've certainly seen this, like, you know, kind of coming from crypto.
So, like, you know, crypto could replace the entire world's money system.
But guess what?
There's actually regulators who kind of don't want that to happen, right?
There's institutions, there's structures.
There's all sorts of brakes in society, meatspace, government, that just slow things down.
It's kind of the human piece of it.
And so you might have this technology in a box, geniuses in a data center, whatever.
But they might not diffuse through society, because society has all of these brakes, and, you know, like big brakes in meatspace.
And so will that kind of slow this down?
I mean, it feels like there's, we can adapt better.
I mean, just the general ideas, we can adapt better
if this happens very slowly versus if this happens,
like, in a period of like months to years.
And what do you think about that diffusion argument?
So one, I think a lot of our, like, solutions focus on breaking the intelligence curse, which is really at its core an argument
to try to extend the augmentation window
so that we get more time to adapt.
And I think if you look at the way the pyramid replacement flows right now, we argue it happens pretty slowly. It's a bottom-up approach.
I don't think we give an exact time horizon because it's really hard to predict. But I mean,
I think if AGI hits in 2027, I think most people are still employed in 2028. The question really
for you is how fast after that moment. And there are a couple reasons why you should expect companies
would want to speed up pretty quickly. Maybe, for example, they don't do the automation, but a
competitor does and they start moving faster. So now there's a competitive pressure to automate.
And in the same way that maybe a state doesn't want to acquire certain weapons capability, but of course,
the other state has also acquired that capability,
and now you're in a race to kind of get to the top here.
Maybe it's the case that there's an economic downturn, and this forces, like, cost-cutting everywhere, and you do the layoffs and discover that you actually are at equal productivity, or maybe even faster, when you try to automate that away.
So there are a lot of diffusion barriers.
We do not think that six months after AGI, everyone's unemployed.
But it's also important to note that diffusion barriers also have
acceleratory pressures that are pushing against them.
If you have this kind of technology, and there are strong reasons to adopt it, if investors are hyping it up, if people are seeing it work in the real world, it really is only a matter of time before critical mass starts to emerge, and the way that we work is fundamentally changed.
Okay.
And there have been other points made about, you know, sort of AI being very jagged now, right?
Where, you know, like some things, you look at its output and you're like, oh, my God, you are so dumb.
Like, I could do this.
And other things, you're like, wow, this is incredible.
And so this could happen in a jagged way, I guess.
I feel like we've established, then, that if we get this kind of acceleration towards some sort of AGI, there is the possibility that AIs have the capability to replace the corporate human pyramid. And corporations, companies, are the economic engine of basically all societies, right? So effectively what we're doing is replacing the economic engine of these societies.
Let's move to kind of the second essay and the second piece of this intelligence curse where we start to talk about capital.
All right.
So now we've got a world where AIs have maybe started to erode and replace the human labor pyramids, our corporations; they're doing the work. And I think you're making an argument here that the power which was in the hands of labor, of course capital always has power, but a large portion of power in society sits with labor because humans are valuable, that would begin to shift. And this is almost like a startling revelation: that the AIs might make non-human factors of production more important than the human ones, particularly capital. Can you develop some intuition for that for us?
Yeah. So first, I think it's worth clarifying that capital, when economists talk about it, often means money, but it also means stuff like physical factories. You can talk about factors of production: land, labor, capital, management. And capital here is a bucket that includes factories, GPUs, and also just cash on hand.
And I think...
Does it include, like, energy too, Rudolf?
Yeah, I think economists would call energy a type of capital, because it's a non-human factor of production; it's not land, which works a bit differently, and it's not management, which for our purposes is a bit like labor, because both involve humans. So the point here is just that right now the economy needs a huge amount of human input; you add more human input on the margin and the economy goes up, and therefore the marginal unit of human labor is compensated pretty highly, at least compared to the historical precedent. And you can see this historically: before the Industrial Revolution, when the human factor, human capital, education, skills, stuff like this, was less important because there was less technology and fewer complicated processes, the amount of power that human labor had was also lower. So that's the general argument. And in this essay in particular, we talk a lot about the point that you can start substituting capital for labor more effectively than you can right now. Right now, for instance, if you're trying to hire talented people, that's actually a big bottleneck on your ability to convert money into results in the real world. And that will go away if you can just use money to buy credits from OpenAI to spend on tokens that replace the talent. Right now there's a lot of complexity and friction in converting money into real-world results, but that will go down a lot once you can acquire real-world results just by spending money on AIs. Tokens become your workforce, essentially.
Yeah.
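That substitution of tokens for talent can be put in back-of-the-envelope numbers. This is a purely illustrative sketch; every figure (salary, tasks per year, tokens per task, token price) is a made-up assumption, not a number from the episode:

```python
# Back-of-the-envelope sketch of "capital substituting for labor":
# compare the cost of one unit of knowledge work bought as a salary
# versus bought as model-API tokens. All numbers are hypothetical.

def cost_per_task_human(annual_salary: float, tasks_per_year: int) -> float:
    """Cost of one unit of work done by a hired human."""
    return annual_salary / tasks_per_year

def cost_per_task_ai(tokens_per_task: int, price_per_million_tokens: float) -> float:
    """Cost of the same unit of work bought as tokens from an AI provider."""
    return tokens_per_task / 1_000_000 * price_per_million_tokens

# Hypothetical: a $120k/year employee completing 2,000 tasks a year,
# versus an AI doing a task in 50k tokens at $10 per million tokens.
human = cost_per_task_human(annual_salary=120_000, tasks_per_year=2_000)
ai = cost_per_task_ai(tokens_per_task=50_000, price_per_million_tokens=10)

print(human)  # 60.0 dollars per task
print(ai)     # 0.5 dollars per task
```

Under these assumed numbers, the token route is two orders of magnitude cheaper, which is the friction collapse the speakers describe.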
This was an important element that I didn't really realize until after reading this: there is this difference between general capital and human capital, the actual labor workforce. And tokenizing the labor workforce seems a little scary. So I'm curious to get your takes on the way this rolls out over time, in maybe best-case to worst-case scenarios. What happens as humans get replaced by tokens? As we reduce our workforce incrementally, does that happen quickly? Does it happen slowly? And what are the second-order effects downstream of that?
Yeah, so let me say a bit about the second-order effects here. One of these is the thing I already mentioned: if you have a bunch of money but you want results in the real world, you're still bottlenecked on identifying talent and hiring talent. There's a lot of friction here. Another is that a lot of social mobility today is based on the fact that you're a talented human who doesn't have capital, but you can go out in the world and do something. And people with capital have to pay attention to you; you're a nimble startup founder or something, and VCs, who have a lot of capital, need you. So they give you money, stuff like this.
I mean, another word for that is the American dream, right?
Yeah, yeah.
It's what we're all told, right? I live in London now, but I grew up in the States, and we're all told from a very young age: if you work really hard, if you do well in school, if you go to the right college, you will have a shot at the American dream. And the American dream looks like accruing enough capital to be able to own things, to make it and have a nice life and fundamentally change your social position. And I think a lot of the argument here is that, provided capital can be highly substituted for labor, because you can just sub in an AI, your ability to walk your way up the social hierarchy gets a whole lot harder and maybe gets eliminated. And then, as a society, a lot of social progress and change depends on someone who is not currently incentivized to care about the status quo coming from the outside and shifting things. If you lose this social mobility, it's not just bad for individuals; it makes society more static as a whole.
Okay, so there's this idea that capital becomes a general substitute for labor, which you can imagine if pyramid replacement is true. Basically, what pyramid replacement means is that instead of paying a human labor force, I can just pay the OpenAI APIs, do this through tokens, and pay the geniuses in the data center. And that's my labor force. So I can take my capital, which is my assets, my money, and instead of putting my money into the slot machine of human labor, I put it into the slot machine of AI. And what you're saying is this kind of destroys social mobility?
Josh was just asking about the best to worst case scenarios. And I think maybe these essays are really focusing on the curse side of things and less on the blessing side. So one could imagine some blessing. But you talk about, I guess, one of the worst case scenarios, though in a way maybe one of the better worst case scenarios, this permanent caste system where we're all kind of locked into the capital ledger we were born into. So maybe if you're born into a nation that has really embraced AI, and, I don't know, your father worked at OpenAI or was in the industry early and was hooked up to this spigot of capital, that's your caste. You're kind of locked in. It almost sounds sort of feudal in that way. I mean, not having lived in a strict caste society, and certainly embracing the idea of meritocracy, maybe that all fades away is what you're saying. We're permanently cast into these kinds of capital ledgers.
Yeah, and I think it's worth noting that social mobility before the Industrial Revolution was very low. Social mobility depends on this thing of human talent mattering, and also the economy growing, and stuff like this. Before the Industrial Revolution, if you were rich, probably at some point in the past your ancestors did something cool and the king gave them a bunch of land and made them aristocrats or something. But then you get the Industrial Revolution, and human talent really matters; social mobility is possible by going out and inventing things, pushing science, pushing industry, stuff like this. Now, maybe we'll keep having technological progress, maybe the amount of abundance in society will go up. But even then, you've lost this element of new people being able to enter the elite, if AI is a substitute for elite human capital.
Okay, but that's the thing that's counterintuitive, or what I'm wondering about in the argument. So we got the Industrial Revolution, which was machines replacing some human physical labor. And you're saying that was actually good for the humans, basically. Why does it not follow that an intelligence revolution would also just be good for the humans?
Well, I think the most important differentiating factor for humans as a species is our brains. We're better than some animals at physical tasks, we're worse than other animals at physical tasks, and having thumbs is a pretty great advantage for using tools. But at the end of the day, it seems like the single best advantage people have is that they can think up new things and execute on them. And so post-industrial societies get these really complex information economies that spend a whole lot of time both producing lots of physical abundance in the real world and, with those resources, using our brains to come up with even more abundance and more ideas.
And you can see this not just in existing economies versus old economies. You can see it today between diversified economies and those more resource-curse-afflicted states, where social mobility is lower, because non-human factors of production mean that your ability to have some huge idea and make an outsized impact is also quite limited. Capital begets capital, and, this is true in every society, the ability of outlier talent to succeed is lower if you don't need outlier talent in the first place to make money.
And maybe also to add a quick econ thing on this. What matters is whether AI is a substitute or a complement for human labor, in the sense that the thing that sets wages is basically: when you add one additional marginal unit of labor, how much are the returns? Pre-Industrial Revolution, an additional unit of labor is another peasant farmer; it's not worth very much. Post-Industrial Revolution, an additional unit of labor commands a lot of machines, a lot of capital. They actually boost the economy a lot. They get high wages. But if all the labor is done by AI, you've got this total substitution of humans; then with an additional unit of labor, output does not change, and human wages are very low.
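That marginal-product argument can be sketched numerically. This is a toy model: the Cobb-Douglas production function and every number in it are illustrative assumptions, not from the essay, chosen only to show the mechanism where AI as a perfect substitute for labor drives the marginal product, and hence the wage, toward zero:

```python
# Toy model: wages track the marginal product of labor. AI "workers"
# are assumed to be perfect substitutes for humans inside the labor
# term of a Cobb-Douglas production function (all numbers illustrative).

def output(labor: float, ai_labor: float, capital: float) -> float:
    """Cobb-Douglas production with humans and AI perfectly substitutable."""
    return (labor + ai_labor) ** 0.6 * capital ** 0.4

def marginal_wage(labor: float, ai_labor: float, capital: float) -> float:
    """Output gained from adding one more human worker."""
    return output(labor + 1, ai_labor, capital) - output(labor, ai_labor, capital)

# Pre-AI economy: 100 workers, no AI, 100 units of capital.
w_before = marginal_wage(100, 0, 100)

# Post-AI economy: same humans and capital, plus a million AI workers.
w_after = marginal_wage(100, 1_000_000, 100)

print(w_before, w_after)  # the marginal human adds far less once AI floods in
```

With these assumed parameters the marginal human's contribution shrinks by more than an order of magnitude, which is the "additional unit of labor, output does not change" point in the limit.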
In this model, who are you saying owns the capital, right? Capital is basically property rights, so somebody's got to own it. In this model, do the humans still own it? Does it kind of consolidate to the tech companies? Or do the AIs own it? Like, how sci-fi are you getting in this?
So we call for a ban on AI ownership of property and on AIs being CEOs. So we're willing to...
We do call for it.
We do. I don't know how likely that scenario is, but I don't want to preclude it. But we went ahead and said it, because it's a pretty cheap ban to do, right? It's not that hard to ban it right now. Maybe it's way harder in the future when we've already delegated lots of authority. I think existing law today, at least in most countries, probably does prevent this outcome anyway. But it's worth making that explicit.
But even if it's people who own it, it really matters how many of them there are. I think most people in modern economies don't own a whole lot. That isn't necessarily a bad thing today, because, of course, your labor is a very powerful thing to trade. And in many cases you might own way more than you did in previous societies, but it's not the same as owning the kind of capital you might need to command many, many AI agents that are replacing lots and lots of labor. And I think, Rudolf, you've thought a lot about the ways this creates a more static dynamic where, as you mentioned earlier, this could lock people into their existing positions pre-intelligence explosion.
Okay. So this idea that capital can now buy labor, so human labor is no longer necessary, there's sort of another implication here, which is that
human labor is no longer necessary, there's sort of another implication here, which is that
classical liberalism starts to fall apart. And so, you know, post futile societies, I think
we've just like in post-enlightenment, we have generally experienced not in all places,
not in all countries, not in all regions, of course, but we've generally, generally,
it's generally led to better human outcomes, right? You know, quality of life, life expectancy
in general, wealth, freedoms, the whole concept of just like humanism. We've ended
terrible practices for humans like chattel slavery, at least in like most places. So it's, we kind of
pat ourselves on the back and we think like, oh, wow, we've really advanced. We've just like
gotten some better moral software and we've kind of like clearly evolved. I think that what you're
arguing though is that there's just like a more utilitarian perspective on this, which is like,
maybe you could sharpen this argument for me, but it's like nation states gave labor
these rights, citizens, these rights because they were so damn useful. And it was just they gave
the citizens these rights because they needed to attract the labor pools and the brains to develop
their economies. And if humans become less necessary for, say, nation states, we've already
demonstrated how they may be less necessary for corporations, then that entire, I guess, social
construct starts to fade out. Can you sharpen that intuition for us?
I think it's definitely true that there's a lot of institutional inertia, in the sense that if you live in a society right now that really values humans and cares about humans, and politically might be willing to introduce UBI or universal basic compute or whatever, then there's a strong chance that this society has a lot of inertia in that direction. But societies don't exist in a vacuum. They compete with each other. There's a sense in which, for instance, of all the countries in Europe, Britain was doing the most to be compatible with industrialization. And as a result, they were quite politically advanced for their time. They had quite a lot of freedoms. They were good at encouraging industry, stuff like this. And as a result, Britain becomes the preeminent power. And the thing is, there are a lot of societies, a lot of countries in the world, and they're in competition with each other. So it's not sufficient that one society makes this choice and continues on its own. It also matters which strategy wins overall in the world.
It's not clear to me if this dynamic is bottom-up or top-down. Did states grant these rights knowing it would attract better competition, or did workers, or people who owned capital, have more power than the state and were able to demand them?
I think about the Magna Carta in Britain, for example, the foundational document for the concept of modern democracy, where the landed gentry, people with lots of property, had powers that weren't necessarily as great as the king's, but who in many cases, in fact, controlled the factors of production that created wealth for that king. And so this put them in a position where they could make a whole lot of demands upon a king. You see in the evolution of British democracy that it first starts with this landed gentry class. And here in America, voting rights, the idea of a self-determined government, doesn't start with everyone being involved. It starts with these diffused property-owning men who, because of that position, had some sort of diffused power of their own.
There's this Charlie Munger quote that I think was at the top of the original intelligence curse post on my blog, which is just: show me the incentives and I'll show you the outcome. And I don't think it's the case that cultural evolution plays no role here. I think it's quite important. But it's also worth asking what role economic incentives play in cultural evolution, and how strong those incentives are. And I think in the limit, these incentives are probably the dominating force here.
There's this quote, I think in this part of the essay, where you say: the classical liberals of today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labor-replacing AI, this will no longer be true. Wow. So potentially classical liberalism is on the line here.
Let's get to the next essay.
So we've talked about capital and its importance, how that could be the dominant feature of a post-AGI type society. So let's draw some more implications for what that means for, I guess, the nation-state's relationship with its citizens. This is the heart of maybe the intelligence curse. This is where the curse starts to come down on us even stronger.
And the summary is this. With AGI, powerful actors will lose their incentive to invest in regular people just as resource-rich states today neglect their citizens because their wealth comes from natural resources rather than taxing human labor. And this is the intelligence curse. Why do powerful actors like nation states invest in their people today?
Well, think about it. Say I'm a government right now and I want to make a lot of money. Maybe I want money for a variety of reasons. Maybe I'm altruistic and I want to provide better care for my citizens. Maybe I'm self-interested and I want my state to do well. There are a host of reasons why you might want this, because money gets you power. But in order to get it right now, you can do a couple of things. One, you can try to find some sort of resource, but maybe you don't have it. Two, you can try to increase your return on investment. And given that most developed, diverse economies right now really flow through people and their labor and their work, you can raise that return by doing a couple of things. You can increase the quality of education. You can build infrastructure like roads and public transportation, which helps get investment flowing into areas. You can build really reliable governance systems to encourage investment. You can foster competitive markets. You can support small business formation. You can do all of these things that make it more likely that your population produces meaningful economic results, and then you can tax them more heavily.
I think right now in the United States, as a share of total tax revenue, something like 50% derives from income taxes, whereas 12 or 13% derives from corporate taxes. And so in that world, of course, you want to make sure that people are making more money, because if they make more money, you accrue more tax revenue, and you can do more things with that tax revenue. It just so happens that these investments are the kind of things that we associate with a better quality of life.
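The incentive here is simple arithmetic. This is an illustrative sketch only: the revenue shares echo the rough figures quoted above, while the total and the size of the wage shock are hypothetical assumptions:

```python
# Illustrative arithmetic for the tax-incentive point. Shares echo the
# rough figures quoted in the conversation (~50% income, ~12% corporate);
# the total and the 80% wage shock are hypothetical.

total_revenue = 1_000.0  # arbitrary units of government tax revenue
income_share = 0.50      # share from individual income taxes
corporate_share = 0.12   # share from corporate taxes

income_tax = total_revenue * income_share        # 500.0
corporate_tax = total_revenue * corporate_share

# Hypothetical shock: labor-replacing AI cuts taxable wages by 80%,
# so the state's largest revenue stream shrinks to roughly a fifth.
income_tax_after = income_tax * (1 - 0.80)

print(income_tax, corporate_tax, income_tax_after)
```

The point is that when half the state's budget rides on wages, the state is structurally invested in its citizens earning more; remove the wage base and that investment incentive shrinks with it.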
And they also give you better bargaining power. The other thing, of course, which we'll get into a bit later, is that maybe you impact the ability of that government to maintain and retain power, either through democratic means like voting or through credible threats to an autocratic regime.
Okay. So you might want to buy these voters off.
Yeah.
So basically you're saying the public goods, the public infrastructure. Take, for example, free public education for all citizens. This is not the government doing, quote unquote, the right thing for its citizens. This is not governments and nation-states just being altruistic and nice and kind. They're doing it for a reason, for an economic outcome, which is: if we have an educated population, then we have a greater ability to produce GDP, the government benefits from being able to tax that GDP, and that's how it makes revenue. And this is basically the incentive for governments to invest in education or, for that matter, any kind of good thing for their people. Does that analysis miss something? It feels very mechanistic and bleak. Maybe that is the way the world works. But is there not some sense of morality and doing the right thing here?
So I'll mention one thing here, which is that this analysis doesn't require anyone in government to consciously be this cynical. For instance, you can imagine in the year 1500, in Europe under feudalism, maybe there were people who were really altruistic and wanted to give people free health care and stuff like this, but then they get invaded by the neighboring country who invested more in their military. Whereas the people who were altruistic in 1800, who were classical liberals promoting British and US political progress, these are the people winning the future, because their values at that point in time were associated with the winning strategy.
Well, ask yourself too: if you hear, and I'll tell you nothing else about these two politicians, that one politician wants to increase spending on education and another wants to decrease spending on education, gut check, which one feels more altruistic to you?
Increase, right?
Yeah. And I think it just is the case here that the incentives are aligned. The thing that sounds really good and altruistic is also the thing that returns money to the state. Now, in a world where the state has to cut that education spending because they're in a fiscal crisis, or because making that spending choice would be super irresponsible, maybe, as Rudolf said, there's an invasion coming and they've got military spending to do, it doesn't make that other politician any less altruistic in theory. It just means that other politician has really strong countervailing incentives to do something else.
Okay. So it's not altruism. It's just a happy coincidence of incentive alignment. That's why we get beneficial things.
It's interesting. Going back to the idea of the resource curse, maybe we could talk a little more about that, because we do have real-world examples, experiments at play, where there are very resource-rich countries, to the extent that they can generate economic returns and tax revenue to pay for the government based on resources rather than human capital, rather than human labor. And this is the resource curse that we were referring to at the outset of this episode. And there's a name for these, which is rentier states, I believe? Okay, so tell us again about rentier states. Let's get into some more detail here, because this is essentially the experiment at play. Again, if your nation is resource-rich, how does it treat its citizens? What are the experiments that we've seen right now in the real world?
So we can look at the raw incentives. There's a very interesting counter-argument, which I think we actually incorporate, that many states have such good institutions that they aren't exposed to these incentives, that they can avoid them. But at the raw incentive level, we talk about the Democratic Republic of the Congo, for example: an extremely complicated history, but with literally trillions of dollars in minerals below their feet and hundreds of billions of dollars in, I believe, total revenue from those minerals, and yet their people subsist on a couple of dollars a day.
And if the state has the kind of resources that could enable relatively widespread wealth, the question is: why don't they do it? I think it's very easy for us in the West to imagine a state far away that is exposed to high levels of corruption or high levels of inequality because the leaders choose themselves over their people. It sounds quite foreign to us here in the West. But this is what we see time and time again in many resource-curse-afflicted states: the leaders get rich, or the people who control the mining rights get rich, the oligarchs get rich. But many regular people don't see the benefits of the economic activity, because there's no reason to invest directly in them to reap that reward.
Is there an explanation for the counter-examples of resource-rich nations that are actually protecting their citizens fairly well? Norway comes to mind. Maybe Canada sort of comes to mind. The UAE comes to mind as well. How do you explain that?
So one, I think the key thing here is that when we're talking about rentier states, we're talking about states that have a very large portion of their revenue coming from natural resources; it's a dominating force in their economy. And I think we cited in the piece these two really interesting examples that the authors give, Norway and Oman, as counter-examples of states that don't fall into the resource curse. In Norway's case, by the time Norway discovers oil, they have this really efficient, really anti-corrupt, fantastic democracy, where voting incentives kind of override raw capital incentives, where the bureaucracy understands how to do things in really complex ways. This means that voters have very real power, and the benefits get dispersed in different ways. The voters can out-vote capital incentives. In Oman's case, Oman in particular had a pretty credible threat of some sort of uprising or revolution. I can't recall the exact example from the paper, though I know we cited it directly.
And as a result of this, the state capitulates.
The analogy that I've used here before is: if you are the person that owns the rents, you'd really like to keep all the rents to yourself. You'd also like to keep your head meaningfully attached to your body. And so if capitulating, if paying people out, if doling out the rent money keeps your population in check, you're much more likely to want to do this, because otherwise that credible threat could be quite real.
Now, what we say is twofold. One, we think that advanced AI makes it way easier for the state to know everything and suppress things like revolution. And so we think the revolutionary argument in autocracies is kind of off the table. We find ourselves in the contrarian position of defending democracy pretty strongly here, which has, I think, become kind of contrarian in tech circles in recent years.
It's not contrarian around here. We're big defenders of decentralization and democracy.
Yes. But in the Norway example, the questions you have to ask are: one, do you think most states have the kind of robust and rigorous democracy Norway has, and two, do you think the intelligence curse, replacing all labor, might be stronger? In our case, we think it probably could be stronger, but Norway really shows us an interesting way out.
That's interesting. So you're basically saying Norway had democratic, decentralized institutions strong enough to withstand some of the pressures of the resource curse. But the open question is: how many other countries have that? And when you get to the intelligence curse, no matter what your democratic institutions are, will they be enough to withstand the tidal wave? That's kind of an open question. But it does provide maybe a sliver of hope: with robust institutions, maybe we can withstand the coming tidal wave. I don't want to get into solutions too soon, because we're still on the problem, but I'll just earmark that right now.
Imagine if your checking account and DeFi wallet finally spoke the same language. That's Mantle banking: an all-in-one fiat and crypto account. It lets you save, spend, and invest all from one dashboard. Swipe for coffee, stake mETH for yield, or even use virtual cards for payments through Apple Pay. So it feels Web2 simple, yet stays Web3 sovereign. For allocators, meet Mantle Index 4, the S&P 500 of crypto: a tokenized, institutional-grade fund, seeded with $400 million from the Mantle treasury and balanced across Bitcoin, Ether, SOL, and yield-enhanced stables. One asset, broad exposure, pure DeFi composability. The momentum is real. mETH vaults, FBTC bridges, and a $2.4 billion community treasury are all powering the next phase of on-chain finance. Mantle brings real-world yield and utility to digital assets. Ready for the next era of on-chain finance that actually belongs in 2025? Explore Mantle at mantle.xyz or follow Mantle underscore official. Mantle, bridging TradFi and DeFi so you don't have to.
Have you ever imagined Bitcoin and Ethereum
truly working as one? Unlocking the full potential of Bitcoin DeFi and more. Meet Hemi, a groundbreaking modular network designed precisely for that vision, co-founded by early Bitcoin core developer Jeff Garzik. Unlike other layer 2s that treat Bitcoin and Ethereum as separate silos, Hemi connects these giants into a single, powerful supernetwork. With Hemi, users gain unprecedented asset portability and possibilities, combining Bitcoin's security and value with Ethereum's versatility. Hemi's unique innovation, the Hemi Virtual Machine, integrates a fully indexed Bitcoin node directly into an EVM, enabling dapps that seamlessly interact with both networks. And with Hemi's Proof-of-Proof, or PoP, consensus, users benefit from truly decentralized, censorship-resistant, Bitcoin-level security. Since its recent mainnet launch, Hemi has rapidly ascended the ranks as one of the top Bitcoin chains. With a thriving global community and robust ecosystem support, Hemi isn't just building a network. It's shaping the future of Web3, DeFi, and beyond. Visit hemi.xyz/bankless to learn more, discover ways to interact, participate in the leaderboard program, and be part of the community that's uniting Bitcoin and Ethereum.
Yeah, I think what I want to know,
as a layman listening to this,
who is just on the street
and now understanding that he will soon
not be able to trade his labor for capital.
I'm curious.
You're out of a job.
I'm out of a job.
So, if you're listening to this and you're like, hmm, okay, well, I can't do this, I can't do that.
I am no longer able to trade my labor for capital. What does that look like for the average person? Are they collecting government welfare? Is there a universal basic income? How am I able to accrue capital if I am just one of those everyday workers in the workforce?
Well, I guess the question is how much do you want to get into our solution section right now?
So perhaps we'll hold that for a second because there is another element that I was also interested in talking about, which is just the human element.
I like human interaction. I like going to hang out with friends. I like buying homemade things
from people. I like meeting the artists that create the art that's on my walls. I really enjoy
that connection. And when we introduce this, this AGI element, this artificial intelligence force,
it feels very inhuman and artificial and it feels very sterile in that way. And when I think
as someone who is experiencing the human experience, I'm really curious, what element does the
human nature have on the way this all plays out?
So I think it's definitely true that there are a lot of things where humans have a preference for interacting with humans.
And I think this will continue, and I think there will be a lot of, like, social-facing jobs
where the humans have a very high bar for replacing that role with an AI.
And I think this does provide a sort of buffer, where I think there will be some jobs that last quite long.
Maybe, like, a teacher, or interfacing with customers; even if you're a salesperson who's very dependent on personal relationships, humans might prefer that for quite a while.
So I guess there are some questions about, like, how charismatic are the AIs? How good do they get at hijacking human social instincts, stuff like this?
But there's also a question around money. So the humans currently have money, and there's some capital flowing around the human economy.
But the AIs will be increasingly doing stuff, and money might increasingly flow towards the AI part of the economy.
And in particular, how are the humans earning the money with which they pay each other for the human services? Well, actually, some of that human money also has to be spent on doing the AI stuff that probably keeps them alive, keeps them fed, stuff like this.
Is this all just about the workforce, or how far does this go?
So I guess if currently a human thing that I would really be really excited about is to teach my
kids something or to be a father to my children.
And how far does that go?
Does it get into the household?
Does it kind of remove the need for humans through the entire process?
So there's a really good paper that walks through some of the cultural elements here. It's called Gradual Disempowerment. We know the authors quite well; it came out around the same time that ours did. We focused less on this cultural element, mostly because we were trying to isolate what we think is a really critical variable here on the economic side. But I'll tell you, I had an interesting interaction a couple of days ago with someone who was telling me that they talk to ChatGPT, and that they think their dad talks to ChatGPT more than he talks to his kids; it was like two or three hours a day, maybe more. And so I think the capacity for machines to alter our relationship with each other seems quite high.
I don't have the exact quote that Mark Zuckerberg said on a podcast recently, so hopefully I'm not misquoting this too badly. But it was something along the lines of, you know, the average American has four or five friends, but they have capacity for 15. We can substitute a lot of that with machines.
For me, I'm not excited about this vision.
This is really not exciting to me at all.
I really value the real world and the people that I get to interact with.
And maybe this is something I don't want to impose and say that I get to make a choice,
that nobody ever gets to go down that rabbit hole.
It's certainly not the technology that I'm excited about building.
Okay, that's really fascinating, because when you start getting into the family, into just being a parent or being a father, can an AI really do that better?
But then you get into scenarios where, I mean, a lot of people grow up without their father, right? Maybe, you know, by way of an early death or just something else.
And is an AI maybe providing some parenthood there?
And I guess what you guys are saying, though, is you're acknowledging that AI cannot
replace all of our labor, because we still might want to go to an arts and crafts fair and purchase a piece of artwork, for cultural reasons, from a real human artist that we just resonate with and identify with, and that's still going to be a market and an economy. What you're saying is that, over time, that could become a smaller and smaller portion of the economy, and even the humans' purchasing power in this world could actually decrease, because where is their wealth to go purchase the artwork actually coming from?
And so you could imagine, like, just that economy,
that human-to-human kind of economy,
where only humans can provide this,
that just gets smaller and smaller over time.
It's kind of a niche.
And so the humans are maybe, I guess, disempowered, even though these economies still exist.
So, yeah. Go ahead.
Or it could even be that, you know, human wages stay roughly constant. Everyone has vaguely pro-social jobs. The money flowing into the human part of the economy comes from somewhere, maybe the government, maybe existing human wealth.
And human wages are what they are today, but then humans just don't really have political power anymore, because states worry about, you know, real things like energy and GPUs and military competition, and all of these fields are done by AI.
I think the human role would have become a bit peripheral, not tied anymore to the real power that exists in the world.
And I'm a bit worried about that, even if humans have their wage level at what it is right now.
Another way to think about it too is I hear a lot that people will always want human teachers,
right? Because there's this human interaction that you give with a teacher and it's really hard to replace.
A relevant question, though, is what will be the demand for schools?
What is the incentive for states to fund mass public education in a world where they aren't
receiving a return there?
That doesn't mean it isn't going to happen, but you should look at the underlying economic
incentives.
And it could be the case, as you described, where many, many fields are automated, and so the money flowing in this human economy is just increasingly limited or, you know, dwindles over time.
I think there are a lot of ways in which you can reach a pretty bad outcome through different mechanisms here, and a lot of our solutions section is focused on trying to keep humans meaningfully economically involved in many different ways, while also strengthening democratic incentives and democratic structures so that they can override capital incentives when they need to.
Well, if we could stretch this a little farther and kind of imagine a world here.
How do future nation states actually make money in an AI-dominated economy? How do they tax? Obviously now our tax mechanisms are just income tax, capital gains tax, consumption-type taxes, excise taxes, increasingly tariffs.
So the nation state is really going to have to reorient around AI labor.
And that's another interesting question.
It's like maybe actually the nation state is not the one in charge.
I mean, we're in a world of nation states, but that is kind of a post-feudal model that kind of arose on the back, really, of the last major technological change, which was the Industrial Revolution.
Maybe we're going to reorganize.
Balaji Srinivasan has this concept of the network state.
And you sort of wonder if maybe some of these AI labs could be in a position to accrue such power that they actually become the dominant force, some kind of OpenAI network state, complete with a flag and Sam Altman as the president?
I mean, like, who knows, right?
How do you guys see this playing out?
Yeah, I think there's definitely this question over,
like, do nation states continue as the main form
of political organization, or like main form of organization
of power in the world?
And I think there's something where, for one, I think you should have some prior that these things are pretty sticky.
Even the Catholic Church, you know, they were extremely powerful. They ran Europe for a few centuries in the past.
But, you know, they don't run Europe anymore. We still have a pope.
But they're actually making a lot of commentary about AI recently.
They're like, finally, right?
Right.
Like, this is, uh, Rudolf has been subjected to me spending the last year really nerding out about this.
I just, I literally just had a conversation about the Catholic Church and AI.
Yeah.
So I have a reading list that I'm compiling right now.
Because, slight sidetrack, there's the most recent Pope.
He's now said publicly that one of the reasons he took the name Leo XIV is because Leo XIII had this very prescient encyclical called Rerum Novarum on the Industrial Revolution in the 1890s.
And he views AI as a similar style of societal reorganization.
Wow.
Interesting.
I have a whole lot of commentary here.
I've got a reading list I'm working through right now.
Just yesterday we were in Oxford and I was talking to a friar.
All right.
All right, Josh, new guest requests.
We've got to get the Pope on Limitless.
Yep.
Ask his thoughts on AI.
All right.
We'll do this on the pod again.
Sounds good. Okay. Oh, fascinating. So we don't really know what the organizing political structure might be in this new world. But we could imagine it changes. But you're also saying that, hey, the nation state is pretty sticky. The Catholic Church is still doing like big things. Maybe it'll fade somewhat. Probably won't go away. But that's kind of a TBD. Like we don't know yet.
Yeah. And also, if we get, you know, AGI lab network states, the same incentives kind of apply to them by default. And they also aren't by default democracies, unless they become democracies.
Yeah, a core observation here is that AI can be both destabilizing and centralizing.
And this seems kind of counterintuitive, but it could be the case that there's lots of very quick disruption, and the winners of that disruption can very quickly accumulate power and capital.
I'm not saying that is certain, but one scenario you could see here is that it both destabilizes a whole lot of things and then centralizes power among the winners.
Yeah, the centralization of power seems to be a massive theme for you guys.
What I'm getting out of this is definitely some worry about AI.
I wouldn't call it doomerism, right?
Look, there could be a scenario where AI comes and kills humanity; I think you can see that point.
But that's not really the focus.
The focus is more this attractor basin towards authoritarian totalitarianism, right?
Which could be possible.
I mean, this is even Daniel Schmachtenberger's work.
I don't know if you guys have looked him up, but he talks about how, with all of these tech revolutions, what we could see is this attractor basin towards total societal control to actually keep our tech in check.
There's one more concept, though, that we've got to go through before we actually get to this.
Let me say something about the power concentration thing.
Throughout history we've had really terrible times and dictators, really terrible centralizations of power.
But all of them have fundamentally been limited by the fact that whoever the dictator is, they're not infinitely competent, they can't think incredibly fast, they still need a lot of other people to do things for them, and they somehow need to get the buy-in of a big group, a big bureaucracy, and then of the population that they rule over.
And fundamentally, their power is still rooted in humans.
If you're a dictator, you're constantly paranoid about everyone else overthrowing you.
They also get strokes.
Their life expectancy is only about 80 years.
Right.
Even though they could pay for such good health care, exactly.
Yes.
But then, once you don't need the bureaucracy of humans working for you, once you don't need the human military, you just have an AI bureaucracy, an AI military, and you don't need the population to run your economy.
The constraints on how total the totalitarianism can get become a lot weaker.
Indeed, they do.
Okay.
There is a way out, guys, all right?
There is, yes.
For limitless listeners, if you're in despair now, never fear.
We've got some solutions for you.
But one more concept to cover.
So this is, I think, the last essay before you kind of like conclude all of the things
and give some of your recommendations for the way out, which is this idea of the social contract,
okay?
An essay titled "Shaping the Social Contract."
And what you're saying is the intelligence curse is breaking the social contract.
And I really like this diagram that you sort of show, which is just like this nice equilibrium
balance of power. You've got like three boxes here. You've got powerful actors. So these would be
corporations, nation states, you know, the big powerful networks. You've got the people and then you've
got the rules. Okay. And so there's dependencies. There's lines of dependency between the powerful
actors, the people, and the rules. So the powerful actors need the people for value. We've already established that. They need labor, right? And so that's, like, plus one for the people.
The people can displace the powerful actors.
We've seen that throughout history: the French Revolution, the American Revolution, right?
If the powerful actors get too totalitarian, we stage revolts, right?
The people are strong.
And what we've done is we've created these social contracts, basically rules for society.
And so these rules are moral codes, but in more detail.
It's kind of our legal system.
It's the Constitution of the U.S.
It's the Magna Carta.
So the people can influence the rules, and the powerful actors are constrained by the rules.
We get the balance of power, separation of church and state, three co-equal branches, all of these things, right?
It's like all very nice.
And that's our current setup.
That's the status quo.
What you're saying is this whole AGI thing disrupts the social contract, because it means the people can't displace powerful actors, as you were just saying, Rudolf.
It means the powerful actors, the nation states, don't need the people for value.
They can just pay, you know, for tokens from the AI geniuses in a data center.
And then the powerful actors have the ability to influence the rules.
The whole social contract is messed up.
Did I flesh this idea out right?
Is this kind of what you're saying?
So I'll zoom in on just a single interaction here, which I think helps articulate this.
I know your listener base.
So let's zoom in on a software engineer at Google.
And let's say it's 2021, which, if I'm correct here, is the big year where everyone is getting paid crazy amounts of money.
You are negotiating with Google on your contract, and you have something that they want.
In this case, you're really good at what you do.
They want to hire you.
Well, because of this, you get to extract a whole lot of concessions.
You're competitive on the marketplace.
You get to ask for more RSUs.
You get to ask for more stock.
You get to ask for more money.
You also get things like the free cafe on campus because they've got to attract you somehow.
Or I think it's like 16 or 17 restaurants in Mountain View on their campus.
It's absolutely crazy.
It's a cool campus.
You get a lot of these benefits because of that exchange.
And of course, Google gets something out of you, too, because they might pay you $400,000.
But as long as they've done their, you know, vetting here, they're going to make a whole lot more than $400,000 from your labor.
But everybody wins in this relationship.
Now imagine that Google is able to replace your labor with a machine that can code way better than you.
This really disrupts the relationship, right?
Because let's say, you know, in this case, it can create value for Google at a cheaper cost than you. I don't know, $100,000 a year, $150,000, $200,000. That's in the price range right there where it's really economically sensible for Google to cut you out of the process, but difficult for you to then go create, you know, $10 trillion clones of yourself and go compete with Google.
And in the limit, this creates a world where powerful actors get more and more entrenched as capital substitutes for labor more and more perfectly.
Your ability to displace them goes down, while simultaneously your ability to bargain with them
also decreases because you don't have anything that they need.
This might create a situation in which powerful actors get to set the rules, and you are constrained by them, and it's very difficult for you to alter that relationship.
That follows through to the government, too, right? It basically breaks its social contract with its citizens when, like, they don't need the citizens very much anymore.
I guess my question here, or a bit of pushback: you know how we call it a social contract, right?
And that's because it's sort of, it's enforced socially.
Yeah, there's power of the state, there's military.
There's kind of like monopoly on violence types of things.
But over time, human societies have been able to construct their own social contracts.
What is something like the Constitution? Just a set of laws, legal codes, and ideas that we all agree on in this nation called the United States of America, right?
Like, we put that in place.
Yuval Noah Harari calls these kind of like myths, right?
They're just like these shared beliefs that power so much of human society.
So my question is: okay, if we get to kind of choose social contracts, why don't we just pick one that doesn't screw over all the humans, that doesn't screw over citizens and their labor? We put these things together. They're just shared myths. They're socially enforced. Why don't we pick one that's good? And by the way, if this AGI thing comes true, won't we have abundance too? Won't we have basically 10% GDP growth a year? Won't we have fantastic wealth? At least somebody's making the wealth. And so shouldn't this abundance relieve the competitive pressures? We don't have to think about, you know, the basics of food and shelter, because it's all provided for us. And so we're not in this competitive game anymore. We can just think about what makes society happy and pick a social contract that enforces
that. I guess maybe one historical example here is that the British Empire tried to enforce a social contract on the US, or what became the US. And then the Americans were like, okay, actually, we don't think this is fine, and served up a reality check to the Brits. And it turned out the Brits did not have the ability to enforce that; the Americans had real power against them, and then the Americans wrote their own social contract, which became the Constitution.
There's definitely a lot of power in culture, institutional inertia, just the beliefs that people have, myths in the Harari sense, to steer things and keep things on track.
But then, over a long enough time scale, or with enough stuff happening in the world, that gets checked: is there something behind this?
If someone tries to change that, either in a bottom-up way, because there's some social media movement, or in a top-down way, if the leader of a country decides to do something, do those reality checks hold? Does the economic structure and the political structure push back against that successfully, or can you actually shift it?
Because if you can shift it, then probably over time it drifts in the direction of the incentives.
How about this abundance idea, though, going back to that, right?
So, like, we have abundance.
AIs are creating all of these things.
Won't that relieve competitive pressure for us?
Like, can't we get a utopia out of that?
So I think at the core, you should be really concerned about any arrangement where the long-run arrangement has you with very little actual power.
And so I think it could be there's lots of abundance, but you aren't creating any of it.
You aren't involved in the creation of any of it.
And so your remaining power here is entirely political.
This is just way less stable.
Another thing to think about here, and I think we talked about this in the essay, is that it's not really clear that competitive pressures or human greed have this intrinsic stopping point.
I think to paint an additional example, though: it could be the case that the worst outcome is that we have abundance, but you don't have any say in what happens afterwards. And so your needs are met, but your political reality is quite constrained. I think about a state like China, which has been able to lift a whole lot of people out of poverty. The Chinese miracle is a thing that happened; hundreds of millions of people got lifted out of poverty under Deng Xiaoping. But simultaneously, I wouldn't say that this has resulted in crazy political freedoms for people in China. It could be that your material conditions improve, and yet simultaneously your power is unaffected.
It has been quite a, you know, Herculean effort by the Chinese state to keep this equilibrium going, and the Chinese state is, in many ways, responsive, because it's afraid of losing legitimacy and really afraid of revolutions.
It has a zero-tolerance policy on protest.
But that is one outcome.
We just happen to think that you should be deeply concerned about scenarios in which you don't
have the material power to guarantee abundance for yourself.
And if you're written out of economic social contracts, you are at this point at the mercy
of the political one.
We think the political one is better than nothing.
We advocate really hard for strengthening that political contract so we can get to that outcome.
But we don't think, in the limit, it's the only thing I'd want to be relying on.
I really want to make sure that I have some real stake in the game here.
One last objection to all of this, which is basically from Professor Arvind. He, uh, he wrote "The Intelligence Curse."
I don't know if you're familiar with him, but he has kind of this riff.
He wrote AI Snake Oil.
I was going to say congratulations. He plagiarized you for his work, I just want to let you know. Right now.
AI Snake Oil.
Yeah, I'd be frightened to find out he wrote it.
Sorry, Arvind.
Oh, that's good to know. I didn't realize that title was taken.
AI Snake Oil, yes.
He has this riff where, basically, he kind of downplays AGI.
He basically thinks that AI is more akin to regular tech.
And one of his riffs is that there's a difference between AI capability and power.
So there's capability, right? All of this knowledge and intelligence inside of a data center.
But then that's different than power. It's kind of constrained.
Like maybe that idea you guys said earlier: part of the solution is not giving AIs the ability to accrue their own wealth, right?
Wealth would be a vector for power.
We don't necessarily have to give AIs wealth and power.
And so capability and power could be somewhat isolated.
Like maybe this whole thing is just a question of like, who gets the power?
How does that idea, the difference between AI capability and power, factor into this whole analysis?
If I'm understanding you correctly, you're saying that it could be the case that we don't delegate this power to AI systems, and then it's retained in the hands of people. Is that right?
Exactly. There's always humans in the loop, you know, like they can't get their own bank accounts or something. They can't accrue capital. We always have kind of a check on them. We don't have to give them the keys to the car.
Well, I think nothing that we've argued is contingent on AI having this power in a self-directed way. One of the biggest oppressors of people in human history is other people. Totalitarian states require a whole lot of people doing that oppressing. It could be the case that what we've actually done is we've just expanded the power differential: we've made it so that some people are far more powerful than others. This is already true today, but in the era of liberal capitalism and liberal democracies, your power as an individual, as a unit of society, has just really never been greater. And what we're saying here is it could be the case that, for a couple of people, because they have existing access to capital and can convert this directly into results, this could be a world where they have just such dramatic outlier ability to shape the world that their ability to materially impact your environment is really, really high, and your ability to resist it is even lower than usual.
Okay, I feel like we fleshed out the intelligence curse to a sufficient
degree. Let's talk about the solution. Let's talk about how to break out of this intelligence curse.
You've got three words here. You've got avert, you've got diffuse, and you've got democratize.
Where do you want to take this? Do you want to start with avert? How do we get out of this?
Let's work it backwards, I think. Let's start with the initial observation. Yeah.
Okay. Start with democratize then. So what's the idea here? That we're distributing the power to all of the people? You're not concentrating this in the hands of AI labs and the AI models themselves. How do you think about the democratize word?
So I think the way we'll flow this, if this makes sense, is I want to walk through the observations backwards really quickly, because we started with democratize as the observation, and then I think we can kick it off with avert after that.
And what I mean by this is I just want to walk through the whole argument chain real quick, Rudolf: the initial observation that we have on democracy, why we need each step here.
Yeah, so I guess the flow here is basically, as we mentioned, Norway, for example, solved the resource curse.
They just had good institutions.
And therefore, they can all go to the polls and, like, vote for everyone's welfare, and they distribute the oil wealth between the people, and everything is great.
And so it's great if we can get to the point where we have this very democratic thing. A lot of people have power. They can affect the decisions that are made. We get broad distribution of the benefits of AI, stuff like this.
And we list some ways in which technology for coordination and various other things can help with this in our last section here.
But yeah, this is basically great.
There are various ways you can build tech to make this easier.
And then kind of the point we're making is that, to be in the state where you can democratize and have that be a stable equilibrium, what often matters is that you get political power when you have economic power.
So then this brings us to the idea that you need diffusion as well.
You want to diffuse the benefits of AI to people, such that everyone gains in power, gains in capabilities, and continues having some stake in the economy and some ownership stake over it.
And this makes the step of democratization more stable, because then it is actually in the incentive of the powerful actors, of the people, of everyone, to keep the democracy in place.
So then we've gone from democratize
to diffuse.
And then there's this worry that people sometimes have: if you diffuse AI too much, if you give everyone the AI, you know, you're giving out this powerful technology that people can use to do things like create bioweapons or launch all sorts of nasty cyber attacks or whatever.
Or maybe the AI takes over because it's misaligned, and that is very bad for everyone.
And therefore, in order to make the diffusion step safe, in order to enable it, you want to avert the various catastrophes that could happen from widespread AI.
And we're especially excited here about hardening the world against things like bioattacks and cyber attacks, and also just making sure that we don't mess up on the alignment problem.
So we work backwards from that, right?
Democratization is clearly a way out, because democratic incentives can beat capital incentives.
You can ensure all the things you want out of that.
But we've noticed this pattern where your economic power correlates with democracy, and is oftentimes the engine of it.
So then we want to diffuse.
But we also want to make sure that diffusion happens in a way that doesn't create the kinds of catastrophes that either would just be bad in and of themselves or could give license for states or other actors to really powerfully centralize. We have this avert section. So we kicked things off with avert in this backwards chain, having realized that in order to get to the democratization, there are some steps we're going to have to take first.
Okay. So democratize is all
about power diffusion to the people so that the people can hold the institutions in check.
But it's a political type of thing, right? Yes. And we have had democratic protocols in the past, right? And we have them right now: one person, one vote.
We'll come back to that, because I want to get into some tangible examples. But that is about distribution of power, I guess, and the humans having this power and retaining this power.
And you're saying one way in order to do that is that other D word, which is diffuse.
And I think diffuse means give everybody access to AI tools.
It can't just be a small percentage.
Maybe you could sharpen the intuition there, but diffusion is about distribution, putting the tools in the hands of everybody.
And then avert is just making sure that we don't completely go off the rails with a misaligned AI or some sort of bioweapon.
And also, I love that you say this because this is super important.
A lot of people miss this.
Avert without requiring centralizing control.
Because the attractor basin, when you start to clamp down and you avert and you sign letters like Pause AI,
or, like, Nick Bostrom proposed a kind of high-tech panopticon,
or the government has to surveil everybody to make sure they're not doing a bioweapon with
their LLM at home, right?
Then we get this attractor basin of like totalitarian, like authoritarian regimes that we then
can't get out of.
So you're saying avert these bad outcomes without requiring centralized control.
Exactly.
That's the logic chain we flow through.
And the reason why we work through avert, diffuse, democratize in the piece, as opposed to the
logic chain where you go backwards, is because we think it's going to be really hard to diffuse
unless you avert, and really hard to democratize unless you diffuse.
So the logic chain works backwards and then we present it forwards,
if that makes sense.
Okay, it does.
All right.
Can we get into some real-world examples?
So avert.
Yeah, let's start with averts.
Let's kick it off.
So I think the core observation here is that actually AI can do bad stuff.
And this is, like, sometimes unpopular to say. It's funny,
I think we're in a position where we're saying unpopular truths to lots of different people,
and certain truths are more popular with some communities than others.
And I think it is the case that AI can
make it a whole lot easier for a lot of people to do bad things, and can also make it a whole
lot easier for us to lose control of the systems themselves and have them take actions on their own. And so
our observation here is pretty simple. It'd be really bad if that's the end state, if AI is something
that is bad for us and not good for us. And secondly, historically, these kinds of potential
bad outcomes are the really powerful forces that justify centralization. You can see this
across a whole host of tragedies. I think a lot about the September 11th attacks,
and how, as a result of 9-11, the government took very broad power grabs.
The USA PATRIOT Act, which, fun fact, is actually an acronym.
It was passed a couple of months later, and it resulted in what I would argue
was a pretty significant restriction of civil liberties for Americans.
I would co-sign on that.
Yeah, it gave the government warrantless wiretapping capacities.
Section 702 in particular has been quite controversial for a host of reasons,
and I won't take a side on that argument.
But the point is that it rapidly expanded government power.
And government power, once unlocked, is very hard
to get back. The other observation that's important here, though, is that if AGI could, in fact,
do a whole lot of economic tasks, you're not just centralizing a technology. This isn't just like
giving only nukes to the government, which is a pretty common-sense argument. You are also
centralizing, into a couple of points of failure, the development of the technology that might run your
entire economy. In this case, it kind of looks like centralizing the means of production
into the hands of a single actor or a couple of actors.
That sounds like something somebody wrote a while ago. Yeah. Yeah, I think we don't cite that one.
We do cite State and Revolution as an example. You know, we don't think that the idea of the transitional state, where a couple of people have all of the power and also all of the economic power, is a good one.
That's a state where you don't have very much power.
And historically, your Stalin risks are pretty high here.
Your risks of, you know, drawing the wrong leader and putting them in the apparatus that you've built are pretty high.
Your P(Stalin), I guess.
I think in another essay, we called it P(Stalin) specifically.
Yeah.
It's not in this one.
It's not.
It's the piece on tacit knowledge, and we did in fact call it P(Stalin).
Okay.
Okay, okay, okay. Those are the goals. So how do we get there? I think one thing you cite, which is near and dear to our hearts, is Vitalik Buterin's defensive accelerationism. Maybe you could flesh that out as, you know, a part of the solution here.
Yeah, I guess the basic idea of differential technology development, or differential acceleration, whatever it's called this month, is that we can choose, to some extent, which order technologies arrive in. We can push the technologies we like, the ones that help us guard against risks and help humans,
and then hopefully we get those technologies before we get to the bad, worrying technologies.
For instance, we can probably make sure that by the time, like, you know,
ChatGPT can do a cyber attack for you, we've gotten to the point where, like,
our cyber defenses are good.
And by the point where the AI can design bioweapons, we've actually
hardened the world against bioweapons to the extent we can.
And, like, this is true in the avert section, and it's also true in diffuse and democratize.
I think the core spirit of most of our proposals is this thing of,
like, let's please build the technologies that enable the good things before we get to the threats.
And actually, by building those technologies and making them come faster, we can avert a lot of these risks.
A lot of these things are defensive, too, right?
When you talk about biosecurity, it's more defensively focused, and cybersecurity is kind of like defending from attackers.
You know, cryptography is sort of similar in that way, but we also need physical security.
AI alignment, of course, the industry is like focused on that.
But that's another element of the averting catastrophe here without centralizing.
All right, let's get to diffuse.
Okay, so what does diffuse mean?
To me, that's just making sure that everybody, every human, has AI superpowers.
You know, the example that the tech CEOs give us is that even Tim Cook doesn't
have a better iPhone than you, kind of thing, right?
We all have equal access to iPhones, and that's great.
So does diffuse mean we all have equal access to these models, and other people can't kind
of, like, take them away?
Is it, like, open source?
What are the practical ways to diffuse?
Yeah, so I think basically the thing you want to do is help as many people as possible benefit economically from AI as quickly as possible.
By the time the really radical AI hits, first of all, there are more people who are owners, more people who have built companies, stuff like this.
And then also you've distributed the technology's benefits more widely.
Everyone has gotten the AI power-up.
I like your phrase about, you know, everyone gets superpowers from AI.
And then, in terms of the grand strategy here, we have this
diagram at some point that shows these two stages of diffusion.
First, when AI is augmenting, you want to
diffuse AI, which helps create decentralization.
You diffuse AI so that it's not just the AGI labs that have the AI and use it to benefit
themselves; everyone in society has access to the AI.
And what this means is that you get decentralization,
because the benefits of AI have been more widely spread.
And then the fact that you have decentralized the AI helps you also
diffuse the AI, because then once the humans are automated, they're not automated
by the big AI labs with their own AIs, and they still control the fruits of the labor of the AIs that
they own.
How supportive are you guys, then, of open-source models and open-source weights,
and just, like, all of that kind of movement?
Is that a key?
Yeah, broadly pretty supportive, especially in a world where we've done a lot of the hard work here,
the hard work of, you know, proofing the world against the biggest disasters. Yes, exactly.
And I think to break this down concretely, this looks
like two phases. There's this first phase where, right now, we're on this track where agency actually
isn't that good, and yet everyone is investing more and more time into getting agents
better. This opens up this interesting market opportunity, where AI-augmenting tools are both
underinvested in and probably way better. Think about Cursor for a second. I know I keep coming
back to Cursor. I love those guys. Cursor is not a tool that does all of the coding for you
entirely. It is a tool where usually a software engineer who really understands what they're doing
is in the driver's seat, and it's enabled vibe coding.
It's enabled a lot of people who don't know exactly how to do it to still set the high-level direction.
But ultimately, you are in charge of what's happening.
You are steering the ship.
There's a huge market opportunity to build more tools in that space right now and expand the window of AI-human augmentation.
We don't think this is the long-term permanent solution, but going ahead and starting in that direction now can both tap into an untapped market and really focus on what we can do today.
What we're then excited about in the future, there's a whole
bunch of concepts here, but one of them is something like aligning models directly to the
user. Most people have some sort of, like, hidden knowledge that is very difficult to gather.
And if you ultimately want, like, the single superintelligence singleton, you're going to want
to have access to all that information. This gives you a wedge point, wherein maybe it is
the case that you aren't the perfect data source, because you are slow relative to your
AIs in 2050. But it could be the case that there are these AIs that are trained off of your
tacit knowledge, off of your data. They understand you, could behave like you, and can represent
your taste and judgment faster. And these
AIs are acting throughout the broader economy, interacting with other systems.
Maybe the systems are smarter, but you have access to the information behind that AI.
And so this is a world where, first, we've extended the augmented window.
And second, we've aligned systems directly to the users such that even as the systems take off,
they're still tied in a meaningful way to the user, and therefore the user gets compensated in some way,
shape, or form by their economic activity.
Okay, that's cool.
All right.
Let's talk about this last point, then, in more concrete terms.
So democratize, right?
So how do we do that?
How do we, you know, let the humans still maintain some power, right?
We're very used to, like, one person, one vote.
I mean, are you talking about concepts like maybe you have an AI lawyer?
Like, you have the right to some sort of AI lawyer or data model to represent you?
Is it like, how do we really, you know, ensure that democracy and human ability, political agency,
doesn't decrease in this world?
Yeah, so we take a very tech-centric perspective here.
This is not the essay in which we're going to go out and propose how we solve everything
in politics. But I think one thing that is underappreciated, again, is that if you push forward
technologies that make governance and like verification and coordination and trust easier,
then it becomes easier for society to decide to do the good things and to avoid the bad things.
So there are some ways AI might help with this.
In particular, the AI might help policymakers understand what voters think.
And then, in addition to helping you understand what policymakers think, the AI is going to advocate on
your behalf, especially if you have a model that is aligned to you in particular.
You can imagine that, you know, you can have provable guarantees that some
particular AI system is making a judgment in a way that is more incorruptible
than a human.
You can imagine the AI auditing information.
There's this fundamental difficulty with using humans to audit, which is that humans have long-term
memory, whereas with an AI you just, you know, have the AI
process the context and then delete it, and it returns a yes or no on
whether you're abiding by some protocol or, like, building a bioweapon.
So you can audit things without humans having knowledge about them afterwards.
And there are a bunch of ways like this where technology gives you building blocks for
governance that might be more effective and more representative of the desires of the people
than what we can do right now by just stacking humans into bureaucracies and having laws about that.
I think these three words give us a good framework for directionality.
You can't solve everything in one essay, of course.
Just one last lens and filter for avert, diffuse, and democratize.
Let's say one society kind of chooses to do this and
puts these things in place in a more intentional way, but another society
chooses not to. And there's this kind of
geopolitical race condition here,
which is that we're in some sort of arms race for AI.
Does your essay on the intelligence curse have anything to say about that?
Like, one society chooses to go in the direction of trying to solve the intelligence
curse, but another society races
faster, fully embracing the curse; they don't care.
Do the authoritarian, totalitarian societies basically win, no matter what?
And so are we kind of screwed even if we in the U.S., or we in the West, choose
these ways out?
So I think here is one of the places where the differential tech development approach is
really powerful.
It is fundamentally not about taking cuts to yourself, becoming less competitive,
and potentially being overrun by less safety-oriented actors. It is about developing the technologies
such that, if they exist, doing the safe, good, pro-human thing
is the winning strategy.
And therefore it shifts the equilibrium.
It's not reliant on coordination
with other actors.
I like that.
So we are doing the Munger thing
of trying to get the incentives correct?
Yeah, exactly.
Now, I will say, incentives aren't everything.
And I think we talk a little bit about some policies,
especially in the democratize section.
We talk about some of the more boring ones
you hear all the time,
sort of like campaign finance reform and reforming
anti-corruption laws and strengthening
bureaucratic competence.
And these all sound kind of boring today.
But a really key thing is, if you think incentives are about to get radically different, and the self-interest
of politicians might be much more powerful than it was in years previous, it is really important
that the leaders you're electing in the next couple of years are leaders you would
trust to make good decisions on your behalf in stressful situations. That integrity element
that we've kind of lost in modern politics is more important than ever. Because
oftentimes, one of the ways you can square the great man theory of history
with a more incentives-dominant view is that the great men of history
are often those who take a decision that looks against the incentives
and is ultimately the correct one.
You really want to maximize your chance
of getting one of those leaders
when critical decisions come down.
Because you can, and you should, spend
as much effort as you can to get the incentives right,
but you also really want to make sure
the person you have there is someone
who, at critical moments, might make a decision
that goes against those incentives
if it's important for your well-being.
So there's the boring answer
that you should vote for people who you actually trust.
But you actually should vote for people
who you actually trust.
Interesting.
Unfortunately, it also feels like we're short
of great men these days, at least in our politics.
This has been very fascinating.
I guess my question is this:
what should listeners do with this information?
Is there anything actionable?
I think it's a super valuable mental model,
and kind of like, hey, you might be out of a job.
But what do people do, personally, with this information?
What do you recommend listeners, you know, take action on?
Yeah, I'd just be a big fan of, you know,
go to the solutions section of our essay series.
We have a lot of specific tech ideas, you know,
so read through this.
If you're someone who wants to build something,
go and build something off this list,
or, if you read the list
and have your own idea for something
that advances these same goals,
go build that out.
Because, like,
if we build the right technology,
that makes the equilibrium
the good one.
Yeah, there's this meme
I hear a lot
in some of the, like, AI safety communities
that, oh, if it's something that has market value,
the market will solve it.
But the market is made up of people.
People are in the market, and they do things.
And so, if you're going to do differential tech development,
then some startup founder has first got to wake
up and decide, okay, I'm going to go build this thing.
And a VC has got to decide to back them.
And we're not, you know, we're not just pontificating here.
We can talk a bit more about it in the future,
but the two of us are currently actively involved
in taking a slice of this agenda and building this out ourselves.
So we're going to go down this rabbit hole ourselves.
Because, I don't know, it's really easy to point out a problem;
it's pretty hard to build a solution.
What we're much more excited about is building out the solution space.
But I think there's stuff for people who aren't just in the tech community.
There are policies that government should be thinking about enacting today.
We call, for example, for an Operation Warp Speed for d/acc-style technologies, the kinds of things that could actually prevent major catastrophes and enable this, like, culture of innovation and democratization.
Governments could be incentivizing that right now.
I mean, Trump's first term did Operation Warp Speed.
The right people are in place to do something like that again, these massive moonshots.
I think voters can start really thinking critically about what's going to happen in the next couple of years and electing politicians on those grounds.
And if you're a student, I think I have a blog post somewhere talking about, depending on who you are, what career decisions
you might want to be making. Because again, if you think diffusion pressures are real,
even if they take a while to accelerate, there are still some roles that are pretty
obviously going to go first. Doing bigger, bolder things right now, things that actually
require you to learn how to fail and be your own actor, is really great preparation for a world
where you might be able to command an army of agents or a lot of augmentative tools,
even if the big companies aren't hiring junior analysts anymore. So there's a lot you can do
right now, today, to, at a macro level, get us on the right path, or, at a micro level,
orient yourself for the coming wave.
It's great.
So Luke and Rudolf, we spent the last 90 minutes kind of defining the intelligence
curse, walking through it, providing solutions.
I would love to end this on a positive note on something a little optimistic.
What happens if we solve the intelligence curse?
I mean, what is the payoff?
What do we get for solving this problem?
We are talking about what could be the greatest technological revolution in humanity's history,
at least relative to anything previously, and maybe, like, the final huge one.
The promise of that is honestly hard to
fathom. It's things like curing diseases we couldn't imagine curing, actual total abundance,
unlocking crazy amounts of resources, really being able to provide to everyone what,
as of this year, would have been an experience available only to the ultra-elite. That is a world
that I want to be able to live in, where we can do things like, you know, abolish poverty and
abolish disease. And if we can get this right, the promise of artificial intelligence is that,
instead of having less agency and less control over your world, you get more, with a whole lot
fewer of the drawbacks. I don't know. That's a vision I'm pretty excited about. That's a vision I can
be very excited about, too. I want to thank both of you, Luke and Rudolf, for joining us
today, walking us through everything. I know you guys mentioned you were working on some stuff. Where can
people find you? So we've got, I think, a contact form or an email on intelligence-curse.ai.
We're both also on Twitter. I think my handle is Luke underscore Drago underscore.
Rudolf, yours is? Mine is currently at L-R-U-D-L underscore. Yeah. So if you want to reach out to us through
the contact form there, or if you want to reach out to us on Twitter, we're both pretty active,
unfortunately. We definitely tweet a little too much, and that's all right.
Hey, for better or worse, but we very much appreciate you joining us today,
walking us through this entire intelligence curse. I'm sure everyone listening now has a lot to chew
on, a lot of interesting new questions to ask, a lot of new things to consider, whether it
be the exciting case, the optimistic ending that we landed on or any of those varied outcomes
that we also discussed on the show. So, Luke and Rudolf, thanks again. Thank you so much for
joining us on the episode today. Thank you so much. And I guess the last thing I would say is I'm super
optimistic and super excited about where this could go. So I think even if we know about the problem, we also
know how to solve it, or at least we have a guess at how to solve it. I think people should get more excited
about jumping at that solution. Awesome. Yeah, please build the tech that will save the world.
Absolutely. I love that. That's a really optimistic note to end it on. So thank you again and
appreciate you guys taking the time. Take care. Thank you.
