Bankless - 197 - Software Eating the State with Samuel Hammond
Episode Date: November 20, 2023

Is software eating the nation state? Technologies like crypto and AI are about to fundamentally restructure society from the bottom up. We brought economist and writer Sam Hammond on to predict how society will be reorganized under this new paradigm and how those in power will respond. Sam Hammond is an economist for the Foundation for American Innovation, a think tank advising policies in DC whose slogan reads, “Build Tech. Promote Freedom.” Freedom and Tech. We like those two things on Bankless - and we're also concerned with the project of how to protect them in a world where software starts to eat the nation state (and the nation state feels threatened).

------
✨ DEBRIEF | Ryan & David unpacking the episode: https://www.bankless.com/debrief-sam-hammond
------
🏹 Airdrop Hunter is HERE, join your first HUNT today https://bankless.cc/JoinYourFirstHUNT
------
BANKLESS SPONSOR TOOLS:
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2
🦊 METAMASK PORTFOLIO | MANAGE YOUR WEB3 EVERYTHING https://bankless.cc/MetaMask
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
🔗 CELO | CEL2 COMING SOON https://bankless.cc/Celo
👾 GMX | V2 IS NOW LIVE https://bankless.cc/GMX
💲 USDV | NATIVE OMNICHAIN STABLECOIN https://bankless.cc/usdv
------
TIMESTAMPS
0:00 Intro
6:48 Sam’s Take on Crypto
9:50 Nation State’s Hate on Crypto
14:53 Post-Printing Press
20:55 Crypto, AI, & The Internet
25:40 Tech Good, Bad, or Neutral?
32:03 Bracing For Impact & Tech Debt
38:15 Nation State Competition
46:39 Dangerous Tech
50:00 The Network State
1:04:55 AI Executive Order
1:12:28 China’s AI Approach
1:15:20 AI’s Double Edged Sword
1:33:45 AI & The Leviathan
1:48:23 What Technology Winning Means?
1:50:58 Closing & Disclaimers
------
RESOURCES
Sam’s Substack, “Second Best” https://www.secondbest.ca/
AI & The Leviathan https://www.secondbest.ca/p/ai-and-leviathan-part-iii
The Dynamist - Podcast https://podcasts.apple.com/us/podcast/the-dynamist/id1528920211
Eliezer Yudkowsky Episode https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky
------
Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
As technology accelerates and crypto is very fast and AI is even faster,
governance isn't going to be able to keep up, hence where we end up, which is just technology just wins.
And that makes you optimistic, Sam?
It makes me...
Welcome to Bankless, where we explore the frontier of internet money and internet finance.
This is how to get started, how to get better, how to front run the opportunity.
This is Ryan Sean Adams.
I'm here with David Hoffman, and we're here to help you become more bankless.
Guys, the bankless journey takes us to some interesting places, and this is certainly one
of them in today's episode.
The topic today is software eating the nation state.
Technologies like crypto and AI are about to fundamentally restructure society from the
bottom up.
We certainly believe that at bankless.
An economist and writer Sam Hammond is our guest today, and he's predicting how society
will be reorganized under this new paradigm and how those in power will respond.
What will the nation states have to say about it?
A few topics we get into today. Number one, we talk about crypto. It's under attack in the U.S.
And Sam has a framework to help us understand why. Number two, we talk about America.
It was once pro-tech, pro-internet. Now it's anti-crypto and anti-A.I. What changed?
Number three, we talk about AI. How is the world responding to this new technology?
How can AI further unlock crypto? And number four, we end with Sam's predictions that are concrete,
very concrete, for the next two decades, which Sam says will be weird,
including what happens to the U.S. and what happens to countries that embrace internet technologies from the firmware level up.
Themes of this episode, technology and freedom, and how we balance those.
And David, of course, those are themes we often express on bankless.
David, why is this episode significant to you?
We always say in the intro, we're here to front run the opportunity.
And this is one of those, I would say, classic bankless episodes, where this knowledge is definitely extremely valuable for being prepared for the future,
but also you're going to listen to it and tomorrow's going to be totally the same.
Like you're not going to be able to do anything that's totally front running the opportunity.
But the world will be significantly different in the future and it will be different in very specific and unique and directional ways.
And episodes like this, I think, help map out the contours of what that future chaotic world looks like.
So while tomorrow will be unimpacted for you, bankless listener, understanding how crazy the future is, as early as
possible, will still nevertheless help you prep for it. I think this episode with Sam, where we
navigate concepts like technology and the state in the year 2023 versus technology and the state
in the year 2040, and how those change over time, and the role that AI has to do with this.
We should be knowing these things more or less. We should be fluent in these conversations
because as AI does progress in capabilities and consumer products as it's at your fingertips,
bankless listener, the realities that Sam is illustrating here on this episode today will become
more and more relevant. Speaking of being fluent, there was a term used in today's episode. Sam used it
called e/acc. That stands for effective accelerationism. If you've not heard it before, it's a term you
might be hearing in the future as well. It's this idea of techno-optimism of welcoming the future.
It's a movement that wants to accelerate our technical progress as a species and not slow it down.
All right, guys, let's get right to the episode with Sam, but before we do, we want to thank the sponsors
that made this possible, including our friends and sponsors over at Kraken, which is our number
one recommended exchange. And if you don't have a Kraken account, you need to create one right now.
Click a link in the show notes. Kraken knows crypto. Kraken's been in the crypto game for over a decade.
And as one of the largest and most trusted exchanges in the industry, Kraken is on the journey
with all of us to see what crypto can be. Human history is a story of progress. It's part of us,
hardwired. We're designed to seek change everywhere, to improve, to strive.
And if anything can be improved, why not finance?
Crypto is a financial system designed with the modern world in mind.
Instant, permissionless, and 24-7.
It's not perfect, and nothing ever will be perfect.
But crypto is a world-changing technology at a time when the world needs it the most.
That's the Cracken mission, to accelerate the global adoption of cryptocurrency,
so that you and the rest of the world can achieve financial freedom and inclusion.
Head on over to kraken.com slash bankless to see what crypto can be.
Not investment advice, crypto trading involves risk of loss.
Cryptocurrency services are provided to U.S. and U.S. territory customers by Payward Ventures Inc.,
PVI, doing business as Kraken.
MetaMask Portfolio is your one-stop shop to navigate the world of DeFi. And now bridging seamlessly
across networks doesn't have to be so daunting anymore. With competitive rates and convenient routes,
MetaMask Portfolio's bridge feature lets you easily move your tokens from chain to chain,
using popular layer one and layer two networks. And all you have to do is select a network
you want to bridge from and where you want your tokens to go. From there, MetaMask vets and
curates the different bridging platforms to find the most decentralized,
accessible and reliable bridges for you.
To tap into the hottest opportunities in crypto,
you need to be able to plug into a variety of networks,
and nobody makes that easier than MetaMask Portfolio.
Instead of searching endlessly through the world of bridge options,
click the bridge button on your MetaMask extension
or head over to metamask.io slash portfolio to get started.
Arbitrum is the leading Ethereum scaling solution
that is home to hundreds of decentralized applications.
Arbitrum's technology allows you to interact with Ethereum at scale
with low fees and faster transactions.
Arbitrum has the leading DeFi ecosystem, strong infrastructure options, flourishing NFTs, and is quickly becoming the Web3 gaming hub.
Explore the ecosystem at portal.arbitrum.com. Are you looking to permissionlessly launch your own Arbitrum orbit chain?
Arbitrum allows anyone to utilize Arbitrum's secure scaling technology to build your own orbit chain, giving you access to interoperable, customizable permissions with dedicated throughput.
Whether you are a developer, an enterprise, or a user, Arbitrum orbit lets you take your project to new heights.
All of these technologies leverage the security and decentralization of Ethereum.
Experience Web3 development the way it was always meant to be.
Secure, fast, cheap, and friction-free.
Visit arbitrum.io and get your journey started in one of the largest Ethereum communities.
Bankless Nation, we are incredibly excited to introduce you to Sam Hammond.
He is an economist for the Foundation for American Innovation,
which is a think tank that advises policies in D.C.
And if you go to their website, their slogan reads,
build tech, promote freedom. I think freedom and tech, these are two things we definitely like
on bankless. And we're also very much tied up with and concerned with the project of how to
protect them in a world where it seems like software is starting to eat the nation state.
And the nation state is maybe feeling a little bit threatened. That is the topic for today.
I'm sure we'll touch on AI, crypto, economics, polarization, and how politics really works.
Sam, welcome to bankless.
Thanks for having me. Sam, I know your primary subject matter is not crypto, yet this is a crypto podcast.
Though bankless listeners know we have had some recent flings with AI, and we touch on economics, sociology, politics; basically all of these things are crypto adjacent.
But since this is, I guess, officially a crypto podcast, I feel compelled to ask this question as we begin.
Sam, what is your take on crypto?
If I pull back a bit, I think there's a bigger take on crypto as an example of counter-economics, or using technology
to advance freedom, sometimes under the label agorist.
Right.
And I think there's crypto summers, crypto winters, but the general idea that we can use the technology to advance social change is, I think, a correct one.
And obviously, it's something that's much more powerful than lobbying, you know, or writing manifestos here or there.
Technology is an interesting exception to the rule where if you can invent a better way of doing things, you don't necessarily need permission to put it out into the world.
And that technology can lead to dramatic social changes, right?
There's a famous book that argues that the invention of the stirrup was what led to feudalism, right?
This one small invention led to the ability for horseback knights to control land and so on and so forth.
And so like inventing technologies to advance freedom is a great idea.
Crypto is an awesome application of that into the monetary realm.
And I think really it's been just looking for its killer app.
And so I'm also interested in crypto as a potential solution to a lot of the issues that AI is creating.
in that sense, you know, AI creating the conditions to give blockchain a kind of functional use case.
You know, we think about this in terms of content provenance and tracking deep fakes and so on and so forth.
And so I think these two movements, the sort of people working in AI and the people working in crypto have a lot to learn from each other and potentially collaborate on solutions that combine those two fields.
You know, think about like genuinely smart contracts.
You know, if we had smart contracts that were running little AI agents inside them.
Yeah, absolutely.
A theme that we'll return to throughout this episode is maybe a major one, which is technology's
ability to impact social change. And when social change happens, there are winners and losers
in the outcomes of that social change. And I think one thing that maybe AI has a parallel with
crypto today is crypto very much feels like it's under nation state attack. I don't know,
Sam, if you've heard about Elizabeth Warren's anti-crypto army, we've got privacy developers in jail,
the developers of Tornado Cash, for instance. We've got Gary Gensler, regulator at the SEC. He thinks
everything is under his control, everything that's tokenized is his. I guess I have a question for you
that is general. Maybe you can help us make sense of this. Why does it seem like the nation state is
attacking crypto? Is it all in our head? Or is this related to the idea that you opened with, this
idea that technology impacts social change and restructure societies? And some people don't like that
change and are resisting it? Is that what we're experiencing in crypto, some of that resistance?
Yeah, I mean, we'll get into this later, but liberal democracy, as we understand it,
in the modern nation state, is a product of a particular technological equilibrium.
Like, institutions are adapted to the state of technology. And therefore, you know, it's one
thing if you're inventing a better mousetrap, but if you invent new firmware to run the economy on,
you pose a direct threat to the sovereign, so to speak. And so I think it's not surprising
sort of political theory 101, that if you pose a direct threat to the sovereign, the sovereign
is going to hit back.
And in some ways, it's sort of like a huge endorsement to crypto.
Like, the more there is this attention from the Gary Genslers of the world, the more
you should think that, you know, there's something there; you know, where there's smoke, there's fire,
that this actually poses a sort of challenge to their regulatory oversight.
And it's also interesting.
Gensler's also had a foray into AI now where he's brought up AI as a potential
systemic risk to financial markets. He's done the AI pivot as well, hasn't he? Sam, aren't we all?
Sam, if I wanted to reinterpret what you said, I think the meaning I got out of that is that the state
lags technology, as in like technology moves forward and then the state has to catch up to it, right?
There's some sort of equilibrium about the inventions that humans produce, and then the state
naturally responds to that new equilibrium. They are not promoting, they are not the pinnacle of society,
right? They have to catch up and establish new governance over the system. And so one thing I see as, like, a
big friction between the state and then also AI and crypto, crypto and AI paired, is that crypto is just so
fast. If you exist in this industry, we move so quickly; we go in four-year cycles. But if you were
looking at crypto four years ago, it's unrecognizable compared to where it is today. And I kind of think
AI is more or less the same. AI is also just increasingly accelerating in its velocity, in what it can
impact and change in society. And there seems to be a friction here, right? Well, if technology is moving
faster and faster and faster and the state is always lagging technology, there seems to be just a
friction that is produced between that gap. Do you see that friction as well? Oh, absolutely.
I mean, we're talking about exponential technologies running up against linear or even sublinear
institutions. There's going to be a natural point where, you know, they fall out of step.
Congress is only just now getting its hands around social media and arguably not even that.
So it does pose sort of, you know, Peter Thiel has this like indefinite optimism, definite optimism, indefinite pessimism, definite pessimism.
Technology provides forms of indefinite optimism, right?
We can just imagine the better world with technology.
But for people in incumbent positions in government or other institutions, even incumbent corporations that could be disrupted by technology,
there's a sense in which there's like an ominousness to new technology that they have to do something about by
default because it poses a threat to existing ways of doing business.
And I don't think this is any different.
Like, nation states, not only do they act slowly, but, you know, think about our administrative
bureaucracies.
Like, relative to other nation states, they act even slower, right?
We have, if you want a new regulation, you go through multi-year notice and comment periods
where the public and really by the public, we mean lobbyists get a chance to give their two
cents.
And then that has to go through interagency review process and that whole thing gets challenged in court.
You know, New York can't even introduce congestion pricing without being sued for multiple years.
And so ironically, that slowness actually creates a demand for alternative modes, right?
Alternative modalities.
You know, if you can exit the system, so to speak, and accelerate through a different track,
through an alternative economic arrangement, through alternative distribution channels,
then you can just, like, end-run the entire system.
And that's fundamentally why I think there's been this crackdown both on, you know, social media and new media from sort of incumbent media groups
and also from the incumbent financial interests, a crackdown on crypto,
it's not necessarily that they see the technology as inherently bad.
It's moving quicker than they can keep up with.
And so their solution is to try to slow things down
so that they can catch up rather than just letting things rip.
So you're saying their solution is to slow things down,
although sometimes it feels like they're doing more than slowing things down, Sam.
They're trying to actually stop things.
They're trying to shut them down.
They're trying to ban them.
They're trying to make them illegal.
And the thing that has surprised me, I guess, a little bit is you said that technology can often bring a political threat to the sovereign.
And yet I would assume that in some societies, that political threat would be more severe than others.
For instance, America, at least American values, as purported on paper in like, you know, its best light, we don't often live up to these values, is a nation state and a society that is supposed to value technology.
It's supposed to value freedom.
These are supposed to be enshrined in our values.
And so the pushback from the U.S. on crypto in particular has been a little jarring for me.
I guess maybe some might say that's naive.
Like, Ryan, you should have anticipated that.
But it just feels very much in conflict with the values that we put in the American white paper.
I'm wondering if you could reflect on that.
And then I'm hoping we can dig into these words, like freedom and technology and what they actually mean and why they're important.
Yeah, I mean, if you look back at history or other transformative technologies, take the printing
press, right? Printing press, early 1500s. It really doesn't start to diffuse, though, until the next
century in the early 1600s. That's when you start to see every little township, every little,
shire in England having their own printing press. And actually, up until that point, the UK Parliament
had an ecclesiastical licensing regime. If you wanted to print a book in England, you had to
get the licensing board's permission. And it was a bunch of priests that, you know, made sure it didn't
offend anybody. And that was consistent with the Church of England and so forth. Once
printing reached a kind of point of criticality, where it was so diffused across the countryside,
enforcement completely broke down.
You know, the licensing board collapsed.
There was this explosion of new publications, you know, the nonconformists, the Presbyterians, the new Puritans, all these minority religious groups that finally had a chance to speak.
It's very reminiscent, actually, of like, what the Internet did.
And what happened out of that was there was, you know, the civil war, there was also the broader wars of religion in Europe.
And at the other end, we get modern nation states.
And in particular, we get the founding of the American Republic.
And in some ways the American Republic was not just sort of a founder country,
but it was also the first sort of printing press native country, right?
The founders of the United States were sort of post-printing revolution
and designing the society around that reality.
And so, you know, one hopes that we can absorb these new technologies
consistent with our existing institutional setup.
But then that brings in people like, you know, Balaji Srinivasan and others who sort of take
this as inspiration that we need to found new institutions
to be native to the internet age, to the crypto age, the AI age. And that's going to be a running
tension, I think, through this century. What institutions are able to evolve with technology and
where do we need to just displace and start over? So that's a really interesting framing of the
U.S., which is like one of the first societies built on a post-printing press set of protocols.
And I guess what you're referring to, Sam, is maybe some protections in the Constitution,
you know, sort of the enshrinement of freedom of speech and First Amendment type protections. Is that
what you mean? Yeah, absolutely. And also just the milieu, like it was the Republic of Letters,
like the Federalist Papers. This was like 17th century Usenet. Well, it's very interesting because
now we have these post-printing press societies, with America, you know, now close to 250 years
old, but we don't have any societies that have been built on the native protocol of the internet.
Maybe that gets into some of what Balaji is basically saying, is that our existing institutions
are sort of fraying around the edges. And maybe that's what you're saying as well, Sam.
The governments of the world haven't been able to adapt to technology fast enough.
So the protocols aren't like internet native, tech native type protocols.
Is there something to that?
Oh, hugely.
I mean, you know, take social security numbers, you know, nine digit number that was created in 1935, right?
It's no wonder our system is so full of fraud.
The IRS individual master file, which basically runs the entire tax system, dates to the Kennedy administration and is written in assembly.
You know, we had during the pandemic this explosion in unemployment rolls, and people were lining up around the block because literally, like, the websites would go down at 9 p.m.
You know, the amazing thing about the internet is it doesn't have to have business hours, but it had to in this case because our systems are outdated.
So like purely on a technical sort of institutional firmware level, we have this growing chasm between sort of what you could describe as like the UX experience between the public and private sector, right?
We get annoyed if our Uber doesn't arrive in three minutes when that barely existed five, ten years ago.
And yet we are sort of like masochists about like government inefficiency where we just have come to expect it.
It's par for the course.
But obviously the DMV line is eternal and inherent.
Yeah, and I'm sure the DMV these days is better than it was in the 70s.
But again, this is sort of linear improvements against exponential improvements in the background.
And the more these things get out of step, the more we will not have sort of a gentle transition,
and the more there will be just things breaking.
Sam, there's a passage in one of your articles that we were reading today for this podcast.
I want to read because it's about the continuity of the printing press into the Internet
and how these similar technologies have shaped the contours of nation states.
So your passage reads,
For generations, it was an implicit assumption of U.S. foreign policy
that economic liberalization and free trade would push autocratic regimes to democratize.
Information technologies that give voice to the voiceless were key to this grand strategy.
a kind of glasnost in a bottle.
This is why the Department of Defense invested so heavily in the early Internet
and why our international development agencies promoted Internet access abroad.
As one USAID report puts it, connecting people, transforming nations.
The idea that technology can precipitate a regime change is thus not foreign to U.S. political establishment.
They just assumed our relative openness would keep those dynamics from playing out at home.
And so this kind of puts a specific light on the United States as a...
country of, you know, doing the things that are in line with American values, promoting these
technologies of the printing press and the internet to spread freedom. But as Ryan opened up with this
podcast, we don't seem like we're getting that same kind of treatment with crypto. And from what
we're seeing, we're also kind of not getting that same kind of treatment with AI as well.
And so I'm wondering, Sam, like, why is crypto and AI different from the internet when the
American establishment way back when was so apt to promote the internet? Like, what changed?
I don't think anything changed. I think there's always been a deep hypocrisy in how we approach this geopolitically.
Similar to how Reagan talked a big game about free trade and cutting the budget while running bigger deficits and actually doing his own defense industrial policy.
So we've always sort of said one thing and done another. And the same is true in the internet.
Like the internet, yes, we promoted abroad. We use it to help the dissidents in Iran rise up against the Shah or whatever.
But when those same trends play out at home, our establishment treats it as a problem.
They treat it as, you know, this growing cancer of disinformation.
And once you sort of like step back and looked at it, it's so obviously cynical, right, where we have most recently the New York Times rushing to publish a headline about Israel blowing up a hospital with no verification, right?
Like spreading major disinformation to a massive audience and just, like, washing their hands of it with impunity.
Meanwhile, like if one random person tweets something that's false on Twitter, it's like this indicts the
entire concept of social media. So I think even in the realm of the internet, there is sort of a
be careful what you wish for dynamic, where as the internet was being developed, we had all
these highfalutin values around openness and transparency and connecting people. And really,
that tune changed. I think Balaji's also documented this quite extensively. The kind of techlash really
set in post-2015, 2016, when we saw how the internet was also making it possible for outsiders
like Trump to kind of hack the political game. And suddenly, the same people who were writing
articles about how the Obama campaign is, like, harnessing Facebook to, like,
reach likely voters are basically taking the exact same story and, like,
switching the valence, and all of a sudden it's like, you know, populists are using social
media to destroy democracy.
So I don't think actually there's a disconnect between how the elite talks about the internet
and these other technologies.
I think what really changed was 2016, making them wake up and realize that this isn't
going the way they planned.
and in some ways the technology is sort of an acid, an institutional acid that dissolves corruption
in the same way that the printing press allowed us to speak up against the corrupt church and so forth
is coming for them.
And that makes them very uncomfortable.
And so that's where you start to see, you know, new attempts to try to regulate social media
and to sort of create a kind of corporatist framework so that the government can do via Facebook and Twitter
and these other platforms, things that would be illegal for it to do itself.
And so crypto and AI are just the next iteration in that exact same trend.
So basically our elites like when technology drives social change,
they just don't like when that technology and social change changes them.
Is used against them.
Or diminishes their power in some way.
But I want to make sure we hone in on the case because I do think you're right in the U.S.
and in other places around the world, there's this tech backlash.
And I think whenever there's kind of a backlash or there's some momentum for an idea,
we have a tendency to get into polarized extremes.
And I'm wondering if you think that the right word for this, and the right approach, is balance.
Let me give you kind of two extremes in sort of the social fabric of the U.S. and the narratives that are playing out.
The question is really, what about tech?
Is technology good?
Is it bad or is it neutral, right?
And I want to ask your take specifically about that, Sam.
And maybe there's two counterpoints here.
One is kind of the Marc Andreessen view.
I don't know if you saw his recent techno-optimist manifesto
and have thoughts on that, but effective accelerationism is a related topic. And this is a pro-tech
type of movement. Marc Andreessen starts with this. He says, lies, we are being lied to. We are told
that technology takes our jobs, reduces our wages, increases inequality, threatens our health,
ruins the environment, degrades our society, corrupts our children, threatens our future,
and is ever on the verge of ruining everything. We're told to be pessimistic, angry, bitter, resentful.
And he flips this. And he says, the truth is, our civilization
was built on technology. Our civilization was built on technology. Without technology, we are nothing.
This is our manifest destiny. This is quite the passionate manifesto call to arms from Marc Andreessen.
And that is very much putting forth the idea that tech is good. And then also we have, I think,
you know, many in the political establishment, but even among kind of the tech thinkers, like the
Eliezer Yudkowskys of the world, which is basically this idea that tech in general,
but AI specifically, is dangerous.
Like, we're going to kill ourselves.
It invokes ideas of Oppenheimer and Prometheus and, you know, Frankenstein's monster and
humanity could create this Leviathan thing that could destroy us all.
And we have to be very careful.
And we need our parents, our managers, our governments to really oversee this for us.
Is there some balance to be found in those two worldviews?
Or how would you answer that question of, is tech good, is it bad, or is it neutral?
Yeah, I would describe myself as a techno-realist rather than either optimist or pessimist.
And I think that that's what we need today, especially with AI.
Technology is always double-edged, especially technologies that affect what it means to be human, right?
The kind of things that touch on the human condition, like sort of attempts to build sort of transhumanist man.
What could go wrong?
So I think it's always complicated.
I've done a lot of writing on how technology influenced the development of institutions.
And, you know, the exact same technology that we thought, you know, spreading
the internet abroad would lead to freedom, led to the Arab Spring, right?
If we were having internet safety discussions, circa 2010, we would have been talking about, you know,
child online safety or identity theft.
We wouldn't have been talking about how, you know, social media would lead to mass mobilizations
that would topple governments in Cairo and Tunisia and China was watching that the entire time
and responded by using the same technology not to open up their society, but to build
an unprecedented digital surveillance state,
a panopticon. So technology, it's not even fair to say it's neutral. It's just, it's a fact of the matter.
And how we adapt to it is really the open question. What values do we approach it with? And how do we
balance the sort of ways that technology shapes the cost structure of institutions in a way that
preserves the things that we value, like an open society, like freedom, like privacy? Those things
don't just happen automatically. They happen because humans with their agency have designed things
that way. So Sam, maybe we could steal man the argument a little bit. So if what you say is true that
technology is a double-edged sword, then why not slow it down? Maybe the regulators are right. Let's
slow things down to figure out what the use cases are and, you know, gradually inject it into our
society. Is that a good approach? Are they maybe the good guys in this story if technology is a
double-edged sword, if there's harm as well as benefit? Well, there are cases, right? Like,
you'd be hard-pressed to find somebody who's so e/acc that the person who
develops the first open-source pathogen synthesis platform, you know, should just rush and put that
out on Hugging Face. Like, there are issues there, right? And part of my message is the more we can
restrain ourselves voluntarily from rushing into that, sort of having like the George Hotz kind of
chaotic-neutral sort of mentality, the more we can also push off inviting a backlash, right?
You know, often technologies that empower the individual paradoxically lead to a kind of Hobbesian
crackdown on that very technology if they're not introduced in a reasonable way. And this isn't
something that regulators can do. Regulation is a very blunt instrument. And also the whole notion of
slowing down, like this AI pause letter, for instance, you know, it was worth the pixels it was
written on, right, because nothing paused. And even with all the tech CEOs of those companies
agreeing that there are existential risks, nothing paused. Why did nothing pause? Because we're in a
kind of prisoner's dilemma. We're in a kind of competitive arms race. And at this point, with the technology
like AI in particular, we should shift the frame from how do we stop it to how do we brace for
impact. Because I do think it's going to be incredibly disruptive in good and bad ways.
We just talked about the printing press. The printing press came out all right, but it also led to
the bloodiest wars of religion that Europe has ever seen where like a third of the population died.
That's not great. But it's also something that we can't stop. It's something you have to go through,
right? What does bracing for impact look like to you? If regulation is such a blunt instrument
and we desire something a little bit more precise and a little bit more effective.
How do we do that?
Well, this sort of gets into my AI and Leviathan series where I talk about these dynamics.
The way I've been putting it is we kind of have to get to Estonia.
Like people have often said, you know, getting to Denmark is the end goal.
We need to get to Estonia.
Why Estonia?
Well, Estonia, we were just talking about sort of America as a printing-press-native country.
Estonia is an internet-native country.
You know, after the fall of the Soviet Union, they had basically a blank slate.
And they also had this enduring threat of Russia on their border.
And the civil servants in Estonia in the mid-90s were very young.
They kind of had a hacker ethic.
And they built the first and to this day still most sophisticated e-government in the world.
You know, they actually pioneered an early version of blockchain.
It wasn't called blockchain.
It was called X-Road.
But their entire government is basically built on a blockchain.
It's a distributed data exchange layer where if Department of Education blows up one day,
your files aren't lost because they're distributed all around.
And the whole process that enables, it unlocks like all kinds of automation, right?
Because when your child is born, you file a birth certificate,
and your child is automatically enrolled in school when they turn five or six,
because the system is just able to do that automatically.
And so they're both able to fortify their institutions against, in this case,
the threat of cyber attack from Russia,
but also build in the process a sort of government-as-a-platform where they could open up government through APIs
for private actors, for e-banking, all kinds of stuff, property titles,
all that stuff is able to sort of integrate with government databases in a way that actually
shrunk the employment footprint of their civil service.
And so that's like where we need to get to.
They're like the polar opposite of us in terms of like the tech stack in their government.
Right.
It kind of seems like Estonia would be the government that you would make if you were making a
government in the most modern era possible.
Like what if you were charged with building a government using the technology that we have today?
Right.
It sounds like that's kind of the image that you're illustrating.
Right.
And as the AI wave washes over and we start having AI doctors and AI lawyers and AI accountants
and major job dislocation and so on and so forth, malware bots that are
spreading across the internet and attacking our infrastructure, like we're going to be wishing that
we had the cybersecurity that Estonia has built for itself, not just for security and defensive
purposes, but also for the new kinds of public goods that they're going to be able to build
on that government as a platform. One of the dichotomies that I really think separates
crypto finance from banking finance, and Gary Gensler would love to conflate these two things
because he thinks the dollar's already digital. But our commercial banking layer and also our
government are still like pen-and-paper logic systems. Like a lot of the idiosyncrasies of the commercial
banking layer come from the fact that, well, they still have this pen-and-paper logic embedded in
them. The pen-and-paper logic is just now in digital form, but that doesn't make it a new-age
platform. And so I think, kind of what you're saying, and what I'm seeing as friction here, is that like
we have so much like tech debt in our social structures, in our financial system, in our government,
that progressing forward doesn't seem to be the correct path
because we have to go back, like remove the debt,
and then rebuild a social structure using the technology that we have.
Is this what you see?
Yeah, it's sort of like an innovator's dilemma, but for governments, right?
And the issue with the innovator's dilemma for governments
is governments are the one institution that's not allowed to fail.
And, you know, the beauty of the market,
and I think Joseph Schumpeter had the best way of understanding,
you know, creative destruction in this context,
is not that markets lead to, you know, market-clearing prices
or that competition pushes down prices and increases product quality.
That neoclassical story is important, but it's not, I think, the main thing.
The main thing is that markets create a space for new kinds of organizations to be built.
And if you walk outside today and you look around, you don't see the market.
The market doesn't exist.
You see organizations, right?
Markets are only this abstraction, this liminal thing that exists when two organizations contract with each other.
but really what the world is is a bunch of different organizations.
There's government organizations, there's private organizations.
And one of the reasons private organizations tend to be more competent is not magic.
Like corporate bureaucracies and government bureaucracies both have some of the same pathologies.
The difference is that private organizations are contested.
They're always at risk of the startup that comes and displaces them if they're not on their heels,
assuming they don't have a legal monopoly.
And that contested nature is the most important thing because you can
have very big steel companies that have that innovator's dilemma, that have all these legacy
processes that they're unable to reform from within, and just disappear in a matter of a decade.
You know, their market cap wiped out, all those resources reallocated to the new upstart,
all that talent able to be brought to bear in a new company with a new form.
That can't happen with governments.
The most scary thing is the more eggs we put in the government basket, the less we can
count on sort of competency to happen by magic, right? And I think this was sort of the original
insight of the founders and why they wanted to have the separation of powers, right? So the government
can do just the basic things, sort of set the basic framework of rights and so forth,
because the core institutions that we have to rely on are inevitably going to change. And it's
much better that those change in a dynamic setting than to have to undergo complete system failure and
be rebuilt from scratch after some, you know, revolution.
This is an interesting point, I think, may as a side point, Sam, but you say that governments aren't contested, and I can see how that's very much true in the short run and maybe the medium run. But is that also true in the long run? Or do you also think basically nation states, this is kind of a sovereign individual type of idea, but nation states are actually competing for talent, for populations, for economics, for demographics against one another in this whole world game that we're playing?
Oh, absolutely. I see the world through an organizational lens. And the
kinds of organizations we have are shaped by transaction costs, per Ronald Coase's original theory of the firm. Like, why do we have corporations in the first place? Well, because some things are more efficient to do in-house and some things are more efficient to contract over. And, you know, those transaction costs, the cost of bargaining, the cost of search information, the cost of monitoring, you know, if you have a contractor, you don't know if they're doing the job well, because you don't have direct monitoring over them. That determines the boundary of the firm. It also determines the boundary of the nation state, right? Like, there are certain things that countries do. All countries, most Western
countries, provide health insurance. Only a few, like England, provide actually nationalized
health care. What countries are doing is solving for the market failure in insurance. They don't
care about private practices and private doctors and private hospitals and so forth.
So understanding it through that lens, you start to see this echo between the kinds of corporate
governance we have in the private sector and the kinds of governance we have in the public sector,
right? Like social democracies like Denmark or Sweden are kind of like mutual insurance companies.
They're kind of cooperative, right? Democracy is also kind of
like a co-op, or like a members' co-op, where we all get one vote. Then you have countries like
Singapore that are much more sort of like a joint-stock corporation, where they have some
trappings of democracy, but really it's a very hierarchical, top-down system. And that system can make
sense because they're a small country. They were literally a trading post of the East India Company,
right? So they have this sort of corporate legacy to their institutions. And when you're a small
country, you really do feel that competition, right? Because you're fighting over capital,
like you're fighting over foreign investment.
Like you said, you're fighting over people and talent and so forth.
And so there's actually this well-known phenomenon,
a sort of stylized fact in international economics,
which is that small, open countries tend to have better governance,
right?
Because they're buffeted by the winds of trade,
by the winds of capital, by the bondholders.
When you're a large, relatively closed country,
that's where things can go really wrong, right?
And that's what the United States is.
We're a quarter of the world economy, right?
We're separated by oceans.
We feel invincible.
But the world is de-territorializing.
Those oceans are getting less and less meaningful over time.
And there's a question whether we're actually at the right scale, right?
Are we like a corporation that has built an empire and now has too many stranded assets?
And how do we reintroduce some of that competition into our institutions so that they can actually adapt?
Instead, we have to end up inventing sort of new enemies.
Like the Cold War kind of kept us in check for a while, and now we're inventing enemies,
or not even really inventing,
facing actual adversaries in the form of China,
but that's not ideal.
We shouldn't have to have the threat of World War III
be the thing that gets our act together.
I mean, there's also this question of how much the oceans
and separation by oceans actually matters on the Internet.
And how much does geography even matter on the Internet?
And so we were talking about this idea of,
we saw the early U.S. being sort of a post-printing-press nation-state operating system.
What would a post-internet operating system for a country, for a nation, look like?
And you pointed at Estonia, and you talked about its adoption of technology and creating
almost like government-as-a-service and an entire API.
There's one other dimension beyond technology that I feel like we have to talk about.
And you talked about this balance between tech accelerationism and tech doomerism.
Is there also a balance with this idea of freedom?
You know, how much freedom does a post-internet society actually give to its citizens?
Is there a tension between freedom and control,
and have you seen examples of what a post-internet operating system
that enshrines freedoms actually looks like?
One idea I have for you, Sam, is just like,
it seems very much to me, like we need some sort of protocolization,
I would say, of a digital bill of rights, if that makes sense.
I mean, like, just the basic right for a citizen to encrypt their own private keys
or encrypt their own data, the right of a citizen to hold crypto, for example.
I don't know where these rights are preserved.
I mean, some legal expert might say it's somewhere hidden in the Constitution
and we'll get a Supreme Court ruling that actually preserves these rights.
I'm not so sure, and I'd feel really good if they were enshrined at a far deeper level.
Have you seen any examples of that?
And what about this tension between freedom and control?
How does a nation state run its operating system on that in the post-internet era?
Yeah, these are excellent questions. Hard questions.
You know, Thomas Hobbes wrote his Leviathan, sort of the first entry into political science, at the end of the English Civil War, right? And so when Hobbes talks about the state of nature being a war of all against all, he's not really talking about like hunter-gatherer societies, although that's still true. He's talking about his own nation, his own comrades, his own countrymen, who murdered each other viciously over religion, right? And his solution was this idea of the Leviathan where,
freedom, or peace and order, are kind of restored by circumscribing our natural liberties to some extent,
my liberty to kill you or strong-arm you or rob from you. And by ceding that power to a higher power,
we're able to find a new peace.
That higher power, by the way, Sam, that's the government, right?
Yeah.
And their monopoly on violence.
Correct.
And so in some ways, like what we associate with the classical liberal tradition arose not out of a weak state but a strong state,
a state that was able to raise revenues through professional tax administration, was able to
fortify the border through modern armies, and impose an impersonal rule of law.
That actually takes quite a bit of administrative capacity to do.
Actual anarchy, actual weak states look like, you know, northern Mexico or like, you know,
Afghanistan.
You know, you may have some freedoms there that you don't have other places, but it's actually
closer to the state of nature.
And so the question is, you know, one way to interpret that is that as you increase
the capabilities of an individual, the sovereign individual, that leads to negative externalities.
Like if I suddenly had x-ray glasses and I could see through the wall, you know, that enhances
my capabilities. But if you also have those glasses, suddenly my privacy is being violated.
And then we have a new sort of conflict at the boundary of intersecting spheres of influence
that invites demand for a new Leviathan. Right. And so like China's answer to this is to assume
total power and to have cameras in every corner that can recognize your gait and, you know,
ding your social credit score if you jaywalk. That does not seem like a very pleasant world to go
to, right? You know, I think the challenge in the post-AI, post-internet world is how do we preserve
some concept of sort of a public sphere, of an open sphere where we have freedom, right?
Where that freedom is actually felt and lived, rather than having a reaction to technology that
seems emancipatory on the front end, but leads to a backlash where we end up building our fences higher.
I would assume, Sam, you're willing to say, you know, some technologies are just too dangerous for individuals to have freedom over.
That would be part of your pragmatic approach. And, like, how do we know which technologies are like that in advance?
It's hard to know. I mean, historically, we don't know what they are in advance. Outside of, like, atom bombs and pathogens, we've learned through trial and error and often bloody trial and error.
I think there is genuine risk around AI, and I know you've talked about this on the show before.
It's also going to be a technology that, at least for the most powerful systems, is inherently centralizing
because you need access to compute.
And as the models scale up, the people and the organizations with access to that compute are going to get fewer and fewer
because these scaling laws are logarithmic.
So if you want to build the 1,000x version of ChatGPT, you're going to need not a billion dollars, but $10 billion to train it.
And so this is a technology that by its nature is going to be concentrating power.
And, you know, I wonder, you know, to what extent is the state with the right political leadership, the only thing that can really counterbalance that.
You know, when we talk about privacy preserving technology, for example, you know, it took those policy hackers in Estonia to want to build that into the firmware level where they were basically tying their hands, right?
And in some ways, what classical liberalism is, what the, like, 18th-century ideal was, like I said, it wasn't a weak state.
It was a constrained Leviathan.
It was a state that was able to preserve, you know, what Hayek called ordered liberty,
where the government, limited government, where we tied our hands about the kinds of things we could use technology to do,
even if it was available, and do it in a way that was credible that wasn't just written on paper.
You know, so when I look around, I think there's also this competition with China vis-a-vis, you know,
the kind of technologies we export abroad, right, where, you know, China is building this sort of tech stack for a digital surveillance state.
And insofar as you think the internet and AI are going to lead to, you know, new crises of authority, new sorts of sources of social disorder, à la the Arab Spring,
there's going to be a demand from every tin-pot dictator to import their own technology stack to restore order, right?
And there's technology that can do that in a way that's just unbridled.
And there are potential technologies that sort of do some of that surveillance, right, do some of that policing and law enforcement and so on and so forth,
but in a way that has civil liberties and privacy engineered into it.
And so I think that this is sort of like the package deal that we're faced with.
It's not all sunshine and roses.
There are actually really hard tradeoffs here.
And some technologies sort of bring good and bad at the same time.
And we have to figure out a way to balance those and not just balance them in a technical level,
but find new institutional arrangements that are credible and consistent over time
because a piece of paper is worth the paper it's written on.
So one last topic before we leave this and focus a bit more on AI
and your thoughts. But I'm endlessly fascinated with this idea of thinking of nation states as a tech
stack, right, as an operating system. And I think in this post-internet world, that's sort of how we
should think about this, these bundles of services and bundles of software. And this is kind of this
last idea, which might be a potential solution, right, which is hinted at in crypto and then other
technologies like AI. There was a line, I think you gave it in an episode I listened to of Erik Torenberg's
podcast, where you said software is eating the state. Software eating the state. Software eating
the state. And this is like the a16z, Marc Andreessen idea as well, like, you know, software eats the
very beginning, right, which is like the idea that crypto, at least an element, can start to
eat at the money system. We can have a separation of money and state, at least to some degree.
We're not dependent on a fiat money system. We can use Bitcoin or ether as our denominator.
Ethereum, the idea of a digital property rights system that is self-sovereign and without the nation-state protecting it. These ideas are powerful. And some of this, I guess, philosophy and solutioning is, I think, manifest in Balaji's idea of the network state. I want to ask you to what extent you think that could be a potential solution for us. You know, on my rosier days, I'm all in on the network state. Like, I think we could do this. I think networks can kind of replace functions of government,
not everything. We're not going to build roads and hospitals, but maybe in the area of property
rights, maybe in the area of banking, that sort of thing. And other days, I think, oh, shit, what a mess,
right? Like, hey, Gary, come bail us out. Do you know what I mean? Like, it depends on the day.
How much of the network state ideas embedded in Balaji's philosophy do you think is actually
pragmatic and can work? And how much is kind of pie-in-the-sky idealism that's never going to
happen and will always revert back to the legal code? Forget code is law; you know, law is law,
and there's court systems for a reason.
Yeah, I think what Balaji's trying to do,
it's sort of a libertarian Marxism,
if I could put that term on it, right?
Because, like, you know, what Marx said was,
you know, the philosophers have only interpreted the world;
the point is to change it, right?
And coming off of sort of Hegel's,
And coming off of sort of Hegel's,
like progressive, dialectical view of history,
the idea was, could we somehow, you know,
skip ahead and see the next stage of history
and try to make it come sooner?
And I have a piece on, like,
libertarians and Marxists
where I sort of talk about the resonance between them:
you know, you can read the Communist Manifesto,
and you can read George Stigler's essay on regulatory capture,
and they actually sound kind of similar.
So, you know, I think Balaji is actually onto something.
You know, he's never going to be exactly right
because you can't predict the future.
But if you get a grasp of some of these deeper structural trends,
the sort of dialectics in history,
maybe you can try to, you know,
get a prophecy that, you know, pulls forward some information about what's coming.
I think where the network state idea breaks down
is, again, the limits of de-territorializing.
You know, the original public good was security, right?
the kingdom that could build a wall around you. And my sense is that AI and the diffusion of
AI is going to lead to all kinds of new security threats, you know, whether it's novel pathogens
or drone swarms or, you know, pick your poison. And there are still going to be agglomeration
economies to having people co-located in one area. So you can build those EMP guns and stuff like
that around you. By the way, Sam, by security, you mean actual physical in real life security.
because sometimes in crypto, when we say security,
we mean things like encryption or, you know,
the economic cost to attack the network via proof of stake or something like this.
But you're actually talking about literal physical security here.
Yeah, both and, but especially the physical side of that.
And, you know, the future in my vision is sort of like an America dotted with an archipelago of city-states
where, you know, as your autonomous vehicle is driving into the free city of California,
it's automatically being scanned for contraband or what have you.
And, you know, we have an iron dome that's constantly shooting down
drones, you know, all that stuff is going to require being physically co-located. And actually, I think
similar with the internet, like a lot of the early internet pioneers predicted the internet would
lead to a kind of disintegration of the need to, you know, have these economies of scale in cities.
And we just all would work remotely and so forth. That never really came true. And even with
VR and with better, you know, high-bandwidth internet, you know, land values in cities have never
been higher. People still like to live nearby. And a lot of the core public goods
that states provide aren't just, like, title protocols and, you know,
so on and so forth. It is actually that physical security. That's sort of like the primeval
public good. That doesn't mean that there can't be sort of intersecting and maybe like a
marble cake kind of arrangement where you do have sort of interlocking network protocols and so forth
that are providing different kinds of social services. You know, I know the folks at, like, Próspera,
right, in Honduras, like every one of their parcels of land has its own jurisdiction.
Like that's pretty cool. So all that stuff seems possible to me. I think he's on the right
track in trying to intuit what the next stage is. My take is that the things that
people hope for with crypto are more likely to come true because of AI. And that bears on the fact that
AI directly affects these core transaction costs, like monitoring, like bargaining, like search
information, like agency costs, right? We will have AI agents that do our bidding and don't,
you know, steal from the till, don't shirk. Like, they actually follow orders. And that's going to
lead to individuals with, like, 50,000-person corporations under them of effectively AI employees.
that unlocks a huge potential for new forms of collective action, new kinds of institutional arrangements.
Potentially, like I said at the start of this talk, like giving new life to crypto and blockchain
as a highly complementary technology.
It is true that the AI robots will all need to be banked, and I doubt they'll be able to go to Wells Fargo
and open an account, so they'll have to turn somewhere.
Yeah, there are these things like federated learning, for example.
It's a big open area where you need a lot of compute and need a lot of data to train these
models, and how do you compensate people for that data, and how do you distribute that
compute in a federated way. That's a core area where crypto technology and AI are intersecting
and leading to something that's greater than the sum of the parts. Sam, I don't know if you are on
Twitter or following me on Twitter, but my Twitter handle is trustless state. And it's always
been with this very far-out notion of in the future, there will be this virtual nation state
in the cloud enabled by property rights on a blockchain that will produce some version of a
coordinated organization, a coordinated governance, a trustless state using a trustless state
machine like Ethereum. It's a double entendre. Very proud of it. By the way, David, now Sam should
tell you his Twitter handle. Do you know this? No, what's yours, Sam?
Ham and cheese. Like the sandwich.
These are different things.
It's a single entendre.
Single entendre. Going back to the whole idea of like our current nation state is a pen and paper
nation state, right? And it also happens to be just like the leading empire, which tends to
have a lot of momentum and tends to be a very hard ship to steer. And you've already talked
about just like the anti-tech sentiment that's come out of post-2016. And so we're seeing a lot of
this like resistance towards adapting while we have these very adaptive new technologies,
crypto and AI to name the two foremost. And while like I have these very optimistic visions
about what a future trustless state could look like, enabled by these technologies, I was definitely brought
back down to Earth in 2022 when I realized that the things that my industry did during this last
rise to fame was promote people like Sam Bankman-Fried and Three Arrows Capital, et cetera, et cetera.
And I still 100% believe in the fullness of time, this optimistic vision of some sort of society
organized inside of the cloud, the trustless state, and very much agree that AI is a very
powerful component, a part of that. But I'm now realizing that there is like the trials of
having to get there. And in your article, you talked about first order and second order
consequences of AI safety. And I think maybe there's probably like first order and second order
consequences of like, well, bringing crypto into the fold. Whereas like in the fullness of time,
I believe in my vision. On the way there, there might be some frictions and some frauds along the
way. So how do you think about like just this relationship with these very high-potential
technologies like AI and crypto that in their immature phases get resistance from the nation states
and the lack of support from the nation states? And yet also
cause a bunch of like trauma along the way. How do you like just think about these things?
And do you share my optimism in the fullness of time we will be able to have this like more
optimistic future that we see here? The only reason I would say I'm not sure is because just
how quickly AI is ramping up and how quickly things could change and all my predictions could
go out of the window. So you know, caveating that, you know, anytime there are sort of like
$1,000 bills on the sidewalk and we can all see those $1,000 bills on the sidewalk,
it takes heroic efforts to, like, contain capitalism from trying to grab those bills, right?
And I started to talk about this as, like, regime change in micro.
You know, the idea that technology can induce the regime change sounds dramatic, but we went through one.
It was called Uber and Lyft, right?
Like, circa 2014 (Uber started in 2009, but, like, it was still picking up around then).
People thought you were a weirdo if you took a ride with a stranger.
And in the course of those next five years, the number of rides
in New York that were delivered by taxi commissions versus Uber completely flipped, from
90/10 to 10/90.
And that was a regime change in micro.
We went from a system that was organized by legal monopolies, public regulatory commissions
with like formal licensing and exams and massive corruption in many jurisdictions to using
technology, basically forcing an entire regime change, where now the typical person is riding
in a car that still has governance, but it's not legal governance with the
force of a gun; it's governance through these platform mechanisms like reputation rankings,
you know, the search-and-matching platform that Uber provides. So you don't have to, you know,
haggle with the taxi driver. All those things represented sort of $1,000 bills that were on the
sidewalk once we had mobile and a phone in everyone's pocket and everyone had internet connection.
And so it was just a matter of time before someone developed it. And fortunately, in the case of taxi
commissions, things were local enough and there was enough jurisdictional competition where you could,
you know, get your foothold in a market and demonstrate the technology and prove that it was safe, and not just safe, but better and more reliable and
faster and cheaper. You know, I think in the fullness of time, and this goes to sort of like
the core e/acc insight, if you will, that there is a sort of autocatalytic, self-fulfilling dynamic to capitalism, what Andreessen calls the techno-capital machine, that once these technologies, once the Pandora's box is open, so to speak, it's really hard to contain. And people will
fight all the way through and you'll get variation on a theme. But we're going to punch through
this one way or the other. And my hope is that we can at least try to steer it in a maximally
positive direction, right? Because it's so easy to imagine ways that AI could go wrong and lead to
bad outcomes that aren't just bad but are potentially locked in for eternity. I definitely want to
touch that again. Bad, but potentially locked in for eternity. Celo is the mobile-first, EVM-compatible, carbon-negative blockchain built for the real world. And now something big is happening. Introducing the Celo Layer 2.
It's a game-changing proposal that's going to bring
Celo's rapidly growing ecosystem home to Ethereum. Vitalik has shared his excitement for the Celo Layer 2 on the Celo Forum,
so has Ben Jones from optimism.
But why?
The Celo Layer 2 will bring huge advantages,
like a decentralized sequencer,
off-chain data availability, and one block finality.
What does all that mean?
Rock solid security, a trustless bridge to Ethereum,
and more real-world use cases for Ethereum without compromise.
And real-world adoption is happening.
Active addresses on Celo have grown over 500% in the last six months. With the Celo Layer 2, gas fees will stay low, and you can even pay for gas using ERC-20 tokens.
But Celo is a community-governed protocol. This means that Celo needs you to weigh in and make your voice heard. Join the conversation in the Celo Forum. Follow @CeloOrg on Twitter and visit celo.org to shape the future of Ethereum.
Introducing GMX, the deepest on-chain futures market to trade Bitcoin, Ethereum, and leading altcoins.
GMX is a permissionless, decentralized exchange that offers perpetual futures and spot trading.
Lightning fast trade execution and competitive pricing
with the security and self-custody of a decentralized exchange.
GMX is live now with V2,
bringing new optimizations to on-chain leverage trading.
And even more than an improved trading experience,
GMX will reward you for just participating.
All GMX users can easily set up a referral link
and with $12 million of Arbitrum grants
being distributed as incentives
and over $150 billion in trading volume to date, all settled on-chain, GMX is leading the charge in terms of opportunities for DeFi liquidity providers.
The future is on-chain with your wallets, with your trades, and with your money in your own hands.
Try it out now at app.gmx.io.
Introducing USDV, a better type of stablecoin.
Currently, billions of dollars in stablecoin yield each year are paid to Tether, Circle,
and other central issuers of major stablecoins.
But what if yield could be shared with the protocols that use it?
Those protocols, in turn, can decide how to reward their users.
USDV shares its yield with a community of apps and developers that mint it.
Every USDV is backed one-to-one by U.S. Treasury bills, which pay yield.
This yield flows out to the community of USDV issuers
so your protocol or app can get paid for helping end users convert other stables into USDV.
This works thanks to a breakthrough technology called ColorTrace from LayerZero. Without it, it was impossible to attribute users of a token to a specific issuer.
But now we can.
USDV is live on Ethereum, Optimism, Arbitrum, and other chains, and it's already available on over 20 exchanges, such as Curve, Bitget, Velodrome, and Stargate.
Start participating in the yield from Treasury-backed stablecoins at bankless.com slash USDV.
Now let's shift the conversation wholly to AI, and let's talk about the present.
So we've been talking about these dynamics of technology and how they can transform society and how the elites and those in power often feel threatened by that transformation.
but also there should be a technological pragmatism.
Sometimes technology is actually dangerous.
And I want to talk about the present and the U.S. reaction to AI right now.
This was, I believe, a week ago, maybe this happened last week, Sam.
President Biden issued an executive order on safe, secure, and trustworthy artificial intelligence.
This was an executive order on AI.
And maybe you can tell us the ins and outs of how enforceable this executive order actually is, but it does seem to ascribe some intent.
And the interesting thing here in this executive order
for crypto listeners is,
as I read some of the provisions of this order,
crypto listeners, you might see some themes, okay?
The first is there is this idea implicit in the order
that if you're building a large language model
or large AI model of some sort,
you have to report to the government.
So come in and register.
You guys ever heard that from our friend Gary Gensler?
There's also an implied restriction
on open source model weights.
It feels kind of permissioned, I would say.
There's a provision in here
that those with large compute resources
have to actually disclose
where those resources are. So where are the
GPUs? Where are the server farms? Disclose your
location. This starts to sound
a lot like in crypto, disclose your assets.
What do you actually own? The government
needs to know. Infra providers.
So if you are an infrastructure provider for
AI services and there are
foreign persons who are using
your services, you have to register those
foreign persons. Okay? This is like
AML/KYC for compute.
The parallels to me, Sam, between how regulators in government are responding to AI and to crypto are just nearly one-to-one here.
But zoom out for us.
What do you think of this executive order?
Where is it coming from?
What's your take on it overall?
Yeah, that was a great summary,
especially of really the core provisions
that have teeth.
The vast majority of the executive order
outside of that stuff
is basically reporting,
like interagency reports.
HHS has to develop a big report on all the ways they're going to use AI in healthcare.
I think all that stuff is just sort of anodyne.
The key things are, especially, this compute threshold requirement: companies that are training models on 10 to the 26 FLOPs, the number of operations used to train the model, are going to have to, again, merely report, but often reporting presages regulatory action later down the line.
Oh, we know. We certainly know about that.
They'll have to report both that they're doing the training
and also what safety testing they have done.
And NIST, the National Institute of Standards and Technology, has also been asked to develop sort of best practices for red-teaming and testing models, and presumably at some point those will serve as the benchmark for evaluating those safety tests.
You know, I kind of am ambivalent on this.
Relative to what Europe has done,
which has essentially outlawed open source altogether
and brought in like a genuine registry
where like everyone's model is going to be in this giant registry
and what they consider high risk in Europe is like LinkedIn,
because LinkedIn has algorithms for job recruiters,
and that could be biased.
And so what they deem high risk is kind of trivial in my mind.
The fact that they're targeting or segmenting these really large training runs,
I think is actually a good sign in the sense that they're taking most seriously
the core AI safety risks that you hear from folks like Eliezer.
That really the thing to keep your eye on is the emerging capabilities these
systems will have as they scale.
And to date, the largest model trained is either GPT-4 or PaLM 2, depending on who you ask. And that's probably on the order of 10 to the 24 FLOPs of training.
And so we're still like a year or two out from any company on earth training a model
big enough to actually be hit by that threshold.
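Those two numbers are worth a quick sanity check. A back-of-the-envelope sketch using the common "6 times parameters times tokens" approximation for training compute (the parameter and token counts below are illustrative assumptions, not disclosed figures for any real model):

```python
# Rough training-compute estimate using the common approximation:
#   total FLOPs ≈ 6 * N (parameters) * D (training tokens)

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# A hypothetical frontier-scale run: 175 billion parameters trained on
# 1 trillion tokens lands near the ~10^24 FLOPs ballpark mentioned above.
run = training_flops(175e9, 1e12)
threshold = 1e26  # the executive order's reporting threshold

print(f"run: {run:.2e} FLOPs")
print(f"headroom below threshold: {threshold / run:.0f}x")
```

On these assumed numbers, a run at that scale sits roughly two orders of magnitude under the 10-to-the-26 reporting line, consistent with the point that no model trained to date is hit by it.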
So I saw that.
I was actually kind of pleasantly surprised by that because it shows that this is in some
sense the light touch approach.
They're zeroing in on the core AGI risks that we hear talked about.
And there is lots in there about, you know, Housing and Urban Development having to make sure that their housing algorithms aren't racist and all that stuff. I find all that stuff somewhere between annoying and counterproductive.
But that's sort of like a must have
for any Democratic administration.
The fact that they didn't do that to the exclusion
of focusing on, I think, the core safety concerns
is, I think, a huge positive and reflective
of the fact that the administration actually
was listening to some folks in the AI safety
EA world.
So Sam, your take overall is this wasn't too bad. It could have been worse, and Europe, in fact, is pursuing
worse policy around AI. Yeah, exactly. I mean, you get a bunch of reports, government loves
reports. I think the KYC stuff and reporting your compute, that's something to keep an eye on.
It is part of a broader trend, and I've talked about this a bit in my other piece, towards sort of nationalizing compute infrastructure and telecom infrastructure more broadly.
And you can sort of see why this is happening. You know, we had export controls on
chips back in 2022, the October 7 export controls that essentially embargoed China from having access to the most advanced GPUs. Those were updated last month to get rid of the interconnect bandwidth requirements. So now it includes basically all kinds of GPUs, including gaming GPUs, H100s, and so forth. There are ways that you can exempt those. But the national security establishment and broader national security community are taking this really seriously. And if we do have an AI
takeoff, like owning our compute infrastructure and making
sure that we don't have, you know, foreign agents at these companies is, I think, going to be
really important. And it really makes clear, you know, the extent to which these technologies are
dual use, right? They do talk about dual use foundation models in the executive order. And what does
that mean? Well, you know, you can't build a self-driving car that only works in red cars, but not
blue cars. Like the whole point of general intelligence is that it is general. And being able to do
things outside your training distribution is like the core test of generality.
And so having autonomous agents that are truly general and superhuman is going to be something that we've never had before.
It is, I think, a huge exception to my normal rule of go faster, particularly in the context of geopolitics.
Sam, we talked about the U.S. We talked about Europe a little bit.
How about China? How does this juxtapose between China's approach to AI?
This is from one of your recent Substack posts about China, I believe: "Democratized AI is a much greater regime change threat than the Internet, and the CCP is treating it as such." What's China doing with respect to AI?
So they have draft regulations, which I think are basically enforced, but they're still being refined.
But the draft regulations essentially say that you can't build a large language model like chat GPT unless you undergo security review.
And even then they have very strict rules about what it can be trained on and very vigorous testing before it can be deployed.
And so China, in the bigger picture, is probably on par with the U.S., or maybe even ahead of us, on the pure science research side of AI. Like, if you just restrict to AI papers in the top 10 percentile of citations,
China is actually ahead of us.
But in terms of diffusion, in terms of actually deploying the models,
they're taking a very, very conservative approach.
And for obvious reasons; like we talked about, the way they responded to the Arab Spring was
by fortifying their surveillance state.
AI is like that on steroids, right?
You know, I often think about there's this old State Department program
where they used to drop USB sticks into Cuba that had like Wikipedia and Netflix
and a whole bunch of stuff on it.
And so people who don't have access to the Internet
could see what they're missing.
And you know, you can think about dropping LLMs into China
that, like, will tell you about Tiananmen Square and tell you about, you know, all kinds of things that the regime doesn't want you to know.
And more generally, as the technology develops and diffuses, it gives massive capabilities to society vis-à-vis the state.
You know, the way the technology has rolled out to date,
it has been sort of one-sided
where the state can use it to monitor your every move.
But what happens when those technologies become offensive, and people can use those technologies for resistance?
And so this is why I situate this in a geopolitical frame
where I'm not that upset that the U.S. is taking a more sort of national champion approach
to the semiconductor sector and to compute more broadly.
Because we are in this arms race in the technology space,
not for military per se, but for defining the new governance model.
And I have more faith that a positive version of like a post-AGI governance,
future governance will emerge from the United States and from the West than I do from China.
I could easily see, if China gets there first, them having runaway economic and technological power
and just subsuming the country in a model that I don't particularly approve of.
I mean, we've been talking about technology this entire time as a double-edged sword.
And I think if you go and you ask a person on the street about AI, they won't be able to tell
you concretely what they think about it.
I also remember the internet back in the '90s, and that was heralded; everyone was excited
about it. Everyone was incredibly enthusiastic about this new communications protocol that would
unify the world, unite the world. That is not the feeling you get from AI, right? It's like,
often it ranges into dystopian sort of territory. And I find myself actually not sure about
AI. It's like on the one hand, again, on kind of those sunny, rosy days, I could see the potential
for democratizing technology of freedom here. On the other, I see large models deployed by nation states
to basically control all information flow to citizens. Is this just the double-edged sword that we're
dealing with, or is there a clear kind of this is a freedom technology or this is a technology of,
you know, enslavement? Where do you fall on that? I think the biggest risk in the West is that we end up
enslaving ourselves. You know, if you think about the decline of free-range parenting: maybe peak free-range parenting was like the '70s, which, paradoxically, happened to be like the peak of the crime wave.
And now crime is way low.
And we have like GPS trackers on your kids and they're not allowed to leave like a mile radius of your home.
And so there's this weird paradoxical effect from greater transparency: we see every time someone's murdered.
We see every like crazy thing happening in the world.
I think it's made people turn inward in a weird way, even though they now have the technology to not be a helicopter parent.
in the same way that everyone having a camera on them at all times leads people to maybe not cause a scene on the bus because they're going to go viral.
And so that fact that everyone basically has in their pocket, like recording devices and things can go viral quickly and be basically permanent, we sort of built this bottom-up surveillance state.
We're sort of like living in an East Germany of our own making.
And you can see how this could go way worse with AI, where, you know, we have vision models now, multimodal models,
that you can give them an image
and just prompt them to say,
what is happening in this image?
And they are remarkably good.
You know, I tested GPT-4's vision capabilities.
It identified the breed of my dog.
It said it looked comfortable,
and it said the home it was in,
it looked cozy,
and it's like, oh, wow, how does it know all that?
Well, you can also imagine prompting the model,
you know, is anyone committing a crime right now?
And just having it repeatedly be prompted,
is anyone committing a crime?
No longer do you have to be super specific.
you can just use these vague semantic categories
and because it has a broader world model,
it's able to just take in that vague input
and give you an answer.
And that's something I could see North Korea doing, right?
I could see North Korea putting cameras everywhere
and having them constantly be prompted,
you know, as anyone committing a crime.
And it sort of solves the paradox of 1984.
You know, when I read 1984 as a kid,
I was always wondering, you know,
how do they have as many people
on the other side of the cameras
as they haven't, you know, watching as Big Brother?
And we sort of solve that problem.
Like, we can do that at scale now.
And I don't see that happening in the United States. But what I could see happening is us all sort of building a
bottom-up Leviathan where, you know, we all want to have the Ring camera for our security, and more power to us. But suddenly we're constantly surveilling each other. It didn't used to be a thing, but you see these TikToks where they'll run up to a couple on the street and test their trust: look, do you want to look at each other's message history? And just having that as an option, right? Just
having that as an option opens up all kinds of crazy things.
The prompt "is anyone committing a crime right now" is an interesting one, because it does imply some sort of perfect information on the part of the prompter.
Whoever has access to that has got to have the most compute, because if you want to actually be effective in producing security, you're going to need the most compute in order to come up with an accurate answer to the question: is someone committing a crime right now?
So it's going to be the most central powerful monopoly, which is of course going to be the state.
But then wouldn't the prompt just be, who is about to commit a crime? And that kind of just turns into Minority Report, where you have the precogs predicting, oh, that guy's about to commit a crime. And then we have the ethical judgment of, do we try people before they commit a crime, because we could have stopped them from committing it? I mean, you guys are making my point here. This is all starting to sound very dystopian. Right. The future is going to be weird, and I think it's going to be uncomfortable for a lot of people. And yeah, it is very double-edged. So, you know, one of the things the EU AI Act
does is preemptively prohibit the use of AI in predictive policing.
And I'm very ambivalent about that as well.
Like, you know, I could see AI leading to all kinds of social disorder where predictive
policing could come in handy.
And like, what is predictive policing?
Where do you draw the line?
Like, is it just having a Bayesian prior about who's more likely to commit a crime?
Like stepfathers are disproportionately responsible for kidnappings, right?
And so if your kid goes missing, you should start by interviewing the family members.
A kid is very unlikely to be kidnapped by a stranger.
Like those basic statistics, you know, that informs the way we do policing.
The fact that there's more crime in a particular neighborhood means they should put more patrol cars there.
There was a recent paper using machine learning that used like 600,000 victimization and criminal records in Chicago and was able to identify 500 people who were most likely to be shot within the next 18 months.
And sure enough, 13% of those 500 people were shot in the next 18 months.
Wow.
Right.
And, you know, if you could allocate resources in a way that would like, you know,
stop that, you should.
But this goes to the point of having to engineer civil liberties and privacy into the technology itself, because we want to have the good part. We want the constrained Leviathan while avoiding the sort of unbridled Leviathan that could result.
And it's going to be really weird.
And this goes to my point in the essay series where I talk about the likelihood of this stuff just leading to government fragmenting.
Because I actually disagree. I don't think government's going to have the most compute.
I think it's going to be the private sector.
The government, even if it wanted to, couldn't build a large public or private cloud.
They don't have the engineering capabilities.
And so what that means is we're going to end up being sorted into new walled gardens,
new sorts of forms of vertically integrated social protection and social regulation that could seem in some ways much more draconian on paper, things that governments aren't allowed to do.
But, of course, a church can deny you membership if you don't subscribe to their, you know, thick beliefs. You don't have freedom of speech when you're in Walmart. All these private institutions are exceptions to our openness as an open society.
And so like, you know, think about the fact that when you go to a comedy club these days,
they'll take your phone away, right? That didn't used to happen. But that happens now because
now we all have cameras in our phones and we can instantly put the comedian's set on TikTok, and they don't want that, right? And so as our technology gets more capable, you know, if I could
walk through an office space with a recording device, transcribe all the conversations that are
happening, and also use audio cues to keylog the people that are typing, we're going to need metal detectors that tell you you can't bring your phone into the office. Or maybe there'll be places that only let you have, like, the Nokia keyboard phone or something like that. So I'm actually kind of reminded by that of some semblance of Balaji's network state concept.
So if the nation state isn't going to have the largest access to compute because they don't have the engineering talent, what you're saying is that the private sector will get it, and the private sector isn't going to have a monopoly over one central large pool of compute. It's going to be,
you know, pockets of strong compute spread out over very large, well-capitalized entities. And then they get to
kind of set the rules for their little fiefdoms, wherever they control. This is my
interpretation of what you're saying. So check me if I'm wrong. But that kind of sounds like,
well, each different region, as determined by the surveillance of that particular arena, those will be the rules. And it kind of goes back to the line that Ryan pulled out of your old podcast,
which is software is eating the state.
Is this kind of the progression that you see?
Yeah, I mean, like, what is the state?
Like, what are bureaucracies but fleshy APIs, right?
You know, the government actually owns a mine in Pennsylvania
where every morning people climb into a bus.
The bus goes underground into the abandoned mine that has been retrofitted,
and it's part of the Social Security Administration.
They have a bunch of archives down there.
Their job is to print out PDFs and scan them back into the computer,
like all day long, right?
And I don't mean to trivialize what public servants do, but you know, you move a layer up.
A lot of what it is is just applying a little bit of human judgment, human context to a checklist and then hitting send.
And, you know, later this year when Microsoft 365 Copilot rolls out, you know, Microsoft is putting GPT-4 into everything, into all of Office, into Windows 11.
You know, there are going to be people at the Bureau of Labor Statistics showing up to work to do the monthly jobs report, right? And they're going to have a CSV file, and they're going to tell Excel, you know,
find me the five most interesting trends and write a blog post around it.
And then their job is done, right?
So, like, it seems to me that bureaucracy is incredibly exposed to this technology and also
the broader, you know, professional managerial class.
And this is why, you know, they will be the locus of the Butlerian Jihad, right?
They're the people that are most directly threatened by this.
But it also opens up new ways of doing government in the same way that we saw a shift from
taxi cab commissions to Uber and Lyft.
You know, imagine if we had cameras equipped with multimodal models on every farm, right? Would we even need a USDA?
Right?
You know, USDA sends farm inspectors manually to go to different farms and make sure they're
not, like, you know, abusing the chickens or whatever.
Like, you could have cameras that are always running and generating reports automatically.
The reports get sent back to the underwriter, and you adopt compliance standards that are far more dynamic, far more adaptable
than anything the USDA regulator could put out.
And just like you would subscribe to Uber as a platform,
you would subscribe to this farm regulator as a service company
that is selling you something that's much higher trust, right?
And so AI has a potential to remake how we do governance
while also restoring the trust that I think has been in decline since, you know, Vietnam.
One of the things that you said was that humans
would all be able to check on other humans with the assistance of
little AI pets, little AI assistants. I'm a big "Hey Siri" guy, and I'm about to go trigger her because I use her all the time. And one day, I'm assuming she's just
going to become like a super sentient AI. And that will be my little native AI assistant, or the
Microsoft version, or however that comes out. Every human has their own little AI assistant.
And one of the stories I like to tell about the emergence of money comes from just the inherent
pro-social behaviors that are found in many, many species. I've given this talk so many times. I just
gave it in Lisbon a week ago. And it was about how vampire bats learned to have pro-social
behaviors where they will all go out and feed. Some will get lucky with food and some won't get
lucky with food. And then the unlucky ones will like nudge their neighbor and be like, hey, I didn't
get any food today. Can you like lend me some food via regurgitation? It's kind of gross. And then
they will have pro-social behavior and they will spread out the food. And the bats that are selfish,
are stingy, those end up getting shunned, and they don't ever get any reciprocity. And those genes get removed from the gene pool. And this is how pro-social behavior came about. Concepts of internalizing
balances of debits and credits in your own brain turned into the need to have ledgers. Fun fact:
double-entry bookkeeping in our commercial banking layer is actually just a system of all ledgers
pointing at all the other ledgers and keeping everyone else in check. And so the system of individuals in
society having the tools they need to keep others in check is actually an ancient tool set that has
developed and progressed over the years. And so if you're telling me that, oh, we potentially have a future in which AI is actually a democratized technology and humans have the power, the capability, of doing checks on their local environment, to me that doesn't sound like the total top-down AI overlord with Eye of Sauron powers that other people have articulated. There seems to be some
amount of balance here. What would you say to all this? It's all about managing the transition.
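The double-entry point above, every ledger acting as a check on every other, can be sketched in a few lines (a minimal illustration; the account names and amounts are made up):

```python
# Minimal sketch of double-entry bookkeeping: every transfer is recorded
# as a paired debit and credit, so balances always net to zero and each
# party's ledger can be checked against its counterparties'.
from collections import defaultdict

ledger = defaultdict(int)  # account name -> balance

def transfer(debit_from: str, credit_to: str, amount: int) -> None:
    """Record one transaction as a matching debit and credit."""
    ledger[debit_from] -= amount
    ledger[credit_to] += amount

transfer("alice", "bob", 50)
transfer("bob", "carol", 20)

# The invariant that entries always net out is what lets every ledger
# keep every other ledger in check.
assert sum(ledger.values()) == 0
print(ledger["bob"])  # 30
```

The invariant (all entries sum to zero) is what makes mismatches between counterparties' books detectable, which is the mutual-checking property being described.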
Right. So, you know, I think one of the ways AI could be most useful in this realm is restoring forms of credible commitment, right? If you could voluntarily submit yourself to an AI monitor that verified, trust-but-verify style, that you did the thing you said you were going to do, then you can completely eliminate the residual lack of trust that you may have in, you know, hiring someone to do a job for you. And so that seems like it could be a boon
for trust. On the other hand, you know, like we're saying, this exact same technology could be used
to, you know, massively scale censorship, right? You know, I think about what Activision recently
announced: that they're going to be using large language models to police speech on Call of Duty. Historically, you know, the way companies would do this, and the same with content moderation on Facebook or whatever, is you have a list of banned words, right? And you have them on YouTube, too. Like, people on YouTube will say "unalive," right, because it will avoid being downrated in the algorithm, right? But LLMs aren't fooled
by that, right? LLMs aren't just stochastic parrots. They understand, they have a world model,
they understand deeper semantic representations. And so you can instruct the LLM as it's listening
to Call of Duty chats, to just, you know, flag people who are talking about things that are inappropriate, right? You can just give it something as vague as that, right? And so this enables,
you know, potentially, a lot. You could paint this in a positive light: maybe we don't want bad words being said on Call of Duty. Maybe we want, you know, the nanny state to make sure we're on our best behavior. But it all
comes down to, like, who's in control? Is Elizabeth Warren setting the terms, or are we
going to circumscribe this power and ensure that we have this better future where it's democratized
in a way that's kept in balance? Because I don't think that happens automatically.
I think most countries don't have our protections, don't have our bill of rights, and they'll
lead to a more centralized answer. And in places that do have these protections, by the very dint of those protections, it will lead to a decentralization where
the state ends up fragmenting and we end up with these new sort of micronations being formed
in its wake.
Maybe the government persists around some core competencies, but the need to restore order
through things that seem draconian on their face, in the same way that taking away your cell phone when you go to a comedy club seems draconian.
To the extent that the value of those sorts of things rises, people will want, you know, neighborhoods, gated communities, that have all kinds of AI surveillance, but in a way that the community has agreed to and opted into. That leads
to a world that is neither libertarian or statist.
It's, you know, some people will opt into, like, the Mormon AI state or whatever that
make sure that you can't drink coffee and that you don't say bad words, right?
Like, there will be a lot of variation.
And my big question is just, how do we manage that transition well?
Because as a student of history, I see it going poorly, especially from our current position
and especially from an establishment
that is trying as hard as it can to keep a lid on the technology
because they're afraid of change.
They're not afraid of change in the abstract.
They're afraid of them having to change, right?
Like, once we have fully autonomous cars,
will we need a National Highway Traffic Safety Administration?
Probably not, right?
But we already have studies showing that Waymo is much safer
than a human driver.
Like they even have lower insurance rates, right?
And nonetheless, San Francisco has just kicked out Waymo. Or Cruise.
But either way, like, you know, we have self-driving cars that are now safer than humans and we're not just like massively embracing it.
Like what is going on?
I mean, one thing I think that resonates with us, in how we manage this change, this transition period, is this phrase, decentralization.
I mean, it is quite the 1984-esque prospect of having central powers with the ability to kind of, you know, censor at the level of an artificial intelligence, you know, wherever they can.
So that seems to be something that we'll want to preserve.
And Sam, as we kind of draw this to a close, I want to, in this last portion of our conversation,
allow you to get kind of vivid about some of your predictions, because you put together this
fantastic post, and this was actually part of the genesis for my reaching out to you, Sam.
It's just so fascinating.
This is a three-part post on Substack.
It's called AI and the Leviathan.
We'll include a link in the show notes.
It covers many of the themes and topics that we've been discussing, only in kind of narrative form.
And the third part of that post, you actually start to get into, like, predictions for
the next 20 to 30 years. And you break this out in, you know, some periods of time and where, you know,
some changes might happen, you kind of describe those changes. And that's kind of the vivid depiction
that, honestly, I needed. And whether this exactly happens to your timeline or not, there's a sci-fi
element to this. And of course, the future is unknowable and unpredictable. I think there's some
ring of truth to it, at least with respect to this transition. So I want to go through that.
So you start this third part of your article and you say: my null hypothesis is that
the democratization of powerful AI capabilities will be at least as destabilizing as the printing press.
Then you start this chronology of 2024, which is next year, all the way to the 2040s.
I want to go through some of these time periods and allow you to kind of like tell us what you think could happen.
But before we do this, frame this up for us. How much of this is like Sam Hammond,
ham and cheese, sci-fi versus what you actually think will happen?
Like I say, it's my default path. So, you know, I could be off in the specifics,
the exact timing. But in broad strokes, I think this is what happens by default. I sort of frame it as
we're sort of on this knife edge between a sort of Chinese panopticon on one side, or crash
and anarchy on the other, where, you know, what real anarchy looks like is not just
everyone living peacefully in freedom, but, like, an archipelago of gated communities that are
restoring order locally. Well, that's kind of dizzying. So let me go through this and pull out
some of the keywords that I found enlightening to reflect on. So, 2024 to 27. Okay. That's next year and up to
27: vast majority of our content goes synthetic. So the internet becomes AI-generated. Is that the idea,
something that we could see in the next three to five years here? Yeah. So in general, I have very
short timelines to AGI relative to, I think, the person on the street, probably not
relative to people who follow this. But in the next couple of years, really starting this
year and next year, AI is going to be hitting enterprise in a big way, right? I think we're going
to see a downsizing wave that rivals the late 90s, probably much bigger.
What do you mean by downsizing, like job losses, ramp up?
Is that the sort of thing?
You call this event in the late 2020s the great repricing?
What is that?
Yeah, like the great asset repricing.
I've been wondering, you know, we have these amazing image generators, for example,
why is Shutterstock still a $50 stock?
You know, you can make an argument that, oh, they have access to all these images
that they have the copyright to so they can train a better model.
It's like, well, maybe, but it seems like that's cannibalizing their own business.
You would think that the stock would plummet.
And so the only thing I can conclude is that markets
are a little bit myopic, especially with the dominance of institutional investors.
Like, you know, AGI being near has clearly not hit asset prices.
But when it does, I think there are sort of multiple equilibria.
Some stocks go to zero.
Some stocks go to the moon.
And that's going to be a very disruptive event.
So there's that component to it.
Then I also talk about, like you prefaced, the rise of sort of synthetic content in the internet.
I think, you know, this is also sort of a Neal Stephenson idea, just the idea that the internet can become sort of flooded and become a miasma
where we don't even know what's going on.
I think we already see hints of what that looks like.
But once you have AI agents, like amazing catfish bots that have rich profiles in history,
we're going to have to, I think, shift a lot of our communications back into private channels,
like telegram or signal in places where we have some kind of zero knowledge protocol
that can verify our humanness.
And that requires sort of like closing the API in some sense,
because we can no longer trust that, you know, being able to generate
text that passes the Turing test is proof that you're human. And by the way, those AI catfish bots,
they're coming for your cryptocurrency, guys. Let me tell you, okay, they're going to be hitting you up for that.
So we get to the late 2020s, and you've got AGIs indistinguishable from humans for most work-like activities,
the great repricing, which we talked about, new data centers can't be built fast enough,
Congress in panic, and then we get to kind of the mid-2030s. And you talk about the new deal beginning to crack.
This is FDR's New Deal, I believe, which is sort of the patchwork system applied to Western liberal democracies, including the U.S., to sort of patch up the flaws of the Industrial Revolution. And that whole entire system starts to crack by the mid-2030s, you say. There's also, on the good side, an explosion in life-saving drugs, but maybe the bad side for the government, tax revenues drop, income shifts to capital from labor. That could cause some destabilization, I would imagine. Private regulation starts.
to emerge. So tell us about the mid-2030s. What's happening there? Yeah, just to set this up, I have
another post called why AGI is closer than you think, where I walk through some of the neuroscience
and sort of theoretical deep learning behind why we should have forecasts of AGI being pretty close.
And one of the core ideas is that one way you can get to AGI is just by brute forcing a human
emulator, by taking human generated data. And once you have a certain scale of compute and you
extrapolate the scaling laws, it gives you an upper bound on when we should expect sort of ideal
human emulators to emerge.
And there's a group called Epoch AI.
They have a model called the Direct Approach that sort of puts all these variables in.
And the modal estimate is around 2028, 2029.
So that's when in principle we should be able to essentially emulate any human
task on a human level performance and beyond.
And so what that looks like in the immediate term, well, it looks like, you know, first of
all, you know, massive labor market dislocation.
You know, you were talking about the sort of institutional infrastructure.
It looks like people developing sort of AI native corporations that are,
moving at inference speeds, right?
Like two or three-person companies that have 50,000, 100,000 effective employees.
Like, this ends up becoming a throughput issue for legacy institutions.
You know, our court systems, for example, are already inundated.
You know, they're already overwhelmed.
Most things that go to court settle.
Most criminals plea.
You know, then there's this long, extensive margin of court cases that never get brought
because people don't have the money to pay for the expenses.
But once everyone has an expert AI litigator in their pocket,
That changes.
And so the courts, you know, if they don't adopt technology, which I don't expect them to, you know, they'll basically fail.
They'll suffer a denial-of-service attack.
And you can sort of see similar sort of DDoS-style attacks across all our institutions.
And so what happens in the court case, you know, if the courts are overwhelmed, we go to a private arbitrator.
Or maybe people sell the ideal AI judge that always renders a neutral decision in sort of a provably auditable way, right?
And then so, like, we start to shift legal proceedings into a parallel system.
And you start to see the same thing happening in medicine.
You know, the FDA took a freaking vacation for Thanksgiving before approving the second booster.
They're not exactly in a rush.
And, you know, typical year, the FDA approves something like 50 new molecules for drug discovery.
And if that becomes 50,000, you know, they're just not going to handle that throughput.
And this repeats down the line.
I talked about in the piece the rise of these large partnerships, multi-tiered partnerships.
So, beginning in the early 2000s, they represented maybe 1% of all partnerships.
Now they're over 30% of all partnerships.
And the thing about these partnerships is they may be only, like, five people, but they have, like, 20-plus tiers.
Twenty is, like, the lower bound, often up to 100 tiers.
And the network diagram of these companies is incredibly complicated.
Like no one can understand what they are.
And the reason they exist is because the Internet made it easier to build these kinds of limited partnerships,
and they're incredibly useful if you're trying to hide money from the IRS,
because every node in that network is something the IRS in principle has to audit.
And so if the IRS is lagging behind in their adoption of AI,
then they just get overwhelmed as well because we all have the AI CPA in our pocket
who is able to shelter our income and complexify our liability.
So just like multiply this across everything everywhere all at once,
you start to see why things could start to crack.
So finishing this off, we get to kind of the late 2030s
and then the 2040s, because by this point, I guess,
government services have been kind of decoupled, tax revenues are dropping.
The federal government is starting to crack.
Maybe we start to get to see strong AGI around this time.
And then you're predicting kind of this end state possibility, right,
into maybe the 2040s where we have three different nation states that are left.
We've got sort of the police states of the world.
Maybe a totalitarian CCP is an example of this in China.
Then you've got the failed states. I mean, pick your favorite failed state; I think we were
looking at Afghanistan earlier in the show. So you've got failed states. And then you've got
tech-forward post-internet states, like the Estonias of the world. And these are the three types of
states. And then for the U.S., your thought there is failed state, it would be a failed state,
except for the fact that it has an archipelago of micro-jurisdictions. So almost, but not quite,
Balajian network states, because they also have territory that's manifest in real life,
but certainly much devolved from the large federal structure that we have today.
And that's where we...
The soft landing failed state.
Yeah.
And all of this.
Okay.
All of this.
Again, bankless listener, I'm asking for Sam to kind of predict this in the way that a crypto
guest might predict the price of ether and Bitcoin, you know, by 2030, right?
So they might be off in the dates.
But directionally, this is where you think it's going.
Tell us where the 2040s end up after this massive wave of AI technology transformation.
Sam. Yeah, so it's hard to get the specifics right, but one thing I'm highly confident of is that, you know, the government by 2040 will look as different to us then as, say, the government in 1940 looked to people from 1850, right? I think it's going to be that dramatic, probably more dramatic. And so, you know, I'm wearing my Singularity 2045 shirt. So, you know, this is the date that Ray Kurzweil set for the real singularity. You know, human-level
intelligence is just one thing, but once we have humanity-scale computers attached to some
fusion reactor, that's a whole other beast, because in principle, humanity scale computer could
model every human mind and predict what everyone is doing all at once. That's the line where,
you know, the compute buildup at that point will be so great. We'll have, you know,
exascale computers in our backyard that you can really start to see the real singularity, where
there is like real takeoff where, you know, in a matter of months we're building Dyson
spheres, that sort of thing. That's where, you know,
my mental model completely breaks down, because by definition the singularity is something that you can't see past, the event horizon. But in the context of the piece, I sort of talk about whether we want to go and rush into that post-human oblivion or not. If we're struggling under this sort of fragmented empire, we may not have a choice, right? Because fragmented nation states are much more adversarial, much more competitive. And if the government has essentially collapsed in all but name, who's going to stop, you know, the free city of California, with all its ML and AGI trillions,
from turning on their supercomputer?
But, you know, even leading up to this, I think you can see how this is really not that
wild of an extrapolation from where we already are.
You know, circa the 1930s and 40s, we did our own Manhattan project.
We did it in-house.
We had, you know, basically a startup structure that built an atom bomb in three years that, you
know, and marshaled 100,000 people working in secret.
Like, we could never do that again.
Same with the Apollo project.
Like the Apollo project, we went to the moon in 10 years.
These days, if we want to do an Apollo project, we contract SpaceX, right?
And so our government, because of the broader decline in state capacity, is already trending
towards being just a glorified nexus of competitive contracts.
And so if all the government is, at this point, sort of just a bunch of vendors and
sort of a payment processor in the middle, you start to see how amid broad system failure,
you can just remove that payment processor, and all the core institutions still exist,
but like you said, they're now decoupled.
Ryan brought up an early theme in the very beginning in this episode.
It was technology and freedom.
And I think we've also been touching on governance throughout this entire episode.
And freedom and governance are very, very correlated, right?
Like, the whole idea of good governance is that it promotes freedom.
To me, the theme of this episode has been the tension between technology and governance.
And also, like I said at the very beginning, it's like how you identified governance lags.
Governance lags technology.
And I think maybe my conclusion, one of my big
takeaways from this episode, is that as technology accelerates, hence all the accelerationism out
there, and crypto is very fast and AI is even faster, governance isn't going to be able to keep up.
Hence where we end up, which is: technology just wins. And that makes you optimistic,
Sam? It makes me detached. I don't know. I don't know. Like, you know, old me was ready to
upload my brain and become part of the hive mind and achieve transcendence.
Nowadays, I am much more ambivalent.
Like, I think the world over the next 15 years is going to be a renaissance in many ways.
Incredible new discoveries, incredible riches.
Like Elon says, it's not universal basic income.
It's universal high income.
We're going to be, it's going to feel amazing.
And I hope we can find a way of sort of finding that narrow path, right, where we preserve the good and
avoid the bad, and hopefully not make Eliezer's end-of-the-world prediction come true. Because people will
dismiss Eliezer as a bit of an alarmist, and I think one of the things he does when he talks is he
tries to solve for the equilibrium, in a way. He tries to steel-man the one risk that
will be remaining when all the other risks are dealt with. And I think that's led to a very polarized
debate in the AI domain where you have people on one side who are like, it's going to be amazing,
it's all great, intelligence is the root of civilization. And then you
you have other people who are like, oh, it's going to literally kill us all. And I think there's
this huge dearth of middle ground scenarios where some things go well, some things go bad,
you know, thinking about these second order effects, that is completely under theorized.
And so at least as a provocation, that's what I hope my series gets people to thinking about.
Beautiful. Sam, thank you so much for joining us. This has been an absolute pleasure to have a
conversation with you about all of these deep topics. We listed a bunch of things that we
were going to touch upon in this episode and, you know, philosophy, politics, governance,
and all of these things, and I feel like we hit so many of them. It's been such an enjoyable conversation,
so thank you so much. Thanks, Ryan, David. Action items for you, Bankless Nation. We mentioned the post
a number of times. It's all found on Sam's Substack, at secondbest.ca. The AI and the Leviathan
article series is the one you want to read. Also, Sam's think tank, the Foundation for American Innovation,
has a fantastic podcast. It's called The Dynamist. We'll include a link to that in the show notes,
to learn more about what his organization is up to. And since we mentioned
it so many times, we've got to include a link in the show notes to the Eliezer Yudkowsky episode, entitled
quite aptly, "We're All Going to Die." If you missed that episode on why he is concerned, greatly
concerned about AI and alignment and safety and all of those issues, go check it out.
Risks and disclaimers: of course, none of this has been financial advice. I don't think we dispensed
any advice that could even resemble financial advice on today's episode. Got to let you know,
crypto is risky. So is AI. Man, so are the next couple of decades.
I hope we make it.
It's not new information, though.
You could lose what you put in, though.
We are headed west.
This is a frontier.
It's not for everyone, but we're glad you're with us on the bankless journey.
As always, thanks a lot.
