Your Undivided Attention - The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
Episode Date: June 12, 2025

The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Tristan's TED talk on the Narrow Path
Sam's 95 Theses on AI
Sam's proposal for a Manhattan Project for AI Safety
Sam's series on AI and Leviathan
The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson
Dario Amodei's Machines of Loving Grace essay
Bourgeois Dignity: Why Economics Can't Explain the Modern World by Deirdre McCloskey
The Paradox of Libertarianism by Tyler Cowen
Dwarkesh Patel's interview with Kevin Roberts at the FAI's annual conference
Further reading on surveillance with 6G

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready?
The Self-Preserving Machine: Why AI Learns to Deceive
The Tech-God Complex: Why We Need to be Skeptics
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

CORRECTIONS
Sam referenced a blog post titled "The Libertarian Paradox" by Tyler Cowen. The actual title is the "Paradox of Libertarianism." Sam also referenced a blog post titled "The Collapse of Complex Societies" by Eli Dourado. The actual title is "A beginner's guide to sociopolitical collapse."
Transcript
Hey, everyone. Welcome to Your Undivided Attention. This is Aza Raskin.
And this is Daniel Barcay.
So today's guest is Sam Hammond. He's the chief economist of the Foundation for American Innovation.
And I'm very excited to have this conversation with Sam, in part because we just come from different backgrounds.
We have different worldviews, sort of take different stances about the world. And yet on the
biggest thing, we seem to agree. And so we really wanted to have this be a conversation
about, well, how AI is going to go and how it can go well. The recap is AI companies and global
superpowers are in a race to develop ever more powerful models, moving faster and faster without
guardrails. And of course, this dynamic is unstable. Putin has said whoever wins AI wins the
world. Elon Musk says AI is probably the way World War III starts.
We've just passed the threshold of the latest Anthropic models starting to have expert-level virology skills.
And there are really two end states that we've talked about on the podcast, and I think Sam sees two,
and that either we end up in a dystopia where a handful of states and companies get a previously unimaginable amount of wealth and power,
or that power is distributed to everyone, and that world ends in increasingly uncontrollable chaos,
which will then make the move to dystopia even more likely.
And so there is a narrow path to getting it right,
where power is matched with the responsibility at every scale,
but right now we aren't on that path.
And so today's episode is really about how we might get on that path.
So, Sam, thank you so much for coming on Your Undivided Attention.
Thank you for having me.
Now, Sam, as you could probably hear,
I'm just getting over a little cold,
but I was really looking forward to this conversation,
so I wasn't going to let that stop me.
So Aza just talked about this in the
intro, but you come to the AI conversation from a different perspective than a lot of our guests.
Like a lot of the people that we have on come primarily from AI safety or harm reduction.
And here at CHT, that's our priority as well.
But you have what some might call an innovation first approach to AI development.
And you've described yourself as a former accelerationist, a techno-optimist in the Marc Andreessen vein,
but you also talked about updating your worldview because of the fragility of our institutions.
Can you just tell our listeners a little bit about where you're coming from
and the top line of how you think about technology and AI?
Sure. So I've always thought of myself as maybe a techno-realist, more than optimist per se.
So I got into this area as a young kid interested in philosophy of mind, cognitive science,
evolutionary biology, debating, you know, is the mind a computer?
And coming to the conclusion that by some description it is, at a pretty young age,
and trying to reverse engineer that.
And intellectually, I think one of my earlier sort of philosophical transitions was being very much a hardcore libertarian and coming to realize via an understanding of this history that institutions like property rights, the rule of law, religious freedom, these things are actually kind of new constructions and are not natural.
They're part of recent Western history.
And they don't necessarily result from a weak state, right?
They resulted from, in some cases, the strengthening of the state
out of the feudal era with early technological growth that favored, you know, the consolidation
of militaries, the early ability to collect taxes, formal bureaucracies. These things were
driven by the printing press, by other technological currents. So this way in which technology
shapes and alters the nature of our institutions became very apparent to me. You know,
at the same time, I was also very interested in political philosophy. And a lot of my interest in
innovation and technology
came from understanding the history
of the Industrial Revolution and how
sort of out of equilibrium we are.
Most inventions that we use
on a daily basis, the biggest
impact things, were invented
in a span of less than 100 years.
And you can think of that as the first singularity.
Deirdre McCloskey has this,
what she calls the hockey stick curve
of history, where you look at
GDP growth over time, and for
most of human history it's basically zero,
and then sometime around the late 1700s or the 1800s
it goes vertical.
And we're on that vertical curve
and everything we owe to our civilization
is a result of that,
is dependent on that economic growth.
And so that begs the question,
could this happen again?
Is there a further inflection point in the near future?
And so this first singularity you talked about
the industrial singularity.
Like we're living in a world of just pure industrialization now, right?
And it's so different from what we had before.
And you're saying that moving forward into this other world, it could be as different as it was between industrial society and pre-industrial society.
Talk a little bit about how you think of that transition, because you also come at this from this mixture of this could be this beautiful transition, but also this could be quite a chaotic transition.
If there is going to be another technological transition, we should, I think, by default, assume that there will be similar institutional transitions of similar magnitude.
And no one could have foreseen circa, you know, 1611 with the first publication of the King James Bible that in a few hundred years we'd have the first railroads and the Enlightenment, telegraph networks.
So I just fully expect that if we do get to AGI, and I think we're quite near, that we'll have a similar transition, but probably one that's much more compressed in time, and that will challenge all our assumptions of, you know, effective governance, of right-sized institutions,
and all the trappings of modern nation states.
So, you know, I first learned of your work, Sam,
I think we first met at a little conference in Berkeley last year,
and you were giving a talk.
And actually, we borrowed a little bit of your talk
for when Tristan gave his TED talk on the Narrow Path.
And I think you borrowed some of the things on surveillance and other bits
from our AI dilemma talk.
It's sort of nice to see the reciprocity.
And actually, I'm going to ask you in a second,
to sort of recapitulate a five, ten-minute version of that talk.
Because I think there's so many really great points in there.
But I think we should start with this sort of thought experiment that you give in that talk.
You invent a new technology and it uncovers a new class of responsibilities.
And then society has to respond.
And you give that as x-ray glasses.
And so I'd love for you to just give that example.
Yeah.
So the intuition here, by the way, comes from looking at the ways in which even small
technical changes can lead to very large
qualitative outcomes. Like the birth control pill
drove qualitative changes to society.
And so, you know, in this talk I give, I open up
with the thought experiment. Imagine one day we woke up
and just like manna from heaven, we had x-ray style glasses
that we could put on and see through walls, see through clothing,
everything you could do with x-ray glasses. There's really
three canonical ways society could respond. There is
the cultural evolution path, which is, you know, we all adapt
to a world of post-privacy norms.
We get used to nudism.
Then there's the adaptation mitigation path.
Well, can I slow that down just a little bit?
Just because if you're listening to this,
so if you invent x-ray glasses
and everyone can all of a sudden do what?
They can see through walls, see through clothing.
A bunch of parts of our society
that we sort of depend on
that we've gotten used to being opaque,
suddenly become transparent and things break.
Right. So anyway, keep going.
Right.
Then there's the adaptation mitigation path, right?
So we could retrofit our homes with copper wiring or things that could block out the x-rays.
We could wear leaded underwear.
We could take a variety of mitigations.
And then there's the regulation and enforcement path, which is maybe government uses its monopoly on force to pull all the x-ray glasses, say, we're the only ones allowed to use the x-ray glasses.
And probably, you know, society would have some mixture of all three of these things.
But what wouldn't happen is no change.
Right.
It's a classic collective action problem.
So what I love about this example is that on one hand, you know, if everyone gets the x-ray glasses, you're thrust into this kind of chaos where all these people are doing things they shouldn't, understanding what buildings are unlocked, when people aren't home, it can cause chaos in society. On the other hand, if only the government has the x-ray glasses, then you're entering this kind of dystopia, right? Where it's sort of corruption, state power, overreach, or in the third case that you're saying, where we're just adapting to all of this,
It's like throw out all of the social norms.
We have to invent a new society from scratch.
And it feels like we don't want any of the three of these things, right?
We want to find a narrow path where we don't have to worry about everyone wreaking havoc.
We don't have to worry about the government sort of having all the power,
and we don't have to worry about all of our social norms that we built our whole society on unraveling.
And we here at CHT are committed to finding that narrow path between all three of these bad outcomes.
We may have some differences of opinion about how we get there, which we can discuss.
but I wanted to give you
a chance to set the stakes for this conversation
what are the different pitfalls of us doing this wrong
and why should people care that we get this transition correct?
Yeah, you know, connecting this back to my libertarian evolution
part of it was understanding the Industrial Revolution
as sort of this package deal, right?
And the economist Tyler Cowen has an old essay
called The Libertarian Paradox where he points out that
it was sort of libertarian ideas around laissez-faire, markets,
capitalism that spawned the Industrial Revolution
and kicked off this tremendous phase of growth.
But by the same token, it set off a series of dynamics and new technological capabilities,
new kinds of negative externalities that necessitated the growth of first bureaucracies
to regulate things like public health and safety,
and then welfare states to facilitate compensation for people who lost their jobs through no fault
of their own.
And so there's always going to be these tradeoffs.
And so that concept of the narrow path comes from Daron Acemoglu,
now the Nobel Prize-winning economist,
who has a book called The Narrow Corridor,
which is this history of the transition
into modernity and how
following the English Civil War and the wars of religion
there was a realization that we need to consolidate
power within nation states
while also maintaining respect for
freedom of religion, for rule of law,
for equality under the law,
and striking a balance between the power of state
and the power of society. And so the challenge
is almost like in terms of like differential
calculus or something like that. How do we stay
on this stable path
and deal with the shocks
that are both simultaneously strengthening
the power of the state
and the power of society, right?
Because AI is not merely enabling
security agencies to be able to do
more bulk of data collection
and things like that. But it's also an
aggregate empowering individuals
to have the power of a CIA
or a Mossad agent. And
what does that mean in aggregate?
As society
just by dint of
there being way more computation available
to everyone else, starts to overwhelm
the capabilities of the state.
So many people assume that you can just have a change
to technology and then it won't change society
nearly as deeply as you think. And I think
that's right, but I think we believe that we can do
this better and worse. And I hear people kind of
throw up their hands and say like, oh, it's just inevitable.
You know, we're going to have to change everything
and that's okay. Whereas I kind of worry
about this, right? I think we can do radically better
or radically worse at these transitions.
And I'm worried that if you just say it's a
package deal, then we're not factoring in our own
agency to make this go differently.
You know, there are hinge points in history where human agency matters a lot,
but you can't just, you know, to be a good surfer, you need to know when to catch the wave.
And it's a necessary, but not sufficient, condition that you know how to surf.
And they're better and worse surfers.
But if the wave is not cresting, then you're not going to do anything.
And so there are these big tidal forces in history.
And then there are ways in which, you know, things really are packaged deals
because of the way they alter the kind of coordinating mechanisms
we have in society.
We had a gala last year for our 10th anniversary
where we had Kevin Roberts,
the president of Heritage Foundation speak,
very conservative organization.
And Dwarkesh Patel,
of the Dwarkesh Podcast, interviewed him
and asked him for his takes on superintelligence,
which I thought was fun.
And, you know, he said,
you know, if we have superintelligence,
we might have 10% GDP growth or greater,
but we'd also potentially, you know,
rapidly go down a post-human branch of the evolutionary tree.
And, you know,
Kevin Roberts was like, oh, I love, you know, I'm a conservative, I love GDP growth.
I'm all for that 10% GDP growth, but I'm also like a Christian conservative.
I don't want to become post-human.
And to point out that it's a package deal is not to deny our agency, but just to make us
reflect on the ways in which you can't have one without the other in some cases.
And if we are going to go into this future with clarity, then we need to be realistic
about the ways in which these things are bundled together.
Okay, so I hear that, which is to say, like, technology
always changes the landscape on which the game is played.
So the game is going to change.
And you can't help that.
But which game you decide to choose to play on top of that game board is still up for grabs.
And the initial conditions matter a lot.
But I do want to see if I can get you to tease out a little bit of the counterfactual of just imagining what an F grade might have looked like for the Industrial Revolution.
And the reason why I want you to paint that out is, I would argue right now,
we have very little sort of state power intervention
into trying to put guardrails on AI.
And I'm curious how that would have looked.
If we're in the same place now that we were in the Industrial Revolution,
what would that have looked like?
Yeah, I mean, we could have had a nuclear holocaust, right?
We could have had, you know, a march through Europe of the Third Reich
or of the Soviets taking over the world,
and you can get situations where you get lock-in in a less-than-ideal equilibrium.
You know, I think it is kind of miraculous that we haven't blown ourselves up so far.
You know, obviously, the Industrial Revolution was, like, a massive boon for human living standards,
well-being, and knowledge creation and understanding.
And I think it was worth it, even with all the calamity that we had to pass through.
By the same token, you know, the printing press arguably, you know, precipitated the wars of religion
with much inferior technology.
And yet, I think we wouldn't deny that, like, you know,
I'm glad that we have the written word and books
and academic publishing and all these things.
A former colleague of mine, Eli Dourado, has a blog post called
"The Collapse of Complex Societies,"
reviewing some of the literature on how complex civilizations collapse.
And one of the recurring themes is that you often will have
these sort of technological trends that are moving quicker
than institutions can adapt.
And partly one of the reasons institutions fail to adapt
is because there's an incumbent that is
forestalling the eventually
necessary adaptation.
And I see this playing out with
debates around artificial intelligence
and, you know, it's like we're going to have to give
some things up, right? And maybe one of
those things is like our understanding of
intellectual property. It may be
you know, we may
want to have restrictions on the
level and degree of surveillance, but
you know, at least from my vantage point,
it seems like, and I'm not saying this is good or bad,
just like some kind of surveillance
state probably is going to be inevitable in our future. And the question is, what are the guardrails and
what are the limitations on that and how is it actually governed? And so there's, I think,
co-equal risks in trying to steer the narrow corridor in a way that's not really progressing
anywhere, that's not really taking the developmental trajectory of a society seriously, that's
actually in a weird way trying to hold on to some aspect of the ancien régime.
Actually, I think that point you just brought up on sort of total ubiquitous technological surveillance is a thing I don't hear talked about enough.
Just that without AI, total ubiquitous surveillance is impossible, but with AI, it's inevitable.
You know, already AI is enabling things like Wi-Fi routers and 5G to, like, see through walls.
And certainly with the next generation of cell phones, 6G, companies like Nokia and Ericsson are talking about
a feature called network as a sensor. Because it's in the terahertz range, the network can tell your
heart rate, your facial gestures, micro-expressions, where your hands are. And that means everywhere
human beings are in cities, everything is known. And how do you possibly find an enemy when you have
no secrets? And that just seems like a thing that we're not talking about enough. And that sort
of gets into this next question of the stakes, right? Like, right now in Congress, you know,
someone is trying to sneak a provision in that says states cannot make their own rules around
AI, that it can only happen at the federal level.
So it means there can be no sort of like rule-based innovation that's not for the entirety of the
United States.
And so getting to the narrow path, actually, and like what we should be doing as a society
is, it's like, it's right here, it's right now.
And I really want to get you to talk about, because you've said you're not a fan of
preemptive regulation. What should we be doing in your mind? How do we get onto a narrow path?
So I think there's different buckets of things that seem obviously good. One to start with is
going back to this initial conditions point. I look at the experience of the Arab Spring where
weaker states actually failed, effectively, because of information technology, Facebook. And they were
much less adapted to a world of suddenly ubiquitous ability to coordinate and mobilize
and critique government actors
and expose corruption and hypocrisy.
Now, coming out of that,
China, other countries
saw what was going on
and were like, damn, we need to, like, get control
over the information ecosystem.
And in a sense, China is now well adapted
to a world of, you know,
ubiquitous open source AI models
and all kinds of powerful information technology
because they control the pipes.
So the question is like from these initial conditions,
if China or the West
get to very powerful intelligence,
first, is there a kind of winner-take-all dynamic where one of them pulls ahead in the same way
the U.S. pulled ahead of the Soviet Union in terms of GDP and technological capacity,
potentially exporting technology that enables weaker states to surveil their citizens in a way that
doesn't respect human rights and civil liberties? So I think that, you know, point A is it's incumbent
on the West, if we care about Western liberal democratic values, to maintain and grow its lead
in AI and then to export
its technologies around the world. And
in some cases, export tools
will be used for surveilling, but
have embedded within them privacy and civil
liberties enhancing principles and values.
Well, and you add on to that,
the idea that it's not just about surveillance,
the idea that technology
in general, but especially AI, may radically
change the game theory of centralized versus
decentralized states. That
sort of capitalist democratic states
ended up out-competing in the 20th century
might have been an artifact
of the technological environment of industrialization,
but now AI might give an advantage to highly centralized governments.
And to your point, I want to live in a world
where we maintain human rights, democratic values,
some of these things,
but we have to figure out how that works within an AI world.
Yeah, absolutely.
But I think that that's going to be an ongoing kind of learning by doing
in many cases, and the question is who's doing that learning by doing.
And so my zero-th-order policy recommendation is always,
do whatever it takes to ensure that the U.S. and the broader West maintain their AI advantage
and hardware in the models themselves and energy and the inputs that go into these models
and then to proactively engage the world for adoption purposes.
Yeah, so let's double click on that a bit.
You have called for a Manhattan project for AI to try to do some of this stuff.
Tell us a little bit about what you think should happen in order to maintain that competitive advantage
in order to make sure that AI strengthens our society.
Yeah, to be clear, the piece I wrote was a Manhattan project for AI safety.
Yeah, yeah.
I might have co-opted it a little bit for the conversation.
I've been critical of this idea that the federal government should have some secret
black site, five gigawatt data center and build AI in a lab.
I think that would be very dangerous and actually in some ways decelerationist,
because if we just let the companies proceed,
they're going to move much faster than the Department of Defense.
So, you know, there's going to be a component of this that is national standard setting.
A component of this is fixing our own internal problems around energy permitting data center infrastructure
and, you know, then controlling the export of our most advanced hardware.
So, like, the Nvidia chips. China has a trillion-yuan, $138 billion state VC fund that is their Stargate project in a sense.
It's doing this big push to build data centers for their leading tech companies.
And right now, the best chips in the world are export controlled.
And so there's this cat and mouse game going on in how we allocate global compute.
And so I think that's a very important vector for maintaining the aggregate amount of compute
that is in the jurisdiction of Western countries or our close allies.
I think one of the challenges in this conversation is that, like, the crux, I think, often slips by.
I'm curious how you'll react. AI as a technology is very different than every other
technology, because with other technologies, like, if you need to build a more powerful airplane, that
means you need to understand more about how airplanes work. If you want to build a taller skyscraper,
you need to understand more about, like, the foundations of building. But with AI, you don't actually
need to know more about how the internals of AI work to build a bigger, faster, more powerful, more
intelligent AI. And that means, I think, there's a confusion when we say we need to
beat China. There's a smuggling in of, well, that means
that whatever we're building we can control,
but actually what we've seen is that the more powerful the models,
the less able we are to control them.
And so shouldn't the race be towards strengthening a society
versus racing for a technology that we don't yet know
how to either individually or cybernetically control?
I guess I kind of question the premise.
I think as these models have gotten more powerful,
in some senses they've gotten easier to align.
I think we're rapidly moving to a world
of reinforcement learning post-training,
and I think that's going to open up a whole host of other problems.
But where we stand today, in some sense,
the biggest, most powerful LLMs are vastly more aligned and controllable
than the ones that we had two or three years ago.
Although we're also seeing increasing rates of, like, o3 deceives a lot more
than previous models.
And I don't think that, I don't think any of those things are insurmountable.
So, you know, a lot of these AI safety debates sort of end up conflating
a variety of different concerns people have.
One of them is the classic alignment problem.
How do we control very powerful superintelligence?
The things I've been talking about are more on like,
suppose we have very controllable superintelligence.
That world still looks very different.
And I think we're going to move into a world
where there's not just like one singleton AI that takes over,
but one where there's just a diffusion of very powerful capabilities
in many people's hands and powerful AI agents
all doing more or less what you ask them.
And there will be probably cases of like, quote unquote,
rogue AIs that, like, shut down the Colonial Pipeline or do a ransomware attack or something like
that. But I think for, you know, just using basic, you know, access control techniques, I don't
think we need to have, like, a full mechanistic interpretability, like, understanding of how these models
work to know almost at the level of physics that they have the behavior that we desire.
We still enter into this very new world. And most of the biggest problems are still unresolved
because obviously people have very different interests, right? You know, an ex-boyfriend could
sic a malicious AI bot that is fully aligned in the sense that it does
exactly what he asks on his ex-girlfriend and then have that thing
like, autonomously replicate itself and constantly be terrorizing her.
Those things are not an alignment failure. It's a failure on the part of the humans
not being aligned, which, right.
Oh, okay. I mean, fair enough.
But so if we step back a bit, what you're talking about is
there's two big problems. Alignment is typically a question of, is
the AI doing what you're asking it, right? And then there's the question about
the fragility of our institutions.
So forget about the alignment problem for a second.
Let's assume you're right.
I question that a little bit.
I think it's going to be much harder to make sure that these things are aligned.
But never mind.
Let's keep that on the side for a second.
You talk a lot about the fragility of our institutions.
And I really want you to go more deeply into that
because I'm worried that our institutions are not going to be able to keep up with this
and that we're going to enter, even with aligned AI,
a period of a very chaotic transition that is quite avoidable, in my opinion.
And I think some of the reading I've done from yours is we're on the same page here,
that we need to really watch out to make sure
that deploying this recklessly across our society
doesn't create a whole bunch of chaos
that we wish we had never done.
So can you walk us through a little bit
about your view on institutional fragility with AI
and a little bit on what we can do to avoid it?
Sure.
So in big picture, what I see is this differential arms race
between public sector and private sector
AI diffusion, where AI is diffusing much more rapidly
into the private sector than the public sector.
And so I think there's a need for more accelerated
adoption in the public sector. That's point number one. But that doesn't really go quite far enough,
right? Because if you think about the different vectors in which AI is going to cause harm, not from
misuse, but just from valid use at unprecedented scale, like essentially institutional denial-of-service
attacks, where you can imagine, if we all had AI lawyers in our pocket, we all can just
like be suing each other constantly. The courts are going to get overwhelmed unless they adopt like
AI judges in some form. Because I don't foresee like our court system, which is,
technologically, like they still use human stenographers.
Right.
Like adopting AI at that pace,
you know, I think there is a world where there's a kind of displacement effect,
where just in the same way that Uber and Lyft and modern, you know,
ride-hailing technology displaced licensed taxi commissions, right?
Where, you know, maybe we will have like the Uber but for drug approvals
or the Uber but for adjudicating contracts and commercial disputes.
And when you say Uber, do you mean that there's some startup
somewhere that says, aha, I'm going to solve law. There's going to be a new, it's like
court, but without all the vowels. And then what was part of the state moves into something
that is private and that private thing is now subject to all of the VC like incentives. Is that
what you're saying? Yeah, it might not be one company. It might be something that's done
competitively or bottom up. You could easily imagine like a world where a lot of things that today
require formal institutional processes and bureaucracies get sort of pushed out to the edge,
using kinds of raw intelligence. And that may not come from any one company, but it will look
like a very different way of doing business. Can you tell us what institutions you think are
the most vulnerable for disruption around AI? And what are the kinds of disruptions you're
expecting to see? First of all, you look at where there's going to be the most rapid progress and what
institutions we already have to govern those processes. And then what is their willingness to actually
adapt and co-evolve? You know, look to
where there's likely to be very rapid AI progress.
Dario Amodei, in his Machines of Loving Grace essay,
talks about the potential for, in the very near term,
AI scientists that could perform basic R&D in biology
and in other areas autonomously in parallel
with thousands of other AI scientists.
That could, you know, in theory, lead to a speed up
of scientific discovery, you know, collapsing
what used to take a century into a decade or less.
You know, if we have institutions like the FDA
that are in charge of approving drugs
and they do that through a three-phase clinical trial process
where you have to get human volunteers or patients
in two different treatment and control groups,
and it's a very long, drawn-out process.
If you stretch this out long enough,
you could even imagine us one day
having human models in silico
that could completely characterize the effect of a drug
on a particular disease
without ever needing human trials
and have that validated against humans
to prove that it's accurate.
The barrier is not that
the people at the FDA don't see this coming, but that these things are written into law.
And the question of how fast can the FDA adapt is fundamentally the question of how fast can Congress write new laws
and how forward-looking are they? And what makes it so hard is it's not just the FDA, it's not just drugs,
it's not just science, it's going to be everything, everywhere all at once. And so this is what gravitates me
towards just seeing us not quite getting this all right and for things to shift off into these private sector
solutions. Okay, but help me see the balance there because in your blog post 95 Theses on
AI, one of the things you say is periods of rapid technological change tend to be accompanied by
utopian political and religious movements that usually end badly. You know, when you're saying,
okay, we're going to revolutionize the FDA and we're just going to let this play out and we're not
prepared for the change that's going to come. That's what comes to mind for me is that this feels like
a utopian movement saying we can just allow it to run roughshod across our institutions and that
that will usually end badly.
So help me, what's the balance here?
I agree that we don't want to get stuck
with our existing institutions
dragging this into the dirt.
But at the same time,
I don't want some sloppy thinking
about this will all end well,
lead us into this sort of religious belief
that this is going to go well
and not have us think hard
about how we roll this out.
Yeah, I certainly don't have religious beliefs
that will go well.
I think what characterizes the utopian movements
is believing in some end state
or knowing how the story ends
and trying to move us closer
to the end of the story.
And I don't know how
the story ends. And I don't think anyone really knows how the story ends. I think what we can know
are sort of general principles for complex adaptive systems. There's no one in charge of the thing we
call America. And when I look at the things that are barriers to AI, you know, you mentioned earlier
the AI moratorium that's been proposed. I think that's built on this faulty assumption that the
thing slowing down AI are AI-specific laws. When actually the thing that's going to slow down
AI diffusion are all the laws that deal with everything else, right?
The laws in health care or finance or education.
And so I think if I could wave my magic wand and do two things at once,
I, you know, A, have in some sense more rigorous oversight over AI labs,
more AI-specific safety rules and standards for the development of powerful forms of AGI.
At the same time as I'm essentially doing a jubilee on all the regulations that currently
exist in most sectors, not because we want a world that's totally deregulated,
but because those regulations are starting to lose their direction of fit.
What I am hearing you say is we're going to need new-paradigm institutions.
It actually was reminding me of a moment.
I was at the Insight Forum, which is that moment in history where, for the first time, Congress called all of the CEOs from IBM and OpenAI and Google, and there was Elon Musk and Mark Zuckerberg, to come to the Capitol to answer
the question, like, what's about to happen? How can this go well? And it's a funny thing for me
to be sitting there across the table from $6 trillion of wealth. But after that event, I ended up
going for a long walk in D.C. And I ended up somehow at the Jefferson Memorial. And on
the southeast portico, I saw a quote of his that I'd never seen before. And it said,
I'm not an advocate for frequent changes in laws and constitutions, but laws and
institutions must go hand in hand with the progress of the human mind. As that becomes more developed,
more enlightened, new truths discovered, and opinions change with the change of circumstances,
institutions must advance also to keep pace with the times. We might as well require a man to
wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen
of their barbarous ancestors. And that just really hit me, which is I'm sitting in
the Capitol. And we're basically having a debate where we're not even really talking to each other.
Like, this is such an old institution. There are many ways of updating our institutions using the new
technology so that it can scale with AI. And so I'd just love for you to, like, get specific. You were
starting to talk about some of the other ways that we might fundamentally upgrade our institutions.
Yeah. So the fact that right now in Congress they're debating the Big Beautiful
Bill, this big tax bill, over a thousand pages, why don't members of Congress, who often
have 24 or 48 hours to read these things, have AI tools where they can just plop the bill
in and ask, you know, what does this do for my state? Does this have any poison pills? Are there
any provisions in this law that say one thing but could be used to do something else?
And you could imagine, you know, this would be incredibly useful not only in itself, but
because Congress is notoriously short-staffed. You know, this is just one area.
and I've done a little bit of work on this
and pushing Congress to modernize its tech stack
and actually begin embracing these tools
because as it stands today,
most congressional offices that I talk to
use ChatGPT on a regular basis,
but in violation of their own guidelines.
Sure, right.
And you see this up and down
in the federal agencies as well.
This goes back to my FDA point,
which is, like, it's not enough to give
FDA officials an AI co-pilot.
Like we're going to need fundamental process reform.
And I think a lot of these more scalable mechanisms
are going to look something
similar to, like, Twitter's transition from having a trust and safety team to having community
notes, where they went from something that was, you think of, like, the elect that were
deciding what posts violate the rules or not, to something that was bottom up. You know, we can
critique Elon Musk's broader interventions and his own trustworthiness, but the community notes algorithm
is incredibly, like, inventive in actually aligning incentives, so that if groups of people that tend
to disagree agree on a particular note, that note gets amplified.
And so are there other community notes-like solutions for the things that government does?
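A minimal sketch of the bridging idea described here, under toy assumptions: this is not X's actual Community Notes algorithm (which uses matrix factorization over the full rating history), and the rater clusters, threshold, and sample data below are illustrative only.

```python
from collections import defaultdict

# Toy bridging-based ranking: amplify a note only if raters from clusters
# that usually disagree with each other all find it helpful.
# Illustrative sketch only -- not the real Community Notes algorithm.

# ratings[note_id] -> list of (rater_cluster, found_helpful)
ratings = {
    "note_a": [("cluster_1", True), ("cluster_1", True), ("cluster_2", True)],
    "note_b": [("cluster_1", True), ("cluster_1", True), ("cluster_2", False)],
}

def helpful_share_by_cluster(note_ratings):
    """Fraction of raters in each cluster who found the note helpful."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for cluster, found_helpful in note_ratings:
        total[cluster] += 1
        helpful[cluster] += int(found_helpful)
    return {c: helpful[c] / total[c] for c in total}

def should_amplify(note_ratings, threshold=0.6):
    """Require cross-cluster agreement: at least two clusters rated the note,
    and every cluster found it helpful at or above the threshold."""
    shares = helpful_share_by_cluster(note_ratings)
    return len(shares) >= 2 and all(share >= threshold for share in shares.values())

for note_id, note_ratings in ratings.items():
    print(note_id, "amplify" if should_amplify(note_ratings) else "hold back")
# note_a amplify; note_b hold back (only one cluster found it helpful)
```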
And then are there other areas of government that just genuinely obsolesce, right?
And I think this is where there's going to be the biggest tooth-pulling exercise
because there are certain aspects of things that governments do that are technologically contingent.
Will we need a national highway traffic safety administration if all the cars are autonomous
and we don't have a single traffic death?
And, you know, will it just wither away, or will it, like, metastasize
into some other beast. And, you know, I think that this is going to be one of the biggest
fights. I have to admit, I'm both hopeful. I love the sort of pro-democracy tech angle,
especially using LLMs to figure out ways of supercharging our governance and not using this sort
of 19th century, 20th century system to try to, you know, really govern, but rather get inside
and change some of this stuff. But also, I'm kind of worried about these short timelines
introducing a technology into the heart of some of our most important facets of government
where we still don't really know how it works, if it's ready.
Do you have any ideas on how you think those transitions should go over what timeframes,
when is the tech ready to integrate?
I mean, when the rubber meets the road, do you have any specific recommendations?
Yeah, I'm going to say things that will sound a little contradictory.
On the one hand, I think that it's important to open up the ability for folks within government to experiment.
Right. And right now, the way the rules are written for IT procurement, for instance,
is all around compliance and minimizing risk,
and it's this very risk-averse culture.
But that risk aversion has come out
from sort of codifying processes
that worked in the past.
And the analogy I sometimes give
is when you're designing, say, a park
or the quad for university,
you could lay down the sidewalks
that you think are the right sidewalks,
or you could just leave the field barren
and let people choose the path that they walk on
and when you start to see a path forming,
that's where you build the sidewalk.
And I think there's going to be analogous things
with the use of AI,
because it's so general purpose,
we don't know all the ways
we could use it productively. And so we need to have, you know, pilot programs and sort of a more
permissive ability for individuals within government and within corporations and other
large institutions to experiment without needing permission and see what works. And then only later
that you start codifying things. At the same time, we've also seen in government when they
do these sort of, especially mega projects or big pushes for adoption, that's really important
that you not be too early. You know, when you're even a few years too early to a technology where
everyone sort of sees where the ball is going, you can get locked into something inferior.
And so the way things have historically worked is that the U.S. government has been a fast
follower of the corporate sector. And so I think we're going to need to see something similar
in this era where the hyperscalers of our current day are like the Carnegie's and the
Rockefellers and so forth from the earlier era. And they need to bring their learnings into government
and make the government also hyperscale. What it seems like you might be advocating for here
is treat the government a little bit more like a corporation. We've just seen
with Doge and Elon some version of a whole bunch of young 20-year-olds
rushing into the government.
And actually, when we last talked at the Curve Conference,
you were very optimistic.
I'm curious now that we've sort of seen it.
And you're like, we may be living in the best possible world.
Are we living in the best possible world?
How has that gone?
Yeah, ex ante.
I thought and still think that if there was going to be this narrow corridor path,
it would take something like Doge,
something that was detached from all these, you know,
political constraints and public choice problems
that would hold back more dramatic reform.
And, you know, as Doge has played out,
it's been obviously a huge mixed bag, right?
And that's probably because they're not a singular thing.
You know, Doge is in part a tool to enact the president's agenda, per se.
And the reason they went after USAID as their first target
was because the president,
signed an executive order, putting a pause on all foreign aid. And so it wasn't that Elon or
the Doge kids had it out for foreign aid. It's because they were tasked with using information
technology as the conduit to reestablish executive control over the bureaucracy. And that just
happened to be the way that played out. At the same time, and this has not been nearly as reported
on, behind the scenes, there's a lot of genuine modernization going on. I have a friend
at Health and Human Services, who's now the CIO,
and HHS is the sprawling agency.
They have, I think, 17 or 19 other sub-CIOs.
One of his jobs right now is dealing with the fact that no one in government
can share a file with someone else in HHS
because they're all using different file systems.
And it's this mundane, like, fragmentation that has accumulated over time
that I think Doge should be trying to solve.
Because at some point, we are going to have, like,
very powerful, like, AI bureaucrats, for lack of a better word,
tools and agents that could replace
hundreds of thousands of full-time
employees within the government. And we need
some of that infrastructure in place. And
there's just some basic firmware level government reforms
that are needed. And Doge is addressing them
while also, you know,
being a bull in a china shop.
Right. So it seems like we're stuck between
this adaptation regime that you were
talking about. Like, how do we make the U.S. government
resilient in adopting AI?
But then also you're worried that, you know,
this may just be sufficient
to cause these institutions to collapse
totally. This seems like we're back in paradox territory. Can you talk a little bit about that and
do you think that adaptation is going to work? I've mentioned the innovator's dilemma before.
The example of, like, you know, would the taxi commissions build their own Uber and Lyft? By default,
they don't. The question is, can you do the impossible and defy the innovator's
dilemma? And public institutions like the federal government, one of the big disadvantages they
have over private institutions is private institutions, private companies are constantly being born
and then dying, and there's this constant rejuvenation process.
And we only have one federal government.
That being said, the U.S. government has undergone kinds of what you could call,
like, re-foundings or reboots, whether that was, you know, Lincoln or FDR, or you could say
the Great Society was a partial one.
We've gone through these sort of constitutional resets in the past.
And I see the Trump administration more broadly as trying to facilitate another one of these
constitutional resets.
Now, it's bundled up with all kinds of other political commitments.
around trade, around immigration,
things that I don't necessarily agree with.
And what it comes down to is, like,
is the bureaucracy this headless beast
that just keeps going on, you know,
business as usual on autopilot,
or do you have some source of agency within government
that can actually begin reorganizing it
and preparing it for major change?
And this gets back to my earlier point about,
we can't be utopian and know what the end state is,
but we can apply general principles
for complex adaptive systems.
And one of those is rapid feedback loops,
experimentation, fail-safe testing,
and we just at the moment
completely lack the infrastructure to do that.
And so there's some precursory work
that needs to be done.
I would love it.
Just like I think you believe there needs to
have been Manhattan Project for
AI safety, I think we
need an Apollo mission
for massively
upgrading society's defenses
because VCs generally are not
going to put money there. So the market
isn't really going to get there until it's a little
too late. And we've seen examples of this in cybersecurity,
where, as our infrastructure digitized,
there just was no strong incentive for private corporations
to massively invest in their defenses,
and it's just left America's cyber capacities deeply vulnerable.
And so I think we're going to see something similar in that,
unless we can do a kind of large-scale Apollo mission,
which is not to say some big centralized thing,
but we certainly need enough resources
to accelerate our defenses.
I keep hearing back to your 95 Theses
on AI because there's so many gems in there.
But there's one that I really love,
which was building a unified superintelligence
is an ideological goal, not a fait accompli.
And there's something in here that really resonates with me.
You'll hear from people about we're building AGI,
we're out to build superintelligence,
this is this goal,
and you'll hear people talk about this like
it's not even a goal. Like it's a foregone conclusion. Like it's just the tech path in front of us.
But, you know, Aza and I, I think, are quite aligned that how we build this technology is very much up to us.
And whether we're racing to one goal or another is a choice. So can you talk a little bit about why you say that it's an ideological goal? And how do you see that?
Yeah. I mean, I think, you know, machine learning and deep learning is a general-purpose technology. And we could use that to construct better weather forecasts or to solve protein folding.
But this idea that we need to have a single coherent unified system with agency, sentience, and so forth, that is vastly superior to human intellect in every possible way, it doesn't seem necessary to me.
It's not like there's a big market demand for that.
I definitely see the case that there's a market demand for, you know, human-level agents that do, you know, routine work, office work, and stuff like that.
So I do worry, and this gets into the ideological undercurrent in Silicon Valley, that there's a strong kind of messianic almost milieu where we are going to bring on the sky god.
And I don't think we know if that's inevitable or not.
It does seem clear that if something like that were to happen, it's not this big structural thing.
Certainly China is not racing to build an AI sky god.
They're racing to build, you know, automated factories.
They're much more pragmatic and practical.
It's going to come down to, like, the CEOs of a handful of companies
with a kind of glint in their eye.
I think it's such an important point.
And, you know, Jaron Lanier, for example, talks about
we should be building AI like tools and not like creatures.
And I personally, I think it's a real choice that we have.
And it's not some foregone conclusion.
We can build a more tool-like future with AI and not just build the sky god.
Yeah.
It certainly will be safer.
And in order to not build such a thing or to deploy them safely will require human beings doing perhaps the hardest thing, which is solving multipolar traps, learning how to coordinate, where often our behavior is bound by the fear of me losing to you, my company losing to your company, my country losing to your country.
But the fear of all of us losing has to become greater than that paranoic fear of me losing to you.
And that to me is like, is the calling card of how to walk the narrow path or the narrow corridor: solving the ability to coordinate at scale while still maintaining, like, honest rivalry.
Yeah, 100%.
There's few enough actors in the world that will be able to build those systems in the near term that they should, at least in theory, be able to coordinate in the same way we have coordinated over nuclear weapon proliferation, biological weaponization, chemical weapons,
even now, you know, gain of function research, right?
And in many ways, the stuff that is going on in these AI labs is a kind of gain of function
research.
Right.
For listeners that don't know, you know, gain of function research in biology is where you deliberately
train into a biological organism an undesirable characteristic, for example, the ability
to jump species or the ability to become more infectious.
And then you try to study it and figure out what makes that happen in theory, so you can
prevent that from happening, right?
But that's where the hubris comes in.
You're being very Promethean.
You're giving an ability to an organism,
the ability to do something that you don't want it to do,
and then you're assuming that you can control it.
The one saving grace is that AI models
don't get into your respiratory tract.
Right, right.
They just get into your economy and get into your politics.
And then get into your mind.
Well, I just want to say, Sam,
it's been such a pleasure having you come on the podcast.
We really are in this, like,
sliding, closing window of
what AI is
before it becomes fully entangled
with our GDP and we can't make
changes. We're in that
final period of choice.
And even though we come from
very different, like, ideological
stances, it seems like there's a lot that we've agreed on,
also that we haven't.
But I'm just very grateful to get to have this conversation
and get it out to a wider group.
So thank you so much for coming on Your Undivided Attention.
Thank you. It was a lot of fun.
Yeah, thanks, Sam. This is great.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott, Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan, mixing on this episode by Jeff Sudaken,
original music by Ryan and Hayes Holiday,
and a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and so much more at HumaneTech.com.
And if you liked the podcast, we would be grateful if you could rate it on Apple Podcasts.
It helps others find the show.
And if you made it all the way here, thank you for your undivided attention.