Your Undivided Attention - What Would It Take to Actually Trust Each Other? The Game Theory Dilemma
Episode Date: January 8, 2026

So much of our world today can be summed up in the cold logic of "if I don't, they will." This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand. This way of thinking can feel inescapable, like a fundamental law of human nature. But our guest today, professor Sonja Amadae, argues that it doesn't have to be this way: the logic of game theory is a human invention, a way of thinking that we've learned, and that we can unlearn.

In this episode, Tristan and Aza explore the game theory dilemma (the idea that if I adopt game theory logic and you don't, you lose) with Dr. Sonja Amadae, a professor of Political Science at the University of Helsinki. She's also the director at the Centre for the Study of Existential Risk at the University of Cambridge and the author of "Prisoners of Reason: Game Theory and the Neoliberal Economy."

The history of game theory as an inhumane technology stretches back to its WWII origins. But humans also cooperate, and we can break out of the rationality trap by daring to trust each other again.
It's critical that we do, because AI is the ultimate agent of game theory, and once it's fully entangled we might be permanently stuck in the game theory world.

RECOMMENDED MEDIA
"Prisoners of Reason: Game Theory and the Neoliberal Economy" by Sonja Amadae (2015)
The Cambridge Centre for the Study of Existential Risk
"Theory of Games and Economic Behavior" by John von Neumann and Oskar Morgenstern (1944)
Further reading on the importance of trust in Finland
Further reading on Abraham Maslow's Hierarchy of Needs
RAND's 2024 Report on Strategic Competition in the Age of AI
Further reading on Marshall Rosenberg and nonviolent communication
The study on self/other overlap and AI alignment cited by Aza
Further reading on The Day After (1983)

RECOMMENDED YUA EPISODES
America and China Are Racing to Different AI Futures
The Crisis That United Humanity—and Why It Matters for AI
Laughing at Power: A Troublemaker's Guide to Changing Tech
The Race to Cooperation with David Sloan Wilson

Clarifications:
The proposal for a federal preemption on AI was enacted by President Trump on December 11, 2025, shortly after this recording.
Aza said that "The Day After" was the most watched TV event in history when it aired. It was actually the most watched TV film; the most watched TV event was the finale of M*A*S*H.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Hey everyone, it's Tristan Harris.
And I'm Aza Raskin.
Welcome, everyone, to Your Undivided Attention.
So Tristan, today I think is actually one of our favorite episodes, because we're diving really deep into a way of seeing the world that feels very obvious, that feels sort of like you're naive if you don't adopt it, but that is causing a deadening of the world.
And that is game theory.
Yeah, I mean, and the simple way to boil that down is the logic that you've heard of this podcast before around AI and social media.
Well, if I don't do it, they will.
You know, if I don't race for that attention and hijack people's psychological vulnerabilities to build social media doom-scrolling machines,
then I'm just going to lose to the other company that will.
If I'm a movie studio and I don't release Spider-Man 7 while the other guy is releasing Batman 10,
I'm just going to lose the game of building successful movies.
if I don't build the advanced AI as fast as possible and take all the shortcuts,
even though taking shortcuts is bad for humanity,
well, then I'll just lose and they'll win.
And cooperation, therefore, is for suckers.
And this logic, you know, feels inescapable.
It feels like it's a fundamental law of human nature.
But this episode with our guest, Sonja Amadae,
is about why it's not actually a fundamental law.
It's a specific way of looking at the world,
a way of looking that was invented by humans.
We sort of call this the game theory dilemma, which is to say that if I adopt game theory and you don't, you lose.
So game theory was actually invented in the 1940s by one of the greatest mathematicians and physicists of all time, John von Neumann.
And he was trying to understand how do you formalize how you win parlor games like chess and poker.
And this ended up getting used all the way up to our most existential threats like,
the nuclear bomb, how it gets deployed.
But there's something very interesting that happened,
which is to treat all of human endeavors
like a chess or poker game that is winnable.
And so there's been this propagation of games,
winnable games, to be the fundamental substructure
of everything from war to AI.
So our guest today, Sonja Amadae, argues that it doesn't have to be this way,
that game theory misses fundamental aspects of what it means to be human.
She's a professor of political science at the University of Helsinki.
She's also the director at the Centre for the Study of Existential Risk at the University of Cambridge.
She's the author of a book on exactly this topic, Prisoners of Reason: Game Theory and the Neoliberal Economy.
Professor Amadae, welcome to Your Undivided Attention.
I'm delighted to be here. Thank you for the invitation.
Just to sort of lay out the problem, it's that if
I use game theory and you don't, I will out-compete you because I'm acting strategically wisely.
So if you don't know game theory, then you're the sucker. So that sucks everyone into using game theory,
but that changes who we are. You're changing the basis of trust or changing the kind of society
that gets created. And we don't want to live in the society that is purely ruled by game theory.
And that's sort of like the game theory dilemma, if you will.
The dilemma of game theory itself.
So the reason that Aza and I were so interested in doing this episode is if you look around the world,
the world kind of feels like it's being colonized by this cold, strategic logic.
Let's just give a few examples of like where this is showing up across a few different domains.
It struck me in doing research for this episode that game theory can colonize dating.
So pick up artistry is like a game theory version of dating,
where people are making a cold calculus of,
I'm going to say and speak the thing that will get me the outcome that I want,
and I can measure that if I do this action versus this action,
it will lead to this result.
If I'm designing software, like I should be designing software like Aza's dad,
who started the Macintosh project, thinking about what's good for people,
how do I make this really usable, what's going to lead to these really positive outcomes for society?
But then I noticed that there's these other guys that are making software
in a race to hijack human attention,
which means they're racing to hijack human vulnerabilities,
which means that they're actually measuring using A-B testing.
If I design it this way versus this way,
I'll actually get more results.
I'll get more engagement.
I'll get more screen time.
I'll get more people scrolling for longer.
They'll come back more often if I make the button red instead of blue
or if I use a notification or if I highlight that their best friend
or the girl that they've been spying on actually liked their post.
And because they're in this logic of measurement,
game theory colonized software design.
Or then memetics and culture and political campaigns where you have a politician who maybe wants to say something authentic and true for them and meaningful and heartfelt and sincere.
But then they're told by their advisors, no, you can't say that.
We measured the results of these different communications and you should say it this way versus that way.
And what it leads to is this kind of deadening of culture, this deadening of dating, this deadening of relationships, this deadening of software design.
And then now you get to AI where AI is here.
and instead of designing AI in a way
where we focus on designing cures for cancer
for all of us who have loved ones with cancer right now
and really focusing on that
so we can actually get the benefits of that direct outcome
that supposedly this is all for.
We're seeing companies in a race
to scale these crazy,
uncontrollable, inscrutable, powerful intelligences
under the maximum incentives to cut corners on safety.
And so in every way,
game theory has colonized
not just technology and software,
but like more and more of our total world.
And I want people to get this because I think it helps explain,
almost there's a good news to it,
which is what you see out there in the world,
when it feels dead or meaningless or cold or strategic,
that's not authenticity.
That's actually just a world that has been colonized by game theory.
And so what I want to get for this episode is how do we help expose,
how did this logic really take over?
So I think can we tease that out a little bit
just so that people can get a little bit of a flavor of why this is
so critical.
The most basic point would be to look at the original text, which was John von Neumann
and Oskar Morgenstern's Theory of Games and Economic Behavior.
The expected utility theory was part of this technological decision theoretic breakthrough
that allowed social scientists who were using that approach to claim that anything that has
any value at all can be captured by expected utility theory.
Von Neumann thought that all value could actually be monetized, which you could argue about.
But that's the way he thought about it.
He thought that you could put a monetary value on anything by watching people's behavior, seeing what they're willing to pay to have a certain outcome.
Basically, he had that idea.
You could put a monetary value on everything that would motivate people, that would incentivize people.
And the expected utility theory let you do that.
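To make the idea concrete, here is a minimal sketch (not from the episode; the outcomes, probabilities, and dollar values are invented for illustration) of how expected utility theory assigns a single number to any gamble: the probability-weighted average of the utilities of its outcomes.

```python
# Minimal sketch of expected utility: a lottery over outcomes is valued by
# the probability-weighted average of the utilities of its outcomes.

def expected_utility(lottery, utility):
    """lottery: list of (probability, outcome) pairs whose probabilities sum to 1."""
    return sum(p * utility(outcome) for p, outcome in lottery)

# Hypothetical monetized utilities, as if inferred from what someone will pay:
utility = {"win_100": 100.0, "win_0": 0.0, "sure_40": 40.0}.get

coin_flip = [(0.5, "win_100"), (0.5, "win_0")]   # a 50/50 gamble
sure_thing = [(1.0, "sure_40")]                   # a guaranteed $40

print(expected_utility(coin_flip, utility))   # 50.0
print(expected_utility(sure_thing, utility))  # 40.0
# On this accounting the gamble "is worth" more than the sure $40,
# so a strict expected-utility maximizer takes the gamble.
```

The claim Sonja describes is that once every outcome gets a number like this, any human value can be slotted into the same calculus.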
It's probably important to let people know a little bit about von Neumann.
Yeah, who was John von Neumann? He seems like such a pivotal figure.
John von Neumann is, well, first he was operating in quantum thermodynamics.
So he axiomatized quantum theory.
So he's a mathematical prodigy and genius.
He immigrates to the United States prior to the Second World War because it wasn't safe; he had Jewish ancestry.
So he moves to the United States.
And he takes up at Princeton, which then
was the location from where he ended up playing a pivotal role in the Manhattan Project,
which is building the atomic bomb that was then used on Hiroshima and Nagasaki.
During the Second World War, he actually chose the targets of Hiroshima and Nagasaki.
He was on the committee that made those decisions.
So just to quickly tie, let's see if I'm getting this history right,
Von Neumann is trying to understand
how to win at games of chess and poker
he's trying to formalize these sort of parlor games
and to do that he has to make an assumption about human nature
and an assumption about the game being played
which is that you have to win
there is no such thing as cooperation in chess
then that model that he creates
gets picked up and used because he's part of the Manhattan project
to model the quote-unquote game
between all the great powers.
And so now this very dimensionally reduced model
of what humans are, ones where we don't cooperate,
is now the basis for the most important decisions
the world is making.
We've applied a theory of parlor games to nuclear weapons.
Yeah.
Yeah, exactly.
And that's how you end up with a world
where thousands and thousands of nuclear weapons
are built on both sides,
enough to destroy the entire world.
And that is what keeps the world safe,
even though it's safe under the, you know, just hair-trigger,
hairline sort of level of fragility where just one little false step could still end the world.
And yet that was the, quote, rational thing for us to do.
But if you try to escape that logic, like you say, well, we shouldn't build nuclear weapons,
and you come in as a peace activist, and you say we should just dismantle all nuclear weapons.
Well, how do you stop the other guy from doing that?
And you end up where game theory feels inescapable.
If I don't do it, I just will lose to the other one that will.
Yeah, and what you see a lot today, in the way that game theory and the Prisoner's
Dilemma are projected onto these arms races over AI, is asymmetric power.
So the UK security strategy for 2025 is all about asymmetric advantage.
And that is a real change of worldview from a classic liberal, multilateral world,
where we would be hoping for mutual benefit.
And game theory would lead you to conclude there's no other way to come to this solution,
quote-unquote of this situation.
It's non-negotiable, non-navigable.
If I'm the guy that is going to be cooperating,
people will trample me.
I will not survive and propagate.
You're seeing game theory everywhere: it's in public policy,
it's in economics, it's in political science,
it's in nuclear deterrence,
it's in biology, evolutionary game theory.
And the idea in game theory is that
you would only ever say something strategically.
And when you are a game-theory actor, every time that you say anything, it is only what you need to say to get a specific
outcome. So it's deeply embedded in the architecture of our world.
So a moment ago, you heard Sonja
refer to the Prisoner's Dilemma. This is a classic game theory problem showing why two rational
individuals might not cooperate, even when it seems beneficial, and that leads to a worse outcome
for both. It's called the Prisoner's Dilemma because it imagines a scenario where there are two
prisoners from a crime, and they're being interrogated separately, and each one has to decide: do I stay
silent, or do I betray the other? If they both stay silent and say that they didn't do it, then they
both get light sentences. But each is tempted to betray the other and say that the other one did it,
because that way they can go free. But if they both give in to that temptation, then they both end up with
harsher sentences than if they had just cooperated.
In my book, Prisoners of Reason, one of the things
I really struggled with is how do you present the prisoner's dilemma in such a critical way
that when people finish reading the book, they would question the logic of the prisoner's
dilemma. And the whole book is written under that attempt to unlearn it from people,
even though it's teaching the prisoner's dilemma at the same time. So people become critical consumers
of game theory. And it's very, very, very difficult to do that. And then there's this anomaly
about, well, why is it that actual humans don't necessarily follow the logic of game theory,
and especially those that are untutored in game theory,
the ones that haven't been exposed to this logic
or taught it methodically in classes,
they end up being the ones that would probably be more cooperative.
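The Prisoner's Dilemma described above can be written down in a few lines. The sentence lengths below are the standard textbook numbers, not values from the episode; any numbers with the same ordering give the same dilemma. Whatever the other prisoner does, betraying pays better, yet mutual betrayal leaves both worse off than mutual silence.

```python
# The Prisoner's Dilemma as a payoff table (years in prison, lower is better).
# Standard textbook numbers, chosen only to illustrate the ordering.
PAYOFF = {
    ("silent", "silent"): (1, 1),   # both cooperate: light sentences
    ("silent", "betray"): (10, 0),  # the sucker's payoff vs. going free
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),   # mutual defection: harsh sentences
}

def my_years(me, other):
    """My sentence given my move and the other prisoner's move."""
    return PAYOFF[(me, other)][0]

# Betraying is the dominant strategy: it is better for me
# no matter what the other prisoner chooses...
assert my_years("betray", "silent") < my_years("silent", "silent")
assert my_years("betray", "betray") < my_years("silent", "betray")

# ...and yet mutual betrayal is worse for both than mutual silence.
assert my_years("betray", "betray") > my_years("silent", "silent")
```

That last assertion is the whole dilemma: individually "rational" play produces an outcome neither prisoner wants.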
I work in Finland at the University of Helsinki,
and I think it's actually a crime of some kind to teach the prisoner's dilemma,
because the students just cooperate there.
They can't fathom it.
I've done these, not experiments but simulations,
and often it's the foreign students
that would be more prone to be in a scenario
where they would try to take advantage.
And for the Finnish students, they can't,
the logic doesn't make any sense
because Finland is a very high-trust society
and it doesn't run according to this logic
of either game theory or the prisoner's dilemma,
not at the moment anyway.
And is the reason that it would be a crime,
or you feel like it's a crime to teach to the Finland students,
is it because once they learn it,
it even starts to shift some of their thinking and behavior?
Yeah.
Finnish kids, students, they are naturally more cooperative, creating a more trusting society, and to introduce game theory
to them interpersonally means you're changing the basis of trust, changing the kind of
society that gets created. And we don't want to live in a society that is purely ruled by
game theory, by strategic rationality.
Exactly. And that's sort of like the game theory dilemma, if you will.
Once you see it that way, it's almost its own memetic kind of infection. It actually infects everyone else's thinking. And the more people
think in terms of that way, the more people are actually operating from a calculated place,
the more people's speech is calculated, the more they start to out-compete others, and the more
that group starts to out-compete everybody else who's not operating with game theory. So it has
this kind of dominating, totalizing. You can see it like a global virus, like coronavirus,
but it's a game theory virus, sort of colonizing the world and bringing more people into that
mode of reasoning.
So theoretically, actors can actually find some authentic, trustworthy place. Like, there's jokes about, what was it, Esalen was doing hot-tub diplomacy, where you had some of the Soviet nuclear scientists with the Americans. I don't know if they were nuclear folks, but I know there were people that were involved, and there are these jokes about hot-tub diplomacy: you've got to get people in a hot tub, just actually talking to each other as raw human beings, reckoning with what's actually at stake. But to do that, you need this communication. You need authentic communication.
You need, you are a trustworthy actor who's communicating with me honestly about what you actually feel.
And I'm a trustworthy actor who is receiving your communication and communicating honestly in return.
And in a way, the whole problem is trustworthiness.
So when people start to shift from communication that's honest to communication that's calculating,
where the word communication is almost a false idea, we're actually signaling to each other.
So I'm speaking tokens at your brain that I'm calculating and you know that I'm speaking tokens at your brain.
And so then you counterrespond with tokens at my brain.
You see how game theory starts to kind of make the whole world feel inauthentic,
make the whole world feel calculating.
And if we don't do something about it, we end up in this bad outcome.
And that's what nations do, right?
North Korea sends a calculated statement where they use exactly these words, but not these words,
because they're trying to escalate in kind of this tiered signaling regime.
You're bringing up so many important points about the way that
communication is so fundamental, but then also the way that communication itself doesn't get to be
a useful tool in game theory
because it becomes itself colonized by game theory.
And just to build on that a little bit,
the game theory dilemma
is that the world created when everyone
operates on game theory, and when AI, which
perfectly operates on game theory, operates too,
is a world that either
is non-existent or that nobody wants to live in.
And it's by seeing
that that's a world nobody wants to live in,
that we create the opportunity
for choosing something much more human.
And just to sort of double underline
why AI is so central to this conversation,
and we said this in the AI dilemma talk we gave several years ago,
is that AI arms every other arms race.
If there's a military arms race,
AI arms and supercharges the military arms race.
If there's a corporate arms race,
if there's an AB testing, memetic political communication arms race,
AI will arm that arms race too.
And so the reason that we have to reckon with game theory itself
is because AI is like the maximization of game theory logic,
which is its own kind of singularity of just catastrophe.
And so AI is almost like a gift to actually look at the inadequate framework of game theory
because it's already been inadequate, but we keep kicking the can down the road.
But now because it's sort of making every problem that comes from game theory so visible,
we have to reckon with it itself.
So in the search for solutions about how we escape game theory,
it's really important for us to look at the assumptions that game theory makes about human nature,
so we can start finding where there are cracks.
So can you outline what are the assumptions that game theory makes about human nature?
So according to game theory, value has to be scarce.
And since game theory says that everything valuable can be accounted for in its metric accounting system of what is valuable,
then everything that humans would value would need to be scarce.
But if you look at, for example, my favorite, the Maslow pyramid,
where you look at all the different levels of what has value,
and you look at esteem, self-confidence,
all of the higher levels of the Maslow pyramid,
those are usually positive-sum.
If someone gets a good night's sleep, for example,
that usually doesn't take away from somebody else getting a good night's sleep,
and if somebody feels self-esteem, that shouldn't detract from somebody else.
So right away, we're in a world where all of the things that we can put a valuation on
are scarce and we're going to be competing over them.
And actual relationships, friendship, love, family, having children, most of what we value,
I would argue, is actually these positive-sum goods that are never going to even begin to enter into some kind of game theory payoff, right? That's the word: what's the payoff?
Just for listeners, this is Maslow's hierarchy of needs. It's a framework that Abraham Maslow came up with for what are
the different hierarchies of human needs, starting at the base foundational level of, you know,
shelter and sleep and biophysical needs, but going up to these more abstract needs of self-esteem,
and then eventually self-actualization, love, belonging, community. And your point is that
those things are not zero-sum. If I have, you know, esteem. This is why, you know, corporations and
organizations are always, you know, doing appreciation days:
we really appreciated this employee who did this and this and this.
These are ways of doling out more of a fulfilling society that's not zero-sum.
And there's also hearing in there the assumption that only things that can be measured matter
because only then can you reason on them.
So how do you put a number on love or on friendship?
And so then game theory just doesn't have anything to say about it, so it doesn't model it.
No, it's worse.
It will do a Sophie's choice move and say,
No, but you will save one child before the other if there's a fire.
And that's the horrible thing about the way game theory does valuation of what's important to people.
That's what von Neumann would say:
no, you can always put someone in a situation where they'll need to choose.
And when they're making that choice, then you can do that preference architecture of mapping what people's desires are
and maybe now their intentions.
So it's very insidious, because it lifts us out and it constructs a world. If you're
creating institutions according to this logic, you're constantly putting people in situations
where it will feel non-navigable, where they start perceiving and acting in a world
according to that fundamental assumption that anything that's valuable is scarce and competitive.
It's very frightening. It's like a nightmare. It's just like putting ourselves in a nightmare
world and then saying, oh, but you'll never wake up from this nightmare.
I think it's important to note that a world that has been so colonized by game theory,
by what is effective
and what is just Machiavellian,
that world
selects for psychopathy and
Machiavellianism, the Dark Triad characteristics
basically. So Dark Triad being the
narcissism,
Machiavellianism, and
psychopathy, so the inability to empathize with
others, because the better you
are at not empathizing with others, the more you can
act just cold rationally, the better
you'll do at those kinds of cold games.
The more Machiavellian and strategic
your mind is, and you can just reason that way,
the better you'll do at these games.
and the more sort of narcissistic and kind of self-important you are,
the better you'll do at these kind of games.
And so when you look out there in the world
and you say the world looks like it's run by psychopaths,
well, that's because the system, being run more by game theory,
selected for those who would actually be complicit
and not have a problem with playing that perverse game.
And so it takes people that might even start out compassionate, warm,
et cetera, in their lives,
and the ones who continue to play the game
are the ones who don't burn out.
The ones who don't want to do it, they burn out, they do something else.
The ones who do want to keep doing it
are the ones who are capable of becoming
sort of those dark triad folks.
And I want people to know that
that doesn't mean that actually
that's the vast majority of people.
It's actually a small set of people
who've been selected for
and put in the top positions of power.
So you were getting through the assumptions of game theory, and you just gave us the first one.
The assumptions.
The other is this essentialism.
This is not an invention.
This is a discovery.
This idea that we evolved to be
these machines that have to propagate, and the way that you would do that is to be the perfect
strategic actor. So it's an essentializing of this rationality, and then that reinforces that
there's really no alternative. Those of us who might want to be a different way, we will get
suckered. We're going to fall by the wayside, all of those bad things. And then the other
assumption, that we are programmed to be this way, means there is no alternative: you cannot
but be an individual competitor, a strategic competitor, or you'll pay the price for that.
Let's see if I'm getting it right. So the core assumptions are: essentialism, that we're programmed to be strategic competitors, so that "if you're rational, then you do X" becomes prescriptive, not just descriptive; scarcity, that only scarce things have value, hence competition is inevitable; and then the last one, that there's no alternative, that
strategic competition is non-negotiable.
If you don't play the game, you lose.
If you opt out, you lose.
And so if we dive into these core assumptions now,
the assumptions that undergird game theory, that lock in this as the only way
to see the world, how would we explore these assumptions,
or see if they're limited, one by one?
Well, the first one is easy, the value assumption,
because, well, I'm not sure about everyone,
but many people probably do feel that there
are aspects of their lived experience:
if you're spending time with a loved one, or if you're feeling that this person is in some
kind of pain and you have that empathy. I think most of us experience the higher levels of
the Maslow pyramid and know that those are not zero-sum goods. They're inherently positive-sum,
where if one person has self-esteem, it doesn't take away from another person's self-esteem,
not if you're in the advanced top of the Maslow pyramid,
maybe for a narcissist,
if someone else has self-esteem,
you'd want to destroy it,
but not for mature adults
that have evolved to the top of the pyramid.
So that one, I think, is pretty easy to grasp,
and then it's just a question of
how we bring that love, empathy,
and positive-sum goods into our world.
So that would be the next question.
So I have spent a long time thinking about that,
and I think it starts with understanding this logic of the prisoner's dilemma,
because if you're in the world of scarcity, everything is a prisoner's dilemma,
and it really is non-navigable.
But the way out of that, and I think it's so simple,
is that you just ask yourself the question,
if the other guy went ahead and cooperated ahead of me,
do I cooperate or not?
Do you believe my signaling that I was trustworthy?
But if I'm actually not a game theoretic, strategic, rational actor, I will cooperate if the other guy does.
And then what you're trying to build is assurance and trust based on the fact that I am trustworthy.
And we all know if we're trustworthy.
And the trustworthiness just comes down to, do I cooperate if the other person does?
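The "I cooperate if the other person does" rule Sonja describes is what game theory itself calls conditional cooperation, tit-for-tat in the iterated Prisoner's Dilemma. A small simulation, using the standard payoff numbers (assumed here, not given in the episode; higher is better), shows how repeated play with a trustworthy partner changes the calculus:

```python
# Iterated Prisoner's Dilemma sketch. Standard payoffs, higher is better:
# mutual cooperation 3, mutual defection 1, lone defector 5, sucker 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_history):
    """Cooperate first, then mirror whatever the partner did last."""
    return partner_history[-1] if partner_history else "C"

def always_defect(partner_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's history
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): trust compounds
print(play(tit_for_tat, always_defect))    # (9, 14): exploited once, then protected
print(play(always_defect, always_defect))  # (10, 10): mutual defection stagnates
```

Two conditional cooperators end up far better off than two defectors, which is one way to see why trustworthiness is not merely naive.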
And then you've broken out of the prisoner's dilemma, and you're starting to think about value in ways where it expands into two major concepts.
One is solidarity, where you
feel that solidarity with a common cause
with other people, and you'll fight for a cause.
And we know, look at Tiananmen Square in China,
that video that lives on
in all of our minds of the man standing in front of the tank.
why? Why did he do that?
He was not strategically rational.
But the people that were protesting over and again in history
like in the Gandhi peace movement,
they had the solidarity, which meant that
They had this way of connecting and working together that was very powerful.
They stepped outside the logic of all this was inevitable.
There's nothing that we can do.
And they did something that broke out of it.
And they were trustworthy.
And they somehow, the actions that they did,
tapped into something in the collective consciousness
that broke through and popped out of some of the container somehow.
Yeah.
And a lot of work in game theory has been to say that is irrational,
that if you are able to work with solidarity,
that that's evil, that is communist,
that it can only happen if there's some kind of a dictator
that's incentivizing people and controlling them,
that it's not natural for people to have solidarity
in terms of some kind of a connection and a common cause.
And the other thing is commitment.
And commitment basically means that if you promise something,
you go through with it.
Finland, for example, is such a high-trust society
that if you give your word on something,
then that is who you are,
stepping entirely out of the world of game theory and saying, I will carry through on my promise
no matter what. I mean, so banal, right? Keeping one's word, how did we lose that? It's fundamental
to civil society. How did we lose the idea that that's just a
fundamental choice for being a moral agent in a political economy? That's just baffling.
We have to combat that, and it's very subtle and simple, but we have to believe what we say.
And believing what we say
sounds so trivial,
but it's actually pretty difficult,
because how many times
do we just say whatever it takes
just to get some outcome,
versus believing what we're actually saying?
And that's a basic duty
of being a citizen in society:
stating what we believe,
and then trying to make our statements true.
So those are three pretty basic antidotes
that we're all able to put into action.
So let's just talk about how this all connects to the AI arms race.
RAND, the same nonprofit defense think tank that has been involved in research
in nuclear game theory and deterrence, et cetera,
has also been doing research on the military and strategic implications of AI since the 1950s.
And AI was framed exactly like nukes: an existential technology requiring strategic dominance,
where fear drives the race, and game theory legitimizes the fear.
If anything, game theory got even more powerful inside of the reasoning about AI
because AI is unique in the fact that it can create step functions in my knowledge of physics
or step function in my knowledge of math or step functions in my knowledge of energy production.
And those step functions at any of those scientific domains
could create a step function in military domains or a step function in industrial domains
where if suddenly you can produce energy an order of magnitude more cheaply than me,
or produce all goods an order of magnitude more cheaply than me,
or suddenly produce an infinite supply of weapons in a way that I don't have,
because AI is a race to arm every other arms race
and a race to these step functions,
it actually favors this kind of race to an asymmetric advantage,
which then becomes the policy,
which then becomes the kind of we shouldn't do anything to regulate
or set guardrails on this at all.
And it's why you have currently in the United States
a proposal for a federal preemption on AI,
meaning we don't want any states to regulate AI.
We're going to stop and actively prohibit regulation at the state level, because we need a no-holds-barred race to asymmetric advantages in every sector.
Yeah. And then the AI is programmed to be a strategic rational actor because rationality is this thing that is game theory.
When you put those two together, we interpret that there has to be this AI arms race: the U.S. wants total strategic dominance in AI for that exact reason, that it's going to give an advantage from which there's no coming back.
Once the U.S. dominates in AI, it's escalatory in the sense that the AI will keep feeding back that logic for being rational.
And then the human makers of policy will say, but we need asymmetric advantage.
And that's like the ultimate winning of this paradigm, the paradigm won.
And then it is harder, because you and I can take those easy steps: knowing there's value beyond scarce value.
We can be trustworthy.
We can believe what we say.
and we can cooperate with others and form groups.
But how do we break out of that in the highest-level policy environment,
especially when you see that the people that are in that environment
have been trained for years in this way of thinking?
So how do you redo this,
especially since the AI is going to be amplifying that set of beliefs?
That's, I think, where we are right now.
And I think that's quite a predicament.
This reminds me also of an example that I think we might have mentioned
on this podcast before, of how you break out of this trap.
It's not a perfect mapping, but take the world of relationship vicious spirals.
Two people are in a relationship and they're in a vicious spiral: one starts criticizing
the other, and the only way the other knows how to respond is, well, you criticized me, so,
tit for tat, I'm going to criticize you. Well, did you know that you left the dishes out, or
you did this bad thing? And then you end up in a downward spiral where both parties
don't feel good at the end of the day, and they're left with a kind of collective relationship
commons between them that is degraded by the fact that they both openly criticized each other.
And if you're operating in that paradigm, it might seem like that's the only thing that could
have happened. Clearly, that person criticized me; that's the only route we could
have gone from there.
have gone from there. And then you have Marshall Rosenberg come along, the inventor of nonviolent
communication, who says, you know, actually it might appear that way, but it turns out there's
this other communication move, I don't want to call it a strategy because that makes it sound calculated
and game-theoretic, but you basically respond with what it felt like to receive or hear
that. When you said this, I noticed I felt that. Yeah. And you just start with that because I'm sharing
what the effect of what you just said was and what it did to me, but in sharing what I feel because of
it, now the other person's empathizing with the impact of their actions. So it's creating
connection at a higher dimension than the sort of value metric of who's winning the war of that
communication exercise. And in a certain way, you can think of that as a kind of creative move that
up until Marshall Rosenberg, maybe people had that in some other languages and other tribes,
you know, throughout history.
But Marshall Rosenberg kind of put a new move onto the menu of human relationship communication dynamics.
And, Aza, you've talked about how, just like there was Move 37 from AlphaGo,
the Go-playing AI that Google DeepMind built: when it beat the world champion Go player, it came up
with a new move that no human had ever played, called Move 37.
And imagine if you had AIs that are simulating the way that this could go, and that actually
can discover Move 37s that are positive-sum, that look for cooperative dynamics where everyone
was convinced there's no other move, there's definitely no better way to do this. And I think,
whether it's Move 37 for relationships or, as you've talked about, for treaties:
what would Move 37 for treaties look like? AlphaTreaty. Maybe there are ways that AI can
be a tool in searching for positive-sum games in a world where it looks like we're locked in
zero-sum games. And that brings to my mind
what I think is both of our favorite work in AI alignment, which is about self-other overlap.
Because a lot of what you're saying here in nonviolent communication is that you are internalizing the effect of your words on someone else.
It becomes part of you.
There's mirror neurons.
And in self-other overlap, this research is very interesting.
They train an AI not to be able to distinguish the difference between I and you, self and other.
So that sentences like "you stole because your family needed food" and "I stole because my family
needed food" become sort of the same, because I is equal to you.
I think it's really interesting that AI has been programmed to use the personal pronoun I,
when we can wonder whether it has the embodiment of a human communicator.
And actually, with some of my colleagues, I put out the idea that maybe if we'd never let AI use a personal pronoun,
then at least we could have disambiguated it, if that had been a hard-and-fast regulation.
And my two colleagues thought that that actually would have helped us not be where we are.
But if we are trying to solve the alignment problem and we don't really care if the AI refers to itself as I or not,
then it does seem that it might be possible to program it to not have that barrier or distinction.
But that would be a bit of an experiment.
And it's been tried.
Well, if we're really going to solve alignment that way,
and we just cast it loose,
it would be interesting to see what happens writ large.
But I still think there's worries about language changing
and whether language is a strategic signaling game
and how would language function between I and you
if we dissolve that barrier.
But if language is still strategic, then I think
we'd want to not look at language,
or treat language, or experience language,
as a means of control.
And I think this is so important with AI
because up until recently, ever since ChatGPT launched, we have prompted AI.
But what's changing in 2025 and certainly 2026 is that AI prompts us.
And so AI is A-B testing.
We've had politicians and marketers trying to figure out what is the most effective language.
And they have a small surface area over our lives.
But AI is increasingly in relationship with major portions of the population.
I think, what is it, like one in eight human adults are now in some kind of communication relationship with AI.
And so AI can search through the entire space of language to find the most effective ways to manipulate us.
And that is a kind of threat that humanity has never had to deal with.
Yeah, and you had that sentence in the main video on your website,
where you talk about how language is now the fundamental unifier under all of these different domains that AI has been unleashed on.
And because language is how we socially construct the world, we're letting AI take control of this profound tool of social world construction, with whatever logic is programmed into how it uses language.
And it does have the ability to just totally dissolve our social reality
if we don't find a way to control it.
I thought that was probably the most profound of many profound moments
in your AI Dilemma conversation.
The real main thing we've been exploring here is whether in kind of AI
creating the zenithification of the game theory logic,
is there a way out of that?
And then I'm kind of curious about the ability to kind of
have this be kind of a jubilee, a break,
the kind of maximization of game theory
leading to this desire to change game theory,
to wake up from a kind of single-cellular,
narrowly self-interested logic that dominates the world
into this kind of multicellular, collaborative logic,
in which we can feel the fear of all of us losing
more strongly than the fear of a world where I lose to you.
But in order for that to be true,
the way in which all of us lose has to be extraordinarily clear, trustworthily communicated,
and received by every agent who is in charge of making decisions about the way this goes.
I have three thoughts.
One, we have a lot of freedom of choice, and that starts with being trustworthy, and that
starts with if the other guy cooperates, I will.
If the other guy doesn't cooperate, I'm not going to cooperate, but if the other guy cooperates,
I will.
So there is freedom of choice that we have fundamentally as agents.
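The reciprocity rule Sonja describes, cooperate first and then mirror the other player's last move, is the classic tit-for-tat strategy from iterated prisoner's dilemma tournaments. A minimal sketch, using the standard textbook payoff values as an illustrative assumption (they are not from the conversation):

```python
# Tit-for-tat in an iterated prisoner's dilemma.
# Payoffs (illustrative textbook values): both cooperate -> 3 each;
# both defect -> 1 each; lone defector -> 5, exploited cooperator -> 0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the other's history
        move_b = strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players lock into mutual cooperation.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# Against a pure defector, tit-for-tat is exploited only on the first round.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The point of the sketch is the one made above: the strategy is conditionally cooperative, so trustworthiness costs almost nothing against a defector but unlocks the cooperative payoff against anyone willing to reciprocate.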
Then I was thinking about the nuclear movie The Day After.
The movie Sonia is referring to here is called The Day After.
It's a 1983 movie that depicts the brutal aftermath of a full-scale nuclear conflict between the U.S. and Russia.
It was seen by millions of Americans.
In fact, it was the most watched television film in history.
And it was screened for President Reagan and the Joint Chiefs of Staff.
Reagan later said that the film actually changed his mind on U.S. nuclear war strategy,
and it encouraged him to pursue de-escalation with the Soviet Union.
Maybe the point there is to create a Hollywood blockbuster
that would be that for this moment, one that builds up the idea that
we can undermine those assumptions, and that we have that individual freedom, outside of the AI world,
to have that sort of wake-up moment.
And then the third thing would be, I don't know about the major programming parties at the AI companies.
You guys are probably way more in touch with those people.
But there is no reason that we would need to be stuck with this orthodox strategic form of rationality.
I don't know if the DeepMind scientists' approach is radical enough.
But yeah, why are we stuck with a prisoner's dilemma, prisoner-of-reason
type of approach to strategic rationality? Wouldn't it be possible to instantiate a different kind?
I mean, I think that if people could be, I don't like the word educated, but if there could be
some kind of participatory environment where leaders are exposed to alternative ways of thinking,
carefully thought through in the way that you guys generate content.
But those three things together: making people feel that they can opt out at an individual
level and that they have the tools, even while knowing it is hard to opt out; something
like a collective imaginative event that captures this moment; and then to just go back
to the foundations and realize we have so many alternatives. There's so much goodwill, and there are
so many alternative realities and constructions of where we could be to draw from. So I guess I leave
this conversation optimistic, at least thinking that those three things, and some others, can take us in a
better direction. One of the things, just to summarize, that I think The Day
After did was that it made the cost of defection negative infinity. It became existential, so now
cooperation becomes the rational thing to do. And I think the point of this conversation is to say
that with AI, game theory becomes destiny, and that destiny is a thing nobody wants. That also has
negative infinity. And so if we can all see that, and see it clearly, that means cooperation does
become the rational thing.
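The "negative infinity" point can be stated in plain game-theory terms: if mutual defection is catastrophic enough, defecting stops being the safe choice. A toy sketch, judging each move by its worst case (maximin); all payoff numbers are illustrative assumptions, not from the episode:

```python
# If mutual defection is merely bad, defection looks safe.
# If mutual defection is existential (The Day After's message), it no longer does.

def best_response(my_options, payoff):
    """Pick the move whose worst-case payoff is highest (maximin)."""
    return max(my_options, key=lambda mine: min(payoff[(mine, theirs)]
                                                for theirs in my_options))

CATASTROPHE = float("-inf")

# Row player's payoffs: (my_move, their_move) -> my payoff.
arms_race = {                       # ordinary dilemma
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
nuclear = {                         # mutual defection is existential
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): CATASTROPHE,
}

print(best_response("CD", arms_race))  # 'D': worst case 1 beats worst case 0
print(best_response("CD", nuclear))    # 'C': any chance of mutual defection is unacceptable
```

Making the catastrophic outcome vivid, which is what the film did, is exactly what flips the bottom-right cell and, with it, the "rational" choice.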
Clarity, we say in our work, creates agency.
And if we have clarity about the current destination being an outcome that no one wants,
we can choose something else.
And, you know, it's a difficult picture.
It is probably the hardest problem that humanity has ever faced,
certainly the hardest coordination problem that we've ever faced.
And yet through this whole conversation I'm reminded of a quote I was pointed to recently,
from Luis Alvarez, the winner of the 1968 Nobel Prize in
physics, and perhaps the greatest experimental physicist of the century, who remarked that the advocates
of these sorts of game-theoretic schemes were, quote, "very bright guys, no common sense."
There's this kind of over-intellectualization, where highly intelligent people build elaborate
abstract models. They trust their mathematical formalism too much, but they ignore obvious
real-world constraints, incentives, human behaviors, and deeper truths of human nature,
inside of which may lie the answer to snapping ourselves out of this cold mathematical logic.
And so maybe, since we're appealing to the high-credibility gods here, the inspiring figures of history,
it's as if Einstein is pointing us at the higher level of consciousness we need to be operating from
to snap out of the lower-level consciousness of the pure mathematical logic of game theory.
Well said.
I wanted to just call back:
there were, as I understand it, two competing schools
that came post-Darwin to interpret Darwin.
One says it's just brutal competition.
The other says it's about mutual aid
and cooperation. I think Darwin was the first
person to ask, where do
the noble traits come from?
Like, altruism and, like,
heroism, and like, where does it
come from? And we have an episode
with David Sloan Wilson, who worked
closely with the sociobiologist,
E.O. Wilson. And they have this, like,
wonderful phrase that sums it all up.
It's why the selfish gene is sort of wrong.
It misses this, which is: selfish individuals do out-compete altruistic individuals,
but groups of altruistic people out-compete groups of selfish people, and everything else is
commentary.
And game theory misses these kinds of noble traits that come from groups operating together,
because noble traits are about giving
something up for a greater whole.
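The two-level claim here, selfish members beat altruists within a group, yet groups of altruists beat groups of selfish, can be shown with a toy multilevel-selection model. All numbers are illustrative assumptions: each altruist pays a personal cost to create a benefit shared by the whole group.

```python
# Toy multilevel-selection model (illustrative numbers, not from the episode).
# Each altruist pays cost C to add benefit B shared equally by the group.
B, C = 5.0, 1.0  # shared benefit per altruist, and the altruist's personal cost

def individual_payoff(is_altruist, n_altruists, group_size):
    shared = B * n_altruists / group_size   # everyone receives the shared benefit
    return shared - (C if is_altruist else 0.0)

def group_output(n_altruists, group_size):
    """Total payoff produced by the whole group."""
    return group_size * (B * n_altruists / group_size) - C * n_altruists

# Within one mixed group, the selfish member always does better...
selfish_score  = individual_payoff(False, n_altruists=4, group_size=10)  # 2.0
altruist_score = individual_payoff(True,  n_altruists=4, group_size=10)  # 1.0

# ...yet a group of altruists vastly out-produces a group of selfish members.
all_altruists = group_output(10, 10)  # 40.0
all_selfish   = group_output(0, 10)   #  0.0
```

Both inequalities hold whenever the shared benefit exceeds the personal cost (B > C), which is the condition under which, as the conversation puts it, "everything else is commentary."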
Yeah, it's team reasoning.
And with team reasoning, you break entirely out of game theory.
And really, that's where we are on the planet now, right?
I mean, we have to figure out a way to cooperate rather quickly.
We've already been colonized by institutions operating on the game-theoretic logic.
But once AI is building those institutions and changing language
and changing what's normal,
toward ever higher bars of strategic competition,
if we don't find a way to derail from that,
it's going to be pretty desperate.
But knowing it's an option and we can be trustworthy
and we can believe what we say
and we can have value that's not scarce,
maybe just that's an inner light
that just starts to create a possible different imagining.
If we can start to believe that there is an alternative possibility,
then maybe that's the first step.
With some very minimal building blocks,
maybe we can start to create other social patterns,
and not lose hope, knowing we don't need to be
the strategic, cutthroat actors.
Sonja, thank you so much for coming on Your Undivided Attention.
It's been, I really think,
one of the most important, completely under-the-radar conversations
that needs to happen.
Yeah, absolutely.
Thank you, Sonja, so much for coming on Your Undivided Attention.
We're so grateful to have you.
And your work with your book, Prisoners of Reason, is just so illuminating to highlight this for everybody.
So thank you so much for writing it and for coming on.
I'm delighted.
Really nice to meet you both.
Your undivided attention is produced by the Center for Humane Technology, a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer, and our executive producer is Sasha Fegan, mixing on this episode by Jeff Suddakin.
Original music by Ryan and Hayes Holiday
and a special thanks to the whole
Center for Humane Technology team for making this
podcast possible. You can
find show notes, transcripts,
and so much more at HumaneTech.com.
And if you like the podcast, we would be
grateful if you could rate it on Apple Podcasts.
It helps others find the show.
And if you made it all the way here,
thank you for your
undivided attention.
