Your Undivided Attention - A Problem Well-Stated is Half-Solved — with Daniel Schmachtenberger
Episode Date: June 25, 2021

We’ve explored many different problems on Your Undivided Attention — addiction, disinformation, polarization, climate change, and more. But what if many of these problems are actually symptoms of the same meta-problem, or meta-crisis? And what if a key leverage point for intervening in this meta-crisis is improving our collective capacity to problem-solve?

Our guest Daniel Schmachtenberger guides us through his vision for a new form of global coordination to help us address our global existential challenges. Daniel is a founding member of the Consilience Project, aimed at facilitating new forms of collective intelligence and governance to strengthen open societies. He's also a friend and mentor of Tristan Harris. This insight-packed episode introduces key frames we look forward to using in future episodes. For this reason, we highly encourage you to listen to this edited version along with the unedited version.

We also invite you to join Daniel and Tristan at our Podcast Club! It will be on Friday, July 9th from 2-3:30pm PDT / 5-6:30pm EDT. Check here for details.
Transcript
Inventor and engineer Charles Kettering once said,
A problem well stated is a problem half-solved,
while a problem not well-stated is unsolvable.
Here on Your Undivided Attention, we explore various different problems,
addiction, disinformation, polarization, climate change, and more.
But what if many of these problems are actually symptoms of the same meta-problem,
or meta-crisis?
And what if a key leverage point for intervening in this meta-crisis
is improving our collective capacity to solve problems?
What if stating the problem in this way, as a problem with our problem-solving,
makes it, as Charles Kettering said, half-solved?
Your Undivided Attention is back, and we just turned two years old.
And here with us to explore how we might, let's say, solve the problem with problem solving,
is my dear friend and mentor, Daniel Schmachtenberger.
Daniel is focused on the ways of improving the health and development of individuals and society
for the purpose of creating a more virtuous relationship between the two.
He's a founding member of the Consilience Project, aimed at improving public sensemaking and dialogue.
And with Daniel's episode, we're going to be doing something a little different.
We're releasing two versions of the episode, an edited version, along with an unedited version.
And I highly encourage you to listen to both
so that you can learn some new frames
that we're going to start using on this show.
And then come to our podcast club.
Daniel and I will actually be in dialogue with each other
and with you.
The podcast club will be on Friday, July 9th.
Details are in the show notes.
And with that, here we go.
So Daniel, welcome to Your Undivided Attention.
Thank you, Tristan.
I've been looking forward to us dialoguing
about these things publicly for a while.
So, Daniel, maybe we should just start with what is the metacrisis and why are these problems seemingly not getting solved, whether it's climate change or anything that we really care about right now?
I think a lot of people have the general sense that there is an increasing number of possibly catastrophic issues, whether we're talking about future pandemic-related issues, or climate change, or climate change as a forcing function
for human migration that then causes resource wars and political instability, or the fragility
of the highly interconnected, globalized world where a problem in one part of the world can
create supply chain issues that create problems all around the world. There's a sense that
there's an increasing number of catastrophic risks and that they're increasing faster than we are
solving them. And take the UN: while progress has been made in certain defined areas of the
Sustainable Development Goals, and progress was made back when they were called the Millennium
Development Goals, we're very far from anything like a comprehensive solution to any of them.
We're not even on track for something that is converging towards a comprehensive solution.
We still haven't succeeded at nuclear disarmament. We did some very limited nuclear disarmament
success while doing nuclear arms races at the same time. And we went from two countries with
nukes to more countries with better nukes. And the major tragedy of the commons issues like
climate change and overfishing and dead zones in the oceans and microplastics in the oceans
and biodiversity loss. We haven't been able to solve those either. And so rather than just think
about this as like an overwhelming number of totally separate issues, the question of why are the
patterns of human behavior as we increase our total technological capacity, why are they
increasing catastrophic risk and why are we not solving them well? Are there underlying patterns
that we could think of as generator functions of the catastrophic risk,
generator functions of our inability to solve them,
that if we were to identify those and work at that level,
we could solve all of the expressions or symptoms.
And if we don't work at that level, we might not be able to solve any of them.
The first one I noticed when I was a kid,
trying to solve an elephant-poaching issue
in one particular region of Africa,
didn't address the poverty of the people
who had no mechanism other than the black market of poaching. It didn't address people's mindset toward
animals. It didn't address a macroeconomy that created poverty at scale. So when the laws were put in place
and the fences were put in place to protect those elephants in that area better, the poachers
moved to poaching other animals, particularly in that situation rhinos and gorillas, that were
both more endangered than the elephants had been. So you moved a problem from one area to another,
and actually a more sensitive area. And we see this with, well, can we solve hunger
by bringing commercial agriculture to parts of the world
that don't have it, so that people don't either go without food
or need us to ship them food?
But if it's commercial agriculture based on
the kind of environmentally unsustainable
agricultural processes that lead to huge amounts
of nitrogen runoff going into river deltas,
causing dead zones in the ocean
that can actually collapse the biosphere's capacity
to support life, then we're solving
a short-term issue that's important
while driving even worse long-term issues.
You get the idea.
Over and over again, the way that we solve short-term problems
may create other problems on balance sheets
that we don't discover until later.
This is similar to the problem of Facebook's fact-checking program.
Fact-checking seems like a solution
to the problem of fake news,
but it actually can cause more polarization,
because in a world that's already been divided
by Facebook's personalization rabbit holes,
showing people fact checks can actually just drive up
more polarization and disagreement.
In the case that you, at the Center for Humane Technology, brought so much attention to, with regard to the
attention harvesting and directing economy, it's fair to say that it probably was not Facebook or
Google's goal to create the type of effects that they had. Those were unintended externalities.
They were second order effects. But they were trying to solve problems, right? Like, let's solve
the problem from Google of organizing the world's information and making better search. That seems
like a pretty good thing to do. And let's recognize that only if we get a lot of data
will our machine learning get better. And so we need to actually get everybody on this thing. So we
definitely have to make it free. Well, then the nature of the ad model, doing time on site
optimization, ends up appealing to people's existing biases rather than correcting their
bias, appealing to their tribal in-group identities rather than correcting them and appealing to
limbic hijacks rather than helping people transcend them. And as a result, you end up actually
breaking the social solidarity and epistemic capacity necessary for democracy.
So let's define a few terms here.
When Daniel talks about limbic hijack, he's referring to the way technology is hijacking
our limbic system or our paleolithic emotions and brains in order to drive clicks and
behavior.
And when he says epistemic capacity, he's referring to, and this is something that's really
important that we're going to keep using on Your Undivided Attention, he's referring
to epistemology, which means how we know what we know. So instead of talking just about
fighting fake news, we can talk about better epistemology, better sensemaking for how we know
what we know. And Daniel's concerned about how the social media platforms are breaking the epistemic
capacity necessary for democracy. It's like, oh, let's solve the search problem. That seems like a
good thing. The side effect is we're going to destroy democracy and open societies in the process
and all those other things. Like those are examples of solving a problem in a way that is externalizing
harm, causing other problems that are oftentimes worse. So I would say that the way we're trying
to solve the problems is actually mostly impossible. It either solves it in a very narrow
way while externalizing harm and causing worse problems, or makes it impossible to solve at all because
it drives polarization. And so going to the level at which the problems interconnect where
that which everybody cares about is being factored and where you're not externalizing other
problems, while it seems more complex, is actually possible.
And what makes it possible is understanding the underlying drivers, the generator functions
of existential risk, of which Daniel says there are three.
The first generator function of existential risk is rivalrous dynamics, and it expresses
itself in two primary ways: arms races and tragedy
of the commons. The tragedy of the commons scenario is: if we don't overfish that area of
virgin ocean, but we can't control that someone else doesn't, because how do we do enforcement
if they're also a nuclear country? That's a tricky thing, right? How do you do enforcement on
nuclear-equipped countries? So us not doing it doesn't mean that the fish don't all get
taken. It just means that they grow their populations and their GDP faster, which they will use
rivalrously. So we might as well do it. In fact, we might as well race to do it faster than they do.
Those are the tragedy of the commons type issues. The arms race version is if we can't ensure that
they don't build AI weapons or they don't build surveillance tech and they get increased near-term
power from doing so, we just have to race to get there before them. That's the arms race type thing.
It just happens to be that while that makes sense for each agent on their own in the short term,
it creates global dynamics for the whole in the long term that self-terminate
because you can't run exponential externality on a finite planet.
That's the tragedy of the commons one.
And you can't run exponential arms races and exponential conflict on a finite planet.
So that's the first generator function of existential risk,
which is rivalrous dynamics.
And we see rivalrous dynamics everywhere, over and over again, on Your Undivided Attention.
If I don't go after the attention of those preteen social media users
and you do, then you'll win and I'll lose.
If I don't seize the dopamine reward system to build an addiction into my app,
and you do, then you'll win and I'll lose.
And if I don't use negative campaign ads to win an election to make you hate the other side,
then you'll win and I'll lose.
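The individual logic behind each of these examples is the classic two-player trap Daniel describes: defecting dominates no matter what the other side does, yet mutual defection is worse for everyone. A toy sketch, with payoff numbers invented purely for illustration:

```python
# Toy payoff matrix for the multipolar-trap logic described above:
# whatever the other actor does, "defect" (race / exploit the commons)
# pays more for each individual, yet mutual defection is worse for both
# than mutual restraint. All numbers are illustrative.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("restrain", "restrain"): (3, 3),   # commons preserved for both
    ("restrain", "defect"):   (0, 5),   # I restrain; they take the fish anyway
    ("defect",   "restrain"): (5, 0),
    ("defect",   "defect"):   (1, 1),   # commons collapses for everyone
}

def best_response(their_move):
    """My individually rational move, given what the other actor does."""
    return max(["restrain", "defect"],
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection dominates: it is the best response to either choice...
assert best_response("restrain") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection leaves both actors worse off than mutual restraint.
assert PAYOFFS[("defect", "defect")][0] < PAYOFFS[("restrain", "restrain")][0]
```

The same structure fits each example in the episode: swap "restrain/defect" for "don't target preteens / target them", or "don't run attack ads / run them", and the dominance argument is unchanged.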
These rivalrous dynamics bring us to the second generator function of existential risk,
which is the subsuming of our substrate.
These are the substrates, or the environments, that make human
civilization possible in the first place. Environmental degradation from overfishing, attention
degradation from apps that are competing for our attention, or social trust degradation from
politicians competing to make us outraged. And the rivalrous dynamics of runaway capitalism
erode the substrate that all of our civilization depends on.
And the third generator function of existential risk is exponential technology, or technology that
grows and improves exponentially. So you can think of that like the rivalry between two people
with stone clubs, to the rivalry between two people with semi-automatic weapons, to two actors
with nuclear bombs that can blow up the whole world instantaneously. Think about a rivalry
between two advertisers who are putting up a single billboard in the city that can influence
about 100 people, to a rivalry between two agents using Facebook's global ability to influence
3 billion people with millions of A/B tests and precision-guided micro-targeting.
The greater the exponential power of the technology, the more exponential risk is created.
So these are the three generator functions of existential risk.
Rivalrous dynamics, the subsuming of the substrate or playing field,
and exponentially growing power and technology.
Daniel says that any civilization that doesn't address these three generator functions
will inexorably self-terminate.
Not great news.
So let's take a step back.
How did we get here?
How did we get to this level of unmanaged global existential risk?
Before World War II, catastrophic risk was actually a real part of people's experience.
It was just always local.
An individual kingdom might face existential risk in a war that they would lose.
So catastrophic risk has been a real thing. It's just been local. And it wasn't until World War II that we had enough technological power that catastrophic risk became a global possibility for the first time ever. And this is a really important thing to get because the world before World War II and the world after was different in kind so fundamentally because the wars were fundamentally winnable, at least for some, right? They weren't winnable for all the people who died, but at least for some. And with World War II and
the development of the bomb came the beginning of wars that were no longer winnable, where if we
employed our full tech and continued the arms race even beyond the existing tech, it's a war
where win-lose becomes omni-lose-lose at that particular level of power. And so that created
the need to do something that humanity had never done, which was that the major superpowers didn't
war. The whole history of the world, the history of the thing we call civilization, they always did.
And so we made an entire world system, a globalized world system, with the aim of preventing
World War III. So the post-World War II, Bretton Woods, Mutually Assured Destruction, United Nations
World was a solution to be able to steward that level of tech without destroying ourselves.
And it really was a reorganization of the world. And it was predicated on a few things.
Mutually assured destruction was critical. Globalization and economic trade were critical:
if the computer that we're talking on and the phone that we talk on are made across six continents
and no country can make them on its own, we don't want to blow each other up and ruin each other's
infrastructure, because we depend upon it. So let's create radical economic interdependence
so we have more economic incentive to cooperate. That was kind of like the basis of that whole
world system. And we can see that we've had wars, but they've been proxy wars and cold wars.
They haven't been major superpower wars and they've been unconventional ones. But we haven't had
a kinetic World War III. Now we're
at a point where that radically positive-sum economy that required an exponential growth of the
economy, which means of the materials economy, and it's a linear materials economy that unrenewably
takes resources from the Earth faster than they can reproduce themselves and turns them
into waste faster than they can process themselves, has led to the planetary boundaries issue
where it's not just climate change or overfishing or dead zones in the ocean or microplastics
or species extinction or peak phosphorus, it's a hundred things, right?
Like there's all these planetary boundaries, so we can't keep doing exponential linear materials
economy. And then the mutually assured destruction thing doesn't work anymore because we don't
have two countries with one catastrophe weapon that's really, really hard to make and easy to monitor
because there's not that many places that have uranium, it's hard to enrich it, you can monitor
it by satellites. We have lots of countries with nukes, but we also have lots of new catastrophe
weapons that are not hard to make, that are not easy to monitor, that don't even take nation states
to make them. So if you have many, many actors of different kinds with many different types of
catastrophe weapons, how do you do mutually assured destruction? You can't do it the same way.
And so what we find is that the set of solutions post-World War II that kept us from blowing
ourselves up with our new power lasted for a while, but that set of solutions has ended,
and it has now created its own set of new problems.
So there is catastrophic risk before World War II, which was locally existential,
and then there was catastrophic risk from World War II to now, which was globally existential,
but managed by what Daniel might call the Bretton Woods Order,
which includes the Bretton Woods Agreements, the United Nations, and Mutually Assured Destruction.
But in Daniel's eyes, the Bretton Woods order is no longer up to the task.
The UN has 17 Sustainable Development Goals.
There's really one that must supersede them all, which is develop the capacity for global
coordination that can solve global problems.
If you get that one, you get all the other ones.
If you don't get that one, you don't get any of the other ones.
That becomes the central imperative for the world at this time.
So in the vacuum of what Daniel sees as a failure of our institutions to do global coordination
well, what are we left with?
How are we responding to these unmanaged existential risks caused by exponential technology?
Daniel sees two bad attractors that we're currently getting pulled towards, and those attractors
are oppression and chaos.
Oppression looks like China's digital authoritarianism model, ruled by the state from above.
So we're going to have quantum computing, AI, godlike technology that psychologically influences
billions of people, but it's managed by the state and limits the freedom of citizens.
Or we can have chaos, instantiated by the West's democratic dysfunction, where exponential
technologies aren't really managed at all because social media has deranged our society
to be maximally addicted, distracted, outraged, polarized, and misinformed until people don't
know what's true at all.
So how do we manage global existential risk without devolving into oppression or chaos?
What could a new attractor be?
I think it was a Jefferson quote: the ultimate depository of the power must be the people.
And if we think the people too uneducated and unenlightened to be able to hold that power,
we must do everything we can to seek to educate and enlighten them,
not think that there is any other safe depository.
One of the core things is the relationship between rights and responsibilities.
So if I have rights and I don't have responsibilities, there ends up being tyranny and
entitlement. If I have responsibilities and I don't have any attendant rights, that's servitude.
Neither of those involves a healthy, just society. So if I want the right to drive a car,
the responsibility to do the driver's education and actually learn how to drive a car safely is
important. And we can see that some countries have fewer car accidents than others, associated
with better driver's education. And so increasing the responsibility is a good thing.
We can see that some countries have way less gun violence than others, even factoring a similar
per capita amount of guns based on more training associated with guns and mental health
and things like that.
So if I have a right to bear arms, do I also have a responsibility to be part of a well-organized
militia, train with them, and be willing to actually sacrifice myself to protect the whole
or sign up for a thing to do that?
Do I have to be a reservist of some kind?
Those are the right-responsibility pairings.
If I want the right to vote, is there a responsibility to be educated about
the issue? Yes. Now, does that make it very unequal? No, because the capacity to get
educated has to be something that the society invests in making possible for everyone. And of course,
we would all be silly not to be dubious, factoring in the previous history of these things.
We should be very dubious, given the historical use of education to suppress the black vote.
But Daniel's saying we should design systems to enable people to be maximally informed
and maximally participate in their own governance.
So how do we make the on-ramps to learning available for everyone, not enforced but actually
incentivized? Can we use those same kind of social media behavior-incenting technologies to
increase everyone's desire for more rights and attendant responsibilities so that there's actually
a gradient of civic virtue and civic engagement? Yeah, we could totally do that.
So this new attractor is nothing short of a kind of new cultural enlightenment, which sounds
ambitious, I know. Our last enlightenment was a shift from superstition, myth, and irrationality
to logic, science, and rationality, and in pursuit of new ideals like liberty, tolerance,
and representative government. The new cultural enlightenment would be a shift from a culture
that manages risk through oppression, or that doesn't manage risk at all because it's fallen into
chaos, to a culture that has the emergent wisdom to manage exponential technologies, a cultural
enlightenment that is supported by humane technology.
How do we utilize the new exponential technologies, the whole suite of them, to build new systems
of collective intelligence, new better systems of social technology? How do you make a fourth
estate that can really adequately educate everyone in a post-Facebook world? So let's say we take
the attention tech that you've looked at so much that when it is applied for a commercial
application is seeking to gather data to both maximize time on site and maximize engagement
with certain kinds of ads and whatever. That's obviously the ability to direct human behavior
and direct human feeling and thought. Could that same tech be used educationally to be able
to personalize education to the learning style of a kid or to an adult to their particular
areas of interest, and to be able to not use the ability to
control them for game-theoretic purposes, but use the ability to influence them to even
help them learn, to make their own center, their locus of action, more internalized, right?
We could teach people with that kind of tech how to notice their own bias, how to notice their
own emotional behaviors, how to notice group think type dynamics, how to understand propaganda,
media literacy. So could we actually use those tools to increase people's immune system
against bad actors' use of those tools? Totally. Could we use them pedagogically in general to be
able to identify, rather than manufacturing desires in people, or appealing to the lowest angels
of their nature because addiction is profitable? Can you appeal to the highest angels in people's
nature, but that are aligned with intrinsic incentives and be able to create customized educational
programs that are based on what each person is actually innately, intrinsically motivated by,
but that are their higher innate motivators? Could we do that? Yeah, totally we could. Could we have an
education system as a result that was identifying innate aptitudes, innate interests of everyone and
facilitating their development? So not only did they become good at something, but they became
increasingly more intrinsically motivated, fascinated, and passionate by life, which also meant
continuously better at the thing. Well, in a world of increasing technological automation coming
up, both robotic and AI automation, where so many of the jobs are about to be obsoleted,
our economy and our education system have to radically change to deal with that, because one of the
core things an economy has been trying to do forever was deal with the need that a society had
for a labor force. And there were these jobs that society needed to get done that nobody would
really want to do. So either the state has to force them to do it, or you have to make it to where
the people also need the job. So there's a symmetry, and so kind of the market forces them to do it.
So if one of the fundamental like axioms of all of our economic theories is that we need to
figure out how to incent a labor force to do things that nobody wants to do, and emerging technological
automation starts to debase that, that means we have to rethink economics from scratch, because
we don't have to do that thing anymore. So maybe if now the jobs don't need the people,
can we remake a new economic system where the people don't need the jobs?
What is the role of humans in a post-AI robotic automation world?
Because that is coming very, very soon.
And what is the future of education where you don't have to prepare people to be things that you can just program computers to be?
Well, the role of education has to be based on what is the role of people in that world.
That is such a deep redesign of civilization because the tech is changing the possibility set that deeply.
So at the heart of this are kind of deep existential questions of what is a meaningful human life
and then what is a good civilization that increases the possibility space of that for everybody
and how do we design that thing.
So what Daniel's saying, and as previous guest Yuval Harari pointed out,
is that the new technology forces us to reimagine our previous social systems.
Within the context of personalized AI that can tune educational experiences,
what is the new education?
Within the context of automation of most tasks, what is work?
Within the context of a post-Facebook digital age, what is the fourth estate?
China is answering these questions, but for the purpose of a digital closed society.
Daniel's encouraging us to answer these questions for the purpose of a digital open society,
with examples like what Audrey Tang in Taiwan and others are already doing.
But before we go on, let's take a step back and have some humility here.
We don't know all the answers about how this is all going to work.
But what we do know is that the question of what would make social media
slightly less bad or less harmful is not adequate to answering the question of existential risks
caused by the three generator functions that Daniel has outlined.
Humane technology must be supporting the capacity of culture
to have the wisdom to steward exponential tech amidst rivalrous dynamics.
And in that spirit, how might we use technology in a way that enables people to meaningfully
participate in their own governance and to have that culture become the new attractor that can
manage global existential risk?
What if all government spending was on a blockchain?
And it doesn't have to be a blockchain.
It just has to be an incorruptible ledger of some kind.
Holochain is a good example that is pioneering another way of doing it. But with an incorruptible
ledger of some kind, you actually see where all taxpayer money goes and you see how it is
utilized. The entire thing can have independent auditing agencies, and the public can transparently
be engaged in the auditing of it. And if the government is going to privately contract a
corporation, the corporation agrees that if they want that government money, the blockchain
accounting has to extend into the corporation. So there can't be, you know, very, very bloated
corruption. Everybody got to see that when Elon made SpaceX, all of a sudden he was making rockets for like
a hundredth of the price that Lockheed or Boeing were, who had just had these almost
monopolistic government contracts for a long time. Well, if the taxpayer money is going to the
government, is going to an external private contractor who's making the things for 100 to a thousand
times more than it costs, we get this false dichotomy sold to us that either we have to pay more
taxes to have better national security or if we want to cut taxes, we're going to have less
national security. What about just having less gruesome bloat,
because you have better accounting, and we have better national security and better social services
and less taxes? Everyone would vote for that, right? Who wouldn't vote for that thing? Well, that wasn't
possible before incorruptible ledgers. Now, that incorruptible ledger also means you can have provenance
on supply chains to make the supply chains closed-loop, so that you can see that all the new stuff
is being made from old stuff and you can see where all the pollution is going and you can see
who did it, which means you can now internalize the externalities rigorously. And nobody can
destroy those emails or burn those files, right?
What if the changes in law and the decision-making processes also followed a blockchain process
where there was a provenance on the input of information? Well, that would also be a very
meaningful thing to be able to follow. So this is an example of, like, can we actually structurally
remove the capacity for corruption, with technology that makes corruption much, much, much harder,
that forces types of transparency and auditability?
What if also you're able to record history?
You're able to record the events that are occurring in a blockchain
that's incorruptible where you can't change history later.
So you actually get the possibility of real justice and real history
and multiple different simultaneous timelines that are happening.
That's humongous in terms of what it does.
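The tamper-evidence property Daniel is describing can be sketched in a few lines: each ledger entry commits to the hash of the entry before it, so rewriting history anywhere invalidates every hash that follows. This shows only the chaining idea — real systems such as Bitcoin or Holochain add consensus and replication on top, and the records below are invented:

```python
import hashlib
import json

def append(ledger, record):
    """Append a record that commits to the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    ledger.append({"record": record, "prev": prev_hash,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger):
    """Recompute every hash; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append(ledger, {"payer": "treasury", "payee": "contractor_a", "usd": 1_000_000})
append(ledger, {"payer": "contractor_a", "payee": "sub_b", "usd": 400_000})
assert verify(ledger)

ledger[0]["record"]["usd"] = 10_000_000   # try to "burn the files"
assert not verify(ledger)                  # tampering is detectable
```

Because every later hash depends on every earlier entry, "destroying the emails" is no longer a quiet act: the edit is visible to anyone who re-verifies the chain.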
What if you can have an open data platform and an open science platform
where someone doesn't get to cherry pick,
which data they include in their peer-reviewed paper later, we get to see all of the data that was
happening. We solve the oracle issues that are associated. And then if we find out that a particular
piece of science was wrong later, we can see downstream everything that used that output as an
input and automatically flag what things need to change. That's so powerful.
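The downstream-flagging idea can be sketched as a walk over a provenance graph: if every result records which results it used as inputs, a retraction propagates automatically. The study names below are invented for illustration:

```python
from collections import defaultdict, deque

# result -> list of results that used it as an input
uses = defaultdict(list)

def record_derivation(inputs, output):
    """Record that `output` was derived from each result in `inputs`."""
    for inp in inputs:
        uses[inp].append(output)

# A hypothetical chain of derived results:
record_derivation(["raw_trial_data"], "effect_size_estimate")
record_derivation(["effect_size_estimate"], "meta_analysis_2021")
record_derivation(["meta_analysis_2021", "other_study"], "policy_recommendation")

def flag_downstream(retracted):
    """Breadth-first walk of everything that transitively used a retracted result."""
    flagged, queue = set(), deque([retracted])
    while queue:
        node = queue.popleft()
        for dependent in uses[node]:
            if dependent not in flagged:
                flagged.add(dependent)
                queue.append(dependent)
    return flagged

# Retracting the raw data flags every result built on it, however indirectly.
assert flag_downstream("raw_trial_data") == {
    "effect_size_estimate", "meta_analysis_2021", "policy_recommendation"}
```

The hard part in practice is getting researchers to record the input-to-output links at all; once they exist, the propagation itself is this simple.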
Let's take AI. Well, with AI, we can make super terrible deep fakes and destroy the epistemic
commons, you know, using that and other things like that. But we can see the way that the AI
makes the deep fake by being able to take enough different images of the person's face and movements
that it can generate new ones. We can see where it can generate totally new faces, averaging faces
together. Somebody sent me some new work that they were just doing on this the other day that I found
very interesting. They said, we're going to take a very similar type of tech and apply it to
semantic fields where we can take everybody's sentiment on a topic and actually generate a proposition
that is at the semantic center.
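One minimal way to sketch this "semantic center" idea: embed each stated view as a vector, average the vectors, and surface the candidate proposition closest to that average. The three-dimensional vectors and propositions below are stand-ins for real sentence embeddings, which is where the actual work would be:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_center(candidates):
    """candidates: {proposition: embedding}. Return the one nearest the mean."""
    vecs = list(candidates.values())
    dim = len(vecs[0])
    center = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return max(candidates, key=lambda name: cosine(candidates[name], center))

# Invented views with toy embeddings; the middle position sits nearest the mean.
views = {
    "ban the technology outright":    [1.0, 0.0, 0.0],
    "allow it with strong oversight": [0.5, 0.5, 0.2],
    "leave it entirely unregulated":  [0.0, 1.0, 0.0],
}
print(semantic_center(views))  # the proposition closest to the group's center
```

With toy numbers the answer is obvious, but the same mechanics scale to thousands of participants once real embeddings replace the hand-written vectors.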
Then can you have digital processes
where, though you can't fit everybody into a town hall,
everybody who wants to can participate
in a digital space that, rather than voting yes or no
on a proposition that was made by a special interest group,
where we didn't have a say in the proposition
or even the values it was seeking to serve,
starts by identifying what are the values
everybody cares about?
And then we say the first proposition
that meets all these values well
becomes the thing that we vote on.
These completely change the possibility space of social technology.
And we could go on and on in terms of examples.
But these are ways that the same type of new emergent physical tech that can destroy
the epistemic commons, erode democracies, and create catastrophic risks could also be used
to realize a much more protopian world.
So I love so many of those examples, especially the blockchain and corruption one,
because I think something that the left and the right can both agree on is that our systems are not
really functional, and there's definitely corruption and defection going on.
And just to add to your example: imagine if citizens could even earn money by spotting
inefficiencies or corruption in that transparent ledger, so that we have a system
that profits by getting more and more efficient over time, better
serving the needs of the people, and having less and less corruption.
And so there's more trust and faith. When you compare, let's say,
China's closed, digitally authoritarian society
with this open one that's operating more for the people, with more transparency
and more efficiency, that's just an inspiring vision.
What's also very inspiring is what Daniel's building, the Consilience Project.
This conversation you and I are having is very central to the aims of the Consilience Project,
which is we're wanting to inspire, inform, and help direct an innovation zeitgeist,
where the many different problems of the world start to get seen in terms of having
interconnectivity and underlying drivers. And so we have a really great team of people that are
doing research and writing, basically the types of things we're talking about here in more
depth: explaining the role of the various social systems. Like, what is the role of
education in any society? Help understand fundamentally what that is; understand why there's a
particularly higher educational threshold for open societies, where people need to participate
not just in the market but in governance;
understand how that has been disrupted by the emerging tech
and will be disrupted further by things like technological automation.
And then envision: what is a future of education
adequate to an open society, in a world that has the technology that's emerging?
And the same thing with the fourth estate,
the same thing with law, the same thing with economics.
And so the goal is not: how do we take some small group of people to build the future?
It's: how do we help get clear on what the criteria of a viable future must be?
And if people disagree, awesome: publicly disagree and have the conversation now.
If we get to put out those design constraints, and someone says, no, we think it's other ones,
at least now the culture starts to be thinking about the most pressing issues in fundamental ways,
and how to think about them appropriately and how to approach them appropriately.
So fundamentally, our goal is supporting an increased cultural understanding of the nature of the problems that we face,
a clearer understanding rather than just there's lots of problems and it's overwhelming and it's a bummer.
And so either some very narrow action on some very narrow part of it makes sense or just nihilism.
We want to be able to say actually because there are underlying drivers, there is actually a possibility to resolve these things.
It does require the fullness of our capacity applied to it.
And with the fullness of our capacity, so it's not a given.
But with the fullness of our capacity applied to it, there is actually a path forward.
I think what CHT did with The Social Dilemma took one really critical part of this meta-crisis
into popular attention, maybe in a more powerful way than I have seen done otherwise.
Because as big a deal as getting climate change into public attention is,
it's not clear that climate change is driving the underlying basis of all the problems.
But a breakdown in sense-making, and the control of patterns of human behavior in ways
that kind of downgrade people? Oh, wow, that really does make all these other things worse.
So I see that as a very powerful and personal on-ramp for those who are interested
to be able to come into this deeper conversation.
And some people say, I can actually start innovating and working with this stuff.
Yeah, I think what we've essentially been outlining here is the Charles Kettering quote,
which I learned from you, and I've learned so many things from you over the years,
which is that a problem not fully understood is unsolvable and a problem that is fully understood
is half solved. And I just want to maybe leave our listeners with that, which is I think people can
look at the long litany of problems and feel overwhelmed or get to despair in a hurry, I think is your
phrase for it. And I think that when you understand the core generator functions for what is
driving so many of these problems to happen simultaneously, there's a different and more empowering
relationship to that. And you've actually offered a vision for how technology can be consciously
employed, these new technologies can be consciously employed, in ways that should feel inspiring and
exciting. I mean, I want that transparent blockchain budget for every country in the world.
And we can see examples like Estonia and Taiwan moving in this direction already. And we can see
Taiwan building some of the technologies you mentioned to identify propositions of shared values
between citizens who want to vote collectively on something that previously would have driven up
more polarization. I think we need to see this as not just an upgrade, but the kind
of cultural enlightenment that you speak of, that so many different actors are, in a sense, already
working on. We used to have this phrase: everyone is on the same team, they just don't know it yet.
I'll just speak to my own experience. When I first encountered your work, and I encountered the kind
of core drivers that drive so much of the danger that we are headed towards, I immediately,
well, I was kind of already in this direction, but I reoriented my whole life to say: how do we be in
service of this not happening, and of creating a better world that actually meets and addresses
these problems? And I just hope that our audience takes this as an inspiration for how we can, in
the face of stark and difficult realities, as part of this process, gain the kind of cultural
strength to face these things head on and to orient our lives accordingly.
If you take the actual risk seriously, it should reorient your life.
Yeah. That's how I genuinely feel. Me too.
Daniel Schmachtenberger is a founding member of the Consilience Project, online at
consilienceproject.org. You can find that link in the show notes, along with more information
about our podcast club with Daniel on July 9th. Your undivided attention is produced by the
Center for Humane Technology. Our executive producer is Stephanie Lepp. Our senior producer is Natalie
Jones and our associate producer is Noor Al-Samarrai. Dan Kedmi is our editor-at-large,
original music and sound design by Ryan and Hays Holladay, along with David Sestoy,
and a very special thanks to the whole Center for Humane Technology team for making this podcast
possible. A very special thanks goes to our generous lead supporters, including the Omidyar Network,
Craig Newmark Philanthropies, the Evolve Foundation, and the Patrick J. McGovern Foundation,
among many others. I'm Tristan Harris, and if you made it all the way here,
let me just give one more thank you to you for giving us your undivided attention.