Modern Wisdom - #348 - Daniel Schmachtenberger - Building Better Sensemaking
Episode Date: July 22, 2021
Daniel Schmachtenberger is a founding member of The Consilience Project and works in preventing global catastrophic risk. Having accurate sensemaking is a superpower in the 21st century. As the volume of information we need to sort through increases, the ability to distinguish signal from noise becomes ever more important. Given this, I wanted to ask Daniel exactly how he would advise someone to become an adept sensemaker. Expect to learn the characteristics that a good sensemaking agent should have, why the relationship between sense, meaning and choice making is so crucial, whether Daniel thinks that humanity is too emotional to reach our full potential, at what stage of personal actualisation we should begin to help the world and much more...
Sponsors: Get 20% discount & free shipping on your Lawnmower 4.0 at https://www.manscaped.com/ (use code MODERNWISDOM) Get 83% discount & 3 months free from Surfshark VPN at https://surfshark.deals/MODERNWISDOM (use code MODERNWISDOM)
Extra Stuff: Check out The Consilience Project - https://consilienceproject.org/ Check out Daniel's Website - https://civilizationemerging.com/ Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom
Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hello friends, welcome back to the show.
My guest today is Daniel Schmachtenberger.
He's a founding member of the Consilience Project and works in preventing global catastrophic
risk.
Having accurate sense-making is a superpower in the 21st century, as the volume of information
we need to sort through increases, the ability to distinguish signal from noise becomes
ever more important.
Given this, I wanted to ask Daniel exactly how he would advise someone to become an adept
sense maker.
So, today, expect to learn the characteristics that a good sense-making agent should have,
why the relationship between sense, meaning, and choice-making is so crucial, whether
Daniel thinks that humanity is too emotional to reach our full potential, at what stage
of personal actualization we should
begin to help the world, and much more.
If you're not familiar with Daniel, then today is a little bit of a change of pace.
It's a very considered, very patient conversation.
We're talking about incredibly deep, difficult topics here.
What does it mean to communicate and coordinate in a modern era when we have ancient programming? How should we contribute to the world at large in the best way that we can?
These aren't simple questions to answer, and to be honest, after I have a conversation like this,
I feel moderately exhausted afterward, but it's incredibly worthwhile. I love talking to Daniel,
I love having the limits of my cognitive capacity pushed. So make sure that you've got your blood caffeine levels optimally balanced, because today is a big one.
In other news, the Modern Wisdom Reading List is finally completed.
It's over 10,000 words, it's taken me maybe six months to write, and it features 100 books
that everybody should read before they die. Everyone's been asking for it for ages,
and it is finally completed. It's off with the designer making sure that it looks beautiful
and optimized, ready for me to send it to you. It'll be live within the next couple of weeks,
so stay tuned and I will tell you where and when you can pick up your copy.
But now it's time for the wise and very wonderful Daniel Schmachtenberger. What I think people really liked about our first conversation was that we brought some of your work down
to an individual level.
So a friend referred to it as creating a narrative of resonance.
And given that, I thought it would be nice to start by looking
at something that you talk about a lot, which is sense-making,
but from the level of the actor.
So how do you define a sense-making agent?
Well, I don't know what background the other people
listening to your show will have on the general topic when you're mentioning individual agents.
I think you mean individual humans. Obviously, an organization can act like an agent, like a unit of agency. But I think you mean individuals seeking to make sense of the world
they live in better. We oftentimes talk about sense-making at a societal level, and currently
it comes up a lot: why do we have such a hard time coming to clear understanding
about what the nature
of climate change is, or the nature of COVID viral origins, or vaccines, or systemic
racism, or how we should deal with nuclear disarmament? Or why does it seem like there are such radically
divergent views, meaning the way that we're sensing the world is leading to very different
senses of the world, which of course leads to very different senses of what should happen,
which makes it very hard to coordinate, which makes it very easy to have conflict.
And so when we're talking about sense-making, we're usually talking about it in the context of shared sense-making
as a prerequisite for shared choice-making, i.e. governance. Sense-making is not the only prerequisite.
When we talk about governance, and by governance
I don't mean government, which is maybe a specific establishment that has a rule of law
and monopoly of violence, but governance meaning some process by which a bunch of people
who want different things and see the world differently come to coordinate in some effective,
positive, productive ways.
So we're talking about how do we get people to have some kind of coherence, a coordination between the choices they make, such that people aren't making choices
that we would think of as crime, or something that really messes up each other's
choice-making capacity, and where we need to coordinate on choices,
like we're not going to all make our own roads and things like that, that we're able to coordinate effectively regarding shared resources, shared
infrastructure, shared choices. It happens to be that lots of humans who don't know each other
and have experienced the world differently and feel different things and want different things
coordinating on what the right choices are is a tricky thing, right? Because they have a different sense of what is and what they want and what should be. So this is why for most of human history,
the number of people that would coordinate was small tribes; they classically stayed,
you know, smaller than the Dunbar number, give or take 150-ish, where there were no strangers.
Everybody that you were coordinating with, you had known your whole life, they had known you your whole
life, everybody had the same shared basis of experience, and everybody could
be in a single conversation around a campfire.
And so the ability to coordinate, we could coordinate sense-making because we were sensing
the same stuff.
We were living in the same place, right?
We weren't even reading different books.
We weren't even watching different TV shows.
We weren't speaking different languages.
We were exposed to the same stuff, and so you could also fact-check anybody just by
looking, right, just by having such a shared basis. And it was pretty easy
to unify values, because the culture that had conditioned them was the same
culture, so there might be little differences of weighting. And so the ability to,
you know, unify choice-making, if we have shared values and we have a shared sense of the world,
so both what we think is and what we want to be, it's not that hard. Once we started to get to
larger scales, where now I've got to maybe make compromises for strangers, where, you know,
we're going to have some coordination with people who I don't have any shared basis of
real feeling with, and where they really do see the world differently, that becomes a different topic.
And so this is where mostly the order came through some kind of imposition or oppression or top down force, which is why it was largely empires.
And then that still meant a number of people smaller than Dunbar, the king and a council,
making the decisions and imposing it by rule of law and force on everybody, right?
So shared sense-making didn't matter,
because people didn't really have meaningful choices. They were gonna do the thing that they were gonna do within that context.
So the idea of something like democracy or a republic or an open society where some humongous number of people
who don't know each other,
who don't have the same experiences
are all going to not just do totally different stuff
that creates chaos, but also not need somebody to kind of rule.
They're gonna find order, it isn't imposed,
it's emergent order, that's actually a wild idea, right?
Like it's a really fucking wild idea
that that would even be possible.
And the modern democracies came following the enlightenment,
the cultural kind of European enlightenment with the idea
that we could have this thing called the philosophy of science,
where we could all measure the same thing
and give the same result independent of biases.
Didn't matter what we thought beforehand,
if we measure the speed of sound in the right way or whatever it is,
we're going to get the same results.
So there's this unifying nature of objectivity that allows us to sense-make together,
which is why Karl Popper,
who advanced the philosophy of science,
was the guy who coined the term 'open society',
that we could do open societies based on the ability
to do shared sense-making, using a methodological approach
rather than an 'I had divine revelation,
and it's true, and you don't know' kind of approach.
But like we said, the idea of governance
is that there's some kind of emergent choice making
or order at the level of the choices we're making.
The choices are both the result of our sense making.
What do we think is actually happening?
What do we think the causes of what's happening are?
And if we do X, what do we think will happen?
That's kind of forecasting sense making.
But it's also what do we want to happen, which is our values, which is not sense-making.
Sense-making is sensing what is; values are what ought, what we think ought to be, what we really care about.
So we can call that values generation, or meaning-making.
So sense-making and meaning-making are the prerequisites
for choice-making.
The thing that we call governance in an open society
is that there's some coordinated process for choice-making
that doesn't have to be imposed by a king.
Doesn't just turn into there's no way we can get on the same page so it has to be chaos
because we can sense the world together and we can sense each other's values
and find a higher order set of values that includes everyone.
So this is another part of the enlightenment was the idea that we could do a dialectic on values.
You could say I really believe in doing X, whatever X proposition is.
And we're like, why do you want to do it?
Well, because it's in service of decreasing infant mortality.
And the value that you have is infants.
And we're like, yeah, but if you do that thing, it'll be bad for this other thing,
because it whatever, it'll damage the water supply.
So what you care about is the water supply.
Well, let's not focus on the proposition for a moment.
What you value is children, what you value is the water supply.
Let's hold all those as legitimate values.
What you value is individual freedom.
What you value is the responsibility of the individuals
to the collective that they are benefited by.
So the ability to hear each other's values and synthesize
and say a good solution will meet everybody's values as best as possible.
So often we get stuck with a proposition that's created to meet some value before even looking at what all the values are.
And so it benefits the environment but hurts the economy, or benefits the economy and hurts the environment, or whatever it is.
So those who feel particularly connected to the thing being hurt are like, this is terrible,
we have to do everything to fight it.
And those who feel connected to the thing
that's being benefited are like this is critical.
Someone finally gets us.
Now those two sides have to become enemies.
If the only chance they have is to vote
on a pre-existing proposition that is a shitty proposition
because it was based on a theory of trade-offs
between those values.
It was never even consciously explicated.
They never even said, oh, this is going to harm this thing.
Both of these are values.
Can we take these values and find a better way forward,
a better proposition that maybe could meet them both better?
Maybe rather than that bridge that is going
to harm the environment the way that it is,
but helps transportation, which will help the economy, a barge could do it without harming the environment, or we could just build better local economies
on both sides, or whatever it is.
So the dialectic process is where I want to hear the values that you care about.
And so you believe everyone needs to be vaccinated or no one should be forced to be vaccinated,
or everyone should have to wear a mask or nobody should, what is the value you care about,
independent of the strategy?
The strategy is the way to fulfill the value.
There is something legitimate in the value,
and the sense-making question, is that thing
about vaccines or about masks
or about whatever true,
is separate from the question, is that value legitimate, right?
And so, we don't have participatory governance
in the US; we don't really anywhere in the world,
in any very meaningful way.
We have the legacy story of it, but we don't have, hardly anywhere, a population that is really seeking to understand the world we live in where the government is going to make choices on stuff, and that understands enough to be able to weigh in well, for the government to be informed by the people.
And we seek to understand our own values
and other people's values and be able to have
the dialectical conversations to see
if we're missing some sense making.
Somebody knows some stuff we don't,
and we really want to hear it,
rather than have our in-group continue to feel right
by saying how dumb the people in the out-group are.
So there isn't anything like participatory governance, which is why open societies are basically
failing and doing shoddily, while the authoritarian societies that aren't even claiming to do that,
and are just doing top-down government, are doing better at long-term planning and infrastructure.
So, you know, when people hear me talking about sense-making, usually it's in this context of how do we develop better
capacity as a society as a whole for everyone to be doing a
better job making sense of the world than just believing
whatever happens to come through the Facebook feed that is
algorithmically optimized to appeal to their current biases,
and kind of limbically hijack them maximally,
bother them, and drive in-group dynamics, where people's fear of being out-grouped
by believing the wrong thing
messes with subtle, deep tribal biases.
How can we do a better job with sense-making?
At the level of training individuals,
at the level of how we change education and train people,
at the level of the quality of media we put out,
and at the level of how we
design the information architecture, so that rather than a Facebook or YouTube having an algorithm to
maximize time on site, which it does through appealing to your biases, which makes you spend more time.
Which makes people on the right more right, the left more left, anti-conspiracy-theorists more hating of
conspiracy theorists, and conspiracy theorists farther down that direction.
And so there's this just hyper fragmentation
as a result of the financial model
of this information technology, right?
So how do we make better information technology?
How do we make better media and a better fourth estate?
How do we do better education?
All those things so that we can actually have better sense-making
about what is real, better dialogue and communication around what is meaningful, or the
values, so that those types of conversations can lead to what would a good
proposition even be, one that factors all of what's real as constraints,
factors what matters as constraints, and works to find the best proposition
forward. So if we take as the background, that's the societal context of where we're usually coming
from and talking about the needs for sense-making, that was a long preface to then say, you want
to bring it to the level of the individual and say, all right, so I'm not trying to fix
Facebook's algorithms for sense-making right now.
I'm not trying to necessarily fix participatory governance or democracy or the fourth estate or public education.
I'm trying to say, how do I, as a person, do a better job of making sense of the things that I should make sense of,
that affect the choices I need to make in the world?
What things should I just actually not bother myself with, because I really don't have a choice and it's not the best use of my life energy? What things should I, or do I care to, and how
do I do a good job? And how do I know if I'm doing a good job? Because almost everyone
that we're sure is wrong is sure they're right. And so we are, to someone else, one of the everyone that
they're pretty sure is wrong, right? And so we should all be pretty dubious of our own
certainty. Because statistically, we're almost certainly wrong about most of the things that we're sure we're right about.
And that's dangerous, right? Like it's dangerous that I'm clear about lots of things I think other people are wrong about.
And I'm clear about a lot of things I think people in the past were wrong about.
And I'm clear about things that I was wrong about in my past.
But I probably can't point to anything that I believe today and say it's probably wrong.
That's tricky, because it's probably mostly wrong.
And so how do we, now, when ego gets tied up in that, right?
And then when belonging gets tied up in that, if I don't
say the right narrative about masks or about vaccines or about social justice,
I'll be totally out-grouped, right?
Because now I'm an anti-vaxxer or a sheep or whatever it is.
So there's a lot of reasons to have everybody double down on their worst traits of unwarranted certainty and sanctimony.
So if we want to ask the question, how important is it to our own life to develop our sense-making?
How do we know how well we're doing with it? How do we do it? We can get into that more.
Absolutely. One of the things that I've got in my head there is
how much more considered and slow decision-making at the governance level
would have to be in order to factor in all of these different values and choices.
Now, expediency is something that people value.
If the bridge needs to be built and people think that it's going to make their life better,
but in order to factor in everybody's different values, it's going to take
two years of debating and planning and all of the rest of it.
And that situation of consideration, of being moderate and more nuanced with your thinking,
that happens at the individual level as well, right?
It's far easier to just react, take something that we think is the closest approximation
of correct and just move forward.
No, this is a gibberish argument.
So let's say we don't have clear sense making on a topic, but we have to act.
Why do we have to act? Are there
real consequences? Or is it just made-up bullshit, like an election cycle or whatever, that we
could change in how we do governance? If there's real consequences, how consequential
is it to get it wrong? Well, it might be more consequential than taking more time, right?
It depends. There are times we have to make consequential choices
under uncertainty.
Where not choosing fast enough is also a choice.
That's a real thing.
But it's that way less often than we pretend that it is.
And so then what is the consequence of getting it wrong
and doing something that might be much more harmful?
But also, what is the time effect of moving forward
with something,
because we just have to move forward, that a huge part of the world thinks is wrong and bad
and is going to actively keep fighting?
Like how efficient does that end up being?
They're all going to pay lobbyists.
They're all going to help pay for academics to sponsor counter narratives.
They're going to pay for politicians' candidacy processes,
whatever it is, so how fast does it really end up being
to try to advance something that half the world thinks
is a terrible idea?
And so what you can see is we try that.
We're like, no, the science is settled.
That's a famous bullshit line on a million things, right?
And you'll see it on both sides of all kinds of things. The science is settled is just a nice way to say my unwarranted certainty is true. But, the science is settled, climate change is the
thing it is, we've got to move forward, and we don't have time to educate you dumb fucks anymore
about it. And so, you know, we're going to carbon trade,
this is the way to do it, or cap and trade, carbon tax, whatever.
Okay, well, how well does that work when all of the groups are going to keep lobbying against it
and getting Republican candidates elected who will then try to undo the laws in four years?
You spend four years trying to do the shit, knowing that the next four
years will undo all of it. And actually, you never even plan on doing something that won't
have returns within those four years, because it won't possibly get you elected. And all
the things that need doing need to have 10, 20, 30, 50 year timelines. And no one will ever
even look at it. And most of the time, you're actually just working on getting political
support and campaigning to be reelected.
So the expediency argument, we just have to move forward, is usually a bullshit argument for someone in a power position: moving forward in a way that will advance their power position, with plausible deniability that it's something else.
If you want to move a civilization forward, either you're moving forward in a way that everybody's getting on board with,
or you're deciding to use force to suppress the fact that everybody's not on board, or you're
deciding to keep fighting the fact that they also have force. It's like, you just have to be realistic
about that. It's interesting, the option of delaying a choice is also a choice. There's always
a third option of being more considered. Yeah, I like that.
Okay, so back down to the individual, how can
someone become an adept sense-making agent? We're at the mercy of certain things that individually,
immediately, we can't control, therefore making the most of the capacities that we do have
is a good idea.
I would not say the answer for this is the same for everybody; it depends on
what they feel called to, their dispositions, their
vocation, and their kind of sense of what their mission is.
If someone is a nurse caring for patients, or someone is a mother raising children, how much does them understanding what's really happening with the digital yuan, and whether it's
going to become the reserve currency of the world or whether or not the US microgrids are
susceptible to EMP attacks?
Like, how much does their sense making matter?
How much agency do they have to do fucking anything
about that?
Pretty little.
How much does it likely stress them out?
Probably a lot.
Does that make them a better nurse or mom?
Probably worse.
Could that same energy be applied
to doing better sense-making about tuning into their children
and their patients better,
where they actually have some agency?
So, when we recognize that sense-making is to inform choice-making, right?
Do I have choice around this thing?
Like, what is the basis of why I'm wanting to do sense-making?
Is it simply because I need to know which side of the narrative war I'm on, because I think I have to be on one of the sides?
What if I just don't? What if I say, I don't know? I don't know what I
think about systemic racism. No, you have to know, but I don't. Well, you know, and people
can try to do a forcing function, and then you're complicit or whatever it is. Okay, well,
I can put a huge amount of time and energy in and then still not actually have any real agency to move this thing forward, given where I am in my life; or I can put that energy into studying better nursing
or whatever it is, right? So I don't think that the idea that, like, everyone should be deeply
informed about all of the existential risks and understand the entire effect of the tech stack
and globalization
and planetary boundaries and geopolitics is like a thing that everyone should have.
I don't think that's true.
So the first question is, like, what matters for me to make sense about, based on what
choices I actually have to make in my life?
That's an important question.
Because it's easy to get sucked into the thing for somewhat unconscious reasons.
Now we can talk about how to do good sense making
on geopolitical and environmental
and complex scientific topics there.
But the first part is make sure that the reason
that you feel called to do that makes sense.
And I'm of course not saying that if you don't have a company or an organization or a role,
and you're not a politician, in some way that can directly affect that thing, you shouldn't know anything
about it. There is something about general informedness as a citizen that can have value,
but you do want to pay attention to
like, I have finite units of life energy, and where do I want to put my attention that
is also connected to my creativity?
So I want people's sense-making to be informing their creativity, right, to be informing their
agency and their choice-making and the quality of life for themselves and the people they touch and for the world at large as they can touch it.
Now,
how to actually do good epistemology we can get into next, but does that part make sense?
Yeah, absolutely.
I'm interested to hear what the underlying principles are. Presumably, there must be a structure, or some commonalities, shared by all sense-making agents,
whether they be the nurse, the government official, the creator, or the mother.
Are there commonalities in how to do good sense making in any domain, regardless of the
domain?
Sure.
They're going to be different.
There are certain places where it's like, how to really get this particular
kind of backflip is not something I get from reading Wikipedia.
Like I only get it from trying to do backflips.
It's an embodied sense-making. There's like, oh, it clicked and I got it. And there's no amount of reading Wikipedia
or watching YouTube that's ever going to give it to me. So there isn't
like one type of sense-making. Like there's no amount of reading music theory that will actually
get my fingers to grok how to play Chopin. So there are different kinds of creative capacities
that require different kinds of sense-making,
because you're sensing how something works, right?
Sense making is not purely cognitive.
It's taking your senses and having a pattern emerge,
sense making of like, ah, I got it.
And we've all had that experience playing the piano,
or trying to do the backflip, where it's like,
I got it, that sense-making of a type.
That's a bunch of sensory perceptions
that came into a pattern where now I have it
in a way that can inform my creativity.
But I'll stick in the cognitive domain for now,
since that's largely what I think
most people were talking about.
Some super helpful basic tips.
If I'm trying to make sense of a topic that is conflicted,
where the public opinion on it, or even the scientific opinion,
or whatever on it is highly conflicted,
I should understand the conflicting views
before coming to one on my own.
That's a very helpful thing to do.
So you were mentioning,
how do I do better sense-making in nursing?
Well, let's say I'm a nurse and it's COVID time
and there's, like, major conflict
over whether Ivermectin works or not.
And should we be doing this with people,
and who do we think
actually has too much contraindication for these vaccines, or whatever it is. Those
are places where a nurse would actually maybe want to do some sense making and they might
not feel that they have any time. They might not feel the agency that if they came to think
something different than hospital policy they could do anything other than get fired.
But they still might care anyway, because they're like, fuck, I signed up to this thing because of a calling and an oath,
and I need to know, right?
So one place there, I like to start,
is I like to see, okay, are there two primary narratives,
or are there a few narratives, right?
Let's say there's two narratives.
Ivermectin really works and it's awesome;
it doesn't work at all, it's dangerous, right? Typically there's more than that. Typically there's like five or six.
Maybe it works in early cases, but not later. It works for these kinds of situations, or there's
some indication it works but we don't really know, or whatever. But let's take the kind of primary
narrative camps. Because in today's world, most people are trying to sense-make
between pre-existing narrative camps, and it's kind of important to
understand there is a very strong incentive for everyone to fall into narrative
camps; they're basically these strange attractors.
And so there are underlying forces that drive what you can think of as polarization.
Rather than just, like, well, whatever is true is what it's going to be,
it's much easier to believe something is true that someone with expertise
says with a lot of certainty and other people agree with.
Especially if there's a lot of literature and I'm unskilled, right, and there's no time
for me to read all of it myself. And then you get a narrative, and then
you get people who say that narrative has something false with it and they do a
counter-narrative, usually an anti-narrative, who are also kind of smart,
and typically based on either a different emphasis in values. Hey, this is about
personal freedoms. This is about public health. This is about my right to decide on
my own body, this is about not being a grandma killer, whatever it is, right? Sometimes it's just a
difference of values that affects their sense-making, because they're sense-making the thing that seems
most aligned with their own values; they're not actually paying attention to the sense-making,
they're looking at the narrative of truth that fits the value that they seem to care more about. I don't think anyone
should be comfortable with the idea of more imposition on people's personal freedoms than
necessary. I think everyone should be dubious of anyone who feels that they are in a position to say
what is necessary and impose it by force. Like everyone should be dubious.
Oh, you have a monopoly of violence that can impose necessary limits on everyone's freedom.
And who is the authority? Like what is the authority process that is not influenced by power or
fucked up motives or ego or mistakes at all that deserves to have that fucking power over everybody.
Like everybody should be legitimately concerned about that.
That there is such a thing as adequately legitimate authority to wield a monopoly of power.
Simultaneously, everybody should be legitimately concerned about unnecessarily being a grandma killer, right? Like about taking
a risk as a young person that would be not that consequential for you, probabilistically,
but would be way more consequential for other people. And everyone should have some sense
of like, yeah, we actually have a duty to each other. We have a social responsibility, a
social field, that insofar as we're affecting each other, we're not just
autonomous. And if we can affect each other invisibly, but
still tangibly, there's real consequence to that. Like, even as
libertarian as I want to be, non-aggression, I don't have the
right to come up and hit you in the face, right? Well, do I have
the right to dump toxic waste in the river on my property if you live right downstream from me and that's the river that
feeds your well? No, I'm aggressing on you, right? So if I'm sneezing and coughing in your space
and I might have an infection, like there's a real situation there regarding what is the limit
of personal sovereignty and what is the limit of civic duty. And it's interesting,
because a lot of people who really like
libertarian sovereignty feel comfortable
with the idea of a civic duty to go die in war,
including where there's a draft, if it has to be, right?
Not just even where it's voluntary. So we're like, okay,
there is a relationship between the way that the individual affects the larger wholes that they're part of and is affected by them.
And so how do we maximize everyone's liberty and maximize the well-being of the whole
in a way that no one's liberty is unduly harming anybody else's, right?
Obviously, we'd like to do that with emergent order rather than imposed order. So rather than a law
doing lockdown, more conscientious citizens who understand more and care more would be better.
Right? If you had citizens who cared more and did the research better and really came to
understand it better, then they wouldn't need police. They would be self-policing; then you don't have to worry about who is the authority that has a monopoly of violence.
There is a population that is well educated, conscientious, communicating with each other,
respectful, and self-policing in that way, right? Self-governing.
So the point is that oftentimes there are these values that have to live in a dialectical
relationship, but we will forget that and focus on one of them, then we'll focus on the
sense-making narrative that supports that one, and then we'll weight the
scientists who believe in it as being credible. And when people quote them, we're like,
this is a credible scientist, but then when someone quotes the credible scientist on the other side, we'll say, you're doing
a logical fallacy of appeal to authority. And it's like, really, you just did that. Like,
it's an appeal to authority when they pick their scientists, but this is a credible person
when you do it. And that kind of subtle bias is just all over the place, right? And it's
fundamentally a kind of bad faith sense-making
that people don't even realize they're doing most of the time.
So this is what we call motivated reasoning.
Motivated reasoning is tricky.
And there's so many reasons for it.
Sometimes I want to be certain just because
I'm fucking scared to have to say I have no idea what's going on about super
consequential stuff.
There's a pandemic; are the variants going to get worse, or is the vaccine going to make
them worse?
Is everybody going to die?
Am I going to get to go outside again?
This guy seems really certain and the story is not too scary or whatever it is.
Sometimes there's deep subconscious stuff, like my desire for safety, and certainty seems
to be a path to it.
So the same place in people that gets scared of the dark, because when they can't see what's
going on they project nasty stuff into it, or that gets scared of deep water because when
they can't see what's going on they project nasty stuff into it,
gets scared of the unknown in general.
It projects nasty stuff and then wants to pretend there is no unknown, so they want excessive
certainty about everything. So they get scared of death and so then they want to project
certainty about what happens in the afterlife and make up religions. When you recognize
how much of reality is unknown and actually unknowable, there's no way through
well, with grace, that doesn't involve deep friendship with the unknown, where you don't project
nastiness into the dark spaces. You just say, I don't know, like just a lot of things in life have
been really interesting so far and I'm curious what happens, right? And rather than pretend that I know,
and possibly steer really wrong, I'd like to just keep my eyes open and keep paying attention.
So I was giving one example of how motivated reasoning and values and fears and all like that
can affect people's sense-making. There are other things that can affect people's sense-making
that we can get into. But the first simple principle, as you're saying, across any domain, you know, lab leak hypothesis versus natural
zoonotic origin, or anthropogenic climate change being terrible really soon versus not, or
whatever it is: find people who seem very well researched and earnest, who hold strong versions of the
various narratives, and see if you can study their narrative and their reason for it well
enough that you can steelman it.
You can be like, I actually really get it and can give, like, an essentialized version of this.
Then, when you see the difference between them, see if you can come to understand why.
Like, are they drawing on different data? Are they both cherry-picking their data? It's
probably something that neither of them are saying: there's a lot of data and
they're each cherry-picking, they're each framing. Is one of them following much more
motivated reasoning and less good empiricism than the other one? But generally that dialectical process will, one, point out to you where you're
faster to start to believe in something rather than something else because of your own biases.
And if you notice that, then you realize that your bias will bias you, which means it will
mislead you.
Bias is like, if I let go of the steering wheel, my car starts going left.
I have to go actually get it adjusted.
If I let go and it goes right, like that's dangerous.
I want to let go and it stays straight.
Bias cognitively is the same thing.
It means I'm going to be very off of reality, naturally based on what appeals as more true
or less true to me because of various things, right?
Traumas, conditionings, partial value sets,
in group identities, the fact that the world of my childhood
is not a fair representation of the whole world,
but I was early imprinted that it was,
so I'll take those imprints onto everything.
So people should be fairly scared of their own biases.
Right, like they should want to seek out and find their biases and correct them. And so anyone who gets upset when someone says, I think you're
wrong about something, is actually fucking up their own life. If you protect your biases,
because they're protecting some sacred thing like your fear of uncertainty or whatever,
then you won't grow in this way, and your life will stay upper-bounded at whatever the limit of truth
that those allow you to understand, and whatever vulnerable things are underneath it,
and whatever partial values are underneath it. But if instead you're like, actually,
I don't think I can navigate well on a bad map, it just doesn't make sense;
any place where my maps are off, I want to know. But
by definition, I can't see my own blind spots.
It's what a blind spot is.
Everybody has biases.
I can see everybody else's,
I'm just pretty sure I don't have any.
The best gift somebody can give me
is where they actually tell me.
If they're like, I think you're off about this.
Now, they can be an asshole and just judge me and whatever, and I still want to listen.
Maybe they're wrong, but I want to listen to see if there's possibly a gift in it.
But if they're my friend, they'll be like, look, I know you're really trying.
I love you.
I agree with a lot of things,
but I think there's something you're missing here.
I'm like, tell me.
Because the worst thing I can imagine
is that I harm the things I care about or serve what I care about less well because of something
I can't even see, and somebody saw it and didn't help me. So this is another principle of
sense-making: have friends that disagree with you on really deep things, who have different biases. If you're a liberal, have conservative friends.
If you're strongly LGBTQ, et cetera,
have traditionalist friends.
Don't be so sure that your moral set
is the only and superior moral set,
that your sense-making set is the only one.
Have friends who have different orientations
and see if you can actually see the world
through their eyes in a non-pejorative way.
Not just, like, oh yes, I could see it that way if I was as uneducated and whatever as they are.
But see if you can actually be like, wow, yeah, I can feel the clarity and rightness of seeing it this way. And so, have friends who see things differently
and ask them their take on things and listen
and ask their take on your take on things and on you.
So this is one of the other things that I think is
very destructive about social media: the filter bubble phenomenon.
Facebook is going to give me what it gives me;
it's not trying, like, there's not a person having an agenda, there's an algorithm that is optimizing
time on site. And it just happens to be that when I see stuff that disorients me, when I'm getting
less certainty where I want more certainty, I leave. But when I want more certainty and I'm
getting more in-group validation and I'm only getting outraged at the out-group, which makes me feel
even more like I need to double down or whatever it is, I spend more time. So it just happens to be
that it appeals, and I don't even know I bail. I just keep scrolling in the fast infinite
scroll when something didn't capture my attention, because there's so much shit in the infinite scroll
that I'm only going to stop to look if the person is hot enough
or if it looks like something that my brain is pre-triggered
to say that's important.
Pre-triggered to say that's important
means appeals to an existing bias.
And otherwise I just scroll, right?
And just kind of don't even notice that I passed it.
But what that means is that I'm going to have both content
and people in the nature of that
world that will be confirming my biases rather than correcting them.
And so then, of course, you will get increasing polarization on everything as a result of
even just the information technology infrastructure itself, right?
If you haven't seen The Social Dilemma, you should watch it.
It covers this really well. So even if you keep
Facebook at all, or whatever social media, curate it. For the most part, the more time you spend
on it, the less good your sense-making will be, because it is optimizing for something that is not
your sense-making. But if you're going to use it, curate it. So I went and
intentionally found groups and public intellectuals that represented opposite sides of every topic. And
I liked and followed all of them to just confuse the fuck out of the algorithm. Because it's like who
likes the Sunrise Movement and the Cato Institute at the same time, you know?
And then to be careful, because I know it pays attention, that if I'm going to like stuff,
I want to kind of have a balanced distribution of my likes, which can confuse other people socially.
But it's because I want to see a representative feed, right?
And specifically, I want to pay attention to when I notice that I have a leaning on a topic.
I want to find the best thinkers who disagree, and I want to read their stuff more.
To see, am I missing stuff?
Right?
Like, is there anything in here I'm missing?
So curating your algorithm that way is helpful.
Getting the fuck off Facebook and just doing better internet research than that, rather than just following the recommendations on YouTube, which are so sticky, so
damn hard to avoid, especially because it's going to serve up some hypernormal stuff,
where pretty soon you're just watching MMA or bloopers or something. And then you're
like, where did two hours go? And you're like, oh, I was doing internet research. So,
just being real about how messed up those algorithms
are, you're like, okay, what is it I'm trying to get clarity on?
What are the narratives?
And who are good?
First, let's just Google who are the scientists,
academics, thinkers, whatever, that are representing these well.
Let me go read some articles, right?
Then let me find who's critiquing those well.
If I can get to the point where I can make each of the arguments
as well as they can make it,
and then I get a sense of why they disagree.
Is it different data?
Is it different values?
Is it different models?
Is it disingenuousness?
Like why do I think that?
Then I might be able to start to say,
do I see a synthesis here?
Were they each cherry-picking,
and is there a higher-order kind of insight?
Very often it's like looking at this cross-section
and that cross-section of a cylinder
and saying it's a circle, and it's a rectangle,
and there's partial truth in each;
they're just too low-dimensional an insight.
They're hyper-focusing on a thing, like,
you know, the founding fathers were slaveholders, and the whole thing is illegitimate, and they built all of these institutions off slavery, so all of our institutions are built on supporting that, so
this institutional racism is everywhere, and how can you possibly say anything good about or quote these guys? And it's like, yeah, and the people who want a better world
than all the racism and slavery want a world that is aligned with the Declaration of Independence
more than other articles of governance written. And civil rights were slow, but emerged out of some of the structures
that were hypocritical as fuck when they emerged.
And there was greatness in the nature of what happened with those countries.
So there was like evilness and greatness mixed together.
And so I can make each of those partial narratives by themselves,
adamantly, and they talk past each other.
But the reality is it's complex, right? Like, some of the individual people involved
were... I have a friend, Gilbert Morris, who's a professor on these topics, and he's like,
Jefferson was a great man and not a good man at all. And you have to hold both.
Like what does it mean to be able to hold both of those? Because I can tell each of those stories on their own
and they're both bullshit partial stories.
So can I start to find a higher order synthesis
than either of the cherry picked or partial stories?
That's the thing I want to start to look for.
And then not just jump to artificial certainty
that now I've got the whole thing, right?
I got something, and there are probably still lots of insights to gain.
So how do I stay oriented to continue to gain insights?
So those few things: make sure
that what you're being exposed to
is the various different ideas.
Make sure you're seeking to understand them
and synthesize them.
Curate your info environments to support that,
and curate your friend circles to support that.
Those are a few things.
There's a saying in CrossFit:
get comfortable being uncomfortable.
I guess here the equivalent would be get comfortable with the unknown.
And it's very easy to be
uncomfortable with the unknown, so they're related, right?
Yeah. And the reason CrossFit says that is because
comfort and growth don't happen in the same place. And
good sense making and high certainty don't happen in the
same place. Yeah, it seems to me that it's going to be effortful,
you know, to do this. To undertake good sense-making is going to require you to go through discomfort,
to go through the unknown, and to spend a lot more energy than just the limbic sort of reflex
action.
There's a quote from last year, it was in the Times, I've got this newspaper clipping.
Matthew Syed, I think, identified it. It's called compensatory control. He said,
when we feel uncertain, when randomness intrudes upon our lives, we respond by reintroducing order
in some other way. Superstitions and conspiracy theories speak to this need. It is not easy
to accept that important events are shaped by random forces. This is why, for some, it makes more sense to believe that we are threatened
by the grand plans of malign scientists than the chance mutation of a silly little microbe.
Fifteen months on now, with the lab leak hypothesis, this feels even more sort of nuanced, and the
nuance is interesting. But yeah, that confidence. That's the important bit, I want to touch on that for a minute. Yeah.
There was something that that writing did. I don't know, I didn't
hear who you said did it, so I'm saying this
with no allegiance or anti-allegiance.
It presented a position on a polarized topic,
right, that the conspiracy theories aren't true,
there weren't mad scientists plotting
or anything else, and this was just a bug.
It conflated that with a high moral,
almost spiritual insight that everybody would naturally
agree with and feel elevated by,
which was that we've all had the experience
of feeling disorder and then seeking to restore order:
cleaning the house where we had procrastinated,
when our taxes come in and we don't know how to pay them,
when we want to feel productive in some way,
or whatever it is.
Like, we've all experienced that thing.
And so people are like, yeah, that's true.
And then they're like, no, it is true.
We should just be able to embrace the uncertainty.
So there's like a resonant true thing,
there's like an aspirational thing,
and then there's like a given conclusion on a topic
that is not concluded.
That's a kind of narrative warfare.
Where it almost makes it seem like believing that belief is aligned with the high moral, the almost poetic. Like there's
the good, there's the ethical, and there's almost a beautiful aesthetic, and
then the true, right? So the best narrative warfare takes the true, the good, and the beautiful,
distorts them all a little bit and braids them to align with a particular position.
And that's how I feel when I'm reading
the New York Times or something where I'm like,
if I believed anything other than this,
I would just be a bad person.
It's so clear what moral high ground is
and right side of history.
And it's written so beautifully.
Whoever wrote this is a fucking brilliant writer
and poetic, it's achingly beautiful or whatever.
And it seems so clear because they're quoting the New England Journal of Medicine in Harvard
and whatever it is.
And it seems like the best scientists all agree in the peer reviewed journal settings.
It doesn't make it true. Like it really doesn't, and it doesn't mean that the morals that are there
are right. So the lab leak hypothesis is a really great example.
I have not done my research on it to have my own opinion on it adequately,
so I am not going to, but I'm going to look at it just from a narrative point of view.
The lab leak hypothesis was, up till whenever, a couple months ago,
like being a flat-earther, right? Like a flat-earther,
anti-vaxxer, tinfoil-hat-wearing, reptilian-blood-drinkers-run-the-world believer. And, you know,
you have to believe all that nonsense, and it's like, but even more, it's like,
anyone who's saying that it could have leaked out of a lab not only doesn't understand science
and it's anti-science, but they're trying to cause a war with China.
They're xenophobic.
They're against Asians.
They're like, all this moral sanctimony about what a bad person you are, what the bad effects
will be, and how dumb you are,
if you think that it's reasonable that it might have escaped a lab that happens to be
in that area, that happens to work with those viruses.
And because the science is settled, because of something that we later came to realize
was not settled science.
And so you're like, how the...
And so then it starts to come out that it actually wasn't settled science.
And we're like, how the fuck did the zeitgeist get that powerful,
that quickly, that you were a dumb and horrible person
for believing a thing, because the science was settled,
and the science was never settled,
and everyone who believed the science was settled,
and that everyone else was dumb and bad,
should be reflecting like, what the fuck?
I got captured.
Like, I got captured.
I was certain about something
and I didn't even read the article in Nature
that supposedly proved the zoonotic hypothesis,
and I'm not qualified,
and I wouldn't have known how to do the rebuttal
that came out later.
But the guy in The New Yorker,
whoever it was that wrote about it, seemed really certain,
and it appealed to my sensibilities
and the institutions that agreed with it
were the institutions that seemed high-minded
that I like to agree with.
So I hope people take this seriously right now
as an example, regardless of where the virus actually came from.
The lab leak hypothesis was not dumb. Whether it was true or not, it was not proven false and was not dumb.
And the way that it was said, and the narrative, and then, like, how did that narrative get that strong?
Like, what was the force that wanted
to make it seem that certain, and to push against the other narrative so strongly?
That should be a very interesting question for everybody.
And yeah, it's fascinating. In this whole situation I have seen zeitgeist formation that
is more intense and faster than I've ever seen previously,
that is not based on good sense-making, but on other stuff.
Just rapid news cycle iteration.
And yeah, one of the terms that you use a lot is good faith and bad faith actors.
And I guess that this ties in with sense-making individuals, or actors,
if you want to say that, sense-making agents. What
I've found, especially over the last 15 months, and the lab leak hypothesis is a good example
because it is so flagrant and in your face, was people who had complete certainty, plus
powerful distribution to be able to convince others of their certainty,
are able to reverse their position, essentially without an apology, within the space of 15
months. Here was a thing that I'm certain is true, and now here's another story about
a potential other truth, without referring to the fact that the first truth that we made you believe was true was untrue.
We saw this with Joe Biden last year, where he said in February that shutting down travel from China was xenophobic,
and then by May was saying that Donald Trump had left it too late to close the borders. Like, you don't get to do that. You do not get to fucking do that. You're supposed to be the people leading the country. You're supposed to be the ones
that we hold to the highest levels of good faith actor requirements.
Yeah, that's cute. Like obviously they do get to do that because people are easy enough to capture and move along in that way. That's why it happens.
What's interesting is each time, say, a narrative changes, it's:
we made a mistake before, we couldn't have known, or science takes time,
which sometimes is true, but now we know, right?
Like it's always, and now we know.
So it's a continuous justification
for the authority we have.
At The Consilience Project, we just published an article recently
called 'Where Arguments Come From'. The team that worked on it did a really, really good job,
and it basically shows, like, where arguments in the public sphere come from. In order for a lot of people to have heard it,
a lot of amplification of the message had to happen,
which meant a lot of people
who have the ability to amplify a message had to care about it,
or some people that had the ability to amplify it
had to do a lot of work to make that happen.
And so typically, there's a narrative
that somebody who has a vested interest wants, right?
So they have like a demand for a narrative,
because it'll create a demand for a thing in the population.
And so they find a source of narrative supply.
They find an academic or a think tank or whatever it is that
already thinks that thing, or thinks something close enough,
so they don't have to get somebody to lie. They
just upregulate the narrative, like that person who's been
writing about that thing for 30 years and never got any traction. Now it's
everywhere, because there's now an agenda that it's useful for, that will
upregulate it, where before it was swimming upstream. And so it's important to just
really think about the mechanics of what allows a meme to propagate.
What allows a narrative or zeitgeist to propagate, and not just what allows it, what propagates it, right? There is energy involved in propagation, and the energy has an interest in seeking ROI on that energy. And so this is why DC is full of think tanks, intellectuals putting out public policy where the ideology is already predetermined in advance, and they're doing it on every new topic that emerges. Right? Yeah.
What do you think a good sense-making agent is not? Or what are some of the most common pitfalls that people run into when trying to become one?
I mean, we've been talking about the pitfall of excessive certainty this whole time. Excessive epistemic certainty is, I know what is true. Excessive moral certainty is, I know what is good, also called sanctimony, right? Those are both pretty big pitfalls. There is another one on the other side, which is, I don't know and I don't care. That's nihilism.
What I find interesting is that most people, or many people, will flip from certainty to nihilism in, like, one step. They're pretty certain of something, and if they find out that it's not true, they're like, I can't make sense of anything, I'm giving up, I'm going to go watch TV or whatever it is. Because the hard work is having to sit in, I don't know. And I'm going to work at it and I'm still not going to know, and I'm going to work at it more and I'm still not going to know, for a long time.
I have a bunch of friends that get really frustrated with me, because they send me a thing and say, what's your opinion on this thing, whatever it is? And it's, like, something about a different narrative on early human civilization and hominid origins or whatever. They're like, did you read the paper? I say, yeah, it's interesting. And they're like, well, what do you think? I'm like, it would take me hundreds of hours to start to have a sense of it. It's an interesting topic, but I don't have the hundreds of hours to put into it. I'm not nihilistic in that I don't care; it just might not make it to the top of my stack. And a lot of the things that are on the top of my stack, I also say, I don't know, but I'm working on it, right?
So when we talk about sense-making, being a good, grounded sense-making agent, what we're saying is: I actually want to understand the world I live in as best I can, because I actually hold that life is meaningful. And I hold that my life could be meaningful, which means that my choices can be meaningful, and so I want them to be as informed as they can be. If my choices are me acting on and in and with the world, I want to understand things about me and the world, and about what those actions will do, as best I can. Because if my choices really matter, I don't want to believe that it's going to go a certain way and be wrong unnecessarily. I want to understand as much as possible because it matters, right? Ultimately, because it matters and I care. And then the harder thing is to say, well, it matters and I don't know, and I might still have choices to make, and I care. It's easier to jump to, I know, or, I don't care, because sitting in, I don't know, and I care, and it's consequential, and it's moving, is fucking hard, right?
That's scary, it can be heartbreaking, but there's an epistemic humility and an epistemic commitment at the same time: I don't know, but I can progressively come to know better, right? Not all positions are equally good positions. Some of them have more error, and some of them are more inclusive of more perspectives. So I can progressively come to know better, so that my choices can be better informed, so they can be more effective and more meaningful. But in order to do that, I'm going to have to stay the course of seeking understanding for quite some time, which means I'm going to have to not prematurely think I've figured it out.
Or defer my sense-making to someone else who thinks they've figured it out. Easy exits out of the discomfort. It's deciding to give up halfway through the workout, it's sandbagging it so that you don't go to your maximum heart rate.
Yeah, it's dropping the weight down so that the discomfort's a little bit less.
Yeah. 'Get uncomfortable with the unknown', I think, is a really good sort of overarching heuristic that you've got there.
One of the things that was...
Just one more way to say that: get okay with the unknown. Like, there's an even more poetic and beautiful way to say it, that I actually feel, and I think everybody feels if they drop in, which is: actually connect to your love of reality.
If you didn't have a love for reality, you wouldn't care if it got hurt, you wouldn't care if you lost it, you wouldn't care at all. The fear of loss is because there's something meaningful you don't want to lose. The anger at anyone doing the wrong thing is because they're harming something you care about. Care and love are the origin of all the other emotions, because otherwise you would just be apathetic about the issues, right?
So ultimately I give shits about things because there is a care and love about life, my life,
others' lives, reality, that's real.
So there is a love of reality that is at the basis of the meaningfulness of anything.
And reality is mostly unknown to me.
I know the tiniest fragment.
So my love of reality and my love of the unknown, right?
If I have a love of reality and it's mostly unknown, that means not just comfort with the
unknown, but, and this is the awe of the mystery, this is the spiritual sense of faith and
trust, whatever, right? It's actually extending the love of reality into the fact that most
of it's unknown.
That's nicer. We'll take that one. One of the things that I was very interested in is to try and work out at what stage of personal actualization you should try to go and fix the world, and how do you know when you're ready, for want of a better term?
Yeah, definitely reach total enlightenment before sweeping the kitchen. I joke, because it shows how ludicrous it is to think that it is not both, always.
If I am seeking to help the world and I have not learned what's going on in the world, I might be doing stuff that's totally not needed, or not very useful. I'm trying to solve a problem, but I don't understand the upstream things that are causing the problem, so my actions will mostly be useless. So 'do I want to work on myself first' in this case just means work on my cognitive models and maps, so that I understand the issue well and how I can be helpful. Sometimes I care about the thing I want to help, but the first question is, do I understand the problem well enough to be able to help? Sometimes that doesn't matter. There's trash on the beach, I can pick it up. Did I fix the issue of trash on the beach? No. I don't know who's putting the trash there, why, what cultural effects are causing that. But I still picked up the trash that day. Cool.
That is not a comprehensive solution to pollution. It's a meaningful activity in the moment.
But to the degree I want a comprehensive solution to pollution, I have to start to understand the financial incentives to make throwaway plastics, I have to understand what it would take technologically to be able to make plastics that biodegrade, and I have to understand the culture of why people do that here and they don't in Japan, and how we could change the culture.
There might be a number of ways I can come to understand it well enough, right? So I might want to work on my understanding of a problem before trying to help with it, so that I have a sense of how to really be effective.
Particularly the more complex the issue is, the more consequential it is,
and the more consequential my action is going to be.
Right?
Oh, we're going to do a solar remittance program, that's not the right word, but where we reflect 20% of the sunlight away from the earth through geoengineering? We should probably be pretty fucking sure. Pretty fucking sure that's a good idea, because it's pretty consequential, right? Or, we're going to try and sequester CO2 using these genetically modified plants that we've never planted at scale, and we don't know the biological effects of the modified organisms. Like, more sense-making before choices that are really consequential.
So that's one example: I want to work on my own cognitive maps of what is needed, what's going on, and what would be effective, enough that I have a sense of what to do. But then there's also a point at which no more research will work; I need to field test the thing, try some stuff, and be like, oh, it didn't work for reasons the lab would have never told me. I didn't realize that the locals don't even like that thing, or they don't believe in it, or whatever. And so there's a place where the application layer also ends up being part of the epistemology. It's the testing, right?
So that's one example. There's also the example of, well, what if I'm working on trying to help the world, and the problem isn't a lack of cognitive development, it's a lack of a certain kind of emotional healing and development, and that is affecting how I'm showing up?
Well, let's say whatever wounding issues in childhood give me an outsized need for credit-seeking, because of not having ever felt loved enough, or only having felt good enough based on performance and credit attribution, whatever. Will I possibly mess up a project to ensure that I get the credit out of it? Will I undermine other people, or sabotage, or whatever it is, or emphasize me getting credit more than the effect of the project? Whereas me doing work on not needing that as much, by healing whatever kind of place that is in me, would actually make me a much better agent for change in the world. Yeah, that's a real thing, right? That kind of stuff messes stuff up. Or there's the case where I'm trying to heal a particular issue in the world and I don't realize it's unresolved wounds in me, where I see something resonant out there. And when I heal the thing in me, I have a totally different assessment of it, right? Like, I was trying to fix marriages in a particular way because of my trauma around my parents' divorce or whatever it was, and so I was seeing it through a traumatized lens, and I didn't even realize it, and I had this whole mission and nonprofit and whatever.
So there are times where our own trauma will get projected onto the world. This is why Lao Tzu said, if you want to protect your feet from rocks, better to put on shoes than to try to cover the world in leather, right? But that idea doesn't mean that any pain you feel looking at the world is just your pain.
Like, I think if someone was as healed and integrated as they could be and they saw a factory farm, they would feel the empathetic pain of the pain of those animals. If they saw hungry kids, they'd feel an empathetic pain. If they didn't, there would be something wrong with them. They wouldn't be enlightened; they'd be sociopaths. And this is why you see the Buddha crying, right? This is why you see the passion of the Christ. The idea of enlightenment is not just, oh, I can see your suffering and it doesn't do anything to me. Sociopathic enlightenment is not that interesting.
But this is where it can be either way, right? Am I clear, and really feeling the pain of the other, and feeling called to help? Or is some pain somewhere else just triggering my pain, and then, rather than face it in myself, I'm going to try to solve it in the world in a way that will always keep my sense-making and my effectiveness off? So these are examples people will understand of, like, do more work on yourself first. Or, my own need for excessive certainty, because of my discomfort with the unknown, will make me do shit where I'm wrong but certain I'm right, too often, right? And I'm sure the listeners can generate 100 more examples.
So should I just do a bunch of psychotherapy and a bunch of Zen meditation and a bunch of study until I am second tier or third tier or whatever the fuck the developmental metric I want to look at is, which means I am now whole enough and integrated enough that I can work in the world? No, that's ridiculous. So many of the ways we learn about the problems is by engaging with them; you couldn't do it only in study. And so many of the ways we learn about our own issues is by engaging in the world and seeing, oh, I really did try to get too much ego credit there, and I'm reflecting on it, and I was an asshole, and I need to work on that, and I wouldn't have seen it otherwise. Or, wow, that project failed and I was so certain; that's how I'm seeing my certainty issue.
Right. So as I heal and learn and grow, I can show up better in the world.
But as I show up in the world, I also get to see those things if I'm looking for them.
If I don't look for them, I'll always blame the world.
Every failure will be somebody else's fault.
But if I'm looking for it, then I can see those things.
And rather than just getting crushed, like, I'm a piece of shit and that's all there is to it, I can take it as, oh, this was some belief or trauma pattern that created a self-fulfilling prophecy or whatever, but that I could shift.
Right? So I want to bring in empowerment, where I will look at what in me was off, not to just beat myself up and hate myself, not to pretend there was nothing in me that was off, but to be able to see it, look at it, work on integrating it, and grow past it. But similarly, there's also this thing where we're showing up to the world with things that we're passionate about, and it motivates our growth. Let's say I'm afraid of public speaking. Let's say I'm, like, catatonically afraid of public speaking. I can just avoid that forever and never have to go through it.
But let's say I'm somebody like Jane Goodall or whatever, and I go and I'm working with the primates in the wild, and I watch the poaching, right? And I'm so fucking broken by that, and it matters so much more than whether the people like me or not, that I get up on stage and talk about it, like, we have to stop this poaching, we have to. Because something bigger than me and my fear of public speaking is actually now moving me.
And if, when I get up on the stage to talk, I'm still in the, are people going to like me or not like me, place, I won't get over the fear. If I'm touched by something that is so much more important than that, I can actually transcend it, because it's not about me. I'm talking about the topic, right? I'm talking about the issue. I don't even need to be the one talking, get me the fuck off, have somebody else talk about it; I'll just talk about it if there's nobody else who's doing it. I also find that the hardest parts of our healing are hard, right? We avoid those
things for reasons. We don't notice them. They're in the shadow for reasons. And oftentimes this is
why, like, so many people only heal patterns when they have kids. We see this a lot. It's because there's something bigger than them for which they're willing to do the work, because they're like, man, I'm fucking my kids up the way my parents fucked me up.
I told myself I wasn't going to do this.
I'm repeating the same patterns.
I see they're going to get it.
And that's the only thing that has them, like, double down on what it takes to shift that. So whether it's your kids or whether it's some other calling, there's a place where showing up to the world is actually the only thing that can make something more important than you, that can allow you to transcend the parts of self that were just too hard otherwise. So what we want is a virtuous cycle between growing as a being and having who we are show up for what we care about. As we show up for what we care about, it gives us insight about ourselves and about the situation, and it gives us motivation. And as we grow and heal more as a person, we can show up for what we care about better.
That's beautiful.
Have you got any sense of whether, on the whole, people tend more toward the side of showing up or more toward the side of working on themselves first? If you were to give most people a little bit of a push in one direction, would you say, consider the outside or consider the inside more?
There are just different groups of people, right? You have a personal growth world, and a psychotherapy and healing world, and, call it, an Eastern enlightenment world, that are very focused in that direction. You have entrepreneurial and activist and various types of action-oriented worlds that are focused in the other direction. There are biases on both sides.
Yeah. Have you considered the possibility that humans are just too much at the mercy of our emotions and our programming to be able to reach our civilizational potential? I was thinking about this when the most recent potential disclosure about aliens was coming out, and I had a conversation with a friend saying, I don't think that aliens could be even 10% more emotional than us, by whatever criteria you want to class that, more emotionally reactive, because coordination would become so difficult. If you were to turn it up maybe 10 or 20% more, you wouldn't be able to achieve shit. Like, have you considered that we might just be kind of bouncing off a glass ceiling, that the creatures that we are are so self-limiting that no matter how much we try to transcend our own programming, we're constrained, given human nature?
Well, they're related questions, right? I get asked a version of this a lot, and it usually comes in the form of, I am dubious of two things, or, I'll say it another way: I'm concerned that people are too irrational and too rivalrous to do anything like this emerging coordination that you're talking about; that the level of rationality and the level of anti-rivalry, so, like, wisdom and compassion or whatever it is, that would be needed don't seem to be well demonstrated across the population anywhere, so what kind of Aquarian nonsense is this? So let's address that. It's a realpolitik critique, right, or concern. Am I asking the same thing you're asking?
Not far off, yes. I mean, I wasn't accusing you directly, it was more abstract, but yes, you're right.
Yeah, have I ever considered it? Yeah. So I'll tell you the first part of how I approach this. I'll actually use it as an example of what I was saying earlier, that a good way to sense-make is to do dialectic.
So everything on the nature versus nurture arguments, and the range of what people thought nurture could do, were topics where I really wanted to see all of the thinking, and what the basis for the thinking was, and whether there were any kinds of axioms that were unquestioned, or new possibilities that could change the landscape beyond even those ideas. So one thing is, when we look at, say, how violent versus altruistic or rational people are, or whatever metric we want to look at, and however we assess it, we look across the population to get some kind of distribution.
First, I would say I'm extraordinarily dubious of pretty much all social science of this type, for a bunch of reasons. One is that it almost all started post Industrial Revolution, and much more recently than that, while almost all of the human history that conditioned us, genetically and otherwise, was in tribal environments, and those are so different. And just like we were saying, even the nature of engaging with Facebook changes the patterns of what we think, feel, believe, and react to. And there are tests that Facebook has run showing, like, we can make people more depressed or happier, and believe different things, just by changing what's in the feed, changing the algorithm a little bit, right? So post ubiquitous capitalism and ubiquitous industrialization and ubiquitous nuclear family homes, and a bunch of things, none of which were natural to the human evolutionary environment, but they're the conditioning that kind of won, so it became a new ubiquitous conditioning. We do the social science then, and then pretend that that's not conditioning, and call it human nature because it's ubiquitous conditioning. That's silly, right? And then there are so few indigenous people left, or whatever, that we can just make them statistical outliers, even though they have very different patterns on a lot of those things.
So that's one reason that I'm very dubious of the social sciences. And this is even when they're trying to do a good job, not like the nonsense social science that was reifying why whites were superior in the early US, based on bad interpretations of Darwin and phrenology and stuff, right? But you can see from that stuff how easily bias influences something as complex as social science, complex and consequential.
The other thing is that there are a lot of things about people's behavioral dispositions that change with development, and development is not factored in. We don't factor in the higher stages of human development after just becoming 18, even though they're very real things. We just put it all together under a bell curve. But when you look at the work of Piaget, and then the kind of neo-Piagetian educational philosophers who were looking at human development in childhood, it's very clear: there's neurologic development and corresponding change in fine motor skills and logic skills and verbal skills, et cetera. And then we get to, like, 18, school's over and the development ends, right? And that's a fully developed person? That's gibberish. It's not a fully developed person. So what is development beyond that?
You have a bunch of people who have worked on higher stages of development. Zak Stein, a good colleague of mine, has worked on that very heavily, and looked at the work of Kohlberg and Graves and lots of other people who've worked on it. The complexity of someone's cognitive models, the development of their moral models, the development of their aesthetics, the development of their capacity to perspective-take, perspective-seek, and perspective-synthesize: those things keep developing, or they can keep developing; one can do things to develop them. And at those higher development capacities, there are different behavioral dispositions.
And this is not just typological, like they're typologically left or right or whatever. The point is, we could say that if the society was supporting more development of that type, you would have a totally different bell curve, right? But that's not a topic that's usually factored in. So there are a bunch of things like this where I would say the social science all needs to be taken with some grains of salt.
One thing I have looked at, on the traits that matter most to a civilization that would work well, is positive deviance: outliers from the statistical norms on the positive side, to see whether what we think of as the upper boundary is really the upper boundary, or whether there are places where what we think of as the upper boundary is the median, right? It's quite different. And there are heaps of examples. If you look through much of the last few thousand years, across lots of different cultures and different geographies, have Jewish families raised better-educated kids than the people around them much of the time?
Yeah, they have.
And so are there cultural dispositions that can lead to higher qualities of education, and correspondingly different qualities of ways of being, different types of disposition? Then you also have the fact that for a long time, the Jews, as a diaspora, pretty much didn't defect on each other. And the way Jewish law is structured, it's kind of like a formal logical system, so they also got very good at thinking in formal ways, which is why they became good at science and finance and other things that involve thinking in formal logic as well. It's a really important example, because you could ask, if what Jewish culture gave, in terms of the development of education and rationality and non-defection on each other, could happen across the whole population, would that change things? Yeah, it really would.
What about the Jains? You have a religion where, across a long period of time, nobody hurts bugs or plants. Yeah, you do. What about the violent kids in the society? What about the sociopaths? Across a huge population, the violence bell curve is completely different, right? You have extraordinarily low violence across the whole population. How can that be? Well, they're developed differently. Can you have a population where almost everyone is violent? Yeah, there are a few cultures where violence is ubiquitous, right?
And you can see cultures where kids grow up as child soldiers, where you don't make it to adulthood without killing people. And so it's like the Janjaweed and the Jains are both possible in human nature, depending on conditioning. So the idea that what we naturally are is the median of that is just gibberish. It's just not understanding how we create societal structures that create conditioning that supports the societal structure. So what if we had something that was conditioning non-violence and compassion, more like the Jains or Buddhists or Quakers, and conditioning rationality, more like the Nordic Bildung countries? And what if we took a few of those things and brought them together,
not just in an educational system but in a cultural developmental system? Could we have, is it within human nature, if rightly conditioned, higher potentials? What if, in addition, we created an economic system where we addressed perverse incentives?
So rather than the guy who externalizes the most harm to the environment making the most money, and then getting the most chicks and status and whatever, all of the externalized harms are internalized into the cost, so the guy who makes the most money is the one who creates the most benefit and does no harm anywhere. Well, now there's no sociopathic niche to condition bad behavior and bad values in people, and doing the thing that's good for others ends up being good for you, actually conditioning the values even from self-centeredness. It starts to bridge in that way.
Well, how do we make an economic system that rigorously internalizes externalities and addresses perverse incentives? That's a really deep question for changing what we would call human nature, but that isn't human nature; it's the nature of made-up human coordination systems, right? Does all property law, does all access to resources, have to be at the level of individual private property? No. Can we do things that change that fundamentally? Is every good fundamentally rivalrous because it's scarce? No. Digital goods made it very clear that you can have things that are not only not scarce, but anti-rivalrous: the more people that use them, the more valuable they become. But we still make them artificially scarce, because of the artificially scarce dollars, because of the artificially scarce materials economy. Can we make a materials economy that isn't artificially scarce, by making it closed-loop with enough energy to run it? Yeah, we can.
So the point is, do we see positive deviance? You look at a very wealthy population, old-wealth families while they still have the integrity of how to do dynasty, or even just the kids going to the best prep schools in the U.S., who then go to the best Ivy League schools and have all the best tutors. Do you have the same distribution of success in life for the kids coming out of Exeter and the kids coming out of an average public school? No, they're totally different. Well, what if everybody went to Exeter, and had that corresponding life since they were little? It would be totally different.
Well, but we can't afford to do that. Yeah, so here's the thing. The idea of the dumb masses is class propaganda. The upper class having access to the things that develop them is a major part of why they end up having more capacity. And then the idea that some people like them need to be in positions to rule, because the masses are too dumb, is a self-fulfilling prophecy, because we'll keep the masses dumb by not giving them better educational resources and other types of things that would create a difference there. So I actually think that the idea of the irrationality and the rivalrousness of the masses is one of the deepest parts of the propaganda zeitgeist of ruling classes forever, because it justifies the basis for rulership.
Cultural conditioning masquerading as human nature in the modern world is something that I've never even thought of before. That is so interesting.
Yeah, I mean, you've run these clubs, and you see all these teenagers coming into the clubs and how they behave. And so you've seen patterns that are on repeat so much, you're like, I know human nature, I've seen this a hundred thousand times, right? But if you went to the tribes in the Western Amazon and saw how the teenagers there were engaging, who'd already been doing ayahuasca for ten years since the time they were little, it's not the same. Like, there are some things, yes: they have a sex drive, right? Yes, there are certain aspects of paying attention to social hierarchies that everyone's going to notice. But there's more that's different, like a lot more that's different, right?
That's conditioning. Given the fact that at the moment, the cultural conditioning
appears to be making human nature into a rivalrous game, it's difficult. We have coordination problems. Not everybody is a Jain or a Jew at the moment, or some perfect amalgamation of both.
With that in mind, you said something in our first conversation, that we were gods, just shitty gods, and it was a comment on the difference between technological prowess and the wisdom or ability to deploy that technological prowess. Do you think it would be optimal to curtail technological development, for perhaps a couple of millennia, say, while we let our wisdom catch up? Or do we risk more by accumulating background risk, and potentially losing galactic real estate, by not progressing within that time? Have you got any sense of how that balances?
It's irrelevant, because it's impossible. It's impossible to slow the progress much.
If you had a God's eye view?
If I had a God's eye ability to slow it, would I? Yes. Well, if I had the God's eye ability to slow it, I would just speed up the rate of the wisdom instead, because slowing the tech is not easier. What it takes to grow the wisdom of everyone and what it takes to stop the progress are actually paired things. Because outside of the wisdom, the multipolar traps win. Anybody who says, I'm going to get there first, is going to just win the world, because there's so much power there.
So now everyone has maximum incentive to get there as fast as they can, including lying to other people that they're not doing it, so people aren't trying to race. And so, you know, you can say, God, we are not ready to be stewards of this much technological power, let alone power that we can't even be stewards of, because it'll become autopoietic and run itself, like AGI, right? So we need to just slow this fucking thing down.
Okay, so we could become a Unabomber; that was his idea, right? Ted's idea was, like, we're not ready for tech. We sucked with spears. We were assholes with spears. We were assholes with guns. We were assholes with ICBMs. And now we're going to have drone weapons everywhere. Like, no, we've got to slow that stuff down, because we have been assholes with all the weapons we've had, and we destroyed environments with way less capacity than we have now. Okay, so you come to that idea, and you can do what? If you don't have tech, you don't have the power to affect stuff, because the tech ends up being the power. And other people disagree with you, and they want the power. So if you want to stop them, you've got to get more power than them, which means you've got to beat them at the race to get the power to stop them. So the techno-optimists, the naive techno-optimists,
who just say tech will solve all of the problems, that's a gibberish position. And it's not just gibberish, it is super dangerous. In my opinion, it's the most dangerous worldview currently, because, like, a militant jihadi worldview doesn't make AI and CRISPR tech and things like that. It just doesn't make the tools that can be tools of destruction at scale. Only the worldview that supports the increased rapid development of exponential tech increases the fundamental destruction power, not just the application of the existing destruction power, right? So the idea that the faster we build the stuff, the better everything will get, because AI will just solve all the problems and we can't possibly solve them ourselves, so let's just get to the AI fast and let it solve all the problems, et cetera: yeah, that's an extraordinarily dangerous view.
The Luddite view, right, the techno-pessimist, says: we have always been bad stewards of power; we cannot wield exponential power well. We've used power for war, and exponential war destroys everything. We've used power to extract from the environment and externalize the cost, and exponential externality destroys everything. We've used our power to create radical asymmetries of power, and exponential asymmetries of power sound like a shitty world for almost everybody. So we need to stop that thing. Well, while all of that is true, that view ensures that it will have no power to do anything. Right? So the only view that can move things forward is the one that embraces the tech that is where the power is. But it embraces it recognizing that it's not a given that that tech is good.
It could be developed in ways that are positive. Like, if Facebook wasn't developed with a time-on-site-maximizing algorithm, it could be a very different force, right? The ability to take all of my behavioral data, make an advanced psychographic profile of me, and then use AI to curate an infinite news scroll to affect my mind and behavior: that's fucking powerful tech. But if the goal is to optimize my time on site, to sell me and my information to advertisers, then it's going to optimize for putting in front of me the limbic hijacks and the cognitive biases and the in-group stuff and the things that drive addiction. But if it was optimized for developmental metrics, like, expose the person to the things that will actually help them see alternate views that they don't already see, and help them learn and grow in perspective-seeking, then technologies like that could be the most powerful tools of consciousness elevation and education that have ever been. So it's not that the technology is definitely bad. And it's also not that the technology is values-agnostic. No technology is values-agnostic. If I make a plow, it's not values-agnostic, a thing that can just be used for good or bad.
The plow will make a lot more food for my people than just hunting and gathering. So that means that if other people use it and I don't, I'm just going to lose, in terms of making it through famines and having stuff to trade and whatever. So I have to use the plow as soon as it exists, pretty much, right? And the plow codes a pattern of behavior. Now I have to yoke an ox and beat it all day long to do that. Before, I was animistic: when we were hunting and gathering, I'd kill the buffalo every once in a while, but I believed in the spirit of the buffalo. I can't believe in the spirit of the buffalo and beat it all day long, after cutting its testicles off and putting something through its nose and whatever. So I have to change my whole view on the spirit of the animal, right?
Was the tech values-agnostic? No, it coded values into me by my using it, by the fact that I had to use it for game-theoretic advantage, right? So the idea that tech is just tech, and it's not good or bad, it's just how we use it: this is a misunderstanding. Yes, of course I can use a hammer to build a house for the homeless or to beat somebody's skull in, and that's a positive and a negative use. But also, the hammer will code certain patterns of behavior, and those patterns of behavior will end up coding values into me, right? And so what that means is, if I make social tech, like Facebook, I can develop a social technology knowing that it will code patterns of human behavior that will code people's values and how they see the world. I can develop it intentionally so that it codes patterns of behavior and values that some model of human development says make a more developed person, versus a more attention-extracted and profit-extracted person, which usually means a less developed person.
So it's not that the tech is values-agnostic, and it's not that it's necessarily good, as the techno-optimist thinks, or necessarily bad. It's that we can design it in a way where the fact that it's not values-agnostic, and that it will be conditioning values, can be good or bad; it comes down not just to how we use it, but to how we design it. The nature of the design itself will end up affecting the use patterns, which will end up affecting the beings in the society. So in order to move forward, we have to utilize the technologies that have power, otherwise the stuff we're doing just won't matter, and those who are utilizing them will just win. But we have to do it in a way that is also aligned with the human development and the social values and the integrity of the planet and the commons that we want to see.
So right now, it's fair to say that exponential tech confers so much more power than all other legacy forms of power that only those who are developing and guiding exponential tech will have much say in the future. Right now, there are, like, two attractors for what happens. Either the exponential technologies just cause catastrophic destruction, because you have exponential warfare or exponential externalities, right? Or we figure out how to avoid those with some good control systems using the tech, and now we get exponential control systems. And so we see authoritarian nation-states using the tech to make better authoritarian nation-states, and we see some companies using the tech to make companies that are more powerful than countries, but that don't have jurisprudence of, for, and by the people, and so that is more like a new kind of feudalism. So both the authoritarianism and the feudalism are technologically empowered autocratic structures. So there are basically catastrophes and dystopias; those are the only two things that are currently on the landscape. If we want something that is not a catastrophe and not a dystopia, then we have to utilize the tech in a way that binds the tech, so it doesn't cause the catastrophes, and that does it in a way that's not dystopic, meaning that the order is emergent rather than imposed.
How do we utilize AI and crypto and social
and attention tech and all of these things?
How do we utilize them to increase collective intelligence and collective coordination, so that we get increased effective coordination and order, without it being a kind of dystopic control system? There are a lot of innovations we can really implement there.
So, the question of should we slow down the rate of progress: if I could slow it down, say, for the guys who are getting way too close to super dangerous AI, if I could slow it down, I'd do it. Yes, yes, I would like that. Because right now, our growth in doing it rightly and in wisdom is not good relative to our growth in getting the technology more powerful. If I could get how powerful CRISPR technology is becoming, as cheaply as it's becoming, to slow down, so that it wasn't so easy for very small groups to have bioweapons capacity: like, yes, I would like that to slow the fuck down. It's not going to, because there is no world authority that can stop it everywhere, and anyone who does it gains advantage, so nobody really wants to stop it. So what we have to do is get the utilization of those technologies to a better attractor that can guide them, and that has to happen even faster.
The consideration of
what sort of a world we would be in if Facebook optimized for well-being or happiness or insight or whatever, when you think about how powerful it is, and some of the changes that we've seen in human nature, which we now know might very well be cultural conditioning. It's crazy.
Think about the sort of society that we would have. So you prefer to learn through 15-second video clips? Okay, TikTok education and TikTok mindfulness is for you. And Twitter: you prefer to actually read, but you get on with more pithy, sort of aphoristic stuff? Okay, Twitter enlightenment, that's your place. Like, yeah. And okay, let's take not just the type of media we give people, but also the nature of the content: make it bias-challenging more than bias-confirming. I don't know.
So there are some YouTube channels and Facebook groups that just document police violence. You can just watch videos of cops beating the shit out of people in ways that seem unprovoked, and some of them are just cops beating up black people. And as much as I am aware of the statistics, and I'm aware of how we're affected by this, I can't watch that channel for more than a few minutes without just seeing red everywhere, right? And that's the only issue I can care about. I can't care about anything else in that moment.
And then you watch a different channel, a Blue Lives Matter one, that basically just shows cops risking themselves to protect other people, and then people attacking cops, which is why they are the way they are, and maybe one that just shows black people attacking cops. And then you're like, fuck, what a fucked-up job that is, and how amazing they're doing, and how much self-restraint they have.
And now neither of these are giving me
statistical representations, right?
I just watched four videos.
Now, there's a million interactions,
or thousands of interactions in every city every day.
And there's all kinds of complexity in this.
And I didn't even watch what happened leading up to it.
But if I'm a black person living in some inner-city area, and I watch a few of those videos, and it profiles me and shows me more of them because I spend time on them, it fucking hurts. I'm just getting vicariously traumatized by watching someone that looks like me, that I resonate with, get fucking killed or beat up or whatever. What does that do?
But then the other guy, the guy in Texas who's watching the other one, the Blue Lives Matter one, is getting more and more patriotic, and he's getting vicariously traumatized by the way the cops are being wrongly attacked by the Black Lives Matter folks, the protesters, whoever it is. And that person is actually becoming more racist than they were, right? Becoming more scared or bigoted.
So what if they got the other videos? What if the algorithm was actually giving them the other content, and what if it was giving them the specific subsets of the other content that would be likely to actually touch something and appeal to something in them? So that, one, it wasn't just traumatizing them; and two, it might be giving them some insight into how to do something other than culture war, how to do something that could possibly bridge where there's trauma on different sides simultaneously.
So if you start to think about, let alone the science of yea or nay on ivermectin, right, or whatever, could we curate the right media forms, and the right distribution of the types of content, and exposure to the types of people, that would help this person be more trans-perspectival, more perspective-seeking, more perspective-taking, more holistic in their thoughts and insights?
And even insofar as someone is watching something and liking it, and it's being seen what they're watching, insofar as, like, status stuff is being hijacked, we could use it for the right thing, where people start to have status conferred by the amount of stuff they're looking at that's from different perspectives and educational in nature.
I think it's fascinating to start to think about. And still, some people will be scared hearing this, like, who the fuck thinks that they know what human development is, and what I should see, and is going to socially engineer me for their good idea? And they should be dubious of that. The thing is, you're being socially engineered right now, and you never aren't, right? So it's not socially engineered or not. It's: bring consciousness to the fact that it can't not happen, because we are all conditioned by the environments we're in, and then take responsibility to say, how do we actually do that intentionally and well?
What does that mean?
I mean, at the moment, I'm sure that there would be a way to make it worse, to make it more polarizing or more limbically hijacked, but it feels like there are a lot more ways to make it better than there are ways to make it worse.
It's interesting, the ways to make it worse; the search already oriented there. It's like, is it possible to make food that is much worse for you than Hostess and McDonald's? Well, only if it's not food at all, if it's just pure poison, right? But I couldn't really make anything that could even masquerade as food much worse, because there are not that many things that are food, and they already split-test optimized for the most addictive ones, the ones with the easiest palatability that drive the most addiction. Like, maybe they'll come out with a new type of Twinkie that's even more addictive, but they already did most of that search space, right? And they're making innovations, like, okay, can we make porn that is even more addictive? Yeah, VR, right? Oh, shit. But the thing is, hyper-normal stimuli that lead people to addictive behavior are just good for capitalism. If I run a business, I want to supply something people are addicted to.
Because the first rule of business, if I'm an MBA, is to maximize lifetime revenue of the customer, and I maximize lifetime revenue very well through addiction. But what that means is that there are lower and higher angels in my nature, and it's easier to make money off the lower angels of my nature than the higher ones. Which means the money will go into developing technologies that drive the lower angels of people's nature.
Which is why the underlying incentive system is one of the things we have to work on deeply.
Because whilst that's still in place, there are always going to be these particular individual agents here and there that will just take advantage of other people deciding to slow down.
If you can't coordinate effectively.
Yeah. I mean, the reason to have rule of law is to bind the predatory aspects of market incentive. To say, yes, I know you can make money by cutting down the national forest, but you know you're not allowed to. We actually do have a monopoly of violence with a police force, so if you try to take your goons in there and say, we're going to do it anyway, we'll actually come physically stop you. And yes, you can make money killing people and harvesting their organs and selling them, and no, you're not allowed to do that, right?
Like there's a bunch of things that are just bad, you shouldn't do that.
And so this is why we'd say no to a total free-for-all free market, because then what you end up getting is a few people who have all the money, and most people who have no money, and the people who have all of the money have relatively unchecked power, this kind of radical power asymmetry, to impose things that might totally suck for the will of all these other people. So the people say, we're going to pool our power into a kind of labor union of sorts, called the state. And the state is going to take our collective values and encode them as rule of law, right? Our values as the basis of jurisprudence, creating rule of law, with a government of people that are supposed to have no vested interest at all, because it's of, for, and by the people, and that is bequeathed the monopoly of violence, so the state's even more powerful than the people at the top of the market, more than the billionaires at the top of the power law distribution, so that the values of the people can check the otherwise radical asymmetries of market dynamics.
That's the idea, right?
It obviously breaks down, because what that means is that those at the top of the market have maximum incentive to corrupt the state, to capture it, to capture the regulators. And so you see someone who works at the FDA who used to work at Big Ag, or someone who works in the DOD who used to work at Lockheed, or whatever it is, and you're like, oh, that seems like an incentive problem. And then you see that GDP goes up when there's war, right, because we spend a lot of money on military manufacturing. And so the regulator that is supposed to regulate the market ends up getting captured by the market. And the state was supposed to be regulated by the people, right? A government of, for, and by the people, with high transparency, where the people saw what was going on, so that the representatives were really representing the will of the people, not representing their own private interests that were being paid for by some kind of vested interest. So the state can only check the market insofar as the people are checking the state. The people obviously are not checking the state at all, and neither the state nor the market forces are trying to support the people to do that. They're trying to get the people to believe that they can do that, by voting every four years or something, while having no real transparency or inside awareness.
but having no real transparency inside awareness. But yeah, so the thing about perverse incentive, you have to be able to say no, that way that
you can make money, and yes, you'll be able to say, hey, they want it, right?
I'm Facebook is like, we're just providing a service people want, right?
And that's what the drug dealer says when they're providing the drug to kids, they're paying
for it, right?
We're just providing a service they want.
Yeah, there are weaknesses in people that you can exploit, and then they'll want it,
you'll fuck up their lives.
We should not do that.
Right. That's not like authentic voluntaryism.
That's like exploiting people's weaknesses and fucking up their life because of bad incentives.
That's the thing we should not do.
We should provide goods and services that enrich the quality of people's lives, and not provide the particularly predatory ones.
So that's where you need a state or you need some kinds of forces to be able to identify
those and check them.
And this is where we need better collective sense making to be able to identify, oh,
these are perverse incentives.
Oh, we should actually make different kinds of laws and regulations around that.
Oh, our whole process of law and regulation is too slow for the rate at which tech moves.
How do we actually change the structure of...
But our governance system hasn't employed any of the new tech. Why is that? China's government is employing the new tech for an autocratic system. We're not employing it to make better open societies. Why not? Taiwan is starting to; we could. It's working there.
Yeah.
Have you ever read Seveneves by Neal Stephenson? It's a hard sci-fi book. The moon explodes in the first sentence, and the next two years are humanity trying to work out how they're going to survive, how they're going to get genetic progeny somewhere, right? And they decide to go with sort of a forked strategy: they send some people up to Izzy, the ISS, which then gets made huge, and lots of other stuff happens, and they send some under the water. Given the fact that you spend a lot of time thinking about existential risk: we don't have a second community on Mars or another planet or anything, so why haven't we created a siloed community somewhere that is totally self-sufficient, defended, air-gapped from the rest of the world, so that if anything was to happen, there is a contingency already in place like that?
Well, there kind of is, in terms of: do we have deep underground military bases for the continuity of governance or government? Yes, of course, right? Especially after World War II and during the Cold War, the idea was, if there's a nuclear attack, how do we have continuity of government? Let's make the bases do that, and have the self-sustaining resources to be able to do that. And then, of course, plenty of billionaires have their own bug-out bunkers for those scenarios. But why has the world not created its own breakaway civilization? Name something that the world has worked together to do at all, like, with coordination that isn't in the interests of those who are working on it. That's the deeper question there. Because if we could work together and make a breakaway civilization, why not just make this one much better?
Yeah.
The global coordination does seem to be the challenge, you're right. It's just something that interested me. I'm reading this book, looking at all of the challenges that occur when you have an imminent threat, but as anyone that's spent a bit of time learning about existential risk knows, the fact that you can't see the imminence of something doesn't mean that it isn't imminent. It could be around the fucking corner. You know, we didn't need much of a difference in some of the parameters of COVID to have made this a very imminent sort of danger. And I suggested this to Rob Reid last week, and he said that he could think of a way where this would almost be like conscription, in a way: you would do your time in the humanity 2.0 bunker or whatever; perhaps people would cycle in and cycle out for a couple of years at a time.
And it would be something that would be really prestigious, and people would be picked based on
genetic markers or attributes that they would want, and we would always have a siloed
civilization just there ready in case something was to happen.
To me, I'm aware that it's probably not going to be super fun, but also it might be fun. There are not many things you can do that not many other people have done. It seems like a relatively small cost for any country to do. Yeah, it's going to suck for some people, but it's a pretty small outlay.
I mean, so many tangents.
The University of Arizona's Biosphere 2 project was something in that direction, right? Can we make a closed, self-sustaining biosphere?
And it's hard.
And there are easier versions that are not quite as ambitious,
which all the big countries do have,
which is: if the world blows up, is there
somebody that still makes it? Well, all of the serious nuclear-powered submarines are that.
And they know that, right? When they go under, you could have full-scale
strategic thermonuclear war on the surface, and they're still doing their thing. And they
have the ability to do that thing for a while. They also have the ability to blow up a big portion of the world with the arsenal they're carrying
on them. So they're very interesting: the risk that they pose, and the psychological
experiment that nuclear-equipped submarines are, is actually very interesting. But that's also
a continuity-of-government military capacity thing:
okay, let's say a first strike happened; all of a sudden we've got these guys out there,
and they have the ability to respond independent of whatever else got blown up. So
partial experiments like that have happened. I think this is how Elon describes
part of his goal with Mars: that we can take a stand
somewhere other than Earth, asteroids and whatever,
and that it's an inspiring enough project to motivate us to think positively about the future
and do something interesting. I'll tell you what I really like about the Mars colony,
and what I like about the idea that you're saying,
even if it wasn't a Mars colony
(though I think the Mars colony is maybe the most popular version
of it right now, and also a kind of well-resourced one):
I like it as a thought experiment
for how to design civilization from scratch.
Because if I'm making a Mars colony,
of course, there are some differences there
versus here, like microgravity and cosmic rays and microbiomes, which are pretty serious
issues. But let's set those issues aside and just take all the other ones that an identical
Earth would have. I still have this issue of: how do I make a civilization that does not require imports and that doesn't mess itself up?
There's going to be very limited oxygen.
I can't have the inefficiencies of a bunch of unnecessary farm animals breathing the oxygen
if that's not a good way to produce nutrient
and caloric density. I can't have criminals who aren't contributing anymore
breathing the oxygen. So how do I create a social system that doesn't create
criminals, or that deals with criminality in a way other than putting people in prisons for a
long time? I can't just assume that we can get new shit easily.
Not only can I not produce waste or trash, I can't even produce micro-pollution:
volatile organic compounds from the epoxies or whatever in this space, in this very finite air supply.
I have to be able to make all of our own hardware and software and mining and everything, and all of our own biotech.
And if anything breaks, we have to be able to make the tools to fix it all, right in the same space.
Are we gonna use the same law that's retrofitted
from stuff back in, like, the 1200s,
or are we gonna redo law from scratch?
If so, are we gonna do it on a blockchain?
What is the basis?
Well, if we're gonna redo law,
what is the jurisprudence upon which it's based?
Well, what are the values upon which the whole thing's based?
What is the constitution?
What is the metaphysics that gives rise
to how we pick the Constitution?
If you really wanna think about a Mars colony,
you're thinking about everything.
You're thinking about the full tech stack,
the full suite of social technologies.
How are we doing education, and towards what?
What is the developed human being
that we're trying to develop humans towards?
And ultimately, what is the value system?
What is the metaphysics that we're basing the Constitution
and the educational theory
and everything on?
As we get clear on that because of that thought experiment,
then it's: well, could we rebuild the world here that way?
Could we build floating cities at sea,
where we don't have to produce our own oxygen
and deal with all of our own CO2?
We can still piggyback off this biosphere,
and we just have to ship things across the ocean,
not across the solar system,
for the imports that we do need.
Could we do ground-up civilizations that way? And, you know,
could some nation-states pick up those levels of design iteration? And even if we can't do it
ground-up, knowing what it would look like, can we say: now we know how to make
a 50-year plan to vector the current system in that direction? That's something. So the
actuality of making it on Mars actually interests me less than the thought
experiment of what the thing worth making would be, and then what it portends for us to get clear on that.
So the constraints of being somewhere like Mars mean that inefficiencies, whether they be metaphysical,
biological, technological, sociological or legal, all show up as potential flaws within the system.
And is it right to say that on Earth, because the externalities of getting these things
wrong have sufficient slippage, or they're opaque enough that we don't actually get to see
when they happen, we have surplus resources, or at least we feel like we have surplus
resources, that can kind of chew up some of these inefficiencies?
Yet when you start to bring those constraints in, in my super-siloed second world or whatever, or on Mars, that's when you actually get to see them in a harsher light.
Yeah, it's not that Mars has more constraints. It has more and less constraints in ways that lead to more innovative design.
It has more constraints in all the ways you just mentioned.
It has less constraints to be an iteration
of the previous things, to stay bound
to being intelligible to the previous things.
It would be very hard to try to make
a really fundamentally different law and culture
in England, because there's a very, very strong
tradition. There's a very strong basis of law. What would the basis
in current law be to bring it into being, and does current law even
make that kind of thing possible? Whereas in the Mars colony, it's like: we got here
first, fuck off, we're doing it, right? It's more that kind of thing.
So whether they get there or not, even the thought experiment of it has fewer constraints
about what it has to be or can be based on our past, and the constraints are more
just physics.
But those constraints are more obvious to us, because we don't have the huge buffer
of being able to pollute, and of more resources.
Daniel Schmachtenberger, ladies and gentlemen. What can people expect from you over the
next few months? Where should people go to check out interesting stuff that you're doing at the moment?
Well, the project that is still in just a very, very early beta phase, but that has most of my
attention, is called The Consilience Project.
I'm one of the members of that team, and so: ConsilienceProject.org. It's really working
right now, through a bunch of articles, and then the translation of those articles into
podcasts and then maybe animation and other forms of media, on helping people
to understand the problem space of the world better: the types of things we're talking
about, the relationship between sense-making, meaning-making and choice-making, imposed order
versus chaos, and how we have emergent order; to understand some of the things well enough
to be able to start to think about the innovations
that would actually make a difference at a fundamental level. How do we understand the metacrisis
well enough that we can design better, that we can employ the more powerful exponential physical
technologies to make better social technologies that are neither the catastrophes nor the dystopias?
And how do we make clear both the need for that
and the design criteria of what the solutions would look like
(not exactly the solutions,
but the design criteria of the solutions),
to be able to kind of drive
an innovation zeitgeist?
So both existing institutions can say:
wow, we need to reform ourselves,
and we need to reform ourselves in these ways;
and new independent groups, like blockchain governance
paradigms and whatever, can also innovate informed by these things. So that's where
much of the attention is. And then every once in a while I get invited by yourself to do a podcast
on interesting and fun and random topics. So I have a blog that is basically just a place where I put podcasts,
called civilizationemerging.com.
You can check that out.
Awesome.
That'll be linked in the show notes below.
Daniel, it's always a pleasure.
Thank you so much.
Thank you, my friend.
Thank you for having me.
It's good to be back with you.