Lex Fridman Podcast - #101 – Joscha Bach: Artificial Consciousness and the Nature of Reality
Episode Date: June 13, 2020

Joscha Bach is the VP of Research at the AI Foundation, previously doing research at MIT and Harvard. Joscha's work explores the workings of the human mind, intelligence, consciousness, life on Earth, and the possibly-simulated fabric of our universe.

Support this podcast by supporting these sponsors:
- ExpressVPN at https://www.expressvpn.com/lexpod
- Cash App – use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
03:14 - Reverse engineering Joscha Bach
10:38 - Nature of truth
18:47 - Original thinking
23:14 - Sentience vs intelligence
31:45 - Mind vs Reality
46:51 - Hard problem of consciousness
51:09 - Connection between the mind and the universe
56:29 - What is consciousness
1:02:32 - Language and concepts
1:09:02 - Meta-learning
1:16:35 - Spirit
1:18:10 - Our civilization may not exist for long
1:37:48 - Twitter and social media
1:44:52 - What systems of government might work well?
1:47:12 - The way out of self-destruction with AI
1:55:18 - AI simulating humans to understand its own nature
2:04:32 - Reinforcement learning
2:09:12 - Commonsense reasoning
2:15:47 - Would AGI need to have a body?
2:22:34 - Neuralink
2:27:01 - Reasoning at the scale of neurons and societies
2:37:16 - Role of emotion
2:48:03 - Happiness is a cookie that your brain bakes for itself
Transcript
The following is a conversation with Joscha Bach, VP of Research at the AI Foundation, with a history of research positions at MIT and Harvard.
Joscha is one of the most unique and brilliant people in the artificial intelligence community,
exploring the workings of the human mind, intelligence, consciousness, life on earth,
and the possibly simulated fabric of our universe.
I can see myself talking to Joscha many times in the future.
Quick summary of the ads.
Two sponsors.
ExpressVPN and CashApp.
Please consider supporting the podcast by signing up at expressvpn.com slash lexpod and downloading Cash App and using code LexPodcast.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman.
Since this comes up more often than I ever would have imagined, I challenge you to try to figure out how to
spell my last name without using the letter E. And it'll probably be the correct way.
As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation.
This show is sponsored by ExpressVPN. Get it at expressvpn.com slash lexpod to support this podcast and to get an extra three months free on a one-year package. I've been using ExpressVPN for many years. I love it.
I think ExpressVPN is the best VPN out there. They told me to say it, but I think it actually
happens to be true. It doesn't log your data, it's crazy fast, and it's easy to use.
Literally, just one big power on button.
Again, for obvious reasons, it's really important that they don't log your data.
It works on Linux and everywhere else too.
Shout out to my favorite flavor of Linux, Ubuntu MATE 20.04.
Once again, get it at ExpressVPN.com slash Lex Pod to support this podcast and
to get an extra three months free on a one year package. This show is presented by CashApp,
the number one finance app in the App Store. When you get it, use code Lex Podcast.
Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for taking a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use the code LexPodcast, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping advance robotics and STEM education for young people around the world.
And now, here's my conversation with Joscha Bach.
As you've said, you grew up in a forest in East Germany, just as we were talking about, off the grid, to two parents who were artists.
And now I think, at least to me, you've become one of the most unique thinkers in the AI
world.
So can we try to reverse engineer your mind a little bit? What were the key philosophers, scientists, ideas, maybe even movies, or just realizations that had an impact on you when you were growing up, that kind of led to the trajectory, or were the key sort of crossroads in the trajectory of your intellectual development?
My father came from a long tradition of architects, a distant branch of the Bach family,
and so basically he was technically a nerd,
and nerds need to interface with society in non-standard ways.
Sometimes I define a nerd as somebody who thinks
that the purpose of communication is to submit your ideas to peer review. And normal people
understand that the primary purpose of communication is to negotiate alignment. And these purposes
tend to conflict, which means that nerds have to learn how to interact with society at large.
Who is the reviewer in the nerd's view of communication?
Everybody who will be considered to be a peer.
So whatever hapless individual is around,
well, you would try to make him or her the gift of information.
OK.
So, by the way, maybe my research misinformed me, but was he an architect or an artist?
So he did study architecture, but basically my grandfather made the wrong decision.
He married an aristocrat and was drawn into the war and he came back after 15 years.
So basically my father was not parented by a nerd, but by somebody who tried to tell him what to do and expected him to do what he was told. And he was unable to. He's unable to do things if he's not intrinsically motivated.
So in some sense, my grandmother broke her son, and her son responded, when he became an architect, by becoming an artist. So he built Hundertwasser architecture; he built houses without right angles.
He built lots of things that didn't work in the more brutalist traditions of Eastern
Germany.
And so he bought an old water mill and moved out to the countryside and did only what
he wanted to do, which was art.
Eastern Germany was perfect for bohème, because you had complete material safety. Food was heavily subsidized, healthcare was free, you didn't have to worry about rent or pensions or anything. So it was the socialist, communist side of the country.
And the other thing is it was almost impossible not to be in political disagreement with your government,
which is very productive for artists. So everything that you do is intrinsically meaningful,
because it will always touch on the deeper currents of society,
of culture, and be in conflict with it, in tension with it, and you will always have to define yourself with respect to this.
So what impact did your father, this outside-of-the-box thinker, against the government, against the world, artist, have on you?
He was actually not a thinker. He was somebody who only got self-aware to the degree
that he needed to make himself functional.
It was also the late 1960s,
and he was in some sense a hippie.
So he became a one person cult.
He lived out there in his kingdom.
He built big sculpture gardens and started many avenues of art and so on
and convinced a woman to live with him. She was also an architect and she adored him and
decided to share her life with him. I basically grew up in a big cave full of books. I was almost feral, and I was bored out there. It was very, very beautiful, very quiet, and quite lonely.
So I started to read, and by the time I came to school, I had read everything until fourth grade and then some.
And there was not a real way for me to relate to the outside world. And I couldn't quite put my finger on why, and today I know it was because I was a nerd, obviously, and I was the only nerd around. So there were no other kids like me, and there was nobody interested in physics or computing or mathematics and so on. And this village school that I went to was basically a nice school. Kids were nice to me, I was not beaten up, but I also didn't make many friends or build deep relationships. That only happened starting from ninth grade, when I went to a school for mathematics and physics.
Do you remember any key books from that time?
Yes, I basically read everything. So I went to the library and I worked my way through the children's and young adult sections, and then I read a lot of science fiction. For instance, Stanisław Lem, basically the great author of cybernetics, has influenced me. Back then I didn't see him as a big influence, because everything that he wrote seemed to be so natural to me, and only later could I contrast it with what other people wrote. Another thing that was very influential on me were the classical philosophers, and also the literature of Romanticism, so German poetry and arts, Droste-Hülshoff and Heine, and up to Hesse and so on.
Hesse, I love Hesse. So at which point do the classical philosophers end? At this point, in the 21st century, what's the latest classical philosopher? Does this stretch through even as far as Nietzsche, or are we talking about Plato and Aristotle?
I think that Nietzsche is the classical equivalent of a shit poster.
So he's very smart.
It's easy to read.
But he's not so much trolling others. He's trolling himself, because he was at odds with the world. Largely, his romantic relationships didn't work out. He got angry, and he basically became a nihilist.
And isn't that a beautiful way to be as an intellectual, to constantly be trolling yourself, to be in that conflict, in that tension?
I think it's like self-awareness.
At some point, you have to understand the comedy
of your own situation.
If you take yourself seriously,
and you are not functional, it ends in tragedy
as it did for Nietzsche.
So you think he took himself too seriously, in that tension?
And you find the same thing in Hesse and so on.
The Steppenwolf syndrome is classic adolescence, where you basically feel misunderstood by the world, and you don't understand that all the misunderstandings are the result of your own lack of self-awareness, because you think that you are a prototypical human, and the others around you should behave the same way as you expect them to based on your innate instincts, and it doesn't work out. And you become a transcendentalist to deal with that. So it's very, very understandable, and I have great sympathy for it, to the degree that I can have sympathy for my own intellectual history. But you have to grow out of it.
So is an intellectual life well lived, a journey well traveled, one where you don't take yourself seriously?
No, I think that you should neither take yourself seriously nor not seriously, because you need to become unimportant as a subject. That is, if you are a philosopher, belief is not a verb. You don't do this for the audience, and you don't do it for yourself. You have to submit to the things that are possibly true, and you have to follow wherever your inquiry leads, but it's not about you. It has nothing to do with you.
So do you think, then, about people like Ayn Rand, who believed in this sort of idea that there's objective truth? What's your sense? In the philosophical sense, if you remove yourself, the subjective, from the picture, do you think it's possible to actually discover ideas that are true? Or are we just in a mesh of relative concepts that are neither true nor false, and it's just a giant mess?
You cannot define objective truth without understanding the nature of truth in the first place. So what does the brain mean by saying that it discovers something as truth? For instance, a model can be predictive or not predictive. Then there can be a sense in which a mathematical statement can be true, because it's defined as true under certain conditions. So it's basically a particular state that a variable can have in a symbol game. And then you can have a correspondence between systems and talk about truth, which is again a type of model correspondence.
And there also seems to be a particular kind of ground truth.
So for instance, you're confronted with the enormity of something existing at all, right?
It's stunning when you realize something exists rather than nothing.
And this seems to be true, right?
There's an absolute truth in the fact that something seems to be happening.
Yeah, that to me is a showstopper. I could just think about that idea and be amazed by that idea for the rest of my life and not go any further, because I don't even know the answer to that. Why does anything exist at all?
Well, the easiest answer is that existence is the default, right? So this is the lowest number of bits that you would need to encode this.
The simplest answer.
The simplest answer: that existence is the default.
What about non-existence? I mean, that seems...
Non-existence might not be a meaningful notion in this sense.
So, in some sense, if everything that can exist exists,
for something to exist, it probably needs to be implementable.
The only thing that can be implemented is finite automata.
So maybe the whole of existence is a superposition of all finite automata.
And we are in some region of the fractal that has the properties that it can contain us.
What does it mean to be a superposition of finite automata? Like, a superposition of all possible rules?
Imagine that every automaton is basically an operator that acts on some substrate, and as a result, you get emergent patterns.
What's the substrate?
Something that can store information, something that can hold state.
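To make that picture concrete, here is a minimal sketch, not from the conversation itself: an elementary cellular automaton in Python, where a tiny rule (the automaton as an operator) acts on a row of cells (a substrate that holds state) and emergent patterns appear. The rule number, grid size, and step count are arbitrary illustrative choices.

```python
# Illustrative sketch: an automaton as an operator acting on a stateful substrate.
# Rule 110, grid size, and step count are arbitrary choices for this example.

def step(cells, rule=110):
    """Apply an elementary cellular automaton rule to one row of cells."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the three-cell neighborhood, wrapping around at the edges.
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        out.append((rule >> index) & 1)  # look up the rule's output bit
    return out

# The substrate: a row of cells that can hold state, with one live cell.
cells = [0] * 64
cells[32] = 1
for _ in range(24):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
# Complex, emergent patterns appear even though the operator itself is tiny.
```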
Still doesn't make sense to me why anything exists at all. I could just sit there with a beer or a vodka and just enjoy pondering the why.
It may not have a why. That might be the wrong direction to ask into this.
So there could be no answer in the why direction, without asking for a purpose or a cause?
It doesn't mean that everything has to have a purpose or a cause, right?
So we mentioned some philosophers earlier; just taking a brief step back into that. Okay, so we asked ourselves when classical philosophy ended.
I think for Germany, it largely ended with the first revolution. That was when we ended the monarchy and started a democracy. And at this point, we basically came up with a new form of government that didn't have a good sense of this new organism that society wanted to be, and in a way it decapitated the universities. So the universities went on into modernism like a headless chicken.
At the same time, democracy failed in Germany and we got fascism as a result. And it burned down things, in a similar way as Stalinism burned down intellectual traditions in Russia. And both Germanys have not recovered from this. Eastern Germany had this vulgar dialectical materialism, and Western Germany didn't get much more edgy than Habermas. So in some sense, both countries lost their intellectual traditions, and killing off and driving out the Jews didn't help.
Yeah, so that was the end of really rigorous, what you would say is classical philosophy.
There's also this thing that, in some sense, the low-hanging fruits in philosophy were mostly reaped. And the last big thing that we discovered was the constructivist turn in mathematics. To understand that the parts of mathematics that work are computation was a very significant discovery in the first half of the 20th century, and it hasn't fully permeated philosophy and even physics yet. So physicists checked out the code libraries from mathematics before the constructivist turn became universal.
What's constructivism? Are you referring to Gödel's incompleteness theorem and those kinds of ideas?
So basically, Gödel himself, I think, didn't get it yet. Hilbert could get it. Hilbert saw that, for instance, Cantor's set-theoretic experiments in mathematics led into contradictions. And he noticed that, with the current semantics, we cannot build a computer in mathematics that runs mathematics without crashing. And Gödel could prove this. What Gödel could show is that, using classical mathematical semantics, you run into contradictions. And because Gödel strongly believed in these semantics, more than in what he could observe and so on, he was shocked. It basically shook his world to the core, because in some sense he felt that the world has to be implemented in classical mathematics.
And for Turing, it wasn't quite so bad.
I think that Turing could see that the solution is to understand that mathematics was computation all along, which means, for instance, pi in classical mathematics is a value. It's also a function, but it's the same thing. And in computation, a function is only a value when you can compute it. And if you cannot compute the last digit of pi, you only have a function. You can plug this function into your local sun, let it run until the sun burns out; this is the last digit of pi you will know. But it also means that there can be no process in the physical universe, or in any physically realized computer, that depends on having known the last digit of pi.
Yes.
Which means there are parts of physics that are defined in such a way that they cannot strictly be true, because assuming that they could be true leads into contradictions. So I think putting computation at the center of the worldview is actually the right way to think about it.
Yes.
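A minimal sketch of this point about pi, not from the conversation: in a computational view, pi is a process that yields digits for as long as you keep running it, never a finished value with a last digit. The sketch below uses Gibbons' streaming spigot algorithm.

```python
# Illustrative sketch: pi as a function you run, not a value you hold.
# This is Gibbons' unbounded spigot algorithm for the decimal digits of pi.

def pi_digits():
    """Yield decimal digits of pi one at a time, forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now certain; emit it
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # Not enough information yet; consume one more term of the series.
            q, r, t, k, n, l = (
                q * k, (2 * q + r) * l, t * l, k + 1,
                (q * (7 * k + 2) + r * l) // (t * l), l + 2,
            )

from itertools import islice
print("".join(str(d) for d in islice(pi_digits(), 10)))  # prints 3141592653
# You could let this run until the sun burns out; whatever digit it last
# printed is the "last digit of pi" you will ever know.
```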
And Wittgenstein could see it.
And Wittgenstein basically preempted the logicist program of AI that Minsky started later, like 30 years later.
Turing was actually a pupil of Wittgenstein.
Really? I didn't know there was any connection between Turing and Wittgenstein.
Wittgenstein even canceled some classes when Turing was not present, because he thought it was not worth spending the time on the other students.
If you read the Tractatus, it's a very beautiful book, basically one long thought on 75 pages. It's very atypical for philosophy, because it doesn't have arguments in it, and it doesn't have references in it. It's just one thought that is not intending to convince anybody. He says it's mostly for people that had the same insight as him; he just spells it out. And this insight is: there is a way in which mathematics and philosophy ought to meet.
Mathematics tries to understand the domain of all languages
by starting with those that are so formalizable
that you can prove all the properties of the statements
that you make.
But the price that you pay is that your language is very, very simple, so it's very hard to say something meaningful in mathematics.
Yes.
And it looks complicated to people, but it's far less complicated than what our brain is casually doing all the time when it makes sense of reality.
That's right.
And philosophy is coming from the top.
So it's mostly starting from natural languages,
which vaguely define concepts.
And the hope is that mathematics and philosophy
can meet at some point.
And Wittgenstein was trying to make them meet, and he already understood that, for instance, you could express everything with the NAND calculus, that you could reduce the entire logic to NAND gates, as we do in our modern computers. So in some sense, he already understood Turing universality before Turing spelled it out.
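As a small aside illustrating that claim about NAND, here is a sketch, not from the conversation, showing that the usual logical connectives can all be built from NAND alone; the helper names are just for this example.

```python
# Illustrative sketch: reducing logic to NAND gates.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))
def XOR(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Verify against Python's built-in Boolean operators over all inputs.
for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert XOR(a, b) == (a != b)
```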
I think when he wrote the Tractatus, he didn't understand yet that the idea was so important and significant. And I suspect that when Turing wrote it out, nobody cared that much. Turing was not that famous when he lived. It was mostly his work in decrypting the German codes that made him famous and gave him some notoriety. But the status that he has in computer science right now and in AI is something that I think he only acquired later.
That's kind of interesting.
Do you think of computation and computer science, and you kind of represent that to me, as maybe the modern day of it? You, in a sense, are the new philosopher, the computer scientist who dares to ask the bigger questions that philosophy originally started with.
Certainly not me, I think. I'm mostly still this child that grows up in a very beautiful valley and looks at the world from the outside and tries to understand what's going on.
And my teachers tell me things and they largely don't make sense.
So I have to make my own models.
I have to discover the foundations of what the others are saying.
I have to try to fix them.
To be charitable, I try to understand what they must have originally thought, or what their teachers or their teachers' teachers must
have thought until everything got lost in translation and how to make sense of the
reality that we are in. And whenever I have an original idea, I'm usually late to the
party by say 400 years. And the only thing that's good is that the parties get smaller
and smaller the older I get and the more I explore.
The party gets smaller and more exclusive.
It seems like one of the key qualities of your upbringing was that you were not tethered, whether it's because of your parents or in general, maybe within your mind, some genetic material, you were not tethered to the ideas of the general populace, which is actually a unique property. You know, the education system, and just existing in this world, forces certain sets of ideas onto you. Can you disentangle that? Why are you not so tethered? Even in your work today, you seem to not care about, perhaps, a best paper at NeurIPS, right? Being tethered to particular things that people today, in this year, seem to value as a thing you put on your CV and resume. You're a little bit more outside of that world, outside of the world of ideas that people are especially focusing on, the benchmarks of today. Can you disentangle that? Because I think that's inspiring. And if there were more people like that, we might be able to solve some of the bigger problems that AI dreams to solve.
And there's a big danger in this, because in a way you are expected to marry into an intellectual tradition and work within this tradition in a particular school.
If everybody comes up with their own paradigms, the whole thing is not cumulative as an enterprise,
right?
So, in some sense, you need a healthy balance: you need paradigmatic thinkers, and you need people that work within given paradigms.
Basically, scientists today define themselves largely by methods. And it's almost a disease that we think the scientist is somebody who was convinced by their guidance counselor that they should join a particular discipline, and then they find a good mentor to learn the right methods, and then they are lucky enough and privileged enough to join the right team, and then their name will show up on influential papers.
But we also see that there are diminishing returns
with this approach.
And when our field, computer science and AI started,
most of the people that joined this field
had interesting opinions.
And today's thinkers in AI either don't have interesting opinions at all, or these opinions are inconsequential for what they're actually doing, because what they're doing is applying the state-of-the-art methods with a small epsilon.
And this is often a good idea if you think that this is the best way to make progress.
And for me, it's first of all very boring. If somebody else can do it, why should I do it?
Right.
If the current methods of machine learning lead to a strong AI, why should I be doing it? I will just wait until they're done, and wait on the beach, or read interesting books, or write some and have fun.
But if you don't think that we are currently doing
the right thing, if we are missing some perspectives,
then it's required to think outside of the box.
It's also required to understand the boxes: it's necessary to understand what worked and what didn't work, and for what reasons. So you have to be willing to ask new questions and design new methods whenever you want to answer them. And you have to be willing to dismiss the existing methods if you think that they're not going to yield the right answers. It's very bad career advice to do that.
So maybe to briefly stay, for one more time, in the early days: when, would you say, before we dive into the discussions that we just almost started, was the dream to understand or maybe to create human-level intelligence born for you?
I think that you can see AI largely today as advanced information processing. If you would change the acronym of AI into that, most people in the field would be happy. It would not change anything about what they're doing. We're automating statistics, and many of the statistical models are more advanced than what statisticians had in the past. And it's pretty good work; it's very productive. And the other aspect of AI is a philosophical project. And this philosophical project is very risky, and very few people are on it, and it's not clear if it succeeds.
So first of all, you keep throwing out a lot of really interesting ideas, and I have to pick which ones we go with. But first of all, you use the term information processing, just information processing, as if it's mere, as if it's the muck of existence, and maybe it's the epitome of the entire universe. Maybe consciousness, maybe intelligence, is information processing. So maybe you can comment on whether advanced information processing is a limiting kind of realm of ideas. And then the other one is: what do you mean by the philosophical project?
So I suspect that general intelligence is the result of trying to solve general problems. So intelligence, I think, is the ability to model.
It's not necessarily goal-directed rationality or something; many intelligent people are bad at this,
but it's the ability to be presented
with a number of patterns and see a structure in those patterns
and be able to predict the next set of patterns,
to make sense of things.
And some problems are very general.
Usually intelligence serves control,
so you make these models for a particular purpose
of interacting as an agent with the world
and getting certain results.
But intelligence itself is in this sense instrumental to something.
But by itself, it's just the ability to make models.
And some of the problems are so general that the system that makes them needs to
understand what itself is and how it relates to the environment.
So as a child, for instance, you notice you do certain things despite perceiving yourself as wanting different things.
So you become aware of your own psychology.
You become aware of the fact that you have complex structure in yourself,
and you need to model yourself to reverse engineer yourself,
to be able to predict how you will react to certain situations,
and how you deal with yourself in relationship to your environment.
And this process, this project of reverse engineering yourself, your relationship to reality, and the nature of the universe that you are in, can continue. If you go all the way, this is basically the project of AI, or you could say the project of AI is a very important component in it. The Turing test, in a way, is: you ask a system, what is
intelligence? If that system is able to explain what it is, how it works,
then you should assign it the property of being intelligent in this general sense.
So the test that Turing was administering, in a way, and it's not that he couldn't see it, but he didn't express it yet in the original 1950 paper, is that he was trying to find out whether he himself was generally intelligent. Because in order to take this test, the rub is, of course, that you need to be able to understand what that system is saying.
And we don't yet know if we can build an AI.
We don't yet know if we are generally intelligent.
Basically, you win the Turing test by building an AI.
Yes.
So in a sense, hidden within the Turing test
is a kind of recursive test.
Yes, it's a test on us. Yeah.
The Turing test is basically a test of the conjecture of whether people are intelligent enough to understand themselves.
Okay.
But you also mentioned a little bit of self-awareness and the project of AI. Do you think this kind of emergent self-awareness is one of the fundamental aspects of intelligence? So, as opposed to goal-oriented, as you said, kind of puzzle solving, is it coming to grips with the idea that you're an agent in the world?
You can find that many highly intelligent people are not very self-aware, right? So self-awareness and intelligence are not the same thing. And you can also be self-aware, if you have good priors, without being especially intelligent. So you don't need to be very good at solving puzzles if the system that you are already implements the solution.
So you kind of mentioned children, right? Is the fundamental project of AI to create a learning system that's able to exist in the world? You kind of drew a difference between self-awareness and intelligence, and yet you said that self-awareness seems to be important for children.
So I call this ability to make sense of the world and your own place in it, to make you able to understand what you're doing in this world, sentience. And I would distinguish sentience from intelligence, because sentience is possessing certain classes of models, and intelligence is a way to get to these models if you don't already have them.
I see. So can you maybe pause a bit and try to answer the question that we just said we may not be able to answer, and it might be a recursive meta question: what is intelligence?
I think that intelligence is the ability to make models.
So, models. I think it's useful as an example: very popular now, neural networks form representations of large-scale data sets; they form models of those data sets. When you say models, and look at today's neural networks, what is the difference in how you're thinking about what is intelligent, in saying that intelligence is the process of making models?
There are two aspects to this question. One is: is the representation adequate for the domain that we want to represent? And the other one is: is the type of model that you arrive at adequate? So basically, are you modeling the correct domain? And I think in both of these cases, modern AI is lacking still. And I'm not saying anything new here; I'm not criticizing the field. Most of the people that designed our paradigms are aware of that. And so one
aspect that we're missing is unified learning. When we learn, we at some point discover that everything that we sense
is part of the same object, which means we learn it all into one model and we call this model
the universe. So our experience of the world that we are embedded in is not a secret direct wire to physical reality. Physical reality is a weird quantum graph that we can never experience or get access to. But it has the property that it can create certain patterns at our systemic interface to the world. And we make sense of these patterns and the relationship
between the patterns that we discover is what we call the physical universe. So at some
point in our development as a nervous system, we discover that everything that we relate
to in the world can be mapped to a region in the same three-dimensional
space by and large.
Now, in physics, that is not quite true. It's not actually three-dimensional, but the world that we are entangled with, at the level at which we are entangled with it, is largely a flat three-dimensional space. So this is the model that our brain is intuitively making. And this is, I think, what gave rise to this intuition of res extensa, of this material world, this material domain. It's one of the mental domains, but it's just the class of all models that relate to this environment, this three-dimensional physics engine in which we are embedded.
A physics engine in which we're embedded. I love that.
Right. Let's slowly pause on that. So the quantum graph, I think you called it, which is the real world, which you could never get access to: there's a bunch of questions I want to ask to sort of disentangle that. But maybe one useful one, from one of your recent talks I looked at: can you just describe the basics?
Can you talk about what is dualism, what is idealism, what is materialism, what
is functionalism, and what connects with you most.
In terms of, because you just mentioned, there's a reality we don't have access to.
Okay.
What does that even mean?
And why don't we get access to it?
Are we part of that reality?
Why can't we access it?
So the particular trajectory that mostly exists in the West is the result of our indoctrination by a cult for 2,000 years.
A cult? Which one?
Yes, the Catholic cult, mostly. And for better or worse, it has created or defined many of the modes of interaction that we have, that have created this society, but it has also in some sense scarred our rationality. And the intuition that exists, if you would translate the mythology of the Catholic Church into the modern world, is that the world in which you and me interact is something like
a multiplayer role-playing adventure.
And the money and the objects that we have in this world, this is all not real.
Or Eastern philosophers would say it's maya. It's just stuff that appears to be meaningful, and this embedding in this meaning, when people live in it, is samsara. It's basically the identification with the needs of the mundane, secular, everyday existence.
And the Catholics also introduced a notion
of higher meaning, the sacred.
And this existed before, but eventually the natural shape of God is the platonic form
of the civilization that you are part of.
It's basically the super organism that is formed by the individuals as an intentional agent.
And basically the Catholics used a relatively crude methodology to implement software on the minds of people and get the software synchronized, to make them walk in lockstep, to basically get this God online and to make it efficient and effective. And I think God, technically, is just a self that spans multiple brains, as opposed to your self and my self, which mostly exist just on one brain, right? And so in some sense, you can construct a self functionally, as a function that is implemented by brains, that exists across brains. And this is a god in this sense.
That's one of the things, if you look at Yuval Harari kind of talking about this, it's one of the nice features of our brains. It seems that we can all download the same piece of software, like God in this case, and kind of share it.
Yeah, so basically you give everybody a spec, and the mathematical constraints that are intrinsic to information processing make sure that, given the same spec, you come up with a compatible structure.
Okay, so there's the space of ideas that we all share, and we think that's kind of the mind. But that's separate from, the idea from Christianity, from religion, is that there's a separate thing beyond the mind. There is a real world, and this real world is the world in which God exists. God is the coder of the multiplayer adventure, so to speak, and we are all players in this game. And that's dualism?
You would say yes, but the dualism aspect is that the mental realm exists in a different implementation than the physical realm.
And the mental realm is real. And a lot of people have this intuition that there is this real room in which you and me talk and speak right now; then comes a layer of physics and abstract rules and so on; and then comes another real room where our souls are, and our true form is the thing that gives us phenomenal experience. And this is, of course, a very confused notion that you would get. And it's basically the result of connecting materialism and idealism in the wrong way.
So, okay, I apologize, but I think it's really helpful if we just try to define terms. What is dualism? What is idealism? What is materialism? For people who don't know.
So the idea of dualism in our cultural tradition is that there are two substances: a mental substance and a physical substance, and they interact by different rules. And the physical world is basically causally closed and built on low-level causal structures. So there's basically a bottom level that is causally closed, that's entirely mechanical, and mechanical in the widest sense, so it's computational. There's basically a physical world in which information flows around, and physics describes the laws of how information flows around in this world.
Would you compare it to a computer, where you have hardware and software?
The computer is a generalization of information flowing around. Basically, what Turing discovered is that there is a universal principle: you can define this universal machine that is able to perform all the computations. So all these machines have the same power. This means that you can always define a translation between them, as long as they have unlimited memory, to be able to perform each other's computations.
So would you then say that materialism is: this whole world is just the hardware, and idealism is: this whole world is just the software?
Not quite. I think that most idealists don't have a notion of software yet, because software also comes down to information processing. So what you notice is that the only thing that is real to you and me is this
experiential world in which things matter, in which things have taste, in which things
have color, phenomenal content, and so on.
You're bringing up consciousness, okay.
And this is distinct from the physical world, in which things have values only in an abstract sense, and you only look at cold patterns moving around. So how does anything feel like something? And this connection between the two things is very puzzling to a lot of people, and of course to many philosophers.
So idealism starts out with the notion that mind is primary; materialism thinks that matter is primary. And so for the idealist, the material patterns that we see playing out are part of the dream that the mind is dreaming, and we exist in a mind on a higher plane of existence, if you want. And for the materialist, there is only this material thing, and it generates some models, and we are the result of these models. In some sense, if you understand it properly, materialism and idealism are not a dichotomy, but two different aspects of the same thing.
So the weird thing is we don't exist in the physical world.
We do exist inside of a story that the brain tells itself.
Okay, let my information processing take that in. We don't exist in the physical world; we exist in the narrative?
Basically, a brain cannot feel anything. A neuron cannot feel anything. They are physical things, and physical systems are unable to experience anything. But it would be very useful for the brain or for the organism to know what it would be like to be a person and to feel something. So the brain creates a simulacrum of such a person that it uses to model the interactions of the person. It's the best model of what that brain, this organism, thinks it is in relationship to its environment. So it creates that model. It's a story, a multimedia novel that the brain is continuously writing and updating.
But you also kind of said that we kind of exist in that story.
In that story, what is real, in any of this? Again, with these terms, you kind of said there's a quantum graph. What is this whole thing running on, then, the story? And is it completely, fundamentally impossible to get access to it? Because isn't the brain something existing in some kind of context?
So what we can do as computer scientists is engineer systems, and test our theories this way, systems that might have the necessary and sufficient properties to produce the phenomena that we are observing, which is a self in a virtual world that is generated in somebody's neocortex that is contained in the skull of this primate here.
And when I point at this, this indexicality is of course wrong.
But I do create something that is likely to give rise
to patterns on your retina that allow you to interpret
what I'm saying, right?
But we both know that the world that you and me
are seeing is not the real physical world.
What we are seeing is a virtual reality generated
in your brain to explain the patterns on your retina.
How close is it to the real world? That's kind of the question. There are people like Donald Hoffman who say that you're really far away, that the thing we're seeing, you and I now, that interface we have, is very far away from anything like the real world; we don't even have anything close to a sense of what the real world is. Or is it a very surface piece of architecture?
Imagine you look at the Mandelbrot fractal, this famous thing that Benoit Mandelbrot discovered. You see an overall shape in there, but you know that if you truly understand it, you know it's two lines of code. It's basically a series that is being tested for every point in the complex number plane. And for those points where the series stays bounded, you paint them black, and where it's diverging, you don't. And you get the intermediate colors by taking how fast it diverges.
This gives you this shape of this fractal.
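For readers who want to see this "two lines of code" point concretely, here is a minimal sketch, not from the conversation; the grid size, iteration cap, and character palette are arbitrary choices.

```python
# Illustrative sketch: the Mandelbrot fractal as a tiny escape-time test.
# For each point c in the complex plane, iterate z -> z*z + c and record
# how quickly (if ever) the series diverges.

def escape_time(c: complex, max_iter: int = 50) -> int:
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # beyond this radius the series provably diverges
            return i         # "intermediate colors": how fast it escaped
    return max_iter          # never escaped: treated as inside, painted dark

for y in range(24):
    row = ""
    for x in range(64):
        c = complex(-2.0 + 2.6 * x / 64, -1.2 + 2.4 * y / 24)
        row += " .:-=+*#%@"[min(escape_time(c) // 5, 9)]
    print(row)
```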
But imagine you live inside of this fractal
and you don't have access to where you are in the fractal
or you have not discovered the generator function even.
Right, so what you see is all I can see right now
is this spiral and this spiral moves a little bit
to the right.
Is this an accurate model of reality?
Yes, it is, right?
It is an adequate description.
You know that there is actually no spiral in the Mandelbrot fractal. It only appears like this to an observer that is interpreting things as a two-dimensional space and then defines certain regularities in there at a certain scale that they currently observe. Because if you zoom in, the spiral might disappear and turn out to be something different at a different resolution, right?
Yes.
So at this level, you have this spiral, and then you discover this spiral moves to the right, and at some point it disappears.
So you have a singularity.
At this point, your model is no longer valid.
You cannot predict what happens beyond the singularity.
But you can observe again and you will see it hit another spiral and at this point it
disappears.
So maybe you now have a second-order law. And if you make 30 layers of these laws, then you have a description of the world that is similar to the one that we come up with when we describe the reality around us. It's reasonably predictive, but it does not cut to the core of it. It does not explain how it's being generated and how it actually works. But it's relatively good at explaining the universe that we're entangled with.
But you don't think the tools of computer science or the tools of physics could step outside, see the whole drawing, and get at the basic mechanism of how the pattern, the spirals, is generated?
Imagine you would find yourself embedded into a Mandelbrot fractal, and you try to figure out how it works. And you have, somehow, a Turing machine with enough memory to think. And as a result, you come to this idea: it must be some kind of automaton. And maybe you just enumerate all the possible automata until you get to the one that produces your reality. So you can identify necessary and sufficient conditions. For instance, we discover that mathematics itself is the domain of all
languages. And then we see that most of the domains of mathematics that we have discovered
are in some
sense describing the same fractals.
This is what category theory is obsessed about, that you can map these different domains
to each other.
So they're not that many fractals.
And some of these have interesting structure and symmetry breaks.
And so you can discover what region of this global fractal you might be embedded in from
first principles.
But the only way you can get there is from first principles. So basically, your understanding of the universe has to start with automata, and then number theory, and then spaces, and so on.
Yeah, I think Stephen Wolfram still dreams that he'll be able to arrive at the fundamental rules of the cellular automata, or the generalization of which, that is behind our universe.
You've said on this topic, you said in a recent conversation that quote,
some people think that a simulation can't be conscious and only a physical system can,
but they got it completely backward: a physical system cannot be conscious, only a simulation can. Consciousness is a simulated property that simulates itself.
Yeah. Just like you said, the mind is kind of the thing that produces the simulation?
It's the software that is implemented by your brain. And the mind is creating both the universe that we are in and the self, the idea of a person that is on the other side of attention and is embedded in this world.
Why is that important?
That idea of a self?
Why is that important feature in the simulation?
It's basically a result of the purpose that the mind has.
It's a tool for modeling, right?
We are not actually monkeys.
We are side effects of the regulation needs of monkeys.
And what the monkey has to regulate is the relationship of an organism to an outside world that is in large part also consisting of other organisms.
And as a result, it basically has regulation targets
that it tries to get to.
These regulation targets start with priors.
They're basically like unconditional reflexes
that we are more or less born with.
And then we can reverse engineer them to make them more consistent.
And then we get more detailed models about how the world works
and how to interact with it.
And so these priors that you commit to are largely target
values that our needs should approach, set points.
And the deviation from the set point creates some urge, some tension.
And we find ourselves living inside of feedback loops,
right?
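A minimal sketch of this regulation picture, not from the conversation; the numbers and the temperature framing are arbitrary illustrative choices.

```python
# Illustrative sketch: a set point, a deviation that creates an "urge",
# and a feedback loop that acts to reduce it.

def regulate(state: float, set_point: float, gain: float = 0.3, steps: int = 20):
    """Simple proportional feedback toward a regulation target."""
    for t in range(steps):
        urge = set_point - state   # tension: deviation from the set point
        state += gain * urge       # act so as to reduce the tension
        print(f"step {t:2d}  state={state:6.3f}  urge={urge:+.3f}")
    return state

# Example: an organism regulating some need (say, temperature) toward 37.0.
regulate(state=30.0, set_point=37.0)
```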
Consciousness emerges over dimensions of disagreements
with the universe.
Things where you care, where things are not the way they should be, where you need to regulate. And so in some sense, the self is the result of all the identifications that you're having. And an identification is a regulation target that you're committing to. It's a dimension that you care about, that you think is important. And this is also what locks you in. If you let go of these commitments, of these identifications, you get free. There's nothing that you have to do anymore. And if you let go of all of them, you're completely free, and you can enter Nirvana, because you're done.
And actually, this is a good time to pause and say thank you to a friend of mine, Gustav Söderström, who introduced me to your work. I want to give him a shout out. He's a brilliant guy. And I think
the AI community is actually quite
amazing and Gustav is a good representative of that, you are as well. So I'm glad, first
of all, I'm glad the internet exists, YouTube exists where I can watch your talks and then
get to your book and study your writing and think about it. You know, that's amazing. Okay. But you've kind of described this emerging phenomenon of consciousness from the simulation. So what about the hard problem of consciousness? Can you just linger on it? Why does it still feel like something? I understand you're saying the self is an important part of the simulation, but why does the simulation feel like something?
So if you look at a book by, say, George R.R. Martin, where the characters have plausible psychology, and they stand on a hill because they want to conquer the city below the hill, and they look at the color of the sky and they are apprehensive and feel empowered and all these things: why do they have these emotions? It's because it's written into the story. It's written into the story because it's an adequate model of the person that predicts what they're going to do next. And the same thing is true for us. So it's basically a story that our brain is writing. It's not written in words. It's written in perceptual content, basically multimedia content.
And it's a model of what the person would feel if it existed.
So it's a virtual person.
And you and me happen to be this virtual person.
So this virtual person gets access to the language center and talks about the sky being
blue.
And this is us.
But hold on a second.
Do I exist in your simulation?
You do exist in an almost similar way as me. So there are internal states that are less accessible for me that you have, and so on.
And my model might not be completely adequate.
There are also things that I might perceive about you that you don't perceive.
But in some sense, both you and me are some puppets, two puppets that enact this play in my mind.
And I identify with one of them because I can control one of the puppet directly.
And with the other one, I can create things in between.
So for instance, we can get into an interaction that even leads to a coupling, to a feedback loop.
So we can think things together in a certain way or feel things together.
But this coupling is itself not a physical phenomenon.
It's entirely a software phenomenon.
It's the result of two different implementations interacting with each other.
So, the way you think about it, is the entirety of existence the simulation, where kind of each mind is a little sub-simulation? Like, why doesn't your mind have access to my mind's full state?
For the same reason that my mind doesn't have access to its own full state. There is no trick involved. So basically, when I know something about myself,
it's because I made a model.
Yes, but part of your brain is tasked with modeling what other parts of your brain are doing.
Yes, but there seems to be an incredible consistency about this world in the physical sense
that there's repeatable experiments and so on.
Yeah.
How does that fit into our silly descendant-of-apes simulation of the world? So why is everything so repeatable? And not everything, but there's a lot of fundamental physics experiments that are repeatable for a long time, all over the place, and so on. The laws of physics: how does that fit in?
It seems that the parts of the world that are not deterministic
are not long-lived. So if you build a system, any kind of automaton, so if you build simulations
of something, you'll notice that the phenomena that endure are those that give rise to stable dynamics.
So basically, if you see anything that is complex in the world, it's the result of usually of some
control of some feedback that keeps it stable around certain attractors.
And the things that are not stable, that don't give rise to certain harmonic patterns and so on, tend to get weeded out over time.
So if we are in a region of the universe that sustains complexity, which is required to implement minds like ours, this is going to be a region of the universe that is very tightly controlled and controllable. So it's going to have lots of interesting symmetries, and also symmetry breaks that allow for the creation of structure.
But they exist where? So there's this interesting idea that our mind is a simulation that's constructing the narrative. My question is just to try to understand how that fits with the entirety of the universe. You're saying that there's a region of this universe that allows enough complexity to create creatures like us. But what's the connection between the brain, the mind, and the broader universe? Which comes first? Which is more fundamental? Is the mind the starting point and the universe emergent, or is the universe the starting point and the minds emergent?
I think quite clearly the latter. It's at least a much easier explanation, because it allows us to make causal models, and I don't see any way to construct an inverse causality.
So what happens when you die to your mind simulation?
My implementation ceases. So basically, the thing that implements myself will no longer be present, which means, if I am not implemented on the minds of other people, the thing that I identify with is gone. The weird thing is, I don't actually have an identity beyond the identity that I construct.
If I were the Dalai Lama: he identifies as a form of government. So basically, the Dalai Lama gets reborn, not because he's confused, but because he is not identifying as a human being. He runs on a human being. He's basically a governmental software that is instantiated anew in every generation. So his advisors will pick someone who does this in the next generation. So if you identify with this, you are no longer a human, and you don't die, in the sense that what dies is only the body of the human that you run on. To kill the Dalai Lama, you have to kill his tradition.
And if we look at ourselves, we realize that we are to a small part like this.
Most of us. So for instance, if you have children, you realize something lives on in them. Or if you spark an idea in the world, something lives on. Or if you identify with the society around you, because you are a part of it, you're not just a human being.
Yes. And in a sense you are kind of like a Dalai Lama, in the sense that you, Joscha Bach, are just a collection of ideas. So like you have this operating system on which a bunch of ideas live and interact. And then once you die, they kind of part, some of them jump off the ship.
You can put it the other way: identity is a
software state. It's a construction. It's not physically real. Identity is not a physical concept.
It's basically a representation of different objects on the same world line.
But identity lives and dies. Are you attached? What's the fundamental thing? Is it the ideas
that come together to form identity or is each individual identity actually a fundamental thing?
It's a representation that you can get agency over if you care.
So basically, you can choose what you identify with, if you want to.
No, but it just seems that if the mind is not real,
then birth and death are not a crucial part of it. Well, maybe I'm silly, maybe I'm attached to this whole biological organism,
but it seems that being a physical object in this world is an important aspect of
birth and death. It feels like it has to be physical to die. It feels like simulations don't have to die.
The physics that we experience is not the real physics. There is no color and sound in the
real world. Color and sound are types of representations that you get if you want to model
reality with oscillators, right? So colors and sounds in some sense have octaves.
Yes.
And it's because they are represented, probably, with oscillators. That's why colors form a circle of hues. And that colors have harmonics, and sounds have harmonics, is a result of synchronizing oscillators in the brain.
Right. So the world that we subjectively interact with is fundamentally the result
of the representation mechanisms in our brain. They are mathematically to some degree
universal. There are certain regularities that you can discover in the patterns and not others, but the patterns that we get, this
is not the real world. The world that we interact with is always made of too many parts to
count. Right. So when you look at this table and so on, it consists of so many molecules
and atoms that you cannot count them. So you only look at the aggregate dynamics, at limit
dynamics: if you had almost infinitely
many particles, what would be the dynamics of the table? And this is roughly what
you get. So the geometry that we are interacting with is the result of discovering those operators
that work in the limit that you get by building an infinite series that converges. For those parts
where it converges, it's geometry; for those parts where it doesn't converge, it's chaos.
All right, and then so all that is filtered through
the consciousness that's emerging to our narrative.
So the consciousness gives it color,
gives it feeling, gives it flavor.
So I think the feeling, flavor, and so on
are given by the relationship that a feature has to all the other features.
It's basically a giant relational graph that is our subjective universe.
The color is given by those aspects of the representation, this experiential color,
where you care, where you have identifications, where something means something, where you are the inside of a feedback loop.
And the dimensions of caring are basically dimensions of this motivational system that
we emerged over.
The meaning is in the relations, in the graph.
Can you elaborate on that a little bit?
Or maybe we can even step back and ask the question of what consciousness is,
to approach it more systematically.
How do you think about consciousness?
What is consciousness?
Consciousness is largely a model of the contents of your attention.
It's a mechanism that has evolved for a certain type of learning.
At the moment, our machine learning systems largely
work by building chains of weighted sums of real numbers
with some nonlinearity.
And they learn by piping an error signal through these
chained layers and adjusting the weights in these weighted sums. You can
approximate most polynomials with this if you have enough training data, but
the price is you need to change a lot of these weights. Basically, the error is
piped backwards into the system until it accumulates
at certain junctures in the network, and everything else evens out statistically. And only at these
junctures, where you had the actual error in the network, do you make the change.
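As a rough illustration of the scheme he's describing, here is a minimal sketch of weighted sums with a nonlinearity, trained by piping an error signal backwards. All sizes, data, and the two-layer shape are invented for illustration; this is not any particular production system.

```python
# Minimal sketch of learning by backpropagating an error signal
# through chains of weighted sums with a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (4, 8))   # first layer of weighted sums
W2 = rng.normal(0, 0.1, (8, 1))   # second layer
x = rng.normal(0, 1, (32, 4))     # toy inputs
y = (x.sum(axis=1, keepdims=True) > 0).astype(float)  # toy targets

for step in range(1000):
    h = np.tanh(x @ W1)           # weighted sum plus nonlinearity
    out = h @ W2
    err = out - y                 # the error signal
    # Pipe the error backwards and nudge a lot of weights by a little:
    dW2 = h.T @ err
    dW1 = x.T @ ((err @ W2.T) * (1 - h**2))
    W2 -= 0.01 * dW2
    W1 -= 0.01 * dW1
```

The slowness he points to is visible here: every weight gets adjusted a tiny amount on every pass, over many passes.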
This is a very slow process, and our brains don't have enough time for that, because we don't get
old enough to play Go the way that our machines learn to play Go. So instead, what we do is
attention-based learning. We pinpoint the probable region in the network where
we can make an improvement, and then we store this binding state together
with the expected outcome in a protocol. And this ability to make indexed
memories exists for the purpose of learning, to revisit these commitments later.
This requires a memory of the contents of our attention.
Another aspect is that when I construct my reality, I make mistakes.
So I see things that turn out to be reflections
or shadows and so on, which means I have
to be able to point out which features of my perception
gave rise to a present construction of reality.
So the system needs to pay attention to the features
that are currently in its focus.
And it also needs to pay attention to whether it pays attention
itself, in part, because the attentional system gets trained
with the same mechanism, so it's reflexive,
but also in part, because your attention lapses,
if you don't pay attention to the attention itself.
So: is the thing that I'm currently
seeing just a dream that my brain has spun
off into some kind of daydream, or am I still paying attention to my percept?
So you have to periodically go back and check whether you're still paying attention.
And if you have this loop, and you make it tight enough, between the system becoming aware
of the contents of its attention and the fact that it's paying attention itself, and making
attention the object of its attention, I think this is the loop over which we wake up.
So there's this attentional mechanism
that's somehow self-referential,
and that's fundamental to what consciousness is.
So just to ask you a question:
I don't know how much you're familiar with the recent breakthroughs
in natural language processing.
They use attentional mechanisms,
something called transformers,
to learn patterns in sentences by allowing the network to focus its attention on particular parts of the sentence, each part individually. So they parametrize and make learnable the dynamics of a
sentence by having a little window into the sentence. Do you think that's a
little step on the adventure that will take us to the attentional mechanisms from which consciousness
can emerge?
Not quite. I think it models only one aspect of attention. In the early days of
automated language translation, there was an example that I found particularly funny,
where somebody tried to translate a text from English into German, and it was: "a bat broke the window."
And the translation in German was, roughly, "Eine Fledermaus zerbrach das Fenster mit einem Baseballschläger."
So, translated back into English: a bat, this flying mammal, broke the window with a baseball bat.
And this seemed to be the most plausible translation to the program
because it somehow maximized
the probability of translating the concept
"bat" into German in the same sentence.
And this is a mistake that the transformer model
is not making, because it's tracking identity.
The attentional mechanism in the transformer model is basically putting its finger on individual
concepts and making sure that these concepts pop up later in the text.
It basically tracks the individuals through the text.
And that's why the system can learn things that other systems couldn't before it, which
makes it possible, for instance, to write a text where it talks about a
scientist, and then the scientist has a name and has a pronoun, and it
keeps a consistent story about that thing.
What it does not do: it doesn't fully integrate this,
so its meaning falls apart at some point.
It loses track of the context.
It does not yet understand that everything that it says has to
refer to the same universe.
And this is where the thing falls apart.
But the attention in a transformer model
does not go beyond tracking identity.
And tracking identity is an important part of attention,
but it's a different, very specific,
attention mechanism.
And it's not the one that gives rise
to the type of consciousness that we have.
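To make the mechanism under discussion concrete, here is a minimal sketch of scaled dot-product self-attention, the operation inside a transformer that lets each token put weight on, and thereby keep a finger on, other tokens (for instance, a pronoun attending back to the noun it refers to). Shapes and data are invented for illustration.

```python
# Sketch of transformer-style self-attention: each token computes
# weights over all tokens and mixes in what it attends to.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # token-to-token affinities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)     # softmax over the sentence
    return weights @ V                            # attention-weighted mixture

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                      # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)               # shape (5, 16)
```

Tracking identity falls out of the weights: a token that consistently attends to one earlier token is, in effect, keeping track of that individual through the text.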
Can you just linger on what you mean by identity
in the context of language?
So when you talk about language,
we have different words that can refer to the same concept.
And in a sense, that's the space of concepts.
Yes. And it can also be in a nominal sense, or in an
indexical sense, where you say:
this word does not only refer to this class of objects,
but it refers to a definite object, to some kind of agent
that weaves its way through the story
and is only referred to in different ways in the language.
So the language is basically a projection
from a conceptual representation,
from a scene that is evolving
into a discrete string of symbols.
And what the transformer is able to do is
learn aspects of this projection mechanism
that other models couldn't learn.
So, have you ever seen an artificial intelligence, or any kind of construction idea, that allows,
unlike neural networks, or perhaps within neural networks, something
that's able to keep the space of concepts integrated? What you're describing:
building a knowledge base, building this consistent, larger and larger set of ideas
that would then allow for deeper understanding.
Wittgenstein thought that we can build everything from language, from basically a logical
grammatical construct. And I think to some degree this is also what Minsky believed.
So that's why he focused so much on common-sense reasoning
and so on.
And a project that was inspired by him was Cyc.
That's still going on.
Yes.
Of course, ideas don't die.
Only people die.
And that's true.
But while Cyc is a productive project, it's probably not one that
is going to converge to general intelligence. The thing that Wittgenstein couldn't solve,
and he looked at this in the book at the end of his life, the Philosophical Investigations,
was the notion of images. Images play an important role in the Tractatus. The Tractatus
is an attempt to basically turn philosophy into a logical programming language, to design a logical language in which you can do actual philosophy, one
that's rich enough for doing this. And the difficulty was to deal with perceptual content.
And eventually, I think, he decided that he was not able to solve it.
And I think this prefigured the failure of the logicist program in AI. And the solution,
as we see it today, is:
we need more general function approximation.
There are functions, geometric functions
that we learn to approximate that cannot be efficiently
expressed and computed in a grammatical language.
We can of course build automata that go via number theory
and so on, to learn an algebra, and then compute
an approximation of this geometry.
But to equate language and geometry is not an efficient way to think about it.
So the function approximation that neural networks take is actually more general than what can be expressed through language?
Yes. More general than what can be efficiently expressed through language, at the data rates at which we process
grammatical language.
Okay.
So you don't think...
you disagree with Wittgenstein,
that language is fundamental to cognition?
I agree with Wittgenstein.
I just agree with the late Wittgenstein.
And I also agree with the beauty of the early Wittgenstein.
I think that the Tractatus itself is probably the most beautiful philosophical text that was written in the 20th century.
But language is not fundamental to cognition, intelligence and consciousness.
So I think that language is a particular way, or the natural language that we're using,
is a particular level of abstraction that we used to communicate with each other. But the languages in which we express geometry
are not grammatical languages in the same sense.
So they work slightly different.
They're more general expressions of functions.
And I think the general nature of a model
is you have a bunch of parameters.
These have a range; they are the variances of the world.
And you have relationships between them, which are constraints,
which say: if certain parameters have these values,
then other parameters have to have the following values.
This is a very early insight in computer science,
and I think some of the earliest formulations
is the Boltzmann machine.
The problem with the Boltzmann machine
is that, while it has a measure of whether it's good,
basically the energy of the system, the amount of tension that you have left in the constraints
where the constraints don't quite match,
it's very difficult, despite having this global measure, to train it.
Because as soon as you add more than trivially few elements,
parameters, into the system, it's very difficult to get it to settle into the right architecture.
And so the solution that Hinton and Sejnowski found was to use a restricted Boltzmann machine,
which drops the hidden links, the internal links within a layer of the Boltzmann machine, and basically only has
an input and an output layer. But this limits the expressivity of the Boltzmann machine.
So now you build a network out of many of these primitive Boltzmann machines.
And in some sense, you can see almost continuous development
from this to the deep learning models
that we're using today.
Even though we don't use Boltzmann machines at this point.
But the idea of the Boltzmann machines,
you take this model, you clamp some of the values
to perception, and this forces the entire machine
to go into a state that is compatible with the states
that you currently perceive, and this state
is your model of the world.
I think it's a very general way of thinking about models,
but we have to use a different approach to make it work.
That is, we have to find different ways to train the Boltzmann machine.
So the mechanism that trains the Boltzmann machine,
and the mechanism that makes the Boltzmann machine settle into its state,
are distinct from the constraint architecture of the Boltzmann machine itself.
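To make the picture concrete, here is a minimal sketch, with invented sizes and weights, of the Boltzmann machine idea as described: an energy measuring the leftover tension in the constraints, and a settling procedure that clamps the visible units to a perception and samples a compatible hidden state. This is a toy restricted machine, not a trained model.

```python
# Sketch of a restricted Boltzmann machine: clamp the visible units to
# perception, let the hidden units settle into a compatible state.
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 6, 4
W = rng.normal(0, 0.1, (n_v, n_h))   # restricted: links only between layers

def energy(v, h):
    return -v @ W @ h                # leftover "tension" in the constraints

def settle(v, steps=50):
    """Clamp visible units v, Gibbs-sample a hidden state compatible with them."""
    h = rng.integers(0, 2, n_h)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(v @ W)))          # P(h=1 | v)
        h = (rng.random(n_h) < p).astype(int)
    return h

v = np.array([1, 0, 1, 1, 0, 0])     # the "perception" clamped onto the machine
h = settle(v)                         # the machine's state of the world
print(energy(v, h))
```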
The kind of mechanism that we want to develop, you're saying?
Yes, so there's a direction in which I think our research is going to go.
For instance, what you notice in perception is that our perceptual models of
the world are not probabilistic but possibilistic, which means you should be able to perceive things that are
improbable but possible. Right? A perceptual state is valid not if it's
probable, but if it's possible, if it's coherent. So if you see a tiger coming
after you, you should be able to see this even if it's unlikely. The
probability is only necessary for the convergence of the model: given a set of
possibilities that is very, very large, and a set of perceptual features, how should you change the
state of the model to get it to converge with your perception? But the space of interpretations that are
coherent with the context that you're sensing is not as large.
I mean, it's perhaps pretty small.
The degree of coherence that you need to achieve depends, of course, on how deep your models go.
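A toy contrast of the two readings he distinguishes, with invented features and candidate scenes: a probabilistic perceiver ranks interpretations by prior, while a possibilistic one keeps whatever is coherent with the observed features, so the improbable tiger survives.

```python
# Probabilistic vs. "possibilistic" perception on invented candidates.
candidates = [
    {"scene": "house cat", "stripes": True,  "size": "small", "prior": 0.90},
    {"scene": "tiger",     "stripes": True,  "size": "large", "prior": 0.01},
    {"scene": "dog",       "stripes": False, "size": "large", "prior": 0.09},
]
observed = {"stripes": True, "size": "large"}

def coherent(c):
    return all(c[k] == v for k, v in observed.items())

# Probabilistic reading: the improbable tiger is all but ignored.
best_by_prior = max(candidates, key=lambda c: c["prior"])
# Possibilistic reading: the tiger survives, because it is the one
# interpretation coherent with every observed feature.
possible = [c["scene"] for c in candidates if coherent(c)]
print(best_by_prior["scene"], possible)   # house cat ['tiger']
```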
For instance, politics is very simple when you know very little about game theory and human nature.
So the younger you are, the more obvious it seems how politics should work, right?
Yes. And that's because you can get a coherent aesthetics from relatively few inputs,
and the more layers of reality you model, the harder it gets to satisfy all the constraints.
So, you know, current neural networks are fundamentally
supervised learning systems: a feed-forward neural network
that uses backpropagation to learn.
What's your intuition about?
What kind of mechanisms might we move towards
to improve the learning procedure?
I think one big aspect is going to be meta learning
and architecture search starts in this direction.
In some sense, the first wave of AI,
classical AI work by identifying a problem into a possible solution and
implementing the solution, right, program that plays chess.
And right now, we are in the second wave of AI. So instead of
writing the algorithm that implements the solution, we write an
algorithm that automatically searches for an algorithm that
implements the solution. So the learning system in some sense,
is an algorithm that itself discovers the algorithm that solves the problem like go, go is too hard to implement it by
this solution by hand, but we can implement an algorithm that finds this solution.
So now let's move to the third stage, right? The third stage would be meta-learning: find
an algorithm that discovers a learning algorithm for the given domain. Our brain is probably
not a learning system, but a meta-learning system.
This is one way of looking at what we are doing.
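A schematic sketch of the three stages on a toy curve-fitting task; everything here is hypothetical shorthand, not a real AI system. Wave one is a hand-written solution, wave two searches for the solution, wave three searches over learning algorithms themselves.

```python
# Three waves of AI, illustrated on fitting a noisy line.
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 20)
ys = 3 * xs + 1 + rng.normal(0, 0.05, xs.size)

# Wave 1 (classical AI): a human identifies the solution and writes it down.
def wave_1(x):
    return 3 * x + 1

# Wave 2 (machine learning): an algorithm searches for the algorithm
# that implements the solution (here: least-squares fitting).
def wave_2(xs, ys):
    a, b = np.polyfit(xs, ys, 1)
    return lambda x: a * x + b

# Wave 3 (meta-learning): an algorithm searches over learners themselves
# (here: over model classes, picking the one that generalizes best).
def wave_3(xs, ys):
    tr, va = slice(0, 10), slice(10, None)
    def fit(degree):
        return np.poly1d(np.polyfit(xs[tr], ys[tr], degree))
    best = min((1, 2, 3), key=lambda d: np.mean((fit(d)(xs[va]) - ys[va]) ** 2))
    return fit(best)
```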
There is another way, if you look at the way our brain is, for instance, implemented.
There is no central control that tells all the neurons how to wire up.
Instead, every neuron is an individual reinforcement learning agent.
Every neuron is a single-celled organism that is quite complicated, and in some sense,
quite motivated to get fed.
And it gets fed if it fires on average at the right time.
And the right time depends on the context
that the neuron exists in, which is the electrical
and chemical environment that it has.
So it basically has to learn a function over its environment
that tells it when to fire to get fed.
Or if you see it as a reinforcement learning agent, every neuron is in some sense,
making a hypothesis when it sends a signal and tries to pipe a signal through the universe
and tries to get positive feedback for it.
And the entire thing is set up in such a way that it's robustly self-organizing into a brain.
Which means you start out with different neuron types that have different priors on which hypotheses to test,
on how to get this reward. And you put them into different
concentrations in a certain spatial alignment, and then you
entrain them in a particular order. And as a result, you get
a well-organized brain.
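A minimal sketch of the "every neuron is a reinforcement learning agent" picture, with an invented reward signal standing in for getting fed at the right time. This is schematic, not a neuroscience model.

```python
# Each unit fires on a hypothesis about its local context and is
# rewarded ("fed") when firing was appropriate.
import numpy as np

rng = np.random.default_rng(0)

class NeuronAgent:
    def __init__(self, n_inputs):
        self.w = rng.normal(0, 0.1, n_inputs)    # hypothesis about its context

    def act(self, context):
        return float(context @ self.w > 0)        # fire or stay silent

    def learn(self, context, fired, reward, lr=0.05):
        # Reinforce the hypothesis when firing was rewarded, weaken it otherwise.
        self.w += lr * reward * (2 * fired - 1) * context

neuron = NeuronAgent(8)
for _ in range(200):
    ctx = rng.normal(size=8)                      # electrochemical environment
    fired = neuron.act(ctx)
    reward = 1.0 if fired == float(ctx[0] > 0) else -1.0  # invented "right time"
    neuron.learn(ctx, fired, reward)
```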
Yeah, so, okay, so the brain is a meta learning system with a
bunch of reinforcement learning agents.
And, I think you said this, but just to clarify:
there's no centralized government
that tells each neuron, here's a loss function.
So who says what the objective is?
There are also governments, which impose loss functions on different parts of the brain. So we have
differential attention. Some areas in your brain get especially rewarded when you look at
faces. If you don't have that, you will get prosopagnosia, which basically means the
inability to tell people apart by their faces.
And the reason that happens is because it had an evolutionary advantage? So evolution comes into play here.
But it's basically an extraordinary attention that we have for faces. I don't think that people
with prosopagnosia have, per se, a defective brain. The brain just has an average attention
for faces: people with prosopagnosia don't look at faces more than they look at cups.
So the level at which they resolve the geometry of faces is not higher than the one for cups. And people that don't have
prosopagnosia look obsessively at faces, right? For you and me, it's impossible to move
through a crowd without scanning the faces. And as a result, we make insanely detailed
models of faces that allow us to discern the mental states of people.
So obviously we don't know 99% of the details of this meta-learning system
that is our mind. Okay.
But still, we took a leap to it from something much dumber, through the
evolutionary process.
Can you, first of all, maybe say how big of a leap that is,
to our brain, from our ape ancestors, from multi-cell organisms?
And is there something we can think about as we start to think about how to engineer
intelligence, is there something we can learn from evolution?
In some sense, life exists because of the market opportunity of controlled chemical
reactions.
We compete with dumb chemical reactions, and we win in some areas against this dumb combustion
because we can harness those entropy gradients where you need to add a little bit
of energy in a specific way to harvest more energy.
So we are competing with combustion.
Yes, in many regions we do. We have to try very hard, because when we are in direct competition, we lose, right?
Yeah.
Because the combustion is going to close the entropy gradient much faster than we can run.
Yes, got it.
That's quite a...
So basically, we can do this because every cell has a Turing machine built into it.
It's literally a read/write head on a tape. And so everything that's more
complicated than a molecule that just vortexes around attractors needs a Turing machine
for its regulation. And then you bind cells together and you get the next level of organization: an
organism, where the cells together implement some kind of software. And for me, a very interesting thing to discover in the last year was the word spirit, because I
realized that the word spirit actually means an operating system for an autonomous robot.
And when the word was invented, people needed this word.
But they didn't have robots that they built themselves yet, the only autonomous robots that
were known were people, animals, plants, ecosystems, cities, and so on.
And they all had spirits.
And it makes sense to say that the plant is an operating system.
If you pinch the plant in one area, then it's going to have repercussions throughout the
plant.
Everything in the plant is in some sense connected into some global aesthetics, like
in other organisms.
An organism is not a collection of cells.
It's a function that tells cells how to behave.
And this function is not implemented
as some kind of supernatural thing,
like some more for genetic field.
It is an emergent result of the interactions
of each cell with every other cell, right?
So what you're saying is: the organism is a function
that tells the cells what to do, and the function emerges
from the interaction of the cells.
Yes.
So it's basically a description of what the plant is doing in terms of macro states.
And the macro states, the physical implementation are too many of them to describe them.
So the software that we used to describe what the plant is doing,
the spirit of the plant is the software,
the operating system of the plant, right?
And this is a way,
in which we, the observers, make sense of the plant.
Yes.
And the same is true for people.
So people have spirits, which are their operating systems,
in a way, right? And there are aspects of that operating system
that relate to how your body functions, and others to how you socially interact, how you interact
with yourself, and so on. And we make models of that spirit. And we think it's a loaded term,
because it's from a pre-scientific age. But it took the scientific age a long time to
rediscover a term that is pretty much the same thing, and I suspect that the differences that we still see between the old word and the new word
are translation errors that have accrued over the centuries.
Well, can you actually linger on that?
Just to clarify, because I'm a little bit confused:
the word spirit is this powerful thing.
Why did you say that you discovered this in the last year or so?
Do you mean the same old traditional idea of a spirit, or something else?
I tried to find out what people mean by spirit. When people say "spirituality" in the US,
it usually refers to the phantom limb that they develop in the absence of culture.
And a culture is, in some sense, you could say, the spirit of a society that plays a long game.
It's this thing that becomes self-aware at a level
above the individuals, where you say: if you don't do the following things, then the grand-grand-grandchildren
of our children will have nothing to eat. So if you take this long
scope, where you try to maximize the length of the game that you are playing as a species,
you realize that you are part of a larger thing that you cannot fully control. You probably need to submit to the ecosphere
instead of trying to completely control it.
There needs to be a certain level
at which we can exist as a species if you want to endure.
And our culture is not sustaining this anymore.
We basically made this bet with the industrial revolution
that we can control everything.
And the modernist societies, with basically unfettered growth,
led to a situation in which we depend on the ability to control the entire planet.
And since we are not able to do that, as it seems, this culture will die.
And we realize that it doesn't have a future. We call the generation of our children
"Generation Z." That's not a very optimistic thing to do.
I think that's where it is.
Yeah. So you have this kind of intuition that our civilization, you said culture,
but you really mean the spirit of the civilization, the entirety of the civilization, may not
exist for long.
Yeah.
So can you untangle that?
What's your intuition behind it?
You kind of mentioned to me offline
that the Industrial Revolution was kind of
the moment we agreed to accept the offer, signed on the dotted line,
and with the Industrial Revolution
we doomed ourselves.
Can you elaborate on that?
This is speculative; I of course don't know how it plays out. But it seems to me that
in a society in which you lever yourself very far out over an entropic abyss, without land on
the other side, it's relatively clear that your cantilever is at some point going to break down
into this entropic abyss. And you'll have to pay the bill.
Okay.
Russian is my first language, and I'm also an idiot.
Me too.
This is just two apes instead of playing with the banana trying to have fun by talking.
Okay.
Entropic what?
What's entropic?
Entropic.
Entropic, in the sense of entropy?
Entropic, yes.
Oh, entropic. Got it.
And abyss was the other word you used.
Abyss?
What's that?
It's a big gorge.
Oh, abyss.
Abyss, yes.
Entropic abyss.
So many of the things you say are poetic.
And it's breaking my brain.
It's amazing, right?
It's also mispronounced,
which makes it more poetic.
Wittgenstein would be proud.
So, entropic abyss. Okay, let's rewind then to the Industrial Revolution.
How does that get us into the entropic abyss?
So, in some sense, we burned 100 million years worth of trees to get everybody plumbing. Yes. And the society that we had
before that had a very limited number of people. So basically
since 0BC, we hovered between 300 and 400 million people.
And this only changed with the enlightenment and the subsequent
industrial revolution. And in some sense, the enlightenment and the subsequent industrial revolution. And in some sense, the enlightenment freed our rationality and also freed our norms
from the pre-existing order gradually. It was a process that basically happened in feedback
loops, so it was not that just one cost the other. It was a dynamic that started.
And the dynamic worked by basically increasing productivity to such a degree that we could feed all our children.
And I think the definition of poverty is that you have as many children as you can feed before they die, which is in some sense the state that all animals on earth are in.
The definition of poverty is having only as many children as you can feed,
and if you have more, they die.
Yes.
And in our societies, you can basically have as many children as you want, they don't die.
Right.
So the reason why we don't have as many children as we want is because we also have to pay
a price in terms of the loss of our freedom: we have to restrict ourselves if we have
too many.
So basically, everybody in the upper-middle and lower-upper class has only a limited number of children, because having
more of them would mean a big economic hit to the individual families. Yes. Because children,
especially in the US, super expensive to have. And you only are taken out of this if you are
basically super rich or if you are super poor. If you are super poor, it doesn't matter how many
kids you have because your status is not going to change. And these children are largely
not going to die of hunger. So how does this lead to self-destruction? So there's a lot of unpleasant
properties about this process. So basically what we try to do is we try to let our children survive
even if they have diseases. I, for instance, would have died before my mid-20s
without modern medicine, and most of my friends would have as well.
And so many of us wouldn't live without the advantages
of modern medicine and modern industrialized society.
We get our protein largely by
subduing the entirety of nature.
Imagine there would be some very clever microbe that would live in our organisms and would
completely harvest them and change them into a thing that is necessary to sustain itself.
And it would discover that, for instance, brain cells are kind of edible, but they're not quite
nice, so you need to have more fat in them, and you turn them into fat cells. Basically, this big organism would become a vegetable that
is barely alive, and it's going to be very brittle and not resilient when the environment changes.
Yeah, but some part of that organism, the one that's actually doing all the using,
there will still be somebody thriving. So it relates back to this original question.
I suspect that we are not the smartest thing on this planet.
I suspect that basically every complex system has to have some complex
regulation, if it depends on feedback loops.
And so, for instance, it's likely that we should ascribe a certain
degree of intelligence to plants.
The problem is that plants don't have a nervous system, so they don't have a way to telegraph
messages over large distances almost instantly in the plant.
Instead, they rely on chemicals between adjacent cells, which means the signal processing
happens at a rate of a few millimeters per second.
And as a result, if the plant is intelligent, it's not going to be intelligent at similar
timescales as ours.
Yeah, the time scale is different.
So you suspect we might not be the most intelligent, but we're the most intelligent at this
spatial scale and at our time scale.
So basically, if you zoom out very far, we might discover that there
have been intelligent ecosystems on the planet that existed for thousands of years in an almost
undisturbed state. And it could be that these ecosystems actively regulated their environment, basically
changed the course of the evolution within the ecosystem to make it more efficient and less brittle.
So it's possible that something like plants is actually a set of living organisms, an ecosystem
of living organisms, that are just operating at a different time scale and are far superior
in intelligence to human beings.
And then human beings will die out, and plants will still be there, and they'll be thriving.
Yeah.
There's an evolutionary adaptation playing a role at all of these levels.
For instance, if mice don't get enough food and get stressed,
the next generation of mice will be more sparse and more scrawny.
The reason for this is because, in a natural environment,
the mice have probably hit a drought or something else,
and if they overgraze, then all the things that sustain them might go extinct,
and there will be no mice a few generations from now.
So to make sure that there will be mice five generations from now, basically the mice scale back.
And a similar thing happens with the predators of mice: they have to make sure that the mice
don't completely go extinct. So in some sense, if the predators are smart enough, they will
be tasked with shepherding their food supply. And maybe the reason why lions have much larger brains than antelopes
is not so much because it's so hard to catch an antelope, as opposed to running away from the lion,
but because the lions need to make complex models of their environment, more complex than the antelopes do.
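The feedback described here, populations scaling back so their food supply survives, is the behavior of classic predator-prey models. A minimal Lotka-Volterra sketch, with all coefficients invented for illustration:

```python
# Predator-prey dynamics: neither side "wins"; the populations cycle,
# each pulling back before it exhausts the other.
mice, owls = 40.0, 9.0
a, b, c, d, dt = 0.6, 0.025, 0.8, 0.02, 0.01

history = []
for _ in range(20000):
    dm = a * mice - b * mice * owls    # mice grow, get eaten
    do = d * mice * owls - c * owls    # predators grow by eating, else die off
    mice += dm * dt
    owls += do * dt
    history.append((mice, owls))
```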
So first of all, just describing that there's a bunch of complex systems, and that human beings may not
even be the most special or intelligent of those complex systems, even on Earth,
makes me feel a little better about the possible extinction of the human species that we're talking about.
Yes, maybe we're just the guys deployed to put the carbon back into the atmosphere.
Yeah, this is just a nice... we tried it out.
The big stain on evolution was trees, right?
Trees first evolved before they could be digested again.
There were no insects that could break all of them apart.
Cellulose is so robust that you cannot get all of it apart with microorganisms.
So many of these trees fell into swamps, and all this carbon became inert
and could no longer be recycled into organisms.
And we are the species that was deployed to take care of that:
to take this carbon out of the ground, put it back into the atmosphere.
And the Earth is already greening.
So within a million years or so, when the ecosystems have recovered from the rapid changes,
which they're not compatible with right now, the Earth is going to be awesome again.
And there won't even be a memory of us, of us little apes.
I think there will be memories of us.
I suspect we are the first generally intelligent species in this sense.
We are the first species with an industrial society, because we will leave more phones than bones in the stratosphere.
More phones than bones. I like it. But then let me push back. You've
kind of suggested that we have a very narrow definition of intelligence. I mean, why aren't trees
a more general, higher-level intelligence? If trees were intelligent...
Then they would be at different timescales, which means within a hundred years, the tree is
probably not going to make models that are as complex as the ones that we make in ten years.
But maybe the trees are the ones that made the phones, right?
You could say the entirety of life did it. You know, the first cell never died.
The first cell only split, right? And every cell in our body is still
an instance of the first cell that split off from that very first cell.
There was only ever one cell on this planet, as far as we know.
Yeah.
And so the cell is not just a building block of life.
It's a hyperorganism, right?
And we are part of this hyperorganism.
So, nevertheless, this hyperorganism, or rather this particular little branch of it, which is us humans,
because of the Industrial Revolution and maybe the exponential growth of technology, might somehow destroy ourselves. So what do you think is the most likely way we might destroy ourselves?
So some people worry about genetic manipulation.
Some people, as we've talked about, worry about either dumb artificial intelligence or super intelligent artificial intelligence destroying us.
Some people worry about nuclear weapons and weapons of war in general.
What do you think, if you were a betting man:
what would you bet on in terms of self-destruction? And would it be higher than 50 percent?
It's very likely that nothing we bet on matters after we lose the bet, so
I don't think that bets are literally the right way to think about this. I mean, once you're dead,
you won't be there to collect the winnings. It's also not clear whether we, as a species, go extinct. But I think that our
present civilization is not sustainable. So the thing that will change is that there will
probably be fewer people on the planet than there are today. And even if not, still most
of the people that are alive today will not have offspring a hundred years from now, because
of the geographic changes and so on, and the changes in the food supply.
It's quite likely that many areas of the planet will only be livable with a closed cooling chain a hundred years from now.
So many of the areas around the equator and in
subtropical climates that are now quite pleasant to live in will cease to be
habitable without air conditioning.
Wow, closed cooling chain. Closed cooling chain communities.
So you honestly have a strong worry about the effects of global warming.
By itself, it's not the big issue. If you live in Arizona right now,
you have basically three months in the summer in which you cannot be outside.
Yes.
And so you have a closed cooling chain: you have air conditioning in your car and in your
home, and you're fine.
But if the air conditioning stopped for a few days, then in many areas you would not
be able to survive.
Right.
Can you just pause for a second?
You say so many brilliant, poetic things. Do people actually use
that term, closed cooling chain?
I imagine that people use it when they describe how they get meat into a supermarket, right?
If you break the cooling chain and this thing starts to thaw, you're in trouble and you have to throw it away.
It's such a beautiful way to put it.
It's like calling a city a closed social chain or something like that.
That's right.
I mean, the locality of it is really important.
Yeah, but it basically means you wake up in a climatized room.
You go to work in a climatized car.
You work in a climatized office.
You shop in a climatized supermarket.
And in between, you have very short distances, where you run from your car to the supermarket,
but you have to make sure that your body temperature does not approach the temperature of the environment.
Yeah.
So the crucial thing is the wet-bulb temperature.
It's what you get when you take a wet cloth and you put it around your thermometer,
and then you move it very quickly through the air,
so you get the evaporative cooling.
And as soon as you can no longer cool your body temperature via
evaporation to a temperature below something like, I think, 35 degrees Celsius, you die.
And which means if the outside world is dry, you can still cool yourself down by sweating,
but if it has a certain degree of humidity or if it goes over a certain temperature, then
sweating will not save you.
And this means that even if you are a healthy, fit individual, within a few hours, even if you
stay in the shade and so on, you'll die.
Unless you have some climatization equipment.
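For a rough feel of the threshold he mentions: one published empirical formula for wet-bulb temperature from air temperature and relative humidity is Stull's (2011) approximation. The coefficients below are quoted from memory, so treat the exact numbers as an assumption.

```python
# Wet-bulb temperature from dry-bulb temperature T (Celsius) and
# relative humidity RH (percent), per Stull's empirical fit.
import math

def wet_bulb(T, RH):
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH) - math.atan(RH - 1.676331)
            + 0.00391838 * RH**1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# Dry desert heat vs. humid heat: same 45 C air, very different wet-bulb.
print(wet_bulb(45, 10))   # ~21 C: sweating still cools you
print(wet_bulb(45, 60))   # ~37 C: past the fatal ~35 C limit
```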
And that by itself is fine, as long as you maintain civilization, you have an energy supply, and you have food trucks coming to your home that are climatized.
But what if you lose large-scale open agriculture at the same time?
Then you basically run into food insecurity, because
the climate becomes very irregular, or the weather becomes very irregular, and you have a lot of extreme
weather events. So you need to grow most of your food maybe indoors, or you need to import your
food from certain regions, and maybe you're not able to maintain the civilization throughout the
planet, to keep up the infrastructure to get the food to your home.
Right.
But there could be a significant impact in the sense that people begin to suffer.
There could be wars over resources and so on.
But ultimately, do you not have, not faith, but: what do you make of the capacity of technological innovation to help us prevent some of the worst
damage that this condition can create? So as an example,
an almost out-there example, is the work that SpaceX and Elon Musk are doing, trying to
propagate us throughout the universe, into deep space, to colonize other planets.
That's one technological step.
But of course, what Elon Musk is trying on Mars is not to save us from global warming,
because Mars looks much worse than Earth will look like after the worst outcomes of global warming,
imaginable, right? Mars is essentially not habitable.
It's an exceptionally harsh environment, yes. But what he is doing, what a lot of people
throughout history since the Industrial Revolution have been doing, is a lot of different
technological innovation with some kind of target. And what ends up happening is that totally
unexpected new things come up. So trying to terraform or colonize Mars, an extremely harsh environment, might give us totally
new ideas of how to expand or increase the power of this closed cooling circuit that empowers
the community.
It seems like there's a little bit of a race between the open-ended technological innovation of this
communal operating system that we have, and our general tendency to want to overuse resources
and thereby destroy ourselves. You don't think technology can win that race?
I think the probability is relatively low,
given that our technology, at least in the US,
has been stagnating since the 1970s, roughly.
In terms of technology, most of the things that we do
are the result of incremental processes.
What about Intel?
What about Moore's law?
It's basically very incremental.
The invention of the microprocessor
was a major thing, right?
The miniaturization of transistors was really major.
But the things that we did afterwards largely were not that innovative.
So we got a lot of structural changes from scaling things, from CPUs into GPUs and things like that. But basically,
if you take a person that died in the 70s and was at the top of their
game, they would not need to read that many books to be current again.
But it's all about books? Who cares about books? There might be things that are beyond books, maybe papers. No, forget papers. There might be things beyond papers and books and knowledge.
That's a concept from a time when you were sitting there by candlelight, as individual consumers of knowledge.
What about the impact that we might be in the middle of, and might not be understanding:
of Twitter, of YouTube? The reason you and I are sitting here today is because of Twitter and YouTube.
So the ripple effect: there's two minds, sort of two dumb apes, coming up with perhaps
a new clean insight, and there are 200,000 other apes listening right now. And it's very difficult to understand
what effect that will have. It might be bigger than any of the advancements of the microprocessor,
or of the Industrial Revolution: the ability to spread knowledge, knowledge
that allows good ideas to reach millions much faster.
That might be the 21st century: the multiplying of good ideas.
Because if you say one good thing today, it will multiply across huge amounts of people,
and then they will say something, and then they will have another podcast, and they'll
say something, and then they'll write a paper. That could be huge. You don't think that?
Yeah, we should have billions of von Neumanns right now, and Turings,
and we don't, for some reason.
I suspect the reason is that we have destroyed our attention span. Also, the incentives are of course
different.
We have the Kardashians, yeah.
So the reason why we are sitting here and doing this as a YouTube video is because you
and me don't have the attention
span to write a book together right now, and you guys probably don't have the attention span to read it.
So let me tell you:
I guarantee you they're still listening.
Perhaps, but the span of their attention, it's very short.
Well, we're, you know, an hour and 40 minutes in, and I guarantee you that 80% of the people are still listening.
So there is an attention span. It's just the form. You know, who said that the book is the optimal way to transfer information?
That's still an open question. I mean, that's what we're...
There might be something that social media could be doing that other forms could not be doing. I think the
endgame of social media is a global brain. And Twitter is, in some sense, a global brain
that is completely hooked on dopamine. It doesn't have any kind of inhibition, and as a result,
it's caught in a permanent seizure.
It's also, in some sense, a multiplayer role-playing game.
And people use it to play an avatar that is not like them, as it would be in a sane
world, and they look at the world through the lens of their phones and think it's the real
world.
But it's the Twitter world, tormented by the popularity incentives of Twitter.
Yeah, the incentives, and just the natural biological dopamine rush of
a like. No matter how much I try to be very kind of zen-like and
minimalist, and not be influenced by likes and so on,
it's probably very difficult to avoid that to some degree.
Speaking of a small tangent on Twitter: how can Twitter be done better?
I think it's an incredible mechanism
that has a huge impact on society
by doing exactly what you're doing.
Sorry, doing exactly what you described,
which is: it's some kind of game,
and we're kind of individual RL agents
in this game, and it's uncontrolled,
because there's not really any centralized control.
because there's not really a centralized control.
Neither Jack Dorsey nor the engineers at Twitter
seem to be able to control this game.
Or can they, that's sort of a question.
Is there any advice you would give
on how to control this game?
I wouldn't give advice, because I am certainly not an expert, but I can give my thoughts on
this. Our brain has solved this problem to some degree: our brain has lots of individual
agents that manage to play together in a way. And there are also many contexts in which
other organisms have found ways to solve the problems
of cooperation that we don't solve on Twitter.
And maybe the solution is to go for an evolutionary approach.
So imagine that you have something like Reddit or something like Facebook and something
like Twitter.
And do you think about what they have in common, what they have in common, their companies
that in some sense own a protocol.
And this protocol is imposed on a community,
and the protocol has different components
for monetization, for user management,
for user display, for rating, for anonymity,
for import of other content, and so on.
And now imagine that you take these components
of the protocol apart,
and you arrange it so that the communities
that this social network consists of
are allowed to mix and match
their protocols and design new ones.
So for instance, the UI and the UX can be defined
by the community.
The rules for sharing content across communities
can be defined.
The monetization can be redefined.
The way you reward individual users for what they do can be redefined.
The way users represent themselves to each other can be redefined.
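A sketch of that decomposition, with hypothetical component names and options: the social network as a bundle of swappable protocol pieces that each community can mix, match, and fork.

```python
# A social network as composable protocol components.
from dataclasses import dataclass

@dataclass
class Protocol:
    monetization: str   # e.g. "ads", "subscriptions", "none"
    moderation: str     # e.g. "elected mods", "reputation-weighted votes"
    identity: str       # e.g. "real names", "pseudonyms", "anonymous"
    ranking: str        # e.g. "chronological", "upvotes", "learned feed"
    sharing: str        # rules for exporting content to other communities

# Communities mix and match, and can fork a protocol that works:
reading_club = Protocol("subscriptions", "elected mods", "pseudonyms",
                        "chronological", "quote-with-attribution")
research_net = Protocol("none", "reputation-weighted votes", "real names",
                        "upvotes", "open export")
```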
Who could be the redefiner? So can individual human beings build enough intuition to redefine those things?
That itself can become part of the protocol. So for instance, it could be that in some communities,
it will be a single person that comes up with these things, and in others it's a group of friends.
Some might implement a voting scheme that has some interesting weighted voting.
Who knows?
Who knows what will be the best self-organizing principle for this?
But the process can be automated.
I mean it seems like the brain is...
It can be automated so that people can write software for this.
And eventually, the idea is: let's not make an assumption about this thing if
you don't know what the right solution is. In those areas where we have no idea whether
the right solution will be people designing it ad hoc, or machines doing it; whether
you want to enforce compliance by social norms, like Wikipedia, or with software solutions,
or with AI that goes through the posts of people, or with legal principles, and so on. This is something you may need to find out. And so the idea would be: if you let the communities
evolve, and you just control them in such a way that you are incentivizing the most sentient
communities, the ones that produce the most interesting behaviors, that allow you to interact
in the most helpful ways with the individuals.
So you have a network that gives you information that is relevant to you.
It helps you to maintain relationships to others in healthy ways.
It allows you to build teams.
It allows you to basically bring the best of you into this thing, and go into a coupling,
into a relationship with others, in which you produce things that you would be unable
to produce alone.
Yes, beautifully put. But the key part of that process of incentives and evolution is that the things that
don't adapt themselves to effectively meet the incentives have to die. And the thing about
social media is that communities that are unhealthy, or whatever you want to define via the incentives, really don't like dying.
One of the things people protest really aggressively against is being censored.
Especially in America. I don't know much about the rest of the world, but the idea of freedom of speech, the idea of censorship, is really painful in America.
So what do you think about that, having grown up in East Germany?
Do you think censorship is an important tool in our brains, in intelligence, and in
social networks?
So basically, if you're not a good member of the entirety of the system,
you should be blocked away. Well, locked away, blocked.
An important thing is: who decides that you are a good member?
Is it distributed, and what is the outcome of the process that decides it,
both for the individual and for society at large? For instance, if you have a high-trust
society, you don't need a lot of surveillance, and surveillance is even in some sense undermining
trust, because it's basically punishing people that look suspicious when surveilled, but do
the right thing anyway. And the opposite: if you have a low-trust society, then surveillance
can be a better trade-off.
And the US is currently making a transition
from a relatively high-trust, or mixed-trust, society
to a low-trust society, so surveillance will increase.
Another thing is that beliefs are not just representations.
They are implementations that run code on your brain,
and change your reality, and change the way you interact
with each other, at some level.
And some of the beliefs are just public opinions that we use to display our alignment. So for instance,
people might say all cultures are the same and equally good, but still they prefer to live
in some cultures over others, very, very strongly so. And it turns out that cultures are defined
by certain rules of interaction. And these rules of interaction lead to different results when you implement them.
So if you adhere to certain rules, you get different outcomes in different societies.
And this all leads to very tricky situations when people do not have a commitment to shared purpose.
And our societies probably need to rediscover what it means to have a shared purpose.
And how to make this compatible with a non-totalitarian view.
So in some sense, the US is caught in a conundrum between totalitarianism and diversity, and
doesn't know how to resolve this.
And the solutions that the US has found so far are very crude, because it's a very young
society that is also under a lot of tension. It seems to me that the US will have to reinvent itself.
What do you think, just
philosophizing: what kind of mechanisms of government
do you think we as a species should be experimenting with, in the US or broadly? What do you think would work well
as a system? Of course, we don't know; it all seems
to work pretty crappily, some things worse than others. Some people argue that communism is the best;
others say, yeah, look at the Soviet Union. Some people argue that anarchism is the best,
completely discarding the positive effects of government. There's a lot of arguments.
The US seems to be doing pretty damn well in the span of history.
There's a respect for human rights, which seems to be a nice feature, not a bug.
And economically, a lot of growth, a lot of technological development.
People seem to be relatively kind on the grand scheme of things.
What lessons do you draw from that? What kind of
government system do you think is good?
Ideally, government should not be perceivable,
right? It should be frictionless. The more you notice the influence of the government,
the more friction you experience, and the less effective and efficient the government probably is.
So a government, game-theoretically, is an agent that imposes an offset on your
payoff matrix to make your Nash equilibrium compatible with the common good.
Right. So you have these situations where people act on local incentives:
everybody does the thing that's locally the best for them,
but the global outcome is not good. And this is even the case when people care about the global
outcome, because no regulation mechanism exists that creates a causal relationship between what
I want for the global good and what I do. So, for instance, I think that we should fly less.
But if I stay at home, there is not a single plane that is going to not take off because of me, right?
It's not going to have an influence, but I don't get from A to B.
So the way to implement this would basically be to have a government that
shares this idea that we should fly less, and then imposes a regulation that, for instance, makes flying more expensive and
gives incentives for
inventing other forms of transportation that
put less strain on the environment.
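A toy version of that game-theoretic picture, with invented payoffs: in the base "flying game" the Nash equilibrium is that everyone flies, and a government offset (here, a flight tax) moves the equilibrium to line up with the common good.

```python
# Government as an offset on the payoff matrix that shifts the Nash equilibrium.
import itertools

def nash(payoff):
    """Pure-strategy Nash equilibria of a symmetric 2-player game."""
    acts = (0, 1)  # 0 = fly, 1 = stay
    eq = []
    for a, b in itertools.product(acts, acts):
        if (payoff[a][b] >= payoff[1 - a][b] and
                payoff[b][a] >= payoff[1 - b][a]):
            eq.append((a, b))
    return eq

# payoff[my_action][their_action] for the row player (invented numbers)
base = [[2, 4],    # I fly:  both fly = 2, I fly alone = 4
        [1, 3]]    # I stay: they fly = 1, both stay = 3
tax = 3            # the government's offset on flying
taxed = [[base[0][0] - tax, base[0][1] - tax], base[1]]

print(nash(base))   # [(0, 0)] -> everyone flies
print(nash(taxed))  # [(1, 1)] -> staying becomes the equilibrium
```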
So there's so much optimism in so many things you describe, and yet there's the pessimism that you think our civilization is going to come to an end.
Well, that's not a hundred percent probability; nothing in this world is certain.
So what's the trajectory
out of self-destruction, do you think?
I suspect that in some sense, we are both too smart and not smart enough, which means we
are very good at solving near-term problems, and at the same time, we are unwilling to submit
to the imperatives that we would have to follow if we want to stick around.
So that makes it difficult. If you were unable to
solve everything technologically, you could probably work out how high the child mortality needs
to be to absorb the mutation rate, and how high the mutation rate needs to be to adapt to a
slowly changing ecosystemic environment. So you could, in principle, compute all these things
game-theoretically and adapt to them. But you cannot do this, because you are like me and you have children: you don't want them to die.
You will use any kind of medical insight to keep mortality low.
Even if it means that, over future generations, we have enormous genetic drift,
and most of us have allergies as a result of not being adapted
to the changes that we made to our food supply.
That's for now, technologically speaking, which is very young, you know, 300 years of industrial revolution.
We're very new to this idea.
So you're attached to your kids being alive and not dying for the good of society, but that might be a very temporary moment in time.
Yes, that we might evolve in our thinking.
So like you said, we're both smart and not smart enough.
We are probably not the first human civilization that has discovered technology that allows us to efficiently overgraze our resources. And with this overgrazing, at some point we think we can compensate for it, because once we have eaten all the grass, we will find a way to grow mushrooms.
Right. But it could also be that the ecosystems tip.
And so what really concerns me is not so much the end of the civilization,
because we will invent a new one.
But what concerns me is the fact that, for instance, the oceans might tip.
So for instance, maybe the plankton dies because of ocean acidification
and cyanobacteria take over.
And as a result, we can no longer breathe the atmosphere.
This would be really concerning.
So basically a major reboot of most complex organisms on earth.
And I think this is a possibility.
I don't know what the probability of this is, but it doesn't seem to be outlandish to me, if you look at the scale of the changes that we've already triggered on this planet.
And so Danny Hillis suggests that, for instance, we may be able to put chalk into the stratosphere
to limit solar radiation.
Maybe it works.
Maybe this is sufficient to counter the effects of what we've done.
Maybe it won't be.
Maybe we won't be able to implement it by the time it's relevant.
I have no idea how the future is going to play out in this regard.
It's just, I think it's quite likely that we cannot continue like this.
All our cousin species, the other hominids, are gone.
So the right step would be to rewind to before the industrial revolution and slow down, to try to contain the technological process that leads to the overconsumption of resources?
Imagine you get to choose, you have one lifetime.
Yes.
You get born into a sustainable agricultural civilization, 300, maybe 400 million people on the planet tops.
Or, before this, into some kind of nomadic species, with maybe a million or two million people.
And so you don't meet new people unless you give birth to them.
You cannot travel to other places in the world.
There is no internet.
There is no interesting intellectual tradition that reaches considerably deep.
So you would not discover Turing completeness, probably.
And so on.
So we wouldn't exist.
And the alternative is you get born into an insane world. One that is doomed to die
because it has just burned a hundred million years worth of trees in a single century.
Which one do you like? I think I like this one. It's a very weird thing, though, when you find yourself on the Titanic and you see this iceberg, and it looks like we are not going to miss it.
And a lot of people are in denial. And most of the counter arguments sound like denial to me.
They don't seem to be rational arguments.
And the other thing is we are born on this Titanic.
Without this Titanic, we wouldn't have been born.
We wouldn't be here.
We wouldn't be talking.
We wouldn't be on the internet.
We wouldn't do all the things that we enjoy.
And we are not responsible for this happening.
It's basically, if we had the choice, we would probably try to prevent it.
But when we were born, we were never asked whether we want to be born, in which society we
want to be born, what incentive structures we want to be exposed to.
We have relatively little agency in the entire thing.
Humanity has relatively little agency in the whole thing.
It's basically a giant machine that's tumbling down a hill, and everybody is frantically trying to push some buttons. Nobody knows what these buttons mean or what they connect to, and most of them are not stopping it from tumbling down the hill.
Is it possible that artificial intelligence will give us an escape hatch somehow?
So there's a lot of worry about existential threats
of artificial intelligence.
But what AI, and general forms of automation, also allow is the potential of extreme productivity growth that will perhaps, in a positive way, transform society, and that may allow us to return to the same kind of ideals of being closer to nature that are represented in hunter-gatherer societies, you know, not destroying the planet, not overconsuming, and so on. I mean, generally speaking, do you have hope that AI can help somehow?
I think it is not fun to be very close to nature until you have completely subdued nature.
So our idea of being close to nature means being close to agriculture, basically forests
that don't have anything in them that eats us.
See, I want to disagree with that. I think the niceness of being close to nature is being fully present, like when survival becomes not just your goal but your whole existence. I'm not just romanticizing; I can only speak for myself. I am self-aware enough to know that that is a fulfilling existence, more so than being in nature but not fighting for my survival.
I think fighting for your survival,
while being in the cold and in the rain and being hunted by animals
and having open wounds is very unpleasant.
Well, there's a contradiction in there.
Yes, I and you, just as you said, would not choose it.
But if I was forced into it, it would be a fulfilling existence.
Yes, if you are adapted to it; basically, if your brain is wired up in such a way that
you'll get rewards optimally in such an environment. And there is some evidence for this that for a certain degree of complexity,
basically, people are more happy in such an environment because it's what we
largely have evolved for. In between, we had a few thousand years in which I
think we have evolved for a slightly more comfortable environment.
So there is probably something like an intermediate stage in which people would be more happy than they would be if they had to fend for themselves in small groups in the forest and often die.
Versus something like this, where we now have basically a big machine, a big Mordor, in which we run in concrete boxes and press buttons on machines, and largely don't feel well cared for as the monkeys that we are.
So returning, not briefly, but returning to AI, let me ask a romanticized question: what is, to you, silly ape, the most beautiful or surprising idea in the development of artificial intelligence, whether in your own life or in the history of artificial intelligence, that you've come across?
If you build an AI, it probably can make models at an arbitrary degree of detail,
right, of the world. And then it would try to understand its own nature. It's tempting to think
that at some point when we have general intelligence, we will have competitions where we let the AIs wake up in different kinds of physical universes, and we measure how many movements of the Rubik's cube it takes until it has figured out what's going on in its universe, what it is, its own nature and its own physics, and so on. Right? So what if we exist in the memory of an AI that is trying to understand its own nature, and remembers Lex and Joscha sitting in a hotel, sparking some of the ideas that led to the development of general intelligence?
So we're a kind of simulation that's running in the AI system that's trying to understand
itself.
It's not that I believe that, but I think it's a beautiful idea.
Yeah. I mean, you kind of returned to this idea with the Turing test of intelligence: intelligence being the process of asking and answering what intelligence is. Why do you think there is an answer?
Why is there such a search for an answer?
So does there have to be like an answer?
You just described an AI system that's trying to, you know, understand itself. Is that a fundamental process of greater and greater complexity, greater and greater intelligence: the continuous trying to understand itself?
No, I think you will find that most people don't care about that, because they're well
adjusted enough to not care.
And the reason why people like you and me care about it probably has to do with the need
to understand ourselves.
It's because we are in fundamental disagreement
with the universe that we wake up in.
I look out of me and I see, oh my God, I'm caught in a monkey. What's that? Some people are annoyed by the government; I'm unhappy with the entire universe that I find myself in.
Oh, so you don't think that's a fundamental aspect
of human nature that some people are just suppressing?
Yeah, that they wake up shocked that they're in the body of a monkey.
No, there is clear adaptive value to not be confused by that.
Well, no, that's not so. There's clear adaptive value to, while fundamentally your brain is confused by that, creating an illusion, another layer of the narrative that tries to suppress that and instead says, you know, what's going on with the government right now is the most important thing, what's going on with my football team is the most important thing.
But it seems to me, it was a really interesting moment reading Ernest Becker's Denial of Death, this kind of idea that the fundamental thing from which most of the human mind springs is this fear of mortality: being cognizant of your mortality and the fear of that mortality, and then constructing illusions on top of that.
I guess I'm just pushing on it. Do you really not think it's possible that this worry about the big existential questions is actually fundamental to our existence, as the existentialists thought?
I think that this only plays a role as long as you don't see the big picture. The thing is that minds are software states, right? Software doesn't have identity. Software, in some sense, is a physical law.
But it feels like there's an identity.
I thought that, for this particular piece of software and the narrative it tells, that's a fundamental property of it, this assigning of identity.
Maintenance of the identity is not terminal,
it's instrumental to something else.
You maintain your identity so you can serve your meaning.
So you can do the things that you're supposed to do
before you die.
And I suspect that for most people, the fear of death is the fear of dying
before they're done with the things that they feel they have left to do, even though they cannot quite put their finger on
what that is.
Right.
But in the software world, to return to the question, then what happens after we die?
Why do you care? You will no longer be there. The point of dying is that you are gone.
Well, maybe I'm not.
It seems like if the mind is just a simulation that's constructing a narrative around some particular aspects of the quantum mechanical wave function world that we can't quite get direct access to, then the idea of mortality seems to be fuzzy as well.
Maybe there's not a clear answer.
The fuzzy idea is the one of continuous existence.
We don't have continuous existence.
How do you know that?
Because it's not computable.
Because you're saying it's not computable?
There is no continuous process. The only thing that binds you together with the Lex Fridman from yesterday is the illusion that you have memories about him.
So if you want to upload, it's very easy.
You make a machine that thinks it's you.
Because this is the same thing that you are: you are a machine that thinks it's you.
But that's immortality.
Yeah, but it's just a belief.
You can create this belief very easily.
Once you realize that the question whether you are immortal
or not depends entirely on your beliefs
and your own continuity.
But then you can be immortal by the continuity of the belief.
You cannot be immortal, but you can stop being afraid of your mortality,
because you realize you will never continuously exist in the first place.
Well, I don't know if I'd be more terrified or less terrified of that.
It seems like the fact that I existed...
Also, you don't know this state in which you don't have a self. You can turn off your self, you know.
I can't turn it off.
You can turn it off.
I can?
Yes.
And you can basically meditate yourself into a state where you are still conscious, where there are still things happening, where you know everything that you knew before, but you are no longer identified with or attached to anything. And this means that your self, in a way, dissolves.
There is no longer this person.
You know that this person construct exists in other states, and that it runs on this brain of Lex Fridman.
But it's not a real thing.
It's a construct.
It's an idea.
And you can change that idea.
And if you let go of this idea,
if you don't think that you are special,
you realize it's just one of many people, and it's not your favorite person, even, right? It's just one of many, and it's the one that you are doomed to control for the most part, and that is basically informing the actions of this organism, as a control model. And this is all there is, and you are somehow afraid that this control model gets interrupted or loses the identity of continuity.
Yeah, so I'm attached. I mean, there is a very popular, somehow compelling notion that there's no need to be attached to this idea of an identity. But that in itself could be an illusion that you construct. So the process of meditation, while popular as a way of getting under the concept of identity, could be just putting a cloak over it, just telling it to be quiet for the moment.
You know, I think that meditation is eventually just
a bunch of techniques that let you control attention. And when you can control attention, you can
get access to your own source code, hopefully not before you understand what you're doing.
And then you can change the way it works temporarily or permanently.
So, meditation gets you a glimpse at the source code, gets you under the... so, basically, control.
The entire thing is that you learn to control attention. So everything else is downstream from controlling attention.
And controlling the attention that's looking at the attention.
You only get attention in the parts of your mind that create heat, where you have a mismatch between the model and the results that are happening.
And so most people are not self-aware, because their control is too good. If everything works out roughly the way you want, and the only thing that doesn't work out is whether your football team wins, then you will mostly have models about these domains. It's only when, for instance, your fundamental relationships to the world around you don't work, because the ideology of your country is insane, and the other kids are not nerds and don't understand why you want to understand physics, and you don't understand why somebody would not want to understand physics.
So, we kind of brought up neurons in the brain as reinforcement learning agents. And there have been some successes, as you brought up, with Go, with AlphaGo and AlphaZero, with ideas of self-play, which I think are incredibly interesting: systems playing each other in an automated way, to improve by playing systems of a particular construct of a game that are a little bit better than themselves, and thereby improving continuously. All the competitors in the game are improving gradually, so being just challenging enough, and learning from the process of the competition. Do you have hope for that reinforcement learning process to achieve greater and greater levels of intelligence? So we talked about different problems in AI that need to be solved; is RL a part of that process of trying to create an AGI system?
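As a side note, the self-play loop described here can be sketched in a few lines. Everything in this snippet, the Agent class, the toy skill number, and the win-probability formula, is an illustrative assumption rather than anything from AlphaGo or AlphaZero; it only shows the shape of "play a slightly varied copy of yourself and keep what wins".

```python
import copy
import random

class Agent:
    def __init__(self, skill=0.0):
        self.skill = skill  # stand-in for the agent's parameters

    def perturbed(self, noise=0.1):
        # An opponent that is a slightly varied copy of the current agent.
        other = copy.deepcopy(self)
        other.skill += random.gauss(0.0, noise)
        return other

def play_game(a, b):
    # Toy game: higher skill wins more often; returns the winner.
    p_a_wins = 1.0 / (1.0 + 10 ** ((b.skill - a.skill) / 0.5))
    return a if random.random() < p_a_wins else b

agent = Agent()
for generation in range(1000):
    opponent = agent.perturbed()
    # Keep whichever version won; since higher skill wins more often,
    # the competitor the system faces is always just challenging enough,
    # and skill tends to drift upward over generations.
    agent = play_game(agent, opponent)

print(f"skill after self-play: {agent.skill:.2f}")
```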
So definitely forms of unsupervised learning,
but there are many algorithms that can achieve that.
And I suspect that ultimately, among the algorithms that work, there will be a class of them, or many of them, and they might have small differences, like in magnitude and efficiency.
But eventually what matters is the type of model
that you form.
And the types of models that we form right now are not sparse enough.
Sparse? What does it mean to be sparse?
It means that ideally every potential model state should correspond to a potential world state. So basically, if you vary states in your model, you always end up with valid world states. And our minds are not quite there.
So an indication is basically what we see in dreams.
The older we get, the more boring our dreams become.
Because we incorporate more and more constraints that we learned about how the world works.
So many of the things that we imagine to be possible as children turn out to be constrained
by physical and social dynamics.
And as a result, fewer and fewer things remain possible.
And it's not because our imagination scales back,
but the constraints under which it operates become tighter and tighter.
And the constraints under which our artificial neural networks operate are almost limitless, which means it's very difficult to get a neural network to imagine things that look real.
Right.
So I suspect part of what we need to do is we probably need to build dreaming systems.
I suspect that part of the purpose of dreams is to, similar to a generative adversarial network,
to learn certain constraints.
And then it produces alternative perspectives on the same set of constraints so you can
recognize it under different circumstances.
Maybe we have flying dreams as children because we recreate the objects that we know and
the maps that we know from different perspectives, which also means from a bird's eye perspective.
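For readers who want the analogy pinned down, here is a minimal generative-adversarial sketch of such a "dreaming system", in PyTorch. The one-dimensional "world states", the network sizes, and the learning rates are made-up stand-ins; the point is only the mechanism alluded to above: a generator learns the constraints of a data distribution by dreaming samples that a discriminator can no longer tell from real observations.

```python
import torch
import torch.nn as nn

def real_world_states(n):
    # Stand-in for observations that obey the world's constraints.
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = real_world_states(64)
    fake = G(torch.randn(64, 8))

    # Discriminator: learn to separate real states from "dreamt" ones.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: dream states that satisfy the learned constraints.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```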
So, I mean, we're doing that anyway, not only with our eyes closed when we're sleeping; aren't we just constantly running dreams and simulations in our mind as we try to interpret the environment?
I mean, sort of considering all the different possibilities,
or the way we interact with the environment seems like,
essentially, like you said, sort of creating a bunch
of simulations that are consistent with our expectations, with our previous
experiences, with the things we just saw recently. And through that hallucination process,
we are able to then somehow stitch together what actually we see in the world with the
simulations that match it well and thereby interpret it.
I suspect that your brain, or my brain, is slightly unusual in this regard, which is probably what got you into MIT: this obsession with constantly pondering possibilities and solutions to problems.
Oh, stop it.
I think I'm not talking about intellectual stuff.
I'm talking about just doing the kind of stuff it takes to walk and not fall.
Yes, that's mostly automatic.
Yes, but the process is, I mean, it's not complicated.
It's relatively easy to build a neural network that in some sense learns the dynamics.
The fact that we haven't done it right so far doesn't mean it's hard,
because you can see that a biological organism does it with relatively few neurons.
So basically, you build a bunch of neural oscillators that entrain themselves with the dynamics of your body, in such a way that the regulator becomes isomorphic in its model to the dynamics that it regulates. Then it's automatic, and it's only interesting in the sense that it captures attention when the system is off.
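A toy version of this entrainment idea, with purely illustrative frequencies and coupling strength (none of these numbers come from the conversation), might look like the following: a single phase oscillator, the "regulator", couples to a rhythmic "body" signal and locks onto it, after which no attention would be needed unless the error starts drifting.

```python
import math

dt = 0.01
body_freq = 2.0 * math.pi * 1.0   # limb swings at 1 Hz
osc_freq = 2.0 * math.pi * 1.3    # regulator starts off-frequency
coupling = 4.0

body_phase, osc_phase = 0.0, 0.5
for step in range(5000):
    body_phase += body_freq * dt
    # Kuramoto-style coupling: the oscillator is pulled toward the
    # body's phase, so the regulator entrains to what it regulates.
    osc_phase += (osc_freq + coupling * math.sin(body_phase - osc_phase)) * dt

# After entrainment, the phase difference is roughly constant;
# "attention" would only be needed if this error started drifting.
error = math.sin(body_phase - osc_phase)
print(f"residual phase error: {error:.3f}")
```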
See, but thinking of the kind of mechanism that's required to do walking as a controller, as a neural network, I think it's a compelling notion, but it discards quietly, or at least makes implicit, the fact that you need to have something like common sense reasoning to walk. It's an open question whether you do or not, but my intuition is that to act in this world, there's a huge knowledge base that's underlying it somehow. There's so much information of the kind we have never been able to construct in our neural networks and artificial intelligence systems, period. The amount of information required to act in this world humbles me. And I think saying that the neural level is going to accomplish it is missing the fact that we don't yet have a mechanism for constructing something like common sense reasoning. I mean, what's your sense? To linger on the idea of what kind of mechanism would be effective at walking: you said just a neural network, not maybe the kind we have but something a little bit better, would be able to walk easily. Don't you think it also needs to know a huge amount of knowledge that's represented under the flag of common sense reasoning?
How much common sense knowledge do we actually have?
Imagine that you are really hardworking throughout your whole life and you form two new concepts every half hour or so.
Yes.
You end up with something like a million concepts because you don't get that old.
So a million concepts, that's not a lot.
So it's not just a million concepts; I personally think it might be much more than a million.
So if you think just about the numbers, you don't live that long.
If you think about how many cycles your neurons have in your life, it's quite limited.
You don't get that old.
Yeah, but the powerful thing is that the concepts are probably deeply hierarchical in nature; the relations between them, as you described, are the key thing. So even if it's a million concepts, the graph of relations that's formed, with some kind of probabilistic relationships, that's what common sense reasoning is: the relationship between things.
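To give this picture something concrete, here is a tiny sketch of concepts as nodes with weighted relations and a naive spreading-activation query. The concepts, relation names, and strengths are all invented for illustration; nothing here is a claim about how the brain actually stores them.

```python
from collections import defaultdict

relations = defaultdict(list)

def relate(a, relation, b, strength):
    relations[a].append((relation, b, strength))

relate("cup", "is_a", "container", 0.9)
relate("cup", "holds", "liquid", 0.8)
relate("container", "affords", "carrying", 0.7)
relate("liquid", "can", "spill", 0.6)

def activate(concept, depth=2, weight=1.0):
    # Follow relations outward, multiplying strengths: a crude stand-in
    # for probabilistic inference over the relation graph.
    if depth == 0:
        return
    for relation, other, strength in relations[concept]:
        w = weight * strength
        print(f"{concept} --{relation}--> {other}  (activation {w:.2f})")
        activate(other, depth - 1, w)

activate("cup")  # e.g. infers that a cup affords carrying, and liquid can spill
```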
Yes. So in some sense, I think of the concepts as the address space for our behavior programs.
And the behavior programs allow us to recognize objects and interact with them, also mental
objects.
And a large part of that is the physical world that we interact with, which is this res extensa thing, which is basically navigation of information in space.
And basically, it's similar to a game engine.
It's a physics engine that you can use to describe and predict how things that look a particular way, that feel a particular way when you touch them, the proprioception, the auditory perception, and so on, how they work out.
So basically, the geometry of all these things.
And probably 80% of what our brain is doing is dealing with that, with this real-time simulation. By itself, a game engine is fascinating, but it's not that hard to understand what it's doing. And our game engines are already, in some sense, approximating the fidelity of what we can perceive.
So if we put on an Oculus Quest, we get something that is still
relatively crude with respect to what we can perceive, but it's also in the
same ballpark already.
It's just a couple of orders of magnitude away from saturating our perception with the complexity that it can produce.
So in some sense, it's reasonable to say that the computer that you can buy and put into your home is able to give a perceptual reality
that has a detail that is already in the same ballpark
as what your brain can process.
And everything else is ideas about the world, and I suspect that those are relatively sparse, as are the intuitive models that we form about social interaction. Social interaction is not so hard. It's just hard for us nerds, because we have our wires crossed, so we need to deduce it. But it is present in most social animals. So it's an interesting thing to notice that many domestic social animals, like cats and dogs, have better social cognition than children.
Right.
I hope so. I hope it's not that many concepts, fundamentally, that you need to exist in this world.
For me, it's more like "afraid so", because this idea that we only appear so complex to each other because we are so stupid is a little bit depressing.
No, to me that's inspiring, if we're indeed just as stupid as it seems.
The thing is, our brains don't scale, but the information processing systems that we build tend to scale very well.
Yeah, but one of the things that worries me is that the fact that the brain doesn't scale means that that's actually a fundamental feature of the brain. All the flaws of the brain, everything we see as limitations: perhaps the constraints on the system could be a requirement of its power, which is different from our current understanding of intelligent systems, where scale is the thing, especially with deep learning, especially with reinforcement learning. The hope behind OpenAI and DeepMind, all the major results, really have to do with huge compute.
It could also be that our brains are so small not just because they take up so much glucose in our body, like 20% of the glucose, so they don't arbitrarily scale. There are some animals, like elephants, which have larger brains than us and don't seem to be smarter.
Right. Elephants seem to be autistic. They have very, very good motor control and they're really good with details, but they really struggle to see the big picture. So you can make them recreate drawings stroke by stroke, they can do that, but they cannot reproduce a still life. They cannot make a drawing of a scene that they see; they will only be able to reproduce the line drawing, at least as far as I could see in the experiments. So why is that? Maybe smarter elephants would meditate themselves out of existence, because their brains are too large. So basically, the elephants that were not autistic didn't reproduce.
Yeah, so we have to remember that the brain is fundamentally
interlinked with the body in our human and biological system.
Do you think that AGI systems that we try to create
or greater intelligent systems would need to have a body?
So I think they should be able to make use of a body if you give it to them, but I don't think that they fundamentally need a body.
So I suspect if you can interact with the world
by moving your eyes and your head,
you can make controlled experiments.
And this allows you to have many orders of magnitude fewer observations in order to reduce the uncertainty in your models.
So you can pinpoint the areas in your models
where you're not quite sure, and you just move your head
and see what's going on over there.
And you get additional information. If you just have to use YouTube as an input and
you cannot do anything beyond this, you probably need just much more data. But we have much more data.
So if you can build a system that has enough time and attention to browse all of YouTube and
extract all the information that there is to be found, I don't think there's an obvious limit
to what it can do.
Yeah, but it seems that interactivity is a fundamental thing that the physical body allows you to do. But let me ask, on that topic: that's what a body is, allowing the brain to touch things and move things and interact with, whether the physical world exists or not, some interface to the physical world.
What about a virtual world?
Do you think we can do the same kind of reasoning, consciousness, intelligence
if we put on a VR headset and move over to that world?
Do you think there's any fundamental difference between the interface to the physical world
that is here in this hotel and if we were sitting in the same hotel in a virtual world?
The question is, does this non-physical world, this other environment, entice you to solve problems that require general intelligence? If it doesn't, then you probably
will not develop general intelligence. And arguably most people are not generally intelligent because
they don't have to solve problems that make them generally intelligent. And even
for us, it's not yet clear if we are smart enough to build AI and understand our own nature
to this degree, right? So it could be a matter of capacity. And for most people, it's in the
first place a matter of interest. They don't see the point, because the benefits of attempting this project are marginal, as you're probably not going to succeed in it, and the cost of trying to do it requires complete dedication of your entire life, right?
But it seems like the possibilities of what you can do in the virtual world are much greater than what you can do in the real world. So imagine a situation, maybe an interesting option for me: somebody came to me and offered, from now on, you can only exist in the virtual world. So you put on this headset, and we'll make sure to connect your body up in a way that when you eat in the virtual world, your physical body will be nourished in the same way. So we're aligning incentives between our common sort of real world and the virtual world.
But then the possibilities become much bigger. I could be other kinds of creatures, I could break the laws of physics as we know them, I could do a lot of things. The possibilities are endless, right? It's an interesting thought: what existence would be like, what kind of intelligence would emerge there, what kind of consciousness, what kind of maybe greater intelligence, even in me, Lex, even cognizant of my existence in this physical world. It's interesting to think how that existence would develop.
And the way virtuality and digitization of everything is moving, it's not completely out of the realm of possibility that some part of our lives, if not the entirety of it, will be lived in a virtual world, to a greater degree than we currently live on Twitter and social media and so on.
Does something draw you, intellectually or naturally, in terms of thinking about AI, to this virtual world where there are more possibilities?
I think that currently it's a waste of time to deal with the physical
world before we have mechanisms that can automatically learn how to deal with it. The
body gives you second order agency, but what constitutes the body is the things that you
can indirectly control. Third order are tools. And the second order is the things that are
basically always present, but you operate on them with first-order things, which are mental operators. And the zeroth order is, in some sense, the direct sense of what you're deciding. So you observe yourself initiating an action. There are features
that you interpret as the initiation of an action. Then you perform the operations that you
perform to make that happen. And then you see the movement of your limbs.
And you learn to associate those and thereby model your own agency over this feedback, right?
But the first feedback that you get is from this first order thing already.
Basically, you decide to think a thought and the thought is being thought.
You decide to change the thought and you observe how the thought is being changed.
And in some sense, this is, you could say, an embodiment already, right? And I suspect it's sufficient as an embodiment for intelligence.
And so it's not that important, at least at this time, to consider variations in the second
order.
Yes.
But you also need something that puts up resistance against you. If there's nothing to control, you cannot make models, right? There needs to be something that resists you. And by the way, your motivation is usually outside of your mind; it resists you. Motivation is what gets you up in the morning, even though it would be much less work to stay in bed. So it's basically forcing you to resist the environment, and it forces your mind to serve it, to serve this resistance to the environment. So in some sense, it is also putting up resistance against the natural tendency of the mind to not do anything.
Yeah, but some of the resistance, just like you described with motivation, is in the first order; it's in the mind. Some resistance is in the second order, like actual physical objects pushing against you, and so on. It seems that the second-order stuff in virtual reality could be recreated.
Of course.
But it might be sufficient that you just do mathematics,
and mathematics is already putting up enough resistance
against you.
So basically, just with an aesthetic motive, this could maybe be sufficient to form a type of intelligence.
It would probably not be a very human intelligence, but it might be one that is already general.
So, to mess with this zeroth order, maybe first order, what do you think about ideas of brain-computer interfaces? Again, returning to our friend Elon Musk and Neuralink, a company that's trying to, of course, cure diseases and so on in the near term, but the long-term vision is to add an extra layer, to basically expand the capacity of the brain by connecting it to the computational world. Do you think, one, that's possible? And how does it change the fundamentals of the zeroth order and the first order?
It's technically possible, but I don't see that the FDA would ever allow me to drill holes in my skull to interface my neocortex the way Elon Musk envisions. So at the moment, we can do horrible things to mice, but we're not able to do useful things to people, except maybe at some point down the line in medical applications.
So this thing that we are envisioning, which is recreational brain-computer interfaces, is probably not going to happen in the present legal system.
I love how I'm asking you out-there philosophical and engineering questions, and for the first time ever you jump to the legal, the FDA. There would be enough people crazy enough to have holes drilled in their skull to try a new type of brain-computer interface.
But also, if it works, the FDA will approve it. I mean, you know, I work a lot with autonomous vehicles. Yes, you can say that it's going to be a very difficult regulatory process of approving autonomous vehicles, but it doesn't mean autonomous vehicles are never going to happen.
So, no, they will totally happen as soon as we create jobs for at least two lawyers and
one regulator per car.
So, yes, lawyers, that's actually, like, lawyers are the fundamental substrate of reality.
It's a very weird system. It's not universal in the world. The law is very interesting software once you realize it, right? These statutes are in some sense streams of software, and it largely works by exception handling. You make decisions on the ground, and they get synchronized with the next level of structure as soon as an exception is being thrown. It escalates through exception handling. The process is very expensive, especially since it incentivizes the lawyers to produce work for lawyers.
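The software metaphor maps surprisingly directly onto exception-handling code. Here is a playful sketch, with an entirely invented "dispute" and made-up court levels, purely to illustrate the analogy:

```python
class Dispute(Exception):
    pass

def decide_on_the_ground(case):
    if case.get("contested"):
        raise Dispute(case)          # escalate: parties don't agree
    return f"settled locally: {case['name']}"

def lower_court(case):
    try:
        return decide_on_the_ground(case)
    except Dispute:
        raise                        # exception propagates upward

def appeals_court(case):
    try:
        return lower_court(case)
    except Dispute:
        # the higher level synchronizes the rule and sets precedent
        return f"precedent set for: {case['name']}"

print(appeals_court({"name": "fence height", "contested": False}))
print(appeals_court({"name": "parking dispute", "contested": True}))
```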
Yeah, so the exceptions are actually incentivized to fire often. But to step outside of lawyers: is there anything fundamentally interesting or insightful about the possibility of this extra layer of intelligence added to the brain?
Yes.
I do think so, but I don't think that you need technically invasive procedures to do so.
We can already interface with other people by observing them very, very closely and getting
in some kind of empathetic resonance.
And I'm aware that I'm not very good at this, but I notice that people are able to do this to some degree. And it basically means that we model an interface layer of the other person in real time. And it works despite our neurons being slow, because most of the things that we do are built on periodic processes; we just need to entrain ourselves with the oscillation that happens.
And if the oscillation itself changes slowly enough,
you can basically follow along.
Right.
But the bandwidth of the interaction... you know, it seems like you could do a lot more computation if there were a direct interface.
Of course. But the other thing is that the bandwidth that our brain, our own mind, is running on is actually quite slow.
So the number of thoughts that I can productively think in any given day is quite limited.
Perhaps if I had the discipline to write them down, and the speed to write them down, maybe it would be a book every day or so. But if you think about the computers that we can build, the magnitudes at which they operate, this would be nothing; it's something that they can put out in a second.
Well, I don't know. It's possible that the number of thoughts you have in your brain could be several orders of magnitude higher than what you're able to express through your fingers or through your voice.
Most of them are going to be repetitive, because I have to solve the same problems every day. When I walk, there are going to be processes in my brain that model my walking pattern and regulate it, and so on. But it's going to be pretty much the same every day.
But I'm talking about intellectual reasoning, thinking. Take the question: what is the best system of government? You sit down and start thinking about that. One of the constraints is that you don't have access to a lot of facts, a lot of studies. You always have to interface with something else to learn more, to aid in your reasoning
process. If you could directly access all of Wikipedia while trying to understand what the best form of government is, then every thought won't be stuck in a loop; every thought that requires some extra piece of information will be able to grab it really quickly. That's the possibility: if the bottleneck of breakthrough ideas is just being able to quickly access huge amounts of information, then connecting your brain to the computer could lead to totally new breakthroughs. You can think of mathematicians being able to, you know, just up the orders of magnitude of power in their reasoning about mathematics.
What if humanity has already discovered the optimal form of government through an evolutionary process?
There is an evolution going on.
And so what we discover is that maybe the problem of government doesn't have stable solutions
for us as a species because we are not designed in such a way that we can make everybody
conform to them. But there could be solutions that work under given circumstances, or that are best for a certain environment, and that depends on, for instance, the primary forms of ownership and the means of production.
So if the main means of production is land, then the forms of government will be regulated by the landowners, and you get a monarchy. If you also want to have a form of government in which, in a sense, you depend on some form of slavery, for instance where the peasants have to work very long hours for very little gain so that very few people can have plumbing, then maybe you need to promise them that they get paid the overtime in the afterlife, right? So you need a theocracy. And for much of human history in the West, we had a combination of monarchy and theocracy as our form of governance, right? At the same time, the Catholic Church implemented game-theoretic principles. I recently re-read Thomas Aquinas. It's very interesting to see this, because he was not a dualist. He was translating Aristotle in a particular way
for designing an operating system for the Catholic society.
And he says that basically, people are animals, in very much the same way as Aristotle envisions it, where the organism is basically a cybernetic control structure. And then he says that there are rational principles that humans can discover, and everybody can discover them, so they are universal. If you are sane, you should understand and submit to them, because you can rationally deduce them.
And these principles are, roughly: you should be willing to self-regulate correctly, which is intra-organismic. You should be willing to do correct social regulation, which is inter-organismic.
You should be willing to act on your models,
so you have skin in the game.
And you should have goal rationality,
you should be choosing the right goals to work on.
So basically these four rational principles: goal rationality he calls prudence, or wisdom; the correct social regulation is justice; the internal regulation is temperance; and the willingness to act on your models is courage.
And then he says that there are, additionally to these four cardinal virtues, three divine virtues. And these three divine virtues cannot be rationally deduced, but they reveal themselves by their harmony, which means if you assume them and you extrapolate what's going to happen, you will see that it makes sense. It's often been misunderstood as "God has to tell you that these are the things", as if there's something nefarious going on and the Christian conspiracy forces you to believe that some guy with a long beard discovered them.
But these principles are relatively simple. Again, for the high-level organization, for the resulting civilization that you form, you need a commitment to unity. So basically, you serve this higher, larger thing, this structural principle on the next level, and this is faith. Then there needs to be a commitment to shared purpose.
This is basically this global reward
that you try to figure out what that should be
and how you can facilitate this.
And this is love.
The commitment to shared purpose is the core of love.
You see this sacred thing that is more important
than your own organismic interests in the other.
And you serve this together and this is how you see
the sacred in the other.
And the last one is hope,
which means you need to be willing to act on that principle without getting rewards in the here and now, because it doesn't exist yet when you start out building the civilization, right? So you need to be able to do this in the absence of its actual existence, so it can come into being. So yeah, the way it comes into being is by you accepting those notions, and then you see these three divine concepts and you see them realized in the other. "Divine" is a loaded concept in our world, because we are outside of this cult and we are still scarred from breaking free of it.
But the idea is basically that we need to have a civilization that acts as an intentional agent, like an insect state. And we are not actually a tribal species; we are a state-building species.
And what enabled state building
is basically the formation of religious states
and other forms of rule-based administration
in which the individual doesn't matter as much
as the rule or the higher goal.
We got there via the question, what's the optimal form of governance?
So I don't think that Catholicism is the optimal form of governance, because it's obviously on the way out, right? For the present type of society that we are in, religious institutions don't seem to be optimal to organize it.
So what we discovered, and what we live in right now in the West, is democracy. And democracy is the rule of oligarchs, that is, the people that currently own the means of production, but administered not by the oligarchs themselves, because there's too much disruption, right? We have so much innovation that in every generation we invent new means of production, and corporations die, usually after 30 years or so, and something else takes the leading role in our societies. So it's administered by institutions.
And these institutions themselves are not elected,
but they provide continuity.
And they are led by electable politicians.
And this makes it possible that you can adapt to change
without having to kill people.
So you can, for instance, if people think that the current government is too corrupt or is not up to date, just elect new people. Or, if a journalist finds out something inconvenient about the institution and the institution has no plan B, like in Russia, the journalist has to die. This is what happens when you run society via the deep state. So ideally, you have an administration layer that you can change if something bad happens, right? So you will have continuity in the whole thing.
And this is the system that we came up with in the West. And the way it's set up in the US is largely a result of low-level models, so it's mostly just second-, third-order consequences that people are modeling in the design of these institutions. It's a relatively young society that doesn't really take care of the downstream effects of many of the decisions that are being made. And I suspect that AI can help us with this in a way, if you can fix the incentives.
The society of the US is a society of cheaters. Cheating is basically indistinguishable from innovation, and we want to encourage innovation.
Can you elaborate on what you mean by cheating?
It's basically, people do things that they know are wrong. It's acceptable in this society to do things that you know are wrong, to a certain degree. You can, for instance, suggest some non-sustainable business models and implement them.
Right, but you're always pushing the boundaries. I mean...
Yes. And this is seen as a good thing, actually.
Yes.
And this is different from other societies.
So, for instance, social mobility is an aspect of this: social mobility is the result of individual innovation that would not be sustainable at scale for everybody else.
Right.
Normally, you should not go up, you should go deep, right? We need bakers, and we need very good bakers. But in a society that innovates, maybe you can replace all the bakers with a really good machine.
Right.
And that's not a bad thing.
And it's a thing that made the US so successful, right?
But it also means that the US is not
optimizing for sustainability, but for innovation.
And as this evolutionary process is unrolling, it's not obvious that it will be better in the long term.
It has side effects.
So basically, if you cheat, you will have a certain layer
of toxic sludge that covers everything
that is a result of cheating.
And we have to unroll this evolutionary process to figure out if these side effects are so
damaging that the system is horrible or if the benefits actually outweigh the negative
effects.
How did we get to which system of government is best? I'm trying to trace back the last five minutes.
I suspect that we can find a way back to AI by thinking about the way in which our brain has to organize itself.
In some sense, our brain is a society of neurons, and our mind is a society of behaviors, and these need to organize themselves into a structure that implements regulation. And government is social regulation. We often see government as the manifestation of power or local interests, but it's actually a platform for negotiating the conditions of human survival. And this platform emerges over the current needs and possibilities
in the trajectory that we have. So given the present state, there are only so many options on how we can move
into the next state without completely disrupting everything.
And we mostly agree that it's a bad idea to disrupt everything because it will
endanger our food supply for a while and the entire infrastructure and fabric of society.
So we do try to find natural transitions.
And there are not that many natural transitions available at any given point.
What do you mean by natural transitions?
We try not to have revolutions, if we can help it.
Right.
So, speaking of revolutions, and the connection between government systems and the mind: you've also said that in some sense becoming an adult means you take charge of your emotions. Maybe you never said that; maybe I just made that up. But in the context of the mind, what's the role of emotion? First of all, what is emotion, and what's its role?
It's several things. Psychologists often distinguish between emotion and feeling; in everyday parlance, we don't. I think that an emotion is a configuration of the cognitive system, and that's especially true for the lowest level, for the affective state. When you have an affect, it's the configuration of certain modulation parameters like arousal, valence, your attentional focus, whether it's wide or narrow, interoception versus exteroception, and so on. All these parameters together put you in a certain way of relating to the environment and to yourself, and this is, in some sense, an emotional configuration.
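One way to picture "an affect as a configuration of modulation parameters" is as a small struct of dials over the cognitive system. The parameter names below follow the ones just mentioned; the numeric ranges, presets, and the toy behavior function are made-up illustrations, not a model from the conversation.

```python
from dataclasses import dataclass

@dataclass
class Affect:
    arousal: float            # 0.0 (calm) .. 1.0 (highly activated)
    valence: float            # -1.0 (unpleasant) .. 1.0 (pleasant)
    attention_width: float    # 0.0 (narrow focus) .. 1.0 (wide focus)
    interoception: float      # 0.0 (world-directed) .. 1.0 (body-directed)

# The same system under different modulator settings "feels" different:
anxiety = Affect(arousal=0.8, valence=-0.6, attention_width=0.2, interoception=0.9)
calm_curiosity = Affect(arousal=0.3, valence=0.4, attention_width=0.8, interoception=0.2)

def relate_to_environment(affect: Affect) -> str:
    # Crude illustration: the configuration shapes how the system relates
    # to the environment and to itself, before any object is involved.
    mode = "scanning broadly" if affect.attention_width > 0.5 else "locked onto one thing"
    focus = "the body" if affect.interoception > 0.5 else "the world"
    return f"{mode}, attending mostly to {focus}"

print(relate_to_environment(anxiety))
print(relate_to_environment(calm_curiosity))
```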
In the more narrow sense, an emotion is an affective state that has an object.
The relevance of that object is given by motivation, and motivation is a bunch of needs that
are associated with rewards,
things that give you pleasure and pain.
And you don't actually act on your needs,
you act on models of your needs.
Because when the pleasure and pain manifest, it's too late; you've already done everything. So you act on expectations of what will give you pleasure and pain, and these are your purposes.
The needs don't form a hierarchy,
they just coexist and compete.
And your organism has to, or your brain
has to find a dynamic homeostasis
between them, but the purposes need to be consistent.
So you basically can create a story for your life and make plans.
And so we organize them all into hierarchies.
And there is not a unique solution for this.
Some people eat to make art, and other people make art to eat.
And they might end up doing the same things, but they cooperate in very different ways, because their ultimate goals are different. And we cooperate based on shared purpose; everything that is not cooperation on shared purpose is transactional.
I don't think I understood that last piece about achieving the homeostasis. Are you distinguishing between the experience of emotion and the expression of emotion?
Of course. So the experience of emotion is a feeling.
And in this sense, what you feel is an appraisal that your perceptual system has made of the situation at hand.
And it makes this based on your motivation.
And on your estimates; not yours, but those of the subconscious, geometric parts of your mind that assess the situation in the world with something like a neural network. And this neural network is making itself known to the symbolic parts of your mind, to your conscious attention, by mapping the assessments as features into a space. So what you will feel about your emotion is a projection, usually into your body map. You might feel anxiety in your solar plexus, and you might feel it as a contraction, which is all geometry, right? Your body map is the space that is always instantiated and always available. So it's a very obvious cheat, when the non-symbolic parts of your brain try to talk to the symbolic parts of your brain, to map the
feelings into the body map. And then you perceive them as pleasant and unpleasant, depending on whether
the appraisal has a negative or positive valence. And then you have different features of them that
give you more knowledge about the nature of what you're feeling. So, for instance, when you feel
connected to other people, you typically feel this in the region around your heart, and you feel it as an expansive feeling in which you're reaching out, right? And it's very intuitive to encode it like this; that's why it's encoded like this for most people. It's a code in which the non-symbolic parts of your mind
talk to the symbolic ones.
And then the expression of emotion is the final step, which could be gestural or visual and so on. That's part of the communication.
It probably evolved as part of adversarial communication. As soon as you started to observe the facial expression and posture of others to understand what emotional state they're in, others started to use this as signaling, and also to subvert your model of their emotional state.
So now we look at the inflections, at the difference from the standard face that people are expected to make in a given situation. When you are at a funeral, everybody expects you to make a solemn face. But the solemn face doesn't express whether you're sad or not; it just expresses that you understand what face you have to make at a funeral. Nobody should know whether you are triumphant. So when you try to read the emotion of another person, you try to look at the delta between a truly sad expression and the thing that is animating this face behind the curtain.
So the interesting thing is, having done this podcast with the video component, one of the things I've learned is that, well, I'm Russian and I just don't know how to express emotion on my face. One, I see that as weakness, but whatever. People look to me after you say something; they look to my face to help them see how they should feel about what you said. Which is fascinating, because they'll often comment on why I looked bored, or why I particularly enjoyed that part, or whatever. It's kind of interesting: you're basically saying a bunch of brilliant things, but I'm part of the play in which you're the key actor, and by making my facial expressions, I'm telling the narrative of what the big point is. It makes me cognizant that I'm supposed to be making facial expressions. Even this conversation is hard, because my preference would be to wear a mask or sunglasses, so that I could just listen.
Yes. I understand this, because it's intrusive to interact with others this way.
And basically, Eastern European societies have a taboo against that, especially Russia, the further you go to the east. And in the US, it's the opposite: you're expected to be hyper-animated in your face, and you're also expected to show positive affect. And if you show positive affect without a good reason in Russia, people will think you are a stupid, unsophisticated person.
Exactly. And here, positive affect without reason either gets appreciated or goes unnoticed.
No, it's the default; it's expected. Everything is amazing. Have you seen the Lego Movie?
No.
There was a diagram where somebody mapped the appraisals that exist in the US and in Russia onto a bell curve. The lowest 10% in the US are "it's a good start", and everything above the lowest 10% is "amazing". For Russians, everything below the top 10% is "terrible", everything except the top percent is "I don't like it", and the top percent is "eh, so-so".
Yeah, it's funny, but it's kind of true.
Yeah.
But there's a deeper aspect to this.
It's also how we construct meaning in the US.
Usually you focus on the positive aspects
and you just suppress the negative aspects.
In our Eastern European traditions, we emphasize the fact that if you hold something above the waterline, you also need to put something below the waterline, because existence by itself is at best neutral. Right, that's the basic intuition. If it's neutral, it means that suffering is the default. There are moments of beauty,
but these moments of beauty are inextricably linked to the reality of suffering. And to not
acknowledge the reality of suffering means that you are really stupid and unaware of the fact that basically every conscious being
spends most of the time suffering.
Yeah, you just summarized the ethos of the Eastern Europe.
Yeah, most of life is suffering
with an occasional moment of beauty.
And if your facial expression is not acknowledging the abundance of suffering in the world and in existence itself, then you must be an idiot.
It's an interesting thing when you raise children in the US and you in some sense preserve the
identity of the intellectual and cultural traditions that are embedded in your own families.
And your daughter asks you about Ariel the mermaid, and asks you why Ariel is not allowed to play with the humans. And you tell the truth: she's a siren. Sirens eat people. You don't play with your food. It does not end well. And then you tell the original story, which is not the one by Andersen, which is the romantic one. There's a much darker one, the original story, which is the Undine story.
What happened?
So, Undine is a mermaid, or a water woman.
She lives on the ground of a river
and she meets this prince and they fall in love, and the prince really, really wants to be with her. And she says: okay, but the deal is, you cannot have any other woman. If you marry somebody else, even though you cannot be with me, because obviously you cannot breathe underwater and I have other things to do than managing your kingdom with you up there, you will die.
And eventually, after a few years, he falls in love with some princess on the surface and marries her. And she shows up and quietly goes into his chamber, and nobody is able to stop her or willing to do so, because she is fierce. And she comes quietly and sad out of his chamber, and they ask her: what has happened? What did you do?
And she said, I kissed him to death.
Oh, damn.
And you know the Andersen story, right? In the Andersen story, the mermaid is playing with the prince that she saves, and she falls in love with him, and she cannot live up there. So she is giving up her voice and her tail for a human-like appearance, so she can walk among the humans. But this guy does not recognize that she is the one that he should marry. Instead, he marries somebody who has a kingdom and economical and political relationships to his own kingdom and so on, as he should. And she dies.
Yeah. Instead, Disney's The Little Mermaid story has a little bit of a happy ending. That's the Western, that's the American way.
My own problem with this is, of course, that I read Oscar Wilde before I read the other things. So I'm indoctrinated, inoculated with this romanticism. And I think that the mermaid is right: you sacrifice your life for romantic love. That's what you do. Because if you are confronted with serving the machine and doing the obviously right thing under the economic and social and other human incentives, that's wrong. You should follow your heart.
So do you think suffering is fundamental to happiness along these lines?
No, suffering is the result of caring about things that you cannot change.
And if you are able to change what you care about to those things that you can change, you will not suffer.
But would you then be able to experience happiness?
Yes, but happiness itself is not important. Happiness is like a cookie.
When you are a child, you think cookies
are very important and you want to have all the cookies in the world. You look forward to being
an adult because then you have as many cookies as you want, right? Yes. But as an adult, you realize
a cookie is a tool. It's a tool to make you eat vegetables. And once you eat vegetables,
anyway, you stop eating cookies for the most part because otherwise you will get diabetes and
will not be around for your kids.
Yes, but then the cookie, the scarcity of the cookie, if scarcity is enforced nevertheless, the pleasure comes from the scarcity.
Yes, but happiness is a cookie that your brain bakes for itself.
It's not made by the environment.
The environment cannot make you happy.
It's your appraisal of the environment that makes you happy.
And if you can change your appraisal of the environment, which you can learn to do, then you can create arbitrary states of happiness.
And some meditators fall into this trap.
So they discover the room, the basement room in their brain where the cookies are made, and they indulge and stuff themselves.
And after a few months, it gets really old
and the big crisis of meaning comes.
Because they saw before that their unhappiness
was the result of not being happy enough.
So they fixed this, right?
They can release the neurotransmitters at will if they train. And then the crisis of meaning pops up at a deeper layer. And the question is: why do I live? How can I make a sustainable civilization
that is meaningful to me? How can I insert myself into this? And this was the problem that they couldn't solve in the first place.
But at the end of all this, let me then ask that same question: what is the answer to that?
What could the possible answer to the meaning of life be? What could an answer be? What is it to you?
I think that if you look at the meaning of life, you look at what the cell is. Life is the cell.
Right? This cell?
Yes, or this principle, the cell.
It's this self-organizing thing that can participate in evolution.
In order to make it work, it's a molecular machine.
It needs a self-replicator and an entropy extractor and a Turing machine.
If any of these parts is missing, you don't have a cell and it is not living, right?
And life is basically the emergent complexity over that principle.
Once you have this intelligent supermolecule, the cell, there is very little that you cannot make with it. It's probably the optimal computronium, especially in terms of resilience, right? It's very hard to sterilize a planet once it's infected with life. So this active function of these three components, this principle of the cell, is present in the cell, it's present in us, and it's just...
We are just an expression of the cell.
A certain layer of complexity in the organization of cells.
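To make that three-part definition concrete, here is a toy formalization in Python. This is only a sketch of the claim as stated in the conversation; the class and field names are illustrative inventions, and the point is simply that the definition is a conjunction: remove any one component and the composite no longer qualifies as living.

from dataclasses import dataclass

@dataclass
class Cell:
    # The three components named above; all are required for life.
    self_replicator: bool    # can copy its own blueprint
    entropy_extractor: bool  # can harvest free energy from the environment
    turing_machine: bool     # can store and execute a program

    def is_living(self) -> bool:
        # Life is the conjunction of all three parts.
        return self.self_replicator and self.entropy_extractor and self.turing_machine

print(Cell(True, True, True).is_living())   # True: a complete cell
print(Cell(True, False, True).is_living())  # False: a replicator with a program but no metabolism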
So in a way, it's tempting to think of the cell as a von Neumann probe.
If you want to build intelligence on other planets, the best way to do this is to infect
them with cells.
And wait long enough, and with a reasonable chance, the stuff is going to evolve into an information processing principle that is general enough to become sentient.
Well, that idea is very akin to the same dream
and beautiful ideas that are expressed by cellular automata in their most simple mathematical form: if you just inject the system with some basic mechanisms of replication and so on, basic rules, amazing things will emerge.
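That intuition is easy to demonstrate. Below is a minimal sketch in Python using Conway's Game of Life, one classic cellular automaton; the grid size, step count, and random seeding are arbitrary choices of this sketch rather than anything from the conversation, which only appeals to the general principle that simple local rules can produce emergent structure.

import random

SIZE = 20  # side length of a toroidal (wrap-around) grid

def step(grid):
    # One synchronous update of Conway's rules: a live cell survives with
    # 2 or 3 live neighbors; a dead cell becomes alive with exactly 3.
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            neighbors = sum(
                grid[(y + dy) % SIZE][(x + dx) % SIZE]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            new[y][x] = 1 if neighbors == 3 or (grid[y][x] == 1 and neighbors == 2) else 0
    return new

# Start from a random soup and let structure emerge.
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(50):
    grid = step(grid)
print("\n".join("".join("#" if c else "." for c in row) for row in grid))

Run it a few times: out of the random soup, stable blocks, oscillating blinkers, and occasionally moving gliders settle out, which is that basic-rules, amazing-emergence idea in its most distilled form.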
And the cell is able to do something that James Tardy calls existential design. He points
out that in technical design, we go from the outside and we work in a highly controlled
environment in which everything is deterministic like our computers, our labs, or our engineering
workshops.
And then we use this determinism to implement a particular kind of function that we dream up,
and that seamlessly interfaces with all the other deterministic functions that we already have in our world.
So it's basically from the outside in. And biological systems design from the inside out: a seed will become a seedling by taking some of the relatively unorganized matter around it and turning it into its own structure, and thereby subduing the environment.
And cells can cooperate if they can rely on other cells having a similar organization
that is already compatible.
But unless that's there, the cell needs to divide to create that structure by itself.
So it's a self-organizing principle that works on a somewhat chaotic environment.
And the purpose of life, in a sense, is to produce complexity. And complexity allows you to harvest negentropy gradients that you couldn't harvest without the complexity.
And in this sense, intelligence and life are very strongly connected, because the purpose of intelligence is to allow control under conditions of complexity. So basically you shift the boundary between the ordered systems and the realm of chaos. You build bridgeheads into the chaos with complexity.
And this is what we are doing. This is not necessarily a deeper meaning. I think the meaning that we are looking for is the one that we have priors for; outside of the priors, there is no meaning. Meaning only exists if a mind projects it.
Right? The narrative.
That is probably civilization.
I think that what feels most meaningful to me is to try to build and maintain a sustainable
civilization.
And taking a slight step out, outside of that: we talked about a man with a beard and God, but something, some mechanism perhaps, must have planted the seed, the initial seed of the cell. Do you think there is a God? What is a God? And what would that look like?
So, if there was no spontaneous abiogenesis, in the sense that the first cell formed by some happy random accident where the molecules just happened to be in the right constellation to each other...
But there could also be a mechanism that allows for the random... I mean, it's like turtles all the way down. There seems to be, there has to be a head turtle at the bottom.
Let's consider something really wild. Imagine: is it possible that a gas giant could become intelligent?
What would that involve?
So imagine you have vortices that spontaneously emerge on the gas giant, like big storm systems that endure for thousands of years.
And some of these storm systems
produce electromagnetic fields because some of the clouds
are ferromagnetic or something.
And as a result, they can change how certain clouds react rather than other clouds.
And thereby produce some self-stabilizing patterns that eventually lead to regulation, feedback loops, nested feedback loops, and control.
So imagine you have such a thing that basically has
emergent self-sustaining, self-organizing complexity.
And at some point, this wakes up and realizes, basically Lem's Solaris: I am a thinking planet. But I will not replicate, because I cannot recreate the conditions of my own existence somewhere else. I'm just basically an intelligence that has spontaneously formed because it could. And now it builds a von Neumann probe. And the best von Neumann probe for such a thing might be the cell. So maybe it will, because it's very, very clever and very enduring, create cells and send them out. And one of them has infected our planet.
And I'm not suggesting that this is the case, but it would be compatible with the panspermia hypothesis and with my intuition that abiogenesis is very unlikely. It's possible, but you probably need to roll the cosmic dice very often, maybe more often than there are planetary surfaces. I don't know.
So God is just a system that's large enough, that allows for randomness.
Now, I don't think that God has anything to do with creation. I think it's a mistranslation of the Talmud into the Catholic mythology. I think that Genesis is actually the childhood memories of a god. It's basically a mind that is remembering how it came into being. And we typically interpret
Genesis as the creation of a physical universe by a supernatural being. And I think when you read it, there is light and darkness that is being created, and then you discover sky and ground, create them, you construct the plants and the animals, and you give everything names and so on. That's basically cognitive development.
It's a sequence of steps that every mind has to go through when it makes sense of the world. And when you have children, you can see how initially they distinguish light and darkness.
And then they make out directions in it and they discover sky and ground and they discover
the plants and the animals and they give everything their name.
And it's a creative process that happens in every mind because it's not given, right?
Your mind has to invent these structures to make sense of the patterns on your retina.
Also, if there was some big nerd who set up a server and runs this world on it, this
would not create a special relationship between us and the nerd.
This nerd would not have the magical power to give meaning to our existence, right?
So this equation of a creator God with the God of meaning is a sleight of hand. You shouldn't do it. The other one that is done in Catholicism is the equation of the first mover, the prime mover of Aristotle, which is basically the automaton that runs the universe. Aristotle says, if things are moving, and things seem to be
moving here, something must move them, right? If something moves them, something must move the
thing that is moving it. So there must be a prime mover. This idea to say that this prime mover is a supernatural being is complete nonsense.
Right, it's an automaton in the simplest case. So we have to explain the enormity that this automaton exists at all. But again, we don't have any possibility to infer anything about its properties, except that it's able to produce change and information.
Right. So there needs to be some kind of computational principle. This is all there is.
But to say this automaton is identical with the creator, the first cause, or with the thing that gives meaning to our life, is a confusion.
Now, I think that what we perceive is the higher
being that we are part of. And the higher being that we are part of is the civilization.
It's the thing in which we have a similar relationship
as the cell has to our body.
And we have this prior, because we have evolved
to organize in these structures.
So basically the Christian God in its natural form,
without the mythology, if you undress it,
is basically the platonic form of a civilization.
It's the ideal, it's the...
Yes, it's this ideal that you try to approximate when you interact with others,
not based on your incentives, but on what you think is right.
Wow, we covered a lot of ground, and we're left with one of my favorite lines, and there are many, which is: happiness is a cookie that the brain bakes itself. It's been a huge honor and a pleasure to talk to you. I'm sure our paths will cross many times again. Joscha, thank you so much for talking today. I really appreciate it.
Thank you, Lex. It was so much fun. I enjoyed it.
Awesome.
Thanks for listening to this conversation with Joscha Bach, and thank you to our sponsors, ExpressVPN and CashApp. Please consider supporting this podcast by getting ExpressVPN at expressvpn.com slash lexpod and downloading CashApp and using code LexPodcast.
If you enjoy this thing, subscribe on YouTube, review it with 5 stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Friedman.
And yes, try to figure out how to spell it without the E.
And now, let me leave you with some words of wisdom from Joscha Bach.
If you take this as a computer game metaphor, this is the best level for humanity to play.
And this best level happens to be the last level, as it happens against the backdrop of
a dying world.
But it's still the best level.
Thank you for listening and hope to see you next time.