The Jordan B. Peterson Podcast - 387. What You See and Feel is Not Reality | Dr. Donald Hoffman
Episode Date: October 12, 2023. Dr. Jordan B Peterson and cognitive neuroscientist Dr. Donald Hoffman discuss what we know as reality, why space time is now being considered a “doomed” framework, and how consciousness can be understood as a vast probability space within which we orient ourselves. Hoffman received a Bachelor of Arts degree in quantitative psychology from the University of California at Los Angeles, and then his Doctorate of Philosophy in computational psychology at MIT. He briefly worked as a research scientist at the MIT Artificial Intelligence Lab, before taking on the role of assistant professor at the University of California at Irvine. He is now a professor in the department of Cognitive Sciences. He has written four books (below), on the topics of human vision, perception, consciousness, and the effects of/reasons for evolution on each. He is also a key proponent of MUI (Multimodal User Interface) theory, which states that "perceptual experiences do not match or approximate properties of the objective world, but instead provide a simplified, species-specific, user interface to that world." Hoffman argues that conscious beings have not evolved to perceive the world as it actually is but have evolved to perceive the world in a way that maximizes "fitness payoffs.” - Links - For Dr. Donald Hoffman: "The Case Against Reality: Why Evolution Hid the Truth from Our Eyes" (Book) https://www.amazon.com/Case-Against-Reality-Evolution-Truth-ebook/dp/B07JR1FDXH/ref=sr_1_1?crid=1OMOD0ZGPHCT9&keywords=the+case+against+reality&qid=1697060197&s=digital-text&sprefix=the+case+against%2Cdigital-text%2C189&sr=1-1 "Visual Intelligence: How We Create What We See" (Book) https://www.amazon.com/Visual-Intelligence-How-Create-What/dp/0393319679/ref=sr_1_1?crid=QZBIXXULF7VC&keywords=Visual+intelligence+hoffman&qid=1697060273&s=digital-text&sprefix=visual+intelligence+hoffman%2Cdigital-text%2C138&sr=1-1-catcorr Dr. Hoffman on X https://twitter.com/donalddhoffman?lang=en Dr. Hoffman's Ted Talk https://www.ted.com/talks/donald_hoffman_do_we_see_reality_as_it_is?language=en
Transcript
Hello everyone watching and listening. Today I'm speaking with author and cognitive neuroscientist
Dr. Donald Hoffman. We discuss Dr. Hoffman's research on what we know as reality,
why space time itself is now considered by many a doomed framework of interpretation,
and how consciousness might be best understood as a vast probability space within which we orient
ourselves. Hello Dr. Hoffman, it's very good to see you. I've been interested in your
theory for a long time, partly because I'm quite attracted by the doctrine of pragmatism, which was
really part of what I tried to discuss with Sam Harris many, many times. And it seems that your work
bears, well, it's a broad general interest, but it also bears on specific interests of mine,
because I've always been curious about the relationship between Darwinian concepts of truth,
and let's say the concepts of truth put out by the more Newtonian, say, objective materialists.
They don't seem commensurate to me. And so would you start by explaining your theory, your
broad theory of perception? I know that'll take a while, but it's a tricky theory. So do
you want to lay it out for us to begin with?
Most Darwinian scholars would agree that evolution shapes sensory systems to guide adaptive
behavior, that is to keep organisms alive long enough to reproduce.
But many also believe that in addition, evolution shapes us to see
reality as it is, at least some aspects of reality that we need for survival.
So that's common among my colleagues studying evolution by natural selection;
they'll say, yeah, seeing the truth will make you more fit in many cases. And so even though
Darwin says evolution shapes sensory systems just to keep you alive long enough to reproduce,
many people think that seeing aspects of reality as it is
will also make you more fit and make you more likely
to reproduce.
So I decided with my graduate students a few years ago
to look into this. There are tools:
Darwin's theory is now a mathematical theory.
We have the tools of evolutionary game theory
that John Maynard Smith and others
invented in the 1970s.
And so it's a wonderful theory.
So Darwin's ideas can now be tested with mathematical precision.
And I thought that maybe what we would find is that evolution tries to do things on the
cheap.
It doesn't, you know, if you have to spend more calories, then you have to go out and
kill something to get those calories.
And so there are selection pressures to do things cheaply and quickly, with heuristics.
And so I went into it thinking that maybe that would make it so that many sensory systems
didn't see all of the truth, but I just wanted to look and see what would happen.
To my surprise, when we actually started studying this,
there came up principles that made me realize
that the chance that we see reality as it is
on Darwinian principles is essentially zero.
And that was a stunning...
Why zero? That's a very low number. So why zero?
That's right. So, I can explain; it's a bit technical. In evolutionary theory,
in the evolutionary game presentation of it, you think of evolution as like a game,
and in a game, you're competing with other players and you're trying to get points.
Now, in the game of evolution, the way it's modeled is that there are these fitness payoff functions.
And those are sort of the points that you can get for being in certain states and taking
certain actions.
And so these fitness payoffs are what guide the selection.
They guide the selection. They guide the evolution. And so we began to analyze those fitness payoffs,
right? To be very concrete about a fitness payoff: suppose that you're
a lion and you want to mate. Well, a steak won't be very useful for you for that process, right?
You'll have very little fitness payoff for a steak if you're a lion looking to mate. If you're a lion that's looking to eat and
you're hungry, then of course this steak will have high fitness payoffs for you. So the
fitness payoff depends on the organism, like a lion versus, say, a cow; a steak has no fitness
payoff for any cow, for any purposes. Quite the contrary.
Quite the contrary. Quite the contrary.
That's right.
So, the fitness payoff depends on the organism,
its state, hungry versus sated, for example, and the action, feeding, fighting, fleeing,
and mating, for example.
So, these fitness payoffs are functions of the world.
They depend on the state of the world and its structure.
And the organism, its state, and its action.
So, they're complicated functions.
And in some sense, you could think that there's just effectively one fitness payoff function.
There's this one big fitness payoff function, which handles the world and all possible
organisms, all possible states and actions.
So there's a big fitness payoff.
But we can think about it as many fitness payoffs if we want to as
well.
The question is, suppose then, so this fitness payoff function takes as its starting point, the state of the world, right?
That's the domain of the function.
And the range of the function might be the fitness payoff value, say from zero to a hundred.
Zero means you lose, 100 means you did as well
as you could possibly do, so zero to 100, say.
So it's a function from the state of the world,
cross organism, cross its state and action,
into these numbers, zero to 100 or zero to 1000,
whatever you want to use.
So the question then is, does this function
preserve information about the structure of the world?
But this is the function that's guiding the evolution of our sensory systems. So the question is:
is the function what mathematicians call a homomorphism, a structure-preserving map?
So, for example, the world might have an order relationship, like one is less than two is less than three, like a distance or a distance metric or something like that.
Then to be a homomorphism would mean that if things were in a certain order in the world, the function would take them into that same order, or some homomorphic image of that order, in the states
of the payoffs.
So that's the technical question.
What is the probability that a generically chosen payoff function will be a homomorphism
of a metric or a total order or partial order or topology or measurable structure?
Any structure that you can imagine the world might have, you can ask, what is the probability
that a generically chosen payoff function will preserve it?
If it doesn't preserve it, there's no information in the payoff function to shape sensory systems
to see that truth, to see that structure of the world.
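To make the question concrete, here is one hedged way to write it down; the symbols W, O, S, and A are illustrative labels, not a quotation of Hoffman's papers.

```latex
% A hedged formalization of the payoff function described above (illustrative notation):
\[
  f : W \times O \times S \times A \;\to\; \{0, 1, \dots, 100\}
\]
% where W = world states, O = organisms, S = organism states, A = actions.
% Preserving a total order \preceq on W (one simple kind of homomorphism) would require,
% for each fixed organism, state, and action:
\[
  w_1 \preceq w_2 \;\Longrightarrow\; f(w_1, o, s, a) \le f(w_2, o, s, a).
\]
```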
So what's remarkable is that evolutionary theory
is indifferent about the payoff functions.
It doesn't say they have to be a certain shape.
In other words, every fitness payoff function
that you could imagine is on equal footing,
on current evolutionary theory,
with every other one.
There's nothing in Darwin's theory that says these are the fitness payoff functions and
this is their structure.
So what we had to do then is to say, okay, we have to just look at all possible fitness
payoff functions and ask how many of them, what fraction of these payoff functions would
preserve a total order or a metric or a measurable structure or whatever it might be.
And here's the remarkable, and in retrospect obvious, thing: for a payoff function
to preserve a structure like a metric or a total order, it must satisfy certain equations.
So you have to write down these equations that the homomorphism must satisfy,
that the fitness payoff function must satisfy to be a homomorphism. Well,
once you write down an equation, most payoff functions simply aren't going to satisfy it. I mean, the equations are quite restrictive. And in fact, in the limit, as you look at, you know,
a world that has an infinite number of states and payoff
values that go from zero to infinity, the fraction of payoff functions that actually are homomorphic goes to zero, precisely.
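A minimal Monte Carlo sketch of that claim, written for illustration only (the sampling scheme and numbers are assumptions for the example, not Hoffman's published model), shows the same flavor of result: as the number of world states grows, almost no randomly chosen payoff function preserves even a simple total order.

```python
# Draw random payoff functions on an ordered set of world states and count how
# often they happen to preserve that order (i.e., are monotone, the simplest
# kind of structure-preserving map).

import random

def random_payoff(n_world_states, max_payoff=100):
    """A 'generically chosen' payoff: each world state gets an arbitrary payoff."""
    return [random.randint(0, max_payoff) for _ in range(n_world_states)]

def preserves_order(payoff):
    """True if world-state order w1 <= w2 implies payoff(w1) <= payoff(w2)."""
    return all(payoff[i] <= payoff[i + 1] for i in range(len(payoff) - 1))

def fraction_homomorphic(n_world_states, trials=100_000):
    hits = sum(preserves_order(random_payoff(n_world_states)) for _ in range(trials))
    return hits / trials

for n in (2, 3, 5, 8):
    print(n, fraction_homomorphic(n))
# The fraction collapses toward zero as the number of world states grows,
# which is the flavor of the "probability essentially zero" result.
```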
All right, so this is going to be a somewhat meandering question,
because it's a very complicated thing to get right. So people who think that the world is made out of self-evident facts
underestimate the complexity of perception.
And so here's how I'll make that case.
And you can tell me what you think.
You can imagine you could ask an engineer a simple question.
Can you build a bridge?
And you might think the fact of the bridge will be a fact
and the answer to the question, which would be yes or no, will be a fact, and that's that. It's
all self-evident. It's sort of like the behaviorists assuming that the stimulus was self-evident.
It's very much analogous to that. Okay, but here's the problem. There's a whole set of assumptions built into that question
that people don't even notice. And so let me walk through some of the assumptions. It's like,
well, I can't build a bridge if you want it to last 50 million years. So I could build a bridge that
would last a century or two centuries. I can't build a bridge for no money with no labor, with materials
that are just at hand. So the thing you define as a bridge is already subject to all sorts
of constraints. Now, you and I mutually understand those constraints without even having to speak
about them. So I'm also going to assume that if you say, if I ask you,
can you build a bridge and you say, yes, you're also saying,
I'm willing to work with you.
I'm willing to work honestly.
I'm willing to hire the right number of people.
I'm not going to screw you during the construction.
The bridge that we build, we both understand that human beings will be able to walk
across it. And as many as we'll fit on the bridge without the bridge falling
down and also cars, and that means it'll have to be about the same width as a car or a truck or four lanes
of cars or trucks.
And it'll have to abide by all the building codes and so forth.
There's so many constraints in that question that it would take you an unlimited amount
of time to list them all.
And you don't, because you're talking to an engineer and he's a human being
like you and enculturated like you. And so he understands the world like you do. And so there's
a hundred million things you don't have to talk about. But they're there. They're
constraining the set of facts that's relevant to the issue. And they're constraining
them seriously. Okay. So now those constraints are nested in an even higher-order set of constraints,
which are Darwinian, right?
It's like, well, the axiomatic agreements that you and I come to as a consequence of our
shared perceptions, our shared embodiment, and our shared enculturation, are a consequence
of a broader process, which is essentially Darwinian. Now, that Darwinian set of constraints
is instantiated in motivational systems in part. So we might say, well, anything that
you and I do together will have to be done well taking into account hunger and anger
and fear and pain, the whole emotional potentiality of people, plus our fundamental motivational
systems, the manner in which we lay out this particular task will have to satisfy all
that.
That's also unspoken. Now, when you talk about evolutionary game theory and pragmatic
constraints, let's say you talked about that lion who wants to mate and not eat, you're referring to
one motivational system or another, one governing sex per se and the other governing hunger.
And then the manner in which the lion is going to perceive the world or the manner in which we're
going to perceive the world is going to be bounded by the operation of that motivational system,
and the perception is going to be deemed sufficient if when we enact it, the motivational system is
satiated. Is there enough? Okay. Okay, now, but then there's a more interesting issue that
pertains to the big fitness payoff. So if you look at how the nervous system is structured,
you have these underlying motivational systems, which are goal setting machines and which define
the parameters within which a perception is valid. But all those systems have to interact
together and they cause conflict.
Right? So if you're hungry and tired, you don't know whether you should get up and make a peanut
butter sandwich or if you should just go to sleep and leave it till the morning. Like there's
inbuilt conflict. And part of the reason that the cortex evolved was to mediate sub-cortical
conflicts. And then even at the cortical level, the manner in which you integrate
your fundamental motivations and the manner in which I integrate mine have to be integrated
or we'll fight.
And so I would say, and I don't know if evolutionary theorists have dealt with this, and it's relevant
to your theory that perception doesn't map the real world, is there a higher order set of integrated constraints
that serves reproduction over the long run that all the lower order fitness payoffs are
necessarily subordinate to?
And I know this is a terribly complicated question.
Is that the reality that perception serves?
You know, you made the case that perceptions
will not map one to one on reality.
And I suppose that's partly because reality is,
it's infinitely complex, right?
I mean, you can fragment it infinitely
and you can contextualize it infinitely.
So it's very hard to calibrate. All right, so we've got to put that aside,
but then I would say, well,
maybe there's another transcendent fundamental reality
that's Darwinian in nature
that integrates everything with regards
to optimized long-term survival
and perceptions are optimized to suit that.
So I know that's a terribly complicated question, but this is a terribly complicated subject.
So.
Well, so I think we have to think a little out
of the box on this question,
because when we conclude that evolution shapes us not
to see reality as it is, then the question is,
well, what is it shaping our
sensory systems to give us?
As well as what is reality, right?
And what is reality comes up? Yeah.
Absolutely. And so the way I like to think about it is that
evolution shapes sensory systems to serve as a user interface.
So like the desktop on your computer, for example. So when you're actually working on a computer, in this metaphor, what you're literally
doing is toggling millions of voltages in the computer's circuits.
And you're having to toggle them in very specific patterns, millions of them in exactly
the right pattern.
Well, if you had to do that by hand,
if you had to deal with that reality and interface with that reality,
one voltage and gate at a time,
well, it'd take you forever and you probably wouldn't get it right,
and you wouldn't be able to write your email or edit your picture,
whatever you're doing on your computer.
So, we spend good money and people spend a lot of time building interfaces that allow you to
be ignorant, completely ignorant. Most of us have no idea what's under the hood in our laptops.
We have no idea. We know that there's circuits and software, but most of us have never studied it.
And yet we're able to very swiftly and expertly edit our images and send texts and emails and
so forth without having any
clue, literally no clue what's under the hood, what's the reality that we're actually
toggling. And so it seems that that's what evolution has done for us has given us an
incredibly dumb down interface. We call it space and time and physical objects. So we think
of space and time as the fundamental reality and physical objects as truly existing in that objective reality. But it's really just, in this metaphor,
a virtual reality headset. We've evolved a virtual reality headset that utterly hides
the very nature of reality. And on purpose, quote unquote, so to speak, because otherwise we'd drown in the complexity.
Right. Okay. So some evidence for that, as far as I'm concerned,
is the following. I mean, first of all, if you look at a desktop, it consists, let's
say, in part of folders. Now folders are actually something in the real world that you can pick
up, and we understand them. You can manipulate them. You can see how they operate by using your hands,
as a consequence of your embodiment. And so that embodiment gives you a deep understanding
of the function of a folder and then you can represent it abstractly and you can put it on a
desktop and everyone understands what it means. And that understanding is something like being able to map a certain set of functions for a certain
set of purposes.
That's what understanding is, and it's a constrained set of purposes.
This is what really struck me about reading the pragmatists. Peirce
and James studied Darwin deeply, and they were the first philosophers to realize exactly what implications Darwinian theory had for both ontology and epistemology; epistemology, which is the study of knowledge. But the fact that it had implications for what reality is, per se, is something that very few scientists have yet grappled
with. And the pragmatist always said, look, when you accept something as a fact, one of
the things you don't notice is that you set up conditions for that to be factual. And
the fact is something like, this definition will do during this time span
for this very constrained set of operations. Fact. Okay, but the problem with that is that's
not a dead objective fact just lying on the ground. That's a fact by necessity nested
inside a motivational system. So facts now all of a sudden become motivated facts.
And that just wreaks havoc with the notion of objective, like of a distant objective materialism.
Because the fact is supposed to be separate from motivation, and the pragmatists, as far as I'm
concerned, following Darwin, demonstrated incontrovertibly that that separation, like you pointed to,
I think, is an elegance that's actually
impossible. Now, because you have to constrain reality in order to perceive it, because it's
too complex, you drown in the details otherwise, you drown in the complexity. Now, you made
the claim, and I want to interrogate this a bit, that there's really no direct relationship, let's say, between the desktop
icon that you think is an object when you look at the world and the actual world. But let me
offer you an alternative and tell me what you think about this. So there's this idea, this
is a weird way of approaching this, but I'm going to do it anyways.
There's a very strange stream of primarily Catholic thought, I believe, that tried to
wrestle with the idea of how God could become man.
So because God, of course, is infinite and everywhere, and man is finite and bounded, and so
the question is, well, how do you establish a relationship between the infinite and the bounded? And that's analogous to the same problem that we're
trying to solve. And they came up with this hypothesis of Kenosis, which means emptying.
And their notion was, well, Christ was God, but in some ways, like a low resolution representation
of God, an image of God, right? So there was a correspondence,
but not a totality, at least not at any one instance. Now, the reason I'm bringing that
up is because it seems to me that when we perceive an object, it isn't completely
without, what you call, homomorphism with the underlying world, I believe.
It's just extremely low resolution.
Like, it's a low resolution functional tool.
That's what an object is.
But, and it's, now, and I would say I would advance in support of that, for example, obviously
the icons that we have on a computer screen, we can use and we treat them like they're
real and clearly they're low resolution.
But also, when we watch an animated show, for example, like the Simpsons, we're looking at cartoon-like icons.
They're emptied even further. Like, if I saw a Simpsons cartoon of you, it would be a very low resolution representation
of the you I see, which is a very low resolution representation
of whatever the hell you are in actuality.
But I think
there's an element of that perception
that's an unbiased sampling of the underlying reality,
although it's bent to pragmatic ends, through a pragmatic, motivated lens.
Now, I don't know what you think about that.
I've thought about it for a long time.
I can't find a hole in it, but I'm wondering what you think.
Well, I think here's an analogy that might help explain the way I see it.
Suppose you're playing a VR version of Grand Theft Auto.
So you have a headset and a body suit on and you're
playing a multiplayer Grand Theft Auto. You're playing with someone
in China and England and so forth. And I'm sitting there in my ride. I've got a steering wheel
and gas pedal and dashboard, and I'm looking out, and to my right I can see a red Ferrari;
to my left, I see a green Mustang. Well, now, of course, what I'm really interacting with in this
analogy is some supercomputer somewhere, right? And if I looked inside that supercomputer,
and look for a red Ferrari, I would find no red Ferraris anywhere inside that supercomputer.
I would find voltages. So there, in that sense, the red Ferrari is a symbol in my headset
in the game. And there's nothing in the objective reality in this metaphor
that it's a low-resolution version of. It's just literally a completely different kind of beast.
Okay, okay. There are no referents. Okay, so let me ask you about that. So I get your point,
especially your main with regards to the online game. But is it not the case that in that supercomputer architecture, there's a pattern
that is analogous to the red Ferrari pattern that's the externalized representation of the
pattern, let's say, on your retina, and then that propagates into your brain. Like there is a
conservation of pattern. Now that Ferrari pattern in the supercomputer
would be a very tiny element of an infinite landscape
of patterns in the computer, but it's not,
and it's definitely not a pattern of a car per se, right?
It's a pattern of a representation of a car.
And, but it still got some correspondence
with a pattern of voltages, let's say,
that does have some existence
within the supercomputer architecture.
Well, so in that case,
I would say that there's a causal connection,
that what's going on inside the supercomputer
has a causal connection with the sequence of
pixels that are being illuminated in my headset so that I see a red Ferrari. So there's a causal
connection. But if I asked, is there some sense in which there's a homomorphism of structure
between what's going on inside the computer and what I'm seeing on the screen as a red Ferrari,
I would say there's probably no homomorphism at all.
And in that sense, we can't think about it as like
a low resolution version of something to be specific.
The electrons in the computer have no color.
My Ferrari is red.
The shape of the Ferrari and the shapes of the electrons,
or even the pattern of motion of the electrons, are independent.
And what's going on in part is that the pattern of electrons in the super computer, they're
programmed to operate in a way to cause certain other things to happen in my headset to trigger
voltages that trigger pixels to have certain colors.
And so there's a whole sequence, a whole cascade of events that are going on there.
And so to say that there's a homomorphism, I think it's just barking up the wrong tree.
Okay, so I want to push on this a bit more because I want to understand it.
All right, so I'm going to do that from two angles.
The first is that in the supercomputer architecture, let's say, there are levels of potential
patterning, ranging from the quantum, subatomic, atomic, molecular, etc., all the way up to the apprehensible,
phenomenological world, multiple layers of potential patterning.
So I would say in response to your objection that if you looked at the electrons, for example,
they have no color, that color is only a pattern that can even be replicated
analogously at certain levels of that multi-level patterning.
So you won't detect it in the quantum realm,
you won't detect it at the subatomic realm,
maybe not even at the atomic realm.
You detect it at the level of patterns of molecules
at one level and then not above that,
it'd be a very specific level.
So it could still be there even though it wasn't propagating through the entire system.
And then I want to add another twist to that that I think is relevant.
So I was talking to a biologist last week about how the immune system functions.
And basically the way that it functions, you imagine there's a foreign molecule in your bloodstream,
and it's got a shape. Well, it has a very complex, has an endless number of very complex shapes
that make up its surface, and the complexity of that shape would be dependent on the resolution
of analysis, right? Because the subatomic contours would be different than the atomic contours
and different than the molecular contours. Okay. Now, what the immune system wants to do is get a grip on that molecule
and it just has to get enough of a grip so that it can register the pattern, replicate the pattern
and get rid of the molecule. So that's its goal. You could say that it's motivational frame.
Now, the way it does that is sort of the way your arm works.
Imagine you were trying to figure out how to pick up a basketball.
Now, a baby will do that in the crib.
The first thing a baby will do when it's trying to figure out
how to use its arms is it uses them very non-specifically.
It'll flail all about; maybe it'll hit the ball.
Now, hitting the ball isn't throwing the ball, but it's more like throwing the ball than
not hitting the ball, right?
And then the baby does this, and then that works, and then it gets a little bit more sophisticated
and does this, and then it gets a little more sophisticated, and it does this, and then
finally, it can manipulate its fingers. So it's specifying the grip.
At some point, the baby can grab the ball and throw it.
That's kind of what the immune system does.
It makes molecules that kind of stick to the surface.
And then those modify so they stick even better.
And then the sticky molecules modify so they stick even better.
But the point I'm making is that the immune system
appears to generate a sufficient homolog of the molecule to grab it and get it out. Now
you could say that that homolog that it generates, there's many levels of reality that foreign
body participates in that aren't being modeled by the immune system homolog.
But I would say, yeah, but there's enough of a homology so that the immune system can get a grip
and get rid of the molecule. Now, and we're running around the world. This is a very good analogy
because we're running around the world trying to get a grip all the time. And we presume that the map that we've made of the world is sufficiently real.
If we get a good enough grip to perform the operation that we're intending to perform,
but that still to me, that still implies that there's some level of representation
that has at least the echo of a genuine homology. So I'm wondering,
if you have objections to that or what you think about that.
I think that we can't count on any kind of homology or homomorphism. I think that,
for example, the way I think about it now is that space time itself and all the particles
that we see at the subatomic level, the whole bit, that's all just a headset. And physicists
actually agree; they say space time is doomed. So Nima Arkani-Hamed, David Gross, and many others
are saying that we need a new framework for physics
that's utterly outside of space time and quantum theory.
So, and they're finding structures like decorated permutations and so forth.
These are structures not sort of curled up inside of space time, but utterly outside of
space time.
And so, I think science is telling us, and Darwin's theory, I think, is agreeing.
It's saying that space time is not fundamental.
It's just a headset.
Okay, okay.
So if I said there's no ultimate homology,
but there are proximal local homologies,
would that do the trick?
I have a reason for torturing you about this,
and I'll leave it alone soon.
But I'm not, I'm not.
Yeah, no, I'm not.
Because the issue of grip really makes a difference as far as I'm concerned, because getting
a grip is, it's sort of the basis of understanding. All of our cognitive enterprises,
you could think, are in some real sense extensions of our ability to manipulate the world with
our hands.
I mean, the fact that our left hemisphere is linguistically specialized, looks like it's a consequence of its specialization
for articulation at the level of the hand.
And so getting a grip is crucial here.
And the homology seems to me to be demonstrated in the fact that like if you pick up a hammer,
it actually comes off the ground.
Now I think you could reasonably object that that homology is tremendously
limited, but it's hard for me to accede to the notion that it's absent. Now, having
said that, I don't want to push that point to stop you, let's say from questioning something as fundamental as the objective reality of space and time,
I think you can have your cake and eat it too in that regard.
And I want to turn to those more radical claims right away.
But if I said, well, if I pick up a hammer and it does, in fact, rise off the floor, how is that not an indication of a homology? Or would you
reduce that again to mere function? Like, it's merely the case that it worked,
and that's not a demonstration of anything beyond the fact that it worked. That's the thing:
that's why I can't shake the notion of some homology. Well, I would again say that there's a causal connection.
You could talk about a causal connection between the reality behind your headset and what
you're seeing in the headset.
But I think it would be a stretch to talk about some kind of homology of structure.
It's actually not necessary, right? To be successful, it's not
necessary. Well, and as you pointed out very early in this discussion, it also might be hyper-expensive,
right? You actually don't want to know more about something than you need to know in order to
perform the requisite action. That's part of efficiency, right? So, okay, all right, so let's leave
that aside. Let me, I'll just say one little thing. If you have, like, a desktop folder for a file on your laptop,
and it's blue and rectangular in the middle of your screen,
well, the file is not blue. It's not rectangular. It's not in the middle of the computer.
There's literally no homology between anything that you can see
in the symbol on the screen
and the file itself.
It's just a useful symbol without homology, but there is a causal connection between the
voltages.
But no homology.
Okay, so let maybe we can go down that route.
Sure.
I guess I'm then unclear about what you mean. What exactly do you mean by causal then?
Right, so that's already sort of smuggling in a space-time kind of analogy. Right, right, exactly,
exactly. So I'll just say that there is a mathematical connection, maybe not causal, but
there's some kind of mathematical connection, but the mathematics need not be a kind of mathematics that preserves structure, for example. So there's a mathematical connection.
Okay, I'm going to have to grind away on that for a bit, because you are stating that there is
a relationship at least of function, and I'm unable to on the fly thoroughly discriminate between
some grip of structure and some function because grip is a function. So I'll just put that aside.
Now let's go on to consciousness itself. Now you said a variety of very radical things, including
criticizing the entire notion of space and time. And so we'll delve into that.
But, but I want to tell you something that I learned from reading mythology,
and I want you to tell me how that relates, if at all,
to the way that you're conceptualizing consciousness,
which is obviously not the way that people generally conceptualize it.
Okay. So I've read a lot of different mythological accounts and I've
studied a lot of analysis of mythological accounts. And I think I've been able to extract
out commonalities and regularities across the methods of assessment. And I think I've been
able to triangulate them against findings from neuroscience, let's say the neuroscience of perception.
Now, the mythological stories that represent the structure of reality,
proclaim, you could say, that there are three interacting causal,
three interacting fundamental causal agents or structures. Causal agents is probably a better way of thinking about it.
There's a realm of potential from which order can be extracted.
That's often given feminine symbolism, the realm of potentiality.
And I think that's because feminine creatures are the creatures out of which new creatures
emerge.
So there's a deep analogy there.
So there's a realm of potentiality.
Then there's a realm of a priori order. That's often given patriarchal or paternal symbolism.
That's the great father. And so if you read a book, let's say, the book offers you a realm of
potentiality, which is the multitude of potential interpretations that the book consists of.
But then you impose an order on that
that's a consequence of,
well, every book you've ever read
and every experience you've ever had.
And the book itself is a phenomenon
that emerges as a consequence of the interplay
between the interpreter and the realm of potentiality.
Okay, then there's one additional element, which I think is identical to consciousness
itself. It's associated in mythology with the sun, with the sun that sets, and then rises
triumphant in the morning. It's associated with the conquering hero, and it's the thing, it's the
active agent that transforms this infinite potentiality into concretized reality.
It literally makes order out of chaos.
That's the right way to think about it.
We, as conscious beings, partake in that process.
In fact, that process is our essence.
That's what makes us made in the image of God, let's say, but also instantiated with
something like intrinsic value.
You have a very strange concept of consciousness.
And so partly because you are attempting to make the case that what we think of as objective
reality.
So that's just the facts, ma'am.
Objective reality is actually an emergent property.
Tell me if I've got this wrong.
It's actually an emergent property of consciousness itself
and so in your scheme of things, consciousness is
more fundamental than objective reality.
It isn't even obvious, in your scheme, that objective reality, so to speak, exists.
So tell me how you've grappled with the relationship between consciousness and the world as such,
what have you concluded?
Darwin and physics, high energy theoretical physics, agree that space time is doomed.
It's not fundamental reality.
The search is on in the last 10 years among physicists to find structures entirely beyond
space time, not curled up inside space time, beyond space time.
They found structures, as I mentioned, like
decorated permutations, the amplituhedron, and so forth. And so I'm also thinking
about consciousness utterly outside of space-time. So it's a fundamental
reality and space-time, which we have thought of for most of human history as
the fundamental reality that we're embedded in is a trivial headset.
That's all it is. We've mistaken a headset for the truth because, if that's all you've seen all your life
is a headset, it's hard to imagine something outside of it. But science is good enough to recognize that
space time is just a headset. So now we're free, using mathematics, to ask what kind of structures we could posit
beyond space time. And in my case, I'm trying to also deal with the mind body problem. How is consciousness related to what we call the physical world?
So I've decided to try to get a mathematical model of consciousness. Now, of course
spiritual traditions and
humanity for thousands of years has thought about consciousness and so forth. But as a scientist, what I want to do, of course, is listen to their
insights, but I need to write down as minimal a mathematical structure as I can to boot up a
completely rigorous theory. And so what we've done in our theory, which we call the theory of conscious agents, is a very minimal structure.
A conscious agent has a probability space that it's defined on.
So it's a probability space.
Is that probability space equivalent to, let's say, a realm of potential?
Yes.
My students and I tried to model anxiety as a response to entropy.
So imagine that what you have in front of you is a set of branching possibilities,
some of which can be realized with comparatively less effort. So they're more probable,
let's say, given your current state, some of which are virtually impossibly distal, but in principle could be managed if you were smart enough and could gather the resources.
But so you have a probability space in front of you, some of which is sort of at hand, like it's pretty easy for me to pick up this pen.
Right. So that's a high probability pathway laid out in front of me. So, I mean, the mythological motifs that I referred to insist
that what people face is something akin to the pre-cosmogonic chaos that God himself faced
when the cosmos first sprang into being, right? And so that the way to construe the world isn't
as a place of clockwork, automaton machines, self-evident objects, but as a realm of possibility that
differs in probability.
And then the issue becomes how do you best orient yourself so that you contend, you can
contend properly with that probability landscape.
Now is that, am I walking on parallel ground here? We're in broad agreement in the sense that our theory
of conscious agents, by writing down a probability space,
it is a space of potentiality.
For example, to be very, very concrete,
suppose my experiment is just to flip a coin twice,
heads and tails.
Well, what's my probability space?
Well, I could get heads-heads, heads-tails,
tails-tails, or tails-heads. Right. So there are four possibilities. Oh, I could land on the edge.
Yeah, right. Well, then I'd have to increase my probability space
if I wanted to include that. But now notice, I write down the probability space first,
but I haven't flipped my coin yet. So it's the space of potential outcomes
of things that I can do. And that's what probability spaces are. And so, yeah, okay.
So when I write down a probability space for consciousness, I'm thinking about it in the first instance as being about what is the probability
that this agent will experience green, or mint, or the sound of a trumpet, and so on, all these different conscious
experiences. So the probability space is a space of all possible kinds of conscious experiences
that this particular agent might have. And you can imagine that some agents maybe
are simple: they only have the experience of red, period. That's it; that's all this agent
has, red. Another one can experience red and green. And another one can have 10 trillion experiences. You could imagine agents like that, and then they can
be related. Well, maybe the red agent can be thought of as a subspace of the one that sees red
and 10 million other things. So we can now... It depends on how articulated the organism is.
Right. So yeah, the simpler organisms, exactly. The probability space around them collapses.
That's right.
And so all the infinite number of potential
probabilities that we see in front of us just collapse into maybe five choices, something like that.
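A toy sketch of the probability spaces being discussed, purely illustrative (the experience labels and the nesting are made up for the example, not drawn from the formal theory of conscious agents):

```python
# First the two-coin-flip example, then two "agents" whose experience spaces are
# nested, one a subspace (here, a subset) of the other.

from itertools import product

# Two flips of a coin: the probability space is the set of possible outcomes.
coin_space = list(product(["H", "T"], repeat=2))
print(coin_space)   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

# Toy experience spaces: the "red-only" agent's space sits inside the richer agent's space.
red_agent = {"red"}
richer_agent = {"red", "green", "trumpet-sound", "taste-of-mint"}
print(red_agent <= richer_agent)   # True: the simpler agent's space is contained in the richer one
```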
Sometimes, yeah. Okay, so, you know Carl Friston. So this is quite interesting. So I talked to Carl Friston about emotion,
about hope, positive emotion, let's say, incentive reward, positive emotion. So positive emotion in that sense is a reward
that signals advancement towards a goal. Now, I'd already been conceptualizing anxiety, with my students, as
had Friston, as a marker for the emergence of entropy. But Friston pointed out, now I want to make
a connection between his thinking and yours here. Friston pointed out that you can map positive
emotion with respect to entropy too, because if you're looking for a desired outcome, so imagine
you're trying to get a grip on the world to bring about a certain reality. If you see yourself making a step towards
that end such that the number of potential pathways to that end decreases somewhat, that produces
a dopamine kick. And that's a signal of reduced entropy in relationship to the goal. And it seems to be that
entropy is always calculated in relationship to a goal, right? You're saying, well, how entropic
is the current space? And you can't answer that. You have to say,
how entropic is the current space in relationship to the ordered state that I'm trying to bring
about as a consequence of my actions? And now and then you'll stumble across something
that blows up in your face, let's say. Like, I've always thought about this, like, imagine
you're driving your car to work. Okay. And you might say, well, what is your car?
And the objective materialist would say, well, it's an enclosed shell with four tires,
and give you a materialist description.
But I would say, no, no, no, that's not how your nervous system is responding at all.
Your nervous system, for your nervous system, the car is a conveyance from point A to point
B. So it's a tool, and
it's a tool that signifies zero entropy, essentially, as long as it performs its function.
And then let's say your car breaks down and now you're on the side of the road.
Now what happens to you is the probability space around you, I would say it becomes more
distal.
Any of your desired goals become more expensive
and harder to compute, right? What's wrong with my car? Was I an idiot for buying that car?
Am I generally an idiot? Am I going to get in trouble with my boss? What's going to happen
to the rest of the day? What's going to happen when I go see the mechanic? Right? The landscape
blows up into a broader range of unconstrained potentialities, and that seems to be signaled by anxiety.
And anxiety then prepares your body for a multitude of potential
actions, and the problem with that is that it's very physiologically
costly, right? So that's stress, and that'll wear you to a
frazzle. So, okay, so is any of that not in accord with the
manner in which you are modeling your theory of conscious agents?
Right, so in the theory of conscious agents, I should say that in addition to the probability space and the conscious experiences that it allows, there is the dynamics.
It's a Markov chain, a Markovian dynamics, where you have these matrices that describe the probability, if I'm experiencing red now, of experiencing green the next time instant. So there's a dynamics.
And when we do the analysis, it turns out that our Markovian dynamics need not have an entropic arrow of time. It can be a stationary dynamics in which the entropy does not increase.
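A minimal numerical sketch of the kind of Markovian dynamics being described, with made-up transition probabilities (an illustration, not the actual conscious-agent model): if the chain starts in its stationary distribution, the distribution, and hence its Shannon entropy, never changes from step to step.

```python
import numpy as np

# Transition probabilities between two experiences, "red" and "green".
T = np.array([[0.9, 0.1],    # P(next experience | currently red)
              [0.3, 0.7]])   # P(next experience | currently green)

# Stationary distribution pi: the left eigenvector of T with eigenvalue 1 (pi @ T = pi).
eigvals, eigvecs = np.linalg.eig(T.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p = pi.copy()
for step in range(5):
    print(step, p, shannon_entropy(p))
    p = p @ T   # the distribution (and hence its entropy) stays fixed
```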
So entropy in this realm...
That's one of the things that makes things constant, right?
It's that you assume that the entropic transformation is negligible.
That's why you can ignore things, right?
When you ignore things, and you ignore almost everything, you're assuming that the entropic transformation
is negligible.
Well, what I'm saying is that it's possible
to model a reality in which entropy
doesn't increase, period.
This is not ignoring anything.
That's the nature of this deeper reality
outside of space time.
But then it turns out to be a theorem
that if you take a projection of that non-entropic dynamics,
there's no arrow of time in the sense of increasing entropy
of this Markovian dynamics,
but if you take a projection of it
by conditional probability, any projection of it,
it's a theorem that you will,
as an artifact of projection,
have the illusion of an arrow of time.
You will get...
Right, well, is that because, well, look, if
you're, if you're pursuing a pragmatic goal, things can fall apart and go wrong. And that
is an increase in entropy within the universe defined by that goal. That may say nothing
about entropy per se as a characteristic of broader reality. See, I've always had this
issue with entropy, because entropy always seemed to me to be, by necessity, subjectively defined.
It has to be disorder in relationship to some posited state of order. And then you get
back into the Darwinian problem at that point, like if it's well, if it's bounded by motivation,
then it's encapsulated within a Darwinian space. So, okay, so in terms of your conception of objects, let me try this out. So,
I'm looking at this teleprompter here and you're sitting in the middle of it. Now, I'm treating that
like a set of conditional probabilities, right? I'm presuming that what this machine is doing right now is very much predictive of what it's going to do in a second. And I'm predicating my perception itself
on that reality. Now, you know, it could burst into flames. Now, I feel that the probability of
that is very low. So I'm not going to perceive the machine that way. Now, you know, there are disorders,
obsessive-compulsive disorder is a good example, where people stop being able to reduce that
probability landscape to predictable safety, and they start reacting to almost everything as if
it's unpredictably dangerous. And, you know, things are, you know, so I had clients, for example,
they would go into a building.
And the first thing they would do is look for all the fire escapes.
And what they asked me was, well, why don't you do that?
Because the building could burn down and people do get trapped in buildings, and that's a horrible
way to die.
So the mystery isn't why they did that.
The mystery for them was why everyone didn't do that all the time.
And I actually do believe that the great mystery is why people aren't scared out of their
skulls all the time, not why they are sometimes calm.
But so can you imagine an object now?
The object is surrounded by a probability distribution, I would say.
And that probability distribution is all the things that object might turn into in some
period of time, let's say.
And I would say to some degree, when you look at the object, you actually also perceive that
probability space, because, you know, although I see that this teleprompter is stable, it's
unstable enough and dynamic enough to provide me with a representation of you. And so I'm playing with the...
by seeing the object and interacting with it,
I'm playing with the probability space around it.
So the... is it the case that you see the damn probability space
when you look at the object?
Well, I don't know if we see the space itself.
We're estimating what we think
are the probabilities for various good things
and bad things to happen.
But I would say that this whole business
about entropy increasing and so forth.
First, I should point out that Shannon entropy,
which is what we're talking about here,
turns out not to be the most general notion of entropy.
Mathematicians and physicists
are looking at broader definitions of entropy.
There's something called the Tsallis entropy, and others.
So there are technical reasons for why,
I mean, Shannon entropy is great,
it's very, very useful.
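For reference, the Shannon entropy is -Σ p_i log p_i, and the Tsallis entropy mentioned here is S_q = (1 - Σ p_i^q) / (q - 1), which recovers the Shannon value (in nats) as q approaches 1. A short numerical check, offered as an illustration only:

```python
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())        # in nats

def tsallis(p, q):
    p = np.asarray(p, dtype=float)
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

p = [0.5, 0.25, 0.25]
print(shannon(p))                 # ~1.0397 nats
for q in (2.0, 1.5, 1.1, 1.01, 1.001):
    print(q, tsallis(p, q))       # approaches the Shannon value as q -> 1
```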
And when I was talking about
the entropy of our dynamical systems
not having increasing entropy, I was talking about
Shannon entropy, but there are more general notions of entropy that are important. So I would say
that the whole structure of needing to estimate probabilities and worrying about outcomes and
rewards and so forth, from the
point of view of our dynamics of conscious agents,
all of that, in fact, all of Darwinian theory, is an artifact of projection.
So here's a dynamic of conscious agents outside of space time.
There need not be any competition, no limited resources, no, no, no,
arrow of time.
And yet, when I take any projection of that dynamics to get a new
Markovian dynamics that has lost just a little bit of information, I will have an
arrow of time, and it can look like separate organisms competing for resources
and so forth. In other words, I mean, I love Darwin's theory of evolution by natural selection; it's very powerful. But I think the entire theory is not a deep insight into reality.
I think it's an artifact of projection, the very arrow of time. You think about the arrow of time.
It is the fundamental limited resource in evolutionary theory. Time is the fundamental limited resource.
If I don't get food in time, I die. If I don't mate in time, I don't reproduce. If I don't breathe air in time... So time is the fundamental limited resource. And the
arrow of time itself need not be fundamental. It could be entirely an artifact of projection.
So what that means is, and this gets again to the...
So, okay, well, then I'd like to know, and this is back to the most fundamental possible question
we could be discussing:
well, what's the nature of reality itself? I mean, when I was debating with Sam Harris, we
got hung up on this consistently because I wasn't willing to use the same definition of truth
that he was. He uses an objective materialist definition and I think that truth flies like an
arrow, let's say, it's got a functional element to it that you cannot eradicate.
There's no way out of that with an objective materialism
as far as I can tell.
Now, you said that the Darwinian race and the arrow of time
are just artifacts, but if I said, well, hold on a second,
I don't exactly know what you mean by artifact then,
because if I don't act like there's
an arrow of time and restricted resources in that regard, then I'm going to die.
And that's real enough for me.
You might even say, well, my death has little to do with the fundamental structure of reality,
but I would say, well, it has enough to do with it so it happens to concern me.
And so we start to get into a discussion
about what constitutes reality itself.
If this is just a projection, what in principle
would be real?
Right, so on this theory, then consciousness
is the fundamental reality.
And the conscious experiences that observers have
are the fundamental reality.
And the experience that we have of space and time
is a projection of a much deeper reality.
And that projection, because it loses information,
is necessarily going to have artifacts in it.
And among the artifacts are things
like separate objects in space and time.
Space and time itself is an artifact.
So one reason I'm not a materialist is because of our best
materialist theories, namely evolution by natural selection,
and also quantum field theory and Einstein's theory of gravity.
They tell us that space time has no operational meaning at 10 to the minus 33 centimeters or 10 to the minus 43 seconds.
In other words, our scientific theories that are the foundation of our materialist ideas
tell us precisely the scope and the limits of materialism.
Materialism, that kind of materialism, is fine down to the Planck scale, 10 to the minus
33 centimeters. And after that, it completely falls apart.
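As a numerical check of the figures quoted, the Planck length and Planck time follow from standard constants; this is textbook physics, not anything specific to Hoffman's argument.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m, i.e. ~1.6e-33 cm
planck_time   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s, i.e. roughly "10 to the minus 43" s

print(planck_length, planck_length * 100)    # metres, centimetres
print(planck_time)
```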
It's utterly irrelevant.
That's right, the space-time, physicalist, matter kind of materialism falls apart, and
it's not because of religious ideas that I'm saying this.
I'm just listening to the science.
The science tells us space-time has no meaning beyond Planck scale.
And that's why the avant-garde, high energy theoretical physicists are now looking for structures
entirely outside of space time,
not curled up inside space time, entirely beyond.
So it's in that sense that
I reject materialism. And by the way,
I should say this about all scientific theories.
My view about all scientific theories is that each scientific theory
starts with certain assumptions,
the premises of the theory. And it says, if you grant me those assumptions, then I can explain
all this wonderful stuff. Okay, so how did you come to that conclusion? Because, see,
I've been trying to wrestle with this with regard to, say, the potential relationship between the integrity of the scientific process and an underlying transcendent ethic.
So I think, for example, I talked to Richard Dawkins about this a little bit, although we didn't
get that far for a variety of reasons.
But like I think that to be a scientist, there's certain things that you have to accept on
faith.
These would be equivalent to those axioms.
And I'm not talking about necessarily a scientific
theory here as you were, but the practice of science itself. So, for example, you have to act as if
there is truth. You have to act as if the truth is discoverable. You have to act as if you can
discover it. Then you have to act as if your discovering the truth and communicating it is good.
And none of that is provable scientifically.
You have to start with those axioms before you can even make a move.
And it could be wrong, you know?
I mean, we think that delving into the structure of the world with integrity is redemptive.
We think that knowledge is useful pragmatically, but, you know, we've invented all sorts of things that could easily wipe us out,
the hydrogen bomb, perhaps, being foremost among those.
And so the evidence that that set of claims is true is sorely lacking, or you could say it's 50-50.
That's another way of thinking about it.
But I'm very curious about how you came to the conclusion that scientific theories themselves have to be axiomatically
predicated.
How did you walk down that road?
Because that's not a road that very many people walk down.
Well, if you just look at any scientific theory, say, Einstein's theory of special relativity. He says, let's start with two assumptions: that, you know, the speed of light is the same for all observers, and that the laws of physics are the same in all inertial frames. So if you grant me those two miracles, then you get the whole structure of special relativity.
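For concreteness, here is what follows from those two postulates in the standard textbook presentation; the Lorentz-transformation formulas below are the conventional ones, not something spelled out in the conversation:

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad x' = \gamma\,(x - vt),$$

relating the coordinates of two inertial frames in relative motion at speed $v$.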
And it's the same thing with Darwin.
And Darwin starts off and says, grant me that there are organisms in space and time and resources
and these organisms are competing for resources. Now I'll give you a theory.
So if you just look at any scientific theory, a good theory will make the assumptions explicit, but if it doesn't, you can find out what the assumptions are. Okay, so, do you think that there's,
is there any difference between,
technically, I'm thinking philosophically, I don't see any difference between the claim that
a given theory has to have axioms
that aren't provable from within the frame of that theory.
That's Gödel's theorem, as far as I can tell,
applied much more broadly.
I don't see any difference between that and the proposition
that to get the game started,
there has to be, it's something akin to a miracle.
I mean, because these axioms, imagine that a miracle inside a system is defined as any
occurrence that isn't governed by the rules that apply within that system.
That's a good working definition.
Now your proposition is, well, I don't care what theory you're coming up with. There's going to be a set of axiomatic presuppositions
that are a launching point. See, I also think those axiomatic presuppositions are where
you put all the, all the entropy. You say, grant me this. It's like, well, that takes care
of 95% of the mystery. So we'll just shelve that invisibly, right? Because it's hidden inside the axioms.
And then you can go about manipulating the small remnant of trouble that you have left over.
I also think this is why people don't like to have their axioms challenged,
because if you say, well, I'm not going to accept that, then you let loose all the demons that are
encapsulated within those axioms, and they start roaming about again, and people don't like that at all.
Well, yeah, a good scientist will want to have their assumptions made absolutely mathematically precise and explicit.
So they're just laid out there and they say these are the assumptions of the theory and
given these assumptions, I can now prove this.
This is the glory of science where we put down precisely what our assumptions are.
And then we look at it mathematically and we can get both the scope of those assumptions,
how much can we do with those assumptions, and the limits. Like in the case of space-time,
the limits are 10 to the minus 33 centimeters. Game over. And by the way, it's not that deep, in my view; it's not 10 to the minus 33 trillion centimeters, it's just 10 to the minus 33, and the game is over for space-time. So that's a good antidote for
dogmatism because your own theory, a mathematical precise theory, will tell you the limits of your
assumptions and then say, okay, now you need to look for a broader framework with deeper assumptions, but they will be new assumptions.
And so I view this as infinite job security for scientists, because we will never, ever
get a theory of everything.
We will always have a theory of everything except our current assumptions.
And I agree with you that those assumptions will essentially be the whole bailiwick of what we're doing.
So there's a reality, whatever it is, this is for me something of an interesting mystery.
Our theories, in some sense, don't even scratch the surface of the truth.
Because this process will go on forever, we will always have essentially measure zero of the truth.
And yet, Einstein's theory and quantum theory gave us the technologies that allow you
and me to talk across the country.
Well, you could say that partly what's happening there is that the more sophisticated the theory, the broader the range of probable states of any given object or system of objects that can be predicted.
It's something like that.
Piaget pointed that out when he was talking about developmental improvement in children's cognitive theories.
So if you look at someone like Thomas Kuhn, Kuhn presumed that we underwent multiple scientific revolutions, but there was no necessary progress.
There were just different sets of axioms.
And Piaget knew about Kuhn's theory, by the way, but Piaget's point was, no, you've got
it slightly wrong, because there is a progression of theory, in that a better theory allows you to predict everything the previous theory allowed you to predict, plus some additional things. Now, your point would be, well, we can just
continue that movement upward forever, right? Because the landscape of potentiality is
inexhaustible. And so again, you can have your cake and eat it too. We can learn more; Einstein got us farther than Newton, which doesn't mean that Einstein's axiomatic set is the final set.
Okay, so let me put a twist in this. I've been thinking about this recently. I'm writing a new book
and one of the things I'm doing in that book is doing an analysis of the story of Abraham.
Abraham's a very interesting story. Okay, so Abraham is called out into the world, even though he sort of hung around his father's tent till he's like 70. So he had utopia at hand. He didn't have to do any work to get everything he needed. But that wasn't good enough. So a voice comes to him. It's the voice of conscience, I would say,
and says, look, you've got all this security,
but that isn't what you're built for.
Get the hell out there in the world.
And so he does that.
And then all hell breaks loose.
It's one bloody catastrophe after another.
Starvation and tyranny and warfare and the necessity of sacrificing his son.
It's just like one bloody thing after another.
Okay.
But during that process, Abraham,
Abraham continues to aim up and he makes the proper sacrifices. And the consequence of that is
that God promises him that his descendants will be more numerous than the stars. So I was reading
that from an evolutionary perspective, and I thought, okay, what's happening here is that the narrative is trying to map out a pathway
that maximizes reproductive fitness. All things considered. Now, the problem I have with theories
like Dawkins, let's say, is Dawkins reduces, and you tell me if you think this is wrong, Dawkins
implicitly reduces sex to lust. Then he reduces reproduction to sex.
And the problem with that is that reproduction is not exhausted by lust or sex, quite the
contrary, especially in human beings, because not only do we have to chase women, let's
say, but then when we have children, we have to invest in them for like 18 years before
they're good for continual reproduction. And we have to interact with them in a manner
that's predicated on an ethos that improves the probability of their reproductive fitness.
And so reproduction, see, this is something that the Darwinists, the casual Darwinists, do very incautiously as far as I'm concerned, because they identify the drive to
reproduction with sex. And that's a big mistake, because sex might ensure your
reproduction, proximally, for one generation, but the pattern of behavior that
you establish and instantiate in your offspring,
which would be an ethos, might ensure your reproduction multigenerationally.
You see, and that appears to be what's being played out in this story of Abraham is that
the unconscious mind, let's say trying to map the fitness landscape, is attempting to determine what pattern of behavior is most appropriate
if the goal is maximal reproductive fitness, calculated across multiple generations, or maybe
across infinitely iterating generations.
And so that points to something again, like you said earlier, you called it a general fitness, what was it?
I got to get it here.
Big fitness payoff, right?
And that could be the ethos to which all these subsidiary ethos are integrated.
See?
Okay.
Okay.
Okay.
So I'm wondering what you think about that. First of all, what do you think about the proposition that evolutionary biologists, and Dawkins is a good case in point, have erred when they've too closely identified reproduction with, like, short-term sex? It's like, that isn't a guarantee of reproduction.
We wouldn't invest in our children if that was the case. We would just leave them.
The sex is done, we've reproduced.
You need an ethos to guarantee reproductive fitness across time.
Well, there's several levels here.
First, Dawkins of course understands that most reproduction is asexual, right?
So sexual reproduction is a relatively recent thing.
Most reproduction has been asexual.
So Dawkins is very famous for talking about the selfish gene.
And really, when he talks about reproduction, it's about genes reproducing themselves.
It's really not so much about sex; sex is one way of having that happen, but bacteria do it without sex.
And so on the emphasis on sex, I would say Dawkins of course understands that sex isn't fundamental.
Now, when it comes to human motivations and mammal motivations, perhaps in that specific context,
you might then be talking about it. But even there, when you start talking about sexual reproduction,
there are many, many strategies that organisms use. So for example, some spiders will have just hundreds of babies and eat some of them.
They'll eat some of them and let the others go.
Having the babies is their only job.
And after that, the babies are on their own.
And so there are different strategies.
So this is where Dawkins is quite famous, justifiably, for his work on the selfish gene idea, that
is there are different strategies, but the only thing that matters in this framework is
what is the probability that particular genes spread through the population in later
generations?
Sex came along apparently to deal with...
Okay, as one of the pathways to that.
One of the pathways to that, right? That's right.
And so, but there's another framework for thinking about all this as well.
So again, I love evolutionary theory.
I think, in terms of models of the evolution of creatures and their behaviors, it's an incredibly powerful theory.
I've used it a lot.
My book, The Case Against Reality, talks about it in great detail. It's a wonderful theory. But I think that from this deeper framework that science is now moving into beyond space-time, all of evolutionary theory, all of it, is an artifact of projection. In other words, if you're looking, say, from a spiritual point of view for some deep principles, deep spiritual principles, evolution, I don't think, is deep enough. I think that all of it is an artifact of space-time projection. If you're going to be looking for deep principles of the kind the spiritual traditions are talking about with Abraham, and really thinking big, I think that thinking inside space-time is not big enough. You've got to step entirely outside of space-time.
Space-time has all these artifacts.
And we're so used to being stuck in the headset.
Well, there is an insistence upon that
in the Judeo-Christian tradition,
because God is conceptualized, what would you say?
Traditionally, as being entirely outside of time and space.
And so whatever works for human, like the human landscape and the divine landscape,
they're not the same.
There's a relationship between them, however, but they're not the same.
Okay, so now, okay, so let me, let me ask you about that.
Now, you have made the case, not least in this interview, that consciousness is primary.
Now, consciousness uses these projections.
So how do you reconcile the notion that consciousness is primary?
And I want to make sure I'm not misreading what you're saying, that consciousness is primary.
But consciousness operates in the world with these projections.
See, because this is the thing I grapple with is that if survival itself is dependent
on the utilization of a scheme of pragmatic projections, in what sense can we say that
reality is something other than that?
Like, because, see, part of this is something that Peirce and William James wrestled with too. It's like, well, why make the claim that there is a reality outside of the human concern
with survival and reproduction?
And if consciousness is the primary reality, and it's using projections to orient itself
so that it can survive and reproduce in the biological sense.
How can you even begin to put forward a claim that there is a reality that transcends that? Like, on what grounds does it transcend it in relationship to what?
Right, so these are deep waters. The idea that I'm playing with right now is that there's one ultimate infinite consciousness. And what it's up to is knowing itself. But how do you know yourself? Well, there are certain theorems that say that no system can actually
completely know itself. Right.
Right.
So, if this one infinite consciousness wants to know itself, all it can do is start looking
at itself through different perspectives, so putting on different headsets.
So space time is one headset.
And from that perspective, here's a projection of the one infinite consciousness.
And in that perspective, it looks like evolution by natural selection.
It looks like quantum field theory and so forth.
And it looks like I need to play the game this way.
But this is just one headset. I think there are other headsets.
That's very interesting.
So, while writing the book that I'm writing now, I've been walking through all these biblical narratives.
And one of the things they do, every single narrative provides a different
characterization of the infinite.
There's no real replication. It's like, well, here's a picture of the divine, and here's another one, and here's another one, and another one, and another one. Now there's an insistence that runs through the text, that unites the text, that those are all manifestations of the same underlying reality. But it is definitely the case that what's happening is that
these are movies, so to speak, shot from the perspective of different directors. And it does
seem to me akin to something coming to know itself. There's this ancient Jewish idea.
This is great, it's like a Zen koan.
It's a great little mystery.
It says, so here's the proposition.
So God is traditionally imbued with the following characteristics: omniscience, omnipresence, and omnipotence.
What does that lack?
And you know, you think, well, that's a ridiculous question,
because oh, by definition, that lacks nothing, but the answer is limitation. That lacks limitation,
and that's actually the classical explanation for God's creation of man is that the unlimited needs
the limited as a viewpoint. And it has something to do, as you pointed out, I believe, with the possibility of coming to something like conscious awareness. You see this in T.S. Eliot too. I don't remember which
poem where he talks about coming back to the point of origin, which is like the return
to childhood, you know, that, that heavenly notion that to enter the kingdom of heaven,
you have to become as a little child.
It's like, but there's a transformation there so that that return to the point of origin
is accompanied by an expansion of consciousness.
It's not a collapse back into childish unconsciousness.
It's the reattainment of a, what would you say?
It's the reattainment of the state of play.
That's a good way of thinking about it.
That obtained when you were a child,
but with conscious differentiated knowledge.
So there is this tremendous narrative drive
in the Western tradition towards differentiated
comprehensive understanding as a positive good.
And that seems tied up with the continual
drama between God and man. And I do think the scientific enterprise is an offshoot of that; that's what it looks like to me historically. So, okay, so how in the world do you survive
in psychology departments given what you're thinking about?
Well, I've got the mathematics. If I were just talking this stuff without any mathematical underpinnings to
it, it would be dismissed, of course.
But, you know, in the case of the evolutionary stuff, we've published papers in the Journal of Theoretical Biology, for example, and elsewhere, where we actually put the mathematics out there.
So it's peer-reviewed.
And I think that it's a bit surprising, but, you know, I'm a minority, a small minority. But, you know, that's the way science progresses.
It proceeds one funeral at a time. And so it progresses by minorities of one.
Exactly right. And scientists understand that you want to have independent ideas, think out of the box, make it mathematically precise. Most of our ideas will be nonsense, including mine, but you've got to put them out there and push them and see what happens.
I will say I've gotten some stiff pushback. For example, some philosophers have published papers
recently where they give the following argument
against my Darwinian theory.
They'll say, look, Hoffman uses evolutionary game theory
to show that space and time and physical objects and organisms
don't exist.
Well, he's got himself, they say, into an unenviable dialectical situation.
Either evolutionary game theory faithfully represents
Darwin's ideas, or it doesn't.
They say, okay.
So if it doesn't, then he can't use it to say
that organisms and resources are not fundamental
in space-time.
And if it does faithfully represent Darwin's ideas,
well, Darwin's ideas are that space-time is fundamental and there are organisms and resources.
So it couldn't possibly contradict that.
So either way, Hoffman is screwed, right?
There's nothing he can do.
And that's been published, actually, in highly valued philosophy journals.
And my response is quite simple.
It misunderstands science completely.
Every scientific theory has,
when you write it down mathematically,
it has a scope and its limits.
And the mathematics tells you both the scope
and the limit.
So for example, just to be very concrete,
Einstein's theory of gravity.
In, I think, 1907 or so, he had this big idea: if I were standing on a weighing machine in an elevator and all of a sudden the cord were cut and I were in free fall, I would all of a sudden be weightless. That was his big idea for his theory of gravity. It took him years, seven or eight years, to actually work out the mathematics, but he wrote down his field equations. So those field equations are Einstein's mathematics to capture his idea that space-time is fundamental and has certain properties. Well, a year after he published it, Schwarzschild, a German scientist, discovered that they entailed black holes. And we've eventually found out that the theory entails that space-time itself has no operational meaning beyond 10 to the minus 33 centimeters.
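For concreteness, these are the standard textbook forms of the field equations and the Schwarzschild radius being alluded to; the notation is the conventional one and is not taken from the conversation itself:

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}, \qquad r_s = \frac{2GM}{c^2},$$

where $r_s$ is the radius of the event horizon that Schwarzschild's solution of the field equations implies for a mass $M$.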
So we could use the same argument that's been used against me against Einstein. Now look: either Einstein's field equations capture his ideas faithfully or they don't. If they don't, then we couldn't use them to show that space-time is not fundamental. And if they do, they couldn't possibly show that space-time is not fundamental.
That last step is the wrong one.
The equations are there to show you the limits
of your concepts.
They give you precision, and that's what these philosophers have missed,
is that the equations that we write down tell us
not just the scope,
but the limits of our theories. And that's why science is so valuable because it tells us your
theory, your assumptions go this far and no further. So that's all I've done with Darwin's theory of evolution: to say, this is how far the story goes. That also sounds to me very much
like a vindication of the fundamental claim of the
pragmatists, which is that we accept something as true without noticing that what we mean
is true in a time frame with certain implications for instantiation and something like that.
And so true is a lot more like, does the bridge stand up when a hundred cars go across it?
It's not some final, comprehensive, all-encompassing definition of the truth for all time, and you've already made the case that it can't be, because that truth is an ever-receding goal.
It's always bounded, okay.
So when I came across that, I thought, okay,
well, bounded by what?
And it's well, it's bounded by our aim.
And then that's bounded by our motivation. And then that's nested inside a Darwinian world. Okay. Now, let's go after the game theory.
Well, let me just say one thing about that. Sorry. Yeah. Yeah. I would just say that the very deep spiritual traditions really say that up front. Like, the Tao Te Ching starts off, it says the Tao that can be spoken of is not the true Tao. Once you
understand that, then you have to read the rest of it. That's a good example,
because that's a great book. That's a great book. And I think that that's also
the way we should think about our science. The science that can be spoken of is
not the final reality. But given that, it's a wonderful thing to do science.
We should do science and we should do it very rigorously.
But we should always understand that if we're talking about a theory of everything,
it should be with a wink and a nod, because there is no theory of everything that we can write down.
The theory of everything that we've discovered so far, maybe,
but it will never be the final theory of everything.
Right, and it might have a broader range of potential applications as well, but
that doesn't mean that we've exhausted the landscape of comprehensive theories.
Right?
Okay, so now the philosophers that you described as objecting to your theory said that if
evolutionary game theory is correct, and it models Darwin's propositions appropriately,
then.
Well, so game theory is extremely interesting to me, although I wouldn't say I'm an expert in its comprehension, but my understanding, and it seems to me to be something like this, is that if you iterate interactions, an ethos of one form or another emerges.
So for example, if you run tit-for-tat simulations, you find out that the best trading strategy is: cooperate, but slap when necessary, and then forgive, something like that. And so, what it points to very interestingly is something
like a concordance between objective reality in so far as objective reality is an emergent
pattern coming out of iterative interactions
and something like an ethos. So the first question I have is, like, why are you interested in evolutionary game theory? And why do you think that it is a valid representative, a more differentiated representative, have I got the language right, of Darwinian theory?
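As an aside, here is a minimal sketch, not taken from the conversation, of the kind of iterated prisoner's-dilemma simulation being gestured at here; the payoff values, strategy names, and round count are illustrative assumptions, with tit-for-tat playing the "cooperate, slap when necessary, then forgive" role.

```python
import random

# Illustrative payoff matrix for one round of the prisoner's dilemma:
# (my payoff, their payoff) indexed by (my move, their move); C = cooperate, D = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move (retaliate, then forgive)."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def random_player(my_history, their_history):
    return random.choice(["C", "D"])

def play(strategy_a, strategy_b, rounds=200):
    """Iterate the game and return the total payoff accumulated by each strategy."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))
    print("tit-for-tat vs tit-for-tat: ", play(tit_for_tat, tit_for_tat))
    print("tit-for-tat vs random:      ", play(tit_for_tat, random_player))
```

In a population that also contains other conditional cooperators, tit-for-tat's willingness to cooperate and to forgive tends to earn it a higher total payoff than unconditional defection, which is roughly the "emergent ethos" point being made.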
Well, I'm interested in it because, within the field of evolutionary theory itself, evolutionary game theory is taken as the prized mathematical tool for really understanding things.
So that's just the framework of the science itself.
So that's accepted as far as you're concerned.
Yeah, of course, there's always debate, but for the vast majority it's the received opinion.
So, as a scientist, if I wanted to analyze Darwin's theory for this issue of truth, and
I wanted to do it rigorously, the tool was evolutionary game theory.
That was the tool to use.
And that's not because I think it's the final word or the truth, it's just our current state of play
in the field.
Right now, that's the best we have,
and I wanted to use the best tool we have.
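For reference, the mathematical core of evolutionary game theory being pointed to here is usually written as the replicator dynamics; the form below is the standard textbook one, not a formula from the conversation:

$$\dot{x}_i = x_i\left(f_i(x) - \bar{f}(x)\right), \qquad \bar{f}(x) = \sum_j x_j\,f_j(x),$$

where $x_i$ is the frequency of strategy $i$ in the population, $f_i(x)$ is its fitness payoff, and $\bar{f}(x)$ is the population-average fitness: strategies that outperform the average grow in frequency, regardless of whether they track truth.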
And that's the way we're always pulling ourselves
up by the bootstraps in science, right?
We always say, these are the best theories we have
and the best tools we have so far.
Of course, our goal is not to prove that we're right.
Our goal is to find the limits of our current theories and transcend them.
So we're looking for the best tools that will say, aha, Darwin goes this far and no further.
Space-time goes this far. High-energy theoretical physics, Einstein's theories, wonderful theories.
They're an incredible gift.
They go to 10 to the minus 33 centimeters and they stop.
That gift stops right there. And now we have to go entirely outside.
And that will be the never-ending pattern of science: whatever the scientists are finding outside of space-time,
that will just be our next baby step and we'll analyze that and then say,
okay, what's beyond that and beyond that?
And science will continue to go.
So as long as you recognize that that's the game, you'll realize that there's no theory of everything in science. And then the question is: who am I, who are we, that are able to play this game? And that's a very interesting question.
Well, you know, there's lots of things I'd like to ask you about, but that's a pretty good place to stop, and we're damn near at an hour and thirty. So I hope I have the privilege of furthering my discussion with you at some point in the not-too-distant future.
I would like to ask: is there anything in closing that you would like to bring to the attention of the listening audience, the watching audience, that you think we needed to cover to make what we have covered comprehensive? Or is this also, in your estimation, a good place to stop?
I'll just say one little thing, I guess.
And that is some people might think, well,
he's got the story of consciousness outside of space time.
So what? Who cares?
And I would agree with that, unless we did something more.
So what we're trying to do now as scientists is not just to say we have this mathematical model of consciousness outside of space-time.
We just published a proposal for how to actually test it.
So we're going to have a projection into space time.
We're working on that projection.
We'd like to model the inner structure of the proton. We would like to have a dynamics of conscious agents that projects down and gives us what's called the momentum distributions of quarks and gluons inside a proton, at all the Bjorken x and Q squared, the different spatial and temporal resolutions that particle physicists have studied.
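For orientation, the quantities being described here are conventionally written as parton distribution functions; the notation below is the standard particle-physics one, not something spelled out in the conversation:

$$f_i(x, Q^2), \qquad i \in \{u, d, s, \dots, g\},$$

the density of partons of type $i$ (a quark flavor or the gluon) carrying longitudinal momentum fraction $x$ (Bjorken $x$) of the proton when it is probed at resolution scale $Q^2$. Reproducing these measured distributions is the concrete target being described.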
And the reason we're going there
is not because I think that's the most important application
of a theory of consciousness. It's the most accessible one.
That's the simplest part of our science right now.
Ultimately, of course, there are the neural correlates of consciousness in the brain, and we want to understand that.
But that's really complicated.
So we're going to go after, if we can model the proton
and get it exactly right,
get the momentum distributions to several decimal places,
that doesn't mean our theory is right,
but it does mean it can't be dismissed out of hand. And so that's what our goal is: to take a theory of consciousness, not just in a very, very hand-wavy way, but to actually get in there and predict the inner structure of the proton in great detail. If we can do that, then I would say we can then construct and move up, you know, to molecules and then ultimately to neural systems and the brain, and try to understand the neural correlates of consciousness.
But not the neural correlates, the brain does not cause consciousness on this model.
The brain is merely a symbol inside the headset, right?
So, in fact, I would say that neurons do not even exist when they're not perceived.
Neurons cause none of our behavior.
And yet I'm a cognitive neuroscientist.
And I think that we should study it; neuroscience is wonderful and we need more funding for it because it's more complicated than we thought.
We thought: we look inside the brain, we see neurons, and that's the reality, there are neurons.
No, that's the interface description of something that's much more complicated.
We have to reverse engineer neurons to this network of conversations outside of space
time.
So we need more funding for neuroscience.
It's much more complicated.
So I'll just be a little brief; of course, as you can imagine, I'm talking about something that could take hours to go into in detail, but just to put those out there and say these are objections people might have, and that's where we're headed.
Okay, well, I do have one other question that I guess I do have to throw out. So
you have a very radical conception of consciousness. What has that done for you
existentially, do you think? I mean, you're obviously thinking about the place of consciousness,
while you're thinking about it existentially.
You're thinking about the place of consciousness in the cosmos,
and you regard it as a fundamental reality.
So what has that done to the manner in which you contemplate your own
say mortality or the purpose of your life?
And what's that done for you on that side of things?
Quite a bit. It's really hit me in the face because I'm intuitively as much a physicalist
and a materialist as anybody else. Right? I'm wired up to believe all that. And so it's
come as a terrible shock to me. My whole self image has had to change. My best science says that, you know, my body is just an icon in a headset.
So in some sense, it's just an avatar.
This body is just an avatar.
And so death is more like taking off the headset.
But my emotions don't agree with that.
So I've got this really interesting tension, right?
Well, that's probably just as well.
Right, exactly.
So I do spend a lot of time in meditation, and my father was a Protestant minister, a fundamentalist Protestant minister.
I was raised in the Christian church, and so I look at those points of view. I look at the Eastern mystical stuff.
I meditate myself, and my ultimate thinking about this is, as I said, we can never have a theory of everything.
And that includes of who I am. So the question about who I am, my best guess right now is,
at the deepest level, I and you are, in fact, the one consciousness just looking at itself
through different avatars. So it's really the one using a Jordan avatar to talk to the one in a
Hoffman avatar, and that's what's going on here. And in that sense, are you responsible for being the best possible avatar you can be, so to speak? Well, in some sense, within this projection,
within this headset, morals of a certain kind are the rules of the road. But my guess
is that when we take the headset off, we'll just laugh. That was what we had to do in
this headset. But I am not this avatar.
I am the consciousness that transcends space and time.
Well, you know, the next time we talk, maybe that's a road we should wander down because
we didn't get into the metaphysics of ethics, let's say during this conversation. And
there's plenty of that. That's obviously a whole other area.
Okay, okay. Well, that would be good. All right. Well, so to everyone watching and listening,
thank you very much for tuning into this podcast. As most of you know, I'm going to talk to Dr.
Hoffman for another half an hour behind the Daily Wire Plus platform. And I'm going to see if I
can find out where in the world his interests stemmed from and how they initially manifested
themselves and developed across time.
We'll do that as much as we can in half an hour.
Thank you to the crew here up in Northern Ontario for journeying up here to do this podcast.
Thank you, Dr. Hoffman, very much for your time today, and to the Daily Wire Plus people for
making this possible.
That's also much appreciated and we'll see all of you watching and listening, hopefully
on another podcast.
Thank you very much, sir.
Thank you, Jordan.