Theories of Everything with Curt Jaimungal - Joscha Bach Λ Karl Friston: AI, Death, Self, God, Consciousness
Episode Date: December 12, 2023

Karl Friston, Joscha Bach, and Curt Jaimungal delve into death, neuroscientific models of AI, God, and consciousness.

SPONSORS:
- HelloFresh: Go to https://HelloFresh.com/theoriesofever... and use code ...theoriesofeverythingfree for FREE breakfast for life!

TIMESTAMPS:
- 00:00:00 Introduction
- 00:01:47 Karl and Joscha's new paper
- 00:09:13 Sentience vs. consciousness vs. the self
- 00:21:00 Self-organization, thingness, and self-evidencing
- 00:29:02 Overlapping realities and physics as art
- 00:41:05 Mortal computation and substrate-agnostic AI
- 00:56:38 Beyond von Neumann architectures
- 01:00:23 AI surpassing human researchers
- 01:20:34 Exploring vs. exploiting (the risk of curiosity in academia)
- 01:27:02 Incompleteness and interdependence
- 01:32:25 Defining consciousness
- 01:53:36 Multiple overlapping consciousnesses
- 02:03:03 Unified experience and schizophrenia "insights"
- 02:10:16 Psychedelic experiences
- 02:22:20 Institutional rot in science
- 02:23:31 OpenAI CEO controversy
- 02:32:22 Existential crises as one delves into consciousness
- 02:35:06 Podcast wrap-up

NOTE: The perspectives expressed by guests don't necessarily mirror my own. There's a versicolored arrangement of people on TOE, each harboring distinct viewpoints, as part of my endeavor to understand the perspectives that exist.

THANK YOU: To Mike Duffy of https://dailymystic.org for your insight, help, and recommendations on this channel.

- Patreon: / curtjaimungal (early access to ad-free audio episodes!)
- Crypto: https://tinyurl.com/cryptoTOE
- PayPal: https://tinyurl.com/paypalTOE
- Twitter: / toewithcurt
- Discord Invite: / discord
- iTunes: https://podcasts.apple.com/ca/podcast...
- Pandora: https://pdora.co/33b9lfP
- Spotify: https://open.spotify.com/show/4gL14b9...
- Subreddit r/TheoriesOfEverything: / theoriesofeverything
- TOE Merch: https://tinyurl.com/TOEmerch

LINKS MENTIONED:
- Mortal Computation: A Foundation for Biomimetic Intelligence (Karl Friston): https://arxiv.org/abs/2311.09589
- A Path to Generative Artificial Selves (Joscha Bach and Liane Gabora): https://osf.io/preprints/psyarxiv/y3tzs
- Podcast w/ Joscha Bach on TOE (solo): https://youtu.be/3MNBxfrmfmI
- Podcast w/ Joscha Bach & Ben Goertzel on TOE: https://youtu.be/xw7omaQ8SgA
- Podcast w/ Joscha Bach & John Vervaeke on TOE: https://youtu.be/rK7ux_JhHM4
- Podcast w/ Joscha Bach & Michael Levin on TOE: https://youtu.be/kgMFnfB5E_A
- Podcast w/ Joscha Bach & Donald Hoffman on TOE: https://youtu.be/bhSlYfVtgww
- Podcast w/ Karl Friston solo on TOE: https://youtu.be/SWtFU1Lit3M
- Podcast w/ Karl Friston & Michael Levin on TOE: https://youtu.be/J6eJ44Jq_pw
- Podcast w/ Karl Friston & Anna Lembke on TOE: COMING
- Podcast w/ Michael Levin on TOE: https://youtu.be/Z0TNfysTazc
- Podcast w/ Chris Fields on TOE: https://youtu.be/J6eJ44Jq_pw
- I Am a Strange Loop (Douglas Hofstadter): https://amzn.to/3GGqjpM
Transcript
There could be multiple consciousnesses. Of course, one will not be aware of the other
and possibly not even able to infer the agency, even if it was.
We do not become conscious after the PhD. We become conscious before we can drag a finger.
So I suspect that consciousness allows the self-organization
of information processing systems in nature.
Joscha Bach and Karl Friston, today's theolocution guests,
are known for their work in artificial intelligence, neuroscience, and philosophical inquiry.
Bach, an AI researcher, delves into cognitive architectures and computational models of consciousness and psychology.
Friston, a neuroscientist, is lauded for his development of the free energy principle, a theory explaining how biological systems maintain order.
This framework for neural processes is rooted in thermodynamics and statistical mechanics. Joscha has been on this podcast several times: once solo, another with Ben Goertzel, another with John Vervaeke, another with Michael Levin, and one more with Donald Hoffman. Karl Friston has also been on several times: twice solo, once with Michael Levin, and another with Anna Lembke. That one's coming up shortly.
The first hour of today's talk is broadly on agreements so that we can establish some terms.
The second hour, roughly, is on points of divergence, and the third is on darker philosophical implications, as well as how to avoid the existential turmoil precipitated by earnestly contending with these heavy ideas.
For those of you who don't know, my name is Curt Jaimungal, and this is a podcast called Theories of Everything, where we investigate theories of everything from a physics perspective, primarily, as my background is in math and physics, but I'm also interested in the larger questions about reality, such as: what is consciousness, what constitutes it, what gives rise to it, and what counts as an explanation? You can think of this channel as exploring those questions that you, well, at least I, sit and ponder incessantly, day and night. Enjoy this theolocution with Joscha Bach and Karl Friston.
All right. Thank you all for coming on to the Theories of Everything podcast.
What's new since the last time we spoke?
Joscha, the last time was with Ben Goertzel,
and Karl, the last time was at the Active Inference Institute.
So, Joscha, please.
Oh, there's so much happening in artificial intelligence.
There's more happening in a weekend than a normal TV show has in seven seasons.
So it's hard to say what's new.
For me personally,
I've joined a small
company that is exploring an alternative
to the perceptron. I think that
the way in which
current neural networks work is very unlike
our brain. And while I don't
think that we have to imitate the brain, we have to
figure out what kind of mathematics the brain
is approximating. And
we are trying to make headway with that.
Great. And Karl?
Very similar, actually.
I guess what's new in the larger scheme of things, of course,
is the advent of large language models
and all the machinations that surround that
and the focus that that has caused in terms of, you know, what do we require of intelligent systems, what do we require of artificial intelligence, what's the next move toward generalized artificial intelligence, and the like. So that's certainly been a focus of discussions, both in academia and in industry, in terms of positioning ourselves for the next move and the implications it has, both in terms of understanding the mechanics of belief updating and the move from the age of information to the age of intelligence, but also the philosophy and the principles.
And interestingly, the conclusion amongst me and my friends is exactly what Joscha articulated, which is a commitment to a more biomimetic understanding of natural intelligence.
Right.
I read your paper, Mortal Computation: A Foundation for Biomimetic Intelligence.
And while we can get right into that, Karl, on page 15 you define what a mortal computation is as it relates to Markov blankets.
Can you please recount that?
And further, you quote Kierkegaard, who says that life can only be understood backwards but must be lived forwards.
So how is that connected to this?
Right.
You are a voracious reader. That was a few days ago.
I do my research, man.
And also, I did not write that. That was my friend Alexander.
You can take the credit. We'll remove this part.
Uh, I can't take the credit, because I don't know about any of the philosophy. I thought those were largely his ideas, but they resonate, certainly, again, with this sort of notion of a commitment to a biomimetic understanding of intelligence.
And that paper, that particular paper,
sort of revisits the notion of mortal computation
in terms of what does it mean to be a mortal computer and the importance of the physical instantiation, the substrate on which the processing is implemented as being part of the computation in and of itself. So, you know, that speaks closely to all sorts of issues.
You know, the potential excitement about neuromorphic computing,
if you're a computer scientist, the importance of in-memory processing.
So, technically, you're trying to elude the von Neumann bottleneck and the memory wall. And I introduce that because that speaks to, from an academic point of view, the importance of efficiency in terms of what is good belief updating, what is good, you know, what is intelligent processing; but from a more societal point of view, the enormous drain on our resources incurred by data farms, by things like large language models, in eating up energy and time and money in a very non-biomimetic way.
So I think mortal computation as a notion,
I think has probably got a lot to say about debates in terms of direction of travel,
certainly in artificial intelligence research.
But you'll have to unpack the philosophical reference for me.
So, Joscha, you also had a paper called A Path to Generative Artificial Selves with your co-author Liane Gabora, sorry, I don't even know if I'm pronouncing that correctly. Toward the end of the paper you had some criteria about selfhood, something called a maxRAF, which has RAFs as a subset, and there were about six or seven criteria. Can you outline what you were trying to achieve with that, what a RAF is, and what personal style has to do with any of this?
Liane likes to express her ideas in the context of autocatalytic networks.
But if we talk to a general audience, I think, rather than trying to unpack this particular strain of ideas and translate it into the way in which we normally think about these topics, it's easier to start directly from the bottom,
from the way in which information processing systems in nature
differ from those that we are currently building on our GPUs.
Because the stuff that we build on our GPUs is designed from the outside in.
We basically have a substrate with well-defined properties.
We design the substrate in such a way that it's fully deterministic,
so it does exactly what we want it to do.
And then we impose a function on it that is computing exactly what we want it to compute.
And so we design from scratch what that system should be doing,
but it's only working because the system is at some lower level already
implementing all the necessary conditions for computation, and we are implementing a function approximator on it that does function approximation, to the best of our own understanding, with a global function that is executed on this neural network architecture. In biology this doesn't work, and also in social systems; these are all systems where you could say they are built from the inside out.
So basically there are local agents, cells, that have to impose structure on the environment.
And at some point they discover each other and start to collaborate with each other and replicate the shared structure.
But before this happens, there's only chaos around them, which they turn gradually into complexity.
And so the intelligence that we find in nature is something that is growing from the inside out
into a chaotic world, into an unknown world.
And this is a very different principle
that leads to different architectures.
So when we think about an architecture
that is growing from the inside out,
it needs to be colonizing in a way,
and it needs to impose an administration on its environment that basically yields more resources, more energy, than the maintenance of this administration costs. And it also needs to be able to defend itself against competing administrations that would want to do the same thing. So, right, you are the set of principles that outcompetes all the other principles that could occupy your volume of space.
And the systems that do this basically need to have a very efficient organization,
which at some point requires that they model themselves, that they become to some degree self-aware.
And I think that's why, from a certain degree of complexity, the forms of organization that you find both in minds and in societies need to have self-models.
They need to have models about what they are
and how they relate to the world.
And this is what I call sentience in the narrow sense.
It's not the same thing as consciousness.
Consciousness is this real-time perceptual awareness of the fact that we are perceiving things, which creates our individual subjective now. But sentience is something that I think can also be attained by, say, a large corporation that is able to model its own status, its own existence, in a legal and practical and procedural way, and that is training its constituents, the people who enact that agent, in following all the procedures that are necessary for keeping that sentient larger system, which is composed of them, alive.
And so when we try to identify principles
that could be translated into nervous systems
or into organisms consisting of individual
self-interested cells, we see some similarities.
We basically can talk about how self-stabilizing agents emerge in self-organizing systems.
So, Karl, I know quite a slew was said.
If you don't mind saying, what about what Joscha had spoken about coheres with your model, your research, or what contravenes it?
No, I was just marveling how consilient it is.
Yeah, using a lot of my favorite words.
It also reminds me of people like Mike Levin.
It'd be nice to hear.
I don't know if Joscha's had the chance to speak with Mike,
but he would, again, I think, fully endorse that perspective.
Actually, Joscha has spoken to Michael Levin,
and the link to that will be in the description.
I'd never heard the inside-out metaphor before, but I think that's absolutely right.
It sort of chimes with Andy Clark's notion of sense-making and sentience,
that it's a very constructive inside-out process.
It's not just trying to extract information from the sensorium.
You're actually actively sampling and actively generating hypotheses for sensations
And crucially, you are in charge of the sensory data that you are making sense of, which speaks exactly to, I think, what Joscha was saying, you know, in terms of designing and orchestrating and creating an ecosystem in that sort of inside-out way.
That sounds absolutely consistent with, certainly, the perspective on self-organization to non-equilibrium steady state. So, talking about sort of stable, sustainable kinds of self-organization, again, that you see in the real world, and quintessentially biomimetic. You know, if you wanted, I think, to articulate what we've just heard from the point of view of a physicist who's studying non-equilibrium steady states, that's exactly the kind of thing that you'll get, even, you know, to the notion of the increasing complexity of a structural sort that requires this sort of consistent, harmonious ecosystem of exchange, which would be read, for example, as generalized synchrony or synchronization of chaos in dynamical systems theory.
Another key point that was brought to the table was this notion of how essential it is to have a self-model.
Immediately I was reminded of the early cybernetics movement
and notions of the good regulator theorem from Ross Ashby.
But I think Joscha has taken that slightly one step further than Ashby and his colleagues, in the sense that it is a model of self.
And I think that's an important move because, you know, you can have a good regulator; you can have a thermostat that arguably has an implicit,
mortal, computational model of its world.
But to be an agent, I think you have to have a model of you as an agent in that ecosystem.
Almost invariably, when I speak to both of you, the concept of self comes up. I think we could do a control F in the transcript and we'll see that it's orders of magnitude larger than the
average amount of times that that word is mentioned. And I'm curious as to why. Well, in part, that's
because of the channel, the nature of this channel. But is there something about the self that you all
are trying to solve? Are you trying to understand what is the self? Are you trying to understand yourselves?
Karl or Joscha, if you want to tackle that.
Well, the problem of naturalizing the mind
is arguably the most important remaining project
of human philosophy.
And it's risky and it's fascinating.
And I think it was at the core of the movement when artificial intelligence was started. It's basically the same idea that Leibniz and Frege and Wittgenstein pursued.
And basically this idea of mathematizing the mind.
And the modern version of mathematics is constructive mathematics, which is also known as computation.
And this allows us to make models of minds
that we can actually test by re-implementing them.
It also allows us to, at some point,
connect philosophy and mathematics,
which means that we will be able to say things in a language
that is both so tight that it can be true
and we can determine the truth of statements in a formal way,
and on the other side, so deep and rich that we can talk about the actual reality that we experience and observe. And to close this gap between philosophy and mathematics, we need to automate the mind, because our human minds are too small for this. We need to identify the principles that are approximated in the workings of biological cells that model reality, and then scale them up in a substrate that can scale up better than the biological computations in our own skulls and bodies.
And this is one of the most interesting questions that exists.
I believe it is the most interesting and most important question that exists.
The understanding of our personal self
and how this relates to our mind
and how our mind is implemented in the world
is an important part of this.
And while it's personally super fascinating,
I guess also for many of the followers of your channel,
it's quite programmatic in its name and direction,
this is to me almost incidental.
On the other hand, I've noticed an absence of seriousness in a lot
of neuroscientists and AI researchers who do not actually realize
in their own work that when they think about the mind and mental processes and mental
representations and so on, that they actually think about their own
existential condition and have to explain this and integrate this. So we
have to account for who we are in this way. And if we actually care about who we are, we have to find models that allow us to talk about this in an extremely strict, formal, and rational way. And our own culture has, I think, a big gap in its metaphysics and ontology, which happened after we basically transcended the Christian society. We kicked out a lot of terms that existed in the Christian society to talk about mind, consciousness, intentionality, and so on, because they seemed to be superstitious, overloaded with religious mythology, and not tenable. And so, in this post-Enlightenment world, we don't have the right way to think about what consciousness and self and so on are. And part of the project of understanding the mind is to rebuild these foundations, not in any kind of mythological and superstitious way, but by building on the first-principles thinking that we discovered in the last 200 years, and then gradually building a terminology and language that allows us to talk again about consciousness and mind and how we exist in the world.
So for me, it's a very technical notion, the self.
It's just the model of an agent's interest in the universe that is maintained by a system that also maintains a model of this universe. So my own self is basically a puppet that my mind maintains
about what it would be like if there was a person that cared.
And I perceive myself as the main character of that story,
but I also notice that there is intelligence outside of this existing,
coexisting with me in my own brain,
that is generating my emotions and generating my world model,
my perception, and so on, basically keeping the score. And all the pain and pleasure that I experience is generated by intelligent parts of my mind outside of my personal self. And I can also get to a point where I transcend this distinction and realize that I am the one that creates this. But this is not going to be a human I, not a human personal self, that realizes this,
but a self-identification of the mind itself
that is producing a model of reality
and of the organism's interests in it.
Well, Karl, what's left to be said?
Well, he's just said it.
I can see there's a pattern here.
All right, I'll say what he just said in different words,
if I can.
So, yeah, I love this notion of using the word naturalization.
I think naturalizing things in terms of mathematics and possibly physics
is exactly the right way to go.
And it does remind me of my friend Chris Fields's notion that our job is basically to remove any bright lines between physics, biology, psychology, and now philosophy. And I think mathematics is the right way to do that, or at least the formal way to do it, which, again, comes back to mortal computation.
So I think that's a really important point.
And it does speak, I think, to a broader agenda, which was implicit in Joscha's review, which is the ability to share a common ground, to share a generative model of us in a lived world, where that lived world contains other things like us. So one requisite, I think, for just existing in a shared world is actually having a shared model. And then that brings all sorts of interesting questions to the table about: is my model of me the same kind of model that I'm using of you, to explain you, to ascribe to you intentionality and all those really important states of being, or at least hypotheses, from the point of view of predictive processing accounts: hypotheses that I am in this mental state and you are in that mental state? So I think that was a really important thing to say, that we need to naturalize our understanding of the way that we work in our worlds.
In relation to the importance of self,
again, I'm just thinking from the point of view of a physicist,
that you cannot get away from the self
if you just start at the very beginning
of information theoretic treatments,
self-information, for example.
That's where it all starts for me,
certainly, regarding variational free energy as a variational bound on self-information. And then you talk about self-organization, all the way through to the notion of self-evidencing, as Jakob Hohwy would put it.
At every point, you are writing down or naturalizing the notion of self at many, many different levels. I am a thing, and by virtue of saying I, I am implying, inducing, a certain self-aspect to me as a thing. And again, that's the starting point for, certainly, the free energy principle's approach to this kind of self-organization.
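For readers who want the quantity Karl is referring to written down, here is one standard textbook form; the notation is illustrative and not quoted from either guest's papers. With o for observations, s for hidden states, q(s) for an approximate posterior belief, and p(o, s) for a generative model, variational free energy F upper-bounds self-information, i.e. surprise, -ln p(o):

```latex
% Variational free energy as an upper bound on self-information (surprise).
% Notation (o, s, q, p) is illustrative, not taken from the guests' papers.
F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
     \;\ge\; -\ln p(o)
```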
I repeat, I think Joscha is taking us one step further, though, in terms of, you know, we can still have ecosystems of things. But when those things now start to have to play the game of modeling whether you caused that or whether I caused that, that now brings to the table an important model of our world: that there is a distinction between me and you. And as soon as you have this fundamental distinction, which of course would be something that a newborn baby would have to spend, you know, hours, possibly months, building, realizing that mum is separate from the child herself. So I think that's terribly important.
One final thing, just to speak again to the importance of articulating your self-organization in terms of things like intentions and beliefs and stances. I think that's also quite crucial, and what it means, if you want to naturalize it mathematically, is you have to have a calculus of beliefs. So you're talking basically about a formulation either in terms of information theory or probability theory, where you're now reading the probabilistic description of this universe, and the way that we are part of that universe, in terms of beliefs, and starting to think about all of physics in terms of some kind of belief updating.
Karl, you used the phrase "shared model." Now, is that the same as a shared narrative?
It depends on how you understand the notion. So if we talk about a self-model as a special kind of generative model that actually entertains the hypothesis that I am the cause of my sensations, and, you know, Joscha took us through the myriad of sensations that I need to explain, then we're
talking about self-models as part of my generative model that includes this notion that I am the agent
that is actually gathering the data that the generative model is modeling.
So the generative model is just a simple specification.
Again, from the physics perspective, it's actually just a probabilistic description
of the characteristic states of something, namely me, that can then be used to describe the kind of belief updating that this model would have to evince in order to exist when embedded in a particular universe.
Other readings of a generative model would be exactly the common ground that we all share. Part of my
generative model would be the way that I broadcast my inference, my belief updating using language,
for example. That requires a shared generative model about the semiotics and the kind of
way that I would articulate or broadcast my beliefs.
That generative model is a model of dynamics.
It's a model not just of the state of the world, but of the way the world changes: the transition dynamics, the trajectories, the paths.
And I'm using your word narrative just as a euphemism
for a certain kind of path through some model state space.
So if you and I share the same narratives in the sense that we are both following the same conversation
and the same mutual understanding, we are sharing our beliefs through communication,
then that is exactly what I meant.
For that to happen, we have to have the same kind of generative model: we have to speak the same language, and we have to construe things and infer things in exactly the same kind of way. I just wanted to slip in frames of reference, and alignment of frames of reference, as another way of looking at that kind of thing.
Yeah. Joscha, is there anything there that you'd like to respond to?
I suspect that what makes this project so difficult is that our models of reality are necessarily coarse-grained.
They don't describe the universe as it is
in a way in which it can exist from the ground up,
but they start from the vantage point of an observer
that is sampling the universe at a low resolution,
both temporal and spatial, and only very few dimensions,
and with a model that is built on a quite unreliable indeterministic substrate.
And this puts limitations on what we can understand with our unaugmented mind.
I sometimes joke that the AGIs of the future will like to get drunk until the point
where they can only model reality with 12 layers or so and they have the same confusions as human
physicists when trying to solve the puzzles that physics poses. And they might find this hilarious
because many of the questions that have been stumping us during the last 130 years, since we have had modern physics, might be easy to resolve if our minds were just a little bit better.
We seem to be scraping the boundary
of our understanding for a long time.
And now we are, I think, at the doorstep of new tools
that can solve some puzzles that we cannot solve
and then break them down for us
in a way that is accessible to us
because they will be able to understand
the way in which we model the world.
But until then, we basically
work in overlapping realities.
We
have different perspectives on the world
and the more we dig down,
the more subtle the differences between
our models of reality become.
And this also means that if we have any kind of complex issue,
we tend not to be correct in groups.
We tend to be only sometimes individually correct in modeling them,
and we need to have a discourse between individual minds
about what they observe and what they model,
because as soon as a larger group gets together
and tries to vote about how to understand a concept
like variational free energy,
all the subtleties are going to be destroyed, because not all of the members of the group will understand what we're talking about, right? So they will replace the more subtle
understandings with a common ground that is not modeling reality with the degree of resolution
that would be necessary or they're not able to break things down to first principles.
And this first principles understanding, I think,
is an absolute prerequisite when we want to solve foundational questions.
I sometimes doubt whether physics is super well equipped for doing this.
When I was young, I thought physics is about describing physical reality,
the world that we are in at some level.
And now I see that physics is an art.
It's the art of describing arbitrary systems using short algebraic equations.
And the stuff that cannot be described with short algebraic equations yet, like chemistry, is ignored by physicists and left to lesser minds.
And only 8% of the physicists after their degree end up working in physics in any
traditional sense. The others work in process design and finance and healthcare and many,
many other areas where you can apply the art of modeling arbitrary systems using short algebraic
equations. And whenever that doesn't work, physicists are not worth very much. I've seen
physicists trying to write programs,
and many of them have this bias of trying to come up with algebra and geometry
where calculus would be much better or where automata would be much better.
And nature doesn't care about this.
Nature is using whatever works and whatever can be discovered.
And very often that is not close to the toolkit of this intellectual tradition of the physicists. But I think it's sometimes helpful to see that
all these intellectual traditions that our civilization has built start out with some
foundational questions and then congregate around a certain set of methods. And it can be helpful to
just go to the outside of all these disciplines for a while and then move around between them and look at them and study their tools and see what common ground and what differences we can discover.
I was quite shocked when I learned that a number of machine learning algorithms had been discovered in the 80s and 90s by econometricians and were just ignored in AI and had to be reinvented from scratch.
And so I suspect there is a lot of these things happening in our Tower of Babel that we are creating across sciences
because our languages start to differ in subtle ways
and sometimes fundamentally mismodel reality or ignore it
to the point where I think most living neuroscientists
are practically dualists.
They will not say it out loud, because that's been frowned upon, but they don't actually see a way to break down consciousness, mind, and self into the stuff that would run on neurons. Or they don't even think about the causal structure in the same way as you would need to to get to this point. And as a result, they believe that thinking about these concepts is fundamentally unscientific, that it's outside of the pursuit of science, and they do this only in church on Sundays.
Yeah. So what's the solution to this Tower of Babel?
Of course, it's AI. The solution to everything is AI.
You basically need to build a system that can think better than us and help us with it.
Okay, Karl. Do you also see the problem similarly, and do you see the solution similarly?
I think I do, well, as long as it's a nice biomimetic AI. I love this notion. I hope no physicists are watching. And also, the only physicists that I know all want to do neuroscience or psychology, in addition to economics and healthcare, which is all small particle physics.
It's either neuroscience or small particle physics.
And as I get older, I am increasingly compelled by arguments that I've read from very senior,
old physicists that it's all about measurement, it's all about observation.
And in a sense, all of physics is just one of these generative models
that has this particular capacity to disseminate itself
so that we do have this common language and this common ground.
So, you know, just to reiterate one of Joshua's points,
you know, physics in and of itself is just another story
that we find particularly easy to share.
But I do take the point that even within physics,
there is this tendency to become siloed with my kind of common ground
as opposed to your kind of common ground.
So I know this notion of the overlap.
And I was just reflecting upon the veracity of that,
even in my little world.
So the free energy principle is unashamedly committed
to classical formulations of the universe
in terms of random dynamical systems and Langevin equations.
And that would horrify quantum physicists and quantum information theorists
who just wouldn't think about that.
Again, that's why I slipped in that reference to frames of reference earlier on, because what we're talking about now is the alignment of quantum frames of reference. But that uses a completely different language, and that, I think, is, you know, part of the problem that Joscha has been trying to bring to the fore: that what we need is something that's superordinate, that joins the dots, you know, and may well require transcending the particular common ground or physics or calculus or philosophies that have endured.
So if by that artificial intelligence is going to be one way of joining the dots
so that people in machine learning don't have to reinvent the wheel every generation,
then I think he's absolutely right.
Whether I call that artificial intelligence or not, I'm not so sure.
I think it would start to become part of a grander ecosystem that would have a natural aspect to it.
But perhaps I could ask you, Joscha: do you actually mean artificial intelligence in the sense that it doesn't have a mortal or a biological aspect to it?
Or do you just think something that goes beyond our own sense-making and self-modeling as individual scientists or people?
Maybe I don't understand your notions of mortality and biology completely.
To me, biology means that the system is made of cells,
of biological cells, of cells that are built on a carbon-cycle foundation, on certain chemical reactions that nature has discovered
and translated into machines made from individual molecules
that interact in very specific ways.
And it's the only agent that we have discovered to occur in nature, I think.
And all the other agents we discover are made by or of cells.
And mortality is an aspect of the way in which multicellular systems adapt to changing environments.
They have offspring that mutates and then gets selected against.
And as a result, we have a change trajectory that can be calibrated to the rate of change in an ecosystem.
And this is one of the reasons for mortality.
Another reason for mortality is, if you set up a system that has sub-optimal self-stabilization, it is going to deviate from its course. Like, imagine you build an institution like the FDA, and you set it up to serve certain purposes in society. After a few generations, the people in that organization to a very large degree start serving the interests of the organization
and the interests that have captured the organization.
And so it becomes not only larger and more expensive,
but at some point it's possibly doing more harm than good.
That doesn't mean that we don't need an FDA,
but it might mean that we have to make the FDA mortal, so it gets reborn every now and then
and can put itself back on track based on a specification that outside observers think is reasonable
rather than a specification that needs to be negotiated with the existing stakeholders within that organization
and the few people who are left outside.
And I think this is one of the most important aspects of mortality.
But imagine that all of Earth would be colonized by a single agent, something that is able to persist not only across organisms, but as a thinking system that is realizing what it is, that realizes that it's basically a thinking planet that is trying to defeat entropy for as long as possible, and to this end builds complexity.
Why would that system need to be mortal?
And would that system still be biological?
It would be self-organizing.
It would be dynamic.
It would be threatened with death, with non-existence, it would react to this in some way. But I'm not sure if biology and mortality are the right categories to describe it. I think these are more narrow categories that apply to biological organisms in the present setting of the world.
I picked up on a phrase you said, Karl, which is that one of the solutions may be AI. That's what you were saying in response to Joscha, which makes me think: had Joscha not mentioned AI as the resolution to the indecipherability across discipline boundaries, what would you have said a solution, or the solution, would be?
Well, I think the solution actually lies in what Joscha was just saying, in the sense that, you know, if the self-understanding is seen in the context of exchange with others, that provides the right kind of context. I think, I've used the word a lot now, but I'm talking about an ecosystem at any arbitrary scale, an ecosystem that provides the opportunity for self-evidencing, say, to use Jakob Hohwy's phrase, which just is a statement that you've got an itinerant, open kind of self-organization that maintains this minimum entropy state in exactly the same way that Joscha was intimating. And so I'm just thinking about, sort of, you know, what is implied in this conversation by mortal computation and mortality in the context of things that die.
That is an inevitable aspect of self-organizing systems that will endure over time
in the sense of minimizing the entropy of the states that they occupy.
And I do think that is the solution,
which is why I was pushing back against artificial intelligence,
but for a particular reason.
The way that mortal computation is framed, certainly in that paper on which I was the second author, is that immortal computers are built around software. So they are immortal in the sense that you can rerun the same program on any hardware. If the running of the software, and the processing that ensues, is an integral part of the hardware on which it is run, then it becomes mortal. And that's important because the opportunity for dying, if you are mortal, now creates the kind of, if you like, selective pressure from an evolutionary perspective of exactly the kind that Joscha was talking about.
That, you know, if you don't have the opportunity to die, if you don't have the opportunity to disassemble the FDA because it's no longer fit for purpose, then you will not have a sustainable self-organization that continually maintains a low entropy, in the sense that it has some characteristic, recognizable states. This applies to biological, social, possibly meteorological systems, and to a certain kind of mortality in which, for example, information about the kind of environment that I am fit to survive in and to learn about is part of my genomic structure. But to realize that, if you like, evidence accumulation through
evolutionary mechanisms, I have to have a life cycle.
I have to have, I have to die.
And I'm not talking, you know, I'm not implying that everybody has to die in order to live.
I'm implying that there has to be, there has to be some particular kind of dynamics.
There has to be a life cycle.
It could be an economic life cycle.
It could be boom and bust, for example.
But that has to be part of this self-evidencing,
and certainly an exchange in the kind of multicellular context that Joscha was mentioning.
So, by mortal, my reading of mortal in this particular conversation would be: yes, it is the kind of biological behavior that is characteristic of cells that self-assemble but also die. You know, one attractive metaphor that came to mind when talking about the FDA, an organization becoming too big for its own good and not being a good model of the system in which it is immersed, so it's not meeting customers' needs, it's not even meeting its own needs, would be a tumor. So, you know, you could understand a lot of the institutional pathologies
and geopolitical pathologies, possibly even climate change,
possibly even the current excitement about what's going to happen to OpenAI, or big tech.
All of this can, I think, be read in terms of a process of mortal computation at a certain scale,
where there is an opportunity for things to go away,
to dissolve.
That has to be the case in the same way
that either the tumor kills you,
or it necroses because it kills off its own blood supply.
It can't be any other way, really.
There is a third way.
You can evolve an immune response against tumors.
Organisms that live much longer, because they have slower generational change, typically have better defenses against tumors than shorter-lived organisms like us.
And basically, a tumor can be seen as a set of tissues or a subset of agents.
Like, you can, in principle, have a tumor in an ant colony that is playing a shorter game than the organism itself, than the larger system itself.
And you can sustain a number of tumors if your environment does not put too much pressure
on you.
But at some point, the tumors are going to bring you down.
And so, for instance, I think that the free world has to make at some point a decision
of whether it is accepting to be brought down and replaced by a different type of social order,
or whether it's going to evolve or build or construct or design an immune response against tumors
and criteria to identify
them and remove them. And I think that's not a natural law. At least I don't see how to prove
from first principles that we cannot overcome a problem like institutional calcification
or turning of institutions into tumor-like structures functionally. I think it might
be possible to do that.
The cell itself is not mortal.
The cell is pretty much immortal.
The cell is, individual cells can die and disappear,
but the cell itself is still the first cell.
It's just splitting and splitting,
and it's alive in all of us.
Every cell in our own body is still this first cell,
just split off from it. And so the way in
which organisms die and so on is just a detail in this larger project of the cell, which itself is
so far immortal. And when I talk about AI being the solution to everything, of course, I'm joking
a little bit. I'm just echoing some of the sentiment and part of the enthusiastic culture of my young field. But I'm only joking a little bit, because I think that AI has the potential to reverse engineer the general principles of a learning agent, of a system that is able to model the future and regulate for the future, and to realize these functions in an arbitrary way. And I would replace the notion of the hardware, the substrate: of course it's still hardware, but it can be an arbitrary substrate, and the substrate can also be to a large degree software, which means causal principles that are implemented ultimately on physics.
But this causal structure ultimately is a protocol layer that allows you to basically
implement a representational language in which an agent can realize itself as a causal structure.
And I think that AI is currently working on very different substrates than the biological ones, but there is a superset of these principles that can make AI substrate-agnostic. I think that the implication of the Church-Turing thesis is that it doesn't really matter which hardware you're using. In practice it does matter, because if the hardware is not very deterministic or doesn't give you a lot of memory or is very slow, you will notice big differences.
But if you abstract this away, the representational power and the potential for agency is not really
dependent on your hardware. It turns out that the hardware that we're currently using for AI
is much, much more powerful than the hardware that biology is using.
The reason why AI is so weak
compared to human minds or biological systems
is because the algorithms that we have discovered,
we have discovered them by hand.
These were people tinkering.
Sorry, what do you mean that AI is weak?
I mean that in order to get a system
that is almost coherent,
we need to train it with the entire internet,
with almost everything that humans have ever written. And as a result, we get a system that
is using tremendously more resources than the human brain has at its disposal. I'm not talking about the computational power that is implemented in an individual cell, which might be very large; but the part of the power of the individual cell that is actually harnessable by the brain for performing computation, that is very little. Only a small fraction of what the neuron is doing, beyond its own maintenance, housekeeping, metabolism, and communication with neighbors, is actually available for building computation at the brain level.
As an example, I sometimes use the Stable Diffusion weights, when they came out.
Stability AI is an AI company
that makes open source models, and they
made a vision model by training
these GPUs on
hundreds of millions of
images and text drawn from the
internet and cross-correlating them until
you can type in a phrase and then
get a picture that depicts that phrase.
It's amazing that this works at all.
It requires enormous computational power
because it's far less efficient compared to a human brain
that is learning how to draw pictures after seeing things.
And these weights, this neural network, they know everything.
Basically, they know how to draw all the celebrities
and how to draw all artistic styles and all the plants, and everything is in there.
And it's just two gigabytes.
You can download it.
It's only two gigabytes.
And it's like 80% of what your brain is doing is captured in these two gigabytes.
And it's so much more than what the human brain could reproduce.
It's absolutely brute forcing it.
At the same time, two gigabytes doesn't seem to be a lot,
which suggests that our own brain is probably not storing effectively
much more information than a few gigabytes.
That's very humbling.
And the reason why we can do so much more with it
and so much faster than the AI is not because biological cells
are so much more efficient than transistors.
It is because they are self-organizing
and have been at this game for quite some time.
They have figured out a number of tricks that human engineers haven't figured out so far.
Right.
Karl, do you want to expand on points of contention
and the mortality and perhaps permanence of a cell?
Well, there's so many, again, so many issues.
I hope you get to the differences now.
Uh, well, no, we could, we could go there. But I just wanted to celebrate this notion that, you know, the cell, in a sense, is immortal, because of course the whole point of this is to try and understand systems that endure over long periods of time.
And that's what I meant.
I didn't mean that death meant cessation.
I just meant there's a certain life cycle, an itinency in play.
So I thought that was nicely illustrated by the notion that the cell is, in a sense, unending.
But the mortal versus immortal distinction is more about divorcing the software from the substrate.
And there's a bit of a pushback there. If we want to look for differences in the respective arguments, then a lot of
people would say that all that housekeeping that goes on in terms of intracellular machinations
and self-organization, that just is basal computation at a particular level. And that
more macroscopic kinds of belief updating and processing and computation
supervene at a certain scale. And indeed that progression, in a sort of scale-invariant sense, is one manifestation of what you were talking about before: that biological things are cells of cells of cells of cells, and have increasingly higher kinds of scales and different kinds of computation.
But the idea is that the first principles apply at each and every level, and it's the same principle at each and every level.
And if you pursue that,
one has to ask why modern AI,
or particularly machine learning,
is so inefficient, dangerously inefficient.
And there's, I think, a first principle account of that,
and the account would go along the following lines,
that the only objective function that you need to explain existence
is the likelihood of you being you, your marginal likelihood. That
statistically is the model evidence. The model evidence or the log of that evidence can always
be written down as accuracy minus complexity. Therefore, to exist is to minimize complexity.
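As a rough gloss on that decomposition, in standard notation that is not the speakers' own: the log evidence is bounded from below by an accuracy term minus a complexity term, so a system that maximizes its evidence while staying accurate is forced to keep complexity down.

```latex
% Log evidence bounded by accuracy minus complexity (the usual ELBO form).
% Illustrative notation: o = observations, s = hidden states, q(s) = approximate posterior.
\ln p(o) \;\ge\;
  \underbrace{\mathbb{E}_{q(s)}\big[\ln p(o \mid s)\big]}_{\text{accuracy}}
  \;-\;
  \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s)\big]}_{\text{complexity}}
```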
Why is that important? Well, first of all, it means that that coarse-graining we were talking about earlier on is not a constraint; it is actually part of an existential imperative to coarse-grain in the right kind of way. The other reason it's important is that there is a thermodynamic link between the complexity, scored in terms of belief updating or processing or computation, and the thermodynamic cost. And if that's the case, it explains why the direction of travel in terms of machine learning is so inefficient. And what it tells you is there is a lower limit on the right way to do things: there is a lower limit on the thermodynamic efficiency and the informational, computational efficiency, specified by the Landauer limit. Why does modern, or current, machine learning not get anywhere close to that Landauer limit? You know, it is possibly two, three, four, if not six orders of magnitude above it, whereas the brain is actually much, much closer to that lower limit in terms of efficiency, both, I repeat, thermodynamic and information-theoretic kinds of efficiency.
And the answer is, I think, the von Neumann bottleneck.
It is the memory wall.
It is that people are trying to do computation in an immortal sense by running software without careful consideration of the substrate on which they're running or implementing that computation.
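For reference, the bound being invoked here is Landauer's principle, which sets a floor on the energy dissipated per bit of information erased at temperature T; the figure below is the standard textbook value, not a number quoted by either guest.

```latex
% Landauer limit: minimum energy to erase one bit at absolute temperature T.
E_{\min} = k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \mathrm{J} \quad \text{at } T = 300\ \mathrm{K}
```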
So, I would push back against the notion that it is even going to be possible, irrespective of whether it's the right direction of travel in terms of artificial intelligence research or, indeed, computer science research.
I would push back against the notion that artificial intelligence, read as the running of some immortal software on a von Neumann architecture, is the solution. I think that solution has to be more biomimetic, by which I mean it has to actually run on a substrate; it doesn't have to be a biological cell, but it certainly has to conform to the same principles of multi-scale self-organization of the most efficient sort, which just is the optimization of the marginal likelihood, or the evidence, for the states that that particular computing device or computer wants to be in.
So that's why I had a slight sort of hesitation about agreeing about the promise and potency of artificial intelligence.
I don't think that's the right way to go about it.
I would actually come back to your very initial argument, Joscha,
that it has to be much more biologically inspired.
It has to be much more biomimetic.
And part of that sort of inspiration is the motivation for looking at the distinction between the running of immortal software on von Neumann architectures, on NVIDIA chips, relative to a much more biomimetic approach, say photonics or neuromorphic computing. I think that really does matter in terms of getting us to a situation that could be described as a solution. I'm not sure there is a solution.
There's a solution to differential equations that has
a well-defined objective
function in my world. But certainly
getting useful artificial
intelligence in the same spirit that the FDA
is fit for purpose and doing a useful job.
Okay, let me push back against this. First off, I do agree that current AI is brutalist, in the sense that it is not making the best use of the available substrates and it's not building the best possible substrates; we have a number of paths. It's not that the stuff that we are building and using is not clever or so,
but it's a far cry from what biology seems to have discovered.
At the same time, there is relatively little funding going into AI
and there's relatively little energy consumption given what it gives you.
If academics hear that it costs $20 million to train a model,
they almost faint because they compare this with their departmental budget.
But if you would compare this with the cost of making a halfway decent AI movie in Hollywood, it's negligible.
So basically what goes into an AI project is far less than what goes into a Hollywood movie about AI. And if you compare this at this scale, if you look at the societal benefit of watching an AI movie or watching another blockbuster about the Titanic or so, it's not nothing, but I think that AI has the potential
to be dramatically more valuable than this.
And so I think that AI,
even though it might sound counterintuitive,
is not using a lot of energy
and it's not very well funded at the moment still
compared to what the value of it is.
Also, the leading labs do not believe
that the transformer is going to be the architecture
that we have in the end.
It just happens to be one of the very few things
that currently works at scale
that we have discovered that can actually be scaled up
in this brutalist way.
And it's already better at completing prompts than the average person.
And it's even better at writing code than many people. So it can translate between programming languages. You can write an algorithm down in English, or it can even help you to write an algorithm down in English and then translate it into the programming language of your choice.
And it's pretty good at it.
It can also, if it makes a mistake, and it often makes mistakes,
understand the compiler messages and then try to suggest fixes that often work.
In many ways, I've found that it's already better
than a lot of people I've worked with in corporate contexts,
both at writing press releases and at writing code.
It's not as good as the top level people in their field,
but it's quite surprising.
And so there is this interesting, open and tantalizing question,
can we scale this up by using a slightly better loss function,
by using slightly more compute, slightly better curated data?
And the systems can help with curating data
and coming up with different architectures and so on
to get this thing to be better at AI research than people.
If that gets better at AI research than people, then we can leave the rest to it and go to
the beach.
And it will come up with architectures and solutions that are much more efficient than
what we have come up with.
At the same time, there are many labs and teams that work on different hardware, that
work on different algorithms.
At the same time, the fact that you see so much news
about the transformer at this point
is not so much because everybody ignores everything else and doesn't work on it anymore, or has religious beliefs in the transformer being the only good thing.
It's because it's the thing that currently works so well.
And people are trying to work on all the other things,
but the thing that has the most economic impact
and the most utility happens to be the stuff that currently works.
And so this may cloud our perception that we think it's the von Neumann architecture
and so on, but in some sense, the GPU is no longer a von Neumann architecture.
We have many pipelines that work in parallel that take in smaller chunks of memory that
are more closely located to the local processor.
And while it's not built in the same way as the brain, where all the memory is directly
inside of the cell or its immediate vicinity, it is much closer to it.
And it's able to emulate this.
And if I look at the leading neuromorphic architectures, I can emulate them on a CPU
and it's not slower.
This is all just research stuff that is early stage. But for the most part we are not emulating neuromorphic architectures on our processors, which are mostly GPUs, largely because it doesn't give us that many benefits over the existing architectures and libraries. Or rather, the existing architectures and libraries work so well that people use this stuff for now, and it creates a local bubble until somebody builds a new stack that overtakes it. And I think this is all going to happen at some point. So I'm not that pessimistic about these effects. What I can see is that our computers can read text at a rate that is impossible for human beings, when you pass the data into a large language model for training.
And it's in some sense a radically Fristonian program,
next token prediction.
It's really trying to predict the future and minimize its surprise.
That's the core of this algorithm.
With this paradigm, it gets
to be coherent in the limit.
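To make that concrete, here is a minimal sketch of what next-token prediction as surprise minimization amounts to. The probabilities are invented for illustration; the only point is that the training loss is the model's average surprisal, the negative log probability it assigned to the token that actually came next.

```python
import math

# Toy illustration: a language model's training objective is its average "surprise",
# i.e. -log p(actual next token | context), averaged over the sequence.
# The probabilities below are invented for illustration only.

def surprisal(prob: float) -> float:
    """Surprise (in nats) when the token that actually occurred had been given probability `prob`."""
    return -math.log(prob)

# Probabilities a hypothetical model assigned to the token that actually came next, step by step.
probs_of_actual_next_tokens = [0.60, 0.05, 0.30, 0.90]

# Cross-entropy / average surprise: minimizing this is exactly "predicting the future
# and minimizing surprise".
loss = sum(surprisal(p) for p in probs_of_actual_next_tokens) / len(probs_of_actual_next_tokens)
print(f"average surprise (nats): {loss:.3f}")
```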
It leads to an interesting question.
Maybe this paradigm is not correct.
Maybe humans are doing something different. Maybe humans
are maximizing coherence
or consistency. And we have a slightly
different formal definition.
Life on Earth or agency in the universe
might be minimizing free energy in the limit.
But individual organisms are not able to figure that out.
And they do something that is only approximating it, but locally works much better and converges much faster.
So maybe there are different loss functions that we have yet to discover that are more biological, or more similar to what biological systems use. Also, one of the issues with biomimetic approaches is that it mostly means mimicking the things that scientists in biology and neuroscience have discovered so far. And this stuff all doesn't work.
The reason why Mike Levin doesn't call himself a neuroscientist, I suspect, but a
synthetic biologist is that he doesn't want to get in conflict
with the dogmatic approaches of some neuroscience,
which believes that computation stops at the neurons.
It's only neurons that are involved in computing things.
It could be, when you look at brains,
that they are basically telegraph networks of an organism,
that the neuron is a telegraph cell.
It's not unique in its ability to perform computation.
It's only unique in its ability to send the results of computation
using some kind of Morse code over long distances in the organism.
And when you want to understand how organisms compute and you only look at neurons, it might be like looking at the economy around 1900 and trying to understand it by only modeling the telegraph network.
You are going to learn fascinating things
by looking at an economy, looking at its telegraph
network and looking at the Morse code, but thinking that communication can only happen
in this Morse code rather than sending RNA molecules to your neighbors, right?
Why would you want to send spike trains if you can send strings?
Why would you want to perform such computations in this slow, awkward way?
Why would you want to translate information into the time domain if you can send it in parallel all at once?
So when we talk about biomimetic,
we often talk about emulating things that we only partially understand
and that don't actually work in a simulation.
There is no working connectome right now that you can turn into a computer simulation and that actually does what the brain is doing.
And it's not because computers don't have the power to run the ideas that neuroscientists have developed, but because neuroscientists haven't developed ideas that actually work.
It's not that neuroscientists are stupid or their ideas are not promising. They're just
incomplete at this point. We don't have complete models of brains that would work in AI. And the
reason why AI has to reinvent things from scratch is because it takes an engineering
perspective.
It thinks about what would nature have to do in order to approximate this kind of function?
And what's the most straightforward way to implement this and test this theory?
And this is this experimental engineering perspective that I suspect we might also need
in neuroscience.
Not in the sense that we translate things into von Neumann architecture and neuroscience,
but in the sense that we think about
what would nature have to do
in order to implement the necessary mathematics
to model reality.
All right, neuroscientists, your turn.
I hope no neuroscientists are watching this.
We managed to offend physicists and neuroscientists.
I largely agree entirely
with many of those things.
I'm just trying to remember
the ones that I can argue with.
I love this notion
that there's more money
going into Hollywood films
about AI than actually AI research.
I've never heard that before.
That's marvelous.
And also the point about sort of GPUs.
I mean, I think that's just a reflection of the,
if you like, natural selection in the AI community
of what I was trying to say before
about the move away from von Neumann architectures
to more mortal computing. I mean, if you talk to people doing in-memory processing, or processing in memory, as computer scientists, you know that that's where they'd like everybody to be. And that's what I meant really by that aspect of mortal computing: that the infrastructure and the buses and the message passing, having everything local, is speaking to the hardware implementation.
So I agree entirely that that is the direction of travel.
And I didn't want to imply that sort of GPUs were the wrong way of doing it.
Also, I agree, I wasn't really referring to transformer architectures
and as you say, they're just highly
expressive, very beautiful Bayesian filters
and are now currently being understood
as such.
As my friend Chris Buckley would say, people are starting now to Bayes-splain how a transformer works.
So what would I disagree with? Well, perhaps to... I noticed that you, on a number of occasions, were trying to identify the universal objective function: doing things better, it being really good because it could translate lots of languages, having greater utility. Do you generally think that there is some magic utility function out there that has yet to be discovered, and do you think that AI is going to discover that magic utility function?
And will that be the answer?
Well, I think that ultimately
utility relates to what
makes the system stable and self-sustaining.
So if you
look at any kind of agent, it depends on
what conditions can stabilize
that agent. And
this comes down very much to the way
in which you model reality, I think.
So it is about minimizing free energy in a way.
But if you look at our own lives
and we look for a sandwich or for love
or for a relationship
or for having the right partner to have children with
and so on, we're not thinking very much
about minimizing free energy.
And we perform very local functions
because we are only partial agents in a much larger system
that you could understand as the project of the cell
or as the project of building complexity
to survive against the increasing entropy in the universe.
And so basically we need to find sources of negentropy
and exploit them in a way that we can.
And this depends on the agent that we currently are.
This narrows down this broader notion of the search for free energy into more practical and applicable and narrow things, which can deviate locally very much from this pure, beautiful principle that should be discovered, or has to be discovered, and might be discovered in the context of AI.
I suspect that self-organizing systems need different algorithms than the GPUs that we're currently using for learning, because we cannot impose this global structure on them. So I suspect that there is a training algorithm that nature has discovered that is in plain sight and that we typically don't look at, and that's consciousness. I suspect the reason why every human being is conscious, and no human being is able to learn something without being conscious or to produce complex behavior without being conscious.
It's not so much because consciousness is super unique to humans
and evolved at the pinnacle of evolution and got bestowed on us and us alone.
We do not become conscious after the PhD.
We become conscious before we can drag a finger.
So I suspect that consciousness itself is an aspect,
or depending on how you define the term consciousness,
the core of a meta-learning algorithm
that allows the self-organization of information processing systems in nature.
It's a pretty radical notion.
It's a conjecture at this point.
I don't know whether that's true.
But there is this idea that you have a function that perceives itself in the act of perceiving. It's not conceptual, it's not cognitive; it's at the precognitive level, at the perceptual level, where you notice that you are noticing, but you don't have a concept of noticing yet. And out of this simple loop that keeps itself stable, that is controlling itself to remain stable and to remain an observer, where the observer is constituting itself as an observer, you build all the other functionality in your mind. You start imposing a general language on your substrate, a protocol that is distributed with words, so neurons become trainable and learn to speak the same language and behave in the same way, so that every part of the mind is able to talk to all the other parts of the mind.
And you can impose an organization that removes inconsistencies.
This is probably that thing that is one of the big differences
between how biological systems learn and control the world
and how artificial engineered systems do it.
Yeah, I agree entirely again. You've brought so many bright and interesting ideas, it's difficult to know what to comment upon. Just one thing which you said: when I pressed you on what is good, you basically said to survive. So I think that brings us again back
to this notion of mortality being at the end of the day, the possibility of eluding mortality,
being part of... It's got nothing to do with life as an individual, right? Human beings are built in
such a way that we have to be mortal. We are not designs that can adapt to changing circumstances.
If the atmosphere changes, if our food supply changes too much,
we need to build a different organism.
We need to have children that mutate and get selected for these new circumstances.
But in principle, intelligent design would be possible.
It's just not possible with the present architecture
because our minds are not complex enough
to understand the information processing of the cell well enough
to redesign the cell in situ.
And in principle, that's not something that would be impossible.
It's just outside of the scope of biological minds so far.
Right.
So individually, we have to be mortal.
But in principle, the cell can be immortal,
or there could be systems that go beyond the cell, that encompass it, that are a superset of what the cell is doing and what other information processing agents could be doing in nature, that basically make sustainability happen. And I think sustainability is a better notion in some sense than immortality.
So yeah, again, I agree entirely. I often look at the physics of self-organization as just a description of those things that have been successful in sustaining themselves.
And indeed, the free energy principle is just basically what would that look like and how would you write that down?
And of course, the free energy theorists would argue that the ultimate, the only objective function is a measure of that sustainability, which is the evidence that you occupy your characteristic states. So, you know, if properly deployed, you should be able to explain all of those aspects of behavior that characterize you and me in terms of self-evidencing or free energy minimization, such as choosing
the right partner, such as foraging on the internet, such as enjoying a good read.
And I think, and this is why I want to fully agree with you in terms of that makes that
kind of self-sustaining, self-organization only understandable in relation to some kind of selfhood.
And I'm using selfhood in the way I think you're using this basic notion of sentience.
And what would that mean from the point of view of the free energy principle? It would mean basically that you have an existential imperative to be curious. So if you just read the free energy as something called surprise, because you talked about predictability before, then if I am choosing how to act next, I am going to choose those actions that minimize my expected surprise or resolve my uncertainty. I'm going to act as if I'm a curious thing. And I bring that to the table because that is what is not an aspect of any of this artificial intelligence that you described before: the machine that can translate from one language to another language, the machine that can map from some natural text to a beautiful graphic. These are wonderful and beautiful creations, and they are extremely entertaining, but they are not curious.
And as such, they do not comply with the free energy principle, which means that they're not sustainable, which means that one has to ask what's going to happen to them.
Perhaps we might sustain them in the way that we do good art.
But from the point of view of that kind... perhaps I shouldn't use the word biomimetic, because perhaps that's too loaded. But the way of sustaining oneself through self-evidencing, I do not think, does admit an intelligent design of something that is not in and of itself curious as part of its
self-organization.
So where would you see curiosity as part of that?
Does the AI have to be curious?
Is there any aspect of the utility afforded by say reinforcement learning models or deep
RL or Bayesian RL?
Does that have curiosity under the hood as part of the objective function?
I really liked how you bring art into this discussion
as an example of something that might be similar to an AI system
that doesn't know what it's good for and only exists because we sustain it,
because it's not self-sustaining.
ChatGPT is not paying its own energy bills. It doesn't really care about them. It's just a system that is completing text at this point. It might, if you task it with that, figure out the mathematics at some point, but right now it doesn't. And as artists sometimes joke, it's a system that has fallen in love with the shape of the loss function rather than with what you can achieve. Art is about capturing conscious states because they are intrinsically important. Is this art, or can this be thrown away? It is art, it is important. And in this sense, art is the cuckoo child of life. It's not life itself. The artists are elves.
The living
organisms are orcs. They only
use art for status signaling or
for education or for ornamentation.
The artist is the one who thinks
magic is important. Building palaces
in our minds, showing them to each other.
That's what we do.
I'm much more an artist
at heart than I am a practical human being that maximizes utility and survival. But I think I also can see that this is an incomplete perspective. It means that I'm identifying with a part of my mind, with the part of my mind that loves to observe and reverend the aesthetics of what I observe.
statics of what I observe. I also realize that this is useful to society because it's basically able to hold down a particular corner of the larger hive mind that is necessary to be done,
right? If I was somebody who would only maximize utility, it would be a great CEO maybe, but I
would not be somebody who is able to tie down different parts of philosophy and see what I can see by combining
them or by looking at them through a shared lens. And so it's sometimes okay that we pursue things
without fully understanding what they're good for if we are part of a larger system that does that.
And our own mind is made out of lots of sub-behaviors that individually do not know
what we are about. And only together they complete each other to the point where we become a system that
actually understands the purpose of our own existence in the world to some degree.
And of course, that also goes across people.
Individually, we are incomplete.
And the reason why we have relationships to other people is because they complete us.
And this incompleteness that we have individually is not just an inadequacy.
It's specialization.
The more difficulty
we have to find
our place in the world,
the more incomplete we are.
But it often also means
we have more potential
to do something
in this area of specialization
that we are in.
And individually,
it might be harder
to find that right specialization.
But to accept that individual minds are incomplete in the way in which they're implemented in biology,
I think is an important insight. And this doesn't have to be the case for an AI agent, of course,
or for a godlike agent that holds down every fort, that is able to look at the world from every angle,
that holds all perspectives simultaneously.
Karl, did that answer your question about the curiosity of the AI?
Yes, and brings in the sort of primacy of the observer.
So now I'm intrigued by this notion of being incomplete.
Do you want to unpack that a little bit?
Yes.
First of all, Curt, thanks for pointing out that I didn't talk about curiosity.
Curiosity ties into this problem of exploration versus exploitation.
The point of curiosity is to explore the unknown, to resolve uncertainties,
to discover possibilities of what could also be and what we could also be doing.
And this is in competition to executing on what we already know.
And if you are in an unknown environment, or in a partially known environment, it's unclear how much curiosity you should have.
And nature seems to be solving this with diversity.
So you have agents that are more curious
and you have agents that are less curious.
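As a toy illustration of solving the explore-exploit trade-off with diversity (not anything the speakers specified), here is a population of bandit agents whose only difference is their exploration rate; which level of curiosity pays off depends on the assumed environment.

```python
import random

# Toy illustration, not the speakers' model: a population of epsilon-greedy agents with
# different levels of "curiosity" (exploration rate) foraging on a two-armed bandit.
# Which curiosity level does best depends on the environment, echoing the idea that
# nature handles the explore/exploit trade-off with diversity plus selection.

def run_agent(epsilon: float, reward_probs, steps: int = 500) -> float:
    estimates = [0.0, 0.0]   # current estimate of each arm's payoff
    counts = [0, 0]
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:                      # explore: try something at random
            arm = random.randrange(2)
        else:                                              # exploit: act on what we already know
            arm = max(range(2), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

random.seed(0)
for eps in (0.0, 0.05, 0.3):                               # a diverse "population" of curiosity levels
    print(f"curiosity (epsilon) = {eps:.2f} -> total reward {run_agent(eps, (0.4, 0.6)):.0f}")
```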
And depending on the current environment and niche, they are going to be adaptive or non-adaptive and be selected for or against. So I do think, of course, curiosity is super important, but it's also what kills the cat, right? The early worm is the one that gets eaten by the bird. And so curiosity is important. It's a good thing that we are curious, and it's very important that some of us are curious and retain this curiosity, so we can move and change and adapt. And it's one of the most important properties in a mind that I value: that it's curious and always open to interaction and to discovering ways to grow and become something else. But it's risky to be too curious, instead of just exploiting what you already know, acting on that, and looking for the simple solution to your problems. I think it's a big problem in science that we drive the curiosity out of people. The first step in thinking is curiosity: conjecture, trying things that may not work, and then you contract.
And the PhD seems to be a great filter that drives the curiosity out of people. And then after that, they're able to only solve problems using given methods. And they can do this to themselves, this violation of a curious mind. Whereas the existential questions somehow stop after graduation. So there seems to be some selection function against thinking that is happening, that is largely driving curiosity out of people, because they feel they can no longer afford it between grant proposals.
So, in a sense, yes, I would like to express how much I cherish curiosity and its importance, while pointing at the reason why not everybody is curious all the time, and too much of a good thing is also bad, right?
And the incompleteness now, Karl, do you want to go more into this?
Let me... there is so much here. I'll bring it back to the incompleteness in a second, yes. So I was just... no, again, I love that.
Just a moment, Joscha, would it be possible for you to expand on the early worm gets eaten by the bird? Because the phrase is that the early bird gets the worm, but that doesn't imply that the early worm gets eaten by the bird, because they could have different overlapping schedules, and in fact, it could be the late worm that gets eaten. And there is such a thing as a first mover advantage.
AltaVista got eaten by Google because, instead of giving people the search results they wanted, it gave them ads. And now Google has discovered that it's much better to be AltaVista, but AltaVista got eaten by Google; it was too early. Google has now given up on search; it instead believes in just giving you a mixture of ads that rhyme with your search term. Right, so you could say that AltaVista was the early worm. That was just me venting my frustration with Google. But I think that very often we find that the earliest attempts to do something cannot survive because the environment is not there yet. The pioneers are usually sacrificial. There is glory in being a pioneer; there is no glory in copying what worked for the pioneer. But there is very little upside in greatness.
Understood. Karl?
Well, again, you know, greatness, yeah, which is not good. Greatness is not good. We'll be coming back to the tumour again.
The art of good management, you know,
just riffing on your focus on art and just thinking,
yeah, what makes a good CEO?
Is it somebody who makes lots of money and is utilitarian?
Or does he have the art of good management
and considers the objective function,
the sustainability of his or her institution and all the people that work for it?
I think there are very different perspectives on what this objective function should be.
And I was trying to argue before that it can't be measured in terms of greatness or money or utility. It can only be measured in terms of sustainability.
The other thing I liked was curiosity.
So here's my little take on that.
Curiosity killed the cat.
I think that is exactly what was being implied by the importance of mortal computation.
And that in a sense we all die as a result of being curious after a sufficient amount of time.
And it can be no other way.
And I mean that in a very technical sense. So if you were talking to an aficionado of active inference, an application of the free energy principle, what they would say is that in acting, in dissolving the exploration-exploitation dilemma,
you have to put curiosity as an essential part of the right objective function
that underwrites our decisions and our choices and our actions,
simply in the sense that the expected surprise (the expected negative log evidence, or self-information) can always be written down as an expected information gain plus an expected utility or negative cost.
Which means that just the statistics of self-organization bake in curiosity in the sense that you will choose those actions that resolve uncertainty.
You choose those actions that have the greatest information gain.
So curiosity, I think, is a necessary part of existing.
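For readers who want the formula behind that claim: the expected free energy of a policy $\pi$ is commonly decomposed in the active inference literature along exactly these lines. The notation below follows standard presentations rather than anything quoted verbatim in this conversation.

$$
G(\pi) \;=\; -\underbrace{\mathbb{E}_{q(o,s\mid\pi)}\big[\ln q(s\mid o,\pi)-\ln q(s\mid\pi)\big]}_{\text{expected information gain (epistemic value)}}\;-\;\underbrace{\mathbb{E}_{q(o\mid\pi)}\big[\ln p(o\mid C)\big]}_{\text{expected utility (extrinsic value)}}
$$

So minimizing $G$ automatically favors both uncertainty-resolving (curious) behavior and preference-satisfying behavior.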
There are certainly things that exist in a sustainable sense, but my question was... well, I want to know more about this intriguing notion that we are incomplete, not least considered in the context of other things like us that constitute our lived, or at least sensed, world. But I just wanted to also ask: do you see curiosity as being necessary for that kind of consciousness that you associated with sentience before? Would it be possible to be conscious without being curious, acknowledging there are lots of things that are not curious? Viruses, I suspect, are not curious. Trees are probably not that curious; they don't plan their actions to resolve uncertainty. But there are certain things that are curious, things like you and me. So I'm just wondering whether there are different kinds of things, some of which are more elaborate in terms of the kind of self-evidencing that they evince in sustaining themselves autopoietically, using autocatalytic mechanisms, and other things that are less so.
Would that go hand in hand with having the kind of consciousness that you were talking
about that entails this self-modeling?
I think that a good team should also contain curiosity maximizers,
people that mostly are driven by curiosity.
And so you have a voice in your team,
and I love being that voice,
that is driven by finding out what could be.
And you also need people who focus on execution and who are not curious at all.
And in this way, I think we can be productively incomplete.
If you have somebody who is by nature not very curious,
but is able to accept the value of somebody who is,
and vice versa,
we can become specialists at being curious or at execution.
And when we can inform and advise each other, we can be much better than we could be individually
if we would try to do all those things simultaneously.
And in this sense, I believe that if you are a state-building species, you do benefit from
this kind of diversity.
If you're not an individual agent that has to do all the things simultaneously.
I don't know how curious trees are.
I'm somewhat agnostic with respect to this.
I suspect that they also need to reduce uncertainty.
And I don't know how smart trees can become.
When I look at means and motive of individual cells,
they can exchange messages to their neighbors, right?
They can also make this conditional.
Evolution is probably getting them to the point where they can learn.
So I don't see a way to stop a large multicellular organism that becomes old enough from becoming somewhat brain-like. But if it has no neurons, it cannot send information quickly over long distances. So it will take a very long time, compared to a brain or nervous system,
for a tree to become coherent about any observation.
It takes so much time to synchronize this information back and forth
that the tree would observe locally.
And as a result, I would expect that the mental activities of the tree,
if they exist, which I don't know,
to play out at such slow timescales that it's very hard for us to observe.
And so what does it look like if a tree was sentient?
How would it look different from what we already observe and know?
We notice that trees are communicating with other trees,
that they sometimes kill plants around them, that they make decisions about that.
We know that there are networks between fungi and trees
that seem to be sending information over longer distances in forests.
So trees can prepare an immune response to pests
that invade the forest from one end while they're sitting on another end.
And we observe all this, but we don't really think about the implication.
What is the limitation of the sentience of a forest?
I don't know what that is.
And I'm really undecided about it, but I don't see a way to instantly dismiss the idea that
trees could be quite curious and could actually, at some level, reason about the world.
But probably because they're so slow, the individual tree doesn't get much smarter than
a mouse because the amount of training data that the tree is able to process in its lifetime
at a similar resolution is going to be much lower.
They do live a long time.
Sorry, I'm just trying to defend.
I have many friends who you would enjoy talking to about that,
and you seem very informed in that sphere.
Our ancestors were convinced that trees could think.
Fairies are the spirits of trees,
and they move around in the forest using the internet of the forest
that has emerged over many generations of plants
that have learned to speak a shared protocol.
And I think it's a very intriguing idea.
We should at least consider it
as a hypothesis.
No, absolutely. There was a great BBC series where they focus on the secret life of plants, just by speeding up things 10 or 100 times, and they look very sentient when you do that.
Yes. Our ancestors said that one day in fairyland is seven years in human land.
Maybe this alludes to this temporal difference.
So about differences between you all, why don't we linger on consciousness?
And Carl, if you don't mind answering, what is consciousness, where is consciousness, and why is consciousness?
So in other words, where is it?
Is it in the brain?
Is it in the entire body? Is it an ill-defined question? What is it? Why do we have it? What is its function? And then we'll see where this compares and contrasts with Joscha's thinking.
Sometimes the story I will tell depends on who I am talking to.
But at its simplest, I find it easiest to think of consciousness as a process as opposed to a thing or a state,
and specifically a process of computation, if you like, or belief updating.
So I normally start thinking about questions of the kind you just asked me, but replacing consciousness with evolution.
So where is evolution?
What is evolution?
Why is evolution?
Then all of those questions I think are quite easy to answer.
Sometimes it's a stupid question.
Sometimes there's a very clear answer.
So where is consciousness?
Where is evolution?
Well, it is in the substrate that is evolving.
So, you know, where is consciousness? It would be in the processes to which you are ascribing consciousness. So I would say it is actually the computation, the information processing, the belief updating that you get at any level. And, just fully acknowledging Joscha's point, it doesn't have to be neurons. It could be mycelial networks, it could be intercellular communication, it could be electrical filaments, as long as there is a physical instantiation of a process that can be read as a kind of belief updating or processing. That, I think, is where consciousness would be found. Would that be sufficient to ascribe consciousness to me or to something else? I suspect not. I think you'd have to go a little bit further, and I suspect that Joscha wants to now articulate how much further, but there will be a focus on self-modelling.
So it's not just a process of inference.
It's actually inference under a kind of model of the world.
I would, you know, I'm quite happy committing to a generative model
as fully specified in terms of variational inference,
but we can relax that and just say some kind of model of the world that entails a certain aspect of selfhood to it so
that's what i would i would say i i put something else into the mixed mix as well that to be
conscious i suspect in the way that you're talking about, means you have to be an agent.
And to be an agent means that you have to be able to act.
And I would say more than just acting; more than acting, say, in the way that plants will act to broadcast information that enables them to mount an immune response to parasites, they have the capacity to plan. And that brings us back to the curiosity again, because we normally plan in order to resolve uncertainty. We normally plan our day and the way that we spend our time gathering information, gathering evidence for our models of the world, in a way that can only be described as, or looks as if it is, curious.
That's why I was so fixated on the art and creativity and curiosity that Joscha was talking about. I think that is probably a prerequisite for being conscious in the sense that Joscha would mean it.
But I don't know, perhaps we should ask him that.
May I ask you a clarifying question, Karl, about belief updating?
So if consciousness is associated with belief updating,
then let's say one is a computer, a classical computer.
You get updated in discrete steps,
whereas the belief updating that I imagine you're referring to is something more fuzzy or continuous.
So does that mean that the consciousness associated with a computer, if a computer could be conscious, is of a different sort?
How does that work?
I'm not sure.
I don't think there's any – in the same spirit that we don't want to overcommit to neurons doing mind work.
I don't think we need to commit to a continuous or discrete space-time formulation.
Again, that's an artificial divide between classical physics and quantum information-theoretic approaches. So I think the deeper question is: what properties must the computational process in a PC or a computer possess before you would be licensed to make the inference that it was conscious, and possibly even ascribe self-consciousness to it? And the way that I would articulate that would be that
you have to be able to describe everything that is observable
about that computing artifact as if,
or explain it in terms of it acting upon the world
in a way that suggests, or can be explained by the idea, that it has a model of itself engaging with that world. And furthermore, I would say that that model has to involve the consequences of its action, which is what I meant by being an agent. So it has to have a model, or act as if it has a model, a generative model, that could be a minimal self kind of model, but crucially entails the consequences of its own actions, so that it can plan, so that it can evince curious-like behavior.
So that could be done in silico. It could be done with a sort of clock and synchronous or asynchronous message passing of a discrete sort. It could be done in analog, it could be done with photonics, it could be done under a neuromorphic architecture. I don't think that really matters. I think it's more the nature of the implicit model under the hood that is accounting for its internal machinations, but more practically, in terms of what I can observe of that computer: its behavior, in the way that it goes and gathers information, or attends to certain things and doesn't attend to other things.
Okay, great. Joscha?
If we think about where consciousness is, we might be biased by our propensity to assign
identity to everything.
And identity does not apply to law-like things.
Gravity is not somewhere.
Gravity is a law, for instance, or combustion is not anywhere.
It's a law. It doesn't mean that it happens everywhere in the same way. It only happens when the conditions for the manifestation of the law are implemented. When they're realized in a certain region, then we can observe combustion happening. Combustion means that under certain conditions you will get an exothermic reaction, and gravity means that under certain conditions you will find that objects attract each other, and consciousness means that if you set up a system in a certain way, you will observe the following phenomena.
Consciousness, in this way, is a software state. It's a representational state. And software is not a thing either: the word processor that runs on your computer doesn't have an identity that would make it separate from, or the same as, the word processor that runs on another person's computer, because it's a law.
It says if you put the transistors into this state,
the following thing is going to happen.
So a software engineer is discovering a law,
a very, very specific law
that is tailored to a particular task and so on,
but it's manifested whenever we create
the preconditions for that law.
And so the software design is about
creating the preconditions for the manifestation
of a law of text processing, for instance,
that allows you to implement such a function
in the universe or discover how it is implemented.
But it's not because the software engineer
builds it into existence and it didn't exist before that.
That's not the case, right?
It always would work.
If somebody discovers this bit string in a random way
and it's the same bit string implemented
on the same architecture,
it would still perform the same function.
And in a sense, I think that consciousness is not separate in different people.
It's itself a mechanism, a principle that increases coherence in the mind.
It's an operator that seems to be increasing coherence.
At least that's the way I would look at it or frame it.
And as a result, it produces a sense of now, an island of coherence, in the potential models that our mind could have.
And I think it's responsible for this fact that we perceive ourselves as being inhabitants of an island of coherence in a chaotic world, this now, this island of nowness. And it's probably not the only solution for this thing. I think it's imaginable that there could be a hyper-consciousness that allows you to see multiple possibilities simultaneously rather than just one, as our consciousness does, or that offers us a now that is not three seconds long but hundreds of years long. In principle, that, I think, is conceivable. So maybe we will have systems at some point, or maybe we already have them, that have different consciousness-like structures that fulfill a similar role of islands of coherence, or intangible regions in the space of representations, that allow you to act on the universe.
But the way it seems to be implemented in myself, it's particularly in the brain,
because if I disrupt my brain, my consciousness ceases, whereas if I disrupt my body, it doesn't.
This doesn't mean that there are not feedback loops that are bi-directional into my body, or even outside of my body, that are crucial for some functionality that I observe as a content in my consciousness. But if you want to make me unconscious, you need to clobber my brain in some sense, nothing else. There's no other part of the universe that you can inhibit to make me unconscious. And that leads me to think that the way in which this law-like structure is implemented is, right now, for the system that is talking to you, on my neurons, on my brain, mostly.
Okay, any objections there, Karl?
No, not at all. I was just trying to remember: if Mark Solms were here, he'd tell you exactly the size of a really small region in the brainstem. I think it's less than four cubic millimeters. If it were ablated, you would immediately lose consciousness, like that.
It's a very, very specific part of your neural architecture that permits conscious processing.
But there are also very specific parts in my computer that are extremely small, that I could obliterate and ablate, and it would instantly lead to the cessation of all the interesting functions of my computer. And there are many such regions, basically crucial bottlenecks that enable a large-scale functionality.
In some sense, everything that would disrupt the formation of coherent patterns in my brain is sufficient to inhibit my consciousness.
And there are probably many such bottlenecks that provide the vulnerability.
So maybe the claustrum is crucial in providing some clock function
that is crucial for the formation of feedback loops in the brain
that give rise to the kind of patterns that we need.
Maybe there are several other such bottlenecks.
This doesn't mean that the functionality is exclusively implemented in this bottleneck.
No, I didn't mean to imply that the pineal gland is…
I didn't think that you would,
but I thought it might lead to a misunderstanding in the audience.
And I've heard famous neuroscientists point at such phenomena and say,
oh, maybe this is where consciousness happens.
I think this is almost a superstitious belief.
It's like saying, oh, there's this particular chip on my computer.
If I destroy it, the computer doesn't work anymore.
And maybe this was just quartz or something else
rather than the stuff that is providing the interesting functionality.
Or the lead to the battery, perhaps.
So which neuroscientist has said this, then?
I'm not naming names.
Email me afterwards.
Just to unpack, the reason that Mark would identify this is that it is exactly the cells of origin of projections that broadcast everywhere, which induce exactly this coherence you were talking about. These are the ascending modulatory neurotransmitter systems that are responsible for orchestrating that coherence that you were talking about. And I think that's very nice, because it also speaks to the ability of sort of consciousness-mimicking artifacts, whose abilities to mimic consciousness-like behavior rest upon this modulatory, attention-like mechanism.
And I'm thinking again of attention heads in transformers that play the same kind of role as the selection that these ascending neurotransmitter systems perform. So if you find yourself in conversation with Mark Solms, he would argue that the feeling of consciousness arises from equipping certain coherent, coordinated interactions, which may be regulated by the cerebellum or the claustrum, but it is that regulation that actually equips consciousness with the kind of qualitative feeling, at least in the way that Mark Solms addresses it. But just notice, reviewing what Joscha just said there: he's talking about consciousness equipping us with a sense of now, having an explicit aspect that could be, you know, I'm thinking of, not Husserl, but, well, actually Gerald Edelman's notion of the remembered present,
which could be the cognitive moment, 300 milliseconds, or it could be, if I was a tree, three years. I think it's a lovely notion, but the point being, we're talking about processes in time. We're not saying, at this instant, I am conscious, or consciousness is here. We're talking about a process that, by definition, has to unfold in time.
I think that's an important observation, which sometimes eludes, I think,
people debating about conscious states and conscious content,
not acknowledging that it is a process.
It is a process.
What was the example that Joshua mentioned?
Combustion.
Combustion is a process.
You can't be in a state of combustion.
And you could even argue that it's very difficult to localize at a certain level.
But the key thing is it's a process.
I have an open question.
And maybe you have a reflection on this. When we think about our own consciousness, we cannot know in principle, I think, just by introspection, whether we have multiple consciousnesses in our own mind, because we can only remember those conscious states that find their way into an integrated protocol that you can access from where you stand.
We know that there are some people who have a multiple personality disorder, in which the protocol itself gets splintered. As a result, they don't dream of being just one person. They dream of alternating, of being different people
that usually don't remember each other
because they don't have that shared protocol anymore.
Now, my own emotion and perception are generated outside of my personal self. My personal self is downstream from them. I am subjected to my perception and emotion; I have involuntary reactions to them. But to produce my percepts and my emotion, my mind needs intelligence, and it cannot be much more stupid than me. If my emotions would guide me in a way that is consistently more stupid than my reason and my reflection would be, I don't think I would work. So there is an interesting question: is there a secondary consciousness? Is the part of your mind that generates your world model and your self-assessment, your alignment to the world, itself conscious? So basically, do you share your brain with a second consciousness that has a separate protocol? Or is this a non-conscious process that is basically just dumb and doesn't know what it's doing? I mean conscious in the sense that it would be sentient in a way that's similar to my own sentience.
What do you think?
Curt, you should have a go at that one, and then I can think about it.
Well, something I had wondered about 10 years ago or so, and I don't recall the exact argument,
was that if it was the case that the graph in our brain (let's just reduce the neurons down to a graph) somehow produces consciousness, or is the same as consciousness, then if you were to remove one of those nodes, you would still have somewhat the same identity. Okay, so then does
that mean that we have pretty much an infinite amount of overlapping consciousnesses within our
brain? I don't recall the exact argument, but it was similar to this. And then there's something
related in philosophy called the binding problem. I'm uncertain what people who study multiple personality disorders have to say about the binding problem.
Like, is that the binding problem gone awry?
Can I just then pursue that notion of the binding in the context of the kind of thing that, or the way I am at the moment?
Yeah, I think that's a very compelling notion from the point of view of generative modeling. So I'm not answering now as a philosopher, but as somebody who may be tasked, for example, with building an artifact that would have a minimal kind of selfhood. The first thing you have to write down is different states of mind: so that I can be frightened, I can be embarrassed, I can be angry, I can be a father, I could be a football player. All the different ways that I could be, which are now conditioned upon the context in which I find myself.
And if that's part of the generative model, that then speaks to two things.
First of all, you have to recognize what state of mind you are in, given all the evidence at hand. If I want to jointly explain the racing heart that my interoceptive cues are providing me with in the interoceptive domain, with the stiffness of my muscles that my proprioception is equipping me with, and then to reconcile that with my visual exteroceptive input that I'm in a dark alley, and mnemonically I've never been here before, all of this sensory evidence might be quite easily explained by the simple hypothesis: I am frightened. And that in turn generates covert or mental actions, and possibly even overt autonomic actions and motor actions, that provide more evidence for the fact that I am frightened, in a William James sense: I'll have cardiac acceleration, I will have a motor response, a muscular response, appropriate for a fright and/or flight response. So just to actually be able to generate and recognize
emotional kinds of behavior, I would need to have a minimal kind of model that crucially obliged me to disambiguate between a series of different ways of being. So it's not so much "oh, I am me, that's a great hypothesis that explains everything"; to make it operationally important, I have to actually infer that I'm me in this kind of state of mind, in this kind of situation, and select the right state of mind to be in. And I think that really does speak to this sort of notion of multiple consciousnesses that cohabit your brain, or your generative model. And again, it speaks to this notion of, well, I'm just wondering whether that can be linked to this notion of incompleteness in a broader sense, that, you know, I'm constantly seeking the way in which I complete you in terms of dyadic interactions, which means I have to recognize what kind of person you expect me to be in this setting. And of course, I can only do that if I actually have an internal model that is about me. It's a model that actually has this attribute of selfhood, but specifically selfhood appropriate to this context or this person or this situation.
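A minimal sketch of the kind of inference just described, selecting the state of mind that best explains interoceptive, proprioceptive and exteroceptive evidence together. The states, cues and numbers are invented assumptions for illustration, not a model either speaker endorsed.

```python
import math

# Toy Bayesian selection among "states of mind" given multimodal evidence.
# All states, cues, priors and likelihoods are illustrative assumptions.

priors = {"calm": 0.6, "frightened": 0.2, "excited": 0.2}

# Likelihood of observing each cue under each hypothesized state of mind.
likelihoods = {
    "calm":       {"racing_heart": 0.05, "stiff_muscles": 0.10, "dark_unfamiliar_alley": 0.20},
    "frightened": {"racing_heart": 0.90, "stiff_muscles": 0.80, "dark_unfamiliar_alley": 0.70},
    "excited":    {"racing_heart": 0.80, "stiff_muscles": 0.20, "dark_unfamiliar_alley": 0.10},
}

evidence = ["racing_heart", "stiff_muscles", "dark_unfamiliar_alley"]

# Posterior over states is proportional to prior times the product of cue likelihoods
# (cues treated as conditionally independent for simplicity).
unnormalized = {s: priors[s] * math.prod(likelihoods[s][c] for c in evidence) for s in priors}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

print(posterior)                                  # "frightened" dominates once all cues are in play
print("selected state of mind:", max(posterior, key=posterior.get))
```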
Does that make sense?
Yeah. I have a question about that. You said that you have different identities that you then select from, to see which one is most appropriate for the circumstance, like a hypothesis. Is it the case, then, that you would say that there are multiple consciousnesses inside your brain? Or is it more like you have multiple potential consciousnesses, and then as soon as you select one, that makes it actual?
I don't know. I would imagine that you'd have to have another, deeper layer of your generative model that then recognizes the selection process. And indeed, this may sound fanciful, but there are, naturalized in terms of inference schemes, models of consciousness that actually do invoke this. I'm thinking here of the work of people like Lars Sandved-Smith, who explicitly have three levels, each level a deep generative model, very much like a sort of deep neural network. And the role of each level is to provide the right attention heads, or biasing, or precision, or contextualization for the processing that goes on below.
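A toy caricature of that three-level idea, assuming the simplest possible reading: each level does ordinary prediction-error updating, and the level above supplies the precision (gain) that weights the errors below it. This is an illustrative sketch, not the actual model in that line of work.

```python
# Toy three-level hierarchy: each level predicts the level below and updates on
# precision-weighted prediction errors; the level above supplies that precision (gain).
# A cartoon of "higher levels set attention/precision for lower levels", nothing more.

def update(belief: float, target: float, precision: float, lr: float = 0.1) -> float:
    """One step of precision-weighted prediction-error updating."""
    return belief + lr * precision * (target - belief)

sensory_input = 1.0
level1 = 0.0          # perceptual belief about the input
level2 = 0.0          # belief about level 1; also sets level 1's precision (attention)
level3 = 0.5          # "meta" level; sets level 2's precision

for step in range(30):
    precision1 = max(0.1, level2 + 0.5)    # level 2 gates how strongly level 1 updates
    precision2 = max(0.1, level3)          # level 3 gates how strongly level 2 updates
    level1 = update(level1, sensory_input, precision1)
    level2 = update(level2, level1, precision2)
    level3 = update(level3, abs(level1 - level2), 1.0, lr=0.05)

print(round(level1, 3), round(level2, 3), round(level3, 3))
```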
So it may well be that to get the kind of self-awareness, if I now read awareness as deploying mental action in the service of setting the precision, or the gating, of various communications or processing lower down in the model, it may well be that you do need another layer of sophistication or depth to your generative models, which I suspect trees don't have, but which certainly you have, or I can infer that you have, given I'm assuming that I have a similar conception of consciousness. But I'm not sure that really speaks to your question, or the one that Joscha was posing, about the unitary aspect of consciousness. And does that transcend an inference that would simply be biophysically instantiated, in exactly the same way that I can register visual motion in motion-sensitive area V5 in my posterior cortex? I don't know about that. I'll pass back to Joscha on that one.
Again, we need a very narrow definition,
a very tight definition of consciousness
to answer this question in a meaningful way. If we see consciousness as something that we only vaguely gesture at, and there could be multiple things covered by our understanding, then it becomes almost impossible to say something meaningful about it. So, for instance, it is conceivable that consciousness would be implemented by a small set of circuits in the brain, and that all the different contents that can experience themselves as conscious are repurposing this shared functionality, in the same way as we probably have only one language center, and this one language center can be used to articulate the ideas of many parts of our mind, using different sub-agents that basically interface with it.
You can also clearly have multiple selves interacting on your mind.
Your personal self is one possible self that you can have
that represents you as a person.
But there are some people who have God talking to them in their own mind. And I think what happens there is that people implement a self that is existing, and self-identifying as existing, across minds. Something that is not a model of the interests of the individual person, but a model of a collective agent that is implemented using the actions of the individual people. But of course, this collective mind that assumes the voice of God and talks to you in your own mind, so you can perceive it, is still implemented on your own mind and uses your circuitry. It's just that your circuitry is not yours. Your brain doesn't belong to your self. Selves are representations that themselves don't really exist in physics.
This thing that they experienced as perceiving, as interacting with the world is a dream.
It's a dream of what it would be like if you were a person that existed.
It's virtual.
So you can also dream being a God.
And this God might be so tightly implemented on your mind that it's able to use your language center
and you hear its voice talking to you.
But it's not more or less real
than you hearing your own voice talking to you in your mind.
It's just an implementation of a representation of agency in your mind.
One crucial difference between the way in which most AI systems are implemented right now and the way in which agency is implemented in our minds is that we usually write functions in AI that perform something like 100 steps in a neural network, for instance, and then give a result that makes a programmer happy. And that is it. But the time series predictions of our own mind are dynamic. They're not meant to solve a particular function,
but they're meant to track reality. So in a sense, our brain is more like a very complex resonator that tries to go into resonance with the world, so it creates a harmonic pattern that continuously tracks your sensory data with a minimal amount of effort. And this perspective is very different. It really means that your perception of the world cannot afford to deviate too much in its dynamics from the dynamics that you observe in your sensory apparatus, because otherwise future predictions become harder. You get out of sync. You always try to stay in sync with the world. And this is really crucial for the way in which we experience ourselves in the world.
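A tiny sketch of the contrast being drawn: rather than computing a one-shot answer, the system runs continuously and keeps its internal state in sync with an incoming signal by correcting its prediction error at every step. The sinusoidal "world" and the tracking gain are stand-ins chosen for illustration.

```python
import math

# Toy "stay in sync with the world" loop: an internal state continuously tracks a
# changing signal by correcting its own prediction error at every step, rather than
# computing a single answer and stopping. Signal and gain are illustrative only.

internal_state = 0.0
gain = 0.3                              # how strongly prediction errors pull the state along

for t in range(100):
    world = math.sin(0.1 * t)           # the changing sensory data
    prediction_error = world - internal_state
    internal_state += gain * prediction_error    # drift back into sync with the world
    if t % 25 == 0:
        print(f"t={t:3d}  world={world:+.2f}  internal={internal_state:+.2f}")
```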
As part of staying in sync, we discover our own self.
It's the missing link between volition and the outcomes of our action.
Our body would not be discoverable to us, and is not immediately given to us, if we didn't have this loop that ties us into the outer universe and into the stuff that we cannot control directly.
And for me, this question relates to, do we have only one consciousness?
It occurs to me that we would not know if we have multiple ones,
if they don't share memories.
If I were to set up an AI architecture,
where a part of the AI architecture is a model of an agent in the world,
another part of the AI architecture is a model of the infrastructure
that I need to maintain to make a model of the world
and such an agent in the world.
I would not tell the agent how this infrastructure works
because the agent might use that knowledge to game the architecture
and get a better outcome for itself, not the organism.
Imagine you could game your perception so you're always happy
no matter how much you're failing in the world.
From the perspective of the larger architecture,
that's not desirable.
So it would probably remain hidden from you
how you're implemented.
And to me, the question is interesting,
how sentient is this part of you that is not yourself?
Does it actually know what it is in real time?
I think that's a very interesting
and tempting philosophical question
and also a practical one.
Maybe there's a neuroscientific experiment
that would figure out if you have two clusters
of conscious experience.
I wouldn't know how to measure this,
but maybe IIT and global workspace theory and so on are wrong in more interesting ways than we currently think they are.
Because they assume that there is just one consciousness.
Of course, from the perspective of one consciousness, there is only one, because consciousness is in some sense by definition what's unified.
But if there are multiple clusters of unification that exist simultaneously, they wouldn't know each other directly. They could maybe observe each other, but maybe not in both directions.
Sorry, when you say consciousness is by definition one, is that akin to how you say software is one, software as such, but there are specific instantiations that function in their own way?
So basically, it's more like saying the universe is by definition only one, but you can have multiple universes. This means that we define universe in a particular way. Normally, universe is used in the sense of everything that feeds information back into a unified thing. We accept that parts of the universe get lost if they go outside of the distance where they can feed information back into you. But they're still, in the way in which we think about the universe, part of the universe.
The universe is everything that exists.
And a consciousness is everything that you can be conscious of, in that sense. So if there is stuff in you that you're not conscious of, it doesn't mean that it's not conscious. It would just be a separate consciousness, possibly. It could also be that it's not a consciousness. And so what I don't know is: is a brain structured in such a way that it can maintain only one consciousness at a time, or could there be multiple full-on consciousnesses where we just don't know about the other one? I perceive my consciousness as being focused on this content that is my personal self. I can have conscious states in which I'm not a personal self.
For instance, I can dream at night that there's stuff happening
and I'm conscious of that stuff happening, but there is no I.
There is no personal self.
There's just this reflexive attention that is interacting with the perceptual world.
In that state, I would say I can clearly show that consciousness
can exist without a
personal self, and the personal self is just a content. But it doesn't answer the question,
are there multiple consciousnesses interacting on my brain? One that is maintaining my reward
system, my motivational system, and my perception, and one that is maintaining my personal self.
Karl, now that we've spoken about the unity of consciousness,
dissociation, as well as even voices of God and God,
him or herself or itself,
what does your background in schizophrenia,
your perspective from there, have to say?
Yeah, well, that's a brilliant question and a leading question.
It's what I wanted to comment on.
So again, so many things have been unearthed here, from the basic point that all our
beliefs are fantasies. They are hypotheses, illusions, that are entrained by the sensorium in a way that
maintains some kind of synchrony between the inside and the outside. I think that's quite
a fundamental thing which we haven't spoken about very much, but I just want to fully endorse it.
And of course, that entrainment is sometimes referred to, I think, as entrained hallucination: perception being your hallucination that's just been entrained by sparse data, but with the data themselves
being actively sampled. So this loop that Joscha was referring to is, I think, an absolutely
crucial aspect of the whole sense-making, and indeed of sense-making as a self, as the cause or the author of my own sensations, in active sensing or active inference.
I think that's absolutely crucial.
On the question about multiple consciousnesses, before addressing the psychiatric perspective: I have a group of colleagues, including people like Maxwell Ramstead and Chris Fields, and particularly Chris Fields, who takes a quantum information theoretic view of this and brings to the table the notion of an irreducible Markov blanket in a computing graph, which crucially has some unique properties that mean that
it can only know of itself by acting on the outside, which here means other parts of the brain.
And acting, in this instance, just means setting the attention or the coordination, or contextualizing message passing elsewhere.
But the interesting notion, which is not unrelated to the pineal gland or Mark Solms' ascending neurotransmitter systems
that might do this kind of action, is that there could be more than one minimal or irreducible Markov blanket, which practically you can define experimentally, in principle,
by looking at connectivity of any kind.
But certainly if you have a sufficiently detailed connectome,
you can actually define the Markov blanket in terms of the directed connections
offered by external processes. And in principle you should apply a kind of integrated information theory, slightly nuanced, I think, in this instance,
to actually identify candidates for irreducible Markov blankets
that could be the thing that looks at the thing that's doing the thing,
and that may have different kinds of experiences.
There could be an irreducible Markov blanket in, say, the globus pallidus
that might be making sense of and acting upon the machinery that underwrites our motor behavior
and our plans and our choices, as opposed to something in the occipital lobe that might be necessary to ascribe them some minimal kind of consciousness.
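The graph-theoretic part of this is easy to state. Here is a minimal sketch, assuming a directed connectome represented as a networkx graph (the region names are made up): the Markov blanket of a set of internal nodes is its parents, its children, and the children's other parents, and one could scan candidate partitions for blankets that cannot be reduced further. This is only the combinatorial skeleton; the quantum-information and IIT-style scoring Karl mentions is not attempted here.

```python
# Sketch: compute the Markov blanket of a set of "internal" nodes in a
# directed graph (e.g. a connectome). Blanket = parents of internal nodes
# + children of internal nodes + other parents of those children,
# excluding the internal nodes themselves.
import networkx as nx

def markov_blanket(G: nx.DiGraph, internal: set) -> set:
    parents = {p for n in internal for p in G.predecessors(n)}
    children = {c for n in internal for c in G.successors(n)}
    coparents = {p for c in children for p in G.predecessors(c)}
    return (parents | children | coparents) - internal

# Toy directed connectome with made-up region names.
G = nx.DiGraph([
    ("thalamus", "V1"), ("V1", "V2"), ("V2", "V1"),
    ("V2", "parietal"), ("pallidum", "thalamus"), ("parietal", "pallidum"),
])

blanket = markov_blanket(G, {"V1", "V2"})
print(blanket)  # the nodes that shield {V1, V2} from the rest of the graph
```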
But let me return to this key question I was asking, because as Joscha was talking,
it did strike me, yes, that's exactly what goes wrong in schizophrenia.
You know, attribution of agency, delusions of control, hearing voices.
Again, coming back to this notion that this action-perception loop, ascribing the right agency to the outcomes of action, I think is a really important notion here.
And it can go horribly wrong.
You know, we spend the first years of our lives just working out I cause that and you cause that and working out what I can cause and what I can't cause and what mum causes and what other people cause.
Imagine that you lost that capacity.
Imagine that, you know, when you spoke, and this is Chris Frith's notion or expression for auditory hallucinations, for example, you weren't able to recognize that it was you who initiated that speech act, whether it's
actually articulated or sub-vocal. So just not being able to infer selfhood, in the sense of
ascribing agency to the sensory consequences of action, would be quite devastating.
And of course, you can think about reproducing these kinds of states with certain psychotomimetic or psychedelic drugs.
They really dissolve what we take for granted in terms of a coherent, unitary content of consciousness.
If you've ever had the synesthesia that can sometimes be introduced or induced by psychedelic drugs,
you will know what it's like to treasure the fact that color is seen and sound is heard.
It doesn't have to be like that.
It's just that for us, as sustained inferring processes, self-evidencing computing processes that sustain our sense-making in a coherent way, it looks as if colors are seen and sounds are heard. That's
how we make sense. It doesn't have to be like that, and you can experience the converse. You can start
to see sounds, you can hear colors, you can have horrible distortions of time perception; a moment
can actually feel as if it's nested. So all of these things that we take for granted in
terms of our sense-making are so fragile that, given the right psychopathology
or pathophysiology, technically a synaptopathy of the kind you might associate with things like Parkinson's disease and schizophrenia, possibly even neurotic disorders
of the affective or depressive or generalized anxiety sort, can all be understood as
basically a disintegration of this coherent synthesis, and, to use your word, the binding,
which means that I think the same principles could also be ascribed to consciousness itself.
You know, I'm not sure. I mean, depersonalization and derealization, I think,
are two conditions which I've never experienced,
but my understanding of subjective reports from people or patients who have experienced these
does, I think, really speak to this notion that there could be multiple consciousnesses.
And, of course, one will not be aware of the other and possibly not even able to infer any agency,
even if it was there. But also
there could be no consciousness. There are depersonalization syndromes
where you still sense, you still perceive, but it's not you. And there are some derealization syndromes where you are there, but all your sensorium is unreal; it's not
actually there anymore, you're not actually in the world. So you can get these horrible
disintegrations, dissociative, well, dissociative is a clinical term, you can get these
situations where everything we take for granted about the unitary aspect of our experienced world, and of us as experiencers, can so easily be dissolved in these conditions.
So I take Joscha's questions very, very seriously.
And so would people who suffer from these conditions.
I would distinguish between consciousness and self more closely than you seem to be doing just now.
I would say that consciousness coincides with the ability to dream, or it is the ability to dream even.
And in schizophrenia, the dream is spinning off from the tightly coupled model that allows you to track reality.
But when we dream at night, we are dissociated from our sensorium
and the brain is probably also dissociated
in many other ways.
And as a result, we get split off
from the ability, for instance,
to remember who we are,
in which city we live,
what our name is.
Very often in a dream,
even if it's a lucid dream
where we get some agency
over the contents of our dream,
we might not be able to reconstruct
our normal personality
and crucial aspects of our own self.
And in schizophrenia, I think this happens while we are awake,
which means we start to produce mental representations
that look real to us,
but that have no longer the property
that they are predicting what's going to happen
next in the world or much later.
And this loss of predictive power doesn't mean that they are now more of an
illusion than before.
The normal stuff that has predictive power is still a hallucination.
It's still a trance state when you perceive something as real, as long as you perceive
it as real.
It's only that some trance states are useful, in the sense that they have predictive power, that they're useful representations,
and others are not. And the ability to wake up from this notion that your representations are real is
what Michael Taft calls enlightenment. He's a meditation teacher with a pretty rational
approach to enlightenment, and basically, to him, enlightenment
is the state where you recognize all your mental representations as representations and become
aware of their representational nature. You basically realize that nothing that you can
perceive is real, because everything that you can perceive is representational content. And that's
something that is accessible to you via introspection, if you
build the necessary models for doing that. So when your mind gets to this model level where you
can construct a representation of how you're representing things, then you get some agency
over how you are interacting with your representations.
But I wouldn't say that somebody who is experiencing a schizophrenic episode,
or who derealizes or depersonalizes, is losing consciousness. They are losing their self,
they're losing coherence, they're losing the ability to track reality, and the interaction
between self and external world, and so on. But as long as they experience that happening, they're still conscious.
Does this make sense?
Certainly in terms of altered states of consciousness,
absolutely. Do you know Thomas Metzinger? Some of the things you've
just said there were very reminiscent
of his treatment of
say, phenomenal opacity
and the like. Is he somebody
that you have
discussed these things with, or subscribe to?
We discussed them relatively briefly only. We have met a few times since I left Germany, mostly online. And I like
Thomas a lot. I think that he is one of the few German philosophers worth reading right now,
but of course he's limited by being a philosopher, which means he stops before the point where he would make functional models that we could test, right?
So I think his concepts are sound, he does observe a lot of interesting things, and I guess a lot of it also through introspection.
But I think in order to understand consciousness,
we actually need to build testable theories.
And I suspect even if we cannot construct consciousness
as this strange loop, as Hofstadter calls it,
from scratch, which I don't know whether we can do that.
I'm agnostic with respect to that.
We can probably recreate the conditions
that lead to the discovery
of consciousness in the brain, which means we can initiate the search process that the brain is
initiating before it discovers it.
I was going to make the joke that we've offended physicists, neuroscientists, and philosophers.
Yeah, it's my thing. It's mostly retaliation, because I'm so offended by them.
Maybe I shouldn't.
Especially, I tried to study all these things and I got so little out of it.
I found that most of it is just pretense.
There's so little honest thinking going on about the condition that we are in.
It was very, very frustrating to me.
What field do you identify as being a part of, Joscha?
Computer scientist?
Cognitive scientist?
I like computer science most
because I've discovered as a student
that you can publish in computer science
at every stage of your career.
You can be a first semester student
and you can publish in computer science
because the criteria of validity are not human criteria.
The stuff either works or it doesn't.
Your proof either pans out or it doesn't. Whereas the criteria in philosophy are to a much larger
degree social criteria. So the more your peers influence the outcome of the review, and the
more your peers can deviate from the actual mission of your subject in their social dynamics,
the more hollow your field becomes. And so we noticed, for instance, in psychology,
we had this big replication crisis.
And the replication crisis in psychology
was something that had been anticipated
by a number of psychologists for many, many years,
who pointed out this curious fact
that psychology seems to be the only science
where you make a prediction at the beginning of your paper
and it always comes true in the end.
Enormous predictive power.
And they also pointed at all the ways in which p-hacking was accepted and legal, and how poorly the statistical
tools were understood. And then we had this replication crisis, and 15,000 studies got
invalidated, more or less, or are no longer reliable. And somebody pointed this out in a beautiful
text where they said essentially what's happening here is that we have an airplane crash
and you hear that 15,000 of your loved ones have died
and nobody even goes to the trouble to ID them
because nobody cares because nothing is changing
as a result of these invalidated studies, right?
What, an entire building has just toppled?
Nobody cares.
There's not actually a building.
There's just people talking.
And when this happens,
we have to be brutally honest,
I think, as a field.
Also, I hear very often that
AI has been inspired
by neuroscience and learned so much from it.
But when I look at the actual algorithms, the last
big influence was Hebbian learning.
And the other stuff is just people talking,
taking inspiration, taking insights, and so on. But it's not that there is actually a lot of stuff that you can take out of
the formalisms of people who studied the brain and directly translate it. I think that even what
Karl is doing is much more a result of information theory, and of physics that is congruent with
information theory, because it's thinking about similar phenomena using similar mathematical tools, and then expressing it with more Greek letters than
computer scientists are used to. But there is a big overlap here. And so I think the separation
between intellectual traditions and fields and disciplines is something that we should probably
overcome. We should also probably, in an age of AI, rethink the way in which we publish and
think. Is the paper actually the contribution that we want to make in the future in a time
where you can ask your LLM to generate the paper? Maybe it's the building block, the knowledge item,
the argument that is going to be the major contribution that the scientist or the team
has to make, the experiment. And then you have systems that automatically synthesize this
into answers to the questions that you have
when you want to do something in a particular kind of context.
But this will completely change the way in which we evaluate value
in the scientific institutions at the moment.
And nobody knows what this is going to look like.
Imagine we use an LLM to read a scientific paper
and we parse out all the
sources of the scientific paper from the paper and what the sources are meant to argue for.
And then we automatically read all the sources and check whether they actually say
what the paper is claiming the sources say. And we parse through the entire trees of a discipline
in this way until we get to first principles. What are we going to find? Which parts of science will hold up?
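A rough sketch of the pipeline Joscha is imagining, in Python. Everything here is hypothetical: `ask_llm` stands in for whatever model call is available, the claim extraction, answer parsing, and paper fetching are stubs, and walking "until we get to first principles" is approximated by a depth-limited recursion over the citation tree.

```python
# Hypothetical sketch of an LLM-driven citation audit:
# for each paper, extract (claim, cited source) pairs, fetch each source,
# ask the model whether the source actually supports the claim,
# and recurse down the citation tree to a fixed depth.

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to whatever LLM is available."""
    raise NotImplementedError

def fetch_paper_text(ref_id: str) -> str:
    """Stand-in for resolving a citation to full text (DOI lookup, etc.)."""
    raise NotImplementedError

def parse_pairs(llm_answer: str) -> list[tuple[str, str]]:
    """Stand-in parser turning the model's answer into (claim, ref_id) pairs."""
    raise NotImplementedError

def extract_claims_and_sources(paper_text: str) -> list[tuple[str, str]]:
    """Ask the model for every claim in the paper and the reference it cites."""
    answer = ask_llm(f"List every claim and the reference it cites:\n{paper_text}")
    return parse_pairs(answer)

def audit(paper_text: str, depth: int = 2) -> list[dict]:
    findings = []
    for claim, ref_id in extract_claims_and_sources(paper_text):
        source_text = fetch_paper_text(ref_id)
        verdict = ask_llm(
            "Does the following source actually support this claim?\n"
            f"CLAIM: {claim}\nSOURCE: {source_text}\nAnswer yes/no with a reason."
        )
        findings.append({"claim": claim, "source": ref_id, "verdict": verdict})
        if depth > 0:
            # Walk toward first principles by auditing the source itself.
            findings.extend(audit(source_text, depth - 1))
    return findings
```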
I think that we might be at the doorstep
of a choice between
a scientific revolution in which science becomes
radically honest and changes the way it works
or in which
it reveals itself as an employment program.
It's fake jobs for people
who couldn't find a job in the real economy
and basically get away with it because their peers let them get away with it.
And I'm trying to put this as pointedly as possible
and as bleakly as possible.
Science, given the incentives that it's working under
and the institutional rot that has set in
after decades of postmodernism,
is surprisingly good still, right?
There are so many good scientists in all the fields that I know.
But I also notice that many of the disciplines don't seem
to be making a lot of progress on the questions that we have. And many fields seem to be stuck.
And this doesn't seem to be just because all the low-hanging fruit has been picked.
But I think it's also because the way in which scientific institutions work has changed.
The notion of peer review probably didn't exist very much before the 1970s.
This idea that you get truth by looking at a peer-reviewed study
rather than asking a person who is able to read and write such studies.
That is new.
That is something that didn't exist for Einstein.
And so I don't know if this means that Einstein was an unscientific mind
that was only successful because he was working at the beginning of a discipline, or it was because he was thinking in a completely different paradigm.
But no matter what, I think that AI is going to have the potential to change the paradigm massively.
And I don't know which way, but I can't wait.
Can't wait.
So now that we're talking about computer scientists,
what do you make of the debacle at OpenAI?
For both Karl and Joscha, but directed to you, Joscha, first.
There's relatively little I can say because I don't actually know what the reason was
for the decision of the board to fire the CEO.
Firing the CEO is one of the very few moves
beyond providing advice, that the board can make.
I thought if the board makes such a decision
in a company in which many of the core employees
have been hired by the CEO
and have been working very closely
and happily with the CEO,
they will need to have a very solid case.
And there needs to be a lot of deliberation
among core engineers and players in the company
before such a decision is being made.
Apparently that has not been the case.
I have difficulty understanding
why people behaved in the way in which they did.
The outcome is that OpenAI is more unified than ever.
Basically, 95% of the employees agreed
that they are going to leave the company
if it doesn't reinstate the CEO.
It's almost unheard of.
This is like an Eastern European communist dictatorship
with fake elections.
But it was not fake.
It was basically people getting together overnight
and getting signatures for a decision that gravely impacts their professional careers.
Many of them are on visas that depend on continuous employment within the company.
So they took on actual risks for a time.
And I also suspect that a lot of the discussions that happened were bluffs, right? When the board said yes, they
want to reinstate him, but then waffled and came out with Emmett Shear, who is a pretty good person, but
it's not clear why the Twitch CEO would suddenly be the right person to lead OpenAI. So I don't
even know whether the decision was made because there were personal disagreements about communication
styles, or whether it was about the direction of the company, where members of the board felt that AI is being developed too quickly
and should be slowed down significantly.
And the strategy of Sam Altman, to run ChatGPT at a loss
and make up for this by speeding up the development and getting more capital in,
and thereby basically creating an AGI-or-bust strategy for the company, might not be the right strategy.
Also, the board members don't hold equity in the company.
So this is a situation where the outcome of their decision is somewhat divorced from their own material incentives.
And it is more aligned with the political ideals or ideas that they might have, or the goals that they have.
And again, not all of them are hardcore AI researchers.
Some of them are.
I don't really know what the particular discussions have been in there.
And of course, I have more intimate speculations from some discussions with people at OpenAI.
But I cannot disclose the speculations, of course.
And so at the moment, I can only summarize, in some sense, what's publicly known and what you
can read on Twitter. It's super exciting. It has kept us all awake for a few days.
It's a fascinating drama. And I'm somewhat frustrated by people saying, oh my god, this has
destroyed trust in OpenAI if decisions can be so erratic, as if OpenAI should be like a bureaucracy that doesn't move in a hundred
years. No, this is part of something that is super dynamic and is changing all the time. I think
that what the board should probably have seen is that the best possible outcome that it could
have achieved is that OpenAI is going to split.
That is, the best possible outcome, in the sense of the board
trying to fire Sam Altman to change the course of the company:
they would have created one of the largest competitors to OpenAI.
And so basically an anti-Anthropic on the other side of OpenAI
that is focusing more on accelerating AI research.
It would have been clear that many of the core team members would join it,
and it would destroy a lot of the equity that OpenAI currently possesses,
and it would take away large portions of OpenAI's largest customer, Microsoft.
So these are some observations.
So Sam is back now.
Yes.
And it was clear that it would happen, right?
This move by Satya Nadella, to say that Sam now works for Microsoft,
did not happen after negotiating a new organization for a month.
It happened in an afternoon, right?
After it was announced that the board now had another candidate
that they had secretly talked into taking on this role.
Microsoft basically set this up as a threat:
okay, they're all going to come to us.
Every OpenAI person who wants to can now join Microsoft in a dedicated autonomous unit,
with details that are yet to be announced,
but they're not going to be materially worse off, or research-wise worse off.
So this is a backstop that Microsoft had to implement
to prevent its stock from tumbling
on Monday morning.
So Microsoft moved very fast on Sunday
and decided we are going to make sure
that we are not going to create
a situation that is worse for us than it was
before. And this
creates enormous pressure on
OpenAI to basically decide
either we are going to be alone without
most of the core employees
and without our business model, but having succeeded in what the board wants, or we accept
the fact that the board has been defeated. And Sam Altman has not been entirely candid with
the board when he said last June that the board can just fire him if it disagrees with him.
Because that's obviously not the case, because the board at the moment, there's so much
buy-in from the employees
and the core investors and
customers of OpenAI, they cannot
just fire the CEO without very
good reason.
And Karl, what do you make of it, the whole
fiasco?
I was just listening with fascination.
I think you have
more than enough material to keep your viewers engaged.
Can I just ask this?
Is OpenAI going to be ingested by Microsoft or not then?
Do you think OpenAI is going to survive by itself?
Some people are joking that OpenAI's goal is to make Google obsolete,
to replace search by intelligence,
and Google is too slow to deliver a product to deal with this impending competition.
OpenAI has rapidly grown in the last few months.
It has hired a lot of people who are focusing on product
and customer relationships. The core research team
has been growing much more conservatively.
I think that
Microsoft was a natural
partner for OpenAI in this
regard because Microsoft is able to make
large investments and yet
is possibly not as
agile as Google. The risk that,
if OpenAI were to partner with Google as a
main customer, Google at
some point would just walk away with the core
technology and some of the core researchers, might be larger than with Microsoft. But I can only speculate there.
So the last question for this podcast is: how is it that you all prevent an existential crisis
from occurring, with all this talk of the self as an illusion, of our beliefs, which are so associated with our conception of ourselves, of mutable identities, and of competing, contradictory theories of a terrifying reality
being entertained?
Well, Karl.
I'm just trying to get underneath the question.
The kind of illusions I think we're talking about are
the stuff of the lived world and the experienced world, and they are not weak or facile facsimiles of reality.
These are the fantastic objects, belief structures, that constitute reality.
So literally, as I'm sure we've said before,
the brain, as a purveyor of these fantasies, these illusions,
is fantastic, literally, because it has the capacity to entertain these fantasies.
So I don't think there should be any worry about somehow not being accountable to reality. These are
fantastic objects that we have created, co-created, you could argue, given some of our conversations,
that constitute our reality.
I think that existential crisis is a good thing.
It basically means that you are getting to a point where you have a transition in front of you, where you basically realize that the current model is not working anymore, and
you need a new one.
And the existential crisis doesn't necessarily result in death.
It typically results in transformation into something that is more sustainable,
because it understands itself and its relationship to reality better.
The fact that we have existential questions, and that we want to have answers for them, is a good thing.
When I was young, I thought I don't want to understand how music actually works, because it would remove the magic. But the more I understood how music works, the more appreciative I became of deeper levels of magic.
And I think the same is true for our own minds. It's not like when we understand how it works, it loses its magic. It just removes the stupidity of superstition, and gives us something that shows itself in its beauty and brilliance, and allows us to make it much more sophisticated and intricate.
Thank you, Joscha. Thank you, Karl. There's a litany of points for myself, for the audience, for all of us to chew on
over the course of the next few days,
maybe even weeks.
Thank you.
Thank you, Curt, for bringing us together.
Karl, I really enjoyed this conversation with you.
It was brilliant.
I like that you think on your feet,
that we have this very deep interaction.
I found it interesting that we agree
on almost everything, right?
We might sometimes use different terminology,
but we seem to be looking at the same thing
from pretty much the same perspective.
And I also really enjoyed it.
It was a very, very engaging conversation.
And I love the way that you're not frightened to upset people
and tell things as they are.
I'm not looking for a job in academia.
Good. Neither am I. I still not looking for a job in academia. Good.
Neither am I.
I still don't have your balls.
Well done.
Have a wonderful rest of the day.
Thank you.
All right.
Take care.
Brilliant.
Thanks very much.
By the way,
if you would like me to expand
on this thesis
of multiple overlapping consciousnesses
that I had from a few years ago,
let me know
and I can look through my old notes.
All right.
That's a heavy note to end on.
You should know, Joscha has been on this podcast several times: one solo, another with Ben
Goertzel, another with John Vervaeke, another with Michael Levin, and one more with Donald
Hoffman.
Whereas Karl Friston has also been on several times: twice solo, another between Karl Friston
and Michael Levin, and another with Karl and Anna Lemke.
That one's coming up shortly.
The links to every podcast mentioned
will be in the description,
as well as the links to any of the articles
or books mentioned, as usual,
in every single Theories of Everything podcast
are in the description.
We take meticulous timestamps
and we take meticulous notes.
If you'd like to donate
because this channel has had a difficult time
monetizing with sponsors,
and sponsors are the main bread
and butter of YouTube channels, then there are three options. There's Patreon, which is a monthly
subscription. It's patreon.com slash curtjaimungal. Again, links are in the description. There's also
PayPal for one-time sums if you like. It's also a place where you can donate monthly. There's a
custom way of doing so, and the amount that goes to the creator,
aka me in this case, is greater on PayPal than on Patreon because PayPal takes less of a cut.
There's also cryptocurrency if you're more familiar with that. And the links to all of these are in the description. I'll say them aloud in case you're away from the screen.
It's tinyurl.com slash lowercase, all of this is lowercase, P-A-Y-P-A-L. So PayPal. But then uppercase toe,
T-O-E, uppercased. And then for crypto, it's tinyurl.com slash lowercase C-R-Y-P-T-O,
capital T-O-E. I just recommend you look to the description and click there in case you
enter in something wrong and there's someone that's trying to phish a different account.
Thank you. Thank you for your support. It helps TOE continue to run. It helps pay for the editor who's doing this right
now. I and my wife are extremely grateful for your support. We wouldn't be able to do this without
you. Thank you. The podcast is now concluded. Thank you for watching. If you haven't subscribed
or clicked that like button, now would be a great time to do so as each subscribe and like helps
YouTube push this content to more people
You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs,
disagree respectfully about theories, and build as a community our own TOEs. Links to both are in the description.
Also, I recently found out that external links count plenty toward the algorithm,
which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that
people are talking about this outside of YouTube, which in turn greatly aids the distribution on
YouTube as well. Last but not least, you should know that this podcast is on iTunes, it's on
Spotify, it's on whichever podcast catcher you use.
If you'd like to support more conversations like this, then do consider
visiting patreon.com slash curtjaimungal and donating with whatever you like. Again, it's
support from the sponsors and you that allows me to work on TOE full-time. You get early access to
ad-free audio episodes there as well. For instance, this episode was released a few days earlier.
Every dollar helps far more than you think. Either way, your viewership is generosity enough.