Theories of Everything with Curt Jaimungal - Your Brain Isn’t a Computer and That Changes Everything
Episode Date: September 22, 2025
The best way to cook just got better. Go to http://HelloFresh.com/THEORIESOFEVERYTHING10FM now to get 10 Free Meals + a Free Item for Life! *One per box with active subscription. Free meals applied as discount on first box, new subscribers only, varies by plan. Get 50% off Claude Pro, including access to Claude Code, at http://claude.ai/theoriesofeverything
For the first time on TOE, I sit down with professors Anil Seth and Michael Levin to test the brain-as-computer metaphor and whether algorithms can ever capture life/mind. Anil argues the "software vs. hardware" split is a blinding metaphor (consciousness may be bound to living substrate), while Michael counters that machines can tap the same platonic space biology does. We tour their radical lab work (xenobots, compositional agents, and interfaces that bind unlike parts) and probe psychophysics in strange new beings, "islands of awareness," and what Levin's bubble-sort "side quests" imply for reading LLM outputs. Anil brings information theory and Granger causality into the mix to rethink emergence and scale, not just computation. Along the way: alignment, agency, and how to ask better scientific questions. If you're into AI/consciousness, evolution without programming, or whether silicon could ever feel, this one's for you.
Timestamps:
- 00:00 - Anil Seth & Michael Levin: Islands of Consciousness & Xenobots
- 08:24 - Substrate Dependence: Why Biology Isn't Just 'Wetware'
- 13:13 - Beyond Algorithms: Do Machines Tap Into a 'Platonic Space'?
- 21:46 - The Ghost in the Algorithm: Emergent Agency in Bubble Sort
- 29:26 - Degeneracy: The Biological Principle AI Is Missing
- 36:34 - The Multiplicity of Agency: Are Your Cells Conscious?
- 43:24 - Unconscious Processing or Inaccessible Consciousness? The Split-Brain Problem
- 49:32 - The Ultimate Experiment to Decode Consciousness
- 57:31 - A Counter-Intuitive Discovery: Consciousness Is *Less* Emergent
- 1:03:39 - Psychedelics, LLMs, and the Frontiers of Surprise
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
I'm here for Bet Rivers Online Casino and Sportsbook with poker icon Phil Helmuth.
Thanks to Bet Rivers, I'm also a slots icon.
Great.
And a same game parlay icon.
Cool, cool.
A blackjack icon, a money line icon.
A roulette icon.
If you love games, Bet Rivers is the place to play and bet.
Bet Rivers.
Games on.
Must be 19 plus and present in Ontario.
Void or prohibited.
Terms and conditions apply.
Please play responsibly.
If you have questions or concerns about your gambling or someone close to you,
please contact ConnexOntario at 1-866-531-2600 to speak to an advisor free of charge.
make the additional really weird claim that I don't think algorithms capture everything we need
to know about life. We've forgotten that the idea of the brain as a computer is a metaphor and not
the thing itself. There's no bright line between what it does and what it is. That would not be
what I would have predicted. This is a monumental conversation. For the first time ever,
Professor Anil Seth and Professor Michael Levin are conversing and performing research
in real time, and we get to be the flies on the wall.
Anil says that the brain as a computer metaphor has blinded us for decades.
You can't extract the software from the substrate.
This means that silicon consciousness may be impossible,
though not because machines lack dualistic souls.
But wait, Michael disagrees.
He thinks that machines may be able to access the same platonic space
that biological systems tap into.
The magic isn't restricted to carbon.
Both professors are now building and studying xenobots together.
These are living robots made from skin cells that self-organize, exhibiting behaviors evolution never programmed.
Do they dream? Do they have preferences? Are they conscious?
On this episode of theories of everything, we explore their radical collaboration,
including questions like how split-brain patients may prove consciousness fragments and multiplies,
and the terrifying possibility that large language models are doing something entirely different.
from what their output suggests: tasks no programmer asked for, no steps in the code demand,
but perhaps where that quote-unquote magic lies.
Remember to hit that subscribe button if you like videos exploring fundamental reality.
All right, we're going to talk about aliens.
We're going to talk about cyborgs, modules in the brain, split-hemisphere patients,
and, if I'm not mistaken, unconscious processing.
We're going to get to all of that.
To set the stage, I'd like to know what's exciting you, both research-wise, currently,
something you're pursuing.
So Anil, why don't we start with you, please?
Well, thanks, Curt.
Two things, I guess.
One is a topic that seems to be exciting a lot of people these days,
which is the possibility of AI being conscious,
whether it's something that AI systems can have,
or whether, as I tend to think,
that it's something more bound up with our nature as living creatures.
And the other thing that's exciting me,
actually just came back to mind given the topics you listed, is the question of
islands of consciousness. So, you know, there's a lot of work on things like split brain
patients, patients with brain damage and so on. But a question that me and a couple of colleagues,
Tim Bayne and Marcello Massimini, have been wondering about is: are there isolated neural systems
that may have conscious experiences? And one candidate for this is called hemispherotomy,
which is a kind of neurosurgical operation where you have bits of the brain
detached, disconnected from all other parts of the brain,
but you still have neural activity.
These parts of the brain are still part of the living organism.
Are they islands of awareness?
So we've been exploring that theoretically and very recently
with certain evidence from brain imaging
of people following this neurosurgical operation.
Michael.
Yeah.
So a couple of things on the experimental front.
I'm really excited about some novel systems that we're setting up as compositional agents.
So putting together different living and non-living components using AI and other interfaces
to allow them to not just communicate with each other, but we hope form a kind of collective
intelligence.
And then we can ask some interesting questions about what kind of inner perspective this new
intelligence might have, just in general, you know, complementing the work we've done before
around distributing, and let's say separating out, the different pieces, like Anil was
just saying, of the brain and so on; and the flip side of that, which is putting together new
kinds of beings that haven't existed before and asking what their behavioral competencies
are, what their capacities are, what their goals are, their preferences, what do they
pay attention to these kinds of things.
And just in general, really digging into this idea of, for lack of a better word,
intrinsic motivations and asking in novel creatures that don't have the benefit of a lengthy
evolutionary history that presumably set some of their cognitive properties, where do
these things come from, you know, and how do we predict them, how do we recognize them?
I should have said, of course, like one of the things that's really exciting me is stuff
that Mike and I have been talking about together: some of the systems that he's building,
do they self-organize in ways which seem to obey the laws of psychophysics
and other sorts of situations where we might attribute things like intrinsic motivation
to evolved systems?
So there's a big question about, you know, are laws of perception, are they adapted to specific
environmental situations or are they somehow
intrinsic to
how biological neural system
self-organize. So we've been
bouncing ideas back and forth
to do experiments to explore
some of these questions,
which is very... Tell us about
some of these experiments.
They haven't been done yet.
The logic is
to take some simple
observations of phenomena that
are very widespread in
perception across
many evolved species, whether it's a human being, a mouse, or, I don't know, probably a
bacterium or something like that.
So there are things like Weber-Fechner laws, the idea that the perceived
intensity of the stimulus scales logarithmically with its actual magnitude.
Now, this is something that seems very, very general.
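The logarithmic scaling Anil describes here is the Weber-Fechner relationship, and it is easy to sketch numerically. This is a hypothetical illustration of the law itself, not of any experiment discussed in the episode:

```python
import math

def perceived_intensity(stimulus, k=1.0, threshold=1.0):
    """Fechner's law: perceived magnitude grows with the logarithm of
    the physical stimulus, measured relative to a detection threshold."""
    return k * math.log(stimulus / threshold)

# Doubling the stimulus adds the same perceptual increment whether you
# start from a weak stimulus or a strong one.
step_weak = perceived_intensity(2.0) - perceived_intensity(1.0)
step_strong = perceived_intensity(200.0) - perceived_intensity(100.0)
print(step_weak, step_strong)  # both equal ln(2) ≈ 0.693
```

That constant-ratio behavior is what makes the law a candidate "universal" one could probe for in systems with no evolutionary history.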
Is this something that we can look for evidence
for in some of these
completely
out of evolutionary context systems
of the kind that Mike is generating
and so that would be one example
there are a whole bunch of other examples we have
can we find things like susceptibility
to visual illusions or things like that
in these systems? So what are the simplest
kind of a very
general perceptual and learning phenomena
that we might be
able to examine
whether they happen
in these
systems which don't have
this straightforward evolutionary trajectory.
I think that's the basic project.
Yeah. Yeah, exactly.
And these comprise
xenobots, anthrobots,
and even
weirder constructs that we can start
to put together by
building technological interfaces
between radically different kinds of
beings that allow them to, you know, sort of
like an artificial corpus
callosum that takes two different things
and tries to bind them
into one novel collective thing
and seeing whether some of their properties
and their behavioral competencies
match the things that people have been studying
as, as Anil said, psychophysics
and all the things out of a behavioral handbook, basically.
Yeah, and this kind of relates to this idea
of AI and consciousness and so on
for the following reasons, which is why I think it's very exciting.
I think we might have different but overlapping reasons
for our interest in this.
For me, there's this sort of assumption
by people who talk a lot about consciousness and AI,
that the biological stuff we're made of doesn't really matter.
It's just there to implement algorithms.
Silicon could do just as well.
And I tend to think differently.
And I think we both do.
And so these are ways of just looking at
what's the dynamic and functional potential
that's sort of intrinsic to the stuff we're made of.
That provides the basis for our cognitive,
our perceptual, ultimately our conscious abilities
and properties. So these are experiments aimed at getting at that: what's just there that evolution
can then make use of? Anil, can you make the elevator pitch for people who are already familiar
with the argument that, look, the processing that's going on in our brain is just processing,
it could potentially be translated to a computer. If consciousness is similarly information processing,
then we have something that's quote unquote substrate independent. So you're making the claim that it's not so
clear, maybe there is a dependence to the substrate. Can you make that case? And then also,
Michael, I know that you have several questions you'd like to ask Anil, and feel free at any point
to. I'll try and make it; that is the case I'm trying to make. It's quite
tricky because it goes against such a deeply embedded assumption that the brain is basically
a computer made of meat and the things that it does, the only things that it does that are
relevant for things like cognition and consciousness are computations, or forms of information
processing. If you start from that perspective, it leads you to this idea that there is this
substrate independence. And what that means to unpack that, that just means that the stuff we're
made of doesn't really matter. It's the computations that matter. And if the substrate can
implement the computations, then fine. These two sort of ideas go together because one of the whole
motivations for a computational view is substrate
independence. Turing's formulation of computation
is couched in terms of it being independent of any particular
material. So the elevator pitch really is that we've
kind of forgotten that the idea of the brain as a computer is
a metaphor and not the thing itself. It's a
sort of marriage of mathematical convenience. And the
closer you look at real biological systems, as Mike's work
beautifully exemplifies, the less that this idea of substrate
independence makes any real sense. There's no bright line
in a brain or a biological system in general between what you might
call the mindware and the wetware, between what it does and what it is.
And if there's no clear way to separate in a system what it does from what it is,
then it's not so clear that one should think
that computation is all that matters.
Because for computation to be all that matters,
you kind of have to have the sharp separation
between the software and the hardware,
between what it does and what it is.
And if you can't do that,
there's less reason to think that computation is what matters,
and if there's less reason to think that,
then there's equally less reason to think that you could implement
what matters in a substrate independent way on something else.
You can, of course, still use computers to simulate a brain
in whatever level of detail you want.
But that's neither here nor there.
It's a very useful thing to do.
We would both do this.
We do this all the time.
But you can simulate anything using a computer.
That's one reason computers are great.
But that doesn't mean you will instantiate the phenomenon.
You only do that if computation really is all that matters.
And I think that's very much up for, I think it's been a very deeply held assumption,
but I think it's likely wrong.
Yeah.
I mean, I agree with everything that Anil said,
but I take it in a slightly different direction.
So I think it's critical to remember that, yeah,
everything we think about as computation is a metaphor.
It's a formal model.
And so we have to ask ourselves,
what does this model help us do and what is it hiding?
In other words, what is it preventing us from seeing?
And I agree that this metaphor does not capture everything that we need to know about,
and need to use for technology and
so on, about life. I think the computational paradigm and the notion of algorithms and so on does
not capture everything we need to know about life. But I make the additional really weird claim
is that I don't think it captures everything we need to know about machines either. In other words,
we tend to think, at least the people I meet tend to think, that we have this set of
metaphors that are for machines and their algorithms. And they don't really apply to biology. Certainly
people say, well, they don't apply to me, I'm creative and whatever else, you know. But there is a corner of the universe that is boring, mechanical. It only does what the algorithm says it should do. And for those kinds of things, these metaphors are perfect. They capture everything there is to know. So I agree with Anil on the first part, but I doubt the second part. I think that a lot of what we have in our theories of computation is a pretty reasonable theory of what I call the front end. I think most of what
we deal with are actually thin clients in a certain sense.
Their interfaces to something much deeper, which we can call the platonic space.
I don't love the name, but I say it that way because then at least the mathematicians know
what I'm talking about.
But I think that even, and we have some work already published and more work coming soon
in the next few months on this, showing that, yeah, the standard way of looking at algorithms
don't even tell the story of so-called machines.
And so whatever it is (and I have guesses, but of course we don't know), whatever it is that allows mind to come through biological interfaces and not be captured by these formal models, I think these other systems that we call machines, and certainly cyborgs and hybrots and so on, I think they get some of the magic too.
It's not going to be like us.
It's going to be different, but I don't think they escape these ingressions either.
This is why I find
Mike's work so interesting, because it's
provocative in this direction.
I think that summarizes what I said very well,
which is that I think we underestimate
the richness of biological systems
if you force them into the
what's often called the machine metaphor
by which we really mean
that all that matters is a sort of
Turing computation algorithm thing,
but I think it's
it is equally true that we limit
our imagination about what machines might be as well by doing this.
And there's a whole kind of alternative history of AI,
which was really grounded in 20th-century cybernetics.
It was much more about dynamical systems,
attractors, feedback systems,
all things you can still simulate computationally,
but which fundamentally don't arise from
the algorithmic way of thinking about things.
There are also really interesting mathematical properties like emergence and so on, which I think
applied to these can both help us understand, but also might be design principles for machines of
various kinds, which again don't really fit into an algorithmic view of things.
So as Mike beautifully shows, even something we think of as canonically algorithmic,
correct me if I'm wrong, like the bubble sort stuff.
This is an algorithm that anyone in computer science 101 learns to code, to sort things into a particular order, and it has really interesting emergent properties that other things can be built on top of.
So, yeah, for me there's this nice iterative back and forth where we can learn to think of both biology and machines
differently, and of course that might give us richer metaphors
through which we can use one lens to understand the other.
Would you say then that we have the idea of machine
and that a Turing machine is a strict subset of that idea of machine?
I mean, the Turing machine is an abstraction, right?
Turing machines were never sort of supposed to exist, you know, as things.
They've got infinite tape and things like that.
So you've got, you know, a Turing machine, the idea is you, in one sense,
you're mapping a bunch of numbers onto another bunch of numbers.
And then the universal Turing machine does this through this moving head and an infinite tape.
It was never really supposed to exist as a physical machine.
I think that's where part of the problem has sort of come from.
But an algorithm in that sense, yeah, I think that's a subset.
When you realize a Turing machine, that's a subset
of possible machines.
Yes, when you realize a Turing machine,
it will be a subset of all possible machines
just because it's a particular Turing machine.
But when you realize a universal Turing machine as well,
that's also a subset of possible machines.
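To make the abstraction concrete: a Turing machine in the formalism is just a transition table over states and tape symbols, independent of any material realization. Here is a minimal sketch, a toy unary incrementer of my own construction, not anything from the conversation:

```python
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """Tiny Turing machine simulator: `rules` maps (state, symbol) to
    (new_symbol, move, new_state); the machine stops in state 'halt'.
    The tape is sparse, with '_' as the blank symbol."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Append one '1' to a unary number: scan right over the 1s, then
# write a 1 on the first blank cell and halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine("111", rules))  # → 1111
```

Any physical device running this table is one particular realized machine; the infinite tape of the abstraction is exactly what no physical realization can have.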
So if you don't mind spelling out to the audience
the idea of hypercomputation,
and would you then say that biological creatures or cells
or what have you,
are doing something that is hypercomputational?
And feel free to take this in a different direction
Michael as well, if you like.
Yeah, I mean, I'd not say that, but would you give me what
you're thinking of when you use the term hypercomputation?
I've heard it used to imply different things.
So if something can solve the halting problem, it would be an example of a hypercomputer.
Something that can decide problems that a Turing machine or a universal Turing machine can't.
Right.
So sort of super Turing in some sense.
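The standard argument that no such halting oracle can exist is a short self-reference construction. A sketch of the classic diagonalization (my own toy framing, with the infinite loop replaced by a sentinel value so the code can actually run):

```python
def halts(program, arg):
    """A hypothetical halting oracle. No correct one can exist; this
    stub arbitrarily answers True for everything, and the construction
    below shows why any fixed answer must be wrong somewhere."""
    return True

def contrarian(program):
    """Does the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        return "loops forever"  # stand-in for entering an infinite loop
    return "halts"

# Whatever halts() answers about contrarian(contrarian), contrarian
# does the opposite, so the oracle is wrong on this very input.
prediction = halts(contrarian, contrarian)  # oracle says: it halts
behavior = contrarian(contrarian)           # actual behavior: "loops forever"
print(prediction, behavior)
```

A machine that could nonetheless answer correctly for all programs would, by definition, be doing something beyond any Turing machine.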
That's one way in which machines can be
non-Turing, or can escape the Turing world.
But I think there are many other systems
that are just not captured in this way.
They don't have to be based on the halting problem.
Strictly, anything that is stochastic,
anything that is continuous,
is beyond this world of strict universal Turing computation.
There are kinds of extensions that
try to go there. But there are also functions that things do that necessarily involve particular
material substrates. So take something like metabolism. And metabolism is not mapping some
range of numbers, whether they're continuous or random, to another number. It involves actual
transformation of a particular kind of substance into another kind of substance. That's just
non-Turing in what is a fairly trivial way,
but that kind of thing might be very important for particular classes of machines
or systems, whether they're biological or not.
So I think there's different spaces of what you might call non-turing processes.
Only some of these are these kinds of hypercomputation,
halting-problem-solving things where you might say you've got some sort
of fancy quantum stuff going on.
But I think
people differ about that.
Opinions differ, right?
I mean, there are some people that would say
that they're actually
unless you're talking
about this hypercomputation,
the sense you've mentioned that everything else
is sort of a relatively feasible
extension of Turing as is.
So there's definitely debate in that area.
I've been using Claude to prep for theories
of everything episodes,
and it's fundamentally changed how I approach topics.
When I'm exploring, say, gauge theory or consciousness prior to interviewing someone like
Roger Penrose, I demand of an LLM that it can actually engage with the mathematics and philosophy
and physics, et cetera, at the technical level that these conversations require.
I also like how extremely fast it is to answer.
I like Claude's personality.
I like its intelligence.
I also use Claude Code on a daily basis and have been for the past few months.
It's a game changer for developers.
It works directly in your terminal
and understands your entire code base
and handles complex engineering tasks.
I also used Claude, the web version, live,
during this podcast with Eva Miranda here.
Oh my God, this is fantastic.
That's actually a feature named artifacts,
and none of the other LLM providers
have something that even comes close to it.
And no coding is required.
I just describe what I want,
and it spits out what I'm looking for.
It's the interactive version of,
hey, you put words to what I was thinking.
Instead, this one's more like, you put implementation
to what I was thinking.
That's extremely powerful.
Use promo code THEORIESOFEVERYTHING,
all one word, capitalized.
Ready to tackle bigger problems?
Sign up for Claude today
and get 50% off Claude Pro,
which includes access to Claude Code,
when you use my link,
claude.ai/theoriesofeverything.
I would go in a slightly different direction and emphasize
something that does not lean on quantum mechanics,
does not lean on stochasticity,
and does not lean on hypertoring or anything like that.
And also let's even step back from the living,
because conventional living things are so complex
that you can always find more mechanisms and so on.
I want to look at an extremely minimal
model. And the reason that we chose this was precisely because it's such a minimal model. I wanted to sort of maximize the shock value of this thing for our intuitions. And this is the work that my student, Taining Zhang, and Adam Goldstein and I did on sorting algorithms, which is what Anil mentioned. And there are a couple more things like it coming in the next few months. The sorting algorithms are things like bubble sort, selection sort, these kinds of things. CS students in CS 101 have been
studying these for, I don't know, 60 years probably.
And no one, as far as I can tell, no one noticed what we noticed because the assumption
has always been, this thing does what we asked it to do.
And a lot of what I'm trying to emphasize is specifically running against that
assumption that, yeah, it sorts the numbers, right?
But if you back off from this assumption that all it does is what the steps of the
algorithm ask it to do, then you find some new
things. And, you know, computer scientists are well aware of
emergent complexity, emergent unpredictability, you know,
cellular automata do all kinds of funky things and some of the
rules are chaotic and all this kind of stuff. That's not what I'm
talking about. I'm not talking about emergent complexity, unpredictability,
or even perverse instantiation, which artificial-life people
find all the time. I'm talking about things that any behavioral
scientist would recognize as within their domain, if
you didn't tell them that this came from a deterministic algorithm. And so
I can go into details if you want, but a couple of things are
salient here. What these algorithms are also doing, while they're
sorting your numbers, are a couple of interesting things. I call them side quests because
there are no steps in the algorithm asking them to do this.
In fact, if you tried to write an algorithm to force them to do it, it would be a whole bunch of
extra work, which is actually quite interesting because I think we're getting free compute here.
That's a whole other thing that I think is very testable. It's a crazy, and it's a nice
testable prediction because it's so weird and unexpected. They are doing some other things
that are not directly related to what you've asked them to do. That's really important because
it means that these language models, for example, when we say AI nowadays, a lot of people think
language models. People tend to assume that the thing that the language model talks
about is some kind of clue as to its inner nature, right? And people say, well, you know, my GPT
said to me that it was conscious or wasn't conscious or whatever. My point is the thing you force it
to do may have zero to do with what's actually going on. Now, in biological systems, that's not true,
because evolution, I think, works really hard to make sure that the signs and the communications
that we produce are related to our internal states and things like that. So in biology, those things are
tied closely together. But I think we've
disconnected them. And what we are now making are
things that
look like they're talking and whatever.
And they are. But I'm not sure
any of those things are at all a guide to what's going on
inside. And if a dumb bubble sort,
which is six lines of code, fully deterministic,
nowhere to hide, six lines of code,
if that thing is doing things that we did not
expect and we did not ask it to do, and by that
I mean there are no steps in the algorithm to do
what it's doing, then I don't, you know, who knows what these language models are doing,
but I'm pretty sure that just watching the language output is not a really good guide to
what's happening. I think we have to go back to the very beginning and we have to apply the
kinds of things that Anil was talking about, which is basic behavioral testing in various spaces.
I think our imagination is really poor at this. I think we have to just be really
creative as far as asking what is this thing actually doing, specifically in the spaces between
the algorithm. Because it's a little bit like, and this is a crazy
analogy that I came up with the other day, this notion of steganography.
So in steganography, you take a piece of data, let's say it's an image, it's a JPEG, and
it looks like whatever it looks like. There are bits within that image that, if you were to change
those bits, it wouldn't look any different, right?
There's some degrees of freedom in there that you can move things around.
And the image would still look the same.
And so what people do is they hide information in there.
And maybe it's your signature that you are the one who took the picture or maybe it's
a code because you're a spy, whatever it is.
You hide information in there.
But the iron rule is you can't mess up the primary picture.
You can sneak stuff in to the degrees of freedom, but you can't mess with the primary
picture or the primary data pattern because then it will be obvious that something's there.
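The analogy maps directly onto least-significant-bit steganography, where the hidden message rides in bits that barely affect the visible picture. A minimal sketch over toy grayscale bytes (no real image-format handling):

```python
def hide(pixels, message_bits):
    """Write each message bit into the least significant bit of a
    pixel byte; the visible picture is essentially unchanged."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def reveal(pixels, n_bits):
    """Read the hidden bits back out of the low-order bits."""
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 201, 198, 197, 202, 199, 203, 200]  # toy grayscale pixels
secret = [1, 0, 1, 1, 0, 1, 0, 0]
stego = hide(image, secret)

# The primary picture barely changes (each byte moves by at most 1)...
print([abs(a - b) for a, b in zip(image, stego)])
# ...but the message rides along in the spare degrees of freedom.
print(reveal(stego, 8))
```

The "iron rule" from the transcript is visible in the code: the hiding step is only allowed to touch the bits the primary picture doesn't constrain.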
I kind of have a feeling that this is what's going
on, not just with computer algorithms, but with everything. There is the primary thing it's
supposed to do, and anything else that it gets to do has to be compatible with that primary thing. It
isn't magic. You can't break the laws of physics. You can't go against the algorithm, right? You're
not doing things that the algorithm forbids. But it turns out, I think, that
there are these weird, like, empty spaces between the algorithm where you can do stuff.
And I mean, doesn't that describe to some extent our existence? You know, you have a certain
a bit of time in this world.
You have to be consistent with those laws of physics.
Eventually, your physical body gets ground down to entropy and whatever.
But until then, you can do some cool stuff that isn't forbidden by the laws of physics,
nor is it prescribed by those laws, I don't think.
And so you get this, you get, this is what I think is really interesting about these things.
And the algorithm itself, to the extent that it has to do the algorithm, that limits what else
it can do.
In a sense, what it's doing is in spite of the algorithm, not because of it.
And so I agree with Anil here.
I'm not a computationalist.
I don't think anything is conscious because of the algorithm.
If anything, I think the mental properties it has are in spite of the thing we force it to do.
And so one thing, and I'll stop here, one thing which I'm sort of most proud of in that paper that I think was kind of cool,
is that we figured out a way to let off the pressure on the algorithm a little bit, to see what would happen.
And the way you do that now, how would you do that?
It has to follow the algorithm.
How could you possibly let off the pressure?
What we did was we allowed duplicate numbers in the sort.
And what that allows you to do is, you still have all the fives that will have to end up before the sixes, and those have to go before the sevens, and so on.
But how you arrange those is now not really constrained by the algorithm.
You don't touch it.
You don't change the algorithm.
You just allow multiple repeats within the list. And what did we see?
We saw that the crazy thing it was doing, which I call clustering.
I could tell you what that is, but it doesn't matter.
It went up even higher than when we didn't let it do that.
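A toy illustration of that slack, in the spirit of the duplicates experiment but not a reproduction of the actual study: classic bubble sort only swaps when the left key is strictly greater, so it never touches equal neighbors. If we additionally let equal keys swap at random, the output is still perfectly sorted, yet the arrangement among tagged duplicates varies from run to run, degrees of freedom the sorting requirement leaves open:

```python
import random

def loose_bubble_sort(items, rng):
    """Bubble sort on (key, tag) pairs that is also free to swap equal
    keys at random: the sortedness constraint is always satisfied, but
    the order among duplicates is left unconstrained by the algorithm."""
    a = list(items)
    for _ in range(len(a)):
        for i in range(len(a) - 1):
            must_swap = a[i][0] > a[i + 1][0]
            may_swap = a[i][0] == a[i + 1][0] and rng.random() < 0.5
            if must_swap or may_swap:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

data = [(5, "a"), (3, "x"), (5, "b"), (5, "c"), (3, "y")]
run1 = loose_bubble_sort(data, random.Random(1))
run2 = loose_bubble_sort(data, random.Random(2))

# The keys always come out sorted...
print([k for k, _ in run1])  # → [3, 3, 5, 5, 5]
# ...while the tags among equal keys can differ between runs.
print([t for _, t in run1], [t for _, t in run2])
```

Swapping equal keys never violates the sort, which is exactly the steganography point: whatever happens in those slots is invisible to the primary task.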
And so I really think, and this comes back to the AI thing, I really think that it's a lot like raising kids in the following sense.
To the extent that you force them to do specific things, you squelch down on the intrinsic motivation.
Some kid that's forced to be in a class all day, you're not going to get to see what else he would be doing otherwise.
Maybe it's out playing soccer, who knows what it would be.
And so to the extent that we force these things to do specific things, we're reducing what else they might do.
And that's what we need to develop is the tools to detect and to facilitate this intrinsic motivation.
And then you get into alignment and all of that.
That reminds me a lot of when I was doing my postdoc 20-odd years ago, coming across,
well, being told by my mentor at the time, Gerald Edelman, about the distinction between redundancy and degeneracy.
I think this is very apposite here.
So, you know, in engineering, people often talk about having redundancy
within a system. So if a system is designed to do something, to follow some steps in an
algorithm, well, then you might want multiple copies in case something goes wrong, you have a backup.
But the backup is doing the same thing. It's redundant in that sense. Biological systems don't
seem to be like that. They exhibit degeneracy rather than redundancy. That is, they may have
multiple ways of doing the same thing in context A, but in context B, these multiple ways of
doing the same thing, now do different things.
So this is hinting at the same thing that although it looks like they're doing the same
thing, there's actually some spaces in between somewhere that you won't see unless you
look in different contexts.
Otherwise, you'll only see the same process that might look like an algorithm.
And it's that degeneracy that gives biological systems their kind of open-endedness, their
ability to adapt to novel situations and so on.
It might be related to what Mike is calling intrinsic motivation: you have to have
some kind of degeneracy, rather than redundancy, in systems.
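The redundancy/degeneracy distinction can be stated as a toy contrast (the function names and "contexts" here are invented purely for illustration): a redundant backup behaves identically to the original in every context, while a degenerate partner matches it in one context and reveals a different capability in another.

```python
def pump_ions(context):
    # The "original" component: same behaviour in every context.
    return "ion transport"

def redundant_backup(context):
    # Redundancy: an exact spare, interchangeable in all contexts.
    return "ion transport"

def degenerate_partner(context):
    # Degeneracy: same outcome as pump_ions in context A, but a
    # *different* capability shows up in context B.
    return "ion transport" if context == "A" else "signal relay"

# In context A the two look indistinguishable from the original...
assert redundant_backup("A") == pump_ions("A")
assert degenerate_partner("A") == pump_ions("A")
# ...but only a different context reveals the hidden difference.
assert redundant_backup("B") == pump_ions("B")
assert degenerate_partner("B") != pump_ions("B")
```

This is the sense in which "looking in different contexts" is needed: observed in context A alone, redundancy and degeneracy are indistinguishable.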
I mean, what's interesting to me is that, with the exception of a few die-hard, you know, reductionists, materialists, whatever, people are generally pretty willing to grant living things that, right?
And they're okay with saying that, you know, living things, especially brainy living things, get to do some of those things.
But what I'm now finding is that people get very upset when I suggest that the same thing might be true all the way down.
It seems to be very important that we have this distinction: no, that's the dead matter and the mere machines, we are special, we can do this thing.
And my point is not, I'm not trying to mechanize living things. I'm going in the opposite direction.
I'm saying there's not less mind than you think there is, I think there's more.
But actually, especially, you know, kind of organicist thinkers who really resist the mechanization of life and all this stuff, they really get really upset at this last part, because I suppose we're not as special if it goes all the way down.
I'm like, I'm not sure. I think, you know, there's some kind of a scarcity mindset, that there's just not enough mind for, you know, for all of us.
Maybe. I think it might be that there's still this worry that even if you're, like say
bubble sort again, bubble sort is still implemented on like standard computers, right? So one way
of potentially misunderstanding what you're saying is you're then basically allowing computational
functionalism by the back door again in some ways, by saying, look, you know, an algorithm like
bubble sort actually has all the things that you need, or it has so much more going on than one might think.
So let's not be too quick to rule out substrate independent algorithms as sufficient
for other things that might seem otherwise hard to explain in biology.
I think you're right, and I think people could, but that would be a misinterpretation
of what I'm saying. I am not saying that it's doing that because of the algorithm, right?
So the standard computationalist theory is you are conscious because your algorithm is doing workspace theory, whatever it's doing, right?
That's why you're conscious. I'm saying the exact opposite.
I'm saying that even in something as stripped down and forced as this stupid algorithm, there are still spaces there through which whatever this is, whatever this magic is that we're talking about, is able to squeeze in even there.
There are minimal versions of it that will shine through even there.
And if you provide a more, a different interface,
and I don't want to just say more complex because I don't think it's just complexity.
Maybe it's materials.
Maybe it's some other stuff.
But if you provide better interfaces such as living materials, well, then sure, you'll get way more.
But the stuff seeps into even the most constrained systems, I think.
So let's get to aliens, Michael.
I don't know what this is, you know, people email me sometimes asking to talk to my alien
handlers, there's that, but I don't know, I don't know anything about aliens other than to say
that it seems implausible to me, not being an expert on the exobiology, whatever, it seems
implausible to me that the only kind of life is the life that we're familiar with here or
cognition. I expect that elsewhere in the universe there will be extremely,
extremely alien forms of mind that are not carbon, not, and I mean, I can get even
weirder, but not the kinds of things that we're used to here. I think our imagination
is terrible for that kind of thing. I mean, sci-fi does okay sometimes, but yeah, I just,
you know, anything that's, anything that's tied to the specifics of life on Earth, I think,
is almost certainly too narrow as a criterion for these kinds of things.
I mean, I always go back to the Fermi paradox: like, you know, where is everybody?
And it always worries me, because I also think it's very implausible that we're the only example of life,
but then the evidence for intelligent life that has been able to broadcast structured energy out into the universe seems lacking.
Where the hell is everybody? So of course one conclusion from this is that life might be very
prevalent, in many places,
certainly not only here,
but that it's quite difficult
to get life to the stage
where it lasts long enough
to persist and become cognitively sophisticated.
I have no idea,
and I find that existentially concerning
and just a great
sort of shaker of the snow globe
for reminding us that we really need to take care of our own
planet and civilization first, because it might not be very common to get to the kinds of things
we are, even if it's exotic in a different way somewhere else. I think the universe is much more
likely to be filled with gray goo than, you know, Mike Levins with eight legs in octopod form.
So, Anil, if I were to take your cells and put them into a dish, some would form xenobots,
some would die, most probably would die, and some may just wander about or what have you.
Have you become multiple agents at that point, or were you always multiple agents pretending to be one?
I don't think pretending to be one.
I think it's an excellent question.
Whether you can have multiple kinds of coarse-grainings of agency simultaneously, I think, is quite interesting.
I don't see why not, in a sense.
I think there can be suborganismic levels of agency in my constituents.
But there's something sort of enslaving of these finer grains of description in things like organisms.
Things pull together, the parts pull together as a whole, in a way that doesn't happen if you dissociate me into my constituent cells.
So I don't, yeah, I don't see a contradiction between cells having agency
and an organism having agency and a society having agency and perhaps a global society
having some kind of agency. These things can all coexist and have a reality simultaneously.
But they will affect each other. So agency at a macro level will probably constrain the agency
that's available at the micro levels. And you have a book on consciousness, which I'll place
on screen and link in the description right now.
So, you've probably heard of the identity theory of consciousness.
My understanding is that it just says mental states are simply the same as physical states.
They're not caused by, they're not emergent from, they're just identical to them.
What do you make of that?
I'm curious for both of you.
Well, I don't think it's a theory.
I think things like identity theory are more metaphysical positions than actual
theories. And for me, I like to wear metaphysics lightly, if at all. I don't think you get
very far. To say that a mental state is, or a conscious state, is identical to a physical
state, I mean, who knows? In some sense, it might be trivially true and in another sense it might
be absolutely completely wrong. But what I do think is it's not, it doesn't give you anything in
particular to do or anywhere to go.
So instead of sort of arguing about whether theories like that are correct or incorrect,
I prefer to ask whether they're useful or not useful.
And I don't think identity theories are that useful.
I'm broadly a pragmatic materialist, which is to say that I'm pretty convinced that
conscious states have something to do with the stuff, with physical stuff.
And we certainly know empirically there are correlations and causal relations:
if you do something to the brain, something will happen in conscious experience, at least
in human beings. Who knows, you know, maybe consciousness is more general than biological systems.
But I think pragmatic materialism is a productively useful thing to do, and we can go about the business
of trying to explain properties of consciousness in terms of properties of biological systems,
and we'll see how far we get.
And then we have to face the question of
what are the properties of biological systems
that give us explanatory, predictive grip
on properties of consciousness.
For a bunch of people, the assumption is
it's just the computations to bring us back
to early part of the conversation.
But there could be many other things
that actually give us explanatory
and predictive grip about consciousness
that aren't the computations.
And that's the view that I'm interested in in exploring.
And we'll see whether it's useful or not.
Yeah, I agree with that.
I mean, I think it's less a theory than it is, say, a linguistic claim.
It's just, you know, you're just saying something about the definitions.
I find it kind of unhelpful.
It's a little bit like saying, you know: airline ticket prices, what are those?
Well, let's associate them with some physical states. And what explains them? Well, you know, the constants at the
beginning of the Big Bang plus some randomness. In a certain sense that's kind of true, but in another sense,
how much insight are you going to get as far as why these prices are going up or down if you have
this view? I think probably zero. And so, like Anil, I'm interested in metaphors, and I think that
all these things are metaphors, but I'm interested in metaphors that help us discover new things,
I don't think that works in biology for the sort of cognitive non-consciousness specific things,
and I don't see it helping here either.
You know how in physics we like to reduce something that's complex into something more elegant and more efficient,
something simpler, for instance.
It turns out you can do that with your dinner.
Hello Fresh sends you exactly the ingredients you need.
They're pre-measured, they're pre-portioned,
so you don't have to deal with this superposition of,
do I have too much cilantro versus not enough cilantro
or whatever you have collapsing in your kitchen every night?
They've just done their largest menu refresh yet,
with literally 100 different recipes each week.
There are vegetarian options, calorie-smart ones,
protein-heavy ones, my personal favorite.
There are also new steak and seafood options
at no extra cost. All the meals take approximately 15 minutes to a half hour. They've actually
tripled their seafood offerings recently and added more veggie-packed recipes. Each one has
two or more vegetables now. I've been using it myself. It's the one experiment in my life that's
always yielded reproducible results. It's delicious. It's easy. It saves me from having to live on
just black coffee while editing episodes at 1 a.m. Personally, my favorite part is that it's an
activity that I can do with my wife.
Thus, it not only serves as dinner, but as a bonding exercise.
The best way to cook just got better.
Go to Hellofresh.com slash Theories of Everything 10 FM to get 10 free meals plus a free
item for life.
That's one per box with active subscription.
Free meals applied as discounts on the first box, new subscribers only, varies by plan.
That's Hellofresh.com slash theories of everything 10 FM to get
10 free meals plus a free item for life.
Mike, you said you had some questions about split hemisphere patients for Anil.
Well, I don't know what's, okay, it's not so much specifically about split hemisphere patients,
but I guess it's the thing I brought up in email.
I was just wondering, I was listening to a talk, I forget whose talk it is,
and somebody was saying, look, there are all these unconscious processes during reading,
during driving, whatever, there are all these unconscious processes.
And I was just curious what you think about that because it seems to me critical to say conscious to whom.
In other words, they might well be unconscious to the main left hemisphere, whatever, that's verbally reporting this and saying, wow, I drove all the way from home to my office and I wasn't conscious of any of that.
And so you say, okay, great, there's this unconscious stuff.
Well, it's not conscious to you, but neither are my conscious states conscious to you.
So how do we know, right?
So all of these things, the subsystems of the brain and mind that execute them,
how do we know they don't have an experience they can't verbalize?
So I was just curious about that because it seems like it's just a foregone assumption.
And it seems like really begging the question if we just assume that because you don't feel them, they don't, you know, have any experience.
And it's the same.
And the reason it's of interest to me is that that's what people say about our body organs too.
Right.
So I make the claim that, for the exact same reasons, the four or five reasons that we give each other
the benefit of the doubt about consciousness, you should take that seriously about your various body organs.
And people say, well, I don't feel my liver being conscious. Of course not. You don't feel me being conscious either.
So I was just wondering what you think about that. Yeah, just for the people listening, we started this nice
dialogue by email just a couple of days ago. So I think it raises some really important questions
about how we use the words. Unfortunately, I do think it's a little bit linguistic here. We talk about
the conscious and the unconscious.
And of course, they mean different things in different contexts.
So when it comes to, let's say, split hemisphere patients, the intuition is there are two separate
conscious agents.
Just only one of them has the ability to behaviorally report through language, what it's
experiencing.
But it's partly because each hemisphere has kind of the full complement of resources that one
might think of as necessary that this becomes, you know, a sort of plausible position. Then there's
other uses of conscious versus unconscious. A lot of the history
of consciousness science is trying to contrast conscious from unconscious perception. So, you know,
you'll show an image and somebody will say, yeah, I see it. And then you mask it in some way,
manipulate it in some way. And people say, oh, no, I didn't see it, but you can still see parts
of the brain responding. And the logic is: well, the contrast that you get there, between when
something was consciously seen and when the same image or the same sound was not consciously experienced,
if you look at the difference in the brain, that difference has to do with consciousness.
That's the whole strategy of looking for the neural correlates of consciousness. But then
you might ask, well, how do you know that the unconscious perception was in fact unconscious?
It may have just been unconscious to the subject as a whole,
but there may have been an inaccessible conscious experience happening.
So I think this is logically, perfectly possible.
But then you have this whole, well, how do you then link that not only to a brute correlation,
but you have to then come up with some theoretical reason,
and that will depend on your theory.
A theory like global workspace might say,
okay, look, the reason that the conscious perception was
reportably conscious was because it engaged
the global workspace. And the theory
is that things are
conscious in virtue of accessing this
global workspace. So you have some
sort of theoretical reason for saying
that the unconscious is in fact unconscious.
But of course then you risk
a little bit of circularity, right?
That your
evidence for global workspace is based
on the theoretical explanation that
makes one conscious and the other not
conscious. So you have to have multiple
sources of evidence.
All this to say, it's a very good question, and it came up in the thing I mentioned right
at the beginning.
We have these hemispherotomy patients whose parts of their brain are completely disconnected.
So they, by definition, can't respond to things.
They can't generate any response.
They're sort of the opposite of language models in this sense, right?
They can't give us any persuasive behavioral evidence because they're not connected to anything.
Yet, you know, they are part of a brain that was at one point conscious.
And all that's happened, really, in the limit, I mean, there's other things going on, they're damaged as well,
is that they've been disconnected. So plausibly, at least for me,
they're more likely to be conscious but inaccessible, much more likely a priori than a language
model is to be. And so we have to find indirect ways of trying to assess the likelihood of
consciousness in these very disconnected hemispheres. And to cut a long story short, very short,
I know you've got to go in a second, Mike.
When we look at EEG, and this is work done with colleagues at the University of Milan,
it looks like these isolated hemispheres are in states of very, very deep sleep.
So we see slow waves, very prominent slow waves, steep spectral exponents and so on.
But how do we know that that is in fact unconscious?
Because there are a few examples of human beings where we actually see slow waves
at the same time as consciousness, in DMT, for instance, and things like that.
So it's iterative. It's very hard to be definitive, and it's an excellent question.
I think we don't know, you know, until we start looking at systems radically different
from a psychology undergraduate looking at a monitor, which we still do, and that's very useful.
But we have to look at these other things as well.
We don't really know what assumptions we're making when we interpret the data.
If we just look for the car keys where the light is,
you might miss the bigger picture.
Now, Mike, before you get going: suppose I gave you both unlimited resources to design some experiment, what would you create?
Boy, well, fundamentally, I think we need an environment, a closed-loop environment,
in which to exercise all kinds of, the xenobots and anthrobots are just the beginning,
there are so much weirder things that we're looking into,
such that we might be able to recognize new kinds of cognitive preferences, goals, competencies, whatever,
to which we were otherwise blind.
I mean, you could imagine making this thing enormously rich and complex.
Anil.
Well, I mean, obviously, funding Mike would be the thing to do.
But, you know, other than that, if you think about where the adjacent-possible progress
might be most rapid:
what we lack, what we've lacked in neuroscience, is the ability to look at high resolution in time and space,
and across much of the brain at the same time, measuring from many neurons simultaneously,
in systems that we know are conscious or very likely are, primates and other things.
And there are just massive advances now, I think, in invasive neurophysiology
and in different kinds of neuroimaging methods,
optogenetics being one of them.
But I think really doubling down on manipulation and recording
and high space time and coverage simultaneously,
coupled with the development of new mathematical tools
to understand these kinds of complex data sets,
that's where I'd go. Lots to do that.
Many of the people who watch this podcast are specialists in computer science, math, physics, philosophy, adjacent fields, consciousness studies, neuroscience, of course, cognitive science. But also, many are not. Many are artists. For instance, when I was at this MIT event, I'll place a link on screen and in the description, there were many people who were painters and poets and so on who came up to me. So I was going to ask just about advice for
researchers, but you can frame it as advice to everyone. What advice do you have?
You know, I think for students, it's super important to kind of curate your curiosity.
I mean, I started with this very general curiosity about consciousness, but then I think
it was important to allow that curiosity to find other branches that
end up coming together in different ways.
You know, I got very interested in other things, too,
in cybernetics, in things that at the time didn't seem to have
much to do with this big question.
I think one way to carve out a successful career
is to put different pieces together,
to gain skills that are both techniques and methodologies,
but also conceptual toolboxes, that you can then reassemble in different ways
that other people might not have had the opportunity to.
So really, there's two interconnected things, which is, don't lose sight of the big picture
of what you want to do, but also be flexible and try and develop curiosity and adjacent
things that might come in handy.
and also learn to do stuff.
You know, I think many advances in science have come about
through advances in methods first.
And if we learn methods,
we will learn the right questions to ask.
And I think that's maybe the thing that I'm still trying to learn to do
as a researcher, which is the thing I find really hard:
finding the right questions,
not just finding answers to the questions that you already have.
That for me is still the real struggle.
Can you give an example? One of a method that you wish you had, for instance,
that you wish you had learned earlier in your career,
or just a general example of something that would be beneficial to a student.
So, a method. And then also, you mentioned asking questions.
So an example there would be: what's something
where you were pursuing the answer, but you realized there should have been a better question?
So I'll try and give examples that connect both of these things.
So an example of something that I wish I had gained some expertise in earlier is psychophysics.
Now, this is this standard experimental thing.
I caricatured it a bit earlier: undergraduates sitting in front of a monitor, pressing buttons and so on.
but the methods of psychophysics
are probably the longest established
experimental methods of studying consciousness
how do we interpret data
for people pushing buttons when you show them things
I mean it's very simple
but there's a huge amount of literature
that goes back to the 19th century there
And, you know, I made, I think, a ton of mistakes,
and certainly a ton of inefficiencies,
kind of improvising my way
through this literature,
or through my own work,
because I hadn't gained the skills early enough.
So that's one example of something I wish I'd done differently.
I think it would have allowed me to ask better questions experimentally.
The thing that I think went well was that I picked up, I trained myself
and then asked other people to help me learn,
information theory and
Granger causality modeling.
This is a mathematical sort of framework
for understanding information flow,
causal interactions between nodes of a network
in complex systems generally.
These methods were mainly used,
certainly at the time, in the early 2000s
when I encountered them,
they were primarily still used in economics,
econometrics,
not in neuroscience.
There were a couple
of papers basically saying, hold on a minute, we might be able to look at, apply these methods
in neuroscience. And I just got curious about that, not because I thought there was a big clue to
consciousness there, but I thought, hold on, that's really interesting. You know, people
look at coherence or mutual information or correlation between brain regions, but you
might be interested in causal information flow, you know, lines with arrows
that are not going in both directions.
So I was lucky to know people who could help me learn this stuff
and it's become quite a strong part of what I've done over the years.
Now working with mathematicians who know this stuff much better than me,
but we've done a lot in applying these methods in neuroscience now
and giving people the tools to apply them for themselves.
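Granger causality, roughly: signal X "Granger-causes" signal Y if X's past improves prediction of Y beyond what Y's own past already provides. Here is a minimal lag-1 sketch in plain Python, not the multivariate machinery actually used in neuroscience (real analyses fit full vector autoregressive models); the data is synthetic, with a one-way arrow built in:

```python
import math
import random

def _centered(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def residual_var(target, predictors):
    """OLS residual variance of target regressed on 1 or 2 predictors,
    all series centered, solved via closed-form normal equations."""
    t = _centered(target)
    Z = [_centered(p) for p in predictors]
    if len(Z) == 1:
        z = Z[0]
        b = sum(zi * ti for zi, ti in zip(z, t)) / sum(zi * zi for zi in z)
        resid = [ti - b * zi for zi, ti in zip(z, t)]
    else:
        z1, z2 = Z
        s11 = sum(a * a for a in z1)
        s22 = sum(a * a for a in z2)
        s12 = sum(a * b for a, b in zip(z1, z2))
        s1t = sum(a * b for a, b in zip(z1, t))
        s2t = sum(a * b for a, b in zip(z2, t))
        det = s11 * s22 - s12 * s12
        b1 = (s22 * s1t - s12 * s2t) / det
        b2 = (s11 * s2t - s12 * s1t) / det
        resid = [ti - b1 * a - b2 * b for a, b, ti in zip(z1, z2, t)]
    return sum(r * r for r in resid) / len(resid)

def granger(x, y):
    """Granger causality x -> y at lag 1: how much does adding x's past
    shrink the prediction error of y, beyond y's own past?"""
    y_now, y_past, x_past = y[1:], y[:-1], x[:-1]
    restricted = residual_var(y_now, [y_past])
    full = residual_var(y_now, [y_past, x_past])
    return math.log(restricted / full)

random.seed(0)
n = 4000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0] * n
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + random.gauss(0, 1)  # the arrow goes x -> y only

gc_xy = granger(x, y)  # clearly positive: x's past helps predict y
gc_yx = granger(y, x)  # near zero: y's past tells us nothing about x
```

The asymmetry between `gc_xy` and `gc_yx` is the "line with an arrow" that plain correlation or coherence cannot give you.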
And it's also fed back into other things to ask,
this is the other example.
So different questions, right?
So one question that I've been asking for years,
and I think it's getting some wider grip now,
and again, this is largely thanks to collaborations
with mathematicians, is emergence.
So this came up a little bit in a conversation with Mike.
People talk about emergent properties and so on,
and often it's the sort of placeholder magic
for things that we don't really understand.
But actually, you know, I think there are ways to make quantitative sense of it,
to measure emergence, to characterize it, to identify it in a data-driven way
from systems. And the mathematical toolbox of information theory and Granger causality
has actually turned out to be very useful in figuring out how to do this, to come up
with measures of emergence that allow us to ask questions about emergence in a more
quantitative and operational way. I remember you and a few other people had a paper on this
within the past two years or so, correct?
That's right. I've been working actually with two different groups of people on two different
approaches. The main one is with my colleague Lionel Barnett, who I've worked with for many years
now, who's a mathematician. So the story was: I actually wrote a paper on
this 15 years ago, using Granger causality to measure emergence. And I was very pleased with
myself at the time. This is great, no? Here's this concept and here's a way to implement it
mathematically. And, you know, it kind of got a bit of attention, but not much. And then Lionel
pointed out to me that it was basically, you know, flawed in all sorts of ways and came up with
a related idea that does something much more rigorously. And it's a slightly different thing
and we're still working on it to figure out how to extend it. But it's mathematically a much
more serious enterprise now. But what it does, basically, is it says: okay,
you've got a complex system. An example that's often used is a flock of birds.
There may be birds flying around in the sky and sometimes it looks like they're flocking
and other times it doesn't. Can you quantify that? And of course you could say, well,
it's in the eye of the observer. Fine, it's in the eye of the observer, but so basically, you know,
so is everything really. There's still a difference between a flock and a non-flock.
And if we can quantify that, then and generalize it.
So maybe there's something about neurons that have an essence of this flockiness,
but maybe not now in space in three dimensions,
but in some other dynamical space, in some other dimensional space.
And the approach that Lionel and I took was to come up with a measure we call
dynamical independence: when a zoomed-out level of description,
a coarse-graining, as physicists like to say,
a higher level of description of a system, when its evolution over time
is statistically independent of what its constituent parts are doing, then it in some sense
has a life of its own, then it is in some sense emergent, dynamically independent.
And it turns out that the utility of this approach is that we can apply it in a purely data-driven way
without making any presuppositions of saying, oh yeah, there's a flock, is it emergent?
We can just identify potential emergent properties in a system
and see how they look in different states.
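The idea can be caricatured in a few lines. This is emphatically not Barnett and Seth's actual measure, just a toy in its spirit (all names and parameters here are invented): simulate a "flock" whose members track the flock average, take the mean as a candidate coarse-graining, fit the macro variable's own dynamics, and check that a micro variable's past has nothing left to add.

```python
import random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    return num / (sum(x * x for x in da) ** 0.5 * sum(y * y for y in db) ** 0.5)

random.seed(2)
n_birds, T = 50, 3000
pos = [[random.gauss(0, 1) for _ in range(n_birds)]]
for t in range(1, T):
    m = sum(pos[-1]) / n_birds  # current flock average
    # Each "bird" partly keeps its heading, partly follows the flock.
    pos.append([0.5 * x + 0.4 * m + random.gauss(0, 1) for x in pos[-1]])

macro = [sum(row) / n_birds for row in pos]  # candidate coarse-graining
mp, mn = macro[:-1], macro[1:]

# Fit the macro variable's own AR(1) dynamics and take the residuals.
m_mean = sum(mp) / len(mp)
slope = (sum((a - m_mean) * b for a, b in zip(mp, mn))
         / sum((a - m_mean) ** 2 for a in mp))
resid = [b - slope * a for a, b in zip(mp, mn)]

# Toy dynamical-independence check: one bird's past should add nothing
# once the macro's own past is accounted for.
bird0_past = [row[0] for row in pos[:-1]]
leftover = corr(resid, bird0_past)  # should be close to zero
```

A coarse-graining for which `leftover` stays near zero "has a life of its own" in the sense described above; the real measure does this properly, across candidate coarse-grainings and in a data-driven way.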
And just to say where we're at right now, for me, is a hugely exciting thing, actually,
which is that often people say, well, conscious states are emergent from their neural underpinnings.
The conscious brain is in some sense more than the sum of its parts.
That all sounds very nice, and I'm sure I've said stuff like this many times before.
But now with the tools that Lionel developed and applied with a PhD student of ours
and who's also working with others in Paris, Tomar, and Drian, we find something quite different, actually,
which is that when the brain is in a conscious wakeful state,
there's less prominence of these so-called dynamical independent coarse grainings
than when the brain is unconscious in anesthesia,
which is sort of not what we were expecting. The slogan would be:
emergence is lower in consciousness than in unconsciousness.
that would not be what I would have predicted a few years ago
or even two years ago, one year ago, I'm not sure.
But it looks, when we operationalize emergence in this specific way
and with this specific data, that's what we find.
But then that raises other interesting questions.
And I think this is the beauty of actually operationalizing these things,
making them quantitative, because now we have another set of questions
which is like: ah, maybe this is because in the conscious state, actually, when you don't
have emergence in the way we're quantifying it, what you actually have is something called
scale integration, where what's happening at the macro level and what's happening
at the micro level are much more interdependent. There's much less separation of scales. And this takes
us right back to what we were talking about with Mike and indeed the whole idea of conscious
AI, that, you know, I said right at the top that in brains there seems to be, it's harder
to separate what they do from what they are. In a sense, this is a way of quantifying that
hardness. And it seems when the brain is conscious, it's even harder to separate what it does
from what it is. You have this deeper integration of scales vertically, not across time or across
space, but across levels of description of a system. And so for me, this is opening like a whole
range of questions that haven't really been asked. Certainly I haven't asked them before.
It's a different way of looking at a system like this. And it all turns on having this mathematical
method available. And for me, that goes right back to the serendipity of being curious about
Granger causality 20 years ago. Hmm. Now, there's some research that says that when one
takes psychedelics, and it probably depends on the psychedelic, the brain is less active,
even though your conscious experience,
quote-unquote, is greater somehow.
Is this related to that,
or have you not studied emergence
when it comes to the brain under psychedelics?
It's a little related.
So we have a little bit, in collaboration.
We don't have the license to collect our own data under psychedelics,
but we've collaborated with people like Robin Carhart-Harris
and others who have.
And we have not yet, but this is very much on the cards, we have not
yet applied this same measure that I was
just talking about to the psychedelics data, but there's no reason we can't. What we have done
is we've applied other measures that have often been used in things like sleep and anesthesia
as well that measure what we call signal diversity. And the story here is that when you lose
consciousness, your brain activity seems to become more predictable. So the repertoire of states
that it inhabits is lower. This is measured using this quantity we call Lempel-Ziv complexity.
It's sort of the compressibility of a signal. And the complexity is lower when you lose consciousness.
Your brain dynamics are more compressible and more predictable. When we applied this,
this was now nearly eight or nine years ago to data from psilocybin, LSD.
We found the opposite, that the brain activity became even less predictable.
So more diverse, more different patterns, less compressible, higher levels of complexity.
So that's one clue, but to me it's still very preliminary.
This method of measuring signal diversity is quite precarious. It depends: if you do it a different way, you tend to get different results.
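The Lempel-Ziv phrase-counting idea Anil describes can be sketched in a few lines. This is a minimal illustration, not the lab's actual pipeline: real EEG analyses typically binarize each channel first (for instance by thresholding its amplitude at the median) before counting phrases, and published results use carefully normalized variants of this measure.

```python
def lempel_ziv_complexity(bits: str) -> int:
    """Count phrases in an LZ76-style parsing of a binary string.

    A new phrase ends the first time the current substring has not
    appeared anywhere earlier in the sequence. More phrases means a
    less compressible, more "diverse" signal.
    """
    i, count, n = 0, 0, len(bits)
    while i < n:
        l = 1  # grow the candidate phrase until it is novel
        while i + l <= n and bits[i:i + l] in bits[:i + l - 1]:
            l += 1
        count += 1  # one more phrase in the parsing
        i += l
    return count

# A periodic signal compresses well; an irregular one does not.
print(lempel_ziv_complexity("01" * 8))            # low: 3
print(lempel_ziv_complexity("0011010010110111"))  # higher
```

This is why a repertoire of fewer, more repetitive brain states yields a lower score: repeated patterns keep matching earlier history, so the parsing produces fewer phrases.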
But I think, you know, there are other things we looked for that we didn't find in the psychedelics
data set. I was expecting to see, for instance, just much greater information flow from the
front of the brain to the back. I thought, you know, that might explain the prominence of
hallucinatory contents. We didn't see that, at least not in the analysis that we did at first. We
didn't see any change in information flow in that way. So I don't know. I mean, there's a lot
to be done. But I think that certainly just looking at overall levels of brain activity, to say it's less active or more active, is not going to give us the answers. We need to look in more sophisticated ways.

Now, lastly, speaking of surprise minimization, what else has surprised you lately in consciousness research?

What has surprised me?
I mean, we can put this as an aside, but I think the thing that surprised everybody,
this is only tangentially related, is how simultaneously impressive and unimpressive language models are.
Okay.
They're really very different from how I thought they would be.
They can do a lot more, but they also have sort of still bizarre failure modes and so on.
So I just would not have expected the trajectory of language models to be as salient as it has been.
That's certainly been a big surprise.
What else has been surprising?
I don't know.
It's a really good question.
I'm not sure that anything massively stands out to me.
I'm sure something will come to mind
as soon as we finish this conversation.
As it does.
There have been other things which have turned out kind of in ways that one might have expected.
There was this huge adversarial collaboration between integrated information theory and global workspace theory,
this big effort to compare these two big theories of consciousness.
And of course, it's turning out that there's evidence for and against both,
and there's no decisive blow against either.
And that's probably exactly what one would have expected,
though there's still a lot of interesting and surprising things
there in the details.
But yeah, I don't know.
There's lots of things that are, I would say, small-scale surprising.
It's like, oh, I didn't expect that experiment to go this way or that way.
But I can't think of anything massive.
The AI thing is sort of dominating my surprise minimization landscape at the moment.
Thank you both for spending so much time with me and the audience. Thank you so much. Yeah, much appreciated.
Thanks. Thank you. Thank you, Mike. See you both. Yeah. See you.
Hi there. Curt here. If you'd like more content from Theories of Everything and the very best listening experience, then be sure to check out my substack at curtjaimungal.org. Some of the top
perks are that every week you get brand new episodes ahead of time. You also get bonus written content
exclusively for our members, that's C-U-R-T-J-A-I-M-U-N-G-A-L.org.
You can also just search my name and the word substack on Google.
Since I started that substack, it somehow already became number two in the science category.
Now, substack for those who are unfamiliar is like a newsletter, one that's beautifully
formatted, there's zero spam, this is the best place to follow the content of this channel
that isn't anywhere else. It's not on YouTube. It's not on Patreon. It's exclusive to the
substack. It's free. There are ways for you to support me on substack if you want, and you'll get
special bonuses if you do. Several people ask me, like, hey, Curt, you've spoken to so many people in the fields of theoretical physics, philosophy, and consciousness. What are your thoughts, man?
Well, while I remain impartial in interviews, this substack is a way to peer into my present deliberations on these topics.
And it's the perfect way to support me directly.
Curtjaimungal.org, or search Curt Jaimungal substack on Google.
Oh, and I've received several messages, emails, and comments from professors and researchers
saying that they recommend theories of everything to their students.
That's fantastic.
If you're a professor or a lecturer or what have you
and there's a particular standout episode
that students can benefit from or your friends,
please do share.
And of course, a huge thank you
to our advertising sponsor, The Economist.
Visit Economist.com slash toe, T-O-E,
to get a massive discount on their annual subscription.
I subscribe to The Economist and you'll love it as well.
TOE is actually the only podcast that they currently partner with, so it's a huge honor for me, and for you, you're getting an exclusive discount. That's Economist.com slash toe, T-O-E.
And finally, you should know this podcast is on iTunes, it's on Spotify, it's on all the audio platforms.
All you have to do is type in Theories of Everything, and you'll find it. I know my last name is complicated, so maybe you don't want to type in Jaimungal, but you can type in Theories of Everything, and you'll find it. Personally, I gain from re-watching lectures and podcasts. I also read in the comments that TOE listeners also gain from replaying, so how about instead you re-listen on one of those platforms like iTunes, Spotify, or Google Podcasts? Whichever podcast platform you use, I'm there with you. Thank you for listening.
