Theories of Everything with Curt Jaimungal - Michael Levin IN-PERSON AT TUFTS / LEVIN LAB
Episode Date: June 14, 2024. Michael Levin is an American developmental and synthetic biologist at Tufts University, where he is the Vannevar Bush Distinguished Professor. Levin is a director of the Allen Discovery Center at Tufts University and the Tufts Center for Regenerative and Developmental Biology. He is also co-director of the Institute for Computationally Designed Organisms with Josh Bongard. Please consider signing up for TOEmail at https://www.curtjaimungal.org  Support TOE: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - TOE Merch: https://tinyurl.com/TOEmerch  Follow TOE: - *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org - Instagram: https://www.instagram.com/theoriesofeverythingpod - TikTok: https://www.tiktok.com/@theoriesofeverything_ - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
Transcript
The hardware does not define you.
I get lots of emails from people who say,
I've read your papers, I understand I'm a collective intelligence of groups of cells.
What do I do now?
I just learned that I'm full of cogs and gears,
therefore I'm not what I thought I was.
And I think this is a really unfortunate way to think.
The bottom line is you are still the amazing integrated being
with potential and a responsibility to do things.
I think we are actually a collection of interacting perspectives and interacting consciousnesses.
Professor Michael Levin, welcome.
What's one of the biggest myths in biology that you, with your research, with your lab,
are helping overturn?
Yeah.
Hey, Curt, great to see you again.
I think that one of the biggest myths in biology is that the best
explanations come at the level of molecules. So this biochemistry, what they call molecular
mechanism, is usually currently taken to be the kind of the gold standard of what we're looking
for in terms of explanations. And I think in some cases that's valuable, that has the most value, but in most cases, I think it's not the right level.
And Denis Noble has a really interesting set of papers
and talks on biological relativity.
So this idea that different levels of explanation
especially in biology provide the most bang for the buck.
And I think in terms of looking forward as to
what do these explanations let you do as a next step, I think we really need to go
beyond the molecular level for many of these things.
What's the difference between weak emergence and strong emergence?
I think that emergence in general is basically just the measure of surprise
for the observer. So I think a phenomenon is considered emergent by us
if whatever it's doing was something that we didn't anticipate.
And so it's relative.
I don't think emergence is an absolute thing that either is there or
isn't, or is weak or strong.
It's just a measure of the amount of surprise, you know, how much
extra is the system doing that you, knowing the rules and the various
properties of its parts, didn't see coming.
That's emergence, I think.
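To make the "emergence as observer surprise" framing concrete, here is a minimal illustrative sketch in Python (the probabilities are invented for illustration, not taken from the conversation): surprise can be quantified as Shannon surprisal, so the same behavior counts as more or less "emergent" depending on the observer's model of the parts.

```python
from math import log2

def surprisal(p: float) -> float:
    """Shannon surprisal (in bits) of an observation the observer assigned probability p."""
    return -log2(p)

# Two hypothetical observers assign different probabilities to the same observed
# collective behavior, depending on how much they know beyond the parts list.
observers = {"knows only the parts": 0.001, "knows the collective-level rules": 0.6}
for name, p in observers.items():
    print(f"{name}: surprisal = {surprisal(p):.1f} bits")
```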
Would you say then that biology has physics as fundamental?
So it goes biology, chemistry, physics?
I mean, people have certainly made that kind of ladder.
I'm not sure what we can do with that.
I think that it's more like these different levels have their own
types of concepts that are useful for understanding what's going on and they have their own autonomy.
That's really important that I think there's a lot of utility in considering how some of these
higher levels are autonomous and they do things that the lower levels don't do and that gives us
power, that gives us experimental power.
And would you say that there's something that the lower levels don't do in principle or
in practice?
So, for instance, is there something that a cell is doing that in principle can't be
derived from the underlying chemistry, which in principle can't be derived from the underlying
physics?
Yeah, it really depends on what you mean by derive.
But for example, there are certainly aspects of, let's say, cognitive functions that we normally associate with fairly complex
creatures, brains and so on, that we are now seeing in very simple basal media.
So even something as simple as a gene regulatory network can have six different kinds of learning
and memory, for example, Pavlovian conditioning.
And so there, what you would see is, on the one hand, you say, okay, well, look, here's a simple bit of chemistry that's actually doing these complex things that we normally
associate with behavior science and with investigating cognitive systems.
You can always tell a chemistry story about any effect.
In fact, you could also tell a physics story as well.
If you look under the hood, all you ever find is chemistry and physics, right?
Uh, but the more interesting thing about that story is that you actually can use paradigms
from behavioral science, such as learning and conditioning and communication and active
inference and things like that.
And that lets you do way more than if you were to restrict yourself to an understanding
at the level of mechanism.
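As a rough illustration of how a very simple chemical network can show Pavlovian-style conditioning, here is a toy Python sketch (my own minimal model with made-up parameters, not the Levin lab's published gene regulatory network simulations): a weak stimulus comes to trigger the response after being paired with a strong one.

```python
# Toy three-node "gene network": an unconditioned stimulus (US) strongly drives a
# response node R; a conditioned stimulus (CS) initially drives R only weakly, but
# the CS -> R weight is potentiated whenever CS and US are active together.
import numpy as np

def run(steps, cs_on, us_on, w_cs=0.05, lr=0.2, decay=0.01):
    r_trace, w = [], w_cs
    for t in range(steps):
        cs = 1.0 if cs_on(t) else 0.0
        us = 1.0 if us_on(t) else 0.0
        r = np.tanh(1.0 * us + w * cs)     # response node activity
        w += lr * cs * us - decay * w      # Hebbian-style potentiation of CS -> R
        r_trace.append(r)
    return np.array(r_trace), w

# Training: CS and US paired for 50 steps.
_, w_trained = run(50, cs_on=lambda t: True, us_on=lambda t: True)
# Test: CS alone, before vs after pairing.
before, _ = run(10, cs_on=lambda t: True, us_on=lambda t: False)
after, _ = run(10, cs_on=lambda t: True, us_on=lambda t: False, w_cs=w_trained)
print(f"response to CS alone: before {before.mean():.2f}, after {after.mean():.2f}")
```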
What's super interesting there is that you said you could always tell a chemistry story
or you could always tell a physics story to explain some biological process, but you can't always tell a biological story to something that's physics related.
So what people would consider more fundamental, or what gives rise to the rest in terms of ontology, would be, well, can you tell a story in terms of so-and-so? So can you tell a story, in terms of biology, of
physics? You can tell a story of architecture for
these buildings behind us. Maybe even of biology, because the
people who made them are biological, and you could tell a mathematical story, but it would be
difficult to make the case that you could tell an architectural story, in the
way that we understand architecture and not as a metaphor, about mathematics. Do you agree with that or do you disagree with that?
I think that's somewhat true, although it's not as true as we think. So oftentimes, you
actually can tell an interesting story cashed out in terms of behavioral science, for
example, active inference, for example, various learning modalities of objects that are
really thought to be the domain of physics and
chemistry. So I think there's more of that than we tend
to understand as of now. But it is also the case
that the fact that you can tell a physics story doesn't mean
that there's that much value in it necessarily. So for example,
imagine that there's a chess game going on.
So you could absolutely tell the story of that chess game in terms of particle movements or maybe quantum foam or I don't know whatever the bottom level is.
So you can do that. How much does that help you in playing the next game of chess?
Almost not at all, right? If your goal is to understand what happened at the physical level, yes, you can tell a story about all the atoms that were pushing and pulling and all of that.
But if your goal is to understand the large scale rules of the game of chess, the principles,
and more importantly, to play the next game of chess better than what you just witnessed,
that story is almost completely useless to you.
And this abounds in biology where you can use chemistry and physics to tell a story looking backwards
about what just happened and why,
but in terms of understanding what it means
and developing that into a new research direction,
new biomedicine, new discoveries,
it often means that you actually have to tell
a much larger scale story about memories, about goals,
about preferences, and these other kinds of concepts.
What would be the analogy here for,
so the rules of chess, what are you trying to understand?
The rules of what, biology, developmental biology,
cellular biology?
Well, the thing that ties all of our work together,
I mean, we do a lot of different things.
We do cancer biology, we do developmental biology,
we do regeneration, aging, not to mention AI
and some other things.
What ties it all together
is really an effort to understand embodied mind. So the center focus of all my work is really to
understand cognition broadly in very unconventional, diverse embodiments. And in biology,
what we try to understand is how life and mind intersect and what are the features
that allow biological systems to have preferences, to have goals, to have intelligence, to have
clever problem solving in different spaces.
Okay, great.
Because there are a variety of different themes that your work touches on.
So regeneration, cancer research, basal cognition, cross embryo morphogenetic assistance, which is from a
podcast that we did a few months ago and that will be on screen, link in the
description. There's xenobots, anthropobots. Please talk about what ties
that all together. Yeah, what ties it all together is the effort to understand
intelligence. All of those things are really different ways that intelligence is embodied in the physical world.
And so, for example, when we made xenobots and anthropobots,
our goal with all of these kinds of synthetic constructs is to understand where the goals of novel systems come from.
So typically when you have a natural plant or animal and it's doing various things, not only the structure
and the behavior of that organism,
but also whatever goals it may have,
we typically think of as driven by its evolutionary past, right?
So these are set by the evolutionary history,
by adaptation to the environment.
So for eons, everything that
wasn't quite good at pursuing those goals died out.
And then this is what you have. That's the standard story.
So the reason that we make these synthetic constructs is to ask, okay,
well for creatures that never existed before that do not have an evolutionary
history of selection, where do their goals come from?
So that's just part of that research program to understand
where do the properties of novel cognitive systems come from that do not
have a long history of selection
as that system. Everything else that we do is an extension of our search for intelligence. So
basically cancer, right? So the way we think about cancer is in terms of the size of the cognitive light cone
of living or of cognitive systems. So what I mean by that is every agent can be demarcated, can be
defined by the size of the largest goal it can pursue.
So that's the cognitive light cone in space and time.
What are the biggest goals that it can pursue?
So if you think about individual cells, they have really, uh, sort of humble
single cell scale goals.
They have metabolic goals.
They have, uh, you know, proliferative goals and things like that.
They handle the situation on a very small, single cell kind of scale.
But during embryonic development and during evolution in general,
policies for interaction between these cells were developed that allow them to scale the cognitive light cone.
So groups of cells, for example, the groups of cells
that are involved in making a salamander limb, they have this incredibly grandiose goal in a different space.
Instead of metabolic space, they are building something in anatomical space.
So they have a particular path that they want to take in the space of possible
anatomical shapes. So when a salamander loses their limb,
the cells can tell that they've been pulled away from the correct region of
anatomical space. They work really hard. They rebuild, they take that journey.
Again, they rebuild that limb and then they stop.
And they stop because they know they've gotten
to where they need to go.
That whole thing is a navigational task, right?
And there's some degree of intelligent problem solving
that they can use in taking that task.
So that amazing scale up of the cognitive light cone,
the light cone measures, what do you care about?
So, you know, if you're a bacterium, maybe you care about the local
sugar concentration and a couple of other things, but that's basically it.
If you're a set of salamander cells, what you really care about is your
position in anatomical space.
Do we have the right number of fingers, the right size, everything,
and so on.
And so understanding that scaling of cognition, the scaling of goals,
how does a collective intelligence work
that takes the very competent, tiny components
that have little tiny light cones
and how do they come together to form
a very large cognitive light cone
that actually projects into a different space,
anatomical space now instead of metabolic space.
So that's fundamentally a question of intelligence.
And the other thing about it is that this kind of thinking, so this is really important,
that this kind of thinking is not just, you know, sort of philosophical embellishment,
because it leads directly to biomedical research programs. If you have this kind of idea,
what you can ask is, well, let's see, then if cancer is that type of phenomenon,
it's a breakdown of that scaling, and that basically
individual cells just shrink their light cone back down to the level of a primitive microbial
cell, as they once were.
And that boundary between self and world now shrinks, whereas before the boundary was that
whole limb.
Now the boundary is just every cell is the self.
So from that perspective, they're not more selfish.
They just have smaller selves.
So that's really important because a lot of work in cancer models
cancer cell behavior from a perspective of game theory, like they're less
cooperative and they're more selfish.
Actually, I'm not sure that's true at all.
I think they just have smaller selves.
And so, so just a moment.
So you think you would say that every organism is equally as selfish.
It just depends on the concept of self.
Yeah.
I think one of the things all agents do is operate.
It's not the only thing they do, but one of the things they do is they operate in
their self-interest, but the size of that self can grow and shrink.
So individual cells, when they're tied into these larger networks,
using electrical cues, chemical cues, biomechanical cues, those networks partially erase
their individuality. And we could talk about how I think that happens. But what you end up with is a
collective intelligence that has a much bigger cognitive light cone. The goals are much bigger.
And then of course, you can scale that up. I mean, humans have enormous cognitive light cones
and whatever's beyond that. But that's the thing, it's the size of your cognitive light cone that determines what
kinds of goals you can pursue and what space you pursue them in.
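One way to picture the "cognitive light cone" idea in code is as a record of the space a system's goals live in plus their rough spatial and temporal extent. The sketch below is purely illustrative; the field names and numbers are hypothetical, not measurements from the conversation.

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    goal_space: str            # the space the goals are defined in
    spatial_extent_m: float    # rough spatial scale of the largest goal (meters)
    time_horizon_s: float      # rough temporal scale of the largest goal (seconds)

# Hypothetical, order-of-magnitude examples only.
bacterium = CognitiveLightCone("metabolic space", 1e-6, 60.0)
limb_collective = CognitiveLightCone("anatomical space", 1e-2, 3e7)
print(bacterium)
print(limb_collective)
```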
So we talk about goals and we talk about intelligence.
And you said something interesting, which is that life is the embodiment of intelligence
or something akin to that, the embodiment of intelligence.
So it's as if there's intelligence somewhere and then you pull from it
and you instantiate intelligence.
So is that the way that you think of it?
Let me be clear about, about what I mean.
In philosophy, there's this concept of universals.
So Plato had the forms and then would say that this is almost rectangular.
So it's embodying the, the rectangularness, which is somehow,
somewhere out there akin to what I imagine
you meant when you said intelligence is there being embodied by this.
But then there's Aristotle, which says, okay, yes, there is something akin to a form, it's
just not out there.
It's actually in here, the rectangularness is a property of this microphone.
So I imagine this latter view is the one that most biologists would take. Not that there's intelligence out there, let's just grab it and instantiate it here.
No, something has the property of intelligence.
So explain what you mean when you say embodying intelligence.
And also what you mean by intelligence.
Yeah, let's see, and just a quick thing to finish the previous thought, which is that
the reason I think this stuff is not philosophy, these are not questions of philosophy, these are very practical questions
of science because they lead to specific actionable technology.
So the idea that what's going on in cancer is a shrinking of the cognitive light cone
leads directly to a research program where you say, well, instead of killing these cells
with chemotoxic chemotherapies, because I believe that they're genetically irrevocably
damaged,
maybe what we can do is force them
into better electrical networks with their neighbors.
And that's exactly what we've done.
And so we've had lots of success
expressing strong human oncogenes in frog embryo cells,
and then leaving the oncoproteins be,
but instead making sure that
they're connected into tight electrical networks with their neighbors, and they normalize. They make
nice skin, nice muscle, they do their normal thing instead of being metastatic. And so
that kind of thing, you know, it's very important to me that all these ideas, and we'll
talk in a minute about platonic space and things like that, it's important to me that all
of these ideas don't just remain as kind of
philosophical musings, but they have to make contact with the real world and specifically, not just explaining stuff that was done before, but
facilitating new advances.
They have to enable new research programs that weren't possible
before. That's what I think is the value of all of
these deep philosophical discussions.
So let's see, uh, with respect to the definition of intelligence.
So I like for practical purposes,
I like William James's definition,
which is some degree of the ability
to reach the same goal by different means.
And it's an interesting definition
because it presupposes that there is some problem space
that the agent is working on.
But it does not say you have to be a brain. It doesn't say you have to be natural versus artificial.
It doesn't say any of that.
It's a very cybernetic definition.
What it says is that you have some amount of skill in navigating that problem
space to get to where you want to go.
Despite interventions, despite surprise, despite various barriers in your way,
how much competency do you have to achieve
that? And that was his definition of intelligence. Now I will certainly agree that that doesn't
capture everything that we care about in cognition. So, so a focus on problem solving doesn't
handle play and it doesn't handle emotions and things like that. But for the purposes
of our work, I focus on the problem solving aspects of
intelligence. So within that, your point about the platonic space. So now to be clear, everything
that I was saying before, I think we have very strong empirical evidence for. And so
now I'm going to, in answering this question, I'm just going to talk about stuff that I'm
not sure of yet. These are ideas where, you know, I don't feel strongly
that we've shown that any of this is true, but this is my opinion.
And this is where our research is going now.
I actually think that the platonic view is more correct.
And I know this is not how most biologists think about things.
I think that
in the exact same way that mathematicians that are sympathetic to the platonic worldview,
this idea that there are in fact in some way there is a separate world in which various
rules of number theory and various facts of mathematics and various properties of computation
and things like that, right? That there's a separate space where these things live.
And importantly, the idea is that, you know, they think that we discover those things.
We don't invent them or create them.
We discover them, and they would be true still.
If all the facts of the physical universe were different, they would still, you know,
they would still be true, right?
I extend that idea in a couple of ways.
One is I think that what, what exists in that
platonic space is not just, um, you know, rules of
mathematics and things like that, which are in a
certain sense, low agency kind of objects,
because they just sort of sit there doing, you
know, they don't do much.
I actually think that there's a spectrum of that.
And some of the components of that platonic space
have, have much higher agency.
I think it's a space of minds as well and of different ways to be intelligent.
And I think that just like when you make certain devices, you
suddenly harness the rules of computation, rules of
mathematics that are basically free lunches, you know, there are so many
things you can build that will suddenly have properties that you didn't have
to bake in, you know, you get that for free basically, right?
I think intelligence is like that.
I think some of the components of that platonic space are actually minds.
And sometimes, what happens when you build
a particular kind of body, whether it's one that's familiar
to us with a brain and so on, or really some very unfamiliar architectures,
whether they be alien or synthetic or whatever.
I think what you're doing is you're harnessing a pre-existing intelligence that is there
in the same way that you harness various laws of mathematics and computation when you build
specific devices.
Okay, well, that's super interesting.
So at any moment, there's a Mike. There's Mike at 11 a.m., then
there's Mike at 11:01 a.m. And
it's not clear they're the same. So there's the river: do you step in the same river twice? Okay, cool.
There's a through line. There's something there though
Maybe it's something like you're akin to what you were an epsilon amount of time previous, and then that's the through line.
So as long as that's true, then you can draw a worm of you throughout time. Are you saying that
at any slice of that there was a Michael Levin in some Michael Levin space, which
is also in the space of human minds, and you're picking out points, somehow you're
traversing the space of minds? That's actually even more specific than
what I was saying. I mean, that's interesting.
And I think what I was saying is about, in general, kinds of minds.
I don't know quite what to think about individual instances, but kinds of minds.
Because that would be an extremely large space then.
Well, I think, yes, I think the space is extremely large,
possibly infinite, in fact probably infinite.
But I think what I was talking about was mainly types
of different minds, types of cognitive systems.
Now for individuals, you raise a really interesting point,
which is, I call them selflets, which are kind of
the thin slices of experience that-
You call them selflets?
Selflets.
Because you have a self, and then the different slices of it are the selflets, and you can
look at it, you know, kind of like that special relativity bread loaf
sliced into pieces, that kind of thing.
So I think what's interesting about asking that question about the self and what is the
through line is what's really important there is to pick a vantage point of an observer. Again,
kind of akin to what happens in relativity, you have to ask from whose perspective. So,
so one of the things about being a continuous self is that other observers can count on your
behaviors and your property staying more or less constant. So the reason that we identify, oh yes,
you know, you're the same person as you
were before is basically, you know, nobody cares that you have the same atoms or
you don't, or, or the cells have been replaced.
What you really care about is that this, this being, I can have the same kind of
relationship with them that I had before.
In other words, they're consistent.
I can expect the same behaviors, the things that I think they know, they
still know and so on.
And so that, of course, in our human lives,
that often breaks down because humans grow
from being children to being adults,
all their preferences change,
the things that they remember
and the things that they value change.
In our own lives, we sometimes change, right?
And that's a much more important change.
Are you the same person,
even though if all your material components
remain the same, but you changed all your beliefs,
all your preferences, would you still be the same person?
So I think what we mean when we say the same
is not about the matter at all.
It's about what kind of relationship we can still have
and what do I expect from you behavior wise and so on.
And there's some really interesting, so that's from the perspective of an external observer.
Now, the latest work that I just published a couple of days ago looks at what does that mean from the perspective of the agent themselves.
And this idea that you don't have access to your past; what you have access to are your memory engrams, you know, traces of past experience
that were deposited in your brain and possibly in your body
that future you is going to have to interpret.
And so that leads to a kind of a scenario where you treat your
own memories as messages from your past self. The idea then is that those memories
have to be interpreted.
You don't necessarily know what they mean right away because
you're different.
You're not the same as you were, right?
Especially over long periods of time.
And this comes out very starkly in organisms that
change radically, like caterpillar to butterfly.
So memories persist from
caterpillar to butterfly, but the actual detailed memories of the caterpillar are of absolutely no use to the butterfly. It doesn't eat the same stuff. It doesn't move in the same way. It has a completely different body, completely different brain.
You can't just take the same memories. And so I think what happens in biology is that it is very comfortable with, in fact it depends on, the idea that the substrate will change.
You will mutate your cells.
Some cells will die.
New cells will be born.
Material goes in and out.
Unlike in our computational devices,
the material is not committed to the fidelity of the information
the way that it is in our computers.
You are committed to the salience of that information.
So you will need to take those memory traces and reinterpret them for whatever your future situation is.
In the case of the butterfly, completely different.
In the case of the adult human,
somewhat different than your brain when you were a child.
But even during adulthood,
the context, your mental context, your environment,
everything changes.
And I think you don't really have an allegiance
to what these memories meant in the
past, you reinterpret them dynamically.
And so this gives a kind of view of the self, kind
of a process view of the self, that what we really
are is a continuous dynamic attempt
at storytelling, where what you're constantly doing
is interpreting your own memories in a way that
makes, you know, a coherent story about
what you are, what you believe about the outside world.
And it's a constant process of self-construction.
So that, that I think is what's going on with selves.
If it's the case then that we interpret our memories as messages from our past selves
to our current selves, then can we reverse that and say that our current actions are messages to our future self?
Yeah, yeah. I think, I think that's exactly right.
I think that a lot of what we're doing at any given moment
is behavior that is going to enable or constrain your future self.
You know, you're setting the conditions, the environment in which your future self is going to be living, including changing yourself,
you know, anything you undertake as a self
improvement program or conversely, um, you know,
when people entertain, um, intrusive or
depressive thoughts that changes your brain, that
literally changes the way that your future self is
going to be able to process information in the
future.
Everything you do radiates out, not only as messages but as a kind of
niche construction, you know, where you're changing the environment in
which you're going to live and which everybody else is going to live.
We are constantly doing that to ourselves and to others.
And so that, you know, um, really, uh, forces a kind of, uh, thinking about
your future self as kind of akin to other people's future selves.
And I think that also has important ethical
implications because once that symmetry
becomes apparent, that your future self is not
quite you, and also that others' future
selves are also not you, that suggests that
for the same reason you do things so that your
future self will have a better life,
you might want to apply that to others' future selves. Like breaking
down this idea, and I'm certainly not the first person to say this,
but breaking down the idea that you are a persistent object, separate from
everybody else and, you know, sort of persisting through time, breaking
that down into a set of selves.
And what you are doing now is basically for the good of a future self, which, I think,
makes you think differently about others' future selves at the same time.
Is it as simple as the larger the cognitive light cone, the better for the organism?
Well, what does better mean? I mean, I think that certainly there are extremely successful
organisms that do not have a large
cognitive light cone.
Having said all that, you know, the size of an organism's cognitive light cone is not
obvious.
We are not good at detecting them.
It's an important research program to find out what any given agent cares about because
it's not easily inferable from measurements directly.
You have to do a lot of experiments.
So assuming we even know what anything's cognitive
light cone is, I think lots of organisms do perfectly well,
but then what's the definition of success?
So in terms of the way we think about,
well, the way many people think about evolution,
in terms of, you know, copy number,
like how many of you there are, that's your success.
So just persistence and expansion into the world.
From that perspective, I don't think you need a particularly large cognitive light cone.
Bacteria do great. But from other perspectives, if we sort of ask ourselves what the point is,
and why we exert all this effort to exist in the physical world and try to persist
and exert effort in all the things we do in our lives, one could make an argument that a larger cognitive light cone is probably better in the sense
that it allows you to generate more meaning and allows you to bring meaning to all the
effort and the, you know, the suffering and the joy and the hard work and everything else.
From that perspective, one would want to enlarge one's cognitive light cone.
And you know, we collaborate, well, I collaborate with a number of people in this group called
CSAS, the Center for the Study of Apparent Selves.
And we talk about this, this notion of something, for example, in Buddhism, they
have this notion of a bodhisattva vow.
And it's basically a commitment to enlarge one's cognitive light cone so that over
time, one becomes able to have a wider area
of concern or compassion, right? The idea is you want to work on changing yourself in a way that
enlarges your ability to really care about a wider set of beings. And so from that perspective,
maybe you want a larger cognitive light cone. Okay, so there are three notions here. So enlarging what you care about, okay, then also enlarging what you find relevant.
And then there's also increasing the amount of opportunities that you'll have in the future.
So that's the adage.
That's the business adage.
Go through the door that will open as many doors as possible.
And then the relevance one, I don't think is the case, because if you find too much relevant, you will also freeze because you don't know what to do.
Now then there's the concern for others. So help me disembroil these three from one another.
Yeah. The relevance thing, something I often think about is, what's the fundamental
unit that exists in the world? Is it, you know, genes? Is it information?
What is it that's really spreading through the universe and differentially reproducing
and all that? I tend to think it's perspectives. As
a way to describe really diverse agents,
I think one thing they all have is a commitment
to a particular perspective.
So perspective in my definition is a bundle of commitments
to what am I gonna measure about the outside world?
What am I gonna pay attention to?
And how am I going to weave that information
into some sort of model about what's going on
and more importantly, what I should do next.
So there are many, many different perspectives.
And as you said, it's critical that every perspective has to shut out way more stuff
than it lets in, because, and this interestingly
gets back to your original point about the different levels of description, you know,
physics and chemistry and all that, because if you wanted to be a
Laplacian demon, if you wanted to track microstates, all the particles, right,
I'm just going to track reality as it really is,
I'm just going to watch all the particles,
that's all I'm going to do.
No living system can survive this way, right?
Because you would be eaten before anything else happens.
You would be dead.
So I think that any agent that evolves
under resource constraints, which is all of life,
and it remains an interesting question,
what does that mean for AIs and so on,
but any agent that evolves under constraints
of metabolic and time resources
is going to have to coarse-grain.
They're going to have to not be a reductionist.
They're going to have to tell stories about agents that do
things as a means of compression,
as a way of compressing the outside world
and picking a perspective.
You cannot afford to try to track everything.
It's impossible.
So that compression also comes back to the memory
engrams issue that we were talking about, because
as your past self compresses lots of diverse experiences into a compact representation
right of a memory trace of what happened, that kind of learning is fundamentally compression
when you're compressing lots of data into a short pithy generative rule or a memory
that you're going to remember that you can use later.
So not the exact experiences that you had,
but some compressed representation.
Well, the thing about compression is that
the most efficient,
the data that's compressed really efficiently
starts to look random.
So because you've gotten rid of all the correlations
and everything else, the more you compress it,
the more random it looks.
And you've really gotten rid of a lot of metadata;
that's the point of compression.
You've lost a lot of information.
Wait, why do you say that the more compressed it is,
the more random it is?
Why isn't it the opposite?
But the more random it is, the more incompressible it is.
That's true because you've already compressed the hell out of it. That's
why, right? That's why it's incompressible, because you've already
compressed it as much as you can. This is the issue that, for example, the
SETI people, right? The Search for Extraterrestrial Intelligence,
that they come up against because the messages that are going to be
sent by truly advanced civilizations are going to look like noise to us.
Because with really effective compression schemes,
once you've compressed it,
unless you know what the algorithm is to reinflate it,
the message itself looks like noise.
It doesn't look like anything.
Because you've pulled out all of the correlations,
all of the order that would have made sense to a more naive observer is now gone.
And if you don't have the key to interpret it with, it just looks like noise.
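A quick way to see the point that well-compressed data looks like noise is to compare byte-level Shannon entropy before and after compression. The snippet below is a simple illustration using Python's standard zlib module, not anything from the conversation; the sample text is made up.

```python
import zlib
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max is 8)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

original = b"the cat sat on the mat " * 500     # highly redundant, low entropy
compressed = zlib.compress(original, 9)          # redundancy squeezed out

print(f"original:   {len(original)} bytes, entropy {byte_entropy(original):.2f} bits/byte")
print(f"compressed: {len(compressed)} bytes, entropy {byte_entropy(compressed):.2f} bits/byte")
```

The compressed stream's entropy climbs toward the 8-bit-per-byte maximum, which is exactly why, without the decompression key, it is statistically hard to tell apart from noise.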
And so that means that if you sort of think about this, there's an architecture, a bowtie kind of architecture, that's important here, where you take all these experiences and you compress them down into an engram.
That's your memory.
And then future you has to reinflate it again and figure out, okay, so what does this mean for me now?
Right.
It's a simple rule that I inferred.
Let's say you learned an associative learning task, or some kind of, you know,
you've learned something general, you've learned to count, you know,
a number or something like this.
So, you know, rats, I think it takes, if I recall correctly, like 3,000 trials before they understand the number three as distinct from any instance.
So it's not like three peanuts or three, you know, it's like the number three of anything.
Right.
So after, you know, some thousands of trials, they get it.
And so now they have this compressed rule.
They don't remember the sticks or the peanuts or whatever, but they
remember the actual rule, right.
And so that's the engram at the center of your
bow tie.
Well, future you has to re-inflate it,
has to uncompress it and expand it.
Okay.
But now I'm looking at three flowers.
Is that the same or not?
How do I, how do I apply my rule to this?
And I think that what's important is that you
can't do that deductively because
you're missing a lot of that basic information.
You have to be creative.
When you interpret your own engrams, there's a lot of creative input that has to come in to understand
what it was that you were thinking at the time.
And I think that kind of thing, I mean, we know that recall of memories actually changes
the memory, right?
So in neuroscience, they know this,
that there are no pure non-destructive reads of memories;
when you access your memory, you change it.
And I think that's why, because the act of interpretation
is not just a passive reading of what's there.
It's actually a construction process of trying to recreate.
So what does that mean for me?
What does it mean now?
And that's part of that process of the dynamic self,
is trying to figure out,
and obviously all of this is subconscious,
but trying to figure out what your own memories mean.
Yes, okay.
So you said obviously much of this is,
or maybe you said obviously all of it,
or much of it, I'm not sure, is subconscious.
So when we say you currently are constructing an engram for the future you to then unpackage,
I am not doing this, at least not at an effortful conscious level. There's an instinctual unconscious
component to it. And that applies both to the encoding and to the retrieval and expansion of the engram.
Yeah.
So, who, the person who's listening to this, they listen to these podcasts,
they listen to theories of everything, in large part because they're trying to understand themselves,
they're interested in science, they're interested in philosophy.
You're also speaking to them now with this answer to this question,
who are they? They're listening to this and they're saying, I'm doing this.
I don't, I'm not aware of doing this.
This is all news to me, Mike.
You're saying I've been doing this my whole life and this defines me.
This doesn't sound anything like me.
So who are you?
Who are you, Mike?
And who is the person who's listening?
What defines them?
Well, so, a few things.
So the fact that there are an
incredible number of processes under the hood has been known for a really long
time. So not only all the physiological stuff that's going on, I mean you also
don't have the experience of running your liver and your kidneys, which are
very necessary for your brain function. You are also not aware of all the
subconscious motivations
and patterns and traits and everything else.
So let's assume right now that
whatever our conscious experience is,
there is tons of stuff under the hood.
Not just the thing that I just said,
but everything else that neuroscience has been studying
for, you know, for a hundred years or more.
There's lots going on under the hood
and that doesn't define you.
It enables you to do certain things.
It constrains you from doing certain other things that you might want to do.
The hardware does not define you.
I think the most important thing, and look, I think this is a really important
question because I get lots of emails from people who say, I've read your papers.
I understand now that I'm a collective intelligence
of groups of cells.
What do I do now?
I don't know what to do anymore.
Right.
And my answer is do whatever amazing thing you were going to do before you read
that paper, like all of this information about what's under the hood is interesting.
And it's, you know, it has all kinds of implications, but the one thing it does not do is diminish your responsibility for living the best, most meaningful life you can.
It doesn't affect any of that.
And one way I think about this, and there's lots of sci-fi about this, but one thing that you might remember is, you've seen the film Ex Machina?
Yes.
And so, and so there's one scene there where, uh,
the protagonist, he's standing in front of a mirror
and he's completely freaked out because the AI is so
lifelike, he's now wondering, maybe he's a robotic
organism too.
And so, so he's, you know, he's, he's cutting his
hand and he's looking in his eye and he's trying to
figure out what he is.
And so, so let's just dissect that for a minute. Um, the reason he's doing this and, and what happens to most people, which I think
is quite interesting is that if, if they were to open their arm and they find a
bunch of cogs and gears inside, I think most people would be super depressed.
Because what they would, because I think where most people go with this is I've
just learned something about myself.
Meaning I know, I know what cogs and gears can do.
They're a machine.
And I just learned that I'm full of cogs and gears.
Therefore I'm not what I thought I was.
And I think this is a really unfortunate
way to think, because what you're saying is: your experience of your whole life, and all of the joys and the suffering and the personal responsibility and everything else that you've experienced, you are now willing to give all that up because you think you know something about what cogs and gears can do.
I would go in the exact opposite direction and I would say, amazing, I've just discovered that cogs and gears can do this incredible thing.
Like, wow.
And why not?
Because why do you think that proteins and, you know, the
ions and the various things in your body, those
are great for true cognition?
I mean, like, I always knew I was full of, you know, proteins and
lipids and ions and all of those things.
And that was cool.
I was okay with being that kind of machine,
but cogs and gears, no way, right?
And so I think one thing that we get from our education
focused on certain kinds of materialism
is that we get this unwarranted confidence
in what we think different materials are capable of.
And we believe it to such an extent, I mean, I find this amazing as a kind of a,
you know, an educational or sociological thing that we imbibe that story so strongly
that we're willing to then give up everything that we know about ourselves
in order to stick to a story about materials.
I think that's, you know, the one thing that I think Descartes had really, really right,
is that the one thing you actually know is that you are an agentic being with responsibility
and a mind and all this potential, and whatever you learn is on
the background of that.
So if you find out that you are made of cogs and gears, all you should conclude is, well, great, now I know this stuff can do it as well as proteins can.
And so, from that, what I really hope that people
get from this is simply the idea that
pretty much no discovery about the hardware, no discovery about the biology or the physics of it, should pull you away from the fundamental reality of your being: that whatever it is that you are, you know, groups of cells, an emergent mind pulled down from platonic space or whatever, whichever of these things are correct.
The bottom line is you are still the amazing integrated being with potential and a responsibility to do things.
So many people who dislike materialism and like whatever they consider to be not
materialism, so it could
be idealism, it could be spiritualism or whatever they want to call it, or non-dualism or trialism
instead of a dualism. In part what they're saying is, look, I'm not material, because they denigrate
the material and they view it as robotic, lifeless. But you're saying there's another route: if you are
material, whatever it is you turn out to be, you can elevate that.
So you can say, look, there's a dynamism to it.
There's an exuberance.
There's a larger-than-lifeness to what I previously thought was ossifying.
Yeah, for sure.
And Iain McGilchrist makes a point of this too.
He says that we've underestimated matter.
You know, when we talk about materialism, we have been sold, and are selling still,
this notion of matter as lacking intelligence.
And I think that we need to give up this unwarranted confidence in what we think
matter can do.
We are finding novel proto-cognitive capacities
in extremely minimal systems, extremely minimal systems.
And they're surprising to us.
They're shocking when we find these things.
And I think we are really bad at recognizing
what kinds of systems can give rise to minds,
and therefore we end up depressed because we think
that we are a particular kind of system and there's no way that system can be this, you know, majestic agentic being.
It's way too early for that.
I mean, this is one of the major things, I think: we have this idea that we know what different kinds of matter can do. And obviously, I'm not just talking about homogenous blocks of material,
I'm talking about specific configurations,
but really minimal kinds of things can have some very surprising cognitive qualities.
And so, yeah, it's way too early to think that we know what's going on here.
I think we have a record here of 45 minutes of recording or so, and we haven't mentioned
the word bioelectricity once.
So kudos to you, kudos to me.
How the heck did that happen for a Michael Levin interview?
So what does bioelectricity have to do with any of this?
Yeah.
So bioelectricity is not magic. It is not unique in the sense that,
certainly out in the universe, there are probably other ways of doing what it does.
But what it does here on earth in living forms is something very interesting. It functions as a kind
of cognitive glue. So when you have collective intelligence, you need to have
policies and you need to have mechanisms for enabling individual, competent individuals
to merge together into a larger emergent individual that's going to do several things. First of
all, the larger network, the larger level is going to distort the option space for the lower levels. So
the parts are doing things that the parts do, but they're now doing it in a way that
is coherent with a much higher level story of what's going on, higher level goals, because
their action landscape is bent, it's distorted by the larger level; their perception, their, you know, energy landscape is distorted.
So that collective, that new self, is going to have memories,
goals, preferences, a cognitive light cone.
That requires some very specific
features, and there are a bunch of them.
And one of the modalities that lets that happen,
that lets the cognitive light cone scale,
that lets the intelligence scale is bioelectricity.
So by taking cells and enabling them to be part
of an electrical network, there are some really interesting
larger level dynamics, which, you know,
this is what we exploit in artificial neural networks,
of course, right?
Biology has noticed this since the time
of bacterial biofilms, that electricity
is just a really good way for higher level selves to emerge
and higher level computation to emerge.
There are probably other ways of doing it,
but here on earth, bioelectricity
tends to be the way to go.
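As a loose illustration of electrical coupling acting as "cognitive glue", here is a toy simulation (a simple diffusive, gap-junction-style coupling model of my own, with made-up numbers, not a model from the conversation): uncoupled cells keep their individual voltages, while coupled cells relax toward a shared collective state.

```python
import numpy as np

def simulate(v0, coupling, steps=200, dt=0.1):
    """Each cell drifts toward the mean of its two ring neighbors, scaled by the coupling."""
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        neighbor_mean = (np.roll(v, 1) + np.roll(v, -1)) / 2.0
        v += dt * coupling * (neighbor_mean - v)
    return v

initial = [-70.0, -40.0, -55.0, -20.0, -65.0]   # hypothetical resting potentials, mV
print(simulate(initial, coupling=0.0))   # uncoupled: each cell keeps its own state
print(simulate(initial, coupling=1.0))   # coupled: the group converges on a shared value
```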
Something I always wonder in conversations
about our higher selves, lower selves, higher goals.
How do we even say higher or lower when what we're talking about is such a vast landscape of goals or cognitive light cones in a higher dimensional space
where the real number line is the only continuum that has an ordering to it.
As soon as you have the complex numbers or R2 or R3, etc.,
you can't pick two points and say one is higher than another,
unless you implement other structure.
So what is it that allows us to say higher or lower?
Bad vocabulary. You're 100% right.
The only thing that matters here, because it doesn't necessarily mean the next level up is smarter than the lower level, usually,
but that doesn't guarantee that at all.
Not necessarily bigger or smaller in physical scale.
So the only thing I mean by,
and we really don't have a great vocabulary yet
for all this stuff, but the only thing I mean by higher
is something akin to set membership.
Just the fact that a tissue is made up of cells
and cells are made up of molecular networks.
That's it. That's all I'm talking about. I'm not saying that it's bigger or more intelligent or more valuable.
All I mean is that in this in this heterarchy certain things are made up of other things.
That's it. That's all I mean.
Earlier when defining intelligence, I believe you said William James's definition was something about ability but also means.
So ability to generate multiple paths to a single goal.
I don't know if it was also the ability to have multiple goals, but we can explore that.
But let's pick out a goal, then you can generate multiple paths to that goal, many ways of executing.
But then you also, I believe you said the means too as well.
Is that correct?
Yeah, the means in James, at least the way I read him, when
he says means, he means the path, a means to an end, right?
It's a path that takes you to that end.
So, you know, this is the kind of stuff we see in biology, just to, you know,
just to give you an example, one thing that people often think when they
hear me talk about the intelligence of development and so on is, I mean, the complexity,
just the fact that you start from an egg and you end up with, I don't know,
a salamander or something, that there's an increase in complexity.
And then, you know, rightly people think, well, that's just,
you know, there's lots of examples where simple rules
give rise to complex outcomes.
That's just emergent complexity.
That's not intelligence.
And they're right, that is not what I mean.
What I mean by intelligence is the problem solving
of the following kind.
So let's say you have an egg belonging to a salamander.
One of the tricks you can do is prevent it from dividing
while the genetic material is copying.
And so you end up with polyploid newts.
So instead of 2N, you can have 4N, 5N, 6N, 8N,
that kind of thing.
Well, what happens when you do that,
what happens is the cells get bigger
in order to accommodate the extra genetic material,
but the actual salamander stays the same size.
So if you take a cross section,
so let's say we take a cross section of a little tubule,
a kidney tubule that runs to the kidneys.
Normally there are like eight to ten little cells that go in a circle to make
that tubule, and then there's a lumen in the middle. So if you make the cells
gigantic, the first thing you notice is that, well, first of all, having multiple
copies of the genetic material, you still get a normal newt. So that's pretty
amazing already. The second amazing thing is that the cells scale to
the amount of genetic material, so the cells get larger. That's amazing.
Then you find that, well, actually,
since the cells are really big,
only a few of them are now working together
to make the exact same size tubule.
So they scale the number of cells
to make up for the aberrant size of the cell, right?
Does that make sense?
And then the most amazing thing of all happens
when you make truly gigantic cells: there's
not even room for more than one, so one cell will bend around itself, leaving a lumen in
the middle.
The reason that's amazing is that that requires a different molecular mechanism:
cytoskeletal bending, whereas before you had cell-to-cell communication.
And so that kind of thing, right?
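A back-of-the-envelope sketch of the scaling being described, with purely hypothetical numbers (the 80-micron circumference and the cell widths are invented for illustration): if the target cross-section of the tubule stays fixed, the number of cells needed to enclose it falls as cell size grows, down to a single cell wrapping around itself.

```python
def cells_needed(tubule_circumference_um: float, cell_width_um: float) -> int:
    """How many cells of a given width it takes to enclose a fixed target circumference."""
    return max(1, round(tubule_circumference_um / cell_width_um))

target = 80.0   # hypothetical target circumference of the tubule cross-section, in microns
for cell_width in (10.0, 20.0, 40.0, 120.0):   # normal -> increasingly polyploid (larger) cells
    print(f"cell width {cell_width:>5} um -> {cells_needed(target, cell_width)} cell(s) around the lumen")
```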
So just think about this: you're a newt coming into the world, you have no idea, you can't count on how much genetic material you're going to have, how many cells you're going
to have, what size cells you're going to have. What you do have is a bunch of cool tools at your
disposal. You have cytoskeletal dynamics, you have gene regulatory networks, you have bioelectricity,
you have all this stuff. And what you're able to do under totally novel circumstances is pick from
your bag of tools to solve the problem. In morphospace,
I take this journey from an egg to a proper newt.
Not only can I not count on the environment being the same, I can't even
count on my own parts being the same, right?
That kind of, you know, another way to call
this attitude is beginner's mind.
It's like, you don't overtrain on your priors, on your evolutionary
priors; you have a bag of tools and you're not just a fixed solution.
This is why I think evolution doesn't just produce solutions
to specific environmental problems.
It produces problem-solving agents that are able to use the tools they have.
I mean, what's a better example of intelligence than something
that can use the tools it has in novel ways to solve a problem it's never seen before, right?
That is a version of intelligence. And that's what is all over the place in biology, the
ability to navigate these pathways, not only to avoid various barriers and so on, but to
use the tools available to them in creative ways to solve the
problem. And we see some of this in extremely minimal systems. It does not require a brain,
doesn't even require cells. Very minimal systems have surprising problem solving capacities. And
this is why we should be extremely humble when we try to make claims about what something is or
isn't or what competencies it has. We are not yet good at recognizing those things.
We do not have a mature science yet of knowing what the properties of any of this stuff is.
So the tricky part with this definition of intelligence, help me out with this, is that
what we want to say is that it's conceivable that the kid from Saskatoon, the poor kid,
is more intelligent than the rich kid from the Bay Area.
So that's conceivable.
But the rich kid has far more means, far more ability to achieve their goals.
So if there was implementability within the path, so if we say look the ability to generate paths that are realizable is
in part what defines the IQ or the intelligence. Well the poor kid from
Saskatoon has less raw material to play with to generate a path.
So how do we avoid saying, unless you want to say it, which I don't imagine you want to say, how do we avoid saying that the poor kid from Saskatoon is just, by definition, less intelligent, by happenstance, than the person from the Bay Area?
The thing to keep in mind here is that estimates of intelligence, and I think all cognitive terms, so all the words people use, sentience, cognition, goal-directedness, all of them, I think we have to remember that those are not
objective properties of a given system.
IQ is not the property of whatever system
you're trying to gauge the IQ of.
It is your guess, your best guess
about what kind of problem solving
you can expect out of that system.
So it's as much about you as it is about the system.
And we've shown this in our experiments a lot of times
that when people talk about certain kinds
of developmental constraints, or they
talk about the competency of tissues
to do one thing or the other, it's
much more about our own knowledge
of the right stimuli and the right ways
to communicate with that system and not so much
about the system itself.
When you make an estimate of the intelligence of something,
you are taking an IQ test yourself.
All you're saying is: this is what I know, and this is what I can see in terms of what kind of problem solving. And this applies to animals, this applies to AIs, this applies to humans in various economic environments. The simple version of this is: you show somebody a human brain and they say, that's a pretty awesome paperweight. I can see that it can do, you know, at least the action of holding down my papers against gravity, and that's all I think it can do. And somebody else says, you've missed the whole thing, right? This thing does all this other stuff.
So I think that that type of mistake, where, A, we think that it's an objective property of the system, and B, we think that we're good at determining what that is, is what bites us a lot when we're dealing with especially unconventional systems.
So to use your example, if someone looks at a kid in that environment and says, well, I don't think this kid has much intelligence, the problem isn't on the side of the kid. The problem is that somebody else might come along and say, oh, you don't get it: in a different environment, this kid would exhibit all these amazing behaviors. The good news about all of this, and certainly it's not in my wheelhouse to comment on any of the economic stuff or the sociology of it, but for the biology and for the computer science and so on, the good news is that all of these things are empirically testable.
So when, when we come across a certain system, each of us is
going to guess what is the problem space
it's operating in, what are its goals,
and what capabilities do we think it has
to reach those goals, and then we do the experiment,
and then we see who's right.
That's the thing, that this is not a philosophical debate.
This is absolutely experimental.
So if you say, I don't think these cells have any intelligence, I think they're just feedforward emergent dynamics, and I say, oh no, I think they're actually minimizing or maximizing some particular thing and they're clever about doing it, then we do the experiment. We put a barrier between them and their goals, and we actually see: do they or do they not have what I claimed to be their competency?
And then we find out how much each of our views lets us discover the next best thing.
So these are all empirically testable kinds of ideas.
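As an illustration of how that kind of claim becomes an experiment rather than an armchair debate, here is a minimal Python sketch (the grid, the barrier, and both agents are invented for illustration; this is not a model of any real cell system). A purely feedforward walker and a goal-seeking walker both start behind a barrier, and the only thing we measure is whether each one reaches the target.

```python
# Toy "barrier test" for goal-directedness. Everything here is invented for
# illustration; the point is only that the claim becomes a measurable outcome.
from collections import deque

GOAL = (10, 0)
BARRIER = {(5, y) for y in range(-3, 4)}  # a wall with open space above and below

def neighbors(p):
    x, y = p
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        q = (x + dx, y + dy)
        if q not in BARRIER and -2 <= q[0] <= 12 and -6 <= q[1] <= 6:
            yield q

def feedforward_agent(start, max_steps=50):
    """Fixed rule: always step +x; gets stuck when the wall is in the way."""
    p = start
    for _ in range(max_steps):
        nxt = (p[0] + 1, p[1])
        if nxt in BARRIER:
            return p
        p = nxt
        if p == GOAL:
            return p
    return p

def goal_seeking_agent(start):
    """Searches for any route to the goal (breadth-first search around the wall)."""
    frontier, seen = deque([start]), {start}
    while frontier:
        p = frontier.popleft()
        if p == GOAL:
            return p
        for q in neighbors(p):
            if q not in seen:
                seen.add(q)
                frontier.append(q)
    return start

start = (0, 0)
print("feedforward reaches goal:", feedforward_agent(start) == GOAL)   # False
print("goal-seeking reaches goal:", goal_seeking_agent(start) == GOAL)  # True
```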
So before we get to consciousness and the Levin Lab, I want to talk about cognition.
So in 2021 or so you had a paper called Reframing Cognition.
Okay.
Something akin to that.
Yeah, I think that might have been a review with Pamela Lyon.
Yes, yeah. And then on page 10, section 5, something like that, you defined, or you started talking about, basal cognition and uncaveated cognition.
So what do those terms mean?
To be completely honest, I don't remember this part.
I mean, yes, I certainly don't remember the pages or the-
Basal cognition?
Yeah, I mean, so, okay, so the idea for basal cognition is basically that whatever cognitive
capacities we have, they have to have an origin and we have to ask where did they come from
because this idea that, you know, we are completely unique and they suddenly sort of snap
into place, it doesn't work evolutionarily
and it doesn't work developmentally.
Both of those are very slow processes.
So the stories we have to tell are stories of scaling.
To really understand these processes,
we have to understand how simple information-processing capacities scale up to become larger cognitive light cones, more intelligent systems that project into new problem spaces, and so on.
So basal cognition is the question of, okay,
so where did our cognitive capacities come from?
So that means looking at the functional intelligence of cells,
tissues, slime molds, microbes, bacteria, um, and,
and minimal matter, you know, active materials, that kind of stuff.
That's basal cognition.
What do the really primitive versions of cognition look like?
And it's a really important skill to practice that kind of imagination, because often what trips people up is they imagine, for example, panpsychist views, right? So somebody says, oh, you're trying to tell me that this rock is sitting there having hopes and dreams.
Well, no, that's not the claim. The claim isn't that these full-blown, large-scale cognitive properties that you have are exactly there everywhere else. The claim is that it's a spectrum or a continuum and that there are primitive, tiny versions of them that should also be recognized, because we need to understand how they scale.
So that's basal cognition.
So if it's a spectrum, and I hear this plenty, look, someone will say, I'm not saying everything is conscious, it's a spectrum, it's not on-off. But then, to me, can't you just define on-off to be: if you have a non-zero amount on the spectrum, then you're on? Like, for instance, you say a particle has electric charge. Is it electrically charged? Well, it's on a spectrum. If it has a non-zero amount, you call it electrically charged; if it's zero, then you say it's neutral. So can't you just say, then, that yes, the rock does have hopes and dreams, even if it's at 0.00002% of whatever you have?
Oh, I personally am on board with that. I think potential energy and least action principles are the tiniest hopes and dreams that there are. So I agree with that completely. I think that is the most basal version. And in our universe, and this goes way beyond my pay grade, but for example, I've talked to Chris Fields, who is really an expert in this stuff, and I asked him, is it possible to have a universe without least action laws, right? And he said, the only way you can have that is if nothing ever happens. So if that's the case, that tells me that in our world, there is no zero on the cognitive scale and everything is on it.
But again, we have to ask ourselves.
So I agree with you.
I think if you're on the spectrum,
then you're on and that's it.
And I think in this universe, everything is on.
But we have to ask ourselves,
what do we want this terminology to do for us?
So that's why some people critique these kinds of perspectives by saying, well, if everything is on it, then the word means nothing. Then why do we even have the word? Because if everything is cognitive, and I didn't say consciousness yet, but let's say everything is cognitive, then why do we need the word at all?
And I really think that we need to focus on what we expect the terminology to do
for us. So let's imagine, let's just dissect this for a minute, the old paradox of the heap, right?
So you got a pile of sand and you know that if you take off one little piece of sand,
you still got a pile, but eventually you have nothing. So how do you define the pile? So
I think for all of these, so my answer to this, and I think the solution to all of these kinds
of terminological issues is that it's not about
the object itself, it's about what tools you're going
to use to interact with it.
So if you call me and you say, I have a pile of sand,
all I wanna know about the definition of pile is,
am I bringing tweezers, a spoon, a shovel,
a bulldozer, dynamite? What are the tools I have to bring to do what we need to get done?
So that is the only value in this terminology.
So by saying that everything is cognitive,
does that by itself help us with anything?
No.
I think what does help is if you tell me what kind of cognition and how much,
and that's an empirical question, and then we can argue about it.
And the answer to that question is, what are the tools that help us the most?
So you show me a bunch of cells.
And you say, I think the right way to do this
is physics, chemistry, and feedforward emergence
and complexity.
That's how I think we're going to interact with it.
And I look at it and say, I think
the way to interact with this is through some interesting concepts
from cognitive neuroscience, including active inference,
learning, training, and so on.
Then we get to find out who's right.
If I can show that using my concepts,
I got to new discoveries that you didn't get to,
there you go.
On the other hand, if I waste all my time
writing poems to a rock and nothing ever comes of it,
well, then you're right.
And so I think that the point of all this terminology,
yes, we can say it's all on a spectrum,
but now comes the fun and interesting work of saying,
okay, so what does the spectrum look like
and where on the spectrum do the various things
that we see land?
Okay, let's get to consciousness.
I wanna say that I don't agree with Chris Fields
about the principle of least action because
Firstly, people say the universe is lazy, but you can also put a minus sign and call it the principle of maximum effort. Okay, but then also there are many quantum field theories that aren't based on Lagrangians to minimize. So there's algebraic, constructive,
axiomatic, and categorical. And then there's this whole, there's a new video that actually
got released a couple of days ago by Gabriele Carcassi, and I'll put the link on screen,
which says there's a distinction between Newtonian, Lagrangian, and Hamiltonian mechanics. So
Hamiltonian is more about flows. You just watch the flow of the system. Lagrangian is
the one where you minimize.
And then Newtonian, there's actually some Newtonian systems, F equals MA, that you can't
map to a Hamiltonian system.
So I have a bone to pick with Chris Fields.
You should have him on.
This is getting way beyond anything I could argue with you about, but you should have
him on and you guys could talk.
I would watch that for sure.
We had a three-way discussion.
Again, a plug here with Michael Levin, Carl Friston,
and Chris Fields.
That was fun.
Yeah, yeah.
OK, so many people want to know what your hunch is.
See, there are various interpretations
of quantum mechanics.
We're not going to go there.
But there are various theories of consciousness
in the same way.
There's a litany.
Which one do you feel like is on the right track?
Well, let's see, I can say a few things.
What I definitely don't have yet, I'm working on it,
but I don't have anything that I would talk about now,
a new theory of consciousness.
So I do not have anything brilliant to add to this
that somebody else hasn't already said.
So I'm just going to kind of tell you what, what I have to say now.
Sure.
I think that one thing that's really hard about consciousness, and what makes
it the hard problem is that unlike everything else that we work with, we don't have any
idea what a correct theory would output.
So what format would the predictions...
Sorry, we don't have an idea of what the correct theory would look like?
No, no, no, no.
We don't have any idea of what the output of a correct theory would look like.
What would it give you?
Right?
So for everything else, a good theory gives you numbers, predictions of what specific things are going to happen.
What does a good theory of consciousness give you?
So what we would like is something that we say,
okay, here's a cat or here's a cat
with three extra brain hemispheres grafted on
that also has wings.
What is it like to be these creatures, right?
What is the output of a correct theory of consciousness?
Because if it outputs patterns of behavior
or physiological states,
then what you've explained is physiology and behavior.
There are going to be people that say,
well, you haven't explained the consciousness at all.
In fact, almost all theories of consciousness look kind of eliminativist, even the ones that aren't trying to be, even the ones that say, no, no, we're not trying to explain away consciousness, it's real, and I'm going to explain it.
Then you look at the explanation and you always feel like,
yeah, but you haven't explained the actual, you know,
the actual consciousness.
You've explained some kind of behavioral propensities,
physiological states or whatever.
So that's the problem that we have.
Consciousness is one of those things
that cannot exclusively be studied in the third person.
Everything else you can study as an external observer and you don't change much
as the observer by studying them. Consciousness, you can really only study in a full way by being part of the experiment, by experiencing it from the first-person perspective.
So the weak version of this is you might say, well, a good theory of consciousness is a kind of art. What it outputs is art, poetry, whatever, such that when we experience it, it makes us experience that conscious state. And we say, oh, so that's what it's like. I see. Right. So that's one, but that's kind of a weak form.
You can do a stronger form and say, well, the real way to do it is to have a rich brain interface. So if I want to know what some other system's consciousness is like, we need to fuse together. Now, the caveat to that is, if you do that, say a rich kind of brain interface, where we really connect our brains together or something, you don't get to find out what it's like to be that system. Both of you find out what it's like to be a new system composed of the two of you. So it's still not the same thing. So from that perspective, it's really hard.
I mean, I suppose people who do meditation or take psychedelics are doing experiments in actual consciousness, but third-person experiments in consciousness are really hard. You can do things like turn it off,
so there's general anesthesia and you can say,
oh look, the consciousness is gone.
And even then some people will say, yeah,
but I experienced floating above my body
while you did the surgery and I saw you drop the scalpel
and do this and that.
So even with that amazing reagent of being able to supposedly shut off consciousness, you've still got some issues. So the study of consciousness is hard for those kinds of reasons.
And I think that about the only useful thing I could say here is that for the same
reasons that we associate consciousness with brains, for exactly those same reasons,
we should take very seriously the possibility of other forms of consciousness in the rest
of our bodies and also lots of other things.
And so Nick Rouleau and I are working on a paper on this
where you can sort of look at all the different popular theories of consciousness on the table
and you can just ask which of them are specific
for brains and why.
Like what aspects of each of those theories
really tells you that it's got to be brains.
We haven't finished the paper yet, but my guess right now is that there's not a single one
that can distinguish brains from other types
of structures in your body.
And so I think we should take very seriously the possibility that other subsystems of the body have some sort of consciousness.
It's not verbal consciousness.
Sorry, I'm not understanding. Are you saying, what if we list all the theories of consciousness and then, for each, ask: does it distinguish the brain as being responsible for consciousness?
We ask, what is it about that theory that says it's in the brain rather than somewhere else? Let's say, literally, your liver.
So IIT would say no, because IIT is a panpsychist theory that would say,
look, if your liver is doing some processing,
then it has some non-zero amount of consciousness.
Right, and I agree with that. Now, as far as I understand, IIT also has an exclusion postulate that says there should only be one central consciousness in the system. I think that's true; at least it used to be true. I don't know, Giulio may disagree with that. But I think we are actually a collection of
interacting perspectives and interacting consciousnesses for that reason. And then sometimes people say,
well, I don't feel my liver being conscious.
Right, you don't, but you don't feel me being conscious either.
Of course you don't.
And the fact that your left hemisphere has the language ability
for us to sit here and talk about it and the liver doesn't,
doesn't actually mean that it's not conscious.
It just means that we don't have direct access to it and
we don't have direct access to each other. So that doesn't, that doesn't bother me.
So that's my suspicion about consciousness: for the same reason that people think it's in the brain, we should take very seriously that it's in other places in the body, and then, more generally, in other types of constructs that are not human bodies at all, or not even animal bodies.
You've spoken to Bernardo Kastrup several times now.
What is it you agree with him the most that you think most people would disagree
about, because you agree with him that it's nice to go for a walk.
Okay, sure.
But most people agree it's nice to go for a walk.
So what is it that you agree with him about that you think is a contentious issue to most people?
So this is a controversial statement.
And then what is it you disagree with him about
regarding consciousness?
Yeah, boy, I don't, you know,
it's hard for me to know what most people agree
or disagree with him about.
I really don't know.
We agree on a lot of things.
We agree on, I think, the primacy of consciousness. I think that his idealist position has a lot to recommend it, though one thing I think we disagree on is the issue of compositionality. So, if I recall correctly from a talk that we had together a little while ago, he felt that it is important, in order to be a true self, to have a conscious experience, an inner perspective. He focuses on the view of embryonic development as a single system that subdivides and develops, but it starts out as a single system. And I was arguing that that really is just
a contingent feature of biology.
I mean, we certainly can take two early embryos
and mush them together.
You get a perfectly normal embryo out of it.
And in general, there are lots of biological systems
like our xenobots, like Anthrobots, that you can create by composition, by pulling other things together. So I don't put as much emphasis on a system being demarcated from the outside world because it started out that way and sort of remained disconnected. I think that's kind of a superficial aspect of the biology, and you can do things a different way. I don't think that's what's responsible for it. But I think
he thinks it's important that individual selves are not compositions. They're not made as
compositions. They're somehow, you know, individualized from the word go, which again,
even the egg, right? So we, I mean, we humans like eggs because we can see it as a distinct little thing
with a membrane and say, ah, this is an individual.
But even an egg is composed by the maternal organism
from molecular components.
Like I see no point at which any of this
is truly distinct from anything else.
So I put less emphasis on it,
but I think he thinks it's important.
It seems like the point that you're saying is,
look, we can think about this as several
rooms, this building comprises several rooms, but even in, and Bernardo may say that's what
makes a person is the distinct rooms, but you're saying, yeah, but even in a room, there
are different people, there are different chairs, there are different tables.
Is that what you're saying?
What I'm saying is, and I may not be doing justice to his view, and I think you should ask him more about this, but I think we were discussing what makes for a unified inner perspective, right? So we don't feel like billions of individual brain cells. I mean, I have no idea, maybe we kind of do, because that is what it feels like to be billions of individual neurons; that's really what it feels like. But we do feel,
at least most of us, most of the time, feel like some kind of unified centralized inner perspective.
And so we were talking about how that comes about. And I think he felt that having that
in the physical universe is importantly related to arising from a single origin. So he sees the egg
as a single point of origin and arising from that, that's how you are a separate individual from others.
And I see it as much more fluid and I see the boundary between self and world
as something I can change all the time.
I think it changes in embryogenesis and that's the scale of,
that's the story of the scaling of the cognitive light cone that we talked about.
I think it can shrink during cancer.
I think it can change during metamorphosis, during maturation. I think it's much more fluid than that.
Now, as we're on speculative ground, if what makes an agent is the distinction between
the self and the world, and some people think of God as the entirety of everything, thus
the entire world, and there's no distinction, then can one say that God is an agent? I don't know.
I mean, certainly I think most, well, religions that have a God anyway, as far as my understanding
is they would think that yes, that God has extreme agency, in fact, higher than ours.
I don't know what that really buys us, you know, in any, in any helpful way.
Remove the word God.
Does the world have agency?
Okay.
So, so that's an interesting question.
So, so let's start with, first of all, how do we know when anything has agency?
And that's an, that's an experimental research program.
So you basically, you hypothesize what problem space it's working in, what you
think its goals are, and then you do experiments to figure out what competency it has.
And then you find out, did I guess well, poorly, do I need to make a better guess and so on.
So, for example, people have said to me, well, your kind-of-panpsychist, almost, view says that the weather should be cognitive. And I say, I don't say that it is
or isn't because we haven't done the experiments.
Do I know that weather systems, let's say hurricanes and so on, don't show habituation, sensitization, that they couldn't be trained if you had the right scale of machinery? I have no idea. But what I do know is that this is not a philosophical thing that we can decide arguing in an armchair.
Yes, it is; no, it isn't. No, you have to do experiments and then you find out. So now the question is, okay, what about the galaxy, what about the universe, right? What about, you know, Gaia, ecosystems? Again, I think these are all empirical questions. Now, some of them are intractable; we don't have the capability to do experiments on a planetary scale. But, for example, one thing that I did try to do once was design a gravitational synapse: a solar-system-size arrangement where masses would fly in and, based on the history of masses flying in, it would respond to new masses in a different way. So you can have historicity, and you can have habituation and sensitization and things like that.
So could you have something like that, very ponderously, slowly, on an enormous scale, computing something and having sort of simple thoughts? I bet you could. Is the real universe doing that? I have no idea. We have to do experiments.
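A minimal sketch of the kind of history-dependence being described (purely illustrative: this is a generic habituation/sensitization toy in Python, not a design for an actual gravitational synapse). The same stimulus produces different responses depending on what came before, which is the sort of signature you would look for experimentally.

```python
# Generic toy of history-dependent responding (habituation / sensitization).
# Not a proposal for how any real astrophysical system works.

class HistoryDependentResponder:
    def __init__(self):
        self.gain = 1.0  # current responsiveness; this value is the "memory"

    def respond(self, stimulus: float) -> float:
        response = self.gain * stimulus
        if stimulus < 5.0:
            self.gain *= 0.8                        # habituation to repeated mild input
        else:
            self.gain = min(3.0, self.gain * 1.5)   # sensitization after a strong event
        return response

r = HistoryDependentResponder()
for s in [1, 1, 1, 1, 10, 1, 1]:
    print(f"stimulus {s:>2} -> response {r.respond(s):.2f} (gain now {r.gain:.2f})")
# The same stimulus (1) evokes different responses over time: the historicity
# that would count as evidence of simple learning.
```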
So here you bump up against another question, which is: how do you know if and when you are part of a larger cognitive system? How do we know if we are, in fact, part of a bigger mind? I don't know, and my suspicion is that there is some sort of Gödel-like theorem that will tell you that you can never know for sure. You can never be certain, but I bet that you could gather evidence for or against it.
And I often think about a kind of mental image. Imagine two neurons in the brain: one is kind of a strict materialist and one's a little more mystical. And the one neuron says, we just run on chemistry,
and the outside world is a cold mechanical universe and it doesn't care what we do.
There's no mind outside of us.
And the other one says, can't prove it, but I kind of feel like there's an order to
things and I kind of feel like our environment is not stupid.
I kind of feel like our environment wants things from us.
And I kind of feel these waves back-propagating through us that are almost like rewards and punishments. I feel like the universe is trying to tell us something. And the first one says, ah, you're just seeing faces in clouds. It doesn't exist.
And of course, in my example, the second one is correct, because they are in fact part of a larger system; they're part of a brain that is learning things. And it's very hard for any one node in that system, or even a subnetwork, to recognize that. But I wonder if we could, having a degree of intelligence ourselves, gain
evidence that we were part of a larger system that was actually processing information.
And I don't know exactly what that would look like, but my hunch is that it would look like what we call synchronicity.
I think that what it would look like are coincidences, events that don't have a causal connection at our lower level, mechanistically; by physics, there's no reason why they should go together. But at a larger scale, in terms of meaning, the greater meanings of things, they do have some kind of interpretation.
And I think that's what it would look like to be part of a larger system.
I think it would look and feel like synchronicity.
So does it exist?
I, you know, I don't know, but that's what I think it would feel like.
Take me through the history of the Levin Lab.
When did it start?
Whoa.
What were your first breakthroughs?
Yeah.
Okay.
Let's see.
Well, it started, I mean,
it started in my head when I was pretty young.
Like I was, it was a dream that I had
to do this kind of stuff.
I mean, I consider myself to be the luckiest person in the world. I get to do the funnest stuff with the best people, so I think I'm super fortunate. But I had this idea when I was very young; I had no idea what it would be like. I was pretty sure that it was actually impossible. I never really thought it would be practically feasible, but I figured I would push it as far as I could before I would have to go back to coding.
For people who are unaware of your backgrounds also in computer science.
Right.
Right.
Yeah.
Yeah.
Yeah.
I learned to program pretty young, and at that time that was a pretty good way to make money.
And I just figured I would do the biology as long as I could.
And then eventually I would get kicked out.
And then I would just go back to coding.
So yeah, my lab actually began in September of 2000.
That's when I got a faculty position at the Forsyth Institute, at the Harvard Medical School.
And yeah, we opened our doors in 2000.
It was just me at first and then me and one other technician.
There's like 42 of us now, but at the time
it was just me and a tech named Adam.
And starting then was the first time that I could really start to be practical about some of the ideas I had about bioelectricity and cognition and all these things. Prior to that, I was building up a tool chest: skills, techniques, information and so on. But being a grad student and then a postdoc, I wasn't able to talk about any of these things; then, when I was on my own, that was the time to get going.
So, just a couple of interesting milestones. Already by the time we opened the lab, I was involved in a collaboration with Ken Robinson and his postdoc Thorleif Thorlin, and together, with my postdoc mentor Mark Mercola, we really showed the first molecular tools for bioelectricity.
So we had a paper on left-right asymmetry, and we showed the first bioelectric tracking of non-neural bioelectric states in the chicken embryo.
We showed that it was important for setting up which side is left, which side is right
and then manipulating that information using injected ion channel constructs.
So that was the first time any of that, you know, reading and writing the mind of the body, which is certainly not how I would have said it back then, but it's how I see it now, was done in a molecular way. That Cell paper finally came out in, I think, 2002.
But that was a really early project.
The other really early project: as a postdoc, I had started gathering tools for this whole effort, and a lot of those tools were DNA plasmids encoding different ion channels. And so what I would do is send emails or letters to people working in electrophysiology, gut physiology, the inner ear, and they would have some potassium channel that they had cloned. And I would say, could I get one of these plasmids? And I was telling them what I was going to do: I'm going to express it in embryos in various locations and use it, in a very targeted way, to change the bioelectrical properties of these things. Most people were very nice and they sent me these constructs. One person sent a letter to my postdoc mentor to say that I had clearly had a mental breakdown and that he should be careful, because this is so insane that I'm obviously off my rocker, right? And so I remember that.
Now, wait, is this your recapitulation of what they said or they actually said mental breakdown?
Well, okay, so I didn't see the letter, but my boss came to me, and he was laughing, and he said, look at this, this guy says you're nuts. You asked him for a plasmid and he told me to watch out, he says you're having a psychiatric break. So what I'm relaying is what he said to me.
Okay.
But nevertheless, most people sent constructs.
And when my lab opened, I started doing that: I started misexpressing these things in embryos, just to see the space of possible changes. What does bioelectricity really do? Nobody knew at the time; it was thought to be really crazy. It was thought that membrane voltage was a housekeeping parameter, an epiphenomenon of other things that cells were doing, and that if you mess with it, all you're going to get is uninterpretable death. That's what everybody thought: this was a stupid idea. And so we started doing this, and I had this graduate student, her name was Ivy Chen; she was in the dental program and she had amazing hands. And so I taught her to microinject RNA into cells in the embryos, because she had really good hands.
How did you know she had good hands before she tried that?
Well, you look like you have good hands.
Well, she was a dental student.
And so I talked to her, she wanted to do research, and I said, tell me what you do. And she said, oh, I do these, you know, I sew up people's gums and whatever. And I said, okay, you probably could do this.
Okay. So she wasn't playing Call of Duty.
No, no, no. Well, she may have been also, but I don't know.
All right.
I don't know that. What I know is that she was doing surgeries in people's mouths. And I thought that she might be able to work in tight, confined places, with the glasses and everything, so I thought she would be able to do this through a microscope. And she was.
And so we did this together and we injected these constructs.
And I still remember to this day, she calls me in one day and she says,
so I looked at the embryos and they're covered with some kind of black dots.
And I said, black dots, let me see.
Let's go look.
So I come out and we look through the microscope.
The black dots were eyes.
What she had discovered, and it took years to publish the paper, was that the potassium channel she injected sets a particular bioelectrical state that tells cells to build an eye.
It's remarkable because right there and then,
you knew that, A, bioelectricity was instructive, not an epiphenomenon, because it controls which organs you get, and B, that the whole system is modular and hierarchical, because we did not
tell the cells how to build an eye.
So we didn't say where the stem cells go or what cells go next to what other
cells or what genes should be expressed.
We did none of that.
We triggered a high level subroutine that says, build an eye here.
So right away, that one experiment told us all these amazing things.
Then eventually, and this was the work of a grad student, Sherry Au in my group, who took on the project and did a whole PhD on this, showing the other amazing thing about it: if you only target a few cells, what they do is get their neighbors to help out, because they can tell there aren't enough of them to build an eye, kind of like ants recruiting their buddies to take on a bigger task. So that tells you that the material you're working with has these amazing properties, that you don't have to micromanage it, right? It's a different kind of engineering. It's, as I put it in a recent paper, engineering with agential materials, because it's a material with competencies, with an agenda.
You don't have to control it the way you do wood and metal
and things like that.
So, okay, anyway, that kind of thing. Then we had a bunch more work on left-right asymmetry,
and showing how the cells in the body decide
which side they're on based on these electrical
cues.
Then we discovered that the way cells interpret these electrical cues had to do with the movement of serotonin. So long before the nervous system or the brain shows up, the body is using serotonin to interpret electrical signals. This was really underscoring the idea that a lot of the tools, concepts, reagents, pathways, and mechanisms from neuroscience really have their origin much earlier.
And so this was a completely new role for serotonin, right? Serotonin is a neurotransmitter that does many interesting things, but long before your brain appears, it also controls which direction your heart and your various other organs go to in your body. So in the short term we were trying to understand how electrical activity controls cell behavior, but bigger picture: wow, these neural-like processes are going on in cells that are absolutely not neurons, long before that. So that was kind of cool. Then I hired a postdoc named Dany Adams,
who later became a faculty member and a colleague.
And one of the things that she did was to pioneer
the early use of voltage sensitive dyes
to read these electrical potentials.
And so she discovered in the work that she did in my group,
she discovered this thing we call the electric face.
And approximately what year is this now?
Oh, the electric face? This is probably 2008, something like that, 2007, 2008. And what she had discovered was that if you look at the nascent ectoderm that will later regionalize to become the face, eyes and mouth and all of that, then early on, before all the genes turn on
that determine where all those things will go,
the bioelectric pattern within that ectoderm
looks like the face.
It shows you where all this stuff is going to go.
And by the way, there was that eye spot, which is why the eye thing worked. And then ultimately we were able to show that all kinds of birth defects
that mess up the formation of face do it
by inhibiting the normal bioelectric pattern
and that you could fix it.
You could exert repair effects that way.
So that was interesting.
Then we started looking at regeneration.
And again, the early work was done also by Dany, and then later on by Kelly Tseng, who's now a faculty member at the University of Nevada, Las Vegas, where what we
did was we showed that tadpole tail regeneration was also
bioelectrically driven.
And that was our first gain-of-function effect in regeneration, where we were able to show that we could actually induce new regeneration. So the tail is a very complex organ. It has a spinal cord, muscle, bone (well, not bone), vasculature, peripheral innervation, skin. And so we took tadpoles that normally do not regenerate.
There's a stage at which they can't regenerate their tails.
And we developed a bioelectric cocktail
that induces it to grow.
My postdoc at the time, Kelly Tseng, said, I soaked them, and I said, how long did you soak them for?
And she said, an hour.
And I thought, but that's gotta be too short.
There's no way an hour soak is gonna do anything.
And sure enough, that hour soak led to eight days
of regeneration where we don't touch it at all.
And the most recent version of that work
is in the frog leg, where we show that 24 hour stimulation
with our cocktail induces a year and a half of leg growth
during which time we don't touch it at all.
So the amazing thing there is
again, this is not micromanagement, this is not 3D printing, this is not us telling every cell where to go during this incredible year-and-a-half-long process. At the very earliest moment, you communicate to the cells: go down the leg-building path, not the scarring path. And that's it, and then you take your hands off. It's calling a subroutine, it's modularity, it's relying on the competency of the material, where you're not going to micromanage it.
So that was the first time that kind of became obvious
that that was possible is when she showed
that just an hour stimulation
of the correct bioelectrical state
got the whole tail to commit to regenerate itself.
So that was the beginning of our regeneration program,
after which we went into limbs.
And now, of course, we're trying to push into mammalian limbs.
Yeah, along the way, Celia Herrera-Rincon and Nirosha Murugan were other postdocs who showed leg regeneration in frog and so on.
Around that time, there was another thing: I really wanted to work on cancer, and I really wanted to work on this idea that there's a bioelectric component to it. And the way you can think about it is simply: not why is there cancer, but why is there anything but cancer? So why do cells ever cooperate instead of being amoebas?
Why do they ever cooperate?
And so we know that the bioelectric signaling is the kind of
cognitive glue that binds them together towards these large-scale
construction projects, maintaining organs and things like that.
And so, yeah, so we wanted to study that bioelectrically.
And so I had two students, Brook Chernet and Maria Lobikin, who undertook that.
And we were able to show that using this bioelectrical
imaging, you can tell which cells were going to convert
ahead of time.
You could also convince perfectly normal cells
to become metastatic melanoma, just by giving them
inappropriate bioelectric cues about their environment.
So: no genetic damage, no carcinogens, no oncogenes, just the wrong bioelectric information, and they become like metastatic melanoma. And best of all, they were able to show that you can actually reverse carcinogenic stimuli, for example human oncogenes, by appropriate bioelectric connections to their neighbors.
So we had a whole set of papers showing how to control
cells bioelectrically and Juanita Matthews in my lab now is trying to take all those strategies
into human cancer. So this is 2009? This was, yeah, the first experiments were done around
2010, 2011, something like that.
So when did this conjectural connection between bioelectricity and cancer occur to you?
The field was clueless about that. It's not as if they had an opinion and said no.
Well, to be clear, the very first person who talked about this was Clarence Cone, around 1971. Cone had a couple of papers in Science where he showed that the resting potential of cells
was an important driver of cell proliferation
and he conjectured that it might have something
to do with cancer.
So this idea had been floated,
nobody had ever done anything with it
and the tools to study this
at a molecular level didn't exist until we made them. So that idea, that bioelectricity is important in cancer, had been around before. What I think we brought to it that was completely new is the notion that this is also related to cognition.
The idea that it's not just that it drives proliferation in cancer, but that this is really involved in limiting the size of the cognitive light cone, at which point cells acquire ancient, metastatic-like behavior, the way amoebas do. That aspect of it, I think, is completely new with us: the idea that this really is about the boundaries of the self.
I've never seen anybody else talk about that.
So, around that same time, there was something else interesting that happened in 2009,
which is that we were studying planaria, we were studying flatworms. And we had shown that when you cut planaria into pieces,
the way that these pieces decide how many heads they
were going to have is actually related
to the ability of cells to communicate with each other
using gap junctions, these electrical synapses.
And so we had made some two headed worms and so on. And around 2009, I had this student, this visiting
student from BU, her name was Larissa.
And I had asked her to recut the two-headed worms, just in plain water, no more manipulation of any kind.
Recut them, meaning cut off the heads of them?
Again, yeah.
So you have a normal one-headed worm,
you cut off the head and the tail,
you have the middle fragment,
you soak that middle fragment in a drug
that blocks the cells from electrically communicating
with each other and they develop heads at both ends.
Would that drug that you soak it in
be called a biological cocktail?
Like, is that what you referred to earlier?
It's a different biological cocktail.
It wasn't even a cocktail.
It was one single chemical.
I see.
It was really simple.
It was just one single chemical, and all it does is block gap junctions.
And so what that did was change the electric circuit properties that the cells have and
both wounds decided they had to be heads.
So now you get these two headed worms.
So yeah, so Larissa recuts these two headed worms in plain water, no more manipulation
and she gets more two headed worms.
It's permanent.
Once you've convinced them...
Now, the genetics are untouched, right?
No, no, no genomic editing, no transgenes; it's genetically identical, but the two-headed worms are a permanent line now. So a couple of interesting things there. One is that it shows the interesting memory properties of the medium, meaning once you've brought it to a new state, it holds it; it's a kind of memory. It remembers the two-headed state.
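One simple way to see how a non-genetic, physiological state can be permanent is bistability. The sketch below is a generic toy in Python (a particle in a double-well potential), not a model of the actual planarian voltage circuit: a transient push during a "treatment" window flips the system into the other stable state, and it stays there after the push is gone, with nothing about the underlying rules changed.

```python
# Generic bistability toy: a physiological (non-genetic) memory.
# Not a model of the real planarian circuit; just the abstract principle.

def step(x, push=0.0, dt=0.05):
    # dx/dt = x - x**3 + push : two stable states, near x = -1 and x = +1
    return x + dt * (x - x**3 + push)

x = -1.0                                       # start in the "one-headed" attractor
for t in range(400):
    push = 2.0 if 100 <= t < 160 else 0.0      # brief perturbation (the "soak")
    x = step(x, push)
print(f"state long after the perturbation ended: x = {x:.2f}")  # ~ +1.0: it holds the new state
```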
Another interesting thing is that two headed worms were first seen.
Well, they were first described in the early 1900s.
So people had seen two-headed worms made by other means, but apparently, to my knowledge, and I don't think anybody's ever written about this, nobody thought to recut them until we did it in 2009. And I think the reason is because it was considered totally obvious what would happen: their genome is normal, you cut off that second ectopic head, and of course it'll just go back to normal.
That's what people assume.
So this is another example of why thinking
in these different conceptual ways is, it matters.
It leads to new experiments
because if you don't think about this as memory,
if you're focused on the genes as driving phenotypes,
then it doesn't make any sense
to recut them.
But if you start thinking, well, I wonder
if there's a physiological memory here,
then that leads you to this experiment, right?
So thinking in this way leads to new experiments.
And then the other thing it points out
is something really interesting.
So for pretty much any animal model,
you can call a stock center and you can get lines of genetic mutants.
So you can get flies with curly wings and mice with crooked tails and weird coat patterns
and you know, chickens with funky toes.
You can get any, you know, any kind of, any kind of mutant lines.
In planaria, there are no mutant lines.
Nobody's ever succeeded in making anything other than a normal planarian, except for our two headed form.
And that one's not genetic.
And so there's a deep reason,
which I didn't understand back then.
In fact, I think we only,
I think I only really figured out
what I think it means in the last few months.
But it was striking that the only unusual planarian form,
permanent planarian form out there
was the one that we had made.
And that's the one that's not genetic.
It's not done by the way that you would do this with any other animal.
Yeah.
What's your recent discovery then?
Well, it's not so much a discovery. It's more of a new way of thinking about it. So one of the weird things about planaria is the way they reproduce. At least the ones that we study reproduce by tearing themselves in half, and then each half grows the rest of the body.
And, um, typically what happens for most of us that reproduce by sexual reproduction is
that when you get mutations in your body during your lifetime, those mutations are not passed on to your offspring, right?
They disappear with your body and then the eggs go on and so on.
Well in planaria it's not like that.
In planaria, any mutation that doesn't kill the cell gets expanded into the next generation
because each half grows the remainder of the body.
And so their genome is very messy. They have, I mean, in fact, cells can be
mixed up, they can have different numbers
of chromosomes.
That's very weird. And I always thought, isn't it strange, and nobody ever talked about this in any biology class that I've ever had, isn't it strange that the animal that is the most regenerative, apparently immortal, they don't age, cancer-resistant, and, by the way, resistant to transgenes, so nobody's been able to make transgenic worms, is also the one with the most chaotic genome? Now that's bizarre. You would think, from everything that we are told about genomes and how they determine phenotypes, that the animal with all those amazing properties
should have really pristine hardware.
You would think that you should have a really clean,
really stable genome if you're going to be regenerative
and cancer resistant and not age and whatever,
it's the exact opposite.
I always thought that was incredibly weird.
And so finally, I think,
and we've done some computational work now
to show why this is, I think we now
understand what's happening.
And what I think is happening is this.
Imagine, let's go back to this issue
of developmental problem solving.
So if you had a passive material such that you've got some genes,
the genes determine what the material does.
And so therefore, you have an outcome.
And that outcome gets selected: either it does well or not, and then there's differential reproduction. So in the standard story of evolution, everything works like it would in a genetic algorithm. Very simple. The problem with it, of course, is that it takes forever. Because let's say that you're a tadpole and you have a mutation. Mutations usually do multiple things.
Let's say this mutation makes your mouth be off kilter,
but it also does something else somewhere else in the tail,
something positive somewhere in the tail.
Under the standard evolutionary paradigm,
you would never get to experience the positive effects
of that mutation because with the mouth being off,
you would die and that would be that.
So selection would weed you out very quickly
and you would have to wait for a new mutation
that gives you the positive effects without the bad effects on the mouth.
So it's very hard to make new changes without ripping up old gains, and so on. Those are some of the limitations of that kind of view.
But a much more realistic scenario is the fact that you don't go straight from a genotype
to the phenotype. You don't go
from the genes to the actual body. There's this layer of development in the middle. And the thing
about development is not just that it's complex, it's that it's intelligent, meaning it has problem
solving competencies. So what actually happens in tadpoles is if I move the mouth off to the
side of the head, within a few weeks, it comes back to normal on its own, meaning it can reach
again that region of anatomical space where it wants to be.
So imagine what this means for evolution when you're evolving a competent substrate, not
a passive substrate.
By the time a nice tadpole goes up for selection to see whether it gets to reproduce or not,
selection can't really see whether it looks good because the genome was great or because
the genome was actually so-so but it fixed whatever issue it had.
So that competency starts to hide information from selection.
So selection finds it kind of hard to choose the best genome because even the ones with
problems look pretty good by the time, you know, it's time to be selected. So what happens, and we did computational simulations of all this, what happens is that
when you do this, evolution ends up spending all of its effort ramping up the competency,
because it doesn't see the structural genes. All it sees is the competency mechanism.
And if you improve the competency mechanism, well, that makes it even harder to see the genome.
And so you have this ratchet.
You have this positive feedback loop
where the more competent the material is,
the harder it is to evolve the actual genome.
All the pressure is now on the competency.
So you end up with kind of like a ladder, really
an intelligence ratchet.
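A minimal toy version of that feedback loop (this is an illustrative reconstruction in Python, not the lab's actual simulation): each individual carries a "structural" gene and a "competency" gene, and competency partially repairs the structural phenotype before selection ever sees it. Selection therefore pushes competency up strongly, while the structural genome, shielded from selection, drifts under mutation.

```python
# Illustrative toy of the "competency ratchet"; not the published model.
# Selection only sees the repaired phenotype, so competency absorbs the
# selection pressure while the structural genome is left to drift.
import random

random.seed(0)
POP, GENS = 200, 300

def clamp(v):
    return min(1.0, max(0.0, v))

def phenotype(ind):
    s, c = ind["structural"], ind["competency"]
    return s + c * (1.0 - s)   # competency repairs part of the structural shortfall

def mutate(ind):
    return {
        # structural mutations carry a slight deleterious bias, as most mutations do
        "structural": clamp(ind["structural"] + random.gauss(-0.02, 0.05)),
        "competency": clamp(ind["competency"] + random.gauss(0.0, 0.05)),
    }

pop = [{"structural": random.random(), "competency": random.random()} for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=phenotype, reverse=True)      # selection sees only the repaired phenotype
    parents = pop[: POP // 2]                  # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(POP // 2)]

mean = lambda key: sum(ind[key] for ind in pop) / POP
print(f"mean structural gene: {mean('structural'):.2f}   (typically stays mediocre)")
print(f"mean competency gene: {mean('competency'):.2f}   (typically climbs toward 1)")
```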
And people like Steve Frank and others have pointed this out
in other aspects of biology and also in technology, right?
Once RAID technology came up, it became not as important to have really pristine and stable disk media, because the RAID takes care of it, right? So the pressure on having really, really stable disks is off.
So what it means is that in the case of the planaria,
that positive feedback loop, that ratchet
went all the way to the end.
Basically, what happened here is that you've got an organism where it is assumed that your hardware is crap.
It's assumed that you're full of mutations.
All the cells have different numbers of chromosomes.
We already know the genetics are all over the place.
But all of the effort went into developing an algorithm that can do the necessary error correction and take that journey in morphospace, no matter what the hardware looks like.
That is why they don't age.
That is why they're resistant to cancer.
And that's why nobody can make a transgenic worm
because they really pay less attention to their genome
in that sense than many other organisms.
So you can imagine a sort of continuum.
So you've got something like C. elegans, the nematode, where they're pretty cookie-cutter. As far as we know, they don't regenerate much. What you see is what you get: what the genome says is pretty much what you get.
Then you've got some mammals. Mammals, at least in the embryonic stages, have got some competency. You can chop early mammalian embryos into pieces and you get twins and triplets and so on. Then you get salamanders. Salamanders are quite good regenerators. They're quite resistant to cancer. They are long lived. And then, when you run that spiral
all the way to the end, you get planaria,
which are these amazing things that have committed
to the fact that the hardware is gonna be noisy
and that all the effort is gonna go
into an amazing algorithm that lets them do their thing.
And that's why if you're gonna make lines
of weird planaria, targeting the structural genome
is not helpful, but if you screw with the actual mechanisms
that enable the error correction,
AKA the bioelectricity,
that's when you can make lines of double-headed and so on,
because now you're targeting the actual problem-solving machinery.
And if you were to look at the genome
of the salamander versus the C. elegans,
would the C. elegans be more chaotic
or more ordered than the salamander?
It's a good question. Nobody's done that specifically, as far as I know. This is something that we're just ramping up to do now.
Because correct me if I'm wrong, it sounds like the hypothesis is that if you have a large amount of genetic chaos, or if you can quantify that, then you would have something that would be compensated for in terms of competency or some higher-level structure.
Yeah, I think that, yes, I think that's a prediction
of this model that I just laid out.
And so, yeah, so we can test that.
I mean, part of it also is there's an ecological component.
I mean, you can ask the question,
so why doesn't everything end up like planaria?
And I think there's an aspect of this
that that ratchet obviously doesn't run to the end in every scenario because in
some species there's a better trade-off to be had somewhere else.
I wonder if there are three components then, because then if you don't see a direct correlation, it could be hidden by a third factor.
Yeah, which I think would probably be environment. It would probably be the ecology: how do you reproduce, how noisy, how dangerous, how unpredictable is your environment?
I'm gonna guess there's something like that involved here.
Yeah, but I think that's starting to kind of explain
what's going on with planaria.
So we found the persistence of the two-headed phenotype, and then Nestor Oviedo and Junji Morokuma in my group wrote a nice paper on that, and so on.
And then the next kind of big advance there, in 2017, was by Fallon Durant, a grad student in my lab, who also did something interesting. So when you take a bunch of worms and you treat them with, let's say, this reagent that blocks the gap junctions,
typically what you see is, okay, you treat 100 worms, 70% of them go on to make two heads and
30% are unaffected. So we thought they were unaffected because they stay one-headed and
we always call them escapees because we thought that they somehow just escaped
the action of the octanol.
Maybe their skin was a little thicker or something
and we never had a good explanation for it.
But anyway, there was a 70% penetrance to the phenotype, and most treatments don't have 100% penetrance, so it wasn't surprising.
Penetrance?
Penetrance just means that when you apply some treatment
to a population, not all of them have an effect
and not all of them have the same effect.
That's true for pretty much every drug, every mutation and so on.
So for years we called them escapees and then around 2015 when Fallon joined the lab, she
recut some of those one headed escapees and found that they also do 70-30, that 70% of
them became double headed and 30% didn't.
And so what we realized was that they're not actually escapees.
They're not unaffected.
They're affected.
But the way they're affected is quite different.
They are randomized.
They can't tell if they should be one headed or two headed.
And they flip a coin with a 70-30 bias about what they should do in any given generation.
In fact, we were able to show this when you cut multiple pieces from the same worm. We call them cryptic worms because physically they look completely normal: one head, one tail. They look normal, but they're not normal, because if you recut them, they're not sure what to do. Their memories are bistable. So what happens is that you can cut them into pieces and every piece makes its own decision whether it's going to be one-headed or two-headed, even though they came from the same parent organism, with the same roughly 70-30 frequency.
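As a rough illustration of that interpretation, here is a toy sketch of my own, with the 70-30 bias taken from the approximate figure quoted above rather than from the actual data: each fragment of a cryptic worm flips its own biased coin instead of inheriting a fixed outcome from its parent.

import random

P_TWO_HEADED = 0.7   # assumed bias, roughly the 70-30 split described above

def recut(n_fragments):
    # each fragment independently re-decides its head number
    outcomes = ["two-headed" if random.random() < P_TWO_HEADED else "one-headed"
                for _ in range(n_fragments)]
    return {o: outcomes.count(o) for o in set(outcomes)}

print(recut(1000))   # roughly 700 two-headed, 300 one-headed

The point of the model is just that recutting the apparent "escapees" reproduces the same roughly 70-30 split, which is what distinguishes a randomized, bistable memory from worms that simply escaped the treatment.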
So that's another kind of permanent line, and the way we studied it more recently was as a kind of perceptual bistability.
So like the rabbit duck illusion, right?
You look at it and it looks like one thing, looks like something else.
That's kind of what's happening here.
There's a bioelectrical pattern that can be interpreted in one of two ways.
And that's why they're confused.
They're sort of bistable and they can fall in either direction.
So that's another thing we did in planaria.
Okay, so that's 2017.
I'm going to get you to bring us up to 2020 and then to 2024.
But first, explain to me what is meant when you say Levin Lab. There's also Huberman Lab. Do professors in neuroscience or biology get given a lab by the university as the standard? Do you share a lab with other people? What's meant by lab? Is it a room? What is it?
Yeah, okay, so the way this works
is basically that when you're finishing up your postdoc, you do what people call going out on the market, which means you interview at a bunch of places and see who will hire you as a brand new junior faculty member. So when you get a job, this is considered your first real independent job, because you are now in charge of all your successes and failures. It's all up to you.
And typically, yes, at that point you get a lab.
So one of the things you do is you negotiate
the amount of space you have.
Typically you start off pretty small.
And then over time, if you bring in new grants,
then you ask for more space and the lab grows.
And when they say Levin Lab or Huberman Lab, what they're referring to is all of the research that is controlled by you, where you, that particular person, make the decisions.
So physically, so for example,
I have three different locations on this campus where my research is done.
And that's just because there isn't one contiguous space.
I mean, I would be happier to have everything in one place, but they're large spaces.
And so there's a few specific locations.
All of them are considered part of the Levin Lab
because as the principal investigator,
I'm the one whose job it is to bring in the external funding
to support all the people that work there
and to pay for the reagents.
And also, as the PI of these labs, it's my job to be responsible for the good and the bad of what they do.
So that's what it means.
It just means you're the principal investigator
responsible for determining
what research happens and what happens to that research, who does it; you're hiring, you're recruiting people, you're writing grants.
That's what it means.
And for the people who are watching, I mean, sorry, for the people who are
listening, there are shots on screen right now of the Levin lab provided
we're able to go later.
Okay. So now bring us to 2020 to 2024. Yeah.
So some of the latest, most exciting things that...
Oh, by the way, one thing I didn't mention that happened between 2015 and 2020 or so was work by Vaibhav Pai. He's a staff scientist in my lab. We were able to show that you can actually use reinforcement of bioelectric patterns to fix birth defects. This all started in frog.
And we were able to show that there's a range
of birth defects induced by other means
that you can actually repair.
So complex defects of the brain, face, heart, like these really complicated things.
You could actually repair them by very simple changes in the bioelectrical pattern.
So I think that's a really important story, actually, because not only is it a path to clinical repair of birth defects, but it actually shows how, again,
you can take advantage of this highly modular aspect and you don't have to tattoo instructions
onto every cell for something as complex as a brain or a heart.
You can give pretty large scale instructions and then the system does what it does to fix
it.
So, okay, so the next couple of big things after that
were first of all, the discovery of xenobots,
and this was Doug Blackiston in my group
did all the biology for it.
And this was in collaboration with Josh Bongard,
who was a computer scientist at UVM
and his PhD student at the time, Sam Kriegman,
who did all the computer simulations for the work.
So this was the discovery of the xenobots and the idea of using epithelial cells from an early frog embryo, so prospective skin cells, and liberating them from the rest of the embryo, where they're basically getting a bunch of signals that force them to be this boring skin layer on the outside of the animal, keeping out the bacteria.
So they were going to be skin cells; they weren't yet. They were committed, well, it's hard because skin isn't a real precise term, but these are cells that, at the time we took them off the embryo, were already committed to the fate of becoming that kind of outer epithelial covering. They hadn't matured yet, but they already knew they were going down that direction.
So we were able to show that when you liberate them from those cues, they actually take on a different lifestyle and they become a motile, sort of self-contained little construct that swims around. It does some really interesting things, including making copies of itself from loose skin cells that you provide, and so on.
Shortly thereafter, a couple of years later, we were able to make anthrobots, which is the same thing but with adult human tracheal cells. So there's a couple of reasons why that's important.
Very simply, you know, some people saw xenobots
and they thought, well, amphibians are plastic,
embryos are plastic.
Maybe not shocking that cells from a frog embryo
can reassemble into something else,
but basically this resembles a phenomenon in frog developmental biology known as an animal cap. The animal cap is just basically that top layer of prospective ectodermal cells. So some people thought about this as a unique feature of frog developmental biology.
And so I wanted to get as far away from frog and embryo as possible because I wanted to
show that this was kind of general and a broader phenomenon.
So what's the furthest you can get from embryonic frog? Adult human.
So Gizem Gumuskaya in my group, who just defended her PhD about a month ago, developed a protocol to take donated tracheal epithelial cells from human patients, often elderly patients, and let them assemble into a similar kind of thing, a self-motile little construct that swims around on its own.
And one of the most amazing things that these anthrobots do, and this is just the first thing we tried.
So I'm guessing there's a hundred other things
that they can do.
But one thing that they can do is if you put them
on a neural wound, so in a petri dish,
you can grow a bunch of human neurons,
you take a scalpel, you put a scratch down
through that lawn of neurons.
So there's neurons here, there's neurons here,
there's a big wound in the middle.
When the anthrobots come in, they can settle down into a kind of, we call it a superbot, which is a collection of anthrobots. And four days later, if you lift them up, what you see is that they take the two sides of the neural wound and they knit them together. So they literally repaired across that gap. So you can sort of start imagining.
There's a couple of things. On a practical level, you can imagine in the future personalized interventions where your own cells, no heterologous or synthetic cells, no gene therapy, but your own cells are behaviorally reprogrammed to go around your body and fix things in the form of these anthrobots.
You don't need immune suppression because they're your own cells.
Right.
And so you can imagine those kinds of interventions. It's interesting because a lot of the interventions we use now, we use drugs and we use materials, screws and bolts and things, and then occasionally we use a pacemaker or something, but generally our interventions are very low agency. Ultimately we'll have smart implants that make decisions for you, but mostly these are very low-agency kinds of things.
And when you're using low agency tools, the kinds of things you can do are typically very
brittle.
In other words, this is why it's hard to discover drugs that work for all patients.
You get side effects, you get very differential efficacy in different patients because you're
trying to micromanage, at the chemical level, a very complex system.
You don't just want agency, you want agency that's attuned to yourself, because you could get someone else's agency.
Oh, yeah. Yeah. It's not great for you.
Exactly, which is why your own cells coming from your own body,
they share with you all the priors about health, disease, stress, cancer; they're part of your own body. They already know all of this. And so part of this is understanding how to create these agential interventions
that can have these positive effects.
I mean, we didn't teach the anthropots
to repair neural tissue.
We had no idea.
This is something they do on their own.
Like who would have ever thought that your tracheal cells,
which sit there quietly for decades,
if you let them have a little life of their own,
they can actually go around and fix neural wounds, right?
You must have had some idea that this would be possible,
otherwise you wouldn't have tested it, no?
True, true, yes.
This is true for a lot of stuff in our lab
where people say, well, did you know that was gonna happen?
And so on the one hand, no, because it's a wild
and it wasn't predicted by any existing structures. On the other hand, no, because it's a wild and it wasn't predicted by any existing structures.
On the other hand, yes, because we did the experiment
and that's why I did it because I had an intuition
that that's how this thing would work.
So did I know that it was gonna specifically repair
peripheral innervation?
No, but I did think that among its behavioral repertoire would be the ability to exert positive influence on human cells around it.
And so this was a convenient assay to try.
We have a hundred more that we're gonna try.
There's all kinds of other stuff.
Yes, I see, I see.
So you're testing out a variety.
Correct, you have to start somewhere, right?
And so Gizem and I said, well, why don't we try a nice, easy neural scar?
There's many other things to try.
So that's kind of the practical application, but the bigger intellectual issue, much like with the xenobots, is that what's cool about making these sorts of synthetic constructs is that they don't have a long evolutionary history in that form factor. There's never been any xenobots. There's never been any anthrobots in evolutionary history. The anthrobots don't look anything like any stage of human development. And so the question arises, where do their form and behavior come from then?
Right.
And so this is where you get back to this issue of the platonic space, right?
If you can't pin it on eons of specific selection for specific functions, where do these novel capabilities
come from? And so I really view all of these synthetic constructs as exploration vehicles.
There are ways to look around in that platonic space and see what else is out there. We know
normal development shows us one point in that space that says this is the form that's there,
but once you start making these synthetic things,
you widen your view of that latent space
as to what's actually possible.
And I see this research program as really investigating
the structure of that platonic space
in the way that mathematicians make the map of mathematics, right?
And there's sort of a structure of how
the different pieces of math fit together.
I think that's actually what we're doing here
when we make these synthetic things is we're making vehicles to observe what else is possible in that space that evolution has not shown us yet.
And then you can do interesting things, and this is still unpublished, but you can ask questions like, what do their transcriptomes look like? What genes do xenobots and anthrobots express? Without blowing any of the surprise, the paper should be out sometime this year: massively new transcriptional profiles in these things. No drugs, no synthetic biology circuits, no genomic editing; just by virtue of having a new lifestyle, they adapt their transcriptional profile.
The genes that they express are quite different,
quite different.
So that'll be an interesting study.
And then, you know, for the rest of it,
I mean, what we've been doing in the last few years
is trying to bring a lot of the work
that we've done earlier into clinically
relevant models. So the cancer stuff has moved from frog into human cells and organoids,
spheroids, so human cancer spheroids, glioblastoma, colon cancer, stuff like that. The regeneration work has moved from frog into mice. And this is coming along.
I'm not claiming any particular result yet. I should also say there's a disclosure I have to do here, because we have a couple of companies now. In the case of regeneration, Morphoceuticals is a company that Dave Kaplan and I have. David is a bioengineer here at Tufts, and he and I have this company aiming at limb regeneration and, more broadly, bioelectrics in regeneration.
So yeah, the cancer and the limb regeneration stuff, and more experiments in trying to understand how to read and interpret
the information that flows across levels.
So, so we know cells exchange electrical signals to know how to make an embryo.
Turns out embryos actually communicate with each other.
So that's been a really exciting recent finding from Angela Tung in my group, who just got her PhD as well, where we studied this embryo-to-embryo communication,
showing that groups of embryos actually are much better
at resisting certain defects than individuals.
And they have their own transcriptional profile.
So I call it a hyper embryo,
because it's like the next level.
They have an expression,
a transcriptome that is different
from normal embryos developing alone,
let's say.
So that's pretty exciting.
And yeah, those are the kinds of things we've been focused on.
Okay, now we're going to end on advice for a newcomer to biology.
They're entering the field.
What do you say?
Boy, well, step one is to ignore most people's advice. So I don't know how helpful that will be. But I actually have a whole thing about this, maybe we can put up a link. I have a long description of this on my blog, a thing that basically talks about advice.
Okay, that's on screen. Also, you should know about the previous research by Angela Tung and Gumuskaya. With Gumuskaya we did a podcast together, so that link will be on screen as well. There's also another podcast with Michael Levin, which is on screen, and then another one, which is on screen, with Chris Fields and Karl Friston, and another one with Michael Levin and Joscha Bach. That's on screen. So Michael is a legend on Theories of Everything.
Okay. So does your advice for
the biologists differ from your advice to the general scientist entering the field?
I mean, the most important thing I'll say is I do not in
any way feel like I could be giving anybody advice.
I think that there are so many individual circumstances that I'm not going to claim
I have any sort of.
How about what you would have wished you had known when you were 20?
Yeah.
So this is pretty much the only thing I can say about any of this: that even very smart, successful people are only well calibrated on their own field, their own things that they are passionate about.
They're not well calibrated on your stuff.
So what that means is, this is kind of meta advice, it's advice on advice. The idea is that when somebody gives you a critique
of a specific product, so let's say you gave a talk
or wrote a paper or you did an experiment
and somebody's critiquing what you did, right?
That's gold.
So squeeze the hell out of that for any way
to improve your craft.
What could I have done better?
What could I have done to do a better experiment? What could I have done better in my presentation so that they would understand what I want them to understand? That's gold.
The part where everybody gives large-scale advice: oh, work on this, don't work on that, focus, don't think of it this way, think of it that way. All of that stuff is, generally speaking, better off ignored completely.
People are really not calibrated on you, your dreams in the field, your ideas. It does not pay to listen to anybody else about what you should be doing. Everybody needs to be developing their own intuition about what that is and testing it out by doing things and seeing how they land.
And I think, certainly we've had plenty of dead ends and made plenty of mistakes, but for most of the interesting things that our lab has done along the way, very good, very successful, smart people said, don't do this, there's no way this is going to lead to anything. And so the only thing I know is that nobody has a crystal ball; paths in science are very hard to predict. People should really distinguish between taking extremely seriously specific critiques of specific things, which will help them improve their process, versus these large-scale, sort of career-level things. I don't think you should be taking almost anybody's advice about that.
Can you be specific and give an example of where you liked the minutiae of a critique and then where you disliked the grand-scale critique?
Yeah, I mean the minutiae happen every day, because every day we get comments on, let's say, a paper submission, and somebody says, well, it would have been better if you included this control, or, I don't get it, because it's clear that the reviewer didn't understand what you were trying to get at.
And so that's on us.
That's on us to describe it better, to do a better experiment that forces them to accept
the conclusion, whether they like it or not.
The best experiment is one that forces the reader to a specific conclusion, whether or not they want to go there; it's irresistible. It's clean, it's compelling.
So that kind of stuff happens on a daily basis where you see what somebody
was and wasn't able to absorb from what you did.
And you say, okay, how can I do this experiment better?
What kind of a result would have gotten us to a better conclusion, one that everybody would have been able to see? So that stuff happens all the time. The other kind of thing, I'll give you an example from the tail regeneration era. We showed that when tadpoles normally regenerate their tail, there's a particular proton pump that's required for that to happen, a proton pump in the frog embryo. And so what we showed is that you can get rid of that
proton pump and then the tail stops regenerating. And then you can rescue it by putting in a proton
pump from yeast that has no sequence or structural homology to the one you knocked out of the frog,
but it has the same bioelectric effect, right? And that's how you show that it really is bioelectricity.
Right.
So, so we got two reviews on that paper and the first reviewer said,
Oh, you found the gene for tail regeneration.
That proton pump is the gene for tail regeneration.
Get rid of all the electrical stuff.
You don't need it.
You found the gene for tail regeneration.
The second reviewer said, oh, the gene obviously doesn't matter because you just replaced it with the proton pump from yeast; get rid of all of that and just do the bioelectrical stuff, right?
So that shows you right away the two different perspectives, right?
Each person had a particular way they wanted to look at it.
They had exactly opposite suggestions for what to throw out of the paper.
And only together do those two perspectives explain that what's going on here is that,
yes, the embryo naturally has a way of producing that bioelectrical state, but what actually
matters is not the gene.
It's not how you got there.
It's the state itself.
And so that kind of a thing, those kinds of perspectives, or, for example, the people who are upset at calling xenobots bots, right? We call them bots because we think it's a biorobotics platform.
So one thing that happens is that you've got the people that are sort of from the organicist tradition and they'll say, it's not a robot, it's a living thing. How dare you call it a robot?
And part of the issue is that all of these terms, much like the cognitive terms that
we talked about, it's not that the xenobot is or isn't a robot.
It's simply that by using different terms, what you're signaling is some of the ways that you could have a relationship with it.
So for example, we think that we might be able to program it, to use it for useful
purposes.
That's what the terminology of bot emphasizes. Do I think it's only a bot? Absolutely not. I also think it's a proto-organism with its own limited agency and, things that we haven't published yet, which we're working on, its own learning capacity and so on.
So you often run into this: people think everything should only be one thing, and that this is all a debate about which thing it is.
And I don't think that's true at all.
There's another, just, you know, kind of one last example.
Again, having to do with terminology.
Somebody said to me once, you know, people are very
resistant to the use of the word memory for some of the things that we study.
And she said, why don't you just come up with a new term, you know, shmemory or something.
And then nobody has to be mad; you can say, okay, human learning is memory, and then this other thing, where these other things learn, well, that's shmemory. And that's the kind of...
Shm intelligence.
Yeah.
Why are you calling it intelligence?
Yeah, exactly.
And that's just an example of the kind of advice you might get from somebody. And in a certain sense, it's true that if you do that, you will have fewer fights with people who are very purist, who want memory to be in a very particular box. You'll have fewer fights with those people.
That's true, but bigger picture, imagine Isaac Newton, and you know, the apple falls. I know it probably didn't really happen, but let's say the apple falls and he says, okay, I'm going to call gravity the thing that keeps the moon in orbit around the Earth, and then I'm going to call shmavity the thing that makes the apple fall.
That way there won't be any arguments, right?
Like, yeah.
But what you've done there is you've missed the biggest opportunity of the whole thing, which is the unification. The hill you want to die on is that it really is the same thing; you don't want a new term for it. So that's just an example: it's good advice if you want to avoid arguments. But if your point is that, no, it actually is the same thing, that's the whole point, we need a better understanding of memory, and I want those arguments, then that's something else. That's the kind of strategic thing that you should decide on your own, what you want to do.
Now in that case,
why couldn't you just say it actually wouldn't have been a mistake for Newton to call this one gravity one and that one gravity two, until he proves that they're the same mathematically? Just like there's inertial mass and then there's another form of mass, and then you have the equivalence principle.
Yeah, you could. The thing is, my point is that we never know a priori whether we're supposed to unify or distinguish.
That's correct.
Yes, true. A priori you don't know. And so the question is,
in your own research program, which road do you want to go down? Because if you commit to the fact that they're separate, you don't try the unification, right? If you try the unification, you spend your time, I mean, it takes years, right? You spend time and effort in a particular direction because you feel it will pay off.
If it were truly the case that it could be this or that, are you going to spend 10 years on one of these paths? In science there are no do-overs. You commit, and those 10 years are gone.
So you need to have a feeling, an intuition, of which way it's going to go.
And you definitely don't need to declare ahead of time that I know how it's going to turn
out because you don't.
But you do need to know, despite everybody telling me this or that, I am going to commit.
That's really all you have, right?
You don't have any kind of crystal ball.
You don't have a monopoly on the truth.
But what you do have is a responsibility to manage the limited time that you have.
So how are you going to spend your 10 years?
And it's going to be hard, right?
Lots of blood, sweat and tears.
It's a hard job.
There's constant criticism, that's how science goes, and lots of stress.
But now the question is: are you going to have that stress following somebody else's research agenda or yours? You'll still be old and stressed out by the end of it.
But the question is, will you have tested out your
own best ideas or somebody else's view of what
science should be?
That's my only hope. Well, Michael, speaking of limited time, I appreciate you spending yours with me and with the crew here today.
Super. Thanks so much.
Thank you so much. Thank you.
Thank you so much. Yeah, it's great to see you again. Great discussion. I love talking to you.
Thanks for having me so many times. It's been really excellent.
So Taylor Swift has a tour called the Eras Tour. Okay, okay. You've been around since 2000, active in the field. This is akin to the Michael Levin Eras Tour: all of Michael's work, well, not all of it, but the milestones, the greatest hits, in approximately two hours or so. So share this if you're a fan of Michael's work. And, well, Michael, thank you.
Thank you so much.
I really appreciate it.
Thank you.
Firstly, thank you for watching.
Thank you for listening.
There's now a website, curtjaimungal.org,
and that has a mailing list.
The reason being that large platforms like YouTube,
like Patreon, they can disable you
for whatever reason, whenever they like.
That's just part of the terms of service.
Now a direct mailing list ensures that I have an untrammeled communication with you.
Plus soon I'll be releasing a one-page PDF of my top 10 TOEs.
It's not as Quentin Tarantino as it sounds like.
Secondly, if you haven't subscribed or clicked that like button, now is the time to do so.
Why?
Because each subscribe, each like helps YouTube push this content to more people like yourself,
plus it helps out Curt directly, aka me.
I also found out last year that external links count plenty toward the algorithm, which means
that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people
are talking about this content outside of YouTube, which in turn greatly aids the distribution
on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for theories of everything,
where people explicate toes, they disagree respectfully about theories, and build as a community our own toe. Links to both are in the description.
Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio
platforms. All you have to do is type in theories of everything and you'll find it. Personally,
I gained from rewatching lectures and podcasts. I also read in the comments that, hey, TOE listeners also gain from replaying. So how about instead you re-listen on those
platforms like iTunes, Spotify, Google Podcasts, whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like this,
then do consider visiting patreon.com/curtjaimungal and donating with whatever you like.
There's also PayPal. There's also crypto
There's also just joining on YouTube again
Keep in mind, it's support from the sponsors and you that allows me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video. It's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps, far more than you think. Either
way, your viewership is generosity enough. Thank you so much.