StarTalk Radio - Synthetic Biological Intelligence with Brett Kagan
Episode Date: June 21, 2024
Can you make a computer chip out of neurons? Neil deGrasse Tyson, Chuck Nice, & Gary O’Reilly explore organoid intelligence, teaching neurons to play Pong, and how biology can enhance technology with neuroscientist and Chief Scientific Officer at Cortical Labs, Brett Kagan.
NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/synthetic-biological-intelligence-with-brett-kagan/
Thanks to our Patrons Amar Shah, Carol Ann West, Mehdi Elahi, Peter Dawe, Paul Larkin, Saad Hamze, Eric Kristof, Nikki Shubert, braceyourself07, and wayne dernoncourt for supporting us this week.
Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Transcript
Coming up on StarTalk Special Edition, rather than attach computers to your brains,
what happens if a computer is made of the building blocks of brains themselves?
Coming up on Special Edition.
Welcome to StarTalk, your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk Special Edition.
Neil deGrasse Tyson, your personal astrophysicist.
I got with me my co-host, the dynamic duo.
Oh.
Gary O'Reilly. Gary, how you doing, man?
I'm good, Neil. Good to be on again.
Former soccer pro. You staying in shape a little bit?
No. Okay.
We'll do a separate episode
on staying in shape. Just as it should be.
Just as it should be. And that other voice
is, of course, longtime StarTalk co-host
Chuck Nice. How you doing, Chuck? Hey, what's
happening? So, Gary, what did you and your
producers cook up for today?
Well, once we came across this particular
project and this particular guest,
this was a given.
We had to do it. So let me put
it to you this way. Everyone wants
to know what tomorrow's
world will bring. We all know
you can put technology into
biology, think Elon and Neuralink.
But is that the only
route to the future? What if we flip the
script? Yeah. What if we introduce the biology to the technology? Then what happens? You get
what, I'm unreliably informed, is called synthetic biological intelligence, or as we will learn
further into the show, SBI for short.
And so during this show, Neil, we're going to get into AGI ethics, comparing SBI with silicone intelligence, and there's so much more.
There's loads to unpack, so buckle up.
We are headed to the future.
Wait, you mean silicon, not silicone.
I think silicone you find in...
We already have silicone intelligence.
I think we already have that.
It's called Hollywood.
That's right.
Silicone intelligence, baby.
Are we in the weeds with tomato and tomato?
No, I think it's silicon.
Silicon.
Okay, silicon. All right, fine.
You're comparing biology introduced to technology
and technology introduced to biology.
And that's what we're going to explore here.
Totally, yeah.
All right.
All right.
So let's get our guest,
who may be uniquely qualified,
to tell us about it.
Brett Kagan.
Brett, welcome to StarTalk Special Edition.
Thank you so much.
Thanks so much for having me.
And you're dialing in from
Melbourne, Australia. So yeah, yeah. Nice. Great to be able to organize the international calls
at a time that can work for everyone. Indeed. You have a PhD in neuroscience, which is,
if I were to pick a field today, it might just about be that. Maybe, maybe. It's a close second
to astrophysics, just for how endless that
frontier feels, how ripe
it may be for future discovery.
You're also a chief scientific
officer at the Cortical Labs
in Melbourne?
Yeah, yeah. And
who are these people? They want to
build synthetic intelligence
processors.
Oh. Oh.
Okay.
A positronic brain, huh?
So let's just start out.
You're making brain cells in a dish?
Does that mean you're growing neurons?
And then, if that's what you're doing,
you're going to put it into electronic circuitry?
So what's up with that?
Yeah, that's exactly what we're after.
So when you get right down to it,
I think it's interesting you bring up astrophysics as a parallel because when you look at the complexity that happens in the universe
at that macro scale,
infinite number of different bodies all interacting with each other,
moving as a whole,
you do actually get a similar level of complexity when you look at the brain.
And when you look at the outcome of what brains can achieve, building everything that's around
us, you can realize it's a pretty special system.
And so we became really interested in this idea.
Well, what if you could leverage the fundamental building blocks of brains to actually create
a device that is intelligent?
And so we set about figuring out a sustainable and ethical way
to produce brain cells, which fortunately has been established
through a lot of academic work previously.
Wait, when you say ethical way, you mean there's some brain cells
that'll complain and protest your work?
Well, for us, obviously, a lot of people in this work,
they take the cells from animals.
So you have to grow the animal, you have to kill the animal,
you have to harvest it, which is for some work necessary.
But we wanted to figure out,
is there a more scalable and sustainable way to do that?
And so we moved to synthetic biology.
And so we found that what we could generate
was something called an induced pluripotent stem cell,
which is a type of stem cell that you can make
from any adult donor's blood, tissue, or skin cells,
or there's a number of ways you can do it.
Then we could turn them into brain cells
using a number of different methods.
And then we can integrate them into devices
like what you can see I've got next to me here,
which allows us to interact with them.
Please describe it. Describe what you have.
Yeah, so this sort of device, we call it a CL1.
Essentially, it's a device that allows you to record the small electrical pulses
that happen when brain cells are active,
and then also supply small electrical pulses to communicate to them.
So electricity here is a form of information transfer.
You're peeking at the playbook of nature
and adopting elements from it, methods and tools,
to improve your efforts to duplicate it in hardware. Is that a fair characterization?
Not exactly. We're not trying to duplicate it in hardware. What we're trying to do is actually
leverage the cells themselves. Part of the problem is we can't duplicate brain cells in hardware.
The complexity that they display is something we can't achieve yet.
And so we kind of adopted this idea of why mimic what you can harness.
All right.
So how are you getting from growing your neurons and your brain cells
onto the multi-electrode arrays?
And then how are they networking and beginning to function?
Yeah, all of those are great questions.
Sort of real short, real simplified.
These neurons, what we can do is we can put down these extracellular matrices
that the neurons are pretty happy to grow on.
And we just put those above these multi-electrode arrays.
So it's a dense platform
with electrodes. And we can grow the neurons on that. And we can keep them alive with standard
methods for up to a year, sometimes longer. And then how do we actually interact with them? Well,
that's where the neurocomputational approaches come in. And this is really a question. This is
a physics question. How do information systems work at a fundamental level?
So now, do neurons have a proclivity to start communicating with one another on this
matrix that you set up? Or is that something that you are specifically designing and creating? Because
our neurons are firing in our brain. Yeah, absolutely.
And the answer is actually both.
Neurons will, it's called spontaneous
activity. Okay.
When they're there, they will network up. They will
talk to each other.
Of course, without any information coming
in, they don't have very much to talk about.
So you think.
So you think.
Who knows what sort of quantum home is.
Yo, yo, yo, did you hear what was happening over in the hippocampus?
I just got back from the hippocampus.
It's going crazy over there, man.
Are you a big man on the hippocampus?
No, no, no.
Yeah, absolutely.
There's a huge amount of complexity that arises there.
But what's fascinating is when you shift it and you give them that information, the reorganization you see is dramatic. And it really suggests the
ability to interact with these systems as something that is achievable. You're saying
the neurons intentionally organize themselves in a way in response to being laid down on your
multi-electrode arrays? So you can pattern it. You can use a bunch of materials to create some
intentional organization,
which is fine, and that's a really neat area that we're investigating.
But what I think is more exciting is the fact that they will,
in response to the information you provide,
these electrical signals that have structure and quality to them,
they will reorganize their function rapidly.
Oh, whoa.
Okay.
Very Frankenstein here.
That's very cool, though, because our brains do the same thing.
I guess so, don't they?
Exactly.
Our brains do the exact same thing.
That's wild.
But so you're not growing neurons.
You are activating neurons.
We have to do both. So we grow them, we plate them, and then how can we activate them with information?
Wait, if you can grow neurons, why can't you cure spinal cord
severed nerves, that sort of thing? Well, that's actually what a lot of this technology for the
synthetic biology has arisen from. People looking at Parkinson's, spinal cord damage, Alzheimer's,
a whole range of things. And so people have been trying to develop brain cells for
decades from this material. But we kind of took
it and went, well, great, we could also apply it to just build brain cells for an intelligent
purpose. So we're kind of adapting it. In reading about your project here, Brett,
I came across a term called embodied networks. Yes. Could you break that down for us so as we'll
understand if we get to this point? Because I think we've got a lot of distance to cover before we get to how this sort of intelligence is going to be used in the future.
So if we can sort of build some basics as we understand a little bit more, please.
Yeah, sure.
So in the simplest possible terms, embodiment, we all have embodiment, right?
There's a statistical or physical barrier between our bodies and the external world.
The question is, how do you create that for a group of neurons in a dish, right?
Most neurons sit there in the darkness chatting to themselves as we were saying,
like, what's going on there? We don't know.
What we try to do is by creating a tight, what's called a closed loop,
where we take information from the cells, we apply it to a virtual world,
like we started with the game Pong,
and then we feed back how that changes the world to the neurons.
Suddenly, there's this barrier between the neurons' activity
and how it affects the world, and they're informed of that.
And so this is embodiment,
because there's this barrier, there's this separation.
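The closed loop Brett describes (read the culture's activity, let it drive a virtual world, feed the outcome back as stimulation) can be sketched in a few lines of Python. Everything here is illustrative: the `DishSim` stand-in, the thresholds, and the feedback values are invented for the sketch, and the real CL1 reads and writes many electrode channels in real time.

```python
import random

class DishSim:
    """Toy stand-in for a neural culture. Not a model of real neurons;
    just something whose 'activity' we can read and stimulate."""
    def __init__(self):
        self.bias = 0.0  # crude internal state shifted by past stimulation

    def read_activity(self):
        # Recorded activity: internal state plus noise
        return self.bias + random.uniform(-1.0, 1.0)

    def stimulate(self, signal):
        # Feedback stimulation slowly shifts the internal state
        self.bias += 0.1 * signal

def closed_loop_step(dish, ball_y, paddle_y):
    """One pass of the loop: read -> act on the virtual world -> feed back."""
    activity = dish.read_activity()
    paddle_y += 1 if activity > 0 else -1       # the culture moves the paddle
    near_ball = abs(ball_y - paddle_y) <= 1     # did the action track the ball?
    dish.stimulate(0.5 if near_ball else -0.5)  # outcome returned as stimulation
    return paddle_y
```

The "embodiment" is exactly this loop: the culture's activity changes the world (the paddle), and the world's response comes back as structured input.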
Before we get to the abilities of these things
that you are creating in your Frankenstein lab,
tell me again, convince me that adding neurons to circuits is better than adding circuits to neurons.
Because you are bucking a trend here.
You know, Neuralink, Elon Musk, and others, they want to put the internet in our head
and enhance our biology.
You want to make a head and then put the internet in it.
So why should I bet on you?
Well, look, it's not a zero-sum game, right?
They do different things.
One's aiming to take humans and advance our capacity.
The other one's trying to use the capacity of biology to enhance some other process.
So both have their time and place, and there's also the chance they could interact.
It's one thing to put a BCI, a brain-computer interface, into a head.
How do you actually interpret those signals?
This is an open question.
Maybe brain cells are better at interpreting signals from brain cells.
Wow.
In terms of, yeah, so there's a lot of capability there.
And it's supported.
If you look at what we as humans or animals,
bees, cats, rats, whatever, what they do well,
they do well going into a novel environment,
investigating it, optimizing,
and they do it with a fraction of the power of machine learning,
between hundreds of thousands and hundreds of millions of times less power consumption.
Right.
And they can do it quicker in terms of the amount of data they need as well.
You don't need to look at too many tigers to learn. To know what a tiger is.
To learn from the tiger, right?
We have this innate predisposition to learn rapidly.
So now, let me ask you, since
you just brought that up: the reason why we are able to do that, say for instance, and
computers don't, is because we are grounded in real-world knowledge of what we are experiencing.
Yeah. So how exactly do you transfer that into neurons that are embodied in something other than a circuit?
How do you bring that real-world grounded, or is the real-world grounded knowledge that we have
because of the network of neurons that we have in our brain?
Our ability to acquire that real-world knowledge is what's fascinating.
So something like a tiger or a snake,
that could be a genetic prior, right?
Generations and generations of people
who didn't run when they saw tigers got eaten.
But you'll also learn to fear electrical shocks, let's say.
It's very unlikely there's a predisposed genetic prior
to make you scared of electricity
because we just haven't interacted with it for that long.
But we learn, don't we?
Rapidly, you show a child once or twice,
you show them a video, they learn,
oh, electricity, bad.
One or two samples is all that they'll typically need.
And that's due to the absolute mass of parallelization
and flexibility of a neural system.
We can make these connections.
How long does it take your neuron processors
to learn as they play Pong? Or is this an experiment that's still ongoing?
We're still just scratching the surface of this. This is science as it's happening.
And we've wanted to be really, we kind of like to think of ourselves as like anti-hype scientists.
So we wanted to show people like, look, here's this work. It's covered in warts. It's messy.
It's janky.
But it seems to be doing something.
So we wanted to share it with people.
So we were able to see some learning.
We basically tested for, say, 20 minutes at a time.
And we could find very big differences from the first five to the last 15.
So within five minutes, they were reorganizing.
And actually, we recently found out it could be even quicker than five minutes for them
to reorganize.
And then the learning appears over the next couple of minutes as they start to upregulate and improve.
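The comparison Brett describes, performance in the first five minutes of a session versus the last fifteen, needs nothing fancier than an average rally length per window. A toy Python version with entirely made-up numbers:

```python
def mean_rally(rallies):
    """Average rally length (hits before a miss) over a list of rallies."""
    return sum(rallies) / len(rallies)

# Hypothetical 20-minute session, one rally-length sample per minute
session = [1, 1, 2, 1, 2, 2, 3, 2, 3, 3, 3, 4, 3, 4, 4, 3, 4, 5, 4, 5]
first_five, last_fifteen = session[:5], session[5:]

# Learning shows up as the later window outperforming the earlier one
improved = mean_rally(last_fifteen) > mean_rally(first_five)
```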
I'm still trying to figure out, is this just a natural order of neural function?
Because there's a saying, neurons that fire together, wire together.
Yeah, yeah.
Is that what you're talking about when you said the learning aspect?
Is that what they say up in the hood?
I mean, where'd you get this?
No, that's a classic line
for something called
Hebbian plasticity.
Neurons that fire together,
wire together.
It's been a mantra in neuroscience
since, oh gosh,
don't test me on the history of that,
but for a long time.
Okay.
Maybe it did start up in the hood.
Okay.
Yeah.
In our hood.
In the hood we share, yeah.
Yeah, it was a dude named Jamal.
Okay.
Okay.
I know him.
Thank you.
Jamal the wise.
Jamal the wise.
Is that what you're seeing
when you talk about this learning aspect
that happens in this short period of time
that you're observing?
That's actually just one part of what we're seeing. So yeah, this thing called
Hebbian plasticity absolutely massively upregulates incredibly rapidly once they're in these
environments. But what's cool is that there's actually so much more that's going on. And you
can break this down and find so many different processes that are interacting at different time
scales. So that's why when I say the complexity of these systems,
one of the few things that really bears parallel to
is those massive macro-scale interactions
that can happen on that galaxy level.
It's absolutely mind-blowing.
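"Neurons that fire together, wire together" has a one-line mathematical form, the basic Hebbian update: a connection weight grows only when the pre- and post-synaptic neurons are active at the same time. A minimal sketch, where the learning rate and the 0/1 activities are illustrative; real Hebbian and spike-timing-dependent rules are much richer:

```python
def hebbian_update(w, pre, post, rate=0.1):
    """Strengthen the weight only when pre and post fire together."""
    return w + rate * pre * post

w_up = hebbian_update(0.5, pre=1, post=1)    # coincident firing: weight grows
w_same = hebbian_update(0.5, pre=1, post=0)  # one-sided firing: unchanged
```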
What you're talking about,
now that I'm putting all this together,
is freaking crazy
because what you're talking about right now is a biological computer,
basically, that has the ability to do what we do because computers right now can't do what we do.
They can't do-
Silicon computer.
Silicon, thank you.
It can't do what we do.
The real world grounded knowledge that is necessary to do what we do.
It can make huge calculations and tons of associations, but it has to see all those associations in order to make them.
And what you said earlier is what really makes sense.
You show a child a ball, it will know a ball if you show it a baseball, if you show it a basketball.
It's going to say ball.
Whereas if you show a computer that,
you have to show that computer every single kind of ball
for it to say that's a ball.
What you're talking about right now with using neurons,
you can turn these things on and off instead of zeros and ones.
And instead of zeros and ones where you got to show
every single thing that is,
it can actually do what we do
and it can start to make associations on its own.
Am I right when I say this?
That's certainly what we're hoping to be able to show.
And you've got this neat thing here
where you have a ground truth.
Let's say people are going for
artificial general intelligence.
And we don't know if you can achieve what you're talking about with silicon.
It's never been done before.
But you, me, as I said, cats, rats, birds, bees,
to some extent have this generalized intelligence.
You have this ground truth that using this hardware, this wetware,
it is possible to have these effects.
So the question isn't if it's possible, but how do you get there?
And that's a very different place to start from.
Let me just for once, I'm sorry, guys, because I'm freaking out right now.
You're freaking me out, man.
I'm just saying.
Okay, here's the question, Brett.
And I'm not trying to be disrespectful at all.
That means he's about to be disrespectful.
Well, I don't mean to be.
Why the hell would you want to do this?
I mean, this could go horribly wrong in a lot of ways.
Like, you could literally create the intelligence that becomes the next species.
It won't be a computer.
It'll be something much more.
That's what I'm getting.
You're scared, Chuck?
I'm scared.
I'm sorry.
I'm scared.
You're scared.
By the way, the Terminator
had biological tissue
affixed to its exoskeleton.
But not,
and I think this is
an important thing,
not a biological brain.
Right.
And I think when you
think about the risk,
something that can
self-replicate rapidly, be hard to contain, get into the internet, and all these things that we worry about with AGI, all those fears are missing from biological intelligence.
At the end of the day, even if you do create an incredibly intelligent system in a dish, let's say that happens.
Let's say we go completely out there, super intelligence in a dish.
It's still not really going to be able to manage
against a small quart full of bleach.
So these things are controllable.
They're controllable.
This is one aspect of it.
Even if we achieve what you're saying.
It's not going to jump out of the Petri dish and kick your ass.
Exactly.
It's not going to happen.
Any capability it has is something we have to provide it.
And we have no intention of doing any of this in the near future.
And at the end of the day, it will be discrete. It will be controllable.
It will be just one brain, much as
we are.
I'm Kais from Bangladesh
and I support StarTalk on Patreon.
This is StarTalk with Neil deGrasse Tyson.
So what have you taught it? Pong?
We did, yeah.
We started out with Pong.
Wait, just wait.
No, you can't just,
we just started out with pong.
How the hell you get from some circuits and some neurons to pong?
So that was a great question.
And we were really interested
in not just trying to leverage
some basic reaction response thing.
You stimulate here
and you measure the same response,
which is sort of what a lot of people have been looking at for a while. We wanted to know,
what's the fundamental basis of intelligence? And there's a lot of theories out there. So we
started to work with this brilliant neuroscientist over at University College London called
Professor Carl Friston. And he proposed this idea called the free energy principle.
And essentially what it states is that a system will work to minimize the uncertainty, the information entropy in its environment.
And so we thought, look, this is great.
If true, it could be a generalized intelligence.
And if we find evidence against it, well,
we found evidence against a very prominent theory.
That'll be a good thing to get people's attention as well.
Either way, for us, we do good science, win-win. And so we tested it. We built an environment where
we gave the system control of the paddle and we said, if you miss the ball,
you're going to have randomness injected into your well. What does that look like?
Just random stimulations all over the dish. A super simple idea to test a very complex idea.
And what we found was that over time
with this feedback loop that we created,
which worked in real time,
the system actually did change
its behavior rapidly
at all these levels I was talking about.
So that was pretty exciting for us to see.
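The stimulation rule at the heart of that experiment is simple to state: a hit earns a predictable stimulus, a miss earns unstructured noise, so a system that works to minimize surprise should learn to hit. A toy Python version, in which the electrode count and signal values are invented, not the actual protocol:

```python
import random

def feedback(hit, n_electrodes=8):
    """Predictable stimulation after a hit; 'injected randomness' after a miss."""
    if hit:
        # Identical pattern every time: minimal surprise
        return [1.0] * n_electrodes
    # Unpredictable stimulation all over the dish: maximal surprise
    return [random.uniform(-1.0, 1.0) for _ in range(n_electrodes)]
```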
When the paddle misses the ball,
how do you gauge the intelligence
that it doesn't get as angry
and throw the paddle down
and have a tantrum?
Or does it, I mean, we talked about
it reorganizing. So how
do you get that metric to gauge this
intelligence? Or is it just visual?
Well, it hits the ball more accurately and better now.
That's the simplest approach, right? If you were to
train a person or a cat or a
dog to do a trick,
the question isn't what's really going on in
its head. The question is, does it
sit next time I tell it to sit?
Or can it solve 2 plus
2 next time? And so that's exactly
what we said. And we disconnected
them. We disconnected where the information went
in and where the information went out
so that it would actually have to be some sort
of process going on
through the system. And we just looked.
Would it learn over time?
Wow.
So, mentioning learning, if you put synthetic biological intelligence
and compared it to a machine learning algorithm,
what's the effectiveness, the efficiency,
the speed of how this compares and operates?
Yeah, that's a question that seems to upset a lot of people because we actually have done
that work in depth.
It's the answer that upsets them, not the question.
Even the question can upset some people.
There's been such a narrative of AI, ML supremacy for so long.
Unfortunately, in science, there can be a lot of gatekeeping in certain areas.
But we've sort of very conclusively shown
that these systems actually
have better learning rates,
better sample efficiency
than machine learning.
Now, of course,
machine learning will continue to learn
and you can speed it up
and you do all these things.
But if you're talking about
the amount of power consumed
and the amount of data consumed
to get to a given point,
these systems will outperform them.
Well, that's because you need less of a sample size in terms of input.
You don't need as much input with what you're talking about.
With machine learning, it's only as good as the data set that trains it.
So you need a tremendous amount of data to put in in order to get something out.
And remind us of what AGI stands for.
Artificial General Intelligence.
Which means?
The idea of having an AI that's able to solve, well, basically with human-level capabilities.
Able to be generalized in their approach with a given set of data,
opposed to having to be trained in a bespoke way
on every single task,
which is currently what you have to do.
We're used to algorithms.
And obviously, if you want them to solve
incredibly large problems, it takes a lot of energy.
You've highlighted the fact that this is a low energy intelligence.
Do I need a football field's worth of the biological processors
that you're creating to solve big problems?
Or is this something that's ultimately
going to become scalable and handy to stick in your pocket?
Think back to the telephone.
Now we can stick one in a small pocket.
Before that, you had a telephone box
that couldn't go anywhere.
Yeah, great question.
And actually, bigger isn't always better.
You look at elephants.
Their brains are about two and a half times as large as ours.
But unfortunately, most of them, unless they're killed by predators,
die of starvation because they grind their teeth down.
So bigger isn't always better for a brain.
It's the connections inside it,
and it's the method in which it's used that matters.
Even, let's say, bumblebee intelligence.
Bees are amazing creatures, and they can do so much. That's only
800,000 to a million neurons inside a bee, and they can achieve so much.
What if we could just harness that level of intelligence? It would outperform any
machine learning-based drone we have. Wow. That's pretty insane.
I just love the fact that elephants
die because they don't have a dental plan.
Yeah, yeah.
Sorry to bring the tone down, guys.
If only elephants had learned to be dentists.
What they need are dentures, you know.
Exactly.
All fresh.
Is this synthetic biological intelligence going to solve problems
in a similar way to silicon intelligence?
And there's this worry about ethics of AI.
And does biological intelligence have a greater or lesser degree of ethical concerns for
us as a species? Which talks back to the point Chuck was making earlier. Two very good questions.
The first one is almost certainly not, and I could talk for hours on the differences we see. I'll
give one example, though.
When you look at a complex dynamic system like the brain and you inject information into it,
you're going to see these very dramatic changes
that you won't see in, say, silicon computing.
For example, something called criticality.
This is basically something that's balanced at the edge of chaos
between order and disorder.
And it's fascinating because that's the exact same sort of thing
you'll see as bird flocks respond and change their behavior,
their flight patterns in response to, say, a hawk
or something like this.
And so you start to get these parallel links
between how these systems at the neuron level,
at a bird level, perhaps even, you know,
you can model this in city levels,
are actually changing their behavior.
So it's fundamentally a natural process, as opposed to zeros and ones. There will be overlaps, there will be
links, but fundamentally there are also more differences, which does raise the ethical
questions. So we work with a lot of independent bioethicists around the world to look at this. And
actually, one really exciting thing is that if you start to want to look at what a morally relevant state is, broad term, like consciousness, this tool could actually help you
maybe understand what that even means. Because when you look at consciousness in a person,
there's so much going on, right? But if you can break it down to a simpler level and start to
look at metrics there, you can maybe actually understand what is the biological basis to some
of these morally relevant states.
Now, is it possible that you wouldn't even need to worry about consciousness when you talk about the bee and you talk about the elephant?
I'm looking at it like a neural network.
If you're looking like when you look at AI neural networks,
if you were to take specific tasks for your biological computer
and you were to link them all together,
you could kind of make this, I don't know,
ad hoc makeshift kind of brain,
but it wouldn't necessarily require intelligence,
not intelligence, consciousness.
Exactly, yeah.
You're spot on.
And this is something I try to communicate
with people. Intelligence and consciousness, they're not
inherently tied together.
And you see examples of that in people
as well. So there's a phenomenon
called blindsight. I don't know if you're familiar
with this.
But in blindsight, essentially
you have a case where you've got damage to the visual
cortex. And so if someone becomes
legally blind,
they perceive they have no conscious experience of any vision.
Yet, if you throw a ball at them
or you pull a chair in front of them as they're walking,
it's not nice to do, but people have done it for tests.
They'll move around the chair, they'll catch the ball, right?
But they won't know it.
And you'll say, well, how did you do this?
Oh, it was luck. I didn't do it.
And so you have this thing of intelligent action, catching a ball, moving around the chair, no conscious awareness. So it's certainly possible.
And again, we're not necessarily trying to create a human brain in a dish. We're using neurons as
an engineering substrate. So if we understand what causes consciousness, we can build around
and away from that if that's desirable. You say you're not trying to create a brain,
but it sounds a lot like you are.
Not a human brain.
Not a human brain.
But isn't there something,
wasn't there research that cortical labs,
your labs did with Johns Hopkins University
here in the US,
regards organoid intelligence?
I can say that,
but I can't give you an explanation of what it is.
So would you help me understand that a bit better, please?
Yeah.
That's why I say we're not necessarily wanting to create a brain in a dish.
There are two directions.
One is, well, the human brain is the most capable brain
for doing complex tasks that we know of.
We should just recreate it exactly,
and that's going to come with certain benefits and certain risks.
The other approach is to go the other way.
So with the organoid intelligence work,
yeah, we're looking to see more physiologically
or biologically compatible systems,
something that looks more like a human brain.
Still far smaller, far more simple,
but closer down that pathway.
But as I was saying,
if we do become really worried about consciousness,
we can pivot and go the exact opposite direction
and still have a lot of use.
So make it less like a human brain,
but leverage the underlying properties of these neurons.
Where precisely does the ethics concern land
in this whole conversation?
I think there's two broad or three broad areas.
One of them being largely solved,
because of the stem cell therapy-based work
and genetics work that's been going on.
That one is the donors for your tissue, for the blood.
It's a simple process, but still, you don't want people being taken advantage of.
You want to make sure it's nice and equitable access.
You have genetic diversity, et cetera, et cetera.
Fortunately, a lot of work's been done on that.
The other two, one is applications, which needs to be done on a case-by-case basis.
And then the other one is indeed this idea of what if they become conscious?
So I think there's these sort of three pathways of ethics that we need to be aware of.
And do you get ethicists or are neuroscientists also good at that exercise?
We work with both.
We're a big proponent of multidisciplinary collaboration.
So we work with ethicists
to talk about ethical problems. We try to integrate
them with neuroscientists. Well, what gets me
is when people say, let's choose a priest, a rabbi
and an imam and
they don't bring in a scientist into a conversation
about the ethics of the science.
So I'm delighted to learn that
you've got a seat at the table. Well, that's, wait a minute,
that's because you guys are the problem.
Huh?
Brett, you just casually mentioned consciousness.
Like that's something you know you can create in your dish.
When there are people who are brain dead, but are they conscious?
Do we know enough about consciousness to say whether what you're creating ever achieves it?
Yeah, sorry, that was not my intention.
I might have misspoke.
It's not so much that we think we can create consciousness in a dish.
In fact, my personal belief is that we're unlikely to do that because, as you said, there's so much complexity there.
But we have to recognize that there are these possibilities.
So we're making sure we progress this work in an ethically sustainable way.
So what I would say with that, though,
is that if we begin to find that consciousness or anything like that does arise, it wouldn't just change how we treat these things in a
dish.
It would change how we interact with all of nature.
We're looking at things that are still more simple than a cockroach at the moment.
For now.
For now.
For now.
But there's so much complexity there.
It could inform not,
you know,
not just the ethics
of this research area
or this application,
this technology,
but how we interact
with the world.
And I think that's exciting.
But wait a minute, Brett.
Let me just push back
for a second.
When you talk about complexity,
the kind of complexity
that you're talking about
could lead to consciousness
and we wouldn't even know how.
I mean, is consciousness something that... we know that we're born with it, supposedly. But like Neil just said, there are
people who are alive and not conscious. So is it something that is emergent? What is it?
What is it? And then once we figure out what it is,
when you talk about the level of complexity
that you're talking about,
maybe it could happen the same way it happened in us.
If it is indeed emergent,
maybe you will happen upon consciousness.
And what do you do at that point?
Look, it's a brilliant question.
It's one we ask.
We don't have the answer to it.
What we'd like to think, though...
That was a great answer, by the way.
We don't have the answer.
And I think, as scientists, you have to be humble.
You have to say, look, we don't know.
As I said, I don't think it'll happen,
at least not any time in the foreseeable future.
But we don't know that.
So we have to be humble.
We have to approach it.
And we have to say, well, look,
how do we test and make sure that we're able to know
and sort of identify the road signs before we've come up to the turn? And so that's what we're
actively working on with people is not just blindly going into this and saying, well,
maybe we create Frankenstein, maybe we don't, that's someone else's problem. No, it's our problem.
And we need to bring in the people to work together to be able to figure out the best way to actually
get to the result we want to get to, which is ultimately something that benefits people.
Are we looking at the future of computing being synthetic biological intelligence?
Or are we likely to find the hybrid between the biological and the silicon?
Absolutely.
So there's this idea called heterogeneous compute.
We already have heterogeneous compute, by the way.
I mean, CPUs and GPUs, they process data differently
and they work together really well.
What if we could bring about
maybe one day you have your
quantum processing unit that does
cryptography very well, and you have your
biological processing unit that does real-time
applications really well.
And they all work together, so that you have
the right tool for the right job.
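The heterogeneous-compute idea Brett describes, different processor types each matched to the work they handle best, can be sketched as a simple dispatch table. Everything below is illustrative: the unit names, the task categories, and the hypothetical "BPU" (biological processing unit) are assumptions for the sake of the sketch, not a real Cortical Labs API.

```python
# Toy sketch of heterogeneous compute: route each task to the unit
# best suited for it. All unit names and task categories are hypothetical.

def cpu(task):
    return f"CPU handled {task}"   # general-purpose, sequential work

def gpu(task):
    return f"GPU handled {task}"   # massively parallel numeric work

def bpu(task):
    return f"BPU handled {task}"   # hypothetical biological unit: real-time, adaptive work

# Dispatch table mapping task categories to the preferred unit.
DISPATCH = {
    "control_flow": cpu,
    "matrix_math": gpu,
    "real_time_adaptation": bpu,
}

def run(task_type, payload):
    unit = DISPATCH.get(task_type, cpu)   # fall back to the CPU
    return unit(payload)

print(run("matrix_math", "train step"))           # GPU handled train step
print(run("real_time_adaptation", "pong frame"))  # BPU handled pong frame
```

The point of the sketch is only the routing: each kind of work goes to the processor whose strengths fit it, which is exactly how CPUs and GPUs already cooperate today.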
Do they solve problems in different ways,
or did we explain that fully
in the sense of
will the biological intelligence
find a different route
to a solution
than the silicon intelligence?
That's interesting.
Again, we would almost certainly think so,
based just on what we've seen
from, say, human versus AI,
machine learning,
that people have looked at:
we do seem to solve problems differently.
And sometimes humans don't always come up
with an optimum answer.
Often what they come up with is an answer that's good enough.
And so you need to figure out like,
do we want to know how to get from points A to B to C
in a way that you can do in the time you have?
Or do you want to figure out the exact optimum pathway,
which might take you hundreds of thousands of times more power consumption?
So you just need to figure out what is it we actually need to know.
And I think that's something that we've, as people,
not always done really well.
We haven't always figured out the best approach,
the most efficient approach to get things done.
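The trade-off described here, a fast "good enough" answer versus an exhaustive search for the exact optimum, can be made concrete with a tiny route-planning toy. The coordinates and helper names are purely illustrative; the takeaway is that the greedy heuristic does a handful of comparisons while the exact search must examine every ordering.

```python
# Greedy "good enough" route vs. exhaustive optimal route on a tiny
# travelling-salesman-style instance (open path starting at city 0).
# Illustrative only: the exact optimum costs factorially more work.
from itertools import permutations
from math import dist

cities = [(0, 0), (2, 1), (5, 0), (3, 4), (1, 3), (4, 2)]

def tour_length(order):
    # Total length of the path visiting the cities in this order.
    return sum(dist(cities[a], cities[b]) for a, b in zip(order, order[1:]))

def greedy_tour(start=0):
    # Always hop to the nearest unvisited city: fast, usually near-optimal.
    left, order = set(range(len(cities))) - {start}, [start]
    while left:
        nxt = min(left, key=lambda j: dist(cities[order[-1]], cities[j]))
        order.append(nxt)
        left.remove(nxt)
    return order

def optimal_tour():
    # Check every ordering: exact answer, but factorial cost.
    rest = range(1, len(cities))
    return min(([0] + list(p) for p in permutations(rest)), key=tour_length)

g, o = greedy_tour(), optimal_tour()
print("greedy :", round(tour_length(g), 2))
print("optimal:", round(tour_length(o), 2))
```

With 6 cities the exhaustive search already examines 120 orderings; at 20 cities it would be about 10^17, which is the "hundreds of thousands of times more power" point in miniature.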
So, you know, that being said,
is it possible that the computational power of the synthetic biological computer,
is it possible that that might be compromised
by the fact that it thinks more like we do?
In other words, if we give it real-world, grounded knowledge,
could that actual knowledge
be an impediment?
I'm not...
I can't think of a case
where knowledge of reality
is an impediment
to problem solving.
Yeah, I agree.
Okay.
I'm just looking at...
I'm looking at the differences
because those are the differences
between how we think
and computers think.
So I'm looking at those differences. Is there any possibility that could serve as a stumbling block in any way? That's all. No, and it could be. As a scientist,
I never like to say a thing is not possible. The possibilities out there are almost endless.
But what I'd say is if you do find those edge cases,
it just means that biological computation
is not the right approach for that problem.
Yeah.
That's super cool, man.
Yeah, we go through versions of that.
Okay.
When we program computers,
there's certain methods of programming,
certain computing CPU wiring
that doesn't marry well to the problem you want to solve.
Gotcha.
And you just work on it.
You work on something else.
You could write code that overcomes that, but then that burns CPUs that you can use
for the real thing you're trying to calculate.
For something else, right, right, right, right.
Okay.
That makes sense.
Okay.
So Brett, as this thing begins to develop and progress, where are you looking at it
and thinking it's really going to do
well in this field? Is this
field healthcare? Is this field data
processing, autonomous,
whatever I can't say,
autonomous events,
driving, flying, or whatever
that might be?
Yeah, the nice thing about this approach is
it's a platform technology.
So yeah, the initial use cases,
like just basic science research
is a super interesting question.
And it's a huge field.
Billions, tens of billions of dollars
get spent in it every year.
And we're often using tools
that aren't quite up to asking the questions
or answering the questions we're asking.
Beyond that, as you say, healthcare, drug testing.
Is there a tool you already know you need that you don't have that might be supplied by a medical engineer or a physicist?
Well, we have medical engineers. We work with physicists for this very reason, right? We're
incredibly multidisciplinary. So this little box here, it combined all sorts of things,
biomedical engineering, hardware, software, everything.
We all had to come together and work. That's ultimately one of the biggest reasons we're
a company, not an academic lab. Academic labs silo. They focus on one area and they go deep.
And I have a lot of respect for that. But we weren't able to build a platform technology
with that approach. So we had to bring everybody together. That being said, do you ever envision yourself saying these words?
It's alive!
It's alive!
It took that long?
Yeah, yeah, yeah.
I've been resisting.
I've been resisting.
I know. You caved.
You've been saving it.
You caved.
We have to say that at least twice a day.
It's compulsory.
You're not allowed to leave
before you've got your maniacal laugh out the way.
He is alive.
I've seen it twice with Frankenstein and Frank-N-Furter.
Yeah.
These are the two people creating life out there.
So you had to play Pong.
Is there other games
coming forward on it? Some other
mental feats that we can look forward to?
Yeah, we started with
Pong because DeepMind started
with that for some of the first RL work.
It's one of the first computer games.
It met a bunch of other criteria.
We moved on and we did try some other things
and had some really interesting results.
But what became really apparent to us
is that we were using off-the-shelf hardware
and we were sort of hacking a lot of it.
Because we started out, we didn't have a lot of money
or many resources, it was just a couple of us.
And so we had to make do with what we could.
And we just realized, God, it was hard
to use things not designed for that purpose.
And so we set out to build the platforms that we're building that make it easier.
So now instead of 18 months of development to make Pong,
we can do it in a week or two.
And so just now these things are coming online
and we're starting to iterate rapidly.
And we're doing all sorts of things.
Some of them are really basic neurocomputational questions
that just haven't been able to be answered before.
Trying to understand the music of neurons,
the waveforms and what that means
in terms of the computational approach.
So you're telling me this will help us understand our own brain
or is this going to go off and do something else
and become our overlords?
Well, I was going to say, why not both?
I don't think it will become our overlords.
But why not both?
Why not?
No, no, yeah.
It's gotten to his head, Chuck, you see.
Yeah, yeah.
It could be.
Look, we certainly think, like, for sure,
it's going to help us understand our own brains.
And when you understand a system, you can build it.
I forget who said the famous quote,
but if I can't understand it, I can't build it.
That's what we're trying to do, basically.
I believe that was Field of Dreams.
Which I can't understand.
I'm pretty sure that was what he said before he came up with something a little catchier.
He came up with something a little catchier.
If we build it, they will come.
Yeah, exactly.
If we build it, they will kick our ass.
There are a lot of ways that could have gone.
But before we get our asses kicked,
and I'm not overly keen on that theory,
Neil mentioned about spinal column damage.
Is there a way that this will develop to treat diseased neurons
and bring a healing process forward?
If you've got it playing pong, it's not just sensory,
it's motor skills as well.
There's a lot of complexity in this.
Is that, are you able to articulate that forward?
Yeah, absolutely.
And that's what I was saying.
Neuroclinical trials
for psychiatric and neurological disease,
they fail nearly all of the time.
You're looking at between 7% and 8%
down to less than 1%, depending on the area. And part of the
reason is that our pre-screening tools, our pre-trial, preclinical testing, aren't up to the task.
And that's because when you look at a neuron, the purpose of a neuron isn't to express a protein
or even to fire an action potential.
It's just electrochemical, that's it.
But it's not just to have electrochemical activity.
That's the thing.
It's to process and do something
with information.
It needs the external information
to do its job,
to do its function.
But it's doing it electrochemically.
Right.
Yes, it's doing it electrochemically,
but it needs the external information.
That's a marker of it,
but it's not the whole story.
And so if we can look at
the response of these systems
and how they
change their information processing in response to drugs, you're going to get a far better
understanding of how that drug is affecting the system. And we've done that. We've been using
sort of very simple epilepsy models and finding that if you take an epilepsy model, it doesn't
learn Pong, unsurprisingly. Oh, wow. Okay.
If you treat it with things that reduce that activity,
not only can it improve
its gameplay,
but you get a wealth of information
that was previously inaccessible.
You said about criticality,
that borderline between
organized and chaotic,
isn't that kind of like an epileptic fit?
No, no, actually.
When you go into that complete chaotic state of...
No, yeah, that's what I'm saying.
They don't go there.
They balance between order and chaos and...
Uh-huh.
So that balancing act is actually incredibly important
for information transfer, and it's implicated.
So we actually had a paper on this,
and a lot of people sort of said,
oh, this is related to memory, or to intelligence, or to this and to that, and there's a lot of controversy. And what we found was actually,
no, it underpins all of them, because it's a fundamental pattern of dynamic systems in response
to an external signal. And as I said, we draw the parallel between flocks of birds
and people and cities and all of this stuff, and you can see the same patterns arising again and
again in nature.
Yeah, but you can't look at a bird and know that it flocks, can you?
Well, no, you need to look at the flock.
Right, but you don't know that it even has the capacity to do so.
So the flocking is itself an emergent element of bird behavior, right?
Exactly, yeah, yeah.
And so we're trying to build the system
so you can actually
look at the emergent properties that happen from the collection. What might be a naive question,
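The criticality Brett mentions, activity balanced between dying out and blowing up, is often illustrated with a toy branching process: each active unit triggers on average m others, and the character of the whole "avalanche" changes sharply around m = 1. This is a hedged analogy under simple assumptions, not a model of real neural tissue.

```python
# Toy branching-process picture of criticality. Each active unit
# triggers Binomial(3, m/3) others, so m is the mean branching ratio.
# Subcritical (m < 1): cascades die out fast. Near-critical (m ~ 1):
# a broad range of cascade sizes, the regime often associated with
# efficient information transfer. Supercritical (m > 1): runaway
# activity, loosely the "chaotic" side of the balance.
import random

def avalanche_size(m, rng, cap=2000):
    """Total activations from one seed unit, capped for runaway cascades."""
    active, total = 1, 1
    while active and total < cap:
        children = 0
        for _ in range(active * 3):        # 3 Bernoulli trials per active unit
            if rng.random() < m / 3.0:
                children += 1
        active = children
        total += children
    return min(total, cap)

def mean_size(m, trials=500, seed=0):
    rng = random.Random(seed)              # fixed seed for reproducibility
    return sum(avalanche_size(m, rng) for _ in range(trials)) / trials

print(f"subcritical   m=0.50: {mean_size(0.50):8.1f}")
print(f"near-critical m=0.95: {mean_size(0.95):8.1f}")
print(f"supercritical m=1.20: {mean_size(1.20):8.1f}")
```

The same qualitative picture shows up in many collective systems (flocks, crowds, cities): the interesting behavior lives near the boundary, where small causes can propagate without either fizzling immediately or saturating everything.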
we learn in, you know, basic brain biology class that different parts of the brain specialize in
different activities, though there's quite a bit of overlap. But there's a portion that
focuses on your vision and your name, facial recognition and language.
And we know that from brain injuries, the person loses that ability.
Does this tell us that if you sample neurons from different parts of the brain,
they will behave differently in your circuit?
Or are all neurons identical?
And it's just how they've been trained ever since they were born.
Yeah.
So no, there are different types of neurons that do different things.
So we mainly work with cortical neurons.
Which means what?
So cortical neurons are sort of the neurons that sit on the outside layer of the brain.
They're important for stuff like attention, very high-order cognition.
The good stuff that makes us human.
Yeah.
And if you look at, say, a human compared to a monkey or something else,
the big difference is we have a whole lot more cortex.
That's what gives us our humanness.
And some people have a much bigger reptilian brain.
Is there still talk of this, a reptilian brain?
Look, it's more of a metaphor or model,
a way to think about how it's structured.
If there were one,
I wouldn't want you to use those neurons for me.
Yeah, I was going to say.
Give me the good neurons, not the reptilian ones.
Not the Sleestak neurons.
Yeah, exactly.
But then we can grow other types of cells as well.
So we have some hippocampal cells,
and really the limitations we have on this
is mostly at the moment due to funding and time.
I use bees as an example because I think if we had sort of enough funding and time,
we could recreate bee brain complexity with the synthetic biology and bioengineering tools we have available.
I think it's possible.
We devoted an entire episode of Cosmos to bees.
They're fascinating.
Just the waggle dance of a bee.
Yeah, yeah, yeah. How they communicate,
how they pick up camp and move to another location,
and how they scope it out.
And their brain is this big, right?
I mean, tiny.
Tiny.
Tiny.
But the complexity.
You gotta love any species that communicates through twerking.
Okay, and now I've got that song in my head.
I know, yeah. That's a new one.
Chuck, that's on you.
Chuck, now we'll never, Chuck.
Ken, I'm thinking.
I need a thing they have in Men in Black, you know, where they,
please take that out of my head.
Yeah, that's about it.
Well, Brett, any future thoughts so that we can think nice things about your work
instead of worrying about how one day they'll become our overlords?
I love it, though.
Look, I don't think there's any worry about these things becoming our overlord.
He doesn't think.
You hear that?
He doesn't think there is.
I don't think that.
Hey, look, I'm a scientist.
As I said, I have to leave possibility for the unknown always out there.
Yes.
Helps me get up every day.
I'm sure you feel the same way. It's the unknown
that drives us forward. Of course. It's the
only driver, yeah. I think it's these very
features that make this work so exciting.
The fact that there is going to be parallels.
Even if we do
take it down that engineering pathway.
And it's the fact that
this drive to understand
the unknown and to optimize,
it gives us both the chance to understand ourselves and the world better
and also potentially to provide a platform that can change the way we do so many things
from drug discovery to maybe computation.
One of the things that drove me to this company was the founder, Hon.
When he approached me, he's like,
Hey, look, we're starting this. Do you want to come on board?
I said, Well, what do you want to do?
I love the idea, but what do you want to do?
Is it just going to be to sell out?
And he said, no, no, no.
We want to create a legacy.
We want to change the way things are done.
We don't care about the money.
There's easier ways to make money than this.
This is not a convenient way to make money.
But it is our chance, we think,
to make an impact on the world and for the better.
All right, Brett,
we got to call it quits there.
Thank you for dialing in from Melbourne, Australia
for this call.
It seemed like you were just
right across the street.
He's the man from the future, Neil.
Literally, a man from the future.
Oh, it's tomorrow.
Thank you.
Okay, fine.
Fine.
14 hours ahead of midnight.
Yeah, thank you very much, Chuck, Neil, and Gary.
It's been a pleasure chatting to you.
No, you've opened our eyes and our minds to a number of things.
Thank you.
You got it.
All right.
Gary, good to have you.
Chuck?
Pleasure.
Such a pleasure.
Keep it going, guys.
All right.
This has been StarTalk Special Edition, the SBI version.
Neil deGrasse Tyson here.
As always, your personal astrophysicist.
Keep looking up.