The Changelog: Software Development, Open Source - Biocomputing on human neurons (Interview)
Episode Date: August 14, 2025

Dr. Ewelina Kurtys is leading the way in biocomputing at FinalSpark, where she is working on the next evolutionary leap for AI and neuron-powered computing. It's a brave new world, just 10 years in the... making. We discuss lab-grown human brain organoids connected to electrodes, the possibility of solving AI's massive energy consumption challenge, a post-silicon approach to computing, biological vs quantum physics, and more.
Transcript
What's up, friends? Welcome back. This is The Changelog. We feature the hackers, the leaders, and those building biocomputing. Yes, today we're joined by Dr. Ewelina Kurtys. She's leading the scientific research for FinalSpark on the next evolutionary leap for AI, the leap for biocomputing. This is neurons. This is serotonin. This is dopamine. This is all the things.
Living-thing computing. We learn about digital processors versus bioprocessors, the role of rewarding these things with dopamine and serotonin, how they measure input and output, reading data, exotic use cases, general purpose, who's using it and why, and why it takes 10 years to get to a useful product. Of course, a massive thank you to our friends and our partners at fly.io. That is the home of changelog.com. Learn more at fly.io.
Okay, let's talk about biocomputing.
What's up, friends?
I'm here with Kyle Galbraith, co-founder and CEO of Depot.
Depot is the only build platform looking to make your builds as fast as possible.
But Kyle, this is an issue because GitHub Actions is the number one CI provider out there.
But not everyone's a fan.
Explain that.
I think when you're thinking about GitHub actions, it's really quite jarring how you can have such a wildly popular CI provider.
And yet, it's lacking some of the basic functionality or tools that we need to actually be able to debug your builds or deployments.
And so back in June, we essentially took a stab at that problem in particular with Depot's GitHub Action Runners.
What we've observed over time is, effectively, GitHub Actions, when it comes to actually debugging a build, is pretty much useless. The job logs in the GitHub Actions UI are pretty much where your dreams go to die. They're collapsed by default. They have no resource metrics. When jobs fail, you're essentially left playing detective, like clicking each little drop down on each step in your job to figure out, okay, where did this actually go wrong? And so what we set out to do with our own GitHub Actions observability is essentially we built a real observability solution around GitHub Actions.
Okay, so how does it work?
All of the logs by default,
for a job that runs on a Depot GitHub Action Runner,
they're uncollapsed.
You can search them.
You can detect if there's been out of memory errors.
You can see all of the resource contention
that was happening on the runner.
So you can see your CPU metrics, your memory metrics,
not just at the top level runner level,
but all the way down to the individual processes
running on the machine.
And so for us, this is our take on the first step forward
of actually building a real observability solution
around GitHub actions.
so that developers have real debugging tools
to figure out what's going on in their builds.
Okay, friends, you can learn more at depot.dev.
Get a free trial, test it out.
Instantly make your builds faster.
So cool, again, depot.dev.
Today we're joined by Ewelina Kurtys... or Kurtys... or Kurtys.
Yes, third time's the charm.
A scientist turned entrepreneur with a PhD in neuroscience and 20-plus peer-reviewed papers.
So you're the real deal, Ewelina.
Real deal.
Thank you.
You're welcome.
Thank you for showing up.
Coming on our show and talking about neurons.
Neurons.
This will be an interesting conversation.
I'm a little bit out of my league here, I'm not going to lie.
Because I saw on your website — it's finalspark.com — on the Neuroplatform page, it says instant access to human neurons.
And I was like, what does that even mean?
I have no idea what it means.
So please, demystify a little bit that, and we can dig into the science as well.
So that means that our lab is available remotely.
So everyone from all over the world can access our laboratory through the web browser.
They can log in and they can write Python code to do experiments because everything is connected to real neurons.
because we are trying to build computers using living neurons.
We want to use neurons as a processor because they are very energy efficient.
So that's the reason.
And at the moment, it's still R&D, of course.
And, you know, we don't have these computers yet.
But for the moment, it's possible only to do experiments to try to program neurons. It's not possible yet to process information like images or sounds or videos, but we hope to do this in the future.
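To picture what that remote access could look like from the user's side, here is a minimal sketch in Python. The client class and method names below (NeuroplatformClient, stimulate, read_spikes, release) are hypothetical stand-ins invented for illustration — FinalSpark's actual Neuroplatform exposes its own interface — but the shape of the workflow (authenticate, stimulate, wait, read back spikes) follows what Ewelina describes.

```python
# Hypothetical sketch of a remote experiment; names and methods are
# illustrative assumptions, not FinalSpark's real client library.
import time

class NeuroplatformClient:
    """Stand-in for a remote lab client reached over the internet."""
    def __init__(self, api_key: str):
        self.api_key = api_key  # credentials for the remote lab

    def stimulate(self, electrode: int, amplitude_uv: float, duration_ms: float):
        # In a real system this would send a stimulation request that a
        # digital-to-analog converter turns into an electrical pulse.
        print(f"stimulate electrode {electrode}: {amplitude_uv} uV for {duration_ms} ms")

    def read_spikes(self, window_ms: float) -> list[tuple[int, float]]:
        # Returns (electrode, timestamp_ms) spike events recorded in the window.
        return []

    def release(self, neurotransmitter: str):
        # Trigger a light pulse that uncages a neurotransmitter in the medium.
        print(f"release {neurotransmitter}")

client = NeuroplatformClient(api_key="...")
client.stimulate(electrode=1, amplitude_uv=100.0, duration_ms=0.2)
time.sleep(0.5)                       # wait for the tissue to respond
spikes = client.read_spikes(window_ms=500)
print(f"observed {len(spikes)} spikes")
```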
So we recently had Greg Osuri on the show talking about the AI energy crisis
and all these ways that we can potentially power this new compute demand, which is burgeoning.
And it sounds like those ways were hard, and maybe if you figure this way out, it's way better.
how much more energy efficient is it to compute on neurons versus silicon?
So neurons are one million times more energy efficient.
Of course, this is all estimation, because we can have some idea about this by looking at the human brain, which is built out of neurons, and from that we can have some idea of what the efficiency of such a processor would be.
Okay.
So when I think of a platform where you're provided instant access to human neurons...
I just added the word human in there.
Maybe they're not human.
I just think of...
No, they are human.
So I'm thinking about a bunch of brains floating in, you know, water or some sort of formaldehyde.
There are no brains.
No brains.
I told you I'm out of my league here.
Everything science fiction is just coming out of me here.
No, sometimes you can see on social media such pictures of brains enslaved in the lab,
but that's not what is happening.
So we are just using the same building blocks,
which are in the brain,
but it's like bricks.
You can build a house or you can build something else.
So we just use this building blocks,
but we don't want to build brains in the lab.
We would like to build computers,
which will be totally different,
probably much, much bigger
because we imagine these neurons can have huge structures in the lab,
as we don't have to make it so small
as a human brain.
So these are only building blocks.
So we are not trying to reproduce brain.
It would actually be very difficult, impossible at this stage of science, because the brain is very, very complicated. And there are a lot of little structures, so we don't try to make this.
We just use living neurons.
And there are human neurons, indeed.
And they are derived from the human skin.
So you can reprogram the cells of the skin so that they become stem cells, and from this you can have any cells — theoretically, any cells you want.
Okay, so no brains, but human skin cells and the neurons that are in them?
No, skin cells which later become neurons.
Okay, they become neurons. They're kind of... do you know any of this stuff? I'm over here like...
Kind of. I mean, what we know about human anatomy, and why there's so much curiosity, and why she's studying neuroscience, and how this science fiction era is simply that our brains can compute so well with such a little power requirement. That's why there's the lore, right? 20 watts is what I read — 20 watts to power the human brain. Very little compared to ChatGPT or something like it, right? And to simulate a human brain, you would need a little nuclear plant.
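As a back-of-envelope check — not a figure from the episode, just combining the two numbers mentioned in the conversation — a roughly 20 W brain and the claimed million-fold efficiency gap would put a digital equivalent in the tens of megawatts, which is small-power-plant territory:

```python
# Back-of-envelope only: combines the ~20 W brain figure with the rough
# "one million times more energy efficient" claim from earlier in the episode.
brain_power_watts = 20
efficiency_gap = 1_000_000                 # claimed neuron-vs-silicon ratio
digital_equivalent_mw = brain_power_watts * efficiency_gap / 1_000_000
print(f"~{digital_equivalent_mw:.0f} MW")  # ~20 MW: on the order of a small power plant
```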
So you don't need the full cognitive brain.
So here's what I understand about their brain at least.
And tell me if this even maps to the science that you're doing to discover this stuff,
is that you've got this humanity, which is your frontal lobe.
That's what helps you have rationale, reasoning, et cetera.
If I don't have my frontal lobe, I'm angry at them.
I'm not nice at them.
I don't make good choices.
I make very poor choices.
How do you get to this level of compute without the full brain? How are these cells able to do so much without what I would typically call, like, the human brain, I guess?
Well, so the human brain is actually for many things,
not only thinking, it also runs all our body, controls everything. So that's not always necessary
for the computer. What is the most interesting for us is indeed this cortex part, which is
responsible for thinking, for processing some abstract information.
So we are more interested, most interested in this.
So we would like to, in the future, process information through the neurons, information only.
So we don't try to, for example, to control human body or stuff like this.
So there are a lot of things in the brain which are not really related to the biocomputing project.
So you're effectively using the neurons just, like, as logic gates? Like you're just doing ones and zeros at the end of the day. Or are they not?
Well, yes, we would like to try to reproduce the logic gates. However, neurons work totally differently. And that's why it's so difficult, actually, to build the computers. Because indeed, in the computer, you have zeros and ones. And this is one of the reasons why actually they use so much energy.
But the brain is encoding information totally differently in space and time.
So when we have neurons in our head, it matters when and where in exact location, they are active.
And this is information.
So this is totally different type of encoding.
So no zero ones, actually.
But there are a lot of ways how we can look at the activity of the brain.
For example, how often you have spikes or what are the time in between the spikes, so this electrical activity of the neurons.
So we know for sure it's totally different.
And that's why this project is so difficult
because we have to learn totally,
we have to figure out totally new way of programming,
totally new approach.
It's the same actually as in quantum computing.
It's the same situation that you have totally different hardware,
which is working differently.
So it's necessary to figure out a new way of writing algorithm.
And that's why it's so difficult
because actually someone has to have to,
has to come up with some idea, which would be totally, totally different.
But indeed, at the moment, when we do research on living neurons
or sometimes on some simulations of neurons in silico,
people usually try to follow the rules of digital computers,
like try to reproduce logic gates, which is really actually not correct, but that's the best we can do at the moment.
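One way to picture the difference being described: a digital register is a fixed word of bits, while a spike-based readout is a set of events tagged by where and when they happened. The toy snippet below is purely illustrative — it is not an encoding scheme used by FinalSpark — it just contrasts the two shapes of data:

```python
# Toy contrast between binary state and spike-based (space + time) data.
digital_word = [0, 1, 1, 0, 1, 0, 0, 1]   # value lives in which bits are set

# Spike data: (electrode_id, timestamp_ms) events -- information lives in
# *where* and *when* activity happened, not in a fixed-width word.
spike_events = [(2, 0.4), (5, 1.1), (2, 3.0), (7, 3.2), (5, 8.9)]

for electrode, t_ms in spike_events:
    print(f"electrode {electrode} spiked at {t_ms} ms")
```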
Let me see if I understand this.
I've been grokking some of this stuff from what you're sharing, and then also from your very awesome website, finalspark.com. It says they transform stem cells into mini brains that learn and adapt, growing neurons in an orbital shaker, which I have no idea what that is.
It sounds so cool.
Over the three-month period.
And these mini brains, organoids,
not sure if that's a term y'all came up with or not,
but that sounds cool too.
are 0.5 millimeters in size with about 10,000 neurons that function as real brain tissue.
So you've found a way to take stem cells, grow them over a three-month period.
They used to have a half-life that was even shorter, like a few hours; now they can actually live 100 days. And they get connected to this Neuroplatform with 24/7 access. So essentially, the same way we treat a CPU in AWS, you're doing with stem cells, turned neurons, turned mini brains, turned organoids, that can be a compute platform.
Yes, absolutely.
Although now it's for experiments, so we cannot really process information the same way as in digital.
But, yes, it's available remotely.
And we imagine that actually in the future our lab or our biocomputer will be available remotely as a cloud service today.
Right.
Not quite AWS yet, but you're working your way there.
Yes, absolutely.
How did you get involved in this? Where are you coming from?
So I come from Poland. I was always on the medical side, let's say. I studied pharmacy and biotechnology, and I always wanted to be a scientist. I enjoy working in the lab a lot. I was very fascinated, you know, by cracking my brain, teasing my brain with some ideas and challenges. So I always wanted to be a scientist, and I realized at some point that the brain is the most interesting part to study. So I did a PhD in neuroscience.
I was working on brain imaging.
And later when I moved to industry
because I always wanted also to see what is outside academia
outside this academic world,
I started to work with startups on actually
initially on imaging.
Then there is a lot of AI in the imaging, medical imaging in industry.
So this is how I learned about AI.
And I become fascinated by that.
And I started to discover which opportunities it brings beyond imaging.
So I started to work on commercial applications of artificial intelligence.
And after I started to work on next frontier of AI, which is actually closely related to neuroscience, so on biocomputers.
Can you talk about the imaging?
I think you mean when you say imaging, you're probably referring to like MRIs, like brain scans.
Is that right?
Yes.
So actually I did my research on positron emission tomography.
So this is something what you do using radioactivity.
You put some radioactive substance in the body and the substance goes to some specific places in the body and you can detect this non-invasively.
So you can get the picture, for example, of the brain.
which parts are active, for example, or you can visualize some receptors without opening the brain. But it's actually similar to MRI. MRI is just a little bit different — you don't use the radioactivity, and you can see a little bit different things — but the idea is always the same: to look inside without opening the body.
Right. Yeah, the MRIs are a little different. Which one's more accurate? Is the imaging or the MRI more accurate? It doesn't matter — it's always imaging.
So I think... no, I'm not sure it's a good comparison, because it really depends on the protocol. There are different types of MRI and also different types of PET. So it really depends. It depends on the parameters. And also
they measure different things because PET is always functional. So there is always some, when you use
the radioactivity, there is always some chemical, chemical substance, like even glucose.
Everything is actually chemical substance, everything what is flowing in the body.
So you always observe some process, biological process.
And in MRI, it's not always like this.
Sometimes you just observe the tissue, you know, when you have different tissues, and it's
static, and it's not always functional.
It's not always observing.
This imaging — that's why I asked this question — this imaging is really kind of like the rage, I would say.
But what I mean by that is that there's a lot of study around mental health, ADHD, ADD, you know, trauma, you name it that folks are trying to image brains in these scenarios.
Is that kind of what got you into this curiosity of like how the brain operates from,
different trauma levels or different, you know, prescriptions or descriptions of health
concerns, mental health concerns, whatever it might be.
Is that what got you interested in this imaging process to understand more clearly how
the brain reacts to, I suppose, life?
Well, actually, I was working on something a bit different to what you are talking about
because you say about different activities of the brain during different maybe diseases
or maybe different tasks, cognitive task.
But I was actually working more on inflammation.
So I tried to visualize microglia. Microglia are a type of cell which are around neurons in the brain.
So they actually take care of the neurons.
And sometimes they become very activated, which means inflammation.
And it's believed that this process is actually involved in many neurodegenerative diseases and also depression. So when you have an inflamed brain, you can develop some disease.
Yeah, Alzheimer's.
That's interesting.
Yeah, inflammation is like the number one issue for most people.
Yes.
Right? We're actually inflamed everywhere.
My research was actually about the effect of nutrition on inflammation, and it's funny, because I started to pay attention to everything I eat after I started to do this research. It's really interesting, because I started to look totally differently at my groceries, because a diet can actually be pro-inflammatory or anti-inflammatory. It's very important what you eat, and it can also affect your brain health. And I think now — you know, it's maybe almost 10 years since I did those studies — I see that more and more people are talking about this, about the anti-inflammatory diet, about how important what you eat is for your brain health.
How did that lead you into discovering — I mean, it kind of seems obvious, but how did that lead into AI and your discovery there? Were you leveraging, you know, trained models? Express how you got curious about the AI.
No, actually, I started my first job in industry. I started in the company, which was doing
medical imaging. And actually, it was very good, very good start because there was at least
one thing which I understood at the time, because I had absolutely no idea about how companies work
and, you know, anything about this industry world. So there was at least one topic which I understood well, which was medical imaging.
And this company was doing a service of analyzing images
from different medical studies.
And they talked a lot about AI.
Because when you have imaging data,
you know, when you have a high number of imaging data,
you can analyze them automatically.
In some way, you can use AI for that.
And actually, that's the way I learned about artificial intelligence. Because when you go to different industry events, you see people talking constantly about AI. And I was very lucky also because at
that time I was in London. So this is very good place for learning new stuff and for networking.
So I could get a lot of exposure and to see what people are doing with AI. And I could discover
that there is much more beyond imaging. So that's why I started to be interested in anything you can do with AI.
Well, being able to scan a lot more — no pun intended, really — but just grasp a lot more of these imagings that you're doing, to see the anomalies and see the connection points that you can't really see individually. I mean, that totally maps to me, because the more you can see across different scans, the better.
Yeah, this is the general thing about AI: it can see much more than us and can scan a lot in a very short time.
So how long have you been working on this problem?
On FinalSpark — I met the founders in 2019 at a conference in London. So I started to work with them initially on some other projects. So actually, on FinalSpark, I could say I've been working around three years.
Okay. And there's a platform right now for experiments. You're hoping to get to compute down
the road and a service for that. Is there a straightforward path towards that or are there like
breakthroughs that still need to happen to get from where you are right now to where you guys want
to go?
No, it's very difficult, a very challenging project. That's why we expect to build these real computers in around 10 years.
Okay.
So it's a bit of a challenge when we talk with potential investors, because it's quite a long-term project, and it's very, very difficult, because nobody knows how neurons really encode information. So this is the biggest challenge. So we know that
neurons are active electrically. You know, they are spiking. Spike means that, you know, there is
activity. And we know quite a lot about this, how it happens. However, we cannot really translate
this into some specific information. So, for example, you have text or image. Maybe about imaging,
there is some understanding already in neuroscience. But, you know, when you have, for example,
words, text, it's hard to say how some word can translate to specific activity of neuron.
So at the moment, as I said, many people, you know, do a lot of random experiments.
Also us, we do a lot of trial and error.
So this is why actually we build automated laboratory.
Initially, the idea was to just be able to do as many experiments as we can.
And also a lot of research on neurons are often inspired by what happens in the digital world,
which is not really correct
because neurons are working totally differently
but it's still at the moment the best you can do.
So this is the biggest challenge
that we don't really know
what activity of neurons mean.
And also another thing very important
is that brain or neurons,
any kind of form of also our neurons in the lab,
they are not stable systems.
So a computer you can consider as a stable system. It's dead matter, so it works today in some way and tomorrow it will work the same way. But living tissue is not like this. It can change; the dynamics inside can change. So, for example, today we do some experiments, we send electrical signals to neurons and they react in one way, and tomorrow they can react to the same signal totally differently. So that's also a big challenge — the fact that living matter is plastic. So it changes behavior, actually, also like us.
And our brains, we also change during time.
We can be completely different people.
Can you talk about how you get them to compute?
This is actually a challenge.
So what we do, we try to send them electrical signals
because neurons are placed on the electrodes.
You can see this on our website, finalspark.com.
There is section live.
You can see the readout.
So we send them electrical signals and we measure how they respond.
So how they change the activity, how they change electrical activity.
Another thing, how we try to compute neurons is also by sending them some chemical signals.
At the moment we can send them dopamine or serotonin.
So programmatically we can program in Python that neurons will get dopamine at some point.
So this is why, at the moment, this programming is not really to do some specific task as with computers, but actually to change the behavior of neurons. So this is actually the first step: we want to be able to consistently control how neurons behave — so, what the electrical activity of the neurons is.
So you may have one lazy organoid and one very non-lazy organoid.
Yes, absolutely.
It is biological tissue; sometimes it can vary very much.
Sometimes they can just die. So, you know, we are still learning. And yes, so there is also variability. Yes, absolutely. A lot of things. It can also be a lazy organoid.
Yeah, or depending on the day. Like yesterday it was really productive, and then today it's lazy.
Yes, as I said, you know, every day it's dynamic. It's not a stable system. However, we have some success. It's not all bad.
We were able to store
one bit of information.
So just to give you idea about the stage at which we are,
we store one bit of information in neurons.
That was quite consistent.
And we were able to reproduce this many times.
So we are happy.
There is some kind of progress.
But yes, it's very challenging to get something.
So when you store a bit of information.
Yes.
How do you read it back out again?
Or how do you get it back?
Or how do you know that it's stored?
Yes.
So actually that's quite technical because I can tell you every blob of cells because they are such a blobs, this in Neurosphere's organoid, they are 3D structures, and each is placed on the eight electrodes.
And all these eight electrodes, they measure activity from neurons.
And depending on how strong activity is at each electrode, you can mathematically calculate something what is called.
center of activity.
So this is quite a hard science approach.
And we were able to shift the center of activity.
So yes, no, that was one bit of information.
And this is quite complex, as you can see.
But yes, we were able to have this consistently, this kind of results.
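A sketch of the kind of calculation she is describing: treat each of the eight electrodes as a point with a known position, weight each position by how strong the recorded activity is there, and take the weighted average. The coordinates and activity values below are invented for illustration; only the weighted-centroid idea comes from the conversation.

```python
# Center of activity as a weighted centroid over eight electrode positions.
# Electrode coordinates and activity values here are made up for illustration.
electrode_xy = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
activity     = [0.1, 0.6, 0.2, 0.0, 0.3, 0.9, 0.4, 0.1]  # e.g. spike counts per electrode

total = sum(activity)
cx = sum(x * a for (x, _), a in zip(electrode_xy, activity)) / total
cy = sum(y * a for (_, y), a in zip(electrode_xy, activity)) / total
print(f"center of activity ≈ ({cx:.2f}, {cy:.2f})")
# Reliably shifting this point between two regions is one way to hold one bit.
```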
Consistently across different neurons and across different times?
Yes, and different days.
Because that's always — in general, in bioscience, every time you work with biological tissue — this is important: you have to be able to repeat, because many things work once or twice, but it's important to be able to repeat your results, yes, on different days, on different neurons.
And is the process slow?
Well, to be honest, I cannot tell you how much time it took. Actually, I don't know, but I guess it's in seconds or milliseconds. I don't know.
But generally, I can say that neurons are slow, and in general, neurons will be good for tasks which don't have to be fast. Because also when we look at the human brain and we look at computers, we can see that computers are very good in speed, in doing repetitive things very, very fast. And we will never be able to compete with digital on that. However, the brain is better in complex tasks, because we can solve complex problems using very little energy. So that's where our strength is. But definitely speed or, for example, memory is not something where neurons are better, because also when we look at our brain, it is very limited. Actually, you know, a computer can remember 20 books very easily, and for us it would be difficult to remember every word in 20 books. But for the computer it's easy.
What does this wetware look like?
Like I'm programming my Python experiment.
Yes.
And I'm sending it over the internet, I suppose,
or some sort of VPN connection to you guys.
Absolutely.
And so, of course, it's going over copper wires and Wi-Fi and whatever,
backbones, and then back into your interface,
which eventually translates it into,
I'm imagining there's like a needle at the end of a thing
that sprays some dopamine.
I don't know how it happens. Like, what happens at the end — the last mile of this API?
Yes. So when you send electrical signal, you have a digital to analog converter.
So you have to translate the things from digital world to analog because neurons are
analog.
So you have this digital to analog converter and it is translated into electrical signals
which goes through the electrodes.
So basically a stream of electrons flows to and through the electrodes.
And when you want to send a signal with dopamine or serotonin, then it's connected to the lamp.
So we have a UV lamp. And when the UV light is on, the dopamine is released, because the dopamine is caged chemically — it's encased in a chemical so that it's not active. But when it sees the UV light, then it's released. So it's a way of releasing dopamine to the neurons very, very quickly. So this is how it works. So it's connected to some controllers, which are connected to the lamp, and then the lamp switches on.
Okay.
Same thing for serotonin?
Yes, but for serotonin, we have different wavelengths.
I don't remember which one.
It's not UV, but then we have different wavelengths, yes.
So basically, so that they don't overlap, so you can have both in the medium.
And we, of course, we plan to have more of this and, of course, much more neurotransmitters.
But we started with dopamine, now we added serotonin.
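Putting that description together, the "last mile" is roughly: digital commands go to a digital-to-analog converter for electrical pulses, and to lamp controllers that uncage dopamine (UV) or serotonin (a different wavelength). The sketch below is a hypothetical control flow — the device objects, function names, and wavelength numbers are assumptions for illustration, not the real hardware interface:

```python
# Hypothetical last-mile control flow; device APIs and wavelengths are invented.
def send_pulse(dac, electrode: int, amplitude_uv: float, duration_ms: float):
    # Digital value -> DAC -> analog voltage pulse on the chosen electrode.
    dac.write(electrode, amplitude_uv, duration_ms)

def release_neurotransmitter(lamp_controller, which: str, pulse_ms: float):
    # Caged compounds sit in the medium; a light pulse at the right wavelength
    # uncages them. (Episode: UV for dopamine, another wavelength for serotonin.)
    wavelengths_nm = {"dopamine": 365, "serotonin": 405}  # illustrative numbers only
    lamp_controller.flash(wavelengths_nm[which], pulse_ms)
```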
Are those hormones or those chemicals?
What is the proper terminology to call like dopamine?
No, they're neurotransmitters, neurotransmitters.
So you have several in the brain and they affect learning.
And actually the whole idea of using them is because we use them for feedback.
Because actually the way how humans are learning is by feedback.
So you have interaction with the environment.
You get feedback, so things are going good or bad, and then you learn.
If you should do this or not.
And the same way, actually, neurons are learning in vitro on the very basic level.
Because, for example, when something good happened, then there is dopamine release
and that reinforces the connection between neurons.
So if they have done something good, then it kind of reinforces this behavior.
and actually there are different opinions about how to, how to give a punishment or reward to
neurons, but our idea is to give dopamine as a reward and no dopamine as a punishment. So that's,
that's used for the feedback loop. So for example, you stimulate neurons with some electrical signals,
you measure the behavior.
For example, you want them to increase activity. So if they do this, you give them dopamine. If they don't do this, you do nothing. And then you send the electrical signal again. And there is such a loop, over and over, and you see if they're learning.
And these neurons learn?
Well, that's the problem. Sometimes they learn. Because they learn... yeah, this is still a challenge, you know. This is still a challenge: learning. So learning, for neurons, is changing the connections, changing the behavior of the neurons — and the behavior here is the electrical activity. And this is still a challenge. It doesn't always work.
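That stimulate-measure-reward loop can be written down as a simple closed loop. A minimal sketch, reusing the hypothetical client from the earlier example — the threshold, window, and reward rule are placeholders, not the lab's actual protocol:

```python
# Minimal closed-loop sketch: stimulate, measure activity, reward with dopamine
# only when activity increased. All parameters and API names are hypothetical;
# `client` is assumed to have stimulate / read_spikes / release methods.
def training_loop(client, trials: int = 100, target_gain: float = 1.2):
    baseline = len(client.read_spikes(window_ms=500))
    for _ in range(trials):
        client.stimulate(electrode=1, amplitude_uv=100.0, duration_ms=0.2)
        response = len(client.read_spikes(window_ms=500))
        if response > target_gain * max(baseline, 1):
            client.release("dopamine")   # reward: reinforce the behavior
        # "punishment" is simply no dopamine, as described in the episode
        baseline = response              # track drift in this plastic, living system
```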
Okay, friends, I'm here with a friend of mine, Harjot Gill, CEO of CodeRabbit — AI code reviews. So awesome. So the explosion of AI for developers is very real, as you know. Some call it hype, some call it the future. Harjot, either way,
Code review remains the bottleneck for teams.
What do you think?
How does CodeRabbit fit into this new world?
My message to developers is like AI is here to stay.
We have seen great success with code generation tools, especially the agentic architectures that are getting really good
in terms of exploring your code and solving small issues.
And it's only going to get better from here.
This is like a time when you embrace AI.
Otherwise, like it's like about getting left behind.
And AI is not going to replace developers, is what we have been seeing. I mean, it's just a lot about elevating their role. It's like going from a tank battle to an air battle. Earlier, developers were struggling with syntax and all the mundane and the toil — unit test cases, all the boring stuff — but now we're seeing all of that is increasingly being automated with AI. Fight the air battle, as they say. And the same thing is happening on code reviews. Now you're generating a lot more code, and what's hitting you next is the code review bottleneck. That's where we come in: CodeRabbit, a generative-AI-based code review platform which reasons about your changes and elevates your role as a reviewer. You're not going and finding issues which are surface level at the code — I mean, it goes beyond static analyzers to understand those changes — but it does elevate your role as a reviewer to look at the high-level picture: whether these code changes are aligned with where this product has to be, whether the code changes are aligned with the overall architecture direction of your company. That's where we come in and help.
So how does CodeRabbit work?
CodeRabbit — the great thing about this solution is it works where you work. It's not like you have to adopt a completely new habit or remember to use AI in this case.
So it works.
It deeply integrates into your Git platforms inside your GitHub, GitLab, and other Git platforms.
And in addition, like two weeks back, we also announced a VS code extension, which we have made
pretty much free for all individual users.
So there's no reason not to try it out.
Like if you're already using Cursor and some of these like AI code editors, it's a right complement.
Like, as you are done making your code changes, just trigger CodeRabbit after each commit, and you'll be surprised at the quality of the findings and the issues it will find on top of your AI-generated code.
Very cool. Well, I'm a huge fan of CodeRabbit, as you know. We're using it here at Changelog, and you can see it in action in our pull requests. You can get started today for free, and it's also free for open source. Learn more at coderabbit.ai. Again, coderabbit.ai.
Remind me — dopamine is positive, that's used for rewards. What is serotonin used for? What does it do?
No, actually, that would also be for a different version of reward.
Okay.
Yes. However, you know, if we have to go into the details, it's a little bit tricky. So we are still...
Let's go into the details. Let's get tricky.
Yes. It's always... because actually, with dopamine, it also depends when it is given, and also there are different receptors. Receptors are on the surface of the cell. So if they have dopamine receptors, that means they can recognize dopamine, because they always need a receptor to recognize a neurotransmitter. So it's a little bit tricky, because there are different types of receptors and different timing.
Sometimes it's milliseconds or microseconds.
So the timing also is important for cells, but we assume that dopamine is a reward.
So it's not stable yet.
You're still, some days they do and some days they don't.
And you're learning.
So this is 10 years before it's usable in production, right?
This is total lab learning.
What is it that, and maybe you're still early, you can't answer this question,
but what is it that makes it so variable? Is it just because it's bio and we don't know what we don't know?
Yes, because it's bio.
Because first, there are two reasons, I would say main.
First is because it's bio, so it's unstable.
Second is because nobody knows yet how neurons encode information.
This is totally different than digital.
So this is such a challenge because you have to understand a new way of programming.
But you do have some indicators that they at least generally work the same.
Well, you mean neurons.
Yeah.
Well, like if each neuron was like a snowflake.
You know, every snowflake is unique and it melts.
So it changes.
Then there really would be no like 10 years, 100 years, a billion years,
like there would be no getting there because there's no determinism at all.
Because everyone could just work completely different every time you prod it.
You could never get information.
But you've actually gotten a bit back out again.
So you have proven.
You know, I think it is deterministic.
It's just that we don't know the rules yet.
Right.
That's what I'm saying. You do have an indicator that they do kind of work the same, generally, though. At least there's one thing.
Yes, but the indicator is our brains, actually. Because, you know, we have no doubts that neurons can process information, and very well. This is why we can talk. So, kind of, we can say nature is a proof that neurons are working.
That's fair. We think differently, we all think... I guess we just have to learn how to program them. Have you tried telling them to ultrathink? Sorry, that was a joke. That's from a previous show.
I have been using that, by the way.
I've now said, sorry for a slight aside,
triple check and ultrathink.
That's my new phrase.
That's the keyword, Ewelina.
For certain AI.
They're not thinking best.
Of course, with a neuron,
maybe you just,
you know,
you just keep it analog and just whisper to it.
Ultra think,
you know,
like just walk up to it and whisper.
Yes, you can whisper,
but you know,
they don't have ears. And so they can only understand...
Oh, you need some ear cells. Get some ear cells.
...to translate. Because, you know, usually our ears also translate to electrical signals — that's why the neurons in our head can understand. So you have to learn — that's the whole point — how to encode information so that they can understand.
Yeah. So you guys are just running... I imagine you're just running experiments non-stop, right? Because you're trying to figure out how these things work.
Yes, absolutely. And we are actually constantly building, because we have made huge progress since we started, you know. We built a whole laboratory, a very stable system for working on neurons.
And also now it's available remotely.
So we are also busy with many users from all over the world.
So we invited nine universities from different countries to work with us.
They have access to our lab for free to study also neurons.
And we also have first industry clients who pay us to get access to our lab.
So we are also busy with this.
Really?
Yeah, we didn't plan for this, but people started to write to us that they would like to try,
that they would like to get access to the lab.
And yes, and now we have two types of subscription.
And we have users who are coming to us and testing neurons.
Is this the Betamax versus VHS all over again, in terms of quantum computing versus — would you call this bioprocessors? How would you frame this? Because it seems like you're both trying to solve a similar problem.
Bioprocessor, very good, or biocomputing.
We call it biocomputing, bioprocessor.
So, no, I wouldn't say it's in competition because this is totally different mechanism,
different things.
So we know quantum computing actually is very fast, and it can maybe be good for an encryption
of information.
So it is totally different type of task.
So I'm not sure it will be in competition, but...
Okay, I was thinking more like one may win or one may actually prove to be fruitful in terms of viability.
Well, that's kind of what I was thinking like...
Actually, I think that the future will be that we will have very different type of hardware
because generally you can see this kind of direction.
It's not only quantum, not only biocomputing, people are also working on many specific chips,
also digital, which are optimized for some specific tasks.
So I believe that we will have variety.
So today we have mostly CPU, GPU,
and in the future we will have hundreds or maybe thousands of different chips,
I believe so, which will be optimized on some specific task.
I think this was on a different show where we talked about this, but do you recall talking about slime molds and subway systems?
Yeah, like a couple years ago.
Yeah, just this really fringe... It was recent. I want to say in the last year.
We were talking about the concept of slime molds being very sophisticated.
They used the slime to like design the subway systems or something.
Right.
Routing essentially.
Like efficient pathways to X.
And they compare that to like subway systems in the way we route, which is more like cause and effect.
Really like we were very reactive.
But it's very similar in terms of like bio.
You've got this.
this intelligence of sorts, not intelligence like it's got a body and it can come fight you,
like slime's not going to do that, but it can do its own growth mechanisms.
And I'm not a slime expert, so I'm not trying to pretend.
But just being enamored by the fact that there's some level of intelligence in slime that can predict maps,
just this idea of bioinformatics, biointelligence that can supply this.
In this case, it's obviously a neuron that can provide feedback and computing and stuff
like that, but very similar in nature in terms of like trying to leverage intelligence
built into nature, the world around us.
Yeah, I think people are also inspired by insects or different biological things in computation.
Yes, there are such projects also.
Ewelina, what industries are interested in this? Like you said, you have these users all of a sudden — which industries want this as a thing?
I would say we have three types of users, individuals, fascinated engineers, or small
startups.
Some of them want to do something related to biocomputing.
That's why they want to use our platform.
And big companies, which have R&D teams, very large companies, which have R&D teams,
which want to do some project on cutting edge technology.
The same, actually, as how people do with quantum: they know that it doesn't work yet, but they want to know what is going on, they want to know how it works, because they believe it will work in the future.
Okay, so that's cool. I mean, are they doing... like, what kind of stuff are they trying to do? You don't have to give specific examples.
No, actually, this is confidential — what our clients are doing is confidential.
Sure.
But we have
universities which are using our lab for free and they are going to
publish. So actually, that's why we chose them. We chose those who have the highest chance to
publish.
Sure. That makes sense.
And actually, there will be some papers coming from what people are doing. So I hope everyone will be able to see. And we will be promoting this for sure.
What's at the other end of my Python API call? So we talked about what was at the Neuroplatform end — like,
you know, a UV light turns on or some sort of electrode electrolyzes.
What do I get back?
Like I make a call.
Is it like a one zero?
Is it like a success fail?
Is there more information coming back to me?
Like what do I get back at the other side so I can actually start mapping results or
trying to make sense of it?
So what you get in response is the electrical activity of neurons.
So this is what you can see also on our website, in the Live section. There are a few different ways you can measure the activity of neurons. You can get a yes-no response — this is spike trains. So this kind of data you get is just dots: every time there was a spike, you get a dot. And there is already quite a lot you can analyze — the patterns, you know, you can see if they're more active or less active. This is actually the most common way to collect the data. And it's quite efficient also, because you just have one dot, one point, for each occurrence of the spike. And very often it can be enough. But if you want to be more specific, you can also measure the shape of the spike, because a spike means that the neuron changes its charge, and this will always have a shape, and you can also analyze the shape of the spikes. So this is much heavier data, maybe, but you can also get this.
And of course, then you can have, you know, people try to have different way of interpreting
the data.
This is actually, it's a big room for creativity.
For the moment, we look, for example, how late, what was the delay before we saw the signal?
Or we can see the distance between the signal, for example, how often a neuron is active.
So we try to characterize all these patterns on how they're active.
So that's what you get.
And then you can, yeah, you can do.
There is a lot of signal processing, a lot of analysis of the data.
Sure.
And a lot of data.
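The readout described here — spike times per electrode — lends itself to simple signal-processing features: overall firing rate, latency after a stimulus, and inter-spike intervals. A small sketch with made-up event data; only the feature definitions matter:

```python
# Basic spike-train features from (electrode, timestamp_ms) events.
# The event list is invented; the feature calculations are the point.
spikes = [(1, 2.1), (1, 5.3), (3, 5.9), (1, 9.8), (3, 14.2)]  # (electrode, time in ms)
window_ms = 20.0
stimulus_at_ms = 0.0

rate_hz = len(spikes) / (window_ms / 1000.0)              # overall firing rate
latency_ms = min(t for _, t in spikes) - stimulus_at_ms   # delay to first spike

times_e1 = sorted(t for e, t in spikes if e == 1)
isis_e1 = [b - a for a, b in zip(times_e1, times_e1[1:])]  # inter-spike intervals

print(f"rate={rate_hz:.0f} Hz, latency={latency_ms:.1f} ms, ISIs on electrode 1: {isis_e1}")
```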
Can you target a specific neuron or organoid to like make sure that your call goes to the same
place every time or no?
No, actually you have eight electrodes. So every electrode is at a little bit different place on the organoid. And then you can target specific electrodes.
And you can, for example, use only a few of them.
Or you can use, for example, four of them for sending signals and four of them for receiving signal or some other combination.
So that gives you some room for playing.
And also, you know, what is also interesting is that not every electrode is always active, because sometimes you might have less signal or no signal at some of the electrodes.
So it's really complicated.
It's very difficult to work with the living tissue.
Sometimes they are just not active also.
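One way to think about the eight-electrode layout just described — some electrodes for sending, some for receiving, some possibly silent on a given day — is as a small per-organoid configuration. A hypothetical sketch, not the platform's actual schema:

```python
# Hypothetical per-organoid electrode configuration, mirroring the
# "four for sending, four for receiving" split described above.
electrode_roles = {
    "organoid_A": {
        "stimulate": [1, 2, 3, 4],   # electrodes used to send signals
        "record":    [5, 6, 7, 8],   # electrodes used to read activity
        "silent":    [],             # electrodes with little or no signal today
    }
}
```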
So how do you know when they're about to die?
You mentioned inefficiency.
You mentioned one day doesn't work the same as the next.
We know they, you know, earlier in your research, they would die in hours.
Now they die in hundreds of days, I think.
Help me understand terminology, is that right?
And how do you know the inefficiencies aren't because they're about to die?
Again, I don't know if that's the right terminology to use or not.
So there is at least one thing which is easy.
So this is easy to see if they die or not because they are not active.
So living neurons, they are spontaneously active, electrically.
So they will always produce some spikes and you will see them on the electrodes, on the measurements.
So this is quite easy.
to say that they are dead.
If there is no activity, you assume they are dead.
Okay.
And actually, you are right, because also batches are different.
Sometimes they are, and this is also what you can see on our website,
because a few of our Neurospheres are monitored there.
And you can see that the activity is not always the same.
Sometimes it's active, sometimes more, sometimes less active.
So all this you can see very easily on the electrodes, when you measure the activity.
Yeah.
One thing I think is interesting, too, is the environment it has to live in, which, you know,
we talked a little bit about quantum computing, and then, comparative to biocomputing, that there has to be a sterile environment, no viruses. Can you talk — I know you're in a lab, or at least the early days of research and stuff — but what is the environment, and how will that potentially scale to a usable product at the long tail of usage? What does the environment these things live in look like?
Yes.
So environment is very important.
Neurons are very fragile.
And the environment has to be physiological, so the same as in our bodies.
So there has to be physiological temperature.
So there has to be, of course, always liquid around.
So neurons are in the medium.
So this is water with different substances, which keep them alive, which also feed them.
And all this is very, very important, like pH, temperature, everything,
even small vibration, everything is really important for the neurons to be stable,
and it has to be very strict.
Otherwise, the activity can change or they can die.
And this is why also we believe in this bioservers, in this central servers idea,
because we think that it will be easier to control these conditions for the neurons when they are in a central server. So that's also the reason why we...
So not likely to have a home version of this in the early stages of this.
Like you want to centralize it at some sort of data center or a space where the environment can be better controlled.
Yes, absolutely.
And we imagine having the same as what we have today, but much bigger.
How big is what you have today?
Well, now we have two rooms for the laboratory.
So we are growing.
We started with one little lab, and our neurospheres are a few millimeters in diameter, 10,000 neurons each. So they're very, very small, but for experiments it's enough. And in the future we imagine having huge structures, even 100 meters long, of neurons. So that's how we imagine the future — much, much bigger.
Yeah.
I have so many questions about the details of that, but you can't really ask them until you
guys know how they work exactly, because I think a lot of those decisions will be based on how they work, like how many neurons will I need to do a thing?
And it's like, well, we don't know because we don't know how they work exactly yet.
Yes.
It seems like it's going to be exotic use cases.
And I imagine as somebody who's been, you know, at the doctorate level, the PhD level of
this from neuroscience to, you know, this laboratory stage that you probably see at least
some very exotic use cases. It has to have a unique environment. You plan to centralize it to
offset that. But I'm sure there are unique scenarios where, like, this may be finely tuned or very specific to a certain type of task versus general computing. It's not going to be in my iPhone — maybe at that point it will be an iPhone, like a literal iPhone. Anyways, are there any unique, exotic scenarios or use cases that you already see, even though you're in the science stage, where this may apply?
Well, actually, we aim for general computing, but of course,
not everything. By looking at human brain and also thinking about what is done now in digital,
we think that every, maybe not every, but many tasks which are done by artificial neural networks
will be much better to be done on biological neural networks.
So for example, generative AI, we believe could be better on the real neurons.
Really? Okay.
So maybe that's the first place where... because we started off talking about the energy crisis, that is obvious. That seems to be the obvious reason why biocomputing is the first platform — it potentially requires a lot less energy.
Absolutely. In general, yes. Yes, we go for general computing, which will be much, much cheaper and very competitive to digital.
Are any of your customers well-capitalized evil geniuses who just want to, like, electrocute some neurons because they're just enjoying it? Gru? Maybe? Gru is like a... like a Doofenshmirtz, or maybe like a Moriarty. Any Moriartys?
I'm just messing with you, Ewelina.
I've just run out of actual questions.
Adam, anything else for her?
I mean, this is interesting stuff. I think there's definitely a lot of work to be done.
Yeah, I think it's really really about the,
I was just thinking like, where could it be used?
Where do you see it being used?
I'm surprised at general because it seems like it's,
that's the long road.
Like the short-term road would be specialized use cases
where you can control the environment.
have potentially really rich clients that have blank checks that can give you four rooms versus two kind of thing
I'm thinking like that versus general computing but I guess I was wrong no actually we aim for general
computing we think it will be general computing what could be a real revolution revolution
post silicon Jared post silicon this is bio computing human neurons wow I'm
I'm looking forward to it. I can't wait to see what happens next. I'm so surprised by what's
happening today. Yeah, I had no idea. Let alone what the future may hold from this. So cool.
Thanks, Ewelina. Thanks for coming on the show and telling us all about it.
Thank you so much for the nice questions and nice discussion.
Okay, 10 years is a long time to wait for anything, right? Kind of want it right now. How about you? Could you imagine, though, being in the lab for 10 years, eking out all the details, and finally getting to a point where it's useful in some way?
I couldn't imagine that personally.
It's a long road, but the long-term payoff for humanity could be tremendous.
Well, we're back in the saddle officially after being in Denver for our live show.
If you missed it, check it out.
changelog.com slash live.
The recordings are there.
The details are there.
And stay tuned for our next live show.
We do have a bonus on this episode for our plus plus subscribers.
changelog.com slash plus plus.
It's better.
You know, it is better.
You drop the ads, you get closer to that cool Changelog metal, you directly support us, and you get bonus content.
It's awesome.
It's better.
Learn more at changelog.com slash plus plus.
Of course, our friends at Fly, our friends at Depot.dev, and our friends over at CodeRabbit.
We use Depot, we use Fly, we use CodeRabbit.
We love all three.
You should try them all out and tell them we sent you.
Of course, big thanks to Breakmaster Cylinder for those beats.
That's it. The show's done. We'll see you on Friday.