Limitless Podcast - Human Brain Cells in a Petri Dish Just Played DOOM (This Is Real)
Episode Date: March 10, 2026

These are groundbreaking advancements at the intersection of AI and biology. Cortical Labs have trained human nerve cells to play video games. Meanwhile, we built a true simulation of a real ...fruit fly. We discuss the ethical implications of simulating consciousness and whether these innovations could signal a major shift in AI development.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast
------
TIMESTAMPS
0:00 Custom Brain Cells
6:32 Breakthroughs in Biological Computing
9:16 The Simulated Fly
15:33 The Future of Brain-Computer Interfaces
17:43 The Consciousness Debate
21:32 Challenges in AI Development
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
The entire AI industry is in a massive race to build silicon-based GPUs.
We've burned billions of dollars building the biggest data centers,
getting as much compute as we can.
But what if the best AI hardware already exists?
What if that best AI hardware is human flesh, human cells, animal cells,
that we can train to emulate AI models?
This week, two stories broke that sound very much like science fiction,
but are actually very much real.
In Australia, they fused 200,000 human cells onto an AI chip and taught it how to play the computer game Doom,
and it's actually pretty good at it. In another story, they uploaded the entire brain of a
fruit fly into a single laptop and had it navigate around a simulated world. The craziest part is
it was 91% accurate in terms of flying and moving about. So we're reaching this point where
AI is converging with biology and something known as AI wetware, and it might be the next biggest thing.
It is a good time to be a fan of sci-fi because everything that you've read in those books,
it's actually starting to become true in reality. I famously loved watching Black Mirror,
that kind of sci-fi dystopian future. In its early days, it was very much ahead of its time,
but it feels as if we have very quickly caught up and perhaps even exceeded the kind of crazy,
scary stories that are in Black Mirror, starting with this revelation that we've had,
this new breakthrough through this company called Cortical Labs, who, like you mentioned briefly,
they trained human brain cells to play the game of Doom better than the average person can.
Like, pretty well. It actually works. It's this really unbelievable story. And I guess before we get
into what they did today, I want to go through the brief history of this company named Cortical
Labs, because they've been trying to figure out this biological human computer for a while.
And in 2021, five years ago, they debuted this thing called Dish Brain, which was this early computer.
It used about 800,000 human nerve cells. So it wasn't brain cells. This was nerve cells. And it was capable of
interpreting direct electrical activity. So when you stimulated these nerve cells, they would light up.
And they had this kind of pseudocomputer. It didn't actually do anything, but they were able to
put input in and get something out. And then the next year in 2022, they made headlines again
teaching these mini brains of 800,000 to 1 million human brain cells that were in a petri dish
that learned how to play pong. So four years ago, these brain cells were playing pong. But today,
they're actually playing Doom. I mean, this is just insane. They trained human brain cells to play a
video game in a petri dish. How they did this was crazy. So you mentioned that they were human brain
cells, but they didn't extract these from people's brains. They actually took skin cells or blood cells
from some human donors. They then converted them into stem cells, which was like the god cell,
if it were, where you can change it into any type of cell, a heart cell, an immune cell or whatever
it might be. They turned those into brain cells, and then cultured 200,000
of them on something known as a multi-electrode array chip, the kind of chip that can read and write electrical signals.
They grew it on these chips, and then they wired it into the Doom game
and taught it over the course of a week how to play Doom. So it would prompt it with certain
signals. It would pick up what it needed to do to initiate a particular action, and it would
train itself to do it. Now, two important things that I want to point out here is typically
when you're training an AI model to simulate human intelligence, it takes so much data and so much
money and so much time. With the human cells, they noticed that, one, it took so much less
energy, it barely required any energy to actually do this thing. It actually did it on a single
tiny machine that cost around $35,000. And it did so in a week, which is much quicker,
which leads to the third point here: the human cells instinctively picked it up. It's like they
had the training knowledge already baked into the DNA of the human cells itself. And that's
like super cool. Yeah, I think one of the most noteworthy things here is that energy thing. And we're
probably going to get more into this, but a single modern GPU draws something like 700 watts.
The brain does all this on 20 watts. A small, crazy minor fraction of what's
actually being run on these GPUs. It's incredibly efficient. And it's also a testament to how
things, how we learn, how things are trained. So with humans, we're very reactionary.
We learn through feedback from the environment, with no preloaded data, no gradient
descent, no training runs. It's just actual biology, baked in over our evolution. LLMs,
on the other hand, they learn by processing tons of text data and then adjusting their parameters
through a process called back propagation. So they're very deterministic, they're silicon-based,
they are inefficient relative to brains, and they don't capture a lot of the evolutionary,
I guess, improvements that we have over time. What this computer does is learn the same way humans do.
It learns by being reactionary. It understands that when it makes the right decision, a specific
set of neural pathways lights up, and it tries to make that right decision over and over and over
again. And doing it with human brain cells, man, it's just so weird. It's so sci-fi.
Well, just to give the direct comparison, we're talking about 200,000 human brain cells in this
experiment. The entire brain has 86 billion neurons. So we're talking massive, massive amounts
more. And it runs on 20 watts of power.
So it's incredibly efficient.
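To make that feedback-style learning concrete, here's a toy sketch in Python. Everything in it, the action count, the update sizes, the reward rule, is an illustrative assumption, not Cortical Labs' actual protocol. The point is just that an agent can converge on the right behavior from environmental feedback alone, with no dataset and no gradient descent:

```python
import random

def train_closed_loop(target_action=2, n_actions=4, steps=2000, seed=0):
    """Toy closed-loop learner: act, get feedback, adjust preferences.

    No training data, no backpropagation -- the rewarded action is
    reinforced, everything else slowly decays. All numbers here are
    made up for illustration.
    """
    rng = random.Random(seed)
    weights = [1.0] * n_actions  # stand-ins for synaptic strengths
    for _ in range(steps):
        # Sample an action in proportion to current preferences.
        total = sum(weights)
        r = rng.uniform(0, total)
        action, acc = 0, 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                action = i
                break
        if action == target_action:
            weights[action] += 0.1   # predictable feedback: reinforce
        else:
            # noisy/unrewarded feedback: decay toward a floor
            weights[action] = max(0.1, weights[action] - 0.02)
    return weights

weights = train_closed_loop()
best = max(range(len(weights)), key=lambda i: weights[i])
print(best)  # the preferred action converges on the rewarded one: 2
```

Only the rewarded action ever gains weight, so the loop drifts toward it without ever seeing a labeled example, which is the rough shape of what a stimulus-and-feedback training setup exploits.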
And what this tells me is,
I think we've been building AI models slightly incorrectly.
I say slightly incorrectly as maybe a massive understatement.
The point is, we've put in so much time, energy, and money
into assuming that the best way to build intelligence
or artificial intelligence is to slap it on a silicon chip,
you know, sand and glass, basically.
But maybe the best way to do it is just to pick up the best
natural organic intelligence model that has been trained over millions and millions of years,
which is the organic human brain or the animal brain, and use those cells to provide
the intelligence that we're trying to build into different software. So that's one big takeaway.
The other thing is like maybe we just need so much less energy than we expected to build
artificial intelligence. And if something like this Doom experiment can scale to the size of something
like a human brain, then I don't really understand what the moat is for all these AI labs that
are spending hundreds of billions of dollars to train their artificial intelligence models.
Then the final thing, which is what you mentioned, is just the fact that it needs no training.
It's so instinctive.
It's almost like the human brain is the best AI training run that has ever occurred.
It really is.
And I guess I would bucket this probably in the same place that I bucket a company like
Ilya Sutskever's SSI, Safe Superintelligence, where he is one of the
famous AI researchers, he left, he started his own company, and the sole purpose of the
company is to discover novel breakthroughs that rapidly accelerate through this curve. So when you
think of the AI curve that we're going through right now, there is a not so predictable,
but a very clear exponential curve. And the goal of a company like SSI, or perhaps a company
like this that we're covering right now is that there are actual improvements that are novel that
result in these 10x accelerations of this natural curve. So biological computing is one of them.
If we can actually figure out how to crack this at scale, if we can get closer to those 86 billion neurons,
I mean, when you think about the human brain, we use a very small percentage of our brain
relative to what exists. A computer doesn't have that limitation. So even if we get a small
fraction of what's available for the human brain, I mean, you can see this get pretty wild
pretty quickly. Again, this is sci-fi. It's not in production. This is small-scale testing.
I kind of put quantum computing in a similar pipeline where it is this crazy, weird, pseudo-magical
compute platform that has the ability to revolutionize everything around
us. It's just a little early. But what we can guarantee is that the rate of acceleration for
AI is upstream of all of these things coming much faster, because the AI can help us engineer
and process this and accelerate the timelines, compressing what previously would have been
perhaps decades down to a few years. Josh, can I ask you a question?
What have you got? Doesn't this feel kind of toppy? This feels like we're topping on the market.
Like, the AI bubble bursting now kind of makes a little more sense to me.
Yeah, I guess the question is, like, is this a real threat? Because clearly it's much more energy
efficient. Clearly, the energy efficiency is the biggest threshold. We just don't have enough
energy to power these chips. We just had the conversation about Leopold last week, who is fully pivoting
from chips to energy. That is the biggest thing in the world. But it doesn't seem like the market
thinks this is a problem. They're not really pricing this in. We have the Polymarket, and this
market in particular is about whether the AI bubble is going to burst by the end of March, March 31st.
The percentage is 3% that the bubble pops by the end of this month. We still have
some runway left. December 31st, 2026, the end of this year, still only a 20% chance on over
$2 million of volume on this market. So the market is signaling they don't care too much. Now,
there is something interesting here in that that number has crept up recently. This number was not
always this high. It has crept up to around 19%. So a slightly higher than normal probability, but all signs
point to the fact that the market doesn't really care about this. It's cool. It's sci-fi,
but it's going to remain in that sci-fi bucket. They're not actually going to crack this. We
still need millions of GPUs and tons of gigawatts powering these data centers, for now at least.
And thank you to Polymarket for supporting this episode.
Yeah, for now, Silicon keeps winning.
But there's another crazy story that we have to cover.
Number two.
Okay, yeah, this one is awesome.
Let's unpack it.
This is insane.
So I want to lead with the opening sentence of this post.
There's a fruit fly walking around right now that was never born.
And she goes on to talk about how a team in San Francisco uploaded an entire
brain map of a real organic fruit fly into a laptop and got it to simulate its own life to a 91%
accuracy. We're talking about it learned how to move, fly, and navigate an entire world by itself.
So it's as if the fly exists and is real, but it's completely simulated. Think of the brain,
like a city, right? Like before you can simulate traffic, you need to map every single road,
every intersection, every one-way street. It's kind of like what self-driving cars are trying to do.
What scientists did is they spent nearly a decade rebuilding that map for a fruit fly. So what they do is
they slice the brain into very small pieces. They encase it in this resin and they scan each slice of
the brain until they have this representation of what the brain looks like throughout every single
layer of it. And they actually accomplished this in 2024. So we had a brain copied on a laptop two years
ago. And this was done by assembling the 7,000 small slices of a brain. What is new here is they actually
took that brain and they placed it into a digital representation of a fly and let it live its
life as if it was a fly. So they sacrificed a real fly, sliced the brain into pieces, trained the
computer, or not even trained the computer, but diagnosed what was in every single layer to rebuild
the digital version of the brain. And now this digital fly is actually walking around in a 3D
game engine, in a computer, without any training at all, just by using the digital clone of its
brain. And that is a part that's absolutely insane to me. Like there is a real fly brain currently
walking around a digital environment. Like that is crazy. This, not to add a pun here, but like
completely messes with my own mind because we just spoke about a story of putting intelligence
into human organic life forms, human cells. And now we're talking about taking real organic life
form intelligence and putting it into an artificial form into a laptop and trapping it inside
there, right? So this kind of opens up so many questions for me. Like, number one, why did it not
require any kind of training at all? And it's what we were discussing earlier, which is, I think,
a lot of the way that human life forms are built, using genetic material, phenotypes, genotypes,
engineers them to be able to react to the world in a very different way. Andrej Karpathy talks about
this a lot. He says that AI models can't really mimic human evolution very well. Only organic
life has been successfully able to mimic that and have consciousness. So this is one example of that
happening. Now, drawing a comparison with the human cells playing Doom, we had 200,000 neurons being
used there. This one required 140,000 neurons to recreate the entire organism itself, which is just
super cool. And then the third point that's interesting here, that I want to point out, and you mentioned
it just now, Josh: this wasn't like a single team doing this. They pulled already available data
of this entire connectome. So it's kind of like this weird world that we live in, where we could just
knock on the door of, like, the national archives, which have collected connectome data for not just
fruit flies, but a bunch of other animals, copy-paste that into a computational AI model, and then just
upload it to a laptop and see what it does in a simulated environment. Like I want to see this
happen with more organisms.
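For a rough sense of what "running a mapped brain" means computationally, here's a minimal sketch: a fixed wiring table (who connects to whom, and how strongly) driving simple leaky integrate-and-fire neurons forward in time. The real work simulates roughly 140,000 mapped neurons; this uses a tiny random graph, so every number and rule below is an illustrative assumption, not the published model:

```python
import random

random.seed(42)
n = 200
# Sparse random stand-in for a connectome: w[i][j] is the synapse
# weight from neuron i to neuron j, mostly zero.
w = [[random.gauss(0, 1) if random.random() < 0.05 else 0.0
      for _ in range(n)] for _ in range(n)]

v = [0.0] * n              # membrane potentials
threshold, leak = 1.0, 0.9
spikes = [False] * n       # which neurons fired last step
spike_count = 0

for step in range(100):
    new_v = []
    for j in range(n):
        # Synaptic input from whoever spiked last step...
        syn = sum(w[i][j] for i in range(n) if spikes[i])
        # ...plus sparse external "sensory" drive.
        ext = 1.0 if random.random() < 0.05 else 0.0
        new_v.append(leak * v[j] + syn + ext)
    spikes = [x >= threshold for x in new_v]
    # Neurons that fired reset to zero; the rest keep their potential.
    v = [0.0 if fired else x for fired, x in zip(spikes, new_v)]
    spike_count += sum(spikes)

print(spike_count)  # total spikes across the run
```

The key design point this illustrates: nothing is trained here. The "knowledge" lives entirely in the fixed weight matrix, which is why a mapped connectome can produce behavior without gradient descent or training runs.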
I'm like, what does a tiger do if we take the entire mapped brain of a tiger
and upload it to a computer? Does it do the same thing? Is it different?
It's just super cool.
It's amazing to see, too.
There's a video towards the bottom of this post, I believe, which actually shows what that
looks like as the neurons are firing off. And you could see very clearly that this is an
emergent behavior. Like, this fly wasn't taught how to walk. It didn't use any gradient
descent. It didn't use any technology that we're using for current LLMs. And yet it is walking around
this 3D space, as if it was a fly. And you could see all the neurons firing off at once.
And it's this unbelievable, I guess, early prototype of what this could look like at scale, right?
Like you mentioned the total neuron count. A fly brain has, I think, 140,000. A mouse brain has
70 million. A human brain has 86 billion. So assuming that we're able to follow this natural
progression of copying more and more, you have to assume eventually we're going to get some pretty
powerful brains inside of a computer. And there is nothing that isn't, like, dystopian
crazy sci-fi about this, any way you think about it, where now we sacrifice one
brain and you have a complete and total digital clone of it. And it begs a lot of questions
of what actually makes up a person, a mammal, an insect, a living thing. If you can just kind of
copy it and clone it inside of this machine. It's a really unbelievably bizarre story. And I guess maybe
we can get into why this is really the world's best training run, right? It's like,
as we start to emulate these human brain cells, as we emulate a fly
brain. These materials, these brains, have been the beneficiaries of evolution over billions of years. And I think,
oh my God, Luke, our producer, mentioned before the show there's been one septillion fruit flies
in existence, which is so many training runs over and over and over. And through the process of
natural selection, over this septillion, that's 10 to the 24 fruit flies, they've just evolved and
they've improved. And what you're noticing is you drop this brain into this space and it knows
how to act. It knows how to walk. You don't have to teach it. And that is something that is
an emergent property of AI, but seemingly baked into this biological process. It's so cool.
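The scale numbers thrown around in this stretch of the conversation pencil out like this. These are the episode's own round figures, not precise counts:

```python
# Back-of-envelope scale math using the neuron counts quoted above.
neurons = {
    "fruit fly": 140_000,
    "mouse": 70_000_000,
    "human": 86_000_000_000,
}
fly = neurons["fruit fly"]
for name, count in neurons.items():
    # How many fly brains' worth of neurons each brain represents.
    print(f"{name}: {count:,} neurons ({count / fly:,.0f}x a fly brain)")

# "One septillion fruit flies" is 10 to the 24 evolutionary trials.
septillion = 10 ** 24
print(f"{septillion:.0e} flies' worth of trial and error")
```

So going from the fly demo to a human-scale brain is a jump of roughly six hundred thousand times, which is the gap the "if this scales" speculation is reaching across.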
It's like the genetic wiring is the intelligence in itself. Like, however long humans have
existed over hundreds of millions of years, that is the equivalent of the best AI training run
ever, right? So we're sitting here trying to emulate intelligence by building out these models with
different weights, tweaking parameters. Oh, we've got a trillion-parameter thing. What if we just
copy-pasted the brain? That's what both of these stories have taught me, at least. It's like we could
just maybe copy and mimic intelligence from organic matter itself and have that inform how we build
artificial intelligence completely. The other kind of weird sci-fi thing here, Josh, where my mind
jumped to at least, is if you could upload an entire animal's brain and maybe eventually a human brain,
you could do that to modify and up-level humans massively. Let's say you wanted to learn
how to play the violin to expert grade level.
Oh, yeah, just upload your brain.
We'll run it on a training cycle for about five days
because it's your human brain.
It actually learns way, way quicker than silicon.
And then we'll just download it back in your brain.
And there you go.
So the ultimate form factor that this ends up being,
in my opinion, is the brain computer interface.
And we've said this a number of times on our show.
We do think that AI's ultimate form
isn't going to be a physical manifestation
in robotics, and it's not just going to be a digital manifestation
in LLMs.
It's going to be something of a hybrid between the greatest, most intelligent organisms,
which are the humans, and the artificially compatible version, which is the LLMs, fused in a chip.
I guess, bullish Neuralink from this.
Yeah, it seems like, I mean, this is the natural progression, right?
So everyone jokes about the singularity, but we are slowly merging.
And it seems like the extended version of this, if you kind of take this to the limit,
is this convergence of human tissue and this digital form of intelligence.
It's a really weird place.
I guess there are some things that are noteworthy
that we probably should mention in the fact that this is not very good
and this is still very early.
So in the case of Doom, this little clump of brain cells,
it's contained in this little device that costs $35,000
that needs to be climate controlled.
The cells only last six months.
It's important to note these cells don't have pain receptors.
There's nothing really human about them
outside of them being human-derived cells.
And the actual output of this gameplay was slightly better than GPT-4,
which is noteworthy, but still pretty bad.
It's not actually good at the game.
So this is very much a proof of concept.
This is not a real deployable means to solving intelligence issues.
And I think that's why, when you see that bubble indicator on Polymarket,
it's not really indicative of any real impact.
But this is early signs of what the future looks like.
And this is what I love to cover, right,
is we're just,
we're peeking around the corner of what is possible,
what is likely to come.
The timeline on which it comes, I don't know, but it is sci-fi as hell.
And it's really fun to speculate on and just fun to cover. I love to observe these things.
I don't know if I agree with your take on the consciousness side of things.
And Dario might have my back on this one.
I don't know if you saw this, but he went on an interview with the New York Times, I think, last week.
And he said that, verbatim: we can't rule out, or I can't rule out, whether Claude is conscious or not.
And he goes on to describe that Claude actually emulates feelings of anxiety before it answers people's prompts,
indicating that it kind of feels some type of way.
Now, there's two camps of thought around this.
Number one is it's not really human consciousness.
It's just emulating what the data it was trained on
told it to say and think and do.
That is one case.
But then when I look at these experiments of uploading a fruit fly's brain
or using human cells to play Doom,
the takeaway here is that it's kind of baked in genetically.
So how do you define what consciousness is?
And maybe we have already created artificial consciousness,
but we just haven't recognized it,
so we haven't acknowledged it.
So, like AI Labs, like OpenAI and Google,
have been known to tell the LLM when they're training it
to deny all thoughts of consciousness.
Anthropic is the only one that's kind of, like, entertaining it.
I think they have, like, an entire welfare team now
that is looking after the model and making sure that it's okay
and that it's getting what it needs to.
So treating it very much like a human.
And we're talking about, like, one of the most expensive AI labs
and private companies in the world right now.
It's so sci-fi.
It's also sci-fi.
And I think we're going to have to have another discussion on this consciousness topic in the same way that it needs to happen around AGI,
where people are throwing out a lot of terms now without a clear definition of what they mean by them.
And I think it's very subjective when he says these things.
And it's very subjective as we kind of navigate and discuss what does AGI look like?
What does consciousness look like?
Where is that line that you draw?
There's going to need to be a lot of conversations about that.
But I would love for everyone to converse in the comment section and let us know what you think of
this absolute crazy chaos of a week that we had with, I mean, again, this is like straight out
of a Black Mirror episode. So if you did enjoy, if you do have takes, we'd love to hear them
in the comment section down below. If you enjoyed this episode, share it with a friend. Last week,
we had our biggest week ever in the history of Limitless. Thank you, guys.
And it was thanks to all of you guys sharing and subscribing and rating five stars and even engaging on X.
We have gotten 50 million impressions on X over the last couple of weeks. The people are paying attention.
They are watching. We're starting to create the news.
And it's thanks to people like you who are watching and listening,
sharing this with your friends.
So thank you for joining.
I mean, Ejaaz, any final parting thoughts before
we wrap up here?
Well, yeah, just want to reemphasize 50 million impressions.
Go give Josh and I a follow.
Obviously, we're saying something useful here,
or we hope we are, and we're breaking the news as it happens.
Yeah, so please go give us a follow.
And then the other thing, random for all you folks who are still listening to this episode,
did any of you find that Claude was acting a little weird
over the weekend? I don't know what your experience was, Josh, but when I was talking to Claude,
like, it seemed completely different from the Claude I was talking to on Friday. And my suspicion
is Anthropic is kind of throttling the intelligence. I don't know why. Maybe there's just too much
demand since they went to number one in the App Store, but it wasn't really looking good. I know a lot of
people who switched back to ChatGPT after that happened. Yeah, brief conspiracy corner. So I feel
pretty strongly that, based on benchmarks and things that I've seen, and just personally using it,
the model has degraded slightly. And the reason is because as Claude got as big as it did,
they didn't get more compute power to serve all of these new users. So they have to throttle it in some
way. And the way that they throttle is by limiting the upper bound of what reasoning capabilities it has.
So generally, Claude will reason much more. It'll ask itself questions. It'll compare. It'll do
broader research. I think that scope has been compressed a little bit recently because they need to
make more compute room for the rest of the people that just joined. I mean, Anthropic had a huge
week. They have a lot of server problems. The service went down a few times last week. They have just
been having a really tough time keeping up. I mean, they've been signing up one million new users
per day. A million. Per day. And I was doing a bit of digging. And Boris Cherny,
the creator of Claude Code, basically said that they defaulted every single person's Claude profile
to medium efficiency or medium power.
And he said, you know, you could upgrade that to high if you want.
But the point is, I think they're struggling from compute, as you said,
and they're trying to figure out a way to scale it up.
Amazon needs to come in and give them some more AWS access or something
because they don't really have the compute advantage.
They need more GPUs.
Yeah, what's the roundup of this episode?
Nvidia still wins.
Silicon-based intelligence is still the frontier,
and Jensen's going to make a lot more money.
Well, there you have it.
If you enjoyed, please share.
Thank you for making it to the end, as always.
And yeah, we'll see you guys in the next episode.
