Big Technology Podcast - Brain Computer Interface Frontier: Movement, Coma, Depression, AI Merge
Episode Date: September 3, 2025
Dr. Ben Rapoport and Michael Mager are the co-founders of Precision Neuroscience, a company building a minimally invasive, high-resolution brain-computer interface. The two join Big Technology to discuss the modern-day applications of BCIs and frontiers of the technology, including computer control, stroke rehab, decoding consciousness in coma patients, AI-powered neural biomarkers for depression, and the long-term prospect of merging human cognition with machines. Tune in for a fascinating look at the potential of one of Earth's most promising technologies. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Could mapping the mind through brain computer interfaces allow us to one day build a foundational model for the brain?
We'll find out on a special edition of Big Technology Podcast right after this.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
We are talking a lot about brain computer interfaces on the show these days, and there's a reason for it,
because the vision extends far beyond just allowing people who are paralyzed to be able to move a cursor on the screen.
And in fact, the technology can be applied in far broader ranges and far broader use cases.
And we're going to talk about it today.
We're lucky to be joined by the founders of Precision Neuroscience.
We have Michael Mager here.
He is the CEO.
Michael, great to see you.
Thanks for having us.
And Ben Rapoport is here, the co-founder and a practicing neurosurgeon, here to tell us all about how this technology works.
Ben, great to see you.
Great to be here.
Thanks for having us.
So let me take a look at the stats that you guys have.
All right.
We'll just read it off for the beginning just so you folks understand.
The folks listening at home understand that this is a legit company, started in 2021,
raised $155 million, closed your Series C in December 2024, and you have 85 people working for you.
And I'm just going to hold this up to the camera.
And for those listening at home, I will try to describe it.
This is the brain computer interface that Precision has built.
You can see it here.
It is quite flexible.
And I think it doesn't damage the brain, which is sort of one of the differences that you have with Neuralink.
Of course, we had Noland Arbaugh on the show a couple months ago or many months ago now talking about how this has changed his life.
So we're going to talk about how it could change the lives of people like him and many more.
Anyway, talk a little bit about what brought you to this technology and why you think it's so promising today.
Michael, do you want to start?
Yeah, no, you know, my answer is what brought me to this technology is really Ben.
You know, I think Ben, who I'm going to introduce a little bit because he's often too modest.
Ben is a neurosurgeon, as you mentioned.
He practices at Mount Sinai.
He also has a PhD in electrical engineering, and that's not accidental as a combination.
You know, the brain is an electrical system, and so to interface with the brain, really understanding the electrical nature of the organ itself,
as well as the electronics that you need to design to interface and to drive function is totally
core to what we're doing. This is really Ben's life's work. Ben was also one of the co-founders
of Neuralink with Elon Musk and several others and left to approach this technology in a different
way for reasons that we'll get into. But I met Ben. Ben and I were in college together,
but didn't know each other and a mutual friend put us in touch. Really, my background is in investing
and business building. And I have partnered with Ben to help translate his intellectual vision
for a device that I think is going to really transform what it means to be disabled and
eventually transform medicine into a practical reality.
And Ben, so Michael mentioned that the brain is basically an electrical, what did you call it, electrical device?
An electrical organ.
Organ. So talk a little bit. We talked about this on the show a couple of times,
but I'd love to hear from your perspective.
Talk a little bit about how the brain is electric.
And is that sort of reducing the brain's capacity a little bit?
For instance, like if it's just electric,
the idea would be that you could basically decode what's going on in the brain.
But right now, the brain is so little understood
that there's, I think, some conventional wisdom
that part of it is just gray matter that we'll never understand.
Well, the brain is definitely not just electric.
But thinking of it as an electrical system helps us to interface with it and in some ways to heal the brain when it is injured.
And so thinking of the brain as an electrical system in certain ways is a very useful, extremely useful model for understanding how the brain works, how it communicates with the body, how we can communicate with it from the outside world and vice versa.
It's not that it's just an electrical system. But it is the electrical nature of the brain, and the electrical way in which the brain processes information, that makes it special. It makes it
unique kind of among biological systems. And the fact that we have a good understanding of how
the brain represents the world and interacts with the world through electrical signaling
makes it possible to develop brain computer interfaces. Okay. And so that kind of brings us to the
present where we're starting to see, I would say, rapid advancements of brain computer interfaces
going into people's brains. Noland, who we've had on the show, has a brain computer interface developed by Neuralink, the company you co-founded, that allows the computer to basically read the
signals in his brain, and when he thinks about moving his arm in one way, the cursor will move
in that way. When he does a different thought, he can click. So talk a little bit about you're not
quite doing that at Precision Neuroscience yet, but talk a little bit about why this field
is advancing as quickly as it is and what this device is that you've brought here today and how
it differs. Right. So I would say, first of all, one of the reasons that now is such an exciting
time in the field of brain computer interfaces is that a couple of companies, Precision and Neuralink
among them, have advanced to clinical stage,
meaning that we're well out of the realm of scientific research and on the path to bringing brain computer interfaces into medical reality.
And that for many of us who've been interested in this field for a couple of decades has been an incredible transformation to be a part of,
because there's been this notion that what was known to be scientifically feasible for many years is now squarely on a path to become a medical standard of care in a way that's going to help a lot of people who have what
were thought to be previously untreatable diseases like paralysis from ALS or stroke or spinal cord
injury. So that's one of the reasons that there's so much excitement about the field today
is that this technology is reaching people who need it, like Noland.
And, you know, when precision was started, there was a dogma in the field that you needed to
penetrate into the brain in order to drive powerful function, like some of what you just
mentioned Noland's able to do with his device. Precision was founded with a very different
philosophy, which is that safety and performance are not in opposition. They are actually
self-reinforcing. Precision has now done 40 implants. That's more than the rest of the industry
combined in the past two years. And what we are able to achieve in those implants, which so far are
temporary. So we have not permanently implanted someone, but we have implanted patients for
hours at a time and enabled them to control computer cursors with their thoughts, to control
robotic arms with their thoughts. So this is at the cutting edge of performance that has been
demonstrated by other systems, and it's achieved without the requirement of puncturing neural tissue and doing the damage to neural tissue that other systems require.
Okay, for listeners and viewers, I just want to say that there is going to come a moment in this conversation where we're going to take it from today to where this goes in the future.
I'm talking about treating depression. I'm talking about treating stroke and coma. I really think that is worth sticking around and joining us for the second half.
But for those who want to understand the basics of this technology, we're going to do that now just for a bit.
But this is definitely one of those episodes where it's worth staying with us.
and talking about maybe there's a way that can end up helping build better AI models
through the brain or understanding the brain through AI.
So that's all coming up.
But Michael, you mentioned that this is a non-invasive device.
I'm going to kind of not listen to your instructions and just kind of hold it up.
So just people see it.
This is the device.
And there's 1,024 electrodes here.
For those who are listening, it is paper thin, or less than paper thin. And the electrodes kind of come up to a head at the top here. Pretty amazing.
And the thing that you mentioned about it not being invasive. So the history of this technology
has basically been that if you want to get a read on a brain signal, you need to put some
like hair-like follicles or some prongs into the brain. But this lays on top of the brain.
Yeah, that's right. Just to describe the device in a little bit more
detail. It's actually thinner than a piece of scotch tape, so incredibly thin and very conformal to
the brain, and it's designed to sit on the surface of the brain without puncturing or damaging
neural tissue. The system has 1,024 tiny platinum electrodes, which are basically sensors, and we manufacture the system using photolithography. So it's the same technique used for semiconductor chips. This is really the first example of cutting-edge manufacturing techniques being applied to medical technology. We actually own the fab in which
we do it. There is no domestic supply chain that's capable of this sort of work. But what we are
able to achieve with this sort of a super high resolution system that sits on the surface of the brain
is really to record the awake activity of the brain at a resolution that's never before been
seen and never before been achieved. And just going back to your question earlier, you know, the brain is an electrical organ, meaning that when we have thoughts,
there is actually a physical manifestation of those thoughts. So we intend to move our hands or our
fingers, or we recall an idea, there is actually a physical manifestation of that, and it's electrical. And what this system is able to do is measure and record that electrical activity in a way that's
never before been possible. And so Noland's device has 1,024 electrodes on it. So same as this.
But you have at one time inserted as many as four of these in a brain. So that's given you a much greater signal than that one Neuralink device.
That's right.
So part of the philosophy of Precision was to make the technology highly scalable across a number of domains.
And I think this will probably be a theme of the conversation as we take it from where the technology is today to where we might go in the future, into other disease states, or how this technology informs the development of artificial intelligence. But as you said, what you were holding in your hand just a moment ago
is one module that contains 1,024 electrodes. But we have the capability of deploying, because
it's non-damaging to the brain, multiple of these modules onto the brain simultaneously. And we've
done that actually many times. In quite a few patients, we've used at least two of these
simultaneously, providing 2,048 electrodes on the surface of the human brain. And in one particular case, we used 4,096 electrodes, four modules.
And what does that greater number of signals give you?
Well, it provides us a more detailed picture and a more complete picture of what the brain is doing at any given time. Because I think one of the important things to realize about the brain, at least the parts of the brain that we think of as being most relevant, is that almost all of our conscious experience, whether that is movement or sensation or vision or decision-making or memory, all happens basically at the surface of the brain. We call that the cortex. I think that's not an intuition that everybody has, because we think of the brain as a three-dimensional structure, which it is, living inside the head. And so people have this intuition that all of the functions of the brain are kind of uniformly distributed in this, you know, 1,500 grams of tissue. But actually, all that processing is not uniformly distributed within that volume. It's almost all very, very close to the surface.
That's crazy. What happens deep in the brain then?
Well, most of what's deep in the brain is wiring. Not all of it, but I would say most of what's deep to the surface is the wiring that connects the cortical surface to the rest of the body. And then there are areas, like for example the basal ganglia or the cerebellum or the brain stem, that primarily subserve non-conscious functions: the smoothing out or the unconscious learning of movements, breathing, vital functions that we don't think of as part of our conscious experience. So most of conscious human experience is happening very close to the surface, and that certainly includes what we're focused on now, which is movement. And most of the way we interact with the digital world and the physical world is in some way through movement or intended movement.
So the fact that we're focused, and the brain computer interface world today is focused, on paralysis as a target disease state is not really an accident. One of the reasons for that is, first of all, that is a pathway within the brain that is extremely well understood. How the brain goes from what's happening at the level of intention to move, and electrical activity on the brain surface, to activation of the muscles is one of the best studied and most well understood neurological systems. So interfacing with it is kind of a natural thing to do.
But also, if you just think about the way we live our lives in 2025, you're sitting here with a laptop in front of you, and you interact with that laptop through touching the keys and moving the cursor on the screen. And those are volitional movements, and they happen in actually a very small piece of real estate on the surface of your cortex that's only a few square centimeters in size. And so to get back to your original question of what does that do for us, the ability to put a few square centimeters of high density electrodes on the brain surface allows us to interface with the entire extent of your desired and planned actions into the outside world. So the key
in brain computer interfaces as they exist today, the key to restoring or augmenting, you know, an interface between the brain and the digital world, is to get sensors onto the areas of the brain that are responsible for those interactions with the outside world, to span the entire extent of that real estate, and to do it at a level of resolution that is appropriate to the brain's intrinsic signal processing capability, and yet to do it without damaging that part of the brain at all. And that's really what we've done at Precision. Actually, when we started the company, people didn't think that this was possible. As Michael mentioned, there was a dogma that said that in order to interface in a reliable way, you actually had to penetrate into the brain with these needle-like electrodes. And I think part of that dogma in some ways stemmed from a mistaken intuition that there was, you know, a need for a needle-like electrode to unlock information deep within the brain, so-called deep within the brain, in order to do that. But in reality, enough information is represented right at the cortical surface that if you build the sensors with a high enough resolution, as we have, and you span the appropriate amount of cortical real estate, you can get incredibly high fidelity function out of these interfaces. And actually, we've begun to show that, I think, in ways that are surprising, even to the experts.
So that has been an incredibly exciting development for us
over the last couple of years, especially this year.
Okay, in what surprising ways? Because you're both smiling about the surprising applications.
Well, I mean, you referenced cursor control
and that kind of interaction with the digital world
as kind of a benchmark for function in brain computer interfaces,
which it is.
It's not trivial to control a cursor in multiple dimensions in real time, and yet we've begun to be able to do that in ways that are incredibly reliable, and in ways that patients can learn to achieve cursor control actually within just a couple of minutes of training data.
And you don't have the thread retraction, which is what Noland has experienced, because there are no threads in the brain, correct?
Yeah, exactly. And I think, you know, one
other way to answer your earlier question. Why does more resolution matter? Why does a higher
electrode count matter? I think one analogy that we sometimes talk about is that a BCI, at core, is a communication device. And if you think about it in that way, you know, the bandwidth of a communication device is really what determines how it can function. So as an example, a 56K modem is capable of chat, whereas a fiber optic connection enables Netflix. It's the same thing with BCI, where if you have a very low resolution system, you can do very basic things, let's say click a computer mouse. Our ambition is far greater than that. So what we are going to enable in people who are right now paralyzed is seamless control of computers and smartphones and other digital devices in the way that we all frankly take for granted every day. So think about, you know, rich video games, productivity suites like Microsoft Office and Google Suite. That sort of level of functionality, which we think is really going to be life-changing for people who today have very little therapy available to them, is only possible with a system that has a high degree of bandwidth.
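To put rough numbers on that bandwidth analogy, here's a back-of-the-envelope sketch. Only the 1,024-electrodes-per-module figure comes from the conversation; the sampling rate and bit depth are illustrative assumptions, not Precision's published specifications.

```python
# Back-of-the-envelope raw data rate for an electrode array.
# Sampling rate and bit depth are assumed for illustration;
# they are not Precision's published specifications.

def raw_data_rate_mbps(n_electrodes: int,
                       sample_rate_hz: float = 20_000.0,   # assumption
                       bits_per_sample: int = 16) -> float:  # assumption
    """Uncompressed data rate in megabits per second."""
    return n_electrodes * sample_rate_hz * bits_per_sample / 1e6

for modules in (1, 2, 4):
    n = 1024 * modules  # 1,024 electrodes per module, as described above
    print(f"{modules} module(s): {n} electrodes -> "
          f"{raw_data_rate_mbps(n):,.0f} Mbps raw")
# Hundreds of Mbps versus a 56K modem's 0.056 Mbps, which is the
# point of the analogy.
```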
So they'll be able to do this just entirely with their thoughts.
That's right.
Exactly.
And I would emphasize also that the component that you were holding up that's here on the desk in front of you is only a part of the brain computer interface system.
So these electrodes form the sensor that touches the brain,
kind of caresses the surface of the brain.
But, of course, they're connected to implantable microelectronics
that amplify, digitize, record, compress the data,
and all of that data, and this will probably take us into some of the other areas
that you wanted to touch on that you alluded to in the beginning.
That data all has to be processed.
So it's artificial intelligence that allows us to, in real time,
decode the electrical information from the brain surface into meaningful command and control signals
for a digital interface. The signals don't come off the brain in a way that tells us how to
interpret them. Actually, there's a kind of translation process that needs to happen in real time.
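To give a flavor of that translation step, here is a minimal sketch of a real-time decoding loop: per-electrode features computed over short frames of voltage data, mapped to a 2D cursor velocity by a linear model fit during a calibration session. This is a generic, textbook-style decoder for illustration only; it is not Precision's actual algorithm, and the frame size, feature choice, and model are all assumptions.

```python
import numpy as np

# Minimal sketch of a real-time neural decoding loop: voltages in,
# 2D cursor velocities out. Generic linear decoder for illustration;
# not Precision's pipeline, and every parameter is an assumption.

N_ELECTRODES = 1024
FRAME_SAMPLES = 200  # one 10 ms frame at an assumed 20 kHz sample rate

def features(frame: np.ndarray) -> np.ndarray:
    """Per-electrode log power for one (n_electrodes, samples) frame."""
    return np.log1p(np.mean(frame.astype(np.float64) ** 2, axis=1))

def fit(X: np.ndarray, Y: np.ndarray, lam: float = 1.0):
    """Ridge-regression calibration on cued trials.
    X: (n_frames, n_electrodes) features; Y: (n_frames, 2) cued velocities."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    W = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ Yc).T
    b = Y.mean(axis=0) - W @ X.mean(axis=0)
    return W, b  # W: (2, n_electrodes), b: (2,)

def decode(frame: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Translate one frame of neural data into a cursor velocity command."""
    return W @ features(frame) + b
```

In a running system, `decode` would be called on each new frame and its output fed to the cursor, with periodic recalibration as the signals drift.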
Okay, so one more question about the device itself. Then we move into the more futuristic and theoretical
questions. Why haven't you left it in? Because if this is not damaging the brain, we know that
the other devices have been left in. Noland is kind of moving about the world with a Neuralink in his brain right now.
But this is something that has only been used in circumstances of like when the brain is
already open to understand, to try it out or understand a little bit more about surgery.
So why haven't you left it in yet?
That's right. We've taken, we will be leaving it in is the short answer to your question.
And the reason is that we've taken a slightly different approach to development,
which is to emphasize making sure that what we've developed is safe and
functional before beginning permanent implants. And so for us, it's been incredibly important
to ensure that the interface works and delivers a level of functionality that we think is essential
to guarantee to patients before we start leaving the devices in. So we therefore pursued a strategy
that was sort of a phased development approach. And so in our first 40 patients, those were temporary
implants that were designed to validate the quality of the signals and the ability to
decode those signals in real time.
We're now actually the first modern brain computer interface company to have an FDA clearance. So the version of the electrode array that you were holding in your hand actually now has FDA clearance. So among the current leading BCI companies, we're the only company that has a full clearance from the FDA.
And as part of leveraging that clearance, we're moving to a next phase in our clinical studies that will allow the system to be left in place for up to 30 days.
And that phase will help us further validate the quality of our decoding algorithms and deeply understand the nature of the signals as these devices are left in place.
As you alluded to, you know, others have had some problems early after implantation.
And what we're trying to do is to anticipate and avoid such issues as we move to permanent implants.
And then it will be after that phase, after this next phase, that we start leaving the devices in patients as part of a clinical trial permanently.
And just for context, you know, I think the fact that we even have this path available to us is a testament to the underlying
safety of the system. So we did our first patient implant within two years of founding.
You know, it took Neuralink eight years. It's taken other competitors even longer than that.
And I think that that's, you know, one, a product of just the focus and the design philosophy
that Ben just articulated. But it also is a feature of the fact that our system is reversible,
that it doesn't damage the neural tissue, that you can remove it. And there's been no harm to patients.
So I think that that just speaks to, you know, some of the underlying characteristics which we think are important today and will be very important in the future for clinical adoption.
So there's two timelines here.
One is this timeline that it's going to take you to be able to get this current device implanted full time in people's brains or on top of people's brains to be more accurate and then read their brain signals in the motor cortex.
I'm going to be really unfair.
I know that that's going to take a tremendous amount of work.
And if your company does that successfully, it's probably a good outcome for your company.
But let me be unfair and ask you now where the technology can move to after that.
Because Ben, you mentioned that most of what we do, most of our signals of what happens in our brain happens at the surface.
So what happens if you expand beyond the motor cortex?
Where else could this technology go?
I know Elon Musk and Neuralink are currently working on eyesight, which, even if you are born completely without eyes, could potentially take signal from the world around and then beam it into the visual cortex.
So that might be one application. Where else should we look?
It's a great question, and we definitely are thinking about this.
And it actually dovetails with the prior question, which is, you know, what are we learning as we bring this technology in its current stage into the world?
And as we work with patients and physicians across the country, even in the early clinical studies, part of that experience for us has been a process of discovery. You know, when you bring a technology that you've been developing into the real world and you put it into the hands of people with lived experience and of experienced, insightful practitioners, you learn all kinds of things that you might not anticipate.
And so actually, even though we're focusing on applications in the motor cortex, as part of these studies, our electrodes have been placed actually all over the brain.
They've been placed in sensory cortex, in prefrontal cortex areas, which are responsible for decision-making.
They've been placed in the spinal cord and on the brain stem.
And so I would say that at this point, it's very exciting for us because we have an incredibly large, rich data set that's probably just absolutely unique in the world now,
as far as regions of the brain that we've interacted with and things that we're thinking about potentially doing in the future.
So just that gives you a sense of what kind of data we're starting to experience.
And in practical terms, you know, we think that probably stroke represents the next expansion of the market for brain computer interfaces.
So right now, the disease states that we're designing the interfaces to treat are really forms of severe paralysis, primarily forms of quadriplegia that result from spinal cord injury in the neck, or brain stem stroke, or degenerative diseases like ALS.
And those are very severe forms of paralysis that are either complete or near complete and leave people
either mostly or completely unable to use their hands.
There are other forms of paralysis which are more common,
and those arise, for example, from common forms of stroke.
Stroke affects almost a million people in the United States per year, and about a third of those, so several hundred thousand people,
recover from their stroke with a persistent deficit
that leaves them either having difficulty articulating speech, or difficulty using their hands, or difficulty walking. And those people, although they may not be completely paralyzed, live with a severe deficit or disability in their ability to interact with the world just because of their paralysis. And so we believe that that is a next step for brain computer interfaces in the medical world. That's what physicians with experience are asking for, and that's what patients with stroke are asking for. And it also is, we think, very medically and scientifically feasible.
And when you apply it to stroke patients, is it that you are able to decode what they want to say and then help them say it? Or, if they lost some movement, is it actually using electrical signals to help them move again?
So we're still talking about
a function that is decoding intention and expressing that through digital means.
So, for example, somebody who, you know, who has a stroke on the, you know, on one side of their brain and can't move a hand, for example, or can't move it well enough to type.
You know, that kind of deficit, which is debilitating for people who are trying to return to work, especially if it's in the dominant hand, for example, that kind of deficit could be augmented by a direct thought-to-digital-world communication. Does that make sense? We're not talking yet about stimulating the brain in a way that restores the hand back to normal, or that, you know, provides an orthosis over the hand that
moves the hand again. I do think that will come and we're already talking to partner companies
to do that kind of a thing. But the therapy from brain computer interfaces is primarily
something that kind of reads brain activity in real time and establishes, you know, intuitive, smooth
communication with the digital world. And I think that helps hopefully give an intuition for how
this industry is going to develop in coming years. So we're starting with very severely paralyzed
people. There are about 400,000 people who have no use of their arms and hands. And so for them
being able to control a computer, a smartphone, a tablet, it's going to be life-changing. And we think
that that's a sort of $2.5 billion market to start with. But as Ben mentioned, there are other
issues that are much more common, stroke being one of the principal ones, which is why the number
of people who have some sort of motor deficit, maybe not complete inability to move arms and hands,
but some deficit is about 12 times that number. So many millions of people in the United States
alone. And allowing them to interact with the digital world in a seamless way, we think,
is also going to be really transformative. It's going to start with the most severely impacted
patients and then move. But in terms of restoring movement, you know, what you really need is
basically a partner device to help with either stimulating peripheral nerves or, you know,
creating a prosthetic. The way that we think that brain computer interfaces are going to develop
over time is that we think that the neural data is the hardest to access. It's the hardest to
extract, really. And as a result, we sort of think about it as like the operating system.
There will be other products and devices that plug into the data that we're able to provide
to enhance people's lives in different ways. Think about it like an API. But I think the
companies that are able to record at high fidelity and then transmit the neural data are going
to be able to create an ecosystem around them to do all sorts of things that are right now not possible.
Yeah, and just to be clear, you've already had inbound from quite a few.
Okay.
So, yeah, before we go to break, let me, like, throw out, like, one example, like, potentially
you could be reading signals of depression on someone's brain,
and then maybe some ancillary company can use that API data
to, like, stimulate the part of the brain that has a deficit of some electrical signals
as a potential cure.
I mean, that's already starting in academic research, where there are sort of closed-loop systems that are helping people who have refractory depression
that, you know, doesn't respond well to medication and which is very severe with systems that,
you know, right now we have technologies like electroconvulsive therapy, which are very coarse,
which, you know, effectively, you know, incite a seizure in people.
But it does have efficacy, which is why it's still used.
But if you think about...
Imagine being able to target this stuff as opposed to just using brute electricity.
Exactly.
Yeah, I think we're headed to that future.
You raise a good point and an application space that we've thought a lot about and that many people think about.
And I think kind of what you're alluding to is this concept of a digital biomarker.
And it's something that we can do with the precision system that's very difficult to do with the system of penetrating type electrodes for a number of reasons.
What's a digital biomarker?
A digital biomarker is kind of like a signal, but instead of a molecular readout like you get with a blood test, it's a digital signal that you get by electrically reading the brain. So think about if you were to place the Precision electrode array over a relevant region of the brain, a region that's relevant to depression, and some of those are already known and well established. In a patient who's prone to depression, you might see, and there's already very good preliminary data to suggest that this is the case, you might see that when they're beginning to enter a relapse of their depression, there are particular digital signatures that occur in that area of the brain that are predictive of them entering a relapse. And that can be life-saving for people, you know, who may become suicidal due to severe refractory depression. So being able to predict those periods before they happen, and either deliver stimulation or change their medications or alert a care team that that is in the process of happening. And that kind of concept actually is common across a number of important disease states, including epilepsy and depression and others. So this is a direction that, you know, we've thought a lot about, that we've had good discussions with, you know, the industry about. And it's actually not that dissimilar from some of the molecular studies that people do nowadays. You know, in the era of low-cost gene sequencing, many people have been able to learn a lot about their own biology and predict things about themselves that they wouldn't have otherwise known. So it may be that, you know, one direction for neurotechnology is something like this.
Go ahead.
No, I think Ben's getting at a really important theme that we talk about a lot
in the company, which is, you know, why has there been more progress in other areas of biology than
there has in neurology? And one of the reasons, I think as Ben is alluding to,
is that we've been able to digitize biology in other areas,
genomics revolution being a perfect example of that,
and that has led to a number of breakthroughs
and very rapid progress today.
It's been tough to convert the neural signals of the brain
into something that we can apply compute to.
The brain is hard to access.
You know, Ben knows this better than we do.
It's hard to get there.
And once you do...
Some good design, keeping that thing protected.
That's right.
The skull does serve a purpose.
And, you know, the biology once you get there is sort of this mushy mass.
And so figuring out a way to effectively digitize this biological system in a way that is scalable,
which again, I think the fact that our system is scalable across different areas of the brain
at the same time at super high resolution, this is something that we think is going to end up unlocking
a number of breakthroughs, which frankly today are hard to predict.
Yeah, I know you wanted to get to, you know, where does this take us in the world of AI and foundation models and, you know, modern machine learning, and how does that connect to what's happening in brain computer interfaces?
And I think this is kind of a good segue for that.
And just a bit of intuition that I would want to provide.
This is something special about the Precision system
because the electrodes that you were holding in your hand a minute ago,
they form a regular lattice,
and they have a spatial relationship with one another
that's kind of like the spatial relationship among pixels on a screen.
It's the same every time.
And so when those are placed onto the brain of one patient,
the data format that they read out is the same data format, and it has structural elements that are the same from patient to patient.
So it brings commonalities across patients that we've studied into very sharp focus,
and that has been a major limitation of the ways that we've interfaced with the brain for generations,
including with the penetrating electrodes that are used by other systems.
Because when you penetrate into the brain using needle-like electrodes,
the spatial relationship of what you're recording from is kind of a little bit random.
And so that relationship is not preserved from patient to patient,
and it makes it very difficult to apply learnings from one patient to the next patient.
In the Precision system, one of the inherent advantages is that the data is so regular in structure that we're able to compress it, we're able to learn across patients, across populations, and leverage those learnings in the machine learning algorithms that we've developed. And that, we have found, is just, you know, a massive advantage.
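A simple way to see why the fixed lattice matters for machine learning: every recording can be reshaped into the same rows-by-columns-by-time tensor, so features live in a shared coordinate system across patients. A minimal sketch; the 32 x 32 grid layout is assumed for illustration, since the conversation only establishes that the 1,024 electrodes form a regular lattice.

```python
import numpy as np

# Why a fixed electrode lattice helps: every recording shares one tensor
# layout (rows x cols x time), so features and models transfer across
# patients. The 32 x 32 arrangement is an assumption for illustration.

ROWS, COLS, T = 32, 32, 1000  # 1,024 electrodes, T time samples

def as_grid(recording: np.ndarray) -> np.ndarray:
    """Reshape a flat (1024, T) recording into its (32, 32, T) lattice."""
    return recording.reshape(ROWS, COLS, T)

def spatial_features(grid: np.ndarray) -> np.ndarray:
    """Per-electrode RMS amplitude: a (32, 32) 'image' of activity.
    Because the lattice is identical across patients, these images are
    directly comparable and can feed one shared model (e.g. a CNN)."""
    return np.sqrt((grid.astype(np.float64) ** 2).mean(axis=-1))

# With penetrating electrodes, channel positions differ per patient,
# so there is no shared coordinate system like this to pool data over.
patients = [np.random.randn(1024, T) for _ in range(3)]  # stand-in data
dataset = np.stack([spatial_features(as_grid(p)) for p in patients])
print(dataset.shape)  # (3, 32, 32): one aligned feature map per patient
```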
I want to take it one level deeper and ask about decoding full brain activity and making the brain less mysterious. But why don't we do that right after this? So let's take a break and come right back.
We're back here on Big Technology Podcast with Michael Mager and Ben Rapoport, the co-founders of Precision Neuroscience. Before the break, we talked a little bit about unraveling the mysteries of the
brain through electrical signals. And at the moment, we could all agree the brain is a mystery,
even though the technology companies like yours are getting better at decoding parts of it
and being able to do something with that information. But do you ever anticipate a moment
where the totality, like I started this show saying, could we build a foundational model for the
brain? And I guess that was like a way of saying, could we find a moment where the totality of
everything happening in the brain is decoded by technology? And if that happens, what does that
lead to? We talk about this a lot. This is sort of in the zeitgeist right now in the tech world,
this question of the whole brain interface. And I'm happy to discuss it. We have kind of our own
view of what it means to have a whole brain interface. And I think understanding what that implies requires a few things.
One is that, as we mentioned earlier,
the distribution of information through the brain is not uniform.
There are areas of the brain that are much more relevant
to our interaction with the world than others.
So most of the brain is actually not relevant
for communicating with the outside world
or with artificial intelligence.
Most of the brain is taking care of the body, and not in ways that are particularly relevant
to interfacing with the outside world.
Does that mean like controlling like blood flow
and stuff like that?
Does that happen in the brain?
Let's call it our vital functions
and unconscious functions and things like that.
So that's all being directed by the brain?
A lot of it is and not all of it is, but some of it is.
And certainly.
I guess like we think that the heart beats
and the blood goes and it has nothing to do with the brain,
but the brain really is the command center
for a lot of this stuff.
The brain is involved. It's not necessarily doing everything moment to moment. Certainly the heart has its own intrinsic ability to function.
But the brain is heavily involved in a lot of what goes on in the body. And, but my point is just
that, and also a lot of the brain, as we mentioned before, is white matter connections between
different parts of the brain and from the brain to, for example, the spinal cord. And those are
incredibly important for how the brain functions in a biological system. But with respect to interfacing with the outside world, those are much less relevant when you think about what we intuitively think of as a whole brain interface. So I want to start with that, this notion that you want to have a whole brain interface. Actually, what I think we really mean by that is we want to be able to interface with the parts
of the brain that are responsible for our conscious interaction with the world.
And those are actually spatially limited portions of the brain.
But isn't that relatively unambitious?
I mean, I know it's very ambitious, but this is easy for me to ask.
But I'm saying that, like, to me, the thought is what about taking my memories and kind
of dragging them from the brain onto a computer?
I totally get that, right?
And so the point I'm trying to make is that your memories have a spatial location within the brain, you know, to a significant extent.
And so the problem of how to do that is not the same as how do we record the continuous state of every single cell in the 1,500 grams of the brain.
I think it's very important to understand the brain is a physical thing.
It has structure.
And how we think about interfacing with the brain begins in part with understanding that there are some parts that are more relevant to forming that interface than others.
And then how do we get there, and how do we listen in or interact with those areas of the brain?
So as we mentioned earlier in the conversation,
we're really focused right now on movement-related areas,
the so-called motor cortex.
That's an area that's very, very salient to our interactions with the outside world
moment-to-moment.
But, you know, the vision-related areas and sensation-related areas
and hearing-related areas and memory-related areas
and decision-making areas.
These are all surfaces within the brain, and if you want to build an interface that encompasses all of those functions, you have to touch those surfaces in some way.
Can we talk about two of those, memory and decisions?
So do you anticipate at one point the science, or companies like yours, might be able to effectively go in and download our memories? Or maybe go in and, like, you know, decision scientists would have a field day with this, basically decode how we make decisions just by reading the signals off the brain?
I think those are two very different problems.
From the standpoint of decision making, I think the answer is much closer to yes,
and we have an understanding of how that sort of thing might happen.
From the standpoint of memory, it's a little bit different.
And we can take those two questions, you know, one at a time.
I'll just say that for memory, it's important to understand that the way the brain stores memory
is very different from the way we think of memory being stored in the digital world.
This is important.
I think this is important to understand: with memory as it's stored in the digital world, the bits really have a physical manifestation that on some level you can interrogate, right? When you store a bit, it's a spin state or, you know, something like that. And so the reading of those memories is really the reading of the state of something that, at a very, very small physical level, has a representation in the physical world.
Yeah, you click on a file, it will go into the sort of semiconductor mainframe and then access where that file has lived.
Right, a transistor state is changed, right, you know, or a spin state is changed, or something in solid state is actually changed to represent the bits.
So when we think about, you know, people talk about bits and atoms as being, you know,
intrinsically linked, they are, right? There is really a one-to-one representation
between a particular bit that you're trying to store and some state in the physical world.
So how is the brain different? The brain is different because, and this really relates to how
one of the ways that neuroscience has been so important in motivating developments in artificial intelligence
is that the way that a particular memory is stored is not really in the flipping of a state.
In order to read out a memory from the brain, you have to stimulate the network, what we think of as the network that represents that collective, you know, memory,
and you have to stimulate it with something that triggers the recollection,
and the network then either completes or reproduces that, right?
So as an example, you know, you can be reminded of a memory by a particular trigger, right?
Or you can trigger yourself to remember something, right?
But there's no scan that you can do, or that we can do, that looks into the brain and says: there is the face of your family, or there is, you know, the combination of your combination lock.
So can I ask a weird question then? Like, where do the memories live when they're not there?
They, um, they don't really exist as such.
Wow.
The brain is a system that can produce the recall of the memories when appropriately triggered. But it's not like somebody, by reading the physical state of the brain with a scan, the way you could with a disk drive, for example, can find all of those bits of information. Does that make sense? It's very different.
Where do they exist? So they just don't exist?
It's not that they don't exist, but the storage mechanism is very different. It's like saying that in order to retrieve the memory, you need to trigger it, right? So in order to get the memory of a combination lock out of your brain, you need to basically say, what's the combination to your combination lock, and then you'll retrieve the answer, right? But it's much more difficult to, like, there's no file address that we know to go to in your brain that contains the digits of the combination lock.
Knowing what you know about the technology, do you think we'll eventually find that filing cabinet in the brain where all this lives?
I don't, what I'm trying to say is that it doesn't exist as such, to the best of our understanding of how the brain works.
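That idea, that a memory is recalled by triggering a network that completes the pattern rather than read from an address, has a classic computational analogue: a Hopfield-style associative memory. The sketch below is a textbook model, offered purely as an illustration of content-addressable recall, not a claim about actual brain circuitry.

```python
import numpy as np

# Textbook Hopfield-style associative memory: recall happens by pattern
# completion from a partial cue, not by reading a file address. An
# illustration of content-addressable storage, not of brain circuitry.

rng = np.random.default_rng(0)
N = 64                                       # "neurons" with states in {-1, +1}
patterns = rng.choice([-1, 1], size=(3, N))  # three stored memories

# Hebbian storage: each memory is spread across the whole weight matrix,
# so no single "synapse" contains any one memory.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue: np.ndarray, steps: int = 20) -> np.ndarray:
    """Iteratively complete a noisy or partial cue toward a stored memory."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Cue with a corrupted copy of memory 0 (a quarter of its bits flipped).
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1
print(np.array_equal(recall(cue), patterns[0]))  # expected: True --
# the network reconstructs the memory from the trigger, as he describes.
```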
So then how could a technology company go and basically
stimulate the brain to share memories. Maybe there's a way that, you know, I'd like to, for instance,
like relive a memory or maybe, you know, there's something that my father told me 20 years ago
that I forgot. But I really want to remember what he said, word for word. Can technology one day
be able to be used to stimulate that and then recall those memories? I don't think we know the,
I think we're far from knowing the answer to this. I think, well, let's put it this way. It's
known that by electrically stimulating the brain, you can reproduce certain memories. But it's much
at this stage of our understanding of neuroscience, the predictability of how we do that is really
not well understood. Okay. But one sort of way that I think we think about some of the questions
that you're getting at is that we've just never had the tools before in neuroscience to interrogate
some of these questions. So we've never had as high resolution a picture of the awake human brain
as we now have with the precision system. So, you know, the electrodes that I mentioned earlier,
the 1,024 tiny platinum electrodes, most of them are 50 microns in diameter, which is actually
the size of a neuron. Right. And we're starting with a postage stamp sized electrode array
over motor cortex. But the medium-term vision for Precision is a much larger electrode array covering much greater areas of the brain's cortex, with hundreds of thousands,
someday millions of electrodes. And I think once we're able to achieve that and apply the cutting
edge compute algorithms that are available today, we're going to learn answers to questions
that right now we just haven't been able to interrogate properly. And I think that kind of
interface will allow us to fluidly interact with the kinds of digital memory that we kind of have
more of an intuition for in our daily interaction with technology. And certainly many of the ways
that we think of expanding our memory type capability is in that kind of file system storage
way, with addresses. And certainly the ways we can think of interacting with those memories are with the kinds of brain-computer interface technology that we're already talking about, right?
So I definitely foresee a future in which we can, you know, near future, in which we can
augment, you know, memories through that kind of fluid interaction.
So we could have like an external hard drive that the BCI accesses?
We kind of already have that, you know, with assistive technology today.
I mean, you carry it in your pocket instead of in the cloud.
But a direct link with a device in the brain would be wild.
I think that we are already building towards, you know, a state of brain computer interface
technology that will facilitate that. No question about that. And I think that that's important
because it's based on just an extension of what we're already doing. You know, right now,
we're already cyborgs in some way. We have this digital extension through a black box at the
end of our hands that we tap on furiously. And that's how we access, you know, our external hard drives, whether it's, you know, a Dropbox in the cloud or Wikipedia or whatever.
You know, I think the evolution of the way that human beings are able to control technology and compute has changed.
It's changed repeatedly over the course of the past 40 or 50 years from mainframe computers where we had like punch cards to desktops and laptops where, you know, we use our hands to control a keyboard or a mouse.
Now it's mostly phones.
Increasingly, I think this is moving to wearables. Think about AirPods that people forget they even have in, which, you know, makes them sort of quasi-cyborgs.
I think over time, it is likely that a more seamless, a more intuitive way for us to engage with the digital world is going to happen.
And I think this concept of sort of thought-based control of computers, which right now still sounds like science fiction, even though we're doing it in clinics across the
country. And even though it's been done in academia for a couple decades, it still sounds sort of
fantastical. But the moment for real clinical adoption is very near. And this concept of seeing
people control computers with their thoughts is going to become much less amazing and much more
commonplace. And I think as that happens, our attitudes towards how to control computers are going to change.
What about studying decision making?
Yeah. So with decision making, I think we picked the more difficult one first, because I think memory is a little more difficult. Decision making, I think, we're already doing that in many ways. You know, I think we have a much better understanding of certain aspects of decision making, because a lot of that relates to the pre-planning of speech and motor function, and so we have a better lens on that now. And a lot is already understood about the neural systems that serve decision-making. So predicting decisions before they happen, at least at the fraction-of-a-second level, is already possible. And we already have some understanding of the areas of the brain that are responsible for that. So both decision support and decision prediction are kind of already in the near-term roadmap.
Stock traders are going to have a field day
with that. And by the way, you know, I think that that has the potential actually to be
influential in some of the more cutting-edge foundational models that are being used and developed in the area of artificial intelligence today. So, you know, interestingly, a lot of AI has been inspired by the architecture of the brain; you know, neural nets are where we've started. And some of these more abstract ways of decision making are likely to help
with breakthroughs towards sort of the next stage of generative AI.
And that's why you're starting to see some convergence of the pure software players
with folks like us,
who are actually developing hardware that interacts with biology.
What about decoding states of consciousness?
So we talked, so we've talked a couple times now before we've done this episode,
just so I can try to wrap my head around what you guys are doing.
And one of the things we spoke about was coma, as right now coma is, I think, minimally understood. People just sort of say, okay, they're lying there in kind of a half sleep. But what can, for instance, putting a series of these
brain computer interfaces on the brain of a coma patient potentially tell us about what's actually
happening inside their mind as they're lying there?
Yeah, I mean, this is something that we think
is profound, that we're actually actively working on. And it's due in part to work done over the last decade or decade and a half in the neurology and neuroscience community to try to understand coma. And, you know, this notion of consciousness is something very, very deep in philosophy and neuroscience, and a lot of time and effort has been spent trying to even understand what that means. But everybody has some intuition for what consciousness means, and that coma represents a kind of disorder of consciousness, or a lack of, or almost lack of, consciousness. But there's a spectrum between coma and normal wakefulness that all of us here in the room experience, and it's not exactly even a linear spectrum. So, you know, we're all familiar with sleep, right, and different phases of wake, different degrees of wakefulness. And people who have certain forms of brain injury, for example severe trauma or debilitating strokes, especially in the early phases of their disease, and, you know, many of the listeners may be familiar with family members who've had very severe injury that results in a coma or a severe change in the state of consciousness. And for decades, this has been a really vexing problem: trying to understand, is the person who's currently unconscious, or currently in a coma, going to emerge from that state? And if they do, will they emerge as the person that we knew? And in the period in which they're not wakeful and able to communicate, are they in there, so to speak? Right? Is that person in there? Are they thinking and just not able to communicate? Or are they not there at all in the way that we knew them? And it turns out that
for some disorders of consciousness, some people who are, you know, seemingly not able to interact
with the outside world, their eyes are closed, they are not moving or speaking, some of those
people, it's now known about maybe even 15 or more percent in some cases, have the ability
to think, at least for some portion of the day, and even can modulate their neural activity
in kind of the same ways that we do in order to drive speech or to drive movement. But they can't make that manifest in the outside world
through vocalized speech or movement, which are the ways that we normally communicate and express
our consciousness. And that, if you stop to think about it, can be both troubling, you know,
on a deep level, but also represents kind of an opportunity, which is that the same brain
computer interface technology that we've been talking about to restore
movement in people who we know to be totally lucid but paralyzed, that same technology
actually can serve as a bridge both to help diagnose and provide prognostic information
for whether the person who has this brain injury is likely to recover from it. And even in that
period when they're not fully recovered to provide a window into what's going on inside so
that they can actually communicate in some ways with the outside world.
So you're saying it might be possible to effectively talk with coma patients through BCIs?
Not coma patients, because coma means that they don't have a level of consciousness that can
provide that. But there is an intermediate set of states. They're called, for example, minimally conscious state, or cognitive motor dissociation is the term that's often used for these states, which are in between coma and full wakefulness on this sort of spectrum. And those patients look effectively like patients who are in a coma, and it's very difficult to distinguish them in some cases. And so we feel, and we're working on this, that brain computer interface technology can provide a tool for distinguishing those patients from true coma, and, yes, providing a way for them to communicate.
That's unbelievable.
And, you know, this has
important predictive power in terms of, as Ben mentioned, determining the chances that they recover
and become themselves again. Right now, you know, this sort of continuum of consciousness is most
often diagnosed by nurses who are just trying to perceive whether there's any wakefulness or
movement or an ability to respond. But that's in the context of, you know, loud hospitals
and lots of patients who command their time.
And so the error rates are extremely high right now.
So being able to do this in a way that's much more definitive
and also, as has been mentioned, gives people who are in there, who are able to modulate brain activity, a channel out. So you ask them a series of yes-no questions, and they can actually answer them by imagining making certain movements, allowing them to express themselves and communicate out to the world.
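As a sketch of how that yes/no protocol can work computationally: epochs of neural activity recorded while the patient is asked to imagine a movement for "yes" and to rest for "no" are compared against calibration trials. The features and classifier below are generic illustrations of the kind used in academic motor-imagery studies, not Precision's protocol.

```python
import numpy as np

# Sketch of a motor-imagery yes/no protocol of the kind used in academic
# studies of cognitive motor dissociation. Generic features and a
# nearest-centroid classifier for illustration; not Precision's protocol.

def epoch_features(epoch: np.ndarray) -> np.ndarray:
    """Per-channel log variance of one (channels x samples) epoch --
    a standard coarse proxy for motor-band activity changes."""
    return np.log(epoch.var(axis=1) + 1e-12)

def fit_centroids(yes_epochs, no_epochs):
    """Calibrate on cued trials: 'imagine squeezing your hand' vs 'rest'."""
    yes = np.mean([epoch_features(e) for e in yes_epochs], axis=0)
    no = np.mean([epoch_features(e) for e in no_epochs], axis=0)
    return yes, no

def answer(epoch: np.ndarray, yes_c: np.ndarray, no_c: np.ndarray) -> str:
    """Nearest-centroid decision for the epoch following one question."""
    f = epoch_features(epoch)
    return "yes" if np.linalg.norm(f - yes_c) < np.linalg.norm(f - no_c) else "no"
```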
I mean, it's just, it's sort of an unimaginable thing to be in that state and to be conscious or to be at least aware of what's happening and to be completely uncommunicative because you can't control the muscles in your body.
And so I think this is a bridge that has the potential to be really important.
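To make that yes/no paradigm concrete, here is a minimal sketch of how such a decoder might work, assuming generic Python tooling (numpy and scikit-learn) and synthetic stand-in data; it illustrates the idea described above, not Precision's actual system.

```python
# A minimal sketch of a yes/no imagined-movement decoder, using
# synthetic stand-in data. Illustrative only, not Precision's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_channels, n_trials = 64, 200  # hypothetical electrode and trial counts

# Calibration data: label 1 = "imagine squeezing your hand" (YES),
# label 0 = rest (NO). A small offset on a few channels mimics
# movement-related modulation of neural activity.
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :8] += 0.8

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)

def answer(window: np.ndarray) -> str:
    """Decode one neural feature window into a yes/no answer."""
    return "YES" if clf.predict(window[None, :])[0] == 1 else "NO"

# After asking a question aloud, record a window while the patient
# either imagines the movement or rests, then decode it:
print(answer(X[0]))
```

In practice the calibration step, asking the patient to imagine a movement at known times, is what distinguishes cognitive motor dissociation from true coma: a patient who can reliably modulate their activity on cue is, by definition, in there.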
Crazy. All right, I have two questions for you before we leave. The first, we kind of touched on it a couple of times: what happens when people trust their brains to BCI companies? Let's say that, 10 or 30 years down the road, it becomes commonplace to have some sort of device in your brain. I know that right now you're mostly using this for disabled people, or really exclusively using it for folks who need it to be able to function. Is there a worry that someday, if the technology becomes too commonplace, it could be used to hack into brains, or to write signals in a way that lets a bad actor in? Like, there's this study, I think we're going to talk about it with Sally Adee, or she's already been on the show to talk about it, but there were these rats that were able to hit a pleasure lever that would send an electrical signal into their brain, and they would basically do that all day and not do anything else. So what do you think about the ability to write signals into the brain, and should we be afraid of it?
I feel like that button for rats is like Instagram, right?
Look, there are definitely parallels in the human experience today, no doubt.
Yeah, I mean,
these are incredibly important questions.
The issue of neural data privacy, of neural data security: you know, there's nothing more inherent to who we are than our brain activity. I'd make a few points about this, and it's obviously an evolving space. One is that we're a healthcare company; we're developing medical technology in the context of the FDA regulatory regime, as well as healthcare regulation more broadly. I'm not worried about our device right now being used to hack brains. But I think these concepts are being considered now. It's early; you know, we're developing a tool to let paralyzed people live a higher quality of life. But over time, I think these issues will certainly emerge. And the FDA has actually
taken a very proactive role in helping define the regulatory regime, not just for today but for tomorrow as well. They've helped create something called a collaborative community, which they've done in a few different areas of emerging medical technology. It's a way to convene the different stakeholders in one place to map out clinical practice guidelines, reimbursement, and some of the ethical considerations, which differ depending on the technology. So that's academics, patient advocates, industry, clinicians, hospital administrators, and payers like insurance companies. And one of the work streams of this collaborative community, it's called the Implantable Brain-Computer Interface Collaborative Community, which is a bit of a mouthful, is specifically focused on data privacy, data security, and ethical considerations. So I think that's an incredibly powerful forum in which to start mapping out what this looks like in the coming years.
Let me take that in a different direction, just because, I mean, as you can tell,
as futuristic as brain-computer interfaces are,
we're pretty practical-minded as a company,
really, really trying to bring this technology
into the real world to impact people
and become part of the standard of care.
But having said that, I think we're profoundly influenced
by the science fiction of our childhood and of modern times,
and there's a long track record of the science fiction of today
helping to influence the science reality of tomorrow.
So we take our responsibility in this regard very seriously,
and we take these thought experiments very seriously.
And as Michael mentioned, I think we and others in the space
are trying to ensure that this all develops
in as responsible a way as possible,
understanding that it can be hard to predict what happens, and it can certainly be hard to legislate around all future eventualities. But we definitely have our eye on it.
Okay, let me end with this, on the theme of
science fiction. We talked today a lot about transposing the brain into compute, taking thoughts from the brain and bringing them into technology. What about the other way around? I'm curious whether you think there is a way for AI to merge, in some way, with the human brain. And then, I guess as a corollary to that, knowing what you know about human consciousness, do you think it's possible at all for AI to achieve consciousness or self-awareness?
Yes.
Say more.
Yes. You should have us back on the show for another episode on whole-brain interfaces, merging with artificial intelligence, and AI consciousness. I mean, that's a whole other couple of hours.
But you're a neurosurgeon. You believe AI can achieve consciousness?
I do, yeah.
How?
I mean, to me it's not even such a... I don't even see that as such a difficult or troubling question.
Okay, so we will have to do another hour then, is what you're saying. And then what about this idea of AI and human brains merging? Any thoughts on that?
It's already happening, right?
I mean, in some ways, that's exactly what we're developing. And we see the brain-computer interfaces of today, as you alluded to and as Michael mentioned, as, in some ways, the foundational layer of a merger between the brain and artificial intelligence. Right now, it has some very practical manifestations, which is effectively to become a different kind of user interface. As Michael mentioned, we've become accustomed to certain ways of interacting with the digital world, usually with voice or hand control. But the technology that we're building enables direct brain-to-digital-world control. Right now, almost of necessity, because so much technology is built around voice and gestural and hand motor control, what we're doing is kind of a two-step bridge between neural intent and a conversion to, for example, typing on a keyboard, moving a cursor, or speaking commands to a computer. But that's just an artifact of the way the user interfaces of today are built.
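To picture that two-step bridge, here is a minimal sketch; every name in it is hypothetical, invented for illustration rather than drawn from any real BCI software.

```python
# Hypothetical sketch of the "two-step bridge": decoded neural intent is
# first translated into the input events today's interfaces expect
# (keystrokes, cursor moves). All names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Intent:
    kind: str        # e.g. "type" or "move_cursor"
    payload: object  # e.g. a character, or a (dx, dy) tuple

def decode_intent(neural_features: list[float]) -> Intent:
    """Stand-in for a real decoder: neural features in, intent out."""
    # Trivial placeholder logic so the sketch runs end to end.
    if sum(neural_features) > 0:
        return Intent("type", "a")
    return Intent("move_cursor", (1, 0))

def to_ui_event(intent: Intent) -> tuple:
    """Step two: convert the intent into a legacy UI event."""
    if intent.kind == "type":
        return ("KEY_PRESS", intent.payload)   # emulate a keystroke
    if intent.kind == "move_cursor":
        return ("MOUSE_MOVE", intent.payload)  # emulate mouse motion
    raise ValueError(intent.kind)

print(to_ui_event(decode_intent([0.2, 0.5])))  # ('KEY_PRESS', 'a')
```

A direct brain-to-digital interface would skip the second step entirely and act on the decoded intent itself, which is what makes the latency point that follows matter.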
We already know, actually, that the latency between your brain and your hand, the ability to think something and type it, is around 25 milliseconds. So that puts a biological hard limit on how fast you can interact with the digital world, even though you're thinking faster than that, right? In a brain-computer interface, the latency of the system is right now in the single-digit milliseconds, and it will be even faster than that.
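As a back-of-the-envelope illustration of what those figures imply, using the numbers quoted in the conversation rather than measurements of any particular device:

```python
# Rough interaction-rate ceilings implied by the latencies quoted above.
# The figures are the speakers' estimates, not device measurements.
motor_latency_s = 0.025  # ~25 ms brain-to-hand
bci_latency_s = 0.005    # single-digit milliseconds for a BCI

print(f"motor-mediated ceiling: {1 / motor_latency_s:.0f} actions/sec")
print(f"BCI-mediated ceiling:   {1 / bci_latency_s:.0f} actions/sec")
```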
That's why Noland said it's kind of like a superhuman ability.
Exactly, right. Many participants in these clinical trials have described it in similar terms, kind of like the neural interface predicting what they're thinking. This gets to your question earlier about decision making: where and when is the decision made, and when can a neural interface infer or predict it? But I'm using this specifically to point you in the direction of what the user interface of the future, where brain-computer interfaces are pervasive, looks like. It looks like not a keyboard or voice activation or gestural control; it looks like something that is almost predicting or intuiting what you're thinking. And it's counterintuitive to how we interact with the world, because we're built and wired to have an intuition around interactions with the world that operate at this 25-or-so-millisecond latency. Anything that happens faster than that, you don't realize it has any latency at all, and so when you see it happen faster, it almost seems like magic. But that's what an inherent brain-computer interface looks like as a user interface. That's happening, and it's just the first step in what a merger with artificial intelligence looks like.
Okay, Michael, let's give you the last word. I want you to give us a realistic timeline of what the next couple of years are going to look like for this technology, and then what size of a business it could be if it works according to plan.
Yeah, I'll answer both those
questions. Thank you. But just to follow up on what Ben was saying, in layman's terms: we already are augmented by AI. It's just slow. We have to type. And I think there will be an ability to access information much more quickly, and also much more intuitively, with context, which is part of the reason that companies like Meta and Google are developing wearables. Context is going to really improve the functionality of these systems and how we use them, and I think that's going to be part of brain-computer interfaces as well.
In terms of timelines and market size, I think there is growing recognition that this is going to be a big industry. Morgan Stanley wrote a report last year that estimated a $400 billion TAM. That will build first in some of the medical applications that we described. We expect within five years there to be a Precision system on the market, and maybe one or two others, making a big clinical impact for people who are severely paralyzed, and then expanding from the roughly $2.5 billion-a-year market that severe paralysis represents to something 10 to 15 times that, on the order of $25 to $37.5 billion, as the applications for the technology become wider.
I think what we have at Precision that's unique within the context of brain-computer interfaces is the ability to commercialize a temporary version of the system. Ben mentioned that we have our first FDA clearance, which is a tremendous milestone for Precision, but it's also a sign of this industry's progress towards real commercial and clinical impact. That is not instead of our permanent implant; it is in addition to, and in parallel with, our permanent implant. The constraint of a temporary implant is that it can stay implanted for up to 30 days. But there are a number of applications for a 30-day implant, some of which Ben described in the disorders of consciousness, and we think they are going to create businesses that generate several hundred million dollars in annual revenue and do a tremendous amount of good in terms of human health.
All right, well, folks, if you stayed till the end, I promised you we were going to get into some weird and good stuff.
And I think that we delivered.
So thank you for staying with us all the way until this late moment in the interview.
I called it: I said that 2025 was going to be the year of brain-computer interfaces. And I think the conversation that you just heard really shows that it is unfolding exactly that way.
So Michael and Ben, so great to see you.
It's always fun to talk.
And I hope you do come back and give us that extra hour on whole brain mapping and whether AI and the human brain can merge.
Sounds good.
Looking forward to it.
All right.
Thank you so much for being here.
Thank you, everybody, for listening and for watching.
We'll see you next time on Big Technology Podcast.