That Neuroscience Guy - The Neuroscience of Brain-Computer Interfaces
Episode Date: November 6, 2022
Brain-Computer Interfaces, or BCIs, are designed to connect our brains to technology with varying uses. It may seem like science fiction, but BCIs allow us to control robot arms, use a computer, or even fly drones, all with our brains. In today's episode of That Neuroscience Guy, we discuss how BCIs work, and all their potential uses.
Transcript
Hi, my name is Olave Krigolson, and I'm a neuroscientist at the University of Victoria.
And in my spare time, I'm that neuroscience guy.
Welcome to the podcast.
You might have heard that people like Elon Musk want to implant electrodes directly into
our brains.
But why?
A form of mind control? No,
well, maybe sort of. Basically, they just want to listen to your brain. On today's podcast,
the neuroscience of BCI, or brain-computer interfaces, and The Matrix. So we talked about this briefly, but just in case you're listening to this as a
one-off, I want to quickly touch on the idea of neural communication. Your brain is made up of
neurons, 86 billion of them, and there's trillions of interconnections. And the primary form of
communication is electricity. When a neuron fires, it sends an electrical signal
down the axon to the end of the axon. It releases neurotransmitter, but that in turn generates a
small electrical signal, what we call a postsynaptic potential. So with all of those
neurons firing in your brain, there is a lot of electricity that's there, and that is the primary
communication method. So why is this
important to what we're talking about today? Well, we'll get there, but it turns out that
listening to that electrical communication can be quite interesting. Now, why do neurons
communicate this way? It's really the only option they have. So it may seem weird that just these
little electrical signals can
lead to all of the thoughts you're experiencing right now, but they can. Now, assuming that's
true, and I promise you it is, how do you listen to that electrical activity? Well, we talked about
this back in season one, but the primary means of doing this is electroencephalography, or EEG, or what some
people might even call brainwaves. Now, the way EEG works is quite simple, and to warm you up for
it, let's talk about the electrocardiogram. The electrocardiogram is when you put electrodes over
the heart muscle, and when the heart muscle contracts, there's an electrical signal. Basically an
electrical signal causes the muscle to contract. And if you have electrodes in the right spot,
you can see that electrical signal. It's that little spike that you see on the hospital shows.
And as we know, if your heart rate speeds up, those spikes get faster and faster. There's more of
them. And if your heart rate slows down, those spikes are fewer and further between.
And unfortunately, if your heart stops, those spikes go flat.
But you're listening to the electrical activity of the heart muscle
by just putting electrodes over top of it.
EEG is basically the same.
If you put electrodes on the head, you can detect electrical activity associated with muscles.
But you can also detect the electrical activity of all those neurons in the brain firing.
So even though that electrical activity is tiny, when you think of an individual neuron,
the reality is when we think the thoughts we're thinking or we try to move our body,
you need millions of
neurons firing. And if all of those neurons fire, well, then guess what? The EEG electrode sensors
can detect the signal. So where are we at? Neurons fire. It's primarily an electrical
form of communication. If you put electrodes on
the head, you can listen in on that activity, if you will. What if I told you,
though, that if we did that just using a simple headband, you could drive a robot?
Well, that's true. In my lab, we have a Lego robot that you can drive around the room
just by putting on a simple EEG headband that you can buy at Best Buy.
Now, that's what we call a brain-computer interface, or BCI. Well, how does
it actually work? Well, quite simply, the Lego robot is just waiting for a command from a computer.
If you send it one command, it goes forward. If you send it a different command, it goes backward.
So when we listen to your brainwaves, the EEG, with the headband, we can decode them.
Now we're not identifying your thoughts, like what you were having for breakfast or anything like that. It's a little bit simpler. We can determine whether you're concentrating a lot
or you're relaxed. We can determine if you're focused or not. And we do that by turning the EEG signal
into what we call frequency bands or power. And you might've heard of some of this if you're into
this kind of stuff. There's alpha power, there's theta power. And to be fair, I talked all about
it when we talked about EEG. So what we do is we transform the electrical signal using some
pretty complex math into these power bands or these
numbers, if you will. And when those numbers go up or down, we can use that as a control signal.
So for instance, if you're really relaxed, we can tell the robot to drive forwards.
And if you're really focused, we can tell it to drive backwards. And if you're somewhere in
between, then we tell it to do nothing. If you don't quite get that, imagine controlling a robot with your heart rate.
If your heart rate speeds up too much, the robot goes forwards.
If it slows down too much, it goes backwards.
And if it's just your normal heart rate, it stays the same.
We're basically doing the same thing, but we're just doing it with your brainwaves.
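To make that concrete, here's a minimal sketch of that kind of decoding loop in Python, assuming a single-channel EEG stream. The sampling rate, band edges, and thresholds are placeholders invented for illustration; a real system would calibrate them for each person.

```python
# A minimal sketch of band-power BCI control, assuming a single-channel
# EEG stream sampled at 256 Hz. The band edges and thresholds below are
# made up for illustration; a real system would calibrate them per user.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(window, low, high, fs=FS):
    """Total power in a frequency band, via a fast Fourier transform."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    return spectrum[(freqs >= low) & (freqs <= high)].sum()

def decode_command(window):
    """Map relative alpha power (8-12 Hz) to a robot command."""
    alpha = band_power(window, 8.0, 12.0)
    total = band_power(window, 1.0, 40.0)
    relative_alpha = alpha / total
    if relative_alpha > 0.5:    # very relaxed: lots of alpha
        return "FORWARD"
    if relative_alpha < 0.2:    # very focused: little alpha
        return "BACKWARD"
    return "STOP"               # somewhere in between

# One second of fake EEG: a strong 10 Hz alpha rhythm plus noise.
t = np.arange(FS) / FS
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(FS)
print(decode_command(fake_eeg))  # very likely "FORWARD"
```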
We've even used it to make a drone fly. So we can make a drone
take off and move forward. And it's the exact same principle. We listen to the EEG signal.
We decode it using a bit of math. The key thing, if you want to Google it, is called a fast Fourier
transform. I'm not going to take you through that on this podcast. It's a bit beyond the scope of
what we're really trying to do, the neuroscience of everyday life. But then we get those power values I told you about, and we can use that as a control signal
for the drone so it can take off. And it can be even simpler than that, as simple as
controlling a cursor on the screen. Now, why would you want to do that? Well, there's always
locked-in syndrome. Locked-in syndrome is a rare neurological disorder,
and it's characterized by complete paralysis of voluntary muscles, except for the ones that
control the eyes. So basically, people with locked-in syndrome are conscious, and they can
think and they can reason, but they can barely communicate with the outside world. They can't
speak and they can't move, but they can move their eyes around. Now, previously with locked-in syndrome, people
were using those eye movements and blinking to communicate. Blink twice for yes, once for no,
you kind of get the idea. But with EEG and putting electrodes on their head, we're able to interpret
their brainwaves. And remember, people with locked-in syndrome can still think. So right now,
it's still pretty basic, in the sense that the technology available allows these people to control a cursor.
They can move it around a screen.
And that allows them a little bit more complex responses than just yes or no.
So it's pretty cool because with BCI,
these people now have an avenue to communicate with the outside world.
And the technology is moving even further.
Right now, with BCI and locked-in syndrome, it's basically moving a cursor around the screen.
But I did some work recently in my lab with Dr. Alona Fyshe from the University of Alberta.
And working with her team, we were able to actually demonstrate that we could use EEG and decode it to identify words. So as opposed to just moving a cursor around,
we could actually identify words that were hidden within the EEG pattern. That's kind of cool
because it's the first step along the way to possibly reading your mind, in a sense. Now, let's
not get carried away. I'm pretty proud of that study because it's pretty cool, but the reality
is we were getting words with a reasonable hit rate, but nowhere near 100%. But hopefully we'll get there.
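For a flavor of what decoding words at "a reasonable hit rate" might look like computationally, here is a hedged sketch of a simple classifier trained on simulated EEG features. This is not the pipeline from the study described above; the feature dimensions, the two-word vocabulary, and the data are all made up for illustration.

```python
# A hedged sketch of word classification from EEG-derived features.
# NOT the actual pipeline from the study described above; the feature
# dimensions, the two-word vocabulary, and the data are all simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=42)
n_trials, n_features = 200, 32  # e.g., band powers across electrodes (assumed)

# Labels: 0 = "yes", 1 = "no" -- a hypothetical two-word vocabulary.
y = rng.integers(0, 2, size=n_trials)
# Simulated features: each word shifts the feature means slightly,
# so the classes overlap and classification stays imperfect.
X = rng.normal(size=(n_trials, n_features)) + 0.25 * y[:, None]

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
# Above chance (50%), but nowhere near 100% -- a "reasonable hit rate".
print(f"hit rate: {scores.mean():.0%}")
```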
Now, these neural signals can be used for other things. In more extreme cases, the neural
signals that are
being decoded in the EEG are being used to control robotic arms. Now, I've already told you we can
control a Lego robot and a drone, but in this case, it's work by Dr. Andrew Schwartz, and
he's really taken it a step further. As opposed to using EEG, he's actually got wires going into
the brain itself, and those wires are picking up the
same electrical signal. It's just a little bit cleaner. And in some of his early work, he had a
monkey that had a robot arm. So imagine a monkey with a robot arm. And the monkey was able to feed
itself simply by thinking. Now, it wasn't trained to control the arm. It was just sending the same communication
that it would normally send to move the arm, and it was moving a banana towards its mouth.
Now, that's pretty cool because there's no training. It's not like it's saying,
turn on the robot arm. It's just thinking naturally. And Dr. Schwartz has taken this
work even further. He's actually been working recently with a robotic
arm on a person with quadriplegia. So this is someone who's paralyzed from the neck down
due to a spinal cord injury, and he's got a woman who, using a robot arm, is able to feed herself. Again, not by learning
how to use a robot arm specifically, but simply by thinking about moving her arm naturally. And the future of
that, well, they're already working on a full walking exoskeleton, basically something you
would wear over the entire body and you would use your brain to control it. Now I mentioned here
that Dr. Schwartz wasn't using EEG per se. He was putting wires actually into the human brain.
Now, why does he want to do that? You know, why do people like Elon Musk want to put electrodes
into our brains? Well, it's actually all about what we call signal to noise ratio. When you put
the electrodes on the outside of the head, there's a lot of tissue between the electrode and the
electrical signal that you're interested in. And that basically distorts the signal. However, if you put the electrode on
the brain itself, most of that distortion disappears. The signal-to-noise ratio is
increased: more signal, less noise. That means a cleaner EEG signal, and that means the control signals will be more accurate.
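Signal-to-noise ratio is easy to illustrate numerically. In this sketch, the same simulated 10 Hz "brain" signal is buried under heavy noise, standing in for recording through the scalp, or light noise, standing in for electrodes on the brain itself. The noise levels are invented for illustration, not real tissue physics.

```python
# A tiny illustration of signal-to-noise ratio: the same simulated 10 Hz
# signal under heavy noise (scalp stand-in) vs. light noise (cortical
# stand-in). Noise amplitudes are invented for illustration.
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

t = np.arange(0, 1, 1 / 256)
brain = np.sin(2 * np.pi * 10 * t)            # the signal we care about
scalp_noise = 2.0 * np.random.randn(len(t))   # heavy distortion through tissue
cortex_noise = 0.2 * np.random.randn(len(t))  # electrodes on the brain itself

print(f"scalp EEG SNR: {snr_db(brain, scalp_noise):.1f} dB")   # negative: noisy
print(f"cortical SNR:  {snr_db(brain, cortex_noise):.1f} dB")  # positive: clean
```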
So while it might sound a little bit freaky to have electrodes actually on the surface
of the brain, think of all of the potential that could be there. You could have a computer
literally reading your thoughts. Now at the outset I mentioned The Matrix. If you're not familiar with The Matrix movie series,
basically, the idea is that people are essentially living in a world
where they're frozen in these cocoons.
I'm not going to spoil the whole movie.
But what they experience is a world around them.
So they have a full sensory experience.
So just as it's possible to use neural firing to control things, what if you could stimulate sensory neurons? And that's what
The Matrix is all about. So for instance, if I could stimulate the sensory neuron that was for
your left biceps muscle, you would feel the muscle contract, even though the muscle isn't actually contracting. And if I
could stimulate a sensory neuron for smell, then you would perceive that smell even though it wasn't
present. And if I could stimulate the right visual neurons, you would see things that weren't there.
Now, is this even possible? Well, yes and no. People are starting to experiment with this and have been
for quite some time. By stimulating certain sensory neurons, you can get brief sensations
or brief smells. But to create the entire world around us, well, you would have to have a single
stimulating electrode, if you will, for every single sensory neuron. And that's just not possible yet.
But, you know, maybe one day it will be possible. Now, finally, mind control. I hinted at it at the outset,
but not with EEG. Not to the best of my knowledge. But then again, maybe I'm already under someone
else's control. Just kidding. So there's our episode. Hopefully you learned a
little bit about the idea of a brain-computer interface and the reason that people might want
to put electrodes inside your brain. Now remember we have our website thatneuroscienceguy.com.
There you'll see links to Patreon where you can support us. All the money is going to my graduate
student Matt. He's the one that does all the sound production. You can also find a link to our Etsy store where you can buy some t-shirts.
I think they're kind of cool and I wear them.
They're a little bit neuroscience nerdy, but what the heck?
And of course, there's our blog now where I'm putting up the written material for the
Neuroscience Bites, and there's links to the podcast and the other things we do, including my lab and
the research that I'm a part of. Now, we're still looking for some ideas. We've got some great ideas, but if you can,
follow me on Twitter at ThatNeuroscienceGuy, and you can DM me some show ideas, or you can email us
at ThatNeuroscienceGuy at gmail.com. And of course, thank you so much for listening. Please subscribe.
It means a lot to us. That's the show for today. Thanks so much for listening.
My name is Olave Krigolson, and I'm that neuroscience guy.
I'll see you soon for another Neuroscience Bite.