That Neuroscience Guy - Grabbing an Apple

Episode Date: February 28, 2021

Seems simple, right? Wrong. Grabbing an apple pushes your brain to the max. In today's episode, we will discuss the neuroscience of human movement. ...

Transcript
Starting point is 00:00:00 Hi, my name is Olav Krigolson, and I'm a neuroscientist at the University of Victoria. And in my spare time, I'm that neuroscience guy. Welcome to the podcast. Every day we interact with a bunch of objects. We pick up forks, we reach out for an apple or a cup of coffee. And we sort of take this for granted. But have you ever thought about how complex this really is? Most of us didn't do that well in high school physics, but our brains are solving complex physics problems instantaneously, with little to no effort, all of the time. Every time you reach out to grab an apple, you're solving those physics problems,
Starting point is 00:00:47 and you're not even really thinking about it. Today, we're going to talk about the neuroscience of grabbing an apple. Let's start by imagining that you're sitting at a table, and you want to reach out and grab an apple. To begin to solve this problem, your brain needs two key pieces of information. You need the location of the apple, and you need to know where your hand is. So, let's deal with the location of the hand first, and you can begin with an experiment at home. Close your eyes and touch your nose.
Starting point is 00:01:20 All right, how were you able to do that without missing? You didn't poke yourself in the eye, at least I hope you didn't. Now let's do a slightly different version of the experiment. Look for something on the table: a key on your keyboard, a pen, or something like that. Now close your eyes, reach out, and touch it. I bet you found that a little bit harder than touching your own nose. So why is that? Well, your brain determines the location of your hand through sensory information,
Starting point is 00:01:54 what we call afferent feedback. Within each of your muscles and within your joints, there are receptors. These receptors are sensitive to the current state of the muscle, whether it's contracted or not and how much it's contracted, and within the joint, they're sensitive to the joint's current position. All of this information, along with a lot of other information, for instance about temperature, pain, and movement, is sent up to a part of the brain called the somatosensory or primary sensory cortex. It's located in the anterior part of the parietal cortex, just behind the central sulcus, roughly at the midline between the front and back of the
Starting point is 00:02:33 brain. So what the somatosensory cortex does is take in this information and use it to build a representation of the body in space. Now, we'll talk about that in more detail in another episode, but what you can pull from this is that the somatosensory cortex is basically building a representation of where your hand is, so your brain knows where it is. Now, what about the apple? Well, visual information comes in through your eyes and goes to the back of the brain, to an area called the primary visual cortex. From there, that information flows to a bunch of different parts of the brain. In terms of grabbing an apple, we'll deal with the information that's flowing up to
Starting point is 00:03:15 the posterior parietal cortex. There, that visual information is integrated to build a location of the world around you. So as you look around the room you're in right now, the posterior parietal cortex, or the PPC for short, is basically building up your visual representation of space. It's placing the apple in the world. So the posterior parietal cortex is basically placing the apple in a mental representation of the world that you have inside your head. So your brain is working with the somatosensory cortex to establish the location of the hand through sensory or afferent feedback, and the posterior parietal cortex is building up
Starting point is 00:03:59 your visual representation of the space around you, including the location of the apple. representation of the space around you, including the location of the apple. What happens next? Well, a bunch of parts of the brain work together. There's two movement planning regions in the frontal part of the brain called the supplementary motor area and the premotor cortex, or as it's sometimes called, the lateral premotor area. These two parts of the brain, the supplementary motor area and the premotor cortex, work with the cerebellum, which is at the back of the brain and slightly below, just tucked in behind the brainstem. And they use what's called an inverse model in the cerebellum. And they use this in conjunction with that sensory information, the location of the hand and the location of the
Starting point is 00:04:41 apple, to compute a limb trajectory. And that limb trajectory is then used by the supplementary motor area and the premotor cortex to come up with the motor plan. Now once your brain has established the position of the hand from the somatosensory cortex and the location of the apple from the posterior parietal cortex, the posterior parietal cortex integrates this information through what's called a visual motor transformation. And effectively, what your brain is doing is some very complex trigonometry.
Starting point is 00:05:14 It's trying to figure out the path that the hand needs to take to get to the apple. Now, once that visual motor transformation takes place, movement planning regions within the brain begin to work. In the frontal part of the brain, you have two key regions, the supplementary motor area and the premotor cortex. These two regions work with the cerebellum, which is at the back and bottom of the brain. Within the cerebellum, it's proposed there is an inverse model. Now, the inverse model is a bit complicated to explain, but in a nutshell, what the inverse model is supposed to do
Starting point is 00:05:53 is work with these movement planning regions and the sensory information that comes from the visual motor transformations to compute a limb trajectory. In other words, it's trying to figure out the path through space that your hand needs to move to grab the apple. And once the inverse model is determined at that path, it works with the supplementary motor area and the premotor cortex to add specific movement commands. What muscle will fire? When will it fire? And for how long will it fire? And what order will it fire in?
Starting point is 00:06:25 For instance, we might want to fire the triceps muscle before we fire a forearm muscle as we reach for the apple. And we might want to fire the tricep muscle for 100 milliseconds. And with a certain amount of force, we might want to fire an extensor muscle in the forearm for a shorter period of time after the triceps muscle fires. And once the supplementary motor area, the premotor cortex, and the cerebellum have done all this using the inverse model and the visual motor transformation, you generate what's called a motor plan or a motor command. Now, that motor command is then sent to the primary motor cortex, which is sometimes referred to as area M1. And from there, neurons actually fire that contract muscles,
Starting point is 00:07:12 and you begin to move. Pretty complex, just to grab an apple. But it doesn't end there. Right as you're about to move, a copy of the motor command is sent back to the cerebellum, a different area than we were talking about before. And there, there's another type of model, what's called a forward model. And what the forward model does is it takes that copy of the motor command.
Starting point is 00:07:37 And let's just pause on that for a second. That copy is called an efference copy. One way to think about it that I like is this: imagine that you're giving your order to a waiter. That order is like the motor command that is sent to the primary motor cortex. But imagine you also keep a copy of your order in your pocket, just to make sure that you get what you actually ordered. That's the efference copy, the copy of the motor command. And what the forward model within the cerebellum does is evaluate that copy of the motor command to see what's going to happen. Basically, it's saying: if I execute this series of motor commands, where is my hand going to end
Starting point is 00:08:20 up? In other words, will the movement be successful? Now, the cerebellum working with our old friend, the posterior parietal cortex is doing a simple error computation. If the movement is going to be successful, let's just keep on going the way we are. However, if the movement is not going to be successful, as in I'm going to miss the apple, then let's modify the motor command so that we can achieve movement accuracy. This is actually why our movements as adults are so smooth. As we're moving, the forward model within the cerebellum is constantly evaluating the motor command while we're moving. And that's a key point here. These evaluations occur during the movement. So while we're moving, the forward model is constantly saying, is this going to work?
Starting point is 00:09:11 Is this going to work? Is this going to work? Are we going to achieve the goal we want? And if it detects an error, it fixes that error almost instantaneously. And as a result, we have smooth movements. If you look at babies reaching for things, this is why their movements aren't smooth, because their forward models aren't good at estimation, and as a result, they have jerky movements. And as they learn to become what we would call coordinated, what they're really learning and what's really happening
Starting point is 00:09:39 is the forward model and the inverse model within the cerebellum are becoming more and more accurate in the math that they do. And I'm sorry to say, this is still not all. While we're reaching for the apple, the forward model is doing its math to see if we're going to miss the target or not. The posterior parietal cortex is still sitting there, listening to visual input from the primary visual cortex to monitor the location of the apple, and the primary sensory cortex is monitoring the limb just in case there's a change. Imagine someone moves the apple while you're reaching for it, or someone bumps into you. The posterior parietal cortex is integrating this sensory information in case something changes in the world and we have to adapt the movement even further. Let's summarize.
Starting point is 00:10:27 So, we want to grab an apple. The posterior parietal cortex takes in visual information from the primary visual cortex and uses it to determine the location of the apple in space. The primary sensory cortex computes the location of the hand. This information goes through a visual motor transformation, which is then sent to the cerebellum, which then uses an inverse model to compute a limb trajectory, the path of your hand to the apple, and then works with the supplementary motor area and the premotor cortex to come up with a motor plan. That motor plan is sent to the
Starting point is 00:11:01 primary motor cortex and you begin to move, but a copy of the motor command is sent to the cerebellum where a forward model is used to see if the movement is going to be successful or not. And if it isn't, the movement command is modified to make sure it is successful. And throughout all of this, the posterior parietal cortex is bringing in visual information about the location of the apple and information from the primary sensory cortex about the location of the limb to make sure the world hasn't changed. So there you go. Grabbing an apple is not as simple as you think. My name is Olof Kregolsen, and I'm that neuroscience guy. Thank you for listening to the podcast. Check out my website at www.olofkregolsen.com or follow me on Twitter at That Neurosag Guy. Thanks for listening.
