That Neuroscience Guy - The Neuroscience of Artificial Intelligence: Part 2

Episode Date: July 4, 2022

Last week, we discussed how artificial intelligence is designed to learn the way human brains do. But when AI learns, how exactly does it use that knowledge to make decisions? In today's episode of That Neuroscience Guy, we discuss the neuroscience of artificial intelligence decision-making.

Transcript
Starting point is 00:00:00 Hi, my name is Olav Krigolson, and I'm a neuroscientist at the University of Victoria. And in my spare time, I'm that neuroscience guy. Welcome to the podcast. And one more time, kia ora, greetings from New Zealand. It's my last Sunday here, and then I'll be heading home to Canada, so we'll try to keep the podcast a bit more on track. On today's podcast: the neuroscience of artificial intelligence, part two. In last Sunday's episode, which we released late, I talked a lot about the history of artificial intelligence and how artificial intelligence learns.
Starting point is 00:00:42 So, how AI can learn just as humans learn. And that's the reason I want to cover this: I find it so fascinating that we can come up with mathematical algorithms that parallel how humans learn and how humans make decisions. Now, let's dive right into it: the neuroscience of how AI makes decisions.
Starting point is 00:01:04 We've covered learning. Basically, in the early stages of AI, that reinforcement learning stuff that we talked about, what was actually going on was the AI was learning the value of choices. So it was learning, hey, if I have a choice to drive the robot forward or go left or right, there's a value associated with each choice. And if you think back to the episode, basically, if you remember the vacuum example,
Starting point is 00:01:33 every time it hit a wall, it would assign that space a minus one. And therefore, the choice to go straight forward that point was a minus one. It was a bad idea. Whereas the choice to go, say, left or right might have been positive values because it takes it to locations it wants to be. So decision-making from an AI perspective paralleled exactly what humans do. And if you think back to podcasts I've made on decision-making, you might remember that the simplest form of decision-making is to assess the expected values of choices. So you look at the value for going straight ahead or for going left or right if you're
Starting point is 00:02:08 the robot and you just choose the highest value. And that's how you make your choice. The human example would be like, what do you want for dinner? Do you want to have sushi or pizza? And you essentially choose the highest value. So the whole point of reinforcement learning is to learn the values of choices. So in any situation, an AI intelligence will choose the highest value. And like I said, that's the simplest form of AI decision making,
Starting point is 00:02:37 is just this choosing the highest value approach. But I mentioned on the first half of the Artificial Intelligence podcast that what modern AI does is use what are called neural networks. And I want to dive into neural networks as far as I can go, given that this is an audio podcast and I can't draw a bunch of cool diagrams or pictures. Because the cool thing about neural networks is they've essentially been built to parallel what people think happens in the actual brain. Now, how does that actually work? Well, mathematically, the building block of a neural network is trying to represent a neuron. Now, neural networks can never cover the number of neurons in the actual brain, 86 billion.
Starting point is 00:03:26 And there's over a thousand trillion synapses. So neural networks are always a simplified version of the brain. With that being said, there's actually some research labs around the world that are working on what's called the Human Brain Project, where they're trying to build a neural network that has the same number of artificial neurons and artificial synapses as the human brain. The researchers there firmly believe if they can pull this off, they'll basically get human thought out of this network.
Starting point is 00:03:55 That sounds a little bit like science fiction and there's no guarantees it'll work, but that's the idea. Let's just get to the basics of a neural network. How do you build an artificial neuron using math? Well, if you think about the way a neuron works, a neuron has input. All right. So neurons have dendrites and they receive input most of the time from other neurons. And we've talked a bit about sensory neurons and motor neurons.
Starting point is 00:04:17 But if you think of the neurons in the brain, all of that input is just other neurons. And that input is electrical in nature. So neurotransmitter is released. It binds at the synapse. And excitatory and inhibitory postsynaptic potentials are generated. These are little electrical charges that are either positive or negative. And then essentially what happens is in the cell body of the neuron, these charges are summed up.
Starting point is 00:04:41 And if there's enough charge, the neuron fires. It communicates. It generates what's called an action potential. And it's signaling the next neuron. And if there's enough charge, the neuron fires. It communicates. It generates what's called an action potential, and it's signaling the next neuron. And guess what? The axon of that neuron is connected to the dendrite of another neuron. And this way, you've got layers of neurons communicating with each other and sending messages. Now, how do you do this mathematically? Well, you have to have an input to a neuron. All right, so imagine you have a mathematical dendrite. It would just be a variable in a computer program. And if you want
Starting point is 00:05:11 that dendrite to have input on it, you just assign it a value. Call it a one for simplicity's sake. And if you want the dendrite to not have input, you would assign it a zero. So the neuron is either on or off. In other words, an action potential has fired and that mathematical dendrite is receiving input, a one or a zero. Now, it's a little bit trickier than that because you want your neural network to parallel how the brain works. And how does the brain work? Well, we know that neural connections have different strengths. Strong connections mean that neuron A will excite and fire neuron B 100% of the time, and weak connections basically mean that neuron A may fire neuron B. And the actual strength comes from the amount of neurotransmitter being released, the number of postsynaptic receptors,
Starting point is 00:06:03 and the number of connections that are actually present. So physically, neurons have a strength of a connection or a weight of a connection. Mathematically, we just represent that with a number between 1 and minus 1. And it can be anything. It could be 0.3 or negative 0.4. So to get the actual input from a given mathematical dendrite, you just take the input, one or zero, and multiply it by the weight. So imagine that the weight is 0.5 and the input is one. The input from that one mathematical dendrite is 0.5. Now, of course, there's lots of dendrites. So for a given artificial neuron, there might be lots of inputs. So a given artificial neuron will have an input pattern that will be a bunch of ones and zeros, and it will have a bunch of weights. And if you multiply the ones and the zeros by their respective weights, and you sum that together, you've got the mathematical input to your
Starting point is 00:06:59 artificial neuron. Now I know that sounds like a lot, but just try to visualize that and use it as an example. Imagine you're trying to decide what move to make in tic-tac-toe. Alright, well you could represent the board by nine numbers. Whether there's an X in the square, alright, you can make that a 1. Alright, or whether there's an O in the square, you can make that a 0. And if the computer is trying to make a choice of where to move, you could imagine there's a weight for each square. So each square has a weight, and good weights would mean these are good squares to go in, and poor weights would mean that those are bad squares to go in. So for the neural network to decide what to do, you're going to take the input, which is a mathematical representation of the board pieces,
Starting point is 00:07:44 ones and zeros, multiply by the weight for each square, and you're going to sum that. And you actually do that for every single square, and that gives you basically a number, and it's an indication of what you should do. Now there's one tricky step here that I'm hesitant to mention, but what you actually do in a neural network is you take all those inputs and you multiply them by the weights and then you actually multiply that or you put that through a bit of math called an activation function. What that does is it scales the input to a zero and one. Now you have to really scale this up. You've got this input level, which is a whole bunch of neurons that are either ones or zeros,
Starting point is 00:08:25 showing things come in. And there's a whole bunch of weights for each connection. But neural networks have two other layers. They have an output layer, which is the move you should make, but there's a middle layer, which is called the hidden layer. Now, this is a tricky concept to get, but basically the hidden layer are a bunch of neurons that receive the each one receives input from parts or all of the input layer and it's the hidden layer that actually is making the decision and all it's doing is is
Starting point is 00:08:54 basically saying given certain board states you do all the math multiply all your ones and your zeros and your weights and that hidden layer is learning different board states if you're playing tic-tac-toe or chess, alright, or if you're recognizing images, the hidden layer is learning the patterns that comprise the image, or if you're learning to create music, which was an example I used before, it's learning what notes go together, and then the output layer is just what the neural network produces. But the key concept here, and if you don't get anything else, try to get this. There's an input layer, which represents what's going into the brain or into the neurons in the brain. There's a hidden layer, which is basically the interneurons that where you, they're sort of
Starting point is 00:09:40 doing the math, if you will, to decide what to do. And then there's an output layer, which is what you actually do. Now, the human brain has more than three layers, and modern neural networks have multiple hidden layers to try to capture the human brain in what it can do. But what's cool with neural networks is what they actually can do. So what's cool about neural networks is this is how Google's DeepMind project learned to play Go. All right, they had an input layer, which was the board state of Go. They had this hidden layer. Now, it's important to realize when a neural network starts to learn, the hidden layer is just random. It's making random moves. But by basically multiplying the input into the hidden layer,
Starting point is 00:10:26 the hidden layer learns every time the neural network wins or loses the patterns that matter for winning or losing. And the output layer is literally just a representation of the board, again, that's telling the neural network where to move or in the case of image classification, where to move or in the case of image classification, whether it's you or someone else. So the neural network is simulating the layers of complexity in the human brain. It's doing this by mathematically representing individual neurons. Those neurons have inputs to them, which are a combination of whether the neuron is on or off and these weights, which is simulating the strength of the connections. All of that's fed into this hidden layer, which is just a layer of artificial neurons that takes in inputs. And the key to understanding the hidden layer is that's what learns the patterns.
Starting point is 00:11:14 So in a given board state, that's the move you make, all right? Or for a given image, this is the correct label. And then the output is just literally the output. And like I've said, these neural networks underlie almost everything in AI these days. You can train a neural network to do almost anything. To identify smells, to identify sounds, to identify pictures, to make moves in video games, to drive cars. All right? This is the key tech behind these things are neural networks. This is the key tech behind these things, are neural networks. Now I hope that gives you a little bit of understanding of the neuroscience of artificial intelligence.
Starting point is 00:11:51 The first episode was all about how neural networks learn. The second episode was all about how neural networks make decisions. And they do it either through choosing the highest value or through these neural networks that output basically a decision choice. And one point I want to make quickly before I sign off is remember that you put these two things together. The whole point of how AI learns is to train neural networks or to learn the values for certain choices. And that's a little bit about how artificial intelligence works. And the reason I, again, I wanted to cover it one last time is because this is how your brain works.
Starting point is 00:12:28 We believe that the learning algorithms that AI uses are similar to the ones that are implemented in our own brains. And we believe that things like neural networks and value representations are how we make choices. My name is Olaf Kregolsen, and I'm that neuroscience guy. Thank you so much for listening. And again, apologies for delays while I've been on holidays, but we're now up to date. I'll be back in Canada and back on track for the rest of season three. Remember, you can send us episode ideas. Follow me on Twitter at that Neurosci guy, and you can just DM me directly. Hey, this is a cool idea or something I want to know about the neuroscience of daily life. Of course, subscribe to the podcast and please check out the website.
Starting point is 00:13:04 Thanks to everyone that's following us or thanks to everyone that's supporting us on Patreon. science of daily life. Of course, subscribe to the podcast and please check out the website. Thanks to everyone that's supporting us on Patreon and by buying t-shirts from our Etsy store, all the money goes to graduate students in the Kregolson Lab. That's all I've got for this week. I'll see you soon for another neuroscience bite.
