Embedded - 187: Self-Driving Arm

Episode Date: February 15, 2017

Crossing machine intelligence, robotics, and medicine, Patrick Pilarski (@patrickpilarski) is working on smart prosthetic limbs.

Build-your-own-learning-robot references: Weka Data Mining Software (in Java, for getting to know your data), OpenAI Gym (for understanding reinforcement learning algorithms), Robotis servos for the robot (AX is the lower-priced line), and five lines of code:

pred = numpy.dot(xt,w)
delta = r + gamma*numpy.dot(xtp1,w) - pred
e = gamma*lamda*e + xt
w = w + alpha*delta*e
xt = xtp1

Patrick even made us a file (with comments and everything!). Once done, you can enter the Cybathlon. (Or check out the Cybathlon 2016 coverage.)

Machine Man by Max Barry
Snow Country Tales by Suzuki Bokushi
Aimee Mullins and her many amazing legs (TED Talk)

Patrick is a professor at the University of Alberta, though he is a lot more than that: he is the Canada Research Chair in Machine Intelligence for Rehabilitation, an Assistant Professor in the Division of Physical Medicine and Rehabilitation, and a principal investigator with both the Alberta Machine Intelligence Institute (Amii) and the Reinforcement Learning and Artificial Intelligence Laboratory (RLAI). See his TED talk: Intelligent Artificial Limbs.
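The five lines above are the core of a linear TD(λ) learner: predict, compute the temporal-difference error, update the eligibility trace, update the weights, advance. Here is a minimal self-contained sketch; the feature vectors, reward, and parameter values are invented for illustration, and only the five update lines in the loop body come from Patrick's snippet (lamda is spelled that way to dodge Python's reserved word lambda):

```python
import numpy as np

# Setup invented for illustration; only the five update lines in the
# loop body are from the show notes.
n = 8                                  # number of features (arbitrary)
w = np.zeros(n)                        # learned weights (the predictor)
e = np.zeros(n)                        # eligibility trace
alpha, gamma, lamda = 0.1, 0.9, 0.8   # step size, discount, trace decay

rng = np.random.default_rng(0)
xt = np.zeros(n)
xt[0] = 1.0                            # one-hot feature vector for "now"

for step in range(500):
    xtp1 = np.zeros(n)
    xtp1[rng.integers(n)] = 1.0        # next observation (random here)
    r = 1.0                            # reward on every step (made up)

    pred = np.dot(xt, w)                        # current prediction
    delta = r + gamma * np.dot(xtp1, w) - pred  # TD error
    e = gamma * lamda * e + xt                  # decay and bump the trace
    w = w + alpha * delta * e                   # credit along the trace
    xt = xtp1

# The prediction estimates discounted future reward; with constant r = 1
# and gamma = 0.9, it climbs toward r / (1 - gamma) = 10 over time.
print(np.dot(xt, w))
```

On real hardware, xt would come from sensor readings (say, Robotis servo positions) and r would be whatever signal you want the robot to learn to predict.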

Transcript
Starting point is 00:00:00 Hello, and welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Patrick Pilarski. I think we're going to be talking about machine learning, robotics, and medicine? That's got to be cool. It's your show. You should know. I should know. Hi, Patrick. Thanks for joining us today. Hey, thanks so much. It's great to be on the show. Could you tell us about yourself as though you were introducing yourself for a panel?
Starting point is 00:00:38 Sure. I never like introducing myself on panels, but in this case, I'm a Canada research chair at the University of Alberta here in Edmonton, Canada. And I'm specifically a research chair in machine intelligence for rehabilitation and sort of putting together two things you don't usually see in the same place. A lot of what I do is working on connecting machines to humans. So bionic body parts, artificial limbs, and other situations where people and machines need to interact. So we look at making humans and machines better able to work as a team. So you're working on Darth Vader? No, he's working on a $6 million man.
Starting point is 00:01:15 Yeah, that's the more positive spin on things. Yes, we're definitely working on that. We're working on, well, hopefully it's not $6 million. The health system wouldn't be too thrilled with a $6 million man. Now it would be $6 billion anyway. A little lighter price tag might be good, yeah. This is the point of the show where normally we ask you a bunch of random questions, but the lightning round is on vacation.
Starting point is 00:01:37 Oh, no. So we only have one question to get to know you, and that is who would you most like to have dinner with living or deceased and why well i think the the most immediate person i'll have dinner with is my dear wife uh later on this evening and that's actually the one i like having dinner with the most um but so the tricky thing is that i don't like most i'm not a guy that does most or best but uh i will answer in a different way is that I've been currently reading a very cool book from an author named Suzuki Bokushi. And Suzuki Bokushi was
Starting point is 00:02:12 actually alive, you know, end of the 1700s. So long dead, I guess in this answer, but he lived up in the really snowy bits of Japan. And so this is a very cool book, all these little snapshots of one of the snowiest places that Japan could imagine. And it's even snowier than here in Edmonton. So I'd love to sit down with him over a cup of tea or some kind of nice evening meal and chat about how they deal with all their snow. Because we've got some here, but wow, they get really socked up in certain parts of Japan. So I think that would probably be the one I'd pick for now, just recently. All right.
Starting point is 00:02:45 That is pretty cool. And so now I wanted to ask you another strange question. Do we have the technology? Do we have the technology? Okay. Sorry. How so? It's a $6 million man quote.
Starting point is 00:03:04 Oh, sorry. I'm off my game today. I am so sorry. Yes and no. Yes and no. We can rebuild him. We can rebuild him. Bionics, prosthetics that are smart.
Starting point is 00:03:18 This is crazy talk. I mean, that's just, that's very cool on one hand. On the other hand, what do you do? I mean, what does this mean? So, what is like a really good reaction? And it's actually probably the most common reaction, I think, when we start to say, hey, yeah, you know, you might have an artificial limb and it might be learning stuff about you. But it's actually not that crazy. So, I mean, if you bear with me a little bit, when someone's lost a part of their body due to like an injury or an illness, they need to have a sometimes assistive technologies. They need technologies that are able to replace
Starting point is 00:03:56 or sort of put back some of the function that was lost. And I like the really tricky bit here is that the more things you lose, in many cases, the more things you have to put back. The problem though, is that the more things that are lost in the case of like an amputation or something, if you're losing an arm, you need to restore the function of the arm, but you have less places really to record from the human body. You have less sort of windows into what the person really wants. And so a very natural way to start thinking about how you might want to start putting back that function and understanding what the person wants isn't even sometimes to be able to try and pry out more signals from the human body, but it's, you know, why don't we just make the
Starting point is 00:04:33 technology itself just a little bit smarter? And then it can know things like, hey, you know, it's Thursday and I'm making soup. Okay, cool. I'll be able to fill in the gaps. I'll be able to sort of guess at what the person might want or what the person might do. And it makes it a little bit more seamless and a little bit more natural. So you can do more with less. This is sort of like without the thermodynamics police coming in and locking us all up. I mean, we're really trying to get something for nothing. And machine intelligence helps us get a lot for a little.
Starting point is 00:05:01 So a smart prosthetic hand helps us do more with less. I think that's the key thing. And so it sort of takes the blah to a, oh yeah, maybe, maybe that makes sense. So it's more like a self-driving arm. More like a self-driving arm. Exactly. And I, this is, this is actually very much, that's a good analogy because you can, a lot of the systems we think about that, that do stuff for us, um, you give really high level commands. You do so the big picture thinking and the technology fills in the gaps. We see this with everything from our smartphones to our
Starting point is 00:05:31 computers to maybe someday soon, I hope someday soon, the vehicles we have even up here in Edmonton. And the nice thing about this is that, yeah, you could say, you know what, the self-driving arm is you're giving the high level commands. But I mean, we just can't in some cases, for the bionic body parts case, we can't even measure the right signals from the body. We can't sometimes get the information out for, say, fine finger control to play piano or catch a ball. But the system could say, hey, in this situation, you're making these big kinds of motions. I bet you want your fingers to coordinate in this kind of way.
Starting point is 00:06:02 So you may be able to play that good lick on the piano. So yeah, it's kind of like a self-driving arm. But without the sort of scary, the bit that people always get scared about is like sort of the Dr. Octopus side of things. Oh, my arms are like controlling me or they're doing things that I don't want them to do. I think if we've done everything right,
Starting point is 00:06:19 it's like a really good human team, right? A good sports team or a good team in any other sense is that they work together so seamlessly that it doesn't seem like one is controlling the other, but everybody's working really efficiently towards achieving the same goal. I think that's where we're going with smarter, with smart parts and better bionic bits. I think we've all seen those horror movies where, you know, the arm made me do it, but I don't want to. Exactly, exactly. And one of my students is actually, one of my graduate students is actually working on a, what she wants is a, you know, a hand that's,
Starting point is 00:06:47 just a disembodied hand that might, you know, crawl across the room and go get stuff for you. We've got another set of students working on what we call prosthetic falconry. So you might, instead of having an arm attached to your body, have a quadcopter with a hand that flies across the room and picks stuff up and comes back for you with a prosthetic guyer, essentially.
Starting point is 00:07:03 So we're doing some cool stuff like that. And then you could imagine,etic guy or essentially so we're uh we're doing some cool stuff like that and then you could imagine yeah okay the thing the thing is actually pretty autonomous in the fact that it could actually you know move around the room a little um but but for the most part for the most part it's uh yeah the chance of the system actually controlling you back is very very low i think the doc aquari doc aquari is not something that we we we have to take too seriously although uh we do have a no doc oc rule in my lab so you can put one extra body part on your body you can put two on the minute you put four extra body four extra limbs on your body you're kicked right out of the lab so we have a no doc oc rule all right but before we get any deeper into that and while alicia is trying to
Starting point is 00:07:39 get off the floor from laughing um i did i did want to kind of establish a baseline. I don't think I understand and I don't think a lot of other listeners might understand what the current state of the art is. Like if I were to lose my forearm, heaven forbid next week, what would, you know, if I had the best insurance in the world, what would I end up getting as a prosthetic and what would that be capable of doing? Yeah, this is a great place to start. So I really, first, I really hope that you actually don't lose any body parts. If you do, you know, drop me an email. We'll see what might be some good suggestions for you. You might get a prosthetic quadcopter. He would really like that. I'll be right back. I'm going to get an axe. No, no, no. See, this is just flat out no.
Starting point is 00:08:25 Although, actually, just as a side note, this might have come up later on in our conversation, but if you ever get a chance, there's a fantastic book called Machine Man written by Max Barry. It's about a guy who, growing up, wants to be a train. He doesn't want to be a train engineer. He actually wants to be a train. Anyway, he loses one of his legs in an accident and pretty soon realizes that the leg he built is actually better than the biological one he's left with. Anyway, it goes all downhill from there. It's a fantastic sort of a dark satirical work of fiction, but it's definitely worth reading. It's on the required reading list for my
Starting point is 00:08:57 laboratory. So we got a copy on the shelf, but it fits right in with your question. So no going to get the ax. But to but, uh, to answer your actual question, the, uh, the state of the art is partially dependent on, on what kind of amputation someone has. So usually what happens is when someone, someone presents with an amputation at the clinic, they'll be assessed and, and the vast majority of people will, will get something that isn't actually that robotic at all. They'll get something that's what we call a body-powered prosthesis, but it's essentially a series of cables and levers. So it's something that they control with their body. It's purely mechanical with no electrical parts. And for the most part, a lot of people really like those systems in that
Starting point is 00:09:40 they're trustworthy, they respond really quickly, they can sort of feel through the system itself. So if they tap the table with it, they can feel it sort of resonating up their arm. Recently, there's been a big surge in newer, more robotic prostheses. We call them myoelectric prostheses. But really, what this means is that they're recording electrical signals from the muscles of the body. So if someone has an amputation, say just above the elbow, then you imagine they might have a socket, they might have something that's put over top of their residual limb or the stump, and they might have sensors that are embedded inside that socket. So those sensors would be measuring the electrical signals that are generated when people contract their
Starting point is 00:10:19 muscles. So when they flex the muscles in that stump and the remaining limb, the system can measure that and use that to control, say, a robotic elbow or maybe a robotic hand. Are these flex sensors or are these like the heart rate sensors that are lights and looking at the response from that? So they're actually, they're a multipole electrical sensors. So you're looking at actual voltage differences. So it just, it makes contact with the skin. There's a, there, some of them are these little sort of silver domes that sort of just press in, press lightly into the skin. Some of them have these little tiny, tiny strips of, of I think very expensive wire just that, that make good
Starting point is 00:10:57 electrical contact with the skin. But when your muscles contract, you actually, when all those motor units get recruited and start doing their thing, they, they actually generate changes actually generate changes in the electrical properties of the tissue. So you can really, in a very straightforward way, measure it. There's actually commercial products now that you can go down to your favorite consumer electronics store and get something. One of the products is called a Myo made by Thalmic Labs. Yeah, SparkFun's awesome. Yeah, exactly. And you can easily get one of those and jam it right in. And that's using the same kind in terms of top of the line systems where you have a robotic, let's say a robotic hand and a robotic elbow for someone, the hand itself might be able to move individual fingers. But the caveat there is that the fingers can typically only move
Starting point is 00:11:56 to open or close. What that means is the person would say, pick a grip pattern, like I want to make a fist or I want to grab a key. And then the hand would just open and close. So they don't really have full control over the individual fingers, the individual actuators. Likewise, the wrist is typically fixed or rigid and people won't be rotating their wrist or flexing their wrist. This is starting to change, but in terms of what we see out there in the clinic, what people are actually fitted with uh it's very uncommon to see anything more than say a robotic elbow with a robotic hand attached that opens and closes so that's the that's the sort of clinical state of the art the fancy dancy what might actually be happening soon kind of thing is a a robotic arm where there there's individual finger control the fingers can sort of
Starting point is 00:12:42 adduct and abduct so they can essentially move side to side or open and spread your hand multi-degree of freedom wrists or wrists that move like they flex they they bend sideways and they also rotate and uh and also full shoulder actuators so i mean if you if you think about what will be coming down the pipe in another five to ten years a lot of our colleagues out east and and some of those down in the states have done some really, really cool jobs of building very lightweight, very flexible, and highly articulated bionic arms. And those will, I hope, be commercialized sometime soon. So we're seeing a big push towards arms that can do a lot. But you have an amputation above an elbow
Starting point is 00:13:25 you have to learn how to fire the right muscles to control to generate that voltage we're reading and send it down to the fingers it's a hard mental problem and a lot of work for somebody to be able to use these, isn't it? Well, that's if we have a million dollar, if we have the million to six million dollar man, that's the six million dollar question is how do we actually control all those bits? And so I really think this is the sort of the critical issue that we're solving, not just with prosthetics, but also with a lot of our human machine interaction technology is now that, I mean, we have sensors, we have really. We have really
Starting point is 00:14:05 smart folks making really spectacular sensors of all different kinds. We're getting sensors getting cheaper. They're getting the density of sensors we can put into any kind of devices is just like it's skyrocketing. Likewise, we have fancy arms. We have really advanced robotic systems that can do lots of things. They can do all the things a biological limb can do to a first approximation and maybe someday even more. But the point you bring up is a really good one. Like gluing those two things together is in my mind, the big remaining gap. So how do we actually,
Starting point is 00:14:36 even if we could record a lot from the human body and even if we have all those actuators, even we have all those robotic pieces that move in the ways we hope they would, how do we connect those two? How do we connect the dots? How do you read people's minds? Yeah, that really is, I think, the big question because reading from all the things we could sample from their body is like, I really think of it like looking at body language. It's the same kind of idea, but we're really good at it as meat computers. We're great at looking at another body and sort of trying to
Starting point is 00:15:10 infer the intent of that particular person. We're asking our machines to really do the same thing. We're asking them to look at all of the different facets of body language that are being presented by the, say, the wearer of a robotic arm arm and then the robotic arm has to figure out what that person actually wants uh a lot of the time our engineering breaks down at that scale so our ability to to say map any combination of sensors directly to any combination of actuators if i'm recording like a if i put a sensor on on someone's biceps and on their triceps so you know the bits that make the elbow flex and extend, it's pretty, I mean, all of us could sit down and hack out a quick script or build a hardwire system that would take the signals from the bicep and the tricep, just sort of maybe subtract them. And now you've got a great control signal for the elbow to make the elbow go up and down. And in the clinic, this is typically how the elbow control works. But if we start to think about having 10 sensors, hundreds of sensors, if we
Starting point is 00:16:11 start reading directly from the nerves of the arm, so the peripheral nervous system, or even recording directly from tens, hundreds, or thousands of neurons in the brain, suddenly it's not so clear how you'd go about hand engineering a a sort of fancy control algorithm that takes all those signals and turns them into some kind of control signal for the robot arm that's the really hard thing i mean that's really where where the machine learning starts to fit in where we can start to learn the patterns as opposed to engineer those patterns okay so and that's how we get to machine learning, which is the machine intelligence. Actually, do you prefer machine learning or machine intelligence or artificial intelligence or neural nets?
Starting point is 00:16:52 What are the right words? The right words are something that I think, oh, we're always trying to figure out what the right words are. The most important thing is sort of pin down what it is that you're actually talking about. I think just starting out from the top is that artificial intelligence is often the wrong word. And it's a phrase that comes with so much baggage. I think we see it so much in the media and the popular culture. It gets thrown around a lot. I gave a lecture just last week talking about really, I mean,
Starting point is 00:17:20 we have people applying AI to what amounts to an advanced toaster and calling that artificial intelligence and then arguing about toaster rights or saying, oh, my goodness, this toaster is like a existential threat to my ongoing existence. And sometimes people are really applying terms like artificial intelligence to just a clever control system in something like a toaster or robot vacuum cleaner. And then there's people that are thinking really about machines that might have some kind of very, very strong or detailed kind of general intelligence. And we conflate those two together. So I think AI, because of all of its baggage is actually a something that just doesn't really hit the point. The other the other tricky thing about about just talking about intelligence, artificial or meat intelligence or, or hardware intelligence, when we talk about intelligence, people often think it's sort of like it is intelligent or it isn't intelligent. I think by casting a term like AI onto the entire endeavor, it really tries to make it very binary. And really, we get a gradation.
Starting point is 00:18:22 I mean, your thermostat is in some level fairly intelligent. It figures out where it needs to go to keep the temperature in your house right on point. A self-driving car is a different kind of intelligence. A Sony AIBO, one of the little robot dogs. Yeah, you could say that there's intelligence there. And likewise, when we start looking at programs like AlphaGo, the Google DeepMind program that recently took out Lisa Dahl in a human machine match in the game of Go. I mean, you could argue that there's intelligence there. Now, I'm just going to keep
Starting point is 00:18:51 breaking this down a little bit, if that's okay. The intelligence piece is also a bit soft in terms of how we throw in things like learning. So you asked me about machine learning or machine intelligence. I can imagine, I think a lot of us could imagine that there might be a system that we would call very intelligent, a system that has lots and lots of facts. Think of like Watson, Jeopardy, playing robot style thing that knows lots and lots and lots of facts. Those facts, let's pretend that those facts have been hand engineered. They've been put in by human experts. So the system might not have learned at all, but it might exhibit behaviors that we consider very, very intelligent. At the same time, we might have systems that maybe we don't think are that intelligent, but that are very evident actually learning. But I mean, it wouldn't be able to tell you where Siberia is or who is the leading public figure in Japan. That's something that is facts versus learning. So intelligence,
Starting point is 00:19:54 I think, involves learning. It involves knowing things. It involves predicting the future or being able to acquire and maintain knowledge. And it actually revolves around using that knowledge to do something to maybe pursue goals or to, to try to try to achieve outcomes. So I break down intelligence, maybe in into machine intelligence, let's be specific about machine intelligence, breaking down machine intelligence into representation, how a machine actually perceives the world, and then prediction prediction which is really in my mind building up facts or knowledge about the world and then control which is in a very engineering sense being able to take all of that that that structured information all of those facts
Starting point is 00:20:34 and then use that to to change a system's behavior to achieve a goal so i think that's a nice clear way of thinking about intelligence and specifically machine intelligence so so when i talk about these these kinds of technologies that we work on the lab or when i'm talking more generally about what most people say is artificial intelligence i really do like i prefer machine intelligence because it's it's kind of clear we can say yeah we're talking about machines we're talking about intelligent machines it doesn't like there's nothing artificial about it if it's intelligence then it's intelligence is deep learning a subset of machine intelligence or sort of the same level but a different word for it so deep learning i mean the there's a lot of excitement i'm sure i'm sure you uh you've seen all of the the um large
Starting point is 00:21:20 amounts of publicity that deep learning has received in recent months and years. And for good reason, it does some very, very cool things. In the same way, there are people who are looking at deep learning to do things that we would consider very, I guess, higher level intelligence tasks. Looking at things like manipulating language and understanding speech is already what we might consider to be a very intellectual pursuit. And there's also deep learning, which is being used for some fairly specific applications, things that are maybe what we consider less general in terms of intelligence, but more like a very targeted or a specific function. So I mean, one thing we've looked at is applying deep learning to some laser welding. So looking at how we could use it to
Starting point is 00:22:05 see whether or not a laser weld might be good or bad. This is just one project I worked on with one of my collaborators. And that I mean, that's a very, it's not what I would consider a system that has very general intelligence. When you compare that to something like a language translation system, like some of the things that Google's been working on with deep learning, to be able to, to generally translate between multiple languages. That we'd consider a higher level kind of intelligence. Still not really a general intelligence. You wouldn't like stick that in your room, but it goes around and suddenly bakes you toast and then writes a dissertation on the ancient Chinese poetry. Like that's another step up the ladder, I think.
Starting point is 00:22:41 Maybe a couple of steps. Maybe a couple of steps. Yeah, maybe one, maybe two. But deep learning, yeah, it's a step in the right direction. So step in a direction that leads us towards more complex systems that might have more general capabilities. So when I think of deep learning, it's about taking an enormous amount of data and throwing it at a few different algorithms that are pretty structured and it leads to neural net-like things.
Starting point is 00:23:15 And you can't always see inside of deep learning. Like you want to know, you want to build a heuristic instead, don't go the deep learning path. That's not going to, you're not going to go there. Is that right? Or am I, it's been a long time since. You're not going to go there. Is that right? It's been a long time since I've learned the difference between these things.
Starting point is 00:23:30 Yeah, so deep learning, deep neural nets especially, most of the time when we speak of deep learning, we're really talking about a deep neural network. And people have been working. There's some very nice maps you can find on the internet showing the different kinds of deep nets and the different ways that they're structured. Some of them are more interpretable than others. In essence, you're very right. You're taking in a lot of data. And I think one way that maybe the clearest way to start separating out the different kinds of machine learning and machine
Starting point is 00:24:00 intelligence that we might want to play with as engineers, as designers, as just interested people, is to think less about the usual way we label things. Like deep learning is typically a case of what we call supervised learning. There's unsupervised learning as well, which also leverages deep nets. And then there's the field that I work in called reinforcement learning. But maybe more clearly, we could say that a lot of the cases of deep learning that people use deep learning for are, are actually cases of learning from labeled examples. So it's like you give like a ton, a ton of examples. And each of those examples has a usually human generated label attached to it. So you're going through the internet, you're like, I want to find pictures of grumpy cat. And so you show a bunch of images. And then the system says, yeah, grumpy
Starting point is 00:24:44 cat. You're like, no, that wasn't grumpy cat or grumpy cat. Yeah, that was the system and adapts its internal structure. It changes its weights so that it better lines up the samples with the labels. So a lot of what we see in deep learning, the majority, I think, is a case of learning from labeled examples. So you already know what the truth is when you go in. Absolutely. And now for training. So this is also something that we see a lot with the especially with deep nets is that you you usually have a phase of training. Many, many complex heuristics have been developed to try and figure out how to train them correctly. And there's some really smart people working on that. I don't work on that because there's plenty of other smart people solving those problems. But the idea is that you find a way to train it usually on a batch of data. And now you have other examples during deployment, let's say, now you have a grumpy cat detector that you've sent off into the world and has to do its job. And it now sees new examples of photographs and has to say yes or no, or say what that photograph
Starting point is 00:25:40 actually is, or what that string of speech is. So the deployment systems will now be seeing new data that has not previously been presented. So this is a training and a testing paradigm. That's one of the important things as well about the usual way that we deal with learning from labeled examples. You build some kind of classifier or some kind of system that learns about the patterns in the information, and then you would deploy that system. You make it sound so easy, but yes. I make it sound so easy. It's actually not.
Starting point is 00:26:12 Actually, I think as we were just comparing our notes earlier before the show, it's often one of the most difficult things is just installing all the right software packages. I think sometimes that's one of the most challenging bits. But the understanding of the concepts is actually, none of it's really that fancy or that tricky when you think about it at the highest level really it's like saying hey yeah I have this machine this machine has some internal structure I showed a sample of something I showed an example
Starting point is 00:26:34 and I tell it what that thing should be and it just sort of shifts itself around it jiggles its internal structure in a really nice way so that it's better able to say the thing I wanted to say when it sees another example that's close to the one I showed it. So that's what I mean by what we usually mean by supervised learning. It covers a lot of what we consider deep learning. And the only thing that makes it deeper is that how many, how complex is that internal structure of that thing that jiggles. So the internal structure that changes to better line up samples with labels,
Starting point is 00:27:03 when we look at deep learning as opposed to earlier work on like multilayer perceptron or the like one or two layer neural nets, we're just adding the complexities, that internal system and the way that it, that pieces interconnect with other pieces. So we're just dialing up the complexity of it. And because of that, the kinds of relationships,
Starting point is 00:27:20 the kind of sample label pairs that can be learned gets, gets a lot more, a lot more powerful. We get more capacity out of that. But in essence, it's very much the same thing as before, but more. Just the training bit, the actual method for going about updating that black box, that deep neural net, that's one of the things that becomes even more complex now than it was in previous years. But when you talk about smart prosthetics, it's hard to get million-point samples for a human
Starting point is 00:27:51 who just went through something pretty traumatic like losing a limb. And their samples aren't going to apply to somebody else's because our bodies are different. So you don't do this type of deep learning, do you? You mentioned reinforcement learning. Yeah, so that's actually great. So let's just jump into reinforcement learning because that is my area of specialty, my area of study, and the area where most of my students do research. So I talked about learning from labeled examples being the general case that we see in machine learning and one of the areas of greatest excitement.
Starting point is 00:28:28 There's also what we could consider learning from trial and error. So when I say reinforcement learning, I actually do mean learning from trial and error. And the kind of learning I work on is a real time learning approach. So instead of trying to have a training and a testing period where you show a large batch of previously recorded data, the systems we work with are essentially dropped in cold. So they could be attached to a prosthetic arm. They could be attached to a mobile robot. And while that system is actually operating, while that system is interacting with the person or the world around it, it's learning. It's learning all the time, and it's changing itself all the time. So the data that's being acquired is actually readily available and it's available from the
Starting point is 00:29:10 actual use of the system. So in this case, instead of learning from a vat of data, we're learning from a river of data or a fire hose of data: the information that's currently flowing through the system and flowing by the system. So it's a different kind of learning. And it's a very nice thought that we can have systems that not only learn from stored data, but can also learn from real ongoing experience. So that's the area we work in. So could you do something like, I know some of the self-driving car manufacturers have their software on,
Starting point is 00:29:43 but it's not actually doing any self-driving. It's in shadow mode. Do you do any training where, okay, somebody lost one arm, but they have a good right arm, let's say. Could you do any training with the good arm and say, okay, this is how this works, and this is where these signals are, and this is how this person uses this, and then apply it to the prosthetic later? Oh, that is actually exactly
Starting point is 00:30:05 what we're doing right now. So one of my students is, we're just finishing up a draft of a research paper to submit to an international conference. And this student's work on that paper and actually that student's thesis is really about that very idea where you could imagine if you have someone who's lost one arm, but they have a healthy biological arm on the other side, you could just have the biological arm doing the task, again, cutting vegetables or catching a ball or doing some complex task. And you could have the other, the robotic limb, just watching that, essentially seeing what needs to happen and actually being trained by the healthy biological limb.
Starting point is 00:30:41 And you could have this in a sort of a one-off kind of fashion where you show it a few things and it's able to do it. Or you could have it actually watching the way that natural limbs move in an ongoing fashion and just getting better with time. So really, that's a great insight: yeah, we could actually have a system learning that way. And the way the student is teaching the arm is that it actually gets rewarded or punished depending on how close it is to the biological limb. So I talked about reinforcement learning, and if we get right down to it, learning through trial and error is essentially learning through reward and punishment. So like you'd train a puppy, we're training bionic body parts or any other kind of robot you'd like. When the robot does the right thing, or when the system does the right thing, it actually gets reward.
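That mirrored reward setup can be sketched in a few lines of Python. Everything here, the function names, the joint-angle vectors, the discount factor, is an illustrative assumption, not the lab's actual code.

```python
import math

# Toy sketch of mirrored training: the robot limb is rewarded for
# matching the biological limb's pose, and the learner's objective is
# discounted reward over a window into the future. All names and
# numbers are illustrative assumptions, not the lab's real setup.

def tracking_reward(robot_angles, bio_angles):
    """Reward peaks at zero for a perfect match; deviation is punished."""
    return -math.dist(robot_angles, bio_angles)

def discounted_return(rewards, gamma=0.9):
    """What the learner maximizes: a discounted sum of rewards
    stretching into the near or far future, not just the next one."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(tracking_reward([0.5, 1.0], [1.5, 1.0]))            # -1.0
print(round(discounted_return([1.0, 1.0, 1.0], 0.5), 2))  # 1.75
```

A gamma near zero cares mostly about the next moment; a gamma near one weighs the far future heavily.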
Starting point is 00:31:23 And its job is to maximize the amount of reward it gets over the long term. So that's the idea of reinforcement learning: the system not only wants to get reward right now, but wants to acquire reward, positive feedback, for an extended future, for some kind of window into the near or far future. Okay, digging a little bit more into this because I'm just fascinated. We are mostly symmetric creatures. And sure, chopping vegetables is something that you do with one hand. And you kind of have to do it with one hand because the other hand is used for holding
Starting point is 00:31:57 the vegetables. But as I sit here gesturing wildly, I realize I am mostly symmetric with my gestures. Do you worry about that sort of thing as well? Or are you mostly task oriented? A lot of what we do is task oriented. So specifically, I do many things. Some of the things we do are wild and wacky, like we have the third arm that you connect to your chest, and we're looking at how to control the third arm that you wear. We've got the prosthetic falconry. We've got all this other weird stuff that we do. And I really enjoy that.
Starting point is 00:32:28 We actually, one of my students is building a go-go gadget arm. He's building a telescoping forearm so that if you lose an arm, maybe you could have an arm that stretches out and grabs stuff, something our biological limbs couldn't actually do. So in those cases, the symmetry might be lost. You might not have another arm on the other side coming out of your chest. You might not have a telescoping forearm on your healthy arm because only your robot arm can do that.
Starting point is 00:32:51 But in the cases where we are looking at people that have an arm that's trying to mirror the kind of function we see in a biological limb, a lot of what we look at is very task-focused. So we're looking at helping people perform activities of daily living. So the activities that they need to succeed and thrive in their daily life and to make their daily life easier. So we do start and often finish with actual real world tasks. Now, this is a nice gateway towards moving to systems that can do any kind of motion. So the training example, that sort of learning from demonstration that we just talked about where the robot limb learns from the biological limb, that's a sort of a gateway towards systems that can do much more flexible or less task-focused things. But we usually start out with tasks and we validate on tasks that we know in the clinic
Starting point is 00:33:41 are going to be really important to people carrying out their daily lives. Okay, so what about the internet? Are these prostheses going to be controlled with my smartphone? So instead of it knowing it's Thursday and time to make soup, now I can tell it to go into soup mode. That's it. So this gets towards a conversation on what sensors are actually needed. So right now, just the general state of things is that the robot limbs, the ones that we would see attached
Starting point is 00:34:13 to someone in the clinic, are typically controlled by embedded systems. We have small microcontrollers. We have small chips that are built onto boards and they're stuck right in the arm. There's a battery. The chips are very, very old. Usually they're not that fancy. They're not that powerful. They don't store data. There's actually very little even closed loop control that goes on on the typical systems
Starting point is 00:34:35 in most prostheses. Now for lower limb, for leg robots, that's a little, I'll soften that constraint. But for the upper limb, often we're not seeing devices that have that much complexity. Those are not internet enabled. They do not connect to other devices around them. Only very recently have we seen robotic hands that now connect to your cell phone via Bluetooth and are able to, say, move or change their grips depending on what you tell it through your cell phone. There's also examples of what we call grip chips.
Starting point is 00:35:05 There's some of the commercial suppliers have built essentially little RFID chips that you hang around your house so that when you go into your coffee maker, your hand will pre-shape into the coffee cup holding shape. So we're starting to see a little internet of things essentially surrounding prosthetic devices, but it's still, I think, maybe not in its infancy,
Starting point is 00:35:24 but maybe in its toddler phase, in terms of what could happen when we begin to add in, say, integration with your calendar, integration with the other things that permeate our lives, in terms of the data about our patterns and our routines. That might really make the limb better able to understand human needs, human intent, and human schedules, and fill in the gaps that we can't fill in with other sensors. But there are a lot of sensors being used in various medical and non-medical ways to help us get to better health. Fitbit is the obvious case, with lots of data, and it has changed people. You know, I feed my Fitbit: let's go for a walk. But are we seeing the same sort of things through rehab and physical therapy? Are there tools to help people that are sensors? Yeah, so in terms of new, I think new
Starting point is 00:36:30 sensors is actually one of the areas where we'll see the most progress in terms of increasing people's ability to really use their technologies. A lot of what's limiting current devices, I mean, some of the control is not intuitive. The control is a bit limited. And the feedback back to the human is also quite limited. A lot of that could be, I think, mitigated if we give the devices themselves better views into the world. So this gets back towards what you're saying. I mean, you could imagine that we have things like a series of, we have Fitbits. We have other ways of recording the way the body is changing in terms of how much you're sweating, what's happening around it, the humidity of the air. There are many sensors we could
Starting point is 00:37:09 add that would sort of fill in the gaps for a device. So at the conferences, at the research level, we're seeing a ton of interest in this space. So there's people that are building ultra high density force sensing arrays that you could put inside a prosthetic socket so it can actually feel how all the muscles in that residual limb are changing. There's people who are building things, they're putting accelerometers, they're putting inertial measurement units, all these different kinds of technologies. There's embeddables, embedded sensors: sensors the size of little grains of rice, implanted directly into the muscles of the body. These are also research prototypes that are, I think, already in clinical trials or beyond now, where you actually have wired technology embedded right in the flesh itself, so that you can take readings directly from the muscles, directly from the nerves themselves, and directly from all the other bodily signals that begin to support these devices. So this is an area where we're going to
Starting point is 00:38:03 see a huge, well, that gets back to our earlier conversation about how you start mapping all of those pieces of information to the control of motors. But we're actually seeing a huge surge in interest in different sensory technologies, even for people that haven't lost limbs. I mean, these are just devices again, like the Myo I mentioned earlier. And there's also the EEG headset, one of my students has one that we're using for research, the meditation-supporting EEG headset with a couple of EEG electrodes in the front. I think it's the Muse. Okay, no, I've seen these. I've played with them. I've never seen one that had any repeatable result. Oh really? No. You can control video games and stuff. You have to learn to concentrate. I think it just measures concentration. Well, I mean, you
Starting point is 00:38:49 could do that by just measuring how much the muscle in my forehead moves. You don't have to do anything interesting. It's cooler to have it on your head. Yeah, but I've never had it be repeatable beyond what you could tell because I had a line between my eyebrows. Yeah, and that's okay. So I like to think of signals, this is my default view of how we approach presenting information to our machines and how I actually think about the information itself, as never labeled. So when I measure things from the human body and I stick them into, say, a machine learner, when I give some kind of set of information to a reinforcement learning system, they're just bits on wires. So the nice thing is that it doesn't actually, at least to me anyway, matter to our machine learners if the contractions in
Starting point is 00:39:42 the facial muscles or if it's actually EEG that's leading to discriminating signals. It's that if we can actually get any kind of information, it doesn't have to be clean information. It could be noise. Noise is just information we haven't figured out how to use yet. So if we actually can think about recording all the more signals, lots of signals,
Starting point is 00:39:57 the system itself can figure out how to glean the best information from that soup of data. So I'm not worried, actually. It's actually a very sort of a relaxing and refreshing view into the data is that I'm not so worried about whether or not it's one kind of modality or another, or whether or not it's even actually consistent, as long as there's certain patterns. If there's no patterns, then I mean, we can say
Starting point is 00:40:19 maybe that sensor is not going to be useful. But that's more of a, do we put the expense of actually deploying that sensor as opposed to, do we give that sensor as input to our learning system? In many cases, the learning system can figure out what it uses and what it doesn't. And sometimes what it figures out how to use is actually very clever and sometimes buried in that sea of noise or the sea of what we think is unreliable signals. It's actually a very reliable signal when you put it in the context of all the other signals that are being measured
Starting point is 00:40:48 from a certain space. So it's actually a very cool viewpoint where you're like, you know what? Here, just have a bunch of bits and wires. And you think about the brain, you're like, hey, it's also kind of like
Starting point is 00:40:55 a bunch of bits and wires. No one's gone in and labeled the connections from the ear to the brain as being audio signals, but they're still containing information that comes from the audio. So anyway, it's a neat perspective. No, that's a really interesting way
Starting point is 00:41:12 of thinking about things. Because when you think about machine learning and deep learning, often the thing people bring out is, oh, well, we don't really know what's going on inside the system. But now we don't even know what's going into it. It gets signals and it makes patterns. I mean, that's how our brains work: we make patterns out of things, and we don't necessarily
Starting point is 00:41:30 know what their provenance is. Yeah, it's actually quite funny when I think about the things we do on a regular daily basis with the information we get. A very standard, very smart and usual engineering thing to do would be to take a whole bunch of signals, you've got hundreds of signals, and say, okay, let's find out how to reduce that space of signals into a few important signals that we can then think about how to make control systems on, or that we can clearly interpret and use in our designs. Usually, we're trying to take a lot of things and turn them into a few things. Almost exclusively, every learning system that we use
Starting point is 00:42:08 takes those things, let's say we have 100 signals, and it might blow that up into not just 100 signals, but 100,000 or 100 million signals. We're essentially taking a space and building a very large set of nonlinear combinations between all of those signals. And now the system, the learning system actually gets all that much larger, that much more detailed input space that contains all of the correlations and all these other fancy ways that other information is relating to itself. It now gets that as input. And even if you don't do a deep learning, like there's some of my colleagues who have published a paper on shallow learning, which says, hey, you know, all the stuff you can do with deep learning, if you think of a really good shallow representation, like a single layer with lots of inherent complexity, you can do the same kinds of things.
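A sketch of that "blow it up" step, with the caveat that the sizes, the Gaussian random weights, and the tanh nonlinearity are choices made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Expand a small signal vector through a fixed random nonlinear
# projection -- the opposite of dimensionality reduction like PCA.
# A simple linear learner on top of these features can then capture
# nonlinear relationships among the raw input signals.
n_signals, n_features = 100, 10_000
W = rng.standard_normal((n_features, n_signals)) / np.sqrt(n_signals)

def expand(x):
    """Map 100 raw signals to 10,000 nonlinear features."""
    return np.tanh(W @ x)

x = rng.standard_normal(n_signals)  # one time-step of raw sensor data
features = expand(x)
print(features.shape)  # (10000,)
```

The projection is fixed and random, so only the small linear layer on top needs to be learned, which is the "really good shallow representation" idea.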
Starting point is 00:42:55 So you can think of that as like, yeah, let's just take a few signals and blow them up into lots of signals that capture the nonlinear relationships between all of those other input variables. It's kind of cool, but it's kind of weird. And it scares the heck out of, especially some of my medical or my engineering collaborators. I'm saying, yeah, no, this is great. No, we're not going to do principal component analysis. We're going to do the exact opposite. We're going to
Starting point is 00:43:16 build this giant nonlinear random representation or a linear random representation out of those input signals. It's kind of cool. Do you ever associate a cost with one of the signals? I mean, as a product person, I'm thinking all of these sensors, they do actually have physical costs. And so if you are building a representation in machine learning world, do you ever worry about the cost of your input? Absolutely. And the cost of the input is not even just the physical costs, but also things like the computation costs. A lot of what I do is real-time machine learning. I'm hoping that I can have a learning system that learns all the time, and not just all the time, but very rapidly. So many, many, many, many times
Starting point is 00:44:02 a second. And so as we start to add in, say, visual sensors, if you want to do any kind of processing on that visual input that the camera inputs, you're starting to incur a cost in terms of the rate at which you can get data. So there's physical costs that we do consider. There's also the computational costs and just the bulk of those particular signals. So we do consider that. There's interesting ways that the system itself can begin to tell us what signals are useful and which ones aren't. So when we start to look at what's actually been learned and how the system is associating signals with outputs, we can actually say, oh, yeah, you know, maybe this sensor isn't actually that useful after all.
Starting point is 00:44:40 There's some new methods that we're working on in the lab right now, actually, that are looking at how the system can automatically just sort of dial down the gains, let's say, on signals that aren't useful. So it's really easy then for us to go through and say, hey, okay, the system is clearly not using these sensors. Let's remove those sensors from the system and with them, those costs and those computational overheads as well. Yeah, there's the computation, the physical, the power, all these costs. Absolutely. And power is a big one, especially with wearable machines. I think you see this a lot with embedded systems. We have to care a lot about how long our batteries can run. If you're
Starting point is 00:45:15 going out for a day on the town and your prosthetic arm runs out of batteries in the first half an hour, that's not going to be good. So we do have to be very careful about the power consumption, especially when we start putting learning systems on wearable electronics and wearable computing. Think of a shirt with embedded machine intelligence, like Fitbit writ large: you have a fully sensorized piece of clothing that's also learning about you as you're moving. We want these systems to have persistence in their ability to continue to learn. You don't want them to stop being able to learn
Starting point is 00:45:48 or to capture data. And so that's actually one of the really appealing things about the kinds of machine intelligence we use, the reinforcement learning and the related technologies, things like temporal difference learning that underpin it, is that it's computationally very inexpensive. It's very inexpensive in terms of memory. So we actually can get a lot for a little.
Starting point is 00:46:08 We're working on very efficient algorithms that are able to take data and not have to store all of the data they've ever seen, not have to do any processing on that data, and be able to sort of update in a rapid way without incurring a lot of computation cost. So that's a big focus, these building systems that can actually learn in real time, not just for 10 minutes or 10 hours, but forever. That's a hard problem because maybe I don't want to make soup every Thursday. Yeah. So that's a really, I like that example as well, because the question is maybe not how do I build a heuristic or how do I build some kind of
Starting point is 00:46:46 good rule of thumb to say when I do and don't want something, but what other sensors, it doesn't have to be a sensor, think of any kind of signal, what other signals might we need to let the machine know what we want and to let it know when something is appropriate or not appropriate? Actually, let's go back, remember I mentioned we're building that go-go gadget arm, the telescoping forearm prosthesis. So you can imagine that there are two very similar cases that we'd want to tell apart.
Starting point is 00:47:12 One is the picking up, say, picking up something from a table where you're reaching downwards and you're going to close your hand around, let's say, say a cup of tea. And the other is you're shaking hands with someone. In one of those cases, if you're far away from the thing you're reaching, maybe it's appropriate for that arm to telescope outwards and grab. If you're shaking hands with someone, maybe it's not appropriate because it's going to telescope and punch them in the groin, right? So no one wants to be punched in the groin. So the system itself maybe has to know when it might expect that this is appropriate or not appropriate. One of the cool ways that we're getting some leverage
Starting point is 00:47:46 in this particular sense is that we're building systems to predict when the robot might be surprised, when the robot might be wrong. So it's one thing to know when you might be wrong or to be able to detect when you're wrong. It's another thing to be able to make a forecast to look into the future just a little ways or a long ways and actually begin to make guesses about when you
Starting point is 00:48:10 might be wrong in the future. So if it's like, okay, I think it's Thursday, I think I'm going to make soup, we're good. If there are actually other things that allow the system to begin to make other supporting predictions, like, hey, I actually think that this prediction about making soup is going to be wrong, we can start to then dial the autonomy forward or backward in terms of how much the machine tries to fill in the gaps for the person. It's a very sort of wild and woolly frontiers direction for some of this research. But I have a great example where the robot arm's moving around the lab, and you actually try to shake its hand, and it's surprised. And it starts to learn that, oh wow, every time I do this, someone's going to monkey with me in ways that I've never
Starting point is 00:48:52 felt before. I have one video where I put little weights in its hand and hang something off its hand. And then occasionally I bump it from the bottom, and it learns that in certain situations, it's going to be wrong. It doesn't know how it's going to be wrong. But there are certain use cases, certain parts of its daily operation, where it's going to be wrong about stuff. And it can start to predict when it might be wrong. It's very rudimentary, but it's a neat example of when we might be able to not only fill in the gaps, but also allow the system to know when it shouldn't fill in the gaps. Are you creating anxiety in your robots? That's a great question.
Starting point is 00:49:28 That is a really good question. I hope it's not anxious. I really, I actually worry about this now. We do a lot of personifying our systems. I don't know. Is that anxiety? I guess it is. Maybe, like, I always think about it when I'm giving a demo of this.
Starting point is 00:49:43 I kind of think about it, like, you know, when I'm sitting at home watching Netflix or something or having tea, I'm not expecting, like I predict I'm not going to be surprised. When I'm walking down a dark alley in a city I've never been in before, I do predict that I might be surprised and I'm a little more cautious. And maybe that's anxiety. So in that case, maybe, yeah, maybe we're making anxious robots. I'm not sure this is, I don't know, poor things.
Starting point is 00:50:07 Okay, back to smart devices and smart prosthetics. I'm going to go with prosthetics because I can say it. What are some of the reasons people give for not wanting to go in this direction? I mean, we've talked about cost, you've talked about battery life and lack of dependability. Are there other reasons? Do you hear people worrying about privacy or other concerns? Yeah. So privacy, I think maybe because of the lack of really high performance computing and connectivity in prosthetic devices at present, the privacy argument is something I haven't heard come up very much in any of the circles, either clinical or the more in-depth research circles, that I'm associated with.
Starting point is 00:51:00 One very common thing that people want is actually cosmetic appearance. So there's multiple classes of users, much like multiple classes of users for any technology. You have the people that want the flashiest, newest thing with all the chrome on it and the oleophobic glass, and it has to look great. There's people who are early adopters of very cool tech. I want to have as many LEDs as possible. Exactly.
Starting point is 00:51:24 Right. You want this thing should have like ground effects. And then you have other classes where they want to do the exact opposite. They don't want to stand out. So we see this as well with users of assistive technologies. This is everything from prosthetics to, you might imagine, exoskeletons to standing and walking systems to wheelchairs. Even canes. Even canes. Yeah. Yeah, it's a really good pointchairs. Even canes. Even canes.
Starting point is 00:51:45 Yeah. Yeah, that's a really good point, actually. Even canes. Like you have some people that don't want to be seen with a cane or use a cane. And if they have a cane, it should be inconspicuous. And there's some people that are like, no, this thing better be a darn good looking cane. Have a skull on top and diamonds and spikes. Diamonds in the eyes, exactly.
Starting point is 00:52:00 So I think I'd probably be in the latter category, where I'd want a flashy looking cane if I had a cane, or at least a very cool cane if it's not flashy. But for prosthetics as well, we see some people that like to have the newest technology. They deliberately roll up their pants or roll up their sleeves so people can see that they have this really artistically shaped carbon fiber socket with a carbon fiber arm. It looks cool. People get it airbrushed, like a goalie mask in hockey. They'll actually have really artistic designs airbrushed on their arms. There's even, again, we're looking a lot in the lab at non-physiological prostheses, because
Starting point is 00:52:35 by that I mean prostheses that don't look or operate like the natural biological piece. So you can imagine like having a tool belt of different prosthetic parts. You can clip one onto your hand when you need to go in the garage and do work. I want a tentacle. I want to inject that right now before you go anywhere. I want a tentacle. I know.
Starting point is 00:52:54 And this is one of the things we really want to build for you. No, not you, because you need to not lose your hand. But we actually talk a lot about building an octopus arm. That's one of the most common things that we talk about. Oh, yes. Right? Way too excited about that. Yeah, but it's a good point: there's a certain user base, I think a smaller one, but one
Starting point is 00:53:21 that would like to have really cool, unconventional body parts. Then there's a whole other class that might be willing to sacrifice function for appearance. So there's cosmesis: a prosthesis that doesn't have any function at all but has been artistically sculpted to look exactly like its matching biological limb. There's actually a whole class of prostheses where someone will do a mold or a cast of the biological arm. They'll try to paint moles. They'll try to put hair on it. They'll try to make it look exactly like the matching biological limb or the other parts of the person's body, including skin tone and things like that. Most of those don't even move. They're
Starting point is 00:54:04 very lightweight and they just strap onto the body. And you can't tell unless you look very carefully that that person actually has an artificial arm. You can imagine the same thing for eyes: a really nicely sculpted artificial eye that's just a ball of glass, but it looks like your other eye and is almost indistinguishable from your actual eye. So there are cases where people will choose to have something that looks very appropriate but doesn't actually do anything except look like a biological limb. That's a totally valid choice as well, but it depends on that person's needs, what their goals are, and what they're trying to do. So we do see, I think more
Starting point is 00:54:41 than privacy, we do see a push towards limbs that are very cosmetically accurate. Also, lightweight things like we talked about, battery function. Lightweight. Function is a huge thing. Intuitive control, it's really unfortunate. But for the majority of the myoelectric prostheses, the robotic prostheses, we actually do see a really large, what we call a rejection rate, or people saying, hey, I don't want to use this anymore. And this means that what could be a $100,000 piece of technology paid for by the health system goes in a closet, because mainly
Starting point is 00:55:14 it's hard to control. And this is actually one of the coolest areas, I think, that I'm really excited about. Our colleagues down in the States, at the Rehabilitation Institute of Chicago, have spun off a company called Coapt, a company that's doing essentially pattern recognition. So they're using a classification system that allows people to deploy pattern recognition, that's what it's called, it's a form of machine learning, in prosthetic limb control. So now, after training, so you press the button, you train the system, it monitors some of the patterns in the arm, the muscles, the way the muscles are contracting, and it learns how to map those to, say, hand open, close, or wrist rotation. And people are actually getting much more intuitive
Starting point is 00:55:53 control, it's much more reliable. And, for instance, they might be able to control more different kinds of bits for their arms. So you might be able to get like an elbow and a hand instead of just having a hand. So there's some really cool ways that machine learning is actually already being used to start reducing that control burden. But I think that's one of the biggest complaints that we see: this thing's hard to control and it's not reliable. And sometimes, like after I sweat a bit or after I fatigue, it just starts fritzing out. So yeah, I'm going to go back to using a simple hook and cable system, something where there's a little cable that opens and closes a spring-loaded hook, because it actually does what I wanted it to do. All the time.
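The kind of pattern-recognition control described here can be sketched very simply. This is a hedged illustration, not any real company's system: a nearest-centroid classifier over synthetic EMG feature vectors. The channel values, motion names, and "press the button" training routine below are all made up for the example; real systems extract features from windows of raw EMG signals.

```python
import numpy as np

# Sketch of pattern-recognition prosthesis control: train on labeled muscle
# patterns, then classify new contractions into motions. Everything here is
# synthetic stand-in data for illustration.

rng = np.random.default_rng(0)
motions = ["hand_open", "hand_close", "wrist_rotate"]

# Hypothetical 4-channel activation patterns, one per motion.
prototypes = {"hand_open":    np.array([0.8, 0.1, 0.1, 0.2]),
              "hand_close":   np.array([0.1, 0.9, 0.2, 0.1]),
              "wrist_rotate": np.array([0.2, 0.2, 0.8, 0.7])}

# "Press the button, train the system": collect noisy labeled examples.
train = {m: prototypes[m] + 0.05 * rng.standard_normal((20, 4))
         for m in motions}

# Training is just averaging each motion's examples into a centroid.
centroids = {m: x.mean(axis=0) for m, x in train.items()}

def classify(features):
    """Return the motion whose centroid is nearest to this feature vector."""
    return min(centroids, key=lambda m: np.linalg.norm(features - centroids[m]))

# A new contraction that looks like "hand close":
sample = prototypes["hand_close"] + 0.05 * rng.standard_normal(4)
print(classify(sample))  # -> hand_close
```

Real systems use richer features and classifiers, but the training-then-mapping loop is the same shape as what's described above.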
Starting point is 00:56:34 Yeah, all the time. You may have seen Cybathlon. So this is actually a good segue into Cybathlon. It was this awesome competition of assistive technologies. It was hosted in Switzerland, in the Swiss Arena just outside of Zurich. It was last October. But this isn't the Paralympics. No, it is not. This is one where you're trying to do as well as or better than the normal human body. Yeah.
Starting point is 00:56:57 The stock human body. Forget normal. But this, wait, Cybathlon? Is that what it's called? Yep, Cybathlon. That's what it's called. That's improving what we have to do better. I mean, there were people in the Paralympics, and actually in the Olympics, who had legs, and there was some controversy over whether or not it was easier to run on those. Yeah, like the carbon recurve legs. Yes. If you don't want to turn, those things can go very, very fast. They have better spring constants than our legs do.
Starting point is 00:57:31 Yeah. So it's neat. The Cybathlon is different in that respect in that it's actually saying, hey, we're going to put a person and machine together and see how well they can do. And they actually call the people that are using the technologies pilots. So you might pilot a functional electrical stimulation bike or pilot an exoskeleton or pilot a prosthesis. So it's almost like the Formula One to the stock car racing. But in this case, there were people using wheelchairs that would actually climb up stairs. There were exoskeletons. There were very cool lower leg prostheses. And the person who actually won the upper limb prosthetic competition was using a body-powered prosthesis, so a non-robotic prosthesis.
Starting point is 00:58:11 And it's because the person really tightly integrates with that machine. And there's technical hurdles for some of the robotic prostheses; they're just not at the same level of integration. So things like the Cybathlon are a great way that we can begin to see how different technologies stack up, but also really assess how well the person and the machine are working together to complete some really cool tasks. And it goes beyond just how fast you can sprint, to things like pick up shopping bags and then open a door and run back and forth across an obstacle course. Your wheelchair has to be able to go around these slanty things and climb up stairs. It's a neat way to start thinking about the relationship between the person and the machine and start to allow
Starting point is 00:58:49 people to optimize for that relationship. As we talk more and more, I keep thinking how a camera is probably one of the better sensors for solving this problem. Because you can solve the soup mode problem, because if you're in the kitchen, you might be making soup. But you can also use the camera to communicate with your robotic arm. You have a special thing you do with your wetware that you show the camera: I want my other hand to look like this, and the robot then makes the gripping motion. This all makes a lot more sense if you can see my hands. It really does. We actually, one of my students built a very cool new 3D printed hand. We'll actually be open sourcing it, hopefully sometime in the
Starting point is 00:59:40 coming year. We're building a new version of it. But in addition to having sensors, again, I'm all over sensors. We have sensors in every knuckle of the robot hand, so it knows where its own digits are. It's also got a camera in the palm. What kind of sensors do you have? They're little potentiometers. They're really simple sensors.
Starting point is 00:59:58 Nothing fancy. We've got some force sensors in the fingertips. We're adding sensors every day. So we're putting things on. But cameras in the palm and maybe the knuckles are, as you point out, really natural, either to show things or even just as simple as, hey, I'm moving towards something that's bluish. Let's not even talk about fancy. A lot of people love doing computer vision. You're like, oh hey, let's find the outlines of things and compute distances. Really, it's even simpler than that. Like, hey, what's the distribution
Starting point is 01:00:25 of pixels? What kind of colors are we looking at here? Is it soupish? Is it can-of-soda-ish? Is it doorknob-ish? There are patterns that we can extract even from the raw data. So you're right, cameras are great. Mount cameras everywhere. They're getting cheaper and cheaper. So put them on someone's hat when they're wearing their prosthesis. Now the prosthesis knows if they're out for a walk or they're in the house. There's a lot of things we can do that will start linking up to the cell phone, so that maybe,
Starting point is 01:00:51 using the camera or even just the accelerometer, we know if they're walking or sitting down. It's very easy to start thinking about sensors we already have. And the camera, as you pointed out, is a really natural one, especially if we don't do the fancy-dancy computer vision stuff with it, but we just treat it as a, hey, there's lots of pixels here
Starting point is 01:01:09 each pixel is a sensor. Each pixel gives us some extra information about the relationship between the robot and the person and the environment around them. So that's a great point. Yeah, right on target there. If you've ever tried to tie your shoes without looking... You do use your eyes to do a lot of these things. It's pretty impressive. Yeah. And I mean, when you're connected up to your meatware, when you have a full arm and all of our biological parts are connected, we have this nice relationship. We have feedback loops. We have information flowing. When we have a disconnect, when we suddenly introduce a gigantic bottleneck between part of the body and another part of the body,
Starting point is 01:01:44 and here I mean a robotic prosthesis and the biological part of the body, the density of connection goes down. So feedback is diminished, and so is the signal going the other direction. So you can think about ways to make the best of that choke point by saying, hey, well, we've got cameras on the biological side, we call them eyes. Well, let's put a camera or two on the robotic side. Let's put other kinds of sensors there that are like eyes. And hey, maybe now the two systems are on the same page. We get around that choke point by making sure that the context is the same for both,
Starting point is 01:02:15 that both systems are perceiving the same world. Maybe not in the same ways. In fact, absolutely not in the same ways. But it's interesting to think that we can take both parts of a team, a human-machine team, a human-human team, a machine-machine team, and make those partners able to perceive the same kind of world in their own special ways. And then when they use that limited channel, when they use the few bits they can pass over that choke point, they can use them most efficiently to communicate high-level information, not just the raw material, but actually high-level thoughts, commands, information. The machine can say, hey, you know what? You're
Starting point is 01:02:53 reaching for a stove, and I've got heat sensors, like range-finding heat sensors, and I can say it's going to be really, really hot. Communicate, oh, it's going to be hot, across that limited channel instead of all of the information that it's perceiving. I think it's a good way to start managing choke points and more efficiently using the bandwidth that we have available in these partnerships. Yes. I have so many more questions and we're starting to run out of time, and I'm looking at all of my questions trying to figure out what I most want to ask you about. But I think the most important thing is, I can't be the only one saying, oh my god, I want to try it. I want to try it! How do people get onto this path of robotics and intelligence? What do
Starting point is 01:03:34 they need to know as a prerequisite? And then how do they get from a generic embedded systems background with some signal processing to where you are? So it's actually, I think that when we're moving forward with trying to implement things, the barriers are actually more significant in our heads than they are in actual practice. So in terms of getting up and running with, let's say, a reinforcement learning robot, like you want to build a robot that you could give reward with a button and it could learn to do something, it seems like that's this gigantic hurdle. I think it's probably not.
Starting point is 01:04:11 So in terms of just going from no experience with machine learning to, hey, I've got a robot and I'm teaching it stuff: my first step, I like to say, is get to know your data. So usually when people come to me and say, hey, I want to start doing machine learning, any kind, supervised learning, learning from labeled examples, reinforcement learning,
Starting point is 01:04:32 like I want to start doing machine learning, what should I start with? The thing I usually suggest is, you know what? Don't actually try to install all those packages. Don't try to figure out which Python packages or which fancy MATLAB toolboxes you want to install. I usually point them in the direction of something like Weka. It's the data mining toolkit from New Zealand. It's a free, open source Java toolkit.
Starting point is 01:04:54 It has almost every major supervised machine learning method that you might want to play with. And I usually say, you know what? Pick a system that has some data and get to know your data. So use this data mining toolkit and take your data out for dinner. Get to know what it does, what it likes, and get to really understand the information and the way the many different machine learning techniques actually work on that data. And it's as simple as just pushing buttons. You don't have to worry too much about getting into the depth or actually writing the implementation code. You can just play with it.
Starting point is 01:05:27 Then once you get to know a little bit about how machine learning works, either you say, hey, this technique is perfect for me, then you can go and deploy it and use the right package from one of your favorite languages. But you can also then start to move into other more complex things.
Starting point is 01:05:40 OpenAI Gym is another really great resource. OpenAI Gym is a new platform where you can try out things like reinforcement learning as well. My students have been using it, and it's really pretty functional, with a quick ramp-up cycle. So people can get very familiar, again, with the machine learning methods without having to spend a Herculean amount of effort implementing the actual details. That's, I think, the part that will scare people off. But in terms of going straight to a robot:
Starting point is 01:06:10 I'm actually teaching an applied reinforcement learning course at the university right now. It's the first time we're teaching the course. As part of the Alberta Machine Intelligence Institute, we're trying to ramp up some of the reinforcement learning course offerings. And what's really cool about this
Starting point is 01:06:23 is that the students come in on the first day of class and get a pile of robot actuators, like two robot bits. In this case, they're Robotis Dynamixel servos. They're really nice, pretty robust hobby-style servos that also have sensing in them. So they have microcontrollers in the servos. The servos can talk back and say how much load they're experiencing, where their positions are. You talk to them over a USB port, and right away you can just start controlling those robots. So the robot bit is really simple. One Python script that you can download from the internet, and you're talking to your robot. You're telling it to do stuff,
Starting point is 01:06:54 and it's reading stuff back. And then the really cool bit is that if you want to start doing reinforcement learning, if you want to implement that, it's actually only about five lines of code, and you don't need any libraries. So you could just write a couple of lines of Python code and you could actually have that robot already learning to predict a few things about the world around it. You could learn that it's moving in a certain way. You can even start rewarding it for moving in certain ways. So the barriers are actually pretty small. So again, in terms of a pipeline: first, don't try to implement everything right away. If you want to do some machine learning,
Starting point is 01:07:22 go out and try some of the really nicely abstracted machine learning toolkits out there, like Weka, or maybe the OpenAI Gym if you want to get a bit more detailed. And then after that, go right to the robots. The robots now are very accessible, and it's not a hard thing to do. And again, if you want those five lines of code, hey, send me an email. I'll send them to you. I do. I may request those for the show notes, just because that's pretty cool.
Starting point is 01:07:52 Yeah, wow. Okay, well, excuse me. I need to go buy some robot parts. And they're not even that expensive anymore. The world is getting so exciting. Isn't it? How are we going to learn to trust our robotic overlords?
Starting point is 01:08:07 They'll reprogram us. They'll reprogram us. That's great. I was like, oh, no, no. They'll have our best interests in mind. I think it'll be fine. I think it'll be fine. Every time I'm asked about this, I'm like, oh, you know, I think maybe that's probably one of my closing thoughts. I know you're going to ask me for closing thoughts, and one of them is like, don't panic. It'll be
Starting point is 01:08:23 cool. And the reason I say that is, you know, I have a puppy. I treat our puppy really, really well. I don't mistreat our puppy. I take him out for lots of walks. I give him treats. We just bought him a new couch so he can sleep. I have a hope that when someday there's a superintelligence much smarter than us, it'll buy me a couch and take me out for walks and give me treats and buy me Netflix subscriptions. So I think that's my high-level picture: don't panic. I think it's actually going to turn out okay. I think with increasing intelligence will come increasing respect and
Starting point is 01:08:59 increasing compassion. So I'm actually not worried. I think Douglas Adams had it right with the big friendly letters: don't panic. And now I'm like, well, what about the dog and cat photos? I mean, are they going to take pictures of us and show them to the other superintelligences in the cloud? Like, look at what my human is trying to do, linear algebra. Oh man, my human tried to solder wires together. It was so cute. Oh, it's just so quaint. Yeah, exactly. Who knows, maybe it will be like that. I hope they're supportive and they buy us nice toys when we're trying to, you know, do our linear algebra and solder our wires. Christopher doesn't look convinced. I'm not sure I appreciate that future. What?
Starting point is 01:09:49 Do you have any more questions or should we kind of close it on that? We should probably close it on that. I don't think I can. All right. Patrick, do you want to go with that as your final thought? I will go with that. My final thought is don't panic. It's all going to work out. Thank you so much for being with us.
Starting point is 01:10:04 This has been great. Hey, thank you. It's been awesome. It's been a great conversation. Our guest has been Patrick Pilarski, the Canada Research Chair in Machine Intelligence for Rehabilitation at the University of Alberta, Assistant Professor in the Division of Physical Medicine and Rehabilitation, and a principal investigator with both the Alberta Machine Intelligence Institute, Amii, and the Reinforcement Learning and Artificial Intelligence Laboratory, or laboratory, depending on how you say it. Thank you to Christopher for producing and co-hosting.
Starting point is 01:10:39 Thank you for listening, and for considering giving us a review on iTunes, so long as you really like the show and only give us five-star reviews. But we really could use some more reviews. What are we, Uber? Go to embedded.fm if you'd like to read our blog, contact us, and/or subscribe to the YouTube channel. And now a final thought from... you? The final thought for you, from Douglas... Now we're just going to sit here in silence waiting for their final thought to come in. Might work. Send us your final thoughts. That sounds morbid. From Douglas Adams: don't panic. All right, that's all I got. I apparently didn't finish the outline.
Starting point is 01:11:26 Normally there would be a robotic quote in here, but I think we're going with don't panic. All right. Embedded is an independently produced radio show that focuses on the many aspects of engineering. It is a production of Logical Elegance, an embedded software consulting company in California. If there are advertisements in the show, we did not put them there and do not receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.
