Embedded - 187: Self-Driving Arm (Repeat)
Episode Date: May 2, 2019
Crossing machine intelligence, robotics, and medicine, Patrick Pilarski (@patrickpilarski) is working on smart prosthetic limbs.
Build your own learning robot references: Weka Data Mining Software in Java for getting to know your data, OpenAI Gym for understanding reinforcement learning algorithms, Robotis servos for the robot (AX is the lower-priced line), and five lines of code: Patrick even made us a file (with comments and everything!). Once done, you can enter the Cybathlon. (Or check out a look at Cybathlon 2016 coverage.)
Machine Man by Max Barry
Snow Country Tales by Suzuki Bokushi
Aimee Mullins and her many amazing legs (TED Talk)
Patrick is a professor at the University of Alberta, though a lot more than that: he is the Canada Research Chair in Machine Intelligence for Rehabilitation at the University of Alberta, an Assistant Professor in the Division of Physical Medicine and Rehabilitation, and a principal investigator with both the Alberta Machine Intelligence Institute (Amii) and the Reinforcement Learning and Artificial Intelligence Laboratory (RLAI). See his TED talk: Intelligent Artificial Limbs.
Transcript
Hello, and welcome to Embedded. I am Elecia White alongside Christopher White. Our guest
this week is Patrick Pilarski. I think we're going to be talking about machine learning,
robotics, and medicine. That's got to be cool. It's your show. You should know.
I should know. Hi, Patrick. Thanks for joining us today.
Hey, thanks so much. It's great to be on the show.
Could you tell us about yourself as though you were introducing yourself for a panel?
Sure. I never like introducing myself on panels, but in this case, I'm a Canada research chair at the University of Alberta here in Edmonton, Canada.
And I'm specifically a research chair in machine intelligence for rehabilitation and sort of putting together two things you don't usually see in the same place.
A lot of what I do is working on connecting machines to humans.
So bionic body parts, artificial limbs, and other situations where people and machines need to interact.
So we look at making humans and machines better able to work as a team.
So you're working on Darth Vader?
No, he's working on the $6 million man.
Yeah, that's the more positive spin on things.
Yes, we're definitely working on that.
We're working on, well, hopefully it's not $6 million.
The health system wouldn't be too thrilled with a $6 million man.
That would be $6 billion anyway.
A little lighter price tag might be good, yeah.
This is the point of the show where normally we ask you a bunch of random questions, but the lightning round is on vacation.
Oh, no.
So we only have one question to get to know you, and that is, who would you most like to have dinner with, living or deceased,
and why? Well, I think the most immediate person I'll have dinner with is my dear wife
later on this evening, and that's actually the one I like having dinner with the most.
So the tricky thing is that I don't like most. I'm not a guy that does most or best,
but I will answer in a different way, which is that I've
been currently reading a very cool book from an author named Suzuki Bokushi. And Suzuki Bokushi
was actually alive, you know, end of the 1700s. So long dead, I guess in this answer, but he lived
up in the really snowy bits of Japan. And so this is a very cool book, all these little snapshots
of one of the snowiest places that Japan could imagine.
And it's even snowier than here in Edmonton.
So I'd love to sit down with him over a cup of tea or some kind of nice evening meal and chat about how they deal with all their snow.
Because we've got some here, but wow, they get really socked up in certain parts of Japan.
So I think that would probably be the one I'd pick for now, just recently.
All right.
That is pretty cool.
And so now I wanted to ask you another strange question.
Do we have the technology?
Do we have the technology?
Okay.
Sorry.
How so?
It's a $6 million man quote.
Oh, sorry.
I'm off my game today.
I am so sorry.
Yes and no.
Yes and no.
We can rebuild him.
We can rebuild him.
Bionics, prosthetics that are smart.
This is crazy talk.
I mean, that's just, that's very cool on one hand.
On the other hand, what do you do? I mean,
what does this mean? So, that is actually a really good reaction. And it's actually probably the
most common reaction, I think, when we start to say, hey, yeah, you know, you might have an
artificial limb and it might be learning stuff about you. But it's actually not that crazy. So,
I mean, if you bear with me a little bit, when someone's lost a part of their body due to an injury or an illness, they sometimes need assistive technologies. They need technologies that are able to replace or sort of put back some of the function that was lost. And the really tricky bit here is that the more things you lose, in many cases, the more things you have to put back. The problem, though, is that the more things that are lost in the case of an amputation or something, if you're losing an arm, you need to restore the function of the arm, but you have fewer places to record from on the human body. You have fewer sort of windows into what the person really wants. And so a very natural way to start thinking
about how you might want to start putting
back that function and understanding what the person wants isn't even sometimes to be able to
try and pry out more signals from the human body, but it's, you know, why don't we just make the
technology itself just a little bit smarter? And then it can know things like, hey, you know,
it's Thursday and I'm making soup. Okay, cool. I'll be able to fill in the gaps. I'll be able
to sort of guess at what the person might want or what the person might do.
And it makes it a little bit more seamless and a little bit more natural.
So you can do more with less.
This is sort of like without the thermodynamics police coming in and locking us all up.
I mean, we're really trying to get something for nothing.
And machine intelligence helps us get a lot for a little.
So a smart prosthetic hand helps us do more with less.
I think that's the key thing.
And so it sort of takes the blah to a, oh yeah, maybe, maybe that makes sense.
So it's more like a self-driving arm.
More like a self-driving arm. Exactly. And this is actually very much a good analogy, because with a lot of the systems we think about that do stuff for us, you give really high-level commands. You do sort of the big picture
thinking and the technology fills in the gaps. We see this with everything from our smartphones to
our computers to maybe someday soon, I hope someday soon, the vehicles we have even up here
in Edmonton. And the nice thing about this is that, yeah, you could say, you know what, the self-driving
arm is you're giving the high level commands. But I mean, we just can't in some cases, for the bionic body parts case, we can't even measure the right signals from the body.
We can't sometimes get the information out for, say, fine finger control to play piano or catch a ball.
But the system could say, hey, in this situation, you're making these big kinds of motions.
I bet you want your fingers to coordinate in this kind of way. So you may be able to play that good lick on the piano.
So yeah, it's kind of like a self-driving arm, but without the scary bit, the bit that people always get scared about, which is sort of the Dr. Octopus side of things.
Oh, my arms are like controlling me or they're doing things that I don't want them to do.
I think if we've done everything right, it's like a really good human team, right?
A good sports team or a good team in any other sense is one where they work together so seamlessly that it doesn't seem like one is controlling the other, but everybody's working
really efficiently towards achieving the same goal. I think that's where we're going with
smart parts and better bionic bits. I think we've all seen those horror movies where,
you know, the arm made me do it, but I don't want to. Exactly, exactly. And one of my graduate students is actually working on, well, what she wants is a disembodied hand that might, you know, crawl across the room and go get stuff for you. We've got another set of students working on what we call prosthetic falconry. So you might, instead of having an arm attached to your body, have a quadcopter with a hand that flies across the room, picks stuff up, and comes back for you, like a prosthetic falcon essentially. So we're doing some cool stuff like that.
And then you could imagine, yeah, okay, the thing is actually pretty autonomous in the fact that it could actually, you know, move around the room a little. But for the most part, the chance of the system actually controlling you back is very, very low. I think the Doc Ock scenario is not something that we have to take too seriously. Although we do have a no-Doc-Ock rule in my lab. So you can put one extra body part on your body, you can put two on. The minute you put four extra limbs on your body, you're kicked right out of the lab. So we have a no-Doc-Ock rule.
All right. But before we get any deeper into that, and while Elecia is trying to get off the floor from laughing, I did want to kind of establish a baseline.
I don't think I understand, and I don't think a lot of other listeners might understand
what the current state of the art is.
Like, if I were to lose my forearm, heaven forbid, next week, what would, you know, if
I had the best insurance in the world, what would I end up getting as a prosthetic?
And what would that be capable of doing? Yeah, this is a great place to start. So first, I really hope that you actually
don't lose any body parts. If you do, you know, drop me an email. We'll see what might be some
good suggestions for you. You might get a prosthetic quadcopter. He would really like that.
I'll be right back. I'm going to get an axe. No, no, no, no. See, this is just flat out. No.
Although actually just as a side note, this might come up later on in our conversation,
but if you ever get a chance, there's a fantastic book called machine man written by Max Barry.
It's about a guy who, growing up, wants to be a train. He doesn't want to be a train engineer. He actually wants to be a train. Anyway, he loses one of his legs in an accident and pretty soon realizes that the leg he built is actually better than the biological one he's left with. Anyway, it goes all downhill from there. It's a fantastic sort of dark satirical work of fiction, but it's definitely worth reading. It's on the required reading list for my laboratory. So we've got a copy on the shelf, but it fits right in with your question. So no going to get the axe.
But to answer your actual question, the state of the art is partially dependent on what kind of amputation someone has. The majority of people will get something that isn't actually that robotic at all. They'll get something that's what we call a body-powered prosthesis, but it's essentially a series of
cables and levers. So it's something that they control with their body. It's purely mechanical
with no electrical parts. And for the most part, a lot of people really like those systems in that
they're trustworthy, they respond really quickly, they can sort of feel through the system itself.
So if they tap the table with it, they can feel it sort of resonating up their arm.
Recently, there's been a big surge in newer, more robotic prostheses. We call them
myoelectric prostheses, but really what this means is that they're recording electrical
signals from the muscles of the body. So if someone has an amputation, say just above the
elbow, then you imagine they might have a socket. They might have something that's put over top of their residual
limb or the stump, and they might have sensors that are embedded inside that socket. So those
sensors would be measuring the electrical signals that are generated when people contract their
muscles. So when they flex the muscles in that stump and the remaining limb, the system can
measure that and use that to control, say, a robotic elbow or maybe a robotic hand.
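To make the myoelectric idea concrete, here is a minimal sketch, not from the episode and not any vendor's firmware, of turning one muscle channel into a hand command: rectify the raw signal, smooth it into an envelope, and compare it against thresholds. The sample values, thresholds, and command names are invented for illustration.

```python
# Minimal sketch (not from the episode) of one myoelectric channel driving a
# hand command: rectify the raw EMG, smooth it into an envelope, and threshold.
# Sample values, thresholds, and the command names are assumptions.

def emg_envelope(samples, alpha=0.05):
    """Rectify the raw EMG and smooth it with a simple exponential filter."""
    env = 0.0
    out = []
    for s in samples:
        env = (1 - alpha) * env + alpha * abs(s)
        out.append(env)
    return out

def hand_command(envelope_value, open_threshold=0.02, close_threshold=0.05):
    """Map muscle activation level to a coarse hand command."""
    if envelope_value > close_threshold:
        return "close"
    if envelope_value > open_threshold:
        return "open"
    return "hold"

if __name__ == "__main__":
    # Fake EMG trace: quiet at first, then a strong contraction.
    raw = [0.02, -0.03, 0.01, 0.8, -0.9, 0.85, -0.7, 0.8, -0.75]
    for value in emg_envelope(raw):
        print(f"envelope {value:.3f} -> {hand_command(value)}")
```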
Are these flex sensors, or are these like the heart rate sensors that use lights and look at the response from that?
So they're actually multi-pole electrical sensors.
So you're looking at actual voltage differences.
Okay.
Yeah.
So it makes contact with the skin. Some of them are these little sort of silver domes that just press lightly into the skin. Some of them have these tiny strips of, I think, very expensive wire that make good electrical contact with the skin. But when your muscles contract, when all those motor units get recruited and start doing their thing, they actually generate changes in the electrical properties of the tissue. So you can really,
in a very straightforward way, measure it. There's actually commercial products now that you can go
down to your favorite consumer electronics store and get something, one of the products that is
called a Myo made by Thalmic Labs. Yeah, SparkFun also.
Yeah, exactly. And you can easily get one of those and jam it right in. And that's using the same kind of signals.
Obviously, the clinical systems have a bit more precision to them.
And also, they're a bit more expensive.
But yeah, so the idea is you measure some of these signals.
And they can be used to say whether a robotic arm should go up or down or whether a robotic hand should open or close. So in terms of top of the line systems where you have a robotic, let's say a robotic hand and a robotic elbow for someone, the hand itself
might be able to move individual fingers. But the caveat there is that the fingers can typically
only move to open or close. What that means is the person can say, pick a grip pattern, like
I want to make a fist or I want to grab a key. And then the hand would just open and close. So they don't really have full control over the individual fingers, the individual
actuators. Likewise, the wrist is typically fixed or rigid and people won't be rotating their wrist
or flexing their wrist. This is starting to change, but in terms of what we see out there in the clinic, what people are actually fitted with, it's very uncommon to see anything
more than say a robotic elbow with a robotic hand attached that opens and closes. So that's the
sort of clinical state of the art. The fancy dancy, what might actually be happening soon
kind of thing is a robotic arm where there's individual finger control. The fingers can sort
of adduct and abduct, so they can move side to side, open and spread your hand. Multi-degree-of-freedom wrists, or wrists that move: they flex, they bend sideways, and they also rotate. And also full shoulder actuators. So I mean, if you think about what will be coming down the pipe in another five to ten years, a lot of our colleagues out east and some of those down in the States have done some really, really cool jobs of building very lightweight, very flexible, and highly articulated bionic arms.
And those will, I hope, be commercialized sometime soon.
So we're seeing a big push towards arms that can do a lot.
But if you have an amputation above the elbow, you have to learn how to fire the right muscles to generate that voltage we're reading and send it down to the fingers. It's a hard mental problem and a lot of work for somebody to be able to use these, isn't it?
Well, if we have the six million dollar man, that's the six million dollar question: how do we actually control all those bits? And so I really
think this is the sort of the critical issue that we're solving, not just with prosthetics, but also
with a lot of our human machine interaction technology, is that, I mean, we have sensors. We have really smart folks making really spectacular sensors of all different kinds. Sensors are getting cheaper, and the density of sensors we can put into any kind of device is just skyrocketing. Likewise, we have fancy arms. We have really advanced robotic
systems that can do lots of things. They can do all the things a biological limb can do to a first approximation and maybe someday even more.
But the point you bring up is a really good one.
Like gluing those two things together
is in my mind, the big remaining gap.
So how do we actually,
even if we could record a lot from the human body
and even if we have all those actuators,
even if we have all those robotic pieces
that move in the ways we hope they would, how do we connect those two?
How do we connect the dots?
How do you read people's minds?
Yeah, that really is, I think, the big question.
Because reading from all the things we could sample from their body, I really think of it like looking at body language. It's the same kind of idea, but we're really good at it as meat computers. We're great at looking at another body and sort of trying to
infer the intent of that particular person. We're asking our machines to really do the same thing.
We're asking them to look at all of the different facets of body language that are being presented
by, say, the wearer of a robotic arm, and then the robotic arm has to figure out what that person actually wants. A lot of the time our engineering breaks down at that scale, our ability to, say, map any combination of sensors directly to any combination of actuators. If I put a sensor on someone's biceps and on their triceps, so you know, the bits that make the elbow flex and extend, it's pretty straightforward. I mean, all of us could sit down and hack out a quick script or build a hardwired system that would take the signals from the bicep and the tricep and just sort of maybe subtract them. And now you've got a great control signal for the elbow to make the elbow go up and down.
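As a concrete illustration of that "quick script" idea, here is a toy sketch, not a clinical implementation, where smoothed biceps and triceps signals are subtracted to produce an elbow velocity command. The gains, deadband, and fake sample values are assumptions.

```python
# Toy version of the "quick script" described above: subtract the smoothed
# triceps signal from the smoothed biceps signal and treat the difference as
# an elbow velocity command. Gains, deadband, and sample values are invented.

def envelope(raw, state, alpha=0.5):
    """One step of rectification plus exponential smoothing."""
    return (1 - alpha) * state + alpha * abs(raw)

def elbow_velocity(biceps_env, triceps_env, gain=2.0, deadband=0.05):
    """Positive output flexes the elbow, negative output extends it."""
    diff = biceps_env - triceps_env
    if abs(diff) < deadband:
        return 0.0            # ignore small co-contractions
    return gain * diff

if __name__ == "__main__":
    bic = tri = 0.0
    # Fake samples: the user flexes (biceps active), then extends (triceps active).
    for b_raw, t_raw in [(0.7, 0.1), (0.8, 0.1), (0.1, 0.6), (0.05, 0.7)]:
        bic, tri = envelope(b_raw, bic), envelope(t_raw, tri)
        print(f"elbow velocity command: {elbow_velocity(bic, tri):+.2f}")
```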
And in the clinic, this is typically how the elbow control works. But if we start to think about having 10 sensors, hundreds of sensors, if we
start reading directly from the nerves of the arm, so the peripheral nervous system, or even
recording directly from tens, hundreds, or thousands of neurons in the brain, suddenly it's not so
clear how you'd go about hand engineering a sort of
fancy control algorithm that takes all those signals and turns them into some kind of control
signal for the robot arm. That's the really hard thing. I mean, that's really where the machine
learning starts to fit in, where we can start to learn the patterns as opposed to engineer those
patterns.
Okay, so that's how we get to machine learning, which is the machine intelligence. Actually, do you prefer machine intelligence or artificial intelligence?
One thing I'd say right from the top is that artificial intelligence is often the wrong word.
And it's a phrase that comes with so much baggage.
I think we see it so much in the media
and the popular culture.
It gets thrown around a lot.
I gave a lecture just last week talking about,
really, I mean, we have people applying AI
to what amounts to an advanced toaster and calling that artificial intelligence and then arguing about toaster rights or saying, oh, my goodness, this toaster is like an existential threat to my ongoing existence.
And sometimes people are really applying terms like artificial intelligence to just a clever control system in something like a toaster or robot vacuum cleaner.
And then there's people that are thinking really about machines that might have some kind of very,
very strong or detailed kind of general intelligence. And we conflate those two together. So I think AI, because of all of its baggage, is actually something that just doesn't really hit the point. The other tricky thing about just talking about intelligence, artificial or meat intelligence or hardware intelligence, is that when we talk about intelligence, people often think it's sort of like it is intelligent or it isn't intelligent.
I think by casting a term like AI onto the entire endeavor, it really tries to make it very binary.
And really, we get a gradation.
I mean, your thermostat is in some level fairly intelligent.
It figures out where it needs to go to keep the temperature in your house right on point.
A self-driving car is a different kind of intelligence.
A Sony AIBO, one of the little robot dogs.
Yeah, you could say that there's intelligence there.
And likewise, when we start looking at programs like AlphaGo, the Google DeepMind program that recently took out Lee Sedol in a human machine match in the
game of Go. I mean, that you could argue that there's intelligence there. Now, I'm just going
to keep breaking this down a little bit, if that's okay. The intelligence piece is also a bit soft in terms of how we throw in things like learning. So you asked me about machine
learning or machine intelligence. I can imagine, I think a lot of us could imagine that there might be a system that we would call
very intelligent, a system that has lots and lots of facts. Think of like a Watson, Jeopardy-playing robot style thing that knows lots and lots and lots of facts. Those facts, let's pretend that those facts have been hand engineered. They've been put in by human experts. So the system might not have learned at all, but it might exhibit behaviors that we consider very, very intelligent. At the same time, we might have systems that maybe we don't think are that intelligent, but that are very evidently actually learning. But I mean, it wouldn't be able to tell you where Siberia is or who the leading public figure in Japan is. Like that's
something that is facts versus learning. So intelligence, I think, involves learning. It
involves knowing things. It involves predicting the future or being able to acquire and
maintain knowledge. And it actually revolves around using that knowledge to do
something, to maybe pursue goals or to try to achieve outcomes. So I break down intelligence
maybe into machine intelligence. Let's be specific about machine intelligence. Breaking down machine
intelligence into representation, how a machine actually perceives the world, and then prediction,
which is really in my mind building up facts or knowledge about the world.
And then control, which is in a very engineering sense, being able to take all of that structured information, all of those facts, and then use that to change a system's behavior to achieve a goal.
So I think that's a nice clear way of thinking about intelligence and specifically machine intelligence. So when I talk about these kinds of technologies that we work on in the lab or when I'm talking more generally
about what most people say is artificial intelligence, I really do like, I prefer
machine intelligence because it's kind of clear. We can say, yeah, we're talking about machines
and we're talking about intelligent machines. It doesn't, like, there's nothing artificial about it.
If it's intelligence, then it's intelligence.
Is deep learning a subset of machine intelligence or sort of the same level,
but a different word for it?
So deep learning, I mean, there's a lot of excitement. I'm sure you've seen all of the large amounts of publicity that deep learning has received in recent months and years. And for good reason.
It does some very, very cool things.
In the same way, there are people who are looking at deep learning to do things that we would consider very, I guess, higher level intelligence tasks.
Looking at things like manipulating language and understanding speech is already what we might consider to be a very intellectual pursuit. And there's also deep learning, which is being used for some fairly specific applications,
things that are maybe what we consider less general in terms of intelligence, but more like
a very targeted or a specific function. So I mean, one thing we've looked at is applying
deep learning to some laser welding. So looking at how we could use it to see
whether or not a laser weld might be good or bad. This is just one project I worked on with one of
my collaborators. And that, I mean, that's a very, it's not what I would consider a system that has
very general intelligence. When you compare that to something like a language translation system,
like some of the things that Google's been working on with deep learning, to be able to
generally translate between multiple languages, that we'd consider a higher level kind
of intelligence. Still not really a general intelligence. You wouldn't stick that in your room and have it go around and suddenly bake you toast and then write a dissertation on ancient Chinese poetry. That's another step up the ladder, I think.
Maybe a couple steps.
Maybe a couple steps. Yeah, maybe one, maybe two.
But deep learning, yeah, it's a step in the right direction. So step in a direction that
leads us towards more complex systems that might have more general capabilities.
So when I think of deep learning, it's about taking an enormous amount of data and throwing it at a few different algorithms that are pretty structured.
And it leads to neural net-like things.
And you can't always see inside of deep learning.
Like if you want to build a heuristic instead, you don't go down the deep learning path.
You're not going to go there.
Is that right?
It's been a long time since I've learned the difference between these things.
Yeah, so deep learning, deep neural nets especially,
most of the time when we speak of deep learning,
we're really talking about a deep neural network.
And people have been working.
There's some very nice maps you can find on the internet
showing the different kinds of deep nets and the different ways that they're structured.
Some of them are more interpretable than others.
In essence, you're very right.
You're taking in a lot of data.
And I think one way that maybe the clearest way to start separating out the different kinds of machine learning and machine intelligence that we might want to play with as engineers, as designers, as just interested people, is to think less about the usual way we label things. Like deep learning
is typically a case of what we call supervised learning. There's unsupervised learning as well,
which also leverages deep nets. And then there's the field that I work in called reinforcement
learning. But maybe more clearly, we could say that a lot of the cases of deep learning that
people use deep learning for are actually cases of learning from labeled examples. So it's like you give a ton of examples, and each of those examples has a usually human-
generated label attached to it. So you're going through the internet, you're like, I want to find
pictures of grumpy cat. And so you show a bunch of images and then the system says, yeah, yeah,
grumpy cat. And you're like, no, that wasn't Grumpy Cat or Grumpy Cat. Yeah, that was. The system adapts its internal
structure. It changes its weights so that it better lines up the samples with the labels.
So a lot of what we see in deep learning, the majority, I think, is a case of learning from
labeled examples. So you already know what the truth is when you go in. Absolutely.
And now for training.
So this is also something that we see a lot with the, especially with deep nets, is that you usually have a phase of training.
Many, many complex heuristics have been developed to try and figure out how to train them correctly.
And there's some really smart people working on that.
I don't work on that because there's plenty of other smart people solving those problems.
But the idea is that you find a way to
train it, usually on a batch of data. And now you have other examples during deployment, let's say.
Now you have a grumpy cat detector that you've sent off into the world and has to do its job,
and it now sees new examples of photographs and has to say yes or no, or say what that photograph
actually is, or what that string of speech is. So the deployment systems will
now be seeing new data that has not previously been presented. So this is a training and a
testing paradigm. That's one of the important things as well about the usual way that we deal
with learning from labeled examples. You build some kind of classifier or some kind of system
that learns about the patterns in the information, and then you would deploy that system.
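To put a shape on the "learn from labeled examples, then deploy" workflow being described, here is a minimal single-layer sketch. It is deliberately tiny and nothing like a real deep net; the feature vectors and labels are invented. Weights get nudged until predictions line up with the human-provided labels, and the trained model is then run on examples it has never seen.

```python
# Minimal sketch of the "learn from labeled examples, then deploy" workflow.
# A single weight layer nudges ("jiggles") its weights so predictions line up
# with human-provided labels; the data here is invented for illustration.

def predict(weights, features):
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0      # 1 = "grumpy cat", 0 = "not grumpy cat"

def train(examples, passes=20, lr=0.1):
    weights = [0.0] * len(examples[0][0])
    for _ in range(passes):
        for features, label in examples:
            error = label - predict(weights, features)
            # Shift each weight a little toward making the label come out right.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
    return weights

if __name__ == "__main__":
    # Training phase: labeled examples (feature vector, human label).
    labeled = [([1.0, 0.9, 0.1], 1), ([1.0, 0.2, 0.8], 0),
               ([1.0, 0.8, 0.3], 1), ([1.0, 0.1, 0.9], 0)]
    w = train(labeled)
    # Deployment phase: new, unlabeled examples the system has never seen.
    for new in ([1.0, 0.85, 0.2], [1.0, 0.15, 0.75]):
        print(new, "->", predict(w, new))
```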
You make it sound so easy, but yes.
I make it sound so easy.
It's actually not.
Actually, I think as we were just comparing our notes earlier before the show,
it's often one of the most difficult things
is just installing all the right software packages.
I think sometimes that's one of the most challenging bits.
But the understanding of the concepts is actually,
none of it's really that fancy or that tricky
when you think about it at the highest level. Really, it's like saying, hey, yeah, I have this
machine. This machine has some internal structure. I showed a sample of something. I showed an
example and I tell it what that thing should be. And it just sort of shifts itself around. It
jiggles its internal structure in a really nice way so that it's better able to say the thing I
wanted to say when it sees another example that's close to the one I showed it.
So that's what I mean by what we usually mean by supervised learning.
It covers a lot of what we consider deep learning.
And the only thing that makes it deeper is that how complex is that internal structure of that thing that jiggles.
So the internal structure that changes to better line up samples with labels, when we look at deep learning as opposed to earlier work on multilayer perceptron or the one or two layer neural nets, we're just adding the complexities of that internal system and the way that pieces interconnect with other pieces.
So we're just dialing up the complexity a bit.
And because of that, the kinds of relationships, the kind of sample label pairs that can be learned gets a lot more powerful.
We get more capacity out of that. But in essence, it's very much the same thing as before, but more.
Just the training bit, the actual method for going about updating that black box, that deep neural
net, that's one of the things that becomes even more complex now than it was in previous years. But when you talk about smart prosthetics, it's hard to get million-point samples for
a human who just went through something pretty traumatic like losing a limb.
And their samples aren't going to apply to somebody else's because our bodies are different.
So you don't do this type of deep learning, do you? You mentioned
reinforcement learning. Yeah, so that's actually great. So let's just jump into reinforcement
learning because that is the area, that's my area of specialty, my area of study and the area where
most of my students do research. So I talked about learning from labeled examples being the general
case that we see in machine learning and one of the areas of greatest excitement.
There's also what we could consider learning from trial and error.
So when I say reinforcement learning, I actually do mean learning from trial and error.
And the kind of learning I work on is a real time learning approach. Instead of trying to have a training and a testing period where you show a large batch of previously recorded data, the systems we work with are essentially dropped in cold.
So they could be attached to a prosthetic arm.
They could be attached to a mobile robot.
And while that system is actually operating, while that system is interacting with the person or the world around it, it's learning.
It's learning all the time, and it's changing itself all the time. So the data that's being acquired is actually readily available and it's available from the
actual use of the system. So this is the case where, instead of learning from a vat of data, we're learning from a river of data or a fire hose of data, the information that's currently flowing through the system and flowing by the system. So it's a different kind of learning. And it's a very nice thought that we can have systems that not only learn from stored data, but can also learn from real ongoing experience. So
that's the area, that's the area we work in. So could you do something like, I know some of the
self-driving car manufacturers have their software on, but it's not actually doing any self-driving.
It's in shadow mode. Do you do any training where, okay, somebody lost one arm, but they have a good
right arm, let's say, could you do any training with the good arm and say, okay, this is how this
works. And this is where these signals are. And this is how this person uses this and then apply
it to the prosthetic later? Oh, that is actually exactly
what we're doing right now. So one of my students is, we're just finishing up a draft of a research
paper to submit to an international conference. And this student's work on that paper and actually
that student's thesis is really about that very idea where you could imagine if you have someone
who's lost one arm, but they have a healthy biological arm on the other side, you could
just have the biological arm doing the task, again, cutting vegetables or catching a ball
or doing some complex task.
And you could have the other, the robotic limb, just watching that, essentially seeing
what needs to happen and actually being trained by the healthy biological limb.
And you could have this in a sort of a one-off kind of fashion where you show it a few things and it's able to do it. Or you could have it actually watching the way
that natural limbs move in an ongoing fashion and just getting better with time. So really, that's a great insight: yeah, we could actually have a system learning. And actually the way the student is teaching the arm is that it actually gets rewarded or punished depending on how close it is to the biological limb.
So I talked about reinforcement learning
and if we get right down to it,
that's essentially a learning through trial and error
is learning through reward and punishment.
So like you'd train a puppy,
we're training bionic body parts
or any other kind of robot you'd like.
When the robot does the right thing
or when the system does the right thing,
it actually gets reward.
And its job is to maximize the amount
of reward it gets over the long term. So that's the idea of reinforcement learning: the system not just wants to get reward right now, but it wants to acquire reward, positive feedback, for an extended future, for some kind of window into the near or far future.
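Here is a toy sketch of that framing, assuming, as in the mirror-training example above, that the reward is simply how closely the prosthetic joint tracks the demonstrating biological joint, and that the learner scores behavior by a discounted sum of reward over the future rather than just the next instant. The angles, discount factor, and reward shape are all invented.

```python
# Toy sketch of the trial-and-error framing: reward the prosthetic for staying
# close to what the demonstrating biological limb did, and score behavior by
# the discounted sum of reward over the future, not just the next instant.
# Angles, the discount factor, and the reward shape are assumptions.

def reward(prosthetic_angle, biological_angle):
    """More reward (closer to zero) the more closely the robot mirrors the person."""
    return -abs(prosthetic_angle - biological_angle)

def discounted_return(rewards, gamma=0.9):
    """The quantity a reinforcement learner tries to maximize."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

if __name__ == "__main__":
    biological = [10, 20, 30, 40]          # demonstrated elbow angles, in degrees
    candidates = {
        "tracks the demonstration": [12, 19, 31, 42],
        "ignores the demonstration": [10, 10, 10, 10],
    }
    for name, trajectory in candidates.items():
        rs = [reward(p, b) for p, b in zip(trajectory, biological)]
        print(f"{name}: return {discounted_return(rs):.2f}")
```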
Okay, digging a little bit more into this, because I'm just fascinated. We are mostly symmetric creatures. And sure, chopping vegetables
is something that you do with one hand. And you kind of have to do it with one hand, because the
other hand is used for holding the vegetables. But as I sit here gesturing wildly, I realize I am mostly symmetric with my gestures. Do you worry about that
sort of thing as well? Or are you mostly task oriented? A lot of what we do is task oriented.
So specifically, I do many things. Some of the things we do are wild and wacky. Like we have
the third arm that you connect to your chest and we're looking at how to control the third arm that
you wear. We've got the prosthetic falconry.
We've got all this other weird stuff that we do.
And I really enjoy that.
We actually, one of my students is building a go-go gadget arm.
So he's building a telescoping forearm so that if you lose an arm, maybe you could have an arm that stretches out and grabs stuff.
Something our biological limbs couldn't actually do.
So in those cases, the symmetry might be lost.
You might not have another arm on the other side
coming out of your chest. You might not have a telescoping forearm on your healthy arm because
only your robot arm can do that. But in the cases where we are looking at people that have an arm
that's trying to mirror the kind of function we see in a biological limb, a lot of what we look
at is very task-focused. So we're looking at helping people perform
activities of daily living. So the activities that they need to succeed and thrive in their
daily life and to make their daily life easier. So we do start and often finish with actual real
world tasks. Now, this is a nice gateway towards moving to systems that can do any kind of motion.
So the training example, that sort of learning from demonstration that we just talked about where the robot limb learns from the biological limb,
that's a sort of a gateway towards systems that can do much more flexible or less task-focused
things. But we usually start out with tasks and we validate on tasks that we know in the clinic
are going to be really important to people carrying out their daily lives.
Okay, so what about the internet? Are these prostheses going to be controlled with my smartphone? So instead of it knowing it's Thursday and time to make soup?
Now I can tell it go into soup mode.
That's it. So this really gets towards a conversation on what sensors are actually needed.
So right now, just the general state of things is that the robot limbs, the ones that we would see attached to someone in the clinic, are typically controlled by embedded systems.
We have small microcontrollers.
We have small chips that are built onto boards, and they're stuck right in the arm.
There's a battery.
The chips are very, very old, usually. They're not that fancy. They're not that powerful. They don't store data. There's actually very little even closed-loop control that goes on in the typical systems in most prostheses. Now for lower limb, for leg robots, that's getting a little better, so I'll soften that constraint, but for the upper limb, often we're not seeing devices that have that much complexity.
Those are not internet enabled.
They do not connect to other devices around them.
Only very recently have we seen robotic hands that now connect to your cell phone via Bluetooth
and are able to, say, move or change their grips depending on what you tell it through
your cell phone.
There's also examples of what we call grip chips. Some of the commercial suppliers have built essentially little RFID chips that you hang around your house, so that when you go up to your coffee maker, your hand will pre-shape into the coffee-cup-holding shape.
So we're starting to see a little internet of things essentially surrounding prosthetic
devices.
But it's still, I think, maybe not in its infancy, but maybe in its toddler phases in
terms of what could happen when we begin to add in, say, integration with your calendar, integration
with the other things that permeate our lives in terms of the data about our patterns and our
routines that might really make the limb better able to understand human needs, human intent,
and human schedules, and fill in the gaps that we can't fill in with other sensors.
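Here is a hypothetical sketch of the "grip chip" idea just described: RFID tags placed around the house map to grip preshapes, so the hand configures itself as the wearer approaches a tagged object. The tag IDs, grip names, and callback are all invented for illustration; real products will differ.

```python
# Hypothetical sketch of the "grip chip" idea: RFID tags placed around the house
# map to grip preshapes, so the hand configures itself as the wearer approaches.
# Tag IDs, grip names, and the trigger callback are invented for illustration.

GRIP_FOR_TAG = {
    "tag-coffee-maker": "cylindrical",   # wrap fingers around a mug
    "tag-front-door":   "key",           # pinch grip for a key
    "tag-desk":         "tripod",        # precision grip for a pen
}

def on_tag_detected(tag_id, default_grip="open"):
    """Pick a grip preshape when a known tag comes into range."""
    grip = GRIP_FOR_TAG.get(tag_id, default_grip)
    print(f"tag {tag_id!r} -> preshaping hand to {grip!r}")
    return grip

if __name__ == "__main__":
    for tag in ("tag-coffee-maker", "tag-unknown-shelf", "tag-front-door"):
        on_tag_detected(tag)
```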
But there are a lot of sensors being used in various medical and non-medical ways to help us get to better health.
Fitbit is the obvious case with lots of data.
And it has changed people. I feed my Fitbit, you know, let's go for a walk.
But are we seeing the same sort of things through rehab and physical therapy? Are there tools to help people that are sensors and IoT connections?
Yeah, so I think new sensors are actually one of the areas where we'll see the most progress in terms of increasing
people's ability to really use their technologies. A lot of what's limiting current devices, I mean,
some of the control is not intuitive. The control is a bit limited.
And the feedback back to the human is also quite limited.
A lot of that could be, I think, mitigated if we give the devices themselves better views into the world.
So this gets back towards what you're saying.
I mean, you could imagine that we have things like a series of, we have Fitbits.
We have other ways of recording the way the body is changing in terms of how much you're sweating, what's happening around it, the humidity of the air.
There's many sensors we could add that would sort of fill in the gaps for a device.
So at the conferences, at the research level, we're seeing a ton of interest in this space.
So there's people that are building ultra high density force sensing arrays that you could put inside a prosthetic socket so it can actually feel how all the muscles in that residual limb are changing.
There's people who are building things, they're putting accelerometers, they're putting inertial measurement units, all these different kinds of technologies.
There's embeddables, so there's embedded sensors, so sensors that are implanted, little like grains of rice, implanted directly into the muscles of the body. These are also research prototypes that are, I think, already in clinical trials or beyond now,
where you actually have technology embedded right in the flesh itself so that you can take
readings directly from the muscles, directly from the nerves themselves, and directly from all the
other bodily functions that begin to support these devices. So this is an area where we're going to see a huge amount of change. It gets back to our earlier conversation about how you start mapping all of those pieces of information to the control of motors. But we're actually seeing a huge surge in interest in different sensory technologies.
Even for people that haven't lost limbs, I mean, there are devices again, like the Myo I mentioned earlier, and there's also, I think, the EEG headset.
One of my students has one.
We're using it for research.
But the meditation-supporting EEG headset with a couple of EEG electrodes in the front.
I think it's the Muse.
Okay.
No, no, no.
I've seen these.
I've played with them.
Yeah, yeah, yeah.
I have never seen one that had any repeatable results.
Oh, really?
No, they have some that control video games and stuff. You have to learn to concentrate on something.
I think it just measures concentration.
I've seen it work.
Well, I mean, you could do that by just measuring how much the muscle in my forehead moves.
You don't have to do anything interesting.
It's cooler to have it on your brain.
Yeah, but I've never had it be repeatable beyond what you could tell because I had a line between my eyebrows.
Yeah, and that's okay. So I think if we focus on trying to, I like to think of signals,
this is my view, this is sort of my default view of how we approach presenting information to our
machines and how I actually think about the information itself, is that we never label
any signals. So when I stick signals or when I measure things from the human body and I stick them into, say, a machine learner, when I actually give some kind of set of information to a
reinforcement learning system, they're just bits on wires. So the nice thing is that it doesn't actually matter to our machine learners, at least to me anyway, whether it's the contractions in the facial muscles or whether it's actually EEG that's leading to discriminating signals.
have to be clean information. It could be noise. Noise is just information we haven't figured out
how to use yet. So if we actually can think about recording all the more signals, lots of signals,
the system itself can figure out how to glean the best information from that soup of data.
So I'm not worried, actually. It's actually a very
sort of a relaxing and refreshing view into the data is that I'm not so worried about whether or
not it's one kind of modality or another, or whether or not it's even actually consistent,
as long as there's certain patterns. If there's no patterns, then I mean, we can say maybe that
sensor is not going to be useful. But that's more of a, do we put the expense of actually
deploying that sensor as opposed to, do we give that sensor as input to our learning system?
In many cases, the learning system can figure out what it uses and what it doesn't. And sometimes
what it figures out how to use is actually very clever and sometimes buried in that sea of noise
or the sea of what we think is unreliable signals. It's actually a very reliable signal when you put
it in the context
of all the other signals
that are being measured
from a certain space.
So it's actually a very cool viewpoint
where you're like, you know what?
Here, just have a bunch of bits and wires.
And you think about the brain,
you're like, hey,
it's also kind of like
a bunch of bits and wires.
No one's gone in and labeled
the connections from the ear
to the brain as being audio signals,
but they're still containing information
that comes from the audio.
So anyway, it's a neat perspective.
No, that's a really interesting way
of thinking about things.
Because when you think about machine learning
and deep learning, often the thing people bring out is,
oh, well, we don't really know
what's going on inside the system.
But now we don't even know what's going into it.
It gets signals and it makes patterns.
I mean, that's how our brains work.
We make patterns out of things and we don't necessarily know what their
provenance is.
Yeah. It's even more than that. It's actually quite funny when I think about the things we do on a regular daily basis with the information we get.
So a very standard,
like a very smart and usual
engineering thing to do would be to take like a whole bunch of signals. You've got like hundreds
of signals and you're like, okay, let's find out how to sort of reduce that space of signals into
a few important signals that we can then think about how to make control systems on, or we can
think of a way to clearly interpret and use in our designs. Usually we're trying to take a lot
of things and turn them into a few things. Almost exclusively, every learning system that we use
takes those things, let's say we have a hundred signals, and it might blow that up into not just
a hundred signals, but a hundred thousand or a hundred million signals. We're essentially taking
a space and building a very large set of nonlinear combinations between all of those signals. And now the system, the learning system actually gets all that much larger, that much more detailed input space that contains all of the correlations and all these other fancy ways that other information is relating to itself. It now gets that as input. And even if you don't do a deep learning, like there's some of my colleagues who have published a paper on shallow learning,
which says, hey, you know,
all the stuff you can do with deep learning,
if you think of a really good shallow representation,
like a single layer with lots of inherent complexity,
you can do the same kinds of things.
So you can think of that as like,
yeah, let's just take a few signals
and blow them up into lots of signals
that capture the nonlinear relationships
between all of those other input
variables. It's kind of cool, but it's kind of weird. And it scares the heck out of some of my medical and engineering collaborators especially. I'm saying, yeah, no, this is great. No, we're not
going to do principal component analysis. We're going to do the exact opposite. We're going to
build this giant nonlinear random representation or a linear random representation out of those
input signals. It's kind of cool. Do you ever associate a cost with
one of the signals? I mean, as a product person, I'm thinking all of these sensors, they do actually
have physical costs. And so if you are building a representation in machine learning world,
do you ever worry about the cost of your input? Absolutely. And the cost of the
input is not even just the physical costs, but also things like the computation costs. A lot of
what I do is real-time machine learning. I'm hoping that I can have a learning system that
learns all the time, and not just all the time, but very rapidly. So many, many, many, many times
a second. And so as we start to add in, say, visual sensors, if you want to do any kind of processing on that visual input that
the camera provides, you're starting to incur a cost in terms of the rate at which you can get data.
So there's physical costs that we do consider. There's also the computational costs and just the
bulk of those particular signals. So we do consider that. There's interesting ways that
the system itself can begin to tell us what signals are useful and which ones aren't. So
when we start to look at what's actually been learned and how the system is associating signals
with outputs, we can actually say, oh yeah, you know, maybe this sensor isn't actually
that useful after all. There's some new methods that we're working on in the lab right now,
actually, that are looking at how the system can automatically just sort of dial down the
gains, let's say, on signals that aren't useful. So it's really easy then for us to go through and
say, hey, okay, the system is clearly not using these sensors. Let's remove those sensors from
the system and with them, those costs and those computational overheads as well.
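As a toy illustration of that idea (not the lab's actual method), the sketch below fits a linear predictor online, tracks a running "utility" for each input, and flags inputs whose utility stays near zero as candidates for removal, along with their physical and computational cost. The data, step sizes, and threshold are assumptions.

```python
# Toy illustration (not the lab's actual method) of letting the learner tell
# you which sensors matter: fit a linear predictor online, track a running
# "utility" per input, and flag inputs whose utility stays near zero as
# candidates for removal. Data, step sizes, and the threshold are assumptions.

import random

def online_fit(samples, lr=0.05, decay=0.99):
    n = len(samples[0][0])
    w = [0.0] * n
    utility = [0.0] * n
    for x, target in samples:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = target - pred
        for i in range(n):
            w[i] += lr * err * x[i]
            # Utility: running average of how much each input contributes.
            utility[i] = decay * utility[i] + (1 - decay) * abs(w[i] * x[i])
    return w, utility

if __name__ == "__main__":
    random.seed(0)
    data = []
    for _ in range(2000):
        useful = random.uniform(-1, 1)       # sensor that drives the target
        noise = random.uniform(-1, 1)        # sensor that carries nothing
        data.append(([useful, noise], 2.0 * useful))
    _, utility = online_fit(data)
    for i, u in enumerate(utility):
        verdict = "keep" if u > 0.1 else "consider removing"
        print(f"sensor {i}: utility {u:.3f} -> {verdict}")
```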
Yeah, there's the computation, the physical, the power, all these costs.
Absolutely. And power is a big one, especially with wearable machines. I think you see this a
lot with embedded systems. We have to care a lot about how long our batteries can run. If you're
going out for a day on the town and your prosthetic arm runs out of batteries in the
first half an hour, that's not going to be good. So we do have to be very careful about the power consumption
as we start putting,
especially when we start putting learning systems
on wearable electronics and wearable computing.
You think of a shirt
with embedded machine intelligence.
Let's say you have a,
it's like Fitbit writ large,
you have a fully sensorized piece of clothing
that's also learning about you as you're moving.
We want these systems to have persistence
in their ability to continue to learn.
You don't want them to stop being able to learn
or to capture data.
And so that's actually one of the really appealing things
about the kinds of machine intelligence we use,
the reinforcement learning and the related technologies,
things like temporal difference learning that underpin it,
is that it's computationally very inexpensive.
It's very inexpensive in terms of memory.
So we actually can get a lot for a little.
We're working on very efficient algorithms that are able to take data and not have to store all of the data they've ever seen, not have to do any processing on that data, and be able to sort of update in a rapid way without incurring a lot of computation cost.
So that's a big focus, these building systems that can actually learn in real time,
not just for 10 minutes or 10 hours, but forever.
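A minimal sketch of that "learn forever without storing the data" property: each new sample updates a couple of numbers and is then thrown away, so memory and compute per step stay constant no matter how long the system runs. The step size and the fake data stream are assumptions; real temporal-difference learners keep more state than this, but the constant-per-step flavor is the same.

```python
# Minimal sketch of learning forever without storing the data: each sample
# updates a small amount of state and is discarded, so memory and compute per
# step stay constant. Step size and the fake data stream are assumptions.

class RunningPredictor:
    """Predicts the level of a signal with a constant-memory update."""

    def __init__(self, step_size=0.1):
        self.step_size = step_size
        self.estimate = 0.0          # the only state we keep

    def update(self, observed):
        error = observed - self.estimate
        self.estimate += self.step_size * error   # move a little toward the data
        return self.estimate

if __name__ == "__main__":
    predictor = RunningPredictor()
    # In principle an endless stream; a short fake one here.
    stream = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]
    for sample in stream:
        print(f"saw {sample:.1f}, new estimate {predictor.update(sample):.2f}")
```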
That's a hard problem because maybe I don't want to make soup every Thursday.
Yeah.
So that's a really – I like that example as well because the question is maybe not
how do I build a heuristic or how do I build some kind of good rule of thumb to say when I do and don't want something.
But what other sensors – it doesn't have to be a sensor.
Think of any kind of signal.
What other signals might we need to let the machine know what we want and to let it know when something is appropriate or not appropriate?
Actually, let's go back to the – remember I mentioned we're building that go-go gadget
wrist.
I'm building a telescoping forearm prosthesis.
So you can imagine that there's two very, very similar cases that we'd want to tell
apart.
One is picking up something from a table, where you're reaching downwards and you're going to close your hand around, let's say, a cup of tea.
And the other is you're shaking hands with someone.
In one of those cases, if you're far away from the thing you're reaching, maybe it's appropriate for that arm to telescope
outwards and grab. If you're shaking hands with someone, maybe it's not appropriate because it's
going to telescope up and punch them in the groin, right? So no one wants to be punched in the groin.
So the system itself maybe has to know when it might expect that this is appropriate or not
appropriate. One of the cool ways that we're getting some leverage in this particular sense is that we're building systems to predict
when the robot might be surprised, when the robot might be wrong. So it's one thing to know when you
might be wrong or to be able to detect when you're wrong. It's another thing to be able to make a
forecast, to look into the future, just a little ways or a long ways, and actually begin to,
to make guesses about when you might be wrong in the future. So if it's like, you know, okay,
I think it's Thursday. I think I'm going to make soup. We're good. But if there's actually
other things that allow the system to begin to make other supporting predictions, like,
Hey, I actually think that this prediction about making soup is
going to be wrong, we can start to then dial the autonomy forward or backward in terms of how much
the machine tries to fill in the gaps for the person. It's a really cool, it's a very, very
sort of wild and woolly frontiers direction for some of this research. But I have a great example
where the robot arm's moving around the lab, and I actually try to shake its hand and it's surprised. And it starts to learn that,
oh wow, every time I do this, someone's going to monkey with me in ways that I've never felt
before. I have one video where I put little weights in its hand and I hang something off
its hand and then occasionally I bump it from the bottom. And it learns that in certain situations,
it's going to be wrong. It doesn't know how it's going to be wrong, but there's certain use cases, certain parts of its
daily operation where it's going to be wrong about stuff. And it can start to predict when it might
be wrong. It's very rudimentary, but it's a neat example of when we might be able to not only fill
in the gaps, but also allow the system to know when it shouldn't fill in the gaps.
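A toy sketch of that "predict when you might be wrong" idea: alongside its main predictions, the system keeps a per-situation estimate of how large its errors tend to be, and dials autonomy back where the expected error is high. The situations, step size, threshold, and error values are all invented for illustration.

```python
# Toy sketch of "predicting when you might be wrong": keep a per-situation
# estimate of typical error magnitude, and dial autonomy back where expected
# error is high. Situations, step size, threshold, and data are invented.

class SurpriseModel:
    def __init__(self, step_size=0.5):
        self.step_size = step_size
        self.expected_error = {}                 # situation -> typical |error|

    def update(self, situation, error):
        old = self.expected_error.get(situation, 0.0)
        self.expected_error[situation] = old + self.step_size * (abs(error) - old)

    def autonomy(self, situation, threshold=0.5):
        """Fill in the gaps only where we don't expect to be surprised."""
        if self.expected_error.get(situation, 0.0) > threshold:
            return "defer to the person"
        return "fill in the gaps"

if __name__ == "__main__":
    model = SurpriseModel()
    # Reaching for a cup is predictable; handshakes keep surprising the arm.
    for situation, error in [("reach", 0.1), ("handshake", 1.2),
                             ("reach", 0.2), ("handshake", 0.9)]:
        model.update(situation, error)
    for situation in ("reach", "handshake"):
        print(situation, "->", model.autonomy(situation))
```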
Are you creating anxiety in your robots?
That's a great question.
That is a really good question.
I hope it's not anxious.
I really actually worry about this now.
We do a lot of personifying our systems.
I don't know.
Is that anxiety?
I guess it is.
Maybe I always think about it when I'm giving a demo of this.
I kind of think about it.
When I'm sitting at home watching Netflix or something or having tea, I'm not expecting – like I predict I'm not going to be surprised.
When I'm walking down a dark alley in a city I've never been in before, I do predict that I might be surprised and I'm a little more cautious.
Maybe that's anxiety.
So in that case, maybe, yeah, maybe we're making anxious robots.
I'm not sure this is – I don't know, poor things.
Okay, back to smart devices and smart prosthetics. I'm going to go with prosthetics because I can say it. What are some of the reasons people give for not wanting to go in this direction? I mean, you've talked about, we've talked about cost, you've talked about battery life and lack of dependability.
Are there other reasons? Do you hear people worrying about privacy or other concerns?
Yeah. So privacy, I think maybe because of the lack of really high-performance computing and connectivity in prosthetic devices at present, the privacy argument is something I haven't heard come up very much in any of the circles, either clinical or the more in-depth research circles, that I've been associated with.
One very common thing that people want is actually cosmetic appearance.
So there's, there's multiple classes of users, much like multiple classes of users for any
technology.
You have the people that, you know, want the flashiest, newest thing with all the chrome on it and, like, the oleophobic glass, and it has to look great.
There's people who are early adopters of very cool tech.
I want it to have as many LEDs as possible.
Exactly.
Right.
You want this thing to have ground effects.
And then you have other classes where they want to do the exact opposite.
They don't want to stand out.
So we see this as well with users of assistive technologies.
This is everything from prosthetics to, you might imagine, exoskeletons to standing and walking systems to wheelchairs.
Even canes.
Even canes. Yeah, that's a really good point, actually.
Like you have some people that don't want to be seen with a cane or use a cane.
And if they have a cane, it should be inconspicuous. And there's some people that are like, no, this thing better be a darn good looking cane.
Have a skull on top and diamonds and spikes.
Diamonds in the eyes, exactly.
So I think I'd probably be in the latter category where I want a flashy-looking cane if I had a cane, or at least a very cool cane if it's not flashy.
But for prosthetics as well, we see some people that like to have the newest technology,
they deliberately roll up their pants or roll up their arms so people can see that they
have this really artistically shaped carbon fiber socket with carbon fiber arm.
It looks cool.
People get them airbrushed, like a goalie mask in hockey. They'll actually have really artistic designs airbrushed on their arms. And again, we're looking a lot in the lab at non-physiological prostheses, and by that I mean prostheses that don't look or operate like the natural biological piece.
So you can imagine having a tool belt of different prosthetic parts.
You can clip one onto your hand when you need to go in the garage and do work.
I want a tentacle.
I want to inject that right now before you go anywhere.
I want a tentacle.
I know.
And this is one of the things we really want to build for you.
No, not you because you need to not lose your hand.
But we actually talk a lot about building an octopus arm.
That's one of the most common things that we talk about.
Oh, yes.
Right? Like, yeah, why wouldn't someone want to?
Way too excited about that.
Not you, her.
Yeah, but it's a good point: there's a certain user base.
I think it's a smaller user base,
but it's one that would like to have really cool,
unconventional body parts.
Then there's a whole other class that might be willing to sacrifice function for appearance.
So, a cosmesis: a prosthesis that doesn't have any function at all, but has been artistically sculpted to look exactly like its matching biological limb. There's actually a whole class of prostheses where someone's gone in, they'll do a mold or a cast of the biological limb. They'll try to paint moles. They'll try to put hair on it. They'll try to make it look exactly like the matching biological limb or the other parts of the person's body, including skin tone and things like that. Most of those don't even move.
They're very lightweight and they just strap onto the body. And you can't tell unless you look very carefully that that
person actually has an artificial arm. You can imagine the same thing for eyes. If you're trying
to have a really nicely sculpted artificial eye, that's just a ball of glass, but it looks like
your other eye and it's almost indistinguishable from your actual eye. So there are cases where
people will choose to have something that looks very appropriate, but doesn't actually do
anything except look like a biological limb. Those are totally valid choices as well, but it depends on that person's needs, what their goals are, and what they're trying to do. So more than privacy, we do see a push towards limbs that are very cosmetically accurate.
Also, lightweight things like we talked about, battery function.
Lightweight.
Function is a huge thing.
Intuitive control, it's really unfortunate.
But for the majority of the myoelectric prostheses, the robotic prostheses, we actually do see a really large, what we call a rejection
rate, or people saying, hey, I don't want to use this anymore. And this means that what could be
a $100,000 piece of technology paid for by the health system goes in a closet, because mainly
it's hard to control. And this is actually one of the coolest areas, I think, that I'm really
excited about. Our colleagues down in the States, at the Rehab Institute of Chicago, have spun off a company called Coapt, and it's a company that's doing essentially pattern recognition. So they're using a classification system that allows people to
essentially deploy pattern recognition. That's what it's called. It's a form of machine learning
in prosthetic limb control. So now the system can, after training, so you press the button,
you train the system. It monitors some of the patterns in the arm, the muscles, the way the muscles are contracting, and it learns how to map
those to, say, hand open, close, or wrist rotation. And people are actually getting much more intuitive
control. It's much more reliable. And for instance, they might be able to control more different
kinds of bits for their arms. So you might be able to get like an elbow and a hand instead of just
having a hand. So there's some really cool ways that machine learning is actually already being used
to start reducing that control burden. But I think that's one of the biggest complaints that we see
is that this thing's hard to control and it's not reliable. And sometimes like after I sweat a bit
or after I fatigue, it just starts fritzing out. So yeah, I'm going to go back to using a simple
hook and cable system, something where there's like a little cable that opens and closes a spring-loaded
hook because it actually does what I want it to do all the time.
All the time, yeah.
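To make the pattern-recognition story concrete, here is a rough sketch of that kind of pipeline. It is my own illustration, not the commercial system: short windows of EMG are summarized with classic time-domain features, and a simple classifier maps them to intended motions. The random arrays stand in for real recorded EMG.

```python
# Rough sketch of myoelectric pattern recognition (illustration only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(window):
    """window: (samples, channels) raw EMG. Returns mean absolute value and
    waveform length per channel, two classic time-domain features."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

rng = np.random.default_rng(0)
# 'Press the button, train the system': the user holds each motion while
# labeled windows are recorded (0 = hand open, 1 = hand close, 2 = wrist rotate).
train_windows = [rng.normal(scale=0.1 * (label + 1), size=(200, 8))
                 for label in (0, 1, 2) for _ in range(20)]
train_labels = [label for label in (0, 1, 2) for _ in range(20)]

clf = LinearDiscriminantAnalysis()
clf.fit([emg_features(w) for w in train_windows], train_labels)

# At run time, each new window becomes a motion command for the prosthesis.
new_window = rng.normal(scale=0.2, size=(200, 8))   # stand-in for live data
print("intended motion class:", clf.predict([emg_features(new_window)])[0])
```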
All the time. You may have seen Cybathlon. So this is actually a good segue into Cybathlon. It was this awesome competition of assistive technologies. It was hosted in Switzerland.
It was in the Swiss Arena just outside of Zurich.
It was last October.
But this isn't the Paralympics.
No, it is not.
This is where you're trying to do as well as or better than the normal human body.
Yeah.
The stock human body.
Forget normal.
Yeah.
Cybathlon, is that what it's called?
Yep, Cybathlon, that's what it's called.
Yep. That's improving what we have to do better. I mean, there were people in the Paralympics and actually in the Olympics who had legs, and there was some controversy of whether or not it was easier to run on those.
Yeah, like the carbon recurve legs. Yes. If you don't want to turn, those things can go very, very fast.
They have better spring constants
than our legs do.
Yeah.
So it's neat.
The Cybathlon is different in that respect
in that it's actually saying,
hey, we're going to put a person
and machine together
and see how well they can do.
And they actually call the people
that are using the technologies pilots.
So you might pilot a functional electrical stimulation bike or pilot an exoskeleton or pilot a prosthesis. It's almost like the Formula One to stock car racing. But in this case, there were people using wheelchairs that would actually climb up stairs.
There were exoskeletons, there were very cool lower leg prostheses. And the person who actually won
the upper limb prosthetic competition was using a body-powered prosthesis, so a non-robotic
prosthesis. And it's because the person really tightly integrates with that machine. And there's
technical hurdles for some of the robotic prostheses. These are just not the same level
of integration. So things like the Cybathlon are a great way that we can begin to see how
different technologies stack up, but also really assess how well the person and the machine are
working together to complete some really cool tasks. And it goes beyond just how fast can you
sprint to like, hey, pick up shopping bags and then open a door and run back and forth across
an obstacle course. Your wheelchair has to be able to go like around these slanty things and
climb up stairs. It's a neat way to start thinking about the relationship between the person and the machine and start to allow
people to optimize for that relationship.
As we talk more and more, I keep thinking how a camera is probably one of the better sensors for solving this problem, because you can solve the soup mode problem: if you're in the kitchen, you might be making soup. But you can also use the camera to communicate with your robotic arm. You have a special thing you do with your wetware that you show the camera, like, I want my other hand to look like this. And the other, robot hand then makes the gripping motion. This all makes a lot more sense if you can see my hands.
It really does. We actually, one of my students built a very cool new 3D printed hand. We'll
actually be open sourcing it hopefully sometime in the coming year. We're building a new version
of it. But in addition to having sensors,
again, I'm all over sensors.
We have sensors in every knuckle
of the robot hand
so it knows where its own digits are.
It's also got a camera in the palm.
What kind of sensors do you have?
They're little potentiometers.
They're really simple sensors.
Nothing fancy.
We've got some force sensors
in the fingertips.
We're adding sensors every day.
So we're putting things on.
But cameras in the palm and maybe the knuckles are, as you pointed out, really natural.
And either to show things or even just as simple as,
hey, I'm moving towards something that's bluish.
Like, let's not even talk about fancy.
A lot of people love doing computer vision.
You're like, oh, hey, let's find the outlines of things
and compute distances.
Really, it's even simpler than like,
hey, what's the distribution
of pixels? What kind of colors are we looking at here? Is it soupish? Is it can of soda-ish?
Is it doorknob-ish? There's patterns that we can extract even from the raw data that you're right,
cameras are great. Mount cameras everywhere. They're getting cheaper and cheaper. So put
them on someone's hat when they're wearing their prosthesis. Now the prosthesis knows if they're
out for a walk or they're in the house. There's a lot of things we can do
that will start linking up to the cell phone
so that you maybe do either using the camera
or even just the accelerometer
so we know if they're walking or sitting down.
It's very easy to start thinking about
sensors we already have.
And the camera, as you pointed out,
is like a really natural one,
especially if we don't do
the fancy-dancy computer vision stuff with it,
but we just treat it as a, hey, there's lots of pixels here. Each pixel is a sensor. Each pixel gives us some extra information about the relationship
between the robot and the person and the environment around them. So that's a great
point. Yeah. Right on target there. If you've ever tried to tie your shoes without looking,
you do use your eyes to do a lot of these things. It's pretty impressive.
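As a flavor of the "just look at the distribution of pixels" idea, here is a tiny sketch, assuming only NumPy and an RGB frame from whatever camera is handy. It turns a palm-camera image into a handful of coarse color-histogram values that can be appended to the robot's other sensor readings; "soupish" and "doorknob-ish" scenes land in different regions of that feature space.

```python
# Toy sketch: summarize a camera frame as a small color-distribution feature.
import numpy as np

def color_summary(frame_rgb, bins=4):
    """frame_rgb: (H, W, 3) uint8 image from any camera. Returns a coarse
    joint R/G/B histogram, normalized to sum to one."""
    pixels = frame_rgb.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / max(hist.sum(), 1)

# Stand-in for a palm-camera frame; in practice this comes from the camera driver.
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
features = color_summary(frame)   # 64 extra "sensor" values for the learner
```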
Yeah. And I mean, when you're connected up to your meat, when you have a full arm and all of our biological parts are connected, we have this nice relationship. We have feedback loops, we have information flowing. When we have a disconnect, when we suddenly introduce a gigantic bottleneck between one part of the body and another part of the body, and here I mean a robotic prosthesis and the biological part of the body, the density of connection goes down. So feedback is diminished, and so is the signal going the other direction. So you can think about ways to make the best of that choke point by saying,
hey, well, we've got cameras on the biological side, we call them eyes. Well, let's put a camera
or two on the robotic side. Let's put other kinds of sensors there that are
like eyes. And hey, maybe now the two systems are on the same page. We get around that choke point
by making sure that the context is the same for both, that both systems are perceiving
the same world, maybe not in the same ways. In fact, absolutely not in the same ways.
But it's interesting to think that we can take both parts of a team, a human-machine team, a human-human team, a machine-machine team, and make those partners able to perceive the same kind of world in their own special ways.
And then when they use that limited channel, when they use the few bits they can pass over
that choke point, they can use them most efficiently to communicate high-level information or communicate not just the raw material,
but actually communicate high-level thoughts, commands, information.
The machine can say, hey, you know what?
You're reaching for a stove, and I've got heat sensors.
I've got range-finding heat sensors,
and I can say it's going to be really, really hot.
Communicate, oh, it's going to be hot across that limited channel
instead of all of the information that it's perceiving.
I think it's a good way to start managing choke points and
more efficiently using the bandwidth that we have available in these partnerships.
Yes. I have so many more questions and we're starting to run out of time and I'm looking at
all of my questions trying to figure out what I most want to ask you about. But I think
the most important thing is I can't be the only one saying, oh my God, I want to try it. I want to try it. How do people get onto this path of robotics and intelligence? What do they need to know as prerequisite? And then how do they get from a generic embedded systems background with some signal processing to where you are?
So I think that when we're moving forward with trying to implement things, the barriers are actually more significant in our heads than they are in actual practice. So in terms of getting up and running with, let's say, a reinforcement learning robot, like you want to build a robot where you could give it reward with a button and it could learn to do something, it seems like that's this gigantic hurdle. I think it's probably not. So in terms of just going from no experience with machine learning to, hey, I've got a robot and I'm teaching it stuff, my usual first step is what I like to say: get to know your data. So usually
when people come to me and say, hey, I want to start doing machine learning, any kind, supervised
learning, learning from labeled examples, reinforcement learning, like I want to start
doing machine learning. What should I start with? The thing I usually suggest is, you know what,
like, don't actually try to install all those packages. Don't like try to figure out which
Python packages or which fancy MATLAB toolboxes you want to install.
I usually point them in the direction of something like Weka.
It's the data mining toolkit from New Zealand.
It's a free open source Java toolkit.
It has almost every major supervised learning, machine learning method that you might want to play with.
And I usually say, you know what?
Pick a system that has some data and get to know your data.
So use this data mining toolkit and take your data out for dinner. Get to know what it does, what it likes, and get
to really understand the information and the way those different, like the many different machine
learning techniques actually work on that data. And it's as simple as just pushing buttons. You
don't have to worry too much about getting into the depth or actually writing the implementation code. You can just play with it. Once you get to know a
little bit about how machine learning works, either you say, hey, this technique is perfect
for me, then you can go and deploy it and use the right package from one of your favorite languages.
But you can also then start to move into other more complex things. OpenAI Gym is another really
great resource. OpenAI Gym is a new platform where
you can try out things like reinforcement learning as well. My students have been using it and it's
really pretty functional, with a quick ramp-up cycle. So people
can get very familiar with, again, with the machine learning methods without having to spend
a Herculean amount of effort implementing the actual details. That's, I think,
the part that will scare people off. But in terms of going straight to a robot: I'm actually teaching an applied reinforcement learning course at the university right now. It's the first
time we're teaching the course. Part of the Alberta Machine Intelligence Institute, we're trying to
ramp up some of the reinforcement learning course offerings. And what's really cool about this is
that the students come on the first day of class, they get a pile of robot actuators, like two robot bits. In this
case, they're Robotis Dynamixel servos. They're really nice, pretty robust hobby-style servos that also have sensing in them. So they have microcontrollers in the servos, the servos can talk back and say how much load they're experiencing and where their positions are, you talk to them over a USB port, and right away you can just start controlling those robots. So the robot bit is
really simple. One Python script that you can download from the internet and you're talking
to your robot, you're telling it to do stuff and it's reading stuff back. And then the really cool
bit is that if you want to start doing reinforcement learning, you want to implement that, it's
actually only about five lines of code and you don't need any libraries. So you could just write
a couple of lines of Python code and you could actually have that already learning
to predict a few things about the world around it. It could learn that it's moving in a certain way.
You could even start rewarding it for moving in certain ways. So the barriers are actually pretty
small. So again, in terms of a pipeline, first, don't try to implement everything right away.
If you want to do some machine learning, go out and try some of the really nicely abstracted machine learning toolkits
out there like Weka or maybe the OpenAI Gym if you want to get a bit more detailed.
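For instance, a first OpenAI Gym script can be as small as the sketch below: a random agent on CartPole, just to watch observations, rewards, and episode resets flow by. This uses the classic Gym interface; the exact reset and step return values have shifted in newer Gym and Gymnasium releases.

```python
# Tiny example: a random agent on CartPole, classic Gym interface.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total = 0.0
for step in range(1000):
    action = env.action_space.sample()          # no learning yet, just looking
    obs, reward, done, info = env.step(action)
    total += reward
    if done:
        print(f"episode finished at step {step}, return {total}")
        obs, total = env.reset(), 0.0
env.close()
```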
And then after that, go right for the robots. The robots now are very accessible and it's not
a hard thing to do. And again, if you want those five lines of code, send me an email,
I'll send them to you. I do.
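Until that email arrives, here is a rough guess at the general shape of such a learner, my own sketch rather than Patrick's actual file: a TD(0) update that predicts discounted future load from a coarse-coded servo position. The read_servo function below is a stand-in stub; on real hardware it would query the Dynamixel over its USB link.

```python
# Sketch only: TD(0) prediction of a servo signal from simple features.
import numpy as np

def read_servo():
    """Stand-in for real servo I/O: returns (position in [0, 1), load)."""
    t = read_servo.t = getattr(read_servo, "t", 0.0) + 0.01
    return (np.sin(t) * 0.5 + 0.5) % 1.0, abs(np.cos(t))

def features(position, n=8):
    x = np.zeros(n)
    x[int(position * n) % n] = 1.0      # coarse-code position into n bins
    return x

w, gamma, alpha = np.zeros(8), 0.9, 0.1  # weights, discount, step size
x = features(read_servo()[0])
for _ in range(10000):
    position, load = read_servo()
    x_next = features(position)
    # The core learning update -- roughly the "five lines":
    delta = load + gamma * (w @ x_next) - (w @ x)   # TD error
    w += alpha * delta * x                          # learn the prediction
    x = x_next
print("learned predictions of discounted future load per bin:", w)
```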
I may request those for the show notes just because that's pretty cool.
Yeah, wow.
Okay, well, excuse me.
I need to go buy some robot parts.
And they're not even that expensive anymore.
The world is getting so exciting.
Isn't it?
How are we going to learn to trust our robotic overlords?
They'll reprogram us.
They'll reprogram us. That's great. I was like, oh no, no,
they'll have our best interest in mind. I think it'll be fine.
Every time, every time I'm asked about this, I'm like, oh, you know,
I think maybe that's like a, that's probably one of my closing thoughts.
I know you're going to ask me for closing thoughts.
And one of them is like, don't panic. It'll be cool.
And the reason I say that is, you know, I have a puppy. I treat our puppy really, really well. I don't mistreat our puppy. I take him out for lots of walks. I give him treats. We just bought him a new couch so he can sleep. I have a hope that when someday there's a superintelligence much smarter than us, that it'll buy me a couch and take me out for walks and give me treats and buy me Netflix subscriptions. So I think probably that's my high-level picture: don't panic, I think it's actually going to turn out okay. I think with superintelligent systems, with the increasing intelligence will come increasing respect and increasing compassion. So I'm actually not worried. I think Douglas Adams had it right with the big friendly letters.
Don't panic.
And now I'm like, well, what about the dog and cat photos?
I mean, are we just going to be,
are they going to take pictures of us and say, Oh, that's so cute.
Show it to the other super intelligences in the cloud.
They're like, look at what my human did today.
My human was trying to do linear algebra.
Oh man, my human tried to solder wires together.
It was so cute.
Oh, it's just so quaint.
Yeah, exactly.
Who knows?
Maybe it will be like that.
I hope they're supportive and they buy us nice toys
when we're trying to do our linear algebra
and solder our wires.
Stuart doesn't look convinced.
I'm not sure I appreciate that future.
What?
Do you have any more questions or should we kind of close it on that?
We should probably close it on that.
I don't think I can.
All right.
Patrick, do you want to go with that as your final thought?
Or do you want to know?
I will go with that.
My final thought is don't panic.
It's all going to work out.
Thank you so much for being with us.
This has been great.
Hey, thank you. It's been awesome. It's been a great conversation.
Our guest has been Patrick Pilarski, Canada Research Chair in Machine Intelligence for Rehabilitation at the University of Alberta, Assistant Professor in the Division of Physical Medicine and Rehabilitation, and a Principal Investigator with both the Alberta Machine Intelligence Institute, Amii, and the Reinforcement Learning and Artificial Intelligence Laboratory, or laboratory, depending on how you say it.
Thank you to Christopher for producing and co-hosting.
Thank you for listening and for considering giving us a review on iTunes so long as you really like the show and
only give us five-star reviews. But we really could use some more reviews. What are we, Uber?
Go to embedded.fm if you'd like to read our blog, contact us, and or subscribe to the YouTube
channel. And now a final thought from you, the final thought for you from Douglas Adams.
No, we're just going to sit here in silence waiting for their final thought to come in.
Might work.
Send us your final thoughts.
That sounds morbid.
From Douglas Adams, don't panic.
All right.
That's all I got.
I apparently didn't finish the outline.
Normally there would be a robotic quote in here, but I think we're going to... Don't panic. All right.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance and listeners like you.