Embedded - 490: Wait Until Physics Has Happened

Episode Date: November 28, 2024

Nikolaus Correll spoke with us about robots, teaching robotics, and writing books about robots. Nikolaus is a Professor of Computer Science at the University of Colorado; see his lab website (or his... Wikipedia page). We discussed Nikolaus' Introduction to Robotics with Webots Specialization Coursera course (or YouTube playlist). These go along with his book Introduction to Autonomous Robots (which can be compiled from source from GitHub). We also talked about the online Masters of Computer Science programs from the University of Colorado and Georgia Tech. While the Arcbotics Sparki is no longer in production, Nikolaus also mentioned the Amazon Racer.

Nordic Semiconductor has been the driving force for Bluetooth Low Energy MCUs and wireless SoCs since the early 2010s, and they offer solutions for low-power Wi-Fi and global Cellular IoT as well. If you plan on developing robust and battery-operated applications, check out their hardware, software, tools, and services. On academy.nordicsemi.com, you'll find Bluetooth, Wi-Fi, and cellular IoT courses, and the Nordic DevZone community covers technical questions: devzone.nordicsemi.com. Oh, and don't forget to enter Nordic Semiconductor's giveaway contest! Just fill out the entrance form, and you're in the running. Good luck!

Transcript
Starting point is 00:00:00 Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Professor Nikolaus Correll, and we are going to talk about robots, which of course is one of my favorite subjects. Hi, Nikolaus. Thanks for joining us. Hi, Elecia. Hi, Chris. Nice to meet you. Thanks for having me on the show. Could you tell us about yourself as if we met at, I don't know, a new faculty orientation dinner? Yeah, I have been a professor in computer science for 15 years now at the University of Colorado. And I do research in robotics. And most recently,
Starting point is 00:00:45 I started looking into generative AI models and humanoids, and applying those to robotic manipulation. And of course, that's where I have all the questions. But first, we want to do lightning round, which is short answers. And we want short, no, it's short questions. And we want short answers. And if we're behaving ourselves, we won't ask for a lot more detail, like, can I be your grad student? Are you ready? Yep. Thank you. You're really going to make me ask this question?
Starting point is 00:01:15 This is listed in the thing. I didn't come up with this. Which is better, Caltech or MIT? I think MIT is probably better. More people. I'm glad I didn't put Harvey Mudd in there, because he would have just said there's no people there. Yeah, that's a mean question. It is a mean question. It has a history. Other than Robotics 101, what is the single most important college class when learning robotics? Discrete mathematics, probably from a computer science perspective.
Starting point is 00:01:57 If I was looking for a robotics grad program, what would be some things, at the various schools, that I should look for? That's a good question, because we just started a robotics grad program. And of course, it's the faculty and its critical mass, its relationships with industry, and, I think, mostly cool research going on at the university that you're interested in. All right, well, I have so many robotics questions that I'm actually going to cut this short. Well, you have to do favorite fictional robot. Okay, right, right. What is your favorite fictional robot? It's the T-1000. Sorry, the T-800. Because in the nice version with the upgrade, you know, the Arnold Schwarzenegger.
Starting point is 00:02:38 And he comes back, Terminator 2. I'll be back. Okay, so you have a book called Introduction to Autonomous Robots. Could you tell us a little bit about it? Who should read it? I started it maybe 15 years ago as lecture notes, an open source book. And then a few years back, as our robotics faculty grew, my colleagues joined the effort and we then published it with a real publisher. And the goal is still to really provide everything computer science people need to know. Of course, other people who want to build robots are equally addressed, but the approach is very much for people who come from programming or want to program the robot, versus building robots from scratch. I know this is a silly question, but can you define what a robot is?
Starting point is 00:03:51 Because I think, well, lay people think C-3PO. Engineers think anything with a servo motor and code running. And it is probably a mix of things. But what do you think? What is your definition of what a robot is? So you definitely know one when you see one. I'm giving an equally mean answer that has a lot of depth to it. But it's a difficult problem. In class, we start getting at that by showing people different things, like an elevator, you know, a coffee maker,
Starting point is 00:04:28 an autonomous car, and also C-3PO-like things. And then we ask them, okay, what is a robot? What is not? And it's very difficult to define that. And so it has to have a confluence of sensing and actuation. So it has to move. And that has to be wired in some kind of intelligent, non-trivial manner. But then when you look at very simple robots where the sensing is directly wired to the actuation, I think they're also robots already, like insect-like things. And it's probably almost as difficult as asking what is life. So where does life start? And what do you really need? What are the ingredients? Is it a certain complexity where that mechanism then becomes alive? So sorry for digressing,
Starting point is 00:05:18 but the robot question is equally difficult to answer. I like, going back to your book, I like that it's for computer science folks. One of the things that I always have trouble with is the mechanicals of it. I mean, in your course on Coursera, which is connected to the book, is it connected to the book?
Starting point is 00:05:44 Yeah, the Coursera course follows the book. You use the Webots simulator, which is sort of ROS 2 and Gazebo made much, much, much easier. What made you decide to use Webots? It's a scaling issue, to teach the class to multiple people, or to many people at once. And we always used the simulator. We always used Webots. As a simulator, we never used Gazebo or anything else, but sometimes I tried to teach the
Starting point is 00:06:21 class using Sparki robots. And then every student could have a Sparki. So you can have like maybe 60 students with 30 Sparkis or something like this. And then they would share. But I also realized that it's becoming very much an embedded systems class at this point. And when you actually use ROS, which we have also done with the Baxter robot, it becomes really a software engineering class. And the people who do not have those skills, or embedded Linux skills, they basically fall through the cracks. And so with Webots, you can really focus on the discrete algorithms that drive the robot in the simulator, a few lines of Python.
Starting point is 00:07:02 And I like that about it, but defining the robot has been really difficult for me. I want to do a robot that is very different from what you have in your class, or is defined in Webots otherwise, but the mechanicals part is just sort of beyond me. How do I learn that? You mean like putting together the mechanics in the simulator, like starting from a wheel and adding 3D objects to create your own robot? Yes, I can follow the instructions, but I can't do it myself. You could. I think I know what you mean.
Starting point is 00:07:40 So I let the students, in the first, you know, exercise lab session, follow the Webots tutorial to build some simple robots that drive. And then later, I want them to learn how the simulator works, the, you know, scene tree and things like this, so that they understand that there's a tree of components, because later they have to change the robots that are given to them. And, I mean, I think it's just a matter of expectations. If you can put a humanoid robot together in like 15 hours, then it's maybe 10,000 times faster than doing it for real. But you may be expecting to do it in 10 minutes, so maybe that's the problem. And then, of course, the physics are finicky. So I have this one robot, I
Starting point is 00:08:33 don't know if you've seen it, the one where there's a custom side-spinning wheel that makes the robot turn once it goes off a cliff. And, you know, setting up the physics right, so that this actually works with the real speeds and the friction coefficients, is quite difficult. So it requires a lot of tinkering in order to understand the intrinsic problems of the physics simulator. That's, I think, Bullet that they use inside. So that also might create problems if you do that for the first time. So you're telling me that it's hard. Well, that's irritating. Do you need to understand the physics to be able to, like, I would imagine that these robots get so complex that you're not going to sit down with a sheet of
Starting point is 00:09:23 paper and solve, analytically, you know, the equations of motion of the system. Or is that something that people should be considering with maybe simpler robots before going into the model? Well, I think you need the equations of motion once you want to solve the inverse kinematics problem, or even the forward kinematics. So you want to know where things are, like when the wheels turn. But generally, you don't have to build your own robot in Webots anyways. You just use one of the many that it comes with. And so I never had to do that, really. So maybe in the class, it's a distraction in the beginning,
Starting point is 00:10:08 but on the other hand, I want people to understand how the simulator works. So you don't have to be able to build your own. It's really just getting people to the point that they see, I mean, when they follow my curriculum in Coursera or something, I guess that's what, Elecia? Yes. What you refer to. Yeah.
Starting point is 00:10:26 Then I really want them to understand how these components work, and that there are discrete elements, like motors and sensors, that you build this up with. And I think it's good feedback, actually, that this is maybe an annoying hurdle. Maybe one should do easier things in the beginning, and once people like the class, or like the content, then let them play with the simulator, or learn the simulator, you know, intrinsics. Oh, it's not really a comment on the class. The class is paced well. It was just that I wanted, in the class, to be able to do what I wanted to do. But it's a Webots hurdle.
Starting point is 00:11:08 It's not really a class hurdle. It's a Webots tutorial that I probably should figure out and then write. Yeah, I mean, any hurdle is a hurdle in the class. So I think it's good feedback. I appreciate it. I will use it. So, back to... you mentioned the model. You were talking about this wheel that is perpendicular to the normal. Okay, so there's a robot.
Starting point is 00:11:34 And you have a really good description of this. And the robot, behaviorally, it is just a little tiny wind-up sort of toy robot, and it goes to the edge, and then it turns. It goes to the edge of the table it's on, and then it turns. So it avoids falling off the edge. It avoids falling off the edge. Okay. But it is done entirely through sensing and actuation. There's no intelligence to it.
Starting point is 00:12:01 Because when it is going forward and it starts to lean forward, it engages a different wheel that turns it to the side. Okay. So it's just a little more sophisticated than those 1980s dog robots outside Spencer Gifts that went around. As sophisticated as the 1950s wind-up robots. Okay. All right. That seems like the simplest version of a robot possible. And according to your definition, it is a robot. Yeah, that's why I show it to the students and ask them, do you think it's a robot?
Starting point is 00:12:34 And I also ask them, how would you build it? I mean, if you had to build it from scratch, using your imagination, what kind of components would you envision? What kind of logic do you need? How could you go about it? Would you need a computer? And I think that is already where the definition becomes really murky, because it's really just mechanistic. But then if you add electronics, why is electronics any different than a mechanism? It's also just, you know, photons or electrons doing physical stuff.
Starting point is 00:13:12 There's no algorithm there. But it's really a conversation starter, also in the book. But it also has another component to it, which I think you want to get to also: it shows how important the overall systems design is. Even if you have a computer in there, the geometry of the mechanism is super important for this to work. And this will stick with the robotics curriculum throughout the book, or the class, or your roboticist career.
Starting point is 00:13:46 Exactly. And as I said, it is one of those areas that I find personally difficult. How do people learn that part? I think seeing such a robot, and being primed for this, is very important. And then seeing examples throughout class, or in commercial products like the Roomba and things like that. But it's very, very difficult to design, and to learn that as a design skill. I haven't figured out how to do that, or how to teach that, but I'm also not very gifted at that part. Like, I think there's some... if you study mechanical engineering,
Starting point is 00:14:32 then you will be exposed to many, many, many complex mechanisms. And it's, you know, kind of a school of thought for that design process. But yeah, animals are, of course, another big source of inspiration in robotics, historically, where people look at how it's done there. What are the concepts that most people have trouble with when learning robotics? So the computer science students often haven't learned what a finite state machine is, for some reason, because that's something that is taught in computer architecture.
Starting point is 00:15:09 And they write code that runs from the top to the bottom, but they don't have a loop that goes into different states and then has state transitions. And so that is one part. But then the other part is that the code doesn't run. The code just triggers what the robot does, and then physics takes its course. And so while physics evolves, you have to take a backseat and wait until physics has happened. So you, for instance,
Starting point is 00:15:50 time your loop or something, and then the robot has moved a little bit, and then you can sense again. And wrapping the head around that is usually new to people, even if they programmed a lot before. There's a lot that goes into that too, because there's sampling rates, there's how fast your sensor updates and how fast you can read your sensor, which might be separate things. There's a lot that's just part of your system that's related to physics, but on top of physics. I feel like being an embedded systems engineer, I'm used to working with the world. Right. But yes, I can see how that might be odd.
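Everything Nikolaus describes here — a loop that senses, runs the state transitions, triggers the actuators, and then yields until the world has actually moved — fits in a few lines of Python. The sketch below is illustrative only: `ToyWorld` is an invented stand-in for the physics engine, and the edge positions and speeds are made up. In an actual Webots controller, the "wait until physics has happened" step would be the call to `robot.step(timestep)` rather than `world.step()`.

```python
# Sketch of the sense-act pattern described above, applied to the
# cliff-avoiding table robot discussed earlier. ToyWorld is an invented
# stand-in for the physics engine, not a real simulator API.

TIMESTEP = 0.032  # seconds of physics per control step


class ToyWorld:
    """A 1-D tabletop with edges near x = 0.1 and x = 0.9."""

    def __init__(self):
        self.x = 0.5        # robot position on the table
        self.direction = 1  # +1 or -1

    def cliff_sensor(self):
        return self.x > 0.9 or self.x < 0.1

    def step(self, speed):
        # Physics "happens" here, outside the controller's own code.
        self.x += self.direction * speed * TIMESTEP


def run_controller(world, steps, speed=0.5):
    state = "DRIVE"
    for _ in range(steps):
        near_edge = world.cliff_sensor()    # 1. sense
        if state == "DRIVE" and near_edge:  # 2. state transitions
            state = "TURN"
            world.direction *= -1           # 3. act: trigger the turn...
        elif state == "TURN" and not near_edge:
            state = "DRIVE"
        world.step(speed)                   # 4. ...and wait for physics
    return world.x


x = run_controller(ToyWorld(), steps=1000)
assert 0.0 < x < 1.0  # shuttles back and forth, never falls off the table
```

The key point is step 4: the controller never moves the robot itself. It only sets commands, takes a backseat while the world steps forward, and then senses again on the next pass through the loop.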
Starting point is 00:16:22 So I have another question. What is the most fun part to teach? You mean the most fun part for me, or the most fun part for the students? Well, now I have two questions. Okay. So, I like them getting excited about the robot moving, and the opportunities they get once they realize that they can very easily create illustrations and visualizations of something that they have complete control over. Like, you know, the Tiago robot going to the kitchen and picking up items, something they probably didn't expect could be accomplished by the end of a one-semester class. And so that happens throughout the class, when they engage with the robots and see what
Starting point is 00:17:11 happens there. I guess that's also the most fun for them. And I was a little reluctant, because most of the students actually expect to work with real robots, and then are disappointed if they cannot. And I think it's also difficult to appreciate what the simulator does for you if you haven't had that embedded systems robotics experience. So they really wanted things to catch on fire. No, not only that. It's like, okay, we're six weeks in and you've managed to install your compiler correctly. Oh, it doesn't work?
Starting point is 00:17:52 That's just a jiggly wire. Yeah. Now I understand why you guys are laughing. Because that's exactly right. That's the embedded systems background that you're bringing. And they have no idea. I mean, I tell them, look how easy this is. Normally you have to take a ruler and put it on the table and follow the
Starting point is 00:18:10 robot, and reset it, and carry it through the room. And here you have to do none of this. Not to mention all these plugs becoming loose somewhere inside, or things heating up weirdly and then the behavior changing throughout. So yeah, I always tell them how lucky they are, but it's difficult to communicate that. Just make them do the IMU calibration procedure, where they have to do the hard and soft iron calibration, and then all of the accelerometers. Do that with the 40-pound robot. It gets really
Starting point is 00:18:41 heavy. I have to do that with a 40-pound drone. It's horrible. It is really weird for me to think about robotics being separate from embedded systems. Maybe that's just my background. Is that a common separation in academia? Yeah, I think so. Because, I mean, historically, robotics, or robotics textbooks, are about dynamical systems, like the kinematics and then the dynamics of these linked chains of actuators. And I don't teach dynamics, but only the kinematics. And then all this new stuff, I would call it, which is the planning aspects. So there are all these algorithms, like Dijkstra's from networking, where before we didn't have path planning, or a lot of probability theory that came in.
Starting point is 00:19:35 And so the question is, how can you pack that all into one class if you have to deal with the limitations of the hardware? So, for example, when we used Sparki, which is an Arduino-based robot, they had to implement Dijkstra's in C on a 4x4 grid. And so they don't really see how the path is planned, or that the path planning is difficult. So now they can have a real map that is collected by a laser scanner, and
Starting point is 00:20:05 plan a path on that. So you can push it much further if you disconnect it, because otherwise, the Tiago robot, I don't know how many computers are in there, and how many sensors, and the power routing. It's quite a complex system, which might take a few weeks of just learning how to use it properly. And then I can't teach them all this stuff and have the robot go to the room and pick up something from the kitchen sink or the kitchen counter. Is it realistic to let the students think that this easy part of robotics can happen without a real robot? Yeah, it's a good question.
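The Sparki exercise mentioned above — Dijkstra's algorithm on a 4x4 grid — was done in C on the Arduino-based robot. As a rough sketch of what that exercise computes (the obstacle layout, function name, and coordinates here are invented for illustration), in Python:

```python
import heapq


def dijkstra_grid(blocked, start, goal, size=4):
    """Shortest path on a size x size grid, 4-connected, unit edge costs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale priority-queue entry
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if all(0 <= c < size for c in nxt) and nxt not in blocked:
                nd = d + 1.0
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    prev[nxt] = cell
                    heapq.heappush(pq, (nd, nxt))
    # Walk backwards from the goal to recover the path.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]


# Two obstacle cells the planner must route around on a 4x4 grid.
path = dijkstra_grid(blocked={(1, 1), (2, 1)}, start=(0, 0), goal=(3, 3))
print(len(path) - 1)  # 6 moves from corner to corner
```

On a map built from real laser-scanner data, the only change is that `blocked` and `size` come from the occupancy grid instead of being hard-coded, which is exactly the jump the simulator makes painless.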
Starting point is 00:20:51 I wouldn't call it easy. I know, I thought that was wrong. I think it's really, so at CU, another big deal for them is to program it in Python. And before that, they learned only C++ in our curriculum. So in the third year, then some of them know Python, but it's the first time they really have to go through this. So there are so many things that they don't know and that they have to learn and that this class is using to transform them into computer scientists
Starting point is 00:21:21 that we basically pick our battles. And I would say, a lot of these things that you experienced the hard way, and that they should experience if they want to go on to robotics, is not for all of the computer science graduates. I don't know how many percent, but I think a large percentage of our graduates take the robotics class, because it's part of the core curriculum. I don't think these are things they have to know. And so it's really about finding a way to get those people, these computer science people, into the robotics field, and then keeping them there, and letting them take the next class, which is advanced robotics. And that uses the Amazon Racer, which is one of those,
Starting point is 00:22:07 you know, remote-control cars, which you can then make autonomous. And so there they start to get exposed to much more embedded systems stuff. They have to log into that Linux box, they have to use ROS, Ubuntu, the command line. And then, you know, things like power, and cables coming loose, become an issue. I don't think they really add sensors on their own, but then they can also all work in the robotics labs and become real roboticists. So it's really just one direction to come in, which is maybe very orthogonal to coming in from a VEX Robotics perspective, or from an Arduino-based perspective. It's interesting how matrix math really is extremely useful in lots of things, but specifically in robotics, it seems like a good course path. Yeah, and I already regretted, when I said discrete mathematics, the other class: linear algebra. Yeah.
Starting point is 00:23:18 And maybe that is more important. I'm not sure which one it is. It depends on what you want to do. I think you can probably get away without the matrix math if you use high-level AI planning,
Starting point is 00:23:35 but then everything else, you will need to live with it, especially computer graphics, computer vision and things like that, or even machine learning. So maybe we should correct that answer. And so here we do. Do you remember that episode of The Big Bang Theory when they suggested adding Bluetooth to a flower barrette to attract a male audience? Penny asked Sheldon, wait a minute, you want to make a hair barrette with Bluetooth?
Starting point is 00:24:12 To which Sheldon replied, Penny, everything is better with Bluetooth. The right answer was actually, Penny, everything is better with Nordic Semiconductor's Bluetooth, because with a 40% share in the Bluetooth Low Energy market, chances are you have at least a handful of Nordic devices in your home at the minimum. But it's not just about Bluetooth, because over the years, Nordic has become a market leader in IoT connectivity, providing hardware, software, tools, and services to create the IoT devices of the future across a wide technology portfolio that contains not only Bluetooth Low Energy, but also LE Audio, Bluetooth Mesh, Thread, Matter, Cellular IoT, Wi-Fi, and more. And to thank you for listening, Nordic Semiconductor has a contest to give away some parts. Fill out the entrance form and you're in the running. That link is in the newsletter and in the show notes.
Starting point is 00:25:10 You're okay. Sorry. You have the book, you have a Coursera class, and the Coursera class videos also seem to be on YouTube. Are they the same? The same videos, yeah. I uploaded everything to YouTube for broadening participation. Which one should I take? Why would I take the Coursera one over YouTube? I guess they're both free? Yeah, so Coursera is marketed as free. But then, I think, what happens is, if you want the credit, they charge you.
Starting point is 00:25:46 And you have to be a premium subscriber. But I think also Coursera changes things all the time, so I'm not really sure what the status quo is. I think it's a free class if you don't want to download the certificate. And I think there are certain quizzes that are not available in the free version, which do not add much in terms of content. It's just an additional assessment. And so I think the Coursera class is better, because it is actually,
Starting point is 00:26:23 when I made it... I made it without videos. Sorry, I take it back: I made it without lectures. So it had all these mini videos that show examples and robots, but then it has a lot of quizzes, which I call guided exercises. And the idea is that you are told, or asked, to do something in Webots, and then I would ask a very simple question that is easy to answer if you have the robot in front of you and you can just press play and see what it does, but very difficult to answer if you don't. So the idea is that you really implement everything along as you go in Coursera. And on YouTube, these lectures just provide some content, but do not provide that self-guided activity.
Starting point is 00:27:15 But then I have to sign up for Coursera. Okay. And then the book, which is for sale on all the normal book sites, I can also compile in LaTeX. Yes, that's right. It's on GitHub. And the license is that we can make the source code available. And so people have to make their own PDF. Yeah, that was harder than it seemed, but that's okay.
Starting point is 00:27:49 So basically there are free versions of the whole class. How are you going to make any money from that? Oh, I have a job. So, I mean, that is not the primary driver. Actually, there are royalties for the book if people buy it, and there are royalties for the Coursera class if people from outside of CU take it. And I think there's some key for how that is split between the university and the people who created the content. I don't think any of these money sources is really scalable. I feel like it is more part of the profession to see where things are going
Starting point is 00:28:36 and, you know, actively contributing to how the university is changing. And that's why, for instance, we have engaged in this master's program on Coursera. And I don't know if you know that, but if you pass three of the classes in the master's program, you get enrolled in the master's program, even if you don't have a bachelor's. So you can now have just a high school degree and enroll in a computer science master's program. And I think it's actually indistinguishable
Starting point is 00:29:12 from the on-campus experience. And of course, a lot of people ask questions and wonder, including our own faculty, whether that's the right thing to do because there's also some kind of gatekeeping idea. And I think what happens is now education is freely available and ubiquitous, and we have to find new ways to make the university a platform rather than a broker of knowledge, an exclusive broker.
Starting point is 00:29:48 And funny things happened. I think somebody took this master's degree and then started working at CU in a research lab for money during the summer. And so it creates completely new avenues for people to do their career and engage with CU. People who would never have come to CU as a student are now getting a degree there and interacting with people in various ways and then moving on with their careers. So I think that's part of the job to explore. And we have awards. So I got an award for the book and I got an award for the open source activity. So the university libraries, or the engineering school, are pushing that, rewarding professors for being proactive and developing materials instead of just using other people's work to educate. I'm unfamiliar with the University of Colorado's master's program. I'm familiar with the Georgia Tech computer science master's program that's all online and reasonably priced compared to doing it in person. So what you're saying is that you have an entirely online master's
Starting point is 00:31:03 degree in computer science? Yeah, I think it's actually the same as the Georgia Tech one, except we have this entry point that is different, which is mastery-based access. But we also came after Georgia Tech. I think the Georgia Tech one is very well regarded in industry. And so we are trying to, you know, compete in this space, I guess. Can you imagine going back?
Starting point is 00:31:33 I mean, I think we are about the same age, given when you graduated and all of that. Can you imagine if this was possible, and you could live anywhere and get a master's degree and start your career that way? It just seems so odd to me. I think it's very odd. And I think five years ago, you couldn't have done that. Technically, it wasn't possible. Or, I mean, we always had remote universities, by the way, in Germany, where you could get a remote education, and offerings like this always existed. But at that scale, that you can learn so many things by yourself, by the way, without getting the
Starting point is 00:32:20 degree, I think that is really the novelty: that you can build, or, you know, set your goals to anything, and learn it from the internet, using YouTube videos, and ChatGPT in particular. I think that is really the novelty. And that's where we have to then say, and by the way, we also sell degrees. Do you really need one? And think about what that means. Do you really need one? I do think that applies to certain kinds of people, though. Because, like, you're asking, Elecia, can you imagine doing that when we were in school? No, because I would never have finished. I would have goofed off and looked at videos. I am not temperamentally set up for that. I need to have people in my face, in person, or some sort of accountability.
Starting point is 00:33:16 I don't think I could do a thing like that without just failing out of it. Yeah, I have the greatest respect for the people who actually do the master's degree. And I often write them letters. And I think they're just so much more capable than... I mean, I shouldn't put it this way, but they are extremely heavily self-motivated. That's not what the on-campus students often are, because they behave as you described. There's a funny other aspect.
Starting point is 00:33:50 I mean, if you really want a job and you want the money, then you'll probably do it. And I had a student come to me recently and he asked me if he should stay on for the master's degree. And I then asked him what his goals are and what his grades are, because maybe he has very poor grades. So then he gets the master's degree and then he has finally a good grade that he can show. No, he had 4.0. And then I asked him his goals. He didn't really know.
Starting point is 00:34:18 He wanted to learn more to be proficient in a specific industry. And I then told him that his goal should be to get a job, because that's why he takes the degree. And now, if he cannot get a job, because of the economy, or other limitations of his GPA or knowledge, or he wants to get a different job, then yes, go get the master's degree. But I feel like sometimes the students forget that they want the job.
Starting point is 00:34:47 And so, with this new world of online education, I can look at what jobs I want, and I can try to get that knowledge, and then I can demonstrate it. And I had a student once, actually, he was a student and he worked in my company, and he wanted a job at Rivian. And they didn't give it to him because he didn't have a certain qualification, I think with OpenGL. And then he spent a weekend learning OpenGL and programmed this Rivian car driving in some kind of muddy something. And he called that recruiter and said,
Starting point is 00:35:25 look, I know OpenGL now. Can I have the job? And he did get the job. And so, I mean, that was, when, five years ago? Maybe three, something like that, when Rivian was hot and new. And I didn't realize how easy it was. I mean, not that it was easy for him, but it was possible to learn all of this and find YouTube examples. And at the time, I hadn't learned from
Starting point is 00:35:56 YouTube yet. I was a textbook kind of guy. So if I had done this, I would have ordered an OpenGL book from Mick Effie or something, or the one with the animals on it. Oh, really? And read the entire thing. And then, you know... but this younger generation, they are much... he's much more result-oriented. He wanted to do this.
Starting point is 00:36:21 So he found some example online and he cobbled stuff together, and he got much quicker results than I would have gotten. And so,
Starting point is 00:36:33 that's why, as I said: he wanted the job. He wanted the money that comes with it, which is much more than his... oh, he had a mechanical engineering degree too. So they are paid less than the CS people.
Starting point is 00:36:47 So I think, Chris, you would be the same if you were motivated to get the job. But it's also a developmental stage, where the university serves as an environment for personal growth. And that is a very important aspect too. Yeah, and I think that was very important for me, because I went from a place where in high school everything was somewhat easy and I did very well. And then as soon as I got to college, I suddenly realized that I was, you know, mid-tier to lower quartile of the class. And that didn't feel right. And things were very difficult. And it took me a couple of years to figure out how to apply myself and how to be motivated and things like that. So it's an age thing. Like now I can learn from YouTube videos and I can learn from online classes. And I did a master's degree when I was an adult, and that was self-motivating and stuff like that. But at 18 to 20, that would have been very difficult. But I also didn't grow up with YouTube.
Starting point is 00:37:59 So maybe I would be a different person. But I still have trouble watching things on YouTube. I do so much better with a textbook. How did you switch that? Yeah, I think it was actually also that student who introduced me to the idea. When I bought books, he was watching videos. I think you need really good stuff. Like, I like the makemore series from Andrej Karpathy, for instance.
Starting point is 00:38:31 And that was just, you know, what opened the world of transformer neural networks for me. So I think you have to get to the right content. But normally, to me, it's also too slow. I mean, it is a pacing thing. With a book, you can pace yourself; with the YouTube video, you can't. But you can also set it to 2x. So sometimes you have to do that and power through. And yeah, again, I think the content has become so much better on YouTube,
Starting point is 00:39:04 on video formats, than you could find in books. So I think that's really what made the switch for me. There's a lot more examples and seeing things happen. And with the robotics, it's actually really useful. Because I read about the robot that Nikolaus is talking about, with the two wheels and the one perpendicular, and it doesn't fall off the table. And in the book, it talks about it and you're like, okay. And then when you see it in the video, it's like, oh, that's what he meant. So there are times where motion matters. Yeah, the podcast audience is probably already quite all right
Starting point is 00:39:45 that they couldn't see the robot that we all have seen and described. Oh, some of them have already clicked the buttons to look it up. Others have just cursed us on their drive and said, I'll never find it again. That's fine. We've had much worse visual descriptions on this podcast. Origami folding takes the cake. What kind of research do you do?
Starting point is 00:40:15 So right now, we are very interested in these ChatGPT-like open-world models, and how they affect robotics and how we can use them to make robots smarter. And to make that more clear: you can now upload pictures to ChatGPT. So you can take a photo of your kitchen table and ask ChatGPT, what are the things I need to do to clean the table? Make a list. Or tell it, I am a robot, whatever. And it would then enumerate the things. So you get quite a high level of common sense, and almost general intelligence, to interact with the world. And the research is not just using ChatGPT in that way, but also training models that are alike, or using the same underlying techniques,
Starting point is 00:41:07 to do that. And so then, you know, humanoid robotics is something that is also coming up. And I'm just about to order my first humanoid robot. And then I hope we can do things that were not conceivable like 10 years ago, or five years ago even. So for ChatGPT and robotics, I have in my metaphorical vision a Roomba-looking robot that has a pen and is the Logo turtle robot from many years ago. And it talks to ChatGPT and says, I want to draw an owl. And I know how to go left, right. And it lists out the turtle Logo instructions. Is that what you're talking about? That we're simplifying what the robot wants to do
Starting point is 00:42:08 versus what it knows how to do? That's definitely a great example, because it actually pulls in this extremely abstract knowledge about what an owl is. And what we can do now, if you want, we could go to ChatGPT
Starting point is 00:42:24 and ask for the owl ASCII art, to make an ASCII art using characters, and to see if that would even work. But I guess it would, and I guess it could be done relatively easily. And
Starting point is 00:42:39 now you have to just provide, as you said, the API. That's like the prompt engineering you have to do. And then copy and paste the output, and it will work. Will it? That's quite incredible. I mean, ChatGPT doesn't always work. So the limitation might be that the number of instructions is quite long for this task, and it doesn't like to spit out too much information.
Starting point is 00:43:14 But I think you can get to some kind of version of this. And I just saw a paper from Berkeley, Ken Goldberg's group, who showed the robot 3D puzzle pieces, like cubes and triangles, and then asked, make me a giraffe. And then the robot would pick those items and build the giraffe, or whatever, like animals of that kind. And there's some more engineering to that, but in general it uses the vision-language abilities of these models and demonstrates that things like that can be done. So that's very similar to your drawing example. It's just in 3D and involves blocks. I'm going to go back to... you dropped that you have ordered a humanoid robot. You dropped that you've ordered a human. Right? That's so good.
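(The turtle-drawing idea is easy to make concrete. Below is a minimal, hypothetical sketch, not from the episode, of the robot side of that loop: a tiny interpreter for the only primitives the robot knows, FORWARD, LEFT, and RIGHT. The language model's job would be to emit a long list of these instructions for "an owl"; here a square stands in for the owl.)

```python
import math

def run_turtle_program(lines):
    """Interpret a tiny Logo-style program (FORWARD <dist>, LEFT <deg>,
    RIGHT <deg>) and return the list of pen positions the robot visits."""
    x, y, heading = 0.0, 0.0, 0.0          # heading in degrees, 0 = +x axis
    path = [(x, y)]
    for line in lines:
        op, arg = line.split()
        arg = float(arg)
        if op == "FORWARD":                # the only instruction that moves
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
            path.append((round(x, 6), round(y, 6)))
        elif op == "LEFT":
            heading = (heading + arg) % 360
        elif op == "RIGHT":
            heading = (heading - arg) % 360
        else:
            raise ValueError(f"unknown instruction: {op!r}")
    return path

# A square stands in for the owl; an LLM would just emit a longer list.
print(run_turtle_program(["FORWARD 10", "LEFT 90"] * 4)[-1])
```

The point of the episode's example is exactly this split: the interpreter stays dumb and fixed, and all of the "what does an owl look like" knowledge lives in whatever generates the instruction list.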
Starting point is 00:44:14 Why? This is a question we've asked other robotics-adjacent people on this podcast in various forms. Why, or is the humanoid form important? Yeah, great question. I spend a lot of time in manufacturing, trying to sell robots to them, and they often discard them. So, as you might know, these robot startups are often not successful,
Starting point is 00:44:41 and then, you know, be it arms or be it mobility, they get sent back. And when you then ask, why do you not want it anymore, then they say, it doesn't work. And then you say, no, it does work, it moves as it should. And they say, no, it doesn't. And so I think what happens often is that the robots disrupt the flow. They cannot work faster or slower. So they basically enforce a certain takt time, which is a musical word that you use in manufacturing for the pace at which things move. And I think humans are able to adjust better and adapt better. And so it's very difficult to integrate.
Starting point is 00:45:30 You have some existing setup with humans that do stuff, and machines. And then when you bring in new machines, they disrupt the flow of things. And so I think humanoids are the best bet to create something that very seamlessly integrates. And I have to give you a longer answer. I can give you a second part to this answer, another example. I often use this Bialetti espresso maker from Italy. It's that silver can that you have to screw on and off and put the powder in. And then,
Starting point is 00:46:07 you set it on a heat plate. Oh yeah. You know that. Yeah. Like a camping espresso thing. That kind of, yeah.
Starting point is 00:46:14 The Italians actually, they live with it. But that's fine. So this is how it should be done. It's not a camping thing. Sorry. Sorry.
Starting point is 00:46:23 So if you, let's say you have a factory, and it has these kinds of things, the heat plate and the grinder and this. And so you go to a roboticist and say, hey, please automate that for me. And then for $150,000, you could probably think of something which involves a couple of robot arms and careful jigs that make that work. And I think it would be very brittle. And now the alternative is an espresso maker where you press a button. And that espresso maker actually has the same configuration inside. Everything is the same. It boils water, or it heats water, and then it
Starting point is 00:47:06 has this, you know, sieve, and the water gets shot through that. And so you basically mimic this entire mechanism, but you can get that done for $300 or $500, right? And so that is one problem. It's called greenfield versus brownfield, where you have a brownfield installation that you want to retrofit, versus you can discard that stuff and build something new on the greenfield. And so now, I think we haven't really talked about how the robot installation wouldn't really work. I mean, you can see how you have, like, Universal Robots arms, and there would be problems all the time, that something would get out of the tools, or, let's say, I don't know what it is. The humanoid has the form factor and functionality to actually replace the human who has previously operated that existing brownfield installation. And so that's the answer. Because we've adapted all of our environments to ourselves, necessarily. So it's much easier to stick a human-shaped thing in a human-shaped hole. Yeah, and maybe that would be the simple answer,
Starting point is 00:48:30 but I don't think people buy that answer anymore. I think they need better, concrete examples like that. And of course it's a made-up example too. And you're right, that is the summary. And people don't want to change their environments and their setup. It's very difficult to make even small changes in a factory. See, I thought you were going to say people wouldn't want to send back something that looks like a human. It's a nice one. It's funny how much I disagree with this. I mean, I didn't really expect to have such a strong opinion, but I am just like, no. It's Sunday, you can't have a strong opinion on this. Okay. So I would use that one, Chris. I like it. It's much easier. I just say it, like, cold, without a smile, just very serious. Bartender. If I was building a bartender robot, I would not want a humanoid. I would want something
Starting point is 00:49:34 with wheels instead of legs, and I would want four arms, or eight arms. I wouldn't want a humanoid, even though it's a human that does that job. I want something designed for purpose, and not designed for the flexibility that is our feeble humanness. But the bar already exists as it is. That's the problem. Right, and my robot does not break that in any way. You haven't designed it yet. With wheels and four arms. There are many bartending robots, I mean, startups,
Starting point is 00:50:09 like cocktail-mixing machines. Yeah, yeah, yeah. But that's a greenfield, right? They've got all the things in there, like a good espresso machine. Yeah, but my robot could do the whole flair cocktail thing. Then let's do that startup. So I think there's a problem that you have only a marginal
Starting point is 00:50:27 value prop, and then it doesn't justify all of this mechatronic investment. And the humanoid can really, you know, go and do multiple things. And the problem with this is also, my example only works if the
Starting point is 00:50:43 humanoid is so good that it could do all of this dexterity. Right. We're just kind of sliding over that. Yeah. And once you don't have that anymore, then you exponentially lose value. And that's what we are seeing right now.
Starting point is 00:50:57 When people try, you know, Figure tries BMW, BMW tries Figure's humanoid robot. And the problem is, if they cannot deliver or raise the bar of value, then this will fall through. And then we have a failed bubble, which would be a pity. One of the other problems with humanoid robots, or my bar robot, is that robots are inherently not safe. They're usually stronger than humans, and they don't really have any compunction against hitting us, if that's what
Starting point is 00:51:34 their programming tells them to do. And so in manufacturing, we put them in cages, or paint lines around them and tell the humans not to go in there. Put blinking lights and orange paint on them. Yeah. Yeah. We make them brightly colored so that people don't hurt themselves or get hurt by the robot. How would your humanoid robot prevent that? Or are you going to make it squishy?
Starting point is 00:51:59 Yeah, it's a great question. All of these things, squishy, you know, collaborative robots, I mean, that exists. The key in collaborative robots is that the robot measures its own joint torques, and as soon as it exceeds what it thinks it should exert, it will stop. Before, you had robots that could lift 100 kilograms, and so if you get in the way, they would just move right through the obstacle, or you. And these new collaborative robots, they can sense that and then stop. And of course, once you move them faster, then they whip you again, so it doesn't work. And I think one really has to make the robots behave safely, and in the worst case, just stop moving if people approach.
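(A sketch of the collaborative-robot idea just described: compare the torque each joint actually measures against what the controller expects to be exerting, and stop on any unexplained excess, which usually means contact. The function name, the margin, and all of the numbers are made up for illustration; real cobots run this kind of check in the servo loop, with dynamics models far more detailed than a fixed margin.)

```python
def check_joint_torques(measured, expected, margin=2.0):
    """Return True if the arm may keep moving: every measured joint torque
    (N*m) is within `margin` of what the controller expects to be exerting.
    A collision shows up as torque the robot's own model cannot explain."""
    return all(abs(m - e) <= margin for m, e in zip(measured, expected))

# Free motion: measurements roughly match the model, so keep going.
print(check_joint_torques([1.0, 5.2, 0.4], [1.1, 5.0, 0.5]))   # True
# Contact: joint 2 sees ~5 N*m the model can't explain, so stop.
print(check_joint_torques([1.0, 9.8, 0.4], [1.1, 5.0, 0.5]))   # False
```

This also shows the speed limitation mentioned above: at higher speeds the expected torques are larger and noisier, so the margin has to widen, and by the time the excess is detected the arm has already hit you.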
Starting point is 00:52:50 And that's, of course, a problem if you think about, you know, care situations. But I think there are lots of applications where the robot actually doesn't have to do anything with humans. So if the robot would do something in the manufacturing environment, or cleaning the kitchen, then it's also safe that it gets into a less aggressive motion mode, moving slower as people approach. And then the assumption is that when contact is made, the robot can tell. And it can tell already from monitoring its own joint torques. And then there's, of course, this whole idea of adding skins to the robot, and there's
Starting point is 00:53:33 some interesting work there. What's up with clothing? Sorry, go ahead. Skins. Yeah, I'm actually really bullish on the idea of making robot clothing, like with the humanoids, like jerseys and things that are functionalized. Because the textile industry is very well understood, very well set up, and there are lots of ideas for wearables out there to functionalize clothing. And you start to add another layer of safety, and also aesthetic appeal. I was kidding. Robots shouldn't have clothes.
Starting point is 00:54:11 Yeah, sure. Robots are people too. You can't just fire them. You also mentioned, before recording, that you are doing battery disassembly with robots. Could you describe that? A lot of first-generation electric vehicles come to end of life soon, and there will be many more. And at the same time, lithium and other rare earths are a limited resource. So there's great interest in dismantling these batteries and recycling them, or remanufacturing them by exchanging individual cells and modules, but then at the end also extracting these raw materials again. And I do this, or I propose to do that, with
Starting point is 00:55:13 humanoids. And the hope is also to use this as a case study for productive use of humanoids, because it meets this dull, dirty, and dangerous paradigm that we like in robotics. And it is a better idea than making a robotic coffee maker that operates that Bialetti machine, where people can really see, oh, I don't want the human to deal with a 400-volt battery, or an 800-volt battery. I'd rather not be close to that battery when it catches fire, because, you know, if you drill into it or something, it can just spontaneously combust, and then you can't even put it out. So it's very dangerous, and I think there's a big driver for having humanoids do that. Now, you could also have,
Starting point is 00:56:08 many people have done this with static manipulators. But the problem is, you have to be able to do this for possibly hundreds of different battery types, and maybe also do this at places where the batteries are being delivered, and not move them across the country first. Because that not only costs a lot of money, it's also very dangerous, again because of this spontaneous combustion risk, or short circuits that come when you put it in the back of a truck. So I hope that this will also drive a little bit the humanoid use case.
Starting point is 00:56:46 So instead of having people send all of their batteries to someplace in, well, let's be realistic, Nevada, you would send these robots to various centers, probably on the outskirts of town, and they would disassemble the lithium-ion batteries there, because the robots are humanoid and can move and can be flexible enough to disassemble many different kinds of lithium-ion batteries. Am I understanding properly? Yeah, that was correct. And there's lots of stuff to unpack, of course. One is this "very many different kinds of things." This is, again, like a ChatGPT idea. The open-world abilities of these generative models, multimodal generative models, will help you identify screws and bolts and things in a more generic way. So even if you
Starting point is 00:57:40 haven't seen that type of battery before, you might be able to disassemble some of it, or ask a human worker how to do it. But also, the humanoid would provide the mobility that allows the robot to move around the battery, because sometimes they're bigger. And you would need quite a big installation that is static, like a big coffee machine, if you want, an espresso maker that sits somewhere and costs hundreds of thousands of dollars, versus having a rather simple, it's not simple, but a more compact
Starting point is 00:58:13 version that can use standard tools that people are currently using. And of course, this disassembly is not going to be completely autonomous in the beginning, but it must somehow work together with the human worker. So let's say the humanoid could only do the things until the battery is safe. Or, for example, opening up the case and probing what is inside, and then the human can take over. And because it is a fluent process, where you automate it step by step, I think that's why the humanoid form factor, again, is important. So it can integrate with the existing places, because people already do this, for like 10 years now, to, you know, dismantle these batteries and sort them and send them to these big plants in Nevada and other states, but send other pieces elsewhere. So it's already happening, and it's about that market
Starting point is 00:59:12 to help them do it better. You call them humanoid robots instead of androids. Why? Android is an operating system from Google now, isn't it? Okay. That's fair. Totally fair. Yeah, now that you say that, it does sound familiar. Yep, yep. I don't think they should be allowed to have that one. Yeah, and I think, maybe, isn't it also Latin or Greek? Andros means man, not human.
Starting point is 00:59:42 That's true. I usually use, I don't use the word android, just for cultural reasons, I guess. Are you using Android over humanoid? No, I use an iPhone. No. You know, you're very funny.
Starting point is 01:00:01 No, I don't. I tend to use robot, which is not specific enough. I read a lot of science fiction, and it seems like that terminology has been changing, and it didn't occur to me that it was because of Google and their misuse of that term for their operating system.
Starting point is 01:00:23 Google, and gender-specificity. I think that was what you were saying, right? I understand that, but I guess that one doesn't bug me. You could always use droid, but then you'll be sued by somebody else. Yeah, exactly. Disney. One more question I have for you, if you have just a couple more minutes. Sure.
Starting point is 01:00:47 You do art. What kind of art do you do? And how do you find time for it? I haven't done it in a long time. That answers the second part of your question. But, you know, I think most robotic projects are art projects, because you're not use-oriented and you're not solving anybody's problem, but you are pushing the envelope. And if you are too wild about that, then people will get back to you and stop funding that, and say, no, I mean, why do you want to do this? What's the point?
Starting point is 01:01:28 And you say, no, it's just cool. I want to try that. I want to make robotic sand, or I want to make robot swarms. And so if that's your problem, then maybe art is the better label. And I once met a colleague, Michael Theodore, and he had very similar interests to mine, in terms of swarming, and how it can be that you have, you know, atoms that have physical interactions, which is very simple to describe, but then they create molecules, and then cells, and then you have smart things, or, you know, a bee swarm, or galaxies. And you have these scale-free phenomena, where these basic physical interactions turn into these complex systems. And he was equally curious about this as I was,
Starting point is 01:02:26 and he wanted to explore it with art. And so we worked together, and we built the Swarm Wall, which was a wall of swarming slinkies, driven by several motors, moving across a surface that he designed. And the behavior of that system was later refined by a visiting professor, Ken Sugawara, who has been studying swarming and writing about the equations of swarming. And what happens is, you attribute more to these things than actually happens. So you think that thing has emotions, or is somewhat intelligent, because of the complexity that you get from these interactions.
Starting point is 01:03:12 So that was a fine example of why I would do art and how I would find the time, because it was just my main pursuit. And we also got money for that, even, to build the whole system. And then, at the same time, there are sometimes just artifacts of the robots that I find interesting beyond the publication value that they have, and I then like to connect this to other, you know, art, where people explore similar phenomena. So I really like these synchronization and desynchronization things, which also happen in the swarming systems. They happen in the Swarm Wall. And I like to sometimes play with these and see what happens, and then also connect it to music that is made in that way, and things like this. It's not very exciting for other people, but it's very exciting to me.
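(The synchronization and desynchronization mentioned here is often modeled with the Kuramoto coupled-oscillator equations, a standard model in the swarming literature, though the episode doesn't name it and the Swarm Wall's actual equations may differ. A minimal sketch: each oscillator drifts at its own natural frequency and is pulled toward the group's mean phase, and with enough coupling an initially incoherent population locks together.)

```python
import math
import random

def kuramoto_step(phases, natural_freqs, coupling, dt=0.01):
    """One Euler step of the Kuramoto model: each oscillator drifts at its
    own natural frequency and is pulled toward the mean phase of the group."""
    n = len(phases)
    stepped = []
    for i in range(n):
        pull = sum(math.sin(pj - phases[i]) for pj in phases) / n
        stepped.append(phases[i] + (natural_freqs[i] + coupling * pull) * dt)
    return stepped

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]: 0 = incoherent, 1 = in sync."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

random.seed(0)
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(20)]
freqs = [random.gauss(1.0, 0.1) for _ in range(20)]
r0 = order_parameter(phases)
for _ in range(5000):                      # 50 simulated time units
    phases = kuramoto_step(phases, freqs, coupling=2.0)
print(f"order parameter r went from {r0:.2f} to {order_parameter(phases):.2f}")
```

Turning the coupling down below a critical value makes the population fall back out of sync, which is exactly the play between synchronization and desynchronization described above.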
Starting point is 01:04:11 So I think that's probably the definition of art, by the way. It's a different definition than I'm used to, but I like it a lot. Nikolaus, it has been wonderful to talk to you. Do you have any thoughts you'd like to leave us with? No. Keep on building, I think. Keep on blogging, and try to educate yourself and others. I think that's what makes the world go round.
Starting point is 01:04:41 Our guest has been Nikolaus Correll, professor of computer science at the University of Colorado and author of Introduction to Autonomous Robots. There will be links in the show notes to his Coursera course and some of the robots we've talked about. Thanks, Nikolaus. Thank you, Chris. Thank you, Elecia. Bye-bye. Thank you to Christopher for producing and co-hosting. Thank you to Nordic for sponsoring the show. And of course, thank you for listening.
Starting point is 01:05:09 You can always contact us at show at embedded.fm, or at the contact link on embedded.fm. And now a thought to leave you with: should robots be humanoid?
