We Study Billionaires - The Investor’s Podcast Network - TECH010: The Real Robotics Timeline w/ Ken Goldberg (Tech Podcast)

Episode Date: December 24, 2025

Ken and Preston examine whether robotics has lost its way, echoing Rodney Brooks' concerns. They dissect the gap between AI language models and physical robotics, focusing on dexterous manipulation, tactile sensing, and visual feedback.

IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:02:37 - Why Ken agrees that robotics may have "lost its way"
00:03:37 - The critical gap between AI language skills and robotic manipulation
00:04:33 - How robot mobility is advancing, but dexterity still lags
00:08:15 - Why tying shoelaces is still too complex for robots
00:12:37 - The role of tactile sensing vs. vision in robotic surgery
00:14:45 - How camera placement in robotic hands affects manipulation
00:20:18 - Why the robot data gap could be 100,000 years behind language models
00:25:13 - Why simpler grippers often outperform human-like robotic hands
00:27:03 - The engineering behind Dex-Net and Ambi Robotics' success
00:34:37 - How real-world testing exposed unexpected robotic limitations

Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.

BOOKS AND RESOURCES:
Official website: Ken Goldberg
Website mentioned: Ambi Robotics
Research article: Dex-Net in Science Robotics, January 2019
Executive Education profile: Prof. Ken Goldberg
Ken Goldberg interview by Kara Manke: Are We Truly on the Verge of the Humanoid Robot Revolution?
Goldberg on Moravec's Paradox
Goldberg on AI and Creativity
TEDx Talk: "Robots: What's Taking So Long?"
Op-Ed by Ken Goldberg, Boston Globe: Let's Give AI a Chance
Related books mentioned in the podcast

Transcript
Starting point is 00:00:00 You're listening to TIP. Hey everyone, welcome to this Wednesday's release of Infinite Tech. Today, we're talking AI and robotics, and where there are still key areas of development that need some work. My guest is Ken Goldberg, a leading robotics researcher whose work bridges academic AI, real-world automation, and large-scale commercial robotics systems. One of the things we discuss that's super interesting is the assumption that large language models automatically unlock physical intelligence.
Starting point is 00:00:29 And this is an area where Ken is really well versed and does a great job explaining what that actually means. We cover what has improved like mobility and automation and what's still painfully hard, like dexterity, sensing, and real-world manipulation. This is a grounded conversation about engineering reality versus expectation, and Ken is a true expert in this field as you'll see in the conversation. So without further delay, I hope you guys enjoy this chat. You're listening to Infinite Tech by the Investors Podcast Network, hosted by Preston Pish.
Starting point is 00:01:05 We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today. And now, here's your host, Preston Pish. Hey, everyone, welcome to the show. I am here with Ken Goldberg, and I am. I am so excited to have this conversation because you are an expert in this field of robotics and AI and it's something that we talk about all the time on the show. And it's just exciting to have somebody like yourself here today to talk about it. So welcome to the show, Ken.
Starting point is 00:01:51 Thank you, Preston. I'm excited to talk to you too. So before we started chat and you sent over an article that I think is very pertinent to kind of set the stage for probably most of the conversation we're having today. And it was an article that was in the New York Times and it's talking about it's talking about Rocking Rodney Brooks has quote unquote said, the field has lost its way. And for people that aren't familiar with Rodney Brooks, he's the former director of MIT's computer science and artificial intelligence. He's the Romba inventor and, you know, running that entire company
Starting point is 00:02:26 and product line. And so for him to come out and say something like this, this is kind of a big deal. And I just really want to kind of capture your take on what is he talking about the field has lost its way. Well, I think he's very, very respected individual. He's a good friend of mine. And I agree with him very much. And I think he's provocative. He's put it in his own words. But I sent that to you because I think it's very relevant for us to start this conversation about what's real and what's hype and in robotics. And I have to be careful about the word hype, but I want to say there's a call it inflated expectations that are out there. And I understand where they come from.
Starting point is 00:03:07 You know, I think people are excited about technology. Everyone, I am too. We all grew up with science fiction and we love it and we love new things. And there has been some breakthroughs. I mean, there's no doubt that the advances in artificial intelligence, in particular, deep learning and then generative AI with the transformer model have been transformative in the field. that AI systems are doing things that no one thought would be possible by now. And I will be the first to admit they're capable of creativity.
Starting point is 00:03:38 They're immensely valuable. But people then take the next logical step and say, okay, these systems have solved language. So therefore, they'll solve robotics too. And that is where I have a lot of concerns. I'm really, we can get into the details. But Rod and I agree that there is not at all obvious that what if the advances in language in AI will extend to robotics. What would you say is the number one thing that you're seeing that's just grossly out
Starting point is 00:04:13 of touch with reality when it comes to the robotics piece being not as far along or it's not coming in the talking point is in five years we're going to have humanoid robots doing everything, right? So like what are the big chunk pieces that people that aren't intimately, familiar with the space that you see are missing on that particular topic. Okay. So let me tell you, first of all, where some of the advances have been made. And one of them is in quadrupeds.
Starting point is 00:04:39 That's walking dogs, basically, and bipeds. That's walking machines and navigation. And I would call mobility. So the ability to get around with robots, with legs, has made immense progress. That's been very exciting. And there's no doubt about it. Those machines are capable of doing backflips, as you know, side flips, parkour, all kinds of things that I certainly can't do. Also, huge advances in drones.
Starting point is 00:05:04 And so the past decade, we've seen drone technology take off from something that was very experimental, but it's been a number of advances that have made that possible. In both cases, a lot of it has to do with motors and the hardware, but also advances in simulation. And the ability, for example, for drones, is to stabilize themselves and then to be able to control very accurately the motors on the four or six rotors that are there. And the same is true for robots that have legs or quadrupeds or bipeds. So where these are big, undeniable and major advances. And if you just look at the field, you say, okay, all this is coming. And now, the next thing, we're going to have home robot taking care of us. And, you know, this is around
Starting point is 00:05:51 the corner, according to Elon Musk. And I'm sure I'm going to get some pushback from some of your listeners are going to say, I don't know what I'm talking about. Okay, I've seen, I've had that happen from a number of very confident expert, quote, experts from Silicon Valley, tell me that. But I've been working in this field for 45 years, and I've studied very closely and I understand where the gaps remain for particular manipulation. And manipulation is being able to pick things up, you know, all kinds of things that just happen to be in your environment and then being able to manipulate them, you know, do things. That skill is very, very nuanced and tricky. And it's not clear that the current methods for doing AI is going to get us there.
Starting point is 00:06:35 I've heard in interviews, Elon in particular, say that the hand, mimicking the hand and, you know, the tendons and being able to have that tactile ability is extremely difficult is the way he has said it in interviews. But I think, you know, I suspect that. that when it really comes down to it, what you're seeing is a lot of demos online that you see, like this video somebody picks up a pen and the robot did it. But what's actually happening there behind the scenes, whether that was a programmed publicity stunt or something that the robot can just do quite well, there seems to be, there seems to be a large gap there.
Starting point is 00:07:12 So talk to us about where you see that gap and what it is in reality in your humble opinion. Okay. So what I, and this is understanding. Again, I don't want to say people are naive. I get where they're coming from. They see something and it looks human-like. And so they attribute human-like qualities to it and skills. I understand that.
Starting point is 00:07:34 And by the way, when Elon says the hands are hard, we can produce, people are designing hands that look very much like human hands, that is they have 22 degrees of freedom, and they can move all these joints independently very quickly. And they look almost identical to human hands. So we can reproduce that. In fact, there's like a hundred different hands that are being produced by different companies in China right now. Okay. So the advances in the hand itself are very sophisticated, but the control of the hands is where the challenge is.
Starting point is 00:08:06 And this is where if you have this hand doing this, but then get it to actually tie your shoe lace, that is where the challenge is. And this is because we have, there's so many nuances in the interactions that we have with these fingers with the environment that we are sensing the environment. We are exerting forces on the environment. And this is very subtle, very nuanced. And we perceive this through a variety of techniques. We have something like 15,000 sensors in our hand, in every hand. Yeah, I know it's remarkable. We don't even think about it because it's subconscious.
Starting point is 00:08:44 Yeah. But then we also have sensors. in our joints, every one of our joints. So we are able to perceive very subtle forces, slip, and in particular, one very, very nuanced thing is deformation. So if you look at your fingertips, they've evolved in a really interesting way that those pads are extremely helpful. If you put on, let's say, thimbles on your finger, right?
Starting point is 00:09:07 Like you're doing your sewing, that makes it much more difficult to do anything. Yeah. You can imagine. Or just actually heavy gloves. Gloves as well. Right? But we can do these things very, very subtly. We have learned this ability to interact with the forces of objects that the objects are
Starting point is 00:09:24 constantly being moved and deformed. So if you think of the shoelace, right? The object is being deformed. The fingertip is being deformed as well. This mutual deformation is something that's really nuanced and subtle. And we don't even know how to simulate it accurately. So we can't even simulate the forces and torques and deformations that are occurring. And then we don't have the sensing capabilities to perceive these nuances.
Starting point is 00:09:50 Like, you can feel a shoelace if it's a little bit slipping out of your fingertip. No robot can do that. So what happens is that when you now have this hand and you actually try to execute something, sometimes it works, but a lot of times it doesn't work. And now you have the issue of reliability. Yes. That is where we're seeing. And by the way, you can see robots all day long picking up stuff off the table and moving it somewhere.
Starting point is 00:10:14 That's actually not so difficult. If you just want to pick up, especially a stuffed animal, by the way, stuffed animals are very easy because you almost can't go wrong. You put your gripper anywhere near them and close it and you'll pick up that thing. Okay. Those are like, you know, they're sitting ducks, right? They're no, this is super easy, low hanging fruit, let's call it. And that makes it very easy to pick up and move things.
Starting point is 00:10:36 But now when you want to start doing things like inserting things, like repairing stuffed animal by opening it up and getting things and pulling out the stuffing or sewing it back up. This is totally different and much, much more difficult. Yeah. Your example of a shoelace is really profound because until you like take a step back and just think if I had to design or build a robot to tie a shoelace, I can't even imagine how incredibly difficult something like that would be because it is such a complex task. and I've never even thought about how difficult something like that is.
Starting point is 00:11:13 Well, here's what I know because a shoelace we all do. We learn when we're young and we kind of do it without even thinking about it. It's a subconscious, right? I can be on the phone tying my shoelace. Don't even think. But think about this one. I don't know about you, but do you know how to tie a bow tie? A tie, but not a bow tie.
Starting point is 00:11:28 Yeah. Bow tie. Okay, I thought you might because you seemed like a fashionable guy. I have tried it. It's very tricky. Yeah. It's very tricky business. And it's subtle.
Starting point is 00:11:36 You have to be able to feel and pull and all. these different directions. Yeah. Forget it. There's no robot that's going to be able to do that for a long time. I would love to have it happen because that would be something I would love to have a robot tie my bow tie. And here's another one that's very simple. It's just buttoning your shirt. Yeah. It's actually a little tricky for humans if you think about it, how you have to kind of fiddle with it a little bit to button on and off, especially a small button. So that's way beyond robotics. You'll never see a demo of a robot buttoning up a shirt. We're actually working on it in my lap, but it's It's really hard.
Starting point is 00:12:10 Wow. Yeah, it's things that you just really take for granted. Now, when you get into solving that problem, it seems like you mentioned this earlier, that it's almost a sensing issue that we need a lot of developments on the whatever type of sensors you have in the fingertips or whatever you're using for the manipulation. Is that the biggest hurdle right now is in just kind of replicating how our fingertips can have so much sensing capability? Okay.
Starting point is 00:12:37 So that's one, but here's something that's somewhat encouraging for me, at least, which is that if you look in the realm of robot surgery, and by the way, there's a lot of misconceptions about that. I give talks where people say, well, robot took out my nephew's appendix. And I'll say, that was not a robot that, sir. That was a surgeon using a robot as a tool to do that operation, right? So they call it a robot, but it's really a tele-robot or more literally a puppet, a very, very important and very useful and expensive puppet, but it's a puppet. And so that's very important to understand. So surgeons can do remarkable tasks. They can sew, you sew up a wound, they can take out an appendix or a gallbladder with these tools. Now, they do not have
Starting point is 00:13:25 tactile sensing. Actually, the very latest versions of it, they started to introduce some, but for many years, they didn't. And surgeons are still able to do amazing things. So this is evidence that, that maybe we don't need to know how to do tactile sensing. It's just a hypothesis, but it says we have an existence proof that manipulation, dexterous complex manipulation with very complex deformable surfaces, right? I mean, it's harder than having a bow tie to take out an appendix. That you can do that without tactile sensing. Now, what's fascinating is the way surgeons seem to do it is essentially accommodating
Starting point is 00:14:03 the lack of tactile and then using vision. their eyes. They have cameras in there and they're watching what's happening and they're seeing what happens and that they have a feedback loop based on vision so they can see very small deformations of the tissues and they sort of they infer what's going on. This is remarkable because we, this I think is the most exciting path to you getting to manipulation, which is rather than trying to reproduce tactile, which is extremely difficult for all kinds of reasons, but I think it's interesting and worth pursuing, but there's another path which is to understand the visual tactile interactions. And that, I think if we can do that, we might be able to get away
Starting point is 00:14:43 with just using cameras. Interesting. So this week, or last week, I'm sorry, I saw an article that was talking to the difference between Elon's approach, particularly on the hands, between him and what figure AI is doing, where they put right here in the palm, they put a camera, to your point, They put a camera right here in the hand, and Elon is refusing to put a camera in the hand. And the person who posted this was saying this is akin to him not using LIDAR in the cars, because in the end it's going to come down to a cost thing. And he wants to force his team to figure it out without additional sensors and for all intensive purposes, costs for manufacturing. And he's playing this longer game.
Starting point is 00:15:31 Yeah, what are your thoughts on that? Well, that's a brilliant point, Preston. I'm really glad you made it. It's a very good. The analogy really works there. Elon is very, in some sense, he's very confident. He's done amazing things, understandably. He shouldn't be confident.
Starting point is 00:15:46 But sometimes that can blind you. So in this case, his decision not to use LIDAR has really, I think, put a limitation on the Tesla driving systems. LIDAR, it can be very helpful for filling in the edge cases with certain cases. with certain conditions when vision cameras can be distorted or blinded by light flares or especially in rain. So, LiDAR actually is a great addition there. And also the cost, I don't know. I think it will come down over time. I'm not in the car business.
Starting point is 00:16:17 So I defer to his expertise there. But in the same way, he has, you know, originally, you might remember when he first started Tesla, he wanted all the cars to be made in with robotic factories. And he had a decree, we will have no. No humans. You know, everything must be done by robots. And I remember engineers coming in from Tesla to my lab and saying, can you help us? We're trying to do this thing and we can't get it to work with a robot. And he was just, you know, unrelenting. And then finally, he said, I was wrong. Yeah. He was wrong. He said, I was mistaken. Humans are underrated. Do you remember that?
Starting point is 00:16:53 Yes. Yeah. So that was really interesting because it was one of the rare times he admitted. But it was also, it was a great example. of the idea that you can't do everything, right, with robots. And even if you will it, you know, he can will things into existence, right, by demanding this. It doesn't always work that way. And so the LiDAR story is very analogous.
Starting point is 00:17:16 And I think you're right about the cameras. You know, having cameras in the hand makes a lot of sense. It's not how humans or animals work, right? They don't have eyes in their hand. But cameras are something we understand very well. We have very high quality cameras. They're very fast, very accurate, and they're really low in cost comparatively. So I'm for more cameras, you know, put a lot of cameras in there.
Starting point is 00:17:39 Because the other issue is when you walk, it's one thing you have a camera on the head. You can sort of see what's around you, right? Or drive. By the way, driving, I should have mentioned this earlier, but driving is much easier than manipulation because driving, you're just trying to avoid objects, avoid hitting anything. In manipulation, you must make contact with objects. You must manipulate them, right? So it's very different.
Starting point is 00:18:01 To your point, this is really fascinating because to your point, when you talk about this idea of, you know, if I'm holding a shoelace and the tip of my finger is indented or I can see the compression of that, I can feel it. I'm relying on that touch like I'm tying my shoe. I'm relying on that sense of touch. But if you were going to try to build a sensor that can do that and you're kind of hitting a roadblock or can't find something that can provide that tactile feedback, I could look at a camera and say, okay, it went in by a half a millimeter, therefore it's about this much
Starting point is 00:18:34 pressure. And you can substitute that sensing capability through an image or a video of being able to see it. So it's, it is kind of interesting that we have figure going that path. And yeah. Well, okay, so let me let me add on to this. So you just made a very nice nuance point. You said if you want just looking at your fingertip and you saw the shoelace pressing into it by looking at the shadow structures and others, you could probably figure out that it was slipping away or it was firmly grasp. Absolutely. That's what surgeons do.
Starting point is 00:19:06 And they, by the way, work with surgical thread, which is really thin. And they have to use a needle. It's very complicated, right? But they're doing a lot of this with their eyes, with their intuition. Now, it's not just a matter of putting cameras around because that doesn't solve it alone. You actually now you need to be able to understand that imagery, the video and you need to interpret that. And that's also extremely difficult.
Starting point is 00:19:30 Because humans have this incredible ability and we can't underestimate it. It's just amazing what humans can do. Yeah. So when- From an inference standpoint, as far as like if I hold the shoelace this way, I can also kind of just intuitively infer that if it was held 90 degrees from that, that it's going to have this same slipping sensation. And that's something that's really hard to train a robot on versus human
Starting point is 00:19:55 can just kind of like figure it out like very easily. Is that what you mean by that? That's what I mean. Yeah. And here's the thing. We don't have good language for describing this, right? Yeah. We're trying to because it's all intuitive for us.
Starting point is 00:20:06 Yeah. You know, if you ask me, tell me how to tie a shoelace, right? I'd be like, you know, it's not easy, right? We don't have language. And that's part of the reason. By the way, this is the other issue. I'll come to is the data gap, the gap between the amount of data we want for language versus robots.
Starting point is 00:20:23 Maybe this is a good time to. Yeah, let's talk that. Yeah, let's talk this. Okay. All right. Well, there's a way of quantifying all this, and this is something that I call the robot data gap. And it is, it's a following, that if you put together all of the data that was used to
Starting point is 00:20:37 train language models, now it's vast, but it's hard to wrap your head around. How much data is that? Well, my students and I were able to calculate that if you actually look at it, and actually there's another Kevin Black, who's a researcher at physical intelligence, very, very smart guy. he had the first insight about this. And then we've been taking it a little further. But basically, it's that if you added up all the hours, it would take you to read a human, average human, to read all the text that's available to train the language models. Right. So it's all the books that are out there. It's all Wikipedia. It's everything that's on the internet. If you add up all those
Starting point is 00:21:14 tokens, if you will, and then to figure out, well, a human can read at the average speed of 238 words per minute, right? You can do the math and you end up with 100,000 years. Okay? So you could sit down and read everything that's used and be 100,000 years later. You'd be done. Okay. Now, we don't have such data for robot manipulation. Oh. It doesn't exist. It's not like we can just find it on the internet. It's the data is very different. There we want to start with vision images and then end with control signals to the robot. This doesn't exist. So we have to start and basically generate this data. But what we're up against, right, is it's 100,000 years. We're 100,000 years behind the language model. So again, I'm
Starting point is 00:22:01 sort of exaggerating and to make a point, which is certainly there's a number of ways to accelerate that. And I think we can eventually get there. By the way, I'm not saying this will never happen. Please don't get me wrong on it. I believe it will happen, but my big question is when. I think it's really important to be prepared for the reality. There's a lot of people say, hey, this makes sense. I want this. We should have this soon. And, you know, but remember, there's a lot of cases where people have talked about that in the past. Fusion energy, nuclear fusion, right? Makes a lot of sense. Sort of the technology is pretty obvious. But you have to contain this plasma. It seems like a technical issue. We can figure that out. Well, 50 years later, we're still
Starting point is 00:22:42 working on it and it's hard. It's a very, one of these very, very nuanced problems. Another one is curing cancer. When I was a kid, they used to say, we're going to have a war on cancer, just like we got to the moon. 10 years will solve cancer. We haven't solved it. So there are problems that are extremely difficult and they take much longer than anyone expects. Yeah. It seems like robotics is like that. We don't know. And listen, I'd be the first to celebrate if we, if someone wakes up, I wake up and I read, someone has solved it, right? It could happen. Yeah. Yeah. And then you'll look back on this podcast and say Goldberg was completely wrong. No.
Starting point is 00:23:15 It could totally happen. But I want to be a voice to say, hey, it might not happen. And let's just think about that and be a little bit realistic because I know how a lot of people are thinking that it's inevitable. It's going to happen. And he, you know, hopefully by next year, according to Elon and many of his followers. But they have to be ready for that maybe not to happen. And I'm worried about a backlash that people will say, hey, this whole robotics thing is, you know, was, you know, hocus pocus and, you know, we're going to move out of this field in droves.
Starting point is 00:23:47 I don't, and I don't want to put words in your mouth. So correct me if I'm stating this wrong, I don't think you're saying it's not going to happen. You're just really suspect on the timeline that everybody seems to be. Yeah. Yeah, that's it. That's it. Exactly. And that, you know, that's where I line up with Rod Brooks, because for very similar reasons, we have experience. We've both been working in this field for like 40. He's been working slightly longer than me, 60, 50 years. But, you know, we have a lot of experience with trying to solve these problems, and they're much more nuanced than they seem on the surface, especially because a child can pick things up and manipulate it. Yeah. It seems obvious. Why can't robots? It's very counterintuitive.
Starting point is 00:24:25 But when you work with these things and you really see their limitations, you start to understand that this is a very, very complex problem. Let's take a quick break and hear from today's sponsors. All right. I want you guys to imagine spending three days in Oslo at the height of the summer. You've got long days of daylight, incredible food, floating saunas on the Oslo Fjord, and every conversation you have is with people who are actually shaping the future. That's what the Oslo Freedom Forum is. From June 1st through the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year, bringing together activists, technologists, journalists, investors, and builders from all over the world. many of them operating on the front lines of history. This is where you hear firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human rights abuses, and building technology under censorship and authoritarian pressures. These aren't abstract ideas. These are tools real people are using right now.
Starting point is 00:25:24 You'll be in the room with about 2,000 extraordinary individuals, dissidents, founders, philanthropists, policymakers, the kind of people you don't just listen to but end up having, dinner with. Over three days, you'll experience powerful mainstage talks, hands-on workshops on freedom tech, and financial sovereignty, immersive art installations, and conversations that continue long after the sessions end. And it's all happening in Oslo in June. If this sounds like your kind of room, well, you're in luck because you can attend in person. Standard and patron passes are available at Oslof Freedomforum.com with patron passes offering deep access, private events, and small group time with the speakers.
Starting point is 00:26:06 The Oslo Freedom Forum isn't just a conference. It's a place where ideas meet reality and where the future is being built by people living it. If you run a business, you've probably had the same thought lately. How do we make AI useful in the real world? Because the upside is huge, but guessing your way into it is a risky move. With NetSuite by Oracle, you can put AI to work today.
Starting point is 00:26:29 NetSuite is the number one AI Cloud ERP trusted by over 43,000 businesses. It pulls your financials, inventory, commerce, HR, and CRM into one unified system. And that connected data is what makes your AI smarter. It can automate routine work, surface actionable insights, and help you cut costs while making fast AI-powered decisions with confidence. And now with the Netsuite AI connector, you can use the AI of your choice to connect directly to your real business data. This isn't some add-on, it's AI built and into the system that runs your business. And whether your company does millions or even hundreds of millions,
Starting point is 00:27:07 NetSuite helps you stay ahead. If your revenues are at least in the seven figures, get their free business guide, demystifying AI at netsuite.com slash study. The guide is free to you at netsuite.com slash study. NetSuite.com slash study. When I started my own side business, it suddenly felt like I had to become 10 different people
Starting point is 00:27:30 overnight wearing many different hats. Starting something from scratch can feel exciting, but also incredibly overwhelming and lonely. That's why having the right tools matters. For millions of businesses, that tool is Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S. from brands just getting started to household names. It gives you everything you need in one place, from inventory to payments to analytics. So you're not juggling a bunch of different platforms. You can build a beautiful online store with hundreds of ready-to-use templates, and Shopify is packed with helpful AI tools that write product descriptions and even enhance your product photography. Plus, if you ever get stuck, they've got award-winning
Starting point is 00:28:15 24-7 customer support. Start your business today with the industry's best business partner, Shopify, and start hearing... Sign up for your $1 per month trial today at Shopify.com. slash WSB. Go to Shopify.com slash WSB. That's Shopify. dot com slash WSB. All right. Back to the show.
Starting point is 00:28:43 Ken, if you're a program manager, you're kind of looking at all the different swim lanes to get there. The hand seems to be like one of the critical path, if you will, for getting there. Is there anything else that you would define as being on that critical path?
Starting point is 00:28:57 Or is the hand so far out there as far as difficulty goes, compared to everything else, that that's really kind of the limiting factor. Okay, great question. I would say it's not only the, when you say hand, it's the manipulation ability. Yeah. Right. Because by the way, I do have another thing to say here, another opinion, which is that we will get much more out of very simple grippers than we will out of hands that look
Starting point is 00:29:22 like human hands. Again, if you look at surgery, the tools that surgeons use to perform an appendectomy are very simple grippers with this. and they can do immensely complicated things. So I believe you don't need complex hand. So I'm not saying that's the path to go. I believe you can do simple grippers. In fact, my company, Ambirobics,
Starting point is 00:29:43 uses an even simpler gripper, which is a suction cup. And you can do incredible things with them. So it's not necessarily the hardware, but it's the software, it's the control of this nuanced interaction that is very, very challenging. I think many of the other aspects are addressable.
Starting point is 00:30:03 We have the inability to tell a robot, go pick up the orange, you know, the orange jumper off the table. We can solve that now. Computer vision systems are good enough to know that a jumper is a sweater and there's an orange one and it'll pull that up. No problem. But it's being able to actually pick it up and maybe put it on you and then button it up for you.
Starting point is 00:30:23 That's where it's difficult. Yeah. Yeah. I mean, maybe what we see in the interim is robots that go to market, humanoid robots that go to market that have simplified the hands or have the range of activities or things that they can actually perform is very limited relative to a real human being in there and being able to do it. I don't know if that's how they go to market or not. I want to talk a little bit more about your company, Ambie.
Starting point is 00:30:48 So this is really fascinating. So you guys have gone the market primarily focusing on logistics and warehouse type activities for robotics. Is that correct? Correct. So this started about seven years ago. We had a breakthrough in robot grasping, and that was just simply ability to pick things out of a bin. Okay.
Starting point is 00:31:10 So it's not manipulating, you know, doing surgery, but it's just picking things out of a bit. That was a very old problem. It's been known as the bin picking problem, and people have been looking at that for decades. But we had it, we made in advance, and this was especially the work of Jeff Mahler, who was the PhD student of mine, who was the lead researcher on this. And we can go into more details on the technical aspects of this, but the system was called DextNet, dexterity network. And it was based on collecting data, lots of examples.
Starting point is 00:31:39 And it was a somewhat, it was analogous in many ways to ImageNet, which was a breakthrough for computer images. So we did something similar. We synthesized this data set. We added noise in a very specific way. But the system started working remarkably well. And so it could pick up almost any object that you put into a bin. And it would just pull it out. And you would throw in a whole pile of objects.
Starting point is 00:32:01 We were digging around in our garages and closets and throwing everything we could into it. And it was consistently just being able to pull these things out. And so that was a very exciting moment for us. We got some publicity. It was in the New York Times and other places. And then we were approached by a number of companies and we decided to form our own company. That's awesome. I'm curious where you've seen just good old fashioned engineering matter more than a
Starting point is 00:32:27 additional data or larger models. And then to the converse of that, when did you have data actually really surprise you? Okay, good. Well, that was a great example. That was a case where data really did surprise us. We were able to generate six million example grasps over because we had collected 10,000 object models, and then we could generate grasps on those models. And then we had all these, and we trained a network to be able to learn essentially where
Starting point is 00:32:54 to grasp an object. So that was a data-driven approach. But I will tell you that when you take that and you have to move that into an experimental system or into a commercial system, then you need a huge amount of what I call good old fashion engineering. And this is where you have to really sweat the details. You have to make sure that the sensors are calibrated correctly, that your robot arms are calibrated and accurate. You have to be able to do the computation to move the arms very quickly. You have to control the surfaces of the grippers, the suction, cups, myriad of details like that, the lighting, that we had a little scale underneath the system that would recognize when an object was removed from the bin. It was like a digital scale. It was just another piece of engineering that had to go in there. So lots of all that.
Starting point is 00:33:41 That was just our demonstration system in the lab, experimental system. But then when we moved into ambi robotic, and by the way, I want to give credit also to the other students who were involved. Matt Matt Mattel was another computer scientist working closely with Jeff Mahler. and then I also had two other PhD students from mechanical engineering and one of them, Steve McKinley and David Geely, brilliant, all four of these guys, extremely, extremely brilliant engineering students. And so they really knew how they were very good friends, they remained good friends. They all worked very closely and spent a huge amount of time camping and hanging out together too,
Starting point is 00:34:19 but they were perfectly complementary because we had the computing skills and the mechanical skills. And mechanical guys knew how to design machines that could work reliably over a great period of time. And that's when we moved into building the Amby Sort system, which sorts packages for e-commerce. And this was a little bit, we didn't go in with this plan, but what we saw very quickly was that e-commerce was growing and we needed, there's a huge demand for sorting packages, right? It's very challenging to get packages out to the customer fast. So we started using that technique, DexNet, we evolved it and made it, it commercialized it, and then we can make it work very fast.
Starting point is 00:35:00 And then all kinds of other elements had to come in. We had another gantry system that would drop it into, pick it out of bins, pick an object out of bin, had to be scanned for its zip code, figure out which bin to go into, then put it into the right bin, avoid jamming the whole time, make the system reliable safe and easy to use, right? All this is what I call good old-fashioned engineering. And so I become a big advocate for this because, after all, this is the body of research and ideas and insights that have been developed over 400, 500 years in engineering. And still what we teach at Berkeley and all the major universities, we teach the engineering principles.
Starting point is 00:35:39 And my point is, let's not forget about those. Those are still extremely valuable for engineering and for robots and getting them to actually work in practice. And anyone working in robotics, I think will acknowledge that, although the public perception is, oh, it's just now, you know, we're using AI and that's solving everything. It's not. It's solving certain little pieces of it. And as I said, there's certain pieces that are very, very difficult with that still remain very difficult. So this comes back to what I was saying earlier, Preston, about my fear, which is that because there's so much expectation around humanoid right now, that if the companies can't deliver on that ability. then there might be a big backlash. And that's going to hurt companies like Ambi who are not trying to do that. Amby is trying to solve a real practical problem and do it efficiently and cost effectively and actually, you know, basically something that's very valuable for everyone who shops at Amazon or any of the online companies, right?
Starting point is 00:36:38 We've sorted 100 million packages so far. Wow. And I'm very proud of that because these machines, as we're talking, are out there sorting packages and they're very reliable. They're not featured in the videos about there's no human, By the way, although some have said that, you know, we'll have a humanoid doing that, but humanoid with hands, it's going to be a long time before that's even close to the efficiency of the systems that we have with destruction cups.
Starting point is 00:37:03 Let's take a quick break and hear from today's sponsors. No, it's not your imagination. Risk and regulation are ramping up, and customers now expect proof of security just to do business. That's why VANTA is a game changer. Vanta automates your compliance process and brings compliance, risk, and customer trust together on one AI-powered platform. So whether you're prepping for a SOC 2 or running an enterprise GRC program, VANTA keeps you secure and keeps your deals moving.
Starting point is 00:37:33 Instead of chasing spreadsheets and screenshots, VANTA gives you continuous automation across more than 35 security and privacy frameworks. Companies like Ramp and Ryder spend 82% less time on audits with Vantta. That's not just faster compliance, it's more time for growth. If I were running a startup or scaling a team today, this is exactly the type of platform I'd want in place. Get started at vanta.com slash billionaires. That's vanta.com slash billionaires.
Starting point is 00:38:04 Ever wanted to explore the world of online trading, but haven't dared try? The futures market is more active now than ever before, and plus 500 futures is the perfect place to start. Plus 500 gives you access to a wide range of instruments, the S&P 500, Nasdaq, Bitcoin, gas, and much more. Explore equity indices, energy, metals, 4X, crypto, and beyond. With a simple and intuitive platform, you can trade from anywhere, right from your phone. Deposit with a minimum of $100 and experience the fast, accessible futures trading you've been waiting for. See a trading opportunity, you'll be able to trade it in just two clicks once your account is open. Not sure if you're ready, not a problem. Plus 500 gives you an unlimited, risk-free demo account
Starting point is 00:38:53 with charts and analytic tools for you to practice on. With over 20 years of experience, plus 500 is your gateway to the markets. Visit plus500.com to learn more. Trading in futures involves risk of loss and is not suitable for everyone. Not all applicants will qualify. Plus 500, it's trading with a plus. Billion dollar investors don't typically park their cash in high-yield savings accounts. Instead, they often use one of the premier passive income strategies for institutional investors, private credit. Now, the same passive income strategy is available to investors of all sizes,
Starting point is 00:39:33 thanks to the Fundrise income fund, which has more than $600 million invested in a 7.97% distribution rate, With traditional savings yields falling, it's no wonder private credit has grown to be a trillion-dollar asset class in the last few years. Visit fundrise.com slash WSB to invest in the Fundrise income fund in just minutes. The fund's total return in 2025 was 8%, and the average annual total return since inception is 7.8%. Past performance does not guarantee future results, current distribution rate as of 1231, 2025. Carefully consider the investment material before investing, including objectives, risks,
Starting point is 00:40:14 charges, and expenses. This and other information can be found in the income funds prospectus at fundrise.com slash income. This is a paid advertisement. All right. Back to the show. After shipping robots that work every day in the warehouse, what's one belief that you held earlier in your academic career that you've had to revise based on that?
Starting point is 00:40:37 Lots. I would tell you, one of the things that, you know, is very interesting is that you think, okay, I have this great new technology. That's the breakthrough that really solves an important problem. Therefore, I can rush out into commercial world and build a company around it. Well, it turns out that technology is only a very small core part. It enables, but then there's all these things that have to come around it that are equally, if not more important. And actually, when you go to the customers and you say, hey, we have this new AI thing, they're like, wait a second, I don't care about that. How much money is it going to save me?
Starting point is 00:41:10 That's all they care about. And that's business. That's business. Both my grandfather's your business. We were entrepreneurs and so was my father. So I grew up in these kind of environments and it's tough. It's tough out there. One grandfather was very successful in electronics and my other grandfather was in the housing
Starting point is 00:41:26 business, building homes. But my father struggled. He was a metalurgist and he had a company doing chrome plating and it was very, very difficult. And, you know, he was buffeted by things way behind. behind his control, like recession of the 70s actually really hurt his business very badly. So he struggled. So there's a lot of factors. And it has to do with competition and timing. What I would also say in industry is that, and this is going to come back to the data aspect, which is that you can do things in a lab that you think are, we've really explored the full
Starting point is 00:42:00 range of a problem. So let me give you this example. We're addressing the bin picking problem, remember? And we were dropping all kinds of objects in there. In fact, when people would come to the lab, they would visit and I'd say, well, you have your car keys? Drop them in here. I said, if the robot will pick it out, we'll keep the car. And then, but it would always do that. It was no problem picking out someone's car keys, right? And we tried it on all kinds of things again. Toys, we made 3D printed, weirdly shaped objects, all kinds of things we could think of. We tried to basically consider everything. And we were just trying to push the envelope. right? Well, the envelope was the key word because it turned out that one thing we didn't ever
Starting point is 00:42:41 really experiment with was bags. And bags are extremely common in shipping. Yeah. You probably if you know this, if you receive bags from your, from e-commerce, from Amazon or others, you get bags of all kinds of forms. Now, bags are often plastic or paper, but the problem, the issue with bags is they're loose. And so they have objects in them, but there's a lot of slack. and they tend to fold in interesting ways. So we weren't testing those really in the lap. That wasn't something that we would have thought about too much, but that's so much more common in real shipping.
Starting point is 00:43:16 So my point is we had to adapt all of our systems to the reality of the consumer market, which in this case is bags. And that was something we didn't have a lot of data on. So we had to adapt our systems to work on data on real bags. And real bags are very difficult to have. actually even simulate and model because they fold again. And by the way, the folding matters because if you go to pick up with a suction cup
Starting point is 00:43:40 right on top of a fold, as you lift it, the fold will unfold and the suction will lose the suction and drop the object, right? So we started collecting data as we started putting these robots to work. So as our customers were putting these systems into production at Ambie, right? We also had an agreement that we would maintain these systems at high performance levels because we were constantly monitoring them. So we have a dashboard at the central headquarters in Berkeley where the team keeps an eye on every machine that's in operation out there. And so what we do is we get data on every single pick operation, what happens, how long it takes, whether it dropped the object, whether it was classified correctly, all kinds of things like that, right? And we use that so that we can immediately tell when the performance, let's say the picks per action,
Starting point is 00:44:27 performance, that's how it's often measured, drops. We can spot that early and say, and we call the company and we say, what's going on? Did something change? Did a camera get knocked? Is the suction cup getting worn? And so we're constantly on top of it. Part of it is that that's a source of big pride for us. We really customer focused and we want to make sure our machines work completely reliably.
Starting point is 00:44:49 But the nice, amazing side effect of this is that we've been able to collect data from all this real systems in real environments over the last four years. And we now have elapsed on 22 years of robot data. So remember I talked about the 100,000 years? Yeah, yeah. Now, we have 22 years, though, and it's started, okay? But it's real robot data. It's extremely valuable. It's a high quality. It's a gold standard for data. And so we're now using that to refine our systems and that make them better and more higher performing, more reliable, but also allowing us to now branch out into new related types of products. So we now have, we introduce a new product called Ambistack,
Starting point is 00:45:34 that stacks boxes very efficiently, very densely. And that's a new product that we sold out our first batch of these systems here. Amazing. On this idea of robot data or the covering this 100,000 years, gap that you're talking about. For a company that would be trying to overcome this because the data just isn't there, are they just having to construct a bunch of physical real world going back to the hand example? Would they have to have a bunch of physical hands with just a bunch of physical objects to then just be doing this? Or is this something that you think we could simulate
Starting point is 00:46:11 in a virtual environment to accelerate that speed or kind of a combo of both? Good. So for grasping, It turns out simulation is it works pretty well because there you just need to know the geometry of the environment fairly accurately. And then the geometry of the object and the gripper. And then you can actually model that fairly well. Now grasping is just lifting an object off the table, okay? We're out of a bin. That's very different than tying the shoe that we talked about earlier. There, it turns out that we can't simulate that so well.
Starting point is 00:46:44 As I mentioned, we don't know how to model and simulate the deformations. the minute forces that are going on in the process of interacting with that object. So that's a challenge. This is a little nuanced. And I know that your audience might say, what is Goldberg talking about? He said, this couldn't be solved. Now he says it can be solved. Well, it depends.
Starting point is 00:47:04 There's certain categories of problems that can be addressed. And I think that picking objects out of a bin is something we've made an enormous amount of progress in the last five years. So I'm very optimistic about that. I think we're getting faster, more reliable. and those systems are, you know, that's the cutting edge of robotics. And it's real. But then tying shoes and doing things around a home, by the way, or in a factory
Starting point is 00:47:29 where you're actually trying to put together, you know, electronic parts or car bodies or installing upholstery and wiring inside a car. These are extremely difficult, by the way. And they're even in Detroit or anywhere in the world, they're still humans doing those jobs because they're very, very hard. So those are hard to simulate. And I do think everything's pointing toward this deformation is a key obstacle to doing it. And I've talked with people who are physicists and experts in deformation.
Starting point is 00:47:59 And they agree this is a very, very hard problem. We don't even understand the physics of friction and deformation very well. Interesting. You've said your views on AI creativity have changed. Walk us through some of the timeline and what's changed and just kind of, your overall opinion today. Okay, well, on a very different note, I have been working as an artist in parallel with my work as a researcher and engineer. You know, I like to say my day job is teaching at Berkeley and running a lab there. But I have another passion, which is making art.
Starting point is 00:48:33 And I've worked on this for almost the same amount of time. And I make installations and projects. We did a project called the Telegarden, where we had a robot that was controlled by people over the internet and the robot could tend to garden, a living garden. We put this online in 1995, which was the very early days of the internet. And I'm very proud of that project because it stayed online for nine years. Huh. 24 hours of the day, people can come in and explore this garden and plant seeds and water them. So it was a very interesting. It was an artistic project, but it was also an engineering proof of concept. And it had to work reliably. And so, you know, really pushed us. I sometimes say people think doing engineering is hard. Try art. It's really
Starting point is 00:49:21 hard because you have to deal with the public and they're going to interact and do all kinds of crazy things. So we had to really spend a lot of time designing that system. But I continue an interest in art and I have a new show coming up. It's a joint project with my wife, Tiffany Slane, who's an artist, and she and I are collaborating on an exhibition that's going to open in San Francisco in January 22nd. Okay. All right. So this is a big passion of mine. And it's using technology like AI and robots to ask questions about technology. And I'm very interested in this contrast between the digital and the natural world.
Starting point is 00:49:59 When they seem very symmetric and similar, but there's very profound differences between them. So that's what I think about or I try to express in my artwork. And so your question about the creativity. So I always said, you know, AI won't be creative. creative in the sense that it will, you can ask it questions, but it's not going to actually come up with original ideas. But I actually have shifted my view on that. And I give this example where I asked chat GPT in the early days, hey, give me 100 uses for a guitar pick. I just thought it would get, you know, it would start repeating the same thing over and over again.
Starting point is 00:50:35 And it did, you know, it started with a screwdriver to scrape by itself, a windshield, things like that, which are all made sense. But then it started listing these, like, as fast as I could read them or faster. And then it came up with one that I was like, it was at a miniature sale for a toy boat. And when I saw that, I was like, oh, my God, that is a genius idea. And I would not have thought of it. And I, you know, immediately, when you see something that's original and creative like that, you spot it and you say, ah, why didn't I think of that? Those are those rare ideas. And AI is capable of that now. And so it's very exciting. Yeah, it is exciting.
Starting point is 00:51:13 It's super exciting. So I'm not negative about AI at all. I love it. I use it. I advocate for it. My daughters, my wife, everyone uses it. And so I'm 100% for it. I do think it's going to help with robotics.
Starting point is 00:51:27 But the question is, is it going to do everything that people are hoping? And that's where I hope that this conversation, Preston, will put things into context for your audience. So I don't know if you're going to like this question or not, but I'm going to throw over because I'm curious what you would think of this. So Figure AI recently sold their humanoid robot to put into the home and there was a lot of pushback from, you know, at least the comments that I saw online that this was just a giant marketing scheme or maybe they're trying to raise their next round. I don't know. But for the audience, I'll
Starting point is 00:52:00 just kind of frame it. It's a humanoid robot. All the demos that I've seen to date are extremely suspect as to its ability to like actually do anything in the home. When you dig into it more, they were using, you're putting this thing in your house, and then I guess that it has this ability to like go back to a human that would actually be manipulating the robot inside the house, which I think has all sorts of security, privacy issues, and everything else, right? But the reason I bring this up is because I don't know if he's, I'm pretty sure he's the founder and CEO of the company, he was suggesting that in order for AI to really start to accelerate its learning, that it needs to start being embodied into the physical form
Starting point is 00:52:43 and to put itself into a challenging learning environment. And what he means by some of this, and can correct me if I'm wrong in the way that I'm describing it, but what he's getting at is there's all these ambiguous situations that happen in the household with respect to social dynamics, the way that the family would interact and what they would ask of the robot, like, hey, go get me a cup of water and then the person who's asking for it always likes it half full or they like it warm or they like it cold or whatever. And so that learning that the robot would be forced to kind of undergo from a social dynamic would assist in its ability to get smarter collectively because I'm sure all this information is then going back up into the mothership
Starting point is 00:53:27 and getting networked. But his argument is the point of my question, which is do you also agree that for AI to kind of take this next step or this next quantum leap from where it is today, that it really needs to be able to immerse itself and basically partition itself into the physical form. Well, I think that is helpful. And certainly understanding the dynamics of human interaction, especially in a home, is the social dynamics are very, very subtle. And as you said, very important. Just understanding tone of voice, gestures, like my daughter will say, you know,
Starting point is 00:54:09 does this, right, which is don't bother me. She's a teenager, okay. Or just rolls her eyes, right? Like that subtle. Oh, yeah. The subtle body language is super complex when you think about it. Super complex. We pick up on it in a myriad of ways we don't even recognize, right?
Starting point is 00:54:26 Like I can pick up if, one thing I always notice is when I'm teaching, I can pick up if students are starting to lose interest or get tired or bored. Yeah, I just feel it. You know, I look around, but I'm always watching. That's why it's too tricky to teach online. But all these things are very nuanced. Body language can tell you a lot of what's going on. And just interpreting what's the dog doing and what's the dog, you know, how's the dog feeling? Right.
Starting point is 00:54:51 There's all a lot of nuance there. So all that is you're going to, you have to be in real homes to be able to do that. And I think that's actually, that makes sense. I'm not opposed to having, let's say, a humanoid in a home that might be helpful for doing certain things. Like maybe fetching water or being able to pick up things around the house. Remember grasping? I said that is actually something I think robots can do. So if you said, hey, you know, pick up all the things that are on the floor.
Starting point is 00:55:21 We would all like that. We have a Roomba, you know, you mentioned earlier, the vacuum cleaners. But the next step is to be able to actually pick things up and put them away. Yeah. And that I think we can get there. I do. I actually think that's going to come. That can happen in the next decade.
Starting point is 00:55:36 And it's very valuable because, by the way, if you're a senior citizen, you really want things off the floor. And if you're a young parent, you have a lot of kids, you have kids, or if you have a teenager, there's like, can you clean your room? You know, it would be great to have a robot go in and just clean off the, pick up all the clothes. We call that the teenager's problem, by the way. We have a paper on this. Okay, okay. Which is how to get a robot to efficiently pick up clothes. And it's not the obvious thing because, you know, if you just program a robot and go in and pick up, it'll pick up one sock, take it to the bin, take it to the bin, take it to the bin.
Starting point is 00:56:07 You actually need to be able to pick up lots of socks together. And so how do you do that? That's called multi-object grasping. It's a very complex and nuanced topic. And so we're studying that in the lab. So just coming back to your bigger point, I think that robots will do something is useful in the house. I think that's possible. it could be useful for security.
Starting point is 00:56:26 Also, maybe in some form of companionship, somewhere down the road, or as you get older, and I can appreciate this more and more, that I might want to have a robot that might help me, you know, shower or get changed or help me get out of bed in the morning. I think that would be nice. I'd rather have that than a stranger in my house. Let me put it that way. Right. I think you can relate.
Starting point is 00:56:48 Just not one that's networked back to some other person at the controls. Yeah. Yeah, right. I mean, well, the privacy issues are huge. Yeah. And that's something a lot of engineers don't appreciate. I faced this at Berkeley a few years ago where I was talking about privacy and I made an art project about privacy and we had cameras, surveillance cameras. We did a whole installation about this. And some of my friends were like, I don't care about privacy. I have nothing to hide. I said, oh, really? I said, okay, can I see all the letters of recommendation you wrote for
Starting point is 00:57:15 the last 10 years? Oh, no. You know, I'm not going to share those. I said, okay, how about all the research proposals that you're working on? Oh, no. I can't share it. Right. And so all kinds of stuff that you don't want to share. It's not that you're hiding, you know, it's not even doing something criminal or embarrassing, but you just, there's a lot of stuff you don't care to share because it's important and it's confidential.
Starting point is 00:57:35 So, same as in your home. You know, you just, it's not that I'm going to, you know, it's going to catch me naked, but in the morning, you know, a bad hair day and I don't necessarily want that to be transmitted widely. So it's all kinds of things like that. So I'm not opposed to humanoid robot. I think it's going to be interesting to see what. happens in the next few years, that we will probably start to see these. It'll be very
Starting point is 00:57:56 interesting to see that roll out from figure. And Burnt is very, you know, he's a very compelling businessman, like Elon, you know, he has a lot of optimism, a lot of confidence, and he's definitely building something that's working to some degree. So it'll be interesting to see. And, you know, I'm not a naysayer. I'm not saying that all this is going to fail. I just say that be patient. Yeah. It's going to take longer. The real science fiction stuff is going to take longer than we think. Last question I got for you, Ken, what's the most exciting or surprising thing that you've seen in the lab or just in the space in general that you were almost gasped when you saw it
Starting point is 00:58:32 in the past call it year? Okay. So actually, I have a good answer for that. You know, I'm so proud of Ambie for being able to sort packages around the clock at very high speeds. Right. But I recently saw a company called Dinah, Dina Robotics. Okay.
Starting point is 00:58:45 And I'm friends with the founders. So I'm maybe slightly biased, but I have to tell you, they, they, they, they, they, demonstrated folding napkins with a robot. Okay. And they did it for 24 hours. So they just had the camera set up and they had, and now this is, by the way, just two grippers. Two grippers.
Starting point is 00:59:01 Okay. Right. No head, but it has cameras, but there are two grippers. Basically, folding a stack of napkins over and over again. And they did it for 24 hours and it showed you the whole process. Now, that to me, as a roboticist, is a big deal because they were able to do it fairly fast, reliably. The napkins would, you know, often get tangled up, and it would figure out how to untangle them and keep going. And the folds were actually pretty nice.
Starting point is 00:59:24 So that's impressive. And then I got to see a live demo of the new version of that, which can now fold shirts. And it worked really well. I saw this in a, they had a booth in a conference in Korea in September, and it was folding shirts as it just round the clock. And it was fantastic. Even you could bring your own T-shirt and it would fold it. So that to me, that's very exciting. That shows, and again, it's a specific task.
Starting point is 00:59:49 I do think we're going to make progress there. And by the way, everyone wants to have something full their clothes. That's for sure. That is for sure. I'm so bad. I'm so bad I got one of those little folding. Oh, you do. Yes, I do because I'm so bad at folding it.
Starting point is 01:00:04 But when I use that, I'll actually, you know, do it. Oh, right. You have a folding board. Okay. Yes, I do. And I'm sure that you're going to be a great customer for this. But you know, you have high standards, right? because you want the things just right.
Starting point is 01:00:14 And that's where it gets tricky, right? But the Dino Robotics guys, it's Jason and Lyndon, who are the leaders of this company, they really are pulling something off. And I think it's very interesting to keep an eye on there. And a lot of the other robotics companies are now trying to emulate that, which is to show one task, one special task, doing it very reliably, making coffee or folding boxes. That's really exciting.
Starting point is 01:00:36 And I think that is actually going to be important. Rather than trying to do general robotic do everything in a home, which is, I think, going to take a long, long time. But if you get it to do certain tasks, like folding laundry, or maybe making coffee, certain things like that, that's a way sort of bottom up from certain tasks, learn other tasks rather than top down. I think that's going to be a path to getting progress. But again, it's going to take longer than most people think.
Starting point is 01:01:02 Ken, I can't thank you enough for making time. Your expertise is just off the charts. And I know the audience is going to love this. If you have anything else you want to highlight or point people towards that we can put in the show notes, you know, just let us know what that is. Okay, I'll send you some links because I have a bunch of things online. I can link to that follow up on this in various ways. And no, I think it's great.
Starting point is 01:01:24 Thanks for doing this. I'm really glad you're also going to connect with my good friend, Rich Wallace. Yes, yes. Because he is fascinating. You know, he's a very, very original thinker and very, I would say, very much an unsung hero around chatbots. People don't know. But he was really a pioneer in. in doing this, you know, very early.
Starting point is 01:01:43 And he still has a lot of great, really interesting insights and ideas. So you'll appreciate it. Amazing. All right. Well, Ken, thank you so much for making time and coming on the show. My pleasure, Preston. Thanks for listening to TIP. Follow Infinite Tech on your favorite podcast app.
Starting point is 01:01:58 And visit The Investorspodcast.com for show notes and educational resources. This podcast is for informational and entertainment purposes only and does not provide financial, investment, tax, or legal advice. The content is in person. and does not consider your objectives, financial situation or needs. Investing involves risk, including possible loss of principle and past performance is not a guarantee of future results. Listeners should do their own research and consult a qualified professional before making any financial decisions. Nothing on this show is a recommendation or solicitation to buy or sell
Starting point is 01:02:27 any security or other financial product. Hosts, guests, and the Investors Podcast Network may hold positions in securities discussed and may change those positions at any time without notice. References to any third-party products, services or advertisers do not constitute. endorsements and the Investors Podcast Network is not responsible for any claims made by them. Copyright by the Investors Podcast Network. All rights reserved.
