No Priors: Artificial Intelligence | Technology | Startups - Sunday Robotics: Scaling the Home Robot Revolution with Co-Founders Tony Zhao and Cheng Chi

Episode Date: November 19, 2025

The robotics industry is on the cusp of its own “GPT” moment, catalyzed by transformative research advances. Enter Memo, the first general-intelligence personal robot, focused on taking on your chores to give back your time. Sarah Guo sits down with Tony Zhao and Cheng Chi, co-founders of Sunday Robotics, to discuss the state of AI robotics. Tony and Cheng speak to the challenges they faced while developing their technology, the innovative glove system employed to scale real-world data collection, and the impact of diffusion policy and imitation learning. Plus, they talk about their 2026 in-home beta program and why personal robots are only a handful of years away from mass deployment.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tonyzzhao | @chichengcc | @sundayrobotics

Chapters:
00:00 – Tony Zhao and Cheng Chi Introduction
00:56 – State of AI Robotics
02:11 – Deploying a Robot Pre-AI
03:13 – Impact of Diffusion Policy
04:29 – Role of ACT and ALOHA
07:02 – Imitation Learning - Enter UMI
10:38 – Introducing Sunday
11:57 – Sunday’s Robot Design Philosophy
15:05 – Sunday’s Shipping Timeline
19:02 – Scale of Sunday’s Training Data
23:58 – Importance of Data Quality at Scale
24:56 – Technical Challenges
27:59 – When Will People Have Home Robots?
30:48 – Failures of Past Demos
32:34 – Sunday’s Demos
36:53 – What Sunday’s Hiring For
39:10 – Conclusion

Transcript
Starting point is 00:00:00 Nobody wants to do their dishes. Nobody wants to do their laundry. People would love to spend more time with their family, with their loved ones. So what we believe in is that if the robot is cheap, safe, and capable, everyone will want our robots. And we see a future where we have more than one billion of these robots in people's homes within the decade. Thanks, Memo. Hi, listeners.
Starting point is 00:00:36 Welcome back to No Priors. Today we're here with Tony Zhao and Cheng Chi, co-founders of Sunday, makers of Memo, the first general home robot. We'll talk about AI and robotics, data collection, building a full-stack robotics company, and a world beyond toil. Welcome.
Starting point is 00:00:52 Cheng, Tony, thanks for being here. Thanks for having us. Okay. First, I want to ask, like, where are we here? Because classical robotics has not been an area of great optimism over time or, like, massive velocity of work. And now people are talking about a foundation model for robotics or a ChatGPT moment. Can you just contextualize, like, the state of AI robotics and why we should be excited? I would say, I think we're kind of in between the GPT moment and the ChatGPT moment, like in the context of LLMs. What it means is that it seems like we have a recipe that can be scaled, but we haven't scaled up yet. And we haven't scaled up so much so that we can have a great consumer product out of it. So this is what I mean by GPT, which is like a technology, and ChatGPT, which is a product. Yeah. So we're seeing
Starting point is 00:01:43 across academia, there's consensus around what's the method for manipulation, but everybody's talking about scaling up. It's like we know there are signs of life for the algorithms people are picking, but people don't know, if we have more data, like what happened from GPT-2 to GPT-3, what will happen. But we see a clear trend that, you know, there's no reason to believe that robotics doesn't follow the trajectory
Starting point is 00:02:06 of other AI fields, that, you know, scaling up is going to improve performance. Maybe even if you took a step back, like what was the process for deploying a robot into the world like 10 years ago, like pre this set of generalizable AI algorithms? Like, why was it so slow as a field? Yeah, so previously, you know, classical robotics had this sense-plan-act modular approach
Starting point is 00:02:31 where there's a human designing the interface between each of the modules, and those interfaces need to be designed for each specific task and each specific environment. In academia, that means for every task, that means a paper. So a paper is you design a task, design an environment, and you design interfaces, and then you produce engineering work for that specific task. But once you move on to the next task, you throw away all your code, all your work, and you start over again. And that's also kind of what happened in industry. And so for each application, people build a very specific software and hardware system around it, but it's not
Starting point is 00:03:02 really generalizable. And therefore, it just feels like we're running in loops. We build one system and then we build the next one. But there's like no synergy between them. And as a result, the progress has been somewhat slow. I feel like that's a good segue into some of the amazing research work that you guys have contributed over the last five years to the field. Should we start with diffusion policy? What was the impact of that? Yeah, so diffusion policy is a specific algorithm for a paradigm called imitation learning. That's really like the most intuitive way of, you know, how to use machine learning for robotics. So you collect paired data, action and observation, of what the robot should do.
Starting point is 00:03:38 You use that to train a model with supervised learning, and then the robot does the same thing. The problem is that, in the field, it's known to be very finicky. So when I talked to researchers, when I started in the field, people were like, the researcher themselves, the specific researcher, needs to collect the data so that there's exactly one way to do everything. Otherwise, either the model they train will diverge or the robot will behave in some weird way.
Starting point is 00:04:03 And the diffusion model really allows us to capture multiple modes of behavior for the same observation in a way that still preserves training stability. And that really kind of unlocked more scalable training and more scalable data collection. So it doesn't have to be you personally wearing, you know, a teleop setup in order to make a robot learn.
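To make the idea concrete: diffusion policy keeps the same paired observation/action supervision as ordinary behavior cloning, but instead of regressing a single action it learns to denoise noised action chunks conditioned on the observation, which is what lets one observation map to several valid behaviors without destabilizing training. Below is a minimal, hypothetical sketch of that training objective; the network, dimensions, and noise schedule are illustrative assumptions, not Sunday's or the paper's actual code.

```python
# Minimal sketch of a diffusion-policy training step (illustrative, not the paper's code).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, CHUNK, T_STEPS = 64, 7, 16, 100  # hypothetical sizes

class NoisePredictor(nn.Module):
    """Predicts the noise added to an action chunk, conditioned on the observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM * CHUNK + 1, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM * CHUNK),
        )

    def forward(self, obs, noisy_actions, t_frac):
        x = torch.cat([obs, noisy_actions.flatten(1), t_frac[:, None]], dim=-1)
        return self.net(x).view(-1, CHUNK, ACT_DIM)

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
betas = torch.linspace(1e-4, 0.02, T_STEPS)      # simple linear noise schedule
alpha_bar = torch.cumprod(1 - betas, dim=0)

def train_step(obs, actions):
    """obs: (B, OBS_DIM); actions: (B, CHUNK, ACT_DIM), a demonstrated action chunk."""
    t = torch.randint(0, T_STEPS, (obs.shape[0],))
    noise = torch.randn_like(actions)
    ab = alpha_bar[t][:, None, None]
    noisy = ab.sqrt() * actions + (1 - ab).sqrt() * noise  # forward diffusion
    pred = model(obs, noisy, t.float() / T_STEPS)
    loss = ((pred - noise) ** 2).mean()                    # denoising (noise-prediction) loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because the model learns a distribution over action chunks rather than a single averaged action, two demonstrators who solve the same scene differently no longer pull the policy toward an invalid in-between motion.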
Starting point is 00:04:23 Yeah, yeah. So, like, we can have multiple people, sometimes even untrained people, collecting data, and the result will still be great. Where do ALOHA and ACT play into this? Yeah, so these two papers are actually, like, super close to each other. They're, like, one or two months apart. That's actually how Cheng and I know each other. It was like looking at each other's papers, and we met on Twitter, I think,
Starting point is 00:04:43 when Cheng was back at Columbia. Before ALOHA, I think the typical way people collected data was with a teleoperation setup with a VR headset. And it turns out to be very unintuitive to do. And it's hard to collect data that is actually dexterous. What ALOHA brings is a very simple and reproducible setup. So it's very intuitive. Sorry, in terms of, just for most people who haven't worn a teleop setup, is it the lag?
Starting point is 00:05:09 Is it like just, you know, how should I compare it to like playing a video game or something? Yeah. I think ALOHA makes it feel more like playing a video game. Normally it feels kind of disconnected, that you're just like moving in the free air and the robot is moving with some delay, but ALOHA reduces that delay by a lot, and that contributes to the kind of smoothness
Starting point is 00:05:29 and how fast a human can react. Like, once we get those really dexterous data, what it allows us to do is to investigate algorithms that are actually solving things that are difficult. In this case, it's sort of the introduction of transformers into robotics.
Starting point is 00:05:48 And there was a long period of time that I think robotics was stuck with three-layer MLPs and ConvNets, and as you make it deeper, it works worse. But it turns out that once you have very strong and dexterous datasets, you can just throw a transformer at it and it works quite well. Actually, like, just in terms of the progress of the industry over time, transformers didn't make sense without a certain level of data collection capability. Okay. And also the other systems around it, for
Starting point is 00:06:16 example, action chunking, which is to predict a trajectory as opposed to predicting single samples of actions. All these things kind of combined to make dexterous, bimanual tasks more scalable. Why is chunking important here, if I think about, like, just the analogy to LLMs and, like, text sequence prediction? I think it just kind of throws the model off if you're trying to force it to react every millisecond. That's not how humans act. We perceive and we can actually move quite a bit without looking at things again. And that turns out to make the motion a lot more consistent and overall performance to be a lot better.
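A toy sketch of what action chunking looks like at execution time, assuming a hypothetical `policy` that maps one observation to a chunk of K future actions and a hypothetical `env` with a reset/step interface; none of these names come from the episode.

```python
# Illustrative only: execute a predicted chunk of actions before re-querying the policy,
# instead of predicting one action per control tick.
K = 16  # chunk length: actions predicted from a single observation

def run_episode(policy, env, max_steps=500):
    obs = env.reset()
    t = 0
    while t < max_steps:
        chunk = policy(obs)               # shape (K, act_dim): one forward pass
        for action in chunk[: K // 2]:    # execute part of the chunk, then replan
            obs, done = env.step(action)
            t += 1
            if done or t >= max_steps:
                return
```

Committing to several actions per prediction is the code-level analogue of the point above about moving quite a bit without looking at things again.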
Starting point is 00:06:56 And you discovered that actually transformers architecturally did apply to robotics. Cheng, you felt then that data collection was still a problem. So enter UMI. Yeah. So after ALOHA and diffusion policy, I was super excited about imitation learning. But at the time, both of us were still doing teleoperation, and that just felt super limiting. I think the problem is that, you know,
Starting point is 00:07:19 in your setup at the time, like a teleop setup, it takes a PhD student a couple of hours to set up in the lab. It pretty much restricts data collection to a lab. But in order for the robot to actually work as a product, it needs to work in the wild, in unseen environments, and that requires data that is also collected in the wild. And at the time, I was thinking, okay, is there a way we can collect robotic data
Starting point is 00:07:43 without actually using a robot? That, like, forced me to think, okay, what's the actual most essential part of robotics data? And after diffusion policy and ACT, actually, the paradigm is kind of simple. You just need paired observation and action data. In our case, the observation is the video clip. The action is the movement of your hand, plus, you know, how the fingers move. I realized that all of this information you can get from a GoPro. You can track the movement of the GoPro in space, and you can, you know, track the motion of the gripper
Starting point is 00:08:14 and also the fingers through images as well. And that's why I built this UMI gripper. It's 3D-printed. At the time, the project had three students. Like, we just took the grippers everywhere. You know, I think it was two weeks before the paper deadline. Like, every time we went to a restaurant, before the waiter came, we just collected some data.
Starting point is 00:08:33 And very quickly, we got, you know, I think 1,500 video clips of this, like, espresso cup serving task. And that turns out to be one of the biggest datasets in, you know, robotics, and it's simply by three people. And that's where the power kind of shines. And then that amount of data allows us to train the first end-to-end model that can actually generalize to unseen environments.
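As a rough picture of what "paired observation and action" from a GoPro might look like, here is a hypothetical data layout; the field names and the relative-pose action scheme are illustrative assumptions, not the actual UMI format.

```python
# Hypothetical sketch of a UMI-style demonstration record (not the real data format).
from dataclasses import dataclass
import numpy as np

@dataclass
class UmiStep:
    rgb: np.ndarray           # (H, W, 3) wrist-camera frame: the observation
    gripper_pose: np.ndarray  # (4, 4) gripper pose recovered by tracking the camera in space
    gripper_width: float      # finger opening, read from the images

@dataclass
class UmiTrajectory:
    task: str                 # e.g. "serve espresso cup"
    steps: list[UmiStep]

    def action_chunks(self, horizon: int = 16):
        """Actions expressed as relative gripper poses over the next `horizon` steps."""
        for i in range(len(self.steps) - horizon):
            ref_inv = np.linalg.inv(self.steps[i].gripper_pose)
            yield [ref_inv @ s.gripper_pose for s in self.steps[i + 1 : i + 1 + horizon]]
```

Nothing here requires a robot at collection time, which is the point: a camera, a tracked pose, and a gripper width are enough to form observation/action pairs.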
Starting point is 00:08:57 So we can push the robot around Stanford. Actually, Tony was there as well. You know, push the robot arm around the Stanford campus, and then anywhere, you know, the robot can serve you a drink. Yeah, I think that is the moment I was like, hey, maybe we should start the company. This is actually working so well. I remember, like, just following Cheng around.
Starting point is 00:09:14 Was there a time that it doesn't work well? Yes. I think the only exception I saw was when it's under direct sunlight. Yeah. And I think the reasoning was, like, over that whole two, three weeks of data collection, those two weeks it was all raining. So there's no sunlight data.
Starting point is 00:09:29 So like it fails. That also demonstrates the importance of distribution matching. So in order for a robot to work in a sunny environment, it must have seen sunny environments in training data. Yeah, it's really interesting because I remember when I first met you guys, it was like you spent like, I don't know, $200,000 across all of your academic research, and yet the scale of data collection as translated to model capability is leading, right? So it's very interesting that, you know, we look at where
Starting point is 00:09:58 we are, maybe going back to Tony's point of scaling and massive capital deployment, but that entire paradigm actually wasn't relevant before people realized, like, you should train on all of the internet data. And we just don't have that in robotics. So the entire field is just blocked on having any scale of data that's relevant. Yeah. I think these days there are still, like, so many debates about what is even the right way to scale. There are world models, there are simulations, there is teleoperation.
Starting point is 00:10:26 There are like all these new ideas. And I think this is the sort of area where we really want to innovate, where we want to differentiate. We want to find out something that is both high quality and scalable. And then you guys, you decide to start a company, pushing this cart around Stanford. Tell me about that decision, and congratulations on the launch and sort of the direction and team you've built.
Starting point is 00:10:48 Yeah, it's a very interesting journey. I remember in the beginning, it was just the two of us, in Cheng's apartment, on his desk. We clamped a robot there and tried to do some tasks. And it soon became, I think, an eight-person team
Starting point is 00:11:02 towards the end of 2024. And now we're at around, like, 30 to 40 people. We're not the best at everything, right? But starting a company allows us to find people who we really love working with, and then bring all the expertise together, from mechanical engineering, supply chain, software engineering, controls, to build a system together that is not like a demo, but a real product.
Starting point is 00:11:27 You've built this amazing team. What are people actually signing up for? What's the mission of Sunday? Yes. It is to put a home robot in everyone's home. I think there is a lot of AI trying to make you more efficient during work, but there is not enough AI that actually helps you with all these mundane things that are not creative, that really have nothing to do with what makes us intensely human.
Starting point is 00:11:49 What's ideal for people to spend more time on is actually their hobbies, their passions, as opposed to spending more time doing chores. So if you guys are going from, you know, these amazing research breakthroughs to we're actually going to ship a home robot, you know, that's a product, you have to talk about cost and capability and robustness. Like, what's the design philosophy? As these AI models become more capable, and as hardware costs continue to go down, home robots,
Starting point is 00:12:19 all kinds of robots, will be everywhere. So if we start from the most surface level, which is the design of the robots, when we design it, we think about what should the robot look like if it is ubiquitous? You need to see it every single day. What should it look like?
Starting point is 00:12:37 And what we end up with is that we really think the robot should have a face. It should have a cute face, and it should be very friendly. So instead of, like, a Terminator doing your dishes, we want the robot to feel like it's out of a cartoon movie. And then a huge decision is, like,
Starting point is 00:12:54 how many arms should the robot have? Should it have, like, four arms? Should it have one arm? Should it have five fingers, two fingers, three fingers? It's a huge space. Why isn't the obvious answer that it should just be, like, a full human arm?
Starting point is 00:13:07 I think the core motivation for us is how we can build a useful robot as soon as possible. So whenever we see something that we can accelerate with simplification, we'll go simplify that. So one example of that is the hand that we designed, which has three fingers. We kind of combined three of the fingers that we have together.
Starting point is 00:13:28 And the reasoning there is just that most of the time when we use those fingers, we use them together. Be it grasping a handle, be it opening the dishwasher. So it really doesn't make sense to multiply the cost by 3x to have them separated into three, when we can do one with most of the benefits.
Starting point is 00:13:48 So this is how we think about the whole robot as well. It's kind of with the constraint that we're building a general-purpose robot that can eventually do all your chores, and we'll simplify everything we possibly can so that the robot can be as low-cost and as easy to repair as possible. Yeah, I just want to add a little bit more
Starting point is 00:14:05 to the architecture and mechanical design. Traditionally, most robots are designed for industrial use cases, and the robots are very fast, they are very stiff, and they're very precise. The reason is because all the industrial robots are blind. So they're blindly following a trajectory that's programmed by someone. It's not reacting to perception. Correct, correct.
Starting point is 00:14:23 But because of the breakthroughs we have in AI, now the robot has eyes. So it can actually correct its own mechanical and hardware inaccuracies. So that kind of opened up a different design space. And intuitively, it should be like, I can't tell you exactly what the distance is here on a millimeter scale, but I'm going to get to the cup because I can see it.
Starting point is 00:14:49 that's cheap, that's compliant, but they're imprecise. But because of the AI's algorithms and systems we build, it allows to build a robot that's mechanically inherently safe and compliant, while simultaneously be able to achieve the sufficient accuracy we need for the home tasks. Where are we in that timeline? You said we're between GPT and chat-GBT. And so, like, when do consumers get chat GPT and when will you guys ship something? Yeah.
Starting point is 00:15:14 It's actually a really exciting time because, like, we have so many prototypes internally. What we will do next year, 2026, is actually start doing beta programs. We'll put these robots, all kinds of different ones, into people's homes and see how they react to them. That will be when we learn the most about, like, whether people want to talk to a robot, whether people want to have their robots maybe teach their kids some new knowledge about the world. And this will inform us what the eventual product should look like. Internally, we just have an extremely high standard of what is the minimal consumer product we want to ship. It needs to be extremely safe.
Starting point is 00:15:55 It needs to be extremely capable and low cost. Do you feel like you know something now that you didn't when you started the company? Absolutely. So I think at the beginning, I would describe it as, like, we saw light at the end of the tunnel: there are two axes, there's dexterity, there's generalization, and when we add more data, things work better. And what this company is about is the cross product of these two. How can we scale and have both dexterity and generalization? And this is something we're able to show in our generalization demo, which is, like, we can pick up these very precise, like, actual metallic forks and ceramic
Starting point is 00:16:35 plates with very high success rates. And honestly, this is not something that, like, we thought would work so easily just by having so much more data. Yeah, so I actually just want to expand a little bit, you know, the process was actually long and painful. So, yeah, there are so many issues; just scaling up a robotic system is very, very hard. There are mechanical issues, like reliability issues. There are, like, data quality issues that come out of the system.
Starting point is 00:17:02 In the beginning, I actually thought it was going to be much easier than this. But really, it takes time and effort to grind out all the little details for this to work. And also, actually, compared to teleop, it's much harder to get this system scaled up. But once it's scaled up, it's very powerful, very repeatable. So it is both harder than you thought it would be to get here, and you're further than you thought you would be. Yes. And I remember in the beginning, we were having this, like, funny conversation of, like, if we build this, someone can just, like, take our glove
Starting point is 00:17:32 and they'll build the same thing. Like, what moat do we have? Are we worried about that? And in the beginning, actually, we were a little bit worried here because we thought, like, you know, they can probably just replicate it. But as we go along the path,
Starting point is 00:17:45 it turns out things are so much harder than you thought they were. There are so many small details. Yeah, yes. And when you say scaling up the robotic system, you mean the data collection to training pipeline and the hardware itself. Yeah, so actually, like, for this to work at all,
Starting point is 00:18:00 you need the data collection system. Yeah. You need the robotics and control system to be able to deliver the hand to where we want it to go. And you also need the data filtering pipeline and data cleaning pipeline and the training pipeline. And all these things need to be iterated together. So we've actually gone through several loops of these. It's kind of hard to imagine, without having a full-stack team in-house, how this can even be done. Yeah.
Starting point is 00:18:22 The glove we're using right now is what we call, like, V5. And from V0 to V5, each version has around 20 iterations. Okay. And so 100 total. Yes. Yes. And also, like, just when you make these at scale, right now we have more than 500 people using these gloves in the wild. Like, all the things that could go wrong will go wrong.
Starting point is 00:18:44 For example? They did. Yes. For example, like how things are assembled. If you don't specify exactly how it should be done, people will assemble it in creative ways. And the creativity doesn't help us here, because we really want the data collection device to be extremely precise. So you guys can't obviously know everything that's happening in every company in academia and industry, but from what you know, how would you compare the scale of training
Starting point is 00:19:13 data that you have today relative to the industry? At this point, we have almost 10 million trajectories collected in the wild. And those trajectories are not just like, oh, pick up a cup. They're these long trajectories with, like, walking, with navigation, and then, like, doing these long-horizon tasks. Tony, as you mentioned, like, it's an open question, actually, what the right way to scale data up is. And so there are strong theories around teleop, around, like, pure RL, around video and world models.
Starting point is 00:19:47 Like, how did you think about all of these? Yeah. So from our perspective, actually, it's somewhat surprising. So in the beginning, we worried that, you know, the data from the glove, or UMI-like data, has higher quantity but lower quality compared to teleop. Because for teleop, you're using exactly the same hardware and software
Starting point is 00:20:03 stack between training and testing. It's perfectly distribution-matched. But what we realized is actually, you know, this glove form factor encouraged people to do more dexterous and more natural movement. And those actually result in, like, more intelligent behavior on the modeling side. And
Starting point is 00:20:19 in terms of, you know, data quality, we don't really see a difference, like a gap between teleop and glove data, after we did all the engineering. Because, like, apparently there is a mismatch, right? In the camera frame, there's a human instead of the robot. And there are just a lot of things that we need to do to kind of convert the human data one-to-one
Starting point is 00:20:47 to, like, as if it is robot data, and have the model not be able to tell the difference. Yeah, and that kind of relies on, again, the whole full cycle between hardware and software. What about RL? We see a lot of great promise for RL in locomotion, and we think that will continue to be true for locomotion. So what we see, it really feels like RL as a method is very powerful, but it is much less sample efficient
Starting point is 00:21:13 compared to imitation learning. And we see that work great in environments that are easy to simulate. For the case of locomotion, you only need to worry about rigid body dynamics and rigid body contact between the robot and the ground. And, you know, because you engineered the robot, you know everything. But for manipulation, it's kind of hard for us to imagine,
Starting point is 00:21:33 like, actually having the same amount of diversity and the distribution of real objects, in terms of matching both appearance and physical properties. And we think that it's going to be challenging compared to glove data collection and teleop. Yeah, I think it's really about which method can get us there faster. There might be different methods that will eventually get there. For example, like, you know, simulation, world models, right?
Starting point is 00:22:00 And, like, it's almost a tautology to say that if I have a perfect world simulator, anything can be done there. Like, as long as you can do it in the real world, you can do it in a simulation, and you can, like, cure cancer in a simulator, right? But what it turns out for robotics is that some things are harder than others, and it really depends on the problem itself. So in the case of locomotion, as I mentioned, all we need to model in a simulator are point contacts with a somewhat flat ground. Like feet.
Starting point is 00:22:34 Yes. But sort of the behavior we want out of it is actually very difficult to model. Like, it's all these reactive behaviors, that when you feel like your leg is hitting something, you should, like, retract and step again. These are very, very hard to describe or try to learn from demonstrations directly. But in the case of manipulation, I think the difficulty is flipped. It's a lot easier to capture the behavior itself, and it's a lot harder to simulate the world. For example, if you were to grasp a transparent cup with some orange juice in it, it's ridiculously hard to simulate how, like, your hand deforms around the cup and how all those ripples,
Starting point is 00:23:21 how, like, the color of the juice results in the rendering and what the policy ends up seeing. Simulating that is very expensive and difficult. But all we need to learn is just to, like, get your hand to be in front of the cup and then close with the appropriate amount of force. And that's actually very easy to learn. That's why, like, we see so much success of imitation learning in the case of robotic manipulation: because the behavior itself is actually not as hard as simulating the world.
Starting point is 00:23:56 And that's why we see faster progress there. Is there anything that you have changed your point of view on in data over the last year? Like, is there one thing? I wouldn't say changed, but just that data quality really matters. I always knew data quality matters, but once you scale it up, like, it really matters. And then because, you know, just, like, the diversity of behavior you experience in the wild, it's very hard to control. And hardware failures are hard to control. You need to constantly monitor them. You just spend a huge amount of engineering effort
Starting point is 00:24:29 just to make sure that the data is clean. Yeah. And also building all those automatic processes, right? We have our own way of calibrating the glove before we ship it out. And we have this whole software system to catch if something is broken on a glove, and we can detect it automatically. So the importance of data quality kind of translates to all these repeatable processes, and we don't need a human to be staring at the data to know that something is wrong.
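A hypothetical sketch of the kind of repeatable, automatic checks described here; the thresholds and the pose-based heuristics are illustrative assumptions, not Sunday's actual pipeline.

```python
# Illustrative automatic data-quality filter for glove trajectories.
import numpy as np

MAX_POSE_JUMP_M = 0.05   # assumed max per-frame translation (larger suggests a tracking glitch)
MAX_SPEED_M_S = 3.0      # assumed plausibility bound on hand speed

def trajectory_passes_qc(positions: np.ndarray, dt: float) -> bool:
    """positions: (T, 3) tracked glove positions; returns False if the episode looks broken."""
    if np.isnan(positions).any():                 # dropped sensor readings
        return False
    deltas = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    if np.any(deltas > MAX_POSE_JUMP_M):          # tracking lost or the glove was bumped
        return False
    if np.any(deltas / dt > MAX_SPEED_M_S):       # implausibly fast motion
        return False
    return True
```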
Starting point is 00:24:58 When you describe the beta for next year, a lot of it sounded like, you know, we just want to understand behavior, like how people actually want to use it, and we can make some design decisions for the actual product. What technical challenges do you still see? So to me, I think there are, like, two kinds. The number one is really figuring out the training recipe at scale. We as a field just entered the realm of scaling
Starting point is 00:25:21 and we just got the amount of data that we need. I think now it's a perfect time to start to do research and actually figure out what exact training recipe we need to actually get robust behaviors. And I think we're in a unique position because of the amount of data and the entire pipeline we built around data. The second point, I think, is just that hardware is hard.
Starting point is 00:25:39 We're still pushing the performance envelope of the hardware. It's not really clear actually what is needed, what is necessary, for the hardware to be reliable. Because whenever the mechanical team builds a piece of hardware, the learning team will try harder to push it against the boundary. And then it will break at some point. But I think what's interesting in this company is that everybody's under the same roof. So immediately after something breaks, it goes straight back into mechanical design. And then we have another iteration, like, say, for the hand parts, very quickly.
Starting point is 00:26:11 Hardware is hard, but it is important. And I think it's a hard but right thing to do. And I think we as a field shouldn't avoid doing the hard things just because they're hard. Yeah, I want to echo Cheng's point about, like, first, the research. I think when there is data scarcity, it is really easy to come up with, like, cute, fancy research ideas that don't end up scaling very well. And this is why, like, when we built the company, we actually focused on the infrastructure and a scalable data pipeline and operations before we started to, like, really dive into research, which we only started to do, like, three months ago. I think we really want to avoid doing research
Starting point is 00:26:50 that doesn't scale and want to focus on things that contribute to the final product. The second point is, like, I think robotics is so intrinsically, like, a system, and right now there's not an existing general-purpose home robot out there, so we don't really know what the interfaces between the different systems are, like, what is even good. And in that case, if you're working with a partner, it's actually really hard for them to understand your standard of good, because your standard of good is changing all the time. This is why we are, like, building everything in-house in a more full-stack approach, where we build our own data collection device that is co-designed with the robot. We build
Starting point is 00:27:33 our own, like, operations team to figure out how we can most efficiently get the most high-quality data out. And of course, our own AI training team that makes the best use of this data. I think these are the things that are really not easy. It makes the company a lot harder to build, because right now there are suddenly, like, so many teams, and they need to all orchestrate together. But we believe it is the right thing to do. Okay, I'm going to ask you a few questions that require uncomfortable guesses now. But when will people be able to buy robots commercially for the home? Like, this is something we're really excited about,
Starting point is 00:28:08 because we have so many prototype robots in our office and we really want to get them out there. So the next step of our plan is to have a beta program in 2026. And what that means is that for people who sign up and that we select, they will have a real robot in their home, and it will start doing chores for them. And it's going to be a really interesting learning lesson for us, because we will see how humans interact with the robot.
Starting point is 00:28:34 We'll see what kind of things people just really want the robot to do. I think this will be before we actually ship it to the masses, because we just have an incredibly high standard for what we are willing to ship from a consumer experience standpoint. We want the robot to be highly reliable, we want it to be capable, we want it to be cheap. I think it really depends on the results of the beta program, which will decide when is a good time to
Starting point is 00:29:01 ship it. Is it 2027? Is it '28? All of those are possible. But it's not a decade away. No, it's definitely not a decade away. How much do you think it could cost? Right now, the prototype robots we have in-house, I think the cost ranges from like $6,000 to something like $20,000. And this is actually pretty interesting: the big difference here is not like, oh, we found a better actuator. They're using the same actuators. They're very low cost. But actually, it's the cladding of the robot. When you're trying to make them at low scale, it's just really expensive. Like, the claddings are, like, a few thousand dollars to make. But this is the type of thing that becomes cheap as we scale up. Because instead of, like, CNC, instead of hand-painting
Starting point is 00:29:47 them, it'll become injection molding. What we see is that as we get the scale to a few thousand units, we can drastically reduce the material cost, likely to under $10K. And what that implies is that when we sell the robots, the price will be somewhere around that. Okay, so you fast forward two, three years out. If you look, like, five years and beyond, and home robots are ubiquitous, like, what does life look like? How does it change for your average person? This is a different answer for everyone.
Starting point is 00:30:21 For me, like, I just really hate dishes. Like, in my sink, there are always four or five dishes that are somewhat dirty, out there, and it kind of stinks a little bit. And after a long day of work, it really doesn't feel good to come home and see a home like that. So I think the world we'll live in is... It's going to be cleaner. It's going to be cleaner. And I was just thinking about it as, like, the marginal cost of labor in homes goes to zero.
Starting point is 00:30:47 The last thing I want to make sure we do is, like, talk about demos, right? There are a lot of robotics launch videos today. It's been years since you saw an Optimus serving drinks at a bar. Why are those not available, and what is actually hard? I think the way I would put it is: make zero assumptions, no priors.
Starting point is 00:31:08 As in, if you see a robot handing one drink to one person, first ask the question of, is that autonomous or is it teleoperated? So this is the first thing. So we should look at the tweet and see what they say about it. And then,
Starting point is 00:31:23 does it show the robot giving another, slightly different colored cup to the same person or not? If they didn't show it, it means that the robot can literally only pick up that single cup and give it to that same person. When we look at demos, we tend to put our human instinct into it. We're like, oh, if it can hand a cup to that person, it must be able to hand a different cup to another person. Maybe it can also do our dishes. Maybe it can do our laundry. There's a lot of wishful thinking that we can have about it, which is what's great about robotics, that there's a lot of imagination, but I think when we look at demos, only index on things that are shown.
Starting point is 00:32:00 And that's likely the full scope of that task. I think another aspect is, at least for me as a researcher, I appreciate the number of interactions that happen in demos. Usually, the more interactions you have, like, every interaction, there's a chance of failure. So the longer the sequence is, the harder it actually is. So that's something we really emphasize here. And that's actually somewhat uniquely easy for us, because the glove way of data collection is so intuitive to people.
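To put a number on that compounding effect: if each interaction in a demo succeeds with probability p and failures are, for illustration, independent, a sequence of n interactions succeeds with probability about p^n, which is why long multi-step demos are much stronger evidence than a single grasp.

```python
# Illustrative only: how per-interaction success compounds over a long-horizon demo.
for p in (0.99, 0.95, 0.90):
    for n in (5, 20, 50):
        print(f"per-step p={p:.2f}, steps n={n:2d} -> whole-sequence success ~ {p ** n:.2f}")
```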
Starting point is 00:32:31 Yeah, it's really about, like, generalization and reliability. So can you explain the demos that you guys are showing? Yeah, of course. So we're showing basically three categories of demos. The first one, as you saw, is we have this whole, like, messy table. And what the robot does is clean up the whole table and, you know, dump the food into the food waste bin, load the dishes into the dishwasher, and then operate the dishwasher.
Starting point is 00:32:53 What makes this demo really hard is that it is a mix of really fine-grained manipulation with these super long-horizon, full-range tasks, as in, like, you need to go up and also need to go down very much. It's a mobile manipulation type. Yes, exactly. The reason that we can show this is just how nimble and easy it is for us to collect these datasets to make a long-horizon, dexterous demo possible. And it's also about the forces as well.
Starting point is 00:33:22 So you might have seen, like, we're trying to pick up two wine glasses with one hand. I struggle with this, but yes. It's actually really hard. And because they're, like, transparent objects, we need to also load them very precisely into the dishwasher. A lot of it is about how much force you apply. Because if you are trying to grasp two
Starting point is 00:33:43 in one hand, if you squeeze a little bit harder, you're going to break one of the glasses. And when you load it into the dishwasher, if you're pushing it in the wrong direction and it hits something, it's going to shatter. We broke a ton of glasses when we were, like, experimenting with it. So these are tasks that are, like, really high-stakes, where it's not just about recovering from mistakes, but about not making those mistakes in the first place. And this is generally the case in a lot of home tasks, that you're just not allowed to make any mistakes. And then we get into the generalization demos, which is where we basically show our robot:
Starting point is 00:34:17 we booked, like, six Airbnbs, and we get it there, zero-shot, and see if it can do, like, part of the task. So two tasks we use. One is to go around the table and collect all the utensils into the caddy. The other is to grasp a plate and then load it into the dishwasher. What makes these demos very interesting is that we don't need any data when we enter that home. It's pure generalization. And this is as close to a real product as you can get, because when someone buys our home robot,
Starting point is 00:34:50 we really don't want them to, like, collect a huge dataset themselves just to, like, unbox it. Also, in addition to the generalization, those two tasks are also really precise. We're using the exact silverware in that home, and you need, like, basically a few millimeters of precision to grasp it properly. Those forks are also hard to perceive because they're reflective, like, the light looks weird on them.
Starting point is 00:35:11 like the lights look weird on it. We have a transparent table home. I think that the table looks like not. and the robot still reacts very well to it. And again, the reason that we can do it is because we have all these like more than 500 people and we've seen so many glass tables in that dataset. So the robot is able to do it. I think the last bit of the tasks that we did is kind of pushing what's possible in terms of
Starting point is 00:35:36 dexterity. The two tasks we chose: one is operating an espresso machine. The other is, like, folding socks. What makes these hard is that they really require very fine-grained force control, which is hard to get if you're doing teleoperation. Because these days, there's not a good teleoperation system that can let you feel how much force the robot is feeling. So basically, when you're teleoperating, your hand is numb. And sometimes you are applying a huge amount of force on the robot, but you don't know it. And that can result
Starting point is 00:36:12 in very low data quality, where the robot is also acting in that aggressive way, which we really want to avoid for our system. The sock is a very good example: when you're trying to fold it, your two fingers can touch. And that forms what we call a force closure. You have a closed loop for the force.
Starting point is 00:36:30 And if your robot is stiff, you can apply an infinite amount of force to it and it doesn't look like anything. But for us, because we're using the glove to collect the data, the human who is collecting it can just naturally feel it. It's very intuitive.
Starting point is 00:36:42 I think we're the first to do the sock folding, and to use end-to-end learning to operate, like, an espresso machine, out of the whole industry. One of the things that you will also need to scale, as you guys, you know, scale up the company, is the team. What are you hiring for? What are you looking for? One thing I'm really looking forward to... The full-stack stuff. Yeah. Yeah. So it's full-stack roboticists and people who aspire to become full-stack roboticists.
Starting point is 00:37:14 Really, what we've learned in this company is just that robotics is such a multidisciplinary field. You need to know, you know, a little mechanical, a little code, a little data to actually fully optimize the system. And we have a couple of examples of training, you know, software engineers to become roboticists, training AI engineers to become roboticists. And so if you want to learn about robotics, you want to learn the whole thing, not just be boxed into your small, you know, little cubicle, let us know. And you told me that you
Starting point is 00:37:47 didn't write code until you got to college or something. Yeah. I was super enthusiastic about robotics, but I was mostly doing, like, mechanical design before that. And then I realized, okay, the bottleneck is actually how the robot will move, and there's, like,
Starting point is 00:38:02 there's something called, like, programming. And then the more I get into it, the deeper it gets. And then toward the end of college, I realized, okay, there's a thing called machine learning, and you can figure out how to train models. I think it just goes on and on. I think it's very natural for me to gradually expand my skill set,
Starting point is 00:38:19 because I'm always looking forward to building a robot. Well, I hope you discover the next field, because you're no longer doing dishes too. It's a very fun place to work. Whatever you can imagine about robotics and consumer products and machine learning, you can find it here, because we're just fundamentally such a full-stack company. We're not just about the software.
Starting point is 00:38:38 We're not just about the hardware, but we're about the whole experience, the whole product, and making sure that product is general and, like, scalable in the future. Awesome. Congratulations. It's really exciting. Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Starting point is 00:39:02 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no dash priors.com.
