Dwarkesh Podcast - Fully autonomous robots are much closer than you think – Sergey Levine

Episode Date: September 12, 2025

Sergey Levine, one of the world’s top robotics researchers and co-founder of Physical Intelligence, thinks we’re on the cusp of a “self-improvement flywheel” for general-purpose robots. His median estimate for when robots will be able to run households entirely autonomously? 2030.

If Sergey’s right, the world 5 years from now will be an insanely different place than it is today. This conversation focuses on understanding how we get there: we dive into foundation models for robotics, and how we scale both the data and the hardware necessary to enable a full-blown robotics explosion.

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors
* Labelbox provides high-quality robotics training data across a wide range of platforms and tasks. From simple object handling to complex workflows, Labelbox can get you the data you need to scale your robotics research. Learn more at labelbox.com/dwarkesh
* Hudson River Trading uses cutting-edge ML and terabytes of historical market data to predict future prices. I got to try my hand at this fascinating prediction problem with help from one of HRT’s senior researchers. If you’re curious about how it all works, go to hudson-trading.com/dwarkesh
* Gemini 2.5 Flash Image (aka nano banana) isn’t just for generating fun images — it’s also a powerful tool for restoring old photos and digitizing documents. Test it yourself in the Gemini App or in Google’s AI Studio: ai.studio/banana

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps
(00:00:00) – Timeline to widely deployed autonomous robots
(00:17:25) – Why robotics will scale faster than self-driving cars
(00:27:28) – How vision-language-action models work
(00:45:37) – Changes needed for brainlike efficiency in robots
(00:57:59) – Learning from simulation
(01:09:18) – How much will robots speed up AI buildouts?
(01:18:01) – If hardware’s the bottleneck, does China win by default?

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Transcript
Starting point is 00:00:00 Today, I'm chatting with Sergey Levine, who is a co-founder of Physical Intelligence, which is a robotics foundation model company, and also a professor at UC Berkeley. And just generally, one of the world's leading researchers in robotics, RL, and AI. Sergey, thank you for coming on the podcast. Thank you. And thank you for the kind introduction. Let's talk about robotics. So before I pepper you with questions, I'm wondering if you can give the audience a summary of where Physical Intelligence is at right now. You guys started a year ago.
Starting point is 00:00:27 Yeah. And what does the progress look like? What are you guys working on? Yeah. So physical intelligence aims to build robotic foundation models. And that basically means general purpose models that could, in principle, control any robot to perform any task. We care about this because we see this as a very fundamental aspect of the AI problem.
Starting point is 00:00:47 Like the robot is essentially encompassing all AI technologies. If you can get a robot that's truly general, then you can do, you know, hopefully a large chunk of what people can do. And where we're at right now is, I think we've kind of gotten to the point where we've built out a lot of the basics. And, you know, I think those basics actually are pretty cool. Like they work pretty well. We can get a robot that will, like, fold laundry and that will go into a new home and, like, try to clean up the kitchen. But in my mind, what we're doing at physical intelligence right now is really the very, very early beginning.
Starting point is 00:01:19 It's just like putting in place the basic building blocks on top of which we can then tackle all these, like, really tough problems. And what's the year-by-year vision? So one year in... I got a chance to watch some of the robots. And they can do pretty dexterous tasks, like folding a box using grippers. And it's like, I don't know, it's like pretty hard to fold the box, even with like my hands. If you had to go year by year until we get to the full like robotics explosion, what is happening every single year? What is the thing that needs to be unlocked, et cetera? So there are a few things that we need to get right.
Starting point is 00:01:50 I mean, dexterity obviously is one of them. And in the beginning, we really want to make sure that we understand whether the methods that we're developing have the ability to tackle the kind of intricate tasks that people can do. As you mentioned, like folding a box, folding different articles of laundry,
Starting point is 00:02:06 cleaning up a table, making a coffee, that sort of thing. And that's good. Like, that works. You know, I think that the results we've been able to show are pretty cool. But again, like,
Starting point is 00:02:14 the end goal of this is not to fold a nice t-shirt. The end goal is to just, like, confirm our initial hypothesis that, like, the basics are kind of solid. Yeah. But from there,
Starting point is 00:02:22 there are a number of really major challenges. And I think that, you know, sometimes when results get abstracted to the level of like a three-minute video, someone can look at this video, it's like, oh, that's cool, like that's what they're doing. But it's not. Like, it's a very simple and basic version of what I think is to come. Like, what you really want from a robot is not to tell it like, hey, please fold my t-shirt. What you want from a robot is to tell it like, hey, robot, like, you're now doing all sorts of home tasks for me.
Starting point is 00:02:49 I like to have dinner made at 6 p.m. I wake up and go to work at 7 a.m. I'd like, you know, I like to do my laundry on Saturday, so make sure that's ready, this and this and this. And by the way, check in with me like every Monday to see, like, you know, what I want you to pick up when you do the shopping. Right. Right. Like, that's the prompt. And then the robot should go and do this for like, you know, six months, a year.
Starting point is 00:03:12 Like, that's the duration of the task. So it's ultimately, if this stuff is successful, it should be a lot bigger. And it should have that ability to learn continuously. it should have the understanding of the physical world, the common sense, the ability to go in and pull in more information if it needs it. Like if I ask you like, hey, tonight, like, you know, can you make me this type of salad? It's like, you should like figure out what that entails, like look it up, go and buy the ingredients. So there's a lot that goes into this. It requires common sense.
Starting point is 00:03:41 It requires understanding that there are certain edge cases you need to handle intelligently, cases where you need to think harder. It requires the ability to improve continuously. It requires understanding safety, being reliable at the right time, being able to fix your mistakes when you do make those mistakes. So there's a lot more that goes into this. But the principles there are you need to leverage prior knowledge and you need to have the right representations. So this grand vision, what year, if you had a median estimate, or 25th percentile, 50th, 75th?
Starting point is 00:04:12 I think it's something where it's not going to be a case where we develop everything in the laboratory and then it's done and then, you know, come 2030-something, you get a robot in a box. I think it'll be the same as what we've seen with AI assistants: that once we reach some basic level of competence where the robot is delivering something useful, it'll go out there in the world. The cool thing is that once it's out there in the world, they can collect experience and leverage that experience to get better. So to me, like, what I tend to think about a lot in terms of timelines is not the date
Starting point is 00:04:44 when it will be done, but the date when it will, when like the flywheel starts. Right. So when does the flywheel start? I think that could be very soon. And I think there's some decisions to be made. Like the tradeoff there is the more narrowly you scope the thing, the earlier you can get it out into the real world. But soon as in, like, this is something we're already exploring. We're already trying to figure out, like, what are the real things this thing could do that could allow us to start spinning the flywheel. But I think in terms of like stuff that you would actually care about, that you would want to see... I don't know. But I think that single digit years is very realistic. I'm really hoping it'll be more like one or two before something is actually out there, but it's hard to say. And something being out there means what?
Starting point is 00:05:24 Like what is out there? It means that there is a robot that does a thing that you actually care about, that you want done, and it does so competently enough to actually do it for real, for real people that want it done. We already have LLMs, which are like broadly deployed, and that hasn't resulted in some sort of like flywheel. That's right. At least not some obvious flywheel for the model companies, where now Claude is, like, learning how to do every single job in the economy, or GPT is learning how to do every single job in the economy.
Starting point is 00:05:52 So why doesn't that flywheel work for LLMs? Well, I think it's actually very close to working, and I am like 100% certain that many organizations are working on exactly this. In fact, arguably, there is already a flywheel, in the sense that it's not an automated flywheel but a human-in-the-loop flywheel, where everybody who's deploying an LLM is, of course, going to look at what it's doing
Starting point is 00:06:16 and is going to use that to then modify its behavior. It's complex because it comes back to this question of representations and figuring out the right way to derive supervision signals and ground those supervision signals in the behavior of the system so it actually improves on what you want. And I don't think that's like a profoundly impossible problem. It's just something where the details get pretty gnarly and challenges with algorithms and stability become pretty complex.
Starting point is 00:06:44 So it's just something that's taken a while for the community collectively to get right. Do you think it'll be easier for robotics? Or is it just that, like, the state of these kinds of techniques, to label data that you collect out in the world and use it as a reward, will improve across the board, so the whole wave will rise and robotics will rise as well? Or is there some reason why robotics will benefit more from this? Yeah, I don't think there's like a profound reason why robotics is that different, but there are a few small differences that I think make things a little bit more manageable.
Starting point is 00:07:16 So especially if you have a robot that's doing something in cooperation with people, whether it's a person that's supervising it or directing it, there are very natural sources of supervision. And there's a big incentive for the person to provide the assistance that will make things succeed. There are a lot of dynamics where you can make mistakes and recover from those mistakes and then reflect back on what happened and avoid that mistake in the future. And I think that when you're doing physical things in the real world, that kind of stuff just happens more often than it does if you're like an AI assistant answering a question. Like if you answer a question, you just answered it wrong.
Starting point is 00:07:48 It's like, well, it's not like you can just, like, go back and, like, tweak a few things. Like, the person you told the answer to might not even know that it's wrong. Whereas if you're, like, folding the T-shirt and you messed up a little bit, like, yeah, it's pretty obvious. Like, you can reflect on that, figure out what happened and do it better next time. Yeah. So, okay, in one year we have robots which are, like, doing some useful things. Maybe if you have some, like, relatively simple, like, loopy process, they can do it for you. Just, like, you've got to keep folding, like, thousands of boxes or something.
Starting point is 00:08:14 But then there's some flywheel, dot, dot, dot. There's some machine which will just run my house for me, as well as a human housekeeper would. What is the gap between this thing which will be deployed in a year that starts the flywheel and this thing, which is like a fully autonomous housekeeper? Well, I think it's actually not that different than what we've seen with LMs in some ways,
Starting point is 00:08:37 that it's a matter of scope. Like, if you think about coding assistants, right? Like, initially, the best tools for coding, they could do like a little bit of completion. Like you give them a function signature and they'll like try their best to type out like the whole function and they'll maybe like get half of it right. And as that stuff progresses,
Starting point is 00:08:55 then you're willing to give these things a lot more agency, so that like the very best coding assistants now, like if you're doing something relatively formulaic, maybe it can like put together most of a PR for you for something, you know, fairly accessible. Right. So I think it'll be the same thing. That we'll see an increase in the scope
Starting point is 00:09:12 that we're willing to give to the robots as they get better and better, where initially the scope might be like, there is a particular thing you do, like you're making the coffee or something. Whereas as they get more capable, as their ability to have common sense and a broad repertoire of tasks increases, then we'll give them greater scope. Now you're running the whole coffee shop. I get that there's a spectrum, and I get that there won't be a specific moment that feels like we've achieved it. But you've got to give a year, like, your median estimate of when that happens. I mean, my sense there, too, is that this is probably a single-digit thing rather than a double-digit thing. The reason it's so hard to really pin down is because, as with all research, it does depend
Starting point is 00:09:49 on figuring out a few question marks. And I think my answer in terms of the nature of those question marks is I don't think these are things that require profoundly, deeply different ideas, but it does require the right synthesis of the kinds of things that we already know. And, you know, sometimes synthesis, to be clear, is just as difficult as coming up with like profoundly new stuff, right? So I think it's intellectually a very deep and profound problem and figuring that out is going to be like very exciting. But I think we kind of like know like roughly the puzzle pieces and it's something that we need to work on.
Starting point is 00:10:27 And I think if we work on it and we're a bit lucky and everything kind of goes as planned, I think single digit is reasonable. I mean, I was going to do binary search until like that year. Okay, so it's less than 10 years. So more than five years? Your median estimate, I know it's like a range. I think five is a good median. Okay.
Starting point is 00:10:44 Five years. So if you can fully autonomously run a house, then I think you've like, you can fully autonomously do most blue collar work. So your estimate is in five years it should be able to do most like blue collar work in the economy. So I think there's a nuance here. And the nuance is it becomes more obvious
Starting point is 00:11:04 if we consider the analogy to the coding assistants, right? It's not like the nature of coding assistants today is that there's a switch that flips and suddenly, instead of writing software, suddenly like all software engineers get fired and everyone's using LLMs for everything. And that actually makes a lot of sense, that the biggest gain in productivity comes from experts, which is software engineers, whose productivity is now augmented by these really powerful tools.
Starting point is 00:11:34 Yeah. I mean, separate from the question of whether people will get fired or not, a different question is just, what will the economic impact be in five years? The reason I'm curious about this is with LLMs, the relationship between the revenues for these models and their seeming capability has been sort of mysterious, in the sense that, like, you have something which feels like AGI. You can have a conversation where it really, like, you know, passes a Turing test. It really feels like it can do all this knowledge work. It's obviously doing a bunch of coding, et cetera. But then the revenues for these AI companies are cumulatively on the order of like $20-30 billion per year.
Starting point is 00:12:11 And that's so much less than all knowledge work, which is $30, $40 trillion. So in five years, are we in a similar situation that LLMs are now? Or is it more like we have robots deployed everywhere and they're actually like doing a whole bunch of real work, et cetera? It's a very subtle question. I think what it probably will come down to is this question of scope, right? Like, the reason that LLMs aren't doing all software engineering is because they're good within a certain scope, but there's limits to that. And those limits are increasing, to be clear, every year. And I think that there's no reason that we wouldn't see the same kind of thing with robots.
Starting point is 00:12:49 That the scope will have to start out small, because there will be certain things that these systems can do very well and certain other things where more human oversight is really important. And the scope will grow, and what that will translate into is increased productivity. And some of the productivity will come from, like, the robots themselves being valuable, and some of it will come from the people using the robots being more productive in that work. But there's some way we'll be able to see this productivity or, like, I don't know. But then it's like you want to distinguish something which, like, increases productivity 100-fold versus, like, you know, wearing glasses or something which has, like, a small increase. So robots already increase productivity for workers, right?
Starting point is 00:13:35 Where LLMs are right now in terms of the share of knowledge work they can do, which is I guess probably like one thousandth of the knowledge work that happens in the economy LLMs are doing, at least in terms of revenue, are you saying that fraction will be possible for robots but for physical work in five years? That's a very hard question to answer. I think I'm probably not prepared to tell you what percentage of all labor work can be done by robots because I don't think right now off the cuff I have a sufficient understanding of what's involved in that big of a cross-section of all physical labor. I think what I can tell you is this, that I think it's much easier to get effective systems rolled out gradually in a human-in-the-loop setup.
Starting point is 00:14:24 And again, I think this is exactly what we've seen with coding. systems, and I think we'll see the same thing with automation, where basically, robot plus human is much better than just human or just robot. And that just like makes total sense. It also makes it much easier to get all the technology bootstrap, because when it's robot plus human, now there's a lot more potential for the robot to like actually learn on the job, acquire new skills. It's just like, you know. Because the human can label what's happening? And also because the human can help. The human can give hints.
Starting point is 00:14:54 You know, let me tell you this story. Like, we, when we were working on the π0.5 project, this was the paper that we released last April, we initially controlled our robots with teleoperation in a variety of different settings. And then at some point we actually realized that we can actually make significant headway once the model was good enough by supervising it, not just with low-level actions, but actually literally instructing it through language. Now, you need a certain level of competence before you can do that. But once you have that level of competence, just standing there and
Starting point is 00:15:25 telling the robot, okay, now pick up the cup, put the cup in the sink, put the dish in the sink, just with words, already actually gives the robot information that it can use to get better. Right. Now, imagine what this implies for the human plus robot dynamic. Like now, basically learning is not, for these systems, is not just learning from real actions, it's also learning from words, eventually be learning from observing what people do, from the kind of natural feedback that you receive when you're doing a job together with somebody
Starting point is 00:15:55 else. And this is also the kind of stuff where the prior knowledge that comes from these big models is tremendously valuable, because that lets you understand that interaction dynamic. So I think that there's a lot of potential for these kinds of human plus robot deployments to make the model better. Interesting. So I got to go to Labelbox and see the robotic setup and try operating some of the robots myself. So the thing is like these triggers, be very mindful of pressing them and don't do some like very fast movements. Keep it like kind of slow. Do they need to keep holding it?
Starting point is 00:16:28 Oh, ahead. Sorry, okay. That's okay. And don't move it very fast because he can get hurt, actually. Yeah, yeah, okay. Okay, so operating ended up being a bit harder than I anticipated. But I did get to see the Labelbox team rip through a bunch of tasks. I also got to see the output data that labs actually have to use to train their robots
Starting point is 00:16:50 and ask Manu, Labelbox's CEO, about how all this is packaged together. So what you're looking at is actually the final output that is then delivered to the labs, which then they use to train the models. And so you can see on the left the visualization of the movements of the robot, including its 3D model and so forth. And on the right, you see all the camera streams synchronized with the configuration.
Starting point is 00:17:13 Labelbox can get you millions of episodes of robotics data for every single robotics platform and subtask that you want to train on. And if you reach out through labelbox.com slash dwarkesh, Manu will be very happy with me. In terms of robotics progress, why won't it be like self-driving cars where we, you know, it's been more than 10 years since Google launched its, wasn't it in 2009 that they launched a self-driving car initiative?
Starting point is 00:17:39 And then I remember when I was a teenager, like watching demos where we would go buy Taco Bell and drive back. And only now do we have them actually deployed. And even then, you know, they may make mistakes, et cetera. and so maybe it'll be many more years before most of the cars are self-driving. So why wouldn't robotics, you know, you're saying five years to this quite robust thing, but actually it'll just feel like 20 years or just like once we get the cool done though in five years, then it'll be another 10 years before like we have the Waymo and the Tesla FSD working.
Starting point is 00:18:14 Yeah, that's a really good question. So one of the big things that is different now than it was in 2009 actually has to do with the technology for machine learning systems that understand the world around them. Principally, for autonomous driving, this is perception. For robots, it can mean a few other things as well. And perception certainly was not in a good place in 2009. The trouble with perception is that it's one of those things where you can nail a really good demo with a somewhat engineered system, but hit a brick wall when you try to generalize it.
Starting point is 00:18:47 Now, at this point in 2025, we have much better technology for generalizable and robust perception systems, and more generally generalizable and robust systems for understanding the world around us. Like, when you say that a system is scalable, in machine learning scalable really means generalizable. So that gives us a much better starting point today. So that's not an argument about robotics being easier than autonomous driving. It's just an argument for 2025 being a better year than 2009. But there's also other things about robotics that are a bit different than driving.
Starting point is 00:19:18 Like in some ways, robotic manipulation is a much, much harder problem, but in other ways, it's a problem space where it's easier to get rolling, to start that flywheel with a more limited scope. So to give you an example, if you're learning how to drive, you would probably be pretty crazy to learn how to drive on your own without somebody helping you. Like you would not trust your teenage child to learn to drive just on their own, just drop them in the car and say, like, go for it. And that's like a 16-year-old who's had a significant
Starting point is 00:19:50 amount of time to learn about the world. You would never even dream of putting a five-year-old in a car and telling them to get started. But if you want somebody to clean the dishes, like dishes can break too, but you would probably be okay with a child trying to do the dishes without somebody constantly, like, you know, sitting next to them with a brake, so to speak. So for a lot of tasks that we want to do with robotic manipulation, there's potential to make mistakes and correct those mistakes. And when you make a mistake and correct it, well, first you've achieved the task because you've corrected, but you've also gained knowledge that allows you to avoid that mistake in the future.
Starting point is 00:20:27 With driving, because of the dynamics of how it's set up, it's very hard to make a mistake, correct it, and then learn from it, because the mistakes themselves have significant ramifications. Now, not all manipulation tasks are like that. There is truly some very safety-critical stuff. And this is where the next thing comes in, which is common sense. Common sense, meaning the ability to make inferences about what might happen that are reasonable guesses, but that do not require you to experience that mistake and learn from it in advance. That's tremendously important, and that's something that we basically had no idea how to do about five years ago. But now, we can actually use LLMs and VLMs, ask them questions, and they will make reasonable guesses.
Starting point is 00:21:10 Like, they will not give you expert behavior, but you can say, like, hey, there's a sign that says slippery floor. like what's going to happen when I walk over that? It's kind of pretty obvious, right? And no autonomous car in 2009 would have been able to answer that question. So common sense plus the ability to make mistakes and correct those mistakes. Like that's sounding like an awful lot like what a person does when they're trying to learn something. All of that doesn't make robotic manipulation easy necessarily, but it allows us to get started with a smaller scope and then grow from there.
Starting point is 00:21:38 So for years, I mean, not since 2009, but we've had lots of video data, language data, and Transformers for five, seven, eight years. And lots of companies have tried to build transformer-based robots with lots of training data, including Google, Meta, et cetera. And what is the reason that they've been hitting roadblocks? What has changed now? Yeah, that's a really good question. So I'll start out with maybe a slight modification to your comment, which is that I think they've made a lot of progress.
Starting point is 00:22:13 And in some ways, a lot of the work that we're doing now at physical intelligence is built on the backs of lots of other great work that was done, for example, at Google, like many of us were actually at Google before. We were involved in some of that work. Some of it has work that we're drawing on that others did. So there's definitely, like, been a lot of progress there. But to make robotic foundation models really work, it's not just a laboratory science kind of experiment. It's also, it also requires kind of industrial scale building effort. Like, it's more like the Apollo program than it is like a science experiment. And the excellent research that was done in the past industrial research labs, and I know I was involved in much of that, was very much framed as a fundamental research effort.
Starting point is 00:23:04 And that's good, like the fundamental research is really important, but it's not enough by itself. You need the fundamental research, and you also need the impetus to make it real. And make it real means like actually put the robots out there, get data that is representative of the kind of tasks that they want to do in the real world, get that data at scale, build out the systems, get all that stuff right. And that requires a degree of focus, a singular focus, on really nailing the robotic foundation model for its own sake, not just as a way to do more science, not just as a way
Starting point is 00:23:36 to publish a paper, and not just as a way to kind of, like, you know, have a research lab. What is preventing you now from scaling that data even more? If data is a big bottleneck, why can't you just increase the size of your office 100x, have 100x more operators who are operating these robots and collecting more data? Yeah, why not ramp it up immediately 100x more? Yeah, that's a really good question. So the challenge here is in understanding which axes of scale
Starting point is 00:24:10 contribute to which axes of capability. So if we want to expand capability horizontally, meaning like the robot knows how to do 10 things now and I'd like it to do 100 things later, that can be addressed by just directly horizontally scaling what we already have. But we want to get robots to a level of capability where they can do practical useful things in the real world,
Starting point is 00:24:32 and that requires expanding along other axes too. It requires, for example, getting to very high robustness. It requires getting them to perform tasks very efficiently, quickly. It requires them to recognize edge cases and respond intelligently. And those things, I think, can also be addressed with scaling, but we have to identify the right axes for that, which means to figure out what kind of data to collect, what settings to collect it in, what kind of methods consume that data, how those methods work.
Starting point is 00:24:58 Right. So answering those questions more thoroughly will give us greater clarity on the axes, on those dependent variables, on the things that we need to scale, and we don't fully know right now what that will look like. I think we'll figure it out pretty soon, and it's something we're working on actively. But we want to really get that right so that when we do scale it up,
Starting point is 00:25:21 it'll directly translate into capabilities that are very relevant to practical use. Just to give an order of magnitude, how does the amount of data you have collected compare to Internet-scale pre-training data? And I know it's hard to do like a token-by-token count because, yeah, how does video information compare to Internet information, et cetera?
Starting point is 00:25:38 But like, using your reasonable estimates, what fraction of...? That's right. It's very hard to do because robotic experience consists of time steps that are very correlated with each other. So, like, the raw, like, byte representation is enormous, but probably the information density is comparatively low. Maybe a better comparison is to the data sets that are used for multimodal training. Yeah. And there it's, I believe last time we did that count, it was like between one and two orders of magnitude. The vision you have of robotics will not be possible until you collect, like, what, 100x, 1,000x more data?
Starting point is 00:26:15 Well, that's the thing, we don't know that. It's certainly very reasonable to infer that, like, you know, robotics is a tough problem and probably it requires, you know, as much experience as the language stuff. But because we don't know the answer to that, to me, a much more useful way to think about it is not how much data do we need to get before we're fully done, but how much data do we need to get before we can get started, meaning before we can get a data flywheel that represents a self-sustaining and ever-growing data collection.
Starting point is 00:26:49 When you say self-sustaining? Is it just like learning on the job, or do you have something else in mind? Learning on the job, or acquiring data in a way that the process of acquisition of that data itself is useful and valuable. I see, like just some kind of RL doing something, like, actually real. Yeah. I mean, ideally, I would like it to be RL, because you can get away with the robot acting
Starting point is 00:27:09 autonomously. Right. Which is easier. But it's not out of the question that you can have mixed autonomy. You can, you know, as I mentioned before, robots can learn from all sorts of other signals. I described how we can have a robot that learns from a person talking to it. So there's a lot of middle ground in between fully teleoperated robots and fully autonomous robots.
Starting point is 00:27:29 Yeah. Okay. And how does the π model work? Yeah. So the current model that we have basically is a vision language model that has been adapted for motor control. So to give you a little bit of like a fanciful brain analogy, a VLM, a vision language model, is basically an LLM that has had a little, like, pseudo visual cortex grafted to it, a vision encoder. Right. So our models, they have a vision encoder, but they also have an action expert, an action decoder essentially.
Starting point is 00:27:58 So it has like a little visual cortex and notionally a little motor cortex. And the way that the model actually makes decisions is it reads in the sensory information from the robot. It does some internal processing, and that could involve actually outputting intermediate steps. Like you might tell it, clean up the kitchen, and it might think to itself like, hey, to clean up the kitchen, I need to pick up the dish, and I need to pick up the sponge, and I need to put this and this. And then eventually it works its way through that chain of thought generation down to the action expert, which actually produces continuous actions. And that has to be a different module because the actions are continuous, they're high frequency, so they have a different data format
Starting point is 00:28:42 than text tokens. But structurally, it's still an end-to-end transformer, and roughly speaking, technically, it corresponds to a kind of mixture of experts architecture. And, like, what is actually happening is that it's, like, predicting I should do X thing, then there's an image token, then some action tokens, like what it actually ends up doing, and then more image, more text description, more action tokens, basically, if I'm looking at the stream that's going on? That's right, with the exception that the actions are actually not represented as discrete tokens. It actually uses a flow matching kind of diffusion, because they're continuous and you need to be very precise with your actions for dexterous control. I find it super interesting that, so you are, I think you're using the open source Gemma model, which is like Google's LLM that they released open source, and then adding the action expert on top.
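A minimal sketch of the kind of architecture described here: a vision encoder and language backbone feeding a separate "action expert" that produces continuous action chunks by integrating a learned flow. The module choices, sizes, action horizon, and Euler integration below are illustrative assumptions for exposition, not Physical Intelligence's actual implementation.

```python
# Hypothetical sketch of a vision-language-action (VLA) policy with a
# flow-matching action expert. All names and sizes are illustrative.
import torch
import torch.nn as nn

class ActionExpert(nn.Module):
    """Predicts the flow (velocity) over a chunk of continuous actions."""
    def __init__(self, ctx_dim=512, action_dim=14, horizon=50):
        super().__init__()
        self.action_dim, self.horizon = action_dim, horizon
        self.net = nn.Sequential(
            nn.Linear(ctx_dim + horizon * action_dim + 1, 1024),
            nn.GELU(),
            nn.Linear(1024, horizon * action_dim),
        )

    def forward(self, ctx, noisy_actions, t):
        # ctx: (B, ctx_dim) summary from the backbone; t: (B, 1) flow time in [0, 1]
        return self.net(torch.cat([ctx, noisy_actions, t], dim=-1))

class ToyVLA(nn.Module):
    def __init__(self, ctx_dim=512):
        super().__init__()
        # Stand-ins for a pretrained vision encoder and an LLM backbone
        # (e.g. an open-source Gemma-style VLM in the real system).
        self.vision_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, ctx_dim))
        self.language_backbone = nn.GRU(ctx_dim, ctx_dim, batch_first=True)
        self.action_expert = ActionExpert(ctx_dim=ctx_dim)

    @torch.no_grad()
    def act(self, image, instruction_embeddings, steps=10):
        # Fuse one "image token" with already-embedded instruction tokens.
        img_tok = self.vision_encoder(image).unsqueeze(1)           # (B, 1, ctx_dim)
        seq = torch.cat([img_tok, instruction_embeddings], dim=1)   # (B, 1+T, ctx_dim)
        _, ctx = self.language_backbone(seq)                        # (1, B, ctx_dim)
        ctx = ctx.squeeze(0)
        # Integrate the learned flow from Gaussian noise to an action chunk (Euler steps).
        a = torch.randn(image.shape[0], self.action_expert.horizon * self.action_expert.action_dim)
        for i in range(steps):
            t = torch.full((image.shape[0], 1), i / steps)
            a = a + (1.0 / steps) * self.action_expert(ctx, a, t)
        return a.view(-1, self.action_expert.horizon, self.action_expert.action_dim)

policy = ToyVLA()
chunk = policy.act(torch.randn(1, 3, 224, 224), torch.randn(1, 8, 512))
print(chunk.shape)  # torch.Size([1, 50, 14]): a short chunk of continuous joint commands
```

In flow-matching training, the action expert would be regressed toward the velocity of a noise-to-data interpolation over recorded action chunks; at inference, integrating that learned velocity field, as in `act` above, turns noise into a precise continuous action sequence.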
Starting point is 00:29:21 I find it super interesting that the progress in different areas of AI is just based on not only the same technique, but literally the same model, that you can just use an open source LLM and then add this action expert on top. It is notable that, like, you naively might think that, oh, there's, like, a separate area of research called robotics and there's a separate area of research called LLMs and natural language processing. And no, it's like it's literally the same. It's like the considerations are the same. The architectures are the same. Even the weights are the same. I know you do more training on top of these open source models, but that I find super interesting. Yeah. So one theme here that, like, I think is important to keep in mind is that the reason that those building blocks
Starting point is 00:30:05 are so valuable is because the AI community has gotten a lot better at leveraging prior knowledge. And a lot of what we're getting from the pre-trained LLMs and VLMs is prior knowledge about the world. And it's kind of like, it's a little bit abstracted knowledge. It's like, you know, you can identify objects. You can figure out, like, you know, roughly where things are in an image, that sort of thing. But I think if I had to, like, summarize in one sentence the big benefit that recent innovations in AI give to robotics,
Starting point is 00:30:34 it's really the ability to leverage prior knowledge. And I think the fact that the model is the same model, that's kind of always been the case in deep learning, but it's that ability to pull in that prior knowledge, that abstract knowledge that can come from many different sources. That's really powerful. Today I'm here with Mark, who is a senior researcher at Hudson River Trading. He has prepared for us a big data set of market prices and historical market data,
Starting point is 00:30:58 and we're going to try to figure out what's going on and whether we can predict future prices from historical market data. Mark, let's dig in. Happy to do it. So it sounds like the first fun thing to do is probably to start looking at what an order book actually looks like. Yeah, I think so.
Starting point is 00:31:13 So I've given you like real order book data that is snapshots of the top five levels of the order book, both on the bid and ask side, for a couple of different tech stocks. Nvidia, Tesla, AMD, etc. What is the shape of the prediction? What are we predicting? Why don't you take the data frame, look at its Y values, and just kind of like histogram it?
Starting point is 00:31:33 They are centered at zero. They're roughly centered at zero. Yeah. But target of what exactly? So these things are changes in the midprice from like now to some short period of time in the future. This is actually quite interesting. It's like a mystery to solve. And each one of these can be like a sizable chunk of time for a researcher.
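For concreteness, here is a tiny, hypothetical sketch of the prediction target Mark describes: the change in mid-price over a short future horizon, computed from top-of-book quotes. The column names and the row-based horizon are assumptions about the data layout, not HRT's actual schema.

```python
# Hypothetical sketch: build "future mid-price change" targets from order book snapshots.
import pandas as pd

def midprice_change_targets(book: pd.DataFrame, horizon_rows: int = 100) -> pd.Series:
    """Mid-price change from each snapshot to `horizon_rows` snapshots later."""
    mid = (book["bid_px_0"] + book["ask_px_0"]) / 2.0   # best bid/ask -> mid-price
    return mid.shift(-horizon_rows) - mid               # roughly zero-centered, as noted above

# Toy example with a few top-of-book snapshots:
book = pd.DataFrame({
    "bid_px_0": [99.98, 99.99, 100.00, 100.01],
    "ask_px_0": [100.02, 100.01, 100.02, 100.03],
})
y = midprice_change_targets(book, horizon_rows=1)
print(y.tolist())     # last entry is NaN where no future snapshot exists
# y.hist(bins=50)     # the histogram exercise from the conversation
```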
Starting point is 00:31:48 If this sounds interesting to you, you should consider working at Hudson River Trading. Mark, where can people learn more? They can learn more at hudson-trading.com slash dwarkesh. Amazing. I was talking to this researcher, Sander, at GDM, and he works on video and audio models. And he made the interesting point that the reason, in his view, we aren't seeing that much transfer learning between different modalities (that is to say, like, training a language model on video and images doesn't seem to necessarily make it that much better at textual questions and tasks) is that images are represented at a different semantic level than text. And so his argument is that text has this high-level semantic representation within the model,
Starting point is 00:32:35 whereas images and videos are just, like, compressed pixels. There's not really a semantic. When they're embedded, they don't represent some, like, high-level semantic information. They're just, like, compressed pixels. And therefore, there's no transfer learning at the level at which they're going through the model. And obviously, this is super relevant to the work you're doing, because your hope is that by training the model both on the visual data that the robot sees, visual data
Starting point is 00:33:01 generally, maybe even from YouTube or whatever eventually, plus language information plus action information from the robot itself. All of this together will make it generally robust. And then you had a really interesting blog post about why video models aren't as robust as
Starting point is 00:33:17 language models. Sorry, this is not a super well-formed question. I just wanted to have you react to all that. What's up with that? Yeah, yeah. Yeah, so I have maybe two things I can say there. I have some bad news and some good news. So the bad news is
Starting point is 00:33:31 what you're saying is really getting at the core of a long-running challenge with video and image generation models. In some ways, the idea of getting intelligent systems by predicting video
Starting point is 00:33:50 is even older than the idea of getting intelligent systems by predicting text. But the text stuff turned into practically useful things earlier than the video stuff did. I mean, the video stuff is great. Like, you can generate cool videos, and I think that the work there that's been done recently
Starting point is 00:34:06 is, like, amazing. But it's not like just generating videos and images has already resulted in systems that have this kind of, like, deep understanding of the world where you can, like, ask them to, like, do stuff beyond just generating more images and videos. Whereas with language, clearly it has. And I think that this point about representations
Starting point is 00:34:23 is really key to it. One way we can think about it is this, imagine pointing a camera outside this building. There's the sky, there's the clouds are moving around, the water, cars driving around people. If you want to predict everything that will happen in the future, you can do so in many different ways. You can say, okay, there's people around, so let me get really good understanding like the psychology of how people behave in crowds and predict the pedestrians. But you could also say, like, well, there's clouds moving around. Let me like understand everything about water molecules and ice particles in the air.
Starting point is 00:34:53 And you can go super deep on that. Like if you want to, like, fully understand, like, all, you know, down to the subatomic level, everything that's going on. Like as a person, you could spend, like, decades just thinking about that and you'll never even get to the pedestrians or the water, right? So if you want to really predict everything that's going on in that scene, there's just so much stuff that even if you're doing a really great job and capturing like 100% of something, by the time you get to everything else, like, you know, ages will have passed. Whereas with text, it's sort of been abstracted into those bits that we as humans care about. So the representations are already there; not just good representations, they actually focus in on what really matters. Okay, so that's the bad news.
Starting point is 00:35:30 Here's the good news. The good news is that we don't have to just get everything out of pointing a camera outside this building because when you have a robot, that robot is actually trying to do a job. So it has a purpose. Yeah. And its perception is in service to fulfilling that purpose. And that is like a really great focusing factor. We know that for people, this really matters.
Starting point is 00:35:54 Like literally what you see is affected by what you're trying to do. Like there's been no shortage of psychology experiments showing that people have like almost a shocking degree of tunnel vision where they will like literally not see things right in front of their eyes if it's not relevant to what they're trying to achieve. And that is tremendously powerful. Like there must be a reason why people do that because, you know, certainly if you're out in the jungle, seeing more is better than seeing less. So if you have that powerful focusing mechanism, it must be darn important for getting it to achieve your goal. And I think robots will have that focusing mechanism because they're trying to achieve a goal. By the way, the fact that video models aren't as robust, is that bearish for robotics because it will, so much of the data you will have to use will not be, I guess some of, you're saying a lot of it will be labeled, but like, ideally you just want to be able to like throw all of everything on YouTube, every video we ever recorded and have it learned how the physical world works and how to like move about, et cetera, just see humans performing tasks and learn from that. But if, yeah, I guess you're saying
Starting point is 00:36:52 like it's hard to learn just from that and it actually like needs to practice a task itself. Well, let me put it this way. Like, let's say that I gave you lots of videotapes or lots of recordings of different sporting events and gave you a year to just watch sports. And then after that year, I told you,
Starting point is 00:37:09 okay, now your job, you're going to be playing tennis. Yeah. Okay, that's like, that's pretty dumb, right? Whereas if I told you first, like, you're going to be playing tennis and then I let you study up, right? Like, now you really know what you're looking for.
Starting point is 00:37:21 Right. So I think that actually, like, there's a very real challenge here. I don't want to understate the challenge, but I do think that there's also a lot of potential for foundation models that are embodied that learn from interaction, from controlling robotic systems, to actually be better at absorbing the other data sources because they know what they're trying to do. I don't think that by itself is like a silver bullet.
Starting point is 00:37:41 I don't think it solves everything. But I think that it does help a lot. And I think that we've already seen the beginnings of that, where we can see that including web data in training for robots really does help with generalization. And I actually have the suspicion that in the long run, it'll make it easier to use those sources of data that have been tricky to use up until now. Famously, LLMs have all these emergent capabilities that were never engineered in, because
Starting point is 00:38:07 somewhere in the Internet text is the data to train and to give it the knowledge to do a certain kind of thing. With robots, it seems like you are collecting all the data manually. So there won't be this mysterious new capability that is somewhere in the data set that you haven't purposefully collected, which seems like it should make it even harder to then have robust out-of-distribution kind of capabilities. And so I wonder if the trek over the next five, ten years will just be like each subtask you have to give it thousands of episodes. And then it's very hard to actually automate much work just by doing subtask. So if you think about what a barista does, what a waiter does, what a chef does,
Starting point is 00:38:51 very little of it involves just, like, sitting at one station and, like, doing stuff, right? They're just like, you've got to move around, you got to restock, you got to fix the machine, et cetera, go between, like, the counter and the cashier and the machine, et cetera. So, yeah, will it just be like, will there just be this long tail of skills that you have to keep, like, adding episodes for manually and labeling and seeing how well they did, et cetera? Or is there some reason to think that it will progress more generally than that? Yeah. So there's a subtlety here.
Starting point is 00:39:26 Emergent capabilities don't just come from the fact that Internet data has a lot of stuff in it. They also come from the fact that generalization, once it reaches a certain level, becomes compositional. There was a cute example that one of my students really liked to use in some of his presentations, which is: do you know what the International Phonetic Alphabet is? No. If you look in a dictionary, they'll have the pronunciation of a word written in, like, kind of funny letters. That's basically the International Phonetic Alphabet.
Starting point is 00:39:57 So it's an alphabet that is pretty much exclusively used for writing down pronunciations of individual words in dictionaries. And you can ask an LLM to write you a recipe for, like, making some meal in the International Phonetic Alphabet, and it will do it. And that's like, holy crap. That is definitely not something
Starting point is 00:40:20 that it has ever seen, because IPA is only ever used for writing down pronunciations of individual words.
Starting point is 00:40:47 So that's compositional generalization. It's putting together things you've seen like that in new ways. And it's like, you know, arguably there's nothing, like, profoundly new here, because, like, yes, you've seen different words written that way, but you've figured out that now you can compose the words in this other language the same way that you've composed words in English. So that's actually where the emergent capabilities come from. And because of this, in principle, if we have a sufficient diversity of behaviors,
Starting point is 00:41:07 the models should figure out that those behaviors can be composed in new ways as the situation calls for it. And we've actually seen things, even with our current models, which, you know, I should say, in the grand scheme of things, looking back five years from now, we'll probably think these are tiny in scale. But we've already seen what I would call emergent capabilities. When we were playing around with some of our laundry folding policies,
Starting point is 00:41:19 actually we would discover this by accident, the robot accidentally picked up two T-shirts out of the bin instead of one, starts folding the first one, the other one gets in the way, picks up the other one, throws it back in the bin. And we're like, we didn't know,
Starting point is 00:41:26 we didn't know it would do that. Like, holy crap. And then we tried to play around with it and it's like, yep, it does that every time. Like you can drop in, you know, it's doing its work, drop something else on the table,
Starting point is 00:41:35 we didn't tell anybody to collect data for that. I'm sure somebody accidentally at some point or maybe intentionally picked up the shopping bag, but it's just you have this kind of compositionality that emerges when you do learning at scale and that's really where all these remarkable capabilities come from. And now you put that together with language, you put that together with all sorts of chain of thought reasoning
Starting point is 00:41:55 and there's a lot of potential for the model to compose things in new ways. Right. I had an example like this when I got a tour of the robots, by the way, at your office. So it was folding shorts, and I don't know if there was an episode like this in the training set, but just for fun, I took one of the shorts and, like, turned it inside out. And then it was able to understand that it first needed to get... So first of all, the grippers are just like this, like two limbs, or just a opposable finger and thumb-like thing. And it's actually shocking how much you can do with just that.
Starting point is 00:42:32 Yeah, it understood that it first needed to turn it right side out before folding it correctly. I mean, what's especially surprising about that is, it seems like this model only has like one second of context. So as compared to these language models, which can often see the entire code base, and they're observing hundreds of thousands of tokens and thinking about them before outputting, and they're observing their own chain of thought
Starting point is 00:42:52 for thousands of tokens before making a plan about how to code something up, your model is like seeing one image of what happened in the last second, and it vaguely knows, like, it's supposed to fold this short, and it's seen like the image of what's happened the last second.
Starting point is 00:43:07 And I guess it works. It's like crazy that it, like, will just see the last thing that happened and then keep executing on the plan. So turn it right side out, then fold it correctly. But it's shocking that a second of context is enough to execute on a minute-long task. Yeah, I'm curious why you made that choice in the first place and why it's possible to actually do tasks. If a human could only, like, think with a second of memory and had to, like, do physical work, I feel like that would just be impossible. Yeah. I mean, it's not that there's something good about having less memory, to be clear. Like, I think that adding memory, adding longer context, all that stuff, adding higher resolution images, I think those things will make the model better. But the reason why it's not the most important thing for the kind of skills that you saw when you visited us is, at some level, I think it comes back to Moravec's paradox. So Moravec's paradox is basically, like, you know, if you want to know one thing about robotics,
Starting point is 00:44:05 it's like, that's the thing. Moravec's paradox says that basically in AI, the easy things are hard and the hard things are easy, meaning, like, the things that we take for granted, like picking up objects, seeing, you know, perceiving the world, all that stuff, those are all the hard problems in AI. And the things that we find challenging, like playing chess and doing calculus, actually are often the easier problems. And I think this memory stuff is actually Moravec's paradox in disguise, where we think that
Starting point is 00:44:29 the cognitively demanding tasks that we do, that we find hard that kind of cause us to think like, oh man, I'm sweating. I'm working so hard. Those are the ones that requires to keep lots of stuff in memory, lots of stuff in our minds. Like if you're solving some big math problem, if you're having a complicated technical conversation on a podcast, like those are things we have to keep all those pieces, all those puzzle pieces in your head. If you're doing a well-rehearsed task, if you are an Olympic swimmer and you're swimming with perfect form and you're like right there in the zone, like people even say like, it's in the moment. It's in the moment, right? Like, It's like you've practiced that so much, you've baked it into your neural network in your brain
Starting point is 00:45:09 that you don't have to think carefully about keeping all that context. Yeah. Right. So it really is just more of its paradox manifesting itself. But that doesn't mean that we don't need the memory. It just means that if we want to match the level of dexterity and physical proficiency that people have, there's other things we should get right first and then gradually go up that stack into the more cognitively demanding areas, into reasoning, into context.
Starting point is 00:45:34 into planning, all that kind of stuff. And that stuff will be important, too. And how physically will... So you have this like trilemma. You have three different things which all take more compute during inference that you want to increase at the same time. You have the inference speed.
Starting point is 00:45:53 And so humans are processing 24 frames a second or whatever it is. We're just like, we can react to things extremely fast. Then you have the context length. And for, I think, the kind of robot which is just like cleaning up your house. I think it has to kind it has to be aware of like things that happened minutes ago or hours ago and how that influences its plan about the next task it's doing and then you have the model size and I guess at least with LLMs we've seen that there's gains from
Starting point is 00:46:21 increasing the amount of parameters and I think currently you have 100 millisecond uh inference speeds you have a second long context and then the model is what a couple billion parameters how many And so each of these, at least two of them, are many orders of magnitude, smaller than what seems to be the human equivalent, right? Like, the model, if a human brain has, like, trillions of parameters, and this has, like, 2 billion parameters, and then if humans are processing, at least as fast as a model, like, actually a decent bit faster,
Starting point is 00:46:53 and we have hours of context, it depends on how you define human context, but hours of context, minutes of context. Sometimes decades of context. Yeah, exactly. So you have to have many order of magnitude improvements, across all of these three things, which seem to oppose each other or like increasing one reduces the amount of computer you can dedicate
Starting point is 00:47:15 towards the other one in inference. So how are we going to solve this? Yeah, well, that's a very big question. Yeah, let's try to unpack this a little bit. I think there's a lot going on in there. One thing that I would say is a really interesting technical problem, and something where we'll see perhaps a lot of really interesting innovation over the next few years, is the question of representation for context. So if you imagine the examples you gave, like if you have a home robot that's doing something where it needs to keep track: as a person, there are certainly some things where you keep track of them very symbolically, like almost in language.
Starting point is 00:47:58 Like, you know, I have my checklist. Like I'm going shopping. And, you know, at least for me, I can literally visualize in my mind my checklist, like, you know, pick up the yogurt, pick up the milk, pick up whatever. And I'm not picturing the milk shelf with the milk sitting there. I'm just thinking, milk, right? But then there are other things that are much more spatial, almost visual. You know, when I was trying to get to your studio, I was thinking, okay, here's what the street looks like, here's what that street looks like, here's what I expect the doorway to look like. So
Starting point is 00:48:30 representing your context in the right form that captures what you really need to achieve your goal and otherwise kind of discards all the unnecessary stuff. I think that's a really important thing. And I think we're seeing the beginnings of that with multimodal models. But I think that multimodality has so much more to it than just like image plus text. And I think that that's a place where there's a lot of room for really exciting innovation. Ooh, do you mean in terms of how we represent? Mm-hmm.
Starting point is 00:48:57 Okay. Yeah, how we represent content, both what happened in the past and also plans, or reasoning, as you might call it in the LLM world, which is what we would like to happen in the future, or intermediate processing stages in solving a task. I think doing that in a variety of modalities, including potentially learned modalities that are suitable for the job, is something that has, I think, enormous potential to overcome some of these challenges. Interesting. Another question I have as we're discussing these tough tradeoffs in terms of inference is comparing it to the human
Starting point is 00:49:30 brain, and figuring out how the human brain is able to have hours, decades of context while being able to act on the order of 10 milliseconds while having 100 trillion parameters, or however you want to count it. And I wonder if the best way to understand what's happening here is that human brain hardware is just way more advanced than the hardware we have in GPUs, or that the algorithms for encoding video information are way more efficient, and maybe it's like some crazy mixture of experts where
Starting point is 00:50:07 the active parameters are also on the order of billions, or some mixture of the two. Basically, if you had to think about why we have these models that are, across many dimensions, orders of magnitude less efficient, is it hardware or algorithms, compared to the brain? Yeah, that's a really good question.
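To put rough numbers on the gap being asked about, here is a minimal back-of-the-envelope sketch. It uses the common ~2·(parameter count) FLOPs-per-token approximation for the dense compute plus a crude attention term; the architecture details (d_model, layer count, tokens per control step) are illustrative assumptions, not figures from the conversation.

```python
# Rough sketch of how inference compute scales with the three axes in tension:
# model size, context length, and control rate. Very approximate: ~2*params
# FLOPs per generated token for the dense matmuls, plus attention reads over
# the KV cache; ignores memory bandwidth, batching, and multi-token decoding.

def flops_per_second(params, ctx_tokens, d_model, n_layers, steps_per_sec):
    dense = 2 * params                            # matmul FLOPs per token
    attn = 4 * n_layers * ctx_tokens * d_model    # attention against the KV cache
    return (dense + attn) * steps_per_sec         # assume one token per control step

# Roughly today's setup as described: ~2B params, ~1s of context, ~10 Hz control.
today = flops_per_second(params=2e9, ctx_tokens=300, d_model=2048,
                         n_layers=24, steps_per_sec=10)

# A hypothetical "human-equivalent" target: far more params, hours of context,
# a faster control rate (all made-up placeholder values).
target = flops_per_second(params=1e12, ctx_tokens=1_000_000, d_model=16384,
                          n_layers=100, steps_per_sec=100)

print(f"today : {today:.2e} FLOP/s")
print(f"target: {target:.2e} FLOP/s  (~{target / today:.0e}x more)")
```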
Starting point is 00:50:29 So I definitely don't know the answer to this. I am not by any means well versed in neuroscience, but if I had to guess and also provide an answer that leans more on things I know, it's something like this. The brain is extremely parallel. It kind of has to be just because of the biophysics. But it's even more parallel than your GPU. Yeah. If you think about how a modern multimodal language model processes the input, if you give it some images and some text, like first it reads in the images, then it reads in the text,
Starting point is 00:51:02 and then proceeds one token at a time to generate the output. It makes a lot more sense to me for an embodied system to have parallel processes. Now, mathematically, you can actually make close equivalences between parallel and sequential stuff. Transformers aren't actually fundamentally sequential; you kind of make them sequential by putting in position embeddings. Transformers are fundamentally very parallelizable things. That's what makes them so great.
Starting point is 00:51:27 So I don't think that, mathematically, this highly parallel thing where you're doing perception and proprioception and planning all at the same time necessarily needs to look that different from a transformer, although its practical implementation will be different. And you could imagine that the system will in parallel think about: okay, here's my long-term memory, here's what I've seen, you know, a decade ago. Here's my short-term kind of spatial stuff. Here's my semantic stuff.
Starting point is 00:51:52 Here's what I'm seeing now. Here's what I'm planning. And all of that can be implemented in a way that uses some very familiar kind of attentional mechanism, but in practice all running in parallel, maybe at different rates, maybe with the more complex things running slower and the faster reactive stuff running faster. I'm sure you've been seeing a bunch of fun images that people have been generating with Google's new image generation model, Nano Banana. My X feed is full of wild images. But you might not realize that this model can also help you do less flashy tasks like restoring historical pictures or even just cleaning up images. For example, I was reading this old paperback because I was prepping to interview Sarah Paine, and it had this really great graph of World War II Allied shipping that I wanted to overlay in the lecture.
Starting point is 00:52:33 Now, in the past, this would have taken one of my editors 20 or 30 minutes to digitize and clean up manually. But now, we just took a photo of the page, dropped it into Nano Banana, and got back a clean version. This was one-shot. But if Nano Banana doesn't nail it on the first attempt, you can just go back and forth with it until you get a result that you're super happy with. We keep finding new use cases for this model. And honestly, this is one of those tools
Starting point is 00:52:56 that just doesn't feel real. Check out the Gemini 2.5 Flash image model, aka Nano Banana, on both Google AI Studio and the Gemini app. All right, back to Sergey. If in five years we have a system which is as robust as a human in terms of interacting with the world,
Starting point is 00:53:14 then what has happened that makes it physically possible to be able to run those kinds of models? To have video information that is streaming in real time, or hours of prior video information somehow being encoded and considered while decoding at a millisecond scale, and with many more parameters. Is it just that Nvidia has shipped much better GPUs, or that you guys have come up with much better encoders and stuff, or what's happened in those five years? I think there are a lot of things to this question. I think certainly there's a really fascinating systems problem. I'm by no means a systems expert, but I would imagine that the right architecture in practice, especially if you want an affordable low-cost system,
Starting point is 00:53:57 would be to externalize at least part of the thinking. You could imagine maybe in the future you'll have a robot where, if your internet connection is not very good, the robot is in kind of a dumber reactive mode, but if you have a good internet connection, then it can be a little smarter. That's pretty cool. But I think there are also research and algorithms things that can help here, like figuring out the right representations,
Starting point is 00:54:20 concisely representing both your past observations but also changes in observation, right? Like, you know, your sensory stream is extremely temporally correlated, which means that the marginal information gain from each additional observation is not the same as the entirety of that observation. Because the image that I'm seeing now is very correlated to the image I saw before, in principle I want to represent it concisely. I can get away with a much more compressed representation than if I represent the images independently. So there's a lot that can be done
Starting point is 00:54:46 on the algorithms side to get this right, and that's really interesting algorithms work. I think there's also a really fascinating systems problem. To be truthful, I haven't gotten to the systems problem, because, you know, you want to implement the system once you sort of know the shape
Starting point is 00:54:58 of the machine learning solution. But I think there's a lot of cool stuff to do there. Maybe you guys need to hire the people who run the YouTube data centers, because they know how to encode video information.
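Since the discussion keeps coming back to how temporally correlated the sensory stream is, here is a toy illustration of that point: the marginal information in each new frame is far smaller than the frame itself. Storing per-frame deltas is just a stand-in for whatever learned representation a real system would use; all numbers are made up.

```python
import numpy as np

# Consecutive observations are highly correlated, so the *marginal* information
# in each new frame is much smaller than the frame itself. Here "compression"
# is simply storing per-frame deltas and dropping near-zero entries.

rng = np.random.default_rng(0)
frame = rng.random((64, 64)).astype(np.float32)

frames = [frame]
for _ in range(99):                       # a slowly changing scene
    frame = frame + 0.01 * rng.standard_normal(frame.shape).astype(np.float32)
    frames.append(frame)

raw_values = sum(f.size for f in frames)

stored = frames[0].size                   # keep the first frame in full
for prev, cur in zip(frames, frames[1:]):
    delta = cur - prev
    stored += int(np.count_nonzero(np.abs(delta) > 0.02))  # only "informative" changes

print(f"raw: {raw_values} values, delta-coded: {stored} values "
      f"(~{raw_values / stored:.0f}x smaller)")
```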
Starting point is 00:55:18 Okay, this is actually an interesting question, which is that with LLMs, of course, theoretically you could run your own model on this laptop or whatever, but realistically what happens is that the largest, most effective models are being run in batches of thousands, millions of users at the same time, not locally. Will the same thing happen in robotics, because of the inherent efficiencies of batching, plus the fact that we have to do this incredibly compute-intensive inference task? And so you don't want to be carrying around, like, you know, $50,000 GPUs per robot or something,
Starting point is 00:55:49 you just want that to happen somewhere else. So in this robotics world, should we just be anticipating something where you need connectivity everywhere, you need robots that have super fast connections, and you're streaming video information
Starting point is 00:56:03 back and forth, right? Or at least video information one way. So does that have interesting implications about how this deployment of robots will actually be instantiated? I don't know, but if I were to guess, I would guess that we'll actually see both. We'll see low-cost systems with off-board inference, and more reliable systems, for example in settings where, like if you have an outdoor robot
Starting point is 00:56:28 or something where you can't rely on connectivity, that are costlier and have onboard inference. A few things I'll say from a technical standpoint that might contribute to understanding this: while a real-time system obviously needs to be controlled in real time, often at high frequency, the amount of thinking you actually need to do for every time step might be surprisingly low. And again, we see this in humans and animals. When we plan out movements, there is definitely a real planning process that happens in the brain. Like if you record from a monkey brain, you will actually find neural correlates of planning. And there is something that happens in advance of a movement, and when that movement actually takes place, the shape of the movement correlates with what happened before the movement.
Starting point is 00:57:18 Like, that's planning, right? So that means that you put something in place, you know, set the initial conditions of some kind of process, and then unroll that process, and that's the movement. And that means that during that movement, you're doing less processing; you kind of batch it up in advance. But you're not entirely open-loop. It's not like you're playing back a tape recorder. You are actually reacting as you go; you're just reacting at a different level of abstraction, a more basic level of abstraction. And again, this comes back to your representations.
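Here is a minimal sketch of the "batch up the thinking in advance, then react cheaply" pattern just described: a slow planner produces a short chunk of waypoints at a low rate, and a fast loop applies cheap feedback between re-plans. The toy dynamics, rates, and gains are all assumptions for illustration.

```python
import numpy as np

# Slow, expensive planning at a low rate; fast, cheap feedback at a high rate.
rng = np.random.default_rng(0)
DT = 0.02          # 50 Hz low-level loop
CHUNK = 25         # re-plan every 25 steps, i.e. the "thinking" runs at 2 Hz

def slow_planner(pos, target):
    """Stands in for big-model inference: emit a chunk of waypoints toward the target."""
    return np.linspace(pos, target, CHUNK + 1)[1:]

def fast_controller(pos, waypoint):
    """Cheap proportional feedback toward the current waypoint."""
    return 4.0 * (waypoint - pos)

pos, target = 0.0, 1.0
for t in range(200):
    if t % CHUNK == 0:
        plan = slow_planner(pos, target)              # pay the planning cost only here
    u = fast_controller(pos, plan[t % CHUNK])
    pos += DT * u + 0.002 * rng.standard_normal()     # toy dynamics plus disturbance
print(f"final position: {pos:.3f} (target {target})")
```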
Starting point is 00:57:44 Figure out which representations are sufficient for planning in advance and then unrolling, and which representations require a tight feedback loop. And for that tight feedback loop, what are you doing feedback on? Like, you know, if I'm driving a vehicle, maybe I'm doing feedback on the position of the lane marker so that I stay straight, and then at a lower frequency, I sort of gauge where I am in traffic. So you have a couple of lectures from a few years back where you say, like, even for robotics, RL is in many cases better than imitation learning.
Starting point is 00:58:11 But so far, the models are exclusively doing imitation learning. I'm curious how your thinking on this has changed, or maybe it hasn't changed, but then what you need to do to get to RL. Like, why can't you do RL yet? Yeah. So the key here is prior knowledge. Yeah. So in order to effectively learn from your own experience, it turns out that it's really, really important to already know something about what you're doing. Otherwise, it takes far too long.
Starting point is 00:58:35 It's just like it takes a person, when they're a child, a very long time to learn very basic things, to learn to write for the first time, for example. Once you already have some knowledge, then you can learn new things very quickly. So the purpose of training the models with supervised learning now is to build out that foundation that provides the prior knowledge so they can figure things out much more quickly later. And again, this is not a new idea. This is exactly what we've seen with LLMs, right? LLMs started off being trained purely with next-token prediction,
Starting point is 00:59:05 and that provided an excellent starting point first for all sorts of synthetic data generation and then for RL. So I think it makes total sense that we would expect basically any foundation model effort to follow that same trajectory where we first build out the foundation essentially in like a somewhat brute force way.
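To make that trajectory concrete, here is a toy sketch of the recipe on a one-dimensional problem: supervised imitation (behavior cloning) builds the prior, and a crude policy-gradient (REINFORCE-style) phase then improves the same policy from its own reward signal. Everything here, the task, the linear policy, the numbers, is an illustrative assumption, not how a real robotic foundation model is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(s, a):
    # The "real" task: the best action is 2.1 * s (unknown to the learner).
    return -(a - 2.1 * s) ** 2

# ---- Phase 1: behavior cloning on near-expert demonstrations ----
states = rng.uniform(-1, 1, 1000)
demo_actions = 2.0 * states + 0.05 * rng.standard_normal(1000)   # slightly imperfect demos
w = float(np.sum(states * demo_actions) / np.sum(states ** 2))   # least-squares fit
print(f"after imitation: w = {w:.3f}")

# ---- Phase 2: RL fine-tuning with a score-function (REINFORCE) update ----
sigma, lr, baseline = 0.1, 0.02, 0.0
for _ in range(5000):
    s = rng.uniform(-1, 1)
    a = w * s + sigma * rng.standard_normal()               # stochastic policy
    r = reward(s, a)
    baseline += 0.01 * (r - baseline)                       # running-average baseline
    w += lr * (r - baseline) * (a - w * s) / sigma**2 * s   # grad log-prob times advantage
print(f"after RL:        w = {w:.3f}  (optimum is 2.1)")
```

The point of the sketch is only that phase 2 starts from a policy that is already close, which is what makes the RL step cheap.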
Starting point is 00:59:22 And the stronger that foundation gets, the easier it is to then make it even better with much more accessible training. In 10 years, will the best model for knowledge work also be a robotics model, or have, like, an action expert attached to it? And the reason I ask is, so far we've seen advantages
Starting point is 00:59:40 from using more general models for things. And will robotics fall into this bucket where we will just have the model which does everything, including physical work and knowledge work? Or do you think they'll continue to stay separate? I really hope that they will actually be the same. And, you know, obviously I'm extremely biased. I love robotics. I think it's very fundamental to AI.
Starting point is 01:00:01 But I'm optimistic that it's actually the other way around: that the robotics element of the equation will make all the other stuff better. And there are two reasons for this that I could tell you about. One has to do with representations and focus. So, what I said before with video prediction models: if you just want to predict everything that happens, it's very hard to figure out what's relevant. If you have the focus that comes from actually trying to do a task, now that acts to structure how you see the world in a way that allows you to more fruitfully utilize the other signals.
Starting point is 01:00:37 That could be extremely powerful. The second one is that understanding the physical world at a very deep, fundamental level, at a level that goes beyond just what we can articulate with language, can actually help you solve other problems. And we experience this all the time. Like when we talk about abstract concepts, we say, this company has a lot of momentum. Yeah. Right? We'll use social metaphors to describe inanimate objects, like, my computer hates me. We experience the world in a particular way, and our subjective experience shapes how we think about it in very profound ways,
Starting point is 01:01:11 and then we use that as a hammer to basically hit all sorts of other nails that are far too abstract to handle any other way. I guess there might be other considerations that are relevant to physical robots, in terms of inference speed and model size, et cetera, which might be different than the considerations for knowledge work. But then maybe that doesn't change anything, maybe it's still the same model, and you can just serve it in different ways. And the advantages of co-training are high enough that... yeah, I'm wondering, in five years, if I'm using a model to code for me, does it also know how to do robot stuff? And maybe the advantages of co-training on robotics are high enough that it's worth it.
Starting point is 01:01:41 Well, and I should say that coding is probably the pinnacle of abstract knowledge work, in the sense that, just by the mathematical nature of computer programming, it's an extremely abstract activity, which is why people struggle
Starting point is 01:02:01 with it so much. I'm a bit confused about why simulation doesn't work better for robots. If I look at humans, smart humans do a good job of, if they're intentionally trying to learn, noticing what about the simulation is similar to real life, paying attention to that, and learning from that. So if you have pilots who are learning in simulation, or F1 drivers who are learning in simulation, should we expect that as robots get smarter, they will also be able to learn more things from simulation? Or is this cursed and we need real-world data forever? This is a very subtle question.
Starting point is 01:02:38 Your example with the airplane pilot using simulation is really interesting. But something to remember is that when a pilot is using a simulator to learn to fly an airplane, they're extremely goal directed. So their goal in life is not to learn to use a simulator. Their goal in life is to learn to fly the airplane. They know there will be a test afterwards and they know that eventually they'll be in charge of like a few hundred passengers and they really need to not crash that thing. And when we train models on data from multiple different domains, the models don't know that they're
Starting point is 01:03:09 supposed to solve a particular task. They just see, hey, here's one thing I need to master, here's another thing I need to master. So maybe a better analogy there is if you're playing a video game where you can fly an airplane, and then eventually someone puts you in the cockpit of a real one. It's not that the video game is useless, but it's not the same thing. And if you're trying to play that video game and your goal is to really master the video game, you're not going to go about it in quite the same way. Oh, can you do some kind of meta-RL on this? This is almost identical, actually, to this really interesting paper you wrote in 2017,
Starting point is 01:03:43 where maybe the loss function is not how well it does at a particular video game or particular simulation, but how well being trained on different video games makes it better at some other downstream task. I did a terrible job explaining that, but... I understand what you mean. Yeah, yeah. Okay, can you do a better job explaining what I was trying to say? So I think what you were trying to say is basically that, well, maybe if we have a really smart model that's doing meta-learning, perhaps it can figure out that its performance on a downstream problem, a real-world problem, is increased by doing something in a simulator. And then specifically make that the loss function, right?
Starting point is 01:04:15 Yeah, that's right. But here's the thing with this. There's a set of these ideas that are all going to be something like: train to make it better on the real thing by leveraging something else. And the key linchpin for all of that is the ability to train it to be better on the real thing. The thing is, I actually suspect in reality we might not even need to do something quite so explicit, because meta-learning is emergent, as you pointed out before, right?
Starting point is 01:04:40 Like LLMs essentially do a kind of meta-learning via in-context learning. I mean, we can debate as to how much that's learning or not. But the point is that large, powerful models trained on the right objective on real data get much better at leveraging all the other stuff. And I think that's actually the key. And coming back to your airplane pilot, the airplane pilot is trained on a real-world objective. Their objective is to be a good airplane pilot, to be successful, to have a good career. And all of that kind of propagates back into the actions they take in leveraging all these other data sources.
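Here is a toy version of the idea being discussed: treat performance on the real task as the outer objective, and pick how much to lean on a (biased) simulator purely by that measure. The gains, step counts, and the single "how much sim" knob are all made-up assumptions to keep the sketch tiny.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_GAIN, SIM_GAIN = 2.0, 2.3            # the simulator is systematically biased

def train_on_sim(steps, lr=0.1):
    """Inner loop: fit a linear policy to the simulator's 'expert' actions."""
    w = 0.0
    for _ in range(steps):
        s = rng.uniform(-1, 1)
        w -= lr * 2 * (w * s - SIM_GAIN * s) * s      # gradient step on squared error
    return w

def real_performance(w, n=2000):
    """Outer objective: reward on the real task."""
    s = rng.uniform(-1, 1, n)
    return float(-np.mean((w * s - REAL_GAIN * s) ** 2))

# Outer loop: choose the amount of simulator training by its real-world payoff.
candidates = [0, 10, 30, 100, 1000]
scores = {k: round(real_performance(train_on_sim(k)), 4) for k in candidates}
best = max(scores, key=scores.get)
print(scores)
print(f"simulator steps chosen by the real-task objective: {best}")
```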
Starting point is 01:05:10 So what I think is actually the key here, to leverage your auxiliary data sources, including simulation, is to build the right foundation model, one that is really good, that has those emergent abilities. And to your point, to get really good like that, it has to have the right objective. Now, we know how to get the right objective out of real-world data. Maybe we can get it out of other things, but that's harder right now. And I think that, again, we can look to the examples of what happened in other fields. Like these days, if someone trains an LLM for solving complex problems, they're using lots of synthetic data. But the reason they're able to leverage that synthetic data effectively is because they have this starting point that has trained on lots of real data and that kind of gets it.
Starting point is 01:05:50 And once it gets it, then it's more able to leverage all this other stuff. Right. So I think, perhaps ironically, the key to leveraging other data sources, including simulation, is to get really good at using real data, understand what's up with the world, and then you can fruitfully use all this stuff. So once we have this, like in 2030 or 2035, basically the sci-fi world, are you optimistic about the ability of true AGIs to build simulations in which they are rehearsing skills that no human or AI has ever had a chance to practice before? Say, you know, they need to practice to be astronauts because we're building the Dyson sphere, and they can just do that in simulation. Or will the issue with simulation continue to be an issue regardless of how smart models get? So here's what I would say: deep down, at a very fundamental level, the synthetic experience that you create yourself doesn't allow you to learn more about the world.
Starting point is 01:06:46 It allows you to rehearse things. It allows you to consider counterfactuals. But somehow information about the world needs to get injected into the system. And I think the way you pose this question actually elucidates this very nicely, because in robotics, classically, people have often thought about simulation as a way to inject human knowledge: a person knows how to write down differential equations, they can code it up, and that gives the robot more knowledge than it had before. But I think that increasingly what we're learning from experiences in other fields,
Starting point is 01:07:18 from how like the video generation stuff goes, from synthetic data for LLMs, is that actually probably the most powerful way to create synthetic experience is from a really good model because the model probably knows more than a person does about those fine-grained details.
Starting point is 01:07:31 But then, of course, where does that model get the knowledge? From experiencing the world. So in a sense, what you said is, I think, actually quite right, in that a very powerful AI system can simulate a lot of stuff, but also at that point,
Starting point is 01:07:45 it kind of almost doesn't matter, because viewed as a black box, what's going on with that system is that information comes in and capability comes out. And whether the way it processed that information is by imagining some stuff and simulating, or by some model-free method, is kind of irrelevant to understanding its capabilities. Do you have a sense of what the equivalent is in humans? Like, whatever we're doing when we're daydreaming or sleeping? I don't know if you have some sense of what this auxiliary thing we're doing is, but if you had to make an ML analogy for it, what is it?
Starting point is 01:08:14 Well, yeah. I mean, certainly when you sleep, your brain does stuff that looks an awful lot like what it does when it's awake, that looks an awful lot like playing back experience, or perhaps generating new, statistically similar experience. And so I think it's very reasonable to guess that perhaps simulation through a learned model is part of how your brain figures out counterfactuals, basically. Yeah. But something that's kind of even more fundamental than that is that optimal decision making, at
Starting point is 01:08:47 its core, regardless of how you do it, requires considering counterfactuals. You basically have to ask yourself, if I did this instead of that, would it be better? And you have to answer that question somehow. And whether you answer that question by using a learned simulator or whether you answer that question by using a value function or something like that, by using a reward model, in the end, it's kind of all the same. Like, as long as you have some mechanism for considering counterfactuals and figuring out which counterfactual is better, you've got it. Yeah. So I like thinking about it this way, because it kind of simplifies things. It tells us that the key is not necessarily
Starting point is 01:09:18 to do a really good simulation. The key is to figure out how to answer counterfactuals. Yeah, interesting. So, stepping back to the big picture again, the reason I'm interested in getting a concrete understanding of when this robot economy will be deployed is because it's actually pretty relevant to understanding how fast AGI will proceed,
Starting point is 01:09:35 in the sense that, well, there's obviously the data flywheel, but also if you just extrapolate out the CapEx for AI. Suppose by 2030, you know, people have different estimates, but many people have estimates in the hundreds of gigawatts: 100, 200, 300 gigawatts. And then you can just crunch numbers on, if you have 100 or 200 gigawatts deployed by 2030, the marginal CapEx per year is trillions of dollars. It's like $2 to $4 trillion a year. And that corresponds to actual data centers you have to build, actual chip foundries you have to build, actual solar panel factories you have to build. And I'm very curious about whether by this time, by 2030,
Starting point is 01:10:20 if the big bottleneck we have is just people to lay out the solar panels next to the data center or assemble the data center, whether the robot economy will be mature enough to help significantly in that process. That's cool. So you're basically saying, how much concrete should I buy now to build the data center, so that by 2030 I can power all the robots? Yeah, yeah. That is a more ambitious way of thinking about it than has occurred to me. But it's a cool question.
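For reference, the arithmetic behind the numbers quoted a moment ago; the per-gigawatt cost is simply the figure implied by those same speculative estimates, not an independent forecast.

```python
# ~$2-4T/yr of marginal CapEx for ~100-200 GW/yr of AI buildout works out to
# roughly $20B per built-out gigawatt (illustrative only, backed out from the
# conversation's own numbers).
cost_per_gw = 20e9
for gw_per_year in (100, 200):
    capex = gw_per_year * cost_per_gw
    print(f"{gw_per_year} GW/yr  ->  ${capex / 1e12:.0f}T of CapEx per year")
```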
Starting point is 01:10:48 I mean, the good thing, of course, is that the robots can help you build that stuff. Right. But will they be able to by then? There's the non-robotic stuff, which will also mandate a lot of CapEx, and then there's the robot stuff, where you actually have to build the robot factories, et cetera. But either way, there will be this industrial explosion across the whole stack, and how much will robotics be able to speed that up or make it possible?
Starting point is 01:11:15 I think that we have a tendency sometimes to think about robots as, like, mechanical people. But that's not the case, right? Like, people are people and robots are robots. Like the better analogy for the robot, it's like your car or a bulldozer. Like, it has much lower maintenance requirements. You can put them into all sorts of weird places, and they don't have to look like people at all. You can make a robot that's, you know, 100 feet tall. You can make a robot that's tiny.
Starting point is 01:11:42 So I think that if you have the intelligence to power very heterogeneous robotic systems, you can probably actually do a lot better than just having like, you know, mechanical people in effect. And it can be a big productivity boost for the real people. And it can allow you to solve problems that are very difficult to solve now. Yeah. You can, you know, for example, I'm not an expert on data centers by any means, but you could build your data centers in a very remote location because the robots don't have to worry about whether there's like a shopping center nearby.
Starting point is 01:12:13 And then, do you have a sense of... so there's the question of where the software will be, and then there's the question of how many physical robots we will have. So how many of the kinds of robots you're training at physical intelligence, like these tabletop arms, are there physically in the world? How many will there be by 2030? How many will be needed? I mean, these are tough questions, like how many we need for that.
Starting point is 01:12:34 They are very tough questions. And also, you know, economies of scale in robotics so far have not functioned the same way that they probably would in the long term, right? Just to give you an example, when I started working in robotics in 2014, I used a very nice research robot called a PR2 that cost $400,000 to purchase. When I started my research lab at UC Berkeley, I bought robot arms that were $30,000. For the kind of robots that we are using now at physical intelligence, each arm costs about $3,000, and we think they can be made for a small fraction of that. So these things...
Starting point is 01:13:12 What is the cause of that learning rate? Well, there are a few things. So one, of course, has to do with economies of scale. Custom-built, high-end research hardware, of course, is going to be much more expensive than more productionized hardware. And then, of course, there's a technological element: as we get better at building actuated machines, they become cheaper.
Starting point is 01:13:36 But there's also a software element, which is that the smarter your AI system gets, the less you need the hardware to satisfy certain requirements. So traditional robots in factories need to make motions that are highly repeatable, and that requires a degree of precision and robustness that you don't need if you can use cheap visual feedback. So AI also makes robots more affordable and lowers the requirements on the hardware. Interesting. Okay, so assuming the learning rate continues, do you think it will cost hundreds of dollars by the end of the decade to buy mobile arms?
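A rough extrapolation of what the question is implicitly computing, taking the quoted $400,000 (2014) and roughly $3,000 (today) arm costs at face value and assuming, very speculatively, that the trend simply continues:

```python
# ~$400k in 2014 to ~$3k today implies roughly a 10x cost drop every five years.
years_elapsed = 10                                   # treat "today" as ~ten years after 2014
rate = (3_000 / 400_000) ** (1 / years_elapsed)      # per-year cost multiplier
print(f"implied decline: ~{(1 - rate) * 100:.0f}% per year")
print(f"naive 2030 estimate: ${3_000 * rate ** 6:,.0f} per arm")
```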
Starting point is 01:14:17 That is a great question for my co-founder, Adnan Esmail, who is probably the best person, arguably in the world, to ask that question of. But certainly the drop in costs that I've seen has surprised me year after year. Okay. And how many arms are there, probably, in the world? Is it more than a million, less than a million? So I don't know the answer to that question, but it's also a tricky question to answer, because not all arms are made equal. Like, arguably the kind of robots that are assembling cars in a factory
Starting point is 01:14:38 are just not the right kind to think about. So the kind you want to train on? Very few, because they are not currently commercially deployed. Yeah. Unlike the factory robots. So like less than 100,000? I don't know, but probably, yeah. Okay.
Starting point is 01:14:52 And we want billions of robots, or at least millions of robots, if you're just thinking about the industrial explosion that you need to have this explosive AI growth. Not only do you need the arms, but then you need something that can move around. Basically, I'm just trying to think about whether that will be possible by the time you need a lot more labor to power this AI boom. Well, you know, economies are very good at filling demand when there's a lot of demand, right? Like, how many iPhones were in the world in 2001, right? That's right. So I think there's definitely a challenge there.
Starting point is 01:15:33 And I think it's something that is worth thinking about. And a particularly important question for researchers like myself is, how can AI affect how we think about hardware? Right. Because there are some things that I think are going to be really, really important, like you probably want your thing to not break all the time. Yeah. There are some things that are firmly in the category of question marks, like, how many fingers do we need?
Starting point is 01:15:56 Like you said yourself before, you were kind of surprised that a robot with two fingers can do a lot. Okay, maybe you still want more than that, but still, finding the bare minimum that still lets you have good functionality, that's important. That's in the question mark box. And there are some things that I think we probably don't need. Like, we probably don't need the robot to be super duper precise, because we know that feedback can compensate for that. So I think my job, as I see it right now, is to figure out what's sort of the minimal package we can get away with. And I really like to think about robots in terms of the minimal package, because I don't think that we will have the one ultimate robot, sort of the mechanical person, basically.
Starting point is 01:16:32 I think what we will have is a bunch of things that good, effective robots need to satisfy, just like good smartphones need to have a touchscreen, that's something that we all kind of agreed on, and then a bunch of other stuff that's kind of optional depending on the need, depending on the cost point, et cetera. And I think there will be a lot of innovation where, once we have very capable AI systems that can be plugged into any robot to endow it with some basic level of intelligence, then lots of different people can innovate on how to get the robot hardware to be optimal for each niche it needs to fill. In terms of manufacturers, is there some Nvidia of robotics? Not right now. Maybe there will be someday.
Starting point is 01:17:07 I would really like, maybe I'm being idealistic, but I would really like to see a world where there's a lot of heterogeneity in robots. What is the biggest bottleneck in the hardware today, as somebody who designs the algorithms that run on it? It's a tough question to answer, mainly because things are changing so fast. I think that, to me, the things that I spend a significant amount of time thinking about on the hardware side are really reliability and cost. It's not that I'm that worried about cost itself; it's just that cost translates to number of robots, which translates to amount of data. And being an ML person, I really like having lots of data.
Starting point is 01:17:42 So I really like having robots that are low cost, because then I can have more of them and therefore more data. And reliability is important, more or less for the same reason. Yeah. But I think it's something that we'll get more clarity on as things progress, because basically the AI systems of today are not pushing the hardware to the limit. So as the AI systems get better and better, the hardware will get pushed to the limit, and then we'll hopefully have a much better answer to your question. Okay, so this is a question I've had for a lot of guests, which is that if you go through any layer of this AI explosion, you find that a bunch of the actual supply chain is being manufactured in China.
Starting point is 01:18:25 Other than chips, obviously. But then, you know, if you talk about data centers, you're like, oh, all the wafers for solar panels and a bunch of the cells and modules, et cetera, are manufactured in China, and then you just go through the supply chain. And then obviously, robot arms are being manufactured in China. And so if you live in this world where the hardware is just incredibly valuable to ramp up manufacturing of, because each robot can produce some fraction of the value that a human worker can produce, and not only is that true, but the value of human workers, or any kind of worker, has just tremendously skyrocketed, because we just need tons of bodies to lay out the tens of thousands of solar farms and data centers and foundries and everything. In this boom world, the big bottleneck there is just, how many robots can you physically
Starting point is 01:19:21 deploy? How many can you manufacture? Because you guys are going to come up with the algorithms and now we just need the hardware. And so this is a question I've asked many guests, which is that, like, if you look at the part of the chain that you are observing, what is the reason that China doesn't win by default, right? If they're producing all the robots and you come up with the algorithms that make those robots super valuable, why don't they just win by default? Yeah. So this is a very complex question. I'll start with the broader themes and then try to drill a little bit into the details.
Starting point is 01:19:54 So one broader theme here is that if you want to have an economy where you get ahead by having a highly educated workforce, by having people that have high productivity, meaning that for each person's hour of work, lots of stuff gets done, automation is really, really good, because automation is what multiplies the amount of productivity that each person has. Again, same as with LLM coding tools: LLM coding tools amplify the productivity of a software engineer. Robots will amplify the productivity of basically everybody that is doing work. Now, that's kind of a final state, a desirable final state. Now, there's a lot of complexity in how you get to that state, how you make that an appealing journey for society, how you navigate the geopolitical dimension of that. All of that stuff is actually pretty complicated, and it requires making a number of really
Starting point is 01:20:55 good decisions, like, you know, good decisions about investing in a balanced robotics ecosystem, supporting both software innovation and hardware innovation. I don't think any of those are insurmountable problems. It just requires a degree of long-term vision and the right kind of balance of investment. But what makes me really optimistic about this is that final state. I think we can all agree that in the United States, we would like to have the kind of society where people are highly productive, where we have highly educated people doing high-value work. And because that end state seems to me very compatible with automation, with robotics, at some level there should be a lot of incentive to get to that state. And then from there, we have to solve for all the details that will help us get there.
Starting point is 01:21:50 And that's not easy. I think there are a lot of complicated decisions that need to be made in terms of private industry, in terms of investment, in terms of the political dimension. But I'm very optimistic about it, because it seems to me like the light at the end of the tunnel is in the right direction. I mean, yeah, I guess there's a different question, which is that if the value is sort of bottlenecked by hardware, and so you just need to produce more hardware, what is the path by which hundreds of millions or billions of robots are being manufactured in the U.S. or with allies? I don't know how to approach that question, but it seems like a different question than, okay, well, what is the impact on human wages or something?
Starting point is 01:22:29 So, again, for the specifics of how we make that happen, I think that's a very long conversation that I'm probably not the most qualified to speak to, but I think that in terms of the ingredients, the ingredient here that I think is important is that robots help with physical things, physical work. and if producing robots is itself physical work, then getting really good at robotics should help with that.
Starting point is 01:22:56 It's a little circular, of course, and as with all circular things, you have to like kind of bootstrap it and try to get that engine going. But it seems like it is an easier problem to address than, for example, the problem of digital devices where work goes into creating computers, phones, etc., but the computers and phones don't themselves help with the work.
Starting point is 01:23:18 Right. I guess feedback loops go both ways. They can help you or they can help others. And it's a positive-sum world, so it's not necessarily bad that they help others. But to the extent that a lot of the things which would go into this feedback loop, the subcomponent manufacturing and supply chain, already exist in China, it seems like the stronger feedback loop would exist in China. And then there's a separate discussion, like maybe that's fine, maybe that's good,
Starting point is 01:23:44 and maybe they'll continue exporting this to us. But I just find it notable that whenever I talk to a guest about different things, it's like, oh yeah, within a few years the key bottleneck to every single part of the supply chain here will be something that China is, like, the 80% world supplier of.
Starting point is 01:24:03 Well, yeah, and this is why I said before that I think something really important to get right here is a balanced robotics ecosystem. I think AI is tremendously exciting, but I think we should also recognize that getting AI right is not the only thing that we need to do. And we need to think about how to balance our priorities, our investment, the kind of
Starting point is 01:24:24 things that we spend our time on. Just as an example, at physical intelligence, we do take hardware very seriously, actually. We build a lot of our own things, and we want to have a hardware roadmap alongside our AI roadmap. But I think that, you know, that's just us. I think that for the United States, for, you know, arguably for human civilization as a whole, like I think we need to think about these problems very holistically. Yeah.
Starting point is 01:24:52 And I think it is easy to get distracted sometimes when there's a lot of excitement and a lot of progress in one area like AI. And we are tempted to lose track of other things, including things you've said, like, hey, you know, there's a hardware component. There's an infrastructure component with compute and things like that. So I think that in general it's good to have a more holistic view of these things. I wish we had, you know, more holistic conversations about that sometimes. From the perspective of society as a whole, how should we be thinking about the advances in robotics and knowledge work? I think it's basically that society should be planning for full automation. There will be a period in which people's work is way more valuable, because there's this huge boom in the economy from building all these data centers and building all these factories.
Starting point is 01:25:35 But then eventually, humans can do things with their bodies and we can do things with our minds; there's not some secret third thing. So what should society be planning for? It should be full automation of humans. And society will also be much wealthier, so presumably there are ways to do this such that everybody is much better off than they are today. But then the end state, the light at the end of the tunnel, is full automation plus a super wealthy society, with some redistribution or whatever way to figure that out, right? I don't know if you disagree with that characterization.
Starting point is 01:26:07 So I think at some level that's a very reasonable way to look at things. But I think that if there's one thing that I've learned about technology, it's that it rarely evolves quite the way that people expect. And sometimes the journey is just as important as the destination.
Starting point is 01:26:24 So I think it's actually very difficult to plan ahead for an end state. But I think directionally, what you said makes a lot of sense. And I do think that it's very important for us collectively to think about how to structure the world around us in a way that is amenable to greater
Starting point is 01:26:40 and greater automation across all sectors. But I think we should really think about the journey just as much as the destination because things evolve in all sorts of unpredictable ways and we'll find automation showing up in all sorts of places, probably not the places we expect first.
Starting point is 01:26:56 So, you know, I think the constant here that I think is really important is that education is really, really valuable. Yeah. Education is the best buffer somebody has against the negative effects of change. So if there is one single lever that we can pull collectively as a society, it's more education. Is that true? I mean, Moravec's paradox suggests that the things which benefit most from education for humans might be the easiest to automate, because it's really easy to educate AI.
Starting point is 01:27:27 You know, you can throw the textbooks that would take you eight years of grad school to get through at them in an afternoon. Well, what education gives you is flexibility. So it's less about the particular facts you know than it is about your ability to acquire skills, to acquire understanding. So it has to be good education. Right. Okay. Sergey, thank you so much for coming on the podcast. Thank you.
Starting point is 01:27:51 Yeah, this was intense. Those were tough questions. I hope you enjoyed this episode. If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it. Send it to your friends, your group chats, Twitter, wherever else. Just let the word go forth.
Starting point is 01:28:07 Other than that, it's super helpful if you can subscribe on YouTube and leave a five-star review on Apple Podcasts and Spotify. Check out the sponsors in the description below. If you want to sponsor a future episode, go to dwarkesh.com/advertise. Thank you for tuning in. I'll see you on the next one.
