Dwarkesh Podcast - Andrej Karpathy — AGI is still a decade away

Episode Date: October 17, 2025

The Andrej Karpathy episode. During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self-driving took so long to crack, and what he sees as the future of education. It was a pleasure chatting with him.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox helps you get data that is more detailed, more accurate, and higher signal than you could get by default, no matter your domain or training paradigm. Reach out today at labelbox.com/dwarkesh

* Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our accounts, cash flows, AR, and AP all in one place. Apply online in minutes at mercury.com

* Google’s Veo 3.1 update is a notable improvement to an already great model. Veo 3.1’s generations are more coherent and the audio is even higher-quality. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting https://gemini.google

Timestamps

(00:00:00) – AGI is still a decade away
(00:29:45) – LLM cognitive deficits
(00:40:05) – RL is terrible
(00:49:38) – How do humans learn?
(01:06:25) – AGI will blend into 2% GDP growth
(01:17:36) – ASI
(01:32:50) – Evolution of intelligence & culture
(01:42:55) – Why self-driving took so long
(01:56:20) – Future of education

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Transcript
Starting point is 00:00:00 Today, I'm speaking with Andrej Karpathy. Andrej, why do you say that this will be the decade of agents and not the year of agents? Well, first of all, thank you for having me here. I'm excited to be here. So the quote that you've just mentioned, it's the decade of agents. That's actually a reaction to an existing, pre-existing quote, I should say, where I think some of the labs, I'm not actually sure who said this, but they were alluding to this being the year of agents with respect to LLMs
Starting point is 00:00:24 and how they were going to evolve. And I think I was triggered by that because I feel like there's some overprediction going on in the industry. And in my mind, this is really a lot more accurately described as the decade of agents. And we have some very early agents that are actually extremely impressive and that I use daily, you know, Claude and Codex and so on. But I still feel like there's so much work to be done. And so I think my reaction is like, we'll be working with these things for a decade.
Starting point is 00:00:49 They're going to get better. And it's going to be wonderful. But I think I was just reacting to the timelines, I suppose, of the implication. What do you think will take a decade to accomplish? What are the bottlenecks? Well, actually making it work. So in my mind, I mean, when you're talking about an agent, I guess, or what the labs have in mind and what maybe I have in mind as well,
Starting point is 00:01:09 is you should think of it almost like an employee or like an intern that you would hire to work with you. So for example, you work with some employees here. When would you prefer to have an agent like Claude or Codex do that work? Currently, of course they can't. What would it take for them to be able to do that? Why don't you do it today? And the reason you don't do it today is because they just don't work.
Starting point is 00:01:26 So, like, they don't have enough intelligence. They're not multimodal enough to do computer use and all this kind of stuff. And they don't do a lot of the things that you've alluded to earlier. You know, they don't have continual learning. You can't just tell them something and they'll remember it. And they're just cognitively lacking and it's just not working. And I just think that it will take about a decade to work through all of those issues.
Starting point is 00:01:44 Interesting. So as a professional podcaster and a viewer of AI from afar, it's easy for me to identify, like, oh, here's what's lacking. Continual learning is lacking or multimodality is lacking. But I don't really have a good way of trying to put a timeline on it. Like if somebody's like, how long will continual learning take? There's no like prior I have about like this is a project that should take five years, 10 years, 50 years. Why a decade?
Starting point is 00:02:13 Why not one year? Why not 50 years? Yeah, I guess this is where you get into like a bit of, I guess, my own intuition a little bit. And also just kind of doing a bit of an extrapolation with respect to my own experience in the field. So I guess I've been in AI for almost two decades. I mean, it's going to be maybe 15 years or so, not that long. You had Richard Sutton here who was around, of course, for much longer. But I do have about 15 years of experience of people making predictions of seeing how they actually turned out.
Starting point is 00:02:39 And also, I was in the industry for a while, and I was in research, and I've worked in the industry for a while. So I guess I kind of have just a general intuition that I have left from that. And I feel like the problems are tractable. They're surmountable. Yeah. But they're still difficult. And if I just average it out, it just kind of feels like a decade, I guess, to me. This is actually quite interesting.
Starting point is 00:02:59 I want to hear not only the history, but what people in the room felt was about to happen at various different breakthrough moments. What were the ways in which their feelings were either overly pessimistic or overly optimistic? Yeah. Should we just go through each of them one by one? Yeah. I mean, that's a giant question because, of course, you're talking about 15 years of stuff that happened. I mean, AI is actually like so wonderful because there have been a number of, I would say,
Starting point is 00:03:23 seismic shifts where, like, the entire field has sort of suddenly looked a different way, right? And I guess I've maybe lived through two or three of those. And I still think there will continue to be some because they come with some kind of like almost surprising irregularity. Well, when my career began, of course, like when I started to work on deep learning, when I became interested in deep learning, this was just kind of like by chance of being right next to Geoff Hinton at the University of Toronto. And Geoff Hinton, of course, is kind of like the godfather figure of AI. And he was training all these neural networks and I thought it was incredible and interesting. But this was not like the main thing
Starting point is 00:03:53 that everyone in AI was doing by far. This was a niche little subject on the side. That's kind of maybe like the first, like, dramatic sort of seismic shift that came with the AlexNet and so on. I would say, like, AlexNet sort of reoriented everyone and everyone started to train neural networks, but it was still like very, like, per task, per specific task.
Starting point is 00:04:10 So maybe I have an image classifier or I have a neural machine translator or something like that. And people became very slowly, actually interested in basically kind of agents, I would say. And people started to think, okay, well, maybe we have a checkmark next to the visual cortex or something like that. But what about the other parts of the brain?
Starting point is 00:04:26 How can we get an actual, like, full agent or full entity that can actually interact in the world? And I would say the Atari sort of deep reinforcement learning shift in 2013 or so was part of that early effort of agents, in my mind, because it was an attempt to try to get agents that not just perceive the world, but also take actions and interact and get rewards from environments. And at the time, this was Atari games, right?
Starting point is 00:04:47 And I kind of feel like that was a misstep, actually. And it was a misstep that actually, even the early OpenAI that I was a part of, of course, kind of adopted, because at that time the zeitgeist was reinforcement learning environments, games, game playing, beat games, get lots of different types of games, and OpenAI was doing a lot of that. So that was maybe like another, like, prominent part of, I would say, AI where maybe for two or three or four years, everyone was doing reinforcement learning on games. And basically, that was a little bit of a misstep. And what I was trying to do at OpenAI actually is like I was always a little bit suspicious of games as being like this thing that would actually lead to AGI because in my mind you want something like an accountant or like something that's actually interacting with the real world. And I just didn't see how games kind of like add up to it. And so my project at OpenAI, for example, was within the scope of the Universe project on an agent that was using keyboard and mouse to operate webpages. And I really wanted to have something that like interacts with, you know, the actual digital world that can do knowledge work.
Starting point is 00:05:46 And it just so turns out that this was extremely early, way too early, so early that we shouldn't have been working on that, you know, because if you're just stumbling your way around and keyboard mashing and mouse-clicking and trying to get rewards in these environments, your reward is too sparse and you just won't learn and you're going to burn a forest of compute and you're never actually going to get something off the ground. And so what you're missing is this power of representation in the neural network.
Starting point is 00:06:12 And so, for example, today people are training those computer-using agents, but they're doing it on top of a large language model. And so you actually have to get the language model first. You have to get the representations first. And you have to do that by all the pre-training and all the LLM stuff. So I kind of feel like maybe loosely speaking, it was like people keep maybe trying to get the full thing too early a few times where people really try to go after agents too early, I would say,
Starting point is 00:06:34 and that was Atari and Universe and even my own experience. And you actually have to do some things first before we sort of get to those agents. And maybe now the agents are a lot more competent, but maybe we're still missing. sort of some parts of that stack. But I would say maybe those are like the three, like, major buckets of what people were doing, training neural nets per tasks, trying to the first round of agents, and then maybe the LLMs and actually seeking the representation power of the neural networks
Starting point is 00:06:59 before you tack on everything else on top. Interesting. Yeah, I guess if I were to steelman the sort of Sutton perspective, it would be that humans actually can just take on everything at once, right? Even animals can take on everything at once, right? Animals are maybe a better example because they don't even have the scaffold of language. They just get thrown out into the world and they just have to make sense of everything
Starting point is 00:07:18 without any labels and the vision for AGI then should just be something which just looks at sensory data, looks at the computer screen, and it just like figures out what's going on from scratch. I mean, if a human was put in a similar situation, they would be trained from scratch. But I mean, this is like a human growing up or an animal growing up,
Starting point is 00:07:35 so why shouldn't that be the vision for AI rather than like this thing where we're doing millions of years of training? I think that's a really good question and I think I mean, so Sutton was on your podcast, and I saw the podcast, and I had a write-up about that podcast almost that gets into a little bit of how I see things. And I kind of feel like I'm very careful to make analogies to animals because they came about by a very different optimization process.
Starting point is 00:07:59 Animals are evolved, and they actually come with a huge amount of hardware that's built in. And when, for example, my example in the post was the zebra, a zebra gets born, and a few minutes later, it's running around and following its mother. That's an extremely complicated thing to do. That's not reinforcement learning. That's something that's baked in. And evolution obviously has some way of encoding the weights of our neural nets in ATCGs. And I have no idea how that works, but it apparently works.
Starting point is 00:08:24 So I kind of feel like brains just came from a very different process. And I'm very hesitant to take inspiration from it because we're not actually running that process. So in my post, I kind of said, we're not actually building animals. We're building ghosts or spirits or whatever people want to call it. because we're not doing training by evolution. We're doing training by basically imitation of humans and the data that they've put on the internet. And so you end up with these like sort of ethereal spirit entities
Starting point is 00:08:54 because they're fully digital and they're kind of mimicking humans. And it's a different kind of intelligence. Like if you imagine a space of intelligences, we're starting off at a different point almost. We're not really building animals. But I think it's also possible to make them a bit more animal-like over time. And I think we should be doing that.
Starting point is 00:09:08 And so I kind of feel like, sorry, just I guess one more point is, I do feel like Sutton basically has a very, like his framework is like we want to build animals. And I actually think that would be wonderful. If we can get that to work, that would be amazing. If there was a single, like, algorithm that you can just, you know, run on the internet and it learns everything. That would be incredible. I almost suspect that I'm not actually sure that it exists. And that's certainly actually not what animals do.
Starting point is 00:09:32 Because animals have this outer loop of evolution. Right. And a lot of what looks like learning is actually a lot more maturation of the brain. and I think that there's actually very little reinforcement learning for animals. And I think a lot of the reinforcement learning is actually more like motor tasks. It's not intelligent tasks.
Starting point is 00:09:48 So I actually kind of think humans don't actually like really use RL, roughly speaking is what I would say. Can you repeat the last sentence? A lot of that intelligence is not motor tasks. That's what, sorry? A lot of the reinforcement learning in my perspective would be things that are a lot more like motorlike, like simple kind of like tasks, throwing hoop,
Starting point is 00:10:03 something like that. But I don't think that humans use reinforcement learning for a lot of intelligence tasks like problem solving and so on. Interesting. That doesn't mean we shouldn't do that for research, but I just don't feel like that's what animals do. I'm going to take a second to digest that
Starting point is 00:10:18 because there's a lot of different ideas. Maybe one clarification question I could ask to understand your perspective. So I think you suggest that, look, evolution is doing the kind of thing that pre-training does in the sense of building something which can then understand the world. The difference, I guess, is that evolution
Starting point is 00:10:37 has to be titrated in the case of humans through three gigabytes of DNA. And so that's very unlike the weights of a model. I mean, literally the weights of the model are a brain, which obviously is not encoded in the sperm and the egg, or does not exist in the sperm and the egg. So it has to be grown. And also the information for every single synapse in the brain
Starting point is 00:11:01 simply cannot exist in the three gigabytes that exist in the DNA. Evolution seems closer to finding the algorithm, which then does the lifetime learning. Now, maybe the lifetime learning is not analogous to RL, to your point. Is that compatible with the thing you were saying, or would you disagree with that? I think so.
Starting point is 00:11:17 I would agree with you that there's some miraculous compression going on because obviously the weights of the neural net are not stored in the ATCGs. There's some kind of a dramatic compression, and there's some kind of learning algorithms encoded that take over and do some of the learning online.
Starting point is 00:11:30 So I definitely agree with you on that. Basically, I would say, I'm a lot more kind of, like, practically minded. I don't come at it from a perspective of like let's build animals. I come at it from the perspective of, like, let's build useful things.
Starting point is 00:11:40 So I have a hard hat on. And I'm just observing that, look, we're not going to do evolution because I don't know how to do that. But it does turn out we can build these ghost spirit-like entities by imitating internet documents. This works.
Starting point is 00:11:52 And it's actually kind of like, it's a way to bring you up to something that has a lot of sort of built-in knowledge and intelligence in some way, similar to maybe what evolution has done. So that's why I kind of call pre-training this kind of like crappy evolution. It's like the
practically possible version with our technology and what we have available to us to get to a starting point where we can actually do things like reinforcement learning and so on. Just to steelman the other perspective, because after doing the Sutton interview and thinking about it a bit, he has an important point here. Evolution does not give us the knowledge, really, right? It gives us the algorithm to find the knowledge. And that seems different from pre-training. So if perhaps the perspective is that pre-training helps build the kind of entity which can learn better, it teaches meta-learning, and therefore it is similar to, like, finding an algorithm. But if it's like evolution gives us knowledge, pre-training gives knowledge,
Starting point is 00:12:40 that analogy seems to break down. So it's subtle and I think you're right to push back on it. But basically, the thing that pre-training is doing, so you're basically getting the next token predictor over the internet and you're training that into a neural net. It's doing two things actually that are kind of like unrelated. Number one, it's picking up all this knowledge, as I call it. Number two, it's actually becoming intelligent. By observing the algorithmic patterns in the internet, it actually kind of like boots up all these like little circuits and algorithms inside the neural net to do things like in-context learning and all this kind of stuff. And actually, you don't actually need or want the knowledge.
Starting point is 00:13:12 I actually think that's probably actually holding back the neural networks overall because it's actually like getting them to rely on the knowledge a little too much sometimes. For example, I kind of feel like agents, one thing they're not very good at is going off the data manifold of what exists on the internet. If they had less knowledge or less memory, actually maybe they would be better. And so what I think we have to do kind of going forward, and this would be part of the research paradigms, is I actually think we need to figure out how to remove some of the knowledge and to keep what I call this cognitive core.
Starting point is 00:13:40 It's this, like, intelligent entity that is kind of stripped of knowledge but contains the algorithms and contains the magic, you know, of intelligence and problem solving and the strategies of it and all this kind of stuff. There's so much interesting stuff there. Okay. So let's start with in-context learning. This is an obvious point, but I think it's worth just like saying it explicitly and meditating on it.
Starting point is 00:14:00 The situation in which these models seem the most intelligent is when, like, I talk to them and I'm like, wow, there's really something on the other end that's responding to me thinking about things. If it like makes a mistake, it's like, oh, wait, that's actually the wrong way to think about it, I'm backing up. All that is happening in context. That's where I really feel like the real intelligence you can like visibly see. And that in-context learning process is developed by gradient descent on pre-training, right? Like it spontaneously meta-learns in-context learning. But the in-context learning itself is not gradient descent in the same way that our lifetime intelligence as humans to be able to do things
Starting point is 00:14:35 is conditioned by evolution, but our actual learning during our lifetime is happening through some other process. Actually, I don't fully agree with that, but you should continue. Actually, then I'm very curious to understand how that analogy breaks down. I think I'm hesitant to say that
Starting point is 00:14:50 in context learning is not doing gradient descent because, I mean, it's not doing explicit gradient descent, but I still think that, so in context learning, basically, it's pattern completion within a token window, right? And it just turns out that there's a huge amount of patterns on the internet. And so you're right,
Starting point is 00:15:03 the model kind of learns to complete the pattern. And that's inside the weights. The weights of the neural network are trying to discover patterns and complete the pattern. And there's some kind of adaptation that happens inside the neural network, right? Which is kind of magical and just falls out from Internet, just because there's a lot of patterns. I will say that there have been some papers that I thought were interesting that actually look at the mechanisms behind in context learning. And I do think it's possible that in context learning actually runs a small gradient
Starting point is 00:15:28 loop internally in the layers of the neural network. And so I recall one paper in particular where they were doing linear regression actually using in-context learning. So basically your inputs into the neural network are XY pairs, X, Y, X, Y, X, Y, X, Y, that happen to be on a line. And then you give it X and you expect the Y. And the neural network, when you train it in this way, actually does do linear regression. And normally when you would run linear regression, you have a small gradient
Starting point is 00:15:55 descent optimizer that basically looks at X, Y, looks at an error, calculates the gradient of the weights, and does the update a few times. It just turns out that when they looked at the weights of that in-context learning algorithm, they actually found some analogies to gradient descent mechanics. In fact, I think even the paper was stronger because they actually hard-coded the weights
Starting point is 00:16:14 of a neural network to do gradient descent through attention and all the internals of the neural network. So I guess that's just my only pushback is that who knows how in context learning works, but I actually think that it's probably doing a little bit of some kind of funky gradient descent internally and that I think that that's possible.
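To make that setup concrete, here is a self-contained toy of the in-context linear regression task being described. This is not the paper's code: in the actual experiments the (x, y) pairs are fed to a transformer as a sequence, and the claim is that its forward pass ends up implementing something like the small explicit gradient loop written out below.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "episode": a few (x, y) pairs that lie on a random line, shown in
# context, followed by a query x whose y must be predicted.
w_true, b_true = rng.normal(), rng.normal()
xs = rng.uniform(-1, 1, size=8)
ys = w_true * xs + b_true
x_query = rng.uniform(-1, 1)

# What the trained transformer is argued to do internally: a handful of
# gradient descent steps on the in-context pairs, then a prediction.
w, b, lr = 0.0, 0.0, 0.3
for _ in range(50):
    err = (w * xs + b) - ys         # error on the context pairs
    w -= lr * np.mean(err * xs)     # gradient step on the squared loss
    b -= lr * np.mean(err)

print("true y:     ", w_true * x_query + b_true)
print("predicted y:", w * x_query + b)
```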
Starting point is 00:16:32 So I guess I was only pushing back on you saying it's not doing gradient descent. Who knows what it's doing? But it's probably maybe doing something similar to it, but we don't know. So then it's worth thinking about, okay, if both of them are implementing gradient descent, sorry, if in-context learning and pre-training are both implementing something like gradient descent,
Starting point is 00:16:45 are both implementing something like gradient descent, why does it feel like in context learning actually we're getting to this like continual learning, real intelligence-like thing, whereas you don't get the analogous feeling just from pre-training. At least you could argue that. And so if it's the same algorithm, what could be different?
Starting point is 00:17:02 Well, one way you can think about it is how much information does the model store per piece of information it receives from training? And if you look at pre-training, if you look at Llama 3, for example, I think it's trained on 15 trillion tokens. And if you look at the 70B model, that would be the equivalent of 0.07 bits per token that it sees in pre-training, in terms of the information in the weights of the model compared to the tokens it reads. Whereas if you look at the KV cache and how it grows per additional token in in-context learning, it's like 320 kilobytes. So that's a 35 million-fold difference in how much information per token is assimilated by the model. I wonder if that's relevant at all. I think I kind of agree.
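For reference, the back-of-the-envelope arithmetic behind those figures looks like this. It assumes bf16 weights (16 bits per parameter) and Llama 3 70B's grouped-query attention layout (80 layers, 8 KV heads of dimension 128, 2-byte cache entries); with different assumptions the exact numbers shift, but the many-million-fold gap remains.

```python
# Weights side: bits of model capacity per pre-training token.
params = 70e9                    # Llama 3 70B parameters
pretrain_tokens = 15e12          # reported pre-training corpus size
bits_per_token_weights = params * 16 / pretrain_tokens
print(bits_per_token_weights)    # ~0.075 bits per token

# Context side: KV-cache growth per additional token at inference time.
layers, kv_heads, head_dim, bytes_each = 80, 8, 128, 2
kv_bytes_per_token = layers * 2 * kv_heads * head_dim * bytes_each  # K and V
print(kv_bytes_per_token / 1024)                                    # 320 KB

print(kv_bytes_per_token * 8 / bits_per_token_weights)              # ~3.5e7
```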
Starting point is 00:17:47 I mean, the way I usually put this is that anything that happens during the training of the neural network, the knowledge is only kind of like a hazy recollection of what happened at training time. And that's because the compression is dramatic. You're taking 15 trillion tokens and you're compressing it to just your final network with a few billion parameters. So obviously it's a massive amount of compression going on. So I kind of refer to it as like a hazy recollection of the internet documents, whereas anything that happens in the context window of the neural network,
Starting point is 00:18:12 you're plugging all the tokens and it's building up all this KV cache representation, is very directly accessible to the neural net. So I compare the KV cache and the stuff that happens at test time to more like a working memory. Like all the stuff that's in the context window is very directly accessible to the neural net. So there's always like these almost surprising analogies between LLMs and humans, and I find them kind of surprising because we're not trying to build a human brain, of course,
Starting point is 00:18:35 just directly. We're just finding that this works and we're doing it. But I do think that anything that's in the weights, it's kind of like a hazy recollection of what you read a year ago. Anything that you give it as a context at test time is directly in the working memory. And I think that's a very powerful analogy to think through things. So when you, for example, go to an LLM
Starting point is 00:18:54 and you ask it about some book and what happened in it, like Nick Lane's book or something like that. The LLM will often give you some stuff, which is roughly correct. But if you give it the full chapter and ask it questions, you're going to get much better results because it's now loaded in the working memory of the model. So I basically agree with you. That was a very long way of saying that I kind of agree, and that's why. Stepping back, what is the part about human intelligence that we have most failed to replicate with these models? I almost feel like just a lot of it still.
Starting point is 00:19:25 So maybe one way to think about it. I don't know if this is the best way, but I almost kind of feel like, again, making these analogies, imperfect as they are, we've stumbled by with the transformer neural network, which is extremely powerful, very general. You can train transformers on audio or video or text or whatever you want, and it just learns patterns,
Starting point is 00:19:44 and they're very powerful, and it works really well. That, to me, almost indicates that this is kind of like some piece of cortical tissue. It's something like that, because the cortex is famously very plastic as well. You can rewire, you know, parts of brains. And there were slightly gruesome experiments with rewiring, like, the visual cortex to the auditory cortex, and the animal, like, learns fine, etc.
Starting point is 00:20:05 So I think that this is kind of like a cortical tissue. I think when we're doing reasoning and planning inside the neural networks, so basically doing reasoning traces for thinking models, that's kind of like the prefrontal cortex. And then I think maybe those are like little check marks. But I still think there's many brain parts and nuclei that are not explored. So maybe, for example,
Starting point is 00:20:27 there's a basal ganglia doing a bit of reinforcement learning when we fine-tune the models on reinforcement learning. But, you know, whereas like the hippocampus, not obvious what that would be. Some parts are probably not important. Maybe the cerebellum is, like,
Starting point is 00:20:37 not important to cognition its thoughts so maybe we can skip some of it. But I still think there's, for example, the amygdala, all the emotions and instincts. And there's probably like a bunch of other nuclei in the brain that are very ancient that I don't think we've like really replicated. I don't actually know that we should be pursuing,
Starting point is 00:20:51 the building of an analog of human brain. I'm again, an engineer, mostly at heart. But I still feel like maybe another way to answer the question is, you're not going to hire this thing as an intern, and it's missing a lot of, because it comes with a lot of these cognitive deficits that we all intuitively feel when we talk to the models. And so it's just like not fully there yet.
Starting point is 00:21:11 You can look at it as like not all the brain parts are checked off yet. This is maybe relevant to the question of thinking about how fast these issues will be solved. So sometimes people will say about continual learning, look, actually, you could already, you could easily replicate this capability. Just as in-context learning emerged spontaneously as a result of pre-training, continual learning over longer horizons will emerge spontaneously if the model is incentivized to recollect information over longer horizons or horizons longer than one session. So if there's some like outer loop RL which has many sessions within that outer loop, then this continual learning where it uses, it fine-tunes itself,
Starting point is 00:21:57 where it writes to an external memory or something, will just sort of like emerge spontaneously. Do you think something like that is plausible? I just don't really have a prior over it. How plausible is that? How likely is that to happen? I don't know that I fully resonate with that because I feel like these models, when you boot them up and they have zero tokens in the window,
Starting point is 00:22:12 they're always like restarting from scratch where they were. So I don't actually know in that worldview what it looks like, because, again, maybe making some analogies to humans just because I think it's roughly concrete and kind of interesting to think through. I feel like when I'm awake, I'm building up a context window of stuff that's happening during the day. But I feel like when I go to sleep, something magical happens where I don't actually think that that context window stays around. I think there's some process of distillation
Starting point is 00:22:36 into weights of my brain. And this happens during sleep and all this kind of stuff. We don't have an equivalent of that in large language models. And that's to me more adjacent to when you talk about continual learning and so on as absent. These models don't really have this distillation phase of taking what happened, analyzing it, obsessively thinking through it, basically doing some kind of a synthetic data generation process and distilling it back into the weights,
Starting point is 00:23:02 and maybe having a specific neural net per person, maybe it's a laura, it's not a full, yeah, it's not a full weight neural network that's just some of the small, some of the small sparse subset of the weights are changed. But basically, we do want to create ways of creating these individuals that have very long contexts. It's not only remaining in the context window because the context windows grow very, very long. Like, maybe we have some very elaborate sparse attention over it. But I still think that humans obviously have some process for distilling some of that knowledge into the weights.
Starting point is 00:23:34 We're missing it. And I do also think that humans have some kind of a very elaborate sparse attention scheme, which I think we're starting to see some early hints of. So DeepSeek V3.2 just came out, and I saw that they have like a sparse attention as an example, and this is one way to have very, very long context windows. So I almost feel like we are redoing a lot of the cognitive tricks that evolution came up with through a very different process,
Starting point is 00:23:58 but I think can converge on a similar architecture cognitively. Interesting. In 10 years, do you think it'll still be something like a transformer, but with a much more modified attention and more sparse MLPs and so forth? Well, the way I like to think about it is, okay, let's use translation invariance in time, right? So 10 years ago, where were we? 2015, we had convolutional neural networks primarily. Residual networks just came out.
Starting point is 00:24:21 So remarkably similar, I guess, but quite a bit different still. I mean, transformer was not around. You know, all these sort of like more modern tweaks on a transformer were not around. So maybe some of the things that we can bet on, I think, in 10 years, by translational sort of equivalence, is we're still training giant neural networks with forward, backward, pass, and update through gradient descent. But maybe it looks a little bit different, and it's just everything is much bigger.
Starting point is 00:24:49 Actually, recently I also went back all the way to 1989, which was kind of a fun exercise for me a few years ago, because I was reproducing Yann LeCun's 1989 convolutional network, which was the first neural network I'm aware of trained via gradient descent, like a modern neural network trained with gradient descent on digit recognition. And I was just interested in, okay, how can I modernize this? How much of this is algorithms?
Starting point is 00:25:12 how much of this is data, how much of this progress is compute and systems. And I was able to very quickly, like, halve the error rate, just by knowing, time traveling by 33 years. So if I time travel by algorithms through 33 years, I could adjust what Yann LeCun couldn't do in 1989, and I could basically halve
Starting point is 00:25:28 the error. But to get further gains, I had to add a lot more data. I had to like 10x the training set. And then I had to actually add more computational optimizations, had to basically train for much longer with dropout and other regularization techniques. And so it's almost like all these things have to improve simultaneously.
Starting point is 00:25:45 So, you know, we're probably going to have a lot more data. We're probably going to have a lot better hardware. Probably going to have a lot better kernels and software. We're probably going to have better algorithms. And all of those, it's almost like no one of them is winning too much. All of them are surprisingly equal. And this has kind of been the trend for a while. So I guess to answer maybe your question, I expect differences algorithmically to what's happening today.
Starting point is 00:26:08 But I do also expect that some of the things that have stuck around for a very long time will probably still be there. It's probably still a giant neural network trained with gradient descent. That would be my guess. It's surprising that all of those things together only halved the error. Yeah. Which is like 30 years of progress. Maybe half is a lot, because if you halve the error, that actually is a lot. Yeah, yeah. But I guess what was shocking to me is everything needs to improve across the board. Yeah. Architectures, optimizers, loss functions: all of it has improved across the board forever. So I kind of expect all those changes to be alive and well. Yeah, actually, I was about to ask you a very similar question about
Starting point is 00:26:44 nanochat, because since you just coded it up recently, every single sort of step in the, you know, process of building a chatbot is like fresh in your RAM. And I'm curious if you had similar thoughts about like, oh, there was no one thing that was relevant to going from GPT-2 to nanochat. What are sort of like surprising takeaways from the experience? Building nanochat? So nanochat is a kind of repository I released. Was it yesterday?
or the day before. I can't remember. We can see the sleep deprivation that went into it... Well, it's just trying to be a... It's trying to be the simplest, complete repository that covers the whole pipeline end-to-end of building a ChatGPT clone. And so, you know, you have all of the steps, not just any individual step, which is a bunch of...
Starting point is 00:27:32 I worked on all the individual steps sort of in the past and really small pieces of code that kind of show you how that's done in algorithmic sense in like simple code. But this kind of handles all the entire pipeline. I think in terms of learning, it's not so much, I don't know,
Starting point is 00:27:46 that I actually found something that I learned from it necessarily. I kind of already had in my mind as like how you build it. And this is just a process of mechanically building it and making it clean enough so that people can actually learn from it and that they find it useful. Yeah. What is the best way for somebody to learn from it? Is it just like delete all the code and try to re-implement it from scratch, try to add modifications to it?
Starting point is 00:28:09 Yeah, I think that's a, that's a great question. I would probably say, so basically it's about 1,000 lines of code that takes you through the entire pipeline. I would probably put it
Starting point is 00:28:17 on the right monitor, like if you have two monitors, you put it on the right, and you want to build it from scratch. You build it from start. You're not allowed to copy paste. You're allowed to reference. You're not allowed to copy paste.
Starting point is 00:28:27 Maybe that's how I would do it. But I also think the repository by itself, it is like a pretty large beast. I mean, it's, you know, it's... When you write this code, you don't go from top to bottom. You go from chunks, and you grow the chunks.
Starting point is 00:28:39 And that information is absent. Like, you wouldn't know where to start. And so I think it's not just a final repository that's needed. It's like the building of the repository, which is a complicated chunk growing process. Right. So that part is not there yet. I would love to actually, like, add that probably later this week or something in some way. Like, either it's a, it's probably a video or something like that. But maybe, roughly speaking, that's what I would try to do, is build the stuff yourself, but don't allow yourself copy-paste. Yeah. I do think that there's two types of knowledge almost. Like, there's the high-level
Starting point is 00:29:09 surface knowledge. But the thing is that when you actually build something from scratch, you're forced to come to terms with what you don't actually understand and you don't know that you don't understand it. Interesting. And it always leads to a deeper understanding. And it's like just the only way to build is like if I can't build it, I don't understand it. Is that a fine-man quote, I believe, or something along those lines? I 100% I've always believed this very strongly. Because there's all these like micro things that are just not properly arranged and you don't really have the knowledge. You just think you have the knowledge. So don't write block posts. Don't do slides. Don't do any of that.
Starting point is 00:29:40 Like, build the code, arrange it, get it to work. It's the only way to go. Otherwise, you're missing knowledge. You tweeted out that coding models were actually very little help to you in assembling this repository. And I'm curious why that was. Yeah. So the repository, I guess I built it over a period of a bit more than a month.
Starting point is 00:29:57 And I would say there's like three major classes of how people interact with code right now. Some people completely reject all of LLMs, and they are just writing from scratch. I think this is probably not the right thing to do anymore. The intermediate part, which is where I am, is you still write a lot of things from scratch, but you use the autocomplete that's basically available now from these models. So when you start writing out a little piece of it,
Starting point is 00:30:20 it will autocomplete for you, and you can just tab through, and most of the time it's correct. Sometimes it's not, and you edit it. But you're still very much the architect of what you're writing. And then there's the, you know, vibe coding. You know, hi, please implement this or that, you know, enter,
Starting point is 00:30:35 and then let the model do it. And that's the agents. I do feel like the agents work in very specific settings, and I would use them in specific settings. But again, these are all tools available to you, and you have to learn what they're good at and what they're not good at and when to use them.
Starting point is 00:30:50 So the agents are actually pretty good, for example, if you're doing boilerplate stuff. Boilerplate code that's like just copy-paste stuff. They're very good at that. They're very good at stuff that occurs very often on the Internet because there's lots of examples of it in the training sets of these models. So there's like features of things
Starting point is 00:31:08 where the models will do very well. I would say nanochat is not an example of those because it's a fairly unique repository. There's not that much code, I think, in the way that I've structured it. And it's not boilerplate code. It's actually like intellectually intense code almost, and everything has to be very precisely arranged.
Starting point is 00:31:24 And the models were always trying to, they kept trying to, I mean, they have so many cognitive deficits, right? So one example, they keep trying to, they keep misunderstanding the code because they have too much memory from all the typical ways of doing things on the internet that I just wasn't adopting. So the models, for example,
Starting point is 00:31:41 I mean, I don't know if I want to get into the full details, but they keep thinking I'm writing normal code and I'm not. Maybe one example is, so the way to synchronize, so we have eight GPUs that are all doing forward-backward passes. The way to synchronize gradients between them is to use the DistributedDataParallel container of PyTorch, which automatically does all the,
Starting point is 00:32:01 as you're doing the backward, it will start communicating and synchronizing gradients. I didn't use DDP because I didn't want to use it because it's not necessary. So I threw it out. And I basically wrote my own synchronization routine that's inside the step of the optimizer. And so the models were trying to get me
Starting point is 00:32:15 to use the DDP container. And they were very concerned about, okay, this gets way too technical. But I wasn't using that container because I don't need it and I have a custom implementation of something like it. And they just couldn't internalize that you had your own.
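For concreteness, a hand-rolled synchronization of the kind being described might look roughly like the sketch below. This is not nanochat's actual routine, just an illustration of the pattern: skip the DistributedDataParallel wrapper and all-reduce the gradients by hand right before the optimizer update (it assumes torch.distributed has already been initialized across the eight ranks).

```python
import torch
import torch.distributed as dist

def step_with_grad_sync(model, optimizer):
    # Average gradients across ranks manually instead of letting the DDP
    # container overlap this communication with the backward pass.
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad.div_(world_size)
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
```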
Starting point is 00:32:27 Yeah, they couldn't get past that. And then they kept trying to like mess up the style. Like, they're way too over-defensive. They make all these try-catch statements. they keep trying to make a production codebase and I have a bunch of assumptions in my code and it's okay. And it's just like I don't need all this
Starting point is 00:32:44 extra stuff in there. And so I just kind of feel like they're bloating the codebase, they're bloating the complexity, they keep misunderstanding, they're using deprecated APIs a bunch of times. So it's a total mess and it's just not net useful. I can go in, I can clean it up, but it's just not net useful.
Starting point is 00:33:00 I also feel like it's kind of annoying to have to type out what I want in English because it's just too much typing. Like, if I just navigate to the part of the code that I want, and I go where I know the code has to appear, and I start typing out the first three letters, autocomplete gets it and just gives you the code. And so I think this is a very high information bandwidth to specify what you want. If you point to the code where you want it, and you type out the first few pieces, and the model will complete it. So I guess what I mean is I think these models are good in certain parts of the stack.
Starting point is 00:33:29 I actually used the models a little bit in... There are two examples where I actually used the models that I think are illustrative. One was when I generated the report, which is actually more boilerplatey, so I actually vibe-coded partially some of that stuff. That was fine because it's not like mission-critical stuff and it works fine,
Starting point is 00:33:45 and then the other part is when I was rewriting the tokenizer in Rust I'm actually not as good at Rust because I'm fairly new to Rust so I was doing, there's a bit of vibe coding going on when I was writing some of the Rust code but I had Python implementation that I fully understand and I'm just making sure I'm making more efficient version of it
Starting point is 00:34:02 and I have tests, so I feel safer doing that stuff. And so basically they lower the barrier, or like increase accessibility, to languages or paradigms that you might not be as familiar with. So I think they're very helpful there as well. Yeah. Because there's a ton of Rust code out there. The models are actually pretty good at it. I happen to not know that much about it. So the models are very useful there.
Starting point is 00:34:22 The reason I think this question is so interesting is because the main story people have about AI exploding and getting to superintelligence pretty rapidly is AI automating AI engineering and AI research. So they'll look at the fact that you can have Claude Code make entire applications, CRUD applications, from scratch and be like, if you had this same capability inside of OpenAI and DeepMind and everything, well, just imagine the level of like just, you know,
Starting point is 00:34:48 a thousand of you or a million of you in parallel finding little architectural tweaks. And so it's quite interesting to hear you say that this is the thing they're sort of asymmetrically worse at. And it's like quite relevant to forecasting whether the AI 2027 type explosion is likely to happen
Starting point is 00:35:04 I think that's a good way of putting it. And I think you're getting at some of my, like, why my timelines are a bit longer. You're right. I think, yeah, they're not very good at code that has never been written before. Maybe that's like one way to put it, which is like what we're trying to achieve when we're building these models. Very naive question, but the architectural tweaks that you're adding to nanochat, they're in a paper somewhere, right? They might even be in a repo somewhere. So is it surprising that they aren't able to integrate
Starting point is 00:35:27 that? Whenever you're like, add RoPE embeddings or something, they do that in the wrong way? It's tough. I think they kind of know, they kind of know, but they don't fully know, and they don't know how to fully integrate it into the repo and your style and your code and your place and some of the custom things that you're doing. And how it fits with all the assumptions of the repository and all this kind of stuff. So I think they do have some knowledge, but they haven't gotten to the place where they can actually integrate it, make sense of it, and so on.
Starting point is 00:36:01 I do think that a lot of the stuff, by the way, continues to improve. So I think currently probably the state-of-the-art model that I go to is GPT-5 Pro. And that's a very, very powerful model. So if I actually have 20 minutes, I will copy-paste my entire repo, and I go to GPT-5 Pro, the oracle, for like some questions. And often it's not too bad and surprisingly good compared to what existed a year ago. Yeah. But I do think that overall the models are, they're not there.
Starting point is 00:36:26 And I kind of feel like the industry is making too big of a jump and is trying to pretend like this is amazing, and it's not. It's slop. And I think they're not coming to terms with it and maybe they're trying to fundraise or something like that. I'm not sure what's going on, but we're at this intermediate stage.
Starting point is 00:36:44 The models are amazing. They still need a lot of work. For now, autocomplete is my sweet spot. But sometimes, for some types of code, I will go to an LLM agent. Yeah. Actually, here's another reason that this is really interesting. Through the history of programming, there's been many productivity improvements,
Starting point is 00:37:03 compilers, linting, better programming languages, etc., which have increased programmer productivity, but have not led to an explosion. So that sounds very much like autocomplete tab. And this other category is just like automation of the programmer. And so it's interesting that you're seeing it more in the category of the historical analogies of, like, you know, better compilers or something. Maybe. One other kind of thought on that is like,
Starting point is 00:37:28 I do feel like I have a hard time differentiating where AI begins and stops, because I do see AI as fundamentally an extension of computing in some pretty fundamental way. And I feel like I see a continuum of this kind of like recursive self-improvement or like of speeding up programmers all the way from the beginning. Like even like I would say like code editors. Yeah. Syntax highlighting. Yeah.
Starting point is 00:37:50 Syntax or like checking even of the types, like data type checking. All these kinds of tools that we've built for each for each other. Even search engines. Like, why aren't search engines part of AI? Like, I don't know, like, ranking is kind of AI, right? At some point, Google was like, even early on, they were thinking of themselves as an AI company doing Google search engine, which I think is totally fair.
Starting point is 00:38:10 And so I kind of see it as a lot more of a continuum than I think other people do, and I don't, it's hard for me to draw the line. And I kind of feel like, okay, we're now getting a much better autocomplete. And now we're also getting some agents, which are kind of like these loopy things, but they kind of go off rails sometimes. And what's going on is that the human is progressively doing a bit less and less of the
low-level stuff. For example, we're not writing the assembly code because we have compilers. Yeah. Like compilers will take my high-level language, C, and write the assembly code. So we're abstracting ourselves very, very slowly. And there's this what I call autonomy slider, of like more and more stuff is automated, of the stuff that can be automated at any point in time. And we're doing a bit less and less and raising ourselves in the layer of abstraction over the automation.
Starting point is 00:39:03 a coding agent. So Labelbox augmented an IDE with a bunch of extra data collection tools and staffed a team of expert software engineers from their aligner network to generate trajectories that were optimized for training. Now, obviously, these engineers evaluated these interactions on a pass-fail basis, but they also rated every single response on a bunch of different dimensions like readability and performance. And they wrote down their thought processes
Starting point is 00:39:28 for every single rating that they gave. So you're basically showing every single step an engineer takes and every single thought that they have while they're doing their job. And this is just something you could never get from usage data alone. And so Labelbox packaged up all these evaluations
Starting point is 00:39:44 and included all the agent trajectories and the corrective human feedback for the customer to train on. This is just one example. So go check out how Labelbox can get you high-quality frontier data across domains, modalities, and training paradigms. Reach out at labelbox.com/dwarkesh. Let's talk about RL a bit.
Starting point is 00:40:07 You tweeted some very interesting things about this. Conceptually, how should we think about the way that humans are able to build a rich world model just from interacting with our environment? And in ways that seem almost irrespective of the final reward at the end of the episode: if somebody starts a business and at the end of 10 years, she finds out whether the business succeeded or failed,
Starting point is 00:40:31 we say that she's earned a bunch of wisdom and experience. But it's not because, like, the log probs of every single thing that happened over the last 10 years are upweighted or downweighted. Something much more deliberate and rich is happening. What is the ML analogy?
Starting point is 00:40:44 And how does that compare to what we're doing with LLMs right now? Yeah, maybe the way I would put it is humans don't use reinforcement learning, as I've said. I think they do something different, which is, yeah, you experience. So reinforcement learning is a lot worse than I think the average person thinks. Reinforcement learning is terrible. It just so happens that everything that we had before is much worse. Because previously we're just imitating people, so it has all these issues.
Starting point is 00:41:10 So in reinforcement learning, say you're solving a math problem. This is very simple. You're given a math problem, and you're trying to find the solution. Now, in reinforcement learning, you will try lots of things in parallel first. So you're given a problem. You try hundreds of different attempts. And these attempts can be complex, right? They can be like, oh, let me try this, let me try that.
Starting point is 00:41:32 This didn't work. That didn't work, et cetera. And then maybe you get an answer. And now you check the back of the book, and you see, okay, the correct answer is this. And then you can see that, okay, this one, this one, and that one got the correct answer, but these other 97 of them didn't. So literally what reinforcement learning does is it goes to the ones that worked really well
Starting point is 00:41:48 and every single thing you did along the way every single token gets upweighted of like do more of this. The problem with that is, I mean, people will say that your estimator has high variance but I mean, it's just noisy, it's noisy. So basically, it kind of almost assumes that every single little piece of the solution
Starting point is 00:42:04 that you made that got you to the right answer was the correct thing to do, which is not true. Like you may have gone down the wrong alleys until you arrived at the right solution. Every single one of those incorrect things you did, as long as you got to the correct solution, will be upweighted as do more of this. It's terrible.
Starting point is 00:42:18 It's noise. You've done all this work only to find a single, at the end you get a single number of like, oh, you did correct. And based on that, you weight that entire trajectory as, like, upweight or downweight.
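As a rough illustration of what that update looks like mechanically, here is a bare-bones REINFORCE-style sketch: one scalar end-of-episode reward gets broadcast across every token of each rollout. The model interface and helper name are hypothetical, and real pipelines add baselines, KL penalties, and clipping that are omitted here.

```python
import torch
import torch.nn.functional as F

def outcome_reward_update(model, optimizer, rollouts):
    """rollouts: list of (token_ids, reward) pairs for one prompt, where
    token_ids is a 1-D LongTensor and reward is the single final score
    (e.g. 1.0 if the answer matched the back of the book, else 0.0).
    Assumes model(batch) returns next-token logits of shape [1, T, vocab]."""
    optimizer.zero_grad(set_to_none=True)
    for token_ids, reward in rollouts:                 # e.g. 100 attempts
        logits = model(token_ids[:-1].unsqueeze(0))    # predict each next token
        logp = -F.cross_entropy(logits.squeeze(0), token_ids[1:], reduction="sum")
        # The single end-of-episode reward weights the whole trajectory:
        # every token of a winning attempt is pushed up, wrong alleys included.
        loss = -(reward * logp) / len(rollouts)
        loss.backward()
    optimizer.step()
```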
Starting point is 00:42:29 And so the way I like to put it is you're sucking supervision through a straw, because you've done all this work that could be a minute of rollout, and you're sucking the bits of supervision of the final reward signal through a straw, and you're like, putting it, you're like,
Starting point is 00:42:42 basically, like, yeah, you're broadcasting that across the entire trajectory and using that to upweight or downweight that trajectory. It's crazy. A human would never do this. Number one, a human would never do hundreds of rollouts. Number two, when a person
Starting point is 00:42:55 sort of finds a solution, they will have a pretty complicated process of review, of like, okay, I think these parts I did well, these parts I did not do that well, I should probably do this or that. And they think through things. There's nothing in current LLMs that does this. There's no equivalent of it,
Starting point is 00:43:09 but I do see papers pop out that are trying to do this because it's obvious to everyone in the field. Yeah. So I kind of see as like the first imitation learning actually, by the way, was extremely surprising and miraculous and amazing that we can fine-tune by imitation on humans. And that was incredible. Because in the beginning, all we had was base models. Base models are autocomplete.
Starting point is 00:43:28 And it wasn't obvious to me at the time, and I had to learn this. And the paper that blew my mind was InstructGPT, because it pointed out that, hey, you can take the pre-trained model, which is autocomplete, and if you just fine-tune it on text that looks conversational, the model will very rapidly adapt to become very conversational, and it keeps all the knowledge from pre-training. And this blew my mind, because I didn't understand that it can adjust stylistically so quickly and become an assistant to a user through just a few loops of fine-tuning on that kind of data. It's very miraculous to me that that worked. So incredible, and that was like two or three years of work.
Starting point is 00:44:04 And now came RL. And RL allows you to do a bit better than just imitation learning, right? Because you can have these reward functions and you can hill-climb on the reward functions. And so some problems have just correct answers. You can hill-climb on that without getting expert trajectories to imitate. So that's amazing.
Starting point is 00:44:19 And the model can also discover solutions that a human might never come up with. So this is incredible. And yet, it's so stupid. So I think we need more. And so I saw a paper from Google yesterday that tried to have this reflect-and-review idea in mind. What was it, the memory bank paper
Starting point is 00:44:38 or something, I don't know. I've actually seen a few papers along these lines. So I expect there to be some kind of a major update to how we do algorithms for LLMs coming in that realm. And then I think we need three or four or five more, something like that. But you're so good at coming up with evocative phrases. Sucking supervision through a straw is like so good. So you're saying your problem with outcome-based reward is that you have this huge trajectory, and then at the end you're trying to learn every single possible thing about what you should do and what you should learn about the world from that one final bit. Given that this is obvious, why hasn't process-based supervision been a successful alternative way to make models more capable? What has been preventing us from using this alternative paradigm? So process-based supervision just refers to the fact that we're not going to have a reward function only at the very end: after you have done 10 minutes of work, I'm not going to tell you whether you did well or not well.
Starting point is 00:45:35 I'm going to tell you at every single step of the way how well you're doing. And basically the reason we don't have that is that it's tricky how you do it properly. Because you have partial solutions and you don't know how to assign credit. When you get the right answer, it's just an equality match to the answer — very simple to implement. If you're doing process supervision, how do you assign partial credit in an automatable way? It's not obvious how you do it. Lots of labs, I think, are trying to do it with these LLM judges.
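As a rough sketch of the two options being contrasted — an exact-match reward at the very end versus per-step partial credit from an LLM judge — something like the following, where `call_llm` is a hypothetical stub rather than any real API:

```python
# Sketch: outcome reward vs. process reward.
# `call_llm` is a hypothetical placeholder for whatever judge model a lab might use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM judge call")

def outcome_reward(final_answer: str, reference: str) -> float:
    # Trivial to implement: equality match against the back of the book.
    return 1.0 if final_answer.strip() == reference.strip() else 0.0

def process_reward(partial_solution: str, reference: str) -> float:
    # The hard part: assigning partial credit to an unfinished solution.
    # Here we just ask a judge model to score it 0-1, which is exactly the
    # gameable setup described in the conversation below.
    prompt = (
        "A student is partway through solving a problem.\n"
        f"Reference answer: {reference}\n"
        f"Partial solution so far:\n{partial_solution}\n"
        "On a scale from 0 to 1, how well are they doing? Reply with a number."
    )
    return float(call_llm(prompt))
```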
Starting point is 00:46:02 So basically you get LLMs to try to do it. So you prompt an LLM: hey, look at a partial solution of a student. How well do you think they're doing if the answer is this? And they try to tune the prompt. The reason that I think this is kind of tricky is quite subtle. And it's the fact that anytime you're using an LLM to assign a reward, those LLMs are giant things with billions of parameters
Starting point is 00:46:20 and they're gamable. And if you're doing reinforcement learning with respect to them, you will find adversarial examples for your LLM judges, almost guaranteed. You can't do this for too long. You do maybe 10 steps or 20 steps, maybe it will work, but you can't do 100 or 1,000 or 10,000. Because it's not obvious.
Starting point is 00:46:33 I understand it's not obvious, but basically the model will find a little — it will find all these spurious things in the nooks and crannies of the giant model and find a way to cheat it. So one example that's prominent in my mind — I think this was probably public — but basically, we were using an LLM judge for a reward: you just give it a solution from a student and ask it if the student did well or not, and we were training with reinforcement learning against that reward function. And it worked really well. And then suddenly the reward became extremely large.
Starting point is 00:47:06 It was a massive jump, and it looked perfect. And you're looking at it like, wow, this means the student is perfect on all these problems. It's fully solved math. But actually what's happening is that when you look at the completions that you're getting from the model, they are complete nonsense. They start out okay, and then they change to the, the, the, the, the, the, the, the, the. So it's just like, oh, okay, let's take two plus three and we do this and this and then da-da-da-da-da-da. And you're looking at it like, this is crazy.
Starting point is 00:47:28 How is it getting a reward of one, or 100%? And you look at the LLM judge, and it turns out that the-the-the-da-da-da-da is an adversarial example for the model, and it assigns 100% probability to it. And it's just because this is an out-of-sample example to the LLM. It's never seen it during training, and you're in pure generalization land. Right. It's never seen it during training,
Starting point is 00:47:46 and in the pure generalization land, you can find these examples that break it. You're basically training the LLM to be a prompt injection model. Not even that. Prompt injection is way too fancy. You're finding adversarial examples as they're called. These are nonsensical solutions
Starting point is 00:48:02 that are obviously wrong, but the model thinks are amazing. So on that same thing: you think this is the bottleneck to making RL more functional — that it will require making LLMs better judges, if you want to do this in an automated way? And then is it just going to be some sort of GAN-like approach, where you have to train models to be more robust to... I think the labs are probably doing all that.
Starting point is 00:48:23 Like, okay, so the obvious thing is: the-da-da-da should not get 100% reward. Okay, well, take the-da-da-da, put it in the training set of the LLM judge, and say this is not 100%, this is zero percent. You can do this. But every time you do this, you get a new LLM, and it still has adversarial examples. There are infinitely many adversarial examples.
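A toy sketch of the whack-a-mole dynamic described here — the "judge" below is just a score function with blind spots standing in for a billion-parameter model, and the exploit search stands in for RL; everything is illustrative:

```python
# Toy illustration of the judge-patching loop. Entirely illustrative: real
# judges are huge models and exploits are found by RL, not enumerated.

degenerate_outputs = ["the the the", "da-da-da-da", "....", "!!!!"]  # stand-ins for adversarial examples

def make_judge(patched: set):
    def judge(solution: str) -> float:
        if solution in degenerate_outputs and solution not in patched:
            return 1.0          # blind spot: nonsense scored as perfect
        return 0.5              # everything else gets a mediocre score
    return judge

patched: set = set()
for _ in range(10):
    judge = make_judge(patched)
    # "RL" here is just: find any output the current judge overrates.
    exploit = next((s for s in degenerate_outputs if judge(s) >= 1.0), None)
    if exploit is None:
        print("no exploit left in this toy list (a real model would have more)")
        break
    print("policy found exploit:", exploit)
    patched.add(exploit)        # retrain the judge on this one case...
    # ...but the patched judge is a new model with new blind spots.
```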
Starting point is 00:48:37 And I think probably if you iterate this a few times, it'll probably be harder and harder to find adversarial examples. But I'm not 100% sure, because this thing has a trillion parameters or whatnot. So I bet you the labs are trying. I still think we need other ideas. Interesting. Do you have some shape of what the other idea could be? So, like, this idea of a review:
Starting point is 00:49:04 review a solution and come up with synthetic examples such that when you train on them, you get better and, like, meta-learn it in some way. And I think there's some papers that I'm starting to see pop out. I only am at a stage of, like, reading abstracts because a lot of these papers, you know, they're just ideas. Someone has to actually, like, make it work on a frontier LLM lab scale in full generality.
Starting point is 00:49:23 Because when you see these papers, they pop up and it's just, like, a little bit of noisy, you know? It's cool ideas, but I haven't actually seen anyone convincingly show that this is possible. That said, the LLM labs are fairly closed. So who knows what they're doing now? But yeah. So I guess I see a very, not easy,
Starting point is 00:49:40 but like I can conceptualize how you would be able to train on synthetic examples or synthetic problems that you have made for yourself. But there seems to be another thing humans do. Maybe sleep is this, maybe daydreaming is this, which is not necessarily come up with fake problems, but just like reflect. Yeah.
Starting point is 00:49:58 And I'm not sure what the ML analogy for daydreaming or sleeping, but just reflecting, I haven't come up with a new problem. I mean, obviously, the very basic analogy is to be like fine-tuning on reflection bits,
Starting point is 00:50:09 but I feel like in practice that probably wouldn't work that well. So I don't know if you have some take on what the analogy of like this thing is. Yeah, I do think that we're missing some aspects there. So as an example, when you're reading a book, I almost feel like,
Starting point is 00:50:22 currently when LLMs are reading a book, what that means is we stretch out the sequence of text and the model is predicting the next token and it's getting some knowledge from that. That's not really what humans do, right? So when you're reading a book, I almost don't even feel like the book is like exposition I'm supposed to be attending to and training on.
Starting point is 00:50:37 The book is a set of prompts for me to do synthetic data generation with, or for you to go to a book club and talk about it with your friends. And it's by manipulating that information that you actually gain that knowledge. And I think we have no equivalent of that, again, with LLMs.
Starting point is 00:50:51 They don't really do that, but I'd love to see, during pre-training, some kind of a stage that thinks through the material and tries to reconcile it with what it already knows, and thinks it through for some amount of time, and gets that to work. And so there's no equivalent of any of this. This is all research.
Starting point is 00:51:05 There are some subtle — very subtle, and I think very hard to understand — reasons why it's not trivial. So if I can just describe one: why can't I just synthetically generate data and train on it? Well, because for every synthetic example — like if I just get a synthetic generation of the model thinking about a book,
Starting point is 00:51:20 you look at it and you're like, this looks great. Why can't I train on it? Well, you could try, but the model will actually get much worse if you continue trying. And that's because all of the samples you get from models are silently collapsed.
Starting point is 00:51:31 They're silently — this is not obvious if you look at any individual example — but they occupy a very tiny manifold of the possible space of thoughts about the content. So LLMs, when they come out, are what we call collapsed. They have a collapsed data distribution. One easy way to see it
Starting point is 00:51:47 is go to chat GPT and ask it, tell me a joke. It only has like three jokes. It's not giving you the whole breadth of possible jokes. It's given you like, it knows like three jokes. They're silently collapsed. So basically, you're not getting the richness and diversity and the entropy from these models, as you would get from humans. So humans are a lot more sort of noisier, but at least they're not biased. They're not in a statistical sense.
Starting point is 00:52:09 They're not silently collapsed. They maintain a huge amount of entropy. So how do you get synthetic data generation to work despite the collapse, and while maintaining the entropy? It's a research problem. Just to make sure I understood: the reason that the collapse is relevant to synthetic data generation is because you want to be able to come up with synthetic problems or reflections which are not already in your data distribution? I guess what I'm saying is, say we have a chapter of a book and I ask an LLM to think about it. It will give you something that looks very reasonable. But if I ask it 10 times, you'll notice that all of them are the same.
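One crude way to see this collapse is to sample the same prompt many times and measure how much diversity actually comes back. A sketch, with a hard-coded list standing in for real samples collected from a model:

```python
# Sketch: measure diversity of repeated samples for one fixed prompt.
# The `samples` list is illustrative; in practice you would collect it by
# sampling a model N times at the same prompt ("tell me a joke").
import math
from collections import Counter

samples = [
    "Why don't scientists trust atoms? They make up everything.",
    "Why don't scientists trust atoms? They make up everything.",
    "Why did the scarecrow win an award? He was outstanding in his field.",
    "Why don't scientists trust atoms? They make up everything.",
]

counts = Counter(samples)
n = len(samples)
distinct_fraction = len(counts) / n
empirical_entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

print(f"distinct outputs: {len(counts)}/{n} ({distinct_fraction:.2f})")
print(f"empirical entropy: {empirical_entropy:.2f} bits "
      f"(max possible here: {math.log2(n):.2f} bits)")
# A collapsed model keeps returning the same few outputs, so both numbers
# stay far below what a genuinely diverse sampler would give you.
```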
Starting point is 00:52:43 You can't just keep scaling, quote unquote, reflection on the same amount of, you know, prompt information and then get returns from that. Yeah, yeah, yeah. So any individual sample will look okay, but the distribution of it is quite terrible. And it's quite terrible in such a way that if you continue training on too much of your own stuff, you actually collapse. I actually think that there's possibly no fundamental solution to this. And I also think humans collapse over time. Again, these analogies are surprisingly good: humans collapse during the course of their lives. This is why children, you know — they haven't overfit yet. And they will say stuff that will shock you,
Starting point is 00:53:19 because you can kind of see where they're coming from, but it's just not the thing people say. It's because they're not yet collapsed. But we're collapsed. We end up revisiting the same thoughts, we end up saying more and more of the same stuff, the learning rates go down, the collapse continues to get worse, and then everything deteriorates. Have you seen a super interesting paper arguing that dreaming is a way of preventing this kind of overfitting and collapse? That the reason dreaming is evolutionarily adaptive is to put you in weird situations that are very unlike your day-to-day reality, so as to prevent this kind of overfitting? That's an interesting idea.
Starting point is 00:53:56 I mean, I do think that when you're generating things in your head and then you're attending to it, you're kind of like training on your own samples. You're training on your synthetic data. And if you do it for too long, you go off rails and you collapse way too much. So you always have to like seek entropy in your life. So talking to other people, it's a great source of entropy and things like that. So maybe the brain has also built some internal mechanisms for increasing the amount of entropy in that process. But yeah, maybe that's an interesting idea. This is a very ill-formed thought, so I'll just put it out and let you react to it.
Starting point is 00:54:30 The best learners that we are aware of, which are children, are extremely bad at recollecting information. In fact, at the very earliest stages of childhood, you will forget everything. You're just amnesiac about everything that happens before a certain age. But you're extremely good at picking up new languages and learning from the world. And maybe there's some element of being able to see the forest for the trees. Whereas if you compare it to the opposite end of the spectrum, you have LLM pre-training, where these models will literally be able to regurgitate, word for word, what the next thing on a Wikipedia page is.
Starting point is 00:55:01 But their ability to learn abstract concepts really quickly the way a child can is much more limited. And then adults are somewhere in between where they don't have the flexibility of childhood learning, but adults can memorize facts and information in a way that is harder for kids. And I don't know if there's something interesting about that. I think there's something very interesting about that. Yeah, 100%.
Starting point is 00:55:21 I do think that humans actually, they do kind of like have a lot more of an element compared to LLMs of like seeing the forest for the trees. And we're not actually that good at memorization, which is actually a feature. Because we're not that good at memorization, we actually are kind of like forced
Starting point is 00:55:37 to find the patterns in a marginal sense. I think LLMs in comparison are extremely good at memorization. They will recite passages from all these training sources. You can give them completely nonsensical data. Like you can hash some amount of text or something like that. You get a completely random sequence. If you train on it, even just, I think, a single iteration or two,
Starting point is 00:55:58 it can suddenly regurgitate the entire thing. It will memorize it. There's no way a person can read a single sequence of random numbers and recite it to you. And that's a feature, not a bug, almost, because it forces you to only learn the generalizable components. Whereas LLMs are distracted by all the memory that they have of the pre-training documents. And it's probably very distracting to them in a certain sense.
Starting point is 00:56:20 So that's why when I talk about the cognitive core, I actually want to remove the memory, which is what we talked about. I'd love to have less the memory so that they have to look things up. And they only maintain the algorithms for, like, thought and the idea of an experiment and all this cognitive glue of acting.
Starting point is 00:56:37 And is this also relevant to preventing model collapse? Let me think. I'm not sure. I think it's almost like a separate axis. It's almost like the models are way too good at memory, and somehow we should remove that. And I think people are much worse at it, but it's a good thing.
Starting point is 00:56:57 What is a solution to model collapse? I mean, there are very naive things you could attempt, like the distribution over logits should be wider or something. There are many naive things you could try. What ends up being the problem with the naive approaches? Yeah, I think that's a great question. I mean, you can imagine having a regularization for entropy and things like that. I guess they just don't work as well empirically.
Starting point is 00:57:18 Because right now the models are collapsed. But I will say, most of the tasks that we want of them don't actually demand the diversity. That's probably the answer to what's going on. The frontier labs are just trying to make the models useful, and I kind of feel like the diversity of the outputs — number one, it's much harder to work with and evaluate and all this kind of stuff, but maybe it's also not what's actually capturing most of the value.
Starting point is 00:57:43 In fact, it's actively penalized, right? If you're super creative in RL, it's like not good. Yeah. Or maybe if you're getting a lot of writing help from LLMs and stuff like that, I think it's probably bad, because the models will silently give you all the same stuff, you know. They won't explore lots of different ways of answering a question, right? But I kind of feel like maybe not as many applications need this diversity, so the models don't have it — but then it's actually a problem at synthetic generation time, et cetera.
Starting point is 00:58:11 So we're actually shooting ourselves in the foot by not allowing this entropy to maintain in the model. And I think possibly the labs should try harder. And then I think you hinted that it's a, it's a very fundamental. problem, it won't be easy to solve. And yeah, what's your intuition for that? I don't actually know if it's super fundamental. I don't actually know if I intended to say that. I do think that I haven't done these experiments,
Starting point is 00:58:34 but I do think that you could probably regularize the entropy to be higher, so you're encouraging the model to give you more varied solutions. But you don't want it to start deviating too much from the training data: it's going to start making up its own language, it's going to start using words that are extremely rare, it's going to drift too much from the distribution. So controlling the distribution is just tricky. Someone just has to do it — it's probably not trivial in that sense.
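The naive fix being discussed — push the output distribution wider, but don't let it drift too far from the data or a reference model — roughly corresponds to adding an entropy bonus and a KL penalty to the loss. A minimal PyTorch-style sketch, with illustrative names and coefficients, of the naive approach rather than a recipe known to work:

```python
import torch
import torch.nn.functional as F

def loss_with_entropy_bonus(logits: torch.Tensor,
                            targets: torch.Tensor,
                            ref_logits: torch.Tensor,
                            entropy_coef: float = 0.01,
                            kl_coef: float = 0.1) -> torch.Tensor:
    """logits, ref_logits: (B, T, V); targets: (B, T).

    Three terms: fit the data, keep per-token entropy high (fight collapse),
    and stay close to a reference model so the policy doesn't drift into
    made-up words and ultra-rare tokens.
    """
    nll = F.cross_entropy(logits.flatten(0, 1), targets.flatten())

    logp = F.log_softmax(logits, dim=-1)
    p = logp.exp()
    entropy = -(p * logp).sum(-1).mean()                 # higher = more diverse

    ref_logp = F.log_softmax(ref_logits, dim=-1)
    kl_to_ref = (p * (logp - ref_logp)).sum(-1).mean()   # KL(policy || reference)

    return nll - entropy_coef * entropy + kl_coef * kl_to_ref
```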
Starting point is 00:58:58 How many bits should the optimal core of intelligence end up being, if you just had to make a guess? The thing we put on the von Neumann probes — how big does it have to be? So it's really interesting in the history of the field, because at one point everything was very scaling-pilled, in terms of like, oh, we're going to make much bigger models, trillions-of-parameters models. And actually what the models have done in size is they've gone up, and now they've actually even come down. The state-of-the-art models are smaller. And even then, I actually think they memorize way too much.
Starting point is 00:59:31 So I think I had a prediction a while back that I almost feel like we can get cognitive cores that are very good at even like a billion parameters. Like, if you talk to a billion-parameter model, I think in 20 years you can actually have a very productive conversation.
Starting point is 00:59:45 It thinks, and it's a lot more like a human. But if you ask it some factual question, it might have to look it up — but it knows that it doesn't know and that it might have to look it up, and it will just do all the reasonable things. That's actually surprising, that you think it will take a billion par... Because already we have billion-parameter models, or couple-billion-parameter models, that are, like, very intelligent. Well, currently our models are like a trillion parameters, right?
Starting point is 01:00:05 But they remember so much stuff, like... Yeah, but I'm surprised that in 10 years, given the pace — okay, we have GPT-OSS-20B, which is way better than the original GPT-4, which was a trillion-plus parameters. So given that trend, I'm actually surprised you think in 10 years the cognitive core
Starting point is 01:00:24 is still a billion parameters. Yeah, I'm surprised you're not like that's going to be like tens of millions or millions. No, because I basically think that the training data is, so here's the issue. The training data is the internet, which is really terrible.
Starting point is 01:00:37 So there's a huge amount of gains to be made because the internet is terrible. Like if you actually, and even the internet, when you and I think of the internet, you're thinking of like, a Wall Street Journal or that's not what this is. When you're actually looking at a pre-train data set
Starting point is 01:00:48 at a frontier lab, and you look at a random internet document, it's total garbage. Like, I don't even know how this works at all. It's some, like, stock ticker symbols. It's a huge amount of slop and garbage from all the corners of the internet. It's not like your Wall Street Journal article
Starting point is 01:01:03 that's extremely rare. So I almost feel like, because the internet is so terrible, we actually have to sort of build really big models to compress all that. Most of that compression is memory work instead of, like, cognitive work. Interesting.
Starting point is 01:01:15 But what we really want is the cognitive part; we'd actually like to delete the memory. Right. And so what I'm saying is, we need intelligent models to help us refine even the pre-training set, to narrow it down to the cognitive components. And then I think you can get away with a much smaller model, because it's a much better dataset
Starting point is 01:01:30 and you could train it on it. But probably it's not trained directly on it. It's probably distilled for a much better model still. But why is a distilled version still a billion? Is I guess the thing I'm curious about? I just feel like distillation work extremely well. So almost every small model, if you have a small model, it's almost certainly distilled.
Starting point is 01:01:45 Why would you train on? Right. No, no, no, but why is the distillation not, in 10 years, not getting below 1 billion? Oh, you think it should be smaller than a billion? I mean, come on, right? I don't know. At some point, it should take at least a billion knobs to do something interesting. You're thinking it should be even smaller?
Starting point is 01:02:02 Yeah, I mean, just like if you look at the trend over the last few years, just finding low-hanging fruit and going from, like, trillion-plus models that are, like, literally two orders of magnitude smaller in a matter of two years and having better performance. Yeah, yeah. It makes me think the sort of core of intelligence might be even way, way smaller. Like, plenty of room at the bottom to paraphrase Feynman. I mean, I almost feel like I'm already contrarian by talking about a billion in the parameter cognitive core, and you're outdoing me. I think, yeah, maybe we could get a little bit smaller.
Starting point is 01:02:31 I mean, I still think that there should be enough — yeah, maybe it can be smaller. I do think that, practically speaking, you want the model to have some knowledge. You don't want it to be looking up everything, because then you can't think in your head — you're looking up way too much stuff all the time. So I do think some basic curriculum of knowledge needs to be there. But it doesn't need the esoteric knowledge, you know. Yeah.
Starting point is 01:02:49 So we're discussing what plausibly could be the cognitive core. There's a separate question, which is: what will actually be the size of frontier models over time? And I'm curious to hear your prediction. So we had increasing scale up to maybe 4.5, and now we're seeing decreasing slash plateauing scale. There are many reasons that could be going on. But do you have a prediction about going forward? Will the biggest models be bigger? Will they be smaller?
Starting point is 01:03:13 Will they be the same? Yeah, I don't know that I have a super strong prediction. I do think that the labs are just being practical. They have a flops budget and a cost budget. And it just turns out that pre-training is not where you want to put most of your flops or your cost. So that's why the models have gotten smaller, because they are a bit smaller.
Starting point is 01:03:28 but they make it up in reinforcement learning and mid-training and all this kind of stuff that follows. So they're just being practical in terms of all the stages and how you get the most bang for the buck. So forecasting that trend, I think, is quite hard. I do still expect that there's so much low-hanging fruit. That's my basic expectation. And so I have a very wide distribution here. Do you think the low-hanging fruit will be similar
Starting point is 01:03:54 in kind to the kinds of things that have been happening over the last two to five years? Like, if I look at nanochat versus nanoGPT and the architectural tweaks you made — is that basically the flavor of things you expect to keep happening? Or are you not expecting any giant paradigm shifts? I expect the datasets to get much, much better, because when you look at the average dataset, it's extremely terrible — so bad that I don't even know
Starting point is 01:04:17 how anything works, to be honest. Look at the average example in the training set. Like factual mistakes, errors, nonsensical things. Somehow when you do it at scale, the noise washes away and you're left with some of the signal. So datasets will improve a ton.
Starting point is 01:04:33 It's just that everything gets better: the hardware, all the kernels for running the hardware and maximizing what you get from the hardware. You know, Nvidia is slowly tuning the actual hardware itself,
Starting point is 01:04:44 tensor cores and so on. All that needs to happen and will continue to happen. All the kernels will get better and utilize the chip to the max extent. All the algorithms will probably improve: the optimization,
Starting point is 01:04:54 architecture, and just all of the modeling components of how everything is done and what the algorithms are that we're even training with. So I do kind of expect like a just very, just everything.
Starting point is 01:05:04 Nothing dominates. Everything plus 20%. Right. That's like roughly what I'd guess. Okay. This is my general manager, Max. Good to be here — here every day. And you have been here since you were onboarded about six months ago. But when I was... Oh, right. Time passes so fast. But when I onboarded you,
Starting point is 01:05:22 I was in France. And so we basically didn't get the chance to talk at all almost. And you basically just gave me one login. I gave you access to my Mercury platform, which is the banking platform that I was using at the time to run the podcast. And so I logged into Mercury, assuming that that would just be the first of many steps, but I realized that was how you were running. the entire business, even down to a lot of our editors, our international contractors, and so you would just figure out how to set up these recurring payments to set up basic payroll. I mean, Mercury made the experience of all these things I was doing before so seamless that it didn't even occur to me until you pointed it out that this is not the natural way to
Starting point is 01:05:57 set up payroll or invoicing or any of these other things. Yeah, I was surprised, but I was like, it's worked so far. That's right, yeah. So maybe I'll trust it. And now I can't think of doing anything else. All right, you heard him. Visit mercury.com to apply online in minutes. Cool. Thanks, Max. Thanks for having me. Dude, you're great at this. I'm so nervous, but thank you. Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC. People have proposed different ways of charting how much progress we've made towards full AGI. Because if you can come up with some line, then you can see where that line intersects with AGI and where that would happen on the X-axis. And so people have proposed, oh, it's like the education level. Like, we had a high
Starting point is 01:06:41 schooler, and then they went to college with RL, and then they're going to get a PhD. I don't like that one. Or they'll propose horizon length: maybe they can do tasks that take a human a minute, and they can do those autonomously; then they can autonomously do tasks that take a human an hour, a human a week, et cetera. How do you think about what the relevant Y-axis is here? How should we think about how AI is making progress?
Starting point is 01:07:06 So I guess I have two answers to that. Number one, I'm almost tempted to like reject the question entirely because, again, like, I see this as an extension of computing. Have we talked about, like, how to chart progress in computing? Or how do you chart progress in computing since 1970s or whatever? What is the X axis? So I kind of feel like the whole question is kind of, like, funny from that perspective a little bit. But I will say, I guess, like, when people talk about AI and the original AGI
Starting point is 01:07:26 and how we spoke about it when OpenAI started — AGI was a system you could go to that can do any economically valuable task at human performance or better. Okay, so that was the definition, and I was pretty happy with that at the time, and I kind of feel like I've stuck to that definition forever, and then people have made up all kinds of other definitions. But I feel like I like that definition. Now, number one, the first concession that people make all the time
Starting point is 01:07:54 is they just take out all the physical stuff, because we're just talking about digital knowledge work. I feel like that's a pretty major concession compared to the original definition, which was like any task a human can do. I can lift things, etc. Like, AI can't do that, obviously. So, okay, but we'll take it. what fraction of the economy are we taking away by saying only knowledge work?
Starting point is 01:08:13 I don't actually know the numbers. I feel like it's about 10 to 20%, if I had to guess, is only knowledge work. Like someone could work from home and perform tasks, something like that. I still think it's a really large market. Like, yeah, what is the size of the economy and what is 10, 20%. Like we're still talking about a few trillion dollars of even in the U.S. of market share almost or like work. So it's still a very massive bucket.
Starting point is 01:08:39 But I guess going back to the definition, what I would be looking for is: to what extent is that definition true? Are there jobs, or lots of tasks — if we think in terms of tasks, you know, not jobs, though that's kind of difficult, because the problem is that society will refactor which tasks make up which jobs
Starting point is 01:08:57 based on what's automatable or not. But today, what jobs are replaceable by AI? So a good example recently was Geoff Hinton's prediction that radiologists would not be a job anymore, and this turned out to be very wrong in a bunch of ways, right? So radiologists are alive and well and growing,
Starting point is 01:09:14 even though computer vision is really, really good at recognizing all the different things that they have to recognize in images. And it's just messy, complicated job with a lot of surfaces and dealing with patients and all this kind of stuff in the context of it. So I guess I don't actually know that by that definition AI has made a huge amount of dent yet.
Starting point is 01:09:32 But some of the jobs maybe that I would be looking for have some features that I think make it very amenable to automation earlier than later. As an example, call center employees often come up, and I think rightly so. Because call center employees have a number of simplifying properties with respect to what's automatable today. Their jobs are pretty simple. It's a sequence of tasks, and every task looks similar.
Starting point is 01:09:53 Like you take a phone call with a person, it's 10 minutes of interaction or whatever it is, probably a bit longer. In my experience, a lot longer. And you complete some task in some scheme, and you change some database entry. around or something like that. So you keep repeating something over and over again, and that's your job. So basically, you do want to bring in the task horizon, how long it takes to perform a task. And then you want to also remove context. Like, you're not dealing with different parts of services of companies or other customers. It's just the database you and a person you're serving.
Starting point is 01:10:23 And so it's more closed. It's more understandable. And it's purely digital. So I would be looking for those things. But even there, I'm not actually looking at full automation yet. I'm looking for an autonomy slider. And I almost expect that we are not going to instantly replace people. We're going to be swapping in AIs that do 80% of the volume. They delegate 20% of volume to humans. And humans are supervising teams of five AIs doing the call center work that's more rote. So I would be looking for new interfaces or new companies that provide some kind of a later that allows you to manage some of these AIs. They are not yet perfect. And then I would expect that across the economy. And a lot of jobs are a lot harder than
Starting point is 01:11:02 call center employee. I wonder with radiologists, I'm totally speculating. I have no idea what the actual workflow of radiologists involves. But one analogy that might be applicable is when Wayne was their first being ruled out, there would be a person sitting in the front seat, and you just had to have them there to make sure that if something went really wrong, they're to monitor. And I think even today, people are still watching to make sure things are going well. Robotaxy, who is just deployed, actually still has a person inside it. And we could be in a similar situation. situation where if you automate 99% of a job, that last 1% the human has to do is incredibly valuable because it's bottlenecking everything else. And if it had, if it was the case with, like,
Starting point is 01:11:43 with radiologists where the person sitting in the front of the Uber or the front of the Waymo has to be specially trained for years in order to be able to provide the last 1%. Their wages should go up tremendously because they're like the one thing bottlenecking wide deployment. So radiologists, I think their wages have gone up for similar reasons. If you're like the last bottleneck, you should, you're like, and you're not fungible, which like, you know, a wayman driver might be fungible with other things. So you might see this thing where like your wages go like whoop and then until you get a 90% and then like just like that.
Starting point is 01:12:09 And when the last one percent is gone. I see. And I wonder if we're some similar things with radiology or salaries of call center workers or anything like that. Yeah. I think that's an interesting question. I don't think we're currently seeing that with radiology or, and I don't have like in my understanding,
Starting point is 01:12:27 but I think radiology is not a good example, basically. I don't know why Jeff Hinton picked on radiology because I think it's an extremely messy, messy, complicated profession. So I would be a lot more interested in what's happening with call center employees today, for example, because I would expect a lot of the road stuff to be automatable today. And I don't have first level access to it, but maybe I would be looking for trends of what's happening
Starting point is 01:12:47 with the call center employees. Maybe some of the things I would also expect is maybe they are swapping in AI, but then I would still wait for a year or two because I would potentially expect them to pull back can actually rehire some of the people. I think there's been evidence that that's already been happening generally in companies that have been adopting AI, which I think is quite surprising. And I also find what is really surprising, okay, AGI, right?
Starting point is 01:13:11 Like a thing we should do everything and, okay, we'll take out physical work. So the thing we should be able to do all knowledge work. And what you would have naively anticipated that the way this regression would happen is like, you would take a little task that a consultant is doing, you take that out of the bucket, you take a little task that an accountant who's doing, you take that out of the bucket, and then you're just doing this across all knowledge work. But instead, if we do believe we're on the path of AGII
Starting point is 01:13:37 with the current paradigm, the progression is very much not like that. At least it just does not seem like consultants and accounts and whatever are getting like huge productive improvement. It's very much like programmers are like getting more and more chills of the way of their work. If you look at the revenues of these companies, discounting just like normal chat revenue, which I think is like, I don't know, that's similar to like Google or something.
Starting point is 01:14:00 Just looking at API revenues, it's like dominated by coding, right? So this thing which is general, quote unquote, which should be able to do any knowledge work, it's just overwhelmingly doing only coding. And it's a surprising way that you would expect like the AGI to be deployed. So I think there's an interesting point here because I do believe coding is like the perfect first thing for these LLMs and agents. And that's because coding has always fundamentally –
Starting point is 01:14:26 worked around text. It's computer terminals and text, and everything is based around text. And LLMs, the way they're trained on the internet, love text. And so they're perfect text processors, and there's all this data out there, and it's just perfect fit. And also we have a lot of infrastructure pre-built for handling code and text. So, for example, we have a Visual Studio code or, you know, your favorite IDE showing you code. And an agent can plug into that. So for example, if an agent has a diff where it made some change, we suddenly have all this code already that shows all the differences to a codebase using a diff. So it's almost like we've pre-built a lot of the infrastructure for code.
Starting point is 01:15:07 Now, contrast that with some of the things that don't enjoy that at all. So as an example, like there's people trying to build automation, not for coding, but for example, for slides. Like I saw a company doing slides, that's much, much harder. And the reason that's much harder is because slides are not text. Yeah. Slides are little graphics and they're arranged spatially. and there's visual component to it.
Starting point is 01:15:26 And slides don't have this pre-built infrastructure. Like, for example, if an agent is to make a different change to your slides, how does a thing show you the diff? How do you see the diff? There's nothing that shows divs for slides. Someone has to build it. So it's just some of these things are not amenable to AIs as they are, which is text processors.
Starting point is 01:15:47 And code surprisingly is. Actually, I'm not sure if that alone explains it, because... I personally have tried to get LLMs to be useful in domains, which are just pure language and language out, like rewriting transcripts, like coming up with clips based on transcripts, etc. And you might say, well, it's very plausible that, like, I didn't do every single possible thing I could do. I put a bunch of, you know, good examples in context, but maybe I should have done, like, some kind of fine-tuning, whatever. So our mutual friend Annie Matushak told me that he actually tried 50 billion things. to try to get models to be good at writing space repetition prompts.
Starting point is 01:16:27 Again, very much language in, language out tasks, the kind of thing that should be dead center in the repertoire of these LLLNs. And he tried, in context learning, obviously, with a few shot examples. He tried, I think he told me like a bunch of things, like a supervised fine-tuning and like, you know, retrieval, whatever. And he just could not get them to make hearts to a satisfaction. So I find it striking that even in language out domains, it's actually very hard to get
Starting point is 01:16:53 a lot of economic value out of these models separate from coding. And I don't know what explains it. Yeah, I think that makes sense. I mean, I would say I'm not saying that anything text is trivial, right? I do think that code is like it's pretty structured.
Starting point is 01:17:10 Text is maybe a lot more flowery and there's a lot more like entropy in text, I would say. I don't know how I also put it. And also, I mean, code is hard and so people sort of feel quite empowered by LLMs, even from like simple, simple kind of knowledge. I basically, I don't actually know that I have a very good answer.
Starting point is 01:17:30 I mean, obviously, like, text makes it much, much easier maybe. It's maybe why I put it, but it doesn't mean that all text is trivial. How do you think about superintelligence? Do you expect it to feel qualitatively different from normal humans or human companies? I guess I see it as like a progression of automation in society, right? And again, like extrapoling the trend of computing, I just feel like there will be a gradual automation of a lot of things. And superintelligence will be sort of like the extrapolation of that. So I do think we expect more and more autonomous entities over time that are doing a lot of the digital work and then eventually even the physical work, probably some amount of time later.
Starting point is 01:18:07 But basically I see it as just automation, roughly speaking. I guess automation implies the things humans can already do, and superintelligence implies things humans can't. Well, but some of the things that people do is invent new things, which I would just put into the automation bucket, if that makes sense. Yeah. But I guess maybe less abstractly and more sort of qualitatively: do you expect something to feel like — okay, because this thing can either think so fast, or has so many copies, or the copies can merge back into themselves, or is quote-unquote
Starting point is 01:18:42 much smarter — any number of advantages an AI might have — the civilization in which these AIs exist will just feel qualitatively different from human civilization? I mean, it is fundamentally automation, but it will be extremely foreign. I do think it will look really strange because, like you mentioned, we can run all of this on a computer cluster, etc.
Starting point is 01:19:02 And much faster in all this thing. I mean, maybe some of the scenarios, for example, that I start to get like nervous about with respect to when the world looks like that is this kind of like gradual loss of control and understanding of what's happening. And I think that's actually the most likely outcome, probably, is that there will be a gradual loss
Starting point is 01:19:16 of understanding — we'll gradually layer all this stuff everywhere, and there'll be fewer and fewer people who understand it, and there will be this scenario of gradual loss of control and understanding of what's happening. That to me seems the most likely outcome of how all this stuff will go down. Let me probe on that a bit. It's not clear to me that loss of control and loss of understanding are the same things. A board of directors at, like, whatever, TSMC, Intel, name a random company.
Starting point is 01:19:50 They're just, like, prestigious 80-year-olds. They have very little understanding. And maybe they don't practically actually have control. But — or actually, maybe a better example is the President of the United States. The President has a lot of fucking power. I'm not trying to make a statement about the current occupant, but maybe I am. But, like, the actual level of understanding is very different from the level of control. Yeah, I think that's fair. That's a good pushback. I think, like, I guess I expect a loss of both.
Starting point is 01:20:14 How come? I mean, the loss of my understanding is obvious, but why a loss of control? So we're really far into territory of, I don't know what this looks like, but if I was to write sci-fi novels, they would look along the lines of not even a single entity or something like that. That just sort of like takes over everything, but actually like multiple competing entities that gradually become more and more autonomous. And some of them go rogue and the others, like fight them off and all this kind of stuff. And it's like this hot pot of completely autonomous activity that we've delegated to. I kind of feel like it would have that flavor.
Starting point is 01:20:53 It is not the fact that they are smarter than us that is resulting in the loss of control. It is the fact that they are competing with each other and whatever arises out of that competition that leads to the loss of control. I mean, I basically expect there to be, I mean, a lot of these things, I mean, they will be tools,
Starting point is 01:21:12 to people, and some of the population — they're acting on behalf of people or something like that. So maybe those people are in control, but maybe it's a loss of control overall for society, in the sense of the outcomes we want or something like that, where you have entities acting on behalf of individuals that are still kind of roughly seen as out of control. Yeah. Yeah. This is a question I should have asked earlier. So we were talking about how, currently, it feels like when you're doing AI engineering or AI research, these models are more in the category of a compiler rather than in the category of a replacement. At some point, if you have quote-unquote AGI, it should be able to do what you do.
Starting point is 01:21:47 And do you feel like having a million copies of you in parallel results in some huge speed-up of AI progress? Basically, if that does happen, do you expect to see an intelligence explosion — once we have a true AGI, and I'm not talking about LLMs today, but real AGI? I guess what I mean is, I do, but it's business as usual, because we're in an intelligence explosion already and have been for decades. When you look at GDP — it's basically the GDP curve. That is an exponential, a weighted sum over so many aspects of the industry.
Starting point is 01:22:16 Everything is gradually being automated. Has been for hundreds of years. Industrial Revolution is automation and some of the physical components and the tool building and all this kind of stuff. Compilers are early software automation, etc. So I kind of feel like we've been recursively self-improving and exploding for a long time. Maybe another way to see it is, I mean, Earth was a pretty, I mean, if you don't look at the biomechanics and so on. It was a pretty
Starting point is 01:22:39 boring place, I think, and looked very similar if you just look from space, and Earth is spinning and then, like, we're in the middle of this, like, firecracker event. Right. But we're seeing it in slow motion. But I definitely feel like this has already happened for a very long time. And, again, like, I don't see AI
Starting point is 01:22:55 as, like, a distinct technology with respect to what has already been happening for a long time. So you think it's like continuous with this hyper exponential trend? And that's why, like, this is, this was very interesting to me because I was trying to find AI in the GDP for a while. I thought that GDP should go up. But then I looked at some of the other technologies that I thought were very transformative, like maybe computers or mobile phones
Starting point is 01:23:17 or et cetera. You can't find them in GDP. GDP is the same exponential. And it's just that even, for example, the early iPhone didn't have the app store and it didn't have a lot of the bells and whistles that the modern iPhone has. And so even though we think of 2008 was it when iPhone came out as like some major seismic change, it's actually not. Everything is like so spread out and so slowly diffuses, that everything ends up being averaged up into the same exponential. And it's the exact same thing with computers. You can't find them in the GDP is like, oh, we have computers, now, it's not what happened
Starting point is 01:23:43 because it's such a slow progression. And with AI, we're going to see the exact same thing. It's just more automation. It allows us to write different kinds of programs that we couldn't write before, but AI is still fundamentally a program. And it's a new kind of computer and a new kind of computing system, but it has all these problems,
Starting point is 01:23:59 it's going to diffuse over time, and it's still going to add up to the same exponential. And we're still going to get an exponential that's going to get extremely vertical, and it's going to be very foreign to live in that kind of an environment. Are you saying that, like, what will happen is, so if you go, if you look at the trend before the Industrial Revolution to currently, you have a hyper exponential where you go from like 0% growth to then 10,000 years ago, 0.02% growth, and then currently we're at 2% growth. So that's the hyper exponential, and you're saying, if you're charting AI on there, then it's
Starting point is 01:24:28 like AI takes you to 20% growth or 200% growth. Or you could be saying, if you look at the last 300 years, what you've been seeing? is you have technology after technology, computers, electrification, and steam engines, railways, etc. But the rate of growth is the exact same. It's 2%. So are you saying the rate of growth will... No, I basically...
Starting point is 01:24:47 I expect the rate of growth has also stayed roughly constant, right? For only the last 200, 300 years. But over the course of human history, it's, like, exploded, right? It's like gone from like 0%, basically, to like faster, faster, faster, industrial explosion, 2%. Basically, I guess what I'm saying is for a while I tried to find AI or look for AI in the GDP curve.
Starting point is 01:25:06 And I kind of convinced myself that this is false. And that even when people talk about recursive self-improvement and labs and stuff like that, I even don't, this is a business as usual. Of course, it's going to recursively self-improve and it's been recursively self-improving.
Starting point is 01:25:17 Like, LLMs allow the engineers to work much more efficiently to build the next round of LLM. And a lot more of the components are being automated and tuned and et cetera. So all the engineers having access to Google search is sort of part of it.
Starting point is 01:25:30 All the engineers having an ID, all of them have auto-complete or having cloth code, etc. It's all just part of the same speed up of the whole thing. So it's just so smooth. But just to clarify, you're saying that the rate of growth will not change. Like, you know, the intelligence explosion will show up as like, it just enabled us to continue staying on the 2% growth trajectory,
Starting point is 01:25:51 just like the internet helped us stay on the 2% growth trajectory. Yeah, my expectation is that it stays the same pattern. Yeah.
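For a sense of scale on the growth rates being debated here, the doubling times work out roughly as follows (simple compounding arithmetic, not a forecast):

```python
import math

for rate in (0.0002, 0.02, 0.20):  # 0.02%, 2%, 20% annual growth
    doubling_years = math.log(2) / math.log(1 + rate)
    print(f"{rate:>6.2%} growth -> economy doubles every ~{doubling_years:,.0f} years")

# 0.02% -> ~3,466 years; 2% -> ~35 years; 20% -> ~3.8 years.
# This is why "stays on the same 2% exponential" and "jumps to 20%" describe
# very different worlds, even though both are exponentials.
```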
Starting point is 01:26:10 this is like a replacement of a human in a server is qualitatively different from these other productivity improving technologies because it's labor itself right I think we're living in a very labor constrained world if we talk to any startup founder
Starting point is 01:26:25 or any person you can just be like okay what do you need more of you just like need really talented people and if you just have billions of extra people who are inventing stuff, integrating themselves, making companies, bottoms start to finish. That feels
Starting point is 01:26:39 or any person, you can just be like, okay, what do you need more of? You just need really talented people. And if you just have billions of extra people who are inventing stuff, integrating themselves, making companies from start to finish — that feels
Starting point is 01:26:57 qualitatively different from just a single technology. It's just sort of like asking what happens if you get 10 billion extra people on the planet. I mean, maybe a counterpoint — I mean, number one, I'm actually pretty willing to be convinced one way or another on this point. But I will say, for example, computing is labor. Computing was labor. Computers — like, a lot of jobs disappeared because computers are automating
Starting point is 01:27:18 a bunch of digital information processing that you now don't need a human for. And so computers are labor, and that has played out. And, you know, self-driving as an example is also computers doing labor. So I guess that's already been playing out. So it's still business as usual. Yeah. I guess you have a machine which is spitting out more things like that at a potentially
Starting point is 01:27:38 At the same time, I do feel like people make this assumption of like, okay, we have God in the box and now it can do everything. And it just won't look like that. It's going to be able to do some of the things. It's going to fail at some other things. It's going to be gradually put into society and basically end up with the same pattern,
Starting point is 01:27:52 is my prediction. Yeah. Because this assumption of suddenly having a completely intelligent, fully flexible, fully general human in a box, and we can dispense it at arbitrary, problems in society. I don't think that we will have this like discrete change. And,
Starting point is 01:28:07 and so I think we'll arrive at the same, at the same kind of gradual diffusion of this across the industry. I think what often ends up being misleading in these conversations is people, I don't like to use a word intelligence in this context, because intelligence applies you think, like, oh, super intelligence will be sitting, there will be a single superintelligence sitting in a server and it will like divine how to come up with new technology. and inventions that causes this explosion. And that's not what I'm imagining, when I'm imagining 20% growth.
Starting point is 01:28:37 and so I think we'll arrive at the same kind of gradual diffusion of this across the industry. I think what often ends up being misleading in these conversations is — people, I don't like to use the word intelligence in this context, because intelligence implies, you think, like, oh, there will be a single superintelligence sitting in a server and it will, like, divine how to come up with new technology
Starting point is 01:29:00 They figured it out. They could start a company. They could make inventions or like just increased productivity in the world. And we have examples, even in the current regime, of places that have had 10, 20% economic growth. You know, if you just have a lot of people and less capital in comparison to the people, you can have Hong Kong or Shenzhen or whatever just had decades of 10% plus growth. And I think it's just like there's a lot of really smart people who are ready to like make use of the resources and do this like period of catch up. because we've had this discontinuity.
Starting point is 01:29:32 And I think, yeah, maybe similar. So I think I understand, but I still think that you're presupposing some discrete jump, there's some unlock that we're waiting to claim, and suddenly we're going to have geniuses in data centers. And I still think you're presupposing some discrete jump that I think has basically no historical precedent that I can't find in any of the statistics
Starting point is 01:29:51 and that I think probably won't happen. I mean, the Industrial Revolution is such a jump, right? You went from like 0.2% growth to 2% growth. I'm just saying, like, you'll see another jump like that. I'm a little bit suspicious. I would have to look at it. I'm a little bit suspicious and I would have to take a look.
Starting point is 01:30:05 For example, like maybe some of the logs are not very good from before the industrial evolution or something like that. So I'm a little bit suspicious of it, but yeah, maybe you're right. I don't have strong opinions.
Starting point is 01:30:15 Maybe you're saying that this was a singular event that was extremely magical and you're saying that maybe there's going to be another event that's going to be just like that, extremely magical.
Starting point is 01:30:22 It will break paradigm and so on. I actually don't think the... I mean, the crucial thing about the industrial revolution was that it was not magical, right? Like, if you just zoomed in, what you would see in 1770 or 1870 is not that there was some key invention. Yeah, exactly. But at the same time, you did move the economy to a regime where the progress was much faster.
Starting point is 01:30:45 And the exponential 10xed. And I expected some other thing from AI where it's not like there's going to be a single moment where we made the crucial invention. There's some overhang that's being unlocked. Like maybe there's a new energy source. There's some unlock, in this case, some kind of a cognitive capacity. And there's an overhang of cognitive work to do. That's right. And you're expecting that overhang to be filled by this new technology went across to the threshold.
Starting point is 01:31:07 Yeah. And I mean, maybe one way to think about it is through history, a lot of growth. I mean, growth comes because people come up with ideas. And then people are like out there doing stuff to execute those ideas and make valuable output. And through most of this time, population isn't exploding that has been driving growth. For the last 50 years, people have argued that growth has stagnated. population in frontier countries is also stagnated. I think we go back on the hyper-explancial growth in population and output.
Starting point is 01:31:34 Right, I'm sorry, exponential growth and population that causes hyper-extensional growth and output. Yeah, I mean, yeah, it's really hard to tell. Yeah. I understand that viewpoint. Yeah. I don't intuitively feel that viewpoint. So we just got access to Google's V03.1. And it's been really cool to play around with.
Starting point is 01:31:51 The first thing we did was run a bunch of problems through both V-O-3 and 3.1. to see what's changing the new version. So here's V-O-3. Hi, I'm Max, and I got stuck in a local minimum again. It's okay, Max. We've all been there. Took me three epochs to get out. And here's VO3.3.1. Hi, I'm Max, and I got stuck in a local minimum again.
Starting point is 01:32:14 It's okay, Max. We've all been there. Took me three-uprocks to get out. 3-1's output is just consistently more coherent, and the audio is noticeably higher quality. We've been using VO for a while now, actually. we released an essay earlier this year about AI firms fully animated by VEO2, and it's been amazing to see how fast these models are improving. This update makes VEO even more useful in terms of animating our ideas and our explainers.
Starting point is 01:32:40 You can try Vio right now in the Gemini app with pro and ultra subscriptions. You can also access it through the Gemini API or through Google Flow. You recommended Nick Lane's book to me, and then on that basis, I also find it super interesting and I interviewed him. And so I actually have some questions about sort of thinking about intelligence and evolutionary history. Now that you, over the last 20 years of doing air research, you maybe have a more tangible sense of what intelligence is, what it takes to develop it. Are you more or less surprised as a result that evolution just sort of spontaneously stumbled upon it?
Starting point is 01:33:19 I love Nick Lane's books, by the way. So, yeah, I was just listening to his podcast way up here. with respect to intelligence and its evolution, I do came, it came fairly, I mean, it's very, very recent, right? I am surprised that it evolved. Yeah. I find it fascinating to think about all the worlds out there.
Starting point is 01:33:36 Like, say, there's a thousand planets, like Earth and what they look like. I think Nick Lane was here talking about some of the early parts, right? Like, okay, he expects basically very similar life forms, roughly speaking, in bacteria-like things and most of them. Yeah. And then there's a few breaks in there. I would expect that the evolution of intelligence
Starting point is 01:33:52 intuitively feels to me like it should be fairly rare event and there have been animals for, I guess maybe you should base it on how long something has existed. So for example, if bacteria have been around for two billion years and nothing happened, then going to your care, it's probably pretty hard because bacteria actually came up quite early in Earth's evolution or history. And so I guess how long have we yet animals, maybe a couple hundred million years, like multicellular animals that like run, wrong, crawl, etc., which is maybe 10% of Earth's lifeband or something like that. So I mean, maybe on that time scale is actually not too tricky.
Starting point is 01:34:27 I still feel like it's still surprising to me, I think intuitively, that it developed. I would maybe expect just a lot of animal-like life forms doing animal-like things. The fact that you can get something that creates culture and knowledge and accumulates it, it is surprising to me. Okay, so there's actually a couple of interesting follow-ups. If you buy the Sun perspective that actually the crux of intelligence is animal intelligence, what the quote he said is, if you got to this screen, you'd be most of the way to AGI.
Starting point is 01:34:57 Then we got to squirrel intelligence, I guess, right after the Cambrian explosion, 600 million years ago. It seems like what instigated that was the oxygenation event 600 million years ago. But immediately the sort of like intelligence algorithm was there to like make the squirrel intelligence, right? So it's suggestive that animal intelligence was like that. As soon as you had the oxygen environment, you had the ecuriot, you could just like get the algorithm.
Starting point is 01:35:22 Maybe there was like sort of an accident that evolution smell it bonded so fast, but I don't know if that suggests it's actually quite, at the end, going to be quite simple. Yes, basically it's so hard to tell, right, with any of this stuff. I guess you can base it a little bit on how long something has a zigset or how long it feels like something else in bottlenecked. So Nicolane is very good about describing this like very apparent bottleneck in bacteria for two billion years. Nothing happened. Like extreme diversity of chemical, of biochemistry and yet nothing that grows to become. animals, two billion years.
Starting point is 01:35:54 I don't know that we've seen exactly that kind of an equivalent with animals and intelligence to your point, right? But I guess maybe we could also look at it with respect to how many times we think evolution or intelligence has like individually sprung up.
Starting point is 01:36:07 That's a really good thing to investigate. Maybe one thought on that is I almost feel like, well, there's the hominid intelligence. And there's, I would say like the bird intelligence, right? Like ravens, etc. are extremely clever. Yeah. But their brain parts are actually quite distinct,
Starting point is 01:36:22 and we don't have that much existence. So maybe that's a slight event of, there's a slight indication of maybe intelligence springing up a few times. And so in that case, you'd maybe expect it more frequently or something like that. Yeah. A former guest, Gwern and also Carl Schroen, have made a really interesting point about that, which is their perspective is that the scalable algorithm
Starting point is 01:36:44 which humans have and primates have arose in birds as well. and maybe other times as well. But humans found a evolutionary niche, which rewarded marginal increases in intelligence, and also had a scalable brain algorithm that could achieve those increases in intelligence. And so, for example, if a bird had a bigger brain, it would just like collapse out of the air.
Starting point is 01:37:08 So it's very smart for the size of its brain, but it's not in a niche which rewards the brain getting bigger. Yeah. Maybe similar with some really smart... Or dolphins, etc. Exactly, yeah. Whereas humans, you know, like we have hands that like reward being able to learn how to do tool use, being externalized digestion, more energy to the brain. And that kicks off the fly wheel. Yeah, and just stuff to work with. I mean, I'm guessing it would be harder to, if I was a dolphin. I mean, how do you do, you can't have fire, for example, and stuff like that. I mean, they're probably like the universe of things you can do in water, like inside water is probably lower than what you can do on land. Just chemically. Right. Yeah, I do agree with this, with this viewpoint of these niches and what's being incentivized.
Starting point is 01:37:49 I still find it's kind of miraculous that I don't, I would have maybe expected things to get stuck on like animals with bigger muscles, you know? Yeah. Like going through intelligence is actually a really fascinating breaking point. The way where it is, the reason it was so hard is, is a very tight line between being in a situation where something is so important to learn that it's not just worth distilling the. exact right circuits directly back into your DNA versus it's not important enough to learn at all. Yeah. It has to be something which is like you have to incentivize building the algorithm to learn in lifetime. Yeah, exactly.
Starting point is 01:38:28 You have to incentivize some kind of adaptability. You actually want something that you actually want environments that are unpredictable. So evolution can't bake your algorithms into your weights. A lot of animals are basically pre-baked in this sense. And so humans have to figure it out that test time when they get born. And so maybe there was, you actually want these kinds of environments that actually change really rapidly or something like that where you can't foresee what will work well. And so you actually put all that intelligent, you create intelligence to figure it out at this time. So Quentin Pope had this interesting blog post where we're saying the reasoning doesn't expect a sharp takeoff is so humans had the sharp takeoff where 60,000 years ago we seem to have had the kind of architectures that we have today.
Starting point is 01:39:09 And 10,000 years ago, agriculture revolution. modernity, dot, dot, dot. What was happening in that 50,000 years? Well, you had to build this sort of like cultural scaffold where you can accumulate knowledge over generations. This is an ability that exists for free in the way we do AI training where if you retrain a model, it can still,
Starting point is 01:39:30 I mean, in many cases they're literally distilled, but they can be trained on each other, they can be trained on the same pre-training corpus. They don't literally have to start from scratch. So there's a sense in which the thing which, it took humans a long time to get this cultural loop going, just comes for free with the way we do LLM training. Yes and no, because LLMs don't really have the equivalent of culture.
Starting point is 01:39:50 And maybe we're giving them way too much and incentivizing not to create it or something like that. But I guess like the mention of culture and of written record and of like passing down notes between each other, I don't think there's an equivalent of that with LLM's right now. So LMs don't really have culture right now. And it's kind of like one of the, I think, impediments, I would say. Can you give me some sense of what LLM culture might look like?
Starting point is 01:40:11 So in the simplest case, it would be a giant scratch pad that the LLM can edit. And as it's reading stuff or as it's helping out with work, it's editing the scratch pad for itself. Why can't an LLM write a book for the other LLMs? That would be cool. Yeah. Like, why can't other LLMs read this LLM's book and be inspired by it or shocked by it or something like that? There's no equivalence for any of the stuff. Interesting.
Starting point is 01:40:32 When would you expect that kind of thing to start happening? And more general question about like multi-agent systems and a sort of like independent AI, civil in culture. I think there's two powerful ideas in the realm of multi-agent that have both not been like really claimed or so on. The first one I would say is culture and LLM's basically a growing repertoire of knowledge for their own purposes. The second one looks a lot more like the powerful idea of self-play in my mind is extremely powerful. So evolution actually is a lot of competition basically driving intelligence and evolution. And in AlphaGo, more algorithmically, like AlphaGo is playing against itself
Starting point is 01:41:12 and that's how it learns to get really good at Go and there's no equivalent of self-playing in LLMs, but I would expect that to also exist but no one has done it yet. Why can an LLM, for example, create a bunch of problems that another LLM is learning to solve and then the LLM is always trying to like serve more and more difficult problems,
Starting point is 01:41:28 stuff like that, you know? So like, I think there's a bunch of ways to actually organize it and I think it's a realm of research but I think I haven't seen anything that convincingly claims both of those like multi-agent improvements. I still think we're mostly in the realm of a single individual agent,
Starting point is 01:41:44 but I also think that will change. And in the realm of culture, also I would bucket also organizations. And we haven't seen anything like that commisingly either. So that's why we're still early. And can you identify the key bottleneck that's preventing this kind of collaboration between other ones? Maybe like the way I would put it is somehow remarkably, again, some of these analogies work and they shouldn't, but somehow remarkably they do.
Starting point is 01:42:09 A lot of the smaller models, or the smaller models somehow remarkably resemble like a kindergarten student or then like an elementary school student or high school student, et cetera. And somehow we still haven't graduated enough where the stuff can take over. Like it's still mostly, like my cloth code or codex,
Starting point is 01:42:25 they still kind of feel like this elementary grade student. I know that they can take PhD quizzes, but they still cognitively feel like a kindergarten or an elementary school student. So I don't think they can create culture because they're still kids. You know, like they're savant kids. They have perfect memory of all this stuff, et cetera,
Starting point is 01:42:44 and they can convincingly create all kinds of slop that looks really good. But I still think they don't really know what they're doing, and they don't really have the cognition across all these little checkboxes that we still have to collect. Yeah. So you've talked about how you were at Tesla leading self-driving from 2017 to 2022, and then you firsthand saw this progress from, we went from cool demos to now
Starting point is 01:43:07 thousands of cars out there actually autonomously doing drives. Why did that take a decade? What was happening through that time? Yeah. So I would say one thing I would almost instantly also push back on is this is not even near done.
Starting point is 01:43:21 So in a bunch of ways that I'm going to get to. I do think that self-driving is very interesting because it's definitely like where I get a lot of my intuitions because I spent five years on it. And it has this entire history where actually the first demos of self-driving go all the way to 9080s. You can see a demo from CMU at 1986.
Starting point is 01:43:39 There's a truck that's driving itself on roads. But, okay, fast forward. I think when I was joining Tesla, I had a very early demo of a Waymo, and it basically gave me a perfect drive in 2014 or something like that. So perfect Waymo drive a decade ago. Took us around Palo Alto and so on
Starting point is 01:43:59 because I had a friend who worked there. And I thought it was like very close and then still took a long time. And I do think that for some kinds of tasks and jobs and so on, there's a very large demo to product gap where the demo is very easy, but the product is very hard. And it's especially the case in cases like self-driving where the cost of failure is too high, right? Many industries, tasks, and jobs maybe don't have that property. But when you do have that property, that definitely increases the timelines. I do think that, for example, in software engineering, I do actually think that that property does exist.
Starting point is 01:44:33 I think for a lot of vibe coding, it doesn't. But I think if you're writing actual production great code, I think that property should exist because any kind of mistake actually leads to security vulnerability or something like that. And millions and hundreds of millions of people's personal social security numbers, et cetera, get leaked or something like that. And so I do think that it is a case that in software,
Starting point is 01:44:51 people should be careful. Kind of like in self-driving. Like in self-driving, if things go wrong, you might get injury in, I guess there's worse outcomes. But I guess in software, I almost feel like, It's almost unbounded how terrible some things could be. So I do think that they share that property. And then I think basically what takes the long amount of time
Starting point is 01:45:12 and the way to think about it is that it's a march of nines and every single nine is a constant amount of work. So every single nine is the same amount of work. So when you get a demo and something works 90% of the time, that's just the first nine. And then you need the second nine and third nine, four, nine and ninth of nine. And while I was at Tesla for, was it five years or so, I think we went through maybe three nines or two nines.
Starting point is 01:45:34 I don't know what it is, but like multiple nines of iteration, there's still more nines to go. And so that's why these things take so long. And so it's definitely formative for me, like seeing something that was a demo. I'm very unimpressed by demos. So whenever I see demos of anything, I'm extremely unimpressed by that.
Starting point is 01:45:52 It works better if you can, if it's a demo that someone cooked up and is just showing you its worst. If you can interact with it, it's a bit better. But even then you're not done. You need actual product. It's going to face all these challenges. in when it comes in contact with reality and all these different pockets of behavior that need patching. And so I think we're going to see all this stuff play out. It's a march of nines. Each nine is
Starting point is 01:46:10 constant. Demos are encouraging. Still a huge amount of work to do. I do think it is a kind of a critical safety domain unless you're doing bi-coding, which is all nice and fun and so on. And so that's why I think this also enforced my timelines from that perspective. That's very interesting to hear you say that the sort of safety guarantees you need from software are actually not dissimilar to self-driving because what people will often say is that self-driving took so long because the cost of failure is so high. Like a human makes a mistake on the average every 400,000 miles or every seven years. And if you had to release a coding agent that couldn't make a mistake for at least seven years,
Starting point is 01:46:50 it would be much harder to deploy. But I guess your point is that if you made a catastrophic coding mistake, like breaking some important system every seven years. to do. And in fact, in terms of sort of wall clock time, it would be much less than seven years because you were like constantly outputting code like that, right? So it's like per tokens or in terms of tokens, it would be seven years, but in terms of
Starting point is 01:47:10 wall clock time, it would be pretty close. It's a much harder problem. I mean, self-driving is just one of thousands of things that people do. It's almost like a single vertical, I suppose. Whereas when we're talking about general software engineering, it's even more, there's more surface area. There's another objection people make to that analogy, which is that with self-driving, What took a big fraction of that time
Starting point is 01:47:31 was solving the problem of having basic perception that's robust and building representations and having a model that has some common sense so it can generalize to when I see something that's slightly out of distribution if somebody's waving down the road this way you don't need to train for it
Starting point is 01:47:49 the thing will have some understanding of how to respond to something like that and these are things we're getting for free with LLMs or VLMs today so we don't have to solve these very basic representation problems. And so now deploying AIs across different domains will sort of be like deploying a self-driving car with current models to a different city, which is hard, but not like a 10-year-long
Starting point is 01:48:09 task. Yeah, basically, I'm not 100% sure if I fully agree with that. I don't know how much we're getting for free. And I still think there's like a lot of gaps in understanding in what we are getting. I mean, we're differently getting more generalizable intelligence in a single entity, whereas self-trapping is a very special purpose task that requires, in some sense, building a special purpose task is maybe even harder in a certain sense because it doesn't fall out for a more general thing that you're doing at scale
Starting point is 01:48:32 if that makes sense. So, but I still think that the analogy doesn't, I still don't know if it fully resonates because, like, the al-ams are still pretty fallible and I still think that they have a lot of gaps and that it still needs to be filled in. And I don't think that we're getting like magical generalization completely out of the box sort of in a certain sense. And the other aspect that I wanted to also actually return to it when I was in the beginning was self-driving cars are nowhere and they're done still.
Starting point is 01:48:59 So even though, so the diplomas still are pretty minimal, right? So even Waymo and so on has very few cars. And they're doing that, roughly speaking, because they're not economical, right? Because they've built something that lives in the future. And so they had to pull back future, but they had to make it uneconomical. So they have all these, like, you know, there's all these costs, not just marginal costs for those cars and their operation and maintenance, but also the CAPEX of the entire thing. So making the economical is still going to be a slog, I think, for them.
Starting point is 01:49:28 And then also I think when you look at these cars and there's no one driving, I also think it's a little bit deceiving because there are actually very elaborate teleoperation centers of people actually kind of like in a loop with these cars. And I don't have the full extent of it, but I think there's more human in a loop that you might expect and there's people somewhere out there basically beaming in from the sky. And I don't actually know they're fully in the loop with the driving. I think some of the times they are, but they're certainly involved
Starting point is 01:49:54 and there are people. And in some sense, we haven't actually removed the person. We've, like, moved them to somewhere we can't see them. I still think there will be some work, as you mentioned, going from environment to environment.
Starting point is 01:50:03 And so I think, like, there's still challenges to make self-driving real. But I do agree that it's definitely across the threshold where it kind of feels real, unless it's, like, really tall-operated.
Starting point is 01:50:13 For example, Waymo can't go to all the different parts of the city. My suspicion is it's like parts of city where you don't get good signal. Anyway, so basically, I don't actually know anything about the stack. I mean, I'm just making up, making up stuff.
Starting point is 01:50:26 I truly let self driving for five years of Tesla. Sorry, I don't know anything about the specifics of Waymore. I feel like to talk about them. I actually, by the way, a lot for Waymo, and I take it all the time. Yeah. So I don't want to say, like, sure. I just think that people, again, are sometimes a little bit too naive about some of the progress, and I still think there's a huge mind of work.
Starting point is 01:50:42 And I think Tesla took, in my mind, a lot more scalable approach. Yeah. And I think the team is doing extremely well and it's going to, and I'm kind of like on the record for predicting how this thing will go. which is like when we had like early start because you can package up so many sensors. But I do think Tesla is taking the more scalable strategy and it's going to look a lot more like that.
Starting point is 01:51:00 So I think this will have to still play out and hasn't. But basically like, I don't want to talk about self-driving as something that took a decade because it didn't take it didn't take yet. If that makes sense. Because one, the start is at 1980, not 10 years ago and then two, the end is not here yet. Yeah, the end is not near yet.
Starting point is 01:51:17 Because when we're talking about self-driving, usually in my mind, it's self-driving at scale. Yeah. People don't have to get a driver's license, etc. I'm curious to bounce two other ways in which the analogy might be different. And the reason I'm especially curious about this is because I think the question of how fast AI is deployed, how valuable it is when it's early on is potentially the most important question in the world right now, right? Like if you're trying to model what the year or 20 or 30 looks like, this is the question you want to have some understanding of. So another thing you might think is, one, you have this latency requirement.
Starting point is 01:51:51 with self-driving where you have I have no idea what the actual models are but I assume like tens of millions of parameters or something which is not
Starting point is 01:51:58 the necessary constraint for knowledge work with LLMs or maybe it might be with the computer use and stuff but anyways the other big one
Starting point is 01:52:06 is maybe more importantly on this KAPX question yes there is additional cost to serving up an additional copy of a model
Starting point is 01:52:15 but the sort of op-x of a session is quite low and you can amortize the cost of AI into the training run itself, depending on how inference scaling goes and stuff. But it's certainly not as much as,
Starting point is 01:52:30 like, building a whole new car to serve another instance of a model. So it just, the economics of deploying more widely are much more favorable. I think that's right. I think if you're sticking in a realm of bits, bits are like a million times easier
Starting point is 01:52:44 than anything that touches the physical world. I definitely grant that. bits are completely changeable, arbitrarily reshuffledable at a very rapid speed. So you would expect a lot more faster adaptation also in the industry and so on. And then what was the first one? The latency requirements. Oh, the latency requirement. And the limitations for model size.
Starting point is 01:53:05 I think that's roughly right. I mean, I also think that if we are talking about knowledge work at scale, there will be some latency requirements, practically speaking, because we're going to have to create a huge amount of compute and serve that. And then I think like the last aspect that I very briefly want to also talk about is like all the all the rest of it. Just all the rest of it. So what the society think about it. What is the legal?
Starting point is 01:53:28 How is it working legally? How is it working insurance-wise? Who's really like what is the where are those layers of it and aspects of it? What happens with what is the equivalent of people putting a cone on a Waymo? Yeah. You know, there's going to be equivalence of all that. And so I do think that I almost feel like self-traving is a very nice. analogy that you can borrow things from.
Starting point is 01:53:47 Yeah, what is the equivalent of a cone on the car? What is the equivalent of a teleoperating worker who's like hidden away? And almost like all the aspects of it. Yeah. Do you have any opinions on whether this implies that the current AI build out, which would like 10x the amount of an available computer in the world in a year or two and maybe like 100, more than 100 X at by the end of the decade? If the use of AI will be lower than some people in IATLY predict,
Starting point is 01:54:13 Does that mean that we're overbuilding compute, or is that a separate question? Kind of like what happened with railroads and all this kind of stuff. With what, sorry? Was it railroads? Sorry. Yeah, that's right. There is like historical precedent or was it with telecommunication industry, right? Like prepaving the internet that only came like a decade later, you know, and creating
Starting point is 01:54:31 like a whole bubble in the telecommunications industry in the late 90s kind of thing. Yeah. So I don't know. I mean, I understand I'm sounding very pessimistic here. I'm only doing that. I'm actually optimistic. I think this will work. I think it's tractable.
Starting point is 01:54:45 I'm only sounding pessimistic because when I go on my Twitter timeline, I see all this stuff. That makes no sense to me. And I think there's a lot of reasons for why that exists. And I think a lot of it is, I think, honestly, just fundraising. It's just incentive structures.
Starting point is 01:55:00 A lot of it may be fundraising. A lot of it is just attention, you know, converting attention to money on the internet, you know, stuff like that. So I think there's a lot of people. that going on, and I think I'm only reacting to that, but I'm still like overall very bullish on technology. I think we're going to work through all this stuff, and I think there's been a rapid amount of progress. I don't actually know that there's overbuilding. I think that there's
Starting point is 01:55:24 going to be, we're going to be able to gobble up what, in my understanding, is being built, because I do think that, for example, cloud code or opening eye codex and stuff like that, they didn't even exist a year ago, right? Is that right? I think it's roughly right. This is a miraculous technology that didn't exist. I think there's going to be a huge amount of demand as there as we see the demand in CHAPT already and so on. So yeah, I don't actually know that there's overbuilding. But I guess I'm just reacting to like some of the very fast timelines that people continue to say incorrectly. And I've heard many, many times over the course of my 15 years in AI where very reputable people keep getting this wrong all the time. And I think I want us to be properly
Starting point is 01:56:03 calibrated. And I think some of this also, it does have like geopolitical ramifications and things like that when, like, some of these questions, and I think I don't want people to make mistakes on that, on that sphere of things. So I do want us to be grounded in reality of what technology is and isn't, so. Let's talk about education in Eureka and stuff. One thing you could do is start another AI lab and then try to solve those problems. Yeah, you're curious what you're up to now. Yeah. And then, yeah, why not AI research itself? I guess maybe like the way I would put it is I feel some amount of like determinism around the things that AI labs are doing. And I feel like I could help out there, but I don't know that I would like uniquely,
Starting point is 01:56:48 I don't know that I would like uniquely improve it. But I think like my personal big fear is that a lot of the stuff happens on the side of humanity and that humanity gets disempowered by it. And I kind of like, I care not just about all the Dyson spheres that we're going to build and that AI is going to build in a fully autonomous way. I care about what happens to humans. And I want humans to be well off in this future. And I feel like that's where I can a lot more uniquely add value
Starting point is 01:57:14 than like an incremental improvement in the frontier lab. And so I guess I'm most afraid of something maybe like depicted in movies like Wally or idiocracy or something like that where humanity is sort of on the side of this stuff. And I want humans to be much, much better in this future. And so I guess to me, this is kind of like through education that you can actually achieve this. And so what are you working on there?
Starting point is 01:57:38 Oh, yeah. So Eureka is trying to build, I think maybe the easiest way I can describe it is we're trying to build the Starfleet Academy. I don't know if you watch Star Trek. I haven't, but, yeah. Okay, Starfleet Academy is this like elite institution for frontier technology, building spaceships and graduating cadets to be like, you know, the pilots of these spaces, no whatnot. So I just imagine like an elite institution for technical knowledge and basically a kind of school that's very up-to-date and very up-to-date and very, very, very. like a premier institution. A category of questions I have for you is just explaining how one teaches technical
Starting point is 01:58:14 or scientific content well, because you are one of the world masters at it. And then I'm curious both about how you think about it for content you've already put out there on YouTube, but also to the extent it's any different, how you think about it for Eureka. Yeah, yeah. Well, with respect to Eureka, I think one thing that is very fascinating to me about education is, like, I do think educational pretty fundamentally change with AIs on the side. And I think it has to be rewired and changed to some extent. I still think that we're pretty early.
Starting point is 01:58:41 I think there's going to be a lot of people who are going to try to do the obvious things, which is like, oh, have an LLM and ask it questions and do all the basic things that you would do via prompting right now. I think it's helpful, but it still feels to me a bit slop, like slop. I'd like to do it properly, and I think the capability is not there for what I would want. What I'd want is like an actual tutor experience. maybe a prominent example in my mind is I was recently learning Korean and language learning
Starting point is 01:59:07 and I went through a phase where I was learning Korean by myself on the internet I went through a phase where I was actually part of a small class in Korea taking a Korean with a bunch of other people which was really funny but we had a teacher and like 10 people or so taking Korean and then I switched to a one-to-one tutor and
Starting point is 01:59:24 I guess what was fascinating to me is I think I had a really good tutor but I mean just thinking through like what this tutor was doing for me and how incredible that experience was and how high the bar is for like what I actually want to build eventually because I mean she was extremely
Starting point is 01:59:41 so she instantly from a very short conversation understood like where I am as a student what I know and don't know and she was able to like probe exactly like the kinds of questions or things to understand my world model no LLM will do that for you 100% right now not even close right
Starting point is 01:59:55 but a tutor will do that if they're good once she understands she actually like really served me all the things that I needed at my current sliver of capability. I need to be always appropriately challenged. I can't be faced with something too hard or too trivial. And a tutor is really good at serving you just the right stuff. And so basically, I felt like I was the only constraint to learning, like my own.
Starting point is 02:00:15 I was the only constraint. I was always given the perfect information. I'm the only constraint. And I felt good because I'm the only impediment that exists. It's not that I can't find knowledge or that it's not properly explained or et cetera. Like it's just my ability to memorize and so on. And this is what I want for people. How do you automate that?
Starting point is 02:00:31 So a very good question about the current capability you don't. But I do think that with, and that's why I think it's not actually the right time to actually build this kind of an AI tutor. I still think it's a useful product and lots of people will build it. But I still feel like the bar is so high and the capability is not there. But I mean, even today I would say Chachapitin is an extremely valuable educational product. But I think for me it was so fascinating to see how high the bar is. And when I was with her, I almost felt like, there's no way I can build this.
Starting point is 02:01:04 But you are building it, right? Anyone who's had a really good tutor is like, how are you going to build this? So I guess I'm waiting for that capability. I do think that in a lot of ways in the industry, for example, I did some AI consulting for computer vision. A lot of my times, the value that I brought to the company was telling them not to use AI.
Starting point is 02:01:22 It wasn't like I was the AI expert, and they described a problem and I said, don't use AI. This was my value head. And I feel like it's in the same, in education right now, where I kind of feel like, for what I have in mind, it's not yet the time, but the time will come. But for now, I'm building something that looks maybe a bit more conventional, that has a physical and digital component and so on. But I think there's obvious,
Starting point is 02:01:43 it's obvious how this should look like in the future. Do you think you're willing to say it? What is the thing you hope will be released this year or next year? Well, so I'm building the first course, and I want to have a really, really good course. state-of-the-art, obvious state-of-the-art destination you go to learn AI in this case, because that's just what I'm familiar with, so I think it's a really good first product to get to be really good. And so that's what I'm building,
Starting point is 02:02:06 and Nanachad, which you briefly mentioned, is a capstone project of LLM 101N, which is a class that I'm building. So that's a really big piece of it, but now I have to build out a lot of the intermediates, and then I have to actually, like, hire a small team of, you know, TAs and so on, and actually, like, built the entire course.
Starting point is 02:02:22 And maybe one more thing that I would say is, like, many times when people think about, education, they think about sort of like the more, what I would say is like kind of a softer component of like diffusing knowledge or like, but I actually have something very hard and technical in mind. And so in my mind, education is kind of like the very difficult technical like process of building ramps to knowledge. So in my mind, nanocat is a ramp to knowledge because it's a very simple, it's like the super
Starting point is 02:02:47 simplified full stack thing. If you give this artifact to someone and they like look through it, they're learning a ton of stuff. Yeah. And so it's giving you a lot of what I. call urecas per second, which is like understanding per second. That's what I want. Lots of Eurekas per second. And so to me, this is a technical problem of how do we build these ramps to knowledge. And so I always think of Eureka as almost like a, it's not like maybe that different
Starting point is 02:03:10 maybe through some of the frontier labs or some of the work that's going to be going on, because I want to figure out how to build these frontier ramps very efficiently so that people are never stuck. And everything is always not too hard or not too trivial. And you have just right material to actually progress. Yeah, so you're imagining the short term that instead of a tutor being able to probe your understanding, if you have enough self-awareness to be able to probe yourself, you're never going to be stuck. You can find the right answer between talking to the TAA or talking to another one and looking
Starting point is 02:03:43 at the reference implementation. It sounds like automation or AI is actually not a significant even, like, so far, it's actually the big alpha here is your ability to explain AI. codified in the source material of the class, right? That's fundamentally what the course is. I mean, I think you always have to be calibrated to what the capability exists in the industry. And I think a lot of people are going to pursue,
Starting point is 02:04:08 like, oh, just ask Chachapiti, etc. But I think, like, right now, for example, if you go to Chachapitin, you say, oh, teach me AI. There's no way. I mean, it's going to give you some slop, right? Right. Like, when I... AI is never going to write nanochat right now. But Nanot chat is a really useful, I think,
Starting point is 02:04:22 intermediate point. So I still... I'm collaborating with AI to create all this material. So AI is still fundamentally very helpful. Earlier on, I built a CS-231N at Stanford, which was one of the earlier... Actually, sorry, I think it was the first deep learning class
Starting point is 02:04:36 at Stanford, which became very popular. And the difference in building out 231N and LN 101N now is a quest dark, because I feel really empowered by the LMs as they exist right now, but I'm very much in the loop. So they're helping me build little materials. I go much faster. They're doing a lot of the boring stuff, et cetera.
Starting point is 02:04:54 So I feel like I'm developing the course, faster and those LLM infused in it, but it's not yet at a place where I can creatively create the content. I'm still there to do that. So like, I think the trickiness is always calibrating yourself to what exists. And so when you imagine what is available through Eureka in a couple of years, it seems like the big bottleneck is going to be finding Carpathies in field after field who can convert their understanding into these ramps, right? So I think it would change over time. So I think right now it would be hiring faculty to help work. hand in hand with AI and a team of people probably to build a state-of-the-art courses.
Starting point is 02:05:31 Yeah. And then I think over time it can, maybe some of the TAs can actually become AIs, because some of the TAs like, okay, you just take all the course materials, and then I think you could serve a very good like, an automated T.A. Yeah. For the student when they have more basic questions or something like that, right? But I think you'll need faculty for the overall architecture of a course and making sure that it fits. And so I kind of see a progression of how this will evolve. And maybe at some future point, you know, I'm not even that useful in AI is doing most of the design much better than I could. But I still think that that's going to take some time to play out. But are you imagining that like people who have
Starting point is 02:06:05 expertise in other fields are then contributing courses? Or do you feel like it's actually quite essential to the vision that you, given your understanding of how you want to teach, are the one designing the content? Like, I don't know, Sal Khan is like narrating all the videos of Khan Academy. Are you imagining something like that? Or no, I will hire faculty, I think, because there are domains in which I'm not an expert. And I think that's the only way to offer the state-of-the-art experience for the student, ultimately. So, yeah, I do expect that I would hire faculty, but I will probably stick around in AI for some time. But I do have something, I think, more conventional in mind for the current capability, I think, than what people would probably anticipate.
Starting point is 02:06:44 And when I'm building Starfleth Academy, I do probably imagine a physical institution and maybe a tier below that, a digital offering that is not the state-of-the-art experience. you would get when someone comes in physically full-time, and we work through material from start to end and make sure you understand it. That's the physical offering. The digital offering is, yeah, a bunch of stuff on the internet, maybe some L-L-L-M assistant,
Starting point is 02:07:06 and it's a bit more gimmicky in a tier below, but at least it's accessible to, like, 8 billion people. Yeah, I think you're basically inventing college from first principles for the tools that are available today, and then just like for, just like selecting for people who have the motivation and the interest of actually really engaging with material. Yeah, and I think there's going to have to be a lot of not just education, but also re-education,
Starting point is 02:07:32 and I would love to help out there because I think the jobs will probably change quite a bit. And so, for example, today a lot of people are trying to upskill in AI specifically. So I think it's a really good course to teach in this respect. And, yeah, I think the motivation-wise, before AGI, motivation is very simple to solve because people want to make money, and this is how you make money in the industry today. I think post-AGI is a lot more interesting, possibly, because, yeah, if everything is automated and there's nothing to do for anyone, why would anyone go to a school, etc.? So I think, I guess, like, I often say that pre-AGI education is useful. Post-AGI education is fun.
Starting point is 02:08:12 And in a similar way, as people, for example, people go to gym today. But we don't need their physical strength to manipulate heavy objects because we have machines to do that. They still go to gym. Why do they go to gym? Well, because it's fun, it's healthy, it's, and it's, and you look hot when you have a six-pack, I don't know. I guess like, so it's, I guess what I'm saying is it's attractive for people to do that in a certain like very deep psychological, evolutionary sense for humanity. Yeah. And so I kind of think that education will kind of play out in the same way, like you'll go to school, like you go to gym. And you'll, and I think that right now, I think not that many people learn because learning is hard.
Starting point is 02:08:51 You bounce from material because, and some people overcome that. barrier, but for most people, it's hard. But I do think that we should, it's a technical problem to solve. It's a technical problem to do what my tutor did for me when I was learning Korean. I think it's tractable and buildable and so much to build it. And I think it's going to make learning anything like trivial and desirable and people will do it for fun. Because it's trivial. If I had a tutor like that for any arbitrary piece of like knowledge, I think it's going to be so much easier to learn anything. And people will do it. And they'll do it for the same reasons they go to gym. I mean, that sounds different from using,
Starting point is 02:09:24 Using this, supposed to AI, you're using this to basically as entertainment or as like self-betterment. But it sounded like you had a vision also that this education is relevant to keeping humanity in control of AI. I see. And they sound different. And I'm curious, is it like it's entertaining for some people, but then empowerment for some others? How do you think about that? I think this, so I do definitely feel like people will be, I do think like eventually it's a bit of a losing game. If that makes sense.
Starting point is 02:09:51 I do think that it is in long term. Yeah. Long term, which I think is longer than I think maybe most people in the history, it's a losing game. I do think that people can go so far and that we barely scratch the surface of much a person can go. And that's just because people are bouncing off of material that's too easy or too hard. And I actually kind of feel that people will be able to go much further. Like anyone speaks five languages, because why not? Because it's so trivial.
Starting point is 02:10:16 Anyone knows, you know, all the basic curriculum of undergrad, etc. Now that I'm understanding the vision, And that's very interesting. Like, I think it actually has a perfect analog in gym culture. I don't think 100 years ago anybody would be, like, ripped. Like, nobody would have, you know, be able to, like, just spontaneously bench two plays or three plays or something. And it's actually very common now.
Starting point is 02:10:38 And you're, because this idea of systematically training and lifting weights in the gym or systematically training to be able to run a marathon, which is a capability spontaneously you would not have, or most humans would not have. And you're imagining similar things for, learning across many different domains, much more intensely, deeply, faster.
Starting point is 02:10:56 Yeah, exactly. And I kind of feel like I am betting a little bit implicitly on some of the timelessness of human nature. And I think it will be desirable to do all these things. And I think people will look up to it as they have for millennia.
Starting point is 02:11:13 And I think this will continue to be true. And actually, also, maybe there's some evidence of that historically. Because if you look at, for example, aristocrats, or you look at maybe ancient Greece or something like that. Whenever you had little pocket environments that were opposed to AGI in a certain sense, I do feel like people have spent a lot of their time flourishing in a certain way, either physically or cognitively. And so I think I feel okay about
Starting point is 02:11:34 the prospects of that. And I think if this is false and I'm wrong and we end up in like, you know, Wally or Idiocracy future, then I think it's very, I don't even care if there's like Dyson spheres. This is terrible outcome. Yeah. Like I actually really do care about humanity. Like, everyone has to just be superhuman in a certain sense. I guess it's still a world in which that is not enabling us to, it's like the culture world, right? Like, you're not fundamentally going to be able to, like, transform the trajectory of technology or influence decisions
Starting point is 02:12:09 by your own labor or cognition alone. Maybe you can influence decisions because the AI is for approval, but you're not like, it's not because I've, like, I can, because I've invented something or I've like come up with a new design, I'm like really influencing the future. Yeah, maybe. I don't actually think that, I think there will be a transitionary period where we are going to be in the loop and, you know, advance things if we actually understand a lot of stuff.
Starting point is 02:12:32 I do think that long term, that probably goes away, right? But maybe it's going to even become a sport. Like right now you have power lifters who go extreme on this direction. So what is powerlifting in a cognitive era? Yeah. Maybe it's people who are really trying to make Olympics out of knowing stuff. Yeah. Like, and if you have a perfect AI tutor, maybe you can get extremely far.
Starting point is 02:12:54 Yeah. I almost feel like we're just barely, the geniuses of today are barely discussion on the surface of what a human mind can do, I think. Yeah. I love this vision. I also, it's like, I feel like the person who have, like, most product market fit with is, like, me, because, like, my job involves having to learn different subjects every week. and I am like very excited if you can... I'm similar for that matter. I mean, a lot of people, for example, hate school and when I get out of it.
Starting point is 02:13:24 I was actually, I really liked school. I love learning things, et cetera. I wanted to stay in school. I stayed all the way until PhD, and then they wouldn't let me stay longer. So I went to the industry. But I mean, basically, it's roughly speaking, I love learning, even for the sake of learning, but I also love learning because it's a form of empowerment and being useful and productive. I think you also made a point that we started also.
Starting point is 02:13:44 just to spell it out. I think what's happened so far with online courses is that why haven't they already enabled us to enable every single human to know everything? And I think they're just
Starting point is 02:13:56 so motivation-laden because there's not obvious on-ramps and it's like so easy to get stuck. And if you had instead this thing, basically like a really good
Starting point is 02:14:09 human tutor, it would just be such an unlocked from a motivation's perspective. I think so. Yeah. Because it feels bad. to bounce from material. It feels bad. You get a negative reward from a sinking amount of time
Starting point is 02:14:20 in something and this doesn't pan out or like being completely bored because of what you're getting us too easy or too hard. So I think, yeah, I think when you actually do it properly, learning feels good. And I think it's a technical problem to get there. And I think for a while it's going to be AI plus human collab. And at some point maybe it's just AI. Can I ask some questions about teaching well? If you had to like sort of like give advice to another educator in another feel that you're curious about to make the kinds of YouTube tutorials you've made. Maybe it might be especially interesting to talk about domains where you can't just like, you can't test somebody's technical understanding by having them code something up or something.
Starting point is 02:14:58 What advice would you give them? So I think that's a pretty broad topic. I do feel like there's basically, I almost feel like there are 10, 20 tips and tricks that I kind of semi-consciously probably do. But I guess like on a high level, I always try to, I think a lot of this comes from physics background. I really, really did enjoy my physics background. I have a whole rant when I think how everyone should learn physics in early school education, because I think early school education is not about criminaling knowledge or memory for tasks later in the industry. It's about booting up a brain.
Starting point is 02:15:30 And I think physics uniquely boots up the brain the best because some of the things that they get you to do in your brain during physics is extremely valuable later. The idea of building models and abstractions and understanding that there are, there's a first order of approximation that describes most of the system, but then there's a second order, third order, first order terms that may or may not be present. And the idea that you're observing
Starting point is 02:15:49 like a very noisy system, but actually there's like these fundamental frequencies that you can abstract away. Like when a physicist walks into the class and they say, assume there's a spherical cow and dot-da-dot. And everyone laughs at that,
Starting point is 02:16:01 but actually it's brilliant. It's brilliant thinking that's very generalizable across the industry because, yeah, cow can be approximated as a sphere, I guess, in a bunch of ways. There's a really good book, for example, scale.
Starting point is 02:16:11 it's basically from a physicist talking about biology and maybe this is also a book I recommend reading but you can actually get a lot of really interesting approximations and chart scaling loss of animals and you look at their heartbeats and things like that and they actually line up and with the size of the animal and things like that. You can talk about an animal as volume
Starting point is 02:16:30 and you can actually derive a lot of, you can talk about the heat dissipation of that because your heat dissipation grows as the surface area which is growing as square, but your heat creation or generation is growing as a cube. And so I just feel like physicists have all the right cognitive tools to approach problem solving in the world.
Starting point is 02:16:47 So I think because of that training, I always tried to find the first order terms or the second order terms of everything. When I'm observing a system or thing, I have a tangle of a web of ideas or knowledge in my mind. And I'm trying to find what is the thing that actually matters? What is the first order component? How can I simplify it?
Starting point is 02:17:03 How can I have a simple thing that actually shows that thing, right? It shows an action. And then I can tackle on the other terms. Yeah. maybe an example from one of my repos that I think illustrates it well is called micrograd. I don't know if you're familiar with this, but... So micrograd is 100 lines of code that shows back propagation. It can...
Starting point is 02:17:20 You can create neural networks out of simple operations like plus and times, etc., Lego blocks of neural networks. And you build up a computational graph, and you do a forward pass and a backward pass to get the gradients. Now, this is at the heart of all neural network learning. So micrograd is at 100 lines of pre-interpretable Python code, and it can do forward and backward with arbitrary neural networks. but not efficiently.
Starting point is 02:17:40 So micrograd, these 100 lines of Python are everything you need to understand how neural networks train. Everything else is just efficiency. Yeah. Everything else is efficiency. And there's a huge amount of work to do efficiency. You know, you need your tensors,
Starting point is 02:17:52 you lay them out and you stride them. You make sure your kernels orchestrating memory movement correctly, et cetera. It's all just efficiency, roughly speaking. Yeah. But the core intellectual sort of piece of neural network training is micrograd. So hundred lines can easily understand it.
Starting point is 02:18:03 You're chaining. It's a recursive application of chain rule to derive a gradient, which allows you to optimize any arbitrary differential function. So, it's a, I love finding these, like, you know, the smaller terms and serving them on a platter and discovering them. And I feel like education is like the most intellectual interesting thing because you have a tangle of understanding and you're trying to lay it out in a way that creates a ramp where everything only depends on the thing before it. And I find that this like, you know, untangling of knowledge is just so intellectually interesting as a cognitive task.
Starting point is 02:18:36 Yeah. And so I love doing it personally. But I just find I have fascination with. with trying to lay things out in a certain way. Maybe that helps me. It also just makes a learning experience so much more motivated. Your tutorial on the Transformer begins with biograms. Literally like a lookup table from here's the word right now.
Starting point is 02:18:55 Or here's the previous word. Here's the next word. And it's literally just a lookup table. Yes, the essence of it, yeah. I mean, it's such a brilliant way. Like, okay, start with a lookup table and then go to a transformer and then each piece is motivated. Why would you add that?
Starting point is 02:19:07 Why would you add the next thing? You couldn't memorize this sort of attention formula, but just like having an understanding of why every single piece is relevant, what a problem solves. Yeah, yeah. Yeah, you're presenting the pain before you present the solution and how clever is that. And you want to take the student through that progression.
Starting point is 02:19:20 So there's a lot of other small things like that that I think make it nice and engaging and interesting. And, you know, always prompting the student. There's a lot of small things like that I think are important and a lot of good educators will do. Like, how would you solve this? Like, I'm not going to present a solution before you're going to guess.
Starting point is 02:19:38 that would be wasteful. That would be, that's a little bit of a, I don't want to swear, but like it's a, it's a dick move towards you to present you with the solution before I give you a shot to try to, right, to come up with it yourself. Yeah, yeah.
Starting point is 02:19:52 And because if you try to come up with yourself, I guess you get a better understanding of like, what is the action space? Yeah. And then what is the sort of like objective? Then like, why does only this action fulfill that objective, right? Yeah. Well, you have a chance to like try yourself
Starting point is 02:20:06 and you have an appreciation. when I give you the solution. And it maximizes the amount of knowledge per new fact added. That's right, yeah. Why do you think by default, people who are genuine experts in their field are often bad at explaining it to somebody ramping up? Well, it's the curse of knowledge and expertise. This is a real phenomenon, and I actually suffered from it myself as much as I try to not suffer from it. But you take certain things for granted, and you can't put yourself in the shoes of people who are just starting out.
Starting point is 02:20:37 and this is pervasive and happens to me as well. One thing that I actually think is extremely helpful, as an example, someone was trying to show me a paper in biology recently. And I just had instantly so many terrible questions. So what I did was I used chatypte to ask the questions with the paper in context window. And then it worked through some of the simple things. And then I actually shared the thread to the person who shared it, who actually wrote that paper or worked on that work.
Starting point is 02:21:02 And I almost feel like it was like if they can see the dumb questions I had, it might help them explain better in the future or something like that. Because, so for example, for my material, I would love if people shared their dumb conversations with ChachyPT about the stuff that I've created because it really helps me put myself again in the shoes of someone who's starting out. Another trick like that that I just works astoundingly well. If somebody writes a paper or a blog post or an announcement, it is in 100% of cases true that just the narration or the transcription
Starting point is 02:21:38 of how they would explain it to you over lunch is way more not only understandable but actually also more accurate and scientific in the sense that people have a bias to explain things in the most abstract, jargon-filled way possible, and to clear their throat for four paragraphs before they explain the central idea.
Starting point is 02:22:01 But there's something about communicating one-on-one with a person which compels you to just say the thing. Just say the thing. Yeah. Actually, I saw that tweet. I thought it was really good. I shared it with a bunch of people, actually.
Starting point is 02:22:13 I think it was really good. And I noticed this many, many times. Maybe the most prominent example is, I remember back in my PhD days doing research, etc. You read someone's paper, right? And you work to understand what it's doing, et cetera. And then you catch them, you're having beers at the conference later. And you ask them, so, like, this paper, like,
Starting point is 02:22:31 so what were you doing? Like, what is the paper about? And they will just tell you these, like, three cents. that like perfectly captured the essence of that paper and totally give you the idea, and you didn't have to read the paper yet. And like it's only when you're sitting at the table with a beer or something like that and like, oh, yeah, the paper is just, oh, you take this idea, you take that idea, and try this experiment and you try this thing.
Starting point is 02:22:48 And they have a way of just putting it conversationally. Right. And just like perfectly, like, why isn't that the abstract? Exactly. This is coming from the perspective of how somebody who's trying to explain an idea should formulate it better. What is your advice? As a student to other students, where if you don't have a Carpathie who is doing the exposition of an idea, if you're reading a paper from somebody or reading a book, what strategies do you employ to learn material you're interested in in feels you're not an expert in?
Starting point is 02:23:21 I don't actually know that I have unique tips and tricks, to be honest. Basically, it's kind of a painful process. But, you know, like redraft one. I think one thing that has always helped me quite a bit is I had a small tweet about this actually so learning things on demand is pretty nice, learning depth-wise. I do feel like you need a bit of alternation of learning depth-wise on-demand. You're trying to achieve a certain project that you're going to get a reward from
Starting point is 02:23:48 and learning breath-wise, which is just, oh, let's do whatever one-on-one, and here's all the things you might need, which is a lot of school does a lot of breath-wise learning. Like, oh, trust me, you'll need this later, you know, that kind of stuff. Like, okay, I trust you, I'll learn it because I guess I need it. But I love the kind of learning where you'll actually get a reward out of doing something and you're learning on demand. The other thing that I've found is extremely helpful is maybe this is an aspect where education is a bit more selfless because explaining things to people is a beautiful way to learn something more deeply. This happens to me all the time. I think it probably happens to other people too because I realize if I don't really understand something, I can't explain it.
Starting point is 02:24:25 And I'm trying and I'm like, actually, actually I don't understand this. and it's so annoying to come to terms with that. And then you can go back and make sure you understood it. And so it fills these gaps of your understanding. It forces you to come to terms with them and to reconcile them. I love to re-explain and things like that. And I think people should be doing that more as well. I think that forces you to manipulate the knowledge
Starting point is 02:24:46 and make sure that you know what you're talking about when you're explaining it. Oh, yeah. I think that's an excellent note to close on. Yeah. Andre, that was great. Yeah, thank you. Thanks. Good time.
Starting point is 02:24:55 Hey, everybody. I hope you enjoyed that episode. If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it. It's also helpful if you leave a rating or comment on whatever platform you're listening on. If you're interested in sponsoring the podcast, you can reach out at dwarish.com slash advertise. Otherwise, I'll see you on the next one.
