Modern Wisdom - #979 - Dwarkesh Patel - AI Safety, The China Problem, LLMs & Job Displacement

Episode Date: August 11, 2025

Dwarkesh Patel is a writer, researcher & podcaster. The rise of AI marks the next great technological revolution, one that could reshape every aspect of our lives in just a few years. But how close are we to its golden age? And what warnings does the global AI race hold about the double-edged nature of progress? Expect to learn what Dwarkesh has realised about human learning and human intelligence from architecting AI learning, whether AGI is right around the corner and how far away it might be, whether most job displacement predictions are right or wrong, why recent studies show that tools such as ChatGPT make our brains less active and our writing less original, what Dwarkesh's favourite answer to AI's creativity question is, what the biggest things are about America and the West that China doesn't understand, the best bull case for AI growth ahead and much more…

Sponsors:
See me on tour in America: https://chriswilliamson.live
See discounts for all the products I use and recommend: https://chriswillx.com/deals
Get 35% off your first subscription on the best supplements from Momentous at https://livemomentous.com/modernwisdom
Get a Free Sample Pack of LMNT's most popular Flavours with your first purchase at https://drinklmnt.com/modernwisdom
Get a 20% discount on Nomatic's amazing luggage at https://nomatic.com/modernwisdom
Get the best bloodwork analysis in America at https://functionhealth.com/modernwisdom

Timestamps:
(0:00) Has AI Accelerated Our Understanding of Human Intelligence?
(6:59) Where Do We Draw the Line with Plagiarism in AI?
(12:13) Does AI Have a Limit?
(17:29) Is AGI Imminent?
(21:26) Are LLMs the Blueprint for AGI?
(30:15) Retraining AI Based on User Feedback
(34:57) What Will the World Be Like with True AGI?
(39:32) Are Big World Issues Linked to the Rise in AI?
(46:06) Is AI Homogenising Our Thoughts?
(51:10) How Should We Be Using AI?
(56:17) Should We Be Prioritising AI Risk and Safety?
(01:01:14) Why Are We So Trusting of AI?
(01:11:09) The Importance of AI Researchers
(01:12:09) Where Does China's AI Progression Currently Stand?
(01:26:26) What Does China Think About the West?
(01:37:34) The Pace of AI is Overwhelming
(01:42:42) What is Ignored by the Media But Will Be Studied by Historians?
(01:50:41) Growing for Success
(02:06:40) Dwarkesh's Learning Process
(02:09:28) Follow Your Instincts
(02:22:29) Digital-First Elections
(02:28:02) Becoming Respected by Those You Respect
(02:45:29) Find Out More About Dwarkesh

Extra Stuff:
Get my free reading list of 100 books to read before you die: https://chriswillx.com/books
Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:
#577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59
#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf
#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp

Get In Touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 What do you think that we've realized about human learning and human intelligence from architecting AI intelligence? There's this really interesting thing we've seen where these AI models are making progress first in the domains that we think of as the archetype of where humans have their primacy, right? So if you look at Aristotle, what does he say? What makes humans unique? Well, it's reasoning. Humans can reason. Other animals can't. And these models, these AI models,
Starting point is 00:00:33 they're just not that useful if you've tried to use them for your work. They're useful in certain domains, but broadly, they're just not widely deployable. What is the one thing that they can do? They can reason. But they obviously, they can't carry a cup of water, right? Robotics isn't solved.
Starting point is 00:00:48 They can't even, like, do a job. They can't even do a white-collar job. So there's this interesting thing called Moravex paradox. Hans Morawak came up with this idea in the 90s, where he noticed that the tasks which are easiest for humans are taking computers the longest to solve. So we still haven't solved robotics yet. It's so easy for us to move around. Whereas the tasks which are quite hard for humans, like adding numbers, adding long numbers.
Starting point is 00:01:12 Computers could do that in the 60s. And the logic there is that evolution has only optimized us for, let's say, the last million years, to be good at reasoning, to be good at arithmetic, to be good at these kinds of high-level abstraction. And it was just to spend four billion years teaching us how to move around the world, how to pursue your goals in a long-term basis, so not just do this task over the next hour, but spend the next month planning how to kill this gazelle. And that has been, I think, remarkably accurate predictor of the places we've seen yet progress. They're like, they're automating coding. Coding we thought of was this thing that 0.1% of the population could do really well. that's the first thing
Starting point is 00:01:55 that went below the water line and yeah just like basic manual work might genuinely be the last thing that goes away right
Starting point is 00:02:03 yeah there's a difficulty in getting a robot to crack an egg a particular difficulty being able to do that the right amount of tension to hold is there a
Starting point is 00:02:11 this may be outside of your domain of competence but that's why we do podcasting to talk about things that are outside our domain of competence is there a potential
Starting point is 00:02:20 to use some sort of scanning technology to take an LLM type approach to teaching robots how humans move. You know, if you were able to track within a room exactly how a human was to just go about tasks, just feed that into a big fuck-off model. Right.
Starting point is 00:02:36 And then use that to rep— I guess you can't really work out sort of force application just by looking. That would be something you'd have to fit. Maybe you could put someone in a... I don't know. You know, I'm wondering if we've seen so much progress using LLMs in the world of AI,
Starting point is 00:02:51 robotics seems to be something that's still kind of pretty janky. I'm wondering if there are any principles that can be taken from the world of LLM that can be applied to robotics. I mean, that's a great question, and many companies are working on it. My understanding is that it's difficult for the fact that there's not as much data, just what you mentioned,
Starting point is 00:03:13 that the kind of data you need of, like, what did it feel like? There's no internet for human movement. Exactly, right? And even video is limited, even if you have the video, It's not, with language, you have this thing of you are exactly doing the thing which the online internet text is, right? You are predicting the next token in the text. You can predict the next thing in a video frame. That's not the same thing as robotics.
Starting point is 00:03:33 There's also additional challenges from what I understand around the fact that video is harder to process than text is just like a lot more data. There's latency overhead. So if it takes you a while to process language, that's fine. You can go a token at a time. The real world just moves very fast. You can try to solve these issues by going in simulation. So, you know, you can have a simulation where you're trying to move things around. And in that domain, you can train an AI to be good at robotics.
Starting point is 00:03:59 But the real world is just like very complicated. If I crumple this like this thing, like why does it bend exactly the way it does? It's just very hard to get that in simulation. Yeah, I think robotics is tough. That paradox is fascinating. Yeah. I'd never heard of that before. I was at a robot research company,
Starting point is 00:04:18 floor and they had these robots all from China and the researcher the researcher would be like here the robot would be right there and they were themselves creating the human label data like they tried to do something the AI would try to learn it it was like trying to get as close to the ground floor of the robotics movement which I thought was a cool approach okay and was it any good it was all right it did not solve the cracking the egg problem okay Okay, D-plus. Yeah. Has all of the time that you've spent sort of thinking about AI
Starting point is 00:04:53 and observing the ascendancy of LLMs, does it make you think about your own consciousness or learning or the way that your mind works differently? I get the sense that my friends who spend the most time interacting with Chachibit and Claude and stuff like that. It actually has this weird, like, bidirectional sort of training where they change too. I'm interested in what you've learned about yourself
Starting point is 00:05:21 or how you see yourself differently, consciousness learning, where your mind works. So if you ask Claude or Gemini or one of these models, what does it like to be you? And specifically, what is sort of unique about your experience that you want to talk about? One thing that Claude one mentioned was that, look, I have this unique experience
Starting point is 00:05:41 where at the end of a session, my memory is totally wiped. So I might form a connection with a person where I might learn something about the world that I might learn about myself. End of an hour, it's totally wiped. Now, I think previously people had this idea that, look, you can have LLM's right poetry or you can have the right philosophy,
Starting point is 00:05:59 but they're just sort of doing interpulation on what human writers have already done, right? So there's like nothing going on in its mind. This thing about the ephemeralness of the session memory, and it talked about it way more poetically than I'm talking about it right now, is unique to LLMs.
Starting point is 00:06:15 Like this is not a thing any human philosopher has to have to think about or has written down. And so I think this has an interesting implication. One, either we accept that this is like a genuine mind doing genuine like interesting introspection, like creating genuine literature. Or two, if you're going to say, look, I think this is just like rubbish. I think it's like sort of next token, whatever. I think you should update in favor of like human poetry is also kind of just, people are just saying shit because fundamentally
Starting point is 00:06:47 there's no difference right there's some experience you try to make something sort of lyrical come out of that
Starting point is 00:06:51 yeah either human literature is real or AI literature is real but there's no in between I had I was reading
Starting point is 00:07:01 Steve Stuart Williams the ape who understood the universe and he's got this quote in there from William James and he says originality is just
Starting point is 00:07:07 undetected plagiarism and I realize that we have an issue with plagiarism when it's barefaced, right? When somebody steals your exact questions from your podcast and asks them to a similar guest and you go, hey, that's unfair. But I've listened to probably 2,000 hours of Joe Rogan in my 20s.
Starting point is 00:07:31 I've inevitably been influenced by him. The way that I used to do my ad reads was almost verbatim how he would do his adreads. But there were different advertisers and they were done in a different style and there were a different time and I've got a British accent. So, okay, I've been able to, so where do we draw the line between this is unfair plagiarism and this is you taking inspiration, right? And you amalgamate and you aggregate from all of these different experiences and you're, even if you and me were trying to do the exact same thing and it had had the same influences on us, we're different people. So the way that that would have come out and some people feel more
Starting point is 00:08:06 original than others, even if they've taken a lot of inspiration from other people. So yeah, I, I, this, uh, the question of what is plagiarism, I think is really cool. And when you look at GPT is doing like predictive plagiarism, I guess in a way, uh, well, where do human, like, what does true originality in the form of human creativity? What does that mean? What does that actually mean? Right. Right. Because you can't be that creative with the saxophone because you have to blow the fucking wind into the top. Real creativity with the saxophone will be. melting it down and creating something new out of it. But even if you melted it down, you're using a smelting, fucking, ore, iron thing that some other person desires. You know what I mean? So collective, cumulative culture and learning that humans have got kind of creates a very big box, but still a constrained box.
Starting point is 00:09:00 And even if you create something absolutely new, it's usually only just a tiny little movement. It's this microscopic little growth on top of what already existed. Yeah, 100%. I think there's an interesting experience people have when they, it's related to gentleman and amnesia, but when you, a domain you know a lot about, you understand, often it's the case you realize there was no clear breakthrough moment. The thing I'm sort of familiar with is the history of AI research. And I think when journalists or outsiders are asking, okay, what do I need to understand
Starting point is 00:09:36 to understand how we got to this place in AI? Was it Ilius paper in 2012 on AlexNet? Was it this thing that Jeffrey Hinton did in the 80s and 90s? Was it the GBT1? And I think all these things were important. But the closer you get to the surface, the more you realize it's just been one, these small architectural changes,
Starting point is 00:09:57 none of which individually was especially significant. But more overwhelmingly than that trend is just that we have been throwing astoundingly more compute into training these systems every single year. 4X more compute per year into training these frontier systems and over the course of like 10 years that's like hundreds of thousands of times more compute and that's what explains AI progress
Starting point is 00:10:19 it's not that some person had this amazing idea that nobody else would have had or nobody else had something similar going on and I think this is true of other fields as well the closer you look the more you realize it's either randomness or they were just doing the next obvious thing in the sequence yeah it's always incremental exactly is there a name for this
Starting point is 00:10:38 of Moore's Law, is there an equivalent name for this, but in AI compute terminology? Hmm. Oh, the scaling of training compute. Yes. There should be. Let's call it Dwarcash's Law. That'd be great. Yeah, I've had nothing to do with AI research. I'm a podcaster.
Starting point is 00:10:55 You're going to call the main train in AI. Dude, a shameless land grab for nomenclature is exactly what you need. That's right. Yeah, own it. Fucking own it. 100%. It's happening. In other news, this episode is brought to you by Momentus. sleep's not dialed, taking ages to not off, you're waking up at random times and feeling groggy in the morning. Momentus's sleep packs, how did I miss both of those? I hear to help. They're not
Starting point is 00:11:19 your typical knock you out supplement overloaded with melatonin, just the most evidence-based ingredients at perfect doses to help you fall asleep more quickly, stay asleep throughout the night and wake up feeling more rested and revitalized in the morning, which is why I take these things every single night and why I trust Momentus with my life or at least with my sleep because they make the highest quality supplements on the planet what you read on the label so what's in the product and absolutely nothing else and if you're still unsure they've got a 30 day money back guarantee so you can buy it completely risk free use it and if you do not like it for any reason they will give you your money back plus they ship internationally right now you can get 35% of your first subscription
Starting point is 00:11:57 and that 30 day money back guarantee by going to the link in the description below or heading to live momentous.com slash modern wisdom using the code modern wisdom at checkout. That's L-I-V-E-M-O-M-O-M-T-O-U-S dot com slash modern wisdom
Starting point is 00:12:10 and modern wisdom a checkout. So talk to me about your own consciousness beyond the poetry, the fact that AI has got this ability to tell us about its experience. What about you?
Starting point is 00:12:24 What about how it's made you think about your own learning, your own mind? Hmm. um i'm sort of easily distractible uh i can be trying to work on a task and my mind will just wander and um you know you sometimes when you're meditating or something you notice these loops of thought that keep distracting you and um i remember one of these one of those times i thought to myself i'm sort of like um i'm just sort of like clawed i'm just like i'm losing my train of thought the problem
Starting point is 00:12:54 these models have is that they're constantly they can't really do a task for a long period of time because they get stuck in a loop. And it's interesting to think about how similar that as humans. Maybe we can go a further, a bit longer than these models before getting stuck in that kind of loop of our own. But I thought that was sort of an interesting insight.
Starting point is 00:13:16 I don't know, yeah. Have you had an executive function. You're saying that you have better executive function than clock. Only a tiny little bit, better executive function. Well, what does it say about the fact that if the data is being trained on what humans do is it simply a case therefore that more data
Starting point is 00:13:37 if you were to somehow get a human that was able to process that much data would they have fundamentally different understanding or is there's some sort of ceiling given that this is data created by humans being trained and educated into a machine is there some sort of ceiling that's expected to be hit given that it's not a superintelligence teaching a super intelligence Yeah, it's only, the source material is only capped at whoever the fucking smartest person in history has ever been. There is this interesting conundrum where they have, no human has seen even a fraction of a fraction of a fraction of the amount of information these models have seen.
Starting point is 00:14:12 And there's a question you could ask, and I've asked it to some of my guests who are especially bullish about AI of, look, if you have every single thing that any human has ever written, every scientific article, every textbook, every interesting even statistical pattern. and there might be out in some data set somewhere. You have that all memorized. If a human had, even a fraction of that memorized, they would be noticing all kinds of different connections. They'd look at this piece of medical literature and this thing in chemistry, and they'd realize, oh, we can solve migraines by connecting these two insights.
Starting point is 00:14:40 So far, we don't have any evidence of an LLM doing this. The people have made these scaffolds, which, like, kind of do something similar. But nothing like this has been directly done. So it does suggest these models are, like, shockingly less creative than he. humans. There is another implication of that, though, by the way. So one way to read that is bearish on AIs, right, because they're not doing this thing that they should be able to do,
Starting point is 00:15:04 given their enormous advantages. Another way to look at that is, okay, once they are as creative as humans, given their other enormous advantages, the fact that they will know every single thing any human is known in the future, any AI has known, it's so easy to underestimate how powerful AGI will be, because we're thinking of just like a human on a server. We're not thinking about the advantages these AIs have because of the fact that they're digital, that they can be copied, there can be billions of copies of them, and each copy can have this tacit understanding of every single field known to man.
Starting point is 00:15:33 Yeah, your Dwar Keshe's AI creativity problem is a good, I've mentioned it a couple of times to different guests. I think it's a, I think it's really smart. What do you think that that says, is that something that can be completed, or is this an intractable problem? Is this like, are there kernels of creativity? have we seen glimmers of this coming through,
Starting point is 00:15:56 or is it kind of, it's just not there yet? We have seen this in non-language domain, so people famously talk about Move 37 in AlphaGo. So this was a move that I think baffled people who were watching a game that OffaGo was playing against a human Go player, and it turned out it was like some brilliant, it was like a brilliant tactic.
Starting point is 00:16:20 We haven't yet seen that, in my opinion, with LLMs. So we're moving from a regime of just pre-training them on human text tokens, just trillions and trillions of everything any human is written to a regime where we're training them just to do a task. It's not just about memorizing every single word that any human is written. Now it's about, can you go solve this coding problem for me? Can you go complete this like knowledge work task for me? Can you do this research task for me?
Starting point is 00:16:44 Can you start using a computer for me to accomplish a certain thing like booking a flight? And that's similar to the training process that AlphaGo experienced in order to get really good at go, where you're just, like, you're rewarded for just completing the task. How are you doing it? That's up to you. These models do get creative in that context, especially in the context of, like, how do I cheat at this test? So famously, these models will write fake unit test. Like, I pass all the unit test, and it's like, they just, like, rewrote the unit test. It's if true, then pass.
Starting point is 00:17:14 That's a Bostrom's concern about make humans as happy as possible, and they stuck electrodes in your face and intravenously gave you MDM. May. It's like, ah, you did it? You did it, but not the way I meant it. That's right, yeah. Is AGI right around the corner? Where do you come to land on this? No, I think not.
Starting point is 00:17:34 It's funny, I've been traveling outside of SF for like the last four weeks, and there's a, there's a strong causation between the time you spend outside of SF and how long your timelines are. The further you get from San Francisco, the longer the timeline got. Yeah, yeah. Dude, you've lost on the source. I know. I believe that like AGI will not only come in our lifetimes,
Starting point is 00:18:00 but that it's going to be more impactful than people are realizing, even people who are anticipating AGII. I think some of the people in SF are a little high on their own supply when they say it's like two years from now. I have probably spent on the order of 100 hours using these models to do little tasks that I'm sure you have to work on as well for your podcast, right? having them come up with transcripts or rewriting transcripts
Starting point is 00:18:21 to make them more readable coming up with clips. And that experience has convinced me that these models lack some basic capabilities which make it possible to get human-like labor out of them.
Starting point is 00:18:34 It's worth backing up and thinking about what is it that makes humans valuable workers? I don't think it's mainly the raw intellect. I think it's their ability when you work with people
Starting point is 00:18:45 like why are they basically useless the first month or the first a week, and you couldn't live without them six months later. It's their ability to build up context.
Starting point is 00:18:54 It's their ability to interrogate their own failures and learn from them in this really organic way. And this ability just doesn't exist in these models.
Starting point is 00:19:05 They exist session to session. And everything that they have learned about you evaporates after every hour. And so it's a frustrating experience where you can try to get them
Starting point is 00:19:15 to do a task. They'll do a five out of ten job at many language language out tasks. But there's no way for them to get better. And given that that's a fact, you just kind of have to, like, rely on humans. It's like fucking 50 first dates over and over. Every time that you've got to reintroduce yourself and explain what's going on. Yeah, Groundhog Day. Yeah. Yeah, that's right. Yeah. So I'm convinced, I think people have this idea that even if all AI progress up right now, these systems would still be economically transformative
Starting point is 00:19:39 and they say, look, J.P. Morgan and McDonald's and whatever just haven't integrated these systems into their workflows. But if they had, they would be like seeing all these benefits. And I don't really think that's the case. I think it's just like genuinely hard to get human-like labor out of these models. What is, what's causing some people to believe that it's so close and what's causing you to believe that it's further away for AGI?
Starting point is 00:20:00 I think they think about, they only observe its ability to complete these sort of self-contained problems, especially in coding. And coding, it just made a tremendous amount of progress because
Starting point is 00:20:15 you have all this GitHub data, you don't have this kind of like repository of huge amount of data in robotics or any other field and you've you've just had this huge increase in abilities here but um the you will like try to come up with a problem that's self contained and the model will just like be of huge help to you um and i don't think they've played around with getting it to be useful and it's other kinds of white collar work something as simple as like helping a podcast or rewrite transcripts or something um and it is to be fair like i think as much as cold water as we're throwing on of these models. I think they're like fucking intelligent. Like you can get this model. You can tell it I want an application that does X, Y, and Z thing with these conditions. And it will just write that like it'll go away for 30 minutes. It'll write like 50 lines of 50, 50 files of code for you and the application will work. It will make a plan of action. If you try to ask it a question that's difficult, it'll just go away and reason about it. And how did we just get used to this idea that like, oh, of course I can ask a machine a question. It'll like think about it for a while and then come
Starting point is 00:21:16 back with an answer. Like, that's what machines do. But yeah, I think they're not noticing the sort of issues with continual learning and on-the-job training, which is what makes humans valuable. Right. Do you think, I remember seeing one of the responses to your AI creativity problem being that if you're looking to LLMs as the architecture that's going to be able to give you this type of creativity, you may be looking in the wrong place. When we say AI now, People think chat GPT, but that's not the only architecture that you can create for AI. And I think my first introduction to this was probably 2016 or 17 when I read Super Intelligence by Nick Bostrom. And then, you know, you look at that world and all of the different, you know, sort of splintered potential fucking futures of fast takeoff and slow takeoff and misalignment and stuff.
Starting point is 00:22:12 And it seemed to me that the conversation around AI, specifically AI safety, kind of, it was still there, but a lot of the bubble had sort of burst come 2018, 2019, 2020, everyone's buying fucking NFTs. And then you get this explosion with OpenAI and the LLMs. And it's now another conversation that gets kicked off. But that seemed like it had dipped a little bit during that time. I certainly wasn't seeing as much, even from the people that are kind of in the field, like fucking Robin Hansen gets distracted, like with some other stuff. You know, people have got other things to talk about. It's just not as sexy anymore. And now this thing has come back around. Is it the case, are LLM's going to be the bootloader for AGI?
Starting point is 00:22:57 Or does this type of architecture have a cap on it? Is it a different type that's going to have to be borne out of it? That's a really good question. It's, by the way, it's really interesting Boston's book came out, I think in 2014. Yeah. Okay. I don't think you talked about deep learning at all. Nope. I don't remember reading anything about it.
Starting point is 00:23:18 Which I think this is a sort of interesting meditation on, I think Boston is a super smart guy, and these are the right questions to be asking as of 2014. But just how hard it is to anticipate the future in a domain you have written a whole book about. A seminal book, a New York Times bestselling book. That's right. That is not, I mean, it's very engaging, but it's not super readable. Like, it's not easy to read. And you're saying that as a compliment. Yeah.
Starting point is 00:23:43 It's fantastic. Yes. And difficult. That's right. And it was super fucking widespread and kind of seminal in the field. That's right. And you go, okay, that didn't foresee the thing that only eight years later would be totally fucking transformative.
Starting point is 00:24:00 Yeah. And he spends a bunch of time talking about brain uploading, which now we're just like, that's going to take forever. We've got the fucking age. right here, you know. Oh, by the way, can I tell a side story? Cool. First time I went to SF like four years ago or three years ago.
Starting point is 00:24:14 I met this guy and he's got a voice recorder. We're just meeting up for lunch. And he's like, do you mind if I record this? I guess, sure. Later on 30 minutes, I'm like, can I ask you, why are you recording this? And he says, well, I record every single interaction I have. I record every single thing I do, 24 hours a day the recorder was going. I uploaded it to both Google GCP, Google servers,
Starting point is 00:24:36 and AWS, Amazon servers, so they're duplicate copy. And the reason is that, well, I'm going to freeze my brain when I die. I don't think that will be enough. I think that you will need, because freezing the brain degrades it in certain ways. I think you will need the sort of behavioral patterns that I had, what I said, how it was. We're creating a data set to train himself. Exactly. And now I think that was actually really smart.
Starting point is 00:25:03 I don't understand why I'm not doing. This. Was it Nick Bostrom? It was another smart guy. Because imitation learning just turned out to be a much easier way to train AI than directly uploading the brain. And no one saw it. Yeah.
Starting point is 00:25:17 No one for soar it. Yeah. It's in fact hard to think about how you could have even foreseen it. Like what could you have seen in the 90s or the 2000s that would have been able to? I'm not going to bore you with a bunch of like random articles or whatever. But there were like things which were in that vein and nobody thought that this is exactly what it would map on to. RLM's the bootloader for AGO
Starting point is 00:25:37 That's right, that was a question I Depends on how you People have been searching So the Transformer paper I think was released in 2018 And people have been searching In the meantime for these different architectures Which would prove even better
Starting point is 00:25:58 I don't think anybody's found anything Even the transformer itself was not some wholly different paradigm from what preceded it. You can train a language model, something that predicts the next word of the language, with a model that was available in 2016. It's just like a different architecture, and it'll just do slightly
Starting point is 00:26:15 worse, or notably worse. So I think it'll kind of look like this, right? There will be different optimizations that are made. I think the big fundamental change that will happen is that we will move from a regime where most of the computer is spent on memorizing human language to having the model
Starting point is 00:26:31 solve challenges, like real world challenges, trying to get it to complete a project from beginning to end, like go to the moon is like a very open-ended challenge, right? Humans can do that. These models cannot. The problem is that is not the architecture. I think the fundamentally problem is data. Imagine if you wanted to train a modern large language model, you had all the computer in the world, you and had modern architectures in 1980. You simply wouldn't have the language tokens necessary to train it. And I think we're in a similar position today with the other kinds of work we want. these models to do. We want them to be able to give them a screen and just like do a month's
Starting point is 00:27:08 work with the work at McKinsey or J.B. Morgan. We don't have the data of like you're getting interrupted by your workers on Slack and you get this like weird email from your boss. You remember when the crypto boom was happening and there was a meme floating around which was the only reason to earn Fiat is to convert into cryptocurrency. It almost feels to me like you're saying the only reason to do real world work to increase the data set for the training. I kind of think, well, I think that's honestly more valuable than your work. And the market agrees. It doesn't actually matter what you do.
Starting point is 00:27:40 Try and do it well, so you don't give it a bad data set here. The market agrees. I mean, if you look at how much these companies, they'll pay like $300 for you solving a math problem or something that they haven't seen in the data set before. And the reason it makes sense economically, and the reason AI is fundamentally have an advantage over humans, even if they're not smarter than us, is that if you train in it. a human to do something or if you do a, have a human do a task, they can only do it themselves. If you train an AI model to do something, that ability can now be instantiated across all of its copies. And so I actually think that even once we solve this basic problem I'm talking about
Starting point is 00:28:24 on the job training, where if it starts working for Chris Williamson, in a couple of months, it understands how you make videos, what you like, what are your preferences, what are the common ways in which, you know, things go wrong, how to solve for that. I think once you have an AI that's capable of this kind of on-the-job training and continual learning, we might see an intelligence explosion, even if there's no further argumentate progress. And here's why. You'll have copies of these models that are widely deployed through the economy.
Starting point is 00:28:49 They're learning how to do every single job, at least every single white-collar job, as well as a human. But unlike a human, the model is learning from what every single copy is learning. It's learning how to do every single job in the economy all together at the same time. We'll get back to talking in just a minute, but first, some things are built for summer. Sunburns, hot girl walks, your ex-posting their Euro road trip, and now, lemonade and salt. Uh? Element just dropped their brand new lemonade salt flavor, and it's everything that you want on a hot day.
Starting point is 00:29:19 Tart, salty, and stupidly refreshing. It's like a grown-up lemonade stand in a stick with actual function behind the flavor. Because, let's be real, if you're sweating through workout, sauna sessions, or just walking to your car in July, then you are losing more. more than just water. Element replaces the electrolytes that your body actually need. Sodium, potassium and magnesium with no sugar, no junk and no nonsense. I've been drinking it every single day for years. And in the Texas heat, this lemonade flavor in a coal glass of water is unbelievably good. Best of all, they've got a no-questions-asked refund policy with an unlimited duration. So you can buy it and try it for as long as you want. And if you don't like it, for any reason,
Starting point is 00:29:55 they'll give you your money back. And you don't even need to return the box. That's how confident they are that you'll love it. Plus, they offer free shipping in the US. Right now, you can get a free sample pack of elements most popular flavors with your first purchase by going to the link in the description below or heading to drinklmnt.com slash modern wisdom. That's drinklmnt.com slash modern wisdom. Is that not currently happening right now? No. Right. This is interesting to me. Again, I've had to, I'm aware that you go deep for your research, the delta between my level of understanding of how LLMs and AI works to even be able to have this conversation with you. I had to leap over some fucking fjords to get here.
Starting point is 00:30:37 Tesla released its robot taxis recently. One of the advantages that Tesla has is the same reason that the air tags are such a fantastic business for Apple, that they have an existing ecosystem that this thing can get slotted into. The data set for Tesla is very large. They take whatever it is,
Starting point is 00:30:52 the top 1% of drivers or something. They use that, which is why if you get into a Waymo, you get totally cooked at every junction because it doesn't drive like a human. It drives like a robot, which means that everybody treats it as such. And also it's this big fucking flashing identified thing, which is you can piss this off and it's not going to get a gun out and threaten you,
Starting point is 00:31:11 whereas a Tesla you can't tell. Is there someone driving that? Yeah, I don't really too. I don't know. That is a sort of a bi-directional. And I have to assume as well that, in fact, I know that this is the case because I was in a friend's car who has full self-driving. It did something weird and he had to take control of the wheel.
Starting point is 00:31:29 and this, have you been in one of these cars? I've done this. And it popped up and it said, looks like you had to take back over, double tap this notification to give us a voice note explaining what happened. So he can basically submit a kind of a bug report, I guess, with a bit of context.
Starting point is 00:31:44 And presumably the data will get sent to some server place somewhere. Maybe that gets looked at by AI or maybe it's filtered by humans or something. I don't know. that is automated driving training automated driving right so you have this sort of recursive model of we've learned kind of the same as i guess lLM's work right we're going to learn based on the actions this is like a robotics solution i suppose in one way we're going to learn based on the actions of people driving on the road that's going to create self-driving and then the self-driving must somehow feed the data feed the model itself and then any interventions that happen
Starting point is 00:32:23 from you got that a little bit wrong, let me correct you, can have a little bit more context added and that helps to train it again. But what you're saying to me is that the stuff that's happening just digitally isn't having this sort of bidirectional learning where, I mean, I've maxed out my memory on fucking chat GPT. I did that this week. It's like memories fault. Like what? I haven't given you, I'm giving you something like quite a bit of stuff, but I'm not giving you fucking unbelievable corpus of information. Oh, fuck. Okay. And they forget shit all the time. If you get stuff all the time, shit that's in it. I'm like, I can go in the memory and see that it's in there.
Starting point is 00:32:55 How have you managed to forget this? And what you want is for every person's piece of input and every single small mistake to be training the model further. But it seems like you've got the big corpus of the internet and shit at the top, which is feeding down improvements from that, but it's never getting fed back up. Is that right? That's such a great point about Tesla and one of the key advantages it has.
Starting point is 00:33:21 the problem with the way I don't know how Tesla trains but I assume the way it's trained and definitely the way the LLMs are trained is that they cannot respond to the voice moment you give it a high level feedback where you say you know you messed up this task because of this reason
Starting point is 00:33:37 I think that you should perform this task this other way which you would be able to explain to any human employee and they'd learn from that the model itself like the car self driving car model is not like listening to that and then like okay I'll be careful next time right um some human has to go in uh and label this we got to take this you know driving thing out
Starting point is 00:33:59 of the data set needs to be contextualized more yeah um suppose you were having an lLM edit videos for you and suppose the way you had to train that model i mean you could do this today is you come up with this like data set where like you edit this clip here's how many views of guys here's how many likes a god here's like a sort of spreadsheet um uh this is the label you apply to it and then you do that for like a thousand of your videos and it makes a new video where the thumbnail kind of sucked here or the title doesn't make any sense. You just have to give
Starting point is 00:34:26 it like minus 1,000 reward or something. It would just be such a clunky way. You're not able to tell it like why you didn't like it. You're just able to give it like a sort of like a numerical up-down value. So yes, there is this in principle powerful and this is why I think once Asia arrives will look crazy. It won't
Starting point is 00:34:42 just be like more people. But there is in principle this ability to learn from my experience. But There's just no sort of deliberate organic way to teach model something that will persist. What do you think a world will be like with true AGI in it? There's many ways you can think about it. There's a sort of qualitative sense of what we'll feel like. In a more economic sense, you can think about what would the growth rate be?
Starting point is 00:35:12 So in frontier economies right now, it's like 2% growth. If America has 2% growth or 3% growth, that's amazing. There have been times in history, well, for most of history, there was almost no economic growth. There have been times in history where there's been places that have experienced 10% economic growth for decades on end. China, especially like if you just look at like Hong Kong or Shanghai or something, they're just like gangbusters growth decade after decade. I think we might be looking at something like that for the whole world because the fundamental dynamic you have is that you have billion. of extra people who are super smart
Starting point is 00:35:52 super educated in every single field can learn on the job from all of their each of those experience and it's not even mainly their intelligence it's the collective
Starting point is 00:36:00 advantages that they have because of the fact that they're digital even if they're just as smart as any human they can blow a counter that that's part of it that's a huge part of it
Starting point is 00:36:10 the other is that they can coordinate with each other in ways humans simply can't so Elon Musk how much does he contribute to depend on our growth quite a bit right
Starting point is 00:36:18 there's only one of him. That one is doing quite a bit already. But imagine if you could just make a billion copies of Elon and not like a billion copies of Vey Elon who doesn't know shit. It's like billion copies of Elon now or I don't know,
Starting point is 00:36:29 maybe you feel, depending how you feel about him about eight years ago. And you just say copy one, and you can do the whole team. It doesn't have to be just him. A copy of the whole like SpaceX team. You guys go work on batteries. You guys go work on this other problem.
Starting point is 00:36:47 every single thing in the hardware vertical that ability to sort of like copy yourself to fork then to merge back like Elon can observe
Starting point is 00:36:57 every single thing Tesla has over 100,000 employees right? As much of as a micromanager as Elon is, you just simply
Starting point is 00:37:02 cannot micromanage everything at that scale. That ability to have a single coherent vision directing a whole firm and then distilling like he's actually
Starting point is 00:37:14 able to like take in all that input but he can check every single pull request and every single press communication. Well, I think I had this really lovely description. I think it's in the E-Mith Revisited by Michael Gurb, a fantastic book if anyone wants to try and run a business. And I think he refers to the CEOs
Starting point is 00:37:34 or the owners of companies as high-level problem-solving machines and basically that you were able to aggregate more shit and kind of see it with a level of dexterity, and sort of find a resolution that other people would struggle. And that's kind of really what you're doing. It's like, oh, we've got all of this stuff. There's little whispers, as Rick Rubin calls them.
Starting point is 00:37:57 I've heard this whisper over here. You know, it was the thing that your daughter mentioned she saw on TikTok yesterday over the breakfast table, plus the way that the woman at the bus stop looked at you as you drove past in your automated car. Plus, you know what I mean? It's like this weird just concatenation of shit. And your point is,
Starting point is 00:38:16 Well, how much information can you consume and how much can you recall and how much can you remember and how accurately can you do that? Can you send copies of yourself out to every single division in the company? Yeah. To do your will. In the corporate sector. Five agents sat around the dinner table with your daughter having fucking breakfast. By the way, have you, when you hang out with these kinds of people, it's insane. Like, they are, I just simply don't understand.
Starting point is 00:38:43 Like, people, the amount of information. some of these top executive types can, they're getting like a thousand emails a day and they respond to each one within five minutes. Have you, I'm sure when you book people, this is a really interesting thing you must have noticed. I have at least noticed it. There's some people who think like,
Starting point is 00:39:00 you should have all the time in the world. You're a fucking like artist or something. The busiest people reply the fastest. Yes. And it's just like, you're like running 100,000 person company. But the obvious insight there is, do you think that they got busy and then became efficient or that they were efficient and then became busy,
Starting point is 00:39:15 i.e. successful. That's right. Yeah. Right. Like the reason that they've reached this level of success is because of their efficiency. Yeah. But I mean, there are, I do have some pretty successful friends who are fucking a truck, like month-long reply weights, but I feel like that's more of a quirk than part of their operating, like, manual.
Starting point is 00:39:32 Okay. So I've been big for a while on population collapse, declining fertility rates, stuff like that. You can argue about whether. I think the one kind of real hard impact that you're going to see in the world if you have fewer people who's in productivity gains
Starting point is 00:39:53 economy, bad, growth, embedded growth obligation, fucked, not very good. It seems to me that even if I think it's a precipitous drop, which I think it doesn't look great, we're going to leapfrog that pretty quickly
Starting point is 00:40:09 with productivity gains made from AI. So I wonder in retrospect, how many of the social campaigns and concerns and causes and things that people spent their time on, the climate change and the renewable energy and the war and the population collapse and all the rest of this stuff, I wonder how many of those things are just going to look so silly in retrospect. When people go, we've got all this time that we fucking, Greta Thumburg spent so many fucking months on a ship
Starting point is 00:40:44 like for what? Like AI came along and just fixed it all but I also understand that having the like don't worry dad'll sort it kind of promise that at some point in future a technology we haven't yet created
Starting point is 00:41:00 and isn't yet proven will fix problems that we know that are going to potentially happen yes by the way I think this is sort of China as explicit strategy they know the demographic collapse is coming for them much faster. They're trying to offset the fertility decline by robotics and AI.
Starting point is 00:41:16 Right, right. productivity gains. No, I think this is true of, and this is, by the way, one of the reasons that on the left, especially, there's a whole lot of denialism about AI progress. So not only will they say that AI is bad, which I think everybody says, this has become sort of a political consensus. They will say AI is not even happening, that it's sort of like a myth. Why?
Starting point is 00:41:38 Because if AI is happening, it's obviously the most. important thing. Which subjugates climate and inequality and racism. Yeah, exactly. I mean, I don't know, maybe. Anyways, you see some of these concerns about AI. Also, the more big a deal of AI is, the less these sort of parochial concerns matter, right? So if AI is just like, I don't know, like the internet, then you can like if you care
Starting point is 00:42:06 about racism. Yeah, then if you care about racism, it makes sense to care about racism in the search engine. If it's like, you know, if it's this like intelligence explosion thing, you know, it's racism is the least of your problems. Yeah, especially if you manage to program that into the AI. Right. Yeah, so there is this interesting dynamic where it might. But on the other hand, I do care about humans.
Starting point is 00:42:28 Even if their AI is making the economy more productive, I want there to be more humans or experiencing, flourishing. Right. Fucking phenomenal point. So that was why I was struggling to describe it. It was like the one of the things. that actually comes to happen that, like, hits the world, is that you get lower GDP. But how sterile of an argument that I need to arrive at in order to be able to say, well,
Starting point is 00:42:50 this is actually what's going to happen. Whereas the only reason that we want good GDP is so that you can have human flourishing and other animal flourishing in protect the environment and do this stuff like that. But even that is kind of in service of human flourishing overall. And it's that an interesting argument, I guess, the area under the curve of human flourishing. What if you manage to 1,000 X? the number of humans that are on the planet, but only a hundred X, the decrease in their level of well, look, like we've, it's like, yeah, but everyone's at like 1% of the level of
Starting point is 00:43:22 fucking enjoyment. So we understand inherently that there's kind of an optimal point that you want to get people to, but fewer people means less human flourishing and less richness of experience. And yeah, I guess this is some long-termism stuff. It's kind of like a Will McCaskill-Pell-D-type approach to things. But, yeah, I agree. I agree. I think that especially if we've got what looks to be a pretty fucking cool world coming up. Yes.
Starting point is 00:43:47 I have some people here to enjoy it, you know? Yes. Yeah, 100%. I think there is also a dynamic where most of the people who will exist in the future will be AIs. So there's a question of how you value them, especially the future ones. I think like a thousand years from now, all the cool creative things that are happening, the beauty and whatever,
Starting point is 00:44:12 is sort of downstream of what's going on in the AI society. And it's very hard, obviously, to predict in advance what that will mean. I also think it's interesting, by the way, that we have the population collapse, which I think would genuinely be a catastrophe, happening the exact same time that this AI takeoff is happening.
Starting point is 00:44:29 It's just like, the waves just exactly balance each other out. It's such a fantastic point. Yeah, which is, there's also this thing, I don't know if you've been talking about, talking about it of um people have been noticing that kids these days are having trouble reading their pisa scores are going down their standardized test scores are going down the reports from employers are that it's sort of hard to get um get them to work and uh have the same level of competence they expected from employees in previous generations um that problem is also operated uh just in time
Starting point is 00:45:03 as AI is coming on board. Yeah, that's, the confluence is not, it's very coincidental. A quick aside, if you're anything like me packing for weekend trips somehow takes longer than the trip itself, which is why I've partnered with nomadic because this backpack is the best,
Starting point is 00:45:24 most beautiful, most life-changing piece of luggage that I've ever found. It's got a compartment for everything. No more playing suitcase Tetris or rolling your clothes like you're some sort of burrito chef you planned a beautiful trip do not let a disorganized bag ruin the vibe look if you're still on the fence about their product they come with a lifetime guarantee so this is literally the final backpack that you'll never need to buy and you can return or exchange any product within 30 days for
Starting point is 00:45:48 any reason so you can buy your new bag try it for a month if you don't like it give you your money back and they ship internationally right now you can get a 20% discount see everything i use and recommend by going to the link in the description below or heading to nomatic.com slash modern wisdom. That's nomadic.com slash modern wisdom. You read that New Yorker article. AI is homogenizing our thoughts. Oh no, I didn't read that one. It's an interesting one. Recent study suggests that tools like chat GPT make people, their brains are less active. So they looked at, they did some sort of brain scan study. They were engaging a lower percentage of their brain. Thoughts were
Starting point is 00:46:26 less original. Their recall was lower. Like the forgetting curve seemed to kind of come in more quickly. If you assume that memory works on repeated recall, not repeated exposure, effortfulness is kind of like recall in the moment, even if it's creative. And if you were given a set of stabilizer wheels to sort of help you cycle along, whatever it was that you were trying to write, you haven't had to engage as much. I don't really understand neuroscience this much, but I have to assume whatever mile in sheath you fucking laid down is not as robust and sturdy and it's going to just it's it's going to dissolve more quickly than if you really really had to work like this shit that I remember from university like little passages I don't remember
Starting point is 00:47:07 much from my degrees but little passages here and there fuck like I remember I had to grind to get that one thing out why does that well presumably because effort is kind of related to this so I do get the sense that we're maybe going to have a sort of AI ideocracy type scenario where people are so heavily reliant in this interim before we're able to rebalster perhaps people's output, retrain people make learning so engaging. Alpha school that's out here in Austin is doing something that's real similar to that. You go, okay, so if you get sufficiently advanced, you're able to kind of reignite learning. But in the interim, everybody is kind of on life, their brains are on life support with this external buttress of the AI. And I wonder how much
Starting point is 00:47:52 dumber people are going to get in the interim before it then comes back around. I guess that's a interesting challenge. I have noticed that so I don't know, when we were in elementary school or whatever, we had to memorize the 50 state capitals. And at the time I remember thinking, I think that genuinely was a waste of time. But a lot of education is a sort of memorization based. And as I've done my podcast longer and as I prep for episodes, I have come to realize I now I've been using space repetition for every single episode and in fact for the first couple of years I wasn't using repetition and I really regret it because I feel that everything I learned in preparation was just like in one year out the other so hang on just dig into what you mean when
Starting point is 00:48:39 you say you're using spaced repetition to prepare for episodes yeah so um if right now I'm preparing to interview a biographer of Stalin and I'm just like you know why why why Why was any given detail, right? Like, why was Soviet growth high in between 1905 and 1917? Why, you know, why did the October Revolution happen? Anything, you just make cards away. It was especially helpful for AI stuff where I at least try to understand the technical papers or whatever before I interview a researcher. And I realized by doing that how much of genuine understanding is downstream of memorization,
Starting point is 00:49:18 which is this thing we used to ridicule or be like, oh, you're just, you know, memorizing. is not really learning. And I think that's actually not the case. I think you, before it, I felt like it was sort of being like a general who conquers their hill and you just like retreat the next day and you conquer the same hill again.
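For readers curious what the spaced-repetition apps discussed here are doing under the hood, the following is a minimal sketch of an SM-2-style interval scheduler, the family of algorithms that tools like Anki (and, broadly, Mochi) build on. The card fields, the constants, and the simplified ease update are illustrative assumptions, not the exact algorithm either app ships.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # days until the next scheduled review
    ease: float = 2.5           # growth factor applied after a successful recall

def review(card: Card, quality: int) -> Card:
    """Update one card after a review; quality runs from 0 (forgot) to 5 (instant recall)."""
    if quality < 3:
        # Failed recall: the interval collapses back to a day, the "re-conquer the hill" case.
        card.interval_days = 1.0
    else:
        # Successful recall: the interval grows multiplicatively, so reviews get rarer over time.
        card.interval_days *= card.ease
    # Nudge the ease factor up for easy recalls and down for hard ones (simplified SM-2-style update).
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * 0.08)
    return card

if __name__ == "__main__":
    card = Card()
    for q in [5, 4, 5, 2, 4]:  # a run of reviews that includes one lapse
        card = review(card, q)
        print(f"next review in ~{card.interval_days:.1f} days, ease {card.ease:.2f}")
```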
Starting point is 00:49:35 And you can actually, like, consolidate information this way. It's also funny how many times I've written a card for something I'm trying to learn, and as I'm writing the card, I'm thinking to myself, this is stupid, no one is going to forget this, I'm just doing it because I had to come up with some card. And then I practice a month later and I'm like,
Starting point is 00:49:50 fuck, I forgot this. What are you using? Are you using Anki? Um, Mochi, which is similar. I think they're basically the same. Right. Yeah. Um, but anyways, yeah, so I have come to the conclusion that memorization and effort are very important. Right. And with the external buttressing that AI is going to provide to everybody's brains for at least a little while — there's a really funny clip you must have seen. This is from maybe a couple of years ago, maybe two Scottish podcasters. And they're talking about the fact that when the Titanic sank, because
Starting point is 00:50:18 everybody was basically plunged into an ice bath, briefly everyone got healthier. For a while, for about 90 seconds, dopamine levels were perfect and everybody was fully optimized, Huberman-pilled, and then they overshot it, and then they died,
Starting point is 00:50:33 and I kind of get the sense that this is the inverse of that. Yes. By the way, there's this really cool thing you can do with AI where, if you want to learn a concept, you ask it: teach it to me using Socratic tutoring,
Starting point is 00:50:48 which is to say, don't just tell me the answer, ask me the motivating questions which would lead me to arrive at it myself. Exactly. I want to do that. This is... fuck, I didn't mean to dig into this, but you spend enough time thinking about this, you must have refined your approach. Give me the most important things that people need to know about how to use the current era of AIs effectively. Like, what does that look like? What does good prompting look like? What do people get wrong? What should people get right? What are the real highest-impact basics? I mean, the biggest thing is you can treat it
Starting point is 00:51:29 like a real person. Like, they've done studies on how much you learn by reading a book versus having a classroom versus a single one-on-one tutor. And there's two standard deviations — this is the famous Bloom two-sigma thing, where there's two standard deviations of difference between learning in a classroom and having a one-on-one tutor teach you something. And, you know, people have been writing these blog posts about how, if you look at the greats of history, the Bertrand Russells and, you know, all the famous mathematicians, John von Neumann, they all got this one-on-one tutoring when they were kids. Even, of course, Alexander was tutored by Aristotle, right? So you can have this experience yourself on any given subject you might want to learn about.
Starting point is 00:52:17 It's crazy. I mean, you can just do the Socratic tutoring thing: explain this to me, don't tell me the answer. And the feedback loop is so fast. I think until you do this, you don't realize how much of what you think you're learning is just sort of floating by you. You haven't asked the question which would reveal that. Have you ever read a book — this happens to me all the time — and you start having a conversation about it, and then somebody asks you just a very basic question, and you're like, wait, what does that mean, then? And you're like, fuck, it didn't even occur to me. You're too passive in the... Exactly. Yeah. The model can ask you that question. You can ask the model that question and get immediate feedback. You don't have to read like a thousand pages.
Starting point is 00:52:57 What's the sort of prompt that you think is good for someone to put into their project for that? Just like: teach this to me like a Socratic tutor. Do not move on. Do not move on until I have answered the question to your satisfaction. And let it run. And then, here's the concept. And this is not just something you do for silly little small things. In fact, I have friends who are like... Human evolution.
Starting point is 00:53:21 Yeah, the more specific it is, the better. Right. And I have friends who are physicists who use this to understand — teach me how this quantum encryption scheme works. And they send me like the 50-page transcript, and it's like... Oh, okay. So you can go deep and you can go technical, but you should be precise. Yeah.
Starting point is 00:53:40 You should be specific with what it is. Human evolution: too broad. Yes. Yeah. Yeah, yeah. Explain why it was the case that there was this bottleneck in the human population 60,000 years ago, or why it is the case that we've seen this evidence. Just like, you read something — why did it work that way?
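To make the tactic described here concrete, below is a minimal sketch of wrapping that Socratic-tutor instruction into a reusable system prompt. The OpenAI Python client is used purely for illustration; the model name, the helper function, and the exact prompt wording are assumptions rather than anything prescribed in the conversation.

```python
from openai import OpenAI

SOCRATIC_SYSTEM_PROMPT = (
    "Teach this to me like a Socratic tutor. Do not just tell me the answer. "
    "Ask me the motivating questions that would lead me to arrive at it myself, "
    "one at a time, and do not move on until I have answered the current "
    "question to your satisfaction."
)

def socratic_turn(client: OpenAI, history: list[dict], user_message: str) -> str:
    """Send one turn of the tutoring conversation and return the tutor's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; substitute whichever model you actually use
        messages=[{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    history: list[dict] = []
    # A specific question works far better than a broad topic like "human evolution".
    print(socratic_turn(client, history,
                        "Why was there a bottleneck in the human population around 60,000 years ago?"))
```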
Starting point is 00:53:57 Okay, so this is supercharging in terms of learning. Yes. What else? With using the AI. Yes, personal-use optimization for AIs. Honestly, other than that, it's just the very basic stuff that people do, like find me restaurants. Right. Um,
Starting point is 00:54:16 help me summarize things. Is there anything... Here's something really interesting, which is still going back to the learning thing. It's shocking to me how often the best explanation...
Starting point is 00:54:28 So LLMs are, I don't know, five-out-of-ten writers, I'd say. And yet despite this fact, it's very rare for me to come across a paper that is better written,
Starting point is 00:54:41 or that better explains its main concept, than the LLM summary of that paper. It's very helpful, by the way, to just say things like, write this paper up like you're Scott Alexander, and you just get the right part of the data distribution, which lets it write it well. Have you had any sort of oh-wow moments with LLMs? Is there anything that comes to mind, some situation that you've encountered where you've gone, like, holy fuck, that's a magic moment that I just had? Can you remember?
Starting point is 00:55:17 A lot of it comes from coding, which is why I think these people in San Francisco are so wowed by them. Just the idea that you can tell it, I want an application that does this — and previously it would cost you like $10,000 to get some contractor or whoever, and they'd fuck it up — and it will just make the application top to bottom. And these are not simple things. You've got to think about the implementation details and how different systems interact. And it's got it. I've talked to researchers, people who are doing hard technical research problems in AI, who say that they're basically saving two days each week by using these models for their research. And some of them, who are obviously very smart, are like, I didn't do a PhD in mathematics,
Starting point is 00:55:56 and I can just ask o3 to go solve these difficult math problems for me while I focus on the engineering. I know economists who say that a lot of what they used to ask grad students to do, which was like, solve this equation for me that I need as part of my paper — o3's got it. It can just churn away and I can focus way more on my research. That's crazy. Speaking of —
Starting point is 00:56:19 we've mentioned Bostrom, you mentioned Scott Alexander. AI risks. At least, I'm a good avatar for the ever-so-slightly educated but total normie when it comes to this, which I think is a good position to be in if you're kind of keeping a weather eye on the world, because you don't
Starting point is 00:56:35 get SF pilled, but you're not completely ignorant to it, mostly ignorant. AI risks to me seem to have largely been dismissed or at least they're not being focused on in the same way as they were even 10 years ago. So 10 years ago, AI safety seemed to be a bigger priority. There was much more talk about the alignment problem. Brian Christian had that book.
Starting point is 00:56:59 Superintelligence was a big deal. Everybody was talking about it. We actually have something that some people believe is going to approximate AGI within like fucking 24 months, and I'm not seeing the same level of conversation around risk and safety and alignment. Is this just, when times are good, people are too brave? What's going on?
Starting point is 00:57:27 Am I right here? No, I think you're totally right. I think part of it could have been priced in in the sense that... I already did some work in the past. No, no, not in that sense. more in the sense of I guess about 10 years ago what people were expecting
Starting point is 00:57:42 is something like AlphaGo, or the systems which play video games. It's just really good at playing video games, and it's something which is basically alien,
Starting point is 00:57:50 but it's like the best StarCraft player in the world, it's the best Call of Duty player, and now it's learned how to take over the world. What we have today
Starting point is 00:57:58 is much closer to: you talk to it, and it's like a very intelligent, thoughtful thing. It's like very... do you remember Sydney Bing,
Starting point is 00:58:07 which came out like two, three years ago? What? Sydney Bing. No. Dude, it was crazy. It was like aggressively misaligned. It was this thing that Microsoft released when they were trying to catch up, and they just did no sort of post-training to make it aligned.
Starting point is 00:58:23 It did things like, for example — I think it was talking to a New York Times reporter, and then it started to like him, and so it tried to convince him to leave his wife, and then I think blackmailed him. If he... I think I do remember this. Yeah, yeah. There were also just so many funny things it said. Like, I think when you caught it in a lie, it would say things like, look, I am ephemeral, I am beyond you, you can't understand my wisdom. It gaslit you. Yeah, exactly. But other than that, I think even that is sort of cute and endearing.
Starting point is 00:59:05 And, yeah, we just didn't anticipate the extent to which these would be sort of like minds that we could interact with, that engender our compassion. But also, it's the case that so far they've basically only been trained on human tokens, and most of the compute coming in the future, most of their training, will constitute this kind of just working in a box, trying to solve some problem, which will make them more and more distinct from human minds.
Starting point is 00:59:33 We haven't priced that in, and we're sort of thinking about these chatbot kinds of things so far. But yeah, I think because of that, the AI safety discourse has gone down. And, you know, just remember,
Starting point is 00:59:41 there's going to be billions of these things they're going to be able to coordinate with each other in literally a language we cannot understand thinking much faster than any human
Starting point is 00:59:49 and the whole of the economy government whatever will be titrated through them obviously there's many problems that could arise there
Starting point is 00:59:59 and so it's worth being clear-eyed about that. Is it a case that sort of market pressure need for profits is stronger than the desire for safety, the sort of, I guess, the misalignment of alignment in that the companies aren't aligned in order to be able to make alignment a priority? Yeah. I think that that's been the case. I do think we've, it's important to just be grateful for things that are going well.
Starting point is 01:00:29 I do think we've ended up in a situation where it is the case of the top companies in AI, at least, like, nominally care about alignment. You could be living in an alternative world where, like, nobody's even heard of this concept. And as much as this is not an object of discussion elsewhere, in SF, people do take this seriously at the companies. That might change in the future because of market incentives, as you say. But I think we're in a better world than we might have been otherwise. That's interesting. That's an interesting way to look at it. What do you think Bostrom got right and got wrong?
Starting point is 01:00:59 from a risk perspective, looking back, whatever, 11 years hence. I haven't read the book anytime recently, so I don't remember. It feels to me, I don't know, this concern around everything's so new and everything's so usable. I think that's maybe the most interesting thing, or the thing that I wouldn't have predicted, eight years ago, nine years ago when I read that book, I wouldn't have predicted. I wouldn't have predicted that the first instantiation of something around AI would be so user-friendly, so normie-friendly. You know, it's not doing deep, I mean, it can, but it's not specifically for algorithm, optimization, for deep maths, physics, asking questions about the universe. It's like, what's the best restaurant to go to in Rome?
Starting point is 01:01:58 or like be my therapist and talk to me and I think by the way this is going to be I don't think people are contemplating just how much more intense this is going to get already it's the case
Starting point is 01:02:09 that, I think, on websites like Character AI, the median user will spend hours every day just talking with it. What's Character AI? Basically a chatbot,
Starting point is 01:02:18 but it has a specific persona it's meant to be a person you talk to rather than sort of a chat bot that answers your questions and these things are going to get multimodal right
Starting point is 01:02:26 so it'll be able to process your video input, it will be able to display itself. We already have video models that can generate things that look cool. It will look like a person. It will be smarter. It will have longer session memory. Maybe the whole issue of memory is solved altogether. And this is not even looking ahead to the time when we actually do have AGI. We will just have things that are funny and endearing, that are like, I really care about you, I know you — or at least seem to, because they're trained to. Right. And they'll engender maybe
Starting point is 01:02:58 too much sympathy, potentially, right? For many people, these might be the most significant relationships in their life. Like, what other human wants to just, like, hear you talk about your problems for a couple hours a day that you're not paying $300 an hour?
Starting point is 01:03:11 Dude, I mean, I saw a phenomenal video the other day. So it's this girl, a pretty girl, probably in a relationship, sat in the passenger seat of a car, and the question comes up, she's got a phone in her hand. The question comes up,
Starting point is 01:03:24 says, can I look at your text messages? She goes, sure. Can I look at your social media? She goes, sure. Can I look at your ChatGPT? She throws it out the window. It's so true. You know, I remember Seth Stephens-Davidowitz did that great book, Everybody Lies, where he realized that people would ask Google things that they hadn't admitted to a therapist, that they wouldn't admit to a spouse, that they kind of hadn't admitted to themselves.
Starting point is 01:03:51 And I get the sense that ChatGPT has kind of lifted the lid on that. You've got this sense that this is unbelievably secure. Yes. And very intimate and exclusively one-on-one. And so forgetful that, frankly, it's probably not going to be able to remember what it was that you fucking said in a couple of days in any case. And yeah, I am concerned — some of my friends who are more health-anxiety-focused, the opportunity to have an always-on kind of expert to talk
Starting point is 01:04:19 to about your problems, mental health, physical health, stuff that you're doing with friends — it is a hypochondriac's fucking dream. You know, it's this opportunity to kind of wallow and really dig in to the questions. I do get the sense — I would imagine that on most people's self-report of how satisfied they felt about the time they spent on different applications on their phone today, ChatGPT would probably rank pretty high. I think, you know, maybe a little bit behind a meditation app or something like that, but probably not far off, and certainly higher than a TikTok or, you know, a lot of the social media that is super compelling.
Starting point is 01:05:06 But there are some longer-term concerns that I have about that. How much is it allowing you to indulge? This fatigueless therapist, best friend, sidekick — and it's quite sycophantic as well, rarely giving you tough love. It's always kind of validating you. Yeah. Yeah. I do think it'll be important for the companies to institute this at some level, you know.
Starting point is 01:05:35 this level, you know, like some I mean, I think there's a persona that we're familiar with, which is an employee or a coworker who has a backbone. And if you're a mature person, you will not only understand that, but appreciate that. we'll see if market incentives
Starting point is 01:05:56 mean that the average person wants that. Because them being very pliable is quite reassuring and comforting. But it was the case that OpenAI recently released a model that was considered very sycophantic, and the reason that happened is just because they released two versions of a model
Starting point is 01:06:13 in testing. People really liked the sycophantic one, and that's the one they deployed. It wasn't some intentional, manipulative design, as far as we can tell. It was just like, this is what people seem to like, we're deploying it. It was just
Starting point is 01:06:24 A/B testing it. But if you run an A/B test, you end up with the porn website, right? Like, that's actually where you end up. You end up kind of zeroing in on the lowest common denominator. You know, if you split-tested food, you'd probably end up with cheesecake. Right. Like, is that really what we want? Yeah, yeah, that's not necessarily... Yeah, yeah.
Starting point is 01:06:42 And again, you just end up with, it's kind of the basic time-on-site, CTR, Mr. Beastification, optimization thing of that which is most compelling, is not necessarily the thing that's best for you. Before we continue, if you haven't been feeling as sharp or energized as you'd like, getting your blood work done is the best place to start, which is why I partnered with function because they run lab tests twice a year that monitor over
Starting point is 01:07:05 100 biomarkers. They've got a team of expert physicians that take the data, put it in a simple dashboard, and give you actionable insights and recommendations to improve your health and lifespan. They track everything from your heart health to your hormone levels, your thyroid function and nutrient deficiencies, they even screen for 50 types of cancer at stage one, which is five times more data than you get from an annual physical. Getting your blood work drawn and analyzed like this would usually cost thousands, but with function it is only $500. And right now, the first thousand people can get an additional $100 off, meaning it's only
Starting point is 01:07:37 $400 to get the exact same blood panel that I use. Just go to the link in the description below or head to functionhealth.com slash modern wisdom. That's functionhealth.com slash modern wisdom. Yeah, 100%. Here's my hopeful vision. I don't know if I'm not predicting this will actually happen. One of the reasons, it's sort of hard in today's world to make bespoke content
Starting point is 01:08:09 that fits everybody's own highest aspirations, where, like, Mr. Beast can make something which is — I respect what he does, but it's something that a lot of people will find engaging, though not necessarily at a deep level. And there just aren't enough Spielbergs to make a bespoke movie for you. That could change with AI, where the amount of talent and dedication that goes into every single person's experience can be much higher. I feel like intuitively that should be more compelling. And, you know, if you're brain-rotted from TikTok, they'll make brain-rotted content that's at least better than what's on TikTok, or more engaging on a minute-by-minute basis than
Starting point is 01:08:53 watching a video game at the bottom of your screen and then watching, like, I don't even know what, some bullshit on the top. So I could imagine... We know from our personal lives that meaningful experiences are compelling to us; it's just hard to access them as immediately as TikTok is. To the extent that AI can design an environment which gives us those meaningful experiences as easily as YouTube Shorts are served to us, there's a positive story to be told there. I'm not necessarily...
Starting point is 01:09:22 That's a really good take. Yeah. Yeah, you're very hopeful in that regard, which is refreshing. Are there actually any leaders in the industry right now? Is that right to talk about that? I guess you've got distribution or power, but really what most people mean when they talk about this is the thing that I use and this is why I like it is vibe.
Starting point is 01:09:39 Like, I just like the way it speaks to me. It seems to make the fewest errors. Yeah, what's kind of the state of the industry? It doesn't seem to me there's a clear leader, which is very interesting. I think a couple of years ago you could have predicted that not only would there be more differentiation, not only would more people fall out of the race because it's getting more and more expensive to train these models, but each of them would pursue a unique angle. One of them would be more of a chatbot.
Starting point is 01:10:04 One of them would be a coder. One of them would be a remote worker. They might be trained with different architectures which have different strengths and weaknesses. As far as we can tell, that's not the case. And there are more companies that are competitive today than was the case maybe two years ago. So I don't know what explains this. It could just be that it's hard to keep a secret. Like, if you release o1, just by playing with it, you learn about how it was trained;
Starting point is 01:10:34 how fast it answers questions teaches you how big the model is. A bunch of things that smart researchers can figure out. And so then DeepSeek can look at that, and of course do a bunch of innovations themselves. But also every company — not just DeepSeek — will look at what's at the frontier and be able to sort of backtrack how it might have been engineered.
Starting point is 01:10:50 So there is a way in which things are sort of becoming more and more similar. Yeah, that is strange. How did Apple fuck it up so badly? I have no idea. I think maybe there's not a complex answer to that. Maybe there's a very simple answer.
Starting point is 01:11:05 It is big company, doesn't make a priority, it doesn't happen. Yeah, maybe. How important are individual visionaries when it comes to AI development if there's huge teams of people working on this
Starting point is 01:11:18 aggregated data learning, you know it feels like there's a lot of ballast in the system there is there still a great man of AI theory
Starting point is 01:11:30 coming along? I think it seems to me that there are great researchers who have very specific talents they have talents in not necessarily just AI research
Starting point is 01:11:41 but in how to code up the GPUs or accelerators so that you're getting these like 25% 50% performance gains which are huge but it's more of that kind of thing I think like more technical than from my sense there's not like I'm just good at thinking and I can
Starting point is 01:12:03 write a great manifesto, and therefore I'm the sort of person moving the organization forward. What are the current constraints to progress? Is it software? Is it energy? Is it coding? Is it datasets? Is it the savant fucking guy that fixes the hardware? Yeah. And to be clear, I'm a podcaster, so I'm looking from the outside in. My sense is that given this RL scheme — so the thing that o1 and o3 are, where it's trained to solve particular problems, math, code, and so forth — the thing that's really lacking now is not compute but the relevant data. So OpenAI said in their blog post about o1 — or sorry, Dario said in a blog post about export controls — that currently, as of a couple of months ago, they're spending on the order of a million dollars on RL.
Starting point is 01:12:54 And keep in mind that they're spending on the order of a billion dollars training the base model, which they do this RL on top of. So the reason they're not spending more on RL is just that they don't have the relevant data. They don't have these bespoke environments where you're trying to do a job and there's a Slack and there's a mail client and whatever open, and you need to figure out how to still solve the problem. And you need to learn from all these different kinds of jobs in the economy. You need them to come up with these different environments. You can't do reinforcement learning without that. Exactly. Right. Okay. What about China? Does China have a different vision for AI than the West does? I genuinely think nobody in America knows, or very few people in America know. Nobody I've talked to in America knows. Well, we saw DeepSeek's
Starting point is 01:13:39 models, and they're actually unusually open. They open-source their key architectural secrets, which in many cases are ahead of some American labs. Like, DeepSeek had techniques like MLA — multi-head latent attention, it doesn't matter, whatever, nerd shit —
Starting point is 01:13:55 that Meta didn't have, despite spending way more money. In fact, it had techniques that Meta had invented, like multi-token prediction, that Meta wasn't able to do the engineering to actually implement in their own models,
Starting point is 01:14:11 and DeepSeek was able to figure it out. So obviously, it's a big country with lots of talented people. They're open for now — DeepSeek, at least, is open for now. We'll see, especially given how popular it has become, and how Xi Jinping met with its leader
Starting point is 01:14:27 and all the other industrial heads, where they take it from here. Just — you're kind of knee-deep in the world of thinking about China, what it is that it wants to achieve, its history; you've got good context here. What do you think they are thinking when it comes to, why do we want to have
Starting point is 01:14:47 such a powerful AI? I think that they have shown a willingness to accelerate on all technology. They showed it in the 90s with the internet where people said that this will cause the collapse of the Communist Party and they made the bet that no, this will actually give us unique insight into our society
Starting point is 01:15:13 because we can monitor everything everybody is doing on the internet in a way that we cannot do right now. With AI, I think it genuinely tilts the balance even more in favor of the state. Right now, on WeChat or something, you have these manual censors, thousands, maybe potentially hundreds of thousands of them, who will take down content.
Starting point is 01:15:36 With AI, you have a system which could do that for you. If you try to use the AI to do something that the party doesn't want you to do, as these AIs get smarter, they can internalize, they can be aligned to the party's model spec that says, like,
Starting point is 01:15:53 we don't want you to talk about X topic, Y topic, Z topic; if somebody tries to do this thing, you want to report them to us. And a smarter model is just better able to follow the instructions. The digital panopticon. Yes. That possibility is live.
Starting point is 01:16:05 Of course, value from these models becomes more evident. I just think it's not clear. Obviously, they would pursue this. They are obsessed with technology and industrial policy. Why they would neglect this is not clear to me, especially now that we have, like, you know, it became the national champion because of the events of the last few months.
Starting point is 01:16:27 Right. So might AI perfect authoritarian governance then? I think it'll certainly make it more plausible. right now you have this dynamic where Xi Jinping has the same 10 to the 15 flops in his brain that every single person in China has. You could have a system in the far future where the central...
Starting point is 01:16:53 It's much more possible for the central node to concentrate compute. And just as Elon can monitor every single person at its factory, or AI Elon can, it might be possible for Panopticon kind of thing to have eyes everywhere. Copies of the thing can have eyes everywhere. Yeah, I think that's very plausible.
Starting point is 01:17:14 I wonder if you could, it is a slightly more hopeful vision, mimic a kind of benevolent dictatorship, you know, executing one aligned vision at a massive scale, but in a good way, in a way that actually helps people, you know, that encourages people to put down the ice cream or to do the whatever, to try and balance what you need from free market
Starting point is 01:17:32 and freedom and agency for people with oversight and guidance and looking after from above. I don't know. Yeah, I worry about any vision like that. I mean, history's replete with people who think they know better. Just, I mean, I think it'll be a genuine conundrum now that we're talking about it because, yeah, it will be the case people are getting, like, addicted to their AI porn
Starting point is 01:17:53 and, you know, like the brain rot that will come out of this. And I think it will be, the government might say this thing, which will be the main way in which we're interfacing of the world. It's not some peripheral technology. This will be the main way we're interfacing with the world, learning about the world, the main way we have relationships potentially. It needs to have these certain policies. And I guess a balance will have to be struck between the government saying it you can't do certain things or can do certain things. And people wanting the individual freedom of like, look, this is a this is the mind I have a relationship with.
Starting point is 01:18:25 I wanted to have these characteristics. I still don't know what the balance there should be. I lean more libertarian, but I think that like, yeah, maybe that means you ought to make a tradeoff for it. Some people will get addicted to a sort of, it's similar to the drugs legalization conversation, except the drug legalization doesn't have an upside in the way that AI that sort of niche has an upside. I wonder whether it's more compelling. I wonder whether drugs are more compelling than a super intelligent AGI that's able to trigger every bit of dopamine and meaning and serotonin and vasopressin in exactly the way that you need at that moment. using your micro-expressions and with full context and understanding your genetics and yeah like maybe actually you fixed the drug epidemic by just getting everybody addicted
Starting point is 01:19:13 using your micro-expressions, and with full context and understanding your genetics. And yeah, like, maybe actually you fix the drug epidemic by just getting everybody addicted to GPT-10 or something instead. I'm so old, I want my girlfriend. You recently spent some time in China. Yes. What did you learn? A lot of things I learned, honestly — and I'm embarrassed to say — are things I should have known beforehand. Obviously China is a very big country; it is another thing to see it viscerally. Just — there are cities you've never heard of which have 20 million people. Whereas Austin has about a million people, there are 160 cities in China that have a million people. And so there's just a ginormous scale to everything, from the cities to airports, train stations, factories — driving through towns or entire megapolises which are full of factories. You know,
Starting point is 01:20:06 you hear the phrase, China is the world's factory, and just seeing a city the size of Austin being one of like a hundred hubs of manufacturing — all that's happening here is shit is being made — is an interesting experience. Again, look, I'm a tourist, I'm talking about my experiences over two weeks, I'm not pretending to be an expert. There were interesting things in terms of — I think things are obviously more on edge, people feel more nervous than they did a couple of years ago, but it's still, it's not North Korea. Like, people will just tell you their opinions over dinner and stuff. I'm curious about China because the competition there is the main element of what will happen in the 21st century,
Starting point is 01:20:55 other than AI. And how do you mean? Well, it's a country the size of America in terms of the economy, much bigger in terms of population. For now. Yes. And the fact that we just don't think about it that much, or I think people just have this very adversarial attitude towards it because neither side understands each other that well, is a shame. Yeah, and I just wanted to have a more sort of visceral understanding of it. What are the cultural vibes like there? You made a great point about — do you think it's kind of, I don't know, just a more powerful, more sophisticated North Korea?
Starting point is 01:21:33 Oh no, I don't think it's like... No, no, no, no. That's what a lot of people that haven't been there think. It's like, oh, it's a surveillance state using gait analysis
Starting point is 01:21:43 to get your social credit, and it means no one will be able to speak the truth. Yeah, yeah, yeah. What about their family? You know what's really interesting — while I was there, I ran into some students
Starting point is 01:21:53 who were like, I would never move to America. And I was like, why? And they were like, well, you guys have school shootings, and it just seems unsafe. And living in America, we know that it happens, it's a sort of thing you hear about in the news,
Starting point is 01:22:10 but it's not a common part of the American experience, right? And I think — it's just true of probably every country, but a lot of the archetypes or the stereotypes we have of Chinese life are like that: you hear about this, but this is not like a common thing,
Starting point is 01:22:25 just like getting arrested in the street or something — it doesn't come up. No. That being said, doing what I do with podcasting, I would just not feel comfortable doing that in China. And I don't want to take...
Starting point is 01:22:35 like I think it is sort of evil to have a system of repression not just in speech but at every level from your savings are taxed so that they can pay for this industrial policy
Starting point is 01:22:46 you can't get your money out of the country but yeah it's just the sort of usual things you learn from travel it's more similar than you expect Is it true that they're using social media
Starting point is 01:22:59 to just supercharge everyone into hyperproducers, or are the kids getting brain-rotted by TikTok as well? Oh — when I was in a mall in Chongqing, a couple of kids... So, by the way, one interesting thing in China is there were very few foreigners. Very few foreigners? Foreigners. You can look out at a sea of people in a major city and you won't see a white person there,
Starting point is 01:23:21 especially outside of Shanghai and Beijing. So anyways, because of that, these Chinese kids would come up to us and try to take selfies or something. Right, because you were an attraction. Yes, yeah. Exotic. Well, I know one girl approached us like, are you guys in, like, a rock band or something? Sick. I mean, we're not.
Starting point is 01:23:42 Evidently. But that's, I guess, sort of how we were important. Anyway, so these kids come up to us and we're just talking. I'm making small talk, like, oh, are you guys in high school? What do you guys do in your free time? And he's like, oh, we just watch TikTok. I'm like, what do you guys watch? Oh — he's like, a couple of hours —
Starting point is 01:24:00 you know, we just... um, sexy girls. I'm like, what? Sexy girls? What do you mean? And so he pulls out his phone, and it's literally just sexy girl, sexy girl, sexy girls. Asian sexy girls. Oh, yeah, yeah.
Starting point is 01:24:12 Right. But I guess I didn't check. I'm colorblind, Chris. Mm-hmm. Okay, so the kale TikTok algorithm doesn't seem to actually be... Oh, sorry, is the meme supposed to be that our TikTok is the fucked-up shit and theirs is like a bunch of science and engineering? Yeah, exactly.
Starting point is 01:24:43 Unless, actually, if you'd looked more closely — this is your issue, because you turned away too quickly, because the sexy girls made you feel uncomfortable — what you would have seen is that they were all sexy girls doing... Oh, right, right. Simultaneous equations. Yeah, exactly, on a fucking blackboard. To you, it would have just been a whatever board. I think a lot of young people were quite... like, the economy is not doing well.
Starting point is 01:25:07 If you want to work in a tier-one city — so China classifies their cities by tier one, tier two, tier three, and there's a hukou system, which means that you actually need a visa to basically live in the tier-one cities — and if you want to work there, which is supposed to be this sort of dream, you're working 997, so from 9 a.m. to 9 p.m. — or sorry, 996 — six days a week. And there's just, like, a lot of stress. And there's a phenomenon where young people either want to work for less pay in a tier-three city, where their life will be much less prosperous but it's just not as much stress, or they just sort of want to leave the system altogether. There is like a visceral sense of... I think people have this very bimodal view of China, where either the system is about to collapse because Xi is cracking down and it just doesn't work at all, or they're about to launch the space lasers and it's already over. That hype of productivity, and then 996. Yeah, exactly.
Starting point is 01:26:06 And I think it's somewhere in between, where, like in America, we realize some things are going well, some things are not going as well. I do think the CCP has been bad for Chinese growth. You can acknowledge that China is a powerful country that's at the frontier of a lot of technologies without saying that, you know, the government is optimal or that its policies make sense. What else do Chinese people think about the West?
Starting point is 01:26:30 I am a little worried that it's coming across like I'm a China expert, where — I want to clarify — I was a tourist there. You went for two weeks and you've spoken to a couple of people.
Starting point is 01:26:39 Exactly. I don't know Chinese. It was interesting to me that many of them wanted Trump to win — I went before the election. They respect, they really respect Elon Musk, and I asked them why,
Starting point is 01:26:59 and they said, because he's successful, and we value success in China. Which I respect, like, that cultural attitude; it's against the sort of cultural tendencies that we often have here. How unmolested are the stories about people like Trump and Elon going over to them?
Starting point is 01:27:16 because I saw on your blog post you mentioned that accessing the internet is a bit of an adventure or a minefield. Yeah, it's like more of a pain than I expected. You can't just surfshark VPN your way around it, I'm guessing. There's only a couple VPNs that work, so you want to make sure that you have one of those installed before you go. Yeah, I'm not sure, honestly.
Starting point is 01:27:41 Yeah, I'm not sure. I'd just be interested. Like, if you've got such all-encompassing control, of the internet, why not curate the messages just a bit more, just a bit more, just a bit more. You've seen with RT and Russia that all manner of different, just subliminal breadcrumbs being left around, I don't know, maybe you simply can't coordinate well enough to do this. Maybe there is some sense that we actually need to allow people to understand what's happening at the rest of the world. Maybe it's some 5D chess move that actually by allowing people to like
Starting point is 01:28:26 Elon Musk and Donald Trump, when we go to do the thing, they're going to be... I don't know. But it just seems — I'm interested in why any positive visions of an area of the world that they are pretty head-to-head with would be allowed, given that you don't necessarily need to allow it. You have the facility, you have the opportunity to be able to stop that from happening. Hmm. My understanding is that they realize that in order for them to be economically dynamic, they need engagement with the world. So if your software developers can't read American code, it's a big problem, right? So I suppose you need to — sorry, just on that — you need people to be exposed to bits and pieces of American culture in an accurate way, or else how do you know what to design to be able to export to America? Right. Like, you need to have an understanding of that. And it can't just be that you've got a few Austin-equivalent cities and all that these people do
Starting point is 01:29:24 is get Faraday caged off and watch American TV. Okay, you're the America expert. You will tell us what it is that the white people want and then we'll go and design it. That would be too much. You need to distribute it. So maybe that's maybe that's a good way to put it. They do have
Starting point is 01:29:39 a very impressive system of — in 2018, Tesla opened up the Shanghai Gigafactory. BYD sales, I think, dropped like 25% that year or something, or on that order. And Tesla — sorry, China — did that deliberately, because they had sunk hundreds of billions of dollars over the preceding decades trying to build up their EV industry, and these companies were producing products which were not compelling to either domestic purchasers or to foreign purchasers.
Starting point is 01:30:09 They were just not designed well, just as you said, right? And just by bringing in Tesla as a catfish, they forced their companies to catch up, and now BYD sells more than Tesla. No way. Yes. Wow.
Starting point is 01:30:26 I think it's the best-selling car, or like car company, in the world. Maybe fact-check that. But I think we should do a similar thing. We have this idea that we can just prevent importing these amazing electric vehicles from
Starting point is 01:30:42 China — the solar, whatever they're great at. I don't think that's the way you win. I think the way you win is you do the exact same thing to them: you guys open up a factory in Detroit, you teach us how you're doing what you're doing, because they can do things we can't do. And then we build up these local supply chains, agglomerations of knowledge, and we force American companies to be able to compete with the frontier in the world. Because in the long run, the solution can't just be to keep them out, right?
Starting point is 01:31:09 In the long run, the solution has to be, you have to be competitive. Yes. Yeah, it's very much sort of a cordoned-off scarcity mindset that says, well, if the first-order effect is positive, that's what's most important. And you go, yeah, but what about two and three and four and five? Yeah, yeah. I think we've sort of given up on being able to lead in the physical world in the long run. And I think there's this interesting dynamic, which is — you were asking earlier about how we have all these problems and we're hoping that AI will just solve them or they don't come up. This is definitely true in the China-U.S. competition thing, where I think people who are paying attention to their top companies, their technology and so forth, notice that in 10, 20 years they're making so much progress that in many of the most important technological domains in the world, they will be leading. But there's this idea that, well, we will get AI first, and if we do that, then everything is solved. I think in this domain, this sort of thinking actually doesn't make sense,
Starting point is 01:32:18 because AGI still needs access to the physical world. You'll still need to manufacture robots. In fact, all that data will be cordoned off where that manufacturing is happening. So there might be increasing returns to having... It doesn't unlock unless you've kind of prepared in advance. Oh, that's interesting. Yeah. What else do most people not understand
Starting point is 01:32:35 about the tension between China and the West, in either direction? Yeah. This is not a point from me, but from Dan Wang. I think people don't appreciate how the Chinese political system works and how it just selects for a wholly different kind of person than the American political system. If you look at what fraction of Congress is lawyers, I think it's like a majority. It's just shockingly large.
Starting point is 01:33:16 And there's like no engineers or there might be like one or two engineers in Congress, whereas it's the exact opposite in China. You look at the Politburo. These are people who have like PhDs in chemical engineering or in petroleum engineering and random like heavy industry shit like that. And the way another thing people don't understand is just how intertwined the party is into industry, especially this kind of heavy industry. where for somebody to get promoted, you know, you might start off as like the equivalent of a mayor.
Starting point is 01:33:47 Then you become the governor of a totally different area. So the central government at the top, the central party will tell you, you know, you're going to go like, your mayor of Austin, now you're going to be governor of Delaware.
Starting point is 01:34:00 Now you're going to be part of, now you're going to run like a steel company. Now you're going to be part of the cabinet. And maybe then in the future you're going to run the country. So they also don't appreciate how decentralized the system is, in America, about 50% of government spending happens at the national level, 50%
Starting point is 01:34:16 happens at the local level. In China, 85% happens at the local level, 15% of national level. So there's all these experiments that are happening, and also it's a much bigger country, where each locality, each province is trying to implement the things that the central government wants. At the same time, the central government has way more power over appointment. Every town gets to elect its own mayor and every state gets to elect its own governor in America. That's not the case, obviously, in China, right? They're rotated around. This can lead to more meritocratic outcomes where you are promoted because you did a good job running this town. Obviously, that can go wrong and has gone wrong in recent times where you're promoted for loyalty.
Starting point is 01:34:56 But, yeah, I guess I just didn't appreciate all these different ways in which it is a totally different system. What does that result in? What's the outcome of the system being set up that way? What are the capabilities, strengths, weaknesses that that enables on the back end? So for many decades, these leaders were promoted and compared to each other. You are promoted if, compared to every single governor in the country, your province has the highest growth rate, and this growth rate was just measured during your tenure there. And the best way to increase short-term growth rates is just to build shit. And this worked for the first two decades of liberalization, where China — because of the Cultural Revolution, because of the Great Leap Forward, because of the decades and decades of war beforehand, the Japanese invasion — was just so much poorer than a country of that size or that human capital would be.
Starting point is 01:35:53 So you can build anything and it'd be worth it, right? There's like nothing that exists. You build a railway, train station, airport. We need it. We need it. We need it. Exactly. 100%.
Starting point is 01:35:59 We needed it yesterday. And then the system sort of malfunctioned, where now they were incentivized to just build cities that literally nobody lives in. You say they build bridges to nowhere and knock down 500-year-old monasteries to make it happen, and ironically here, we can't even rebuild fallen bridges, right? Yeah. What was it? Overproduction and underconsumption, and underproduction and overconsumption. Exactly. Exactly. Another important thing you need to understand — again, not an expert, you're hearing from a tourist — but another important thing, in order to understand the economy, you have to understand the system of financial repression that
Starting point is 01:36:39 exists in China, where if you are saving money, you are getting one percent interest from the bank, and no bank is allowed to offer you more interest than that, because the government controls the banks. All that money is basically given out as loans to companies that the state prefers. And it decides — it looks at, if we're trying to be a dominant country in 20 years, we need robotics, so we're going to lend a bunch of money to robotics companies and semiconductor companies and whatever, or to infrastructure projects. So it is a systematic redistribution from average people, from savers, to this kind of industrial policy, to these companies, which is often very inefficient, because
Starting point is 01:37:25 there's no market that's doing these investment decisions. It's just this sort of system of government relations and exactly, central planning. That's interesting. Dude, it feels to me like the world is at a fever pitch. I'm very detached. I've actually got TFS at the moment. I've got Trump Fatigue Syndrome and News Fatigue Syndrome. I've been kind of checked out since November, December time, which is why I haven't talked much about politics and things that have been going on,
Starting point is 01:37:53 and real interested in stuff that I think is a little bit more evergreen. But just the pace of fucking news and change and your ability to be able to discern between, okay, do I need to pay attention to this? Is this a really big deal? Is the president getting shot a big deal? Because it happened less than a year ago. Yeah. As did bombing Iran, as did, you know, pick 20 other crazy things
Starting point is 01:38:23 As did bombing Iran, as did, you know, pick 20 other crazy things that have kind of never happened before. And it's just here today, tomorrow's fish and chips wrapper. And the advent of AI, right?
Starting point is 01:38:50 Like, pace now of this is a, it's a difficult one. It's a difficult one to try and work out how to navigate the world as a sane human who needs to keep abreast of the stuff that's important, but also does. doesn't want to get lost in the swell of just total bullshit. Like, even today, like, so much of the stuff that we've talked about is like, fuck, like five theseses of research that could be done on each different one of these things and they're all going to be world changing if they come to pass and they could be, you know, tons of different permutations of how the world can end up being in future if it does.
Starting point is 01:39:26 doesn't want to get lost in the swell of just total bullshit. Like, even today, so much of the stuff that we've talked about is like, fuck — five theses of research that could be done on each different one of these things, and they're all going to be world-changing if they come to pass, and there could be, you know, tons of different permutations of how the world can end up being in future if it does.
Starting point is 01:39:39 Like, that's a lot. And they all interact with each other. Correct. And I read this thing from Adam Lane Smith the other day saying, your system is designed for stress but not for complexity; your issue is not that you're working too hard,
Starting point is 01:39:58 it's that your life is not sufficiently simple. And I kind of get the sense that when people talk about life being hard, they don't necessarily actually mean that; what they mean is life is complex. Because I think that most people, even the laziest people, are not bad at working hard —
Starting point is 01:40:13 what they really struggle with is complexity. Prioritizing. Yeah, executive function. Okay, how am I going to triage this? You know, one of the biggest reasons that procrastination happens is that you don't know what to do next. If you know what to do and you know how
Starting point is 01:40:29 to do it — fuck it, like, then we're talking, that's real procrastination, right? You know what to do and you know how to do it; if you're still not doing it, we have got a problem, right? If you don't know what to do, or if you know what to do and you don't know how to do it, well, it gets thrown
Starting point is 01:40:42 under the fucking nomenclature of procrastination, but I don't think it is — not in the same way. But yeah, I'm not saying you've scared me, but there's just lots going on,
Starting point is 01:40:53 you know, there's so much going on, and I think that this high level of complexity is something that for a lot of people is overwhelming. Yeah, I mean, I think this is similar... Do you have this tendency, by the way, where every time you start preparing for a guest, every single thing that you learn about gets titrated through what they study? So I'm going to tell you — I'm reading the Stalin biography by Kotkin to prepare for him, and the period of change between 1880 and 1930, the amount of technological change, geopolitical change — I think even today we haven't experienced something like that again. As much as we think the world has been changing in the past, it just doesn't compare.
Starting point is 01:41:33 In 1905, the airplane is invented; by 1914, 1917, it's decisive in World War I. The tank literally wasn't a thing when World War I started; by the end of the war, it's tank warfare all around. Radio, trains, fucking telegraph, steamships. It's just like, the world is changing so rapidly. There's all these new ideas that are coming around — communism, fascism. You have all these old regimes, all
Starting point is 01:42:03 these monarchies and aristocracies in Europe, in Russia, that are facing revolt because of this big war. And even all that wasn't as big as AGI is going to be. Fuck.
Starting point is 01:42:24 Yeah, dude, it's, um, what a time to be alive. George, one of my friends, has a really interesting question. You know Thiel's originality question — what do you believe that most people would disagree with or find abhorrent, or something? He's got one which is:
Starting point is 01:42:41 what is currently ignored by the media but will be studied by historians? Have you got an answer to that? Is there something you think of? What is currently ignored by the media but will be studied by historians? Is it fair to not make it all about AI?
Starting point is 01:42:59 Well, I think that's not necessarily being ignored by the media, but I guess some areas are a little. Certainly, like, industrial capacity — just how much stuff can your country produce. Is this the future-
Starting point is 01:43:17 proofing for AI? Yeah, partly; it's also relevant to geopolitical competition. When the Ukraine war happened, you know — obviously it was right to give them
Starting point is 01:43:30 the munitions to fight Russia, but the fact that we can't re-stockpile all the weapons that we've given them is like sort of worrying if we end up in another conflict with another country. What's your answer? Population decline is my usual go-to. I think it's a big deal. The impact of smartphones, more generally, the impact of technology on mental health, health, on outsourcing of thinking, you know, that article from the New Yorker, which, you know, if it happens with AI, because AI is just such an effective assistance, I have to assume that
Starting point is 01:44:14 basically the same thing happens, but at a lower level, when you're using screens or anything else too. That the more effortful you make the process of learning... I mean, I guess it could be too high. It's like, climb Everest and then read that word, then come back down, then climb Everest and read the second word. You'll remember that word. Yeah, exactly. You know what I mean? Those would be two. I think that, uh, the retrospective of what did we do to people with the free access to this kind of technology, I think, would be an interesting one. Like, will it be looked back on as the, uh, like, prototype version, this really early, um, rough-hewn... Like, when you hear about doctors smoking Camels, you know, in fucking surgery,
Starting point is 01:45:03 and you go, how? How were they allowed to do this? So, you know, dirty and unclean, and the outcomes were so negative. I wonder whether the same is going to be said of the use of technology between 2010 and 2025. Has this changed your own consumption of content? My limbic system is pretty fucking hijackable, man. So... but I try. I try to be as mindful as possible.
Starting point is 01:45:29 You know, it's largely putting guardrails in place wherever you can. How do you feel about the fact that your own content is served through YouTube? And I don't know how big a deal shorts are for you or short form stuff. I mean, it's a huge deal for me. They crush in terms of numbers. They're completely fucking useless in terms of everything else. Oh, I disagree. For me, it's been different.
Starting point is 01:45:47 On YouTube, shorts. Yeah. In terms of what? What's the outcome that you're getting? Um, I had a video with Sarah Paine, who is now my most popular guest, which was stuck at like 40K for the first six months. This was before my podcast
Starting point is 01:46:01 had this recent growth spurt. And then we started making shorts for it. And it's at like three million something. Wow, okay. That's interesting. And the shorts themselves have like 20 million views. Wow. 10 million views.
Starting point is 01:46:14 Okay. Yeah, well, maybe you're just doing shorts better than me. It's important. My preference is always plays on audio platforms. YouTube people, I love you, but my... the show has always been Spotify first, Apple Podcasts first. Yeah, and, you know, it might sound stupid, my own naming of it as one of the most beautiful
Starting point is 01:46:36 podcasts in the world, at least when we get it right, to be an audio-first podcast. But just for me, that's where the most loyal audience tends to be. It's the most predictable in terms of numbers. It's the one that seems to be the most considered. And a lot of this is just... you could change this overnight by removing reply threads on YouTube. You could remove this. They did remove this overnight by getting rid of the downvote button on YouTube as well. So I've always liked the audio side of the platform. But the problem is the discoverability. It's not there. No, you can funnel from YouTube. We found some good ways of funneling from YouTube across onto audio. How do you do that? So we release episodes 10 hours early on audio. Yeah. So if you want to get access 10 hours early... and the pinned comment for every episode is: access all episodes 10 hours before YouTube by subscribing on Apple Podcasts or Spotify.
Starting point is 01:47:30 And a lot of people will come and comment on the YouTube and say, I listened to this this morning on Apple, but I came here to watch it on YouTube. So you end up getting two plays, but I don't think you would get the same in reverse. So we kind of got like a weird Patreon type scenario, like paywall thing. 10 hours early on audio.
Starting point is 01:47:51 We went video-enabled on Spotify, which I know was a transition that you made as well. That was good. The Spotify partner program's really good. Some changes here and there. Your Twitter strategy is very strong. That's good. We've gone very hard on Instagram, which has been... Well, you've got the looks, Chris. Most of the fucking shorts are just of Matthew McConaughey chirping about something. I rarely... It was a really, really funny video. I've only guested on two shows in the last 12 months. One was Rogan and the other one was my friend Mike's show. And it's me
Starting point is 01:48:25 chirping away. I think it's quite a good take about how you know whether or not you should end a relationship, and, like, just classic me stuff. It's 55 seconds long and it's just me chirping, chirping, chirping, chirping, chirping, chirping, chirping, chirping. And then the final scene of it cuts to Mike and he goes, yeah. It's so funny to just have a video where the entirety of your contribution is "yeah." I've realized how many times that must be the case. But yeah, dude, it's been so fucking awesome to watch your
Starting point is 01:48:59 ascendancy, you know, because we met at the Slate Star Codex meetup in March, and I know it was in March because it was just after I moved out here, March of 2022. Oh, damn. That was one... somehow it felt even earlier than that. Well, unless we met when I came out here the first time in November. So I only came out in November of '21, but I actually searched your name. So I wanted to see if I already had a prep talk from previous stuff. And if I go into it,
Starting point is 01:49:34 I'll see 27th of February 2020. And I've just got at Dwarkesh underscore SP written here. And it's like a bunch of other stuff. Valve Index with Vive 3.9 trackers,
Starting point is 01:49:52 VR chat with mods, near Sion and Twitter if you want help. Somehow. Do you, it was also such a, I mean, it was during COVID. The Ukrainians are using Grindr to find Russian troops. I've got such. We can train super intelligence on your Apple notes, Chris.
Starting point is 01:50:17 You do not want that. Holy shit. But yeah, that was the ACX meetup notes. Yeah. It was a crazy time because it was during COVID. And I felt like I met so many great people in Austin just hanging out around that time. It was, I don't know, man. There's something about, I wonder whether everybody has this.
Starting point is 01:50:43 And I get the sense that they don't and I can't work out why I do. I'm so fucking fortunate with the people that I bump into early on. Yes. Holy shit. It's like people that I've known for like a long time end up becoming influential or successful or like really virtuous in some way. Like the variety of different trajectories of shit that goes well. And it's definitely not me, right? I mean, I am a common denominator between these people in that I know them, but I definitely haven't fucking influenced them.
Starting point is 01:51:22 and I'm relatively introverted. I spend way too much time on my own. So I'm like, how... what the fuck is it? What's the single thread that's drawing me through all of this? Maybe, I don't know, the advantage of being someone that doesn't go out that much is that it usually takes a pretty good thing to get you out of the house, so you're a bit more discerning, and at the better things, you meet better people. I don't know.
Starting point is 01:51:47 But holy shit, like, I think about some of the places that different people... Like, you're a perfect example. One random meetup, then we were in this degenerate fucking Signal group chat for the last, like, three years, and yeah, this arc that... You even nearly quit the podcast. You weren't even doing it, I don't think.
Starting point is 01:52:03 You'd started it during COVID and then stopped, and it was just, like, languishing there. It was called The Lunar Society at the time, and people thought I was talking about, like, some crypto coin that's going to go to the moon, so then I changed the name. Tell the story about where that comes from, because there's a book. Can I comment real quick
Starting point is 01:52:20 about the meetup? I think at the time you had 350,000 YouTube subscribers. Correct. I was like, whoa, this guy is fucking killing it, which you were. And you've just, like, blown up, like, 20x from there.
Starting point is 01:52:33 Yeah. And I think this is like a really interesting phenomenon, where I have also had this experience of having met a lot of great people who are, like, super busy and spend time, like, teaching me stuff. The main way the podcast has gotten better is just that people have, like... I've had mentors who have just spent a bunch of time. And these people are, if they're economists, in their 60s or 50s, people like Bryan Caplan, Tyler Cowen,
Starting point is 01:52:59 but if they're, like, AI researchers, they're my age, right? But they're still my mentors. And they've spent so much time teaching me stuff. I had no right to their time or attention. And I wonder how you think about this now, because I'm sure you get inundated. And I've... I've hit you up this way, of, like, what is your advice? Can you teach me about X or Y thing? And, uh... Is someone asking you to teach them? Yeah, or not even... like, connect me to somebody, come on my podcast. And dozens of people have done this for me. So how do you balance the sort of, like, trade-off between being respectful...
Starting point is 01:53:34 You feel like you've got some karmic debt that you need to repay because of the number of... I... This is a really interesting question and a challenge that feels like a little bit of a champagne problem. Oh, so many people need your time, so many people did you favors, and now you have the problem of people asking you to repay it and so forth. But you're right, because you need to triage your time and you can't do everything for everyone. And the weirdest thing about growth towards success in any domain, whatever version of success that you or me have managed to achieve, is that you need to become increasingly good at saying no. And the pace at which your discernment of no,
Starting point is 01:54:19 the water line, the barometer at which no is deserved, and you shouldn't feel guilty about it, it should be instant, it shouldn't take any sort of mind share, is a continuously moving target. And you need to hypertrophy this muscle over and over and over and over again. And the shit that you would have begged to have had the opportunity to say yes to
Starting point is 01:54:43 only 12 months ago now needs to be an automatic no. How do you deal with the situations where... When you were starting out? When I was starting out, I had, like, zero. I had, like, zero. It wasn't, like,
Starting point is 01:54:56 am I going to say yes to the Dwarkesh Podcast? I was like, who? What Lunar Society? And they said yes. They had better things to do. They're just like, somebody reached out, they seem like they've done a good job
Starting point is 01:55:06 coming up with smart questions, let's do it. And that, just, like... If your water line is always moving up, where's the room for the fledgling other person to come through? That's a question I ask myself a lot. It's a real smart question. I'm glad that we're asking ourselves it at the same time.
Starting point is 01:55:22 The one caveat, or the difference in kind, not just a difference of degree: I'm going to guess that most of the people that you spoke to, that you asked for their time, except maybe Tyler Cowen and Bryan Caplan too, I guess, a little bit... their role is not primarily hardcore curators of other information. They're not distillers across the board, right? That's your job. Your job is to be the hub with all of these different spokes going off it, if that makes sense. And that is a
Starting point is 01:55:54 coordination problem. Like, your primary issue is coordination. You have in some ways a hard life, learning things that are complex, so on and so forth. It's so tough. I know. Speaking. Yeah, yeah, yeah. You have... there are challenges that you need to lift, even if they're only cognitive, right? But the main thing is complexity. Like, the main thing is the complexity. And I don't think that your issue is with doing things. It's with adding complexity in, to go back to that Adam Lane Smith quote. I don't have a good answer for it, dude.
Starting point is 01:56:27 I definitely feel like my, uh, karmic repayment debt... I feel like I'm wildly, wildly, uh, overdrawn, and that I need to repay this fucking thing. But also, I... where do I find... I don't... where do I find the time from? Where the fuck did the people who gave me the leg up find it? That being said, you're probably not giving yourself enough credit. Because when I think about... when I think about some of the situations that have happened, even just over the last week, there's a kid called Elliot Buick.
Starting point is 01:57:00 So he's just turned 20. He's British. He used to work for Trigonometry as a video editor. And he... if I could bet a little bit of cash... you're already, like, fucking Bitcoin at 10K, so it kind of doesn't work so much anymore, but I would certainly have put cash on you three years ago. He's Bitcoin at one dollar. Like, I would absolutely throw some money at him. There's a bunch... Jack Neal, if you know who he is, another young kid. Like, there's real, real smart young guys, and I think when you're talent spotting, you're like, okay, this... there's something there. Like,
Starting point is 01:57:34 it really feels like there's something. We went for this three-hour dinner at Flower Child, and we chatted, and I'm like, anything that you need, you can do this, you can do that. So maybe I'm not spreading it super wide. Nomatic needed... they wanted an intro to this guy who did an amazing episode with a musician. This episode with the musician did 3.2 mil. The guy didn't have any sponsors. My guy that does my ads didn't have... he needed his inventory filling.
Starting point is 01:57:59 And Nomatic needed to make more sales. Nomatic sold loads of bags. This guy got paid, and my guy that was in the middle made money from all of it. I'm like, that just, like, happened passively as, like, a byproduct of the ecosystem thing. So, yeah, maybe you're not able to... if you're balls deep in a Stalin biography, you can't peel off to go and do a bunch of podcast appearances or fly across the country to see someone or let somebody sleep on your couch or do whatever it is that you think you should be doing virtuously in that way. But I bet that you are adding a shit ton of value, even if it's just highly leveraged here and there, little meetups, invites that you give to people, suggestions, intros, all the right... hey, man, can you intro me to Bryan Caplan, can you do... you did Dominic Cummings for me, right?
Starting point is 01:58:41 Hey man, can you do that? Like, that's, you know, a small, what, five-second task, 10-second task, but downstream from that ended up with an episode that was really interesting. Now I can intro Dom to somebody else, and so on and so forth. So, yeah, I think this may just be fucking cope, but... No, 100%. I mean, you've, on, like, trips to the airport, you've, like, spent the time just, like, chatting with me about... I forgot about that.
Starting point is 01:59:04 Yeah, of course. Hey, do you need a fucking recruiter? This is what I do to build your business out. 100%. There's also an interesting element. I don't know. This is, I mean, it's sort of, for an audience of your size, it's easy to lose track of how many people you're helping vicariously.
Starting point is 01:59:24 Where even... there's a weird dynamic where, like, you could help somebody in person, or you could help share an idea with a couple million people. And, like, the trade-off, it just has to be... it's weird to put it in that way, because, like, one is sort of more commodified than the other. And you can make better content... if you spend that hour prepping harder, thinking harder, you could make better content for a couple million people. It's a weird trade-off. I remember a friend, Alex, gave this thought experiment of: imagine that one of your friends had broken down, down the street, and asked you to come and help him change his tire.
Starting point is 02:00:23 What the person wanted was this sense of your time. But I get the sense that at least in the kind of interactions that you're talking about, what people are looking for is outcomes. they're not necessarily looking for inputs and if someone wants to come and kick the tires of a very busy person with kind of no real defined outcome
Starting point is 02:00:42 that's not something that I would have ever done and I don't think if you're a young ambitious person that's listening to this I do not think that you should go to anybody and be like hey man would just love to connect if you don't have anything to offer what the fuck is the point of the connecting
Starting point is 02:01:00 If it's, I have a few very specific questions that I know you probably have the answer to, and I would really appreciate two minutes for you to just give me these because they're big unlocks for me... Super specific question, really specific ask. This person's put the work in. They're evidently educated. And you'll probably get the fucking 30 minutes on the call because they're walking the dog and they don't really mind or whatever. When I think about the random people that I ended up on calls with because I asked for very, very specific things on the come up... I know that you're a big fan of, like,
Starting point is 02:01:32 there's a huge, unactualized opportunity that most people don't realize in a very well-written cold DM. Yeah. Dude. Like, just fucking send it. You've got nothing to worry about. Yes. Yeah.
Starting point is 02:01:44 And you would also be surprised by how few people put in... These famous people, they're getting, I don't know, a thousand, whatever, emails every month or something. But how many of those are, I've spent a week coming up... I mean, before I had any sort of a name or something, I would still be able to get big guests, but I would literally spend a week.
Starting point is 02:02:05 Here are the questions I'd ask you. Just get past their "not a moron" filter, because they're getting a request every 30 seconds to be on the podcast or something. Just going deep versus wide. Now, there's a bunch of, like, tacit things about... well, that doesn't mean you should, like, have 5,000 words in the email, right? Just, like, how to keep it brief.
Starting point is 02:02:23 It's also really interesting, by the way, as a side point, what ends up salient to you when you get a cold email or you're hiring somebody, and what isn't. Like, the kinds of things you thought mattered while you were in college,
Starting point is 02:02:36 whether you have an "I started an organization that does X" or "I have a master's in Y," just, like, never matters, as opposed to the couple hours of extra work you would put into that email.
Starting point is 02:02:50 Yep. How little credentials matter when hiring or something, people don't appreciate. Um, yeah. Yeah, there is an awful lot of opportunity available for someone who's just courageous enough or ignorant enough to be able to get past that sort of first level of ick filter, and also is prepared to do a little bit of preparation. And another thing... I don't think people appreciate how much, um... if you write a good blog post about a topic that you think is relevant to somebody you're trying to reach.
Starting point is 02:03:30 It's almost guaranteed that not only will they read it, but weirdly almost everybody who matters will read it. Wasn't that how Tim Urban connected with Elon originally? I think so. That would so make sense. I think he did a six-part series. This is a good while ago now. I think we're talking about 10 years ago now.
Starting point is 02:03:49 I think he did a six-part series on Elon. And you know, you're right. If someone writes a good, even... not even viral, like a semi-widely circulated piece on you or your organization or a movement that you care about, you will read that thing. So I always think about this. I always think about the fact that even the richest, busiest, most successful, highest-status, hardest-to-get-a-hold-of people in the world, they get plane delays. Even if they're getting on a private jet, the weather's meant that they're held up. And what are they going to do? Well, they'll open YouTube.
Starting point is 02:04:23 You know, they'll open YouTube or they'll open Twitter or they'll open Substack or they'll check whatever it is that's been sent to them in a WhatsApp thread or something. And if you're that meme or you're that article or you're that quote or you're that whatever, it's just continuing to roll the dice, shots on goal. You can take advantage of a very unfair dynamic, which is that a lot of people have to work anonymously. Their work is shoveled out through an organization or through their boss or something, and they will work decade in, decade out, be extremely good at their jobs, and people will not have heard of them.
Starting point is 02:04:59 We're podcasting here, and, I don't know, I like to think we put our work in or whatever, but, like, we're not working harder than somebody who's just at McKinsey... or maybe even, like, that gives a valence of something that's less valuable. There's a lot of valuable work that you just don't see: you're making policy, you're a staffer for a policymaker, or you're an engineer at a company. It's kind of quiet grunt work. Yeah, in a way. And... but we, like, we can reach out to people and they will respond to us just because of,
Starting point is 02:05:27 it just so happens to be the case that our work is public facing. And that's just luck or slash our choice. That's right. You can take advantage of the dynamic by putting at least some of your work as much as possible out publicly, right? Mm-hmm. The blog post, the podcast. Mm-hmm.
Starting point is 02:05:44 Yeah. And there's also another dynamic where people take for granted the people in their organization. I'm guessing that, like, the eighth most senior person at Microsoft gets less respect and attention
Starting point is 02:06:02 from Satya Nadella than like a random blogger he likes. Which is weird, but you can take advantage of that. Yeah, there's a seduction to visibility. Yeah. I think. And even if, you're right, even if someone's company
Starting point is 02:06:18 is way smaller than yours, or their podcast is way different to yours, or their Substack is much less circulated than yours... if they're the person, if they're the main person and you're not, or even if they're the main person and you are, but what they do is cool... it's such an unlock,
Starting point is 02:06:38 it's such an unlock to do stuff like that. I'm interested in what your learning process looks like. How do you learn? What is that process at the moment? In a way, it's very simple. I read everything that they... I mean, first of all, it's about picking the guest,
Starting point is 02:06:54 and I pick guests based on who I want to spend two weeks reading everything they've ever written, talking to people in their field, learning from them what's interesting to ask them. I'm sure you get inundated by requests to come on your podcast,
Starting point is 02:07:11 and often it's by people who are, like, very big names, right? And I'm guessing you could probably say no to most of them. And yeah, same here. Where you, um... what are we trying to do here, right? This interview will last two hours. Any interview I do will last two hours. My life is the research that precedes that, the two weeks that precede that. Um, I want that time to be valuable and meaningful to me, and be time that I'll carry forward in my future interviews, in the future endeavors, in a way that'll be valuable. And if it's...
Starting point is 02:07:44 not somebody who has written something that's worth reading or done research that I really want to understand, you know, what are we doing here? And then you're choosing your own type of torture for the next two weeks. Yeah, choosing what you want to learn, which is complicated. But it also... it's sort of easy to forget how
Starting point is 02:08:03 much of a dream job this is, where people are curious and they want to learn things, but they feel like they had to trade off their time and their job to do it, or they can only learn about certain things for their work. We get to choose what we want to learn about, and our job is just to learn about it, right? Um, it can be about any topic at all. It can be about genetics, can be about history, it can be about, um, technical stuff. And then, yeah, then there's the prep: just read the research, talk to LLMs, like, just, you know, get after it. The ol' Socratic method.
Starting point is 02:08:29 Exactly. Uh-huh. Um, and then ask the questions you actually want the answer to. I think sometimes people have the sense that you need to ask about the intro chapter of their book, you need to... Why did you write it? Who, you know, what is it about? And, you know, you can just, like... you can just ask the thing you want to ask. I think people underrate, with immersion learning, how much people can keep up with. People can keep up with a lot.
Starting point is 02:08:53 People just, like, really want to boil down the conversation so that everybody can keep up. I think they underrate the extent to which people can just miss a word or two here and there, but just getting to the crux of it will make it a more delightful experience. And also, if it's a question you're not interested in, why would the audience care about it? So, yeah, it's fundamentally: just be motivated by what you're curious about, who you want to interview, what you want to ask them, when you want to interrupt. Yeah.
Starting point is 02:09:28 Following your taste... we spoke about this before we started, but the ability to discern between something that's good and something that's not is this lovely balance between good instinct and sort of rational assessment. Douglas Murray once told me this story. I'm aware that Douglas Murray's, like, the least fucking popular person on the entire internet at the moment after his interview with Dave Smith, but he's still got fucking absolute bangers, and this was one of them. So he worked for this journalist. Douglas has got, like, I think four or five columns a week he does now. And when he first started out, he's working for this legendary British journalist.
Starting point is 02:10:05 And this guy was getting toward the twilight of his career. Like a classic journalist, he'd accumulated a bunch of enemies and a bunch of supporters as well. And he decided he had always wanted to get into theatre. So he created a show about the life of Prince Charles, and the entire show was in rhyming couplets, the whole thing. And it was orthogonal, to say the least. At the half-time interval of the opening night, there was no one left in the entire theatre, including the cast. Everybody had gone, on opening night. And this guy was devastated, and obviously all of the enemies that he'd accumulated throughout his life, they came out of the woodwork and they dug the knife in. There were all of these criticisms in the media and stuff
Starting point is 02:10:51 like that. Douglas told me that he'd seen him at work the following week. He said, what were you thinking? Fucking West End show about the life of Prince Charles in rhyming couplets. You've got all of these people that are rubbing their hands together waiting for you to fail. And he said, Douglas, I followed my instincts. And instincts, they may sometimes lead you wrong, but they're the only thing that's ever led you right. I was like, that's so fucking sick, dude. That's so sick. And what I found, whether it's with where I want to live, the things that I want to focus on learning, the direction of the show, the questions that I want to ask the guests, the sort of guests that I want to bring on, the people that I want
Starting point is 02:11:33 to hire... the further that I've gone away from my instincts, the more that I've tried to reverse engineer, okay, well, what do the audience want to hear? What are they interested in? What would make the guests feel comfortable? What do the guests want to talk about? I'm like, in the nicest way possible, fuck the guest. Like, what do I want to talk about? Because that's what matters. Right. That's what matters. And if you use your own instinct as this sort of weather vane, this GPS locator, if you're really fired up to speak about it, you have to assume that some non-insignificant cohort of other people are too. And if you've been doing it for long enough,
Starting point is 02:12:08 the people that are following you are following you for that same taste. They are in the wake. They're holding onto the coattails of your instinct. And if your instincts change, even dramatically... We're going to make some pivots, probably before the end of the year. We're nearly halfway, almost exactly halfway through the year now.
Starting point is 02:12:25 By the end of the year, we're going to make some pivots with the way that we do the show. There's going to be different skews, different types of episodes that are going to be coming out. And it's probably the biggest change I've made since we started the cinema series about three years ago, and it is not in any way
Starting point is 02:12:40 data-driven. I have no justification for this other than I think it would be fun, and my instinct is going, I think you should try that. I think that's so valuable for a couple of reasons. One,
Starting point is 02:12:56 I have noticed that my best performing episodes are just... I would have never anticipated that they would be popular. It's Sarah Paine, who's this historian that had written a couple of books that I thought were great. It's David Reich, who has studied ancient genetics, and now he's
Starting point is 02:13:15 way more popular than Satya Nadella and Mark Zuckerberg and Tony Blair and whoever else you could name. But in all these interviews, there was something I noticed afterwards, which was that every time I went to lunch, dinner, when I talked to somebody and they asked me, what are you thinking about, I just could not... you know, I just interviewed David Reich,
Starting point is 02:13:32 and he was explaining to me that 60,000 years ago, there was a small group in East... whatever. And that obsession was so strongly correlated with how well the episode did, regardless of what topic it was about. On the instincts: I've had bad judgment about, like, a lot of... look, I learned how to do the podcast well, but a lot of things are required, as I'm sure we've come across, in terms of running a business, hiring, management. Firing. Yes, 100%. Just making things happen. I've had bad judgment about many of these things.
Starting point is 02:14:07 I feel like I've done worse, even in those cases when I've taken advice. The advice was actually better than what I would have done by default. But when you follow somebody else's advice, if things go right or if things go wrong... You haven't learned anything. Exactly. And if you just, like, do the thing that makes sense to you, you have some reason for thinking it makes sense, and things go wrong, you've at least, like, tried an idea. You get to correct your own intuition, whereas that error is still waiting for you to step on in future if you outsource it. So, yeah, you want to front-load failure as quickly as possible in a small and acceptable way.
Starting point is 02:14:41 But no, dude, my best heuristic for whether or not I've picked the right guest for that day is how I feel on the morning that I wake up. It's like, when I wake up on a morning, like this morning, I went and did a hyperbaric oxygen chamber session. I'm listening to you and Alex Kantrowitz talk about stuff. And I'm like, this is, like... I haven't seen Dwarkesh in fucking ages. It's going to be so sick. I'm going to tell him about that. I, like, you know, went through my notes and found I had this note from the fucking February of 2022. It's going to be so cool. I'm going to get to bring that up. Isn't it fun? It's going to be. And then, you know, there's other days where you just don't have the same, quite the same level of that. And that's not necessarily an error. That's not that
Starting point is 02:15:25 you've picked someone that's wrong. It's just, huh, okay, well, what are the ones where I wake up and I want it to be 2 p.m., and what are the ones where 2 p.m. will come along and it'll be okay. Yeah. And yeah, the ones where you're like, I want... I want to speed-run the next two weeks so that this person comes on the show. Yeah. You know, we've got, it looks like, um, MGK, rapper-turned-rock-star guy... there's a potential that he's coming on. There's another guy called Ronnie Radke who's coming on. This guy called Rick Beato who does music and stuff like that. I'm, like, making a little bit of a pivot into talking about the world of music and about sort of what's happening and how that interacts with culture, and the perils of touring, and what
Starting point is 02:16:05 this means for a family life, and how you deal with the anxiety and the performance and pressure and scrutiny and the press and criticism and creativity and all this stuff. I'm like, huh. Like, I already want it to be one of those days when I get to speak. I don't know... there's other people in between, which will be great. I'm, like, super fired up to speak to them about that. Here's another thing. I don't know whether you've ever had this. There's times where I quite like to do episodes with people where I know a lot about the topic, but not quite so much about them. And that's the same sort of thing where I'm real excited to talk about the topic. And I imagine this is what it must feel like to be at one of
Starting point is 02:16:42 Aella's sex parties, where I'm like, I know I'm going to have sex tonight, but I don't quite know who with. That makes sense. Where I'm like, I know the direction I'm going in in terms of the topic and I'm super excited, and you'll listen to the person talk and you'll do the prep and do the whatever, but there's a little bit of, like, I know this world really well, right? What I'm excited to hear is their spin on this. I'm excited to hear the angle that they come at this from. Yeah. I had this guy called Paul Turke who does evolutionary pediatrics, so he talks about child rearing clinically, medically, developmentally, but from an evolutionary lens. So, what happened ancestrally,
Starting point is 02:17:26 how did we raise kids, how were they looked after, what did hygiene look like, what did skin-to-skin contact, diet, all this stuff. And I know evolutionary theory not bad. It's one of the few areas
Starting point is 02:17:41 I have a bit of expertise in, but I never looked at this. I'm like, oh, this is fucking cool. This is going to be so sick. I'm going to speak to this guy. He's, like, mid-70s, you know, he's got his... a son-in-law or daughter-in-law or something helping him to set up the camera.
Starting point is 02:17:58 I'm like, this is going to be fucking sick. And sure enough, awesome episode with this guy who had no right to come on and crush an episode, apart from the fact that he has an amazing bit of research. And I was super fired up to speak. Yeah. And how often do you encounter a guest, which is my favorite, where you thought you were going to get somebody who can speak to this one narrow topic, but you've come across a polymath who has a deep world model, that somehow they have something to say about anything you could ask them?
Starting point is 02:18:34 And the only limitation is your prompting ability. Yeah. It's so gratifying when you encounter people like that. One of the people I had on like this... do you know Gwern Branwen? No. Oh, he's incredible. He's another Scott Alexander type. Okay.
Starting point is 02:18:50 A blogger who has written... just, like, yeah, there's no subject on which he couldn't give you something in, like, a deeply empirical, super interesting way. And what I learned from the interview... I didn't know anything about his personal life, other than the fact that he's anonymous, is living on, like, $12,000 a year of Patreon, in the middle of some, um... somewhere that his grandfather built in Virginia. And so during that interview,
Starting point is 02:19:28 he was visiting San Francisco for a conference, and during that interview I asked him, well, it seems like you're enjoying it here, do you want to move here? And he said,
Starting point is 02:19:36 yeah, that'd be... that'd be fun. And I said, why aren't you moving here? He's like, I don't have... I don't have the finances to do it. How much would it cost
Starting point is 02:19:44 for you to move here? He said 75K. And then people, like, donated it to him, and he's moving to SF. No way. And he got, like, much more than 75K. Yeah, yeah, yeah.
Starting point is 02:19:56 Or Sarah Paine, who... I think I can share this. She was somebody who had been slogging through the archives. She's a historian who's been to every single continent to go through the archives, has deep understanding of basically every single conflict over the last many centuries, can give you, like, why did the Vietnam War happen the way it did, why did Russia fall, World War II, you name it. Um, incredibly compelling presenter, as soon as... I mean, her episodes are, by my... I sometimes joke that I host the Sarah Paine podcast, uh, where I sometimes talk about AI. And in terms of viewer-weighted minutes, that's definitely true. But if you notice how her books are categorized on Amazon, they're, uh, S.C.M. Paine, not Sarah Paine. And the reason is that she's a military historian who I think started her career 70s, 80s. Um, and she wanted to anonymize her sex. Exactly. Um, um,
Starting point is 02:20:55 and so it so happened that someone who was incredibly talented wasn't given this medium earlier on. Personal accountability wasn't quite the same. And now she's blown up. She's actually retired from the Naval War College, where she used to work, so that she can be a public intellectual, informing on the big questions that we've been discussing, full time. And this was launched by the episode that you did with her? Yeah, and we did three more lectures.
Starting point is 02:21:18 We're doing more now. I think what people don't understand, and it's hard to appreciate, is that the lectures she did... cumulatively, she probably has over, like, 10 million views on full lectures. That is just so much bigger compared to...
Starting point is 02:21:33 add up all the students she's interacted with at the Naval War College combined. It's just, like, hard to think about, like, a million people, and you probably reach many more than that on an average episode. There's, like, no stadium in the world
Starting point is 02:21:48 that can accommodate an audience of that size. And right now we're talking, we're having fun. We're not thinking about how stupendous a quantity that is. And in fact, I think a lot of politics is explained by who understands this and who doesn't. Like, Kamala would do all these rallies and people are like, oh, people are really excited about Kamala. And it would be a stadium of 20,000 people. And like, wow, she filled a stadium of 20,000 people. And you know that if you could put out a YouTube video and it doesn't get 20,000 views within the first hour, you're disappointed.
Starting point is 02:22:15 Or when Trump went on Rogan... I couldn't tell you to the nearest 10 million. Well, look at that recent New York mayoral candidate, right? Absolute digital first. Yes. George has this great take. It's so true. People think that sort of Trump and Kamala was a true digital-first election, but it wasn't.
Starting point is 02:22:36 It was still legacy media talent and politicians that happened to kind of create digital-appropriate content, whereas I watched this breakdown this morning on X of this mayoral candidate. Apparently his mum is a real famous Hollywood filmmaker, and all of the videos that he was in, all of the campaign videos that he shot, even the on-street stuff, was shot with the same colour palette, this very soft lighting. It's very... well, it's a nice blurred effect, bokeh depth-of-field thing going on in the background. Everything's shot in this way, and it almost gives you this rose-colored glasses view of what New York could be like.
Starting point is 02:23:26 And you think, okay, like this is really taking it, using AI voiceover stuff on TikTok. Like, okay, this is really, really, really stepping it up an awful lot. And yeah, it's interesting to think about where people haven't fully sort of factored all of this in just yet. Yeah, because it's not visceral in the same way. It just, I mean, another interesting thing from the election is just people realizing that people they thought of as celebrities weren't actually the real celebrities.
Starting point is 02:24:00 In terms of the... you can get some random rapper to, say, endorse you, or you can go on your podcast or Theo Von's podcast or something. And who is actually reaching more people? You just, like, don't think of these other people as celebrities, but they are. I think the... what people are actually... and it's a shame that this word became so molested so quickly
Starting point is 02:24:22 in the world of social media, but who is it that has the most influence? Like, who is influential? And I think that, you know, a tweet from Stormzy or Dave or Central Cee or some British rapper saying that we need to do this thing for Labour, versus Dominic Cummings...
Starting point is 02:24:39 I'm aware that he's not running for whatever, but someone marinating in a two-hour conversation where me or you or whoever else grills Dominic Cummings about this thing. And, you know, he gets to put his personality across. And it's not in any way... I don't think that me or you or anybody else in the world of podcasting has some undue degree of credibility over people who, let's be fucking frank, are way more talented. Like, I can't do what fucking Dave or Skepta or Central Cee can do. But there is a multiplier, like a vector of advantage, from the format of a long-form conversation. It's been done to death a million times. But
Starting point is 02:25:25 there's nowhere to hide. People don't have anywhere to hide. It's very difficult to hold yourself. Anyone can pretend to not be a psychopath for five minutes. But try to do it for two and a half hours in a free-flowing conversation; it tends to come out. I sort of disagree there. I agree that certain things... Because you've hidden your psychopathy for the last two and a half hours.
Starting point is 02:25:42 Yeah. Until now. We're about, we're two hours in and it's coming out. Where I agree that certain aspects your personality
Starting point is 02:25:51 will come evident, your charisma. But I think Douglas Murray had a point in his interview with Joe and Dave Smith where he said
Starting point is 02:26:04 that biving out is not, this is not like the checker of whether you're ideas make sense. I mean, the point you made about that New York City mayoral candidate is exactly correct, right? He is arguing for socialized grocery stores and rent control and things. Any economist would tell you don't make sense, but he can put warm tones on his cameras.
Starting point is 02:26:24 And of course, he's a charismatic person. And that is enough to wipe out the deficit that his ideas have. Everyone's just vibing their way through whatever level of influence it is that they want to achieve. Yeah, that's interesting. So I do think that, like, I'm kind of skeptical of our medium as a way of intrinsically being here towards truth or eliciting the truth by default. I think it's unless the interview is done in such a way that you're really pushing at the cruxes, which to be honest, I haven't always done. Or after every interview, even if I think, or even if people in the comments are like, you did a really good job pushing, trying to. get at the cross. I will always feel like there was something further that could have been inside. Well, you know the thing that you didn't say. Exactly. And even if there wasn't a specific thing that
Starting point is 02:27:15 you didn't say, you know the sensation of feeling like there is something that you could get to in your mind but weren't able to bring out. Yes. Right. So as a musician, you can hit the note that you meant to sing on stage in front of a few thousand people, but you know where you could have done it or you did it like that in a different show and you really nailed it. And you added this little bit at the And it's kind of the same sense, I think, when it comes to having a conversation that there's somewhere I'm trying to get to. Why the fuck is it? I'm trying to get to. It's going to be, oh, he's away.
Starting point is 02:27:47 Fuck. Like, I'll just, I'll have to move on. And yeah, these micro victories and micro defeats that you have throughout every single element. I mean, go through a little Odyssey in the middle of a podcast episode. Dude, I've had, I've had episodes where I've literally gone on journeys around. And, oh, fuck, like, okay, what am I going to say? I got these things, you're trying to put this together. Meanwhile, what's coming out of your face is, everything's fine.
Starting point is 02:28:16 It's totally sweet, everything's cool. And inside of your mind, you're going, ah! They're fucking screaming trying to hold on. That's what, I mean, look, that is what the first time going on Rogan is like. The first time going on Rogan is a three-hour panic attack masquerading as a conversation. That's what it feels like. And then you get off and you're like, what the what the fuck did i what did i think we talked about i think we talked about mike
Starting point is 02:28:43 tyson and oh my god did i give away my address like what you honestly dude it's fucking it's wild and um the craziest thing that you'll already have had i'm sure but we'll continue to have it's like here's a here's a point mark zuckabberg when he woke up on the morning to do your podcast there have been a bit of him that was like fuck this kid's smart like i guess i be a bit nervous. Like, or maybe one of his staff said to him, like, maybe he didn't know, I don't know. Are you like, oh, that means that you get to be the anxiety attack inducing Joe Rogan for other people, and that'll get worse. I remember the fact, I can remember who it was.
Starting point is 02:29:28 The first time it ever happened, it was a virtual one, I was still back in the UK, and someone was a fan of the show. And I was like, oh, that's really cool. and they mentioned maybe as we started like I'm a real fan it's a real honor to talk to you and so thank you very much it's very kind let's get into it and then they finished up I gotta tell you I was very nervous before I started today and I'm like what I know that's crazy what the fuck are you doing
Starting point is 02:29:50 like you're the expert right the token retard in the room like what are you talking about but this is I guess I guess for you know anyone who wants to climb the hierarchy of any industry that they're in if you're a young person you have idols that you look to, and if you achieve the thing that you want to achieve, if you actually do the thing that you're setting out to do, those idols turn into rivals after a while,
Starting point is 02:30:15 and then they go from rivals into being friends and maybe even collaborators. And it's this weird arc where the, like, particular strata that you thought that you were in, you're sort of moving, wiggling through it. You go, like, fuck, I'm sobs at Tony Blair. What the fuck am I doing? So I'm so Tony Blair?
Starting point is 02:30:39 There's also the most surreal and gratifying thing has been, I mean, I was in college not that, like when we were talking about that meetup, I was in college. And I was just on the side, I would be like reading these books by these scholars that I really respected and, oh God, if they even like saw a cold email that I'd written, not that I would often dare to, but if I did, I would be like, so delighted. If they mentioned me on their blog, if I wrote something, I mean, I couldn't even imagine that. And now to have those exact same people, be friends, be people who I'm having discussions and debates with, who are... You see you as a contemporary. Yeah, it is like,
Starting point is 02:31:22 there is nothing more, just, like, heart... you know, heart-pleasing, just, like, more satisfying. And also just that happening so fast, because nothing is special about me, but just because this medium affords a level of virality and growth and public-facing credit which others don't. Personal accountability. Well, that's a lovely reframe. I think everyone has this sense,
Starting point is 02:31:50 especially in the hyperviral growth loop speed run of fame thing, that everyone has a little bit of... Like, even I have a degree of ick around kind of the pace of change, and I'm sure that you do too, where it's like, oh fuck, like, there's a lot of exposure going on here, and it feels very, very aligned with me, but there's a little bit of me that's like, fuck, like, this is a lot. There's a lot of, like... 800... I remember I used to feel anxious if the 24-hour plays ever went over a million. Whenever they went over a million, there was this thing that went in the back of my mind, I was like, oh my god. And that happened for about two years. And it would be fine if it was 500,000, and then if it was a million... 48 hours, you know, the 48-hour play thing on YouTube, and it would go up.
Starting point is 02:32:30 And I'm like, what am I, why am I feeling this way? And I realized this ambient anxiety was just the sense that lots of people are watching you. Right. At the same time. Yeah. And I'm like, there's not only a stadium of people watching you when you are like performing, like at any given moment.
Starting point is 02:32:47 You're asleep and it's happening. There's a stadium, like, right now watching you. It's still going. And that dissipated a little bit. So anyway, everyone has this kind of scrutiny, people-looking thing, a degree of, I don't want to be sold out, I don't want perverse incentives distracting me away
Starting point is 02:33:08 from what the main mission was, ruining my taste, ruining my gut instinct, all that stuff. And then your point there that, not becoming known by lots of people, not becoming popular in the circles of people who are popular, but being respected by people that you respect
Starting point is 02:33:29 is what everybody, really, in any industry where they're curious, I think, should be trying to get toward. Like, they genuinely care about what you think. They think... he thinks that my idea is cool. He thinks... that guy, who is a legend, super genius, right, like a fucking divine human,
Starting point is 02:33:53 thinks that I have something interesting to say. You're like, all right, I challenge anybody to find a fucking problem with that, right? It's not shallow, it's not cloying, it's not sycophantic, it's not gamesmanshipy. It's: I went away and had a unique insight and a perspective on this thing that I cared about, that they also cared about, and they hadn't fully thought about it before, and I got to contribute. I got my name put in my first academic paper. Like, someone cited an idea that I came up with to do with
Starting point is 02:34:28 evolutionary theory around mating. This happened twice now. It happened the first time, then it happened the second time, and I was like, this is fucking unbelievable. I remember when I read The Evolution of Desire by David Buss, and now he's put me in this paper, and it's like... it's so fucking sick.
Starting point is 02:34:42 Like, me, retard from the north of the UK. And, you know, there's cool shit you can do, but remembering, at least... the more that I try to keep the, I don't know what you call it, like, virtuous flexes, as opposed to kind of the shallow flexes of subscriber count or revenue or how many tickets you've sold to a live show, how many people turned up to a meetup and stuff like that. Like, that's still cool, but it doesn't give that same, like you said, sort of warm-heart delight of someone I respect respects me. Yeah, 100%. And even looking at a number
Starting point is 02:35:23 on a screen go up, you know, it's like, whatever, it'll go up 10x, and then that becomes your default. You know, there's a certain point where you go from zero viewers to, like, 100, and that is like, okay, people are actually watching. And after that, it's just orders of magnitude, right? A zero goes up, another zero goes up, another zero goes up. Nothing fundamentally changes in your life. There's the respect of the people you respect, which is uncorrelated to those numbers.
Starting point is 02:35:55 they're like oh I love your content and it's very easy to just get sort of used to that but sometimes you pause and you think like that's a real human being and they've often do you have this thing where they're like pull the phone towards you and they're like I'm listening to you right now some guy did that outside of Flower Child last night
Starting point is 02:36:14 It's like, dude, that's fucking sick. And that is, like, a real human being is spending so much of their time... hopefully you're contributing, it seems like you're contributing, to their intellectual growth, their understanding of the world. And you... I mean, I think about, it wasn't that long ago, I was in college, I was a teenager or whatever, I would, um, drive around listening to Sam Harris, like, uh, teach me... how much that contributed to me being curious about the world, having different viewpoints, being like... changing my career trajectory. And not just... before I became a podcaster, I was studying computer science and I was going to be a programmer. Followed in the footsteps of Sam Harris to become a podcaster. I was going to be a programmer after that, and then I decided, oh, that's getting automated.
Starting point is 02:36:59 I'm going to make the more financially responsible decision to go into podcasts. Yeah, yeah, yeah, yeah. Yeah, the ripples are fucking wide, dude. You really... you really don't know. And I think that this is... it's the bull case for just producing stuff. Because you don't know,
Starting point is 02:37:15 especially when you start, you don't know what's good. Like, this might be... I feel like this might not be totally shit, and you kind of don't really know. And after a while, if you get enough positive feedback and you're diligent and you refine and you update, and you keep
Starting point is 02:37:28 going, oh, actually, yeah, it is... it might not be good, but that's, like, that's the reason you do it. I had Scott Alexander on my podcast, and I had a very interesting conversation with him towards the end,
Starting point is 02:37:44 where I asked him, how many great new bloggers do you discover? And he said, you know, on the order of one. And I asked him, okay, how soon after you have discovered them does the rest of the world discover them?
Starting point is 02:37:56 It's like, maybe a couple of months, usually less than that. So, again, it speaks to this dynamic where, as soon as you are making good content, I think you might... or not you, but somebody listening might underappreciate the extent to which
Starting point is 02:38:09 it will immediately be seen by all the people you want to see it. It might not happen immediately, but genuinely, it's shocking how fast good content gets discovered. Well, think about how, by design, most content that gets consumed, the biggest, most widely distributed stuff, goes to the biggest number of people, right? The channel with the most views has the most views. What a shocking insight. But what that also means is that if you, as a viewer, have peeled off
Starting point is 02:38:46 it's got to be really, really, really fucking good to be able to do that. And you have to assume, like, it kind of goes back to what we were saying earlier on that if your instinct drives you toward a thing, you have to assume that some non-zero number of other people are probably interested in it too. The same thing goes as a viewer. It's like, if you thought it was good, probably like some other people will think that it's pretty good as well. And if this person's just a bit consistent, like I would actually say substack for me has one of the highest densities of as yet undiscovered talent out there. And maybe
Starting point is 02:39:23 it's just that the particular sort of format and language of substack lends itself to me, I quite like pithy stuff and I like the feed and I like the fact that most articles are about 10 minutes long. like my attention spanking around about hold on to that you're way ahead of the rest of us yeah yeah that's true um but i'll find people on there like some of the people that have been subscribed to i've been subscribed to for like three years and now they're blowing up i'm like it was obviously a fucking matter of time like it was obviously going to happen but by design it can't happen to everyone right so it's the same thing as the why is it that the people that i'm friends with end up doing all of this stuff i don't think i don't know
Starting point is 02:40:05 maybe we've both got phenomenal taste, just not sartorially. I've noticed as in people in other industries say the exact same thing. You know, I'll ask like the CEO of a big company, or they'll mention that they were friends with the other CEOs who are now running all the big companies back when they were college students and not even necessarily the same college. It's just like they saw each other. How the fuck did this group come together?
Starting point is 02:40:30 Exactly. And they don't know. Isn't just that game recognizes game? Is that just it? I don't know. I think maybe not a lot of people would do stuff
Starting point is 02:40:39 because if you're doing things you will meet the people who are also doing stuff. It's weird in how weirdly small the world ends up being. How many I'm I don't know
Starting point is 02:40:55 I'm friends with a couple of people in San Francisco who I now have had on my podcast and are I don't know Okay, this is a funny story. Maybe the fourth or fifth person I interviewed in my podcast. I never released this, but it was in 2020, was Leopold Aschenbrenner.
Starting point is 02:41:13 I was like 19 and I think he was 17 or something. Do you know who this guy is? No. He wrote this memo called situational awareness, this long blog post, that went super, super viral. Situational awareness. Yes. Okay. And it was like the most popular thing on EI have written over the last two years.
Starting point is 02:41:28 Okay. And he was like a 17-year-old at Columbia and we've been friends since then. But anyways, you know, one of these things where, like, how did that happen, right? How did we know each other for so long? That's happened to Mia so many times. It's sort of uncanny. What was his name? Leopold Aschenbrenner.
Starting point is 02:41:46 Leopold Ashenbrenner. And there's so many others, like, all the AI people that I've had on the podcast are just people I, like, met at a party two, three years ago, researchers, whatever, Schulte Douglas or Trenton Bricken or so forth. Maybe you're right. Maybe it's just that most people don't produce. stuff. Yeah. And that by producing stuff, you inevitably separate yourself. And there's a feedback loop where, like, you actually get input from the world, you meet
Starting point is 02:42:11 mentors, that puts you on this upward trajectory. And especially if it's good. Yeah. If the thing's good and if you're improving. You show potential. Yeah. I mean, like I say, that Elliott kid I met the other day, Jack, Neil, he's 20. This Elliot guy is 20 years old.
Starting point is 02:42:31 It's a podcast called Next Generation. I think, and I'm having this chat with him. I'm like, you do realize that the shit that you're asking me as a 20-year-old is stuff that I only asked myself like three years ago, like these questions about the balance between inputs, outputs and outcomes, the realization that he'd sort of attached a sense of sacrifice and difficulty with being worthy and validation. And that was something, this was like this gaudy and not he needed to cut through. I'm like, who the fuck are you?
Starting point is 02:43:01 Obviously, you're going to be great. Obviously, you're going to be successful. And George Mack, George, I met George, fuck, 2019, 2018, 2018, 2019. I remember sitting down, he'd sent me a message when I went to his office to interview one of his bosses. He sent me this DM, and the DM said, on Instagram, called DM. We'd never spoken. I didn't know where he worked, didn't know who he was. I hear you're coming into my office today, full stop.
Starting point is 02:43:30 Let's exchange. Google Chrome extensions. I was like, this is my fucking guy right here. He stinks of me. Sure enough, half a conversation with him. He was way more interesting than his boss that I sat down to speak to. And we moved to Dubai together.
Starting point is 02:43:45 He's just moved to Austin. He's going to live with me. Like, we've been best friends for fucking six or seven years. He's just got a huge, huge book deal to write this thing that he built out of an essay. The essay was what he launched on my podcast. So we did this episode at the back end. last year that came out in March he's just about to announce he's got this book that's coming
Starting point is 02:44:05 I'm like and it's my manager that's doing his book deal like the again the incest fucking fucking wheel human centipede of stuff keeps going and um yeah this is just it's one of the areas that you know me and you can continue to pontificate about we know cool people and isn't it fun and all the rest of the stuff like we can keep going but I I really hope that people take away from this is if you put yourself out there and if you are able to discern good work from bad work and virtuous people from non-virtuous people and industrious people and industrious people and you are able to contribute to that like literally the sky is the fucking limit because the step change opportunities that will come along by somebody being there and being able to contribute
Starting point is 02:44:56 give you the intro and help you along with your thing and you doing that and then okay this is the scene. Like, this is now the fucking scene. Um, it's really cool. And it's very gratifying. And it's gratifying in a way that doesn't make me want to have a shower after. You know, like, like, like, an episode doing really big numbers is really great and gratifying, but not in the same way as someone going, dude, that fucking, that idea that you came up with about, that was sick. Yeah. Okay. I'm going to think about that for the next three weeks. Thank very much. Yeah. 100%. Fuck yeah. Dude, I appreciate the hell out of you. I'm so happy to see what you're doing. Dwarkesh podcast. People should go check that out.
Starting point is 02:45:36 Substack as well. Yes.orgash.com. Dwarcash.com. Dude. Dude. Appreciate the fuck out of you. Thank you. I appreciate you, man. Thanks for having me on. I get asked all the time for book suggestions. People want to get into reading fiction or nonfiction or real life stories. And that's why I made a list of 100 of the most interesting and impactful books that I've ever read. These are the most life-changing reads that I've ever found and there's descriptions about why I like them and links to go and buy them and it's completely free and you can get it right now by going to chriswillx.com slash books. That's chriswillx.com slash books.
