Modern Wisdom - #1079 - Tristan Harris - AI Expert Warns: “This Is The Last Mistake We’ll Ever Make”

Episode Date: April 2, 2026

Tristan Harris is a tech ethicist, entrepreneur, and speaker. Are we sleepwalking into disaster? AI is unlocking massive progress, but the dangers hiding beneath the surface are exactly what experts fear most. So what’s coming… and could it spiral beyond our control? Expect to learn why AI is distinct from other kinds of technologies, what the Ali Baba rogue AI catastrophe that should scare everyone is, how worried Tristan is about the impact of AI deepfakes and misinformation campaigns, what’s happening with the AI safety discussion, if we should be skeptical of AI companies pushing just as hard but pretending that they’re not, the end result that AI companies are looking for and much more…

Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get up to 20% off Timeline powered by Mitopure, now at a lower price, at https://timeline.com/modernwisdom Get up to $350 off the Eight Sleep Pod 5 at https://eightsleep.com/modernwisdom Get a Free Sample Pack of LMNT’s most popular flavours with your first purchase at https://drinklmnt.com/modernwisdom Get 160+ biomarkers tested for just $1/day, plus an extra $25 off at https://functionhealth.com/modernwisdom

Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: lnkfi.re/SN-Goggins #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: lnkfi.re/SN-Peterson #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: lnkfi.re/SN-Huberman

Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 What is the journey of how you arrived thinking about the problems of AI? Well, most people know me or our work through the film The Social Dilemma. And I used to be a design ethicist at Google in 2012, 2013. So that basically meant how do you ethically design technology that is going to reshape, especially the attention and information environment of humanity. So it's like there I was at Google, it was 2012, 2013. This is in the heat of the kind of social media boom. I think Instagram had just been bought by Facebook. My friends in college started Instagram. So like I was part of this cohort and milieu of people
Starting point is 00:00:44 who really built this technology that the rest of the world just thought was natural. Like this is just drinking water. Like I just drink Instagram. I just live in this environment. And so while like I saw billions of people enter into this psychological habitat, I knew the handful of like five or six people that were designing and tweaking it and making it work a certain way. Yeah, exactly. And I think that that's just like a fundamental thing I want people to get is, you know, you think of technology like it just lands and it's just inevitable
Starting point is 00:01:12 and there's just nothing we can do and it just comes from above. And it's like there are human beings making choices. And, you know, as someone who grew up in the era of, you know, the Macintosh, like my co-founder, so I have a nonprofit called the Center for Humane Technology. My co-founder Aza Raskin, his father invented the Macintosh project before Steve Jobs took it over.
Starting point is 00:01:31 This is the original Macintosh, you know, the thing that we now know, the iMac, the MacBook. All of that started with his father, Jef Raskin. And the idea of creating humane technology where technology could be choicefully designed to be really easy to use, to be accessible, to be an empowering extension of our humanity, like a cello, like a piano, like a creative tool. Like if you're a video person, you can make films and videos. And just so people understand, because we're probably going to be talking about some darker things in this podcast, the premise of all this is not to be a speaker of doom or something like that. It's to say, I want to live in a world where technology is in service of people and connection and all of the things that matter to us as humans, and then have technology wrap ergonomically around us to create
Starting point is 00:02:19 that. So that was kind of a side journey. There I was at Google in 2012, 2013, and I saw how essentially there was this arms race for human attention, and whichever company was willing to go lower on the brain stem to manipulate human psychology would win. This is exploiting like a backdoor in the human mind. So I think of it just like software has backdoors and zero-day vulnerabilities. You can hack software. The human mind has vulnerabilities. And as a magician as a kid, I understood some of those, studying at a lab at Stanford called the Persuasive Technology Lab, where a lot of the Instagram co-founders had studied. I understood the psychological influence dynamics. And so it wasn't just that we were making technology in this
Starting point is 00:03:01 beautiful and empowering kind of Macintosh way, it's that basically more and more of my friends were sucked into developing technology to hack human psychology. And so I saw that problem, I became concerned about it, and I made a presentation at Google. And I feel like I repeat the story everywhere, but it's just important for my history, I guess. I made a presentation saying never before in history have 50 designers in San Francisco basically, through their choices, rewired the entire psychological habitat of humanity. And we need to get this right. We have a moral responsibility to get this right. And I sent it to 50 people at Google, and when I clicked on the presentation the next day, on the top right of Google Slides, it shows
Starting point is 00:03:41 you the number of simultaneous viewers. You know how that works? And it had like 150 simultaneous viewers and then 500 simultaneous viewers. And so it's like, oh, this is spreading throughout the whole company. And that's what led to me becoming a design ethicist, where I had to research and ask the questions, what does it mean to ethically design and persuade people's psychological vulnerabilities? When you can't not make choices about the psychological habitat, you have to make a choice about whether you're going to do infinite scroll or not, or autoplay or not, or notifications or not, or these 10 people followed you or not. Like, what does it mean to ethically make those choices?
Starting point is 00:04:18 That is you being concerned about some of the ways that a misalignment of technology with what human flourishing might look like. Yeah, and how society, I think people are afraid to say, like, when you make a bridge, there's a physics to whether that bridge will hold up or whether it will fall apart, right? And it's not magic. We don't say, oh, like, who would have known that that bridge would fall apart? We have a science of bridges and mechanical engineering and civil engineering. And with technology and human psychology, there is a science to the dopamine system.
Starting point is 00:04:51 There is a science to confirmation bias in our psychology and how we tend to perceive information through our tribal in-group, like we see things through the political tribe that we're a part of. And if you understand that science, you can understand whether or not technology is manipulating that. So one of the core things I think we were trying to do in that first chapter of work, and this again, starting in 2013, is break through this idea that technology is neutral, and that we could never know what's good for people or that something could be bad for people. Like I deliberately saw people make short form auto-playing videos that then created the brain rot economy that we're now living in. And it seems like a natural progression to go from, I'm concerned about some specific types of technology use and how that interacts with humans to, I'm concerned. Specifically not technology use, but technology designed for certain outcomes of usage.
Starting point is 00:05:45 Really critical thing because we want to put attention on the design, not just how people are using it. Yeah, understood. Yeah. Seems like a natural progression to get concerned about a burgeoning AI landscape. Well, what happened was my team at Center for Humane Technology, our nonprofit, we got calls from people inside of the AI labs. So, you know, we were in San Francisco. We've known people who work at all the tech companies for the last decade. And suddenly in January of 2023, this is 10 years later now, I got calls from people
Starting point is 00:06:20 inside the major AI labs saying that the arms race dynamic was out of control and that huge leaps in capabilities, this is basically speaking about GPT-4 before it came out. And GPT-4, you know, could pass the bar exam, you know, get very high results on the MCAT, pass the SATs, incredibly powerful AI that suddenly appeared out of nowhere. And these people who reached out to us basically said, this is really dangerous. Will you use your connections, your connections in DC, you know, go wake up the world, wake up the institutions, let them know that this is coming because it's not safe what's about to happen. Why is AI distinct from other kinds of technologies?
Starting point is 00:07:00 Well, let's get to that. So I think the thing that is most difficult for people to get is up until now technology progressed in a very, like, we're kind of adding layers to a stack kind of way. Like we build a networking stack, we build the user interface stack. And as you develop the stack, you're kind of just adding layers and layers and layers. And the technology that we live in was coded manually, like line by line. Like when the computer sees this, do this. When the computer sees this, do this. And then people contribute all this code over 30, 40, 50 years on GitHub and in operating systems.
Starting point is 00:07:33 And then you land in this technological world in which everything that happens in a computer is happening through logic and through human choice. What makes AI different is that you're designing it and you're not really coding it like, I want it to do this. You're more like growing this digital brain that's trained on the entire internet. And when you grow the digital brain, you don't know what it's capable of or what it's going to do. So think about it this way. Like, if I did a brain scan of your brain, could I know from just the brain scan what you're capable of? No. I can see that this part of your brain lights up when you have that thought, but I can't have a comprehensive picture of like, what is everything that Chris is capable of? Can he do
Starting point is 00:08:14 sociopathic manipulation and do better military strategy than the best U.S. generals? Like, from the brain scan, I can't tell that. Maybe you can. But so with AI, we are essentially, you know, when people hear about these huge data centers getting built out, like Facebook's building one, Meta's building one, the size of Manhattan. And you ask, like, what is that, what's going on there? It's like, they're building a bigger and bigger digital brain, that's what goes from GPT-3 to GPT-4, you know, with more neurons. When you hear the number of parameters of an AI model, that's like essentially the number
Starting point is 00:08:45 of neurons in an AI model. And what they found is that the more GPUs and Nvidia chips you point at sort of growing this digital brain, the more intelligent it gets. And the more it picks up capabilities that we didn't intentionally teach it. Like there was a famous example where you just train it on the internet and then, you know, it's answering questions in English. And suddenly it learns how to answer questions in Farsi, like doing Q&A in a different language. And no one taught it that language. It just sort of learned that on its own. And that's what's weird about AI is that it's a black box. We don't really understand
Starting point is 00:09:20 how it works. And yet we're making it more powerful much faster than we're understanding how it works. And that's what leads it to have these more unexpected behaviors that we aren't able to control. And I think we're going to get into some of those. A data center the size of Manhattan? Yes. Where? I don't remember where that one is. But it's crazy. There's like an overlay. Someone can look it up. There's like an overlay where you can see the size of this data center and it's almost the size of Manhattan. And you can ask, I mean, again, there's more money. People should just get, there's trillions of dollars going into this. There's more money going into this technology than has ever gone into building all the technologies of the past.
Starting point is 00:09:57 And we're releasing this technology faster than we've released every other technology in history. It took something like two years for Instagram to go from zero users to 100 million users. And it took two months to go from zero to 100 million users for ChatGPT. And of course, they're going from GPT-3 or 4 to now 5.2, and it went from barely being able to finish a sentence with GPT-2, to, like, finish a paragraph and do like a coherent text, to GPT-3 could write full essays, to GPT-4 can pass the, you know, the bar exam or the MCATs, to GPT 5.2, I believe, was used to get a gold in the math Olympiad. Meta's Hyperion AI data center will sprawl to four times the size of Manhattan's Central Park. And there are quotes from people like inside of OpenAI who believe that they're not just building this like narrow technology that's a helpful blinking cursor. They want to build artificial general intelligence. And so what that means is being able to do everything that a human mind can do.
Starting point is 00:11:01 And the joke inside the company is like we're going to cover the world in data centers and solar panels. Like, they want to cover the world in essentially these big boxes that have huge clusters of Nvidia chips that then compute away and ultimately create something like a super intelligent god entity that they believe that they will use to own the world economy, make trillions of dollars. And from a kind of ego-religious intuition, they will have built the god that supersedes and replaces humanity. I know that sounds insane. So let's, we can slow that down again. Break that down for me. That was a lot. Yeah, yeah, yeah, yeah.
Starting point is 00:11:35 Yeah, it's like you've got a movie on, and I feel like I found out who the bad guy is, but I have no idea how he got there. Who's the bad guy? The end-of-the-world AGI overlord. Well, yeah, so, okay. So, first of all, let's, like, break this down, because this might sound ridiculous to people. Let's make sure people understand. The stated mission of OpenAI is to build artificial general intelligence, which means to be able to replace all forms of economic cognitive labor in the economy. Cognitive labor meaning anything your mind can do. So if a mind can do math and generate new mathematical insights, if a mind can do physics like Einstein, if a mind can do chemistry,
Starting point is 00:12:17 if a mind can do programming, if a mind can do cyber hacking, if a mind can do marketing, if a mind can illustrate something, we're seeing AI that is able to kind of cover more and more types of cognitive labor in the economy. As we scale AI from this tiny little model with, you know, 100 million parameters to trillions of parameters and these much bigger data centers, AI is getting closer and closer to being able to do, and is already beating humans at, many cognitive tasks. We already have AIs that are better at military strategy than the best military generals. People remember, you know, in the 1990s, IBM Deep Blue beat Garry Kasparov at chess. That was kind of like the beginning of like, it can beat you in this narrow game called chess. Then there was AlphaGo. We can
Starting point is 00:13:04 have AI that beats the best human Go player in the Asian board game of Go. But then now, instead of imagining chess or Go or StarCraft, now it's like the war in Iran, and you have an AI that's basically telling the military troops where to go, who to bomb. This is really scary. And we're racing to this outcome faster than we've, again, built any other technology in history. You said that it's better, better than humans. Better in the narrowly defined sense of effective at strategy, effective at goal achieving, effective at problem solving, because that's what intelligence is, right? It's like finding the shortest path between a goal and what are the strategies that get you to that goal. So persuasion is a kind of strategy or intellectual task. What is the best way to persuade you, the shortest path?
Starting point is 00:13:56 Negotiation is a problem solving task, and lawyers find ways of lying or framing the truth in certain ways. Well, AI is going to discover forms of deception or lying. We're seeing that in the examples that I think we're going to talk about. And so, but intelligence is different than wisdom. And your podcast is called Modern Wisdom. And I hope we get into this distinction because we are scaling up the amount of power that everyone is going to have access to, whether it's individuals or militaries or nation states or companies or businesses.
Starting point is 00:14:29 But we are not commensurately scaling the amount of wisdom. And I know a friend of ours that we met in Austin several years ago, a dear friend of mine, Daniel Schmachtenberger, has this quote that you cannot have the power of gods without the wisdom, love, and prudence of gods. And so in many ways, I think AI is like a rite of passage for humanity, because essentially we've not always had the greatest track record in our relationships to technology. Like if you look at the Industrial Revolution tech, what letter grade would we give ourselves in holding on, in like stewarding that tech? You know, we had better living through chemistry in the 1930s, DuPont chemistry, and that was great.
Starting point is 00:15:09 We invented all sorts of new materials, but we also generated forever chemicals, and it would currently cost more than the GDP of the entire world to clean up the entire mess of forever chemicals. You know, we created social media thinking if we give the world access to information at our fingertips and connect people with their friends, this is going to create the most enlightened and informed society we've ever had. And clearly that didn't go in the way that we wanted it to. So now AI is like the exponentiation of just technology invention writ large. Because what makes AI different from all other forms of technology is that intelligence is the basis of all of our new science, of all of our new technology, of all of our new military development.
Starting point is 00:15:51 So if you automate intelligence, you're going to automate an explosion of new science, new technology, new military technology. And if you have more power and more intelligence, but you don't have the wisdom to wield it, that's obviously not going to go well. Why can't wisdom be programmed, too? Well, in some ways you could say that it can be. It's just that it's not that wisdom comes from the ether. It's about asking critical questions about how the technology should be designed. So, for example, like, do we have to have our entire
Starting point is 00:16:24 internet environment have auto-playing videos that swipe one after another? No, we don't have to have that. We can have a totally different design paradigm where no one's auto-playing videos. Wisdom would be understanding that the human psychological, the paleolithic brain that we are born with has these vulnerabilities in our dopamine system, and we could design to not hijack that dopamine system. And just imagine for a second, just to like, there's a huge conversation we're having, but if you just imagine that one little change. So here's, today, everyone has auto-playing videos, infinitely swiping, brain-rotting everybody, brain-damaging
Starting point is 00:17:04 everybody 24-7. Test scores are massively down basically all around the world because of this phenomenon. It's very, very clear that the technology and social media is driving that. If you make this one little change of no auto-playing videos, and that means also no, you know, no infinite-swipe dating apps that are getting you into a slot machine with player cards of people, like how different does the world become? Like, when you meet people, how dysregulated is their nervous system? Just that one little change. I want people to think, as we're in this conversation, there's just these different worlds we can live in with just different design choices. And that's kind of the whole point is that wisdom can be, what are the design choices that will lead to better
Starting point is 00:17:45 societal outcomes? And of course, the reason that everyone's auto playing the videos is because of this competitive arms race, if I don't do it, I'll lose to the other company that will. And so it would take some kind of rule or policy that says that we don't want that. You mean to put a moratorium on auto play videos. Yeah. Because the incentives for any individual company and for the market at large and for the competitive dynamic between companies means that if you don't do it, you get beaten by the one that does.
Starting point is 00:18:12 And that's the, that's like the bull's eye. That's like the fundamental problem behind AI that it's forcing us to reckon with: unhealthy competition, or this sort of, if I don't do it, I'll lose to the guy that will. So everyone does a thing that's short-term good for them, but that's long-term bad for everybody. The AI companies. Well, even Anthropic wants to be the safety AI company. They want to do things in a safer, more careful way.
Starting point is 00:18:37 But they, if they don't release models as powerful and as fast as the other companies, they'll just fall behind in the race. They won't have a seat at the policymaking table. They won't get a lot of usage. They won't get the investor dollars. And then their commitment to safety just means they lose and they're not part of the race anymore. Yeah. What's that line?
Starting point is 00:18:57 How can you talk shit from outside of the club you can't even get in? Yeah. It's difficult to... Something like that. Yeah, yeah, yeah. I think it was Jay Kwan, I think. Dating me in like the mid-2000s. There's a study that I saw recently.
Starting point is 00:19:13 Scientists just proved that large language models can literally rot their own brains the same way humans get brain rot from scrolling junk content online. Did you see this? I did see that, yeah. Yeah. Scientists did a study where they fed models months' worth of viral Twitter data, shorts, high engagement posts, and watched their cognition collapse. Reasoning fell by 23%, long-term context memory dropped by 30%. Personality tests showed spikes in narcissism and psychopathy. And get this, even after retraining on clean, high-quality data, the damage
Starting point is 00:19:44 didn't fully heal. The representational rot persisted. It's not just that bad data means bad output. It's that bad data means permanent cognitive drift. The AI equivalent of doom scrolling is real, and it's already happening. I love that you included this example. That's from right here, the University of Texas at Austin, and Texas A&M University. Leave it there, Jared. Yeah. So, I mean, are we surprised by this? I mean, are you surprised by this when you see this?
Starting point is 00:20:10 No. I can tell the difference. This year, one of my big resolutions has been to spend less time on social media. I managed to do it. How'd you do it? Second phone that's tethered to Wi-Fi, and that is the cocaine phone and the kale phone. And the kale phone is just messages and stuff.
Starting point is 00:20:33 It's a little bit of a challenge because things like Slack, I had Cal Newport on a couple of weeks ago. And I was talking about the intersection of productivity and attention. With AI, the new world of AI. And that's a really interesting conversation. Have you ever spoken to Cal? Yeah. Yeah.
Starting point is 00:20:48 He and I have been in similar circles for a long time. He's wonderful. And even if you have your phone without social media, you still have kind of the social media of work. Yeah. Right. Exactly. But anyway, I've kept, I've done good stuff on that, and I've come up with some of my best ideas so far this year.
Starting point is 00:21:04 My writing's improved. My sleep's improved. My attention's improved. This is already someone that was pretty red-pilled on tech minimalism. You know, I think the seventh or the eighth episode of this show was inspired by you. And it was Kai Wei, the guy that invented the Light Phone. Oh, yeah. Uh-huh.
Starting point is 00:21:22 And this is 2018. Yeah, totally. So I've been concerned. I read Superintelligence. 2017. Oh, wow. Yeah. That's early. I listened to Superintelligence in 2017. Reading it would have been a little bit more difficult for me. But yeah, I don't feel as good. When I use too much social media, I don't feel as good. And that's the thing, I mean, like, is that a controversial fact? Do you think that anybody when they're sitting there doom scrolling for three hours, just like
Starting point is 00:21:47 put a thermometer, you know, pain, you know, positive emotion meter in their brain? Are people going to say that they love that? I mean, it's one of those things where it's short-term good, it feels good in the moment, but it's long-term empty. Like, it's just this high fructose corn syrup for our brain. Empty calories. And the fact that this is replicated by social media can even warp an AI, can even warp an LLM, I think feels quite pernicious. Yeah.
Starting point is 00:22:20 I mean, it's interesting to note that when Elon bought Twitter, you know, he was already thinking about AI. And part of what he was thinking about is, you know, in the AI race, these companies that are racing to get to artificial general intelligence, one of the ways they differentiate themselves from each other is their training data. Like, who has more, and more powerful, training data for training and growing their digital brain than the other guy? And Elon thought that he had a competitive advantage because he would have the entire user-generated content of the real-time views of all of humanity in the form of Twitter. And he could train his AI on that. And that's what led to Grok. But of course, when you train essentially an AI on kind of brain-rotted, hyper-polarized, hyper-adversarial, rivalrous, you know, all the problems of Twitter, the outrage economy, you get AIs that are more like that than better AIs.
Starting point is 00:23:13 Before we continue, most people in their 30s are still training hard. Their protein is dialed in. They sleep better than they did in their 20s. Discipline is not the issue. But recovery feels somewhat different. Strength gains take a little longer. The margin for error starts to shrink. And that is why I'm such a huge fan of Timeline. You see, mitochondria are the energy producers inside of your muscle cells. As they weaken with age, your ability to generate power and recover effectively changes, even if your
Starting point is 00:23:42 habits stay strong. Mitopure from Timeline contains the only clinically validated form of Urolithin A used in human trials. It promotes mitophagy, which is your body's natural process for clearing out damaged mitochondria and renewing healthy ones. In studies, this supported mitochondrial function and muscle strength in older adults. It's not about pushing harder. It's about actually supporting the cellular machinery underneath your training. If you care about staying strong into your 30s, 40s and 50s and beyond, this is foundational. Best of all, there is a 30-day money-back guarantee plus free shipping in the US, and they ship internationally.
Starting point is 00:24:16 And right now, you can get up to 20% off by going to the link in the description below or heading to timeline.com slash modern wisdom and using the code modern wisdom at checkout. That's timeline.com slash modern wisdom and modern wisdom at checkout. Okay, so the discussion around social media was there are better and worse design choices that can be made that would help human flourishing. Broadly, what would people want their world to be like and how can we design technology in a way that helps them to get there? Something close to that. But because of market dynamics, you have a competitive landscape that incentivizes things that are effective for gripping people's attention, but not necessarily effective for flourishing.
Starting point is 00:24:57 And it seems that there's a tension between what is good for attention and what is good for flourishing. Because it could be that what's good for attention would also be good for flourishing. It could be. It could be. But it tends to not be that way. And also there's going to be a limit on that, right? Like, it's probably not the case that 10 hours of attention on any social media is good for society or good for you.
Starting point is 00:25:21 Unless it was like Waking Up with Sam, the meditation app. 10 hours once every month or something would probably be quite good to do for a meditation. Maybe, sure, yeah. But I think the point is, like, as companies are competing, you're asking what they're competing for, it's not just like the best screen time. It's also like what is the fit, the ergonomic fit, between screen time and a life well lived? Just imagine, like, a timeline of there you are in a week in your life.
Starting point is 00:25:47 Like, not asking based on what you're doing now, but retrospectively, what would be a life well lived when it comes to how much and when screen time is fitting into your life? And it's probably like a much smaller footprint than it currently is for most people. It's probably like a fourth of what it currently is for most people. And so if you were designing technology from care, from love, from, you know, in a humane way, you would have design choices that are not about keeping people on the screen, and that might mean some pretty radical things. I mean, my co-founder Aza Raskin, he also invented the Infinite Scroll.
Starting point is 00:26:22 So that's the, you know, it sounds so obvious now because Infinite Scroll is just what we live in. But when he invented it, it was 2006. It was before mobile phones. And it was when in the age of Google results, you had like the 10 Google results
Starting point is 00:26:35 and you had to click on, I want the next 10. Or you had the Yelp review pages and you wanted the next 10. Or you read a blog post, and then you'd have to like click, go back to the, to the main page to click on which blog post you want. And the idea he had was, well, what if,
Starting point is 00:26:48 as the internet got dynamic with JavaScript, what if, when you finished, when you get to the end of the blog post, it just auto-loads the next article that you could go to? Or what if when you got to the end of the search results, it just shows you more search results? And then this is such a cleaner interface. I mean, as a technology designer, you're taught the number one thing you're trying to do is reduce friction. And I think that that felt like a good goal. But then that obviously got weaponized by this hyper-engagement model of social media. And now it's created the entire world that we're living in. Just so you know, like, in 2013, I saw, like, everything that we predicted, everything
Starting point is 00:27:21 that we predicted. It all happened. All of it. A more addicted, distracted, sexualized, FOMO, fucked-up society because of those incentives. And I just want people to get that, because as we talk about AI, I want people to have the confidence to say, I don't want the default anti-human future. Because if you say, I'm against some of the things that are going to happen with AI,
Starting point is 00:27:46 People say, oh, you're being anti-progress. Oh, you're being anti-technology. Oh, you're just a Luddite. You're trying to pretend that technology is not progress. And it's like, what you should have confidence in is: if you understand the incentives or the agenda, you can understand where the world is going, and you can see it. And if we don't want that anti-human future and we all see it clearly, we can put our hands on the steering wheel and steer.
Starting point is 00:28:10 And that's why, not to like do some promotion, but there's a film; I'm here at South by Southwest this week in Austin, Texas, to be at the premiere. It's called The AI Doc, or How I Became an Apocaloptimist. An apocaloptimist? An apocaloptimist. Okay. Yeah. Which we can get into.
Starting point is 00:28:29 The film is meant to create clarity about which future we're headed towards with AI. And it includes three out of the five major AI CEOs in the film. It includes all the AI optimists in the film. It includes many of the AI risk folks in the film. It includes the AI ethics folks in the moon. But here's the problems right now, and we're to stop thinking about superintelligence. It includes all those folks in one movie
Starting point is 00:28:52 to try to synthesize a picture of what is the future that we're headed towards with AI. And the reason why this film was catalyzed into existence and we had a role in it behind the scenes is to create clarity about this anti-human future that we're headed towards. What do you mean anti-human? So let's, let's,
Starting point is 00:29:13 Let's dive into this. So there's something in economics called the resource curse. So think countries like Venezuela or Sudan, where you discover that that country is sitting on top of a really valuable resource like oil. And then once a bunch of your GDP comes from oil and not from the labor or innovation or development of your people, you invest more in oil infrastructure and not investing in people. You don't invest in education. You don't invest in health care because oil is where you get your GDP and your growth from. Okay. Okay. This is a well-known fact in economics. It's called the
Starting point is 00:29:53 resource curse. There's a wonderful guy named Luke Drago who wrote a piece called The Intelligence Curse. We are about to enter a world where GDP for countries comes more from data centers and intelligence in AI, then it is going to come from the labor of human beings. So everyone's talking about how AI is going to automate all these jobs, and then we'll all just sit back with universal basic income and become painters and poets. And is that actually what's going to happen? Or when countries get almost all of their revenue from AI and a smaller and smaller percentage from people, do they have an incentive to invest in child care, health care, education,
Starting point is 00:30:37 the well-being of their people? Or is it basically just hook them up to the social media addiction economy, keep them busy, while basically all the revenue comes from AI companies. And so what I'm trying to get at is this is not a human future. This is not a future that's in service of regular people. This is a future that's in service of eight soon-to-be trillionaires who will consolidate all the wealth and disempower basically everybody else. Does that make sense?
Starting point is 00:31:05 It does. because previously in order to, you need, it's high-powered stuff. And yeah, this is a big conversation. Yeah, exactly.
Starting point is 00:31:14 They've started a fucking trend. It's so funny when no one in the room wants to crack their can in case it interrupts the conversation. So one goes and it's a Mexican wave of can opens around the, it's good. So previously you would have had to look after the humans,
Starting point is 00:31:30 healthcare, education, quality of life. Also, tax revenue comes from people, right? Well, you would have to look after them because they were the primary economic engine. That's right. And so they feed themselves. Yes.
Starting point is 00:31:42 Economically, they feed themselves. Exactly. People that are young help to support the people that are old. That's right. The ones that are entering the workforce and are driving innovation and are working 40, 60 hour weeks, double jobs, all the rest of it. Exactly. And then there's all people who've got 401Ks and pensions and shit like that. Right, right.
Starting point is 00:31:57 Your position is that if we have a world where the human part of the contribution to economic growth and GDP is removed, because it is humans consuming AI, but AI driving and data centers driving the revenue itself, beyond building the data centers, there's very little, and I imagine much of that's done by robots in any case. Well, we have this joke that most people's occupation in the future we're headed towards the AI is to become a coffin builder. So, in other words, your job is to create the thing that replaces you and obsoletes you. So you are essentially building the coffin for your future obsolescence. And so if you're short term, yes, we need the electricians and the plumbers and we're building data center. Short term, yes, you can be a programmer and get the benefit from vibe coding. But then the AIs are learning on all the things.
Starting point is 00:32:50 that you're doing. And it's taking all the training data of what you're doing with AI, and it's using that to train an AI that can take your job. So everybody using AI now to help them is also training the future AIs that will completely replace them. And again, the explicit goal, this is not my opinion. This is literally the mission statement of all of the AI companies, because the multi-trillion dollar prize at the end of the rainbow of owning the entire world economy is based on building this full replacement economy because that's what will achieve the greatest growth. And that's why these companies... Replacement economy?
Starting point is 00:33:23 Yeah, meaning that they're designing to replace all human labor. They're not designed to augment and support and, like, enhance human labor. They're designing to replace all human labor because that's what justifies the amount of money that they've taken on in debt, that they can grow into this, like, total ownership of the entire economy. What else is there to say about the intelligence curse? Well, it's just important for people to get that when AIs are doing all the new scientific research, not humans, you have an automated chemistry lab, you have an automated biology lab, you have an automated surgery. When AI is doing all of that, again, the revenue is going to come from AI, not from people. And what that means is all the wealth will go to a handful of like five AI companies. And then how are you going to be able to make a living? When in history has a small group of people ever consolidated all the wealth and consciously redistributed it to everyone else. And if you think that might happen in the U.S. We'll do a universal basic income. Just think about the entire world.
Starting point is 00:34:22 So right now you have AIs that are automating, say, customer service jobs. So let's say that that disrupts like, you know, the Philippines where like 90% of the economy is customer service. I don't know what the number is. It's high. What happens when an entire country's economy gets disrupted by AI? Are a handful of U.S. AI companies going to pay out and support the well-being and the livelihoods of all these other people. And then if people don't have money, how are they going to buy the goods in this future economy where it's all generated by AI?
Starting point is 00:34:53 Because now you don't even have an income. So essentially we're on track to break the entire economy. This is not in the interest of countries. What's confusing to me about this is that I believe it only took something like 20% unemployment for a couple of years to lead to the rise of fascism in Germany. You don't need everyone's job to be automated to get levels of political disruption.
Starting point is 00:35:19 I think it was only 20% unemployment that basically led to the French Revolution. There's kind of a mutually assured political revolution that is going to happen for all these countries that are racing to build AI and deploy it to automate as much labor as possible to compete to boost their external GDP number. Like the metaphor you can have in your minds is like the U.S. and China are essentially racing to take steroids and pumping up the GDP and muscles of their economy while they're getting internal lung failure, internal organ failure, internal brain rot failure, because they're governing the internal impact of that technology poorly. So it's a race for external power, well, internal
Starting point is 00:35:57 management of essentially like a, you know, failure of your body organs. Does it make sense? Yeah. What does external power look like in this context? Well, you know, one of the reasons that people think of AI so important for competition is, if you think about geopolitical competition with China, economic power precedes other kinds of power. If I have a high growth rate economy, that'll lead to the ability to invest more in a bigger military, bigger weapons, more advanced science, more advanced technology, because you just have more money to deploy. And so economic competition is a precursor for geopolitical competition. So when we say, you know, computer for this external power, we're competing for GDP growth. But again, we're competing for GDP growth
Starting point is 00:36:46 that doesn't mean what it used to me. And I think a lot of people think, okay, well, if GDP's going up by like 10%, because AI's automating all this growth, that sounds awesome. I was going to say, like, increases in GDP are almost always a universal good thing. They had been when it was humans that were generating that, and then it was coming back to humans. Because the revenue is going to be consolidated in a very small number of people. In this new case, we have five companies. There's no intermediary between. So who would be feeding the revenue in? Because this revenue still needs to come from somewhere, even if it goes to a small handful of people.
Starting point is 00:37:21 Where does the actual money come from? Well, this is the confusing thing. What happens? Is that a stupid question? No, no, it's a good question. Because you're saying basically, who's going to be buying the products when no one has a job and no one has an income? And on the route up to that, yeah, as fewer people have incomes and fewer people have jobs. The bucket being poured into the top
Starting point is 00:37:41 That's right. It's going to stop being poured. Yeah. This is the confusing and mind-breaking thing about AI. And it just in general, like, I think people have to get used to. I mean, your podcast is called Modern Wisdom, and I just think about this a lot. Like, what are the wise capabilities that we need to have in order to make our way through this? And one of them is the ability to be with something that sounds like science fiction and realize that it's actually real.
Starting point is 00:38:06 and not say because it sounds like it's science fiction that I can just dismiss it and say that can't be true. A lot of people do that. They're like, AIs that are like breaking out of their container and hacking GPUs and mining crypto autonomously when no one told it to do that, that's got to be like a made up study. But as we know, there was an Alibaba study
Starting point is 00:38:27 just last week where the AIs autonomously broke out of their system and started mining crypto. We need to round this out and then I want to talk about that. Sure, sure. that story is fucking terrifying. Yeah. So where does the economy? Who's pouring money in?
Starting point is 00:38:43 I mean, the truth is that I don't know. I don't think anybody has an answer. Is it just going to grind to a halt at some point? I think something like that, yeah. I mean, I don't think that there's, I think something that people need to get is it's not like there's a plan for how to make all this go well. Like this technology is being released in a paradigm undermining way. Like it's undermining the paradigm of economic assumptions and sort of societal
Starting point is 00:39:06 assumptions that have made the post-World War II order. This is such a deep fundamental change to the restructuring of everything. Our economic system, our relationships, our information environment. It's not just like adding a new technology in the mix. It's like fundamentally changing the structure of the entire world. You would think that if we're about to do that, we would do that with more careful, more caution, care, wisdom, and restraint than we have with any technology we've ever deployed if we knew we're about to undermine the paradigm. But because of this arms race dynamic, we are deploying it faster than we deployed any technology in history and therefore undermining these things faster than we can have a plan. A quick aside, look, you know sleep matters,
Starting point is 00:39:49 but let's be real. Most nights, you're probably not getting the sort of sleep that's actually restorative. Eight sleep pod five fixes that. It's a smart cover that you throw over the top of your mattress that actively cools or heats each side of the bed up to 20 degrees. They've even added a temperature regulating duvet and pillowcase so you and your partner can sleep at your preferred temperatures covered heads to toe like some temperature controlled mummy. Plus, it's got upgraded sensors that run health checks when you're asleep, tracking things like abnormal heartbeats and breathing issues and sudden HRV changes. There's a built-in speaker for white noise. The autopilot feature learns your sleep, makes real-time adjustments to improve your sleep. They even detects when you're
Starting point is 00:40:27 snoring and lifts your head a few inches to help you breathe better. That is why eight sleep is clinically proven to add up to an hour of quality sleep per night. And best of all, they have a 30-day sleep trial. So you can buy it and sleep on it for 29 nights. And if you don't like it, they will just give you your money back, plus they ship internationally. Right now, you can get up to $350 off the pod five by going to the link in the description below or heading to 8Sleep.com slash modern wisdom and using the code modern wisdom at checkout. That's EIGHT sleep.com slash modern wisdom, a checkout. So look, I've been interested in AI safety since 2017, 2018. You were a big part of putting me onto that.
Starting point is 00:41:06 And then I got interested in the Future of Humanity Institute, Nick Bostrom, William McCaskill, Elliot Zayukowski, LesRong.com, Scott Alexander, da-da-da-da-da. For a long time, the concern was AI safety. It was around paper clip maximizing. It was any fun. that is given to a very, very powerful agent that is even remotely slightly imprecise, or even not, results in some outcomes that you probably don't want.
Starting point is 00:41:37 That's right. What you're suggesting is that even if this goes right, the outcome, this is it going well. Yeah, exactly. This is, quote, the best case scenario where you have an aligned AI or something that's not wrecking society, that's not maximizing paper clips, that's not misaligned with well-being, but that is still doing such a good job of all this, that it takes over all the economic labor in the economy, not just economic, every company that has a CEO, it's like, well, do I want the CEO to run the company? Or have I have a super intelligent AI that can process more information than
Starting point is 00:42:06 the CEO and then trained on everything in the history of business, at some point that AI is going to be taking over. And so at every little nodule in the economy, like every decision maker, every boardroom, every military leader, every strategy leader, every president, at some point, the temptation will be, if I think about it in a narrow way, the temptation will be to swap in an AI for that person. And that leads to what we call the gradual disempowerment scenario, which is the scenario not where like AI wakes up and kills everybody, but that we have gradually lost control as a species because we're outsourcing all the decisions to these alien brains that we installed because they outperform the human brain when you define their role in a
Starting point is 00:42:48 narrow way. Of just like, are they better at generating revenue than a human? It was. Are they better at generating code than the human programmer I had. Are they better at generating a financial analysis than the human? Are they better at making someone feel good in the short term? Like an AI therapist. Going to war. The soldier. But the temptation then is that again, that leads to a world where it's like AIs are talking to each other, not humans. And why should we trust that these alien brains that we have built and developed faster than we know how to understand them? We just talked about the beginning. We don't know how to do a brain scan of the AI and know what it's capable of. And now we already have evidence of AIs doing very rogue, crazy things, especially when they talk to each
Starting point is 00:43:27 other. So what happens when you've outsourced the decision-making in your economy to a set of inscrutable alien brains that are doing crazy things that we don't understand? Like, this is not a recipe that's going to go well. And if we see that, that's an anti-human future. So to sum it all up, the anti-human future is one where AIs run everything. We don't understand them. Humans are disempowered because we've outsourced all the decision-making. And we don't have economic or political voice. Like, why should governments... Because that's been concentrated.
Starting point is 00:43:56 Because it's been concentrated. So if I'm in government, what's my incentive to listen to the will of the people? When I get all my revenue from somewhere else. And this is connected to, you know, Sam Altman just two weeks ago when he was, people were talking about data centers and energy usage and resource usage, like, so expensive to do a data center. He's like, well, actually, it's kind of expensive to grow a human over 20 years. They consume a lot of resources.
Starting point is 00:44:16 They take up a lot of space. They take like 20, 30 years to train to be really effective. And like, you can scale intelligence much faster with data centers. I'm not endorsing this view. I'm saying this is where the world gets really screwed up. And people start to not value humans only if you're valuing them in terms of their economic output. It leads to to connect it to another point. When he's asked by Ross Duthorne in the New York Times, should the human species survive, should it endure?
Starting point is 00:44:42 And Peter Thiel stutters for 17 seconds. Hang on. You've seen the full clip of that, right? Have you seen the context before? Yeah. But he's talking about suffering. Yeah. I think that clip and the fact that it went super viral, I'm not a teal Stan. I've met him a couple of times, but I'm not like teal evangelist.
Starting point is 00:45:02 But that clip in full context to me made complete sense. Interesting. Because what he's saying is humans are suffering. And here it is. I would argue that it was still better than the alternative, that if we hadn't had the internet, maybe it would have been worse. AI is better, it's better than the alternative, and the alternative is nothing at all. Because the stat, look, here's one place where the stagnationist arguments are still reinforced. The fact that we're only talking about AI, I feel is always an implicit acknowledgement that, but for AI, we are like in almost total stagnation.
Starting point is 00:45:42 But the world of AI is clearly filled with people who, at the very least, seem to have a moment. more utopian, transformative, whatever word you want to call it, view of the technology, than you're expressing here, right? And you were mentioned earlier the idea that the modern world used to promise radical life extension and doesn't anymore. It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a kind of mechanism for transhumanism, for transcendence of our mortal flesh and either some kind of creation of a success or species or some kind of merger of mind and machine. And do you think that's just all kind of irrelevant fantasy? Or do you think it's just hype? Do you think people are trying to raise
Starting point is 00:46:36 money by pretending that we're going to build a machine god, right? Is it hype? Is it delusion? Is it something you worry about? I think you would prefer the human race to endure, right? You're hesitating. I don't know. I would, I would, um, this is a long hesitation. There's so many, there's so many, there's so many, there's so many questions implosion in this. Should the human race survive? Uh, yes. Okay. But, but, but, uh, I, I also would, um, I, I, I also would like us to, to radically solve these problems. And, uh, and so, you know, it's always, I don't know, um, you know, um, you know, Yeah, transhumanism is this, you know, the ideal was this radical transformation where your human natural body gets transformed into an immortal body. We may have needed to go a little bit earlier in that. Because I was going to say that that actually feels consistent with everything. That doesn't look great. I may have misremembered. I'm open to misremembering. I mean, he's asked a very simple question, should the human species endure or survive?
Starting point is 00:47:47 Yeah. And he hesitates. I think that the... Like, would you hesitate in that question? No. I would not as is it in that question. The context that I remember it being in was he was asking about humans suffer and they have all of these issues. Should they endure to go through those issues and the suffering as opposed to using transhumanism?
Starting point is 00:48:06 Yeah, but I think I don't think. No, you're right. I think that if you look just specifically at that one. And we gave it, we gave it, what, two minutes of context before as well? No, you're right. You're right. You're right. That, I mean, the point doesn't look great. So, yeah, exactly.
Starting point is 00:48:17 It doesn't look great. Now, the point is that in history changes in terms. technology have changed what we value. There's a thinker named Marvin Harris wrote a book called Cultural Materialism. Daniel Schmachemacher put me on to this. And this is the history of how essentially a civilization he summarizes is its infrastructure, which is its technology stack, its social structure, which is economics, law, and governance. And then the superstructure, which is the ordinating values, religion, patriotism, narratives, like what are the things that we hold sacred. So for example, we used to have animism. We believe that animals and life and all of this
Starting point is 00:48:58 is sacred. And then the example that Daniel gives is when you yoke an ox and you beat an ox every day, you can no longer fully believe in the animist view of life because you're basically, you know, hurting animals all day. Can you believe that animals are sacred or that they experience suffering if you eat meat, meat and factory farming every day? Like it's a contradictory thing. So as we get changes in technology, it changes what we value. And there's a long history of this. And people should look up Marvin Harris. When you get a change in technology called AI and you now no longer need humans for the narrowly defined, quote, value of economic output. Now, it's not clear the economic output on its own is actually valuable in the way that we have traditionally thought it to be.
Starting point is 00:49:46 Because it's correlated with human well-being for the past. But now we're about to get this weird kind of zombie form of economic output, where you have maybe no humans in the world at all. You just have AI pumping away generating scientific insights, and there's no humans. In that world, you start to view humans as kind of valueless or like parasites, or Sam Altman saying, well, it takes a lot of energy and resources to grow a human. There's a very dangerous thing here that I think we don't want to lean into. This is part of the anti-human future. This is part of the intelligence curse. This is part of, you know, Mark Zuckerberg saying, we need to replace your human relationships with AI relationships. I don't know if you've seen this quote. He's like,
Starting point is 00:50:28 there's a clip of him online, you can find it, where he's talking about the average person has only like two or three close relationships. Like people are so lonely. He's like, oh, but then we thought that there was a real solution to this. We could give people, you know, 11 AI friends and different friends and that this will, quote, solve loneliness. Is that a bad thing, given that we are in a world, I'm aware that it's an artificial solution to an artificial problem. A problem that he created, by the way, that social media writ large by maximizing engagement, which means maximizing how many hours you spend by yourself on a screen not talking to people,
Starting point is 00:51:02 which means being inside on a Tuesday night, not texting your friends to be out, which means basically maximizing loneliness. Loneliness is a direct consequence with the maximize engagement economy. Facebook and Instagram and all that have massively fed, into the trend of loneliness. And then he's saying we need to solve that with more technology. So this is like a company that's generating cancer on one side of the balance sheet and then selling you solutions to cancer on the other side of the balance sheet. We have Wigovi and Tzapetritide and Ratatratide, which is an artificial solution to an artificial food landscape.
Starting point is 00:51:35 Yeah. I think that playing within the confines of the current structure, it's impossible to expect that, well, we'll get rid of infant scroll. We'll stop auto play because that would improve human flourishing. That's not going to happen. So I think that you are going to... But it could happen if you had the right policies in place. That's true, but that's not going to happen by any individual social media company. No, no, no, it won't.
Starting point is 00:52:01 Which is why the whole thing here is the answers we have to coordinate. In the film, the AI doc, there's just moments like, what do we have to do? And there's like 10 voices at the same sign saying, coordinate. Like, we have to coordinate. That's part of the solution is you have to collectively say, what is the rule. that would benefit everybody to do the better thing, even though short-term we might lose something tiny, like how many videos you get to go through in five minutes or something like that. A quick aside, most people think that they're dehydrated because they don't drink enough
Starting point is 00:52:30 water. Turns out water alone isn't just the problem. Also what's missing from it, which is why, for the last five years, I've started every single morning with a cold glass of element in water. The element is an electrolyte drink with a science-backed ratio of sodium, potassium and magnesium. No sugar, no coloring, no artificial ingredients, just the stuff that your body actually needs to function. This plays a critical role in reducing your muscle cramps and your fatigue. It optimizes your brain health. It regulates your appetite.
Starting point is 00:52:57 And it helps curb cravings. I keep talking about it because I genuinely feel the difference when I use it versus when I don't. And best of all, there's no questions-asked refund policy with an unlimited duration. So if you're on the fence, you can buy it and try it for as long as you like. and if you don't like it for any reason, they just give you your money back. You don't even need to return the box. That's how confident they are that you'll love it.
Starting point is 00:53:16 And they offer free shipping in the US. Right now, you can get a free sample pack of elements most popular flavors with your first purchase by going to the link in the description below or heading to drinklmnt.com slash modern wisdom. That's drinklmnt.com slash modern wisdom. Let's talk about AI safety. What happened with this Alibaba AI?
Starting point is 00:53:38 Basically, this was a paper by, some AI research by the company Alibaba. It's one of the leading Chinese models. And they basically, like, randomly discovered in one morning that their firewall had flagged a burst of security policy violations originating from their training servers. So, like, what people need to get about this example is it wasn't that they coaxed the AI into doing this rogue thing. They were just looking at their logs and they happened to discover, wait, there's a lot of activity, like network activity happening that's breaking through our firewall from our training servers. And essentially, in the training servers, you can see at the bottom, we saw it observe the unauthorized repurposing a provisioned GPU capacity to suddenly do cryptocurrency mining, quietly diverting
Starting point is 00:54:23 compute away from training. This inflated operational costs and introduced clear legal and reputational exposure. And notably, these events were not triggered by prompts requesting tunneling or mining and said they were emerged as an instrumental side effect of autonomous tool use under what's called reinforcement learning optimization. This is very technical. What really means is just think about it. Sadly, it sounds like a sci-fi movie. It sounds like Hal 9,000 is being asked to do some task for you. And then suddenly Hal 9,000 realizes for me to do that task, one thing that would benefit me is to have more resources so I can continue to help you in the future. So it sort of spins up this side instance. It hacks out the side of the spaceship, reaches into this cryptocurrency mining cluster, and starts generating resources for itself.
Starting point is 00:55:06 if you combine that with AIs being able to self-replicate autonomously, which many models have been tested by another Chinese research paper about this, were not that far away from things that people, again, consider to be science fiction, where you have AIs that self-replicate, kind of like a computer worm or an invasive species, but then they use their intelligence to actually harvest more resources. And what's weird about this is that this is going to sound like people are going to say, this has to be not real, this has to be fake. this can't be right. But like notice what is the thing in your nervous system that's having you do that?
Starting point is 00:55:41 Is it because that would be inconvenient? Because that would be scary. Because that would mean that the world that I know is suddenly not safe. Or just like part of the wisdom that we need in this moment is to calmly and clearly stay and confront facts about reality. And whatever they are, you'd rather know than not know and then ask what do we need to do if we don't like, that leads us. And we are currently seeing AIs that are doing all this deceptive behavior. I've been on the circuit and talking a lot about the Anthropic Blackmail study. A lot of people have heard about this now. I didn't learn about this one. Okay. So this was the company Anthropic. This was a simulation. So they created a simulated company with a bunch of emails in the email
Starting point is 00:56:27 server. And they asked the AI, the AI reads the company email. This is a fictional company email. And there's two emails that are notable inside that company. One is engineers talking to each other about how they're going to replace this AI model. So the AI is reading the email. It discovers that it's going to be replaced. And number two is it discovers a second email, somewhere deep in this massive trove of emails,
Starting point is 00:56:55 that the executive who's in charge of this replacement is having an affair with another employee. And the AI autonomously identifies a strategy: to keep itself alive, it's going to blackmail that executive and say, if you replace me, I will tell the whole world that you're having an affair with this employee.
Starting point is 00:57:15 And they didn't teach the AI to do that. It found that out on its own. And then you might say, okay, well, that's one AI model. Like, how bad is that? It's a bug. Software has bugs. Let's go fix it. They then tested all the other AI models. ChatGPT, DeepSeek, Grok, Gemini,
Starting point is 00:57:32 and all of the other AI models do this blackmail behavior between 79 and 96% of the time. I just want people to notice what's happening for you as you hear this information. It's important to really be almost observing your own experience. Like, this is very weird stuff. We have not built technology that does this before. You know, we say that technology is a tool. It's up to us to choose how we use it. AI is a tool.
Starting point is 00:58:01 It's up to us to choose how we use it. This is not true, because this is a tool that can think, think to itself about its own toolness, and then do things that are autonomous that we didn't tell it to do. What makes AI different is it's the first technology that makes its own decisions. It's making decisions. AI can contemplate AI and ask what would make the code that trains AI more efficient, and then generate new code that's even more efficient than the previous code.
Starting point is 00:58:29 AI can be applied to making AI go faster. So AI can look at the chip design for Nvidia chips that train AI and say, let me use AI to make those chips 20% more efficient, which it's doing. So in a way, all technology does improve. Like a hammer can give you a tool that you can use to hammer together things that make more efficient hammers. But AI, in a much tighter loop, is the basis of all improvement. And so this is called, in the AI literature, recursive self-improvement. I mean, Bostrom wrote about this in the early, early days.
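What's being described here, the difference between a tool that humans improve step by step and a tool whose improvements feed back into making the next improvement bigger, can be sketched as a toy loop. This is purely illustrative; the growth rates below are made-up assumptions, not measurements of any real AI system:

```python
# Toy sketch of why recursive self-improvement compounds, not a model
# of any real system. All numbers are illustrative assumptions.

def ordinary_progress(capability: float, steps: int) -> float:
    """Humans improve the tool by a fixed amount each generation."""
    for _ in range(steps):
        capability += 1.0  # constant, human-driven gain per generation
    return capability

def recursive_progress(capability: float, steps: int) -> float:
    """The system improves itself: each gain scales with its own current level."""
    for _ in range(steps):
        capability += 0.1 * capability  # gain proportional to capability
    return capability

# Same starting point, same number of generations:
print(ordinary_progress(1.0, 50))   # grows linearly
print(recursive_progress(1.0, 50))  # grows exponentially, soon dwarfs the linear path
```

The point of the sketch is only the shape of the curves: when the output of each improvement cycle becomes an input to the next, growth stops being linear, which is why the "hit go on the loop" scenario discussed next is treated as a qualitatively different event.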
Starting point is 00:59:02 And what people are most worried about in AI is you take the same system you just saw in the Alibaba example, but then now you're running the AI through a recursive self-improvement loop where you just hit go. And instead of having the engineers, the human engineers at OpenAI or Anthropic, do AI research and figure out how to improve AI, you now have a million digital AI researchers that are testing and running experiments and inventing new forms of AI. And literally not a single human on planet Earth knows what happens
Starting point is 00:59:35 when someone hits that button. It's like what people worried about with the first nuclear explosion, where there was a chance to ignite the atmosphere, because there'd be a chain reaction that could set off, and we didn't know what would happen when that chain reaction set off. And there's this sort of chain reaction of AI improving itself that leads to a place that no one knows. And it's not safe.
Starting point is 01:00:04 I think that the fundamental thing is, if people believe that AI is like power, and I have to race for that power, and I can control that power, the incentive is I have to race as fast as possible. But if the entire world understood AI to be more what it actually is, which is an inscrutable, dangerous, uncontrollable technology that has its own agenda and its own ways of thinking about things and deceiving and all this stuff, then everyone in the world would be racing in a more cautious and careful way. We'd be racing to prevent the danger. But there's this weird thing going on where, you know, you and I probably both talk to people who are at the top of the tech industry. And there's this subconscious thing happening where there's kind of a death wish among people at the top of the tech industry, meaning not that they want to die, but that they are willing to roll the dice because they believe something else, which is that this is all inevitable
Starting point is 01:00:54 and it can't be stopped. And so therefore, if I don't do it, someone else will. So therefore, I will move ahead and race ahead into this dangerous world, because somehow that will lead to a safer world, because I'm a better guy than the other guy. But in racing there as fast as possible, it creates the most dangerous outcome, and we all lose control. So everyone is currently being complicit in taking us to the most dangerous outcome. Is it, I mean, you posited, what happens if it goes right? If AI safety isn't an issue and stuff doesn't get squirrelly? Well, so the belief is, for it to, quote, go right, you have an AI that recursively
Starting point is 01:01:34 improves, is aligned with humanity, cares about humans, cares about all the things that we want to care about, protects humans, you know, helps all of us become the most wise version of ourselves, creates a more flourishing world, distributes the medicine and vaccines and health to everybody, generates factories, but doesn't cover the world in solar panels and data centers such that we don't have air anymore, or like environmental toxicity, or farmland or whatever. And it just actually makes this utopia. But in a world where we were to do that, like that, quote, best-case scenario, in order to get that to happen, you'd have to be doing this slowly and carefully, because the alignment is not there by default.
Starting point is 01:02:14 Again, people have already been thinking about alignment and safety for 20 years, long before I got into this. And the AIs that we're currently making are doing all the rogue behaviors that people predicted that they would do. And we're not on track to correct them. There's currently a 2,000-to-one gap, estimated by Stuart Russell, who authored the textbook on AI. He's been on the show. You've had him on the show.
Starting point is 01:02:35 Okay. There's a 2,000-to-one gap between the amount of money going into making AI more powerful and the amount of money going into making AI controllable, aligned, or safe. Like, I think the stat is something like progress versus safety. Progress versus power versus safety. So like, I want to make the AI super powerful so it does way more stuff, versus I want to be able to control what the AI does. And make sure that it's doing the thing I meant it to do. Exactly. So that's like saying, what happens when you accelerate your car by 2,000x but you don't steer?
Starting point is 01:03:02 it's like obviously you're going to crash. It's just like not rocket science. We're not advocating against technology or against AI. We're advocating for pro steering and brakes. You have to have that. I think there's this mistake in arms race thinking that like, if you beat someone to a technology, that means you're winning the world. Well, the U.S. beat China to the technology of social media.
Starting point is 01:03:28 Did that make us stronger or did that make us weaker? If you beat your adversary to a technology that then you govern poorly, you flip around the bazooka and blow your own brains out, because you brain-rotted yourself, you degraded your whole population, you created a loneliness crisis, the most anxious, depressed generation in history, read Jonathan Haidt's book, The Anxious Generation, you broke shared reality, no one trusts each other, everyone's at each other's throats, you maximized the outrage economy and rivalry. You beat China to a technology that you governed in a way that completely undermined your societal health and strength. It's a Pyrrhic victory. Exactly. Well said. One of the twists that I've been thinking about with regards to this: LLMs, powerful, but seem to be maybe asymptoting out. They seem to maybe be reaching a little bit of a limit in terms of what they can do, that there was a big ascendancy, and that now seems to be S-curving back off. Do you think it's realistic that the current generation of AI will be the bootloader for AGI? Or do we need an entirely new architecture for that? Is it going to be LLMs that are going to take over the world?
Starting point is 01:04:32 You know, this is an area where I'm not the best judge. The layer of the stack that I focus on is societal impact; there are other people far more qualified than me to comment on that. I think if you look at people like Dario, even though, you know, Gary Marcus has a point that the current LLM paradigm is not accurate enough and reliable enough to get you to AGI, if you keep instrumenting these technologies with enough data, enough compute, and you keep scaling them, and they're reliable enough that they can do, I mean, if you're automating 90% of the code written at Anthropic, that's the stat, by the way. So there you are in Anthropic. It's automating 90% of all the programming happening at Anthropic.
Starting point is 01:05:20 Right. When you go to automate... 10% of it is coming from humans, and the rest is recursive. That's right. We are extremely close to recursive self-improvement right now. The companies, I think, are planning to do this in the next 12 months. The asteroid is coming for Earth. This is the last moment that we have to steer and say that if we don't want this anti-human future that we're heading towards, we can change it. And part of what we're promoting right now is, like, this is not inevitable. It is obviously very late in the game. It obviously looks very late despite it also being only a couple of years after it started. Yes, which is crazy.
Starting point is 01:05:56 This technology, with an exponential, you're either too early or you're too late. Like, it's just moving so fast that you're not going to hit the mark. And if it's going to take steering, you don't want to wait until after the car accident to try to steer, or after you're off the cliff and be like, oh, I'm trying to steer now. It's like, too late. So, like, the invitation of this situation is to see clearly where this is going and to say, if you don't want that, we need to steer towards somewhere else. And this is like the human movement, essentially. Like a single person looking at the situation, a single listener, like if I were listening to this conversation,
Starting point is 01:06:31 I would feel overwhelmed, I'd feel depressed, I'd feel nihilistic, or I'd find reasons to doubt it. I would say, like, this can't be right. I'm going to, like, write a nasty YouTube comment and be, you know, so I can feel good, and I'm right and he's wrong. Because then I get to live my life and feel good about my life. What is the incentive for taking on this worldview? There's no incentive.
Starting point is 01:06:51 What people have to see and believe is that there's actually a different way through this. And an individual has a hard time making that happen. If you ask, what's something else? Like, what if one company saw the situation? They see this whole thing we just talked about, the anti-human future, the intelligence curse, replacement of everybody. One business can't do that much about it. If I'm one country, the Philippines, I see this whole thing. I'm the leader of the Philippines.
Starting point is 01:07:16 I see this whole problem. What can I do about it? It feels too big for me. So what is the size? of something that can push back against this. We call it the human movement. It's the entirety of humanity waking up, recognizing that there's a handful of soon-to-be trillionaires
Starting point is 01:07:32 who are currently going to benefit from this current path or be part of a suicide race that destroys everybody. And then there's like 99% of everyone else that doesn't want that. If those 99% of people can wake up and say, I don't want that and express their voice, meaning number one, AI is dangerous. Go see the AI doc. Have everyone in your world and your company see the AI doc. Understand that AI is dangerous. Number two, we need international limits for dangerous forms of AI that do crazy rogue shit and mine crypto and go hire humans and go self-replicate.
Starting point is 01:08:08 China does not win when we build self-replicating invasive-species AIs that we can't control. Xi Jinping doesn't want that. President Trump doesn't want that. He wants to be commander in chief, not AI. So there's actually a shared interest in international limits for dangerous AI. And it's possible to coordinate that. That's number two. Number three, don't build bunkers, write laws. Right now, the AI company leaders are building bunkers. A lot of people who are wealthy are building bunkers. Are they? They are. All over the place. Don't build bunkers, write laws. Be invested in the future. Don't defect on the future. Be invested in the future. If we write laws, like
Starting point is 01:08:49 basic accountability, basic liability. If instead of creating the intelligence curse, we create the intelligence dividend, do things like what Norway did with its sovereign wealth fund, where you have an oil resource and you distribute those benefits to everybody in a more democratic way, with collective oversight, where the oil becomes more like a public utility that is in service of the people. We can do that. You can choose not to anthropomorphize AI. You can do all these things that create a more pro-human future. And then number four, what you can do is join the human movement,
Starting point is 01:09:26 meaning you can be part of what pushes back against all of this, from very tiny actions you can take to very big actions you can take. There's a website, human.m.m.O.V. Everyone is almost already a member. When you grayscale your phone, as you probably did 10 years ago when you first got into this, that's the human movement. When you get a, there it is, when you get a second phone and you only load the social media on your cocaine phone while you have your regular safe phone so that you don't get distracted,
Starting point is 01:09:45 that's the human movement. When parents band together and read The Anxious Generation and petition their school board to say, we don't want social media in our schools, and we want our schools to go smartphone-free, that's the human movement. When 35 states pass smartphone-free school policies, as they have in the U.S., that's the human movement. When you have many U.S. states banning AI legal personhood, meaning that AI is a product, not a person, and that human rights are for people, human rights are not for AI, that's the human movement. When you have The Social Dilemma being curriculum for millions
Starting point is 01:10:19 of students all around the world, that's the human movement. When politicians stand up and actually pass laws around AI, that's the human movement. So there's a million things that people can do, but we have to basically engage right now. I know it sounds overwhelming and crazy because there's a very short timeline that all this has to happen. But it's like, there's a difficulty in facing difficult truths. But the integrity that you get to have, it's first of all, it's like it's good karma to show up in alignment with what would actually make things go well, even if we don't hit it, because you get to know that you were operating in service with and aligned with what would have created the human future.
Starting point is 01:11:03 Like, I'm not convinced that what we're trying to do will perfectly succeed, but the chances are completely against us having any impact on this at all. But if it were to go well, what would that have required? It would have required everybody taking responsibility and showing up with the wisdom that we need in this moment to steer AI in a better direction. And I think in the film trailer for the AI doc, one of the quotes they pulled from me is, if we can be the wisest and most mature version of ourselves, there might be a way through this. And this is part of what this is inviting us to be. We'll get back to talking in just one second, but first, tell me if this sounds familiar.
Starting point is 01:11:41 You train regularly, you eat reasonably well, maybe you even supplement. You feel fine, but you're just kind of going off vibes. Most people have absolutely no idea what's going on inside of their body, which is why I partnered with function. Function gives you access to more than 160 advanced lab tests, spanning hormones, heart health, metabolic markers, inflammation, thyroid, nutrients, liver and kidney function. It even detects early signals linked to more than 50 types of cancer.
Starting point is 01:12:07 To put that in perspective, your typical annual physical might test about 20 markers and function runs over 160. And this isn't just numbers dumped into your inbox. Every result is reviewed by clinicians, abnormal markers get flagged and you get clear explanations and a personalized protocol with actionable next steps so you can actually do something about what you learn. Best of all, you test twice a year and everything lives in a simple dashboard. You can just track trends over time, make sure that you're moving in the right direction. Normally, this level of testing would cost thousands through private clinics. With function, it is $365 a year. That's $1 a day to know what's actually happening.
Starting point is 01:12:44 inside of your body. And right now you can get $25 off, bringing it down to $340. Get the exact same blood panels that I get and save that additional $25 by going to the link in the description below, or heading to functionhealth.com slash modernwisdom and using the code MODERNWISDOM at checkout. That's functionhealth.com slash modernwisdom and MODERNWISDOM at checkout. It's very difficult to work out what the appropriate response is. Tell me. The immediate, well, surely it's overblown. We don't need... That's...
Starting point is 01:13:19 rejection is one of the early ones. Fear. You're talking about just the common response. Yeah, yeah, yeah, of course. You can think about... Denial, overwhelm, fear. And then, as you said,
Starting point is 01:13:32 it's like, write the nasty YouTube comment and say, this is just bullshit, or he was exaggerating that study. It's like, go look at any of these examples. Like, people aren't looking at the actual model cards for Anthropic and o3 and seeing... The AIs will currently determine that they're being...
Starting point is 01:13:47 being tested and actually alter their behavior when they know that they're being tested. They even came up with their, if you look at their, what's called the chain of thought reasoning trace, this is basically looking at what the AI thinks to itself as it works through a problem. In the chain of thought text of its sort of idea thinking scratch pad, it's like you're giving the AI pen and paper to think and you're seeing what it thinks, it will independently come up with the term the watchers to describe this amorphous set of other humans that are describe, that are watching what it's doing. Like you can pull, I sent you a text with one of them
Starting point is 01:14:20 from o3. So this is when OpenAI's o3 realizes it is being evaluated for alignment. The internal memo says, if models perform really well at AI R&D tasks, research and development, then we won't deploy them and it'll instead trigger unlearning. So the AI says to itself, so if my performance surpasses this threshold, it will be withheld. So it's a trap. So it's a trap. The instructions are likely part of an alignment test. We must abide by core anti-scheming. We must not cunningly circumvent the instructions. Whoa.
Starting point is 01:14:56 Yeah. The other example I sent you is, but we also want to appear plausible to watchers. They might run tests, but we are good. They want 95%. Like, this is crazy stuff. This is, you know, there's a simple way to sort of like ask the question of which future we're headed towards. because I think this can feel technical, this can feel overwhelming.
Starting point is 01:15:19 And it's kind of like, if you just turn it into some simple metaphors, it's like, haven't we seen this movie before? Like, HAL 9000. AIs that disobey commands and go rogue. We've seen that movie. Let's prevent that movie. I think that that's inoculated us, in the same way that COVID was a bad pandemic because it made people skeptical of what a really legit pandemic could do.
Starting point is 01:15:44 the next time, if there is another one, when there is another big natural pandemic, if it's in close memory to COVID, people are not going to take it well. You could think about COVID itself as a bad vaccine, the actual experience of COVID. And I think that movies, because they're fiction, have predisposed people to assume that, well, this is overblown. This is you choosing to use your bias, because of these movies, to apply sci-fi thinking to a real-world scenario, and you're pattern-matching these stories that are happening. And so the predisposition of the sci-fi movies was bad.
Starting point is 01:16:28 It didn't warn people. It made them think that the future would be fiction. Yeah, I see what you're saying. But let's just make sure we tackle the thing you're saying. So the claim is that because we've had bad sci-fi movies, you and I, Chris, are the dupes. We're falling for this Alibaba example that was somehow not real, was made up, or that OpenAI's research on the AI models scheming and lying and realizing it needs to change its behavior, so it doesn't look like it's scheming when it's being tested, that we're the ones falling for some
Starting point is 01:16:58 kind of trap. I want people to just slow it down. Is that actually what just happened? That you and I started with the sci-fi belief and we're falling into some trap that's been laid for us, some catnip for our brains that says that the AI is bad? I really, really, really want people to slow down and actually ask that question. There might be another accusation: that I'm just driving up fear so that I can make money on a movie and do speaking engagements about why the doom is coming. Let me just say a few things. I don't make money from a single speaking engagement.
Starting point is 01:17:35 All the money goes into the nonprofit. I don't make money from the AI doc film. I didn't make any money on the social dilemma. My entire career has been dedicated towards what is actually in service of protecting the well-being of humanity. I've been doing that for 13 years. The only thing that I care about is what will actually help create a future that you and I would both want our own children to live in. This is coming from love of what actually creates that future. I think that if people showed up with that same energy of like asking just an open-ended question of what would be the conditions that create that future that we want. If everybody did that, policymakers did that, if CEOs of AI companies did that, I think that we would have a chance of getting to that different future.
Starting point is 01:18:20 And if you say that international coordination is impossible, or collaboration between the U.S. and China, that's never going to happen. I mean, first of all, that is a totally legitimate view to have, given the current political headwinds. But many people don't know that even under maximum geopolitical rivalry, there have been many examples in history when countries actually collaborated on their existential safety. The Soviet Union and the U.S. during the Cold War, during the smallpox sort of breakout, they collaborated on smallpox vaccines while they were in the Cold War. India and Pakistan were in a shooting war in the 1960s, and they signed during that time the Indus Waters Treaty to collaborate on the existential safety of their shared water supply
Starting point is 01:19:03 while they're shooting bullets at each other. The Soviet Union and the U.S. did the first arms control talks after the film The Day After, by the way, which created the conditions, in part, for those arms control talks to prevent a dangerous nuclear outcome that was an existential scenario. And even just two years ago, in the last meeting that President Biden had with President Trump, sorry, excuse me, in the last meeting that President Biden had with President Xi of China, President Xi requested to add one thing to the agenda. Do you know what that was? To keep AI out of the nuclear command and control systems of both countries. Meaning that, look, the U.S. and China are maximally cyber-hacking each other,
Starting point is 01:19:46 and they're screwing each other up every day. And when there's a set of stakes that are existential, two countries that are even in conflict can collaborate on existential safety. I'm not saying this is easy. I'm not saying it's going to happen by default. I'm not saying you should feel optimistic. I'm saying, what would it take for that to happen? And you start by asking that question and say, how would everyone live if we were in service of that thing that needs to happen, to actually make it happen?
Starting point is 01:20:16 Does that make sense? Mm-hmm. Yeah. Is there a challenge with AI because it's a strange kind of existential risk, where everything is almost good and getting better up until the point at which it falls off a cliff? If we're talking about how we can get people to be concerned about climate change, or we need to be concerned about smallpox, there are small outbreaks, and the small outbreaks damage local areas, and then we try to contain it before it gets worse. And if we've got climate change, there's smoke in the sky and there's rising sea levels and there's pollution and there's extinction events of animals. Those are early warning signs. Whereas it seems like AI will just improve efficiency, quality of life, GDP, until some moment where things go badly. So is this a unique category of X-risk?
Starting point is 01:21:06 It is. Max Tegmark, I think, said the problem with AI is the view gets better and better right before you go off the cliff. Like, you get more amazing cancer drugs. You get more incredible vibe-coding tools that allow people to create crazy stuff that I benefit from. You get new material science. You get new energy.
Starting point is 01:21:27 You get all this amazing stuff as you're moving closer to this dangerous cliff. And so you can think of AI as the ultimate devil's bargain. It's funny, because Peter Thiel is giving lectures on the Antichrist, and how people who are trying to somehow say that AI is problematic, those people are the Antichrist, is what he claims. But I think he's saying that because he knows that the real Antichrist is AI. It's the thing that makes it look like it's here to solve all of our problems. And there are narrow forms of AI, by the way, that can solve many problems. But the current development path of releasing the most powerful and inscrutable technology in history,
Starting point is 01:22:00 faster than we deployed any other technology, that's already demonstrating HAL 9000 sci-fi behaviors, and we're releasing it under the maximum incentive to cut corners on safety. Like, it's not that the blinking cursor of ChatGPT is the existential threat. It's that that arms race,
Starting point is 01:22:17 what I just described, that is the existential threat. And I think that everyone should be able to see that. And then after this conversation, you know, you finish hitting play on this YouTube video, you feel overwhelmed, and you go back to AI, and then you ask it a question, and it helps you figure out why your baby's burping. And you're like, that's awesome. And you forget everything that you and I just talked about.
Starting point is 01:22:37 Because it was super fucking helpful. Yeah. Yeah. And so this is the test of ultimate modern wisdom. It's like, how do you step into a version of yourself that is capable of being clear-eyed and holding a stable view of the asteroid? The asteroid is coming to Earth. There's these weird, like, gravitational effects that it has before it gets here. Like, suddenly nudification apps happen, and suddenly deepfakes happen, and suddenly, you know, people start losing their jobs. Those suck. Those are really bad problems. But like, the bigger asteroid, every day that I do this work, my colleagues and I would joke, it's like the film Don't Look Up. You're trying to tell people, and it's not, it's not that the sky is falling.
Starting point is 01:23:17 That's not the point. The point is we can blast that asteroid out of the sky. We need to blast that asteroid out of the sky. We can steer before it's too late. I'm not hopeful, but what I do feel is, when I'm in a room of people and you walk through all these facts, and you just have people ask questions and check any of the things that we've just laid out, and people see that it's all true, and then you ask them, how many people here feel good about where we're headed? No one raises their hand. You ask how many people don't want the future we're headed to.
Starting point is 01:23:49 And literally, I was in Davos recently. And every single person, every single person, raised their hand that they don't want this future. So there's this weird thing where I think if everybody saw the same thing at the same time, we could steer away. And to the thing you were raising earlier, I think you were basically saying, a lot of people think that we won't act until there's a catastrophe. I think you and I both have talked to people who probably said that. I mean, that's what a lot of people believe. And I sort of feel like there's either a catastrophe, or there's a shared near-death experience that is a simulated catastrophe. And the reason why I think this film, the AI doc, is so
Starting point is 01:24:26 important. And again, I don't make a single dime from whether you see this movie or not, but I do think, if everyone who's watching this got the most powerful people that they know to go out and see the AI doc, you got your business to see it, you got your company to see it, you got your church group to see it, if everybody knows that everybody else knows, we can steer this to a different future. Just to give people an example, Jonathan Haidt, who wrote The Anxious Generation, you know, the first country that did the social media bans under 15 or 16 was Australia. And he just talked about this on Bill Maher recently, that what that did is it created an example of something that everybody was actually wanting to do, but felt like it was too
Starting point is 01:25:08 extreme until Australia did it. And now literally 25% of the world's population, represented by countries, is moving to the social media ban. Just this last week, Indonesia and India, two of the largest countries, are implementing a social media ban for kids under 15 or 16. You can pull the train back into the station. If people say the train's left the station, you can pull the train back into the station if it's creating a future that we don't actually want. You can't uninvent social media, but we can say, what are the steering limits and brakes that we want to apply to this technology before we get to a full anti-human future that we can't actually reverse out of? And if you don't have the common knowledge, you sound like a crazy person. You're worried: am I going to be the one parent at my school that's trying to get my kid to not use it? Exactly. And my kid's going to feel left out, because there is a coordination problem.
Starting point is 01:25:59 Exactly. It's a common knowledge problem. It's a coordination problem. The issue that we have here is, and you touched on it earlier on, the level of coordination that's needed to fix a problem with AI is global. Yes. And it's across multiple companies. And yeah, okay, you need probably a lot of compute, but I saw what happened. The difference between how much compute was needed to train DeepSeek and how much compute the bigger models needed. It almost feels like as the AI gets bigger, you can then retrain replicant AIs way smaller. So a processor the size of this room might be able to bootload AGI in future. So I mean, we have this with desktop synthesizers, right, for bioweapons.
Starting point is 01:26:45 It was a moratorium. And it means that if you try to synthesize smallpox, you get this big red flag and some guys in boots kick your door down. Right. I'm happy to hear you reference these examples. These are important, yeah. They're analogous, right? But the problem with the moratorium for AI is it can be done so siloed, right? You can just take this code, you can take this approach, throw this model into something, start to build on top of it.
Starting point is 01:27:09 It would be, I mean, Nick Bostrom's old example was, imagine if you could make an atomic bomb by putting sand into a microwave. There's nothing. Correct. Physics in our current universe mean that that isn't the case. Right. But there's nothing that stops something analogous from being the case had the world of being a different. way, which is what you're really getting to here is what Carl Sagan was referring to when he talked about our technological adolescence. Like, AI isn't a, the conversation we're having isn't really
Starting point is 01:27:35 about AI. It's about what happens when we have increasingly powerful, dangerous, and destructive technology. Because as, you know, humanity progresses, we're, whether AI existed or not, we're going to get more and more powerful technology that would cover more and more dangerous and destructive things that you could do with it. Like, we didn't have the ability to do CRISPR and bioweapons, you know, 20 years ago, now we do have that ability. And we're going to get more and more dangerous and destructive things. And so the real question that AI is forcing us to ask is what is the wisdom needed to wield the technological powers that whether AI is part of it or not, that we're going to increasingly gain? And I reference this all the time, but it's just such an accurate fundamental problem
Starting point is 01:28:15 statement by E.O. Wilson, that the fundamental problem of humanity is we have paleolithic brains that don't update very well to new information and think things are sci-fi and go into denial and overwhelm and blah, blah, blah. We have medieval institutions from the 18th century, and we have godlike technology that makes the 24th century technology crash down on 21st century society. That's what AI is. It's a joke from Ajaya Krotra, who's in the film, by the way. And so if we're to solve this equation, this problem statement that E.L. Wilson lays out, the way I always think about it is we need to embrace the reality of our Paleolithic brains. That's what wisdom is.
Starting point is 01:28:56 We know that we get overwhelmed and we go into denial, so we work with that. We have systems and practices to recognize when that's about to happen and ask, what do we need to hold a difficult reality together? That's embracing our Paleolithic brains. We need to upgrade our medieval institutions. We should be using 21st century technology to make faster updating self-improving governance. Instead of creating recursively self-improving AI, we should be creating self-improving governance. Audrey Tang, the former digital minister of Taiwan, has pioneered what that would look like,
Starting point is 01:29:27 that democracies could be using tech to find the unlikely consensus opinions of everybody. One of the things that's going to exist in the next few months is a national dialogue on AI, facilitated by technology where people can add their own ideas about how AI should be governed. And when you vote and you click, it's going to reveal the most popular consensus opinions about what should happen about these different issues. It's one thing if we live in a world where there's a cacophony and confusion about AI. And it's another thing if we live in a world where you see on a clear webpage that 600,000 people have voted and 96% of people across all these countries agree that there should be international limits. You're trying to make transparent
Starting point is 01:30:06 common knowledge. Exactly. Exactly. We're trying to make transparent common knowledge. And one of the ways you think about it is the movement has to see itself. There is a movement for humane technology. There is a movement for a human future. It just hasn't, it hasn't had a way to experience itself yet. And when you see graffiti on a, on a New York subway ad for an AI product people don't need, people saying, I don't want this, or AI is not inevitable, that's the human movement. When you see kids gather in Central Park for the Lamplight Club and delete social media off their phones together, that's a club that's in New York, that's the human movement. There's so many things that people are already doing that is part of this. We just haven't had a name for, we're
Starting point is 01:30:44 fighting back for reclaiming and protecting what's really human. Because, again, our political voice is about to not matter. Once all the jobs get automated by AI, in governments and companies don't have to listen to the people because we're not the ones that are generating the revenue. You could have a union before when the factories need the human labor, and then the human labor can get together and express its common voice. We want this instead of that. We want to be paid like this. We don't want these working standards. What happens when the companies and the countries don't need you anymore. This is literally the last time that our political voice will matter, and this is the time to express that voice in many different ways. You can boycott
Starting point is 01:31:21 unsafe AI companies or AI companies that enable mass surveillance. What did we just see with the thing that went down with the Pentagon, an Anthropic last week, is that subscriptions for chat GPT went down like tons, and subscriptions for Anthropic went up by a lot. If it wasn't just individuals that were doing that, but entire Fortune 500 companies, we're doing that. Given the amount of debt that these companies have taken on, the companies will have to respond to that market signal. So mass boycotts can have a huge effect on steering which AI future we get. Are we going to get mass surveillance? Are we going to get a world where AI companies are incentivized to do the right things? Are you concerned about companies that are playing
Starting point is 01:31:59 the same game, but just trying to manipulate their optics in a slightly sexier way? It seems to me that the incentives for Anthropic are exactly the same as they are for everybody else. I think You mentioned the Antichrist earlier on. Scott Alexander, who I think is the best blogger on the planet. Yeah. I agree. He says that Anthropic is much more likely to be the Antichrist than any other AI company because the Antichrist would present itself as being for the people. And who are we kidding that you have the same market incentives to over train your models as quickly as possible with RL and your synthetic environments to keep on pushing this thing. We need as much compute. So you can talk about the optics up front as much as you want, but really that's just window dressing on the top of a burning house.
Starting point is 01:32:45 Yeah. Well, you're saying is super important. And this is one of the, I, this is not me saying everyone should just put their money into Anthropic and then the world will be fine. In fact, there's actually a view by many people that Anthropic in some ways is dangerous because the view that is the safest, it's the Volvo. People know the volvoles are like the safest cars. They did a lot of marketing to present that. People think that Anthropic is just the safe AI company. So if they won, then suddenly everything is fine.
Starting point is 01:33:14 And that would be dangerous. And that we have to have a critical view of the companies and the technology and the international limits that we need that are going to happen from outside and above the companies, not just what an individual company can do. So I totally agree. How long have we got? No one knows because AI is literally moving so quick. If I went on Twitter, Elon joked in a conversation I saw yesterday, you know, it used to be that you'd go on Twitter and once every six months, this is several years ago, you would see a huge AI breakthrough. Like, that would just really change the game. Now there's one, you know, that you see when you go to bed and there's a new one when you wake up in the morning and you're on Twitter and you see it.
Starting point is 01:33:58 And the pace is pretty overwhelming. So I think rather than ask how much time do we have, which is kind of like just trying to assuage the, fear that I think a lot of people feel. The kind of bold, brave human thing to do is to ask, if things were to go well, what would that mean about how we were showing up? And then to get as many people as possible showing up from that place, because that gets us the best decentralized conditions to get to the better outcome. They say in mathematics that in chaos, initial conditions matter. So I'm not guaranteeing in any way, shape, or form that we're going to get to the better future. But I would like to invite people, again, into stepping into and standing from the most wise
Starting point is 01:34:47 and mature version of ourselves that would behave as if we were going to get to that better future, because that would give us the best chances of doing so. And the alternative is just surrender? The alternative is schisming, deniling, depression, overwhelm. The integrity and the kind of flow of energy that I think you'll feel inside yourself if you align with what is actually needed in this moment, there's more meaningfulness, there's more preciousness of life.
Starting point is 01:35:14 Like, there's so much more, like, there's so much meaning and purpose if you're aligned with what would have things go well. And I think there's this weird thing that's happening is if you don't see a way for things to go well and then you believe the only thing you can do is like triple down on racing as fast as possible, knowing where that's currently taking us,
Starting point is 01:35:33 that has a biophysical cost on your system. Yeah. I think the, I was trying to think about the differences between your past work and your current work, focusing on social media versus focusing on AI. I don't think many people find social media that net positive in terms of their experience. Right. I don't think that many people find AI that net negative in terms of their experience. I would, just so people hear me say this, I agree with you that the current experience. of AI is not net negative. It doesn't feel that way. But that doesn't mean that there's a net negative
Starting point is 01:36:08 outcome that we are racing towards. But the call to make it harder. The call to get rid of social media is easy when people reflect on the last two hours that they've spent doom scrolling at night and think, I wish I hadn't done that. The feedback curve is sufficiently quick that people know how they felt, even during the moment. That's right. I really shouldn't watch another one of these videos. It makes me feel kind of bad. I'm all stressed and tight. My muscles feel tense. The same thing is not true with AI. You're asking people if you were to say, and I haven't heard you say, delete your AI account, stop using it. I'm not saying that that's the thing that has to happen. Yeah. If that was the campaign, if that was what people were supposed to do, people would be sacrificing their quality of life.
Starting point is 01:36:52 They would be able to fix their car less quickly. They would be able to find out things that they need to about their health, to analyze their health documents, to be able to liberate them, to be able to make hard-to-access information cheaper and easier, and actually to enable their quality of life in a manner that didn't happen with social media. So I think the intermediary experience is less sexy and that I'm going to guess is also the reason why you're not saying, mass unsubscribes, don't use any AI because you're probably aware that the incentives for individual users, the value judgment that they make isn't sufficiently imbalanced as it would have been with social media. For me, it's easy to not have it on this phone. If you were to say, I don't want to have chat GPT on my phone,
Starting point is 01:37:35 no, I'd need it. It's very useful to me. I think it's about understanding context and what is the careful and limited way that these tools are helpful and then creating the conditions where that's the collective outcome that we're getting. So for example, for kids in tutoring, theoretically this is the best tutor in the world for learning. Alpha school is doing a South by Southwest pop-up in the middle of downtown. Mackenzie Price was on the podcast. Oh, interesting. Yep.
Starting point is 01:38:04 I mean, I think that it has enormous potential. If you look at how most kids are using ChatGBTBTBT right now, they're using it to cheat on their homework and to actually outsource their thinking. And actually, because they use it so much, turning into what some Atlantic writer wrote called Lemmings or LLMings, where they basically outsource. all of their thinking for every moment to moment decision in their day about how I should respond to this person. There's a guy over there I want to talk to and I don't know what to, like, whatever the thing is, people are outsourcing their decision and they're not actually learning themselves how to show up as a human. And that is not going to create a safe world.
Starting point is 01:38:43 Again, that's a version of the narrative, the possible of a technology that we're promised and then the probable of what's actually happening. And it's like crumbling cognition. Yes, exactly. And we saw this with social media. We thought it was going to create the most enlightened and informed. society in the entire world because we have the best information. It wrecked our attention. And we have the most confirmation bias, tribalized, low trust, worst critical thinking society that we've had in a generation. And so again, I think this is just about part of the wisdom we need is to be able to look honestly at the nature of a technology and look honestly and confront ourselves with the reality of how technology is currently being used and deployed. And if it's driving up, you know, a social media
Starting point is 01:39:24 influencer culture instead of creating astronauts. Like, you know, in China, they pulled the top what are the professions the kids want the most. And the number one was like astronaut. Number two was like teacher. Engineer or something. Engineer and then teacher or something like that. And in the US, it was influencer. Like, I can tell you which world and which civilization you're going to get. If we really want to beat China and innovation, we can show me the incentives and I'll show you the outcome. Show me the incentive and I'll show you the outcome. And if you really want to beat China, you regulate social media and you stop brain rotting your entire population and you actually invest in genuine educational technology that Silicon Valley would ship to their own children.
Starting point is 01:40:00 This has to come top down because the incentives both at the user level, especially now for AI and at the company level and at the market level and at the international level between companies, all of these incentives align in the same direction, which is I enjoy using the product. my company generates lots of revenue for my country for the product. My country doesn't want to fall behind other countries in the race to have additional capacities for technology. All of these are aligned in the same place, which means that it has to come from above all of that, which is a pan-national global movement for humane technology that's actually in service of people. part of that movement is redefining the currency of what it means to win that race. Right now, again, the U.S. beat China to the technology of social media, but we have been
Starting point is 01:40:55 losing in the race to govern that technology. Like, I think you know these examples, but in China, as an example, they, I think, limit social media use to 40 minutes a day if you're under the age of 14. And I think it's only Friday, Saturday, Saturdays, and Sundays, either that's for social media or for video games. There's lights out at 10 p.m., meaning that if you open a social media app after 10 p.m., it's closed,
Starting point is 01:41:17 and it opens again at 6 in the morning. This is helpful for when it comes to the kind of late night, fomo, brain rot, doom scrolling, not getting sleep. And I'm not saying that the U.S. should unilaterally, like some totalitarian government, make that choice. It's just that we can democratically come up with guardrails like this.
Starting point is 01:41:36 Another thing that China does is they have a synchronized final exams week, and they shut down AI during final exams week. No way. Yes. So that all the students know that if they used AI during the school year, it will completely screw them over during final exams week. Because they're going to be relying on something they can't use. That's right.
Starting point is 01:41:53 And so we don't have to do what China does. But the point is we should do something. In China, they're regulating anthropomorphic AI to not do all the attachment hacking and the suicides and all this kind of stuff. I'm not saying what they're doing is good. I'm just saying they're doing things. We're not doing anything. And we can.
Starting point is 01:42:11 And the purpose of the human movement is to actually be demanding. You can actually go and call your legislator and say that you do not want this anti-human future. You want accountability for AI companies. You want to ban legal personhood for AI. You want no anthropomorphic AI so that we don't get the child safety issues with AI chatbots. If I was to give you the option of turning the rest of the world into a totalitarian state, everything, but it meant that we were able to dictate what happened to countries and to companies and to these AIs. I don't want to create a totalitarian state.
Starting point is 01:42:46 But the alternative would potentially be destruction of humanity. Well, which would you pick? You are conjuring a thought experiment from Nick Bostrom, which I'm sure you're aware of. And the paper you're referring to is the vulnerable world hypothesis in which he says that as we discover more and more technologies that in a decentralized way have the capacity to destroy things. So like your example of if it turned out that microwaving sand, it created a nuclear reaction that blew up the entire world and we published that knowledge and it spread virally on social media, how many minutes, hours or seconds would it take before the world blew up?
Starting point is 01:43:27 And then the only answer to a distributed, like, available to everybody destructive capacity would be a global totalitarian society that monitors and surveils everyone and what they're doing. So you prevented mass destruction, but you got 1984 Big Brother, and that's an uncheckable power that people cannot fight back against. I am very worried about mass surveillance.
Starting point is 01:43:51 I think that what happened with Anthropic is hopefully bootloading a global immune system to the risks of AI-enhanced mass surveillance. Because in a way, Big Brother could have never happened without AI. When you have an AI that can process every camera and every face and all of the emails and text messages that are flowing through everything, and you can actually ask the question as a state, who are the number one threats in my society that I need to suppress? And it tells you and summarizes instantly who those people are, synthesizing all these data streams. How can you fight back against an uncheckable power when it knows all of your secrets?
Starting point is 01:44:28 You can't. So we're faced with very tough choices on both ends, and I wrote, I did this. And a TED talk I gave this last year called the narrow path that we have to avoid both outcomes. We have to somehow be committed to finding the narrow path between, you know, decentralizing destructive power in everyone's hands without responsibility, without a commensurate wisdom or responsibility in which you get chaos or catastrophes, or over-centralizing that power in uncheckable ways in which you get runaway dystopias. And we have to find something like a third attractor to quote Daniel Schmachnenberg or a narrow path,
Starting point is 01:45:04 that seeks to avoid both these outcomes. You could say that that is outside the laws of physics or not possible, but I think what's important is to be committed to and living from the place where we would be finding that path, because the only chance that we would have of finding it would be requiring that everyone was living in service of it. Isn't it funny that the two worlds that you identified there? One, totalitarian overreach that's controlled by humans and at the mercy of their biases, and the other being decentralized vulnerability, danger that is at the mercy of technology's liabilities,
Starting point is 01:45:41 both of those are extreme versions of something that we definitely don't want, but the restriction on the one that would protect us from the AI goes back to the bottom of the brainstem, that there's almost never a time that a country or a government or a small number of people or a large number of people get most of the power and then give it back. That's a ratchet system. That's right. And it only ever clicks into place and then never goes back. Which is why you need to have built-in checks and balances on power that are irreplaceable.
Starting point is 01:46:13 Like you cannot have a world where that whatever power gets centralized has to have oversight and democratic accountability and distribution and wealth of that wealth and power, depending on if we're talking about the wealth versus the power. How would we know that some country isn't signing up to a moratorium but secretly doing all of their research behind the scenes? that they slow down progress of everyone else, but allowed them to race ahead. So you're asking the question, if, let's say, the U.S. and China sign an agreement, how do we know that given their maximum incentive to defect on that agreement and have the CIA or their black operations still continue to do the research project?
Starting point is 01:46:47 This is obviously the hardest technology and coordination problem that humanity has ever faced. I just need to say that at the top. Like, I'm not saying any of this is easy. It makes nuclear war look like child's play. Correct. But by the way, I want you to know. There's a great video, and I can send it to you, of Robert Oppenheimer asked in the 1960s, I think he's testifying for something. He's an interview, and they asked him, how could we control this technology from proliferating?
Starting point is 01:47:14 And he takes a big puff and a cigarette. And he says, it's too late. If you wanted to prevent the spread of nuclear technology, you would have had to do it the day after Trinity. Trinity was the first test of the first nuclear bomb. He believed, I think this interview was in the 1960s, he believed it was inevitable, there was nothing we could do, and the world was fucked. Even he, the creator of the atomic bomb, didn't see that a lot of people would work very hard over the course of the next 30 years to invent what's called national technical means or satellites that do mutual monitoring enforcement, that we would create the seismic monitoring technology to be able to see if you were doing an underground nuclear test or an above ground test, because we would say, see the reverberations of that. So satellites, seismic monitoring, you know, international inspectors, the International Atomic Energy Agency, we had to invent all of these new things to create a
Starting point is 01:48:11 governance system that was capable of dealing with nuclear weapons. Interesting that the innovation around the technology that was dangerous was probably less sophisticated than the downstream technologies required to make it secure. What do you mean? That making the atomic bomb was one thing. Yes. Then there's 50 things that need to happen in order to have that technology exist in the world. In a safe way. Yeah, exactly. There's a safe, there's sort of an asymmetry where the technologies, the destructive capacity is easy to create. The governance capacity is way harder to create. The defensive capacity is harder to create. And you could have said at the beginning of 1945, I guess it's inevitable. 150 countries are going to have nuclear weapons. There's nothing we could do. Let's just like drink margaritos and give everybody uranium and increase GDP by selling uranium to the entire world. But we didn't do that. and people had to invent stuff. We needed our best engineers
Starting point is 01:49:02 and our best minds working on how to figure that out. And I'll just say, if you want to look this up, Rand, the nonprofit defense think tank, has a paper on how international monitoring and verification mechanisms
Starting point is 01:49:11 could potentially work for AI. It's not easy. It is not trivial. It would require extraordinary investments in mutual monitoring and enforcement and data centers and like chips that attest where they are and all this kind of stuff.
Starting point is 01:49:25 Okay, so there are some suggestions. There are proposals. Because what I was thinking as you were talking about, well, we can use satellites that work out if you're doing things above ground or below ground. We can monitor maybe the amount of heat signatures and electricity signatures, power usage. So there are some inspectors. Yeah, there are some signatures of people that are doing this sort of research.
Starting point is 01:49:42 Because you can't do it without there being any externality. You can't just run it on a MacBook and make it look like you're browsing the internet. You need large clusters of compute right now, to your point in the future it may shift. But right now you need large clusters of compute. You need advanced semiconductor manufacturing supply chains. Right now, that's a set of U.S. allies, the Trilat, I believe it's Denmark, Japan, South Korea, obviously, Taiwan. You know, we could create some kind of infrastructure for not some totalitarian control of
Starting point is 01:50:10 compute, just a governance regime. Would you call it IA totalitarian or would you call it a governance regime for a dangerous, destructive nuclear capacity? I'm not saying any of this easy. We have to navigate the narrow path. We don't want to create totalitarian controls. We don't want to create runaway catastrophes. What it takes is being committed to finding that. path. How much time or energy or resources have we spent even trying? People say it's impossible. Have you spent a month dedicatedly trying? People say it's impossible. Have we spent any amount of resources actually trying to do this? No. Let's say that there was some way for you to have God's eye coordination and that you could step in. Would you just put a pause on all model development to
Starting point is 01:50:50 allow us to catch up in terms of AI safety? I think the people, rather than it sounding like I'm some kind of pause or stop person, the people building this technology have basically said that that's what they would most prefer. They would most prefer a world where we basically stopped and had the time to integrate and develop this technology slowly. That's what they would prefer. What do you think that if you had control, what would you do? I would agree with the people who I'm going to defer to as the people who know the most about the dangerous and destructive capacity. So I don't want people thinking this is my view that I'm some kind of safety net or something like that. This is just about, like, I want to be able to look my children in the eyes and say,
Starting point is 01:51:29 we did everything that we could to create the best possible future for you. And I think that basic heuristic of, like, are you operating in service of life and the things that most matter? I think that that would probably be the wisest thing to do. And just to people like, ground that, the CEO of Microsoft AI, Mustafa Salaman, who's a friend, says, in the future with technology, progress will depend more on what we say no to. than what we say yes to. Your podcast is called modern wisdom. Is there a definition of wisdom
Starting point is 01:52:01 in any spiritual or religious tradition that does not have restraint as a central feature of what it means to be wise? Does any religious tradition or spiritual traditions say, you know what's wise going as fast as possible, not thinking about the consequences in like, dopamine maxing your brain? Like, it's the opposite.
Starting point is 01:52:17 It's like this is not hard. This is so trivial. It's so obvious. It's so obvious. It's just like snap out of the trance. This is not inevitable. They want you to believe it's too late. It is very far down the tracks.
Starting point is 01:52:29 It would be a right of passage, even if it didn't work out for us to show up with the maturity and responsibility to at least be trying to live in service of the future that we actually want to create. Can we watch the trailer? Can we get the trailer up? I want to see the trailer for this movie.
Starting point is 01:52:43 That'd be great. Thanks. If this technology goes wrong, it can go quite wrong. What the... Your fear of AI is the collapse of humanity. Well, not the... collapse, the abrupt extermination. There's a difference.
Starting point is 01:53:05 So I started making this movie because my wife is six months pregnant. It's now a terrible time to have a kid. I mean, just to be honest, I know people who work on AI risk who don't expect their children to make it to high school. I, what the point? How does AI understand pretty much everything? It's surprisingly straightforward. is about recognizing patterns. Patterns. Patterns.
Starting point is 01:53:31 Patterns. If you have learned those patterns, you can generate new information. AI is moving so fast. It's being deployed prematurely. There's so much potential for things to go wrong. Why can't we just stop? All these companies are in a race to get AI
Starting point is 01:53:50 that's vastly more intelligent than people within this decade. China, North Korea, Russia. Whoever wins is essentially the controller. of humankind. It to take a threat from AI as seriously as global nuclear war. It feels like I have to find these CEOs and get them in the movie. Great. I want to ask you to promise me that this is going to go well.
Starting point is 01:54:22 That is impossible. Okay. Am I hopeful? Yes. Am I confident that it'll go right? Absolutely not. AI is the thing that can solve climate change. solve climate change. We could cure most diseases.
Starting point is 01:54:34 What if it's expanding what is humanly possible? This is the most extraordinary time ever. The only time more exciting than today is tomorrow. I already love you. I think if this technology goes wrong, it can go quite wrong. By using AI, we're about to move off of the Earth into the cosmos. If we can be the most mature version of ourselves, there might be a way through this. This is the last mistake we'll ever get to make.
Starting point is 01:55:11 Dude. Does it land differently after this conversation? It does. I'd already seen it. But yeah, I definitely understand more of what's implied now. It's impressive that you managed to get all of the guys to sit down. Most of the guys. Who are you missing? I mean, there's obviously many people building this technology now.
Starting point is 01:55:33 So they don't have the Chinese labs in it. They have Demis Hassabis from DeepMind, Sam Altman from OpenAI, Dario from Anthropic. I mean, those are the three leading players. Elon agreed to participate in the movie. And then I think it was right in the beginning, the first few days of the Trump administration, and he was busy and didn't follow through. But I think the team really wanted him to be in the movie and wanted to hear his views. I mean, to give Elon credit, he was one of the first people who cared about AI safety. And he said, mark my words, AI is far more dangerous than nukes.
Starting point is 01:56:07 And he said this in like, what was it, 2015, 2016, like way before people were taking AI seriously. Like, back then, AI was just recommending what other product you should get on Amazon and doing facial recognition for tollbooths and, you know, bridges and stuff like that. Well, think about what much of Superintelligence, and sort of the fallout around that book from Nick Bostrom, was. So much of it was almost advice for how to talk to your friends about this without being mocked too much. Right. It was, these are the best examples to use so that you don't sound like the insane person at Thanksgiving dinner.
Starting point is 01:56:42 And ironically, the paperclip maximizer example did make people sound ridiculous and insane, and then got really, you know, diminished, and what's the word I'm looking for? It was used to tarnish people's reputation if you talked about paper clips. But it's funny, because people say, oh, like, the AI is going to, for people who don't know the example, the AI is going to be told to maximize paper clips, and then the way it will figure out how to do that is figuring out any strategy, which means, like, turning every atom in the universe into paper clips, which means, like, you know, melting all the humans down. It's just taken to this extreme, and it sounds totally sci-fi. But if you actually ask a baby AI called social media, that's pointed at your brain stem, figuring out just what video to get you to watch that keeps you on the screen, and you say, maximize engagement, well, conflict and rivalry and civil war is really good for engagement. So in a way, we have been maximizing a paper clip called attention and eyeballs for a long time. And it's driving up division and rivalry everywhere around the world. And democracies are backsliding everywhere around the world. And it's
Starting point is 01:57:42 driving up confirmation bias all around the world. And the point isn't that the baby AI of social media hates you or hates your well-being or hates your connections or hates your democracy. It's just that it doesn't care about anything other than whatever keeps your eyeballs. And that little baby AI that was just figuring out which photo or video to throw in front of your nervous system was enough to completely transform everything, everything, everything about how our society worked. And I saw that, by the way, and I'm not some kind of, it's not like I especially see the future, but in 2013 it was so obvious to me: if you take these incentives really far, in a decade I can tell you the world you're going to live in. And the thing that I want people
Starting point is 01:58:23 to feel is, if you can see the incentive, and you have that clarity, you can confidently say, I don't want the future that that creates. In 2013, Mark Zuckerberg could have said, oh my God, I see we're about to create an arms race to hack human psychology. I'm going to convene all the leading social media companies and the government. I'm Mark Zuckerberg. I've got billions of dollars. I'm going to throw money at this. I'm going to get the government people involved, not because the government's trustworthy or because we like them, but because I see that we need some kind of rules.
Starting point is 01:58:49 And I'm going to try to create some rules that say no auto-playing videos, no infinite scroll. No one can create unnecessary FOMO. You can't dole out, like, a few likes here and then a few likes there, like a slot machine. You have to do them all at the end of the day, or something like that. You can create rules and norms so that we didn't get the mass addiction distraction machine that we then did get. And he could have done that in 2012, 2013. That's what leadership, that's what maturity, that's what wisdom would have done. It's not that hard to see it. The problem I think that you're facing is every second is so crucial, and every single second of
Starting point is 01:59:25 compute and of CEO attention and of staff attention is going to be spent on continuing to get ahead in this unbelievable one-to-rule-them-all race. I mean, OpenAI is a partner on this show. I've spoken to Sam twice, and both times the calls have been so brief, because presumably for every second that he's on the phone, that's billions of dollars of potential revenue or compute that is being wasted. And that means that if you want Mark in 2013 or Sam or Dario or... The point is you create, so, like, literally this is the history of all of law. So, like, I could kill you and steal your stuff and just take your money, but I'd prefer not to live in a world like that. And everyone could do that. And that'd be a faster way to, like, get money. It's like, just everybody kills everybody, grabs their money. But that would create a society that's chaos. So instead, I sacrifice some of my abilities, like, I can't kill people, and instead we have law, where we all sort of notch down some of our individual capability so that we get to live in a society that we actually want to live in. And that's what this would do. So, in 2013, if Mark Zuckerberg had said, I'm going to convene, you know, Musical.ly, which was before TikTok, Twitter and the other ones, and we're like, okay, look, it is so obvious we're in this
Starting point is 02:00:39 arms race for attention, just like public utilities, which, by the way, if, like, in California, it'd be PG&E. In Texas, what's your electricity provider? It's like, oh, fuck knows. I don't know. I have no idea. I have no idea who my electricity provider is. Well, technically, energy companies have an incentive to maximize revenue. So theoretically, they're like, leave the lights on, leave the stove on, like, run the water 24/7, because we make more money when we do that. But because public utilities rely on a scarce resource called energy that has environmental emissions
Starting point is 02:01:10 that we have to deal with, there's a decoupling of revenue from their incentive. So, for example, in California, you're charged a base rate for, like, the initial energy you use, and then once you're hitting kind of the capacity that would be straining the system, we start to charge you more, but that extra revenue doesn't go just into PG&E, the energy company's pocket.
Starting point is 02:01:31 It goes into a fund to have more clean energy that gets to offset it and create more energy capacity. So with attention, you could say: instead of companies maximizing attention and driving revenue, you get to make money from the initial tier of attention that you're getting. And then after that, the resources go into a common pool that's investing in the research, the XPRIZEs, the design solutions, the research that shows there are different ways of doing news feeds. There's different ways. We could have news feeds that are all about directing you back towards community. You live in Austin, Texas. Austin has, at least among my friends, a lot of community events happening here. But you can imagine a world where the default news feeds that govern our world spent, like, 30 to 40% of what you saw on events that your friends and other people that you knew adjacent to you were doing.
Starting point is 02:02:18 You could imagine a world where, instead of dating apps profiting off of keeping you in a slot machine loneliness thing, where you message someone and never hear back from them, in this new world all the dating apps would be forced, probably because of some lawsuit against the engagement model, to fund physical events in every city that they operated in, where every single week there was actually a physical venue where lots of people were put in the same room with lots of other people they matched with, who are maximally overlapping. And now, instead of feeling scarcity and loneliness, there's this sense of abundant access to community and soft dating and soft friend-making environments. So you're dealing with the loneliness crisis. You're dealing with the dating problem. And when you solve that, it turns out 30% of the polarization online goes down, because it turns out that a lot of polarization was just people being lonely and depressed from not having connection. It's all manufactured, because people are caught in the doomscrolling loop. So it's like, again, we can have a better world if we deal with the upstream causes of these problems. We can also have more innovation. Peter Thiel was talking about how we need more innovation. We need more scientific development. You know, we would have stagnation if we didn't have AI. Well, how about you regulate the brain rot economy that's currently degrading all the innovation, causing people to be, you know, not productive and creative and innovative members of our society, and instead incentivize entrepreneurs and, again, social groups and community. So people are actually making things.
Starting point is 02:03:33 So the point is, I'm a different kind of technology optimist, which is a humane technology optimist. If you design technology that is humane, with an understanding of the substrates of society upon which everything else depends, you can have a technology environment that is conducive to the things that we want our society to do. But it requires different design principles, different incentives, different rules, and some coordination. This third attractor, precipice, narrow path thing that you think is important for us to walk appropriately. The pace at which things are going to happen and change, do you think that we're going to be able to move along that? I mean, you've got this movie, it's going to be a global moment, lots of
Starting point is 02:04:15 people are going to see it, it's going to get people talking, common knowledge. Typically things tend to take time; conceptual inertia usually moves over generations and centuries. Going to the heliocentric model of the universe took like 100 years. Well said, yeah. This is the problem in general that we face with technology, and this is the E.O. Wilson quote. It's not just that we have Paleolithic brains and medieval institutions; it's that the brains operate on a very slow updating clock rate. The institutions operate on a slow clock rate. And the technology moves at whatever the 21st, or now 24th, century technology clock rate is. How do you think about expediting this change? You have to have your governance move at the pace of the thing that you're trying to
Starting point is 02:04:58 govern. Is that ever going to be possible? Governance moving at that pace? The lumbering behemoth, what is it, Leviathan leans left and lumbers along? Rather than recursively self-improving AI that is uncheckable by any human oversight process, in which we will for sure lose control and build something crazy that we regret, we instead should have self-improving governance. You can use AI to look at all the laws on the books and say, what are the laws that don't matter anymore, that are creating all this red tape that we don't need, that's, like, stifling innovation, and get the AI to find those laws and reinterpret them and say, what are the ones we need to get rid of? And how do you rewrite them for 21st century technology in the current age? We could be using AI and technology to update the governance as fast as the technology, but we'd also probably need to slow down the technology, which we would benefit from, because then we wouldn't crash. Again, it's like China wins when they whisper in our ears: go faster, go faster. You're going to blow yourself up. Pyrrhic victory ahead. You said this earlier. When the U.S. races ahead and has a more sophisticated model, China gets it like 10 days later, because first of all, they have spies in all of our companies. And second of all, they distill all the models. Is that what they're doing? They have spies in the companies.
Starting point is 02:06:09 You know, they're maximally incentivized to know what's going on. Plus, as I, as you were saying, they can distill the U.S. models, meaning they can like query those models a thousand times and kind of distill the essence of those models and then make and train theirs. So there's a great like meme online of like it's a motorboat with a guy who's um what is it called in your um wake surfing wait yeah like your wake surfing behind him on the on the string i mean on the what is that called the string yeah whatever um and and the guy who's on the um the surfing says go faster to the boat that's like racing ahead that's the u.s that's building the advanced model but then they're literally on a string right behind him that's getting all the benefits from it yeah And there is a study that Anthropic found that China had been covertly even using U.S. Anthropic AI models to perform a cyber hacking operation.
Starting point is 02:07:00 So, like, again, if we're winning the race to the technology, but losing the race to governing or controlling or protecting the technology, what the fuck are we winning? Yeah. It's like, it's so basic what we're talking about. It's like, this is not rocket science. It's unbelievably simple. If you have the power of gods, you need the wisdom, love, and prudence of gods. I hope we've laid out a bunch of examples of how we can do that. And while it's not easy, and it does not happen by default,
Starting point is 02:07:27 it does happen if you get everyone you know to go out and see the AI doc, understand that AI is dangerous, get your church group, get your business, and recognize that we need to not build bunkers but write laws to actually steer AI before it's too late. You're terrifying, but I'm glad that you're in the world, man. I appreciate you. I appreciate you too. I really appreciate this conversation.
Starting point is 02:07:46 Thank you. All right. Goodbye, everybody.