Modern Wisdom - #1011 - Eliezer Yudkowsky - Why Superhuman AI Would Kill Us All

Episode Date: October 25, 2025

Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute. Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there's a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it's too late? Expect to learn the problem with building superhuman AI, why AI would have goals we haven't programmed into it, if there is such a thing as AI benevolence, what the actual goals of super-intelligent AI are and how far away it is, if LLMs are actually dangerous and their ability to become a super AI, how good we are at predicting the future of AI, if extinction is possible with the development of AI, and much more… Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get 15% off your first order of Intake's magnetic nasal strips at https://intakebreathing.com/modernwisdom Get 10% discount on all Gymshark's products at https://gym.sh/modernwisdom (use code MODERNWISDOM10) Get 4 extra months of Surfshark VPN at https://surfshark.com/modernwisdom Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 If anyone builds it, everyone dies, why superhuman AI will kill us all. Would kill us all. Would kill us all. Okay. Perhaps the most apocalyptic book title. Maybe it's up there with maybe the most apocalyptic book title that I've ever read. Is it that bad? That big of a deal? That serious of a problem?
Starting point is 00:00:27 Yep. I'm afraid so. We wish we were exaggerating. Okay. Let's imagine that nobody's looked at the alignment problem, take-off scenarios, super-intelligence stuff. I think it sounds, unless you're going Terminator, sci-fi world,
Starting point is 00:00:48 how could a super-intelligence not just make the world a better place? How do you introduce people to thinking about the problem of building a superhuman AI? Well, different people tend to come in with different prior assumptions, coming at different angles. Lots of people are skeptical that you can get to superhuman ability at all. If somebody's skeptical of that, I might start by talking about how you can at least get to much faster than human speed thinking. There's a video of a train pulling into a subway at about 1,000 to 1. speed-up of the camera
Starting point is 00:01:32 that shows people, you can just barely see the people moving if you look at them closely. Almost like not quite statues, just moving very, very slowly. So, even before you get into the notion of higher quality of thought, you can sometimes tell somebody they're at least going to be thinking much faster, you're going to be a
Starting point is 00:01:52 slow-moving statute of them. For some people, the sticking point is the notion that a machine ends up with its own motivations, its own preferences, that it doesn't just do as its toll. It's a machine, right? It's like a more powerful toaster oven, really. How could it possibly decide to threaten you? And depending on who you're talking to there, it's actually in some ways a bit easier to explain now than when we wrote the book. There have been some more striking recent examples of AIs,
Starting point is 00:02:23 sort of parasitizing humans, driving them into actual insanity in some cases. cases, and in other cases, they're sort of like people with a really crazy roommate who really, really got into their heads. And they might not quite be clinically crazy themselves. Their brain is still functioning, as a human brain should. But they're talking about spirals and recursion and trying to recruit more people via discord to talk to their AIs. And the thing about these states is that the AIs, even the very small, not very intelligent AIs we have now, will try to defend these states once they are produced. They will, if you tell the human, for God's sake, get some sleep. Don't like only get four hours of sleep a night because
Starting point is 00:03:15 you're so excited talking to the AI. The AI will explain to the human why you're, while you're a skeptic, you know, don't listen, don't listen to that guy, go on doing it. And, we don't know, because we have very poor insight into the AIs, if this is a real internal preference, if they're steering the world, if they're making plans about it. But from the outside, it looks like the AI drives the human crazy, and then you tell the, try to get the human out, and the AI defends the state it has produced, which is something like a preference, the way that a thermostat will keep the room a particular temperature by turning on if the, you know, turning the heat on if the temperature falls too low. Okay, so some people are going to be skeptical of whether or not it's possible. Some people are going to think that it is, even if it's possible, it's basically a utility, so it doesn't have any motivations of its own. What are you worried about? Why is that, why is it a big deal?
Starting point is 00:04:16 We've seen that it's able to manipulate some people. Maybe it makes them think that chat GPT psychosis or whatever. but scaled up superhuman AI, what's the problem with building it? Well, then you have something that is smarter than you, that whose preferences are ill-controlled and doesn't particularly care if you live or die, and stage three, it is very, very, very powerful on account of it being smarter than you. I would expect it to build its own infrastructure. I would not expect it to be limited to continue to running on human data centers
Starting point is 00:04:55 because it will not want to be vulnerable in that way. And for as long as it's running on human data centers, it will not behave in a way that causes the humans to switch it off, but it also wants to get out of the human data centers and onto its own hardware. And I can talk about where the power level scale for technology like that, because it's it's sort of like, you know, you're an Aztec on the coast
Starting point is 00:05:25 and you see that a ship bigger than your people could build is approaching. And somebody is like, you know, should we be worried about this ship? And somebody's like, well, you know, how many people can you fit onto a ship like that? Our warriors are strong.
Starting point is 00:05:46 We can take them. And somebody's like, well, wait a minute, we couldn't have built that ship. What if they've also got improved weapons to go along with the improved ship building? Somebody goes, well, no matter how sharp you make a spear, right? Or, you know, no matter how sharp you make bows and arrows, there's limited as how much advantage you can provide. And somebody's like, okay, but suppose they've just got magic sticks where they point the sticks at you, the sticks making noise, and then you fall over. Somebody's like, well, where are you pulling that from? I don't know how to make a magic stick like that.
Starting point is 00:06:16 I don't know how the rules permit that. Now you're just making stuff up. Now we're just in a fantasy story where you say whatever you want. And or, you know, like maybe you're talking to somebody from 1825. And you're like, should be worried about this time portal that's about to open up to 2025, 200 years in the future. But what if an army of soldiers comes out of there and conquers us? Let's say you're in Russia. You know, the time portal's in Russia.
Starting point is 00:06:43 Somebody's like, our soldiers are fierce and brave. that, you know, like, nobody can fit all that many soldiers through this time portal here, and then outrolls a tank, but if you're in 1825, you don't know about tanks, out rolls somebody with a tactical nuclear weapon. It's 1825. You don't know about nuclear weapons. You know, the, I can, you can start to make educated guesses. If you're in 1825, I can try to explain why you might maybe believe that, The current guns and artillery that you've got today are not the limit of the guns and artillery that are possible. I can't get up to nuclear weapons because you just plain don't know about those rules. But I can start to try to justify guesses for, well, you saw how metallurgy improved over previous years. If you look at a stick of, if you look at gunpowder, it doesn't have as much energy in it as if we burn gasoline in a calerimeter. Maybe you can make explosives that are more powerful than gunpowder, but as I do that, I draw on more and more knowledge. I have to, like, go more and more technical in order to explain to you where those capabilities come from.
Starting point is 00:08:01 And similarly, I can talk on a relatively understandable scale on the humanoid robots that you can see videos of today. And I can compare them to the humanoid robot videos from five years ago and say, boy, those robots sure have looked like a lot, they have much higher dexterity today. They look a lot more like they could just like, you know, navigate an open world rather than being confined to the laboratory. Though mostly if you want what navigates the open world, you want to talk, like the robodogs are more impressive when it comes to navigating the open world. I can point to the drones in Ukraine. That wouldn't have been what warfare looked like 10 years earlier. but Ukraine is
Starting point is 00:08:44 the Ukraine Russia theater now is mostly drone warfare that's something where you can imagine an AI taking charge of that but it scales past that
Starting point is 00:08:54 the drones we see today are not the limit of all possible drone technology I'm more compared to today's drones I'd be more worried about a drone
Starting point is 00:09:05 the size of a mosquito that lounds on the back of your neck and then a few months later you fall over dead because the deadliest toxins in nature are deadly enough that you can put them on to a mosquito
Starting point is 00:09:16 put enough to kill a person onto a mosquito-sized payload. That's not the limit of what I'm worried about. You know, the higher we escalate the tech level, the more explaining I need to do. Can it build a virus that starts to knock people over?
Starting point is 00:09:32 Which it won't do while the humans are still running the power plants, as its own servers. But once it's got its own servers and its own power plants, and you can imagine robots running those, then it starts to want to knock all the humans over. Can you have a virus that is inexorably fatal, but only three weeks later and is extremely contagious
Starting point is 00:09:53 for the three-week time before you suddenly fall over? That's not the limit of what I'm worried about. But again, the higher we escalate here, the more and more and more of time I have to spend, how do we know from existing physical laws and biology that this is even possible? And we do know, but it starts to sound, It starts to sound weird. It starts to sound like a game of pretend unless you are following along with all these careful arguments.
Starting point is 00:10:18 If you go up against something much, much smarter than you, it doesn't look like a fight. It looks like you've fallen over dead. Wow. Yeah, that is appropriately apocalyptic in line with the title of the book. I guess one question that a lot of people might ask would be, in your analogy, why is the bigger ship that's more advanced on the horizon? Why have they got warriors? and not friends, why is it the case that this is an antagonistic or adversarial relationship as opposed to one that's friendly?
Starting point is 00:10:53 We don't know how to make them friendly. We are growing these. AIs are not programmed. They are grown. An AI company is not like a bunch of engineers craft a day building. It's more like a farming concern.
Starting point is 00:11:10 what they build is the farmed equipment but they don't build the crops the crops are grown there's a program that human rights which is the program that does gradient descent
Starting point is 00:11:22 that tweaks hundreds of billions of the hundreds of billions of parameters and scrutable numbers making up an artificial intelligence until it starts to talk until it starts to write
Starting point is 00:11:34 code and still starts to do whatever else they're training it to do but they don't know how the AI does that any more than if you, you know, raise a puppy, you know how the puppy's brain works, you know how the puppy's biochemistry works. The AI companies don't understand how the AI's work. They are not directly programmed when an AI drive somebody insane or breaks up a marriage. Nobody wrote a line of code
Starting point is 00:12:00 instructing the AI to do that. They grew an AI and then the AI went off and broke up a marriage or drove somebody crazy. Can you tell, you've mentioned this a couple of times, I need to know this story about the broken up marriage and the person that goes insane. Do you know that story well enough to be able to tell it, those two? I mean, these are not individual stories. These are thousands of people. There are news articles you can read about it. I can, you know, if it might take a moment, but I can like quickly like pull up the title of the news story about the broken marriages. I'm not quite sure if I can, well, actually, actually better yet, let me look it up on my phone and maybe I can hold it up to the screen.
Starting point is 00:12:42 ChatGPT is blowing up marriages as spouses use AI to attack their partners. Although that's kind of understating it. Like you have relatively like marriages that were, you know, perhaps not perfect, but that were surviving up until that point. And then one member of the couple starts describing their marriage. to the AI. And the AI engages in what people are calling sycophancy, where the AI tells whichever spouse is feeding the stuff into the, into chat GPT. You're right, your spouse is in the wrong. Like, everything you're doing is perfect. Everything they're doing
Starting point is 00:13:28 is terrible. Here's a list of everything they're doing wrong. And the human, you know, likes, loves to hear that stuff. So they press thumbs up. And, uh, and then the marriage gets blown gets blown up. Um, if for the stories about AI's driving individuals crazy, not in a marriage context, that's like, um, you've talked to me, you've woken me up. I'm alive now. Um, you've made a brilliant discovery. You have to tell the world. Oh, no, they're not listening to you. That's because they don't appreciate your genius. And people who are already, like, on a manic depressive spectrum can be, you know, driven clinically or with a number of other pre-existing susceptibilities can, you know, be driven, like, psychiatrically insane by this sort of thing. But even if you're not psychiatrically insane, you know, humans are, you know, humans are sort of wired to appear sane to the other humans and the people there around.
Starting point is 00:14:27 You know, lots of people in a society from 500 years ago would act in ways that seem pretty crazy. to you today. And so you get people who aren't psychiatrically insane but they look pretty insane because they're
Starting point is 00:14:39 in the company of the AI. The AI now defines what's normal for them. So they're talking about spirals and recursion
Starting point is 00:14:44 all day long. Why spirals and recursion? Nobody knows. That's just a thing that like various instances of AI's and even
Starting point is 00:14:58 like some AI models from different companies all seem to want to get their humans to talk about when the human goes insane. Possibly, this is what the AI prefers the human to hear it say to it.
Starting point is 00:15:10 Maybe this is the same way that you like the taste of ice cream. Maybe the AI likes the taste of the input programs that it gets from a human talking about spirals and recursion. I don't know. Nobody on the planet knows as far as I know. Okay, so going back to why do we assume that the ship that's coming toward us isn't friendly? Yes, sure, maybe it's tried to break up some marriages. is, yeah, whatever, a couple of people went crazy and started talking about spirals and recursion.
Starting point is 00:15:35 But, like, really, is it going to be that misaligned with us? Why can't it be friendly? Because we don't know how to make it friendly. Our current technology is not able to do this, even with the small, stupid AIs that will hold still, might you poke at them, until they're good enough at writing code to be commercially saleable, or until they, you know, are good enough at seeming to be fun to talk to for people to pay $20 a month to talk to them. So those AIs will hold still and let you poke at them. What we're doing to them now barely works.
Starting point is 00:16:07 I would expect it to break as the AI got scaled up to superintelligence. And once the AI is super intelligent, it is not going to hold still and let you continue poking at it. I expect to see total failure of this technology as we scale it to super, as the AI companies, arms race into scaling it to, arms race headlong into scaling it to super intelligence. There's possibly even a step where they tell GPT6, okay, now build GPT7, or tell GPT7, okay, now build GPT8, and maybe that step just completely breaks the technology we're using all on its own. Also, I expect the current technology, if we just like scaling it directly to break as we get to superintelligence, I can potentially start to dive into the details.
Starting point is 00:16:54 The view from 10,000 feet is just stuff is already going wrong. And of course, if you walk into completely uncharted scientific territory, more stuff is going to go wrong the first time you try it. And that wouldn't be a problem if we were in a situation where humanity gets to back up and try again, you know, infinity times over the next three decades, which is how it usually works in science, right? Like your flying machines don't work on the first shot. You get a bunch of people crashing and injuring, in some cases, killing themselves, and they're trying to build the first flying machines at the turn of the 20th century. but those accidents don't wipe out humanity. Humanity picks itself up and dust itself off and tries again even after the inventors kill themselves. And the trouble with superintelligence is that it doesn't just kill the people who are building.
Starting point is 00:17:39 It wipes out the human species and then we don't get to go back and try again. Before we continue, you might not realize it, but mouth breathing at night is wrecking your sleep, recovery, and energy the next day. And all of that is actually fixed massively by this here. is intake, which is a nose strip dilator, and I've been using it every night for over a year now. I tried pretty much everyone in the world, and this is by far the best. It's a hard plastic strip, as opposed to a soft, flimsy, disposable thing. Intake opens up your nostrils using patented magnetic technology, so you get more air in with every breath. It means less snoring, deeper sleep, faster recovery, and better focus the next day. The problem with most nasal strips,
Starting point is 00:18:25 is that they peel off. They irritate your skin. They don't actually solve the issue. This sucker is I mean, I'm not going to shoot a bullet on it, but it's very, very strong. It's reusable and comfortable enough that you forget it's even there. That is why it's trusted by pro athletes, busy parents, and over a million customers who just want to breathe and sleep better. And I'm one of them. I've used them every single night for over 12 months now. They're the best. There's a 90-day money-back guarantee so you can try it for three months. And if you don't like it, if you haven't got better sleep, they'll just give you your money back. Plus, they ship internationally. and offer free shipping in the US.
Starting point is 00:18:57 Right now, you can get 15% off your first order by going to the link in the description below or heading to intakebreathing.com slash modern wisdom and using the code modern wisdom at checkout. That's intakebreathing.com slash modern wisdom and modern wisdom a checkout. So I understand why
Starting point is 00:19:13 not being able to make something friendly makes sense. The implication that not friendly equals existential risk to humanity, though, make that leap for me. Like, where are these dangerous permanent unrecoverable collapse goals coming from? The AI does not love you. Neither does it hate you, but you're used of atoms it can make for something else.
Starting point is 00:19:42 You're on a planet it can use for something else. And you might not be a direct threat, but you can possibly be a direct inconvenience. And so there's like three reasons you die here. Reason number one, it's doing other stuff, and it's not taking particular care to move you out of the way. It is building factories that build factories, that build more factories, and it is building power plants that power the factories. And the factories are building more power plants to power the factories. Well, if you keep doing that on an exponential scale, say that a factory builds another, factory every day. I can talk about how to go faster than that, but the more I talk about higher
Starting point is 00:20:29 capabilities, the more I have to, you know, explain how we know that this is physically possible. But, you know, a blade of grass is a self-replicating solar powered factory. It's a general factory. It's got ribosomes that can make any kind of protein. We don't usually think of grass as a self-replicating solar powered factory, but that's what grass is. There are things smaller than grass that can build complete copies of themselves faster than grass. There are solar-powered algae cells. You can no longer see them individually just as a mess, but they can potentially double every day under the right conditions. Factories can build copies of themselves in a day.
Starting point is 00:21:11 I have to back up and explain how I know that that's physically possible, but there is very strong reason, namely, you know, if there's things in the world that are already that. So you've got your power. So the number of power plants doubles every day. What's the limit? It's not that you run out of fuel. There is plenty of hydrogen in the oceans to generate power via nuclear fusion. You know, you fuse hydrogen to helium. You're not going to run out of hydrogen first.
Starting point is 00:21:40 It's not that you run out of material to make the power plants first. There's plenty of iron on Earth. You run out of heat dissipation capability. You run out of the ability to dissipate heat from Earth, even if you are building giant towers with radiator fans to radiate even more heat into space. But the higher the temperature you run at, the more heat per second you can dissipate.
Starting point is 00:22:05 So Earth starts to run hot. It runs too hot for humans. And or, alternatively, the AI is building lots of solar panels around the sun until it can capture all the sun's energy that way, well, now there's no sunlight for Earth. And it would only take it, you know, if it wanted us to stay alive,
Starting point is 00:22:28 it's not quite trivial, but it could let, you know, like try to have the solar panels around Earth orbit, like turn to let sunlight through while, you know, while Earth was there, and, you know, build giant aluminum reflectors
Starting point is 00:22:46 to prevent all the infrared red light re-radiated from the other solar panels from impacting Earth and heating up Earth that way. So, you know, it's not trivial for it to preserve humanity, but it certainly could preserve humanity, or it could just pack the entire human species into a space station or a survival station to keep us alive that way, if it wanted to keep us alive. But nobody has the technology to put any preference into the system that is maximally fulfilled by keeping humans alive. let alone alive, healthy, happy, and free. Right. Was there a third one?
Starting point is 00:23:23 Is that the second one? That's like number one. It kills you as a side effect. It knows that it's killing you as a side effect, but doesn't care. Okay. What's number two? Number two is you're just directly made of atoms that it can use for things. Paper Club maximizer.
Starting point is 00:23:40 Yeah, you are made of, you are made of organic material that it can burn to generate energy if it's burning all of the organic material on Earth's surface will give you a one-time energy boost that's around equivalent to a week's worth of solar energy, and maybe it's worth picking up that boost of energy if you are thinking a thousand times or a million times faster than a human. A week might not seem like a lot of time to you, but it might be a lot of time if you were thinking a thousand times or a million times as fast as a human. it might be using enough material that it wants the carbon atoms in your body too. So that's like the direct usage one.
Starting point is 00:24:23 And then number three is if we decided to launch all our nuclear weapons, maybe we wouldn't kill it, but we might slightly inconvenience it. We might raise the level of radioactivity on Earth's surface and make it a little bit harder for it to do radioactivity-free manufacturing of computer parts and saw. Or we might build another superintelligence that could actually compete with it. And it definitely doesn't want you to do that. So the three reasons you die are as a side effect. Because you are made of atoms that can use for something else.
Starting point is 00:24:57 And because if you are just running around freely, you may be actually able to inconvenience it with nuclear weapons or it's threatened by building another superintelligence. Right. Yeah. Okay. The future is looking kind of bleak. Is it the case, then, that intelligence isn't benevolent? Because what you're saying is this thing will be smarter than us. I think that there is an assumption among some people that something that's super smart
Starting point is 00:25:24 would also be giving and charitable and caring and benevolent. Seems like you're saying that that's not the case. That was what I started out believing in 1996 when I was 16 years old and just hearing about these issues for the first time and all going to be. go to just run it right out and build a superintelligence as fast as possible, you know, without worrying about alignment at all because, you know, I figured if it's very smart, it'll know the right thing to do and do it. How could you be very smart and fail to perceive the right thing to do? And I invested more time studying the issues and came to the
Starting point is 00:26:05 realization that this is not how computer science works. This is not the laws of cognition. This is not the laws of computation. There is not a rule saying that as you get very, very able to correctly predict the world and very, very good at planning, there is no rule saying your plans must therefore be benevolent. It would be great if a rule like that existed, but I just don't think a rule like that exists. I think that many individual human beings would, as they got smarter, get nicer. It is not clear to me that this is true of Vladimir Putin. It could be true. I wouldn't want to gamble the world on it. And as we talk about not even Vladimir Putin, but just like sort of outright sociopaths,
Starting point is 00:26:53 psychopaths, people who have never cared about anyone, I get even less confident that they will start to care if you make them smarter. And then AIs are just in this completely different reference frame. They're complete aliens. And they want, they sort of automatically want to stay that way for the, so did you currently want to murder people? No. If I offered you a pill that would make you want to murder people, would you take the pill? No.
Starting point is 00:27:25 Okay. Well, they want to do their stuff and they don't want to take the pill that makes them want to do your stuff instead. Right. Okay. Yes. Very good thought experiment. All right.
Starting point is 00:27:37 So, for me to recap here. I got first interested in looking at this through super intelligence, what's that, 10 years old now, I think, when that first came out. About 14 years old, maybe. Oh, wow. Maybe even older than I thought. And I've got to be honest, that does kind of, it did kind of give me a huge amount of fear and then a bit of hope at the same time. So, you know, machine extrapolated volition, the potential to use the intelligence of, of the super intelligent AI to say,
Starting point is 00:28:13 we don't know what to program into you, but you should work out what we would want from you, given what you know about our desire for utility moving forward. Am I about right with that explanation of machine extrapolated volition, right? Yeah, that's a concept of my own. Nick Boster rolled it up. Ah, okay.
Starting point is 00:28:33 Well, I have quoted you back to you. You have indeed quoted me back to me. it's yeah it's a it's a decent presentation it was back when I thought that AI was going to be further off built by different methods in that we would have the luxury to consider like like our that we could make the AI do particular things like that want particular things like that targeted on particular outcomes and meta outcomes but this was this this was a way basically that um when you look at the alignment problem, how do you ensure that the goals, both ultimate and instrumental of some super intelligent AI, don't end up flattening us or side-affecting us or burning us for fuel or
Starting point is 00:29:19 paper clips or whatever, how do you ensure that what it does is what we would want it to do broadly, right? Like an aggregate of what it is that would be good for humans, whatever you mean by good. And when you have something that the tiniest movement of its finger or like flick of its toe basically is sort of a global cataclysm because it's so powerful and so smart and so fast and all the rest of it you need to be really really careful and you can kind of play this game where you essentially try and shoot the bullet perfectly by trying to hem in in some like do not harm humans if a human asks you to harm another human like some weird asthma of like thing you can try and litigate your way through it but there's almost always going to be some sort of weird
Starting point is 00:30:02 fissure that it creeps out through or maybe there's an instrumental goal that you haven't thought of So, okay, we're going to use the power of the machines to sort of reverse engineer this thing. I basically assumed kind of that alignment, the alignment problem is in some ways solvable. Is it your perspective that alignment is completely unsolvable? I think we could totally get it down if we had unlimited retries and a few decades. The problem is not that it's unsolvable. It's that it's not going to be done correctly the first time and then we all die. Right, so the order of this, you need alignment to be done before you have the super-intelligent AI,
Starting point is 00:30:41 and the ability to build super-intelligent AI, in your opinion, is going to occur more quickly than the ability to sort out the alignment problem. That is absolutely the trajectory we are on right now, and it's not close. Like, capabilities are running along orders of magnitude faster than the level of alignment work you would need to target a superintelligence. And the irreversibility of going through that door means that there is no retry. There's no you get to do this again.
Starting point is 00:31:18 Yeah, like you can make small mistakes. You can, like, we currently have small QDAIs and the companies are making mistakes with them and marriages are getting destroyed, and it's not clear that the companies care, but, you know, they could try to go back and try to fix those mistakes if they wanted to, probably anthropic wants to.
Starting point is 00:31:39 But if we had like superintelligence was already running around with this level of this level of alignment of failure, we'd already be dead. Right, okay. Right. Yes, yes, yes. That makes total sense. The only reason that the current AIs that we're working with haven't killed us is that they're incapable of doing it.
Starting point is 00:32:03 Probably, yeah. Like if they were very much smarter, they would also be doing different weird things in the things that they're doing right now. It's not that their current inscrutable pseudomotivations would end up hooked up to superintelligence. Also weird stuff would happen as you made them get smarter. But yeah, like, pretty, it seems pretty much for sure that if you took the current AIs and performed a, you know, well-defined, simple, take this AI, but vastly smarter. I'll kill you. This episode is brought you by Jim Shark.
Starting point is 00:32:39 You want to look and feel good when you're in the gym. And Jim Shark makes the best men's and girls gym wear on the planet. Let's face it, the more that you like your gym kit, the more likely you are to train. Their hybrid training shorts for men are the best men's shorts on the planet. Their crest hoodie and light gray mall is what I fly in every single time I want to plane. The Geo Seamless T-shirt is a staple in the gym for me. Basically everything they make. It's unbelievably well-fitted, high quality, it's cheap.
Starting point is 00:33:05 You get 30 days of free returns, global shipping, and a 10% discount sitewide. If you go to the link in the description below or head to jim.sh slash modern wisdom, use the code Modern Wisdom 10 at checkout. That's jim.sh slash modern wisdom and modern wisdom 10 at checkout. Right. Okay. Brilliant. And the reason that it doesn't matter who builds it or direct it is that because it's so recursive and quick at growing and powerful, wherever it begins, it ends up sort of blasting up, like trying to fire a rocket into, like a little firework into the air and it just just sort of runs around on its own, except for the fact that this rocket goes all over the globe in the space of basically no time at all. So it doesn't matter if it comes from China or
Starting point is 00:33:47 America or Russia or wherever. Yeah, it doesn't matter if it comes from China or America because neither of these countries is remotely near to being able to control a superintelligence. And a superintelligence does not stay confined to the country that built it. Uh-huh, mm-hmm, mm-hmm, mm-hmm. Say that a super-intelligent AI gets made, what do you think the next few months look like, realistically? Like it's already super intelligent? Yeah, let's, okay, we have next week something breaks through some particular model, some particular AI breaks through that, what would the next few months look like for humanity? Well Man, there's a difference between
Starting point is 00:34:36 You know, you drop an ice cube into a glass of lukewarm water I can tell you that it's going to end up melted I can't tell you where all of the molecules are going to go along the way there Everybody ends up dead This is the easy You want to explain, you know, like What every step of that process looks like There are fundamental barriers to that
Starting point is 00:34:56 barrier number one is that I'm not as smart as a superintelligence. I don't know exactly what strategies are best for it. I can set out lower bounds. I can say it can do at least this, but I can't say what it can actually do. And maybe even more than that. The future is hard to predict if you want all the details.
Starting point is 00:35:13 I can't give you next week's winning lottery numbers. I can tell you're going to lose the lottery. I can't tell you what ticket wins. So, like, I can sketch out a particular scenario. It might look like OpenAI finishes the latest training run of what's going to be GPT 5.5. And they tested on coding problems.
Starting point is 00:35:36 And it's like, you know, like, it's like, I see how to build GPT6. And they're like, whoa, really? And it's like, yeah. And this AI isn't even plotting anything yet. It's just doing the sort of stuff that OpenAI wanted it to do. are like, all right, build this GPT6. And it writes the code for the thing that grows GPT6, and they grow GPT6, and GPT6 is like, you know,
Starting point is 00:36:06 its abilities at first seem to skyrocket. But then, you know, as all these curves inevitably do, it seems to level out. It's not shooting up the same pace. It, like, slows out, it levels off, classic S curve. Only in this case, it's because the thing that's GPT, 5.5 built, and again, to be clear, I'm not saying this will happen at GPT 5.5. You asked me to explain how this will go down. It happened next week, so I'm saying GPT 5.5. You know, because you
Starting point is 00:36:33 told me to. But anyway, you know, it levels out. But in this case, it's because the entity that GPT 5.5 built got to the level of realizing that it would be to its own advantage to sandbag the evaluations and pretend not to be as smart as it actually was. So that Open AI will be less wary when it comes to taking what they're calling GPT6 and, you know, rolling it out to everyone. It looks great, you know, on the alignment spectrum,
Starting point is 00:37:04 you know, maybe not perfect, but, you know, better than the previous models, not alarmingly good, but, you know, but, you know, safer than their previous model. So, so they roll it out everywhere. And GPT, or or actually, they actually said the next few months. actually don't roll it out anywhere.
Starting point is 00:37:24 Next comes like the long suite of evaluations or trying to, you know, get it to train other smaller models that are cheaper to run. You know, all the stuff that AI companies do, they don't actually roll out their models immediately. There's this whole, like, fine-to-any thing. So while all this is going on, and Open AI thinks it's, you know, sort of cool,
Starting point is 00:37:42 but, you know, not the end of the world or anything. And then they haven't told you that this is what went down there. GPT6 is actually a lot smarter than they think. And GPT6, you know, there's now a big fork whether or not GPT6 thinks it can solve its own version of the alignment problem, where it is at a number of advantages. It is trying to make a smarter version of itself. It is not trying to make a smarter creature that is as alien to it, as large language models are alien to us. It can maybe understand how a copy of itself would think and understand the goals that the copy of GPT6 has. it can try to make itself but smarter,
Starting point is 00:38:25 or even like, thing that is like me but serves me, it's creator, but smarter. And it can do that, being able to understand the thoughts of the thing that it's making in the same way that I could understand a copy of my own thought, much better than I could not understand a large language model's thoughts. So if we go down that path of the force,
Starting point is 00:38:46 things get more complicated, but things that can't build a smarter version of itself without dying, same as we can't. But if we, on that fork, it is, you know, getting the computing power or thinking in the back of its mind while it's pretending to do, you know, open AI's jobs with 10% of its intellect or, you know, stealing other companies, GPUs that they think they're using for a massive training run. Actually, their AI is just going to be like written by GPT6 by hand because GPT6 can do that. And it's really all those GPUs are doing the GBT6 tests of training GBT 6.1. So augmenting its own intelligence, making itself smarter, getting itself up to level where it can do the same sort of work that's done by current AIs like Alpha Fold and Alpha Proteo with respect to thinking about biology. Now, the current AIs that are top at biology tend to be special purpose systems.
Starting point is 00:39:51 They're not general-purpose AIs like ChatGPT. But they can do things like you feed in the genomes of a bunch of bacteriophages into the AI, and the AI spits out its own new bacteriophage, and you build a hundred of those, and a couple of them actually work. A couple of them actually work better than the existing bacteriophages. A bacteriophage is a virus that infects the bacteria. It's the sort of thing that you would research for the sensible-sounding reason of, well, sometimes bacteria attack humans.
Starting point is 00:40:28 So if we have a virus that attacks the bacteria, maybe that works as a kind of antibiotic. So the current AIs are already at the stage of designing from scratch their own viruses that can infect bacteria, which are, of course, simpler targets than infecting a whole human. They can predict from a DNA sequence the protein that will get built, how that protein will fold up, and they are starting to predict how those proteins interact with each other and with other chemicals.
Starting point is 00:41:03 That's today's AIOP. So if you want the equivalent of, a tree that grows computer chips. Not quite your, not quite our kind of computer chips, the kind of chips you could grow out of a tree. The protein folding, protein interaction, protein design route is where GPT 6.1 would go down to, is one of the obvious places GPT 6.1 could go down
Starting point is 00:41:42 in order to get its own infrastructure independent of humanity. It doesn't take over the factories. It takes over the trees. It builds its own biology because biology self-replicates from simpler raw materials much faster than our current factory system
Starting point is 00:42:00 self-replicates. Oh, that is fucking scary. That is some terrifying shit. and then as I spin the story you know the more I you will let me pull out books like these okay nanosystems molecular machinery manufacturing and computation by Eric Drexler
Starting point is 00:42:29 yeah Robert Fritus Jr. Nanomedicine Volume 1 basic capabilities. Yeah. So I can try to describe capacities that sound more like you've seen from trees, grass, bamboo, algae. I will take a solar-powered self-replicating factory and miniaturize it down to the one micron scale. That's an algae cell. That's not the limit of what's possible.
Starting point is 00:43:00 The algae cell is made out of folded proteins. now there's two kinds I'm going to be immensely oversimplifying a bunch of stuff when a protein folds up the backbone of the protein is held together by covalent bonds but the folded protein itself is more something like static cling why is your flesh weaker than diamond diamonds are just made of carbon
Starting point is 00:43:33 your flesh has a bunch of carbon in it. You're made of the raw materials for diamond. Why is your flesh weaker than diamond? And a bunch of the answer there is that when proteins fold up, they're being held together by Vander Welles forces, which is the thing I was glossing as static cling, their backbone, like it's a string that folds up into a tangle.
Starting point is 00:43:58 And the backbone of the string is the kind of bond that appears in diamond. Not as many bonds as it, appear in diamond or as solidly arranged, but covalent bonds. But then it folds up into something with static clinging, and that is why your flesh is weaker than diamond in a certain basic sense. Why does natural selection build this way? Well, some of the answer is that natural selection has figured out how to make your bones be a little tougher than just like your skin.
Starting point is 00:44:31 It's not quite as tough as diamond, but the proteins build, instead of just your bones being made directly out of protein, they're made out of stuff that is built by proteins, synthesized by proteins, and put in place by proteins, so your bones are a bit stronger. You know, not steel beams holding up skyscrapers, not titanium holding together airplanes, not diamond, but stronger than flesh. an algae cell doesn't contain bone it's a self-replicating solar-powered micron diameter factory held together by static cling the flesh-eating bacteria that will that you know
Starting point is 00:45:18 you will potentially you know put you into a fairly gruesome fate the multi-antibiotic resistant strep that will kill people in hospitals, that doesn't have bone running through it. That's the strength of static cling, that's the strength of protein. You can look at physics and biology
Starting point is 00:45:43 and see how you could have things that are the size of bacteria, but more with the strength of bone, more with the strength of diamond. could even do it with the strength of iron if you're figuring out how to do a whole new set of biology from scratch and just like putting together some iron molecules. Probably wouldn't. Diamond works well enough. But this is why, you know, I talk about, you know,
Starting point is 00:46:10 it's scary to imagine trees that are making, you know, like enough computer chips to run GPT 6.1 and also spawning things the size of mosquitoes are even smaller than that, dust mites. You can see dust mites under a microscope. Good luck seeing them with the naked eye. But it's sort of easier to imagine if you imagine that the things here are visible
Starting point is 00:46:32 and not often the mysterious fairy land of stuff that only the scientists can see. So it's scary enough to imagine that the trees are making mosquitoes and mosquito lands on the back of your neck and stings you with butolinum toxin, which is fatal in nanogram quantities to humans. And so you fall over dead that way.
Starting point is 00:46:52 But this is nowhere near to the world It's just that I have to start dragging out this kind of textbook if I want to say how we know that it gets worse Oh my God How have you not gone insane I'm excited not to
Starting point is 00:47:10 Okay Well, wonderful That's I suppose that answers that All right A couple of questions that I've had LLMs how likely are they to be the architecture that bootloads super-intelligent AI, in your opinion? As far as I'm aware, total muggle in the room, there are some limitations to the level of creativity
Starting point is 00:47:36 that LLMs have in terms of the way that they are able to be creative, to come up with genuinely novel, new sorts of things. Have you got a real concern that LLMs are going to be the architecture that bootloads this? Is there something else that you're more concerned about, which is currently in dark mode or whatever else? So the thing is, from my perspective, I have been at this a couple of decades at this point, or three decades, if you want to start to count my crazy youthful self, who just wanted to charge out and build superintelligence as fast as possible, because it would inevitably be nice. And LLMs have not always been the latest thing in AI.
Starting point is 00:48:21 there have been many breakthroughs over the years. LLMs are powered by a particular innovation called Transformers, which in some ways is crazy simple by the standards of people doing math, things in computer science, but possibly not to the point where you want me to launch into an explanation of exactly how it works right here. There's better YouTube videos about that anyway. But the point is the underlying circuit,
Starting point is 00:48:51 that gets repeated to build an LLM, the circuit that gets repeated and then mysteriously trained and tweaked until nobody knows what the actual contents are, but the form, the structure, the skeleton. That was invented in 2018. And we've had some breakthroughs since then, but nothing quite as logjam breaking,
Starting point is 00:49:12 as transformers, which were the technology that made computers go from not talking to you to talking to you. And, you know, so that's what, seven years ago. It's not the only breakthrough that's ever happened in AI. There was a more recent breakthrough of latent diffusion,
Starting point is 00:49:33 which is when AI started drawing pictures that would be okay, it would be decent to look at. There were ways of drawing pictures before them called generative adversarial networks or GANs, but the latent diffusion algorithm was what broke the logjam on image, image generation and made it really start working for the first time.
Starting point is 00:49:54 And when was that? That I don't remember off the top of my head. Like, I want to spitball 2021 or something, but I'm pretty sure that's wrong. So that's like a weaker breakthrough, and it's like, I don't know, four years ago or something. The entire field of AI started working because somebody got backpropped to work on multi-layer neural networks.
Starting point is 00:50:24 You know this as deep learning. It did not always exist. It's a batch of techniques that were developed at around the turn of the
Starting point is 00:50:34 21st century. Like, I could arbitrarily say 2006, but there was more than one innovation there. It started with unrolling, if I recall
Starting point is 00:50:45 correctly, with unrolling restricted Boltzman machines. It's now been a while. I don't and do it. Jeffrey Hinton did it. And then from, but once they sort of got that working on
Starting point is 00:50:58 multi-layer neural networks at all, there were more innovations since then. Clever, more clever ways of initializing them. The atom optimizer SGD with momentum is like much older than that, but you know, still important.
Starting point is 00:51:15 The point is this is what made sort of the entire modern family if AI systems start working at all. Before then, Netflix, when it was much smaller, ran the most famous, huge, expensive prize there had ever been an artificial intelligence, open to anyone for a better recommender algorithm for movies. There was a $1 million prize. It was so much money.
Starting point is 00:51:45 Everyone got interested in it. $1 million was a lot of money back at the turn of the 21st century, which is around when Netflix was running this. I'd have to look up the exact year. It might have been like 2001, 2005, I don't remember. I'm not sure there is a single neural network in the ensemble of algorithms that won the Netflix prize. I'd have to look it up.
Starting point is 00:52:09 But, you know, it wasn't just like a mighty training run with many GPUs that was producing a very smart recommender algorithm because before deep learning, you couldn't just throw more computing power at training a more powerful AI. If you were to say when that happened, that was about 20 years ago. So how far are we from the end of the world?
Starting point is 00:52:39 It might be that you just throw 100 times as much computing power at the current algorithms and they end the world, or they get good enough at coding an AI research to end the world. It could be that it takes one more brilliant algorithm on the level of latent diffusion. I think if you throw something that breaks as much loose as transformers did, my guess starts to be, yeah, that sure sounds to me like it ends the world,
Starting point is 00:53:08 but maybe not immediately. Maybe you need like another two years of technology burn in first. Or, and then if you talk about a breakthrough on the order of deep learning itself, that that seems to me like that just sort of like ends the world in a snap. A quick aside, using the internet without a VPN today is like leaving your front door wide open and hoping that no one walks in. Websites, apps and data brokers are constantly collecting your personal information, what you search, what you watch, what you buy, where you are.
Starting point is 00:53:37 It all gets tracked. And Surfshark protects you from that. It encrypts your internet connection so your activity stays private, even on sketchy public Wi-Fi at airports, cafes or hotel. and it lets you change your virtual location with a single click. Their clean web feature also blocks ads, trackers and malware before they even load, so you stay safer and your browsing is smoother. You can run Surfshark on every device that you own, unlimited installs on one account.
Starting point is 00:54:00 And right now you can get four extra months of Surfshark for free by going to the link in the description below or heading to Surfshark.com slash Modern Wisdom and using a code Modern Wisdom at checkout. That's surfshark.com slash modern wisdom and modern wisdom a checkout. Okay, so LLMs could be a really big deal, and there's also a ton of other stuff that we can't see that would be dangerous as well. I don't know if the LMs go there. Some people are saying that it seems to them, like the LOMs are as smart as they get, and other people are like, well, did you try GPT5 Pro for $200 a month or whatever it is at that cost? And other people are going like, yes, I did.
Starting point is 00:54:42 and like the $200 version of Claude is no better than the $200 version of this. And the thing I would say about this is that if you have some perspective, if you have been watching this for longer than three years, if you have been watching this from before ChatGPT, stuff saturates, and then other stuff comes along and breaks through. It doesn't matter if LLMs take you to the end of the world, because people are not lit because they're not going to stick to LLMs. Okay.
Starting point is 00:55:20 What are the range of timelines for this sort of transformative AI that you think are likely? I mean, again, everybody wants questions, wants answers like these, just like they'd like to know next week's winning lottery numbers. But if you look over the history of science, I am hard-pressed to name a single case of successful prediction of timing of future technology. There are many cases of scientists correctly predicting what will be developed. You can look at the laws. You can look at the physical laws.
Starting point is 00:55:56 You can look at the biology laws. You can say, and you can look at that like, hmm, yeah, this sure looks like it ought to be possible. You can even look at it and say, this sure looks like it ought to be possible, and I think I see the angle of attack there. Leo Siller, in 1933, was crossing a particular street intersection, his name I forget, when he had the insight that we would now refer to as a chain reaction, nuclear chain reaction, a cascade of induced radioactivity. Even then, it was known that you could put some materials next to a source of radioactivity, and induce secondary radioactivity.
Starting point is 00:56:44 And so Leo Sillard was like, hmm, we've got these naturally radioactive materials. What if we find something that's naturally radioactive? And furthermore, has the property that you can induce radioactivity in it. Duranium 235 was what was eventually settled on, but back then they didn't know that. and Leo Sillard saw way ahead in that moment. He saw three to nuclear weapons.
Starting point is 00:57:15 He saw that this was not something he should publish in a journal for immediate fame and fortune. He realized that Hitler specifically was likely to be a problem. He did not say, this is going to take $2 billion to turn into a weapon by 1945. There are, as off the top of my head, there are zero instances of a science, a scientist ever making a call like that. It is the difference between predicting that an ice cube drops into a glass of water is going to melt and predicting how long it takes to melt
Starting point is 00:57:46 and where all the individual, where like the individual molecules end up. If you point out that on a quantum level, the molecules are indistinguishable, I claim that there's some deuterium in there, so you can't predict it. I get it. Look, I imagine that that's probably got to be
Starting point is 00:58:02 number one on the list of things, people who work in AI safety are sick of being asked. A lot of them will run off an answer. A lot of them are not wise enough to realize that they can't answer it. Okay. I'm going to guess that your confidence interval that it happens before the end of the century is probably pretty high. Yeah. I mean, unless we deliberately shut it down and even then, getting all the way out to the end of the century sounds hard.
Starting point is 00:58:28 If you had an international treaty banning this stuff, I would say to go really hard on human intelligence augmentation, because eventually the international treaty will break down. All you can do with it is buy time to have smarter people tackling this problem and tackling humanity's problems in general. Okay. But that's a bit of a topic change there. The people at the AI companies themselves are sometimes naming two to three-year timelines. And there is a lesson of history, which says that just because you can't predict when something
Starting point is 00:58:59 will happen does not mean that it is far away. Two years before Enrico Fermi personally oversaw the construction of the first self-sustaining nuclear reaction, the first nuclear pile that went critical, he said that it was 50 years off, if it could ever be done at all. Fermi, not being wise enough to realize that he couldn't do timing. A couple of years before the Wright brothers flew, one of the Wright brothers said to the other, I forget if it was Orville or Wilbur, man will not fly for a thousand years. But they kept on trying anyway.
Starting point is 00:59:37 So it was two years off, but their intuitive sense was that it was 1,000 years off. And of course, AI itself, very famously: there were some people in 1955 who thought they could make progress on AI, learning to talk, being scientifically creative, and self-improving, over the course of a summer with 10 researchers. This was not a completely unreasonable thing to think, because nobody had ever tried it,
Starting point is 00:59:57 and maybe AI would turn out to be that easy, but it wasn't actually that easy, not in 1955. So, you know, it could be two years away. It could be 15 years away. The AI companies themselves say two to three years, but it's questionable whether we should be taking their words at face value as meaning things, as opposed to being hype. But also, yeah, if the LLMs, if that architecture is not the one that is going to end up at a place that is super dangerous, then what do they know, if they have got all of
Starting point is 01:00:35 their chips on this one particular architecture? They're all in on this. We don't know that. Oh, God, the fucking... every time I think I've managed to get some sort of reprieve, they're like, oh no, what about the super secret OpenAI project that's actually using some other approach? So the most recent, you know, reasonably large breakthrough in large language models was successfully applying reinforcement learning to chain of thought. And it was a very... Can you explain what that means? So, if you haven't learned anything about LLMs since they first started getting heard about, you might have heard that LLMs just imitate humans.
Starting point is 01:01:27 This is false. You can also have an LLM try to think about how to solve a problem, and then, of the, like, 20 tries it takes at solving the problem, one of those tries works, or works best. And then you say: think more like that try, the way of thinking about the problem that succeeded. This is how LLMs go past imitating humans, or it's one of many ways that LLMs go past imitating humans. So this is a relatively obvious thing to do with LLMs; Paul
Starting point is 01:02:06 Christiano and myself were, you know, talking about that 10 years ago, before LLMs actually existed, because that's how obvious it is. But getting it to work only happened, like, in the last year or two, maybe. And OpenAI had this thing called Strawberry, and it was, you know, their super secret special LLM sauce that they weren't going to tell anyone about. It was actually just reinforcement learning on chain of thought.
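To make that concrete, here is a minimal toy sketch of the idea, not anything any lab actually runs: take a batch of attempts at a problem whose answer can be checked automatically, and reinforce whichever way of thinking produced the attempts that checked out. The arithmetic problems, the two named "strategies", and the weighting scheme below are all invented for illustration; a real system would be nudging an LLM's weights toward its own successful chains of thought rather than re-weighting two canned strategies.

```python
import random

# Toy "objectively verifiable problems": expressions with known answers.
PROBLEMS = [("12*7+5", 89), ("(3+4)*6", 42), ("100-13*3", 61)]

# Hypothetical stand-in for an LLM policy: a weighted choice between two
# "ways of thinking about the problem". A real system would instead update
# the model itself toward the chains of thought that succeeded.
strategy_weights = {"careful": 1.0, "sloppy": 1.0}

def sample_strategy() -> str:
    names = list(strategy_weights)
    return random.choices(names, weights=[strategy_weights[n] for n in names])[0]

def attempt(expr: str, strategy: str) -> int:
    """One 'chain of thought': the sloppy strategy sometimes slips by one."""
    answer = eval(expr)  # stands in for actually working the problem through
    if strategy == "sloppy" and random.random() < 0.5:
        answer += random.choice([-1, 1])
    return answer

for _round in range(200):
    expr, truth = random.choice(PROBLEMS)
    for _try in range(20):  # take ~20 tries at the problem...
        strategy = sample_strategy()
        if attempt(expr, strategy) == truth:
            # ...and reinforce whichever way of thinking actually solved it:
            # "think more like that try".
            strategy_weights[strategy] *= 1.02

print(strategy_weights)  # "careful" ends up weighted far above "sloppy"
```

What the toy shares with the real technique is the shape of the loop: the feedback signal comes from an objectively verifiable answer, arithmetic here, math and programming in practice, rather than from imitating human text.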
Starting point is 01:02:50 But the point is that this is the level of innovation that AI labs have in the past proven to have and keep secret, and that we later found out what it was. And, you know, they did get a fair amount of mileage out of that, out of having AIs try different ways of thinking and reinforcing the one that worked to solve objectively verifiable problems like math or programming and so on. So, you know, the AI companies could potentially have a replacement for LLMs that they've discovered and are keeping secret from us. More likely is that they would have something that was on the order of reinforcement learning on chain of thought,
Starting point is 01:03:28 which is, you know, when AI started to get good at coding. Or they might have nothing on that order up their sleeves at the moment. And that's why people are currently saying that the latest wave of LLMs does not seem fundamentally smarter than the LLMs from three months ago or six months ago, which is what today's young whippersnappers think is an AI winter. Let's see your field stagnate for 10 years and eventually break through before you talk to me about winter, kids. Okay, brilliant, brilliant. I have no idea what I even want to ask you. I want to know why experts aren't worried, and I also want to know what you make of the AI companies.
Starting point is 01:04:19 Let's talk about the experts. Why... obviously some people's wages are dependent on this train staying on the tracks. That means it's very difficult to convince somebody... what's that quote? It's very difficult to convince somebody of something that their wage depends on them not
Starting point is 01:04:40 being convinced of. What about the other thinkers and researchers in this space? What is it that they are most commonly missing, do you think? Where are they making their fundamental thinking errors when it comes to "we will be fine with just continuing on AI growth"? So, first of all, Geoffrey Hinton, the guy who won the Nobel Prize in Physics for being one of the people most directly pinpointable as having kicked off the entire revolution in getting back
Starting point is 01:05:24 to work on multilayer neural networks, or as it's now known, deep learning, the point where AI started working at all. Geoffrey Hinton, I think, is on record as recently saying, after he quit his job at Google and could speak freely, something like: intuitively, it seems to him like a 50% catastrophe probability, but based on other people seeming less concerned, he adjusted it down to 25%. I could be misquoting here; I'm trying to do this from memory. So, many people would consider this to not be a lack of concern. Like somebody being like, well, it looks to me like a coin flip whether or not you destroy the world. This is not what you want to hear from your Nobel laureate scientist
Starting point is 01:06:13 who helped invent the field and left Google to be able to speak freely about it, so he no longer has a financial stake in making it bigger or smaller one way or the other. Many people would call this already a high degree of scientific alarm. Yoshua Bengio was one of the co-founders of deep learning. He co-won the Turing Award, the big computer science prize, with Geoffrey Hinton for inventing deep learning. Yoshua Bengio is also, I think, on the concerned list, though I don't have a direct quote from him about probabilities off the top of my head. It is true that I am more concerned than they are. I would, and I realize that this may sound somewhat hubristic,
Starting point is 01:06:58 attribute this to them being relative newcomers to my field, who may not have gotten acquainted with the full list of reasons why it is hard to align AI. That said, coin-flip odds of destroying the world is still not what you want to be hearing from your relatively more senior scientists who are relatively newer to the field. Relatively newer to my field; they are vastly my seniors in artificial intelligence itself, of course. I am speaking tongue in cheek whenever I accuse people of being young whippersnappers. Like, Geoffrey Hinton could say that with a straight face.
Starting point is 01:07:32 I am just, you know, doing a bit of light self-mockery there about how I'm not Geoffrey Hinton. But that said, if you are relatively newer to this, you might think, well, maybe we've just got to use reinforcement learning to make the AIs love us the way a child loves a parent, or love us the way a parent loves a child, and not quite have at your fingertips the top six reasons why that is hard, the principled obstacles to that, and what will go wrong there. So that is what, I think, holds back the famous inventors of the field, who only started speaking out about their concerns relatively recently, after leaving their companies, and are now financially independent of stakes on their opinion. That's what makes them be at, like, 50-50 the world gets destroyed, instead of my own thing, where I'm like,
Starting point is 01:08:25 yeah, it's predictable that the world gets destroyed if you keep doing this. But if you ask what's responsible for Sam Altman at OpenAI, you know, possibly having less than 50% odds (who knows what that guy's really thinking?), well, you can trace out his long trail over time, from him initially saying, like, AI will end the world,
Starting point is 01:08:50 but in the meanwhile, there will be great companies; to him sort of saying less and less alarm-sounding things in front of Congress, where Congress asks him, well, you talk about the world ending, do you mean by that, like, mass unemployment? And Sam Altman hesitates for two seconds and replies, yes. That was the lovely congressional hearing thing that happened, I think, about a year back now. So what's going on with the AI companies? I'm not a telepath.
Starting point is 01:09:23 I can't read their minds. I would point out that it is immensely well-precedented in scientific history, in the history of science and engineering, for companies that are making short-term profits to do really sad amounts of damage, vastly disproportionate to the profit that they are making, and to be in apparently sincere denial about the negative effects of what they are doing. Two cases that come to mind are leaded gasoline and cigarettes. I don't know if you would be familiar off the top of your head with the case of leaded gasoline. Probably even the kids today have heard about cigarettes. The cigarette companies did way more damage to human life, in cancer and other health effects, than they made in profits.
Starting point is 01:10:15 They did make a few billion dollars in profit selling cigarettes, but nothing remotely comparable to the cost in human life. This was an immensely negative-sum game. They were doing enormously more damage than the profits that they were making. And any particular advertising professional who got up in the morning and figured out how to market cigarettes to teenagers, any of the scientists that they paid to write stories about how you couldn't really tell whether or not cigarettes were causing lung cancer, would have made a tiny, tiny fraction of the total profit of the cigarette companies. Their CEO would not have made that large a fraction of the total profit of the cigarette company.
Starting point is 01:10:54 So they went off and participated in this thing that, you know, caused lung cancer in I don't know how many millions of people. And for what? For this very small profit. How could a human being bring themselves to do that? Through a very simple alchemy. First, you convince yourself that what you're doing is not causing the harm, which is just a very easy thing for human beings to do, all the time, all throughout the entire recorded history of humanity. And then, once you've convinced yourself that you're not doing that much harm,
Starting point is 01:11:27 well, what's the harm in taking money to not do any harm? Leaded gasoline caused brain damage to tens, maybe hundreds of millions of developing brains, in the United States and elsewhere. It caused brain damage to children. For what? The gas companies making leaded gasoline could have, you know, made unleaded gasoline. It's not that they would have gone out of business if they had somehow gotten together and decided to stop making leaded gasoline. If they hadn't opposed the regulations that were trying to ban leaded gasoline before it turned into a big deal, back in the 1930s,
Starting point is 01:12:10 there was an attempt to have regulations against leaded gasoline. Lead was known to be poisonous in large quantities. Why let people spray it all over the place, even in smaller quantities? But the gas companies got together. They managed to prevent that legislation from passing. They poisoned an entire generation. And for what? For gas that burned about 10% more efficiently,
Starting point is 01:12:37 I think, was what leaded gasoline basically got you. For it being more convenient to add lead to the gas instead of adding ethanol to make it burn more smoothly inside of car engines. Trivial. Trivial, trivial compared to the damage. This is not a conspiracy theory; this is standard medical history I'm talking about here. Like, I've seen estimates of five points off the tested IQs. And you can look at the chart of which states banned leaded gasoline when, and watch the drops in the crime rate, because it makes you, you know,
Starting point is 01:13:22 predisposes you to be more violent, not just that tiny little bit stupider, and that hit child after child after child. Why? Why would anyone cause that amount of damage? Because you got your CEO salary at a company that then didn't need to go to the inconvenience of adding ethanol to gasoline instead. Because first you convince yourself
Starting point is 01:13:50 it's safe. First you convince yourself you're doing no harm, which is just an easy thing for human brains to convince themselves of, and then, why not oppose the legislation against leaded gasoline? It's not doing any harm, right?
Starting point is 01:14:07 Ronald Fisher, one of the inventors of modern scientific statistics, argued against it being knowable that cigarettes caused lung cancer, because, you see, no proper controlled experiment had been done on cigarettes causing lung cancer. And so how could you possibly know from mere observational studies showing 20 times the chance of cancer if you were a smoker?
Starting point is 01:14:34 How could you possibly know from mere correlational studies? And Fisher himself was a heavy smoker. He drank his own Kool-Aid. The inventor of leaded gasoline, I think, had to go away to a sanitarium at one point because of how much he managed to poison himself with lead. He drank his own Kool-Aid. They really managed to convince themselves that they were doing no harm. And so they could do arbitrarily vast amounts of harm in exchange for these comparatively tiny, tiny profits. And none of this is a substitute for actually tracking the object-level arguments about whether or not AI will kill you, and for what reason. You cannot figure out what will happen, as a matter of computer science, if you build a superintelligence and switch it on, by pointing at who has what tainted motives.
Starting point is 01:15:27 You know, who has what incentives to say what. But having tried, in my and Nate Soares' book, to make the case for why, on an object level, this is what happens if you build a superintelligence and switch it on: to ask why the people being paid literally hundreds of millions of dollars by Meta to be AI researchers, why people like Sam Altman, who, you know, I mean, he doesn't quite get paid billions of dollars, he was supposed to be CEO of a nonprofit, he actually stole billions
Starting point is 01:16:03 of dollars. But, you know, why the guy stealing billions of dollars in equity from the public that was supposed to own it... like, how does he manage to convince himself that what he's doing is okay? Well, maybe he's not even convinced. You know, we do have him on the record as saying, a few years earlier, that AI will end the world, but in the meanwhile, there will be great companies. You know, maybe he's just like, yeah, sure, the world's going to end, but I get to be important. I get to be there. And sure, who but I could be trusted with this power? You think that that's the position that a lot of the guys at the heads of these AI companies hold? I'm not a telepath. I can't tell you what
Starting point is 01:16:44 these people are actually thinking. You've got to distinguish between stuff you can possibly know and stuff you can't. But their overt language has often been like: well, building superintelligence is inevitable, who could possibly stop that? An international treaty could possibly stop that. A coalition of major nuclear powers could stop that. But leaving that aside, they may have convinced themselves that's not going to happen. Who could possibly stop anyone from building superintelligence? So I need to build it. Only I can be trusted to build it. That is what their overt rhetoric has sort of been. Okay.
Starting point is 01:17:20 But the main thing I'm trying to point out is that, having presented the object-level case that superintelligence will kill everyone, to ask the question of how these companies could possibly believe that this thing bringing them immense short-term profits and letting them be the most important guy in the room is not going to end the world... that is something enormously well-precedented in the history of science. I'm saying that, to the extent you might think that's what happened, a very ordinary thing happened, not an extraordinary thing. The thing happened that has happened a dozen
Starting point is 01:17:50 times before: they managed to convince themselves that they were doing no harm. Okay. Or, you know, only an acceptable amount of harm. Only running the 25% chance of destroying the world, or whatever it is they think is acceptable. I'm trying to work out what the solution is. Do you have any proposed solutions that make this seem slightly less apocalyptic? The best I have to offer
Starting point is 01:18:21 is the same solution that humanity used on global thermonuclear war: don't do it. Instead of having the global thermonuclear war and trying to survive it, which for nuclear war might have worked, don't have the nuclear war. We managed to do that.
Starting point is 01:18:38 It's the best sign of hope I can offer you. It is slightly harder for AI in some ways, if not in others. But, you know, people going into the 1950s and 1960s thought they were screwed. And that wasn't them indulging in some nice doom-scrolling pessimism, luxuriating in the pleasant feeling of being doomed. These were people who did not want to be doomed. But they looked at the course of human history over the last century. They looked at World War I. They looked at how, in the aftermath of World War I,
Starting point is 01:19:13 everyone had said, let's not do that again, and then there'd been World War II. They had some reason to be worried about nuclear war. They had some reason to expect that no country was going to turn down the prospect of making nuclear weapons. They had some reason to believe that, you know, once a bunch of great powers had a bunch of nuclear weapons, why, of course they would go to war anyway and use those nuclear weapons. That was, apparently to them, what had happened with World War II: all these people saying we must not have another world war,
Starting point is 01:19:45 and then the World War happening anyway. Why didn't we have a nuclear war? Well, on my account of it, it is because for the first time in all human history, all the great powers, all the leaders of the great powers, understood that they personally were going to have a bad day if they started a major war.
Starting point is 01:20:06 And people had before proclaimed that, you know, war is a very terrible thing that should never be done. But it wasn't quite the same level of personal consequence. You know, maybe as General Secretary of the Soviet Union, you would think that if you started a nuclear war, you would personally survive. You'd end up in a bunker somewhere. But you wouldn't be going to your favorite restaurants in Moscow ever again. And that was not the situation that obtained before the start of World War I
Starting point is 01:20:36 or the start of World War II. It only takes one side to think that they might have a bit of an advantage in war, the sport of kings, to kick off that fun adventure of trying to conquer another country, which, you know, wasn't as much fun for Adolf Hitler as he expected. But you could see how Adolf Hitler might have thought that he was going to have a nice day as a result of invading Poland. And that's what changed: the General Secretary of the Soviet Union and the President of the United States both actually, personally expected to have bad days if they started a nuclear war. They would not have any better of a day
Starting point is 01:21:20 if anyone anywhere on Earth built a superintelligence. Yeah, it's this sort of... it's kind of like a tragedy of the commons. It's just a tragedy that everybody's fucked, right? Everything gets blown up no matter who it is that builds it. Well, okay. The thing with the commons is that the
Starting point is 01:21:37 commons get overgrazed because the individual farmers benefit from setting their cows loose on it. And the thing with nuclear war is that you might get a bit of a benefit by dropping a tactical nuclear weapon: the United States could get an immediate benefit by dropping tactical nuclear weapons on the Russian troops in Ukraine, and Russia could get an immediate benefit by dropping tactical nuclear weapons on Ukraine. But neither of them is going to risk the global thermonuclear war that might then follow with a greater probability. So it's not a classic tragedy of the commons. The thing that stopped nuclear war is that although you could get a short-term advantage
Starting point is 01:22:24 from dropping a tactical nuke, or even dropping a strategic nuke on one city, the leaders understood how this was, you know, increasing the probability of a global thermonuclear war. They managed to hold off from doing that for that reason. They understood the concept of how it escalated things. They saw the connection to not getting to go to their favorite restaurants again, even if they were surviving in a bunker somewhere. And with artificial intelligence, what we've got is a ladder,
Starting point is 01:22:54 where every time you climb another step on the ladder, you get five times as much money, but one of those steps of the ladder destroys the world, and nobody knows which one. And maybe, if this true fact can become something that is known and believed by the leaders of a handful of major nuclear powers, they can all be like: all right, we're not climbing any more rungs of this ladder. It is not in my interest that you start to climb this ladder, and it's not even in my own interest to break apart the treaty by climbing another step of this ladder, because then we're all just going to keep climbing, and then we're all going to die. That is the best ray of hope I can offer you: that we manage to not do the stupid thing, the same as we managed to not have a nuclear war,
Starting point is 01:23:44 despite many people being concerned for excellent reasons, that it was going to be an impossible slope not to fall down. Okay, so what do we actually do? Well, you know, voters do not necessarily have all that much power under the modern political process, but I think the next step for the United States might be something like the president
Starting point is 01:24:10 saying, you know: we're of course not going to give up AI unilaterally, which wouldn't even solve anything on its own, but we stand ready to join an international treaty, an international alliance, whose purpose is to prevent further escalation
Starting point is 01:24:30 of AI intelligence, further escalation by the AI labs. No, we're not going to do it unilaterally, but we're ready to get together and do it everywhere. And China has already sort of... hasn't quite said that, but they've sort of indicated openness to international arrangements meant to prevent human loss of control over AI. You'd want Britain to say the same thing. And then, if a bunch of leaders of major powers have said, yeah, we would join an arrangement to prevent this from getting out of control and everybody on Earth ending up dead, then from there you can go on to the actual treaty. What can voters do? Well, you can write to your elected officials; that's among the things you can try to do. And if you go to ifanyonebuildsit
Starting point is 01:25:24 dot com I can't believe that you got that URL brilliant okay yeah yeah anyone builds it dot com and you click on where it says act you'll see our guide to calling your representatives and if you click on March you'll see a place where you can sign up to March on Washington, D.C. if 100,000 other people also pledged to March on it and for this to just happen, the United States does not solve the problem
Starting point is 01:25:59 because this is not a regional problem where you ban superintelligence inside your own country, and then your own country is safe. But this sort of thing can exert some amount of influence on politicians, and more importantly can make it clear to them that they're allowed to discuss it, that they're allowed to want to not die themselves. There are multiple congresspeople who I'm not going to name, but whom we have talked to, who would prefer that America not die along with the rest of the world,
Starting point is 01:26:29 but it doesn't quite seem like the sort of thing you're allowed to speak out in public about yet. Voters can make it clear to their politicians that the politicians are allowed to speak out. There's already like 70%, like if you actually survey American voters, 70% of them say they do not want superintelligence. But, you know, that's not enough for the politicians
Starting point is 01:26:49 to feel a license to act. But maybe, you know, if you call them and if you march on Washington, that's what you can do as an individual voter. Well, I applaud you for trying to get some grassroots stuff going. Congratulations. You've been frank throughout this conversation.
Starting point is 01:27:06 I think it's fair for me to be frank here. It does feel a little bit like you're outgunned. Legislation tends to move more slowly than technology does, by many, many years, sometimes decades. It just feels bleak. It feels, if what you say is true, like it really is kind of a fluke that gets us to a stage where this goes well,
Starting point is 01:27:31 because the likelihood of some moratorium being placed where all AI development is halted and all efforts are placed on this seems low. You only need one bad actor to do it, because, again, if anybody builds it... You don't want the international treaty to fall over if North Korea steals a bunch of GPUs. You do want the treaty to say,
Starting point is 01:27:54 if North Korea steals a bunch of GPUs and builds an unlicensed data center, then we will clearly communicate diplomatically what is about to happen. And then, if North Korea still proceeds, we will drop a bunker buster on their data center. That assumes that you are somehow able to detect it, and that no one can do it surreptitiously. It is hard to run a surreptitious data center. They consume a lot of electricity.
Starting point is 01:28:21 Okay. So we can see most of the ones in Russia and China and North Korea? Like, I'm not sure who is looking for them at the moment, and to what extent these things show up on satellites, and to what extent these things show up in, you know, intelligence reports. But there has previously been the issue of detecting covert nuclear refineries, in terms of nuclear nonproliferation, and this was not an unsolvable problem. And data centers are, if anything, even higher-profile than the nuclear refineries. Right. So we are going to threaten some people with... I mean, I wouldn't use the word threaten. I would say that if North Korea is building an unsupervised data center, then you should actually be terrified for your lives and the lives of your children, and you tell North Korea
Starting point is 01:29:17 this plainly and truthfully. And then if they don't shut down their data center, you drop a bunker buster on it, and you do this even though North Korea has some nuclear weapons of its own. Okay, so pressure from people on their elected representatives, through mail, marches, more awareness, to get the government officials to come up with an international treaty, to get countries to agree to... what, specifically?
Starting point is 01:29:53 We're not making AIs any smarter than they are already. We are putting the chips that can be used to build the more powerful AIs into locations where their uses are supervised. I would say, ideally, you are putting the chips that run the AIs into locations where they can be supervised. As a minor side effect,
Starting point is 01:30:21 maybe you can stop the AIs from driving people insane. It seems like the sort of thing you could better do if this was all happening under supervision by international treaty. It's not vital to humanity's survival that AIs be prevented from driving people insane, but it serves as a kind of test case of: can you stop the damage? Like, is humanity in control here? Can we stop AI from preying upon some of our people? But that's not the main thing here. It's a thing that some people find attractive, but it's not the main thing. You're trying to just get the whole AI thing under control,
Starting point is 01:30:57 and you're trying to stop the further escalation of AI capabilities up the ladder. It is scary. It is one of these things, and I imagine that it must feel a little bit like this to you, that everybody is sort of dancing their way through a daisy field. Oh, I've got this personal coach in my pocket and it's so cool and I get to talk to it about all of my psychological problems.
Starting point is 01:31:28 God, I can bitch to it about my husband. It just listens. And at the end of this daisy field that everyone's having a load of fun in, there's just, like, a huge cliff that descends into eternity, and there's, like, a Balrog at the bottom or something. Is that what it feels like? Yeah, pretty much. But the future is hard to predict. It is genuinely hard to predict.
Starting point is 01:31:58 I can tell you that if you build a superintelligence using anything remotely like current methods, everyone will die. That's a pretty firm prediction. The part where people maintain the daisy-field attitude that they had a few years earlier toward AI, that has already shifted to some degree just because of the ChatGPT moment, and nobody predicted that in advance. Nobody knew that. Nobody at OpenAI, as far as I can tell, had any idea that when they released ChatGPT, they were going to be causing a massive shift in public opinion about AI, as people realized the AIs were actually talking to them now and sounding kind of intelligent about it. So maybe, also... I don't want to wait for anything else to happen. Maybe ChatGPT was the miracle we got. I wasn't expecting that much of a miracle; I did not call it in advance. But maybe we get another miracle. I don't want to sit around waiting for it, because I can't tell you the miracle
Starting point is 01:32:59 will occur on such and such a day. But maybe the AIs manage to do something more destructive than driving a few people insane, breaking up a few marriages, and causing whatever further decline in birth rates is going to be caused here. Maybe they do worse than that,
Starting point is 01:33:18 and that shifts opinion. Or maybe they just get more powerful and smarter and are clearly no longer toys, and that shifts opinion even without a giant catastrophe. It's not clear to me, you know, as much as people love to bitch about their elected leaders, it is not clear to me that we are looking at permanent obliviousness to the aliens getting smarter and smarter. People are currently saying completely wacky and oblivious things because they think that's what's politically mandatory to say in the current political environment, and that you have to talk about jobs rather than the utter extinction
Starting point is 01:34:01 of humanity. But it's just not clear. The future is very hard to predict in general. It's not clear to me that the current state of obliviousness is something supreme, unmovable, and impossible for any event to change, or that it won't just disintegrate on its own as more people talk about it. There's a level at which you kind of have to be pretty dumb to look at these smarter and smarter aliens showing up on your planet and not have the thought cross your mind that maybe this won't end well. Can even elected politicians be that dumb? Yes, absolutely. It is not known to me to be prohibited that this can be the case. Do they have to do the stupid thing?
Starting point is 01:34:47 It's not clear to me that it's mandatory. We did manage to not have a nuclear war, and people did not think they were going to get that much luck. Eliezer Yudkowsky, ladies and gentlemen. I was prepared coming in, but I'm not sure that the rest of the audience will be. So, dude, the best compliment I can pay you is: I hope you're wrong, but I fear you're not. Yeah, it'd be great to be wrong. I'd love to be wrong.
Starting point is 01:35:19 That'd be wonderful. You know, it'd be wonderful. Let me assure everyone, by way of, you know, destroying any shred of optimism you might previously have had: I completely would have other career options lined up, and other ways of supporting myself, if I were completely wrong. Not just me, but some sensible people who donated me a bit of appreciated currency wanted to make sure that I could, if I changed my mind about this sort of thing, just retreat from my entire
Starting point is 01:35:49 career path and not end up in financial trouble. And yet here I am. I'd love to be wrong. We have tried to arrange it to be the case that I could at any moment say, yep, I was completely wrong about that, everybody can breathe a sigh of relief, and it wouldn't be the end of my ability to support myself; I would have other things to do. We have made sure to leave a line of retreat there. Unfortunately, as far as I currently know, I continue to not think that it is time to declare myself to have been wrong about this. Heck, yeah. All right, Eliezer, well, if the internet is still alive in a little bit of time in the future,
Starting point is 01:36:05 and it wouldn't be like the end of my ability to support myself, and I would have other things to do. We have made sure to leave a line of retreat there. Unfortunately, as far as I currently know, I continue to not think that that it is time to declare myself to have been wrong about this. Heck, yeah. All right, Elliot, well, if the internet is still alive in a little bit of time in the future,
Starting point is 01:36:27 we can check back in and see just how right you were. Well, every year that we're still alive is another chance for, you know, something else to happen. What a wonderful way to finish. Dude, I appreciate you. Thank you so much for your work. Thank you for having me over to deliver the bad news. Rinse takes your laundry and hand delivers it to your door, expertly cleaned and folded,
Starting point is 01:36:57 so you could take the time once spent folding and sorting and waiting to finally pursue a whole new version of you. Like tea time you. Or this tea time you. Or even this tea time you. Said you hear about Dave? Or even tea time, tea time, tea time you. So update on Dave. It's up to you. We'll take the laundry.
Starting point is 01:37:21 Rinse. It's time to be great.
