Making Sense with Sam Harris - Making Sense of Existential Threat and Nuclear War | Episode 7 of The Essential Sam Harris

Episode Date: April 12, 2023

In this episode, we examine the topic of existential threat, focusing in particular on the subject of nuclear war. Sam opens the discussion by emphasizing the gravity of our ability to destroy life as we know it at any moment, and how shocking it is that nearly all of us perpetually ignore this fact. Philosopher Nick Bostrom expands on this idea by explaining how developing technologies like DNA synthesis could make humanity more vulnerable to malicious actors. Sam and historian Fred Kaplan then guide us through a hypothetical timeline of events following a nuclear first strike, highlighting the flaws in the concept of nuclear deterrence. Former Defense Secretary William J. Perry echoes these concerns, painting a grim picture of his "nuclear nightmare" scenario: a nuclear terrorist attack. Zooming out, Toby Ord outlines each potential extinction-level threat, and why he believes that, between all of them, we face a one in six chance of witnessing the downfall of our species. Our episode ends on a cautiously optimistic note, however, as Yuval Noah Harari shares his thoughts on "global myth-making" and its potential role in helping us navigate through these perilous times.

About the Series

Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating.

Transcript
Starting point is 00:00:00 To access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Welcome to The Essential Sam Harris. This is Making Sense of Existential Threat and Nuclear War. The goal of this series is to organize, compile, and juxtapose conversations hosted by Sam Harris into specific areas of interest. This is an ongoing effort to construct a coherent overview of Sam's perspectives and arguments, the various explorations and
Starting point is 00:01:12 approaches to the topic, the relevant agreements and disagreements, and the pushbacks and evolving thoughts which his guests have advanced. The purpose of these compilations is not to provide a complete picture of any issue, but to entice you to go deeper into these subjects. Along the way, we'll point you to the full episodes with each featured guest, and at the conclusion, we'll offer some reading, listening, and watching suggestions, which range from fun and light to densely academic. One note to keep in mind for this series. Sam has long argued for a unity of knowledge where the barriers between fields of study are viewed as largely unhelpful artifacts of unnecessarily partitioned thought.
Starting point is 00:01:57 The pursuit of wisdom and reason in one area of study naturally bleeds into, and greatly affects, others. You'll hear plenty of crossover into other topics as these dives into the archives unfold. And your thinking about a particular topic may shift as you realize its contingent relationships with others. In this topic, you'll hear the natural overlap with theories of ethics, violence and pacifism, and more. So, get ready. Let's make sense of existential threat and nuclear war. In 1961, the astronomer Frank Drake jotted down a fairly simple back-of-the-napkin formula to calculate just how many technologically advanced civilizations we should expect to be out there in the cosmos right now. It came to be known as the Drake Equation.
Starting point is 00:02:52 The equation starts with an extremely large number, the estimate of the total number of stars in the universe. Then we narrow that number down to how many of those stars have planets orbiting them. Then we narrow that number down to how many of those planets are likely to be suitable for the evolution of life. Then we narrow that down to the number of those life-suitable planets that have actually had life emerge.
Starting point is 00:03:18 Then we narrow that down to how many of those life forms are intelligent. And then, finally, we narrow that down to how many of those intelligent life forms advanced to the stage of a technological civilization. Even if we're quite conservative with our estimate at each step of the narrowing process, maybe we guess that only one in every 100,000 life-suitable planets actually did achieve even basic microbial life, or that only one in every one million forms of intelligent life became technologically advanced. Even if we apply these stringent factors, the results of the equation and our remaining number suggest that there still ought to be between a thousand and a hundred million advanced civilizations just in the Milky Way galaxy alone.
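The narrowing process described above is just a chain of multiplications. As a minimal sketch in Python: every factor below is an illustrative placeholder of my own, not a measured value, and the real uncertainties span many orders of magnitude.

```python
# A minimal sketch of the Drake-style "narrowing" described above.
# Every factor is an illustrative assumption, not a measured value.

def drake_estimate(
    n_stars=2e11,          # rough star count for the Milky Way (assumed)
    f_with_planets=0.5,    # fraction of stars with planets (assumed)
    f_habitable=0.02,      # fraction of those planets suitable for life (assumed)
    f_life=0.1,            # fraction of habitable planets where life emerges (assumed)
    f_intelligent=0.01,    # fraction of life that becomes intelligent (assumed)
    f_technological=0.1,   # fraction of intelligent life that goes technological (assumed)
):
    """Multiply the filters together: each step narrows the previous count."""
    return (n_stars * f_with_planets * f_habitable
            * f_life * f_intelligent * f_technological)

# With these placeholder values the chain lands inside the "thousands to
# hundreds of millions" range mentioned in the narration.
print(drake_estimate())
```

The point of the sketch is structural: the final count is exquisitely sensitive to each fraction, which is why plausible inputs can produce answers that differ by many orders of magnitude.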
Starting point is 00:04:10 And there are, of course, billions of galaxies just like ours. So even if the correct number is just in the hundreds in our Milky Way, when you look out in the cosmos, there should be millions of civilizations out there. A physicist named Enrico Fermi asked the simple question, if this is true, where is everybody? How come, when we look out into the cosmos, we don't see or hear obvious evidence of a plethora of advanced lifeforms zipping about in their ships,
Starting point is 00:04:47 systematically transforming entire galaxies into power plants, or what have you? This question became known as Fermi's Paradox. There is no shortage of hypotheses to address Fermi's question, but just about all of the responses can be categorized under three general answer types. One answer is that we're just early. Perhaps all of Drake's math was right and everybody will show up, but we just happen to be amongst the first at the party. The cosmos itself may have just recently reached
Starting point is 00:05:16 a state of habitability after the chaos from the initial inflation and the Big Bang sent heat and debris flying about in every direction. Maybe it just recently settled down and allowed life like ours to flourish, and we humans are just early risers. Another answer is that we're very rare. Maybe Drake's numbers were not nearly conservative enough,
Starting point is 00:05:37 and life such as ours is just an exceedingly unlikely cosmic event. Perhaps there are only a small handful of civilizations out there, and given the vastness of the cosmos, it's no surprise that we wouldn't have had any close neighbors who happen to be advanced enough to say hello. Maybe the neighborhood is just very quiet. Or perhaps the most disturbing answer, the one we're going to be dealing with in this compilation, is this one. Maybe there is a great filter. What if there is a certain unavoidable technological phase that every intelligent life's advancement must confront? A technological phase that is just so hard to get through
Starting point is 00:06:19 that almost no civilization successfully crosses the threshold. And that explains why it appears that no one is out there. It may be that we humans are on a typical trajectory, and are destined to be erased. And soon. But even if there is a filter, and even if just the tiniest percentage of civilizations have been able to get through it and continue advancing without tripping over themselves, pretty soon they'd have the knowledge of how to do monumentally big engineering projects, if they so choose.
Starting point is 00:06:52 We should see evidence of their continued existence, right? So let's make sure we're imagining this filter analogy correctly. Maybe a single filter isn't quite right. Maybe we should be picturing thicker and thicker filter layers stacked one on top of the other. Maybe there would be a moment when you really do leave them all behind. That point of permanent safety would be when a civilization achieves a kind of knowledge so powerful that it understands how to survive and avoid its own self-destruction perpetually, and really does get through all of those filters.
Starting point is 00:07:30 But there does seem to be a kind of natural sequential order of the types of knowledge that a civilization is likely to discover. It is difficult to imagine discovering how to build flying machines before building wheelbarrows, but that is also not a guarantee. Is our human order of scientific discovery typical or an outlier? It seems that harnessing energy is key to both creative and destructive power, and that they must go hand in hand. You could imagine the kind of knowledge it would take to pull off a huge engineering project, like building a device that could siphon all of the energy
Starting point is 00:08:06 from a black hole at the center of a galaxy, for example. And you can recognize that this same knowledge would presumably also contain the power to destroy the civilization which discovered it, either maliciously or accidentally. And the odds of avoiding that fate trend towards impossible over a short amount of time. No one makes it through. This is the great filter answer to Enrico Fermi. That there are countless civilizations out there that blip out of existence almost as quickly as they achieve the technical prowess
Starting point is 00:08:38 to harness even a small percentage of the potential energy available to them. Is this what happens out there? Does this answer Fermi's question? How many filters are there? We humans are a relatively young species, and already we seem to be discovering a few technologies that have some filter potential. If we get through our current challenges,
Starting point is 00:09:01 are we bound to just discover another, even more difficult technology to survive alongside? Is this tenable? This compilation is going to be a tour of Sam's engagement with, and a close look at, the strongest weapon of war we've created so far. A weapon that might be a candidate for this great filter, or at least a very difficult one. Nuclear war.
Starting point is 00:09:30 The complete erasure and annihilation of civilization was a talent once thought to be reserved only for the gods. As a reminder of just how stark the moment was when we realized we may have that power in our own hands, perhaps for the first time sensing that great filter on our horizon, it's worth playing a haunting and now very famous audio clip which lays the realization bare. Upon witnessing a successful test detonation of a nuclear bomb south of Los Alamos, Robert Oppenheimer, the physicist leading the Manhattan Project, recalls the scene and his thoughts.
Starting point is 00:10:04 We knew the world would not be the same. Few people laughed. Few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita. Vishnu is trying to persuade the prince that he should do his duty. And to impress him, takes on his multi-armed form and says, now I am become death, the destroyer of worlds.
Starting point is 00:10:54 I suppose we all thought that one way or another. Making sense of nuclear war and its existential threat is not the happiest of subjects, and perhaps that's why most of us don't often look closely at the precariousness of the situation we're in. We experience a kind of cognitive dissonance that can act as a psychological barrier when direct engagement with a known threat is just too destabilizing, and more importantly, when the threat seems to defy a readily available remedy. If there is a great filter out there, what good would it do to worry about it?
Starting point is 00:11:31 Who would want to think about this stuff? Well, Sam Harris is one of those people who forces himself to, though that wasn't always the case. Before we get to the guests and conversations that Sam has hosted on Making Sense, we should remind ourselves of the analogy that we're using to approach this subject. A filter is not a wall. A filter, no matter how dense, does permit some things to get through. So even if the odds are stacked against us, the only game in town appears to be trying to improve our chances of getting to the other side. We're going to start with Sam himself as he describes his re-engagement with this threat. It's his attempt to shake us out of our collective moral slumber,
Starting point is 00:12:19 to help us notice our circumstances when it comes to the nuclear question. He reads here from a particular book which was instrumental to his paying close attention to this subject. Sam is speaking in July of 2020, in the introduction of episode 210. We're coming up on the 75th anniversary of the atomic bomb in about a week. July 16th is the 75th anniversary of Trinity, the explosion of the first atomic bomb at the Trinity test site in Alamogordo, New Mexico. Whatever the merits or necessity of our building the bomb, and even using it to end the war with Japan, that can certainly be debated. But what is absolutely clear to anyone who studies the ensuing 75 years is that these were 75 years of folly, nearly suicidal folly. And this has been a chapter in human history of such reckless stupidity that it's been a kind of moral oblivion and there's no end in sight.
Starting point is 00:13:29 Rather, we have simply forgotten about it. We have forgotten about the situation we are in every day of our lives. This is really difficult to think about, much less understand. The enormity of our error here is stupefying in some basic sense. It's like we were convinced 75 years ago to rig all of our homes and buildings to explode, and then we just got distracted by other things, right? And most of us live each day totally unaware that the status quo is as precarious as it in fact is. So when the history of this period is written, our descendants will surely ask, what the hell were they thinking? And we are the people of whom that question will be asked, that is, if we don't annihilate ourselves in the meantime. What the hell are we thinking?
Starting point is 00:14:44 What are our leaders thinking? We have been stuck for nearly three generations in a posture of defending civilization, or imagining that we are, by threatening to destroy it at any moment. And given our capacity to make mistakes, given the increasing threat of cyber attack, the status quo grows less tenable by the day. The first book I ever read about the prospect of nuclear war was Jonathan Schell's The Fate of the Earth, which originally came out in The New Yorker in 1982. If you haven't read it, it's a beautifully written and amazingly sustained exercise in thinking about the unthinkable. And I'd like to read you a few passages to give you a sense of it. This is from the beginning, starting a few sentences in. These bombs were built as weapons for war, but their significance greatly transcends war and
Starting point is 00:15:40 all its causes and outcomes. They grew out of history, yet they threatened to end history. They were made by men, yet they threatened to annihilate man. They are a pit into which the whole world can fall, a nemesis of all human intentions, actions, and hopes. Only life itself, which they threatened to swallow up, can give the measure of their significance. Yet in spite of the immeasurable importance of nuclear weapons, the world has declined,
Starting point is 00:16:10 on the whole, to think about them very much. We have thus far failed to fashion, or even to discover within ourselves, an emotional or intellectual or political response to them. This peculiar failure of response, in which hundreds of millions of people acknowledge the presence of an immediate, unremitting threat to their existence and to the existence of the world they live in, but do nothing about it, a failure in which both self-interest and fellow-feeling seem to have died, has itself been such a striking phenomenon that it has to be regarded as an extremely important part of the nuclear predicament as this has existed so far. End quote. So there Schell gets at the strangeness of the status quo, where the monster is in the room,
Starting point is 00:16:57 and yet we have managed to divert our attention from it. And I love this point he makes. It's a violation both of self-interest and fellow feeling. Our capacity to ignore this problem somehow seems psychologically impossible. It's a subversion of really all of our priorities, both personal and with respect to our ethical commitments to others. A little bit later on, he talks about this state of mind a little more. Because denial is a form of self-protection, if only against anguishing thoughts and feelings, and because it contains something useful, and perhaps even, in its way, necessary to life, anyone who invites people to draw aside the veil and look at the peril face to face
Starting point is 00:17:46 is at risk of trespassing on inhibitions that are part of our humanity. I hope in these reflections to proceed with the utmost possible respect for all forms of refusal to accept the unnatural and horrifying prospect of a nuclear holocaust. So there, Schell is being more tactful than I'm being here, admitting that this denial is on some level necessary to get on with life, but it is nonetheless crazy. Year after year after year, we are running the risk of mishap here. And whatever the risk, you can't keep just rolling the dice. And so it seems time to ask, when is this going to end?
Starting point is 00:18:35 To begin the exploration of clips, we're going to hear from a philosopher and author who spends a lot of time looking at existential risk, Nick Bostrom. Bostrom has a talent for painting colorful analogies to prime our thinking about these difficult topics. One of his analogies that brings the great filter hypothesis into vivid clarity goes like this. Imagine a giant urn filled with marbles, which are mostly white in color but range in shades of gray. Each of these marbles represents a kind of knowledge that we can pluck from nature and apply technologically. Picture reaching in and pulling out the knowledge of how to make a hair dryer, or the automobile, or a toaster oven, or even something more abstract like the knowledge of how to alter the genome to choose eye color or some other aesthetic purpose. Reaching into this urn, rummaging around and pulling out a marble, is the act of scientific
Starting point is 00:19:33 exploration and achievement. Now, white marbles represent the kinds of knowledge that carry with them very little existential threat. Maybe pulling a marble like this would be gaining knowledge of how to manufacture glass. That's a marble that we pulled out of the urn around 3500 BCE in Egypt. That little bit of knowledge mostly improves life on earth for humans and has all kinds of lovely applications for food preservation, artistic expression, window manufacturing, eyesight correction, and much more. It likely carries with it some kind of minor threat as well, though it's difficult to imagine
Starting point is 00:20:12 how that specific advancement would inherently threaten the existence of the species. You can imagine thousands of white marbles that feel as benign, positive, and generally harmless as this one. But Bostrom asks us to consider what a black marble would be. Is there some kind of knowledge that, when plucked out of nature, is just so powerful that every civilization is eradicated shortly after pulling it from the urn? Are there several of these black marbles hiding in the urn somewhere? Are we bound to grab one eventually?
Starting point is 00:20:44 Sam points out that it has generally been the attitude of science to just pull out as many marbles as fast as we possibly can and let everyone know about it the moment you have a good grip. And we operate as if the black marbles aren't in the urn,
Starting point is 00:20:59 as if they simply don't exist. What shade of gray was the marble that represented the moment we obtained the knowledge of how to split the nucleus of a uranium-235 atom and trigger and target its fission chain reaction in a warhead? Was that a black marble? That will be a question we consider throughout this episode, as well as the specific political entanglements which relate to this problem, and the alliances and personalities which affected it in the recent past.
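The stakes of this marble-pulling attitude can be put in simple arithmetic. As a hedged sketch (the per-draw probability used here is an invented placeholder, not an estimate): if each new technology drawn from the urn carries even a tiny, independent chance of being a black marble, the probability of never drawing one shrinks geometrically with the number of draws.

```python
# Sketch of the urn arithmetic: with an independent black-marble probability p
# per draw, the chance of surviving n draws is (1 - p) ** n.
# The value of p below is an invented placeholder, not an estimate.

def survival_probability(p_black, n_draws):
    """Probability of making n_draws without ever pulling a black marble,
    assuming each draw is independent with black-marble probability p_black."""
    return (1.0 - p_black) ** n_draws

for n_draws in (10, 100, 1000):
    print(n_draws, survival_probability(0.01, n_draws))
```

With p = 0.01, survival after 1,000 draws is already below one in ten thousand. That geometric decay, under any nonzero p, is the quantitative core of the great filter worry.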
Starting point is 00:21:28 So let's start out with Nick Bostrom and Sam engaging on the topic of existential threat in general as we move towards the nuclear question. Here, you'll hear Bostrom lay out his vulnerable world hypothesis and draw out the metaphor that we introduced. This is from episode 151, Will We Destroy the Future? Let's start with the vulnerable world hypothesis. What do you mean by that phrase? Well, the hypothesis is, roughly speaking, that there is some level of technological development
Starting point is 00:22:01 at which the world gets destroyed by default, as it were. So then what does it mean to get destroyed by default? I define something I call the semi-anarchic default condition, which is a condition in which there are a wide range of different actors with a wide range of different human recognizable motives. But then more importantly, two conditions hold. One is that there is no very reliable way of resolving global coordination problems. And the other is that we don't have
Starting point is 00:22:32 an extremely reliable way of preventing individuals from committing actions that are extremely strongly disapproved of by a great majority of other people. Maybe it's better to come at it through a metaphor. Yeah, the urn. The urn metaphor. So what if in this urn there is a black ball in there somewhere?
Starting point is 00:22:55 Is there some possible technology that could be such that whichever civilization discovers it invariably gets destroyed? And what if there is such a black ball in the urn? I mean, we can ask about how likely that is to be the case. We can also look at what is our current strategy with respect to this possibility. And it seems to me that currently our strategy with respect to the possibility that the urn might contain a black ball is simply to hope that it doesn't. So we keep extracting balls as fast as we can. We have become quite good at that, but we have no ability to put balls back into the urn. We cannot uninvent our inventions.
Starting point is 00:23:36 So the first part of this paper tries to identify what are the types of ways in which the world could be vulnerable, the types of ways in which there could be some possible blackball technology that we might invent. And the first and most obvious type of way the world could be vulnerable is if there is some technology that greatly empowers individuals to cause sufficiently large quantities of destruction. Let me motivate this, or illustrate it, by means of a historical counterfactual. In the last century, we discovered how to split the atom and release some of the energy that's contained within the nucleus. And it turned out that this is quite difficult to do.
Starting point is 00:24:28 You need special materials. You need plutonium or highly enriched uranium. So really only states can do this kind of stuff to produce nuclear weapons. But what if it had turned out that there had been an easier way to release the energy of the atom? What if you could have made a nuclear bomb by baking sand in the microwave oven or something like that? So then that might well have been the end of human civilization in that it's hard to
Starting point is 00:24:54 see how you could have cities, let us say, if anybody who wanted to could destroy millions of people. So maybe we were just lucky. Now we know, of course, that it is physically impossible to create an atomic detonation by baking sand in the microwave oven. But before you actually did the relevant nuclear physics, how could you possibly have known how it would turn out? Let me ask this as we go on this harrowing ride to your terminus here, because the punchline of this paper is fairly startling when you get to what the remedies are. So why is it that civilization could not endure the prospect of what you call easy nukes? If it were that easy to create a Hiroshima-level blast or beyond, why is it just a foregone conclusion that that would mean the end of cities and perhaps the end of most things we recognize? I think foregone conclusion is maybe a little too strong. It depends a little bit on
Starting point is 00:26:00 the exact parameters we plug in. The intuition is that in a large enough population of people, like amongst every population with millions of people, there will always be a few people who, for whatever reason, would like to kill a million people or more if they could, whether they are just crazy or evil, or they have some weird ideological doctrine, or they're trying to extort other people or threaten other people. Just humans are very diverse, and in a large set of people, for practically any desire,
Starting point is 00:26:38 you can specify, there will be somebody in there that has that. So if each of those destructively inclined people would be able to cause a sufficient amount of destruction, then everything would get destroyed. Yeah. Now, if one imagines this actually playing out in history, then to tell whether all of civilization really would get destroyed, or whether some horrible catastrophe short of that would happen instead, would depend on various things. Just what kind of nuclear weapon would it be? A small Hiroshima type of thing, or a thermonuclear bomb? How easy would it be? Could literally anybody do it in five minutes, or would it take some engineer working for half a year? So depending on exactly what values you pick for those and some other variables, you might get scenarios ranging from very bad
Starting point is 00:27:28 to kind of existential catastrophe. But the point is just to illustrate that there historically have been these technological transitions where we have been lucky, in that the destructive capabilities we discovered were hard to wield. You know, a plausible way in which this kind of very highly destructive capability could become easy to wield in the future would be through developments in biotechnology that maybe make it easy to create designer viruses and so forth that don't require high amounts
Starting point is 00:28:04 of energy or special difficult materials and so forth. And there you might have an even stronger case, because a nuclear weapon can only destroy one city, right, whereas viruses and the like can potentially spread. Yeah, I mean, we should remind people that we're in an environment now where people talk with some degree of flippancy about the prospect of every household one day having something like a desktop printer that can print DNA sequences, right? That everyone becomes their own bespoke molecular biologist and you can just print your own medicine at home or your own
Starting point is 00:28:46 genetic intervention at home. Under those conditions, the recipe to weaponize the 1918 flu could just be sent to you like a PDF. It's not beyond the bounds of plausible sci-fi that we could be in a condition where it really would be within the power of one nihilistic or otherwise ideological person to destroy the lives of millions and even billions in the worst case. Yeah, or not even a PDF; you could just download it from the internet. The full genomes of a number of highly virulent organisms are in the public domain, just ready to download. So, yeah, we could talk more about that, but I think I would rather see a future where DNA synthesis was a service provided by a few places in the world
Starting point is 00:29:38 where it would be possible, if the need arose, to exert some control, some screening, rather than something where every lab needs to have its own separate little machine. Yeah, so these are examples of type one vulnerability, where the problem really arises from individuals becoming too empowered in their ability to create massive amounts of harm. Now, there are other ways
Starting point is 00:30:03 in which the world could be vulnerable that are slightly more subtle, but I think also worth bearing in mind. These have to do more with the way that technological developments could change the incentives that different actors face. We can again return to the nuclear history case for an illustration of this. And actually, this is maybe the closest to a black ball we've gotten so far: thermonuclear weapons and the big arms race during the Cold War led to something like 70,000 warheads being on hair-trigger alert. And it looks, when we can see some of the archives of this history that have recently opened up, like there were a number of close calls. The world actually came quite close to the brink on several occasions, and we might have been quite lucky to get through.
Starting point is 00:30:54 It might not have been that we were in such a stable situation. It rather might have been that this was a kind of slightly black-ballish technology, and we just had enough luck to get through. But you could imagine it could have been worse. You could imagine properties of this technology that would have created stronger incentives, say, for a first strike, so that you would have crisis instability. If it had been easier, let us say, in a first strike to take out all the adversary's nuclear weapons, then it might not have taken a lot in a crisis situation to have enough fear that you would have to strike first, for fear that the adversary otherwise would do the same to you. Yeah. Remind people that in the aftermath of the
Starting point is 00:31:38 Cuban Missile Crisis, the people who were closest to the action felt that the odds of an exchange had been something like a coin toss, something like 30 to 50 percent. And what you're envisioning is a situation with what you describe as a safe first strike, which is where there's just no reasonable fear that you're not going to be able to annihilate your enemy, provided you strike first. That would be a far less stable situation. And it's also forgotten that the status quo of mutually assured destruction was actually a step towards stability. I mean, before the Russians, or the Soviets, had their own arsenals, there was a greater game theoretic concern that we would be more tempted to use ours, because nuclear deterrence wasn't a thing yet. Yeah. So it had some degree of stabilizing influence, although of course, maybe at the expense of the outcome being even worse, if both sides are
Starting point is 00:32:35 destroyed, whereas with a safe first strike it might just be one side being destroyed. Right. Yeah. And so if it had been possible, say, with one nuclear warhead to wipe out the enemy's nuclear warheads within a wider radius than is actually the case, or if it had been easier to detect nuclear submarines, so that you could be more confident that you had actually, you know, been able to target all of the other side's capability, then that could have resulted in a more unstable arms race, one that would, with a sort of higher degree of certainty, result in the weapons being used. And you can consider other possible future ways
Starting point is 00:33:18 in which, say, the world might find itself locked into arms race dynamics, where it's not that anybody wants to destroy the world, but it might just be very hard to come to an agreement that avoids the arms being built up and then used in a crisis. With nuclear weapon reduction treaties, there are concerns about verification, but in principle verification is feasible,
Starting point is 00:33:42 since nuclear weapons are quite big and they use very special materials. There might be other military technologies where, even if both sides agreed that they wanted to ban this military technology, the nature of the technology might be such that it would be very difficult or impossible to enforce. In that exchange, you heard Bostrom mention how lucky we may have gotten, in that it turns out nuclear weapons are not very easy to create. So even if this technology turns out to be a nearly black ball, and perhaps the darkest one we've yet pulled out of the urn, we can examine our treatment of them as a dress rehearsal with incredibly high stakes.
Starting point is 00:34:24 Bostrom also mentioned something in passing that's worth keeping in mind as we look closer at the nuclear weapon question, what he referred to as global coordination problems. This is a concept sometimes used in economics and game theory, and it describes a situation that would be best solved by everyone simultaneously moving in the same direction. But of course, people can't be sure what's in anyone else's mind, and humans are famously difficult to coordinate and synchronize in any case. So often these types of problems entrench themselves and worsen, even if most people agree that they are incredibly harmful. Another relevant feature of a coordination problem is that
Starting point is 00:35:05 there's usually a strong disincentive for first movers. This can be applied to climate change, political revolutions, or even something like a great number of people secretly desiring to quit social media but not wanting to lose connections or marketing opportunities. Laying the global coordination problem framework onto disarmament of nuclear weapons is an easy fit. The first mover who dismantles their bombs may be at a huge disadvantage, even if everyone privately agrees that we all ought to disarm. In fact, as you also heard Bostrom point out, when thinking about nuclear war strategy, the first strike is often aimed at decapitating the opponent's ability to strike back.
Starting point is 00:35:47 Of course, if your opponent has already willingly disarmed, say, in accordance with the mutual treaty, while you have retained your weapons and only pretended to disarm, the effect is just as devastating. So the coordination problem tends to persist. Now that we've laid some of the foundation to think about existential risk in general, let's move to a conversation Sam had with a guest who looks very closely at the prospect of nuclear war. The guest is Fred Kaplan, and when Sam spoke with him,
Starting point is 00:36:17 Kaplan had just published a book called The Bomb. But before we get to Kaplan, let's first listen to some of Sam's introduction to the conversation and let him do the work of trying to drag our attention to the unnerving reality of this situation again. He's going to bring us back to 1983, at a moment when the only thing standing between us and nuclear Armageddon may have been a single person's intuition. The doomsday clock was just advanced closer to midnight than it has been at any point in the last 75 years.
Starting point is 00:36:52 It now reads 100 seconds to midnight. Now, whether or not you put much significance in that warning, just take a moment to consider that the people who focus on this problem are as worried now as they've ever been. But do you think about this? If I were to ask how long it's been since you worried that you might have some serious illness, or that your kids might, or how long it's been since you've worried about being the victim of crime, or about dying in a plane crash? It probably hasn't been that long. It might have happened last week, even.
Starting point is 00:37:34 But I would wager that very few people listening to this podcast have spent any significant time feeling the implications of what is manifestly true. All of us are living under a system of self-annihilation that is so diabolically unstable that we might stumble into a nuclear war based solely on false information. In fact, this has almost happened on more than one occasion. Do you know the name Stanislav Petrov? He should be one of the most famous people in human history, and yet he's basically unknown.
Starting point is 00:38:13 He was a lieutenant colonel in the Soviet Air Defense Forces, who is widely believed to be almost entirely responsible for the fact that we didn't have World War III in the year 1983. This was at the height of the Cold War, and the Soviet Union had just mistaken a Korean passenger jet, Korean Air Lines Flight 007, for a spy plane and shot it down after it strayed into Siberian airspace. And the U.S. and our allies were outraged over this, and on high alert.
Starting point is 00:38:58 Both the U.S. and the Soviet Union had performed multiple nuclear tests that month. And so it was in this context in which Soviet radar reported that the U.S. had launched five ICBMs at targets within the Soviet Union. And the data were checked and rechecked, and there was apparently no sign that they were in error. And Stanislav Petrov stood at the helm. Now, he didn't have the authority to launch a retaliatory strike himself. His responsibility was to pass the information up the chain of command. But given the protocols in place, it's widely believed that had he passed that information along, a massive retaliatory strike against the United States would have been more or less
Starting point is 00:39:42 guaranteed. And of course, upon seeing those incoming missiles, of which there would likely have been hundreds, if not thousands, we would have launched a retaliatory strike of our own. And that would have been game over. Hundreds of millions of people would have died more or less immediately. Now, happily, Petrov declined to pass the information along. And his decision boiled down to mere intuition, right? The protocol demanded that he pass the information along, because it showed every sign of being a real attack. But Petrov reasoned that if the United States were really going to launch a nuclear first strike,
Starting point is 00:40:25 they would do it with more than five missiles. Five missiles doesn't make a lot of sense. But it's also believed that any of the other people who could have been on duty that night, instead of Petrov, would surely have passed this information up the chain of command. And killing a few hundred million people, thereby wiping out the United States and Russia (and, as you'll soon hear, our retaliatory strike protocol entailed wiping out Eastern Europe and China for good measure), could well have ended human civilization. So when you think about human fallibility and errors of judgment and realize
Starting point is 00:41:07 that this ability to destroy the species is at all times, every minute of the day, in the hands of utterly imperfect people, and in certain cases abjectly imperfect people, it should make the hair stand up on the back of your neck. And the infrastructure that is maintaining all of these systems on hair-trigger alert is aging, and in many cases runs on computers so old that any self-respecting business would be embarrassed to own them. And yet for some reason, almost no one is thinking about this problem. At the end of this compilation, we'll offer some recommended reading and viewing, including a documentary which focuses on that perilous moment with Petrov.
Starting point is 00:42:01 Sam goes on in that introduction to outline a few more absurd instances of close calls involving accidental war game codes being loaded into computers or misinterpreted radar signals which nearly sent the bombs flying. So now let's hear more from that episode. We're going to hear Kaplan and Sam discuss Kaplan's writing about the Cuban Missile Crisis of 1962, arguably the first whiff that humanity got of the genuine prospect of nuclear war. If you need a history refresher on the events of 1962, we'll recommend a documentary in the outro of this compilation, and, of course, Kaplan's book as well. For this clip, you'll just need to recall that at the tensest moment of the standoff, there were hundreds of Soviet nuclear warheads pointed at the U.S. on launch pads
Starting point is 00:42:45 stationed in Fidel Castro's Cuba, just 90 miles off the United States coast. And the United States had a far greater number of missiles fixed on Soviet targets. Secret negotiations were underway among the leaders of all three nations involved to try to avert World War III and save face in front of their own populations. So here is Sam with Fred Kaplan from episode 186, an episode simply titled The Bomb. In your book, you report facts about the Cuban Missile Crisis that were not widely known and were actually systematically concealed to some effect. Perhaps go into that for a second, because it gave us a sense that bluffing on the brink of nuclear war was a successful strategy, because people thought that that's what had
Starting point is 00:43:38 happened, that he just basically stared Khrushchev down and Khrushchev blinked. But that's not quite what happened. That's not what happened. Most of us do know now, because it was revealed 20 years after the fact, that in fact, on the final day of the crisis, Khrushchev proposed a deal, a secret deal. I will take out my missiles from Cuba if you, United States, take out your very similar missiles from Turkey. And Kennedy took the deal. What isn't generally known, and I don't know why it isn't known, because you can listen to this whole exchange on tapes that were declassified 20 years ago, but that you will read about in maybe two or three other books, if that many. But Kennedy
Starting point is 00:44:23 reads the proposal, and he says, and you know, this is, he secretly tape recorded all of this. He goes, well, this seems like a pretty fair deal. And everybody around the table, all of his advisors, not just the generals, but the civilians too, Bobby Kennedy, Robert McNamara, McGeorge Bundy, all these paragons of good sense and reason, feverishly opposed this deal. NATO will be destroyed. The Turks will be humiliated. Our credibility will be lost forever. And, you know, Kennedy let them talk. And then, you know, he said, well, you know, this was on a Saturday. The following Monday, the United States, the military was scheduled
Starting point is 00:45:03 to start the attack. There were going to be 500 air sorties a day against the missile sites in Cuba, followed four days later by an invasion. And Kennedy took the secret deal. He only told six people about this, though. And in fact, he put out the myth that there was no deal, because this was the height of the Cold War. It would look like appeasement.
Starting point is 00:45:27 One of the six people that he did not tell was his vice president, Lyndon Johnson, who therefore went into the Vietnam War convinced by the lesson of Cuba, the false lesson of Cuba, that you don't negotiate. You stare them down. But here's what's even scarier. We later learned, and this was not known at the time, that Soviet troops in Cuba, some of them armed with tactical nuclear weapons, were positioned to stave off an anticipated American invasion. Therefore, if anybody else around that table except John Kennedy had been president, or if he had said, yeah, you're right, this is a bad deal, let's proceed with the plan, then there would have been a war with the Soviet Union without any question. Yeah, it's amazing.
Starting point is 00:46:26 And so in your book, you report on the details of these encounters between each U.S. administration and the war planners, which are generally the Air Force and the Navy, and each incoming president, whether we're talking about Kennedy and his team with McNamara or Nixon and Kissinger or Clinton and Obama and their teams, each president comes into these meetings and for the first time is told what our first strike and second strike policies are. And each one, it sounds like, comes away absolutely appalled by what the doctrine actually is and committed from that day to changing it. And yet, each has found himself more or less unable to change it in ways that fundamentally alter the game theoretic logic here. I mean, and these discussions are like
Starting point is 00:47:20 really out of Dr. Strangelove. The most preposterous scenes in Dr. Strangelove are no more comedic than some of these exchanges because these are plans that call for the annihilation of hundreds of millions of people on both sides. I mean, ever since Kennedy, we've been past the point where a first strike prevented the possibility of a retaliatory strike from the Soviet Union. So we're talking about protocols that are synonymous with killing 150, 200 million people on their side and losing that many on our side. And for the longest time, the protocol was to annihilate China and Eastern Europe, whether they were even part of the initial skirmish with the Soviet Union. Right. The U.S. policy throughout the 1950s and into some of the 60s,
Starting point is 00:48:13 the policy, and this wasn't just the Strategic Air Command, this was signed off on by President Eisenhower and the Joint Chiefs of Staff, was that if the Soviet Union attacked West Germany or took over West Berlin, and, you know, this was at a time in the late 50s, early 60s, when we really didn't have any conventional armies in Europe, but the plan was, at the outset of the conflict, to unleash our entire nuclear arsenal at every target in the Soviet Union, the satellite nations of Eastern Europe, and, as you point out, China, even if China wasn't involved in the war. And it was inquired, well, how many people is this going to kill? And the estimate was about 285. If you'd like to continue listening to this conversation, you'll need to subscribe at
Starting point is 00:49:03 SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense Podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense Podcast is ad-free and relies entirely on listener support, and you can subscribe now at SamHarris.org.
