Within Reason - #28 Neil deGrasse Tyson - What Is Earth's Biggest Threat?

Episode Date: April 23, 2023

Neil deGrasse Tyson is an astrophysicist, director of the Hayden Planetarium, and one of the most recognisable communicators of science in the world. His most recent book is Starry Messenger: Cosmic Perspectives on Civilization: https://amzn.to/40Cdkgi (affiliate link)

Transcript
Welcome to Within Reason. My name is Alex O'Connor. My guest today is Neil deGrasse Tyson. Dr. Tyson is an astrophysicist and director of the Hayden Planetarium in New York City, as well as being one of the most recognizable communicators of science in the world. His most recent book, Starry Messenger, is subtitled Cosmic Perspectives on Civilization. And so we spoke about what it means to take a cosmic perspective, viewing ourselves in our universal context, and the nihilism that this can inspire in some people when they realize just how small we are in relation to the rest of the universe. We spend the bulk of the conversation, however, talking about threats to civilization.
Starting point is 00:00:39 We talk about the internal threat of artificial intelligence and Dr. Tyson's rather optimistic views about its development, as well as external threats. What does he think is the greatest threat to planet Earth from outer space? There are many more things that I would like to talk to Dr. Tyson about. There are entire chapters of this most recent book that I've really wanted to press him on. But as things progressed, I realized that there simply wasn't going to be time. Hey, perhaps this leaves room for a future potential episode. Only time will tell. Nonetheless, this was an incredibly enjoyable conversation. It was a privilege to sit down with
Starting point is 00:01:13 Neil deGrasse Tyson. I hope you get some enjoyment out of the following podcast, as I know that I certainly did. Neil deGrasse Tyson, thank you so much for coming on the podcast. Sure, thanks for having me. I love the overarching themes that drive your conversation. I'm delighted to participate. Well, I thought it would be good to sit down with you. I noticed that your latest book, Starry Messenger,
Starting point is 00:01:48 Cosmic Perspectives on Civilization, given that it's got the word cosmic in it, which is also in my YouTube channel name, I think there'll be some overlap at the very least. I wanted to begin by asking, actually, what is a cosmic perspective? Yeah, it's actually quite simple. You surely, you and your viewers, listeners, have surely heard of the overview effect, which
Starting point is 00:02:09 is what is described by astronauts who have gone into orbit and they look down on Earth and, you know, national borders dissolve away and you see sort of the fragility of where you came from. It's a whole other outlook. And so astronauts are changed, typically, for having gone into space and looked down on Earth. That has great value, but you know what has even greater value is a perspective that goes even beyond that. And I would call it a cosmic perspective, which exists not only in sight lines back towards Earth. So if you go to the moon, for example, and you look back to Earth, it's no longer just land masses passing beneath your feet.
Starting point is 00:02:57 No, it's the entire Earth is there. We say, wait a minute, that's a planet. And I'm not on that planet right now. It's just floating. What's holding it up? I don't know. It's just there. And it's orbiting that star.
And you don't, like I said, you don't see the color-coded countries in the schoolroom globe. You see oceans and land and clouds. That's it. You don't know that people are killing one another because of what gods they worship or don't worship, or what skin color they have, or who they sleep with. You don't see any of that. And so, is it a good thing that you see it from far away and don't notice it, or is that a bad thing? You could probably argue it both ways, but I would say on balance it's a good thing, because if I go up into space and go to the moon with someone from
a different faction, let's call them, where our factions are fighting each other on Earth, and we're together up on the moon where our survival depends on each other, do you think we're going to listen to instructions from military commanders on Earth, or from heads of parliament? Oh, should I punch him in the nose now or later? What should I do? No. We are two human beings, Homo sapiens, a quarter million miles away from the next nearest Homo sapiens.
All right. This is a cosmic perspective. Things look different from that distance. The Apollo 14 astronaut, Apollo 14 of course being one of the missions that went and walked on the moon, Edgar Mitchell: I opened the book with a quote from him. His quote is so good, I didn't need to write the book. That could have just been the book. The quote is: you develop an instant global consciousness, a people orientation, an intense dissatisfaction with the state of the world, and a compulsion to do something about it. From out there on the moon, international politics looks so petty. You want to grab a politician by the scruff of the neck and drag him a quarter million miles out and say, look at that, you son of a bitch. That's the entire book right there.
Starting point is 00:05:18 Well, allow me to present. And he was feeling it. He was feeling the cosmic perspective. I want to present a sort of equal and opposite cosmic perspective that I see some people present. Especially on social media, I see a video starting with the person, then the house, then the city, then the country, then the globe, then the solar system, then the galaxy, then the galaxy cluster, out and out it goes. And the idea and the message is, look how small we are. Look how we're just tiny little apes on a speck of dust in the middle of nowhere. And for many people, rather than feeling a sense of awe and grandiosity, it makes them feel
Starting point is 00:05:57 rather small. It makes them feel a bit nihilistic. They think, look how tiny we are in this great expanse of the universe. So for them, adopting a cosmic perspective is something of a negative experience. I wonder what you would say to such people. I have two responses. One is, sure, you're small if you start at the human scale and go out to the edge of the universe, you know, 30 powers of 10.
Starting point is 00:06:20 whatever is the scale unit that you're using. But you could also zoom in and go into the cells of the human body, let's say the red blood cell. And you go in and then you get to the, you know, through the cell wall and you see the mitochondria or whatever the biologist has recently put inside your cell. And you go all the way down to the molecules. And then within the molecules, the atoms, and within the atoms, the nucleus.
Starting point is 00:06:52 You can go as many powers of 10 into something smaller than you as bigger than you. So if you want to feel large, take the trip the other way. So that's my advice, okay? If feeling large is important to you. But feeling large is an ego thing, and there's really no room for ego.
This is my second way to address that question. There's no room for an ego in the universe. In fact, astrophysicists have got to be some of the most humble scientists among all categories of inquiry. Because we look up, and we're small in time and in space, and there's stuff we don't know, and will we ever figure it out, and who ordered that? I don't know what that is. This is happening to us every day. And so I claim that you go into it with no ego, and then you say, wow, we're small. But you know something else?
You know what astrophysics discovered in the middle of the 20th century? That the atoms of your body were forged in the big bang and in the hearts of stars that cooked up these elements, starting with hydrogen and moving up the periodic table, making heavier and heavier elements. And that star then exploded billions of years ago, scattering that enrichment into gas clouds that would form the foundations of the next generations of star systems, including at least one that has what we call the sun and what we call Earth. And on that Earth, there is life. So, yeah, you go out and look up and say, I'm small in this universe. Yeah, you're alive in the universe. But you know what else is true? The universe is alive within you. That fact borders on the spiritual, so that once you know it
and you look up at the night sky and say, I am one among the stars. I am part of this great unfolding of cosmic events, from the big bang to whatever is the fate of the universe, and I get to watch my three billion seconds of it. Three billion seconds gets you to about 95 years old. I get to watch three billion seconds of it, and that is my privilege of being alive in this universe. That's how I think about it. You said it borders on the spiritual. Is this not enough, is this not sort of quite enough to be fully spiritual as an experience for you? Well, spiritual is, it's a very loose word, and people have privatized the word in whatever way fits themselves. So I don't, that's why I don't want to claim I'm either inventing a new definition for it or using it in some way that would step on
the toes of others. When I say borders on spiritual, that's an open door for you to then think about it however it best merges with your sense of the world. So to me, the way I would use the word spiritual, spirit means breath, right? And spirit, I think from the Latin, is breath, your breath, what is vital for life, all right? In the world of religion, spirituality has taken on religious patinas, as it has for many people. So I don't, I leave it open. You reach for it, absorb it in whatever way best suits your needs. I suppose I've sort of, I can't say I've noticed because I haven't been alive long enough,
but it seems to me that if you were to ask people about science and scientific progress, say, 50 years ago, you're thinking about rocket ships, you're thinking about extraterrestrial life, you're thinking about exploring the galaxy, whereas now when we look at science and we look at the progress of science, we're looking at artificial intelligence and how it's going to destroy human civilization. We're looking at the science of global warming. We're looking at all kinds of destructive things that have sort of brought on an Armageddon view for planet Earth. Why do you think this is happening? Well, it's not the first time people imagined the end of the world. You go back 250 years, was it? I forget the exact date. Thomas Malthus was certain the
human population would outstrip the food supply, because he looked at the rate at which the population was growing, which was exponential. He looked at the rate at which the food supply was growing, which was linear, and exponentials will always win out over linear. And so he was sure we would all just die. Some of the greatest sources of death in the history of civilization have been famines. Millions upon millions of people have died from famine in the history of civilization, and there are millions of people dying each year today because of famine and other diseases that are a result of famine and malnutrition. So it's not a new thing to think about the end of the world.
Starting point is 00:11:55 If you read the Christian Bible, the New Testament, there's all kinds of talk about the end is coming. That's why you read Revelation. Watch for the end. There were people while Jesus was walking around after he died, okay? People don't talk much about that. While he's walking around after he died, that was viewed as the second coming of Jesus and that the end was immediate and was going to come right away.
And it wasn't just that. It was, oh, in 1982, there was a great planetary alignment and people thought that all the gravity would line up and destroy Earth. Of course, it didn't happen. Then there were the people who thought the year 2000 was the end of the world. And then 2012. I submit to you that there is no shortage in the last thousand years of people declaring that the end of the world is near. Okay?
Was it Seneca? Somebody said, what is it? The economy is in the toilet. I'm paraphrasing, of course, we didn't have toilets back then. The economy is in the toilet, the children no longer mind their parents, and it's clear that the end of the world is near. All right? So, no, I will not accept your declaration that
end-of-the-world proclamations are some emergent phenomenon. They're very real to you because you're experiencing them firsthand. But read through the history. It's been there consistently and persistently. Ask any cult leader. How do they get you to join the cult? The end of the world is coming. You have to join my cult because I know how to get you.
I know how to get around that. Okay. What do you think the people in Jonestown were saying? How many people died? I forget the number. A cult in, was it British Guiana, where the end of the world is coming, let's all commit suicide so we will go to where we need to be. The Heaven's Gate cult, all of them.
Okay, maybe the difference that you're citing is that it is scientists telling you the world is going to end. And not only that, not only that, but also the difference that I see here is that for these kinds of cases, they say, look, the world is ending, and this is clearly a way to get you to join my club. You know, the Christians, the early Christians, could walk around and say, look, the world is coming to an end and Jesus has warned us that this is happening soon, you'd better repent of your sins. A cult leader can say, the world's going to end, and I can show you the way out of it. But scientists working on artificial intelligence seem to just be saying, you know, the world might be about to end, and, well, that just sucks, and I'm afraid there's no club that we've formed
that can do anything about it. I would distinguish AI from other aspects of this. There's climate change, where technology got us into it and we kind of need technology to get us out of it, or a change in human behavior. There's an asteroid that could be headed our way, we don't know, and if it is, you wouldn't want to go the way of the dinosaurs. So science can inform us of ways we could die that science did not know about a hundred years ago. All right, so now, technology. Let's count AI as ranking among the fruits of technology. There are people who fear that AI will ruin us. I'm not among those, but then I'm not an expert, okay?
Hold aside that I've written 50,000 lines of computer code, and I've been thinking about computers my whole life. I don't present myself as an expert, and I won't. And I say that because of people who do present themselves as experts. By the way, I don't count Elon Musk as an AI expert, even though he signed a proclamation saying we should get rid of AI. I don't count him, because that's not his background, okay? Well, he's thought about computers, but many people have thought about computers who don't present themselves as experts. So you want to know what he thinks because he's a captain of industry, so yes, it matters
what he thinks. But if you want to know whether a prognostication of doom is real, then, yeah, you see what people are saying about it, where people say it's going. I don't agree that that's where it's going. But no reason for you to listen to me, because I'm not the expert. Yeah, you make a prediction in the book. I wrote it down. You said, instead of becoming our overlord and enslaving us all, artificial intelligence will be just another helpful feature of the tech infrastructures that serve our daily lives. You seem quite optimistic about the progress of artificial intelligence. People freaked out when ChatGPT showed up, right? It's mastering language. And, well, think this through.
All right, I'm old enough to remember when computers, when you could buy a four-function calculator, the price of the calculators went from like $500 or $1,000, down to like a couple of hundred dollars, down to like $50, which was about the price of a textbook at the time. So you could buy a calculator and no one had to ever again teach you long division. And I say, wow, look what this can do. I don't have to do it.
So now I can think of other problems that I couldn't previously solve because I didn't have the ability or the power to do it. I now have a computer to do it. When the computer came on the scene solving science problems, did we go running for the hills? Did we say this is the end of the world? Did we say, oh my gosh? No. We embraced it.
We brought our own ingenuity to it, to see what else it could do for us. Having said that, I don't think so. No, I'm saying, that's how it came into the world of science, and we embraced what computers could do for us. And since then, it beat our best chess players. Did the world come to an end? I don't think so. It beat us at Jeopardy, which involves culture, all right, and how nimble you are, whether you know what culture means and how it comes together to form new information compared with previous information. It beat us at Go. Did the world come to an end?
Starting point is 00:18:12 No. So now it can write your term paper. You know what we should do? Get it to write the stuff that nobody else wants to write. Get it to write the stuff that no one signs their name to, like instruction manuals. Let it write travel brochures. Nobody ever signed their name to those. Let it do all the things we don't want to do.
Okay, that's how I see every new advance in computing. I'm going to say it's a computing advance; you're going to call it AI. Fine, to me it's a computing advance. By the way, we're kind of there now, almost. There'll be a day AI is driving your car. Let's just call it a really advanced computer. Great. It'll never be drunk, or as you'd say in the UK, it'll never be pissed, okay?
It'll never, okay, and it could probably text and drive at the same time and not put anybody's life at risk. It could go 150 kilometers an hour down the street with very close spacing, because it has instant reflexes that you don't. And if it wants to change lanes, it could tell the other cars: could you make room, because I'm about to change lanes? How many accidents occur because people don't see what's happening around them, because there's a blind spot? There are no blind spots. They could drive 150 miles an hour in dense fog because they don't rely on visible light. Let it happen. Bring it on. And at least in the United States, it'll save 40,000 lives a year. You want to get philosophical?
What do we do when an AI-driven, a computer-driven car kills 5,000 people in a year from errors in software code, or a bug, or a test case that has never been seen, yet it saved 45,000 lives? What do you do about that? No one writes an article about the person who didn't die because they would have been killed that night by a drunk driver. Nobody writes that story. They write the story about who does die. So with that number, 5,000 or 6,000, everyone will be up in arms and will want to get rid of self-driving cars, because they don't see the full-up statistics of it. And I don't think the transition will be easy or smooth, but that number will go down. Ultimately, it will reach zero.
Sure. And how do I know that? Because planes don't crash anymore, not really. Yeah. And why? I know in America, the FAA, the Federal Aviation Administration, investigates every single plane crash, every single one. And they find out how and why it happened. And a new rule shows up for all subsequent takeoffs and landings. That's why you can't carry lithium-ion batteries, okay, in the cargo hold. No, because that took down a plane, and they know why, and they investigated it. And so all the reasons why it used to happen don't happen anymore. That's the future, and that's what I see. But it's not like somebody who has fears about artificial intelligence is going to not recognize this. They're going to say, of course, you know, AI is going to be able to do the things that we can't do. It's going to be able to drive our cars.
it's going to make life much easier, perhaps in the short term, but they still have a fear, and those two things aren't incompatible. I mean, like with the calculator, nobody had a fear that the calculator was going to become sentient and develop its own desires and think that humans are getting in the way of its mathematical mission for the universe. That's why people see AI as a different category of technological progress. And it is interesting. You'd have to build something that makes judgments about everything in the world and then give it power to act on those judgments. The question is, who's going to build that?
Is that any different from saying, I have E equals mc squared and advanced math? I'm going to build a hydrogen bomb. Okay? Yeah, we actually did build hydrogen bombs that could destroy civilization. Yeah, someone might build an AI creature that will decide on its own
what it should do about the world. But somebody's going to have to build that. It's not going to build itself. So in the meantime, all of our incremental steps in AI are greatly magnifying our quality of life. And so, and by the way, if you went back 15 years, 15 years, and said, look what I have. I have this device, we call it a smartphone.
and it monitors the traffic wherever I want to go and redirects me if the traffic changes, they would burn you at the stake for being a witch. They would say, my gosh, what kind of AI is that? Is it sentient? And you can ask it: Siri, where's the nearest Starbucks? Oh, it'll take you, go down the block, go through. You would think that was AI.
Every advancement that a previous generation would have called AI, that's just life. Yeah, well, there's something, I think they call it the AI problem or something, which is this idea that, as artificial intelligence progresses, the things that count as artificial intelligence change. That is, things that, you know, a decade ago... It's quite, yeah. And so I understand the hesitation to... That's my example of the moving goalpost, yes.
Starting point is 00:23:35 Yeah, to put a sort of distinct boundary around it. But, you know, it's nice to hear some optimism for a change, I suppose, about this impending disaster. I do think that if you took a mobile phone back to, you know, back 100 years, 200 years, people might think that you're some kind of witch. They might think that you're crazy, but do you think that they would look at that device and think that it's dangerous? I mean, here we have a science that's almost still in its...
You'd have to go back to the era of superstition. Then they would see it as dangerous. But in the era of rational thought, the technology era, no, I don't see that happening. I kid when I say, go back 15 years and you'd be burned at the stake. You go back to when superstition ruled the land, then, yeah, entirely. We're in the infancy of this science, of the technologies that are currently being referred to as artificial intelligence. And I think that, for example, some people think that the smartphone has done more to harm us than to help us, right? But that kind of thought didn't really exist until the smartphone had been with us for a long time.
We started to observe its effects. And then we thought, you know, maybe this isn't great for our attention spans. Maybe this isn't great for our social interactions. This seems quite unique in that before it's even really gotten off the ground, people are already flagging it, and not just saying, hey, look, you need to be careful here. It beat us at chess, at Go, at Jeopardy. It's writing your term paper. Don't tell me it hasn't gotten off the ground.
What are you basing that on? I think compared to what people predict it could become. I mean, when you, obviously... Continuously, as we agreed, it's a continuously shifting goalpost. So if I showed anyone who was talking about AI 10 years ago the stuff that's going on today, it's, oh my gosh, AI has finally arrived. No, we're living today and we say, no, it hasn't arrived; there's still another goal that has not been breached.
So, gosh, I interrupted you. Go on, finish. Well, what I wanted to say was that the fear is that these scientists aren't just saying, look, we need to be careful about this, but they're saying this potentially could be one of the greatest existential threats to humanity. And I say, you know, it's just getting off the ground. And of course, compared to where we were 10 years ago, we're leaps and bounds ahead. But because this is exponential, it's a bit like sort of the growth of, the growth of
Starting point is 00:25:47 something like the smartphone in the first three years, huge changes were made. Well, all technology is basically exponential, all of it, as I spent time in one of the chapters delineating. But let me say it another way. I see where you're coming from. Let me try to shine a little bit of light on it. So you have to ask yourself, what is this AI thing that everyone fears? Is it a computer in a room that has control over every other computer in the world? Well, why would you grant that access? Like I said, it's like the hydrogen bomb.
Why would you grant access to the hydrogen bomb to a crazy person? You just wouldn't. Here's a computer that can control other computers, but you don't want it to control the computer that has the finger on the switch. So you put in firewalls, or whatever we do today that prevents bad forces from operating on powerful sources of energy and influence. Okay? So I see that.
Now, we want to perhaps redouble the energy invested in how to tame that. And that was in that recent letter that was signed by so many, that said we should stop our investigation of AI until we catch up with it. I think that's naive, in the sense that you're not going to stop curiosity in the world. It's not going to happen. Yeah, you can stop it over here, but is that going to stop China or the United Arab Emirates or Chile or some other country? They're going to continue. You don't have control over them. So it's more gestural, to say we're scared, let's put a moratorium on it, than it is an authentic rule that the world is going to follow.
I think if it wakes you up to the possible dangers, that's great. What are the possible dangers of nuclear warfare? Let's talk about it, for sure. What are the possible dangers of a runaway virus? Let's talk about it. If you have weaponized germs, there are risks there. Let's talk about it. So I don't see this as a different kind of risk from those that in the past have put our lives at stake. You'll have heard of this analogy of, like, the five-year-olds and the adult. If artificial intelligence becomes something that is more intelligent than at least the average human being, potentially any human being, then us saying, you know, well, look, we'll just put a firewall.
Starting point is 00:28:20 Quite. I'm just saying it can do a billion calculations a second. We used to think doing calculations was intelligence. Now we can do that. Now you're not counting that. You're not putting a check in that box and saying it's smarter than us? So, well, okay, let's say that it is. I think there are sort of different forms of intelligence, right?
But I think the idea of this thought experiment is this adult, this incredibly intelligent adult. Is it a machine in a room? No, this is a human being. This is a human being. And there are a bunch of, you know, five-year-olds who are trying to keep the human being in a cage. And they're a bit scared that the adult's going to, you know,
try and escape. And they think, well, if that happens, we'll just stop him. But as the adult, you'd look at these five-year-olds trying to keep you in a cage. It's laughable. It's completely laughable, you know. Right, right. Except the adult, in your example, is another version of a human being that walks and talks and this sort of thing. A machine, a computer, is in a box, in a room, and it talks to others.
Starting point is 00:29:24 So let's call it a collective intelligence. I don't have a problem with that. And, okay, and it breaks the codes that we thought were unbreakable because it's, like, massively smart. And we have to ask, well, why would it do that? Did you program it that way? Did you say, become the most powerful thing in the world? Well, you shouldn't have done that, okay? So, I mean, there's a lot of occasions in the history of modern life
where people could do something and shouldn't do it and didn't do it. And we're all still alive because of it rather than dead. So, yes, I'm not saying there aren't dangers. I just don't see the dangers as being unique among dangers that we have faced. And the benefit that it can bring, when it's properly managed and allocated and done in a nicely planned way, we want to think about it, sure, can be of extraordinary benefit to humankind, to solve problems that we can't solve, that we're not smart enough to
solve, be it hunger or the human genome or whatever else it can do that we have yet to even dream of. You mentioned earlier about, I mean, we've spoken here about, let's say, an internal, an internal technology or an internal fear that humans have about the planet. Sorry, a quick point about the adult and the five-year-old. Yeah. That's a brilliant philosophical analog. But in practice, we're not creating another human being.
It's a machine that runs on electricity. Okay, so it doesn't eat potatoes for its nourishment. It runs on electricity and batteries, okay? And so, yeah, if you start getting out of line, I'm going to unplug you. You know something? You can't stop me from unplugging you. But are you going to, is there some walking robot? No, we're not really making walking robots.
They make fun YouTube videos. But no one is going to make a robot to do things, because the thing itself is a robot. Am I going to make a robot to drive my car? No, my car is the robot. Okay? So, um, plus, AI only knows what's on the internet. Is that everything that humans know? Actually not.
I can make a discovery today and write it down. And if I don't put it on the internet, AI will never know it. Yeah, I suppose I wish I knew more about AI to be able to comment on the idea of just being able to switch it off. I think the fear is that somehow this beast will come to exist in a form that can't be just turned off, at least not simply. And you ask, you know, who would create such a being, who would create such a computer? I think there's a fear that it could happen either by mistake or incompetency, or perhaps by some artificial intelligence. Nuclear ICBMs can happen by mistake.
or on purpose. Correct. Yeah, that's right. And by the way, you can weaponize it. It's not just all of civilization at risk, think of weaponizing it. You'd send some AI bot into another country, within their firewall, and it takes them out. Sure, this is a real thing. I don't, yes, we should worry about it.
Starting point is 00:32:54 But I'm not going to uniquely worry about it and say we should never have AI. I see what you're saying. I wonder, having spoken about some internal threats, what do you think is the greatest threat to planet Earth from space? Asteroids. Asteroid strikes. Yeah, and we're practicing how to deflect them. The DART mission from NASA worked out pretty well.
We feel pretty proud about slamming into an asteroid, altering its orbit, so that if you get good enough at that, you can deflect an asteroid that otherwise has us in its targets. But that's the single greatest threat. We plow through several hundred tons of meteors a day, a day. Most of them are small and they burn up in the atmosphere as shooting stars. Occasionally they're big and very dangerous, and they render things extinct. The best preserved crater in the world is in Arizona. It's nearly a mile across. You all still
remember miles, right? It's fun to say nearly a mile across. You all came up with it, right? So somebody ought to remember what a mile is across the pond. A mile across. A 62-story building could be buried under the depth of the crater. And that was just 50,000 years ago. As far as we know, not much went extinct then. But oh my gosh, if you were around at that time, that would have been quite the spectacle. The solar system is a shooting gallery. And it's not just stuff hitting us from way long ago. Anytime they show paintings of dinosaurs, there's like a volcano and there's, you know, stuff like that. No, dinosaurs were only like a hundred million years ago. Earth's been around for four and a half billion, so a hundred million is like yesterday. There's still
plenty of junk that can fall into Earth and kill it. So for me, that's the greatest existential threat. And what I worry about most is that a hundred years ago, that wasn't even on the list. So I lose sleep. But what was on the list then? Well, you might die from tuberculosis, you know. Yeah. Or whatever. And I wonder what's going to be on the list 100 years from now that we don't even know is an existential threat. But it won't be artificial intelligence. Sure.
Starting point is 00:35:12 How good are we at spotting asteroids? I mean, there's a big discussion about how good we are at deflecting them once we've seen them. But how much of a grasp do we have on where they are and when they're threatening us? The good fact is the bigger the asteroid, the easier it is to detect. That's good.
Starting point is 00:35:27 because the bigger ones will do more damage. So that's a good thing. So we can say with some confidence that there are no asteroids above a kilometer in diameter that hold Earth at risk any time in the foreseeable future. But as you go to smaller and smaller asteroid sizes, there are many more of them. So they're harder to track. They're harder to discover. They're not as bright.
and so we're not worrying about a species-ending asteroid. We're worried about one that will take out a grid. One will take out all of the UK? Yes, that can happen, but civilization survives it, you know. One that'll take out
not so much all of France. All of Liechtenstein? Yes. Yes, one that will hit in the ocean and take out a coastal city from a tsunami, yes. So, but they're not extinction-level episodes. There are none large enough, in spite of what Hollywood will otherwise tell you, there are none big enough to accomplish that goal. Before I let you go, because I know that time is pressing, I wanted to ask you about something
you reference a few times throughout the book, which is your forbidden Twitter file. Oh, you, you've mentioned a few tweets that maybe should have stayed in the forbidden Twitter file and never made it public. But I wanted to know, how often does that file grow, these tweets that you go to write and then think maybe the world best not hear this? Yeah, I have, the file now contains, I don't know, not too many, 20 or 30 tweets that I'm just not going to post. And there are others, like I said, that maybe should have stayed there.
And there are tweets that are disturbingly true on a level that would just create unrest and argument and denial. And as an educator, that's not my goal. People just aren't ready for it. They aren't ready for these tweets. Or maybe they'll never be ready. It's just, and I don't need to start the fight, you know, to pick the fight. It's so true and so disturbing that people will be in denial of it.
And then they'll say, oh, you only think that because blah, blah, blah. And that becomes a, because social media is a cesspool. In its purest state it's a cesspool, and I'm navigating that as I post some tweets and not others. Why are you still on social media in that case? Oh, great question. Because as an educator, I'd learn in that instant whether I was effective in what I said or how I said it.
So if I use a word and people say, what's that word mean, that was not effective. If I say something that I think is kind of funny and nobody laughs, then it's not funny. If I say something I think is clever and nobody says it's clever, then it's not clever. If I say something that's funny and clever and everyone agrees, it goes viral. And then I say, okay, I make a note. These words said these ways are potent. So that when I address the public in public talks that I give, I'm now equipped to communicate with that much more precision with the audience that is gathered in front of me.
So to me, social media is a proving ground of ideas that I want to share. What's the worst reaction that you've had to a tweet that just about made it out of the forbidden Twitter file and onto your real feed? Yeah, so there's one where we had one of our representatives to our Congress, who at a rally was shot by someone and critically injured. She's recovered, but I think
is still in a wheelchair. Shortly after that, I said, this is the consequence of living in a country where guns are plentiful in the company of crazy people. Crazy people plus guns, that's a bad recipe. So that's what I said. I said it a little more elegantly, but that's basically what I said. Well, fascinating. Okay.
There was an entire flux of tweets that said, I have a mental illness and I am not a risk to your life. Other people said mentally ill people are not more likely than non-mentally ill people to commit crimes or to use a gun in this way. And this went on and on. And someone else said, that's ableist to declare that. So first, it was my first time I ever saw the word ableist. So I had to look it up. I said, okay, that's a new word for me. Think racist or sexist. Now it's ableist. I'm saying crazy people are going to shoot you.
So that's being insensitive to crazy people in that tweet. But here's another interesting fact. No one used my word crazy. Everyone who criticized it swapped out the word crazy and put in mentally ill. I thought that was interesting. So what happened there with the word crazy, I kind of saw it happen in real time in front of me: the word crazy was removed from the language, because it used to mean mentally ill people, and mentally ill people said, no, you're going to call us mentally ill, not crazy. So now
Now, crazy can't mean anything else other than a pejorative reference to a mentally ill person. So all of this was a consequence of that tweet, for my personal enlightenment and my sensitivities and my awareness. And so I'm kind of glad I posted it, but I had to bear the brunt of it. By the way, the comments were polite. It was before the internet got really nasty, troll-like. They were firm but polite. And so I don't want to misrepresent what actually happened at the time, because that was many, many years ago. So that was a tweet. Maybe it could have stayed in the bin.
Starting point is 00:42:17 But the fact that I put it out there helped me become a better educator. Well, I'll give you one more tweet that just made it out the door. Okay? And we'll end on that. After one of the horrific school shootings, I tweeted. At Walmart, America's largest gun seller. By the way, it might have been the world's largest.
Whichever of those it is, it was that in the tweet, okay? But let me say here, for this conversation: America's largest gun seller, you can buy an AR-15 rifle, which had been used in the very recent shooting. You can buy an AR-15 rifle, yet company policy bans the sale of pop music with curse words. That was the tweet.
That tweet has no value judgment whatsoever, other than this feels inconsistent. And that's not so much a value judgment as an observation of people trying to
set up a sales policy, okay? So, an implied value judgment, perhaps, in the fact that the tweet's being made at all. I'm not saying they should or shouldn't sell guns. I'm not saying the rock music shouldn't have lyrics. I'm saying if you're going to have one, it's odd that you don't have the other. If you're okay with selling guns, why do you fear curse words? That's really what it is. Sure.
And that's, okay, so I'm not saying anything about curse words or guns, other than if you worry that curse words will hurt you, why are you selling guns? If you're selling guns, why are you worried that curse words will hurt you? There is no singular interpretation of that tweet. It is a highlight of an inconsistency. Well, people ganged up on the tweet. Practically down the middle, half said, they're a private company. If they want to do that, they can; they're not the government, and free speech only applies to government.
They thought it was like a free speech, First Amendment thing. In the United States Constitution, the First Amendment protects free speech. The other half said, if they want to sell guns, they can. It's protected by the Second Amendment. The other half made it a Second Amendment thing, thinking I'm trying to defend either the First or the Second Amendment, when I was doing neither. And that was so illuminating to me. Because what it told me was people bring their bias. They have a lens and they use that lens to interpret non-biased information.
They use the lens to assert that neutral information carries a bias. That's why this was mind-blowing to me. So I thought people would say, oh, that's interesting. I never knew that. But no, people took sides. They didn't even see each other's side that was getting taken. Because they thought I was trying to ban guns or ban pop music, when I wasn't trying to do either. Boy, that was illuminating.
Starting point is 00:45:47 That was, you talk about helpful information for an educator? Yeah, there it was. That's Twitter for you, I suppose. hopefully. A lot of social media. Twitter is at the pinnacle, of course, but yes. I like to think that YouTube is a slightly safer place, but I suppose that we will find out in the comment section to this podcast. Neil DeGrasse Tyson, thank you so much for your time. Thanks for your interest. I'm happy to share it with whoever will listen. All right. Thanks. Thank you.
Starting point is 00:46:27 Thank you.
