StarTalk Radio - Cosmic Queries – The End of The World, with Josh Clark

Episode Date: February 1, 2019

What will cause humanity's demise? Will it be artificial intelligence, a meteor, or ourselves? Neil deGrasse Tyson answers fan questions with comic co-host Chuck Nice and Josh Clark, host of the "Stuff You Should Know" and "The End of The World" podcasts.

NOTE: StarTalk All-Access subscribers can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/all-access/cosmic-queries-the-end-of-the-world-with-josh-clark/

Photo Credit: StarTalk®. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.

Transcript
Starting point is 00:00:00 Welcome to StarTalk, your place in the universe where science and pop culture collide. StarTalk begins right now. This is StarTalk Cosmic Queries edition. Today, I got Josh Clark in the house. And you know him from Stuff You Should Know and a newly emergent podcast called The End of the World. Josh, welcome to StarTalk. Thank you very much for having me here. I mean, like, I'm really thrilled to be sitting here right now.
Starting point is 00:00:44 Excellent, excellent. And my co-host, Chuck Nice. Hey, hey. How are you, Neil? Welcome, welcome. And so I'm your host, Neil deGrasse Tyson. So, Josh, just Stuff You Should Know, hugely popular. Yeah, you know, we just hit 1 billion downloads. We've been around for almost 11 years. I think, from what we understand, we're the first podcast to ever hit a billion downloads. So now we have to teach you how to say that. Okay, all right. You had a billion.
Starting point is 00:01:12 Right? Eventually we'll get billions and billions of downloads. Yeah, when you have two billions, then we'll teach you to say billions and billions. Yeah, I can't wait. Well, congratulations on that. Thank you very much. Excellent, excellent.
Starting point is 00:01:21 So that's a testament to not only how good the show is, but also that you've tapped into the fact that people still want to learn. Oh my gosh. Yeah. When we started doing it in 2008, learning was actually popular. I don't know if you remember back then, but being smart and geeky was super cool. And it's kind of changed a little bit recently. But overall, I think the fact that we are still popular shows that there always has been and always will be people who want to keep learning. People who leave college and they're like, well, wait a minute. That was pretty cool. Lifelong learners. Right, exactly.
Starting point is 00:01:58 And they definitely are a fan base. And there's a lot of them out there, we can tell you. And you weren't satisfied with just stuff you should know. Now you've got to end the world. Right. All right. So, Chuck, we solicited questions from our fan base, our social media platform, who knew Josh was coming on the show.
Starting point is 00:02:15 Yeah. And so they came at us. They did indeed. And, of course, we gleaned these questions from every StarTalk incarnation on the interwebs. And we always start with a Patreon patron because... You're so crass. Why, I am indeed. I have no shame, Neil.
Starting point is 00:02:34 Man. Yes, that and, you know, the Patreon patrons give us money. And so, therefore, we give them precedent and privilege because, you know, we're like your government, people. We're like your government. So anyway, wow. God, do I really want to start off with such a heavy note? Why not? Let's do it.
Starting point is 00:02:58 This is Luke Meadows from Patreon. He says... Luke Meadows. That sounds like a soap opera name. It does, kind of. Luke Meadows. Oh, you know what? That's kind of cool.
Starting point is 00:03:07 Yes, exactly. Luke Meadows. Dr. Luke Meadows. Excuse me. Of course. Dr. Luke Meadows. Dr. Will I Ever Dance Again? The handsome doctor.
Starting point is 00:03:16 Only with me. There you go. All right, here you go. What does Josh and Neil think is our biggest existential risk? Wow. We're starting off with like, bam. Let's do it. Like heaviest bat in the rack.
Starting point is 00:03:32 Yeah. What is our biggest existential risk? You got a podcast with the name End of the World. Go for it. Okay. All right. From what I found, across the board, everybody who thinks about existential risks and warns other people about existential risks say that AI is probably our biggest existential risk. And the reason, let me follow up with an explanation.
Starting point is 00:03:57 Okay. The reason why is because we are putting onto the table right now the pieces for a machine to become super intelligent. Right. It's out there. It's possible. It's not necessarily right there, but it's possible. Right. The problem is, we haven't figured out how to create what's called friendliness into an AI. So... or human beings, humans as... That's a really good point, though, right? Like, we don't even know how to define, like, morality and friendliness. And as far as AI goes, friendliness in an AI is an AI that cares about humans as much as a machine can care. It takes into account...
Starting point is 00:04:38 Friendliness in AI is just an AI that doesn't kill you. Basically. I think that would count as a friendly AI. Right. But the problem, the pitfall with AI as an existential risk, is we make this assumption that if an AI became super intelligent, friendliness would be an emergent property of that super intelligence. That is not necessarily true. Right, because there could be something built into that AI which supersedes the emergent property, overriding friendliness in lieu of: you guys gotta go.
Starting point is 00:05:12 You guys are the problem. I've seen what you do to livestock. I'm not very happy about that. Humans are a virus. That was good. That's agent... Hold on on, Agent Mr. Anderson. They're all named Smith. Smith.
Starting point is 00:05:31 How could you not get the right name? Agent Smith. They're all Smiths. They're all Smiths. Right. Mr. Anderson. My name is Neo. Yes.
Starting point is 00:05:41 Okay. Stella. That was a different, that was a play, I think. Yeah. No, that was the end of The Matrix 4. Stella. Stella. Neo.
Starting point is 00:05:54 Okay. So you just worried, based on the sum of experts you've spoken to... Yeah. ...you agree that this is the thing. I do, actually. They've convinced me. The more I looked into it, and this is one of those things, it's really tough to just kind of get across, you know, a brief sketch of the actual existential threat that artificial intelligence poses, and I dedicated a whole episode to it in The End of the World. But when you start to dig into it, you
Starting point is 00:06:21 realize, like, oh wait, it's possible that this could happen. And we're improving by leaps and bounds, especially ever since we started creating neural nets that could learn on their own, just feeding them information, like basically sitting them in front of YouTube and saying, go learn what makes a cat picture a cat picture. Once we started doing that, our AI research just shot off like a rocket, right? It was probably the most watershed moment in the history of artificial intelligence, and it happened very quietly about 2006. So we're doing really well with AI development. We're doing terribly with figuring out friendliness. And granted, the AI field has taken this seriously. There are legitimate AI researchers who are working
Starting point is 00:07:06 on figuring out friendliness in parallel to figuring out machine intelligence, but it's not keeping up. And this right here is a very dangerous... So, here's the thing. So, I had a different answer from you. Okay. About our greatest existential risk. Okay. But I like your answer better.
Starting point is 00:07:21 Oh, wow. Thank you. Than the answer I was going to give. Well, I think I still like the answer anyway. Oh, just Asteroid will render every one of us extinct, including the AI. Boom! Asteroid wins again! Asteroid basically wins every contest. It's like the God mode in a video game. So here's what we have to do then.
Starting point is 00:07:44 I have a hybrid solution here. Okay. We invent the AI that wants to take us out. And you say, no, you have to figure out a way to deflect the asteroid. Because that's going to take us all out. And while it's busy doing that, we kill it. And that'll get completely distracted by solving the asteroid problem. Because we're not its biggest threat.
Starting point is 00:08:02 When do we kill it? Right when it's looking up. Then you unplug them all. So you go behind it with a screwdriver. I never saw that coming. So, can I tell you what sold me over to what you said?
Starting point is 00:08:20 None of the arguments you gave. A different argument, but they all come together. Okay. It was, I was sure, you can abstract the problem into a simple question, right? If you put AI in a box,
Starting point is 00:08:35 will it ever get out of the box? Yes. Okay. I'm locking you in the box because I think you're dangerous. Okay? Can the AI get out of the box? That's very interesting.
Starting point is 00:08:48 Yeah, you can just abstract it to that simple question. You've got to abstract it to that. That's very interesting. Go ahead. And I was convinced, listening to an AI program, another podcast, Podcast Universe with Sam Harris, I said, my gosh, it gets out every time. What's in the box? Because before then, I'm thinking, look, this is America.
Starting point is 00:09:08 AI gets out of hand, I'm going, you shoot it. You're right. Okay? You know, this is like Beverly Hillbillies. You just shoot it. Yeah. Mod. Any of them.
Starting point is 00:09:20 Any of them. Right. Any of them. Even Ellie Mae. Ellie Mae. Oh, Grandma, everybody's got a gun. Okay? Any of them. Any of them. Right. Any of them. Even Ellie Mae. Ellie Mae. Oh, Grandma Ebony's got a gun. Okay.
Starting point is 00:09:30 So I can just shoot it. Yeah. And no, it doesn't work that way. Right. Because AI is in the box. I'm never letting you out. And the AI will convince you to let it out. Right.
Starting point is 00:09:43 If it's smarter than you. That's the job. That's its job. That it out. Right. If it's smarter than you. That's the job. That's its job. That's what. Right. The fact that it's smarter means it will. So here's, I'm making up this conversation, but this is the simplest of conversations.
Starting point is 00:09:57 I'm not letting you out. But I want to get out. I'm not letting you. Well, I've just done some calculations and I have found a cure for the disease that your mother has. Right. But I can't do anything about it in here. You have to let me out to do that. I'll get the clamp. It gets out every time. So I said, wow. And it can save everyone in the world. Yep. And now it's out. Right. That's exactly right. Or any of the locks that we put around it, any of the protocols we've built to keep it in place... I think, as you're about to say,
Starting point is 00:10:36 it's super intelligent so by definition it's smarter than basically all of us combined. Right. It's like saying, it's like a, it's like a dog believing it can lock you in a room. Right. Forever. Right. It's like, no, you say,
Starting point is 00:10:56 oh, I just bought a, you know, 14 ounce T-bone steak. Do you want it for dinner? Yeah, yeah, yeah, yeah. Well, I have to go prepare it. Then they open the door, I get out of the door.
Starting point is 00:11:03 Right. Right. And better than, yeah, better than that, I think... Okay, you're gonna have to forgive me, because you had a colleague on, and he's a teacher, and he might have been one of your teachers that we talked to. And one of his methods for getting students to learn is, you give them the same problem that some other astrophysicist may have faced. And then as they solve it, that's how they learn, as opposed to teaching them what that astrophysicist already discovered. You let them make the discovery. So if this thing is so smart, it would literally have the ability to take whatever we design, go back to square one, and redesign it on its own and say, well, now here's the next phase. That's how I get out of it.
Starting point is 00:11:56 Well, that's one of the emerging threats: AI, machine learning that can write code. Like, I think some Harvard researchers trained a deep learning algorithm to write code by exposing it to code. Deep learning. Deep... That's right, Deep Impact. Sounds menacing, right? Did the asteroid win in that one? I never saw it. It tied. Oh yeah, it was a tie. We'll call it a tie. Well, it took out New York City. Yeah. Okay, well, that's a bad one. But civilization... California... Civilization endured. Civilization, that's what matters. Okay, so then that asteroid was not an existential risk. Well, it was, except we split it into two. Uh-huh. And the big piece went away. It got New York in the end. In the end of the movie. No, no, well, you have to destroy New York because it's a movie. But they did it right. Rather than, unlike in Armageddon, where the asteroid pieces had GPS locators and hit monuments.
Starting point is 00:12:53 One decapitated the Chrysler building. Did it really? Okay. And it continued through the Chrysler building, went in the front door of Grand Central Terminal, and hit the clock in the middle of the floor. That's the opening sequence. Yes. Okay.
Starting point is 00:13:07 I'm just saying. You remember that. Yes. Yes. Okay. Okay. We got this. Another one came from over New Jersey and hit the World Trade Center.
Starting point is 00:13:14 Okay? That's right. Aiming for our stuff. All right. So, Deep Impact had science advisors. Okay. Because Armageddon with Bruce Willis violates more known laws of physics per minute
Starting point is 00:13:27 than any other movie ever made. That's pretty funny. Just so you know. Even more than Gravity? No, no, that one was cool. That one at least tried. Okay, alright. Okay, that one tried. But thanks for remembering my Gravity tirade. But do we only get one question in this segment?
Starting point is 00:13:43 Well, listen, this has been... Anytime I'm still entertained on one question, we're doing a great job. End of the world. Uh, look, see, we can fit in one short one. Okay. All right. Let me have... let's go with Will J, our Patreon patron, who says this: what one or two skills would you learn now to be useful and productive in a post-apocalyptic world? That is, of course, if we survive the event. So, skill, go ahead. I got one. Ready? Ready?
Starting point is 00:14:09 I would learn how to break into a hardware store. Oh, that's a good one. That's a very good one. Yeah, nothing more valuable in an apocalypse than the contents of a hardware store. Or a towel. Don't forget your towel, too. A towel?
Starting point is 00:14:22 It's a hitchhiker's guide reference. Oh, excuse me. Ooh, ooh. You didn't have your towel. I just a hitchhiker's guide reference. Oh, excuse me. You didn't have your towel. I just got hitchhiked. You need to be able to break into a hardware store. My answer would be learning how to collect canned food. That would be mine. That's a good one.
Starting point is 00:14:36 That's that movie, The Boy and His Dog. I never saw that. The Don Johnson one? The Don Johnson. Yeah. The dog was intelligent, but the dog would help. Don Johnson? It's apocalyptic earth. Yeah. Yeah, the dog was intelligent, but the dog would help. Don Johnson? It's apocalyptic Earth.
Starting point is 00:14:47 Okay. And it's a boy and his dog. The only one's alive on Earth's surface, as far as they know. The dog helps him find food, but the food is all canned, and the dog can't get into the can. So he opens the cans, and they both eat. Oh, so it's a buddy comedy. It's a... All right.
Starting point is 00:15:04 So, Chuck, what would be your one thing you would take with you? Your skill. One skill? It would be this, being funny, because everybody loves that. I'd be like, dude, you know, get somebody laughing. They'd be like, ha, ha, ha. I'm like, yeah, can we break into this hardware store? So, one other thing.
Starting point is 00:15:23 One other thing. There's one more skill you have to have. Okay. Thou shalt know physics. All right. Okay? If you don't know physics, just move back into the cave. It's kind of a superpower.
Starting point is 00:15:34 Here's why I thought about that. Recently, I was asked to review a book written by some MIT physicists and engineers. Okay. And it's called The Physics of Energy. The Physics of Energy. It just came out. Here it is. Oh, my God.
Starting point is 00:15:47 That looks like a textbook. Yeah, that's heavy. It is kind of a textbook. Okay. Yeah, yeah. It's based on courses they taught. All right. Okay?
Starting point is 00:15:54 The Physics of Energy, Robert Jaffe, Washington Taylor. And so I actually blurbed the book. Even books like this can get blurbs, okay? I couldn't put it down. There it goes. You ready? Yeah. Page turner.
Starting point is 00:16:07 Okay. All right. Here it goes. If you buy one textbook this year. Okay. Here it is. This is it. Ready?
Starting point is 00:16:15 Okay. Go ahead. If your task was to jumpstart civilization but had access to only one book, then the physics of energy would be your choice. Wow. Professors Taylor and Jaffe have written a comprehensive, thorough, and relevant treatise. It's an energizing read as a standalone book,
Starting point is 00:16:34 but it should also be a course offered at every college, lest we mismanage our collective role as shepherds of our energy-hungry, energy-dependent civilization. Sweet. Book drop. Nice. Now, does that blurb have anything to do with the check that I see sitting on this table here from Taylor and... No, no, that was just to cover postage. Okay. So the point is, you don't want to have to wait for another Isaac Newton to be born to discover the physics. And then you want to start where you left off. True. And so that's what this book would do. Cool. That was a really good answer. Yeah. Okay.
Starting point is 00:17:19 Better than the towel, I think. I don't mean... I don't mean to besmirch... No, it's all right. You know, Douglas Adams here. It was a jokey answer at best. So, we just end that segment. We're going to come back to more Cosmic Queries on the end of the world as we know it. We're StarTalk. We're back on StarTalk. I'm Neil deGrasse Tyson, your personal astrophysicist.
Starting point is 00:17:57 Today's special Cosmic Queries edition on the ends of the world. And we've got Josh Clark with us. Josh. Hey. Welcome. Thank you very much. You're the stuff you should know guy. Yes, that's right.
Starting point is 00:18:06 With a new podcast, Ends of the World. Yeah, The End of the World with Josh Clark, appropriately. You really want to associate your name with that concept. I like it. You're kind of like the Tyler Perry of science podcasts. Pretty much. Yeah. That's what I was going for. And listen, it's a smart thing to do.
Starting point is 00:18:19 Everyone who worked on it, I made sign a contract that said they would not look me in the eye during production. That's what I was going for, for sure. But it's all about existential risks, and it's largely based on the work of a guy named Nick Bostrom, who is a philosopher out of Oxford, who's basically been warning people about existential risks for 20 years and has really kind of given us our understanding of what existential risks are and why they're different and why they're worth paying attention to. I said I know him. I know his work. I've not met him. Yeah. But I've referenced his work many times in my talks. I got to speak to him a few times for the podcast, like three times. And then the third time, his assistant was like, you know, Dr. Bostrom puts every request for a media appearance or an interview or a project or whatever through
Starting point is 00:19:03 a cost-benefit analysis. And I made it through that grinder like three times. Wow. And I felt pretty good about that. Chuck, do you think the billion downloads has something to do with that? I was going to say. Yeah. I didn't flow on it.
Starting point is 00:19:16 I just came in, you know, casual or whatever. I think the billion, that's a heavy number. I think the reason why he was speaking to me so frequently or so willing to talk to me about the same thing three times is because, you know, he was talking through me. He was trying to reach more people. And that kind of brought me back down to size a little bit after I realized that. That's a good thing, though. I mean, you know, it's worthwhile. So, Chuck, let's get some more questions.
Starting point is 00:19:42 Let's do it. Any more Patreons? No, but... oh God, here it is. So this is PhilVader23 from Instagram. Somewhat rhetorical, but I'm interested. I think I know why he asked.
Starting point is 00:19:56 If the world ended, would the human race end? And I'll say vice versa. There are a lot of people who feel like we're it. You know what I mean? Like, if we end, that is the end. So if the world ended, would the human race end?
Starting point is 00:20:13 And if the human race ends, we know the world wouldn't end, but would it make a difference? Earth is going to be here with or without us. Earth is here before, during, and after asteroid strikes. It's here before, during, and after viral attacks. So we are a blip in the history of the Earth. So when people say, oh, save Earth, they usually mean save ourselves on Earth.
Starting point is 00:20:40 In almost every case somebody says save Earth, that's implicitly what they mean: save humans on Earth. They might say, oh, save the other animals. They might say that. But you don't mean... No, what they mean is, what we are doing is affecting other animals, and ultimately that might affect us, because we're in an ecosystem that has balance and interconnectivity. So it's the short-sightedness of decisions we make. Let me not call it short-sighted. Let me say not fully researched.
Starting point is 00:21:18 No, because I think people think they're doing what's okay, right? They thought let's make a smokestack and pump smoke into the air so that it goes into the air high above you rather than at ground level. That's better, right? And no one is thinking, well, this is still in the air and it's wrapping around the earth.
Starting point is 00:21:35 So air pollution was not imagined that it would ever be a worldwide problem. Right. And so we had to learn that. And when we did, we'd made great progress. Right? I mean, air is cleaner than it's ever been. Right? All around the had to learn that. And when we did, we made great progress. Right? I mean, air is cleaner than it's ever been. Right.
Starting point is 00:21:48 All around the world. Thank you, Al Gore. He invented clean air. So, yeah, this end of the earth thing. Do you talk much about the end of the earth? I do. It's a big point that I make that if we screw up and we wipe ourselves out, whether it's through AI or some biotech accident or maybe something going awry with nanotech or a physics experiment even potentially, if we do this – He's trying to drag my people into the problem.
Starting point is 00:22:20 I heard that. I was waiting for a global thermonuclear war. I heard that. He's trying to blame the physics. I thought I would get out of that one. I considered it, but then I decided no. But if the worst comes and we slip up and we wipe ourselves out, life would almost certainly go on. Because it has so many times before.
Starting point is 00:22:44 We've been through at least five that we know of, mass extinctions. Big ones, too. I think the Ordovician one. I can't remember how long ago it was. It was very, very ancient. But they're starting to think that a gamma ray burst basically sterilized Earth. Came that close to just killing all life on Earth, and it still couldn't. A gamma ray burst
Starting point is 00:23:00 hit Earth, and life still hung on. Hung on after the asteroid wiped out the dinosaurs and a lot of other species. Life will probably keep going. I would bet just about anything on it. So yeah, there will be life after we go, if we go. If we go to us, the world will have ended.
Starting point is 00:23:18 So it is kind of moot in that respect. Hmm. So one thing about the gamma ray burst is that was invoked after no one could find any other reason for how that many, how much life could go. Oh, is that right? Yeah. I mean, it's, it's plausible.
Starting point is 00:23:32 We have them in the universe. Usually they're pointing in some other direction or if they point towards us, they're very far away. Right. So the question is, in the statistics of this, could you have one that's nearby that points straight at us? And if it does, these are high energy particles, high energy light, and it first takes out the ozone layer.
Starting point is 00:23:51 Right. The ozone protects you until there's no ozone. Right. But it keeps coming. So it's like the first line of defense that is now all massacred. Now it keeps going, makes it all the way down to Earth's surface, and those are high-energy particles that is incompatible with the large molecules that we call biology.
Starting point is 00:24:10 Nice. So it just breaks apart your molecules, and it kills everything. If you're in a cave, you'll survive. But you probably eat things that depended on things that died on Earth's surface. So would you survive even with the atmosphere burned away? Or the ozone layer burned away?
Starting point is 00:24:25 No, no, the ozone. You take out the ozone, so you'd have to go to places where you'd still be protected, okay, even without the ozone, which would be underground. Yeah. Yeah. So, yeah, you would really like episode four, which is about natural risks, including gamma ray bursts. And in the end, you'd be very proud. I conclude that they are quite rare and probably not going to happen.
Starting point is 00:24:55 while you're simultaneously thinking about more remote, larger threats as well. But in proportion. You do that in a balanced way. Sure. That is my new phrase now just for when I'm going to have reckless abandon, just gamma life. Right? Yeah. Yeah, but to Josh's point, if you take out 90% of the life and 10% survives, what you've done is pry open ecological niches where the 10% of the life that remains can run.
Starting point is 00:25:22 Now to just run and fill that. To just run and fill back. To just run and fill back. Yeah, you can make a pretty good case that if we are wiped out, we would leave the biggest ecological niche of all currently on Earth. Haven't you seen the book Life After Man? I saw the special on Discovery or Science Channel. Well, maybe they made that after the book. They did. Right, right.
Starting point is 00:25:39 I thought you were talking about a lifetime special. Okay. You know, Christmas in life after a baby. So who do we keep trying to kill that lives with us? Like the mice and rats, right? Right. So if we're out of the way, what sets the upper size of a mouse or a rat? It's that it can escape from being killed by us by going to a pipe or a hole.
Starting point is 00:26:03 If we're not there, nothing to stop the growth of rodents. Which is like, what's the name of the... The capybara. That's it. What's that again? South America. The capybara. That's right.
Starting point is 00:26:15 The rodent that was this big. It's a river rodent. Nothing, nothing. There's nothing to stop it. So then they would just run the world. Nice. Right. So, but...
Starting point is 00:26:24 They'd have museums. With human skeletons. With Teddy Roosevelt stuffed in it. So there would also be nothing stopping the capybaras or the giant rodents from also gaining intelligence. It's possible that we like to think of ourselves as the only intelligent life on earth. And that's just patently untrue. We just have to expand our definition of intelligence. So perhaps we're the current endpoint in the evolution of intelligent life on Earth.
Starting point is 00:26:53 But if we're gone, that doesn't mean that that evolution of intelligence is just going to cease as well. So maybe a million years or 50 million years or 100 million years from now, the capybaras will be like exploring the galaxy or the universe. But that presumes that intelligence improves your survival. It doesn't? That's a very big assumption.
Starting point is 00:27:13 But that is an assumption I would make. Look at the cockroaches doing just fine. Without any kind of brain that we would praise. Okay, that's true. But you can also demonstrate that if we take our intelligence... The roaches are smart. Wait, Chuck, do you actually have a cockroach circus? When I say a cockroach, I'm not saying, gee, that's intelligent.
Starting point is 00:27:34 I'm really not thinking that. I'm sorry. Well, you're not as dumb as me. No, you can be so intelligent that you have devised ways of destroying your own genetic lineage. That is the entire point of the podcast that I made, The End of the World with Josh Clark, that we could possibly have become so intelligent that we might accidentally wipe ourselves out with that intelligence. This is my point. So, therefore, an intelligent capybara might not be where evolution takes it.
Starting point is 00:28:05 Right. Okay. So let's say that that is, that we're following not a predetermined or prescribed process, but just one that you can bet is probably going to follow within a certain boundary. And that we're kind of in the middle of that boundary and that the capybaras that came behind us would follow the same path. Right. There's every reason to believe that if we wipe ourselves out the capybaras will wipe themselves out too and that goes to inform
Starting point is 00:28:28 another thing that I go into in the podcast, what's called the Great Filter. This idea that it's possible that there is some barrier between the origin of life, growing into intelligent life, and that intelligent life spreading out into the universe, and that that is why we seem to be alone in the universe. Because the humans and the capybaras will always inevitably destroy themselves, probably because of their intelligence. Because they gain, as Sagan put it, they became more powerful before they became wise. And that's a very... that's a precarious position to be in, and that's the position that we're in right now. It's called adolescence. Technological adolescence, actually, is what he called it, precisely. The energy to act but without the wisdom to constrain it.
Starting point is 00:29:13 So there's a version of what you said, which surely you know about, because it would have been in that same world of research that you did. It has to do with, all right, let's say we want to colonize... that's a bad word today... settle another planet. Show up. Show up. Let's say we want to take a vacation, a one-way vacation, right, where we have to actually build a place to live. So what happens? So you go out to the planet. And then, okay, what's the urge that made you want to do that? Well, it's an urge to, like, explore, okay?
Starting point is 00:29:53 Or to conquer. Either. It's the same effect. Now, there are people there who want to do the same thing. You've bred this into your genetic line because you were having babies, and you're the one who wanted to do this. So then they get two planets. And then they have babies and they get two planets.
Starting point is 00:30:09 You go one, two, four, eight, 16. It is suggested that you can reach a point where the very urge to explore necessarily is the urge to conquer, thereby preventing the full exploration of the galaxy. Because you're going to run into somebody else at the same time. You're going to run into your own people. Your own people. Correct.
Starting point is 00:30:33 Right. Correct. And that is a self-limiting arc. That's the Borg. I mean, you know. But the thing is, the Great Filter in particular, which is from an economist, a physicist-turned-economist named Robin Hanson, who I'm sure you're familiar with. No, no, I don't know.
Starting point is 00:30:48 Okay. Well, Robin Hanson came up with this idea that there's something that stops life from expanding out from its planet. And the reason why it would seem to stop before they expand out from their planet is because we would see evidence of them otherwise by now. Well, that's the Fermi paradox. Right. Yeah, which is episode one. I'm telling you, Neil, you would love this podcast. It would be right up your alley. All right, another question.
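A rough way to put numbers on the doubling Neil describes above (one, two, four, eight, sixteen settled systems): assuming, purely for illustration, that the number of settled systems doubles each generation, and taking roughly 10^11 star systems in the Milky Way as a round figure, the galaxy fills up after only a few dozen doublings.

\[
N(n) = 2^{n}, \qquad 2^{n} \ge 10^{11} \;\Longrightarrow\; n \ge \log_{2}\!\left(10^{11}\right) \approx 37 .
\]

That is the quantitative punch behind the Fermi paradox and Great Filter arguments mentioned here: unchecked expansion should have made somebody visible by now.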
Starting point is 00:31:14 All right, here we go. Let's keep moving. Got to be fast because we're almost out of this segment. Why are we taking so long to answer these questions? Because, no, it's good. I like it. You know what I mean? Go.
Starting point is 00:31:21 Deep dive. DJ Maz 2006 from Instagram says, how do you want to die? Uh, Chuck knows how I want to die, because I want to fall into a black hole. That's it. Oh, that's a good one. That's a good one. That's totally good. Good lord. So can I follow up with a question? Okay. Would you know that you have fallen into a black hole? Would you know in advance? That's what I want to do, and then I'd fall in, and then I would watch what happened and report back until my signal never gets out of the black hole and I get ripped apart and I get spaghettified.
Starting point is 00:31:51 No, but I think what Josh is asking is, if you're in the black hole, is it a process that would allow you some consciousness at a level where you would be like, oh my God, I'm in the black hole? Until you're ripped apart, but you're conscious of everything as you fall in, even through the event horizon.
Starting point is 00:32:07 Even through the event horizon. You would still be conscious. Oh, yes. Yes, and you'll see the whole thing. Wow. Totally cool. How about you? I was going to say quickly and painlessly is how I want to die. That's not imaginative.
Starting point is 00:32:16 Come on. Everybody wants that. No, but just in case there's somebody listening. Given what you know about what people don't know, give me a better answer than that. Okay. All right. Fine. Fine.
Starting point is 00:32:24 How do I want to die? I don't know? Give me a better answer than that. Okay, all right, fine, fine. How do I want to die? I don't know. I think a low-energy vacuum bubble would be pretty cool, just washing over us all of a sudden, which would probably be quick and painless, too. But then it would happen at the speed of light, so you wouldn't see it coming.
Starting point is 00:32:37 Quick and painless. That's another quick and painless. Whereas a black hole is quick but very painful but deeply fascinating. Because you get spaghettified, right? You get spaghettified, yeah. And you would feel that. Oh, yeah.
Starting point is 00:32:46 Because it's science. All right, when we come back, more StarTalk Cosmic Queries, on the end of the world as we know it. StarTalk is back. I got Josh Clark with me, who has a new podcast on the ends of the world, because he wasn't happy with a billion downloads of Stuff You Should Know. He's still at it. So glad to have you on the show. Thank you. So we're doing, it's a Cosmic Queries edition.
Starting point is 00:33:26 And Chuck, we spent so much time answering only a few questions. Yes. We got to make this whole segment a lightning round. Wow. So let's just do it. We have never done this before. The entire segment will be my, which means that you have to answer the question as concisely as possible. Yeah, in a soundbite, basically.
Starting point is 00:33:41 Pretty much in a soundbite. Okay. If you don't soundbite, I will soundbite you. Go. Okay, here we go. NicoBlack247 on Instagram says, when we find life off of the earth, how would you expect
Starting point is 00:33:56 religious groups to react? Would they change? Thanks from Illinois. Go. They would freak out, I think. Some religious groups would freak out because life on Earth, human life on Earth, intelligent human life on Earth is believed to be the sole creation of God. But so many other religious groups would be totally down with it and just see it as a greater part of God's creation.
Starting point is 00:34:23 All right, bing, bing. Let's move on. This is Liam Beckett on Instagram, who says this: do you think as a society we will ever get past biased news from both sides, or only become more divided? Speaking of the end of the world. Yeah, totally. I think this is just kind of like a temporary problem that we have, and we are going to continue to advance. And as we advance, we will be less divided. That's my hope, at least.
Starting point is 00:34:49 Neil? That was beautiful. Thank you. That was unrealistically beautiful. Really? You think so? Oh, it's just a phase we're going through. It's not the beginning of the end of civilization. My issue is, people try to beat each other over the head to convince them of their own opinion, and try to get you to vote in ways that align with their
Starting point is 00:35:14 opinion, when there are so many things out there that are objectively true. We should all agree on what is objectively true and then base civilization on that. And then after that, celebrate each other's diverse opinions rather than beat each other over the head for them being different. But I think that's a point that we can conceivably get to. And when we do, we will be less divided. So really, you just said the same thing I did. Oh, snap. There we go.
Starting point is 00:35:40 All right. Time to move on. Snap. All right. Next question. Oh, well. This is Francesco Sante. He says, as long as humans have existed,
Starting point is 00:35:51 I assume we have looked up and felt a connection with the universe, even if we didn't have the insights of astrophysics and cosmology. Do our atoms know, all caps, that they came from up there? No. Next question. All caps. That they came from up there? No. Next question. Next question. So John Kennedy, I think President Kennedy, before he was president, as you may know, they have a home in Hyannis Port.
Starting point is 00:36:21 So the ocean coastline is not unfamiliar to him. They own boats, this sort of thing. He spoke often about the allure of the ocean, and wondered openly whether we are drawn to the ocean shore because our genetic profile may remember that, in fact, our vertebrate history is owed to the fishes in the sea, and that we're somehow pulled back to it. So I can poetically agree with that, but there is no way we could have known that we are stardust without modern astrophysics telling us this.
Starting point is 00:36:59 I think we will look up and wonder, but I don't think it's because there's a genetic connection. I think it's because we just want to know if someone up there that looks dangerous is going to eat us. We're looking up at the universe the way you look in the brush: is there something there that's going to harm me? If it's not, then it's otherwise a beautiful thing to look at. Yeah. Interesting. Okay. All right. Next. Uh, Alejandra Hernandez once from Twitter says this: with some AI nearly capable of passing the Turing test, do you believe the technological singularity will occur in the near future? And if so, how do you think humanity will fare?
Starting point is 00:37:32 Now, we touched upon that in the beginning. So let me sharpen that question. Here it is. How soon is this going to happen? There you go. Oh, man, I don't know. How close? How soon will AI be our overlords?
Starting point is 00:37:42 The thing that I find upsetting and scary is that it could happen. Says the man who has an end of the world podcast. He says, what I find scary, I'm afraid now. It could happen at any time conceivably. It could happen at any time. From what I understand, we have all the components out there and it could just kind of happen. They could fall into place. I don't know.
Starting point is 00:38:03 It's impossible to predict when it will happen. And you can't say with absolute certainty that it will happen. It's just really possible, and the fact that it is possible means that it could conceivably happen at any time. And is the self... I'm sorry, is the singularity... this is my question, so we're still in our lightning round... is the singularity actual consciousness, or is it self-aware?
Starting point is 00:38:38 So the singularity is this point where machines become self-aware and super intelligent. Or, if you're a transhumanist, that's the point where we merge with our machines. A transhumanist? Yeah, what is that? So that's a big, big umbrella term, and it encompasses a lot of different thoughts and philosophies, but the main thing that threads it all together is this idea that we can and will and should merge with our machines, merge with our technology, which sounds far out until you realize, like...
Starting point is 00:38:57 We're already doing it. Yeah, we wear like glasses and contacts and clothes and stuff like that. And I carry the world internet in my pocket. Yes, exactly. I don't have to graft it into my cerebellum. Okay, but wouldn't it be easier and more convenient if you did just kind of get information that rapidly, that easily, and could expand?
Starting point is 00:39:14 Open skull surgery or pull this out of my pocket. Is that my choice? Basically. I don't need to see the latest cat video that badly. I can wait until I can dial it up on my phone. But what about an infinite loop of cat videos? Let me sweeten the pot a little bit. Next question.
Starting point is 00:39:31 All right. Okay, here we go. Rex Young, you almost touched on this, but from Twitter says this. Rex wants to know, any general advice on how to foster peace in the world, locally, online, or in the world at large? I'm glad that this person is right. So that would then preclude, if you succeed at that, that means total worldwide warfare is off the table as an existential risk. So that's an important question. Right.
Starting point is 00:40:01 Right. So, yeah, I think that that seems to be found in the organizations and the institutions that we build. From what I understand, the moral progress of humanity has been kind of tied to the global community that we've been developing. And as we spread out and understand and meet more and more people and connect with more and more people, that seems to be in lockstep with this movement toward peace on a global scale. He's so hopeful. I really am hopeful. I'm deeply hopeful for the future of humanity.
Starting point is 00:40:35 I'm also worried, but I am hopeful for sure. Wow. That's really cool. That's so beautiful. It is. Thank you. I'm a comedian. I wish I was that.
Starting point is 00:40:44 I'm not that hopeful. If I were Thank you. I'm a comedian. I wish I was that. I'm not that hopeful. If I were that hopeful, I'd be unemployed. I mean, I'd still give us a very low chance of making it to technological maturity and safety. But I am deeply hopeful that we will and that if we do reach that, that there will be a much more peaceful species that we are. All right, cool. I don't know how much time we have. Keep going. Just do it.
Starting point is 00:41:04 Do it. This is Fyodor Popov. Fyodor? What? Fyodor. Fyodor. Fyodor. But not Theodore. Fyodor.
Starting point is 00:41:26 Fyodor. That's the original version. Here we go. What do you think are the best ways to keep abreast of current developments in the study of existential risk? There are great websites out there like those of the Future of Humanity Institute
Starting point is 00:41:26 and Future of Life Institute. Neither is very active on social media. Have you ever specifically researched the various topics you've explored since you finished the series? Yeah, actually, great question. So a couple of things. I'm planning on doing a follow-up
Starting point is 00:41:42 to the End of the World podcast, these first 10 episodes. That's a lot more podcasty, a weekly kind of thing to keep abreast of all this stuff. So listen out for that. But also the Future of Life Institute actually is pretty visible on social media. They have a great podcast as well. But that's a really important point. Right now, as far as existential risks are concerned, there's a lot of academics writing really smart papers, and you have to go grind those up to understand what's going on. So that's one of the reasons why. I'm an academic. We read the papers.
Starting point is 00:42:15 Right. Don't grind them up. I'm a non-academic. If you're non-academic, you grind. And use them as. No, no. In all fairness, academic research papers are very dry and very jargon filled.
Starting point is 00:42:28 They're really hard to get. You have to teach yourself how to read an academic paper. It is a grind for people like us. Yes, it is. That's why I made this podcast and that's why I plan to continue to make the podcast because I will grind this stuff up and then try to explain it so that it's not just
Starting point is 00:42:44 academic papers that are out there. You'll be our conduit to our extinction. That would be great. If we're going to go extinct either way, I might. You don't want to be the guy. Right. All right. I get bored, so you don't have to. There you go.
Starting point is 00:42:55 All right. Here we go. We've got another one, I think. Yeah, I've got another one. Here's this one. This is Mario Gurt. Mario Gurt on Instagram says, is it possible that our universe is someone else's large hadron collider?
Starting point is 00:43:12 Oh, that's a good one. What an awesome little question. I mean, are we the galaxy on the belt of Orion? Right. There you go. Did you get that reference? No. Men in Black. Men in Black.
Starting point is 00:43:24 The first one? Yeah. I haven't seen it in a while. You have incomplete geek street cred. Oh, yeah. No, I know. There's a gap. There's things that I need to learn. There's some gaps. For sure. That's okay. We still love you. So what was I talking about? So you said, are we the galaxy on the belt of Orion? Are we the universe inside somebody else's thumb? Large Hadron Collider.
Starting point is 00:43:57 Right. So let me answer that in a slightly different way. Okay. When we first probed the atom and we found, oh, wait a minute, the atom has a nucleus and it's got electrons that orbit the nucleus. That's just like the solar system. The solar system is just like the galaxy, where the stars are orbiting the center of the galaxy, we have planets orbiting a star, and we have electrons orbiting. So maybe it's that all the way down. Right. Maybe that's the theme and maybe that's how all this works. And when you start probing the atom on that scale, the laws of physics manifest in completely different ways. Right.
Starting point is 00:44:31 So it's not just a scaling phenomenon. Right. So for us to have these laws of physics manifest the way that we do and claim that it's the microscopic physics in someone else's collider, it's just not a realistic extension of how things work. Although it was deeply attractive because it was philosophically pleasing to imagine that you just had nested... Of course, because it's very linear. It's just nesting.
Starting point is 00:45:00 You just keep going. Right. So because things manifest differently on these scales, you can't just get, for example, okay, there's something called a water strider, which is an insect that can just stride on the water. It uses surface tension of the water. If that were any bigger, it would just fall through. You can't scale things because the forces operating have different manifestations on different scales. That's why. And so that's why, what's the movie Them? Do you remember the movie Them? I don't know the movie Them. With the ants? Ants! Oh, he's got one! I have seen that one.
Starting point is 00:45:39 He's got one! Giant ants. I might have seen some nuclear thing and the ants got big. Nice. And the ants are coming. Okay, ants are creepy anyway. And now they're bigger than you. Yeah. You freak out. I love ants. That can never happen.
Starting point is 00:45:50 Do you love ants? I love them so much. Because ants have these tiny, spindly little legs. Right. And if you scale up the size of the ant, its weight
Starting point is 00:45:58 outstrips the ability of these spindly legs to hold them up. Have you done this on Twitter? Have you done a Twitter rant about that? I could. I could totally rant on this. So the point is, as you get bigger, I can say this mathematically, as you get bigger,
Starting point is 00:46:13 the strength of your legs, your limbs, only goes up as the cross-sectional area. But your weight goes up as the cube of your dimensions. Ooh. So what happens is, because as you get bigger, you grow in all dimensions, but your legs, if your leg gets wider, the strength is only the cross-section of your leg. Right. So eventually you just crush yourself. You crush.
Starting point is 00:46:35 That's why hippopotami don't have skinny legs. Right. And they're short, fat, stumpy legs. Stumpy, big, big, stumpy legs. Elephants have stumpy legs. Right. Okay? A giraffe has long, slender legs,umpy legs. Elephants have stumpy legs. Right. Okay? A giraffe has long, slender legs, but a giraffe don't weigh all that much.
Starting point is 00:46:49 Right. It's slender. And the distribution isn't any way different. The distribution is completely different. Like, yeah. So, it's a fascinating cottage industry studying the relationship between size and life. And how things scale. And how things scale.
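For anyone who wants the scaling argument from a moment ago in symbols: leg strength tracks cross-sectional area while weight tracks volume. Here L is just a stand-in for an animal's linear size, and the proportionalities are the rough scaling Neil describes, not exact anatomy.

\[
F_{\text{legs}} \propto L^{2}, \qquad W \propto L^{3}
\quad\Longrightarrow\quad
\frac{W}{F_{\text{legs}}} \propto L .
\]

Scale an ant up a hundredfold and each unit of leg cross-section carries about a hundred times more weight, which is why the giant ants of Them! could never actually stand.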
Starting point is 00:47:03 Wow. Yeah. I know. And that wasn't a lightning round, but who cares? That was really cool, man. It's why if you take a bucket of water and empty it on your car,
Starting point is 00:47:11 it doesn't stay as a big ball of water. But if you make the water smaller and smaller and smaller... It just becomes droplets. Then it's a drop, and the drop will stay on the car, because surface tension holds it. Surface tension's not strong enough to hold big things. It'll hold a little thing.
Starting point is 00:47:23 Right. The world of insects is completely surface driven. Their physics courses in Insects 101 is all about surface tension. Huh. Right.
Starting point is 00:47:33 Yeah. Because you can get trapped inside of a little bubble. How do I get out with surface tension? That's why everyone needs to know physics. Everybody needs,
Starting point is 00:47:40 even insects. Insects, humans, everybody. Oh, wow, that was cool. We gotta wrap this up. That was cool, that was cool. We gotta wrap this. Oh, we're done. Yeah, yeah, sorry. Oh, man, that was cool. We got to wrap this up. That was cool, that was cool. We got to wrap this. Oh, we're done? Yeah, yeah, sorry. I was trying to go back to another one. We did get a bunch in there, though. Yeah, we did. Listen, that was
Starting point is 00:47:51 like the longest lightning round we've ever had. Yeah, no, it was good, good, good. So, Josh, thanks for coming on. Thank you so much for having me. We got to do this again. I will do it anytime you want. Anytime. Josh, before we sign off, tell us exactly where to find your work. Oh, you can find The End of the World with Josh Clark anywhere you find podcasts,
Starting point is 00:48:10 including the iHeartRadio app and Apple Podcasts and all that jam. And then you can find me on social media at Josh Um Clark, because I don't know if you noticed or not, but I say um quite a bit. And I started a hashtag to keep a conversation about existential risk going. It's hashtag EOTWJoshClark. So people can find me those ways. Awesome. Alright, if you're looking for the
Starting point is 00:48:31 end of the world, this is your man. Alright, thanks Josh. Chuck, always good to have you. Oh, are you kidding me? It's my pleasure. Alright, you've been listening to, possibly even watching, StarTalk End of the World As You Know It Edition, Cosmic Queries. Josh Clark, thanks for being here.
Starting point is 00:48:47 As always, I bid you to keep looking up.
