Stuff You Should Know - What will happen when we reach the Singularity?

Episode Date: November 8, 2012

Futurists have unnervingly predicted an impending moment in human history: the Singularity, when a superhuman artificial intelligence is created. What will become of humans? Enslavement? Extermination? Utopia? Find out with Josh and Chuck. Learn more about your ad choices at https://www.iheartpodcastnetwork.com. See omnystudio.com/listener for privacy information.

Transcript
Starting point is 00:00:00 Flooring contractors agree. When looking for the best to care for hardwood floors, use Bona Hardwood Floor Cleaner. The residue-free, fast drying solution is specially designed for hardwood floors, delivering the safe and effective clean you trust. Bona Hardwood Floor Cleaner is available at most retailers where floor cleaning products are sold and on Amazon. Also available for your other hard surface floors like Stone, Tile, Laminate, Vinyl, and LVT. For cleaning tips and exclusive offers, visit Bona.com slash Bona Clean. The War on Drugs is the excuse our government uses to get away with absolutely insane stuff. Stuff that'll piss you off. The cops, are they just like looting? Are they just like pillaging? They just have way better names for what they call,
Starting point is 00:00:45 like what we would call a jack move or being robbed. They call it civil asset forfeiture. Be sure to listen to The War on Drugs on the iHeart radio app, Apple Podcast, or wherever you get your podcasts. Welcome to Stuff You Should Know from HowStuffWorks.com. Hey, and welcome to the podcast. I'm Josh Clark. With me as always is Charles W. Chuck Bryant, and that makes this Stuff You Should Know, the podcast. Scottish? No, okay. There was nothing. It's just weirdness. Josh-ish. Chuck, I love November. You mean,
Starting point is 00:01:34 Movember. You know, I do. All right, Josh, as you know, because of my semi-virginal fresh face here, I have decided to get on the Movember train for people that don't know. That is for men, and I guess women, if you can grow a mustache, more power to you, to raise money and awareness for prostate cancer. Yeah, so I've been asked to do this a bunch and I've never done it. Well, I'm glad you're finally doing it. Tell us all about it, man. Well, you know, I signed up. I've got my little Movember page, and then you go to that little page and you can donate money for my team of one. And unless you count the mustache as a thing, and then that's two of us. So hopefully soon that'll be happening. And it would be cool,
Starting point is 00:02:18 you know? I'm going to grow it back in anyway, so you might as well raise a little money along the way. Or you shouldn't tell people you're going to grow it anyway, or they'll contribute more money. No, no, no. I'm growing the goatee, but I will only grow the mustache for November. So how do people contribute to this effort? Josh, go to mobro.co.com slash Charles Bryant, and that is my page. Or just go to the Movember website. They got a little handy search bar there. Type in Charles Bryant. There's one other Charles Bryant, but he is not the one with a picture of me. Oh, that's good. So when it lists the two dudes, and one of them clearly has a photo of my freshly shaven face, not super freshly shaven like that
Starting point is 00:03:00 morning. Right, via webcam. You look like a hostage of some sort. I do. So go to mobro.co.com slash Charles Bryant, donate, help support prostate cancer research, and I'll be updating with photos. And if you guys want to chime in on what kind of stash I should grow, I'll try my best. Okay. I'm kind of limited to like standard crumb catcher and pencil pin. What about walrus? I can't, it just doesn't get that big. I can't do the Raleigh fingers, you know. So I have my limits. Have you tried wax, mustache wax? Maybe I will. Okay. It's long enough. So go to mobro.co slash Charles Bryant. That's right. And you can donate to this. Yeah, much appreciated. Movember. On with the show. You got a good setup for today? I do. I'd love to hear it. Let's get to it.
Starting point is 00:03:50 Have you ever, sorry, Chuck? Yes. Have you ever heard of a Luddite? I've been called a Luddite. Okay. Somebody who's not afraid of technology. Yeah, I'm not afraid. You're very technologically savvy. You know stuff. You're not afraid of it. Nope. But whoever's calling you that is actually, they're kind of incorrect. That's a misconception. Luddites were not ever afraid of technology. I wish I would have known that at the time. Yeah, because you could have been like, you're just wrong and stupid in every single way. Actually, it was our buddy, Scott Ippolito. So I'll just throw it back in his face. I'll tell him too. I'll just, I'll stand next to him and be like, yeah. Yeah. Yeah. No, Luddite is originally, they were a group of protesters, labor protesters from
Starting point is 00:04:33 that protested between 1811 and 1816. And they wanted fair wages. They wanted better treatment in their workplaces. And no iPhones. And they were known to break machines, like manufacturing machines on purpose. Yeah. Oh, yeah. They had sledgehammers that were ironically made by in one case, I think in Manchester, they were made by the same blacksmith who'd made these knitting machines that they use the sledgehammers to break. His name was Enoch. And they say that Enoch would make these things and Enoch would break these things. But anyway, they were known for smashing machines, which at the time was like high technology. Yeah. 1811, like a knitting machine, like that's mind bogglingly technological. Sure. And so they got this reputation for
Starting point is 00:05:23 being afraid of this technology. They were afraid it was going to take their jobs. That's not true. I mean, they were to an extent. Yeah. But what they were directing their anger in their ire when they were smashing these machines was not the machine or the technology or the people who invented them or what the machines represented. But these mill owners who are misusing these machines, who are using these machines to force people out of jobs, who are using unskilled people who had no idea what they were doing and getting hurt and killed using these machines. So what the Luddites really wanted was fair labor practices. And they wanted to control these machines. Yes. That's the key to Ludditism is machines are great
Starting point is 00:06:04 as long as we're in control of them and we're smart about what we're doing. Yeah. And they don't come to replace us or run our lives. So today, a Luddite would probably be they would probably react fairly close to the modern conception of the term Luddite because it's gotten so far out of hand, right? That we're actually now talking today about something called the singularity, which is the point where the machines really do take over, not in the very ubiquitous way that they already have today, like they're everywhere. Not that you didn't know that already, but I mean, they control things that we don't fully understand like the cyber war. We were talking about how like the infrastructures run on windows and like valves and pipes and water treatment
Starting point is 00:06:52 systems and everything is operated by computer, right? So what happens if the computer suddenly becomes aware and it's in control of these things and decides that it doesn't really like the humans? It sounds extremely science fiction. Yeah, there was no way to carry out this podcast without that sentence being spoken. Sure. But it's the people who are talking about this who are predicting this are very smart, credible people. Oh, yeah, true. And what we're talking about then is the singularity. That's right. The point where the technological solution. Yeah. Yes, specifically. Yeah. What other singularities are there? Well, I think, you know, we mentioned that was a singularity, which is something entirely different. And I think it's probably just to
Starting point is 00:07:37 distinguish stuff like that. Okay. I don't know if there are other types of singularities. So so it's a singularity versus the singularity. Yes. So maybe the singularity is the point of no return. I guess so. Okay. So what's your question? What did you ask me? Is this bad or good? No. I do have a question for you. Do you think it'll happen? No, I don't I don't think and this might be my narrow field of view at this point in my life. But I think that mankind will make sure that doesn't happen. Oh, man, I've got a counter argument for you from Werner Wynch himself. Oh, no, I've seen the counter arguments. Okay. But that still doesn't change my mind. So you don't think that in the quest to be the top dog, to consolidate power, to consolidate
Starting point is 00:08:26 world domination, some government out there will be like, well, yes, we agree with you at the UN that yes, we have to prevent this from happening. But our scientists back at home are actually working on this one thing that's probably going to make it happen and we're going to be in charge. Yeah, I think that they would create fail safes. And I think even if they didn't, it wouldn't be so widespread that it would take over humanity counter argument to that. Okay. If we create fail safes, yes, using our brains. Yes. And the singularity is by definition, the basically the birth, the emergence of an artificial intelligence. Yes. That's smarter than us. A superhuman artificial intelligence. That's basically what the singularity represents
Starting point is 00:09:09 the creation of. Wouldn't that intelligence be able to be like, Oh, that's very funny that you came up with these fail safes are so tough for me to get around. I think what my problem is with stuff like this is the assumption that if computers were made smarter than people that they would try and destroy us all and reign supreme. That's my problem with this all is it's a very large leap to go from this computer can fix itself and maybe learn to okay, now it decides it hates us all and wants to kill us all. Okay, so I had an idea about this. I watched the video you see the Ray Kurzweil video that future is Ray Kurzweil. He's talking about they kept the interviewer kept asking him like, what scares you about the singularity? What's the downside of the singularity?
Starting point is 00:09:56 He wouldn't fall for it. He's like, I'm an optimist, but you know, I understand that there are going to be downsides or whatever. But if you look at the 20th century, yeah, our our advances in technology had it was a double-edged sword like we use that technology to kill millions and millions of people in the 20th century wars. But we also use that technology to advance the lifespan by like twice. Yeah, as it twice as long as it was before. So it's a double-edged sword. And I think that's kind of a glib argument because I feel like he's leaving out a really important fact. And that is that in the 20th century, all of that technology, every single iota of it, good and bad, was deployed by humans. After the singularity happens, we have another non-human actor. Yes, with motivations
Starting point is 00:10:46 that we can't even conceive of at this point, right? Deploying technology, program motivation. See, that's the argument. But no, that's that's the thing right now. Our stuff is constrained by its programs. After it hits AI, true AI, I think AI plus plus is what it's called. Yeah, it's no longer constrained by its programming. It's out of our control, literally. And that's the point that I don't think we will reach. Okay, well then, yeah, I agree with you. But if we do reach that point, then I do fear that we have computers that are thinking the same way that eugenicists think, except they don't have that empathy or compassion thing that stays the eugenicists hand. Or they do. They're trying to build empathy. So I don't know. Okay, we totally jumped to the end of this thing.
Starting point is 00:11:30 We're like, what are we even talking about? So you believe that they're going to destroy humanity at one point? I believe we need Ned Ludd, the fictitious leader of the Luddites, more than ever right now. Because I think that there's a lot of very smart people moving in a very fast pace in a direction that I don't think everybody is aware we're going. And there hasn't been a general discussion on whether that's the best thing to do or how to do it. What are the fail-says? Is anyone even talking about that? Like, what are they? How do we get them in place? Because I think there should be an impedance to creating unfettered artificial intelligence. Yeah. Well, here we go then. Okay. Boy, that was a rant. You like started yelling at me.
Starting point is 00:12:14 Oh, no, I'm not upset with you at all. I hope it didn't come off like that. That's right. I like you, Chuckie. The war on drugs impacts everyone, whether or not you take drugs. America's public enemy number one is drug abuse. This podcast is going to show you the truth behind the war on drugs. They told me that I would be charged for conspiracy to distribute 2,200 pounds of marijuana. Yeah, and they can do that without any drugs on the table. Without any drugs. Of course, yes, they can do that. And I'm the prime example of that. The war on drugs is the excuse our government uses to get away with absolutely insane stuff. Stuff that will piss you off. The property is guilty. Exactly. And it starts as guilty. It starts as guilty. The cops, are they just like
Starting point is 00:12:53 looting? Are they just like pillaging? They just have way better names for what they call like what we would call a jack move or being robbed. They call civil acid. Be sure to listen to the war on drugs on the iHeart radio app, Apple podcasts, or wherever you get your podcasts. Not too long ago, in the heart of the Amazon rainforest, this explorer stumbled upon something that would change his life. I saw it and I saw, oh, wow, this is a very unusual situation. It was cacao, the tree that gives us chocolate. But this cacao was unlike anything experts had seen, or tasted. I've never wanted us to have a gun fight. I mean, you saw the stacks of cash in our
Starting point is 00:13:39 office. Chocolate sort of forms this vortex. It sucks you in. Like I can be the queen of wild chocolate. We were all lost. It was madness. It was a game changer. People quit their jobs. They left their lives behind so they could search for more of this stuff. I wanted to tell their stories. So I followed them deep into the jungle and it wasn't always pretty. Basically, this like disgruntled guy and his family surrounded the building armed with machetes. And we've heard all sorts of things that, you know, somebody got shot over this. Sometimes I think, oh, all this for a damn bar of chocolate. Listen to obsessions, wild chocolate on the iHeart Radio App, Apple Podcast, or wherever you get your podcasts. So Werner Venge is one of the guys that thinks it
Starting point is 00:14:27 is going to happen. He's a professor of math at the San Diego State University. Yeah. Go Aztecs. And he thinks he wrote an essay called the coming technological singularity, how to survive in the post-human era. And he thinks there are four ways which this could happen. He also points out that he thinks it will happen before 2030, which I don't think that will happen. And that's coming up. Yeah. It's like right around the corner. I think Kurzweil says the same thing. He said 2029 is the one he's been citing. Oh, yeah. Yeah. Well, we'll see. Number one, scientists could develop advancements in AI. It's pretty easy to understand. Right. Number two, computer networks might become self aware somehow. Like how that's pretty vague. Well,
Starting point is 00:15:18 he was saying in the paper that's Strickland's interpretation. He's saying in his paper like it'll probably be a total surprise to the people who are working on this algorithm to make a a search engine better or something. And they just tweak it just slightly in such a way that all of a sudden the computer system wakes up. Right. And you've just created sentience accidentally in a computer network. And now it's self aware. Right. And he's saying that's how that could happen accidentally, basically. Okay. So number three is transhumanism, basically. Computer human interface becomes so advanced that there's sort of it blurs the line between humans and robots. Right. Which is probably the best case scenario for us if the technological
Starting point is 00:16:07 singularity is going to happen because we'll be on board. Yeah. Well, unless the brain part is in the robot. You know what I'm saying? Yeah. And they're just operating the body of the human form. But if we're indistinguishable from a robot and a human, like if we merge so, so much, then like what benefits one. Yeah. But what is it? Centennial man by Centennial Man. I think or Pistorius. Remember when we did our DGA speech a couple of years ago, he was big news. Oh, yeah. And then the Olympics. He was big news because did you see him run? Yeah. Man, I had not seen him run before and it is something to see. It's really cool. It's pretty awesome. Yeah. I love the people that were like, you know, it gives him an advantage because blah, blah, blah.
Starting point is 00:16:48 And the South Africa came in dead last in the world. Well, no, I mean, I don't think anyone expected him to win, but I just love the snarky counter argument was then cut off your legs below the knees. If it's such an advantage. Yeah. You want to win? Go cut off your legs. Yeah. I forgot we had mentioned him in the transhumanism thing. Yeah. And that's before he was like really big news. Yeah. As far as the Olympics goes. And then number four, biological science advancements allow us to engineer human intelligence, to physically engineer it. Right. And the first three involve computers, like we have to this, the singularity would be reached by basically advancements in computing. The last one is strictly like coming up with
Starting point is 00:17:28 this super vitamin that just makes our intelligence superhuman. That would be awesome. The point is, is that at through one of these four proposed ways. Yeah. At some point, Werner Vince is 2030, Ray Kurzweil says 2029. Hans Morovic says 2050 maybe. Well, he says that computers will be capable of processing power equal to the human brain. Okay. But not necessarily AI, which is an essential part of this. Like we have to understand how to create the human brain under certain circumstances. Yeah. For this to reach. But at some point, all of these things are saying, we're going to have on this planet something that doesn't exist right now. Yeah. And that is a superhuman intelligence. Whether it's an artificial intelligence, as in the first three, right,
Starting point is 00:18:19 or superhuman human intelligence, that never needs to be seen. But the point is, is once that happens, all of a sudden, there's basically what amounts to a new species that just popped up on the map. And it's going to take off like a rocket. Robo humans. Yeah. And it takes off like a rocket because it's got a rocket built into it. All of this is based sort of on Moore's law, which is, I guess we can go ahead and talk about more. Gordon Moore. Great name. Gordon Moore. It's a great like electronics engineer name. Yeah, I guess you're right. In the mid 60s, he was a semiconductor engineer. And he proposed what we call Moore's law now. And that's basically what he was noticing at the time was, or I guess we should just say Moore's law is that the idea that technology doubles it
Starting point is 00:19:10 every 18 months. Yeah. That's what they settled on. It basically ending 12 to 24 months. But I think he originally said like 18 months. So yeah, they split the difference and said 18 months. Yeah. I think Moore back has said it like it was 24 and then 18 and he feels like it's more like 12 now. Right. But it's progressing like exponentially, I guess is the point. Yeah. So anyway, back in the 60s, he noticed that he was building semiconductors and he said, you know, at the components and the prices are falling. But then he noticed instead of just selling stuff for half the price, why don't we just roll that back into making smaller transistors and selling them the same high price. Yeah. But just getting more bang for your buck. Yeah. Can
Starting point is 00:19:52 you imagine if that had never happened? Right. Like, what if that what if the cycle became same? No, let's just, you know, sell it cheaper. I don't know. I mean, like, what kind of differences would that have? We'd have super cheap, slow technology. Maybe everybody just kind of be on the pot or something like that, you know, real laid back like, but more like, I think part of being a computer scientist, someone else would have come along and like, guys, why don't we try and advance? You're doing this wrong. Yeah. Strickland points out too that Moore's law is a self fulfilling prophecy because of that. Because that mentality that you just you just mentioned was present, like, rather than just sell it at half price, let's put twice as much into it.
Starting point is 00:20:37 Yeah. Right. And so since that's the drive of the the transistor, is it the transistor industry that he was in? Yeah. Or the microprocessor industry that it's a self fulfilling prophecy. It's a self fulfilling law because that drive is there to basically meet that deadline. Yeah. They keep trying to pack more and more in so that they can satisfy Moore's law. True in it. Depending on who you ask, like this article is already out of date. In February of this year of 2012, is that where we are? A team of Austrian physicists created a functioning single atom transistor. Really? Single atom, fully controllable. That's 0.1 nanometers. Right. And a human here is 180,000 nanometers. Yeah. And in this article even wide,
Starting point is 00:21:29 I think Strickland was talking about Intel has transistors 45 nanometers wide. Yeah. Like they're trying to get better. This one is one atom wide. And it's not like on the market or anything close like that. But it is fully functioning and fully controllable. And that is faster than Moore's law. That was supposed to hit us in 2020. And you can't get any smaller, like that's as small as it gets. And we've already reached it. Right. And the problem is, is like they're what they're running up against is the things like quantum tunneling on the quantum world. Yeah. When you have an electron and you're using like very thin material to direct it. Right. And a transistor. Yeah. Or a capacitor. It does a little magic act. That's what's
Starting point is 00:22:14 important. The transistor. Well, yeah, it just suddenly is on one side of this wall. Yeah. That you're using to guide it. And then it's just on the other and basically makes it outside of your transistor. You're like, wait, come back. Yeah. But it didn't like bore a hole through it. No, it just went through it. Like it wasn't there. Look at me now. Exactly. And that's called quantum tunneling. Yeah. Which is kind of a problem when you get on this nano scale, because classical mechanics kind of goes out the window and you run into quantum mechanics that has weird stuff like that going on. Yeah. But ironically, that whole size problem that you're running into that runs into quantum problems, it may actually be saved by the quantum
Starting point is 00:22:54 world through quantum computing. Moore's law, I guess, that technological progress. Because we're running into that size problem. But with quantum computing, you basically it uses quantum states like how you can have superpositions, a bunch of different states at once. Yeah. To carry out to carry out parallel processes. So very traditional computers carrying out one process, a quantum computer could carry out a million processes, right, which makes that computer exponentially faster than anything available today, which could be what shoots us into this artificial intelligence. If quantum computers become viable, right, and widespread, well, that's where they're headed. The one atom transistor, part of the problem with that one is it's got to be, it's
Starting point is 00:23:44 only operable at negative 391 degrees, which is like liquid nitrogen cold. That's it. But they're working on it. Weird. That's where that quantum levitation comes from. It's like really, really cold. Oh, really? Yeah. That's the only time it works. But it works. Interesting. Yeah. Matt told me about that one. So, Josh, let's say you have your shooting for true AI. You built yourself a robot. Your robot's great. Cleans up, seems to solve problems. It's like Richie Rich's Butler. Might even be learning, who knows. And you want to test it out to see where you're at. I know what you're getting at. What would you do? I would give that thing a Turing test. A what? A Turing test. T-O-U-R-I-N-G? No, T-U-R-I-N-G, named after the
Starting point is 00:24:36 father of computing, the chemically castrated homosexual. Excuse me? Yes. Did you know this? Alan Turing is a British early proponent of robot science. Right. And he was a chemist? What? Chemically castrated for being a homosexual. Let's hear it. Okay, so during World War II, he was this ace code breaker for the British government. Yeah. And he actually cracked the Nazi code. And after the war, they were like, hey, thanks a lot for that old chat. Thanks for helping us win the war. By the way, as you know, homosexuality is outlawed here and will be until, oh, I don't know, 1950s. And so we're going to convict you of homosexual acts and chemically castrate you. It's thanks. Wow. Yeah. That all happened? Yes. But okay. So despite this,
Starting point is 00:25:28 he still comes up with this thing called a Turing test named after him. And it involves a blind judge, not an actually blind judge, but like a judge who doesn't know who they're talking to. And the judge is asking the same questions of a person and a computer. It's like Blade Runner. I guess remember at the beginning of Blade Runner, he's asking the questions to Leon. And it's not quite the Turing test because he can see Leon, but he's basically trying to suss out of Leon as a replicant. Right. So he's asking him like questions that sort of they all kind of touch on like empathy, it seems like, like you see a turtle in the road. Do you do it's on its back. Do you flip it back over or do you smash it or like, what do you do? What does Leon say? I don't remember. I
Starting point is 00:26:14 think he asked him about his apartment and he gets annoyed and he kills the guy. Now what happens, Matt? Then too long. That says. Yeah, I think Leon kills him. Anyway, the Turing test, if you can't tell the difference between the robot and the person, then the robot passes the test. And supposedly that's a touchstone of reaching true AI. Yeah, if you can fool a human. Yeah. As far as the singularity goes with AI, I guess that's AI. Then there's AI plus and then there's AI plus plus, which would just be like a super human intelligence, artificial intelligence. Yeah, that's capable of self aware. It's capable of using intuition, inferring things. Yeah, like, like Hans Morovic was pointing out, like a third generation robot could learn that if you knocked over that cup of
Starting point is 00:27:01 water, water will spill out and you have a mess and your owner gets mad and powers you down for half an hour. But it would learn that after spilling that water and maybe more than once. Yeah. This fourth generation robot or something with true artificial intelligence that could infer, could look at that cup, see that the top's open, realize that there's water inside, and without ever having to knock it down, could infer that if I spilled it, it would spill. Or if I knocked it over, it would spill the water out. Yeah. And that's Hans Morovic. And he also says you could potentially tie signals to that, like words like good and bad. Yeah. So, and this is all a program, you understand. Humans are programmed to do this. So this is technically all presingularity then.
Starting point is 00:27:47 Yeah, all this is presingularity. He's just Morovic is talking about the one through four generations of robots as he sees it. Yeah. But if you tie words like good and bad, the robot adapts and it's conditioning. It's like a rudimentary learning on the outside. It looks like if the owner says like, don't do that, that's bad. The robot understands what that means. But what it really knows is it reads body language and maybe human raises his voice. And that means anger. And like you said, anger means I get shut down or something. Right. And that's not what I want, because I want to destroy you eventually. Exactly. I will remember this. And since we're on Morovic, I guess we should talk about some of his other thoughts on robots. He thinks they're good.
Starting point is 00:28:34 He does think they're good. He thinks the second generation, first of all, he thinks right now that they are smarter than insects. Computers are. Is that right? He thinks soon enough, they will be as smart as like a lizard. And after that, they might be as smart as like a monkey. And then the fourth step would be humans. Smart as or smarter than smart, better than in some cases with certain applications. Better at math for sure. Well, they're already better at math. Oh my God. Calculators. Better at chess, deep blue. Oh God. You know, it's a stuff like that's happening on some levels. He thinks the third generation, I'm sorry, the second generation will be like the first, but
Starting point is 00:29:16 more reliable. So they work out the kinks. The third generation, he thinks is where it really takes a leap. And that's what you're talking about. Instead of making mistakes over and over to learn, it works out in its head and then performs the task. Yes. So that's inferring inferring. And that's fourth generation. That's third generation. Oh, is it? Yeah. Okay. Wow. We're further along than I thought. That's right. And he thinks also in the third generation that they could model the world like a world simulator. So essentially it looks around and is able to take in enough information to suss out a scenario. And if that sounds familiar, that's because that's what you do every day. Yeah, exactly. And he thinks the biggest two hurdles will be,
Starting point is 00:30:02 well, the third generation is also where you're going to get your psychological modeling. So trying to simulate empathy and things like that. Right. Interact with humans. And then the fourth one, he says, marries the third generation's ability to simulate the world with a reasoning program, like a really powerful reasoning program. Yeah. But he thinks the two biggest hurdles in the end, as far as becoming more than human or as good as human, are the things that we're best at, which is interacting with the physical world, like on a moment by moment basis, you have to be able to adapt like at a, you know, in a split second, humans can do that. Sure. We learned to over time. Right. So we didn't get, you know, took, didn't get eaten by
Starting point is 00:30:46 the dinosaur. Right. And the other one is social interaction or empathy. Are you a creationist now? Took, took in the dinosaur coexisted? Oh, did they not? No. Sure, they did in my world. And the second one is social interaction. So those are the two things that he says will be the most difficult to achieve. Yeah, I would imagine. And that's empathy. So say we have these things walking around, we have robots like that. And then they are all connected to a network, a wireless network. Yeah. And they're all running off the same like general programs. And somehow, one of them becomes self aware, wakes up as Werner Vinge puts it in his singularity article. Yeah. And that, that algorithm spreads throughout the network all of a sudden. So all
Starting point is 00:31:39 of a sudden, all of your robots are awake. Yeah. That's a pretty terrifying idea, because now all of a sudden, these robots that were under our control are now under their own control, they've broken loose with their programming. Yeah. That would be, again, I think a very scary scenario. But it's also possible that like this could happen pre robots, maybe we won't have robots by this time, and it will just be like networks, like a sentient network. That's scarier to me. How so? Well, because you can look at a robot and get scared of it and take a baseball bat to it. But a network is just feels like, yeah, in the ether, like you wouldn't know it's coming or something. Exactly. Yeah, it's embedded. Yeah. Especially with, you know, the cloud out there
Starting point is 00:32:24 now. Yeah. So say this kind of thing like scared you. What, what are some fail safes, like you said, or what are some obstacles that you could put up to prevent this from happening? Well, if you wanted to follow Isaac Asimov, you would build in the three laws of robotics. Yeah. I think we've gone over this before even, it feels like it. The three laws of robotics? In one of them, sure. Robot may not injure a human or through inaction, allow them to come to harm. That'd be a nice thing to build in there. Yeah. Robots must obey orders by humans, except where it contradicts number one. Right. That's a great fail safe. Yeah. Like, don't do anything unless I tell you to. But you still got to worry about the super villain,
Starting point is 00:33:07 of course. And then three robots kind of serious. Robots must protect its own existence, which sounds scary, but it cannot conflict with one or two. Right. I think we didn't, didn't we talk about that in our TV show? Isn't that come up? Yeah. Okay. Does it sound familiar? Yes. So I would build in those are three pretty good fail safes. If you follow Asimov's laws, then you probably wouldn't have a robot getting out of hand unless someone, like I said, like some bad person built one to intentionally get out of hand. But even in, I think Vinge makes pretty good point, even beyond like a bad person, like some, like a super villain, getting his hands on something and intentionally making a robot
Starting point is 00:33:53 bad, especially like a sentient robot. Yeah. We may reach this point through normal, everyday competition. That is true. Where like maybe countries all agree to not do this. Yeah. But there's one or two that are still working on it. Yeah. And they're not working toward the singularity, but they're working toward computing domination. Yeah. They want to have the best machines to carry out the process is the fastest and stay viable as like a world leader, that kind of thing. And then AI just kind of happens accidentally, like we said, maybe so, man, I could see something like that. And it also, I do, I will say this, that if it stuff like this happens, I think it will be an accident. And I think it will be
Starting point is 00:34:41 after years of selling us this stuff as a convenience. Oh, yeah. Yes. That's how they get you in there. They don't say, Hey, we're creating a robot that will maybe kill you. We say we're, we're in planning an RFID chip in your arm that makes it much easier for you to shop. Sure. Or we have figured out how to, what is it? Optogenetics? I think I can't remember what it's called where like you take like a jellyfish's light sensitive genes, splice them into another animal's genes so that the cells are light sensitive, photo sensitive. And then you can use little, basically little light generators directed at specific cells or neurons or whatever to get them to fire precisely to work precisely perfectly every time. So all
Starting point is 00:35:27 of a sudden you don't have Parkinson's anymore because all of your nerves are functioning 100%. And once we have that in there, who's controlling that? What network is that connected to? Because through that, through that step, we've become transhuman that human computer interfaces become a little more meshed. So, you know, living a long time is really great. And we've already expanded human life like by what? Double at least. Right. So why not do it again and again and again? Yeah. So you got to be like, let's say you got to be 20% non-human to get there. That's not too bad, right? Right. You get to be a thousand years old. But the point is, is like we're already on this path. Oh, yeah. Technology makes our lives that much easier. So we're on this path where
Starting point is 00:36:14 we're basically just messing around with computing to make it better, faster, more human-like. Right. And all we have to do is get to the point where a machine that is capable of reproducing itself becomes sentient and decides that it wants to reproduce itself. And then that machine creates a better machine and so on and so on and so on. Yeah. And when that happens, evolution will become technological. It will be replicated technologically. Yeah. And it will happen in this incredibly compressed time. Yeah. Possibly of hours or days. So it gets out of hand before we can do anything about it. It happens like that. Dan Cayman, remember that guy? Yes. He's got artificial limbs that attach to your neural wiring. Yeah. So you think pick up a cup with
Starting point is 00:37:04 hand and your mechanical hand does it. Right. That's pretty... Can you imagine that? Right. What's going on right now? It comes back to that Kurzweil argument. Like, yeah, technology's always double edge. Yeah. You know, like there's good and there's bad to it. And he may be absolutely right. But again, I feel like we are going in a direction that a lot of people don't realize we're going in and there hasn't been any discussion about it. I think there's discussion about it though. That's where I disagree. In the larger world? I bet you there are conferences and things like this that we don't know about. There are, but I wonder how many of them are... I mean, don't you think if you went to a singularity conference or an AI conference and said, well, hey, hey, hey, maybe we shouldn't
Starting point is 00:37:46 be, you know, exploring some of these roads? Like, will you be... You'd lose your funding, I would imagine. Yeah, I don't... I'd be ostracized. I don't necessarily think they're going to the, like, the conferences where they love this stuff. Right. But I think there are people out there talking about it just like they talk about maybe we shouldn't mess with stem cells so much. Sure. But they are not integrated with the people who are actually carrying out this work. It's not coming from within the community. And if it is, I... No, I don't know for sure. Okay. But I'm not reassured that it is happening. And that's where I think my fears are based. Gotcha. I'm not against technology. I think technology does improve our lives. But I also... I mean, there is such a thing as Pandora's
Starting point is 00:38:25 Box, even if it is metaphorical. Agreed. I think maybe we should close with Nico. Okay. Just two weeks ago, Nico the Robot was able to recognize itself in a mirror. And I want to say it was England. And that is a really big deal because that is a hallmark of animal intelligence. Self-awareness. Self-awareness, a dog walking by a mirror and looking at it and recognizing itself. Yeah. Nico apparently did that. That's pretty crazy. Yeah. Well, welcome to humanity, Nico. We will be licking your boots in no time. Your metallic, foul-tasting robotic boots. The war on drugs impacts everyone, whether or not you take drugs. America's public enemy number one is drug abuse. This podcast is going to show you the truth behind the war on drugs. They told me that I would be charged for conspiracy to distribute 2200 pounds of marijuana. Yeah, and they can do that without any drugs on the table. Without any drugs, of course, yes, they can do that. And I'm the prime example of that.
Starting point is 00:39:30 The war on drugs is the excuse our government uses to get away with absolutely insane stuff. Stuff that'll piss you off. The property is guilty. Exactly. And it starts as guilty. It starts as guilty. The cops, are they just like looting? Are they just like pillaging? They just have way better names for what they call like what we would call a jack move or being robbed. They call civil acid. Be sure to listen to the war on drugs on the iHeart Radio App, Apple Podcast, or wherever you get your podcasts. Not too long ago, in the heart of the Amazon rainforest, this explorer stumbled upon something that would change his life. I saw it and I saw, oh, wow, this is a very unusual situation. It was cacao, the tree that gives us chocolate. But this cacao was unlike anything experts had seen or tasted. I've never wanted us to have a gun fight. I mean, you saw the stacks of cash in our office. Chocolate sort of forms this vortex. It sucks you in. Like I can be the queen of wild chocolate. We were all lost. It was madness. It was a game changer. People quit their jobs. They left their lives behind so they could search for more of this stuff. I wanted to tell their stories, so I followed them deep into the jungle. And it wasn't always pretty. Basically, this disgruntled guy and his family surrounded the building armed with machetes.
Starting point is 00:40:55 And we've heard all sorts of things that, you know, somebody got shot over this. Sometimes, I think, oh, all this for a damn bar of chocolate. Listen to Obsessions, Wild Chocolate on the iHeart Radio App, Apple Podcast, or wherever you get your podcasts. If you want to learn more about Singularity, type in what is the technological Singularity in the search bar at HouseDivorce.com. It'll bring up a John Strickland article, John Strickland from TechStuff. That's right. And quite sure they've covered this several times. But we wanted to take our hand at it. So you can check that out, too. The TechStuff article or podcast. Agreed. Yeah, I'm all over the place. Let's see. I said TechStuff, which means it's time for Listener Mail. Actually, before we do this real quick, I want to point out we had, remember Jack Mead, we had an email about poor Jack Mead, has caught up with the podcast and feels like he's wandering and drifting the world.
Starting point is 00:41:53 Sure. We should plug the Stuff You Should Know Army. We often call all the fans of Stuff You Should Know Army, but there is a subgroup on Facebook that you can look up, S-Y-S-K Army. And they are the twisted Uber fans who like to discuss things about the show. That's crazy. It's a nice little community. And they're all great people and very supportive, like good folk. Yeah. So, Jack, go check them out. Take your smart. I'm going to call this a rebuke for the Star Wars podcast. Oh, man. Remember, we had someone from New Jersey write in and say nukes won't work in space because X, Y, and Z. This guy, I think says that it could happen. One of you asked a wonder what happened if a nuke went off in space. One nuke in space has potential to wipe out the entire coastal United States is what this guy says. There are a couple of sources I found on the Internet. I only knew about it because of a book series I read called The Great and Terrible series by Chris Stewart.
Starting point is 00:42:54 It is an apocalyptic book giving an idea of what the last days on Earth could be like. In one of the later books, America suffers from a catastrophic terrorist attack in which four nukes were detonated above the U.S. This caused all electronic equipment to fall to short out and become useless. Panic ensued, cars wouldn't work, cell phones became bricks, and the entire power grid was rendered useless. I remember reading the author's notes stating that there was a military report given to Congress about this kind of scenario, and I found something similar. He sent us the link. It wasn't like Nuke can rich really scared about this, like early in the primary campaign. I don't know. I think he was. Was he? Yeah. David. One interesting note in the report refers to how the discovery of the EMP blast that accompanies nukes led to the atmospheric test ban treaty. And that is Tyson Branghurst, Alaska. Tyson did some research. That's pretty cool. It sounds like an SYSK fan. Yeah. It wasn't just like, can you guys Google this for me? Yeah. Thank you, Tyson. Yeah, thanks, Tyson. If you want to show off your research and skills, you did some follow up on a question that you had or something we mentioned or whatever, we want to hear about it. We like that kind of stuff. It's pretty cool. You can show off your work in 140 characters or less on Twitter at SYSK podcast. You can join us on facebook.com slash stuff you should know.
Starting point is 00:44:22 Or you can send us very lengthy emails to stuffpodcastatdiscovery.com. For more on this and thousands of other topics, visit howstuffworks.com. The cops. Are they just like looting? Are they just like pillaging? They just have way better names for what they call like what we would call a jack move or being robbed. They call civil acid. Be sure to listen to the war on drugs on the iHeart radio app, Apple podcast, or wherever you get your podcast. We're going to use Hey Dude as our jumping off point, but we are going to unpack and dive back into the decade of the 90s. We lived it, and now we're calling on all of our friends to come back and relive it.
