The Joe Rogan Experience - #1188 - Lex Fridman

Episode Date: October 24, 2018

Lex Fridman is a research scientist at MIT, working on human-centered artificial intelligence. ...

Transcript
Starting point is 00:00:00 Four, three, two, one. Hello, Lex. Hey, we're here, man. What's going on? We're here. Thanks for doing this. You brought notes. You're seriously prepared. When you're jumping out of a plane, it's best to bring a parachute. This is my parachute. I understand. Yeah. How long have you been working in artificial intelligence? My whole life, I think. Really? So when I was a kid, I wanted to become a psychiatrist. I wanted to understand the human mind.
Starting point is 00:00:46 I think the human mind is the most beautiful mystery that our entire civilization has taken on exploring through science. I think you look up at the stars and you look at the universe out there. You had, you know, Neil deGrasse Tyson here. It's an amazing, beautiful scientific journey that we're taking on, exploring the stars. But the mind to me is a bigger mystery and more fascinating. And it's been the thing I've been fascinated by from the very beginning of my life, and I think all of human civilization has been wondering, you know, what is inside this thing? The hundred trillion connections that are just firing all the time, somehow making the magic happen to where you and I can look at each other,
Starting point is 00:01:24 make words, all the fear, love, life, death that happens is all because of this thing in here. And understanding why is fascinating. And what I early on understood is that one of the best ways, for me at least, to understand the human mind is to try to build it. And that's what artificial intelligence is. It's not enough from a psychology perspective to study. From a psychiatry perspective to investigate from the outside, the best way to understand is to do.
Starting point is 00:02:04 So you mean almost like reverse engineering a brain. Exactly, reverse engineering the brain. There's some stuff that you can't understand until you try to do it. You can hypothesize... I mean, we're both martial artists, from various directions. You can hypothesize about what is the best martial art. But until you get it in the ring, like what the UFC did, and test ideas, is when you first realize that the touch of death that I've seen some YouTube videos on,
Starting point is 00:02:41 that you perhaps cannot kill a person with a single touch or your mind or telepathy, that there are certain things that work. Wrestling works. Punching works. Okay, can we make it better? Can we create something like a touch of death? Can we figure out how to turn the hips, how to deliver a punch in the way that does do a significant amount of damage? And then you, at that moment when you start to try to do it and you face some of the people that are trying to do the same thing, that's the scientific process and you try, you actually begin to understand what is intelligence and you begin to also understand how little we understand
Starting point is 00:03:26 it's like richard feynman who i'm dressed after today are you he's a physicist i'm not sure if you're sure yeah yeah yeah he always used to wear this exact thing so i feel i feel pretty badass wearing it if you think you know astrophysics you don't know astrophysics that's right well he said it about quantum physics quantum physics that's right that's right uh so he was a quantum physicist and he kind of i remember hearing him talk about that understanding our the nature of the universe of reality could be like an onion we don't know but it could be like an onion to where you think you know you're studying a layer of an onion, and then you peel it away, and there's more.
Starting point is 00:04:08 And you keep doing it, and there's an infinite number of layers. With intelligence, there's the same kind of component, to where we think we know. We got it. We figured out how to beat the human world champion at chess. We solved intelligence. And then we tried the next thing. Wait a minute.
Starting point is 00:04:24 Go is really difficult to solve as a game. And then you say, okay... I came up when the game of Go was impossible for artificial intelligence systems to beat, and it has now recently been beaten. Within the last, like, five years, right? The last five years. There are a lot of fascinating technical reasons why that victory is interesting and important for artificial intelligence. It requires creativity, correct? It does not. No?
Starting point is 00:04:50 It just exhibits creativity. Oh. So, the technical aspects of why: AlphaGo, from Google DeepMind, which designed and built the system that was the victor, did a few very interesting technical things, where essentially you develop a neural network. This is a type of artificial intelligence system that looks at a board of Go, which has a lot of elements on it, has black and white pieces, and is able to tell you how good this situation is and how you can make it better. And that idea, so chess players can do this, I'm not actually that
Starting point is 00:05:33 familiar with the game of Go, so I can speak, I'm Russian, so chess to us is romanticized, it's a beautiful game. I think that there, you look at a board and all your previous experiences, all the things you've developed over tens of years of practice and thinking, you get this instinct of what is the right path to follow. And that's exactly what the Neural Network is doing. And some of the paths it has come up with are surprising to other world champions. So in that sense, it says, well, this thing's exhibiting creativity because it's coming up with solutions that are something that's outside the box, thinking from the perspective
Starting point is 00:06:15 of the human. What do you differentiate between requires creativity and exhibits creativity? I think, one, because we don't really understand what creativity is. So it's almost, it's on the level of concepts such as consciousness. For example, the question which there's a lot of thinking about whether creating something intelligent requires consciousness, requires for us to be actual living beings aware of our own existence. In the same way, does doing something like building an autonomous vehicle, that's the
Starting point is 00:06:56 area where I work in, does that require creativity? Does that even require something like consciousness and self-awareness? I mean, I'm sure in LA, there's some degree of creativity required to navigate traffic. And in that sense, you start to think, are there solutions that are outside of the box an AI system needs to create? Once you start to build it, you realize that to us humans, certain things appear creative, certain things don't. Certain things we take for granted. Certain things we find beautiful. And certain things we're like, yeah, yeah, that's boring.
Starting point is 00:07:30 Well, there's creativity on different levels, right? There's creativity like writing The Stand, a Stephen King novel. That requires creativity. There's something about his... he's creating these stories. He's giving voices to these characters. He's developing these scenarios and these dramatic sequences in the book that are going to get you really sucked in. That's almost undeniable creativity, right? Is it? So, yeah, he's imagining a world. What is it always set in? New Hampshire? Massachusetts? A lot of it's Maine. Maine, that's
Starting point is 00:08:03 right so he's imagining a world and imagining the emotion of different levels surrounding that world. Yeah, that's creative. Although there's a few really good books, including his own, that talks about writing. Yeah, he's got a great book on writing. It's actually called On Writing. On Writing. If there's anyone who can write a book on writing, it should be Stephen King. I think Stephen Pressfield. I hope I'm not saying the wrong thing. The War of Art. The writing. Yeah. If there's anyone who can write a book on writing, it should be Stephen King. I think Stephen Pressfield.
Starting point is 00:08:27 I hope I'm not saying the wrong thing. The War of Art. The War of Art. Beautiful book. And I would say, from my recollection, they don't necessarily talk about creativity very much. That it's really hard work of putting in the hours of every day of just grinding it out. Well, Pressfield talks about the muse. Pressfield speaks of it almost in like a strange, mystical sort of connection to the unknown.
Starting point is 00:08:53 Because he almost... I'm not even exactly sure if he believes in the muse, but I think, if I could put words in his mouth, I have met him, he's a great guy, he was on the podcast once, I think the way he treats it is that if you decide the muse is real, and you show up every day and you write as if the muse is real, you get the benefits of the muse being real. That's right. Whether or not there's actually a muse that's giving you these wonderful ideas.
Starting point is 00:09:24 And what is the muse? So I think of artificial intelligence the same way. There's a quote by Pamela McCorduck from her 1979 book that I really like. She talks about the history of artificial intelligence: AI began with an ancient wish to forge the gods. And to me, gods, broadly speaking, or religions, represent, it's kind of like the muse, it represents the limits of possibility,
Starting point is 00:09:52 the limits of our imagination. So it's this thing that we don't quite understand, that is the muse, that is God. Us chimps are very narrow in our ability to perceive and understand the world, and there's clearly a much bigger, beautiful, mysterious world out there, and God or the Muse represents that world. And for many people, I think throughout history, and especially in the past sort of hundred years, artificial intelligence has become to represent that a little bit. to the thing which
Starting point is 00:10:25 we don't understand and we crave we're both terrified and we crave in creating this thing that is greater that is able to understand the world better than us and that in that sense artificial intelligence is the desire to create the muse this, this imaginary thing. And I think one of the beautiful things, if you talk about everybody from Elon Musk to Sam Harris to all the people thinking about this, is that there is a mix of fear of that, of that unknown, of creating that unknown, and an excitement for it. Because there's something in human nature that desires creating that. Because like I said, creating is how you understand. Did you initially study biology?
Starting point is 00:11:14 Did you study the actual development of the mind or what is known about the evolution of the human mind? Of the human mind, yeah. So my path is different as it's the same for a lot of computer scientists and roboticists is we ignore biology, neuroscience, the physiology anatomy of our own bodies. And there's a lot of beliefs now that you should really study biology. You should study neuroscience. You should study your own brain, the actual chemistry, what's happening, what is actually, how are the neurons interconnected,
Starting point is 00:11:51 all the different kinds of systems in there. So that is a little bit of a blind spot, or it's a big blind spot. But the problem is, so I started with more philosophy almost. It's where, if you think, Harris has, in the last couple of years, has started kind of thinking about artificial intelligence. And he has a background in neuroscience, but he's also a philosopher. And I started there by reading Camus and Nietzsche or Dostoevsky, thinking what is intelligence? What is human morality, will. So all of these concepts give you the context for which you can then start studying these problems.
Starting point is 00:12:35 And then I said, there's a magic that happens when you build a robot that drives around. I mean, you're a father. I'd like to be, but I'm not yet. There's a creation aspect that's wonderful, that's incredible. For me, I don't have any children at the moment, but the act of creating a robot, where you programmed it and it moves around and it senses the world, is a magical moment. Did you see Alien: Covenant? It's a sci-fi movie. Yeah. No. Have you ever seen any of the Alien films? I, uh... so I grew up in the Soviet
Starting point is 00:13:14 union where we didn't watch too many movies so i need to catch up we should catch up on that one in particular because a lot of it has to do with artificial intelligence there's actually a battle between spoiler alert two different but identical um artificially intelligent synthetic beings that are there to aid the people on the ship one of them is very creative and one of them is not and the one that is not has to save them from the one that is spoiler alert I don't won't tell you who wins right there's there's a really fascinating scene at the very beginning of the movie where the creator of this artificially intelligent being is discussing its existence with the being itself.
Starting point is 00:14:07 And the being is trying to figure out who made him. And it's this really fascinating moment. And this being winds up being a bit of a problem, because it possesses creativity and it has the ability to think for itself. And they found it to be a problem, so they made a different version of it, which was not able to create, and the one that was not able to create was much more of a servant. And there's this battle between these two. I think you would find it quite fascinating. It's a really good movie. Yeah, the same kind of theme carries through Ex Machina and 2001: A Space Odyssey. You've seen Ex Machina?
Starting point is 00:14:51 Yeah, I've seen it. So because of your – I've listened to your podcast and because of it, I've watched it a second time. Because the first time I watched it, I had a Neil deGrasse Tyson moment where it was – you said there's cut the – Cut the shit. cut the shit cut the shit moments yes for me for me the it the movie opening is everything everything about it was i was rolling my eyes why were you rolling your eyes what was uh the cut the shit moment so that's a general bad tendency that i'd like to talk about amongst people who are scientists that are actually trying to do stuff. They're trying to build the thing.
Starting point is 00:15:31 It's very tempting to roll your eyes and tune out in a lot of aspects of artificial intelligence discussion and so on. For me, there are real reasons to roll your eyes, and there's just... well, let me just describe it. So this person in Ex Machina, no spoiler alerts, is in the middle of, like, a Jurassic Park type situation, where he's in the middle of a land that he owns. Yeah, we don't really know where it is. It's not established, but you have to fly over glaciers, and you get to this place, and there's rivers, and he has this fantastic compound. And inside this, he appears to be working alone. Right. And he's, like, doing curls, I think, like dumbbells, and drinking heavily.
Starting point is 00:16:15 So everything I know about science, everything I know about engineering, is it doesn't happen alone. So the situation of a compound without hundreds of engineers there working on this is not feasible. It's not feasible. It's not possible. And the other moments like that were the technical... the discussion about how it's technically done. They threw in a bit of jargon to spice stuff up that doesn't make any sense. Well, that's where I am blissfully ignorant. So I watch it and I go, this movie is awesome. And you're like, oh, I know too much.
Starting point is 00:16:59 Yeah, I know too much. But that's a stupid way to think for me. So once you suspend disbelief say okay well right those are those are not important details yeah but it is important i mean they could have gone to you or someone who really has knowledge in it and cleaned up those small aspects and still kept the the theme of the story that's right they could have but they would make a different movie so But slightly different. I don't know if it's possible to make.
Starting point is 00:17:29 So you look at 2001: A Space Odyssey. I don't know if you've seen that movie. That's the kind of movie you'll start making if you talk to scientists, because you can't actually use jargon that makes sense, because we don't know how to build a lot of these systems. So the way you need to film it and talk about it is with mystery. It's this Hitchcock type thing, right? You almost... you say very little. Yes. Leave it to your imagination to see what happens. Yeah. Here, everything was in the open. Right. Even in terms of the actual construction of the brain, when they had that foam-looking, whatever, gel brain. Right.
Starting point is 00:18:07 Yeah. So if they gave a little bit more subtle mystery, I think I would have enjoyed that movie a lot more. But the second time, really because of you, you said I think it's your favorite sci-fi movie. It's absolutely one of my favorite sci-fi movies. One of my favorite movies, period. I loved it.
Starting point is 00:18:23 Yeah. So I watched it again. And also Sam Harris said that he also hated the movie and then watched it again and liked it. So I gave it a chance. Why would you see a movie again after you hate it? Because maybe you're self-aware enough to think there's something unhealthy about the way I hated the movie. Like, you're, like, introspective enough to know. It's like, I have the same experience with Batman.
Starting point is 00:18:54 Okay? I watched, uh. Which one? Dark Knight, I think. Christian Bale? Christian Bale one. and bail one so to me the first time i watched that is a guy in a costume like speaking excessively with an excessively low voice i mean it's just something with like little bunny ears not bunny ears but like little ears it's so silly but then you go back and okay if we just accept that those
Starting point is 00:19:21 that's the reality of the world we live in what's the human nature aspects that are being explored here what is the the beautiful conflict between good and evil that's being explored here and what are the awesome uh graphics effects that are being on exhibit right so if you can just suspend that that's beautiful like The movie can become quite fun to watch. But still, to me, not to offend anybody, but superhero movies are still difficult for me to watch. Yeah, who was talking about that recently? Was it Kelly? Kelly Slater?
Starting point is 00:20:02 No. No, it was yesterday. Yesterday. It was Kyle. It was Kyle. Yeah, he doesn's like, G.R. Hall, superhero movies or something. Right.
Starting point is 00:20:08 He doesn't like superhero movies. We were talking about Batman, about Christian Bale's voice, and he's like, the most ridiculous thing was that he's actually Batman, not that his voice. That's true. Not that's true.
Starting point is 00:20:19 I'm Batman. That part of it is way less ridiculous than the fact that he's Batman. He's Batman. Because anybody can do that voice. Yeah. But I contradict.
Starting point is 00:20:29 I'm a hypocrite because Game of Thrones or Tolkien's Lord of the Rings, it's totally believable to me. Yeah, of course. Dragons. Well, that's a fantasy world, right? That's the problem with something like Batman or even Ex Machina is that it takes place in this world. It's too close. Whereas they're in Middle Earth. They're in a place that doesn't exist.
Starting point is 00:20:53 Right. It's like Avatar. If you make a movie about a place that does not exist, you can have all kinds of crazy shit in that movie because it's not real that's right yeah so but at the same time like star wars is harder for me and you're saying star wars is a little more real because it's it feels feasible like you could have spaceships flying around right what what's not feasible about star wars too oh i'm not i'll leave that one to neil degrasse he was getting angry about the robot that's circular that rolls around he's like it would just be slippery yeah like trying to roll around all over the sand it wouldn't work it would
Starting point is 00:21:35 get no traction i was like that's true because if you had like glass tires and you try to drive over sand it was smooth time you'd get nothing. He's actually the guy that made me realize, you know, the movie Ghost with Patrick Swayze? And it was at this podcast or somewhere he was talking about the fact that – so this guy can go through walls, right? Mm-hmm. It's a beautiful romantic movie that everybody should watch, right? But he doesn't seem to fall through chairs when he sits on them. Right? So he can walk through walls, but he can put his hand on the desk.
Starting point is 00:22:11 Yeah. And he can sit. Like his butt has a magical shield that is in this reality. This is a quantum shield that protects him from falling. Yeah. So that's, you know, those devices are necessary movies i get it in yeah but you got a good point he's got a good point too it's like there's cut the shit moments they don't have to be there you know you just have to work them out in advance but the problem is
Starting point is 00:22:37 a lot of movie producers think that they're smarter than people they just decide i just put it in there the average person is not going to care. I've had that conversation with movie producers about martial arts, and I was like, well, this is just nonsense. You can't do that. Like, because I was explaining martial arts to someone, and he was like, ah, the average person's not going to care. I'm like, oh, the average person. Okay, but you
Starting point is 00:22:58 brought me in as a martial arts expert to talk to you about your movie, and I'm telling you right now, this is horseshit. Yeah, I'm a huge believer of Steve Jobs' philosophy where forget the average person discussion because first of all the average person will care.
Starting point is 00:23:15 Steve Jobs designed, really pushed the design of the interior of computers to be beautiful not just the exterior. Even if you never see it, if you have attention and detail to every aspect of the design, even if it's completely hidden from the actual user in the end, somehow that karma, whatever it is, that love for everything you do, that love seeps
Starting point is 00:23:38 through the product. And the same, I think, with movies. If you talk about the 2001 Space Odyssey, there's so many details. I think there's probably these groups of people that study every detail of that movie and other Kubrick films. Those little details matter. Somehow they all come together to show how deeply passionate you are about telling the story. Well, Kubrick was a perfect example of that because he would put layer upon layer upon layer of detail into films that people would never even recognize. Like there's a bunch of correlations between the Apollo moon landings and the shining. You know, there's like people have actually studied it to the point where they think that
Starting point is 00:24:18 it's some sort of a confession that Kubrick faked the moon landing. That's Kubrick fake the moon landing goes from the little boy having the rocket ship on his sweater to the the number of the number of the room that things happen there's like a bunch of like very bizarre connections in the film that Kubrick Unquestionably engineered because he was just a Stupid smart man. I mean he was so goddamn smart that he would do complex mathematics for fun in his spare time. Kubrick was a legitimate genius,
Starting point is 00:24:53 and he engineered that sort of complexity into his films where he didn't have cut-the-shit moments in his movies, not that I can recall. No, not even close. This was very interesting. But that probably speaks to the reality of hollywood today that uh the cut the shit moments don't affect the bottom line of how much the movie makes well it really depends on the film right i mean the cut the shit moments that neil degrasse tyson found in gravity i didn't see because I wasn't aware of what the effects of gravity
Starting point is 00:25:26 on a person's hair would be. He saw it and he was like, this is ridiculous. And then there were some things like, why are these space stations so close together? I just let it slide while the movie was playing, but then he went into great detail about how preposterous it would be if those space stations were that close together
Starting point is 00:25:41 that you could get to them so quickly. That's with Sandra Bullock and the good-looking guy. Yes, and George Clooney. George Clooney. Yeah, the good-looking guy. So did that pass muster with Neil deGrasse Tyson? No. That movie wasn't?
Starting point is 00:25:54 He tore it apart. And when he tore it apart, people went crazy. They got so angry at him. Yeah, he reads the negative comments, as you've talked about. Yeah. I actually recently, because of doing a lot of work in artificial intelligence and lecturing about it and so on, I've plugged into this community of folks that are thinking about the future of artificial intelligence, artificial general intelligence. And they are very much out-of-the-box thinkers to where the kind of messages I get are best.
Starting point is 00:26:24 out-of-the-box thinkers to where the kind of messages I get are best. So I let them kind of explore those ideas without sort of engaging into those discussions. I think very complex discussions should be had with people in person. That's what I think. And I think that when you allow comments, just random anonymous comments to enter into your consciousness, like you're taking risks. And you may run into a bunch of really brilliant ideas that are coming from people that are considerate,
Starting point is 00:26:55 that have thought these things through, or you might just run into a river of assholes. And it's entirely possible. And I peeked into my comments today on twitter i was like what in the fuck i started reading like a couple of them some just morons and i'm like all right about some shit i didn't even know what the fuck they were talking about but but that's the risk you take when you dive in you're going to get people that are disproportionately upset you're going to get people that are disproportionately delusional or whatever it is in regards to your position on something or whether or not they even understand your position. They'll argue something that's an incorrect interpretation of your position.
Starting point is 00:27:35 Yeah, and you've actually – from what I've heard, you've actually been to this podcast and so on, really good at being open-minded. And that's something I try to preach as well. So in AI discussions, when you're talking about AGI and talking about – so there's a difference between narrow AI and general artificial intelligence. Narrow AI is the kind of things that are – the kind of tools that are being applied now and being quite effective. And then there's general AI, which is a broad categorization of concepts that are human level or super human level intelligence. And when you talk about AGI, artificial general intelligence, there seems to be two camps of people.
Starting point is 00:28:17 Ones who are really working deep in it; that's the camp I kind of sit in. And a lot of those folks tend to roll their eyes and just not engage in any discussions of the future. Their idea is, it's really hard to do what we're doing, and it's just really hard to see how this becomes intelligent. And then there's another group of people who say, yeah, but you're being very short-sighted, that you may not be able to do much now, but with the exponential, the hard takeoff, overnight it can become superintelligent, and then it'll be too late to think about. Now, the problem with those two camps, as with any camps, Democrat, Republican, any camps, is that they seem to be talking past each other, as opposed to recognizing that both have really interesting ideas. If you go back to the analogy of the touch of death, of this idea of MMA, right? So in this analogy, I'm going to put myself in the UFC for a second. In this analogy, I'm, you know, ranked in the top 20. I'm working really hard. My dream is to become a world champion. I'm training three times a day.
Starting point is 00:29:30 I'm really working. I'm an engineer. I'm trying to build my skills up. And then there are other folks that come along, like Steven Seagal and so on, that kind of talk about other kinds of martial arts, other ideas of how you can do certain things. And I think Steven Seagal is on to something. I think we really need to be open-minded. Like, Anderson Silva, I think, talks to Steven Seagal, or somebody talks to Steven Seagal, right? Well, Anderson Silva thinks Steven Seagal is... I want to put this in a respectful way. Anderson Silva has a wonderful sense of humor.
Starting point is 00:29:55 I think we really need to be open-minded like Anderson Silva, I think, talks to Steven Seagal or somebody talks to Steven Seagal, right? Well, Anderson Silva thinks Steven Seagal is – I want to put this in a respectful way. Anderson Silva has a wonderful sense of humor. And Anderson Silva is very playful. And he thought it would be hilarious if people believed that he was learning all of his martial arts from Steven Seagal. Got it. He also loves Steven Seagal got it he also loves steven seagal movies legitimately so treated him with a great deal of respect he also recognizes
Starting point is 00:30:31 that steven seagal actually is a master of aikido he really does understand aikido and was one of the very first westerners that was teaching in j, speaks fluent Japanese, was teaching at a dojo in Japan, and is a legitimate master of Aikido. The problem with Aikido is it's one of those martial arts that has merit in a vacuum. If you're in a world where there's no NCAA wrestlers or no judo players or no Brazilian jiu-jitsu black belts or no Muay Thai kickboxers, there might be something to that Aikido stuff. But in a world where all those other martial arts exist and we've examined all the intricacies of hand-to-hand combat it falls horribly short well see this is the point i'm trying to make you just said that we've investigated uh all the intricacies you said all the intricacies of hand-to-hand combat
Starting point is 00:31:37 i mean you're just speaking but you want to open your mind to the possibility that Aikido has some techniques that are effective. Yeah, when I say all, you're correct. That's not a correct way of describing it. Because there's always new moves that are being, like, for instance, in this recent fight between Anthony Pettis and Tony Ferguson, Tony Ferguson actually used Wing chung in a fight. He trapped one of Anthony Pettis' hands and hit him with an elbow. He basically used a technique that you would use on a wing chung dummy, and he did it in an actual world-class mixed martial arts fight.
Starting point is 00:32:20 And I remember watching it, wow, going, this crazy motherfucker actually pulled that off. Because it's a technique that you just rarely see anybody getting that proficient at it that fights in MMA. And Ferguson is an extremely creative and open-minded guy and he figured out a way to make that work in a world-class fight. So – and let me then ask you the question. There's these people who still believe, quite a lot of them, that there is this touch of death, right? Yeah. So do you think it's possible to discover?
Starting point is 00:32:54 No. Through this rigorous scientific process that is MMA that started pretty recently, do you think, not the touch of death, but do you think we can get a 10x improvement in the amount of power the human body can generate in punching? No, certainly not 10x. I think you can get incremental improvements, but it's all based entirely on your frame. Like if you're a person that has very small hands and narrow shoulders, you're kind of screwed. There's not really a lot of room for improvement. you're kind of screwed. There's not really a lot of room for improvement. You can certainly get incremental improvement in your ability to generate power,
Starting point is 00:33:28 but you'll never be able to generate the same kind of power as, say, a guy with a very big frame like Brock Lesnar or Derrick Lewis, or anyone who has these classic elements that go with being able to generate large amounts of power. That's right. Wide shoulders, large hands. There's a lot of characteristics of the human frame itself. Even with those people, there's only so much power you can generate, and we pretty much know how to do that correctly.
Starting point is 00:34:06 So the way you're talking about this as a martial arts expert is kind of the way a lot of the experts in robotics and AI talk about AI when the topic of the touch of death is brought up. Now, the analogy is not perfect. I tend to use probably too many analogies. But we maybe know the human body better than we know the possibility of AI. I would assume so, right? Because the possibility of AI is basically limitless once AI starts redesigning itself. It's not obvious that that's true. Our imagination allows it to be true.
Starting point is 00:34:40 I'm of two minds. I can hold both beliefs that are contradictory in my mind. One is that that idea is really far away, almost bordering on BS. And the other is that it can be there overnight. I think you can believe both those things. So there's another quote, from Barbara Wootton. It's a poem I heard in a lecture somewhere that I really like,
Starting point is 00:35:10 which is: it is from the champions of the impossible rather than the slaves of the possible that evolution draws its creative force. So I see Elon Musk as a representative of the champions of the impossible. I see exponential growth of AI within the next several decades as the impossible. But it's the champions of the impossible that actually make the impossible happen. Why would exponential growth of AI be impossible?
Starting point is 00:35:38 Because it seems inevitable to me. So it's not impossible. I'm sort of using the word impossible meaning magnificent. Yeah. It feels very difficult. Very, very difficult. Like we don't even know where to begin. Grand. Yep. Like the touch of death actually feels... Yeah, but see, the touch of death is horseshit. But see, you're an expert. Like, ah, and they touch you in the chest. But we don't have the ability in the body to generate that kind of energy. How do you know that?
Starting point is 00:36:06 That's a good question. It's never been done. We understand so much about physiology. How do you know it's never been done? Okay. There could be someone out there with magic that has escaped my grasp. No, you've studied. You've talked about it with Graham Hancock.
Starting point is 00:36:22 You've talked about the history. Maybe in Roman times that idea was discovered and then it was lost, because weapons are much more effective ways of delivering damage. Now I find myself in a very uncomfortable position, as a martial artist, of defending this concept. What martial arts did you study? Jiu-jitsu and judo and wrestling. Those are the hard ones. Jiu-jitsu, judo, and wrestling, those are absolute martial arts in my opinion.
Starting point is 00:36:56 This is what I mean. Like if you are a guy who just has a fantastic physique and incredible speed and ridiculous power, you just can generate ridiculous power. You know who Deontay Wilder is? Yes. Heavyweight champion of the world, boxer. You have, what's his name, Tyson Fury.
Starting point is 00:37:18 Tyson Fury on tomorrow. Tomorrow, yes. Two undefeated guys, right? Yes. Deontay Wilder has fantastic power. I mean, he just knocks people flying across the ring. I think Deontay Wilder, if he just came off the street, if he was 25 years old and no one ever taught him how to box at all, and you just wrapped his hands up and had him hit a bag, he would be able to generate insane amounts of force. Now, if you were a person that really didn't have much power, and you had to box with Deontay Wilder, and you were both of the same age, and you were a person that knew boxing, and you
Starting point is 00:37:53 stood in front of Deontay, it's entirely possible that Deontay Wilder could knock you into another dimension, even though he had no experience in boxing. If he just held on to you and hit you with a haymaker, he might be able to put you out. If you're a person who is, let's say, built like you, a guy who exercises, who's strong, and then there's someone who's identically built like you, who's a black belt in Brazilian jiu-jitsu, and you don't have any experience in martial arts at all, you're fucked. Right? Yes.
Starting point is 00:38:27 If you're a person who's built like you, who's a guy who exercises and is healthy, and you grapple with a guy who's even stronger than you and bigger than you, but he has no experience in Brazilian jiu-jitsu, he's still fucked. Yeah. That's the difference. That's why I think Brazilian Jiu Jitsu and Judo and Wrestling in particular those are absolutes in that
Starting point is 00:38:49 That's why I think Brazilian jiu-jitsu and judo and wrestling in particular, those are absolutes, in that you have control of the body. And once you grab a hold of a person's body, there's no lucky triangle chokes in jiu-jitsu. But I think I would say jiu-jitsu is the highest representative of that.
Starting point is 00:39:07 I think in wrestling and judo, having practiced those, I've never been quite as humbled as I have been in jiu-jitsu. Yeah. Especially when I started, I was like powerlifting. I was a total meathead. And, you know, a 130-pound guy or girl could tap you easily. Yeah, it's confusing. It's very confusing. In wrestling, you can get pretty far with that meathead power.
Starting point is 00:39:30 Yeah, yeah, yeah. And in judo, a little bit less so at its highest levels. If you go to Japan, for example, where, I mean, the whole dream of judo is to effortlessly throw your opponent. Yeah. But if you go to gyms in America and so on, there is some hard wrestling-style gripping and just beating each other up pretty intensely, where we're not talking about beautiful uchi matas or these beautiful throws. We're talking about some scrapping, some wrestling style.
Starting point is 00:40:05 Yeah, I see what you're saying. Yeah, my experience with jiu-jitsu was very humbling when I first started out. I had a long background in martial arts and striking and even wrestled in high school. And then I started taking jiu-jitsu, and a guy who was my size, and I was young at the time, and he was basically close to my age, just mauled me. And he wasn't even a black belt. I think he was a purple belt. He might have been a blue belt.
Starting point is 00:40:32 And he just destroyed me, just did anything he wanted to me. Choked me, armbarred me. And I remember thinking, man, I am so delusional. I thought I had a chance. Like I thought just based on taking a couple classes and learning what an armbar is, and then being a strong person who has a background in martial arts, that I would be able to at least hold him off a little bit.
Starting point is 00:40:58 No. And this is, that's so beautiful. I feel lucky to have had that experience of having my ass kicked. Philadelphia is where I came up. Because in science you don't often get that experience
Starting point is 00:41:12 in the space of ideas. You can't choke each other out. You can't beat each other up in science. And so it's easy to go your whole life. I have so many people around me
Starting point is 00:41:24 telling me how smart I am. There's no way to actually know if I'm smart or not, because I think I'm full of BS. And in the same realm as fighting, it's what Rickson Gracie said, or someone, Saulo Ribeiro, or somebody: the mat doesn't lie. There's this deep honesty in it
Starting point is 00:41:48 that I'm really grateful for. Almost like, you know, you talk about bullies, or you talk about, or even just, my fellow academics could benefit significantly
Starting point is 00:41:58 from training a little bit. I think so too. It's, yeah, it's a beautiful thing. I think it's been talked about, sort of acquiring it in high school. Yeah, we've talked about it many times, yeah. I think it's a more humbling sport, to be honest, than wrestling, because in wrestling, like I said, you could get away with some muscle.
Starting point is 00:42:18 It's also what martial arts are supposed to be, in that a small person who knows technique can beat a big person who doesn't know the technique. That's right. That's what we always hoped for, right, when we saw the Bruce Lee movies, and Bruce Lee, who was a smaller guy, could beat all these bigger guys just because he had better technique. That is actually real in jiu-jitsu, and it's one of the only martial arts where that's real. Yeah. And in Philadelphia, you had Steve Maxwell here, right?
Starting point is 00:42:45 Sure. That's kind of where, that was the spring of jiu-jitsu in Philadelphia. Yeah, he was one of the very first American black belts in jiu-jitsu, like way back in the day. I believe he was a black belt in the very early 90s, when jiu-jitsu was really just starting to come to America. And he had Maxercise in Philadelphia. It's still there. And then I trained at Balance, which is a few Gracie folks, which is Phil Migliarese, Rick Migliarese, Josh Vogel.
Starting point is 00:43:13 I mean, especially the Migliarese brothers, these couple of black belts that came up together. Well, they're smaller. They're little guys, and I think those were the guys that really humbled me pretty quickly. Well, little guys are the best to learn technique from. Yeah. Because they can't rely on strength. There's a lot of really big, powerful, you know, 250-pound jiu-jitsu guys who are never going to develop the sort of subtlety of technique that some, like the Miyao brothers, like smaller guys who just, from the very beginning, they've never had an advantage
Starting point is 00:43:52 in weight and size. And so they've never been able to use anything but perfect technique. Eddie Bravo is another great example of that, too. He competed in the 140-pound, 145-pound class. But to get back to artificial intelligence, so the idea is that there's two camps. There's one camp that thinks that the exponential increase in technology and that once artificial intelligence becomes sentient, it could eventually improve upon its own design and literally become a god in a short amount of time. And then there's the other school of thought that thinks that is so far outside of the realm of what is possible today that even the speculation of this eventually taking place is kind of ludicrous to imagine. Right.
Starting point is 00:44:42 Exactly. And the balance needs to be struck, because I'd like to talk about sort of the short-term threats that are there. And that's really important to think about. But the long-term threats, if they come to fruition, will overpower everything. Right. That's really important to think about. But what happens is, if you think too much about the encroaching doom of humanity, there's some aspect to it that is paralyzing, where it almost turns you off from actually thinking about these ideas. There's something so appealing. It's like a black hole that pulls you in. And if you notice, folks like Sam Harris and so on spend a large amount of their time, you know, talking about the negative stuff about
Starting point is 00:45:33 something that's far away. Not to say it's wrong to talk about it, but they spend very little time on the potential positive impacts in the near term, and also the negative impacts in the near term. So let's go over those. Yep. Fairness. So the more and more we put decisions about our lives into the hands of artificial intelligence systems, whether you get a loan, or in the autonomous vehicle context, or in terms of recommending jobs for you on LinkedIn, all these kinds of things, the idea of fairness, of bias in these machine learning systems, becomes a really big threat, because the way current artificial intelligence systems function is they train on data.
Starting point is 00:46:25 So there's no way for them to somehow gain a greater intelligence than the data we provide them with. So we provide them with actual data, and so they carry over, if we're not careful, the biases in that data, the discrimination that's inherent in our current society as represented by the data. So they'll just carry that forward. I guess so. So there's people working on this, more so to show really the negative impacts in terms of getting a loan, or to say whether this particular human being should be convicted or not of a crime.
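The point being made here, that a system trained on biased decisions reproduces them, can be shown with a toy sketch. Everything below is invented for illustration: the groups, the rates, and the loan setting are hypothetical stand-ins, and the "model" is just the empirical approval rate any learner would recover from such data.

```python
import random

random.seed(0)

# Hypothetical historical loan decisions. Applicants in both groups are
# equally qualified, but past human decisions approved group "B" less often.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.7            # same qualification rate for both
    if group == "A":
        approved = qualified and random.random() < 0.9
    else:
        approved = qualified and random.random() < 0.5   # biased decisions
    history.append((group, qualified, approved))

# "Training" here is just estimating P(approved | group, qualified) from the
# data, which is what any model fit to this history implicitly learns.
def approval_rate(group):
    rows = [h for h in history if h[0] == group and h[1]]    # qualified only
    return sum(1 for h in rows if h[2]) / len(rows)

print(f"qualified group A approval: {approval_rate('A'):.2f}")
print(f"qualified group B approval: {approval_rate('B'):.2f}")
# The learned rates mirror the historical bias, even though qualifications match.
```

The model never sees an instruction to discriminate; the disparity is carried forward purely because it is present in the data.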
Starting point is 00:47:13 There's ideas there that can carry. You know, in our criminal system, there's discrimination. And if you use data from that criminal system to then assist deciders, judges, juries, lawyers in making a decision of what kind of penalty a person gets, they're going to carry that forward. So you mean like racial, economic biases? Racial, economic, yeah. Geographical? And that's a sort of, I don't study that exact problem, but you're aware of it because of the tools we're using.
Starting point is 00:47:43 So, I'd like to talk about neural networks, Joe. Sure. Let's do it. Okay. So in the current approaches, there's been a lot of demonstrated improvements, exciting new advancements in artificial intelligence, and those, for the most part, have to do with neural networks. Something that's been around since the 1940s, that has gone through two AI winters, where everyone was super hyped
Starting point is 00:48:16 and then super bummed, and super hyped again, and bummed again. And now we're in this other hype cycle. And what neural networks are is these collections of interconnected simple compute units. They're all similar. It's kind of inspired by our own brain. We have a bunch of little neurons interconnected, and the idea is these interconnections start out random, but if you feed it with some data, they'll learn to connect, just like they do in our brain, in a way that interprets that data. They form representations of that data and can make decisions.
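As a toy illustration of those "interconnected simple compute units," here is the smallest possible version: a single artificial neuron trained by gradient descent. The data is invented, 2-D points standing in for annotated images, and the labels play the role of the human annotations discussed next.

```python
import math
import random

random.seed(1)

# Hypothetical annotated data: 2-D feature vectors standing in for images.
# Label 1 = "dog", label 0 = "cat". A human supplied every label.
data = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(200)]
data += [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(200)]

# One "simple compute unit": a weighted sum squashed through a sigmoid.
w = [0.0, 0.0]
b = 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Training: nudge the weights toward the human-provided labels.
lr = 0.1
for _ in range(50):
    for x, y in data:
        err = predict(x) - y          # error against the annotation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(predict((2, 2)))    # near 1, i.e. "dog"
print(predict((-2, -2)))  # near 0, i.e. "cat"
```

A real network stacks millions of these units, but the dependence on labeled examples is exactly the same.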
Starting point is 00:48:51 But there's only two ways to train those neural networks that we have now. One is we have to provide a large data set. If you want that neural network to tell the difference between a cat and a dog, you have to give it 10,000 images of a cat and 10,000 images of a dog. And who tells you what a picture of a cat and a dog is? It's humans. So it has to be annotated. So as teachers of these artificial intelligence systems, we have to collect this data, we have to invest a significant amount of effort to annotate that data, and then we teach neural networks to make that prediction. What's not obvious there is how poor of a method that is to achieve any kind of greater degree of intelligence. You're just not able to get very far beyond very specific narrow tasks
Starting point is 00:49:47 of cat versus dog, or should I give this person a loan or not, these kinds of simple tasks. I would argue autonomous vehicles are actually beyond the scope of that kind of approach. And then the other realm where neural networks can be trained is if you can simulate that world. So if the world is simple enough, or is conducive to being formalized sufficiently to where you can simulate it. So a game of chess, there's rules. A game of Go, there's rules. So you can simulate it. The big exciting thing about Google DeepMind is that they were able to beat the world champion by doing something called competitive
Starting point is 00:50:30 self-play, which is they have two systems play against each other. They don't need the human. They play against each other. And that's a beautiful idea, and super powerful, and really interesting and surprising, but it only works on things like games and simulation. So now, sorry to keep going to analogies of, like, UFC, for example, if I wanted to train a system to become the world champion, I could play the UFC game. I could create two neural networks that use competitive self-play to play in that virtual world. And they could become state-of-the-art, the best fighter ever in that game. But transferring that to the physical world,
Starting point is 00:51:23 we don't know how to do that. We don't know how to teach systems to do stuff in the real world. So some of the stuff that freaks you out often is Boston Dynamics robots. Oh yeah, those. Every day I go to the Instagram page and just go, what the fuck are you guys doing? Engineering our demise. Marc Raibert, the CEO, spoke at the class I taught. He calls himself a bad boy of robotics. So he's having a little fun with it. He should definitely stop doing that. Don't call yourself a bad boy of anything.
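The competitive self-play idea described a moment ago, the same system playing both sides and improving with no human examples, can be sketched in miniature. This toy has nothing to do with DeepMind's actual code: it uses tabular Q-learning instead of neural networks, on the simple game of Nim (take 1 to 3 stones; whoever takes the last stone wins).

```python
import random

random.seed(0)

# One policy plays both sides of Nim and learns a Q-table through self-play.
Q = {}   # Q[(stones, take)] = value of that move for the player making it

def best_value(stones):
    return max(Q.get((stones, a), 0.0) for a in range(1, min(3, stones) + 1))

alpha, epsilon = 0.5, 0.3
for _ in range(20000):
    stones = 10
    while stones > 0:
        actions = list(range(1, min(3, stones) + 1))
        if random.random() < epsilon:
            a = random.choice(actions)       # explore an unusual move
        else:
            a = max(actions, key=lambda m: Q.get((stones, m), 0.0))
        nxt = stones - a
        # Win if we took the last stone; otherwise the position passes to the
        # opponent, so our value is the negative of their best value (negamax).
        target = 1.0 if nxt == 0 else -best_value(nxt)
        old = Q.get((stones, a), 0.0)
        Q[(stones, a)] = old + alpha * (target - old)
        stones = nxt

def best_move(stones):
    return max(range(1, min(3, stones) + 1), key=lambda m: Q.get((stones, m), 0.0))

# Self-play discovers the known optimal strategy: leave a multiple of 4.
print(best_move(5), best_move(6), best_move(7))   # 1 2 3
```

Nothing here transfers to the physical world, which is exactly the limitation being described: the learned policy only exists inside the simulated game.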
Starting point is 00:51:55 That's true. How old is he? Okay, he's one of the greatest roboticists of our generation. That's great. That's wonderful. However, don't call yourself a bad boy, bro. Okay. So you're not the bad boy of MMA? Definitely not.
Starting point is 00:52:18 I'm not even the bad man. Bad man? Definitely not a bad boy. Okay. It's so silly. Yeah. Those robots are actually functioning in the physical world. That's what I'm talking about.
Starting point is 00:52:30 And they are using something called, the term was coined, I don't know, in the 70s or 80s, good old-fashioned AI, meaning there is nothing going on that you would consider artificially intelligent, which is usually connected to learning. So these systems aren't learning. It's not like you dropped a puppy into the world and it kind of stumbles around and figures stuff out and learns, and gets better and better and better. That's the scary part. That's the imagination. That's what we imagine: we put something in this world. At first, it's harmless. It falls all over the place. And all of a sudden, it figures something out.
Starting point is 00:53:12 And like Elon Musk says, it travels faster than whatever, you can only see it with strobe lights. There's no learning component there. This is just purely hydraulics and electric motors, and there's 20 to 30 degrees of freedom, and it's doing hard-coded control algorithms to control the task of how you move efficiently through space. So this is the task roboticists work on. A really, really hard problem is robotic manipulation: taking an arm, grabbing a water bottle, and lifting it.
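The contrast being drawn here, hard-coded control versus learning, can be sketched with the simplest instance of the former: a proportional-derivative (PD) loop driving a simulated joint to a commanded angle. The gains and the unit-inertia dynamics are made up for illustration; the point is that every number is hand-tuned by an engineer and nothing is learned.

```python
# Hand-tuned PD control of a simulated joint (unit inertia, no friction).
dt = 0.001
kp, kd = 20.0, 10.0          # proportional and derivative gains, hand-tuned

theta, omega = 0.0, 0.0      # joint angle (rad) and angular velocity
target = 1.0                 # commanded angle

for _ in range(5000):        # simulate 5 seconds
    error = target - theta
    u = kp * error - kd * omega      # hard-coded control law, no learning
    omega += u * dt                  # integrate the joint dynamics
    theta += omega * dt

print(round(theta, 3))   # settles at the commanded angle: 1.0
```

Real legged robots layer many such loops with careful models of the machine, but the character is the same: fixed equations, not a system that improves with experience.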
Starting point is 00:53:49 Super hard. Somewhat unsolved to this point. And learning to do that, we really don't know how to do that. Right, but what we're talking about essentially is the convergence of these robotic systems with artificially intelligent systems. That's right. And as artificially intelligent systems evolve, and then this convergence becomes complete, you're going to have the ability to do things like the computer that beat humans at Go.
Starting point is 00:54:18 That's right. You're going to have creativity. You're going to have a complex understanding of language and expression, and you're going to have, I mean, perhaps even engineered things like emotions, like jealousy and anger. I mean, it's entirely possible that, as you were saying, we're going to have systems that could potentially be biased the way human beings are biased towards people of certain economic groups or certain geographic groups, and would use that data that they have to discriminate just like human beings discriminate. That's right. What if you have all that in an artificially intelligent robot that
Starting point is 00:54:57 has autonomy and that has the ability to move. This is what people are totally concerned with and terrified of: that all of these different systems are currently in semi-crude states. They can't pick up a water bottle yet. They can't really do much other than backflips. But, you know, I mean, I'm sure you've seen the more recent Boston Dynamics ones. Parkour? Yeah, I saw that one the other day. They're getting better and better and better, and it's increasing every year. Every year they have new abilities. Did you see the Black Mirror episode Metalhead?
Starting point is 00:55:30 Yeah, and I think about it quite a lot, because functionally, we know how to do most aspects of that right now. Right now? Pretty close. Yeah, pretty close. I mean, I don't remember exactly, there's some kind of pebble-shooting situation where it hurts you by shooting you somehow, I forget. Well, it has bullets, didn't it? Bullets, yeah. It's basically a gun. It had a knife it stuck into one of its arms, remember? And, spoiler alert, it's just an amazing episode of how terrifying it would be if some emotionless robot with incredible abilities is coming after you and wants to terminate you. And I think about that a lot, because I love that episode. It's terrifying for some reason. But when I sit down and actually, in the
Starting point is 00:56:17 work we're doing, think about how we would do that: we can do the actual movement of the robot. What we don't know how to do is to have robots that do the full thing, which is to have a goal of pursuing humans and eradicating them, spoiler alerts all over the place. I think the goal of eradicating humans, assuming their values are not aligned somehow, that's one thing we don't know how to do. And two is, the entire process of just navigating all over the world is really difficult. So we know how to go up the stairs. But how to navigate the path you took from home to the studio today, how to get through that full path, is so much an unsolved problem.
Starting point is 00:57:06 But is it? Because you could engineer it, you could program it into your Tesla. You could put it into your navigation system and have it stop at red lights, drive for you, take turns, and it can do that. So first of all, that, I would argue, is still quite far away, but that's within 10, 20 years. But how much can it do now? It can stay inside the lane on the highway or on different roads, and it can change lanes. And what's being pushed now is they're trying to be able to enter and exit a highway.
Starting point is 00:57:38 So it's some basic highway driving. It doesn't stop at traffic lights. It doesn't stop at stop signs. And it doesn't interact with the complex, irrational human beings: pedestrians, cyclists, cars. This is the onion I talked about. First, the DARPA Grand Challenge. DARPA organized this challenge in the desert. It says, let's go across the desert. Let's see if we can build an autonomous vehicle that goes across the desert. In 2004, they did the first one, and everybody failed. We're talking about some of the smartest people in the world
Starting point is 00:58:14 really tried and failed. And so they did it again in 2005. There were a few finishers. Stanford won. There's a really badass guy from CMU, Red Whittaker, I think he's like a Marine, who led the team there. And they succeeded. Four teams finished. Stanford won. That was in the desert.
Starting point is 00:58:31 And there was this feeling that we had solved autonomous driving. But that's that onion. Because then, okay, what's the next step? We've got a car that travels across the desert autonomously. What's next? So in 2007 they did the Urban Challenge, where you drove around the city a little bit. And again, a super hard problem. People took it on. CMU won that one, Stanford second, I believe. And then there was definitely a feeling like, yeah, now we had
Starting point is 00:59:08 a car drive around the city, it's definitely solved. The problem is those cars were traveling super slow, first of all. And second of all, there were no pedestrians there. It wasn't a real city. It was artificial, just basically having to stop at different signs. Again, one other layer of the onion. And you say, okay, when we actually have to put this car in a city like L.A., how are we going to make this work? Because if there's no cars in the street and no pedestrians in the street, driving around is still hard but doable, and I think solvable in the next five years. When you put in pedestrians, everybody jaywalks. If you put human beings into this interaction, it becomes much, much harder.
Starting point is 00:59:52 Now, it's not impossible, and I think it's very doable, and with completely new interesting ideas, including revolutionizing infrastructure and rethinking transportation in general, it's possible to do in the next 5, 10 years, maybe 20. But it's not easy, like everybody says. Does anybody say it's easy? Yeah. There's a lot of hype behind autonomous vehicles. Elon Musk himself and other people have promised autonomous vehicles. That
Starting point is 01:00:25 timeline has already passed. It's been, you know, in 2018 we'll have autonomous vehicles. Now, they're semi-autonomous now, right? So, yeah, I know they can brake for pedestrians. Like if they see pedestrians, they're supposed to brake for them and avoid them, right? That's part of the... Technically, no. Wasn't that an issue with an Uber car that hit a pedestrian while it was operating autonomously? That's right. Someone, a homeless person, stepped off of a median right into traffic
Starting point is 01:00:53 and it nailed them, and then they found out one of the settings wasn't in place. That's right. But that was an autonomous vehicle being tested in Arizona. And unfortunately, it was a fatality. A person died. A pedestrian was killed. So what happened there, that's the thing I'm saying is really hard. That's full autonomy. That's technically when you can remove the steering
Starting point is 01:01:18 wheel and the car will drive itself and take care of everything. Everything I've seen, everything we're studying, and we're studying drivers in Tesla vehicles, we're building our own vehicles, it seems that it'll be a long way off before we can solve the fully autonomous driving problem. Because of pedestrians. Two things, I mean: pedestrians and cyclists, and the edge cases of driving. All the stuff we take for granted, for the same reason we take for granted how hard it is to walk, how hard it is to pick up this bottle,
Starting point is 01:01:52 our intuition about what's hard and easy is really flawed as human beings. Can I interject? What if all cars were autonomous? That's right. If we got to a point where every single car on the highway is operating off of a similar algorithm, or off the same system, then things would be far easier, right? Because then you don't have to deal with random kinetic movements, people just changing lanes, people looking at their cell phone, not paying attention to what they're doing, all sorts of things you
Starting point is 01:02:20 have to be wary of right now: driving and pedestrians and bicyclists. Totally. And that's in the realm of things I'm talking about, where you think outside the box and revolutionize our transportation system. That requires government to play along. It seems like that's going that way though, right? Do you feel like one day we're going to have autonomous driving pretty much everywhere, especially on the highway? It's not going there, in terms of, it's very slow moving. Government does stuff very slowly with infrastructure. One of the biggest things you can do for autonomous driving, it will solve a lot of problems, is to paint lane markings regularly.
Starting point is 01:02:53 So government does stuff very slow moving with infrastructure. One of the biggest things you can do for autonomous driving will solve a lot of problems is to paint lane markings regularly. Right. lane markings regularly right and even that it's been extremely difficult to do for for for yeah for uh for politicians right because right now there's not really the desire for it to but to explain to people what you mean by that when the lanes are painted very clearly the cameras and the autonomous vehicles can recognize them and stay inside those lanes much more easily. Yeah, there is two ways that cars see the world. Three. There's different sensors. The big ones for autonomous vehicles is LiDAR, which is these lasers that are being shot
Starting point is 01:03:35 all over the place in 360, and they give you this point cloud of how far stuff is away, but they don't give you the visual texture information of this is what brand water bottle they are. And cameras give you that information. So what Tesla is using, they have eight cameras, I think, is they perceive the world with cameras. And those two things require different things from the infrastructure, those two sensors. Cameras see the world the same as our human eyes see the world. So they need the lane markings. They need infrastructure to be really nicely visible,
Starting point is 01:04:09 traffic lights to be visible. So the same kind of things us humans like to have as cameras like to have. And lane marking is a big one. There's a lot of interesting infrastructure improvements that can happen, like traffic lights. Our traffic lights are super dumb right now. They sense nothing about the world, about the density of pedestrians, about approaching cars.
Starting point is 01:04:36 If traffic lights can communicate with the car, which it makes perfect sense. It's right there. There's no size limitations. It can have a computer inside of it. You can coordinate different things in terms of the same pedestrian kind of problem. Well, we have sensors now on streets, so when you pull up to certain lights, especially at night, the light will be red. You pull up, it instantaneously turns green
Starting point is 01:05:01 because it recognizes that you've stepped over or driven over a sensor. That's right. So that's a step in the right direction, but that's really sort of 20 years, 30 years ago technology. So you want to have something like the power of a smartphone inside every traffic light. It's pretty basic to do, but there's way outside of my expertise is how do you get government to do these kinds of improvements. So if I'm mistaken, well, correct me if I'm mistaken, but you're looking at things in terms of what we can do right now, right?
Starting point is 01:05:33 And a guy like Elon Musk or Sam Harris is saying, yeah, but look at where technology leads us. If you go back to 1960, the kind of computers that they use to do the apollo mission you got a whole room full of computers that doesn't have nearly the same power as the phone that's in your pocket right now now if you go into the future and exponentially calculate like what's going to take place in terms of our ability to create autonomous vehicles our ability to create artificial intelligence and all of these things going from what we have right now to what could be in 20 years where we very well might look at some sort of an artificial being
Starting point is 01:06:21 that can communicate with you some sort of an ex machina type creature. I mean, that's not outside the realm of possibility at all. You have to be careful with the at all part. At all. Our ability to predict the future is really difficult, but I agree with you. It's not outside the realm of possibility. Yeah. And the thing, there's a few examples that are brought along just because
Starting point is 01:06:48 I enjoy these predictions. Of how bad we are at predicting stuff: the very engineers, the very guys and gals like me sitting before you, made some of the worst predictions in history, in terms of both pessimistic and optimistic. The Wright brothers: one of the Wright brothers, before they flew in 1903, predicted two years earlier that it would be 50 years. I confess that in 1901, that's one of the brothers talking, I said to my brother Orville that man would not fly for 50 years. Two years later, we ourselves were making flights.
Starting point is 01:07:28 This demonstration of my inability as a prophet gave me such a shock that I have ever since distrusted myself and have refrained from all prediction. That's one of the Wright brothers, one of the people working at it. So that's a pessimistic estimation. Versus an optimistic prediction? Exactly. And the same with Albert Einstein. Fermi made these kinds of pessimistic observations. Fermi, three years before the first critical chain reaction, as part of the effort where he led the nuclear development of the bomb, said he had 90% confidence that it was impossible.
Starting point is 01:08:07 Okay, so that's on the pessimistic side. On the optimistic side, the history of AI is laden with optimistic predictions. In 1965, one of the seminal people in AI, Herbert Simon, said, machines will be capable within 20 years
Starting point is 01:08:23 of doing any work a man can do. He also said, within 10 years, a digital computer will be the world's chess champion. That's in '58. And we didn't do that until the late 90s, so 40 years later. Yeah, but that's one person, right? I mean, it's a guy taking a stab in the dark
Starting point is 01:08:38 based on what data? What's he basing this off of? Our imagination. Right. We have more data points now, don't you think? No. In terms of, no? Not about the future.
Starting point is 01:08:48 That's the thing. Not about the future, but about what's possible right now. Right. And the past is a really bad predictor of the future. If you look at the past, what we've done, the immense advancement of technology, has given us in many ways optimism about what's possible. But exactly what is possible, we're not good at. So I am much more confident that the world will look fascinatingly different in the future. Whether AI will be part of that world is unclear.
Starting point is 01:09:26 It could be we will all live in a virtual reality world. Or, for example, one of the things I really think about is, to me, a really dumb AI on one billion smartphones is potentially more impactful than a super intelligent AI on one smartphone. The fact that everybody now has smartphones, this kind of access to information, the way we communicate, the globalization of everything, the potential impact there of just even subtle improvements in AI could completely change the fabric of our society, in a way where these discussions about an Ex Machina type lady walking around will be silly, because we'll all be either
Starting point is 01:10:12 living on Mars or living in virtual reality, or there's so many exciting possibilities. Right. And what I believe is we have to think about them. We have to talk about them. Technology is always the source of danger, of risk. All of the biggest things that threaten our civilization, at the small and large scale, are connected to misuse of technology we develop. And at the same time, it's that very technology that will empower us and save us. So there's Max Tegmark, brilliant guy, Life 3.0. I recommend people read his book on artificial general intelligence. He talks about a race that can't be stopped. One is the development of technology, and the other is the development of our wisdom of how to stop or how to control the technology. And it's this kind
Starting point is 01:11:13 of race. And our wisdom is always one step behind. And that's why we need to invest in it and keep thinking about new ideas. So right now we're talking about AI. We don't know what it's going to look like in five years. We have to keep thinking about it. We have to, through simulation, explore different ideas; through conferences, have debates; come up with different approaches of how to solve particular problems, like I said with bias, or how to solve deepfakes, where you can make Donald Trump or former President Obama say anything, or you can have Facebook advertisements, hyper-targeted advertisements. How we can deal with those situations, and constantly have this race of wisdom versus the development of technology,
Starting point is 01:12:02 but not to sit and think, well, look at the development of technology, imagine what it could do in 50 years, and we're all screwed. Because it's important to sort of be nervous about it in that way, but it's not conducive to what do we do about it. And the people that know what to do about it are the people
Starting point is 01:12:30 trying to build this technology, building this future one step at a time. What do you mean by know what to do about it? Let's put it in terms of Elon Musk. Elon Musk is terrified of artificial intelligence because he thinks by the time it becomes sentient, it will be too late.
Starting point is 01:12:45 It will be smarter than us and we'll have essentially created our successors. Yes. And let me quote Joe Rogan and say that's just one guy. Yeah. Well, Sam Harris thinks the same thing. Yes. And there's a lot of – There's quite a few people who think that.
Starting point is 01:13:00 And Sam Harris I think is one of the smartest people I know. And Elon Musk, intelligence aside, is one of the most impactful people I know. And he's actually building these cars. And in the narrow AI sense, if he's built these autopilot systems that we've been studying, the way that system works is incredible. It was surprising to me on many levels. It's an incredible demonstration of what AI can do in a positive way in the world.
Starting point is 01:13:30 So I don't... People can disagree. I'm not sure of the functional value of his fear about the possibility of this future. Well, if he's correct, there's functional value in hitting the brakes
Starting point is 01:13:46 before this takes place. Just to be a person who's standing on top of the rocks with a light to warn the boats: hey, there's a rock here. Pay attention to where we're going, because there's perils ahead. I think that's what he's saying. And I don't think there's anything wrong with saying that. And I think there's plenty of room for people saying what he's saying and people saying what you're saying. I think what would hurt us is if we tried to silence either voice. I think what we need in terms of our understanding of this future is many, many
Starting point is 01:14:27 of these conversations where you're dealing with the current state of technology versus a bunch of creative interpretations of where this could go, and have discussions about where it should go, or what could be the possible pitfalls of any current or future actions. I don't think there's anything wrong with this. So when you say, what's the benefit of thinking in a negative way? Well, it's to prevent our demise. So totally, I agree 100%. Negativity or worry about the existential threat is really important to have as part of the conversation. But there's this line, it's hard to put into words, there's a line that you cross when that worry becomes hyperbole. Yeah. And then there's something about the human psyche where it
Starting point is 01:15:16 becomes paralyzing for some reason. Now, when I have beers with my friends, the non-AI folks, we actually cross that line all day and have fun with it. Maybe I should get you drunk right now. Maybe. I'd regret every moment of it. I talked to Steve Pinker. Enlightenment Now, his book, kind of highlights that. He totally doesn't find that appealing, because that's crossing all realms of rationality and reason. When you say that appealing, what do you mean?
Starting point is 01:15:55 Crossing the line into what will happen in 50 years. What could happen. What could happen. He doesn't find that appealing. He doesn't find it appealing because he's studied it, and I'm not sure I agree with him to the degree that he takes it. He finds that there's no evidence. He wants all our discussions to be grounded in evidence and data. And he highlights the fact that there's something about the human psyche that desires this negativity. There's something undeniable where we want to create and engineer the gods that overpower us and destroy us. We want to, or we worry about it.
Starting point is 01:16:36 I don't know if we want to. Let me rephrase that: we want to worry about it. There's something about the psyche that... But because you can't take the genie and put it back in the bottle. That's right. Yeah. I mean, when you say there's no reason to think this way: if you do have cars that are semi-autonomous now, and if you do have computers that can beat human beings who are world Go champions, and if you do have computers that can beat people at chess, and you do have people that are consistently working on artificial intelligence, you do have
Starting point is 01:17:09 Boston Dynamics, who are getting these robots to do all sorts of spectacular physical stunts. And then you think about the possible future convergence of all these technologies. And then you think about the possibility of this exponential increase in technology that allows them to be sentient, like within a decade, two decades, three decades. What more evidence do you need? You're seeing all the building blocks of a potential successor being laid out in front of you, and you're seeing what we do with every single aspect of technology. We constantly and consistently improve and innovate, right, with everything, whether it's computers or cars or anything. Everything today is better than everything that was 20 years ago. So if you looked at artificial intelligence, which does exist to a
Starting point is 01:17:57 certain extent, and you look at what it could potentially be 30, 40, 50 years from now, whatever it is, why wouldn't you look at all these data points and say, hey, this could go bad? I mean, it could go great, but it could also go bad. I do not want to be mistaken as the person who's not the champion of the impossible. I agree with you completely. I don't think it's impossible. I don't think it's impossible at all.
Starting point is 01:18:26 I think it's inevitable. I don't. I think it is inevitable, yes. It's the Sam Harris argument. If superintelligence is nothing more than information processing, same as the argument of the simulation, that we're living in a simulation, it's very difficult to argue against. The question is when, and what the world would look like.
Starting point is 01:18:53 Right. So it's, like I said, a race. And it's difficult. You have to balance those two minds. I agree with you totally. And I disagree with my fellow robotics folks who don't want to think about it at all. Of course they don't. They want to buy new houses. They've got a lot of money invested in this adventure. They want to keep the party rolling. They don't want to pull the brakes. Everybody pull the cords out of the walls. We've got to stop. No one's going
Starting point is 01:19:17 to do that. No one's going to come along and say, hey, we've run all this data through a computer, and we found that if we just keep going the way we're going, in 30 years from now we will have a successor that will decide that human beings are outdated and inefficient and dangerous to the actual world that we live in, and we're going to start wiping them out. But that's not – Well, it doesn't exist right now. But if that did happen, if someone did come to the UN and had this multi-stage presentation with data that showed that if we continue on the path,
Starting point is 01:19:52 we have seven years before artificial intelligence decides to eliminate human beings based on these data points, what do they do? What do the Boston Dynamics people do? Well, building a house in Cambridge. What are you talking about, man? I'm not going anywhere. Come on. I just bought a new Tesla. I need to finance this thing. Hey, I got credit card bills.
Starting point is 01:20:11 I got student loans I'm still paying off. How do you stop people from doing what they do for a living? How do you say that, hey, I know that you would like to look at the future with rose-colored glasses on, but there's a real potential pitfall that could be the extermination of the human species. Right. And obviously I'm going way far with this. Yeah, I like it. I think every one of us trying to build these systems is similar to the way you were talking about the touch of death,
Starting point is 01:20:47 in that my dream, and the dream of many roboticists, is to create intelligent systems that will improve our lives, and we're working really hard at it. Not for a house in Cambridge, not for a billion-dollar startup-sale paycheck. We love this stuff. Some of you. I mean, obviously the motivations are different for every single human being that's involved in every endeavor. And we're trying really hard to build these systems, and it's really hard. So whenever the question is, well, looked at historically, it's going to take off, it can potentially take off any moment, it's very difficult to really be cognizant as an engineer about how it takes off, because you're trying to make it take off in a positive direction. And you're failing.
Starting point is 01:21:38 Everybody is failing. It's been really hard. And so you have to acknowledge that overnight some Elon Musk-type character might come along. And, you know, with the Boring Company or with SpaceX, people didn't think anybody but NASA could do what Elon Musk is doing, and he's doing it. It's hard to think about that too much. You have to do that. But the reality is we're trying to create these super intelligent beings. Sure, but isn't the reality also that we have done things in the past because we were trying to do it,
Starting point is 01:22:16 and then we realized that these have horrific consequences for the human race, like Oppenheimer and the Manhattan Project. When he said, I am death, destroyer of worlds, when he's quoting the Bhagavad Gita, when he's detonating the first nuclear bomb and realizing what he's done. Just because something's possible to do doesn't necessarily mean it's a good idea for human beings to do it. Now, we haven't destroyed the world with Oppenheimer's discovery
Starting point is 01:22:43 and through the work of the Manhattan Project. We've managed to somehow or another keep the lid on this shit for the last 60. Which is incredible. It's crazy, right? You know, I mean, for the last, what, 70 years? Yes. How long has it been? 70 sounds right.
Starting point is 01:22:56 10,000, 20,000 nukes all over the world right now. It's crazy. I mean, we literally could kill everything on the planet. And somehow we don't. Somehow. Somehow, in some amazing way, we have not. But that doesn't mean we, I mean, that's a very short amount of time in relation to the actual lifespan of the Earth itself.
Starting point is 01:23:20 And certainly in terms of the time human history has been around. And nuclear weapons, global warming is another one. Sure, but that's a side effect of our actions, right? We're talking about a direct effect of human ingenuity and innovation. The nuclear bomb. It's a direct effect. We tried to make it. We made it. There it goes.
Starting point is 01:23:39 Global warming is an accidental consequence of human civilization. So you can't – I don't think it's possible to not build a nuclear bomb. You don't think it's possible to not build it? In terms of – because people are tribal. They speak different languages. They have different desires and needs, and they were at war. So with all these engineers working towards it, it was not possible to not build it.
Starting point is 01:24:05 Yeah. And like I said, there's something about us chimps in a large collective where we are born and pushed forward towards progress of technology. You cannot stop the progress of technology. So the goal is how to guide that development into a positive direction. But surely, if we do understand that this has taken place, and we did drop these enormous bombs on Hiroshima and Nagasaki and killed untold amounts of innocent people with these detonations, that it's not necessarily always a good thing to pursue technology. Nobody is so...
Starting point is 01:24:48 You see what I'm saying? Yes, 100%. I agree with you totally. So I'm more playing devil's advocate than anything. But what I'm saying is you guys are looking at these things like we're just trying to make these things happen. And what I think people like Elon Musk and Sam Harris and a bunch of others that are gravely concerned about the potential for AI are saying is, I understand what you're doing, but you've got to understand the other side of it. You've got to understand that there are people out there that are terrified
Starting point is 01:25:15 that if you do extrapolate, if you do take this relentless thirst for innovation and keep going with it, when you look at what we can do, what human beings can do so far in our crude manner of 2018, all the amazing things they've been able to accomplish, it's entirely possible that we might be creating our successors. This is not outside the realm of possibility. And for all of our biological limitations, we might figure out a better way, and this better way might be some sort of an artificial creature. Yep. AI began with our dream to forge the gods. And I think that it's impossible to stop. Well, it's not
Starting point is 01:26:01 impossible to stop if you go Ted Kaczynski and kill all the people. I mean, that's what Ted Kaczynski anticipated. You know, the Unabomber, do you know the whole story behind him? No. What was he trying to stop? Ooh, he's a fascinating cat. Here's what's fascinating. There's a bunch of fascinating things about him.
Starting point is 01:26:17 But one of the more fascinating things about him, he was involved in the Harvard LSD studies. So they were nuking that dude's brain with acid. And then he goes to Berkeley, becomes a professor, takes all his money from teaching and just makes a cabin in the woods and decides to kill people that are involved in the creation of technology because he thinks technology is eventually going to kill off all the people. So he becomes crazy and schizophrenic and who knows what the fuck is wrong with him and whether or not this would have taken place inevitably or whether
Starting point is 01:26:48 this was a direct result of his being literally like drowned in LSD. We don't even know how much they gave him or what the experiment entailed or how many other people got their brains torched during these experiments. But we do know for a fact that Ted Kaczynski was a part of the
Starting point is 01:27:04 Harvard LSD studies. And we do know that he did move to the woods, write his manifesto, and start blowing up people that were involved in technology. And the basic thesis of his manifesto, that perhaps LSD
Starting point is 01:27:20 opened his eyes to is that technology is going to kill all humans. And so we should. It was going to be the end of the human race, I think, I believe. The human race. So the solution. You know what? Is that what he said?
Starting point is 01:27:31 You looking it up? The industrial revolution and its consequences have been a disaster for the human race. Yeah. He extrapolated. He was looking at where we're going and these people that were responsible for innovation. And he was saying they're doing this with no regard for the consequences on the human race. And he thought the way to stop that was to kill people. Obviously, he's fucking demented. But this is, I mean, he literally was saying what we're saying right now.
Starting point is 01:28:00 You keep going, we're fucked. So the Industrial Revolution, we'll have to think about that. It's a really important message coming from the wrong guy. But where is all this taking us? Yeah, where is it? So I guess my underlying assumption is that, under the current capitalist structure of society, we always want a new iPhone. You had one of the best reviewers on yesterday, Marques, that always talks about this. Yeah. We always, myself too. Pixel 3. I have a Pixel 2, and I'm thinking maybe I need a Pixel 3. I don't know, a better camera. Whatever that is, that fire that wants more, better, better. I just don't think it's possible to stop. And the best thing we can do is to explore ways to guide it towards safety, where it helps us.
Starting point is 01:28:51 When you say it's not possible to stop, you mean collectively as an organism, like the human race, that it's a tendency that's just built in? It's certainly possible to stop as an individual. Because I know people, like my friend Ari, who's given up on smartphones. He went to a flip phone, and he doesn't check social media anymore. He found it to be toxic. He didn't like it. He thought he was too addicted to it, and he didn't like where it was leading him. Yep. So on an individual level it's possible. And, just like with Ted Kaczynski, on the individual level it's possible to do certain things that try to stop it in more dramatic ways.
Starting point is 01:29:27 But I just think this living, breathing organism that is our civilization will progress forward. We're just curious apes. It's this desire to explore the universe. Why? Why do we want to do these things? Why do we look up and want to travel? And I don't think it's that we're trying to optimize for survival. In fact, I don't think most of us would want to be immortal. I think it's like Neil deGrasse Tyson talks about: the fact that we're mortal, the fact that one day we'll die, is one of the things that gives life meaning. And sort of trying to worry and say, wait a minute,
Starting point is 01:30:11 where is this going, as opposed to riding the wave of forward progress. I mean, it's one of the things Steve Pinker ironically gets quite a bit of hate for, but he really describes in data how our world is getting better and better. Well, he just gets hate from people that don't want to admit that there's a trend towards things getting better, because they feel like then people will ignore all the bad things that are happening right now and all the injustices, which I think is a very short-sighted thing, but I think it's because of their own biases and the perspective that they're trying to establish and push. Instead of looking at things objectively, looking at the data, and saying, I see where you're going, it doesn't discount the fact that there's injustice in the world, and crime and violence, and all sorts of terrible things happen to good people on a daily basis. But what he's saying is just look at the actual trend of civilization and the human species itself, and there's an undeniable trend towards peace, slowly but surely working towards peace. We're way safer today than we were a thousand years ago. It just is.
Starting point is 01:31:19 thousand years ago just it is it just is. Yeah, and there's these interesting arguments, which his book kind of blew my mind to this funny joke he says that some people consider giving nuclear – the atom bomb the Nobel Peace Prize. People believe that nuclear weapons are actually responsible for a lot of the decrease in violence because all of the major people can do damage. All the Russia and all the major states can do damage, have a strong disincentive from engaging in warfare. Right. And so these are the kinds of things you don't, I guess, anticipate. So I think it's very difficult to stop that forward progress, but we have to really worry and think about, okay, how do we avoid the list of things that we worry about? So one of the things that people really worry about is the control problem, is basically AI becoming not necessarily super intelligent, but super powerful. We put too much of our lives into it. That's where Elon Musk and others that want to provide regulation of some sort, saying, wait a minute,
Starting point is 01:32:28 you have to put some bars on what this thing can do from a government perspective, from a company perspective. Right, but how could you stop rogue states from doing that? How could you, why would you, why would China listen to us? Why would Russia listen to us? Why would other countries that are capable of doing this
Starting point is 01:32:43 and maybe don't have the same sort of power that the United States has, and they would like to establish that kind of power, why wouldn't they just take the cap off? In a philosophical, high-level sense, there's no reason. But if you engineer it in... So we do this thing with autonomous vehicles called arguing machines, where we have multiple AI systems argue against each other. So it's possible that you have some AI systems supervising other AI systems. Sort of like in our nation, there's a Congress arguing, blue and red states being represented, and there's this discourse going on, debate. And have AI systems
Starting point is 01:33:26 like that, too. It doesn't necessarily need to be one super powerful thing. It could be AIs supervising each other. So there's interesting ideas there to play with. Because ultimately, what are these artificial intelligence systems doing? We humans place power into their hands. In order for them to run away with it, we need to put power into their hands first. So we have to figure out how we put that power in initially, so it doesn't run away, and how supervision can happen. Right. But this is us, right? You're talking about rational people. What about other people? Why would they engineer limitations into their artificial intelligence? And what incentive
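The arguing-machines idea Lex describes, two independent systems whose disagreement triggers human supervision, could be sketched very roughly like this. The function names, the toy models, and the threshold below are all illustrative assumptions for the sake of the sketch, not details of any real autopilot system.

```python
# Rough sketch of the "arguing machines" pattern: two independent
# decision systems each propose an action, and disagreement beyond a
# threshold escalates control to a human supervisor instead of acting.
# All names, models, and thresholds here are illustrative assumptions.

def steering_a(obs: float) -> float:
    # stand-in for a primary steering system
    return 0.10 * obs

def steering_b(obs: float) -> float:
    # stand-in for an independently trained second system
    return 0.12 * obs

def arbitrate(obs: float, threshold: float = 0.1):
    """Return (action, needs_human): act only when the two systems agree."""
    a, b = steering_a(obs), steering_b(obs)
    if abs(a - b) > threshold:
        return None, True           # disagreement: hand control to the human
    return (a + b) / 2.0, False     # agreement: blend the two proposals

# Small disagreement: the blended action is used.
action, needs_human = arbitrate(1.0)
# Large disagreement: no action is taken, control is escalated.
action2, needs_human2 = arbitrate(10.0)
```

The point of the pattern is that neither system has to be trusted alone; supervision emerges from disagreement between them, which is the Congress analogy in the conversation.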
Starting point is 01:34:01 would they have to do that, to somehow or another limit their artificial intelligence to keep it from having as much power as ours. There's really not a lot of incentive on their side, especially if there's some sort of competitive advantage for their artificial intelligence to be more ruthless, more sentient, more autonomous. I mean, it seems like once the, again, once the genie's out of the bottle, it's going to be very hard. I have a theory, and this is a very bizarre theory, but I've been running with this for quite a few years now. I think human beings are some sort of a caterpillar.
Starting point is 01:34:34 And I think we're creating a cocoon, and through that cocoon, we're going to give birth to a butterfly. And then we're going to have some sort of a symbiotic connection to these electronic things where they're going to replace our parts, our failing parts with far superior parts until we're not really a person anymore. Like what was that Scarlett Johansson movie, The Ghost in the Shell? I tried to watch part of it. It's pretty stupid. But she's hot as fuck. So it kept my attention for a little bit. But in that, they took her brain and put it in this artificial body
Starting point is 01:35:06 that had superpowers. And they basically replaced everything about her that was in her consciousness with these artificial parts. All of her frame, everything was just some new thing that was far superior. And she had these abilities that no human being will ever have. I really wonder why we have this insatiable – why can't – if we're so logical and so thoughtful in some ways, why can't we be that way when it comes to materialism?
Starting point is 01:35:41 Well, I think one of the reasons why is because materialism is the main engine that pushes innovation. If it wasn't for people's desire to get the newest, latest, and greatest thing, what would fund these new TVs, cell phones, computers? Why do you really need a new laptop every year? Is it because of engineered obsolescence where the laptop dies off and you have to get a new one because they fucked you and they built a shitty machine that's designed to die so you buy a new one?
Starting point is 01:36:10 You really like iPhones, don't you? Well, it's not even iPhones. It's a laptop. Is it because, you know, you just see the number? 2.6 gigahertz is better than 2.4. Oh, it's the new one. It has a 12-megapixel webcam instead of an 8.
Starting point is 01:36:28 And for whatever reason, we have this desire to get those new things. I think that's what fuels innovation. And my cynical view of this thing that's happening is that we have this bizarre desire to fuel our demise, and that we're doing so by fueling technology, by motivating these companies to continually innovate. If everybody just said, you know what, man, I'm really into log cabins, and I want an axe so I can cut my own firewood, and I realize the TV is rotting my brain. I just want to read books. So fuck off. And everybody started doing that, and everybody started living like
Starting point is 01:37:07 when it gets dark out, I'll use candles. And you know what? I'm going to get my water from a well. And you know what? I'm going to do, and I like living better that way. If people started doing that, there would be no need for companies
Starting point is 01:37:19 to continually make new computers, to make new phones, to make new smartwatches or whatever the fuck they're making, to make cars that can drive themselves. These things that we're really, really attached to. If you looked at the human organism, like if you somehow or another could objectively remove yourself from society and culture and all the things that make us a person, and you look at what we do. Like, what does this thing do?
Starting point is 01:37:47 We found this planet. There's these little pink monkeys and brown monkeys and yellow monkeys. And what are they all into? Well, they all seem to be into making stuff. And what kind of stuff are they making? Well, they keep making better and better stuff. It's more and more capable. Well, where's it going?
Starting point is 01:38:01 Well, it's going to replace them. They're going to make a thing that's better than them. They're engineering these things slowly but surely to do all the things they do, but do them better. Yeah. And it's a fascinating theory. I mean, it's not a theory. It's an instructive way to think about intelligence and life, period. So if you step back, look across human history, and look at Earth as an organism, what is this thing doing? The thing is, I think in terms of scale and in terms of time,
Starting point is 01:38:33 you can look that way at so many things. Like, aren't there billions or trillions of organisms on our skin right now, both of us, that have little civilizations, right? They have a different mechanism by which they operate and interact. But for us to say that we're intelligent and those organisms are not is a very narrow-sided view. So they are operating under some force of nature that Darwin worked on trying to understand some small elements of, this evolutionary theory. But there's other more
Starting point is 01:39:02 interesting forces at play that we don't understand. Sure. And there's some kind of force. It could be a fundamental force of physics that Einstein never got a chance to discover: our desire for an iPhone update. Some fundamental force of nature – somehow gravity and the strong force and these things described by physics add up to this drive for new things, for creation. And the fact that we die, the fact that we're mortal, the fact of what desires are built into us, whether it's sexual or intellectual or whatever drives us apes – like somehow that all combines into this progress, and towards what –
Starting point is 01:39:49 it is a compelling way to think that if an alien species did visit Earth, I think they would probably see the smartphone situation. They see how many little lights are on and how us apes are looking at them. It's possible – I think some people have said this – that they would think the overlords are the phones, not the people. So to think that that's now moving into a direction where the future will be something that is beyond human or symbiotic with human in ways we can't understand is really interesting. Not just that, but something that we're creating ourselves.
Starting point is 01:40:25 Creating ourselves. And it's a main focal point of our existence. That's our purpose. Yeah. I mean, if you think about a main focal point, if you think about the average person, what they do, there's a great percentage of our population that has jobs where they work and one of the ways that
Starting point is 01:40:47 they placate themselves doing these things that they don't really enjoy doing is earning money for objects, right? They want a new car, they want a new house, they want a bigger TV, they want this or that. And the way they motivate themselves to keep showing up at this shitty job is to think, if I just put in three more months, I can get that Mercedes. If I just do this or that, I can finance this new Pixel 3. Yeah, and it's interesting because, sort of as politicians, what's the American dream? You hear this thing: I want my children to be better off than me. This kind of desire, you know – you can almost see that, taken farther and farther, there will be a presidential candidate in 50, 100 years who'll say, I want my children to be robots. You know what I mean? Like sort of this idea that that's the natural evolution and that is the highest calling of our species.
Starting point is 01:41:47 That scares me because I value my own life. But does it scare you if it comes out perfect? Like if each robot is like a god and each robot is beautiful and loving and they recognize all the great parts of this existence. And they avoid all the jealousy and the nonsense and all the stupid aspects of being a person. And we realize that a lot of these things are just sort of biologically engineered tricks that are designed to keep us surviving from generation to generation. But now, here in this fantastic new age, we don't need them anymore. Yeah. Well, first, one of the most transformative moments of my life was when I met Spot Mini in person for the first time, which is one of the legged robots at Boston Dynamics. When I met that little fella, there was – I know exactly
Starting point is 01:42:42 how it works. I know exactly how every aspect of it works. It's just a dumb robot. But when I met him and he got up and he looked at me – There it is, right there. Have you seen it dance? Now? Yeah, the dance. It's a new thing. Yep, yep. The dance is crazy. But see, it's not crazy on the technical side. Right, it's crazy engineering. It's obvious it's programmed, but it's crazy to watch. Like, wow. There's something – the reason the moment was transformative is I know exactly how it works. And yet by watching it, something about the feeling of it, you're like, this thing is alive.
Starting point is 01:43:17 And there was this terrifying moment – not terrifying, but terrifying and appealing – where this is the future. Right. Like this thing represents some future that we cannot understand, just like a future with planes and smartphones was something somebody in the 18th century couldn't understand. That little dog could have had a human consciousness in it – that was the feeling I had, and I know exactly how it works. There's nothing close to that intelligence, but it just gives you this picture of what the possibilities are of these living creatures. And I think that's what people feel
Starting point is 01:44:02 when they see Boston Dynamics. Look how awesome this thing running around is. They don't care about the technicalities and how far away we are. They see it. Look, this thing is pretty human. And the possibilities of human-like things that supersede humans and can evolve and learn quickly,
Starting point is 01:44:21 exponentially fast, is this terrifying frontier that really makes us think, as it did for me. Maybe terrifying is a weird word, because when I look at it – and I'm not irrational – there's videos that show the progression of Boston Dynamics robots from several years ago to today, what they're capable of. And it is a fascinating thing because you're watching all the hard work of these engineers and all these people that have designed these systems and have figured out all these problems that these things encounter,
Starting point is 01:45:00 and they've come up with solutions, and they continue to innovate. And they're constantly doing it, and you're seeing this, and you're like, wow, what are we going to see in a year? What am I going to see in three years? What am I going to see in five years? It's absolutely fascinating because if you extrapolate and you just keep going, boy, you go 15, 20, 30, 50, 100 years from now, you have Ex Machina. Yeah, you have Ex Machina. At least in our
Starting point is 01:45:27 imagination. And the problem is there'll be so many other things that are super exciting and interesting. Sure. But that doesn't mean it's not crazy. I mean, there's many other things you could focus on that are also going to be bizarre and crazy. Sure. But what about it? It's going somewhere. That fucker is getting better. The parkour one is bananas. You see it hopping from box to box, left to right, and leaping up in the air, and you're like, whoa, that thing doesn't have any wires on it. It's not connected to anything. It's just jumping from box to box. Like, if that thing had a machine gun and it was running across a hill at you, you'd be like, oh fuck, how long does its battery last? How many bullets does
Starting point is 01:46:10 it have? Let me just say that I would pick Tim Kennedy over that dog for the next 50 years. 50? Yeah, man. I'm a big Tim Kennedy fan. I'm talking about – but he'll probably have some robotic additions to his body to improve the – Well, then is he Tim Kennedy anymore? If the brain is Tim Kennedy, then he's still Tim Kennedy. That's the way we think about it. But there is huge concern about – the UN is meeting about this – autonomous weapons. Allowing AI to make decisions about who lives and who dies is really concerning in the short term.
Starting point is 01:46:58 It's not about a robotic dog with a shotgun running around. It's more about our military wanting to make destruction as efficient as possible, minimizing human life. Drones. Drones. There's something really uncomfortable to me about drones in how, you know, compare with Dan Carlin hardcore history, with Genghis Khan. There's something impersonal about what drones are doing, where it moves you away from the actual destruction that you're achieving, where I worry that our ability to encode the ethics into these systems will go wrong in ways we don't expect. And so, I mean, folks at the UN talk about, well, you have these automated drones that make, that drop bombs over a particular area.
Starting point is 01:47:48 So the bigger and bigger the area is over which you allow an artificial intelligence system to make a decision to drop the bombs, the weirder and weirder it gets. There's some line. Now, presumably, if there's like three tanks that you would like to destroy with a drone, it's okay for an AI system to say, I would like to destroy those three.
Starting point is 01:48:04 Like, I'll handle everything, just give me the three tanks. But now this makes me uncomfortable as well, because I think – I'm opposed to most wars, but military is military and they try to get the job done. Now what if we expand that to 10, 20, 100 tanks, where you now let an AI system drop bombs over very large areas? How can that go wrong? And that's terrifying, and there's practical engineering solutions to that.
Starting point is 01:48:34 Oversight. And that's something that engineers sit down. There's an engineering ethic where you encode and you have meetings of how do we make this safe? That's what you worry about. The thing that keeps me up at night is the 40,000 people that die every year in auto crashes. Like that's – I worry about not – you have to understand like I worry about the future of AGI taking over.
Starting point is 01:48:59 But that's not as large – AGI? AGI, artificial general intelligence. That's kind of the term that people have been using for this. But maybe because I'm in it, I worry more about the 40,000 people that die in the United States and the 1.2 million that die every year from auto crashes. There's something that is more real to me about the death that's happening now that could be helped. And that's the fight. But, of course, if this threat becomes real, then that's a serious threat to humankind.
Starting point is 01:49:36 And that's something that should be thought about. I just worry that... I worry also about the AI winter. So I mentioned there's been two winters, in the 70s and the 80s to 90s, when funding completely dried up. But more importantly, people just stopped getting into artificial intelligence and became cynical about its possibilities, because there was a hype cycle where everyone was really excited about the possibilities of AI. And then they realized, you know, five, ten years into the development, that we didn't actually achieve anything. It was just too far off. Too far off. Same as it was for virtual reality. For the longest time, virtual reality was something that was discussed, like,
Starting point is 01:50:17 even in the 80s and the 90s, but it just died off. Nobody even thought about it. Now it's come back to the forefront when there's real AI – or, excuse me, real virtual reality – that you can use, like HTC Vives or things along those lines, where you can put these helmets on and you really do see these alternative worlds that people have created in these video games, and you realize there's a practical application for this stuff because the technology has caught up with the concept. Yeah. I actually don't know where people stand on VR.
Starting point is 01:50:50 We do quite a bit of stuff with VR for research purposes for simulating robotic systems. But I don't know where the hype is. I don't know if people calm down a little bit on VR. So there was a hype in the 80s and 90s, I think. I think it's ramped up quite a bit. What is the other one, the Oculus Rift? What other one? Just those two?
Starting point is 01:51:10 Those are the main ones, and there's other headsets that you can work with and use. Yeah, and there's some you can use just with a Samsung phone, correct? Yeah, and the next generation, which is in the next year or two, are going to be all standalone systems. So there's going to be an Oculus Rift coming out you don't need a computer for at all. So the ultimate end fear, end game fear, the event horizon of that is the Matrix, right? That's what people are terrified of, of some sort of a virtual reality world where you don't exist in the physical sense anymore. They just plug something into your brain stem just like they do in the Matrix and you're just locked into this artificial world. Is that terrifying to you? That seems to be less terrifying than AI killing all of humankind.
Starting point is 01:51:51 Well, it depends. I mean, what is life? That's the real question, right? If you only exist inside of a computer program, but it's a wonderful program, and whatever your consciousness is, and we haven't really established what that is, right? We don't, we don't, I mean, there's a lot of really weird hippie ideas out there about what consciousness is. Your body is just like an antenna man. And it's just like tuning into consciousness and consciousness is all around you.
Starting point is 01:52:16 It's Gaia. It's the mother earth. It's the universe itself. It's God. It's love. Okay. Maybe. I don't know.
Starting point is 01:52:23 But if you could take that, whatever the fuck it is, and send it in a cell phone to New Zealand, is that where your consciousness is now? Because if we figure out what consciousness is and get it to the point where we can turn it into a program or duplicate it, I mean, that sounds so far away. But if you went up to someone from 1820 and said, hey man, one day I'm going to take a picture of my dick and I'm going to send it to this girl. She's going to get it
Starting point is 01:52:51 on her phone. They'd be like, what the fuck are you talking about? A photo? What do you mean? What's a photo? Oh, it's like a picture, but you don't draw it. It's perfect. It looks exactly like that. It's in HD. And I'm going to make a video of me taking a shit and I'm going to send it to everyone. They're like, what the fuck? That's not even possible.
Starting point is 01:53:10 Get out of here. That is essentially – you're capturing time. You're capturing moments in time at a – not a very crude sense, but a crude sense in terms of comparing it to the actual world in the moment where it's happening. Like here, you and I are having this conversation. We're having it in front of this wooden desk, this paper in front of you. To you and I, we have access to all the textures, the sounds. We can feel the air conditioning. We can look up. We can see the ceiling.
Starting point is 01:53:45 We got the whole thing in front of us because we're really here. But to many people that are watching this on YouTube right now, they're getting a minimized, crude version of this that's similar. But it feels real. It feels pretty real. It's pretty close. It's pretty close. So, I mean, I've listened to your podcast for a while. When I listen to your podcast,
Starting point is 01:54:11 it feels like I'm sitting in with friends listening to a conversation. So it's not as intense as, for example, Dan Carlin's Hardcore History, right, where the guy's like talking to me about the darkest aspects of human nature. His show is so good, I don't think you can call it a podcast. It's not a podcast. It's an experience. Yeah. You're there.
Starting point is 01:54:31 I was hanging out with him and Genghis Khan. And World War I, World War II. Painfotainment is an episode he had where he talks about very dark ideas about our human nature and our desire to observe the torture and suffering of others. There's something really appealing to us. He has this whole episode about how throughout history we liked watching people die. And there's something really dark. He's saying that if somebody streamed something like that now,
Starting point is 01:55:03 it would probably get hundreds of millions of views. Yeah, it probably would. And we're protecting ourselves from our own nature because we understand the destructive aspects of it. That's why YouTube would pull something like that. If you tied a person in between two trucks and pulled them apart and put that on YouTube, it would get millions of hits. But YouTube would pull it because we've decided as a society collectively that those kind of images are gruesome and terrible for us. But nevertheless, that experience of listening to his podcast slash show, it feels real. Just like VR – for me, there's strongly real aspects to it, where I'm not sure, if the VR technology gets much better: if you had a choice, do you want to live your life in VR?
Starting point is 01:55:49 You're going to die just like you would in real life, meaning your body will die. You're just going to hook up yourself to a machine like it's a deprivation tank. And just all you are is in VR and you're going to live in that world. Which life would you choose? Would you choose a life in VR or would you choose a real life? That was the guy's decision in The Matrix, right? The guy decided in The Matrix he wanted to be a special person in The Matrix. He was eating that steak, talking to the guys, and he decided he was going to give up.
Starting point is 01:56:19 Remember that? Yeah. So what decision would you make? What is reality if it's not what you're experiencing? If you're experiencing something but it's not tactile in the sense like you can't drag it somewhere and put it on a scale and take a ruler to it and measure it. But in the moment of being there, it seems like it is. What is missing? What is missing?
Starting point is 01:56:40 Well, it's not real. Well, what is real then? What is real? Well, that's the ultimate question in terms of, like, are we living in a simulation? That's one of the things that Elon brought up when I was talking to him, and this is one thing that people have struggled with. If we are one day going to come up with an artificial reality that's indiscernible from reality in terms of emotions, in terms of experiences, feel, touch, smell, all of the sensory input that you get from the regular world.
Starting point is 01:57:11 If that's inevitable, if one day we do come up with that, how are we to discern whether or not we have already created that and we're stuck in it right now? That we can't. We can't. There's a lot of philosophical arguments for that, but it gets at the, yeah, the nature of reality. It's, I mean, it's fascinating because we're, okay, we're totally clueless about what it means to be real. What it means to exist.
Starting point is 01:57:35 To exist. So consciousness for us, I mean, it's incredible. You could like look at your own hand. Like, I'm pretty sure I'm on the Joe Rogan Experience podcast. I'm pretty sure this is not real. I'm imagining all of it. There's a knife in front of me. I mean, it's surreal, and I have no proof that it's not fake. And those kinds of things actually come into play with the way we think about artificial intelligence too. Like, what is intelligence, right? It seems like we're easily impressed by algorithms and robots we create that appear to have intelligence, but we still don't know what intelligence is. And you have to laugh, that we are somehow or another more important
Starting point is 01:58:25 than some sort of silicon-based thing that we create that does everything that we do but far better. Yeah, I think if I were to take a stand, a civil rights stand – I hope, I'm young, I'll one day run for president on this platform, by the way, defending the rights – well, I can't because I'm Russian, but maybe they'll change the rules – that robots will have rights. And robots' lives matter. And I actually believe that we're going to have to start struggling with the idea of
Starting point is 01:59:01 how we interact with robots. I've seen too often the abuse of robots, and not just the Boston Dynamics ones – literally, you leave people alone with a robot and the dark aspects of human nature come out, and it's worrying to me.
Starting point is 01:59:16 I would like a robot that spars, but can only move at like 50% of what I can move at, so I can fuck it up. Yeah. You'd be able to practice really well. You would develop some awesome sparring instincts. Yeah, that robot. But there would still be consequences. Like if you did fuck up and you got lazy and it leg kicked you and you didn't check it, it would hurt. I would love to see a live stream of that session because
Starting point is 01:59:44 the – you know, there's so many ways. I mean, I practice on a dummy. There are aspects to a dummy that are helpful. Yeah, in terms of positioning and where your stance is and technique. Yeah, there's something to it. I can certainly see that going wrong in ways where a robot might not respect you tapping. Yeah, or a robot decides to beat you to death. It's tired of you fucking it up every day, and one day you get tired. Or what if
Starting point is 02:00:11 you sprain your ankle and it gets on top of you and mounts you and just starts blasting you in the face? Yeah, just a heel hook or something. Right, you'd have to be able to say stop. Well then, no, you're going to have to use your martial art to defend yourself. Yeah, right. Because if you make it too easy for the robot to just stop anytime, then you're not really going to learn. Like, one of the consequences of training, if you're out of shape, is if you get tired, people fuck you up, and that's incentive for you to not get tired. There's so many times that I would be in the gym doing strength and conditioning, and I think about moments where I got tapped, where guys caught me in something and I was exhausted
Starting point is 02:00:47 and I couldn't get out of the triangle. I'm like, shit, and I just fucking, ah, I just really push on the treadmill or, you know, push on the, you know, airdyne bike or whatever it was that I was doing, thinking about those moments of getting tired. Yeah, those moments. That's what I think about when I do, like, sprints and stuff, was the feeling of competition, those tired. Yeah, those moments. That's what I think about when I do like sprints and stuff. Yeah. Was the feeling of competition, those nerves.
Starting point is 02:01:08 Yeah. Of stepping in there. It's really hard to do that kind of visualization. Yeah. It's effective though. And the feeling of consequences to you not having any energy. So you have to muster up the energy because if you don't, you're going to get fucked up
Starting point is 02:01:26 or something bad is going to happen to someone you care about or something's going to happen to the world. Maybe you're a superhero. You're saving the world from the robots. That's right. To go back to what we're talking about, I'm sorry to interrupt you,
Starting point is 02:01:40 but just to bring this all back around, what is this life and what is consciousness and what is this experience? And if you can replicate this experience in a way that's indiscernible, will you choose to do that? Like if someone says to you, hey, Lex, you don't have much time left, but we have an option. We have an option and we can take your consciousness as you know it right now, put it into this program. You will have no idea that this has happened. You're going to close your eyes. You're going to wake up.
Starting point is 02:02:12 You're going to be in the most beautiful green field. There's going to be naked women everywhere. Feasts everywhere you go. There's going to be just picnic tables filled with the most glorious food. You're going to drive around in a Ferrari every day and fly around in a plane. You're never going to die. You're going to have a great time. Or take your chances.
Starting point is 02:02:30 See what happens when the lights shut off. Well, first of all, I'm a simple man. I don't need multiple women. One is good. I'm romantic in that way. That's what you say. But that's in this world. This world, you've got incentive to not be greedy.
Starting point is 02:02:48 In this other world where you can breathe underwater and fly through the air and, you know. No, I believe that scarcity is the fundamental ingredient of happiness. So if you give me 72 virgins or whatever it is and – You just keep one slut? Not a slut. She – a requirement, you know, somebody intelligent and interesting who enjoys sexual intercourse. Well, not just enjoys sexual intercourse – a person. Well, that, and keeps things interesting. Lex, we can engineer all this into your experience. You don't need all these different women. I get it. I understand.
Starting point is 02:03:26 We've got this program for you. Don't worry about it. Okay, you want one more. And a normal car, like maybe a Saab or something like that. Nothing crazy. Yeah. Right? Yeah.
Starting point is 02:03:35 You're a simple man. I get it. No, no, no. But you need to. You want to play chess with someone who could beat you every now and then, right? Yeah. But not just chess. So engineer some flaws.
Starting point is 02:03:44 Like she needs to be able to lose her shit every once in a while. Yeah, the Matrix. The girl in the red dress. Which girl in the red dress? It comes right here. Remember, he goes, like, did you notice the girl in the red dress? It's like the one that catches his attention.
Starting point is 02:03:55 I don't remember this. This is right at the very beginning when he's telling him what the matrix is. She walks by right here. Oh, there she is. Ba-bam. That's your girl. The guy afterwards is like, I engineered that.
Starting point is 02:04:05 I'm telling you, it's just not. It's not. Well, yeah, but then I have certain features. Like, I'm not an iPhone guy – I like Android. So that may be an iPhone person's girl. But that's nonsense. So if an iPhone came along that was better than Android, you wouldn't want to use it? No, it's just that my definition of better is different.
Starting point is 02:04:25 I know for me happiness lies in Android phones. Yeah, Android phones. Close connection with other human beings who are flawed but interesting, who are passionate about what they do. Yeah, but this is all engineered into your program. Yeah, yeah. I'm requesting features here. Yeah, you're requesting features. But why Android phones?
Starting point is 02:04:49 Is that like, I'm a Republican. Well, I'm a Democrat. I like Androids. I like iPhones. Is that what you're doing? Are you getting tribal? No, I'm not getting tribal. Totally not tribal.
Starting point is 02:04:58 I was just representing. I figured the girl in the red dress just seems like an iPhone as a feature set. What? The kind of features I'm asking for. She's too hot? Yeah, and it seems like she's not interested in Dostoevsky. How would you know? That's so prejudiced of you just because she's beautiful and she's got a tight-fitting dress?
Starting point is 02:05:19 That's true. I don't know. That's very unfair. How dare you? You sexist son of a bitch. I'm sorry. Actually, that was totally... She probably likes Nietzsche and Dostoevsky and Camus and Hesse.
Starting point is 02:05:29 She did her PhD in astrophysics, possibly. Yeah, I don't know. That's... We're talking about all the trappings. Look at that. Bam. I'll take her all day. iPhone, Android.
Starting point is 02:05:42 I'm not involved in this conversation. I'll take her if she's a Windows phone. How about that? I don't give a fuck. Windows phone? Yeah. Oh, come on now. I'll take her if she's a Windows phone.
Starting point is 02:05:50 I'll go with a flip phone from the fucking early 2000s. I'll take a Razr phone, a Motorola Razr phone with like 37 minutes of battery life. But we're talking about all the learned experiences and preferences that you've developed in your time here on this actual real earth, or what we're assuming is the actual real earth. But how are we – I mean, if you really are taking into account the possibility that one day something, someone – whether it's artificial intelligence that figures it out or we figure it out – engineers a world, some sort of a simulation that is just as real as this world, where it's impossible to discern – not only is it impossible to discern, people choose not to discern anymore.
Starting point is 02:06:46 Right. Because it's so – why bother? Why bother discerning? That's a fascinating concept to me. But I think that world – not to sound hippie or anything, but I think we live in a world that's pretty damn good. It is pretty good. But improving it with such fine ladies walking around is not necessarily a delta that's positive. Okay, but that's one aspect of the improvement. What about improving it
Starting point is 02:07:11 in this new world – there's no drone attacks in Yemen that kill children, there's no murder, there's no rape, there's no sexual harassment, there's no racism. All the negative aspects of our current culture are engineered out. I think a lot of religions have struggled with this. And of course, I would say I would want a world without that. But part of me thinks that our world is meaningful because of the suffering in the world. Right. That's a real problem, isn't it? That is a fascinating concept. It's almost impossible to ignore. Do you appreciate love because of all the hate? You know, like if you have a hard time finding a girlfriend and just no one's compatible – and I'm single, by the way,
Starting point is 02:07:59 holla, letting the ladies know. But if you do have a hard time connecting with someone, and then you finally do connect with someone after all those years of loneliness, and this person's perfectly compatible with you, how much more will you appreciate them than a guy like Dan Bilzerian who's flying around in a private jet banging 10s all day long?
Starting point is 02:08:20 Yeah, or is it... Maybe he's fucking drowning in his own sorrow. Maybe he's got too much prosperity. Maybe this – you know? Yeah, we have that with social networks too, the people that – I mean, you're pretty famous. The amount of love you get is huge. It might be, because of the overflow of love, difficult to appreciate the more genuine little moments of love.
Starting point is 02:08:48 It's not for me. No. I spent a lot of time thinking about that. And I also spent a lot of time thinking about how titanically bizarre my place in the world is. I mean, I think about it a lot. And I spent a lot of time being poor and being a loser. I mean, my childhood was not the best. I went through a lot of struggle when I was young that I cling to like a safety raft. You know, I don't ever think there's
Starting point is 02:09:18 something special about me. And I try to let everybody know that anybody can do what I've done. You just have to keep going. It's like 99% of this thing is just showing up and keep going. Keep improving, keep working at things, and keep going. Put the time in. But the interesting thing is – I actually, a couple days ago, went back to your first podcast and listened to it. You haven't really changed much. So you were – I mean, the audio got a little better.
Starting point is 02:09:47 But just like the genuine nature of the way you interact hasn't changed. And that's fascinating because, you know, fame changes people. Well, I was already famous then. Oh, in a different way. Yeah, I was already famous from Fear Factor. I already had stand-up comedy specials. I'd already been on a sitcom. I wasn't as famous as I am now, but I understood what it is.
Starting point is 02:10:18 I'm a big believer in adversity and struggle. I think they're very important for you. It's one of the reasons why I appreciate martial arts. It's one of the reasons why I've been drawn to it as a learning tool, not just as something where it's a puzzle that I'm fascinated to try to figure out how to get better at the puzzle.
Starting point is 02:10:36 And martial arts is a really good example, because you're never really the best, especially when there's just so many people doing it. You're always going to get beat by somebody. And I was never putting that kind of time into it as an adult, outside of my Taekwondo competition. I was never really putting all day, every day into it, like a lot of the people that I would train with would.
Starting point is 02:10:56 And so I'd always get dominated by the really best guys. So there's a certain amount of humility that comes from that as well. But there's a struggle in that you're learning about yourself and your own limits, and the limits of the human mind and endurance, and just not understanding all the various interactions of techniques. There's humility to that. I've always described martial arts as a vehicle for developing your own human potential, but I think marathon running has similar aspects. You figure out a way to keep pushing, and push through with the control of your mind and your desire, and overcoming adversity. I think overcoming adversity is critical for humans. We have this set of reward systems that are designed to reward us for overcoming.
Starting point is 02:11:56 For overcoming obstacles, for overcoming relationship struggles, for overcoming physical limitations. And those rewards are great. And they're some of the most amazing moments in life when you do overcome. And I think this is sort of engineered into the system. So for me, fame is almost like a cheat code. It's like, you don't really want it. Don't dwell on that, man. That's like a free buffet. You want to go hunt your own food. You want to make your own fire. You want to cook it yourself and feel the satisfaction. You don't want people feeding you grapes while you lie down. What is the hardest thing?
Starting point is 02:12:36 So you talk about challenge a lot. What's the hardest thing you've... When have you been really humbled? Martial arts, for sure. The most humbling. Yeah, from the moment I started. I mean, I got really good at Taekwondo, but even then I'd still get the fuck beaten out of me by my friends, my training partners. Especially when you're tired and you're rotating partners and guys are bigger than you. It's just humbling, you know. Martial arts are very humbling. Yeah. So that. And I got to call you out on something.
Starting point is 02:13:08 So you talk about education systems sometimes. I've heard you say it's a little broken, in high school and so on. I'm not really calling you out. I just want to talk about it, because I think it's important. And as somebody who loves math, you talked about how in your own journey, school didn't give you passion and value. Well, you can maybe talk to that.
Starting point is 02:13:35 But for me, and maybe I'm sick in the head or something, but for me, math was exciting the way martial arts were exciting for you, because it was really hard. I wanted to quit. And the idea with education that I have, that seems to be flawed nowadays a little bit, is that we want to make education easier. That we want to make it, you know, more accessible and so on.
Starting point is 02:14:01 Accessible, of course, is great. And those are all good goals. But you kind of forget in that, that it's supposed to also be hard. And teachers, just the way your wrestling coach, if you quit, if you say, I can't do it anymore, if you come up with some kind of excuse, your wrestling coach looks at you once and says, get your ass back on the mat. I wish math teachers did the same. It's almost like cool now to say, ah, math sucks, math's not for me, or
Starting point is 02:14:31 science sucks, this teacher's boring. I think there's room for some culture where it says, no, no, no. If you just put in the time and you struggle, then that opens up the universe to you. Like, whether you become a Neil deGrasse Tyson
Starting point is 02:14:46 or the next Fields Medal winner in mathematics. I would not argue with you for one second. I would also say that one of the more beautiful things about human beings is that we vary so much. And that one person who is just obsessed with playing the trombone, and to me, I don't give a fuck about trombones, but that's okay. I can't be obsessed about everything. Some people love golf, and they just want to play it all day long. I've never played golf a day in my life, except miniature golf
Starting point is 02:15:16 and just fucking around. But that doesn't mean it's bad or good. And I think there's definitely some skills that you learn from mathematics that are hugely significant if you want to go into the type of fields that you're involved in. For me, it's never been appealing. But it's not that it was just difficult. It's just, for whatever reason, who I was at that time, in that school, with those teachers, having the life experience that I had, that was not what I was drawn to. What I was drawn to was literature. I was drawn to reading. I was drawn to stories. I was drawn to possibilities and creativity.
Starting point is 02:15:58 I was drawn to all those things. You were an artist a bit, too. Yeah. I used to want to be a comic book illustrator. That was a big thing when I was young. I was really into comic books. I was really into traditional comic books, and also a lot of the horror comics from the 1970s, the black and white ones, like Creepy and Eerie. Did you ever see those things? Creepy and Eerie?
Starting point is 02:16:22 Like, black and white? Yeah, they were a comic book series that existed way back in the day. They were all horror, and they had these really cool illustrations and these wild stories. But it was comic books, and they were all black and white. That's Creepy and Eerie. Oh, that's the actual name? Yeah, Eerie and Creepy were the names. So that was from... What year was that? It says September, but it doesn't say what year. But I used to get these when I was a little kid, man. I was like eight, nine years old, in the 70s. Good and Evil. Yeah, they were my favorite. Like, that's a cover of them. And they even have, like, covers that were done by Frank Frazetta, Boris Vallejo, just really cool shit.
Starting point is 02:17:09 And I was a fan. I loved those when I was little. I was always really into horror movies, and really into, like, Bram... Like, look at this werewolf one. That was one of my favorite ones. That was a crazy werewolf that was, like, on all fours. Who's the hero, usually? Superhero or regular? Everybody dies in those.
Starting point is 02:17:26 That's the beautiful thing about it. Everybody gets fucked over. That was the thing that I really liked about them. Nobody made it out alive. There was no one guy who figured it out and rescued the woman and they rode off in the sunset. You'd turn the corner and there'd be a fucking pack of wolves with glowing eyes waiting to tear everybody apart and that'd be the end of the book. I was just really into the illustrations. I found them fascinating.
Starting point is 02:17:51 I love those kinds of horror movies, and I love those kinds of illustrations. So that's what I wanted to do when I was young. Yeah, I think the education system is probably, we talked about creativity, it's probably not as good at inspiring and feeding that creativity. Because I think math and wrestling can be taught systematically. I think creativity is something... well, actually, I know nothing about it. So I think it's harder to take somebody like you when you're young and inspire you to pursue that fire, whatever's inside.
Starting point is 02:18:39 And you're like, fuck that. I'm going to figure out a way to not get a job selling washing machines. Some of the best motivations that I've ever had have been terrible jobs. Because you have these terrible jobs, you go, okay, fuck that. I'm going to figure out a way to not do this. You know, and whether you want to call it ADD or ADHD or whatever it is that makes kids squirm in class. I didn't squirm in every class.
Starting point is 02:19:03 I didn't squirm in science class. I didn't squirm in interesting subjects. There were things that were interesting to me that I would be locked in and completely fascinated by. And there were things where I just couldn't wait to run out of that room. And I don't know what the reason is, but I do know that a lot of what we call our education system is engineered for a very specific result. And that result is, you want to get a kid who can sit in class and learn, so that they can sit in a job and perform. And for whatever reason, I mean, I didn't have the ideal childhood. Maybe if I did, I would be more inclined to lean that way. But I didn't want to do anything like that. Like, I couldn't wait to get the fuck out
Starting point is 02:19:51 of school, so I didn't ever have to listen to anybody like that again. And then just a few years later, I mean, you graduate from high school when you're 18. When I was 21, I was a stand-up comic. And I was like, I found it. This is it. Good. There's an actual job that nobody told me about where you could just make fun of shit
Starting point is 02:20:09 and people go out and they pay money to hear you create jokes and routines and bits. Really? You weren't terrified? Of stand-up? No, getting on stage and...
Starting point is 02:20:20 Oh, I was definitely nervous the first time. Probably more nervous than anything I've ever done. It seems harder than fighting, from my perspective. No, it's different. It's different.
Starting point is 02:20:28 The consequences aren't as grave, but that's one of the... Are they not? No. Like, embarrassment and not... You don't get pummeled. I mean, you could say, like, emotionally it's probably more devastating, or as devastating. But, man, losing a fight, it fucks you up for a long time. You feel like shit for a long time. But then you win, and you feel amazing for a
Starting point is 02:20:52 long time, too. When you kill on stage, you only feel good for like an hour or so, and that goes away. It feels normal. It's just normal. It's this life, you know. But I think that it prepared me, competing in martial arts. The fear of that, and then how hard it is to stand opposite another person who's the same size as you, who's equally well trained, who's also a martial arts expert. And they ask you, are you ready? Are you ready? You bow to each other, and then they go, fight. And then you're like, fuck, here we go. That, to me, probably was one of the best... And to do that from the time I was 15 until I was 21 was probably the best preparation for anything that was difficult to do, because it was so fucking scary. And then to go from that into stand-up, I think it prepared me for stand-up, because I was already used to doing things that were scary. And now I seek scary things out.
Starting point is 02:21:48 I seek difficult things out. Like picking up the bow and learning that. Yes, archery, which is really difficult. I mean, that's one of the reasons why I got attracted even to playing pool. Pool is very difficult. It's very difficult to control your nerves in high-pressure situations. Yeah. So there's some benefits to that. But it goes back to what you were saying earlier. How much of all this stuff, like when you're saying that scarcity,
Starting point is 02:22:16 there's real value in scarcity, and there's real value in struggle. How much of all this is just engineered into our human system, that has given us the tools and the incentive to make it to 2018 as the human species? Yeah, I think it's, whoever the engineer is, whether it's God or nature or whatever, I think it's engineered in somehow. You get to think about that when you try to create an artificial intelligence system. When you imagine what's a perfect system for you, we talked about this with the lady, what's the perfect system for you? If you had to really put it down on paper and engineer the experience of your life, you start to realize it actually looks a lot like your current life. So this is the problem that companies like Amazon are facing in trying to
Starting point is 02:23:11 create Alexa. What do you want from Alexa? Do you want a tool that says what the weather is? Or do you want Alexa to say, Joe, I don't want to talk to you right now? I have Alexa where you have to work her over. Alexa, come on. What did I do? I'm sorry. Listen, if I was rude, I was insensitive, I was tired,
Starting point is 02:23:34 the commute was really rough. And she should be like, I'm seeing somebody else. Alexa. Do you remember Avatar Depression? The movie Avatar, and depression as a psychological effect after the movie somehow? Yeah, it was a real term that people were using, that psychologists were using. Because people would see the movie Avatar, which I loved. A lot of people said, oh, it's fucking Pocahontas with blue people.
Starting point is 02:23:59 To those people, I say, fuck off. You want to talk about suspension of disbelief? To me, that movie was the ultimate suspension of disbelief. I love that movie. I fucking love it. I know James Cameron's working on like 15 sequels right now, all simultaneously. I wish that motherfucker would dole them out. He's like a crack dealer that gets you hooked once, and then you're just waiting outside in the cold, shivering, for years.
Starting point is 02:24:29 Avatar Depression was a psychological term that psychologists were using to describe this mass influx of people that saw that movie and were so enthralled by the way the Na'vi lived on Pandora that they came back to this stupid world. Didn't want to leave. They wanted to be like the blue guy in Avatar. And also, there was a mechanism in
Starting point is 02:24:50 that film where this regular person became a Na'vi. He became it through the Avatar. And then eventually that tree of life or whatever it was, they transferred his essence into this creation, this Avatar and he became one of them.
Starting point is 02:25:06 He absorbed their culture. And it was very much like our romanticized versions of the Native Americans, that they lived in a symbiotic relationship with the earth. They only took what they needed. They had a spiritual connection to their food and to nature. Just their existence was noble. And it was honorable, and it wasn't selfish, and it was powerful, and it was spiritual. And we're missing these things. And I think we are better at romanticizing them and craving them as opposed to living them.
Starting point is 02:25:45 I mean, you look at movies like Happy People, with the... Life in the Taiga. Life in the Taiga, yeah. I mean, I'm Russian, so... Werner Herzog's film. Yeah. Amazing movie.
Starting point is 02:25:55 Part of you wants to be like, well, I want to be out there in nature, focusing on simple survival, setting traps for animals, cooking some soup, a family around you, and just kind of focusing on the basics. And I'm the same way. Like, I go out, you know, hiking, and I go out in nature. I would love to pick up hunting. I crave that. But if you just put me in the forest and said, here, I'm taking your phone away and you're staying here...
Starting point is 02:26:26 That's it. You're never going to return to your Facebook and your Twitter and your robots. I don't know if I'll be so romantic about that notion anymore. I don't know either, but I think that's also the genie in the bottle discussion. I think that genie's been out of the bottle for so long. You'd be like, but what about my Facebook? What if I got some messages? Let me check my email real quick.
Starting point is 02:26:51 No, no, no. We're in the forest. There's no Wi-Fi out here. No Wi-Fi ever? What the fuck? How do people get their porn? There's no porn. No.
Starting point is 02:27:01 That's another understudied... again, I'm not an expert, but the impact of internet pornography on culture. Oh, yeah. It's significant, and also ignored to a certain extent. And if not ignored, definitely purposefully left out of the conversation. Yeah. When I was a PhD student, a person from Google came to give a tech talk, and he opened by saying, 90% of you in the audience have, this month, Googled a pornographic term in our search engine.
Starting point is 02:27:33 Googled a pornographic term in our search engine. And it was really, it's a great opener because people were just all really uncomfortable. Because we just kind of hide it away into this. But it certainly has an impact. But I think there's a suppression aspect to that, too, that's unhealthy. We have a suppression of our sexuality because we think that somehow or another it's negative. Right.
Starting point is 02:27:58 And especially for women. I mean, for women... Like, a man who is a sexual conqueror is thought to be a stud, whereas a woman who seeks out multiple desirable sexual partners is thought to be troubled. There's something wrong with her. You know, they're criticized. They use terms like we used earlier, like slut or whore. You call a man a male slut, they'll start laughing. Yup, that's me, dude. Men don't give a fuck about that. It's not stigmatized. But somehow or another, through our culture, it's
Starting point is 02:28:36 stigmatized for women. And then the idea of masturbation is stigmatized. All these different things where the Puritan roots of our society start showing, and our religious ideology starts showing, when we discuss the issues that we have with sex and pornography. Right. And for me, this is something I think about a little bit, because my dream is to create an artificial intelligence, a human-centered artificial intelligence system, that provides a deep, meaningful connection with another human being. And
Starting point is 02:29:11 you have to consider the fact that pornography or sex dolls will be part of that journey somehow in society. The dummy they'll be using for martial arts will likely be an outgrowth of sex robots. And we have to think about what's the impact of those kinds of robots on society.
Starting point is 02:29:33 Well, women in particular are violently opposed to sex robots. I've read a couple of articles written by women about sex robots and the possibility of future sex robots. And I shouldn't say violently, but it's always negative. The idea is that men would want to have sex with some beautiful thing that's programmed to love them, as opposed to earning the love of a woman. But you don't hear that same interpretation from men. From men, it seems to be a thought that maybe it's kind of gross, but also that it's inevitable. And then there's this sort
Starting point is 02:30:13 of nod to it, like, how crazy would that be if you had the perfect woman? Like the woman in the red dress in The Matrix. Yeah. She comes over to your house, and she's perfect. Because you're not thinking about the alternative, which is a male robot doll, which will now be able to satisfy your girlfriend or wife better than you. I think you'll hear from guys a lot more then. Maybe. Or maybe you'll start competing. Good luck with her.
Starting point is 02:30:39 She's fucking annoying. She's always yelling at me. Let her yell at the robot. He's not gonna care. Then that robot turns into a grappling dummy. Yeah. And maybe she can just go ahead and get fat with the robot. He's not even gonna care. Go ahead, just sit around, eat Cheetos all day, and scream at him. He's your slave. Good. I mean, it could work both ways, right? It could work the same way, that a woman would see a man that is interested in a sex robot as disgusting and pathetic, and a man could see the same thing in a woman that's interested in a sex robot.
Starting point is 02:31:17 Like, okay, is that what you want? You're some crude thing that just wants physical pleasure, and you don't even care about a real, actual emotional connection to a biological human being? Okay, well, then you're not my kind of woman anyway. Yeah. And if done well, those are the kinds of... In terms of threats of AI, to me, it can change the fabric of society. Because I'm old school, in the sense that I like monogamy, for example. Well, you say that because you don't have a girlfriend. So you're longing for monogamy. One is better than zero.
Starting point is 02:31:51 Well, no. The real reason I don't have a girlfriend is... it's fascinating. With people like you, actually, with Elon Musk, the time is a huge challenge. Because of how much of a romantic I am, because of how much I care about people around me, I feel like it's a significant investment of time. And also the amount of work that you do. I mean, if you're dedicated to a passion like artificial intelligence, the sheer amount of fucking studying and research and... And programming, too.
Starting point is 02:32:22 There are certain disciplines in which you have to... Certain disciplines require... Like Steven Pressfield talks about with writing, you can get pretty far with two, three hours a day. When you're programming, a lot of the engineering tasks just take up hours. It's just hard. Which is why, I mean, I may disagree with him on a bunch of things, but he's an inspiration, because I think he's a pretty good dad, right? And he finds the time for his sons while being probably an order of magnitude busier than I am. And it's fascinating to me how that's possible. Well, once you have children, I mean, there obviously are people that are bad dads. But once you have children, your life shifts in an almost indescribable way, because you're different.
Starting point is 02:33:11 It's not just that your life is different. When you have a child, like, there hasn't been a moment while we were having this conversation that I haven't been thinking about my children. Thinking about what they're doing, where they are. It's always running in the background. It's a part of life.
Starting point is 02:33:29 You're connected to these people that you love so much, and they rely on you for guidance and for warmth and affection. But how did your life have to change? Well, your life... you just change, man. When you see the baby, you change. When you start feeding them, you change. When you hold them, you change. When you hold their hand while they walk, you change. When they ask you questions, you change.
Starting point is 02:33:52 When they laugh and giggle, you change. When they smack you in the face and you pretend to fall down and they laugh, you change. You just change, man. You change. You become a different thing. You become a dad. So you almost can't help it. But some people do help it, though. That's what's sad. Some people resist it. I mean, I know people that
Starting point is 02:34:11 have been terrible, terrible parents. They'd just rather stay out all night and never come home, and they don't want to take care of their kids. And they split up with the wife or the girlfriend who's got the kid, and they don't pay child support. I mean, it's a really common theme, man. There's a lot of men out there that don't pay child support. That's a dark, dark thing. You have a child out there that needs food, and you're so fucking selfish you don't want to provide resources. Not only do you not want to be there for companionship, you don't want to provide resources to pay for the child's food. You don't feel responsible for it. I mean, that was my case when I was a kid. My dad didn't pay child support. And we were very poor. It's one of the reasons why we were so poor. And I know
Starting point is 02:34:56 other people that have had that same experience. So it's not everyone that becomes a father or that impregnates, I should say, a woman and becomes a father. And the other side is true, too. There's women that are terrible mothers for whatever reason. I mean, maybe they're broken psychologically. Maybe they have mental health issues. Whatever it is, there's some women that are fucking terrible moms, and it's sad. But it makes you appreciate women that are great moms so much more.
Starting point is 02:35:23 Yeah, when I see guys like you, the inspiration is – so I'm looking for sort of structural, what's the process to then fit people into your life? But what I hear is when it happens, you just do. You change. It doesn't always – but this is the thing, man. We're not living in a book. Right. We're not living in a movie.
Starting point is 02:35:42 It doesn't always happen. Like, you have to decide that you want it to happen, and you got to go looking for it. Because if you don't, you could just be older, right, and still alone. There's a lot of my friends that have never had kids, and now they're in their 50s. I mean, comedians, right? You have to be on the road a lot. Not just on the road, you have to be obsessed with comedy. It's got to be something where you're always writing new jokes, especially if you put out a special, right? Like, I just did a Netflix special.
Starting point is 02:36:09 It's out now. So I don't know, I really have like a half hour of new material. That's it. It's great, by the way. Strange Times. Thank you very much. This is the first special of yours I've watched.
Starting point is 02:36:19 It was actually really weird. Sorry to go on a tangent. But I've listened to you quite a bit, but I've never watched you doing comedy, and it was so different. Because, like, here you're just like improv, you're like a jazz musician. Here it's like a regular conversation. The stand-up special, it was clear that, like, everything is perfect, the timing. It's like watching you do a different art almost. It's kind of interesting. It's like a song or something. There's some riffing to
Starting point is 02:36:50 it, there's some improvisation to it, but there's also a very clear structure to it. But it's so time-intensive, and you've got to be obsessed with it to continue to do something like that. So for some people, the travel and the road, that takes priority over all things, including relationships, and then you never really settle down. And so you never have a significant relationship with someone that you could have a child with. And I know many friends that are like that. And I know friends that have gotten vasectomies because they don't want it. They like this life. And there's nothing wrong with that either. I always was upset by this notion that in order to be a full and complete adult, you have to have a child.
Starting point is 02:37:34 You have to be a parent. And I think, even as a parent, where I think it's probably one of the most significant things in my life, I reject that notion. I think you could absolutely be a fully developed person, an amazing influence in society, an amazing contributor to your culture and your community, without ever having a child, whether you're a man or a woman. It's entirely possible. And the idea that it's not is silly. Like, we're all different in so many different ways, you know, and we contribute in so many different ways. Like, there's going to be people that are obsessed with mathematics. There's going to be people that are obsessed with literature. There's going to be people that are obsessed with music. And they don't all have to be the same fucking person, because you really don't have enough time for it to be the same person,
Starting point is 02:38:14 you know. And there's going to be people that love having children. They love being a dad or love being a mom. And there's going to be people that want nothing to do with that, and they get snipped early, and they're like, fuck off, I'm going to smoke cigarettes and drink booze, and I'm going to fly around the world and talk shit. And those people are okay, too. It's the way we interact with each other that's most important. That's what I think. The way we form bonds and friendships, the way we contribute to each other's lives, the way we find our passion and create, those things are what's really important. Yeah, but there's also an element, just looking at my parents, they're still together. They got together, I mean, it's standard to get together when you're like 20 or 23, whatever, young.
Starting point is 02:39:06 And there is an element there where you don't want to be too rational. You just want to dive in. Should you be an MMA fighter? I'm in academia now, so I'm a research scientist at MIT. The pay is much, much lower than all the offers I'm getting nonstop. Is it rational? I don't know. But your passion is doing what you're doing currently.
Starting point is 02:39:31 Yeah. But it's like... What are the other offers? Like, what kind of other jobs? Are they appealing in any way? Yeah, they're appealing. So I'm making a decision that's similar to actually getting married, which is... So the offers are, well, I shouldn't call them out, but Google, Facebook, the usual AI research, pretty high positions.
Starting point is 02:39:55 And there's just something in me that says the edge, the chaos of this environment at MIT, is something I'm drawn to. It doesn't make sense. So I can do what I'm passionate about in a lot of places. You just kind of dive in. And I have a sense that a lot of our culture creates that momentum. You just kind of have to go with it. And that's why my parents got together. Like a lot of people... I mean, a lot of couples wouldn't be together if they weren't kind of culturally forced to be together, and divorce was such a negative thing.
Starting point is 02:40:31 And they grew together and created a super happy connection. So I'm a little afraid of over-rationality about choosing the path of life. So you're saying like monogamy doesn't – or, not monogamy, relationships don't always make sense. They don't have to make sense. You know what? I think I'm a big believer in doing what you want to do. And if you want to be involved in a monogamous relationship, I think you should do it. But if you don't want to be involved in one, I think you should do that, too.
Starting point is 02:40:59 I mean, if you want to be like a nomad and travel around the world and just live out of a backpack, I don't think there's anything wrong with that. As long as you're healthy and you survive and you're not depressed and you're not longing for something that you're not participating in. But I think the problem is when you're doing something you don't want to be doing. It brings me back to, was it Thoreau's quote, I guess? I always fuck up who made this. What? I think I know which one you're going to say. Yeah, most men live lives of silent desperation. That's real, man. That's real. That's real.
Starting point is 02:41:32 That's what you don't want. I think it's Thoreau, right? You don't want silent desperation. Yeah, it is, right? I fucking love that quote because I've seen it. I've seen it in so many people's faces. And that's one thing I've managed to avoid. And I don't know if I avoided that by luck or just by the fact I'm stupid and I just follow my instincts whether they're right or wrong and I make it work. But this goes back to what we were discussing in terms of what is the nature of reality. Are we just finding these romanticized interpretations of our own biological needs and our human reward systems that's creating these beautiful visions of what is life and what is important? Poetry and food and music and all the passions and dancing and holding someone in your arms that you care for deeply. Are all those things just little biological tricks in order to keep on this very strange dance of human civilization, so that we can keep on creating new and better products that keep on moving innovation
Starting point is 02:42:42 towards this ultimate eventual goal of artificial intelligence, of giving birth to the gods? Yeah, giving birth to the gods. Yeah. So, you know, I did want to mention one thing, the one thing I really don't understand fully but I've been thinking about for the last couple years: the application of artificial intelligence to politics. Yeah. I've heard you talk about sort of government being broken, in the sense that one guy, one president, doesn't make any sense. I mean, people get hundreds of millions of likes on their Facebook pictures and Instagram. And we're always voting with our fingers every single day.
Starting point is 02:43:30 And yet for the election process, it seems that we're voting like once every four years. It feels like this new technology could bring about a world where the voice of the people can be heard on a daily basis, where you could speak about the issues you care about, whether it's gun control and abortion, all these topics that are so heatedly debated. It feels like there needs to be an Instagram for our elections. I agree. Yeah. And I think there's room for that. I've been thinking about how to write a few papers proposing different technologies. It just feels like the people that are playing politics are old school. The only problem with that is like the influencers, right? If you look at Instagram, I mean, should Nicki Minaj be able to decide how the world works because she's got the most followers?
Starting point is 02:44:22 Should Kim Kardashian? Like, who's influencing things and why? And you have to deal with the fickle nature of human beings. And do we give enough patience towards the decisions of these so-called leaders that we're electing, or do we just decide, fuck them, they're out, new person in? Because we have like a really short attention span when it comes to things, especially today. The news cycle is so quick. So Instagram might be a bad example, because – or Twitter – you start following Donald Trump, and you start to sort of idolize these certain icons. Do we necessarily want them to represent us? I was more thinking about the Amazon reviews model, recommender systems, or Netflix – the movies you've watched, Netflix learning enough about you to represent you in your next movie selection.
Starting point is 02:45:16 So in the kind of movies – like you, Joe Rogan, what are the kind of movies that you would like? The recommender systems, these artificial intelligence systems, learn based on your Netflix selections. That could be a deeper understanding of who you are than you're even aware of. And I think there's that element. I'm not sure exactly, but there's that element of learning who you are. Like, do you think drugs should be legalized or not? Do you think immigration – should we let everybody in or keep everybody out? All these topics where the
Starting point is 02:45:54 red and blue teams now have a hard answer – of course you keep all the immigrants out, or of course you need to be more compassionate. But for most people, it's really a gray area. And exploring that gray area, the way you would explore the gray area of Netflix – what is the next movie you're watching? Do you want to watch Little Mermaid or Godfather 2? That process of understanding who you are – it feels like there's room for that in our politics.
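The Netflix-style idea Lex describes here – a system learning your preferences from past choices and using people with similar taste to predict what you'd pick next – is usually built on collaborative filtering. Below is a minimal sketch of the user-based variant; the names and ratings are entirely made up for illustration and are not anything from the episode or from any real product.

```python
# A toy user-based collaborative filter: predict what a user would like
# by weighting other users' ratings by how similar their taste is.
# All users, movies, and scores here are hypothetical.

from math import sqrt

# Ratings on a 1-5 scale; movies a user hasn't watched are simply absent.
ratings = {
    "joe":   {"Godfather 2": 5, "Little Mermaid": 1, "Enter the Dragon": 5},
    "lex":   {"Godfather 2": 5, "Little Mermaid": 2, "Ex Machina": 5},
    "alice": {"Little Mermaid": 5, "Ex Machina": 1, "Godfather 2": 2},
}

def cosine(a, b):
    """Cosine similarity over the movies two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[m] * b[m] for m in shared)
    na = sqrt(sum(a[m] ** 2 for m in shared))
    nb = sqrt(sum(b[m] ** 2 for m in shared))
    return dot / (na * nb)

def recommend(user, ratings):
    """Score each unseen movie by a similarity-weighted average of
    other users' ratings; return (score, movie) pairs, best first."""
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for movie, r in theirs.items():
            if movie in ratings[user]:
                continue  # only predict movies the user hasn't seen
            scores[movie] = scores.get(movie, 0.0) + sim * r
            weights[movie] = weights.get(movie, 0.0) + sim
    return sorted(
        ((scores[m] / weights[m], m) for m in scores if weights[m] > 0),
        reverse=True,
    )

print(recommend("joe", ratings))
```

The point Lex is making carries over directly: swap the movie columns for positions on drug legalization or immigration, and the same machinery would infer where in the gray area someone actually sits, rather than which of two hard team answers they picked.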
Starting point is 02:46:33 and you might not have any information that you're basing this on at all. You might be basing all these decisions on misinformation, propaganda, nonsense, advertising. You could be easily influenced. You might not have looked into it at all. You could be ignorant about the subject, and it might just appeal to certain dynamics that have been programmed into your brain because you grew up religious or you grew up an atheist. The real problem is whether or not people are educated about the consequences of what these decisions are going to lead to.
Starting point is 02:47:06 It's information. I think there's going to be a time in our life where our ability to access information is many steps better than it is now with smartphones. I think we're going – like Elon Musk has some Neuralink thing that he's working on right now. He's being very vague about it. Increasing the bandwidth of our human interaction with machines is what he's working on. Yeah. I'm very interested to see where this leads. But I think that we can assume that because something like the Internet came along
Starting point is 02:47:43 and because it's so accessible to you and I right now with your phone, just pick it up, say, hey, Google, what the fuck is this? And you get the answer almost instantaneously. That's going to change what a person is as that advances. And I think we're much more likely looking at some sort of a symbiotic connection between us and artificial intelligence and computer-augmented access to information than we are looking at the rise of some artificial being that takes us over and fucks our girlfriend. Wow. Yeah, that's the real existential threat. Yeah, I think so. That's, to me, super exciting. The phone is a portal to this collective that we have, this collective consciousness, and it gives people a voice. I would say, if anyone's like me, there's a lot I know very little about. Like if I'm actually being honest with myself, I've heard different – like I know what I'm supposed to believe as a scientist.
Starting point is 02:48:54 But I actually know nothing about – Concrete, right? Nothing concrete about – About the process itself. About the environmental process and why it's so certain. You know, scientists apparently completely agree. So as a scientist, I kind of take on faith oftentimes what the community agrees on. In my own discipline, I question.
Starting point is 02:49:14 But outside, I just kind of take on faith. And the same thing with gun control and so on. You just kind of say, which team am I on? And I'm just going to take that on. I just feel like it's such a disruptable space, where people could be given just a tiny bit more information to help them. Well, maybe that's where something like Neuralink comes along and just enhances our ability to access this stuff in a way that's much more tangible than just being able to Google search it. And maybe this process is something that we really can't anticipate. It's going to have to happen to us,
Starting point is 02:49:46 just like when we're talking about cell phone images that you could just send to Australia with the click of a button, that no one would have ever anticipated 300 years ago. Maybe we are beyond our capacity for understanding the impact of all this stuff. Yeah, yeah, maybe. The kids coming up now – what is that world going to look like? When you're too old – you'll be like 95, sitting on a porch
Starting point is 02:50:10 with a shotgun. And what do those kids look like when they're 18 years old? Robots, fucking x-ray vision, and they could read minds. Yeah, yeah. You'd be saying, robots are everywhere these days. Back in my day, we used to put robots in their place. Yeah, right. Like they were servants. I'd shut them off. Pull the plug. I'd go fuck your mom.
Starting point is 02:50:35 Now they want to go to the same school as us? Yeah. And they want to run for president. They want to run for president. Yeah. They're more compassionate and smarter, but we still hate them because they don't go to the bathroom. Yeah.
Starting point is 02:50:47 Well, not we. Half the country will hate them and the other will love them. And the Abraham Lincoln character will come along. That's what I'm pitching myself for. You're the Abraham Lincoln of the robot world. Of the robot world. That's the speeches that everybody quotes. And one other thing I got to say about academia.
Starting point is 02:51:06 Okay. In defense of academia. So you've had a lot of really smart people on, including Sam Harris and Jordan Peterson. And often the word academia is used to replace a certain concept. So I'm part of academia. And most of academia is engineering, is biology, is medicine, is hard sciences. It's the humanities that are slippery. Exactly.
Starting point is 02:51:32 And I think a subset of humanities that I know nothing about, and there's a subset I don't want to speak about. Gender studies. Say it. I don't know. I don't know. Candyman. Candyman. Candyman.
Starting point is 02:51:43 I actually live on Harvard campus. So I'm at MIT, but I live on Harvard campus. Yep, it's there. Do they have apartments for you guys? How does that work? Yeah, they hand them out. No, I just – I don't care. When you say live on the campus, what do you mean?
Starting point is 02:51:58 Oh, sorry, like in Harvard Square. Oh, Harvard Square in Cambridge. In Cambridge, yeah. I used to go to Catch a Rising Star when it existed. There used to be a great comedy club in Cambridge. There's a few good comedy clubs there, right? Well, there's a Chinese restaurant that has stand-up there still. How does that work?
Starting point is 02:52:17 Well, it's upstairs. There's a comedy club up there. Do you ever – because you've done, I think, your specials in Boston? Yes, I did, at the Wilbur Theatre. Have you ever considered just going back to Boston, doing like that Chinese restaurant, the Ding Ho? Yeah, that was before my time. When I came around – I started in 1988 – the Ding Ho had already ended. But, you know, I got to be friends with guys like Lenny Clarke and Tony V and all these people that told me about the Ding Ho, and Kenny Rogerson, the comics that were,
Starting point is 02:52:48 and Barry Crimmins, who just passed away, rest in peace, who was really the godfather of that whole scene. And one of the major reasons why that scene had such rock-solid morals and ethics when it came to the creation of material and standards – a lot of it was Barry Crimmins, because that's just who he was as a person. But that was before my time.
Starting point is 02:53:18 I came around like four years after that stuff. And so there were tons of comedy clubs. It was everywhere. But I just didn't get a chance to be around that Ding Ho scene. And you stayed in Boston for how many years before you moved out here? I was in New York by, I think, '91, '92. So I was in Boston for like four or five years doing stand-up. How'd you get from Boston to New York? My manager.
Starting point is 02:53:48 I met my manager. I wanted to use this opportunity for you to talk shit about Connecticut. Oh, people from Connecticut get so upset at me.
Starting point is 02:54:00 It's become a running theme to talk shit about Connecticut. Yeah, I've heard you do it once. I just had a buddy who did a gig in Connecticut. He told me it was fucking horrible. I go, I told you, bitch. You should have listened to me. Don't book gigs in Connecticut. The fuck's wrong with you? 49 other states. But go to Alaska, it's great. You go back to Boston, do like small gigs? Small, sometimes. Yeah, I'll do Laugh Boston. It's a great club.
Starting point is 02:54:29 I used to do Nick's Comedy Stop and all the other ones there. But I love the Wilbur. The Wilbur's a great place to perform. I love Boston. I would live there if it wasn't so fucking cold in the winter. But that's what keeps people like me out. It keeps the pussies away. Listen, we gotta end this,
Starting point is 02:54:45 we gotta wrap it up. We've already done three hours. Holy shit. Flies by. It did, it flew by. Can I say two things?
Starting point is 02:54:52 Sure. So first, I gotta give a shout out to a long, long time friend, Matt Harandi from Chicago,
Starting point is 02:55:00 has been there all along. He's a fan of the podcast, so he's probably listening. Him and his wife, Fadi, just had a beautiful baby girl, so I wanted to send my love to him. And I told myself I'll end it this way. Okay.
Starting point is 02:55:16 Let me end it the way Elon ended it. Love is the answer. Love is the answer. It probably is. Unless you're a robot. Bye.
