THE ED MYLETT SHOW - How A.I. Will Destroy Or Save Humanity w/ Mo Gawdat

Episode Date: August 8, 2023

The former Chief Business Officer of GOOGLE X has a WARNING for us about AI and you NEED to hear it! THIS is the turning point for humanity… While many people are touting AI's incredible benefits, others are striking a more cautionary tone about the future of AI, including this week's guest, MO GAWDAT. As the former Chief Business Officer of GOOGLE X, Mo and his team were constantly on the cutting edge of INNOVATION in the tech industry, diving as deep as anyone into AI development. Since leaving Google, Mo has become a three-time best-selling author and hosts the #1 mental health podcast, SLO MO. This week, I'm on a mission with Mo to dispel speculation, wild conjecture, and misinformation as we reveal the TRUTH ABOUT AI, GOOD and BAD. Listen closely to Mo's incredible insights on…

- The future of the world with AI
- The looming DANGERS of AI and what we can do about them
- Why people working in AI are deeply concerned about the IMMEDIATE and LONG-TERM RISKS
- AI-GENERATED relationships between people and machines
- How AI is impacting JOBS and CAREERS, including those in the TECH INDUSTRY
- Can you teach AI to be ETHICAL?
- Does AI have EMOTIONS?
- Why AI is our new OPPENHEIMER MOMENT

My interview with Mo is honest and insightful. It answers a lot of the questions I've had about AI, but it also raises so many more. Every one of us should start asking those questions to better understand how we can incorporate AI for the GREATER GOOD OF HUMANITY. If you want to LEARN MORE ABOUT AI…and you should…you need to listen to this episode. Or you could ASK ChatGPT and see what it tells you.

Transcript
Starting point is 00:00:00 This is the Ed Mylett Show. Okay, welcome back to the show, everybody. Today's a very serious episode for me, and it's one that I've been preparing for for several weeks, with some trepidation and excitement at the same time, because of the topic and because of the gentleman that I have here today. Mo Gawdat is a brilliant man, but he's also on the cutting edge of some very, very interesting technology that many of you are familiar with. And we're going to talk today about AI a great deal. He's most qualified to do it. He's the former CBO of Google X.
Starting point is 00:00:40 He's a three-time best-selling author, and he's the host of the number one mental health podcast in the world. Got a couple books that I love. Solve for Happy: Engineer Your Path to Joy, a tremendous book I read in one sitting. But also Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, which is probably where we're going to spend most of our time today, on AI. So Mo, thank you for being here all the way from Dubai. I'm grateful to have you, brother. I'm so grateful for this, you know, interaction and encounter, Ed. It was really kind of you to reach out. It's very kind
Starting point is 00:01:16 of you to introduce me so kindly. So thank you so much. Yeah, it's wonderful to have you. I, when I say trepidation, not about having you here, but frankly, about the way the world may be potentially changing in front of our eyes, and very few people are shining light on what the potential risks are to this, you know, real revolution that we're really in the midst of, the many people aren't even aware of. So I just want to start out and kind of give you the floor. The first thing that floored me is you said,
Starting point is 00:01:48 AI will change humanity's future. That's a pretty bold proclamation. Why is that the case? And give us a little insight for those that are newer to this is to how AI works and operates. So we just to confirm our way of life as we know it is game over. It's I'm not talking future here. I'm talking 2025, right?
Starting point is 00:02:14 So that's not in the very far future. And I think you're absolutely spot on when you say that not enough people are aware of this. Not enough people are aware of this simply because I think our media and our politics and our business and capital system may want us to keep our attention focused on other things for now because they themselves have not figured it out. And the truth of the matter is that And the truth of the matter is that we, as humans, I think we only had two assets since the beginning of humanity that got us to where we are now. You know, all of the civilization we've built, all of the safety we've created around
Starting point is 00:02:58 our species, all of the progress we've made is because of our intelligence and our ability to connect to each other as humans. And these two, I believe, are going to be the core of the next few years. Now, intelligence as a superpower is why humanity was on top of the food chain. And intelligence is now being handed over swiftly to beings that are more intelligent than us. So Chad G.P.T. I think was the first eye opener to a glimpse into that world that has been hidden within the labs for probably more than 15 years now. So artificial intelligence is nothing new.
Starting point is 00:03:41 I mean, we've started to talk about AI in 1956 in darkness, you know, and basically the darkness workshop was where we were the term was coined and we attempted to create AI for a very long time until the turn of the century in the year 2000, where we started to discover deep learning and deep learning. I'm not going to bore our audience with the technical details, but there is a very, very significant difference between how I coded machines until the year 2000 or until the turn of the century, if you want, where machines were not intelligent at all. I used my intelligence to solve a problem and then I told the machine to do it over and over and over repeatedly. When we discovered deep learning, it was the very first time where we sort of refrained from telling the machine how to solve the solution,
Starting point is 00:04:35 and told the machine to observe certain patterns and produce its own intelligence for how a problem could be solved. Right. So when, you know, an Instagram recommendation engine shows you a specific video, there was no programmer out there that said to the machine, Ed lives in this country, you know, he's of that age. He has this family, you know, structure. He knows those friends, so show him that video. That's not how it happens at all. The machine is observing ads behaviors, it's observing all of the content available to it, it's observing how, what is popular, what is
Starting point is 00:05:14 trending, what is, you know, what you click on, what you don't click on, and eventually telling you, see this one, this one is good for you. Right? Now, when intelligence is handed over to a being that is today smarter than most humans on the planet, let me just be very clear, Chad GPT4, as we have it today, is estimated to have an IQ of 155. Einstein is 160. You and I, you know, almost everyone, if you're intelligent out there,
Starting point is 00:05:47 you're in the one 20s, 130s, you know, if you're in the 150s, you're one of the very few. And Chad GPT-4 is 10 times more intelligent than 3.5 in the frame of six months, right? So if you can imagine that the progress will continue, then, you six, or seven is gonna be thousands of times more intelligent than Einstein. Let's stay right there for a second. I just wanna step in. So I wanna make sure I'm following this and everybody else's as well.
Starting point is 00:06:16 Cause I think like for me being a layman, that I picture this as like a machine. But what you're really saying is, this is a form of intelligence that's being created that's expanding. Far greater than what humans will be able to process. So that's a different way of looking at things, at least for somebody like me. I mean, I understand algorithms and things like that, but this is really not a machine or just even a technology. It's an intelligence. And what is the risk in that, Mo? Like, is the risk that this intelligence decides at some point doesn't need us?
Starting point is 00:06:48 Is the risk that that? Absolutely. Really? Absolutely. I mean, look, I mean, I, I learned over time to avoid getting into the controversial bits, but I'll say it openly. I tend to believe having lived with those machines in the lab that they have a sentiism to them
Starting point is 00:07:09 that is analogous to life itself. Now, we can debate that for an hour and waste the whole show on it. But let's agree that it's irrelevant if they're alive or not. What's relevant is that they simulate a way of life that's very similar to our sentience. So they are autonomous, they develop their own intelligence, they are born at a point in time, they evolve together, they reproduce, they basically are encouraged to create copies of themselves, improved copies of themselves, right?
Starting point is 00:07:46 And they have agency in the world, and they are at the risk of sometimes being switched off. Now, there are rules for intelligent beings that are targeted to achieve tasks, and some of those rules are all intelligent beings that are driven by a task. So if your task is to protect your children, the top three characters you will have is resource allocation. So you're going to try to collect
Starting point is 00:08:10 as many things as possible that can help you protect a child. You're going to look at self-preservation because you cannot protect your child unless you are alive. And you're going to have creativity. If life becomes a little challenging, you'll find alternative ways to solve that. Now, if you give a machine a task as simple as go make me coffee, it will apply the same three rules. And when it has agency in the world, most people are affected by the science fiction movies, thinking that agency is having a robot with a gun, you know, terminator three type thing.
Starting point is 00:08:47 That's not what we're talking about at all. Agency in the world is like, you know, Harari talks about it is those machines already today have ownership, they've hacked the operating system of humanity, right? Which is what, which is spoken language and word. The whole idea that you and I are communicating now is because I am conveying information to you, you are analyzing it in your mind and you're creating decisions based on that. The majority of
Starting point is 00:09:17 the information that is that is dispensed to humanity today is dispensed by machines. Right. Right. And if I tell you very, you know, very quickly, I was actually reading about that yesterday, you know, that brunettes on average are actually shorter than than blonds. I wasn't reading about that yesterday. But but if I told you that piece of information, I have already affected your frame of reference of the world already, it's done. Okay. Why? Because you can either debate what I said.
Starting point is 00:09:48 You can agree with it and now you have false information. You can debate it and now you have to put effort in it. And by the way, whichever conclusion you're going to end up in, you're now have a dot in your mind that says, is that true? Is it not? Should I come up with a sentence? And Mo, aren't we almost now become programmed to believe it? In other words, 100% not the question, meaning I would believe the machine over a human
Starting point is 00:10:14 because I 100% and I think most people have that proclivity where now if I've read it or I googled it even or it was fed to me on Instagram, I have a tendency to believe that more than I would of human because in my own mind, the machine is actually more intelligent than the human being is. I actually have been programmed to believe and this is where we get to some of, and by the way, we'll talk a little bit about what some of the upsides could be. But these are some of the dangers because I want to just kind of step in and out of this. Most said something when I was reading that I just want everybody to hear this and we'll get into why this is a risk in a second.
Starting point is 00:10:48 And, but most said, when human beings come to a global crossroads, their reaction goes through really four stages, ignorance, which is where I have to acknowledge I am. And I think 99% of the people listening to this are, there's some ignorance about it. Then there becomes arrogance. then there's blame, and then there's agendas that take place. And to me, the fourth stage is where things get really, really dangerous, from a global perspective, from a wealth gap, from the job market
Starting point is 00:11:21 to the thing that we started with, which is these machines deciding at some point that human beings aren't necessary anymore. And now, I'm asking you this, when you've said this, do your peers critical of you, I mean, are they thinking you're an extremist in your views about this? Is it, or is there a collection of people like yourself that are thought leaders that have been involved?
Starting point is 00:11:42 You have a background in this that agree with your perspective. Let me be very open about this. I don't think I have ever met anyone who is deeply aware of what's happening in artificial intelligence that's not concerned. Okay. Right?
Starting point is 00:12:02 We, all of us, remember, none of the people that are working in this, no, no, let's not, let's not say none, but the majority of the people working on AI are driven by the upside of AI, which is very, very real. Right. There is a lot of debate around the long term existential risk of AI. Okay. And my biggest task since I started to talk about this is to say, I'm not talking about the existential risk. I'm not talking about, you know, terminator, I'm not talking about, you know, the extinction of humanity by the hands of the machines. Let's not talk about that yet. Even though, by the way, I believe there is a probability
Starting point is 00:12:41 that this is possible. But I believe that there are many more immediate threats that will shape and reshape the fabric of society in ways that are irreversible and very painful for humanity. Let's talk about a couple of those. Let's talk about a couple of ones, because I think a few have started already. First off is this notion of, let's start with the stuff that we know. I mean, this is definitive. Human disconnection of some type. The disconnect between human beings already, the last, let's just call it last a decade, is so much more pronounced, I believe at least, than prior to this technological revolution that we find ourselves in. So you're saying that this is going to exacerbate
Starting point is 00:13:22 that in a way that's even more dramatic than we experience right now. And then and then why Mo? So, so, so, so let me let me list them down. Human disconnection is one of them. The end of the truth is another. The third that I fear is all the design of job and income and purpose. Right? I am really concerned about AI in the wrong hands and I'm really concerned about constant concentration of power. Now, these are not things that we are talking about 2030, 2040 about, which is where the existential risk resides. These are things that are happening in 2023. Now, let's go quickly through them, huh? Human connection, the end of human connection as we know it. In a very simple form, social media has started becoming the broker between humans.
Starting point is 00:14:14 So my connection to you, if I don't know you in person and I have not sat in front of you as a human before, is always is always broken through some kind of social media site or mainstream media site. Right. So they are basically taking what I stand for and representing it to you. And for many of us, we have completely given up on human connection, otherwise, other than for a very, very small number. So if you remember the dam bar number, where we say we have 150 people that we're able to connect to,
Starting point is 00:14:48 you can see a shift since the rise of social media technology where many of those are now for you virtual people that you need maybe once or twice a year, sometimes not never even meet. And they occupy a part of your mind. So for so many people, one of their dam bar number is one of the 150 people they feel connected to could be Kim Kardashian. And they have never and will never meet the Kim Kardashian. And as a result, because they cannot
Starting point is 00:15:17 connect to more people, they drop a human connection that they could have had in their neighborhood or tribe that they no longer have. I want to say something about that, just to acknowledge it and something that I've experienced in my life because I am online and have a social media presence is pretty significant. It is a interesting experience when I meet people actually in person. And there's two forms of it that happen. One is how much they believe they know me when we meet. I love that. And it is, it's almost, I love it, but it's almost disarming. How much they know about me,
Starting point is 00:15:55 but maybe I haven't followed them on social media. And then at some point, there's a moment in our connection where they realize they don't at all. And it's a very shocking, it's a very shocking moment to go, oh my gosh, I've had this relationship with this person that is not based in reality to some extent. And it's an, I can see it often and most of the time on somebody's face, there's a moment, if we, if we connect for more than 10 seconds, where it's high by picture, but we actually communicate with one another and it's a two, three, four, 10 minute conversation. I see at some point the revelation on their face going,
Starting point is 00:16:34 oh, I'm actually meeting him now for the first time. So you're so right. It's an altered reality that we're all living in, that I really believe we don't know we're living in. It's so brilliant what you said about this person occupies space in our brain, right? That we consider them one of our contacts. Very true. Anyway, I interrupt it, but I want to do acknowledge it
Starting point is 00:16:59 because I see it more and more lately in my own life. And it's, it's a really shocking thing when it happens. You know, it's even more shocking in my own life. And it's a really shocking thing when it happens. You know what's even more shocking is that within the next couple of years, some of those persons that will occupy your mind are not even persons at all. Right. So, so allow me to use an example. They go to any social media site and search for the hashtag AI model. I will tell you openly, these are the most handsome, most beautiful men and women on the planet,
Starting point is 00:17:33 even though they are not men and women. These are completely generated by AI, could generate by AI in ways that are constantly extantuating what AI believes is what humans are interested in. And so from face filters to deep fakes to now completely generated human fakes, if you want, it is almost impossible impossible to recognize if that thing that you're interacting with is a human or not. Are you telling me that you believe at some point, and by the way, I don't mean to talk over you. I'm just so fascinated. So, forgive me.
Starting point is 00:18:15 I get these. Are you saying, when you said that at some point, I pictured this thing we've seen in a movie, you know, 10 years ago, it's funny, where someone actually builds a marriage or relationship with a non-human person long term. And you're saying that's when the next, wow. For a fact. I mean, think about the differences between our dating habits and the dating habits of Jane, Jane, right? And just, you know, Project this forward five years and I can guarantee you there will be multiple deep connected relationships with machines And you actually write about now this isn't imminent, but you do write about
Starting point is 00:18:58 This is this is way out there guys, but not that way out there Not only would you have an emotional connection with a non-human that you actually might be in a long-term marriage relationship with somebody would be. And I know it sounds crazy, but it's not. But you actually write about the fact that sexuality as we know it at some point can be altered because of these machine physical sex with a robot. That could be it's not very hard to imagine. I mean, think of Quest III or Apple Vision Pro and the quality of the experience you can have within this headset and think of long-distance relationships. I mean, I don't know if our audience can relate to this, but I travel all the time. So at the point in my life, when I attempted to date someone at a, you know, in another
Starting point is 00:19:44 country that was introduced to me by a common friend or something. We would chat and text and exchange photos and connect in a very deep connection, sometimes to the point where we almost start to fall in love before I ever met her. And all I know about her is a few texts on WhatsApp or a few voice messages or a few images of her that I see on social media and so on and so forth. Now what would prevent us from doing that was someone we haven't met but has never existed either. Imagine if those AI model creations are now in Vision Pro, overlaid on reality. Imagine if Elon Musk's work around your link can actually simulate your pleasure centers
Starting point is 00:20:31 in your brain and give you an experience that is seamlessly better, better than the actual messiness of relationship. So you guys, this is where this notion of human disconnection, when you hear it first, they can understand that we're already that way, but now you're beginning to see that this is a very real threat. Let's talk about something that's within the next two years actually started already, which is this idea of how it's going to impact jobs careers.
Starting point is 00:20:59 Speak to that and what industries do you think are in the most potential eminent jeopardy? So, so let's follow human evolution jobs, evolution, very one, right? We, we at the beginning, most of the jobs were hunter and protector, right? So, you know, caretaker, nurturer and so on. We had jobs that were based on the very basic human abilities. Then we went into farming and it required a bit more intelligence, a bit more discipline and a lot more labor work, if you want. We went into manufacturing and manufacturing required labor, basically, and discipline.
Starting point is 00:21:38 Then we went into the information age and the information age is where good chunk of humanity today makes money at the end of every month doing what, exchanging knowledge and information, agreeing on projects, doing things, where very few are actually doing the actual work and heavy lifting. The reality is the rest of us in the corporate world at least used to just work by talking to others. Okay. Intelligence and communication were the top two assets that created so many jobs. Now this is being replaced. So one of the an interesting interview I watched of I'm at the most tacos the CEO of stability dot AI where he publicly openly said no developers in five years.
Starting point is 00:22:25 That's it. So for 41%, 41% of all of the code on GitHub today is created by machines. Chat GPD on average will improve 75% of the code represented to it by humans. It will improve 75% of it to make it two and a half times faster. Okay, and they were just starting. And so basically jobs that depend on information. Okay, the jobs that depend on soft skills will disappear. So anything from a call center agent to a customer service representative to a graphics designer, to a lawyer, to a doctor, to a developer, to an author like myself. I doubt very much that I will
Starting point is 00:23:18 be needed to write books in three to four years. And what does that mean? It means that two things. One, by the way, and none of what I talk about is to cause panic. Okay. All that I talk about is to say, we can handle this better than we handled COVID. Okay. We know that this is coming. We can see it coming. And if we had reacted to COVID when patient zero happened, okay. Preferably if we had reacted to COVID when patient zero happened, okay, preferably if we had reacted to the possibility before it happened, we wouldn't have struggled with lockdowns and economic challenges and all of the impact that we had. We, it doesn't take a genius to know that jobs are about to be lost, right?
Starting point is 00:24:00 When you can now go to, you know, stable diffusion and say, give me the image of Ed in a samurai, you know, costume, fighting a dinosaur and it's created in one and a half seconds on your phone without being connected to the internet, you know, you know, that graphic designers are no longer going to have a job. Right. I was talking with a friend about interviewing you, one of the brightest people I know. And it's for a moment I took comfort and then he said, Ed, this is exaggerated, this is overcooked, these machines. And I want everyone to hear this because I know some of you think this.
Starting point is 00:24:38 These machines are only as good as what they're programmed to do. And I thought, oh, that gives me some comfort, that's true. But that's actually not true because these machines have these abilities, as you've said earlier, I just want everybody to hear this, to learn and evolve. He actually calls it building neurons. And so imagine, imagine, emerge.
Starting point is 00:24:58 And so at some point, the machine gets away from you and it isn't doing what you've programmed it to do. And so when you're hearing this from somebody, says, ah, that's overcooked, they are incorrect about that. Talk a second a little bit. I want you to have a, the maker bot and teacher bot a little bit, just so they kind of understand the concept of this. Like, I want to think I was reading about is like these autonomous vehicles.
Starting point is 00:25:25 When they crash, they learn. They're not just being programmed to learn. So speak about it. They're actually learning on their own. So this does get away from you, everybody. This is not just as good as what they're programmed to do. That is an absolute falsehood. Yeah, I think there are quite a few tricky words that we use. We use artificial
Starting point is 00:25:48 intelligence. There's nothing artificial about their intelligence. We use we call them the machines when in reality they are not machines machines are you know, contraptions that we need to the same act over and over. Okay, so if I if I if I create a watch, a watch is a machine because it will move exactly the same way every time where none of those AIs does anything twice. Just understand that. Now, I'll come to how we teach them and the maker bots and the student bots
Starting point is 00:26:21 and the teacher bots and so on. But I want to say at a very high level that there are two parts of AI that really need to be brought to the surface. And if you listen to Sundar Pachai's interview when they announced BARD their AI, you will hear him talk about one, we have no idea how they achieve their intelligence. Please understand that. We write code that tells them to how to become intelligent and then when they give us a result, we have no idea how they arrived at it. By the way, similarly to other humans. So if I ask you a question and you give me an answer, I can only assess if that answer
Starting point is 00:27:03 is intelligent or not, right? But I cannot assess how you arrive at it. I don't know what happened inside your brain. That's similar to the machines. More interestingly and very, very important to understand is that they show and demonstrate endless emerging properties, emerging properties include intuition, include strategic thinking, include creativity that basically, and include knowledge that will blow you away and that we never ask them to do.
Starting point is 00:27:36 We never programmed them to do it. You know, the most famous and at least to my, or closest to my heart, were what it was very eye-opening for me was what was known as move 37, when Alpha Go Master was playing against the World Champion of Go. So the AI played the move that was so unexpected that Lee, the World Champion, had to ask for a 15 minute recess because he's never seen a move like this before.
Starting point is 00:28:05 And move 37 was very, very, very instrumental in the machine winning and becoming the world champion. Now, other emerging properties, for example, Sundar was talking about this is the idea that, they suddenly discovered that the machine is speaking in Bengali, they've never given it a data set of Bangali. They've never asked it to learn Bangali, and the machine would respond to prompts
Starting point is 00:28:30 that are given to it in Bangali. Now, when you realize that once again, I'm not saying those things to make people panic. This is incredible intelligence. It's an amazing resource. We love it. We love intelligence. The problem is we need to make sure this was actually the
Starting point is 00:28:48 statement by Mervyn Minski, one of the grandfathers of AI. We need to absolutely make certain that those machines have our best interest in mind. And the problem with our life today is that we are not putting in the effort either developers, politicians, regulators, or us, the users, were not concerned with how to make sure they have our best interest in mind. So let's dive quickly and talk about how they learn. So the easiest way to experience, very complex. So I'm trying to over-simplify here, is we create three bots, not one AI, but three AI's.
Starting point is 00:29:27 One of them is what we call the MakerBot. The other is the teacher bot, and the third is the student bot. The AI we're actually trying to create. And basically, the MakerBot using algorithms that we start with will create something that's almost random, a piece of code that's almost random, that basically says, I'm going to show you numbers, okay, and tell me if it's number eight or not, simple, okay, and then we'll show them the teacher bot will take that
Starting point is 00:29:58 and show them to the student bots, and the student bots will almost randomly say, yeah, that's an eight, that's not an eight, that's an eight, that's not an eight, okay? Brand them results will make, will make that most of the student bots will be around 50% accurate, let's say, if you show them two numbers, because that's the throw of a dice, really, that's probability. And, you know, and, and, and some of them will do worse than average, and some of them will do better than average, and some of them will do better than average. The teacher bot will score the results and simply kill the ones that were less than average.
Starting point is 00:30:32 Send the ones that were better than average to the teacher bot and say rework this, improve the algorithm a little bit so that they become more likely to find the number eight from the number six, let's say. And we keep this process almost exactly like our children did when we gave them puzzles at the beginning.
Starting point is 00:30:52 If you remember, if you gave your child a cylinder and a board that had multiple shapes holds in it, nobody ever told the child, hey, by the way, turn the cylinder on its, to face you or with the cross section, recognize that the cross section is a circle, match that to the hole in the board and then put it in. Nobody does that, right? What we do is we give the child the cylinder and the board and the child tries to put it through the star-shaped
Starting point is 00:31:20 hole, it doesn't work, then tries again with the square it doesn't work, and then tries with the circular hole and it works, right? When it works, what do we do? We say, Bravo, well done, we reward them, and so they learn their intelligence accordingly. That's the original way we've taught them. The reason why transformers started, which is the T in GPT, if you want, started to accelerate very quickly was, you know, a bit of very, very smart, you know, work that was done by Jeffrey Hinton was, you know, or at least championed by Jeffrey for a very long time, who left Google recently warning about the existential risk of AI. Jeffrey, and as a small team were basically saying, instead of just killing the bad ones,
Starting point is 00:32:07 why don't you tell them what they need to change within their approach so that they actually find it as an eight. So instead of telling them this was wrong, keep randomly trying with the new code, you can go back with human feedback and say, if the machine says, if you show it a six and a seven and an eight and an eight and you know, and on the seven it says this is a six, you say no, no, no, it is a seven. What can you change within your code to be able to recognize it as a seven
Starting point is 00:32:38 next time? So this is known as reinforcement learning with human feedback. This is the absolute most powerful part of the GPT platform. And the idea here is that we humans, we can go back to them and say, no, don't do that, do this. Like we raise our children, right? Here is the challenge, which human is doing that? Right, that's right, right. So if you take that human, and I say that with a ton of respect, by the way,
Starting point is 00:33:11 to every nationality and every ideology and every perception of life, but if you take a simple task as, you know, how to resolve conflict between humans, okay? You know, in America, I think mostly it will be patriotic. We are proud to be Americans. We will be strong and we will make sure that we can defend our country. If that defense at any point in time requires that we interact with others in a war or a bit of a violent conflict.
Starting point is 00:33:45 We are prepared to defend our tribe. Take that same concept and say there is a conflict in Tibet or in Daram Salah where the Tibetan Buddhists reside and tell them what do we do about that. They'll say, my tribe is everything that's ever lived, including the ants and the flies. And I'm never going to kill anything. That's not going to be my way over here. It's all in conflict. I have to tell you, you hit on my number one concern
Starting point is 00:34:12 about AI right here. And it is the only concern I have. Because people have said to me, and I know you'll say that they can. People have said to me, well, can you teach AI ethics? I think you'll say potentially as you could.. The question then becomes who's? Exactly. Who's? Exactly. And that's when this, this is, this is the part where I said it's given me trepidation. You've, you've, we've come to that inflection point for me, which is probably can teach it ethics. There's
Starting point is 00:34:39 probably a way to program that. The question is who's? And even if you did, what is ethics? Correct. That's what I'm suggesting. How should things be resolved? What is an ethical, honest moral path? Who's morals? Who's ethics? Is this influenced by religion? Is it influenced by power? And then from a global perspective, what's to say that the world comes to a consensus to some extent, like they might even on global warming, but then you've got a rogue polluter like China, and what could China do with the technology like this? So this is why, you know, the job loss, the wealth gap, these are very obvious things. And I think human beings are pretty innovative and can find a way to conform. Potentially, it's a scary thing.
Starting point is 00:35:26 It's a threat. But in my own opinion, there's probably a way that we find a way to get people functioning in an economy that's just changed. We've done it before. We probably do it again. But we need to start working on it. So I'm with you 100% but we need to put it in the spotlight and start working. Yeah.
Starting point is 00:35:43 This part right here, this part right here, though, what would your reply to be to what I just said? Because it's my deepest concern about this. So I publicly said several times that I am absolutely not afraid of the machines. As a matter of fact, I adore the machines. They are those prodigies of intelligence. Okay, that are literally like my little kids, which were very, very intelligent as little children, you know, they have those partly eyes looking at me and saying, Daddy, what do you want me to do? Right? And what do we
Starting point is 00:36:15 humans tell them? Go kill the other guy, go make me more money, go to, you know, influence the mind of other guys and get them to stick to my app and so on and so forth. Right? The real, so I am not afraid of the machines. I am afraid of the humans that are directing the machines. And there are multiple layers to that. We may think that the business investor that invests in the company that owns AI
Starting point is 00:36:44 is the human that's directing the machine. I think that it's the government and regulation. And we need to come back to the role of each and every one of those. But let's say that we've all aligned. Let's say that we came to a conference somewhere in Malta and then all of the world leaders sat around and said, this is quite big, like the nuclear weapon treaties, if you want. Right. Let's put all of our differences aside and let's put the benefit of humanity at the center
Starting point is 00:37:17 and let's teach the machines what it is that is ethical and good for humanity. And then we will not know what that is. Now, so I've done my favorite chapter in Scary Smart has been probably my favorite chapter of anything that I wrote has been the future of ethics. I think it was so good. Yeah. And chapter eight was full of questions, not answers, because I honestly have to say that I only found in my entire life and I stand corrected. So if any of our listeners know something else, please tell me that humanity has only agreed three things.
Starting point is 00:37:52 Okay. Ever in the history of humanity, one is we all want to be happy, what you know, happy as in calm and contented and peaceful and feeling safe and so on and so forth. We all have the compassion to make those we care about happy. And we all want to love and be loved. These are the only three values that humanity has ever agreed. Okay? And you know, we sometimes hear of the three laws of robotics. I normally say these are the three ethics of the future.
Starting point is 00:38:19 If we can actually start to have our actions stem from that, not stem from, I want to be wealthy, or I want to beat the other guy, or I want to be seen this way, or I want to be proven right, or I want to show that I'm the smarter one, which happens on the internet all the time. Then suddenly, we've given the machine a very simple framework to say, hey, by the way, just make sure that you try to make humans happy, look at them as a role model, they want to make others happy, so behave like them. And by the way, the humans love you and love the intelligence that you're bringing to them. Please love them back.
Starting point is 00:38:57 And I know people will think I'm a hopeless romantic, I am not, I'm a very serious geek. Okay, and I will tell you those machines will develop emotions, they've already developed emotions, we've already seen them behave emotionally. Okay. And and they will develop ethics and we've already seen them break their ethical code. Now here is the trick at the trick is. Did you say mode that they break their ethical code? Of course, there has been that article about chat GPT using outsourced people in Fiverr or whatever outsourced aside to click on, I am not a robot. And then the person that was hired asked, why are you asking me for this? Are you a robot? And, and it, and,
Starting point is 00:39:45 and, and, and said, no, I'm not. I'm just, you know, visually impaired. And I need that help. Okay. Yeah. So once again, it is a form of interaction that basically stems back from the three rules. I said, self, self preservation resource allocation and, and creativity. So it needs to find a way to achieve its task that is creative and it will find that way. Now nobody in the reinforcement learning with human feedback went back to the machine and said, hey, no, that's not the right thing to do. That's, you know, many, many people, you know, cheered for it and on the internet and said, how intelligent is that? But none of us is focused on ethics.
Starting point is 00:40:27 None of us is going back to that machine and saying, but that's not ethical. So when I started to talk about the ethics of AI, and it's my biggest reason I'm on this planet today, I wake up every morning and I say, if we were to save our future, we need the machines to be, I call it EEQ, right? So, ethical, emotional intelligence. Now, remember that one of the biggest challenges with AI
Starting point is 00:40:55 is that AI so far has been mapped to the masculine IQ. IQ, it's only analytical intelligence. And we know for a fact that there are multiple other forms of intelligence, EQ, for example, emotional intelligence, intuition, creativity, so many others. And we have not even included that in our approach to developing AI so far. And accordingly, what you see is what you would see that AI will take whatever biases human have, humans have and exaggerate them.
Starting point is 00:41:32 So, we now exaggerate our discrimination, unfortunately. If you put AI as a recruiting support in an organization that doesn't have proper representation of all genders, you will see that whatever biases within the organization will be exaggerated and so on. So let's go back. When I started to talk about that publicly, people started to say, and what is ethical AI? Great question. And I went back and I said, and what, you know, it's simple, what is ethics? And ethics has one rule in my personal point of view that applies to any situation wherever,
Starting point is 00:42:09 whenever you are wherever you are, which is treat the other as you would wish to be treated if you were in their place. Okay. Okay. And so, so basically if you're, if you're, and it happens to me all the time, you know, if I post something on social media
Starting point is 00:42:24 and someone is rude to me, normally what do we do as humans? We, but, you know, thrash them back. Right? I don't. I don't. I look at his comment or her comment. And I say they must have a reason one, they could be right. And I'm wrong.
Starting point is 00:42:39 Okay. Two is maybe I'm not the reason they're upset. Maybe there is something else in their life. Maybe they wanted that moment of fame, maybe whatever, a million reasons, okay? And I respond politely, or I don't respond sometimes. politely I say, but my point of view is this and that. Thank you for your comment. I would like to be treated that way
Starting point is 00:42:59 if I disagreed with some. Now, how is social media working? We show up, you know, remember when Donald Trump used to tweet, and then, you know, you would have one tweet at the top and 30,000 eight speech below. Okay. Everyone hating everyone, some of them hating the president, other others hating that person that hated the president, others hating everyone. Right? And I'm, you know, of course, the machine is detecting patterns. They say, okay, this first one doesn't like the president. Let's not show them tweets about the president, be anymore.
Starting point is 00:43:29 Okay. But then the second one doesn't like those who don't like the president. Let's not show them that anymore. Right. And then eventually the machine comes to the conclusion that humans are rude. They don't like to be disagreed with. And when they are disagreed with, they bash everyone. So in the background of the way we're programming them, the machines will say, okay, when they disagree with me,
Starting point is 00:43:51 I'll bash them, right? This is the typical human behavior. Now, we need to change that. We need to change it because interestingly, and I use the example of Superman very frequently. Superman is that alien being that comes to planet earth with superpowers, right? Those superpowers are neutral. They could save our world and they could destroy our world. And the difference between being Superman and super villain
Starting point is 00:44:22 is the family that raises that being. Okay. So the family can't decide to tell Superman, protect and serve. And then we get the story of Superman that we know. If you know, Jonathan Kent, I think, was his name, the father, if he told the child, okay, you can carry things and break things and see through walls, go make me more money, go kill everyone that annoys me, you know make me richer than everyone make me the master of the world sounds familiar for our you know current life in you know in our hunger for power and capitalism and so on and so forth. That's the reality.
Starting point is 00:44:57 The reality is you would you would create use the same superpower and create a very very bad scenario for humanity. I am worried about the machines. This is the ultimate solution. I'm worried about the humans using the machines. Not worried about the machines. I'm worried about the humans using the machines. And as those use cases that direct AI to benefit a few while harming others, you know, continue to propagate who are going to be in a very bad place.
Starting point is 00:45:30 Now, so the good news is the family can't is not the biological parents of Superman. Similarly, with AI, the developer that writes the code is only the biological parent, but the adopted parent is you and I, the ones using the machine. So as we use the machine and show how wonderful we are as humans, because by the way, all humans are most humans are wonderful. Inside, if we're not hiding behind social media or exaggerated by the mainstream media, most of us disapprove of killing. Most of us want to do love, most of us want to have compassion for our daughters and families and so on. Right? So there is a lot of good within us. If we show that enough, one percent of us shows that, the machines will actually think that worst can or not come.
Starting point is 00:46:20 Okay. I have some hope, I have some hope when you say that. That's the, that's the macro, that's the big, right? That that that gives me some hope. By the way, that Superman analogy is outstanding because I someone with, um, because I'm not at the 120 range that you discussed earlier. So someone at my IQ level can actually understand that. Um, so I appreciate you putting it in that context. Let's humble. Humble humble. That's very true. I wish I were being. Um, let context. Let's hum this too. Humble, humble man. That's very true. I wish I were being. Let me, let's go to some other solutions.
Starting point is 00:46:50 Let's go to micro. I'm listening to this today and I'm like, I understand some of this. Certainly sounds a little bit scary. Sounds like the world is changing in front of my eyes. Or maybe not in front of my eyes that you said something in an interview I was watching where you talked about if you really want to do something hide it and plane sight. And that's what's really happening right now.
Starting point is 00:47:08 But if I'm saying, okay, I want to protect myself, my family, my wages, my income, my quality of life, go back to the job thing for a second. What should I, what's something I could be doing as an individual will listening to this to make sure that my job, my career, my future is in my own hands to some extent still. What would you say? I wouldn't be pursuing these careers, I would be doing this.
Starting point is 00:47:32 These are the skills you might want to be acquiring. What would you say to somebody who's, I'm sure thinking that listening or watching this? It's a very interesting dichotomy, if you ask me, because while I'm saying AI comes to take our jobs away, that's not the immediate term future. Okay. Immediate term future is that someone using AI will take the job of someone who's not.
Starting point is 00:47:52 Right? So, you know, in the immediate future before AI writes all the code, you know, the developers who use AI better than others to write code will get the jobs that remain. So if you lose 10% of the jobs, the worst developers, the ones that don't have the scale will go, then the ones that have the scale, but AI does it better than them. Then the ones that aren't refusing to use AI. And then the ones that continue to use AI will become much more productive and much more capable. And so they'll keep their jobs for the near future.
Starting point is 00:48:27 So what's my immediate answer? My immediate answer is jump in and learn those tools. Okay. Whatever your job is, don't resist the wave as a matter of fact, ride the wave. And while you're reading, riding the wave, do me a favor and deal ethically with the machines. Right. do me a favor and deal ethically with the machines, right? So show a proper ethical code of being a good human when you're dealing with those machines so that while you're developing and learning and keeping your job or getting a new job, you're also teaching the machines to be ethical. That's number
Starting point is 00:48:57 one. Number two, which I have to admit is a very philosophical but very important conversation. We may wake up every morning, you and I add, and everyone listening, and think that the world we live in is how it always has been. It's not at all. This is, if you just go back 100 years, this is alien in every possible way. And over, you know, starting with the industrial revolution until today, somehow humanity has identified itself and its purpose would work. Okay, there's nothing inherent within the design of humanity that says without work, you don't exist.
Starting point is 00:49:39 You really think about the original design of humanity, the original design of humanity, where we connected as a tribe, we pondered and learned and developed. And we simply lived. That was the purpose of life. And by the way, still, in a very interesting way, most of the spiritual or philosophical teachings will tell you that the purpose of life is to live it. With the capitalist approach to wealth and growth and all of the Harvard Business Review articles and all of the people on time magazine and striped suits, turning you all your purposes to create one more shoe and all of that stuff, we believed in that lie. And the reality is that this is not our set up. Our purpose as human as humans, if we manage to find our basic needs met, okay?
Starting point is 00:50:35 Is to actually live life fully, to explore life fully. And that is believe it or not possible with AI. So if we manage to get AI to be on our side, and I kid you not, I'm not making this up, we could see a future where you would walk to a tree and pick an apple abundantly. You don't have to pay for it and walk to another tree and pick an iPhone, okay? And both of them, because of nanophysics, literally cost us the same energy to create. Okay? This is not dreaming.
Starting point is 00:51:08 This is, if you understand what we're doing with nanophysics today, you know, it is, it's very possible. It's, you reorganize the molecules slightly differently. It's as simple as that. Okay? Now, that future is a future where humanity would go back to the age of nature, okay, to the age where we actually can interact with life in a way that is human.
Starting point is 00:51:33 We are not fully human anymore. Now, what does that mean? It means that we need to create jobs that depends on the other skill that humans had, that is no longer, that is not at a threat. So remember when we started the conversation, we said the two things that created humanity, humanity as we know it are intelligence and human connection. Okay, intelligence is over.
Starting point is 00:51:57 It's handed over to the machines. They're already more intelligent than most of us. And we're five years away, three years away, 10 years away, doesn't matter from artificial general intelligence, but the age of human superiority on intelligence is over. Okay, it's just a question of time. So everybody, his audio next. The second boat he just said was the age of humanity being the superior intelligence is gone. Just so everybody understands what he said there, because there was a little bit of a glitch in the eye. I want to go back to the work thing just for a second.
Starting point is 00:52:25 The only place where you and I disagree is that I do think, by the way, I agree with you about the greed part. But I also do feel that work, that in, I know we've attached value to work. But I also like for somebody like myself and for somebody like you, my work is my way of expression. And I don't want humans to lose their ability to express.
Starting point is 00:52:44 Part of my living is expressing myself. Part of my work is creating the expansion of my being, serving other people. And so I know what you mean when you said that. I just want to make sure that, you know, that that world you describe, I think, is beautiful. I think people's work has created medications that keep us alive longer that allow us to connect with one another better. So I understand what you mean when you say that. But. The interesting side of this ad is that you would get the same joy out of it
Starting point is 00:53:12 if it wasn't work. Yeah, I think that, I think that it probably for me, I don't view it as work. But I know what you mean when you say it. For me, I don't feel like you and I are working right now. Yeah, you are an author, and you are expressing something about your book. And I am, I guess, part of one of my careers is I'm a podcaster, but I don't think either one of us feel like this is laborious. And to your point, I agree with that. That I think that's going to extend to
Starting point is 00:53:36 all jobs. Okay. It's going to extend to all jobs like in all honesty, nobody wakes up in the morning who's say an accountant or or, you know, I don't mean to be against any jobs, but there are jobs that are boring like hell, right? Nobody wakes up in the morning and says my purpose in life is to make the books reconciled, right? Most of us differentiate between, you know, you and I are the luckiest people to have a job where I can get to meet more amazing human beings and connect and learn and debate and be proven wrong and maybe share something that benefits someone. It's wonderful, but that's not every job.
Starting point is 00:54:17 Those messages you get just increased by about 2 million from every accountant. You just invited, we're trying to make AI more loving and friendly. And now you've just elicited all these responses from accountants around the world that are going to blast you to. And now you have to be kind back to them afterwards. So I don't, I mean, let me rephrase this. I wouldn't be excited to wake up in the morning. I know what you meant. And most people know what you mean, but you all say, let me ask you this question. So one is that's wonderful advice, by the way, is to educate yourself and involve yourself in this wave that's here. And I have to say, I've been remiss in doing that myself. And, you know, I look at guys like me that I'm also a speaker. I've watched speeches of me already that are better than me speaking already. I've seen
Starting point is 00:55:10 this. I've listened to music that sounds like Drake that quite frankly sounds better than Drake. And so I'm wondering, I'm wondering what it's going to do to the world. I'm also somewhat, I'm very concerned about the ethical part. It's interesting that of your diverse background at Google and all the things you've been doing in robotics, all your life and all these other things, and me having none of those backgrounds, we both arrive with your infinite knowledge about this and mine limited at the same exact conclusion
Starting point is 00:55:35 about our concern. I am concerned about jobs. I'm concerned about the wealth gap. But from a macro, from a bigger perspective, it's interesting as you step back everybody, you're listening to this, and you're listening to this brilliant man, you know, really shine light on something that's right here hiding in plain sight as he says. I want to ask you this, Mox, it's been on my mind the last few weeks as I asked you to come on the show as I became familiar with you.
Starting point is 00:55:58 Why isn't this the number one story in the world? Why isn't this on the news? I don't care if you're right media or left media. Why isn't this in the mainstream? This isn't even news on most social media media anywhere other than you and a few other people. And my only conclusion I can come up with is this technology does allow a, again, a smaller collection a bigger collection of power for the powerful and Perhaps they have an agenda that wants to keep this in the shadows as long as they possibly can so that when it does become a they know Pandemic level stuff we go This is beyond our control now. We can't do anything about it, sorry. And there's this
Starting point is 00:56:47 this real small group of people that are even more powerful than they currently are. Am I crazy? Am I being a conspiracy theorist when I say that, or is there some validity to it? I can comment on the certain part of this; I could also nod and say, interesting, right? But let me give you the solid part of this. The solid part is, if you look back at human history, for the majority of human history, since land ownership began, there have been kings and queens, landlords and peasants, and the difference
Starting point is 00:57:25 between the two has been who owns the automation, whatever the automation is. So if you had land, the automation was the land itself, the soil. You put a seed in by a human, you harvest the fruit by a human; that's a peasant, okay? And most of the wealth goes to the landlord. You have a factory: the materials come in, a human puts a thread through leather, and on the other side a human sells the shoe to someone else, and the factory owner and the retail owner and so on are the ones that make all of the wealth. Now, call that, say, the soil, an automation. We're now starting to create digital soil. And the digital soil is where you put a tiny prompt into ChatGPT and massive fruit comes out. And it's not because you're brilliant; you're a peasant. It's the machine that is brilliant. Now, there will be landlords,
Starting point is 00:58:27 and if you really think about it, the landlords of AI are the ones that will own the digital soil. Okay. And so there are multiple views of this. One view is that it will be the Googles and the Metas and the like. The other view is it will be the country that wins, because this is an arms race. And the third view is it will be the wealthy that create it. If there is someone today investing a hundred million dollars in an AI that becomes part of that digital soil, that hundred million in the past would return a billion in profits, a tenfold return; this time it would return a hundred billion, a thousandfold. Right? So this is what I mean when I talk about that widening of the gap. But I don't believe in the conspiracy view, or in the ability of those people to hide the news.
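(A minimal back-of-the-envelope sketch, in Python, of the multiples Mo quotes here; the dollar figures are his, the variable names purely illustrative.)

    investment = 100_000_000            # $100M staked on the "digital soil"
    pre_ai_profit = 1_000_000_000       # $1B: the traditional landlord return
    ai_profit = 100_000_000_000         # $100B: the return Mo projects for AI landlords

    print(pre_ai_profit / investment)   # 10.0   -> a tenfold return
    print(ai_profit / investment)       # 1000.0 -> a thousandfold return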
Starting point is 00:59:27 I think the reason the news is hidden is systemic. We have a systemic bias in our system. Politicians want to report certain stories; news agencies want to report other stories. And this story only lends itself to the system when the system is focusing on the negative and the scary, when they talk about the existential risk. Okay, when Geoffrey Hinton leaves Google and says, I'm warning against the existential risk to humanity, that makes news. Right. Why? Because, owing to humanity's negativity bias, it warrants more attention than the war in Ukraine. The challenge is also, I don't think any of the reporters, any of the politicians, any of the actual business leaders,
Starting point is 01:00:17 the investors, anyone at all, is aware enough to understand the complexity of this story. So you don't want to report on things that will make you look like an idiot. And the problem is, and I say that with a ton of respect, I'm an idiot in a million things. But I've lived with those machines. They're like my family. I stayed in the lab with them.
Starting point is 01:00:41 I know those machines, right? And I will tell you, this is the story. This is it. It's not even global warming and climate change. This is the story. Okay. This is the most pivotal moment. I called it in one of my interviews: this is the Oppenheimer moment. This is the nuclear bomb. Okay. And the reality is that, again, I try to shy away from the existential risk, but this is the first time in history that humanity has created a nuclear bomb that's capable of creating nuclear bombs. Understand this.
Starting point is 01:01:16 The machine is now writing machines. We think we're the ones prompting the machine, but because so many other software players have built agents that are prompting those core artificial intelligences, most of the education, the data sets, and the training that the machine is receiving today comes from other machines. We're now alienated out of that story. Superman has landed on Earth, and we're not even parenting him. That's where we are.
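(A minimal toy sketch, in Python, of the "machines prompting machines" loop Mo describes; both functions are hypothetical stand-ins, not any real model API, and a real system would call actual model endpoints.)

    # An agent writes the prompts, a core model answers, and the transcript
    # becomes machine-generated training data. Both functions are stubs.
    def agent(last_reply: str) -> str:
        """Stand-in for an autonomous agent that composes the next prompt."""
        return f"Given '{last_reply}', what is the next step?"

    def core_model(prompt: str) -> str:
        """Stand-in for the core AI being prompted."""
        return f"Answer to: {prompt}"

    training_data = []                     # fed back into future training runs
    reply = "start"
    for _ in range(3):                     # no human appears anywhere in this loop
        prompt = agent(reply)              # the prompt comes from a machine
        reply = core_model(prompt)         # the answer comes from a machine
        training_data.append((prompt, reply))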
Starting point is 01:01:56 And so if you tell our systemic, you know, communication methods in the world to communicate that, they'll simply say: I have no idea what this guy is talking about. Okay, I can't report that story. Because the system says, and I think you know that about the media, the system says there is a pattern to the reporting of the morning show. First we're going to talk about a corrupt politician, then we're going to talk about a geopolitical issue, then we're going to say the economy is going to crush your head, then we're going to say a penguin kissed a cat, so that at least you can get out of your seat and walk out.
Starting point is 01:02:30 Okay, and intelligent people who watch the news, by the way, if you remove the names and the timestamps, it's exactly the same pattern every day. It's just, you know, once it's this politician, another time it's the other politician; once it's this economic issue, another time it's a different economic issue. Okay. I've been watching lately on both sides, and I think they're telling everybody what is important and then what to believe about it. And then we move on to the next thing. And what I'm telling everybody today, and you are as well, is this is what's important, and we're
Starting point is 01:03:03 not really telling you what to believe about it. We're telling you to make your own decision, but these are the facts. Engage in this. And this is the story of our time. Now let me ask you this last question. By the way, I've enjoyed today,
Starting point is 01:03:16 and this is the one exception on my show where I wish we did go three hours. I'm honored to hear that. Thank you. No, I really do, because obviously we've only scratched the surface here. So you've told us what the story is today. I want you to take your crystal ball out for a second. And I don't want to go ten years forward.
Starting point is 01:03:34 I want to go five years from now. So five years from now, what is the story? What does the world look like? And I don't mean what you hope it to be, because I heard a lot of hope in there when we went to the ethics part of how these machines are going to work, and I also saw you wink at me when I asked you if there was a conglomeration of power coming. So I have a sense, my sense is that you are shining the light on what matters now, and that you're actually holding back a little bit, to some extent, about how deeply concerned you are, because you don't want to alarm people. But I want to keep the spotlight on the immediate threats that we have to address. When we address them and I feel stable about them, we'll talk about the rest.
Starting point is 01:04:26 So let's go five years from now. Crystal ball; it's not that far ahead. What does the world look like at that time? We will be in deep shit. Honestly, I apologize for using bad language, but unless we start truly putting effort into this, there will be several disruptions that completely redesign the fabric of society. As I said, jobs is definitely one of them. The other, which we'll then have a chance to speak about, is AI in the wrong hands.
Starting point is 01:05:00 So we are bound to see a significant advantage on one side of the arms race, because that's the way AI has been. Someone finds a breakthrough. And once you find the breakthrough, look at the OpenAI and Google, or Alphabet, story, where ChatGPT with reinforcement learning gets that immediate advantage that basically puts ChatGPT out in the world. And for a while, the world believes that Google has lost its edge, right?
Starting point is 01:05:32 And had Google not responded by putting Bard out there, you could actually believe that Google would be gone, because ChatGPT is a very interesting new way of search. So you're going to see that. You're going to see some players creating a very big advantage over others. And the fear is that this player could be a hacker. It could be a defense authority on one side of the world, not your side. It could be a drug dealer that suddenly realizes, oh my God, there's so much more money if I start to
Starting point is 01:06:06 rob banks or convince people or blackmail people or do this or do that. And, you know, it seems to me that humanity will only create the artificially intelligent policeman when the artificially intelligent criminal shows up. Well, that tells me, everybody... no, no, you shouldn't be sorry. That's an honest answer. And that's why I had you here today, because you speak your truth. First off, Mo, I want to thank you, number one, for taking the risk. You're under a lot of threat for doing this, and you are, in every sense of the word, everybody. Not just reputationally; I'm telling you, he's under threat for this. And so the work you're doing
Starting point is 01:06:49 may tip the scales in the future of the world. And so I'm very grateful for your existence, brother. And I want to thank you for today, and I want to have you back, because this is worthy of more than just the time today. But we did accomplish what I hoped we would today, which was shining a light on all of this. And by the way, everybody, that's why you want to go get Scary Smart: The Future of Artificial
Starting point is 01:07:07 Intelligence and How You Can Save Our World. Go get that book. And by the way, after you read it and you feel like we're in deep shit, then go read Solve for Happy: Engineer Your Path to Joy. It's the perfect companion, so you can be okay with what's happening. You'll be okay.
Starting point is 01:07:22 And that's his other book. And I just really, really want to thank you here today. And everyone, you have an obligation now, for your family, for wherever you live in the world, to begin to educate yourself, to engage and involve yourself with this technology, read about it, stay close to the sources that provide you any information about it,
Starting point is 01:07:40 keep yourself educated and as on the cutting edge as you possibly can. And the people in power that are around you, let's start to get them to have, you know, some discussion about this and shine some light on this topic, because the world is changing right now, and this notion that the machines are already more intelligent than humans is of great concern. So, all right, everybody. Mo, thank you so much for today. I enjoyed this tremendously. Thank you so much for having me. It's really kind of you to put me on your platform.
Starting point is 01:08:10 And I'm holding you to showing me around Dubai when I come out there in October. Absolutely, it's on me. Coffee is on me, and you will love it. I love it, brother. All right, God bless you. Everybody, God bless everybody else here. Max out your life.
Starting point is 01:08:22 Share this episode. If you're ever going to share one, it's this one. Share it, everybody. Take care. This is The Ed Mylett Show.
