The Prof G Pod with Scott Galloway - Understanding AI’s Threats and Opportunities — with Mo Gawdat

Episode Date: July 13, 2023

Mo Gawdat, the former Chief Business Officer of Google [X] and an expert on happiness, joins Scott to discuss the need to control our response to AI, how this technology is impacting society, and the four major threats he's identified. We also hear about Mo's transition out of tech to focus on happiness. Follow Mo on Instagram, @mo_gawdat.  Scott opens with his thoughts on Threads and has a question for Mark Zuckerberg: are you Darth Vader, or are you Anakin?  Algebra of Happiness: express love and admiration as often as you can. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:00 Support for this show comes from Constant Contact. If you struggle just to get your customers to notice you, Constant Contact has what you need to grab their attention. Constant Contact's award-winning marketing platform offers all the automation, integration, and reporting tools that get your marketing running seamlessly, all backed by their expert live customer support. It's time to get going and growing with Constant Contact today.
Starting point is 00:00:28 Ready, set, grow. Go to ConstantContact.ca and start your free trial today. Go to ConstantContact.ca for your free trial. ConstantContact.ca. Support for PropG comes from NerdWallet. Head over to nerdwallet.com forward slash learn more to compare over 400 credit cards and find smarter credit cards, savings accounts, mortgage rates, and more. NerdWallet. Finance smarter. NerdWallet Compare, Incorporated.
Starting point is 00:01:17 NMLS 1617539. Episode 258. 258 is the country code belonging to Mozambique. In 1958, Bank of America introduced their first credit card, and the U.S. 50-star flag was designed by a high school student for a history project. He got a B- for it. True story. My sex ed teacher at my high school was fired. He was teaching the students about ejaculation, but it went right over their heads. That's good. That's good. Go, go, go! Welcome to the 258th episode of the Prop G Pod. In today's episode, we speak with Mo Gawdat, the former chief business officer at Google X, an expert on happiness, and the author of Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World.
Starting point is 00:02:14 We discuss with Mo the need to control our response to AI, how this technology is impacting society and the four major threats he's identified. We also hear about Mo's transition out of tech to focus on happiness and how he's dealt with profound loss. This was, we all agree, this is one of our favorite conversations. He's just, I mean, in addition to obviously being very successful and very smart, he's a very soulful, thoughtful guy. And anyways, I'm in New York after my Arrested Adolescents World Tour of Ibiza and Mykonos. I'm trying to get some work done here in the city. I love London. Actually, do I love London? Is that fair? I like London. The weather, I don't know if you've heard, the weather's not great in London. And my son's at boarding school, so I've already like, I was expecting to lose him in three years, not now. And that's kind of bummed me out. But New York, I don't care what anyone says, New York is number one. There's nothing like it here. I'm in Soho and I'm walking around and it's just booming. And I can't get over the businesses that went out of business in the pandemic. There's new ones and really, it feels like there's kind of the city has shed its skin and it's back stronger and better than ever and i walked around soho on sunday and just went shopping and stopped in and got went and spent 85 dollars at baltazar
Starting point is 00:03:32 boulangerie on donuts and coffee and pastries granted i'd taken an edible so you don't want to go to just a pro tip here you don't want to go to Balthazar's Boulangerie High. That's just a recipe for spending $80 on fine French baked goods. But it was worth it. It was worth it. It was worth more like 85 bucks. And then I went across the street to the MoMA store where they just sell like cool skateboards, overpriced skate painted skateboards, see above edible. And I just had the best time walking around alone. I'm also, I'm really going off script here. I'm getting recognized all the time now. And I like it because people are super nice. And I mean this sincerely, if you recognize me and you have any inclination to say hi, say hi, I'm friendly and it always makes me feel good to see people. But, but I read something and I wish I hadn't read it. And that is supposedly every time someone comes up and says hi to you, there are a hundred people who recognize you and don't say hi. And that makes sense because I would imagine I've seen several hundred people I recognize from various media and I never say hi to people. I'm self-conscious. I don't want to bother them, whatever it is, or my ego is too big to go up and express admiration. And so I'm a bit paranoid now when I'm out alone wandering
Starting point is 00:04:46 around high or high on an edible. I don't know if that qualifies as high. And I think, oh my God, hundreds of people are like looking at this old man they know walking around high in the MoMA store and buying pastries. Anyways, that has nothing to do with the show today. So where are we? Where am I? Anyways, enough about me. I'm headed to Colorado on Saturday. If you see me walking around the streets, I'm embarrassed. See, I should have said Aspen. I'm going to Aspen, but I'm embarrassed. Isn't it strange? I was embarrassed by how little money I had growing up. And as a young man, I was very self-conscious about the fact I didn't have a lot of money. And now that I have a lot of money, I'm self-conscious about that. I never had the right amount of money. I've never had the right amount. No,
Starting point is 00:05:27 I shouldn't say that. It's much better. This is much better, but I'm still self-conscious. And it's weird how I caught myself saying I'm going to Colorado because I don't want to say I'm going to Aspen. Anyways, enough about me. Where are my pastries? Okay, let's get onto the business news. The thing everyone is talking about, the most ascendant platform in history, in history. Even Mark Zuckerberg is surprised at how well that went. Wow, I mean. So what do we have here? We have Threads, which is an interesting study in disruption.
Starting point is 00:05:58 Why is it an interesting study? Disruption is more a function, more a function of how vulnerable the sector is. In other words, we have a tendency to just think of it in the context of Amazon being excellent at execution and having access to cheap capital, which they did. But the reason why Amazon got to $100 billion as fast or faster than I think almost any company in retail, certainly in retail, maybe in history, is that grocery stores and department stores and record stores and bookstores hadn't innovated in about 40 years, and people were ready for an alternative. because ad-supported cable television kept getting more and more expensive. And then you woke up and said, well, I can have CNN and ESPN and Bravo's 1, 2, 3, and 4 for $120 a month, or I can have Netflix for $12 a month. The value proposition of Netflix really came into full view for me
Starting point is 00:07:01 just the other day. I do watch ad-supported cable TV. One, because I'm old as fuck. But two, I really like Anderson Cooper. I like Fareed Zakaria. I like flipping around the channels and seeing Goodfellas 75 times in a weekend. What is it about that movie? Goodfellas and the Shawshank Redemption. It's just like if you have cable TV, you will see these movies 700 times. And every time it gets better and better, these movies. By the way, Shawshank Redemption was a box office failure and it found life on cable television. And now it's one of the most viewed movies in history. God, what a wonderful film. Anyways, anyways, it struck me. I thought, well, I want to get cable TV on my gigantic wallpaper, douchebaggy TV. Look how
Starting point is 00:07:40 impressive I am. By the way, the wallpaper TV, it's just so money. It's just so cool. It's literally the coolest thing I own. Anyways, other than a steel blue grape Dane, that's probably the coolest thing I own, although it's not really a thing. I miss my dog. I miss Leah. Last summer, totally indulgent, took her to Aspen with us. God, that was great. It was a total celebrity walking around with the great Dane. Not easy to get a Great Dane to Aspen from London, by the way. So anyways, I wanted to flip on cable television and the cost would be around 75 bucks or 55 or 75 to turn on live TV on Hulu. Hulu. And I'm like, okay, let me get this. Netflix is costing me 12 bucks. Apple TV Plus is five or eight bucks. Amazon Prime Video is another, you know, it's included in my Amazon Prime. So I'm paying 20 or 30 bucks for this torrent, this Mariana trench of content where I got to pay 60 or 70 to watch Fareed and Kim, you know,
Starting point is 00:08:42 the Kardashians. By the way, I would pay 50 or 70 bucks a month so I didn't have to watch the Kardashians. And I thought, Jesus, this just brings home how big a chin ad-supported cable TV is. So the bottom line is disruption is a function of how vulnerable the disruptee is. There's nothing more vulnerable than a total asshole. And that's how Elon Musk has acquitted himself over the last six months. What advertiser? The fastest way to get unceremoniously fired as a CMO would be to advertise on Twitter. It is all risk with no upside. It's never been a great platform in terms of ROI, but now there's more downside. So that's just not a great value proposition when you're in Cannes. And neither Musk or Linda Iaccarino, who claims to be the CEO but clearly isn't, showed up because I think they thought all we're just going to get is shit and just be apologizing and explaining all the time.
Starting point is 00:09:50 So what do you have? It's not that Threads is that good. Blue Sky, the Jack Dorsey competitor, Mastodon, or even Post News, founded by who I think is probably the most impressive digital product guy in the world right now, Noam Bardeen, and has done great relationships. Disclosure, I'm an investor, but has great relationships with news sources, Reuters, et cetera. But here's the thing. They have a 300 million person cannon they can fire at a new app. And it's just striking. And people were just so desperate for an alternative. But not only desperate for an alternative, here's the thing. People didn't want to have to rebuild their network. It's taken me 15 years to get to 550,000 followers on Twitter. And it's a huge asset base for me, and it really helps me with my career. I think of myself as a thought leader, and I realize how pretentious that is, but I'm a thought leader that does edibles and goes to MoMA because I'm deep. I'm deep. Anyways, I've invested a lot in it, and it pays off for me. It's got good spread and good reach for my content, and the moat is
Starting point is 00:11:01 huge. It's hard for me to go anywhere else. It's not interoperable. My fan base or follower base or whatever you want to call it or my community, my peeps, is not portable. So every time I would start on one of these other platforms, I would build it to 10 or 20 or 30,000 followers. And I think, God, they can take that canon of 3 billion active users and point it at this thing. And I'm at letter, not A, but I start at letter L. I think I'm at 60 or 70,000 followers already on threads and the UI is pleasant. And guess what? Here's the really wonderful thing. Here's the core value proposition of threads.
Starting point is 00:11:40 It's not Twitter. It's not Twitter. And it just strikes me that like Mussolini came across as sort of charming and likable in World War II because of the character standing on his left and right. And all of a sudden, Zuck, the Zuck, has an opportunity that's huge here. Now, what is the meta, meta opportunity here? What would be the gangster move if Mark Zuckerberg called me tomorrow and he won't and said, Scott, you've been critical of me, but let's be honest, you're a fucking business genius. That's what I imagine our conversation would be like. Anyways, I would say to him, this is a huge opportunity
Starting point is 00:12:13 for Meta. And what's the opportunity? What gets in the way of Meta in a trillion or two trillion dollars in valuation? They have two thirds of all social media. They have an unbelievable management team. They have incredible human capital. They have fantastic access to capital. They have these incredible platforms, Instagram, WhatsApp, the core platform, which is aging and dusty, but still has a ton of people on it. Facebook is the internet in a bunch of countries. This is the opportunity to do the pivot from Darth Vader back to Anakin Skywalker. And that is the pivot was pulled off by Microsoft. In the 90s, Microsoft was seen as the evil empire. They were the death star. They were incredibly sort of full body contact, competitive abuse tactics. They used to announce
Starting point is 00:12:59 products knowing they weren't going to actually release the product just to make it more difficult for another company to get adoption in the B2B enterprise market because they said, well, Microsoft's coming out with a similar product. Let's just wait. When Microsoft had absolutely no intention of releasing a product, and what did they do? They decided, well, in this market, it's better to be a good partner, so we're going to tone it back, different kind of gestalt, and now Microsoft is seen as the good guys. And I think that's a big part of the reason that Microsoft consistently is number two and sometimes the most valuable company in the world. And here is the meta opportunity for meta and for the Zuck. Start your hat white, boss. For the first time in,
Starting point is 00:13:36 I don't know, 10 years, you're actually seen as the good guy. You're seen as the good guy. So lean into that. Lean into that. You could reduce your revenue by 10 or even 20% and have the market cap of this company go up by doing the following. Agegate some of your platforms. There's no reason a 14-year-old girl needs to be on Instagram. Stop the bullshit pretending you do agegate. You don't. Become much more stringent, much more stringent around content, around medical information or elections. Have a pause 90 days before the 2024 elections, as we know this is going to be the mother of all shitshow apocalypse meets misinformation disco-a-go-go from Putin and his masters of misinformation as he tries to get Trump reelected, recognizing his only way that he does not end up on the 11th floor with a big window
Starting point is 00:14:24 at some point is to win in Ukraine. And the only way he can do that right now is if Trump gets reelected. So, boss, start thinking about the Commonwealth. Start thinking about teens and start erring on the side of caution. Start taking shit down and stick up the middle finger when someone starts bellowing about First Amendment. As far as I can tell, First Amendment is mostly from people who want their misinformation to reign supreme. And by the way, Meta has absolutely no fidelity to the First Amendment. The First Amendment states that the government shall pass no law inhibiting free speech. You're not the fucking government. You're a for-profit. This is the opportunity for Meta. For the first time in forever, you're considered the good guy. Lean into it.
Starting point is 00:15:10 Come to the light side of the force. Start being the man you want to be. Start occupying the space you command. Start thinking, okay, I have stakeholders, just not shareholders. And by the way, if you start recognizing your stakeholders are teens who are suffering from the greatest increase in depression at the hands of bulldozer parenting and social media, recognize as a parent, recognize as an American citizen, and boss, you are a citizen. You're blessed. Have you noticed? Have you noticed that if you go up and down the Western seaboard of the United States that borders the Pacific Ocean, there's a bunch of multi-hundred billion dollar and multi-trillion dollar companies? And what happens when you hit the border just above Seattle? It stops until you get to Lululemon. And what happens when you get to Qualcomm and La Jolla?
Starting point is 00:15:49 It stops until you go 5,000 kilometers or 5,000 nautical miles to Buenos Aires and Mercado Libre. You're incredibly blessed. It seems to me like you have a great family. Seems to me that you do not recognize, but hopefully as you get older, recognize how wonderful it is to be American. Well, boss, start nodding your head to the debt you are owed in terms of the blessings you have been granted to be born and raised in the great country that is America. Start giving a shit about America and our election. Start giving a shit. You have kids. Start giving a good fucking goddamn about kids. This is your opportunity. Are you Darth Vader or are you Anakin? We'll be right back for our conversation with Mo Gadot. I just don't get it. Just wish someone could do the research on it.
Starting point is 00:16:39 Can we figure this out? Hey, y'all. I'm John Blenhill, and I'm hosting a new podcast at Vox called Explain It to Me. Here's how it works. You call our hotline with questions you can't quite answer on your own. We'll investigate and call you back to tell you what we found. We'll bring you the answers you need every Wednesday starting September 18th. So follow Explain It To Me, presented by Klaviyo. The Capital Ideas Podcast now features a series hosted by Capital Group CEO, Mike Gitlin.
Starting point is 00:17:18 Through the words and experiences of investment professionals, you'll discover what differentiates their investment approach, what learnings have shifted their career trajectories, and how do they find their next great idea. Invest 30 minutes in an episode today. Subscribe wherever you get your podcasts. Published by Capital Client Group, Inc. Welcome back. Here's our conversation with Mo Gadot, the former chief business officer at Google
Starting point is 00:17:56 X and the author of Scary Smart, the future of artificial intelligence and how you can save our world. Mo, where does this podcast find you? I'm now in Dubai. So let's bust right into it. You've spoken a lot about AI and believe that it's urgent that we control it. Can you say more? I believe that it's urgent that we control the situation. I don't think we can control AI, sadly. I think most of the AI as a topic has been not properly covered by none of the types of
Starting point is 00:18:30 the media for a very long time, because it sounded like science fiction until it became science fact. So reality is that most people, until we started to interact with ChatGPT and see BARD follow on so quickly, we thought that this was the problem of our grandchildren, if you want. But the truth is, AI is here. AI is continuing to take over a lot of what we expected humans to be able to do. AI is smarter than us in every aspect, every task that we've assigned to them. And most interestingly, you know, as the media and the conversation starts to talk about the threats of AI, we try to talk about the existential threats of, you know, what would happen if Terminator 3 shows up. I think that while there is a probability that those things could happen, they're further in the future than
Starting point is 00:19:34 the immediate threats which are upon us already. We're not talking three years away. We're talking as of 2023, which require a much, much more sense of urgency in terms of starting the conversation so that we are not surprised like we were surprised when COVID was upon us. What do you think those immediate threats are? There are quite a few, but I basically think that the top four are a very serious redistribution of power, a concentration of power. There is the end of the truth. So the concept of understanding and being able to recognize what's true and what's not
Starting point is 00:20:17 is, in my personal view, over. There is a very, very significant wave of job losses that is immediately upon us, and that will affect us in ways that are much more profound, I think, in terms of threat to our society in general, you know, that basically are starting to happen already. So we need to start to react to them. And I think the most interesting side of all of this is that we have a point of no return, if you think about it, where just, you know, in my book, Scary Smart, I use the analogy of COVID, not that COVID is, you know, something that we would like to talk about anymore. But everyone that understood pandemic viral propagation could have told you around 20 years ago that there is a pandemic
Starting point is 00:21:18 that will happen. We had lots of signals that told the world that, you know, there was SARS, there was swine flu, there was, you know, so many. And yet we didn't react. Then we had patient zero. We didn't react. Then we had patient several thousand. And then the reaction was blame and finger pointing and, you know, saying, where did that come from? Who's doing this? What's the political agenda behind it? And then we overreacted. And, you know, in a very interesting way, all of that has been very disruptive to society, to economy, to, you know, quite a bit of the way we understand life as it is. So my feeling is that we've been screaming. I personally left Google X in 2018, in March, beginning of March. On March 20th of 2018, I issued my first video on artificial intelligence, which was reasonably well viewed. We had like maybe 20 million views or something that basically said that we are going to be facing challenges and
Starting point is 00:22:25 that we need to behave in certain ways. I published my book in 2020, which was 2021, which was the business book of the year in the UK, for example. And yet most people looked at it and said, yeah, it's fascinating what you're saying, but it's not yet here. Right. Those things that I'm talking about are here. you know the the a patient zero moment could be the u.s elections in 2024 i mean what is an election when you don't understand what the truth is when you're unable to recognize what's fake and what's not uh you know there is an arms race to capturing the power of ai and the ones that will capture the power of AI will literally create a super companies, super countries, or super humans that are really not looked at as a
Starting point is 00:23:15 redesign of society as they should be looked at. I think my main point to summarize is that there is no threat from the machines in the immediate term, but there is a big, big threat from the way humans will interact with the machines in the short term. Whether they'll use it to upgrade their power or whether they'll use it to unbalance power or whether they'll use it for criminal activities or whether it's just naturally going to run us out of jobs. So I really appreciate a couple of things about what you just said. The first is I find a lot of catastrophizing around, you know, it's a great headline and great clickbait to have the end of the species with a tortured genius at the center of it. And I have trouble, and granted, you're going to forget more about this than I'm ever going to know, but seeing the immediate path to human extinction of LLMs. And I also appreciate the fact that you've identified four specific more short-term threats that we should be focused on. I want to go through each of them and get sort of an unpack. So the first is, provide a, provide a viewpoint and then some cases a contrary viewpoint, because I would actually describe myself as an AI optimist. A concentration of power. So are you talking about a concentration in terms of corporate power or view or look at it from a defense point of view or look at it from an individual point of view, he or she who captures AI captures the superpower of the century, basically. Superman has landed on the planet. Superman is that being with immense superpowers. And,
Starting point is 00:25:07 you know, it so happens that in the story of Superman, family Kent is a moral, you know, values driven family that basically encourages Superman to, you know, to protect and serve. And so we end up with a story that is all about, you know, Superman that we know, but if they encouraged Superman to make more money, to increase market share, to kill the other guy, which is really what our world has constantly been prioritizing as the set of values that we appreciate, then you could end up with a supervillain. And what ends up happening is that the supervillain in that case is just one Superman, right? So there is no other that can compete with that. So, you know, if someone manages to create
Starting point is 00:25:59 an AI that manages to crack through the defense authorities of some government or another and claim the nuclear weapon codes, that's a move that is a game over move. This is a checkmate, basically. And it is not unlikely that there is some defense department on every side of the Cold War that's working on that right now, trying to create, to crack through the codes of the other guy. Even at the individual level, if you want to go into the crazy path of augmenting humans with AI, which would happen in multiple ways, right? So, you know, the idea of Neuralink, for example, or even if it's just through preferential application of intelligence to some humans over the others, which normally, you know, the Californication of technology will position it as, oh, this is an amazing way to improve the lives of everyone, even in Africa. But the reality of the matter is that if there is
Starting point is 00:26:58 a single group of individuals or a single individual in specific that can gain a tremendous advantage of intelligence, that individual is likely to try to keep that advantage by not sharing that intelligence with others, right? So if you assume that there is a certain place where that augmentation of intelligence with humanity starts, then everyone else is at a disadvantage. Everyone else is likely not going to get their smartphone connected to the same level of intelligence, because by definition, that provides a very significant competitive advantage. So let's go on to the next one, the end of truth. My sense is that truth has been under attack for a while. Quite a bit. engagement. And oftentimes, that type of content has no fidelity to the truth. The algorithms are neutral or benign in the sense that whatever causes controversy, whether it's vaccine
Starting point is 00:28:12 misinformation or election misinformation, if it causes a controversy and more comments and more enragement, more engagement, more Nissan ads, that these companies have incentive to not provide any guardrails around the pursuit of truth or something or avoiding stuff that just blatantly falls and result in bad outcomes. Don't we have the opportunity, if we put in place the right incentives, couldn't we use AI as a missile shield as opposed to just a hypersonic missile? Isn't it a function of we have the wrong incentives as opposed to the technology itself being a threat? Spot on. Spot on. I mean, as we continue our conversation,
Starting point is 00:28:56 you will more and more uncover that my true view here is that I'm not afraid of the machines at all. I think the machines are absolutely neutral in terms of what they can provide or deprive us of at any point in time. The worry is that we live in a system that is highly capitalist, highly consumerist, highly power hungry, right? And so accordingly, if you apply the same principles of today's society to AI, basically putting them on very, very powerful steroids, then more of what we do today will be done. And more of what we do today will be untraceable, will be unmanageable in many ways. And so when it comes to the truth specifically,
Starting point is 00:29:46 you are absolutely spot on. The truth has not only been under attack, it has been put to the side because it doesn't serve agendas, not even political agendas. It just doesn't serve the agenda of the big news networks, which want to capture your attention with more negativity than positivity. And by definition, you know, that means that they're sort of one-sided on the truth, right? You can see that social media, for example, is attempting to highlight certain sides of exaggerated fakeness instead of attempting to give you the truth as it is, because that's where you get the likes and the followers and so on and so forth. And AI is just putting that on steroids. AI is going to, by definition, which has been happening for quite some time, reinforce the bias.
Starting point is 00:30:40 So, you know, when our choices is to show violence in the news, for example, AI will notice that as the trend of humanity and basically continue to magnify that. If the marketplace is available for face filters and deep fakes, then lots more investments will go into faking the truth so that social media has nothing to do with the truth anymore. I always ask people to go to any of the social media platforms and search for the hashtag AI model, for example, which will give you examples of beauty that have already surpassed humans' abilities to catch up with, which means that humans will continue to compete with an illusion, a mirage, really. I think the biggest challenge, however, Scott,
Starting point is 00:31:25 which really needs to be brought to the spotlight, is creation. So we've magnified biases of the truth for a very long time, but we're now creating complete fakes that are almost undetectable, And we're doing that at a very, very low cost in very, very fast speed. So, you know, things like stability, AI's stable diffusion, for example, and the ability to use prompts to create images that are very indistinguishable from reality, or the idea of being able to fake my
Starting point is 00:32:06 look and my voice and my tone in a deep fake video and creating uh you know anything really from a creation point of view it's becoming more and more difficult to even know that what you're seeing existed in the first place let's go on to the next one job loss my sense is that we've been to this or at least i feel like we've been to this, or at least I feel like we've been to this movie before, whether it's automation or the cotton gin or textile manufacturing technology, there is some job loss early on. And then we find that that additional productivity creates new opportunities. We knew that automation was going to eliminate jobs on the factory floor and factories, but we didn't anticipate car stereos or heated
Starting point is 00:32:45 seats, and we ended up actually growing the employee base. And my sense is that ARC has happened in almost any industry when there is technological innovation. Why is it different this time? Why do you think this would be permanent job loss? It is different because we have moved the definition of a job along the capabilities of a human for a very long time. So, you know, when we were out there hunting in the caves, our capability were sort of aggression and strength, right? Then as we moved to the agricultural revolution,
Starting point is 00:33:22 it became, again, you know, maybe a little bit of the use of strength and discipline. And then when we moved to the industrial revolution, it became a skill, and then really not a skill, but more hours. And then we moved to the information revolution, and basically, we replaced our capabilities with brains, with intelligence. As the jobs that depend on intelligence, as we go up the hierarchy of human skills and talents, and we end up at intelligence as the last resort of jobs that we had in this current age, as that is taken away by machines that are more intelligent than us, we don't have any more skills
Starting point is 00:34:05 as humans to replace that other than one skill, which I keep advocating to the whole world, which I believe will become the most valuable skill in the next four to five years, which is human connection, right? So, you know, me as an author, I claim to have written insightful books for the last five, six, seven years, you know, that people found intriguing and thought-provoking and so on. Going forward, the industry of being an author is going to dwindle, because I think not only can books be written quicker, but they will be written in abundance by people who are not typically
Starting point is 00:34:45 authors. So there is a very significant disruption to the supply-demand equation when it comes to my books in comparison to everyone else. So it doesn't mean that the book industry will decline as a total, but the book industry will be distributed along a very large spectrum of providers, if you want, right? Which basically, from a supply-demand equation, diminishes the value of any product provided. But what will not go away is if I am in a stadium or in a big theater with 10,000 people, as a human speaking to 10,000 people, this is not going to be replaced in the immediate future. It will be, by the way, replaced in that longer-term future with avatars and, you know, maybe virtual reality or maybe holograms or whatever, but not in the immediate future. I am maybe a little luckier
Starting point is 00:35:38 in that I can achieve that human connection. But think of music, think of movies, think of graphic design, think of all of that. These are jobs that depend on creating a persona that can now be created better with AI. You know, so to me, I think the music industry will shift back to its origin, which is live performances, because music creation can definitely be done by the machines. And I think eventually we will end up, not in the very far future, I think within a year or two, we're going to end up watching movies that are from A
Starting point is 00:36:16 to Z created by a machine, where no human has ever contributed or acted in that movie. But then we will still have the few vintage actors, where humans will say, oh, can you imagine, Tom Hanks is actually a human? You know, let's go watch that movie. Could it go the other way? Because both of us are authors, and I've thought that, similar to what technology did in the music industry, where it made it global and the top artists started doing 10 million albums instead of one, could it go the other way? And that is, really great authors use AI as a tool and your books become better. But the initial kernel of that value, that creativity. I mean, I'm writing a new book and I'm using AI for thoughts and an outline, but I find it really does lack, and I don't
Starting point is 00:37:14 know if it ever gets there. I don't know if there's ever the ghost in the machine, or a move to sentience, where it comes up with something original. Not yet, and that's the "not yet," right? The creativity, the connection you're talking about, the human. I see it as like word processing or a thesaurus, and obviously it's more powerful than that. And that is, AI is not going to take your job, but somebody who understands AI is going to take your job. Couldn't it be that, like most technological innovation, it just makes the best even better? And if you're a mediocre lawyer, a mediocre author, a mediocre artist, you're in deep trouble, which creates a set of social externalities and problems. But it strikes me, at least so far, and granted, I realize we haven't hit the real curve here, we have unemployment at historic lows in the West. Couldn't this be a deflationary force
Starting point is 00:38:02 that forces mediocre professionals to find another job and brings down wage pressure, which is fueling inflation, and be great for the economy? I mean, isn't there an optimistic view that this tool could be part of the solution and not the problem? Because there's been so many movies about the end of work, right? That jobs are going to go away. We're all going to be sitting on our couch with no purpose and no meaning. And it hasn't happened. But you see a future where the march up the food chain here will not stop, and slowly but surely, job destruction will far outpace ideas for new businesses and new opportunities.
Starting point is 00:38:42 That's where you are. Yeah, I think, while I don't disagree with what you said at all, I think what you're looking at is the immediate future, right? And I think the debate that a lot of computer scientists and AI scientists are not making clear to the world is that the mid-term future is not that far. So the truth is, yes, today, Scott can still definitely write a better book than someone who has never written a book and is asking ChatGPT to write a book for him or for her. So this is the reality in the short term.
Starting point is 00:39:23 Very quickly, however, in that short-term reality, there will be a supply-demand imbalance, because readers will be excited to try those things, sometimes some of those books will hit, and so on. It's simply as if, you can imagine, 50,000 new authors came on the market today, right? Maybe 500,000, maybe a million. We don't know, right? So this is one side. The other side, which is really the core of my message
Starting point is 00:39:51 when I talk about this topic, is where is the near future? The near future is that those machines are advancing at paces that are completely misunderstood by those who have not lived within the lab. So let me give you an example. ChatGPT today is estimated to have an IQ of 155. If you just look at the tasks that it can perform, you know, passing the bar exam or whatever, it is estimated to be at 155.
Starting point is 00:40:24 Einstein was 160, right? ChatGPT-4 is 10 times more intelligent than ChatGPT-3, in a matter of eight months. So if you assume that another 10-times improvement will happen by version 5, version 6, version 7, whatever you want, then within the next couple of years, you're talking about a machine that is at a 1,000-plus IQ, which, with all due respect for you, is definitely not a territory you can compete in. That's number one. Number two, you spoke about ingenuity, which I believe, you know, is definitely still on the human side. So they're currently generative, okay, where we basically believe that they're coming up with new things, but they're basically coming up with the best summation possible of the data set that we give them. Okay, the interesting bit is, first of all, the data set is expanding very quickly. And a big
Starting point is 00:41:22 part of the data set, believe it or not, is no longer coming from humans. So a big part of the data set is machine-generated knowledge that becomes part of the data set, right? And we've seen that before. You know, those who are deep into artificial intelligence remember what is known as Move 37, when AlphaGo was playing against Lee Sedol, the world champion of the game Go. Move 37 was where the machine made a move that had never been seen before in the game of Go, right? It was mathematically odd, but strategically very interesting. And it completely disrupted our understanding of how the machine works, to the point that Lee, the world champion, asked for a 15-minute recess to understand what happened. Okay. So we've seen that countless times in AI, where you would see that the machine comes up with strategies
Starting point is 00:42:23 and with ingenuity, things that we have not taught it. It was never available in the data set, and it would learn that on its own. Emergent properties is one way we refer to that. Those emergent properties include creativity, they include ingenuity, they include emotional sentiments, they include languages we haven't taught them, they include quite a bit of what we didn't expect would happen. So if you want to be optimistic, you have every right to be optimistic, because we're still in a good place. But if you attempt to imagine what something moving at that pace will sooner or later reach, then you have to start getting concerned.
Starting point is 00:43:09 Because even if it takes seven years instead of two for ChatGPT to be a thousand times more intelligent than the most intelligent human on the planet, seven years is not that far. And that point in our future is inevitable. There is no turning back with the kinds of investments pouring into AI. It is inevitable that they will be smarter than us, with their architectures, with their memory size, with their compute capabilities. It's inevitable that they will be smarter than you. They already are in the tasks we assign to them. So when we figured out fission and we split the atom, and this incredible new energy source was discovered, we sort of immediately went to weaponizing it as opposed to trying to figure out how to turn it into free energy and do away with carbon. And a lot of scientists, my understanding is, once the atom was split, became very depressed, and some even committed suicide, because they understandably said, now that the world, an unstable world with different agendas, is able to split the atom, it's the end of the species.
Starting point is 00:44:16 And you could understand at that time how they thought that, but we went on to create the International Atomic Energy Agency. We have reduced nuclear weapons from 50,000 to 10,000. My understanding is that there's battlefield technology, lasers that in an instant would blind everybody on the field. And we've decided, even adversaries have decided, to cooperate and not make these weapons available. We've done what I think is important work around cross-lateral or multilateral treaties around halting the progress of bioweapons. Couldn't we do the same thing here? I don't want to put words in your mouth, but are you arguing for some sort of global agency, whether it's under NATO or a multilateral agency, that says,
Starting point is 00:44:57 okay, similar to the other threats we faced, we have to come to some sort of agreement around what this can or cannot be used for? I love that you bring this up. This is a big chunk. It's not my entire message, but a good chunk of the message is get up and start acting. Now, you have to understand that the nuclear treaty took us decades to reach, and that we still have 10,000 nuclear warheads out there. And last year, when, you know, Putin started to threaten, you know, NATO, we were still discussing the threat of nuclear weapons. So we haven't ended the threat. We've just sort of reduced the threat to the superpowers, right? So this is one side of the debate. The more interesting side of the debate, and I
Starting point is 00:45:45 think the core of the matter, is my call to action. I call this, and, you know, Tristan Harris of The AI Dilemma and The Social Dilemma talks about that as well, we call this an Oppenheimer moment, right? This is where the nuclear warhead is about to be manufactured. And the best time ever to have evaded nuclear war and the threats of nuclear war would have been by not inventing it, or by agreeing as humanity upfront that this is devastating. We don't need this as part of humanity. But now we're late, right? So now this is already here. And we needed to see the devastation to be able to realize that this is something threatening enough for one objective to happen. And that's what I keep calling for when I talk to governments, when I speak publicly. I keep calling: this is a moment
Starting point is 00:46:37 where there is such a significant disruption to the fabric of society, to the safety of humanity. As I said, not because of the existential crisis, we'll talk about that later, but because of the possibility of someone using this against the rest of us, okay? That we need humanity to rise above our individual selfishness, above our nation's selfishness, and get together and talk about the well-being
Starting point is 00:47:07 of humanity at large. We need some place where we say, look, AI can be amazing for every one of us, to the point that we actually may not ever need to fight again because there is abundance for everyone. But can we please get together and put aside our differences between China, Russia, and the US, and try to do a treaty that says, let's do this together for the benefit of humanity, not for the benefit of any individual nation? Now, as I say that, I sound very naive. Why? For two reasons. One is, this is a typical prisoner's dilemma. And the problem with the prisoner's dilemma is that I cannot trust the other guy. So when two prisoners are given options that play
Starting point is 00:47:52 them against each other, the challenge they have is not understanding that choosing the better option is, you know, better for them. It's that they don't trust that the other guy will keep to it. Okay. And at this stage of development of AI, it's not just that America doesn't trust China and Alphabet doesn't trust Meta. Okay. It is that we don't trust that there is no other criminal somewhere off the grid capable of developing AI while we are unaware of their existence. So by definition, every player in this current life that we're living is in an arms race. This is an arms race, okay? Because the threat of one of them beating the others is extremely expensive to the others. And so everyone will continue to pour resources
Starting point is 00:48:48 on this. Everyone will continue to pour investments on this. And everyone will attempt to be the first one that creates the nuclear bomb that's called artificial intelligence. The thing that really scares me about this is that when you're talking about previous technologies that have been represented this type of threat, if you believe, as I do, and obviously you do, that it could be used for pretty malicious objectives, is the inability for verification. We know when someone's performed an underground nuclear test, this is really hard. It would be very hard to verify that Iran is not, you know, going full force at AI. When you speak to people about, I think a lot of people and a lot of agencies in a
Starting point is 00:49:36 lot of countries are going to probably nod their head and say, this warrants thoughtful regulation and cross-lateral agreements. And I can't imagine or I would think that the biggest nations would get together and say, okay, we do have a shared interest in trying to wrap our arms around this and maybe figure out a way if there's a way of addressing the prisoner's dilemma here. What is that? Let's assume that people have incentives or nations have incentives to figure this out. What does the organization and the mechanism and the regulation potentially look like? Is it a Geneva Convention sort of thing? Is it an interpol? How do you think you begin to address this problem, assuming we can get good actors or nations, good and bad actors, to agree this is something that warrants additional resources? I would picture something that's more analogous
Starting point is 00:50:25 to the FAA or FDA, right? Let's say FAA a little more than FDA, to be honest. But basically, something that says build anything you want, as long as it's safe, inspected, and agreed by everyone to be published in public and available to the world. Right. And I don't, you know, debate what Sam Altman, the CEO of OpenAI, says when he speaks about developing it in public so that we can iron it out together. Interesting idea, if you ask me. But the truth is that having ChatGPT out in the world crossed all three barriers. By the way, all of us, you know, AI scientists and business people and so on, we said there were three barriers that we were not supposed to cross. One is don't put them on the open internet until
Starting point is 00:51:19 you're sure of their safety. Two is don't teach them to write code. And the massive wave that you have today is that AIs are prompting AIs. So we're being sidetracked from the conversation of AI developing new code that AI is asking it to develop. This is a disaster waiting to happen, if you ask me. Now, when we compare this to nuclear weapons, the statement that I say, which is a little scary, so I apologize, but it really is worthy of attention, is we've never, ever before created a nuclear weapon that was capable of creating nuclear weapons. Understand that. So we now have created artificial intelligence code that's capable of creating code. Right? As a matter of fact, it is it is incented to create code, and it's incented to create better code
Starting point is 00:52:18 than humans. So if you take, you know, as I said, the immediate threats of that is that some of the, you know, Emad Mushtaq, for example, the CEO of stability.ai, basically predicts that there will be no developers, no human developers in the next five years, right? And when you really think about that, again, stability's work will also enable us to have AI with the capability of chat GPT on your mobile phone within a year to a year and a half without connection to the internet, file sizes of a couple of gigabytes. Okay. I mean, at least that's the claim. So they may get there in a year and two and five, it doesn't matter. But the claim is that, and the attention and the investment is those minimum personalized copies of artificial intelligence that are everywhere. Now, when you
Starting point is 00:53:11 really look at the big picture of this, where the ball is today is not that scary. As a matter of fact, it's quite exciting. For someone like me, who loves knowledge so much, to be able to talk to someone or something with so much knowledge is amazing, right? But it's where the ball is going to be. This is why I say there is a point of no return, right? Where the ball is going to be, in my personal view, is that the advancements of the capabilities of AI are going to continue at the pace that we've seen. And then there will be a case that I call patient zero, right? Patient zero is a disastrous event that will capture the spotlight and the
Starting point is 00:53:51 headlines of the news, where we will say, whoops, this thing can go wrong, right? The problem is that by the day we agree to that, governments agree to that, and we start to put all of that infrastructure in place to try and regulate and have oversight and so on, you've already had so much code out there that has no control in it, okay? No code that says, switch me off. No code that says, don't come to this part, right? And by then, we will have patient zero. We will all agree that we need to do something about it,
Starting point is 00:54:29 but it will become really difficult to do anything about it because that point has been crossed. When we had Chad GPT and Bard and others online, writing code, being prompted by other machines, and out in the open internet, we're already crossing that point, but at least we're not crossing that point. But at least we're not crossing it with a thousand different incarnations of AI. So maybe now is the time to act.
Starting point is 00:54:52 We'll be right back. Hey, it's Scott Galloway. And on our podcast, Pivot, we are bringing you a special series about the basics of artificial intelligence. We answering all your questions what should you use it for what tools are right for you and what privacy issues should you ultimately watch out for and to help us out we are joined by kylie robison the senior ai reporter for the verge to give you a primer on how to integrate ai into your life so tune into ai basics how and when to use ai special series from Pivot sponsored by AWS, wherever you get your podcasts. Hello, I'm Esther Perel, psychotherapist and host of the podcast Where Should We Begin, which delves into the multiple layers of relationships, mostly romantic. But in this special series, I focus on our relationships with our colleagues,
Starting point is 00:55:46 business partners, and managers. Listen in as I talk to co-workers facing their own challenges with one another and get the real work done. Tune into Housework, a special series from Where Should We Begin, sponsored by Klaviyo. So I want to put forward a couple of theses and you tell me where I've got it right and where I've got it wrong. I think it's Carlo Cipolla, the professor from Berkeley who wrote a book on intelligence and the stupid,
Starting point is 00:56:20 and he defined intelligence as people who help themselves while helping others. And he described people who help themselves while hurting others is the bandits and i would argue that the majority of capitalist corporations are run by bandits on that it's not a moral they're not a moral people but they're amoral and there's a lot of distance between them and the externalities uh that they that they. And money is an incredible, I don't know, it's incredible at blurring your vision in terms of seeing the actual impact and stopping what you're doing. It feels like we've outsourced intelligence to government. And I'm not hopeful. As a matter of
Starting point is 00:56:57 fact, I'm rather cynical at the strategy of calling on business leaders, better angels to show up. I just haven't seen it happen. I don't think they're bad people, but I think they'll consistently in a capitalist society make an excuse for what gets them the Gulfstream 650 extended range versus just the Gulfstream 650. Isn't this really about an appreciation for institutions and government and asking them to step in and do what they're supposed to do, and that's prevent a tragedy to the commons. I worry that it's dangerous to think the better angels of Alphabet or Meta or Amazon or Alibaba are going to show up. Doesn't this really require swift and thoughtful and strong government intervention? Absolutely.
Starting point is 00:57:41 Absolutely. If you want the short answer, absolutely. Not because those people are evil, just like you said, it's because of systemic bias where you, society would blame Sundar more if he created an AI that's, you know, had an issue in it. And I know, Sundar, I worked with him. I know that he wants what's good. Okay. But he is in a system that's telling you, these are your incentives. Grow the business, protect the business. It's a for-profit company.
Starting point is 00:58:19 And in a very interesting way, when I lived inside Google and, you know, worked very closely with Larry and Sergey, Larry's point of view openly was, if we don't do it, someone else will do it, and we are the good people. You know, I loved working with Larry and Sergey, and I think they were values-driven. But the truth of the matter is, everybody thinks they are the good people. Okay. So leaving it to business, you're not leaving it to individuals. You're leaving it to a system that chooses to prioritize GDP growth and profit growth over everything else. Okay. We're leaving it to a system that says it doesn't matter if it's ethical, as long as it's legal. There's a mega, mega difference between the
Starting point is 00:59:06 two. You can do a lot of things that are not ethical, but find the loophole that allows you to do it legally. And I think that the reality of the matter is that by doing that, we're going to lose. Now, on the other hand, again, it's naive to expect the government to regulate. Why? Because the US government wants Google and Alphabet to lead because they're afraid that Alibaba and whichever other Chinese companies will lead. So they don't have... The system that we created as humanity
Starting point is 00:59:41 is what is about to kill us. I mean, I don't mean, you know, kill us physically, but the system, that systemic bias of constantly magnifying profit and power is where we are today, where every single one of us knows, you know, it takes a very ethical, courageous position to stop at a point in your life and say that's it i'm going to leave and and and i you know when i when i did that in 2018 understand so that you don't blame those executives that until 2017 2016 i believed i was making the world better and i I was, okay? I think the problem with systemic bias is that there is a point in our constant strive for a better iPhone when the advantages no longer outweigh the
Starting point is 01:00:36 disadvantages and the price we pay, right? And this point, in my personal view, we have achieved by maybe 2010, 2012. You know, enough technology. Anything further than that has been mainly to maximize profit for those corporations. Now, so where does this power lie? This power lies in do not subscribe to threats. When Meta gives you another product that is mainly out there to compete with Twitter, and then you feel the pressure that I'm not going to be on it, and so accordingly, I'm going to lose my followers. I think there is a courageous position that we all need to start
Starting point is 01:01:17 taking by saying, I'm not going to stand in line to buy the iPhone 15. I'm not going to subscribe to another social media network that's trying to use me as a product. I'm not going to engage in rude and aggressive conversations on social media just to gratify my ego. I'm not going to let the system propagate. Is government going to do it on your behalf? I pray to God they will. But do I expect they will? I don't think so. I don't think they're fast enough, and I don't think their incentives are aligned. So when I, we talk about this a lot on my other podcast, Pivot, I speculate, and I'm curious what you think, that the first major scary moment or externality of AI is going to happen Q1 or Q2 of next year. Spot on. Spot on. It's our patient zero.
Starting point is 01:02:16 And it's going to be focused around misinformation concerning the U.S. election. Spot on. That's exactly my prediction. Okay. The first realization, the problem, Scott, is will we view it as such? Or will we get into another debate like we did in previous U.S. elections of who did what, how can you prove it, war is not only fought on the battlefield. War is fought economically, and it's fought in the minds of the people. You have Noah Harari's view of AI hacking the operating system of humanity because now it can master language better than most humans can. Okay. This is it. If I tell you right now, by the way, Scott, there is a new, some kind of shampoo that will grow your hair within 24 hours and make it red. Okay. Doesn't matter if what I told you is right or wrong. This, I have just seeded. Amen. Exactly, right? Exactly. I have seeded an idea in your head that requires you to verify or approve or disapprove. And I've just occupied your head already. And there are so many ways
Starting point is 01:03:40 that you can influence a human's perception of the world that are entirely in the hands of the machines. People don't understand this. You have at least 90% of the information you receive today was dictated by a machine. Everything you see on social media is dictated by a machine. Every ad you see next to Google is dictated by a machine. Anything that Google, you know, displays to you as organic results, it's organic as per the machine of Google. You know, four of the top 10 apps on the App Store, you know, this last month, were generated by machines. So do you think, well, I don't want to cheapen the conversation about talking about corporations and shareholder value, but I can't help it. It strikes me that overnight Alphabet
Starting point is 01:04:36 appeared, went from being what was arguably the most innovative organization in the world to being perceived as flat-footed, that it was the ultimate example of the innovator's dilemma. Then rather than being out in front and first on the commercialization of this, they didn't want to threaten this $150 billion amazing business, arguably the best business in the history of business, search. And then Microsoft with less to lose around search came in and bested them. One, do you agree with that? And two, who do you think, from just a pure corporate strategy standpoint and intellectual property and depth of domain expertise, who would you argue are likely going to be the big winners and losers around AI's application in a corporate setting? That's a very multi-layered question. So what people probably don't recognize,
Starting point is 01:05:25 and maybe I'm not the best person to say that, but equivalents of BARDs were available in AI terms to Google for a very long time, right? So the idea of providing one answer to your query has been definitely toyed around with for a very long time. This is why BARD showed up very quickly after ChatGPT was on the market. Correct? What is in the AI labs would blow you away
Starting point is 01:05:56 in comparison to what you out here in the world will see. Okay? Now, the trick is this. The trick is there were two reasons in my personal view, I'm not speaking on behalf of Google, but one, how do you monetize, Bart? If there is one answer to the question, why do I need an ad? Okay, that's one side of it. But the other side, which is what I loved about the Google I worked in, is there was no arrogance around the fact that there was no one answer to any truth. Okay? So Google ethically said, I don't have the right to give you an answer. The choice of what answer is true
Starting point is 01:06:39 is yours. Okay? I can only show you what everything out there says, and then you choose what you believe is the truth. That's a very ethical position, and I respect Google for that. What we're doing with ChatGPT and the likes today is a different formatting of that one answer. We're saying, look, there has been 2 million answers to the question you asked. Google would list the two million answers for you. Transformers and language models will say, and here is the average of those two million answers. If we really, really blend them together and give you the essence, the very essence of them, as we believe the essence is, here is the essence. Okay. Now, with that, you have to understand that Google then took an
Starting point is 01:07:27 ethical position saying, one, we don't know the truth. So we shouldn't say the truth. Two, we also don't have the right to put something out there that is not tested enough for safety, for AI safety. I think that's a very responsible position. Okay? The problem is, it's a prisoner's dilemma. We're stuck in capitalism. So the minute you put Chad GPT out there, Sundar is in a checkmate position. So what can he do other than say, respond? Because if he doesn't say respond,
Starting point is 01:07:58 he will get fired as the CEO, okay? Because his business is going down the drain and someone else will say respond. Whoever the next person is will say respond, put Bard out there. Yeah, and there's, I mean, I want to take the glass half full approach here and that is,
Starting point is 01:08:15 whether it was pouring mercury into the river, General Motors would be pouring mercury into the river from its factories right now if it hadn't been outlawed because they'd be at a competitive disadvantage by those who are allowed to do it. So it feels to me that we just need thoughtful people such as yourself advising our elected representatives to form the right bodies, whether it's a division of NATO or whatever it might be to the UN and then regulatory bodies
Starting point is 01:08:39 here. And it feels like there's a lot of incentive for cooperation here. So just before... Can I just publicly announce what, because you said rightly that the next U.S. election is the battlefield, right? This is the patient zero. I strongly recommend that the U.S. government and every other government on earth criminalizes fake content or AI-generated content that is not marked as fake or AI-generated. So you need a law in place that says if what you're displaying is found to not be actual truth, you will be put in prison for the rest of your life. Or what about, everyone keeps talking about an AI pause. I found that letter was very naive. And I also found that it lost a lot of credibility when it had Elon Musk sign it, because I don't think he's looking for an AI pause. I think he's looking
Starting point is 01:09:27 for other people to pause so he can catch up. I found the letter very self-defeating. But what about the idea of an AI pause around all election information 90 days before the election? Trying to discern what is true or not true. What if you just said to all the big platforms: for the 90 days before the election, we need you to put a pause on all election information. Amazing. As a matter of fact, wouldn't it be wonderful if we just went back to old school? That all information about the election is removed automatically, other than actual debates on stage filmed by old cameras. Yeah, I think that's right.
Starting point is 01:10:09 But again, the problem is a prisoner's dilemma, Scott. So by doing that, you're penalizing social media companies so they don't have that content, which is a big revenue source for them. You're penalizing news networks so they don't have content, you know, they don't have opinions and noise and sticky eyeballs staying there to listen to all of the fluff and all of the arguments. And it's, again, the system. But you're absolutely spot on. Wouldn't it be wonderful if we said everything about the election should be viewed in the classical old form of humans talking to humans? So you've been very generous with your time. I just wanted to touch on it for a moment. We're both really interested in happiness, or the exploration of happiness. And I'm just curious, what was the transformation? Why did you decide to start to
Starting point is 01:10:56 kind of pivot from, you know, these deep, really important subjects, including AI and technology, and start talking about happiness? Is it a personal pursuit? Was there a moment in your life where you thought you needed to spend more time on this? What was the inspiration? There were two moments. So first of all, I was the extreme example of the grumpy, rich brat. Very early in my life, I made massive amounts of money by understanding mathematics and programming before online trading was a big thing. And, you know, as a result, as always, the more money I made, the more miserable I became. And I remember vividly one Saturday morning where I completely broke my daughter's heart, completely broke her heart. You know, she's coming in Saturday morning, jumping up and down and feeling very happy about what we're about to do. And I look at her, grumpy as always, looking at my laptop, and I said, can we please
Starting point is 01:11:49 be serious for a moment? Okay. And my daughter was five at the time and her heart broke and she cried. And I really realized I didn't like that person. So I started to research the topic of happiness. You know, it took me 12 years to move from being that miserable, annoying, grumpy executive to being a very calm, very, very open-minded and cheerful and nice person, if you want. Through that journey, I used the aid of my son, who was born a tiny little Zen monk, always
Starting point is 01:12:21 had peace in him somehow. Then, unfortunately, in 2014 (I was chief business officer of Google X at the time), he called us. He used to live in Boston, and he had a U.S. tour in August; he played in a band at the time. He basically called us in June and said, can I come visit you in Dubai for a week? I feel obliged to come and spend time with you. And we said, sure. And then he went through the simplest surgical operation, for an appendix inflammation. Normally, it takes four minutes and the patient is out. But the surgeon sadly made five mistakes in a row.
Starting point is 01:12:58 All of them were preventable. All of them were fixable. But within four hours, my son left our world. No, I'm so sorry. I didn't know that. No, it's okay. I actually think he's in a good place. I really do. And Ali, before he left, he had a dream, which he spoke to his sister about, the only person he spoke to about it. So she comes running to me and she says, Papa, Ali told me that he had a dream that he was everywhere and part of everyone. I didn't know it at the time, but in most spiritual teachings,
Starting point is 01:13:37 everywhere and part of everyone is to disconnect from the physical, which basically is the definition of death. And so when Ali died, when Aya, my daughter, told me that, to my crazy executive mind, chief business officer of Google, seven years building Google's emerging markets, it translated into my mind as a quote. So I somehow responded by saying, sure, Habibi, consider it done. Okay? Because I was responsible for the next 4 billion user strategy at Google.
Starting point is 01:14:12 I knew exactly how to reach billions of people. And so in my mind, I said, okay, I'm going to write a book. And I am going to share everything my son taught me about happiness. And in a very selfish way, the objective of the book was that a part of the essence of my son, Ali, which is what he taught me about happiness, is going to be read by enough people who will tell enough people. And if I could reach 10 million through six degrees of separation in 72 years with my math, he will be everywhere and part of everyone. Right. That's where the project started. And then somehow the universe made it work. So within six weeks after the book launch, we were already a bestseller in like eight countries. My videos were viewed 180 million times. And it was clear that 10 million happy was happening. So
Starting point is 01:14:59 we went to a billion happy as a target. Basically, the rest of my life was targeted at a billion happy. My work on AI, believe it or not, is part of a billion happy, because I don't think we will be very happy as humanity if we don't get AI right. And so when I resigned from Google in 2018, I published a video that was called One Billion Happy, basically the name of the mission. And that was entirely about the fact that AI is going to magnify human tendencies, and that if we continue to be rude and aggressive and grumpy and unhappy and selfish and so on, that is what AI will magnify. And my effort has been to try and magnify the true human values, love, happiness, and compassion, to try and say, if we show enough of that online, maybe we can influence the views of AI so that they start to show us more of that and basically expect us to want more of that. And that was the
Starting point is 01:15:59 original work. And since then, I've spent the last five, six years doing nothing but this. My dream is that by the end of my life, I will have contributed somehow to waking a billion people up to the fact that they have the right to find happiness. And hopefully, I will have spent all of the money that I have earned from Google. And hopefully, we will be forgotten enough for the mission to continue after we leave as a small team. So I would love your advice. I wasn't expecting to go here. I lost a close friend a week ago, totally unexpected, leukemia that just kept getting worse and worse over the course of 12 months. And I'm really struggling with the grief. I wasn't prepared for it. I don't know how to deal with it.
Starting point is 01:16:46 That is meaningful grief. You have dealt with profound grief. There is, I can't imagine any more profound grief than losing a child. We grow up with this comfort as parents that we're going to get to go first. It's a huge source of comfort that I have. What advice do you have for people who are dealing with this type of grief? Well, I mean, it's difficult to give any advice that would take the grief away that quickly. Let's just be very open and honest about it. There is a finality to death that contradicts everything we've ever been trained to as humanity. It triggers our fear. It triggers our helplessness.
Starting point is 01:17:26 It triggers our insecurity. We cannot trust life. We miss the person that we love who left us. We are scared about where they are. We have lots of uncertainties. It's a very overwhelming trauma. And the reality of the matter is, my first advice is: grieve, fully grieve. You know, if you're angry, be angry. If you're unsure, be unsure. If you want to take a break, take a break. This is the first, you know, step on the way. Then there are two steps in my mind; one is very logical and the other is very spiritual. Okay, the
Starting point is 01:18:06 logical one is very harsh to say, but it is, you know, what they sometimes mean when they say the truth will set you free. There's absolutely nothing, nothing you can ever do to bring them back. Okay? So my very mathematical, logical mind actually, believe it or not, went out and did the research: has anyone ever come back? Right? Yeah. We had many people that came back from near-death experiences, but you know for a fact that your friend is gone. Okay. The first thing I started to think about after Ali left was, if I hit my head against the wall for 27 years, he's not going to come back. I'm going to be torturing myself and the world is not going to get better. Okay. So I found in my heart a space that basically said, I don't know how I will get
Starting point is 01:18:59 there, but I will hopefully find the positive in all of this. I don't know how I will get there, but I will learn to accept that he left. And accepting that he left, I call this committed acceptance. Committed acceptance is that sometimes life throws things at you that are so final. You know, from the silliest thing of being stuck in traffic when you have an appointment that you're about to miss, okay, all the way to losing a loved one, there are very frequent things that happen in your life that are out of your control. And what I say is committed acceptance: accept that this is your new baseline. Accept logically and emotionally that, as painful as it is, this is the end of that hug that I could get from my son. It's not going to happen again. And then commit to making your life
Starting point is 01:19:52 and the lives of those around you better despite the pain, which is a very interesting thing. You don't have to know how. You do not have to know how. You just tell yourself, now that I've accepted this tragedy, I accepted this pain, I'm going to crawl out of it. Literally, the word is crawl out of it. I'm going to do one thing today that makes my life better than yesterday. And I'm going to do another thing tomorrow that makes life a little better than today. That's it. Okay?
Starting point is 01:20:19 So this is the practical side, if you want. The spiritual side, I think, is very key, so that I don't lie to people. In my first book, Solve for Happy, I spoke about the concept of death from a physics point of view. Because with the scientific method, we struggle to understand anything that we cannot measure. And you cannot measure what happens after we die. Okay? But there are lots of concepts, in quantum physics specifically, and in the theory of relativity with the idea that all of space-time exists at once, maybe we shouldn't go into those today, but they will tell you that life is separate from the
Starting point is 01:21:02 physical. Call it consciousness, call it spirit or soul, as religions call it, but there is a non-physical element to us. That non-physical element is the part that disconnected from my son's body when he left. And that same handsome form of his that was on the intensive care table was no longer him. You could feel it. You could feel that his essence was no longer there. And if you can accept the fact that there is a non-physical element to us, okay, that non-physical element by definition exists outside space-time, because otherwise it wouldn't be able to perceive time and the passage of time. It's a simple object-subject relationship.
Starting point is 01:21:42 Then that non-physical element is not affected by the events of the physical. So my son really never died. So my son's physical form was born and my son's physical form decayed. But the essence of my son, his consciousness, has never gone anywhere. And when you really understand this, you understand that death is not the opposite of life. Death is the opposite of birth. And life exists before, during, and after. That I don't expect everyone to agree with. Okay? But to me, because I see it from a physics point of view more than a religious point of view, I tend to believe that my son is okay. I don't know okay how, but I know for a fact that I will join him sooner or later, and that I too will leave this physical form, and that I too will be okay.
Starting point is 01:22:32 And, you know, if I am optimistic about my life, it may take 25 more years before I leave. But if I look back at my last 56 years, which passed literally like that, then 25 is not that long. So in my heart, I have a certainty that I will be there too. And I think "there" is not a bad place. It's not even a place. It's not even a time. It's an eternity of consciousness. And that's the truth of who we are. Very difficult to explain that quickly. But at the end of the day, I don't see the departure of my son necessarily as a bad thing. Not for him, and definitely not for the world that benefited from his departure. It's just very painful for me. And if it's painful for me, then I'm the one responsible for managing that pain.
Starting point is 01:23:22 I'm the one responsible for dealing with that pain, because that pain doesn't add any positivity to me at all. Mo Gawdat is the former chief business officer at Google X and an expert on happiness. He's also the author of three international bestselling books that have been published in over 30 languages. After a 30-year career in tech, Mo switched gears and made happiness his primary focus. In 2020, he launched his podcast, Slo Mo: A Podcast with Mo Gawdat. He joined us from Dubai. Mo, I mean this so sincerely. This was very meaningful, and I appreciate your good work and generally just really wish the best for you and yours. And I so appreciate not only how thoughtful and how just incisive you are, but how spiritual
Starting point is 01:24:04 and authentic you are. I really appreciate your time. I really can't thank you enough, Scott. Not just for the opportunity to speak to your audience, but I really felt connected during this conversation. I really enjoyed the level of depth that we went into, and I'm really grateful for that.
Starting point is 01:24:31 Algebra of happiness. I had a close friend pass away last week. My friend Scott Sabah, 54, I believe. Literally almost a year ago, maybe 15 months ago, we went to Loom together. We were close friends and we liked to do the same things. We liked to go out, we liked to party, we liked to drink. He lived in New York, was a real estate developer, graduated from USC. Three kids, two who just graduated from USC, one who was a freshman at UVA.
Starting point is 01:25:02 I met Scott probably 15 years ago. We lived in the same building and we just hit it off and started traveling together. And whenever I was in New York, I would just call Scott. And he was always up for going and grabbing a drink with me at Zero Bond, or, he was kind of cool with me, I go to the same places over and over, and Scott would always find his cooler places in the East Village. He just kind of knew what was going on. Nice man, ton of fun, in great shape. A year ago, a bump on his head, and it ends up it's leukemia. No problem. We're going to treat it with pills, not chemo. You don't need it. Oh, the pills aren't working. We need to go to chemo.
Starting point is 01:25:41 Uh, Oh no, it's, you know, it looked like it was better, but now it's converted to something much more serious. I think it was called Richter's. And we need to do a stem cell transplant. Okay, your son's a match. That's good news. We're literally going to reset your entire blood makeup. I mean, even different blood type. Just the stem cell transplant. Good news. it worked, you're all clear. Oh no, it's back. And there's nothing we can do. I mean, just from bad to worse to tragic. And I got to be honest, I was sitting here thinking about this, like, what is the lesson here? And I don't, you know, I'm struggling with this. I don't have anything profound or moving or inspiring to say around this. This is a tragedy, and I'm struggling with it. It is something I still can't wrap my head around.
Starting point is 01:26:31 I've had some death in my life, but not a lot. I lost my mom. But when your parents die, it's sort of the natural order. I'm not going to say you're prepared for it, but at least rationally, it makes sense. But when you lose a friend who's healthy and who's younger than you, you just struggle to wrap your head around it. The only thing I can come to, and the only thing I would take away or try to impart on people, is that when I would text him towards the end, I would say, I love you, and there's a lot of people in your life who love you, and I know you know that. And I thought to myself, why is it I wait till someone's dying to tell them these things?
Starting point is 01:27:10 And I think it is healthy. Something that has been inspiring for me, or has been a big source of strength and helped me register my own emotions and be more emotive to the people I care about, is recognizing that death is pretty final and it's waiting for all of us. And I realize that's very macabre, but it's also provable: for every one of us, the mortality rate is 100%. And I don't think it's a bad idea to have a sense of mortality at a pretty young age, because what you find is, rather than being scared or thinking about something you're not supposed to think about as a young person, you do recognize you're going to lose your parents, and you become more courageous
Starting point is 01:27:45 with your emotions and you become a little bit more graceful. You don't feel as awkward. I hug my male friends now. I never used to hug my male friends. You think a little bit more about others, and maybe about trying to be the person you always thought you were. Like, I have a cartoon of me that I'm this generous, nice, loving man. And I'm kind of not. Sometimes I am, but I'm mostly not. And I had breakfast this morning with a guy named Scott Harrison, who's a total inspiration, started a charity called Charity Water. And he always invites me to Africa with my boys. And I think, do I really want to go to Africa? I'm not like a rough-it kind of guy. If I go to Africa, I want to go to some douchebag tent camp. And I've always kind of put it off.
Starting point is 01:28:23 And I thought, you know what? My 12- and 15-year-old boys, they can't help it; mostly my fault, they're going to be a little bit spoiled. I need to take them to Africa to see how, not the other half, but, you know, much less fortunate people live. And so I'm going to do it. So it's given me, you know, Scott's death has sort of given me more of a sense of the present. But look, the only learning here, and I'm not going to pretend there is a learning, I am struggling with this, is that it's not a bad idea to look at the people in your life and realize that randomly you're going to lose some of them. And how does that change things? Knowing that you are going to lose some of them, what would you say to them? Because you won't regret it. I regret not
Starting point is 01:29:14 saying these things to Scott sooner. I don't think you're ever going to look back and say, it was embarrassing to tell my friends that I love them or I admire them or say to my dad, you know, I really admire you. I'm impressed by you. And, you know, I remember us doing these things together. I just don't think you'll ever regret saying those things, but get on it, get on it. And the way you get on it and the way you find the courage around these things is the harsh recognition that randomly, especially when you get to my age, you're going to lose people you care a great deal about. This episode was produced by Caroline Shagrin. Jennifer Sanchez is our associate producer,
Starting point is 01:29:54 and Drew Burrows is our technical director. Thank you for listening to the Prop G Pod from the Vox Media Podcast Network. We will catch you on Saturday for No Mercy, No Malice, as read by George Hahn, and on Monday with our weekly market show. What software do you use at work? The answer to that question is probably more complicated than you want it to be. The average U.S. company deploys more than 100 apps,
Starting point is 01:30:15 and ideas about the work we do can be radically changed by the tools we use to do it. So what is enterprise software anyway? What is productivity software? How will AI affect both? And how are these tools changing the way we use our computers to make stuff, communicate, and plan for the future? In this three-part special series, Decoder is surveying the IT landscape, presented by AWS. Check it out wherever you get your podcasts. Support for the show comes from AlixPartners. Did you know that almost 90% of
Starting point is 01:30:47 executives see potential for growth from digital disruption, with 37% seeing significant or extremely high positive impact on revenue growth? In AlixPartners' 2024 Digital Disruption Report, you can learn the best path to turning that disruption into growth for your business. With a focus on clarity, direction, and effective implementation, AlixPartners provides essential support when decisive leadership is crucial. You can discover insights like these by reading AlixPartners' latest technology industry insights, available at www.alixpartners.com slash vox. That's www.alixpartners.com slash vox. In the face of disruption, businesses trust AlixPartners to get straight to the point and deliver results when it really matters.
