The Prof G Pod with Scott Galloway - The Future of AI and How It Will Shape Our World — with Mo Gawdat

Episode Date: February 6, 2025

Mo Gawdat, the former Chief Business Officer for Google X, bestselling author, founder of the 'One Billion Happy' foundation, and co-founder of 'Unstressable,' joins Scott to discuss the state... of AI — where it stands today, how it's evolving, and what that means for our future. They also get into Mo's latest book, Unstressable: A Practical Guide to Stress-Free Living. Follow Mo, @mo_gawdat. Subscribe to No Mercy / No Malice. Buy "The Algebra of Wealth," out now. Follow the podcast across socials @profgpod: Instagram Threads X Reddit Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:00 Support for Prof G comes from Vuori. Oh my God, true story. I am wearing, totally coincidentally, guess what? Vuori shorts. Vuori's high-quality gym clothes are made to be versatile and stand the test of time. They sent me some to try out and here I am. For our listeners, Vuori is offering 20% off
Starting point is 00:00:20 your first purchase, plus free shipping on any US orders over $75 and free returns. Get yourself some of the most comfortable and versatile clothing on the planet. Vuori.com slash ProfG, that's V-U-O-R-I.com slash ProfG. Exclusions apply. Visit the website for full terms and conditions. Get groceries delivered across the GTA from Real Canadian Superstore with PC Express. Shop online for super prices and super savings.
Starting point is 00:00:53 Try it today and get up to $75 in PC Optimum Points. Visit superstore.ca to get started. Clear your schedule for U-time with a handcrafted espresso beverage from Starbucks. Savor the new small and mighty Cortado, cozy up with the familiar flavors of pistachio, or shake up your mood with an iced brown sugar oat shaken espresso. Whatever you choose, your espresso will be handcrafted with care at Starbucks. Go, go, go! Welcome to the 335th episode of The Prof G Pod. In today's episode, we speak with Mo Gawdat, the former chief business officer of Google X, bestselling author, founder of One Billion Happy, and host of Slow Mo, a podcast with
Starting point is 00:02:04 Mo Gawdat. We discuss with Mo how AI could shape our lives in the coming decades, the opportunities it brings, and the risks it poses to society, ethics, and mental health. We also get into his latest book, Unstressable, a Practical Guide to Stress-Free Living. Yeah, that's gonna happen.
Starting point is 00:02:19 Yeah, I'm gonna read a book and all of a sudden, Mr. Stress is gonna leave the neighborhood. Call me cynical. Color me a bit skeptical. What's going on with the dog? So I am in New York after a stop in Orlando where I went for a speaking gig. I have absolutely no sense of Orlando other than Disney World, which is the
Starting point is 00:02:41 seventh circle of hell for parents. Essentially I do almost no parenting 364 days a year, and I compensate for all of it by agreeing to take my boys and their five or six closest friends to Walt Disney World, which is just, I mean, that is cruel and unusual punishment for a parent. Uh, but anyways, not doing it this time, just bombing in, speaking to a lovely group of people, then getting back on a plane and going up to New York while I spend four days with the team and do a bunch.
Starting point is 00:03:08 I find New York, I get so much done in New York. There's something about, I don't know, everyone just seems to be on high, if you will. By the way, it's fascinating. All these members clubs are opening. In the last couple of years, there's been Zero Bond, my favorite, Casa Cipriani, downtown, weird location. They put a ton of money into it, has that Italian vibe.
Starting point is 00:03:27 I get the sense it's trust-fund kids from New Jersey, but that's just me. And then what else has opened up? San Vicente Bungalows is opening up from Los Angeles, so everyone assumes it's gonna be cool, and I'm excited about that. The Crane Club, which is the guys from the Tao Group, who are probably the most
Starting point is 00:03:45 successful nightclub. Like pretty much a giant fucking red flag is when you find out that your daughter's dating a club promoter. But these guys made good and made so much dough and cabbage and really kind of professionalized the industry, if you will. And they're the folks or the power behind Crank Club, so it should be interesting. And then I went to another one last week, and it's my favorite so far based on my Snap Impressions.
Starting point is 00:04:11 Chez Margaux. Ooh, hello. Hello, ladies. I don't know exactly how to describe it, other than the thing that struck me was it was super cool, super crowded, and the thing I liked about it was it was intergenerational. What do I mean by that? There were a lot of young, hot people.
Starting point is 00:04:27 It was a good thing. New York, by the way, is run on hot women, hot young women, and rich men. That's it. For everyone else, it's a soul-crushing experience. Anyways, and then it had people my age, and then it had parents eating and dining.
Starting point is 00:04:42 And I love that whole sort of like, we can be cool at any age, which is becoming increasingly important to me as I become a hundred fucking years old. Anyways, I love being back in New York. New York's on fire. I still think it's the greatest city in the world and am excited to be here. I'm also going to talk to Mo about specifically, I think there's a paradigm shift going on in AI, a little bit of of a teaser, little bit of a teaser.
Starting point is 00:05:05 I'm like those promos for all those YouTube videos that say the secret to happiness is, and then they cut out. But we're gonna talk to Mo about a realization I had around how the whole AI economy might shift. Anyways, with that, here's our conversation with Mo Gawdat. Mo, where does this podcast find you? I'm in Dubai today.
Starting point is 00:05:30 I am battling with the surprises of February so far. And yeah, enjoying every bit of it. Well, let's start there. Surprises of February. What are surprises in February from an individual such as yourself in Dubai? Dubai is wonderful in February, but it, you know, we occasionally, remember last year we had this incredible flood that was really, really quite, you know, and just a couple of days ago we had a bit of rain that sort of like triggered the same fears.
Starting point is 00:06:07 But of course, the real surprises were deep seek and the responses in the market and how the world, I feel overreacted a bit and then underreacted a bit and life. Life, there you go, I hear you. So let's bust right into it. The last time you were on, I think it was about a year ago, and you're sort of our go-to of this mix of spirituality and deep technical domain expertise.
Starting point is 00:06:34 And we were talking, when you were my guest, about kind of our need to control the response to AI. Give us what you think the current state of play is in AI given some of the recent developments, and how that may have influenced, or did it influence, your worldview or your predictions or thoughts around the future of AI? You have to imagine that the short history of what I normally refer to as the third era of computing, you know, basically the two years between the time when ChatGPT came out and today, you know, that short history moved at a pace that humanity has never ever seen before. I think you've seen what I used to refer to as the first inevitable, where basically everyone is in a prisoner's dilemma.
Starting point is 00:07:19 Don't want the other side to win, so everyone's competing, throwing everything on it, you know, at it. And basically, you'd get releases of new technology that are sometimes separated by weeks, if not months at most. And I think what most people don't recognize is that, at least within the areas where we invested, we have made massive strides on tech. So when it comes to the march to AGI, if you want, which I think humanity will continue to disagree about for a while because we don't really have a definition, an accurate definition, of AGI, you know, it is still steady and very, very fast, right? So we're gonna get there.
Starting point is 00:08:06 My prediction is we almost have already gotten there. And that's, you know, when it comes to linguistic intelligence, they've won, they're winning in mathematics, they're winning in reasoning, you know, and everything we will pour resources on, they will get to become better than humans. So it's just a question of time really. The part that hasn't changed in my mind, Scott, which now I think is very, very
Starting point is 00:08:31 firm and much more accurate if you want, is that the impact on humanity in the short term is going to be dystopian. And that has nothing to do with the existential risk that people speak about with AI. It has a lot to do with the value set of humanity at the age of the rise of the machines. Basically, unelected influential powers making decisions on humanity's behalf in ways that completely determines how things happen, leading to massive changes in the very fabric of society, and basically paying to an agenda where I tend to believe we will end up with very few big platform players completely in bed with governments,
Starting point is 00:09:21 completely feeding on hunger for power, hunger for wealth, and sort of depriving the rest of us of the freedom to live the life we live. I summarized this in an acronym that I made, seven changes to our way of life, I call them FACE RIPs, and we can go into them in detail if you want. But basically I see this as inevitable. I see that the short-term dystopia is going to be upon us very, very soon, just because the massive superpower that is at the disposal of agendas is going to be in play very, very quickly. You said unelected officials that are reshaping society. Are you talking about Sam Altman, Elon Musk?
Starting point is 00:10:09 Who are you referring to? 100%. I mean, with all due respect, why is my life being determined by Sam Altman? We all had an accord, unwritten rule if you want, that we won't put AI out in the public sphere until we feel that we've tackled safety or alignment or ethics, if you want, all wonderful dreams to have.
Starting point is 00:10:33 Sam Altman, very soft spoken, comes out every now and then and says, this is the priority of what we believe in, but in reality, it's a publicly traded company creating billionaires. Everyone's rushing very, very quickly. Yeah, it's all about the money. And you know, you and I have lived in the tech world long enough to understand that all you need is a very clever PR manager to craft a message that is almost exactly the opposite of what you focus on every day. But you say it over and over until you yourself believe it.
Starting point is 00:11:07 The truth is, uh, the world is, is not ready for what is about to hit us. Whether you take the simple things like the economics of the world and how they will change as a result of AI, all the way to the change of the dynamics of power and, and, you know, the resulting deprivation of freedom, all the way to how the economics of the world are going to change and how the jobs are going to change and how the human connection is going to change and our understanding of reality is going to change. These are decisions that are not made by us anymore. Think about it this way, Spider-Man's with great power comes great responsibility.
Starting point is 00:11:47 We've disconnected power from responsibility. There is massive, massive power concentration concentrated in hands that do not answer to anyone. So I a hundred percent agree with you. The, the idea that everything from which buildings are these targeted bombs, bomb first, what, you know, our perception of our government election strategies, all of these things are now being decided by algorithms program by a very small
Starting point is 00:12:13 number of people. That creates, I think, a lot of concern. The steelman argument is that if we don't iterate around the public's usage of these things, that other entities will leap ahead of us and their intentions are even more malicious than ours. That while capitalism perverts things, at its heart, it's not malicious. It might be indifferent, but it's not malicious. And the fear is that if we let other entities run unfettered with AI in the sense that it becomes the Wild West and the public provides feedback and these models leap out ahead of ours,
Starting point is 00:12:52 that ultimately the trade-off between a capitalist motive is worth it relative to letting other societies get out ahead of AI. Respond to that argument. I find that this is a very valid argument if you think of the short term, if you think of the long term, it could lead to a very dystopian place. So allow me to explain. A competitive race, arms race, that basically says if I don't build the nuclear bomb first, someone else will build it, does not necessarily lead to a world where you're the only one that owns a nuclear bomb. As a matter of fact, it leads to a world that has more than one owner of nuclear bombs. And I think what you saw from DeepSeek, for example, is a very
Starting point is 00:13:37 interesting result that comes out of, okay, we're going to consider this a war, we're going to compete against the other people, we're going to this a war, we're gonna compete against the other people, we're gonna apply second sanctions, we're gonna try to limit their ability to progress and what do they do as a result? Necessity is the mother of all needs and so basically they find ways to do things differently. Now, when you really look at the idea
Starting point is 00:14:03 of testing things in public, which is an argument that's used very frequently by OpenAI, I think the analogy almost sounds like, let's, you know, test the Trinity in Manhattan, not in New Mexico, just to see how it impacts humanity, right? That's not how you do things. The way you do things is when you are uncertain of the outcome, you normally test it in ways that are much more contained. But that genie left the bottle long ago, because the truth of the matter is that everyone is racing already. The other outcome, believe it or not, and I say that with a ton of respect, is that yes, the US might lead the arms race, or China, you'll never really know.
Starting point is 00:14:51 It might be open AI or it might be alphabet, you'll never really know. But the problem with that is that a more polar world where such concentrated power is not a fair world either. It's not a fair world to the world, but it's also not a fair world to most Americans. And I think that's what most people don't recognize is that you eventually, sooner or later as more and more power is concentrated in the hands of very, very few, which is the only way the US can beat China, if you want, in this technology.
Starting point is 00:15:28 Those very few eventually will turn on the American citizen and say, you know what, you're not really bringing any productivity. We care about maximizing the same target we've been chasing so far, more power, more wealth, and you're standing in the way. And I think you can see that in American society very, very clearly today, before AI takes over. The only answer in my view, believe it or not, which I know sounds really idealistic if you want, is a mutually assured destruction conviction. It's that we both understand, you know, and by both I mean every two arch enemies on both sides, that we are shifting the mindset and the existence of humanity from a world of scarcity, where for me to gain economically I have to compete with the other person, to a world of total abundance. I mean, we spoke about this last time we met, Scott, and my definition of the current age of AI is what I call the intelligence augmentation. So
Starting point is 00:16:41 we're now augmenting human intelligence with machine intelligence in ways where if I can lend you 250 IQ points more, imagine what you can invent. And I say that publicly all the time, I dare the world, I say give me 400 IQ points more and I will harness energy out of thin air. So why are we competing if that's the possibility ahead of us when the competition, you know, drives us to a point of absolute mutually assured destruction? So it strikes me when we talk about mutually assured destruction,
Starting point is 00:17:19 it strikes me that the two entities that would have to come to some sort of agreement around regulation, or a pause, or whatever it would be, are the US and China. And I'm sure there's other entities, but those are the lead dogs. All right. Do you think it's realistic that the Chinese would be sympathetic to this argument, and that there's enough mutual trust to say, look, we've got to, I don't want to say slow down, but put some guardrails on this and share with each other?
Starting point is 00:17:43 I mean, this was sort of Oppenheimer's, was it Oppenheimer's initial vision that we share this technology and say, okay, when one gets too far out ahead of the other, that's a problem. We need, we need to control it together and realize that if one gets too far out in front of the other, the temptation to destroy the other is too great, at which point that person will destroy it. We'll, we'll make sure they can strike back in some limited fashion. Do you think it's realistic? And maybe realistic or not, it's something we've got to do, that we try and strike some sort of treaty with the CCP on China? It's not realistic in the current political environment. Unfortunately,
Starting point is 00:18:20 you know, the current geopolitics of the world is heating up more and more. But it wasn't realistic in the case of Russia and nuclear weapons either. By the way, I am not for slowing down at all. I'm actually for speeding up all the way, but speeding up in a direction that is not competitive, but rather for the prosperity of the whole world. I mean, at the end of the day, Scott, again, give me 400 IQ points more and I'll solve every, you know, problem known to humankind. And this is quite, you know, straightforward really. You and I have both worked with incredibly smart people and you
Starting point is 00:18:55 understand what the difference of 100 IQ points means, right? Give me better reasoning, better mathematics, better understanding of physics, and I can do things that humanity never dreamt of. And this is a promised utopia that is at our finger-stead tips. So I'm not saying slow down. I'm simply saying there is no point to compete. The issue that is facing our world is not a problem of technology that's moving too fast. Technology has always been good for us. It's a problem of trust that if the other guy gets the technology before me, I'm in trouble. And that trust is not established in the lab, it's not established in the data center. It's basically established with the realization that we can create a world of absolute total abundance, total abundance. I could know every piece of knowledge that ever existed. I know you well, Scott.
Starting point is 00:19:53 I know how big of a dream this is for people like you and I who love to learn, right? And I can use that knowledge in ways that will make me richer. But how many Ferraris does anyone need? I think this is the challenge we have. The challenge is, you know, the founders, by the way, I don't believe this is a question of money for the founders of AI startups. I think this is a question of ego rather than greed. I'm the one that figured it out first. I'm the one that, you
Starting point is 00:20:22 know, provided this amazing breakthrough to humanity. But if you look back just 150 years at the King or Queen of England, they had a much worse life than what anyone today has. Anyone in any reasonable city in the US today has air conditioning, has transportation, has clean water, has hot water, has sanitation. So we're getting to the point where more doesn't actually make any difference
Starting point is 00:20:51 anymore. It is a morality question of can we just shift the mindset to abundance instead of scarcity? We'll be right back. Stay with us. Support for Prop G comes from 1-800-Flowers. Valentine's Day is coming up and you can let that someone in your life know just how special they are with the help of 1-800-Flowers.com. They offer beautiful, high quality bouquets and this year you can get double the roses for free.
Starting point is 00:21:23 When you buy one dozen from 1-800-Flowers, they'll double your bouquet to two dozen roses. Of course, roses are a classic sweet way to say, I love you. And 1-800-Flowers lets you share that message without breaking the bank. All of their roses are picked at their peak, cared for every step of the way,
Starting point is 00:21:39 and shipped fresh to ensure lasting beauty. Our producer, Claire, ordered from 1-800-Flowers, and she thought they were just wonderful. Her partner was just so delighted, so delighted, strengthened the relationship. Their bouquets are selling fast and you can lock in your order today. Win their heart this Valentine's Day at 1-800-FLOWERS.com
Starting point is 00:21:59 to claim your double roses offer. Go to 1-800-Flowers.com slash ProfG. That's 1-800-Flowers.com slash ProfG. Hey, this is Peter Kafka. I'm the host of Channels, a podcast about technology and media. And maybe you've noticed that a lot of people are investing a lot of money trying to encourage you to bet on sports. Right now, right from your phone. That is a huge change, and it's happened so fast that most of us haven't spent much
Starting point is 00:23:02 time thinking about what it means and if it's a good thing. But Michael Lewis, that's the guy who wrote Moneyball and The Big Short and Liar's Poker, has been thinking a lot about it. And he tells me that he's pretty worried. I mean, there was never a delivery mechanism for cigarettes as efficient as the phone is for delivering the gambling apps. It's like the world has created less and less friction for the behavior when what it needs is more and more.
Starting point is 00:23:27 You can hear my chat with Michael Lewis right now on Channels, wherever you get your podcasts. Do you think sequestering China from our most advanced chip technology was a mistake? 100%. It's the biggest mistake ever. I mean, strategically, as I said, of course, you know, the two big sanctions that America did in the last, you know, few years
Starting point is 00:23:59 have, you know, backfired massively against America. The move against Russia, you know, basically got a lot of people to try and de-dollarize a little bit. And the move against China drove China to become more inventive. It's as simple as that. But it is also a massive statement of, you know what, I'm going to try everything I can to beat you. And I don't know how to say that in a polite way, but I went to America for the first time in the 70s and it blew me away.
Starting point is 00:24:35 It was a world apart from anywhere else in the world. I get that feeling today when I land in Shanghai. It's, you know, it's not an easy fight, it's not a predetermined fight. Let's say in the 70s, 80s, 90s, definitely post-Berlin Wall, the US could do whatever the F they wanted in the world. I don't think it's as easy a slam dunk as it has been in the past anymore. I think America needs to recognize that if it wins, it's going to be through strategies like what Trump is talking about: increasing defense spending even further than what it was, loading the American debt clock even further than it is loaded. And a very good boss of mine used to say, when we're under pressure, we tend
Starting point is 00:25:34 to do more of what we know how to do best. But what we know how to do best is what got us under pressure in the first place. And I truly and honestly think, imagine a world where there is an agreement, that America adheres to, by the way, that basically says, let's just deliver that world that everyone's dreaming of. Deliver a world where there is no need for you to attack me. Here's a little bit of how I would couch some of your comments:
Starting point is 00:26:03 you think we're entering into what I'll call an age of equivalence. I don't know, my semantics might be off, but I think of it this way: America was able to develop and sustain certain competitive advantages. Manufacturing, mostly because the German and Japanese infrastructure had been leveled, then services infrastructure, and then technology, whether it was because of IP, risk-taking, multiculturalism, and we were able to maintain one-to-three-decade leads and find the next thing
Starting point is 00:26:36 and establish more prosperity and create and consume a disproportionate amount of the world's spoils. And tell me if I'm saying this correctly, you now believe that our competitive advantage around these things is shrinking from 30 years to 30 days. So it sort of should bring on this incredible age of cooperation and we should stop deluding ourselves that we're gonna be able to get out ahead and win.
Starting point is 00:27:02 Is that an accurate summary of what you're stating? That is a very accurate summary. Is it still possible for the US to win? I think the most important competitive advantage that you may have not mentioned here is that money has always been free for the US. You had the ability to print money to create amazing wealth that got reinvested wisely and sometimes unwisely. Unfortunately, I think we're in a time where $500 billion on Stargate sounds unwise, right? But at a point in time, it was a non-issue. It's like, okay, if this is what it takes to build the infrastructure, we'll do it. What I'm attempting to say here is it's not that the US has lost the capability to crush other nations on whatever,
Starting point is 00:27:53 you know, full-spectrum dominance that the US has been attempting to achieve for years; it's that other nations have grown an ability to resist, right? And the more the US is becoming, you know, again, I worked my entire life in corporate America, so don't take that as an attack on the American approach at all. I'm basically saying that the more America bullies the world, the more you'll get responses like DeepSeek across the world, where people are simply going to say, you know what, we don't like this anymore. I will openly say, I don't like the fact that there is a small chunk of whatever money I made anywhere in the world
Starting point is 00:28:39 that was somehow handed to America because of the US dollar dominance. As a wealthy man, I don't feel that this tax, if you want, on all the money made everywhere in the world, this export of inflation to everywhere in the world, is just set up for all of us to succeed. You can see across the world actions from Japan, from China, from Russia for sure, that are basically attacking the US where it becomes painful, which is the US dollar dominance. It's not going to go away anytime soon, but it makes things a little painful. And, you know, in a typical environment,
Starting point is 00:29:24 the US would say, you know what, I'm gonna crush you. I'm strong enough and you're not strong enough. You know, I'm gonna apply tariffs, as Trump would say, and make sure that nobody has access to my wonderful market. Yeah, it makes sense, it does make sense, but it also causes pain on the US side, right?
Starting point is 00:29:42 And it comes from a mindset of we're still competing for limited resources, where the world was made up of metals and minerals, you know, and power was acquired by weapons. I think we are on the cusp of a world where everything is possible. Just understand, from a manufacturing point of view, right, with enough
Starting point is 00:30:07 understanding of nanophysics, and, you know, a level of intelligence that helps us bridge the remaining bits of nanophysics, we could manufacture things out of thin air, just reorganizing molecules of air, right? Instead of competing for minerals and resources. This is at our fingertips; it's years away. There is a need for a mindset change. I always like to pause and double-click on, or at least cement and highlight,
Starting point is 00:30:42 what I think is a real striking insight in the notion of an inability to sustain an advantage, and all it does is create fear and weaken relationships and make one side more likely to strike while they're ahead and create workarounds, because, you know, nothing creates innovation like war and the threat of survival, right? And what also really resonated was, and I've been saying this, I think Sam Altman is Sheryl Sandberg with hushed tones. Sheryl Sandberg weaponized her femininity,
Starting point is 00:31:14 her charm, her maternal instincts, her gender, the important conversation on gender, to basically take what was a company that was creating rage, making our discourse more coarse, depressing our teens, and make it seem more palatable, to basically rub Vaseline over the lens of pretty mendacious behavior. And I feel like Sam and his hushed tones and his, that's a real concern, you know, Senator, I'm worried too.
Starting point is 00:31:39 Meanwhile, I'm about to raise $40 billion at a $350 billion market cap. I mean, it's just, I have been to this fucking movie before and we are falling for it again and again. And so I wanna propose something and have you respond to it. And this is literally, you just inspired this thought. Similar to the way we have the UN or NATO, we have a new organization, and the two founding members are China and the US, and it's totally open.
Starting point is 00:32:07 There's offices in DC, Silicon Valley, Shanghai and Beijing. Every room, every team has a mix of US and Chinese scientists, regulators, such that it's almost impossible to hide anything. We're all working on the same damn thing and we're trying to solve the world's most difficult problems, food distribution, health, poverty. We're working together, but we're also making sure there's a very, very thick layer of supervision and enforcement such that we are constantly testing, how would you make bioweapons?
Starting point is 00:32:43 And then we're sending our crawlers out to see, is anyone working on this that we don't want working on this? And we together try and create, you know, like what Interpol was doing, where we had multilateral cooperation around the drug trade and arms shipments. But we have this multilateral organization that says there's total transparency. And our job is to dole it out where we see the most opportunity to increase
Starting point is 00:33:07 stakeholder value. And the stakeholders are all seven and a half billion people on the planet. And we're there to ensure that there's trust and transparency, and ensure that the bad guys don't get this, and we're going to cooperate around either sequestering this or ensuring that if anyone develops it to make weapons or create a new supervirus, we are hip to these things before anybody else and act against them. With that type of organization, do you think that's possible? And in your mind, do you think that that has merit?
Starting point is 00:33:40 That would be a dream. I mean, let me just double-click on a very important comment that you said there at the end. What both parties are unable to recognize, while they are putting their heads down and competing with each other, is how many bad guys are putting their heads down in silence and working against both of them. The thing about AI is it's a massive democracy, a massive set of open source. Once again, because of the speed of this thing, you know, it took Linux tens of years to actually be,
Starting point is 00:34:16 I mean, at least around 10 years to be established. It took massive open source models weeks to be established, right? And so there is access that, you know, massive open source models, weeks to be established. And so there is access that anyone today can download a DeepSeq model to their computer in the jungles of Colombia and do something malicious without ever being detected. Now the dream here is that we work together to say, look, again, mutually assured destruction.
Starting point is 00:34:49 If we are not both together against the bad guys, there is harm that can come to all of us. And I think it's a beautiful dream, but believe it or not, there is a bit of that dream that's already happening. I mean, I don't know if you know the statistic, but 38% of all top AI researchers in America are Chinese. You know, it's quite staggering when you really think about it. And that's before you count the Indians and, you know,
Starting point is 00:35:14 some that have Russian origin and so on. What percentage of that 38% are spies? Great question. And in all honesty, in the world you're describing, those spies are assets to humanity. And it's quite interesting that if they do not have a reason to spy, then they become more of an asset to humanity. I think the truth here is there is no winning. There's truly no winning.
Starting point is 00:35:41 And, and I, of course, I don't want to be grumpy, but you know, a massive advantage in AI is not going to trump the card of nuclear holocaust. So we're competing in the wrong arena, if you think about it, because in a world where we have so many superpowers, of which almost of which almost four or five can completely wipe out our planet in less than two hours. Right? The quest for more power, for a dream that I can crush someone else, is a very dangerous quest. Nobody in this world today can crush anybody. I think this message needs to become really, really clear. What are we competing on. What are we competing on? What are we competing on? And so, of course, you know, what you recommended, by the way, can be done by governments, which I think is an impossible dream. But believe it or
Starting point is 00:36:36 not, if just a few billionaires got together and built those things, the creation of the world of abundance would basically nullify the need to compete. You see, the challenge we have in our minds is we're not in that world of abundance yet. Right? And so we're still living in our capitalist way, where every one of us has to play to aggregate more wealth, which delivers more power. And then I take that wealth and power and protect my wealth and power and make more of it.
Starting point is 00:37:11 This is a world that's about to end. It is literally about to end for 6 billion of us as soon as jobs go away. Nobody's talking about this. So you and I both know, you probably more so, but I would say I know personally or somewhat well, I don't know, a dozen or two dozen billionaires. And what I have found is that the majority of them
Starting point is 00:37:35 have what I call their very expensive go bag. And that is, they have a plan, whether it's for anti-Semitism or a nuclear war or some sort of AI catastrophe or revolution. And they have their Gulfstream 650 ready on a moment's notice, and pilots, and their bunker in New Zealand. And what I've said when I've talked to a few people about this is, let me get this straight: if things really get that bad, you
Starting point is 00:38:05 don't think your pilots are going to get you to your destination and then kill you? You think they're going to sacrifice themselves to save your family? You don't think that everybody else is going to figure out where the billionaire bunkers are and come and take you? I mean, it's just such a ridiculous.
Starting point is 00:38:20 I feel like it's not only a stupid thesis. It's an unhealthy one, because they're under the impression that their money can buy them a rip cord, a way out, and they can't. And so shouldn't you be focusing all this energy on making sure that we just don't get to that point? Colonizing Mars, well, here's an idea.
Starting point is 00:38:43 Take your immense talent and capital to make this place a little bit more fucking habitable because you're not gonna wanna live on Mars. Mars is an awful place. You don't wanna be there. That's worse than death. That's not space exploration, it's space execution. Isn't this, I mean, don't we have a real virus?
Starting point is 00:39:04 It's almost like capitalism collapsing on itself, where we get so caught up in our self-worth, in our masculinity, in our power around the number, and we see this way to a billion, 10 billion, a trillion dollars, which will increase in the current age, my worth as a human. Doesn't this require an entirely different zeitgeist? Endlessly.
Starting point is 00:39:29 You see, both directions of this dilemma are quite interesting. One of them is, remember last time, I don't remember when we were, when we had a drink after the event, we spoke about the idea of what you can do with money. Uh, we spoke about the idea of what you can do with money. You know, there, there is, there is a specific, um, you know, range of wealth, uh, where money makes a difference.
Starting point is 00:39:54 You know, if you've never driven a sports car before and you manage to get yourself a sports car, you go like, ah, I made it. But then if you drive a real sports car and you know how annoying and fucking broken they are, you just eventually go like, I don't need any more of this. The problem is the game of billionaire, or multi-multi-millionaire, is wonderful. Okay, it's a nice game, but it has no significant impact on the gains that you can achieve as a human. You'll still sleep in one bed, and you can make it as fancy as you can, but it's still one bed.
Starting point is 00:40:28 You can still drive only one car. There could be 600 other cars, 600,000 other cars in the garage, but you're still driving only one. And by the way, when you're a billionaire, you're not really driving it comfortably anyway, because you're targeted all the time. The other way of this crazy dilemma is even more worthy of discussion, because we remember the times when if you had an MBA, you were like a highly educated post-grad, and now everyone has an MBA, and then if you had a PhD, you became the special one, and now everyone has a PhD, and many have many. And the idea here is
Starting point is 00:41:06 there is an inflation to the value of something that you acquire, right? And what is happening with wealth today with artificial intelligence is if you just look at the current trajectory, we're going to see our first trillionaires within years for sure. And that not only makes that person acquire more wealth that is not necessary, but it makes the price of every Rolls Royce higher, and then that makes the price of every Mercedes higher, and that makes the price of every Toyota higher, and so on and so forth. Which basically means that as more of those exist just in the single digits, more of the millionaires become poor. And then a few years later, more of the hundred million millionaires become poor because they can no longer compete with that level of wealth to which everyone is now appealing. And so if you take that
Starting point is 00:41:59 cycle and continue to repeat it over and over, eventually you'll end up with very few, like way less than 0.01 of 1% of all humans that have so much wealth, but then the great equalizer is that the rest of us have no wealth at all. So once again from an economics point of view, we are getting to a point where money will have very little value as compared to a world where money has no value because everything is becoming a lot cheaper, which is a world we can create with AI. So, I buy it theoretically, but what I've registered is that over the last 50 years, money becomes an even greater arbiter of the life you can lead.
Starting point is 00:42:43 When I was a kid, the difference between my dad's house and his boss's house was a little bit nicer car, a little bit bigger house, but we were in the same neighborhood, golfed at the same country club. The market in a capitalist society always figures out a way to offer you more with more money. There's coach, there's premium economy, there's business class, there's first class, there's chartering, there's fractional jet ownership, there's ownership, there's a Challenger,
Starting point is 00:43:07 there's a Bombardier Global Express, and there's a Gulfstream 650, then there's going into space. My sense is life has actually gone the other way the last 50 years, that the life that the 0.1% lead is an entirely different life. It's like the delta between being middle class and rich has gotten bigger and bigger and bigger and so the incentives are actually the other way that there really is a
Starting point is 00:43:33 reason when you're the richest man in the world you can show up and turn off foreign aid without being elected. Correct I think we're saying the same thing. What that means however is that the majority of us even the ones that are now millionaires, are going to become poor. That what you're saying is exactly true, it's that the range in which we're now talking about the difference between what you can do with a lot of money and what you cannot do if you don't have that money, makes everyone almost equal at the bottom. Everyone gets a reasonable car, but not a massively fancy car.
Starting point is 00:44:11 Everyone, you know, becomes equal as compared to those incredibly wealthy, if you know what I mean. We'll be right back. Stay with us. Health and Human Services secretary nominee Robert F. Kennedy Jr. went before the Senate today in fiery confirmation hearings. Did you say Lyme disease is highly likely a militarily engineered bioweapon? I probably did say that. Kennedy makes two big arguments about
Starting point is 00:44:45 our health and the first is deeply divisive. He is skeptical of vaccines. I do believe that autism does come from vaccines. Science disagrees. The second argument is something that a lot of Americans, regardless of their politics, have concluded. He says our food system is serving us garbage and that garbage is making us sick. Coming up on Today Explained, a confidant of Kennedy's, in fact, the man who helped facilitate his introduction to Donald Trump on what the Make America Healthy Again movement wants. Today Explained, weekdays wherever you get your podcasts. This week on ProfG Markets, we speak with Robert Armstrong, US financial commentator
Starting point is 00:45:29 for the Financial Times. We discuss Trump's comments on interest rates and who might emerge as the biggest winners from the DeepSeek trade. In the world we lived in last Friday, having a great AI model behind your applications either involved building your own, or going to ask OpenAI, can I run my application on top of your brilliantly good AI model? Now maybe this is great for Google, right? Maybe this is great for Microsoft, who were shoveling money on the assumption that they had to build it themselves at great expense.
Starting point is 00:46:04 You can find that conversation and many others exclusively on the ProfG Markets podcast. We're back with more from Mo Gawdat. Mo, I want to propose a thesis, and I'm going to do what we're supposed to do, and that is talk about your book. I was sort of blown away by this guy, Robert Armstrong. He proposed, or he talked about, certain industries where the innovation has resulted in stakeholder value, not shareholder value.
Starting point is 00:46:38 So we have fallen under the, the notion that if I can come up with a better search engine, I'm going gonna capture trillions of dollars in shareholder value for me and my investors and my employees. Same way around social media, same way around e-commerce. I came to Orlando last night for a speaking gig. I skirt along the surface of the atmosphere
Starting point is 00:46:57 at eight tenths the speed of sound. I don't have to eat my niece going over the Rockies with scurvy. I don't get seasick for 14 days as my parents did coming on a steamship. It has changed humanity, jet travel. The PC changed humanity for better. I mean, it's just a supercomputer that used to cost
Starting point is 00:47:17 10 billion dollars on inflation adjusted. I can get for 300 bucks, put it on my desk and increase the productivity of everything. I was on the board of Gateway Computer. We were the second largest computer manufacturer in the world when I bought 17% of the company. It was worth $130 million. If you added up all the profits of the airline industry,
Starting point is 00:47:36 it's negative. They've lost more money than they've made. There are certain industries in technology where because of a lack of competitive modes, the gains, the value seep to humanity and to stakeholders, they're not able to be captured by a small number of shareholders. And when DeepSeq came along, it sort of dawned on me maybe, and I think this is an optimistic vision, maybe AI is more like the PC or the airline industry, and that is many of the benefits will accrete to stakeholders and citizens,
Starting point is 00:48:09 but no one small set of company or people are going to be able to capture all of the value. Do you think that's an optimistic view of where AI might be headed? In other words, do not participate in the SoftBank Round at $350 billion in OpenAI. There is certainty in my mind that there is going to be a democratization of power, more access for everyone
Starting point is 00:48:31 to more things, right? You know, unfortunately, if you take a power-hungry scenario, in the recent wars of 2024 in the world, you got the ultra-powerful, you got a concentration of power, some of it using AI, by the way, in terms of weapons that have massive impacts, but you also got access to drones that can be flown from very far away and, for $3,000, cause a lot of harm. I think that dichotomy, if you want, that arbitrage between a massive concentration of power at the top and a democratization of power at the bottom, is going to drive a very, very high need for control. Once again, I love the hypothesis, or the ambition, for AI to become that net positive to the world, because it's not really driving only profits to the top,
Starting point is 00:49:27 which it will, but I think that the opposite direction of that is that when you have massive power at the top and you sense that the bottom has a democracy of power and that can threaten you at any point in time, you're going to have to oppress them. And so that will take away the benefits that the majority can get. And I give a very stark and maybe a bit graphic example.
Starting point is 00:49:55 Think about a world, Scott, where a bullet could kill. But if you're a leader of a nation, you can have protection around you and can have everything to protect yourself. We've seen examples in the 2024 wars where a specific person is targeted anywhere in the world and killed. You know, a tiny little drone carries that bullet, seeks you with AI, finds where you are, stands in front of your forehead and then shoots. And these technologies are unfortunately under development. Now think about what that does to democracy. Think about those who own that weapon,
Starting point is 00:50:35 by the way they don't necessarily have to be governments and how they can influence the distribution of power, how they can ensure that whatever is created is directed in a way that's different than what would benefit the majority. Yeah, in every war there's a new weapon that kind of changes the game and I think people don't talk about this enough, but I think drones are the new weapon that's going to come. I mean, I think about millions of self-healing assassin drones and AI, and the AI under the direction of some individual puts together a list of people who are not in the way of my wealth or my power, and those drones can be released at one of a thousand different ... I mean, you can really get very dark very fast here.
Starting point is 00:51:20 So I'm going to try and segue out of this into something a little bit more positive. Is this the very first conversation where I am a grumpier than you? different. I mean, you can, you can really get very dark, very, very fast here. So I'm going to try and segue out of this into something a little bit more positive. Is this the very first conversation where I am grumpier than you? Yeah, we're, but there's, this is a, there's, yeah, it's definitely grumpy old men. It's grumpy, it's grumpy, grumpier and grumpiest. But I do find, I do, whenever I speak to you, I, you like manage to distill something down to something understandable and actionable for me.
Starting point is 00:51:47 I love the idea of this multilateral agency. I was thinking in a zero-sum game philosophy that we need to get out ahead, we need to develop AI, we shouldn't be shipping Nvidia chips to China. I was part of that crew. And what you have taught me is, okay, what if we cooperated around not only releasing it to for, you know, the betterment of humanity, but also quite frankly, policing the bad stuff together and being
Starting point is 00:52:14 100% transparent with each other and just saying there, not only are there no secrets, but it would be impossible to have secrets amongst each other because we're, we've just decided we're in the same office space. I really love that idea. And I think that as I think about candidates that I wanna support in 2028, I do, or 2026, I do have access mostly because I have money, but I think this is a really interesting view.
Starting point is 00:52:40 Anyways, thank you for that. Your latest book, Unstressible, Practical Guide for Stress-Free Living addresses the pervasive issues of chronic stress in modern life. In an interview on the diary of a CEO with, by the way, Stephen Bartlett, who I believe is going to be the next Joe Rogan, you described stress as an addiction and a badge of honor. Say more. Why are we so addicted to distress? Part of the fakeness that leads us to success is I'm busy, I'm busy, I'm busy.
Starting point is 00:53:11 Which I have to say I found almost always quite shocking because if you go across the range of intelligence, if you want, I think most of us know that a good 80 to 90% of all of the efficiency that you bring to any job that you do is done within 20% of the time. But yet, you know, part of your ego is I'm gonna fill the other 20, you know, the other 80% of the time with 20% work that's taking a lot of toll on me because it basically means I'm driven.
Starting point is 00:53:47 It basically means that I am maximizing my performance, maximizing my deliveries between waking up in the morning at 5 a.m. to run an Ironman and then going in the evening to attend I don't know what and flying all over the world and so on and so forth. The truth of the matter is this is a self-perception, a form of ego that says I am doing amazing. But it isn't. And I think the biggest challenge we have is that we believe that the world stresses us. The world does not stress us. I mean, when I wrote Unstressable, I started from physics. I
Starting point is 00:54:25 basically said, look, the easiest way to understand physics in here is to understand stress in humans is to look at stress in objects. And the stress in object is the force applied to the object, but that is divided by the cross-section of the object, how much resources the object has to carry that force. Right? And so typical reality of our life, especially the lives of busy executives who live in busy cities and so on and so forth, is that there will be multiple challenges and forces applied to you every day, but that the cross-section of you, your capabilities, your skills, your connections, your abilities and so on, the more you have those and apply them properly, the less stressed
Starting point is 00:55:06 you feel. There might be more force applied to you, you might be carrying more challenges, but you don't feel stressed, just like an object doesn't break when it has a bigger cross section. And the reality of the matter is that part of the badge of honor is not that I'm carrying a lot of things, it's that I'm busy and I'm angry and I'm stressed and I'm this and I'm that. And I find that honestly, yeah. And I worked with many people who are very successful, who are, who appear to be that way and become a lot very obnoxious and unloved by their people.
Starting point is 00:55:37 And I worked with a few that were totally chill. You know, I used to be the one that used to tell my sales team. I really think this pipeline is too wide. I really think you should focus on 30% of it and close it, rather than waste your time on things that you will not serve well. And in a way, you make more money that way, you become more successful that way, you get more customer satisfaction that way. And the rest of the pipeline you hand over to a different channel that does it in a way that's suited for it so that it doesn't stress anyone.
Starting point is 00:56:10 How do we deal with stress in a more sustainable way? And as we wrap up here, are there any quick fixes? I feel that what we want to deal with is not stress. What we want to deal with is breakpoints. So we want to avoid breakpoints. And I think there are three breakpoints that happen to us in our world today. One is of course burnout.
Starting point is 00:56:27 And burnout algorithmically is the sigma of all of the stressors that you're under multiplied by their duration, multiplied by their intensity. And basically most of the time when you burn out, you burn out not because one big stressor is in your life, but it's because of the aggregation of all the little things, the loud alarm in the morning, the commute, the this and that, and then one little thing shows up on top of it and you break down. And so burnout to me is a question of a weekly review. Literally every Saturday you sit with yourself, you write on a piece of paper everything that stressed you last week, and you scratch out the ones that you commit that you will not allow in your life anymore. You can either remove them from your life or make them more enjoyable. So if you have to be stuck in a commute or a long flight, take some good music with you, be healthy and so on and so forth. The other break point, unfortunately, is trauma. So? So basically, massive stress that happens in
Starting point is 00:57:26 a very short period of time that exceeds your ability to deal with it, the loss of a loved one, an accident, you know, being stuck in a in war or whatever, and so on. And, and, and this unfortunately is not within our hands, but believe it or not, it actually is not the reason for the stress pandemic of the world. So 91% of all of us would get at least one PTSD inducing, like the highest of all trauma PTSD inducing event once in their life. But 93% of those would recover in three months and 96.7% of those will recover in six months. And all of those will enjoy post-traumatic growth. So there is no worry about trauma if you want. It's not within your control to prevent, but if you work on it, you'll recover.
Starting point is 00:58:19 The third and the most interesting reason for stress, especially in younger generations today, is what I call an anticipation of a threat right and and the challenge with it is that looking forward with fear, worry, anxiety and panic are probably the biggest stressors for the younger generations and and and the funny bit is that fear is not a bad emotion Fear is actually alerting you to something that you need to pay attention to so that's okay Right worry anxiety and panic are actually of a very different fabric So worry is not about I know there is a threat coming Worry is I can't make up my mind if there is a threat coming or not And so you keep flip-flopping and you don't take the action and and you keep feeling the fear
Starting point is 00:59:03 But not doing anything about it. When you're worried, you need to actually tell yourself openly, look, I'm going to decide if I should chill or panic, right? Chill or freak out. If it's freak out, then it's fear, deal with it. If it's chill, then stop thinking about it. Anxiety is not about the threat. Anxiety is actually about your capability. And most people, if they really visit themselves when they feel anxious, when you're anxious, there is a threat approaching you, but you constantly think that you're not capable of dealing with it. So the more you attempt to deal with the threat, the more you feel incapable, so the more anxious you become. When you're anxious,
Starting point is 00:59:40 work on your capabilities, not on the threat. And then panic is a question of time, right? And panic really is the stress, you know, the threat is imminent, it's approaching me too quickly. And so accordingly, when you feel panicked, don't work on solving the problem, don't work on addressing the threat, work on giving yourself more time, you know, find someone to help you or delay the, you know, the presentation time or, you know, or cancel a few meetings so that you have more time for the, for whatever it is that you need to focus on. And what I mean by all of this, this is a very, very quick summary of, you know, a lot of stuff that we discuss in Unstressable. But what I mean by this is that it's all, it all goes back to your cross-section, all goes back to skills
Starting point is 01:00:23 and choices that we make so that the external stressors that come to us from the world don't kill us. One of my favorite Steven Spielberg movies is this movie called Bridge of Spies and this Russian spy who's been unmasked by the US government is in court, he's being tried for treason or spying and he's potentially facing life in prison. And his lawyer, I think it's Tom Hague, says, aren't you nervous? Aren't you stressed?
Starting point is 01:00:50 And he looks at him and says, would it help? Exactly. Exactly. Yeah. Anyways, Mo Gadat is the former chief business officer for Google X, the founder of One Billion Happy Foundation and co-founder of unstressable. He's also a bestselling author of books including software happy, scary smart and that little
Starting point is 01:01:09 voice in your head. Moe, I mean this and seriously you bring my stress down because I find you inspiring and relaxing and you distill things into kind of actionable solutions. Really always enjoy speaking with you. I think you're really a profound thinker. Thanks for your good work. This episode was produced by Jennifer Sanchez. Our intern is Dan Shalon. Drew Burrows is our technical director. Thank you for listening to the ProppG Pod
Starting point is 01:01:36 from the Box Media Podcast Network. We will catch you on Saturday for No Mercy, No Malice, as read by George Hahn. And please follow our ProppG Markets Pod wherever you get your pods for new episodes every Monday and Thursday.
