The Rich Roll Podcast - Rage Against The Machines: Yuval Noah Harari On Surviving AI, The History Of Information, And The Future Of Humanity

Episode Date: October 28, 2024

Yuval Noah Harari is a renowned historian, bestselling author of “Sapiens” and “Homo Deus,” and the mind behind the new book, “Nexus.” This conversation explores AI’s impact on society ...through Yuval’s unique historical lens. We discuss AI as “alien intelligence,” information’s role in shaping political systems, embracing uncertainty, institutional trust, and finding clarity amid rapid change. His analysis of our collective human psyche in the AI era is profound and revelatory. Yuval is a treasure trove of wisdom. This one is enlightening and sobering. Enjoy! Show notes + MORE Watch on YouTube Newsletter Sign-Up Today’s Sponsors:  Roka: Unlock 20% OFF your order with code RICHROLL  👉ROKA.com/RICHROLL Whoop: Get a FREE one month trial 👉join.whoop.com/roll AG1: Get a FREE bottle of Vitamin D3+K2 AND 5 free AG1 Travel Packs 👉drinkAG1.com/richroll On: High-performance shoes & apparel crafted for comfort and style 👉on.com/richroll Bon Charge: Get 15% OFF my favorite wellness tools & more 👉boncharge.com/richroll Airbnb: Your home might be worth more than you think. Find out how much 👉 airbnb.com/host Check out all of the amazing discounts from our Sponsors 👉 richroll.com/sponsors Find out more about Voicing Change Media at voicingchange.media and follow us @voicingchange

Transcript
Starting point is 00:00:00 I own a bunch of spectacles, and I made the grave error the other day of donning a normal non-Roka pair on my indoor trainer when I was riding my bike indoors, and I gotta tell you, it was a disaster. Every three to five seconds, I had to take my hands off the handlebars and push my glasses back up my nose until I got so frustrated I just tossed them aside. This is the dilemma of every active but optically impaired person I know. And as someone who has relied upon eyewear every single day since I was five years old, it is also the source of endless aggravation. Thankfully, now eradicated thanks to Roka, the stylish performance eyewear company founded by two former Stanford swimming teammates of mine who have gifted everyone like me and, quite frankly, the world with their fashionable line of super lightweight prescription glasses and sunglasses with patented no-slip nose and temple pads that are just impervious to sweat, and no matter what you do,
Starting point is 00:01:06 remain locked on your mug no matter how intense your workout. Without the dork factor, these things go everywhere with me, from the trail to the dinner party. Put them on, feel the difference, and wear without limits. Unlock 20% off your order with the code RICHROLL at roka.com. That's R-O-K-A dot com. Today's episode is sponsored by Whoop. At this point, we're halfway through Sober October. I hope all of you out there who are participating are feeling the benefits of taking this break from alcohol. And I can tell you, as somebody who's been wearing a Whoop for, I don't know, five or
Starting point is 00:01:50 six years at this point, I promise you, it is eye-opening to see just how much sobriety is impacting your health on a day-to-day basis. Whoop tracks things other wearables can't, things like your heart rate, your sleep quality, your daily recovery, your HRV, your workouts, and even your breathing at night. Now, I've been sober for many years. I've seen firsthand how prioritizing health can change your life. And now Whoop is actually backing all of this up with solid evidence-based scientific data. And here's what they found. Drinking knocks your overall recovery down by about 12%. It bumps up your
Starting point is 00:02:31 resting heart rate by 7%. It messes with your sleep quality. We all know this. And get this, every single drink drops your recovery by another 4%. If you're doing Sober October, you're probably noticing these improvements already. And if you haven't started yet, come on, it's not too late. You can join in and enjoy these benefits for yourself. Right now, Whoop is offering all of my listeners a free month to try it out. Just go to join.whoop.com slash roll to get started.
Starting point is 00:03:03 That's join.whoop dot com slash roll. Let's finish Sober October strong. Most people around the world are still not aware of what is happening on the AI front. It can invent medicines and treatments we never thought about, but it can also invent weapons that go beyond our imagination. You're changing the basis of everything. It's no wonder there is an earthquake in the structure that is built on top of it. I got news for you people.
Starting point is 00:03:37 The rise of the machines is already upon us. So what exactly do we need to understand about the rapid ascent of artificial intelligence? What does this revolution augur for the future of the human species? To gain clarity amidst the confusion, I'm joined today by Yuval Noah Harari, a world-renowned historian and mega best-selling author whose landmark books on the history and future of humanity have sold an astonishing 45 million copies and made him the public intellectual of our time. This is the first time that we are basically about to enter a non-human culture. The big question is whether we will force it to slow down or it will force us to speed up until the moment we collapse and die. His latest book and the terrain for today's conversation is Nexus, an absolutely essential read that makes quite a compelling case
Starting point is 00:04:34 for why artificial intelligence will be the biggest disruption in the history of civilization. AIs can make decisions. They are not just tools in our hands. They are agents creating new realities. It's very difficult to appreciate the dangers because the dangers, they are kind of alien. In the Hollywood scenario,
Starting point is 00:04:54 you have the killer robots shooting people. In real life, it's the humans pulling the trigger, but the AI is choosing the targets. Thank you for coming. I appreciate you being here today. I'm excited to unpack what I think is a really revelatory book, a very important book that speaks to perhaps the most vital issue of our time. And in reflecting upon it, I was thinking back on Homo Deus,
Starting point is 00:05:24 which came out in 2015, 16. And in that book, you address AI, but at that time, it was as if you were sounding an alarm on a future story that had yet to be written, right? And perhaps it came off a bit Cassandra, you know, in that moment. And I'm curious, as we find ourselves now in 2024, eight, nine years later, it's as if not only are we, you know, kind of on the cusp of this new revolution, we're mired in it in a way that perhaps even is far more intense than even you predicted at that time. far more intense than even you predicted at that time. Yeah, I mean, things have been moving much, much faster than I think any of us predicted. And, you know, in 2016, AI was like this tiny cloud on the horizon that might arrive in decades or even centuries. And here we are in 2024, and the storm is kind of upon us. And I think maybe the most important thing is really to understand what AI is, because now there is so much hype around AI
Starting point is 00:06:31 that it's becoming difficult for people to understand what is AI. Now everything is AI, especially in the markets, in the investment world, they attach the tag AI to just about anything in order to sell it. So, you know, your coffee machine is now a coffee machine, is an AI coffee machine,
Starting point is 00:06:52 and your shoes are AI shoes. And what is AI? So, you know, the key thing to understand is that AIs are able to learn and change by themselves, to make decisions by themselves, to invent new ideas by themselves. If a machine cannot do that, it's not really an AI. So a coffee machine that just makes you coffee automatically, but by a pre-programmed way, and it never learns anything new, it's just an automatic machine.
Starting point is 00:07:23 It's not an AI. It becomes an AI if, as you approach the coffee machine, the machine, before you press any button, addresses you and says to you, I've been watching you for the last weeks or months, and based on everything I've learned about you and your facial expression and the time of day and so forth, I predict you would like an espresso. So I already took the liberty to make a cup for you. It made the decision independently. And it's really an AI if it then tells you, actually, I've invented a new machine, a new beverage, a new drink that no human ever thought about before. I call it bespresso. And I think
Starting point is 00:08:06 it's better than espresso. You would like it more. And I took the liberty to prepare a cup for you. Then it's really an AI, something that can make decisions and invent new ideas by itself. And therefore, by definition, something that we cannot predict how it will develop and evolve that we cannot predict how it will develop and evolve, and for good or for bad. It can invent medicines and treatments we never thought about, but it can also invent weapons and dangerous strategies that go beyond our imagination. You characterize AI not as artificial intelligence, but as alien intelligence. You give it a different term. Can you explain the difference there and why you've landed on that word? Traditionally, the acronym AI stood for artificial intelligence.
Starting point is 00:08:56 But with every passing year, AI becomes less artificial and more alien. Alien not in the sense that it's coming from outer space. It's not. We create it. But alien in the sense it analyzes information, makes decisions, invents new things in a fundamentally different way than human beings. Again, artificial is from artifact.
Starting point is 00:09:19 It gives us the impression that this is an artifact that we control. And this is misleading. Because yes, we designed the kind of baby AIs, we gave them the ability to learn and change by themselves, and then we release them to the world. And they do things that are not under our control, that are unpredictable. And in this sense, they are alien. And again, I mean, humans are organic entities, like other animals. We function organically. For instance, we function by cycles, day and night, summer and winter. We're sometimes active, sometimes we need to rest, we need to sleep. AIs are alien in the sense that they are not organic. They function in a
Starting point is 00:10:06 completely different way, not by cycles, and they don't need to rest, and they don't need to sleep. And now as they take over more and more parts of reality, parts of society, there is a kind of tug of war of who would be forced to adapt to whom. Would the inorganic AIs be forced to adapt to the organic cycles of the human body, of the human being? Or would humans be pressured into adopting this kind of inorganic lifestyle and starting with the simplest thing
Starting point is 00:10:39 that AI are always on, but people need time to be off? So if you think even about something like the financial markets, traditionally, if you look at Wall Street, it's open only Mondays to Fridays, 9.30 in the morning to 4 o'clock in the afternoon. It's off for the night. It's off for the weekends. It takes vacations on Christmas, on Independence Day. And now as algorithms and AIs are taking over the markets, they're always on.
Starting point is 00:11:09 And this puts pressure on human bankers and investments and so forth. You can't take a minute off because then you're left behind. So in this sense, they are alien, not in the sense that they came from Mars. To understand artificial intelligence and to understand what is actually happening and where we're heading, the thesis of this latest book requires us to understand the nature of information itself and the formative ways in which the evolution of information networks are inextricable from the evolution and progress of humankind. So I'm curious about how you discovered that lens into kind of understanding the nature of artificial intelligence and why it's important to contextualize what is occurring right now through that perspective. It's actually something I began exploring in previous books, the ideas that information is the most fundamental stratum, most fundamental basis of human society and of human reality. Because the human superpower is
Starting point is 00:12:15 the ability to cooperate in very large numbers. If you compare us to chimpanzees, to elephants, to hyenas, individually, there are some things I can do and the chimpanzee can't and vice versa. Our big advantage is not on the individual level. The really big advantage is that chimpanzees can cooperate in, you know, a few dozen chimpanzees, like 50 chimpanzees can cooperate, maybe 100. But with humans, with Homo sapiens, there is no limit. We can cooperate in thousands, in millions, in billions.
Starting point is 00:12:47 If you think about the World Trade Network, like the food we eat, the shoes we wear, everything we consume, it sometimes comes from the other side of the world. So if you have 8 billion people cooperating, and this is our big advantage over the chimpanzees and all the other animals, what makes it possible for us to cooperate with millions and billions of other human beings? It's information. Information is what holds all these large-scale systems together. And to understand human history is to a large extent to understand the flow of information.
Starting point is 00:13:23 And I'll give an example. If you think, for instance, about the difference between democracies and dictatorships, we tend to think about it as a difference or as a conflict between values, between ethical systems. Democracies believe in freedom, dictatorships believe in hierarchies, things like that. And which is true as far as it goes, but on a deeper level, information flows differently in democracies and dictatorships. It's a different shape, a different kind of an information network. In a dictatorship, all decisions are made centrally. Dictatorships come from dictate. One person dictates everything. Putin dictates everything in Russia. Kim Jong-un dictates everything in North Korea. So all the information
Starting point is 00:14:10 flows to a single hub where all the decisions are being made and sent back as orders. So it's a very centralized information network. A democracy, on the other hand, if you look at it in terms of you're in outer space, looking at the flow of information in the United States, you will see several centers in the country, Washington, the political center, New York, the financial center, Los Angeles, the maybe autistic center. But there is no single center that dictates everything. You have several centers, and you also have lots and lots of smaller hubs and centers where decisions are constantly being made. Private corporations, private businesses, voluntary associations, individuals making lots of decisions, constantly exchanging information without that information ever having to pass through the center, through Washington, or even through New York, or even through Los Angeles. So just looking, you don't know anything about the values of the people. You just imagine you're in outer space, in some spaceship or satellite, just observing the flow
Starting point is 00:15:22 of information down below on the planet, you will see that North Korea is very different information flow than the United States. And this is crucial to understand. And when you look at thousands of years of history and how history changes and different regimes rise and fall, understanding what kind of information technology is available is a key to understanding which political systems or economic systems win. For most of history, a large-scale democracy like the United States was simply impossible.
Starting point is 00:16:00 If you think about the ancient world, the only examples we know of democracy are small city-states like Republican Rome or like ancient Athens or even smaller tribes. We don't have any example of a large-scale democracy of millions of people spread over a vast territory that functioned democratically. Now, we know the stories, for instance, about the fall of the Roman Republic and the rise of the Caesars, of the emperors, of the autocrats. But it's really not the fault of Augustus Caesar or Nero or any of the other emperors that Rome became an autocratic empire. Simply, there was no way that the information technology necessary to maintain a large-scale democracy, which is bigger than just the city of Rome, like the whole of Italy or the whole of the Mediterranean. Democracy is a conversation.
Starting point is 00:17:06 converse and decide whether to go to war with the Persian Empire, what to do about the immigration crisis on the Danube with all these Germans trying to get in. You can't have a conversation because you don't have the information technology. And, you know, if it was just the fault of Caesar that Rome became an autocratic empire, we should have seen some other examples of a large-scale democracy in India, in China, somewhere, but nowhere. We only begin to see large-scale democracies in the late modern era, after the rise of new information technologies, which were not available to the Romans, like the printed newspaper, and then the telegraph, and the radio, and television, and so forth. Once you have these technologies, you begin to see large-scale democracies like the United States. and then the telegraph and the radio and television and so forth. Once you have these technologies, you begin to see large-scale democracies like the United States.
Starting point is 00:17:50 And one final point, why is it so important to understand this? Once you understand that democracy is actually built on top of information technology, you also begin to understand the current crisis of democracy. Because, you know, now all over the world, not just in the US, we have current crisis of democracy. Because, you know, now all over the world, not just in the US, we have a crisis of democracy. And to a large extent, this is because there is a new information technology, social media, algorithms, AIs. And it's like, you know, you're changing the basis of everything. So it's no wonder there is an earthquake in the structure that is built on top of it. So we have this idea that the advent or the improvement of information systems and
Starting point is 00:18:33 information technology is part and parcel of the empowerment of democratic systems across the world. But built into that is this sort of indelible misconstrual of information, this assumption or presumption that more information is better and leads to truth and knowledge and wisdom. And your book kind of puts the lie to that and tells a very different story around not only the definition of information, but its purpose. Yeah, I mean, information isn't truth. Information is connection. It's something that holds a lot of people together. And unfortunately, what we see in history, that it's often much easier to connect people, to create social order with the help of fiction and fantasy and propaganda and
Starting point is 00:19:28 lies than with the truth. So most information is not true. The truth is a very rare subset of the information in the world. The problem of truth is that the truth, first of all, is costly, The problem of truth is that the truth, first of all, is costly, whereas fiction is very cheap. If you want to write a truthful history book about the Roman Empire, for instance, you need to invest a lot, a lot of energy, time, money. You need to study Latin. You probably need to study Greek, ancient Greek. You need to do archaeological excavations and find these ancient, whether inscriptions or pottery or weapons, and analyze them.
Starting point is 00:20:09 Very costly and difficult. To write a fictional story about the Roman Empire, very easy. You just write anything you want and it's there on the page or on the internet. The truth is often also very complicated because reality is complicated. You want to give a truthful explanation for why the Roman Republic fell, or why the Roman Empire eventually fell. Very complicated. Whereas fiction can be made as easy, as simple as possible, and people tend to prefer simple explanations over complicated ones. And finally, the truth can be painful, unattractive. We often don't want to know the
Starting point is 00:20:47 truth about ourselves, whether as individuals, which is why we go to therapy for many years to know the things we don't want to know about ourselves, and also on the level of entire nations. You know, each nation has its own dark episodes, its own skeletons or cemeteries in the closet that people don't want to know about. A politician that, you know, in an election campaign would just tell people the truth, the whole truth and nothing but the truth is unlikely to win many votes. So in this competition between the truth, which is costly and complicated and sometimes painful, and fiction, which is costly and complicated and sometimes painful, and fiction, which is cheap and simple and you can make it very attractive, fiction tends to win. And if you look at, you know,
Starting point is 00:21:33 the large-scale systems, networks in history, they are often built on fictions, not on the truth. Maybe I give one example. If you think about visual information like portraits, paintings, photographs. So what is the most common portrait in the world? What is the most famous face in the history of humanity? It is the face of Jesus. I mean, there are more portraits of Jesus than of any other person in the history of the world. Billions and billions produced over centuries in cathedrals and churches and homes, and fully 100% of them are fictional. There is not a single authentic, truthful portrait of Jesus anywhere.
Starting point is 00:22:22 We have no portrait of him from his own lifetime. The Bible doesn't say a single word about how he looked like. There is not a single word in the Bible whether Jesus was tall or short, dark hair or blonde or bald, nothing. All the images,
Starting point is 00:22:39 and you know, it's one of the most famous faces in history, it all comes from the human imagination. And it's still very successful in inspiring people and uniting people. Could be for good purposes, you know, charity and building hospitals and helping the poor, but could also be for bad purposes, crusades, persecutions, inquisitions. But either way, the immense power of a fictional image to unite people and going looking at what's happening today in the world so you have these you know big tech
Starting point is 00:23:13 companies and social media companies that they tell us that our information is always good so let's remove all restrictions on the flow of information and flood the world with more and more information and more information would with more and more information, and more information would mean more truth, more knowledge, more wisdom. And this is simply not true. Most information is actually junk. If you just flood the world with information, the truth will sink to the bottom. It will not rise to the top, again, because it's costly and complicated. And you look around, we have this flood of information. We have the most sophisticated information technology in history, and people are losing the ability to hold a conversation, to talk and listen to one another.
Starting point is 00:23:59 You know, in the United States, Republicans and Democrats are barely able to talk to each other. And it's not an American phenomenon. You see the same thing in Brazil, in France, in the Philippines, all over the world. Because again, the basic misconception is that more information is always good for us. It's like thinking that more food is always good for us. And most information is junk information. And what's curious to me about all of this is that on some level, what you're saying is there's nothing new about this. There is this idea that suddenly we've found ourselves in a post-truth world.
Starting point is 00:24:35 And part of what you're saying is it's kind of always been that way. But the qualitative difference right now is not by definition these platforms that allow us to share information as much as it is the algorithms that empower them, that make the decisions about what we're seeing and when we're seeing it. Yeah, I mean, this is maybe the first place you see the power of AIs to make independent decisions in a way that reshapes the world. When I said earlier that, you know, AIs can make decisions. And AIs, they are not just tools in our hands. They are agents creating new realities.
Starting point is 00:25:17 So you may think, okay, this is a prophecy for the future, a prediction about the future, but it's already in the past. Because even though social media algorithms, they are very, very primitive AIs. you know, the first generation of AIs, they still reshaped the world with the decisions they made. In social media, Facebook, Twitter, TikTok, all that, the ones that make the decision what you will see at the top of your news feed or the next video that you'll be recommended, it's not a human being sitting there making these decisions. It's an AI, it's an algorithm. And these algorithms were given a relatively simple and seemingly benign goal
Starting point is 00:25:59 by the corporations. The goal was increase user engagement, which means in simple English, make people spend more time on the platform. Because the more time people spend on TikTok or Facebook or Twitter or whatever, the company makes more money. It sells more advertisements, it harvests more data that it can then sell to third parties. So more time on the platform, good for the company. This is the goal of the algorithm. Now, engagement sounds like a good thing. Who doesn't want to be engaged? But the algorithms then experimented on billions of human guinea pigs
Starting point is 00:26:37 and discovered something, which was, of course, discovered even earlier by humans, but now the algorithms discovered it. The algorithms discovered that the easiest way to increase user engagement, the easiest way to grab people's attention and keep them glued to the screen, is by pressing the greed or hate or fear button in our minds. You show us some hate-filled conspiracy theory, and we become very angry. We want to see more. We tell about it to all our friends. User engagement goes up. And this is what they
Starting point is 00:27:12 did over the last 10 or 15 years. They flooded the world with hate and greed and fear, which is why, again, the conversation is breaking down. Very hard to hold a conversation with all this hate and fear. Yeah, it's a function of unintended consequences that, on some level, is no different than Nick Bostrom's alignment problem thought experiment about paperclips. Like, this is the exact same thing. And I think it speaks to not only human ignorance, but human hubris around this powerful technology. I think, you know, you talk so much about stories and how indelible they are in terms of crafting
Starting point is 00:27:51 our reality. But one of those stories is we know what we're doing. We can handle it. We understand the consequences. We know the downside here. And we're making sure that what we're putting out in the world is safe and consumer-friendly when, you know, on some level they know it's not, but also they have no idea, you know, what will become of it as a result. And so we're just in this frontier, this unregulated frontier where anything goes at the moment. Yeah, I mean, I think it's important what you said, that these are kind of unintended consequences. Like the people who manage the social media companies, they are not
Starting point is 00:28:31 evil. They didn't set out to destroy democracy or to flood the world with hate and so forth. They just really didn't foresee that when they give the algorithm the goal of increasing user engagement, the algorithm will start to promote hate. And one of the first places that this happened... But let me just interject quickly on that, though. Now that they know that that's the case, it's not as if they're backtracking. That's true. I mean, now they know. They're not exactly regulation-friendly at the moment. No, absolutely not.
Starting point is 00:29:02 Sorry, sorry, go ahead. You're right. Now they know and they are not doing nearly enough. But initially, when they started this whole ball rolling, they really didn't know. And one of the places you saw it for the first time, this was, you know, eight years ago when I published
Starting point is 00:29:17 Homo Deus, this was happening. I didn't pay attention to it either. In Myanmar, Burma, the country formerly known as Burma, Facebook was basically the internet and certainly the biggest social media platform. And in the 2010s, the algorithms of Facebook in Myanmar, they deliberately spread terrible conspiracy theories and fake news about the Rohingya minority in Myanmar, which led to an ethnic cleansing.
Starting point is 00:29:48 Of course, it was not the only reason. There was deep-seated hatred towards Rohingya much before. But this kind of propaganda campaign online, on Facebook, contributed to an ethnic cleansing campaign between 2016 and 2017-2018, in which thousands of Rohingya were killed, tens of thousands were raped, and hundreds of thousands were expelled. You now have close to a million Rohingya refugees in Bangladesh and elsewhere. And this was fueled to a large extent by these conspiracy theories and fake news on Facebook. And at the time, the executive of
Starting point is 00:30:26 Facebook had no, I mean, they didn't know even the Rohingya existed. It's not like it was a conspiracy of Facebook against them. For the whole of Myanmar, a country where Facebook had millions and millions of users, they, by 2018, this is after they got reports of the ethnic cleansing campaign, they had just a handful of humans trying to kind of regulate the actions of millions of users and the algorithms. And they didn't even speak Burmese. Like when the algorithm chose, okay, I'll show people this hate-filled conspiracy theory video in Burmese, nobody in Facebook headquarters spoke Burmese.
Starting point is 00:31:12 They had no idea what the algorithm was promoting. The key thing is, is not to absolve the humans from responsibility. It's to understand that even very primitive AIs, and we are talking about, you know, like eight years ago, not things like Chachipiti. Still, the decisions made by these algorithms to promote certain content had far-reaching and terrible consequences.
Starting point is 00:31:39 In Myanmar, they were not just producing conspiracy theories. They were producing, there are millions of users producing, you know, cooking lessons and biology lessons and sermons on compassion from Buddhist monks and conspiracy theories. And the algorithms made the decision to promote the conspiracy theories. And this is just kind of a warning of, look what happens with even very primitive AIs. And the AIs of today, which are far more sophisticated than in 2016, they too are still just the very early stages of the AI evolutionary process. And we can think about it like the evolution of animals. Until you get to humans, you have 4 billion years of evolution.
Starting point is 00:32:25 You start with microorganisms like amoebas, and it took billions of years of evolution to get to dinosaurs and mammals and humans. Now, AIs are at present at the beginning of a parallel process. The Changi-PT and so forth, they are the amoebas of the AI world. But AI evolution is not organic. It's inorganic, it's digital, and it's millions of times faster. So whereas it took billions of years
Starting point is 00:32:54 to get from amoebas to dinosaurs, it might take just 10 or 20 years to get from the AI amoebas of today to AI T-Rex in 2040 or 2050. Maybe even less. Maybe even less. We're talking about, I don't think our brains are organized properly to really comprehend the accelerated speed at which this is self-learning and iterating and improving upon itself.
Starting point is 00:33:20 Like it's a compounding thing that is astronomical. Meanwhile, trillions of dollars are being spent to build these server farms with these NVIDIA chips, and there's so much power required to keep these things going. They're talking about nuclear power. I mean, this is a whole new world. And yet, in talking about it, it still feels somewhat like an academic exercise. about it, it still feels somewhat like an academic exercise. Because for myself or somebody who might be watching or listening, their experience with AI comes in the form of ChatGPT or some of these helpful tools. Like, I like my algorithm. It shows me the kind of products that I want to buy without having to search for it. And a simple example would be preparing for this podcast. Like I
Starting point is 00:34:05 listen to your book on audio book and I'm doing what I usually do, pulling up a bunch of tabs and just collating a bunch of information on you and the book and the message that you're putting out. But I did something I had never done before, which is I got a PDF of Nexus and I uploaded it to a tool called Notebook LM. And that tool then synopsized the entire book and created a chatbot where I could ask it questions about your book and ask it to elaborate on certain concepts. And it will even create a podcast conversation between two people about the subject matter of the book. So even this conversation is at risk, right? It's not irrelevant. And I'm like, wow, that's kind of a remarkably helpful tool. And it's easy to just not really
Starting point is 00:34:53 appreciate or connect with the downside risk and power of these tools and where they're leading us. So I think what I'm saying is, I guess the point I'm trying to make is consumers, like all of us, we're being lured into a trust of something so powerful we can't comprehend and are ill-equipped to be able to kind of cast our gaze into the future and imagine where this is leading us. Absolutely. I mean, part of it is that there is enormous positive potential in AI. It's not like it's all doom and gloom. There is really enormous positive potential if you think about the implications for healthcare. That, you know, AI doctors available 24 hours a day that know our entire medical history and have read every medical paper that was ever published and can tailor their advice, their treatment to our specific life history and
Starting point is 00:35:47 our blood pressure, our genetics. It can be the biggest revolution in healthcare ever. If you think about self-driving vehicles. So every year, more than a million people die all over the world in car accidents. Most of them are caused by human error, like people drinking and then driving or falling asleep at the wheel or whatever. Cell driving vehicles are likely to save about a million lives every year. This is amazing. You think about climate change. So, yes, developing the AIs will consume a lot of energy, but they could also find new sources of energy, new ways to harness energy that could be our best shot at preventing ecological
Starting point is 00:36:25 collapse. So there is enormous positive potential. We shouldn't deny that. We should be aware of it. And on the other hand, it's very difficult to appreciate the dangers because the dangers, again, they're kind of alien. Like if you think about nuclear energy, yeah, it also had positive potential, nuclear, cheap nuclear energy.
Starting point is 00:36:43 But people had a very good grasp of the danger, nuclear war. Anybody can understand the danger of that. With AI, it's much more complex because the danger is not straightforward. The danger is really, I mean, we've seen the Hollywood science fiction scenarios of the big robot rebellion, that one day a big computer or the AI decides to take over the world and kill us or enslave us. And this is extremely unlikely to happen anytime soon, because the AIs are still a kind of very narrow intelligence. Like the AI that can summarize a book, it doesn't know
Starting point is 00:37:20 how to act in the physical world outside. You have AIs that can fold proteins, you have AIs that can play chess, but we don't have this kind of general AI that can just find its way around the world and build a robot army and whatever. So people, it's hard to understand what's so dangerous about something which is so kind of narrow in its abilities. And I would say that the danger doesn't come from the big robot rebellion. It comes from the AI bureaucracies. Already today, and more and more, we will have not one big AI trying to take over the world. We will have millions and billions of AIs constantly making decisions about us everywhere. You apply to a bank to get a loan, it's an AI deciding whether
Starting point is 00:38:04 to give you a loan. You apply to get a job, it's an AI deciding whether to give you a loan. You apply to get a job, it's an AI deciding whether to give you a job. You're in court, or you're found guilty of some crime, the AI will decide whether you go for six months or three years or whatever. Even in armies, we already see now in the war in Gaza
Starting point is 00:38:20 and with the war in Ukraine, AI make the decision about what to bomb. And in the Hollywood scenario, you have the killer robots shooting people. In real life, it's the humans pulling the trigger, but the AI is choosing the targets. It's telling them what to... This is much more complex than the standard scenario.
Starting point is 00:42:01 Every point of connection with bureaucracy then becomes turned over to an algorithm that makes decisions in a black box without the opportunity for rebuttal or conversation, right? So we're outsourcing all of these decisions and creating like an autocratic diaspora of decision makers. And that in turn, like you can imagine over time, like what emerges from that is like a godhead or a pantheon of gods where there's an authoritarian regime that's dispersed across this in which we are relinquishing our agency over to these machines and trusting that they're making the right decisions, but not knowing how those decisions are being made. Even the engineers who are creating the algorithms don't know. And there's something, you know, kind of innately terrifying about that. Again, it's not authoritarian in the sense that there is a single human being that is
Starting point is 00:43:01 pulling all the levers. No, it's the AIs. Like the bank has this AI that decides who is qualified to get a loan. And if they tell you, we decided not to give you a loan, and you ask the bank, why not? And the bank says, we don't know. I mean, computer says no. I mean, the algorithm says no. We don't understand why the algorithm says no, but we trust the algorithm. And this is likely to spread to more and more places. The key thing is, it's not that the bank is hiding something from you.
Starting point is 00:43:31 It's really that the AIs make decisions in a very different way than human beings, on a basis of a lot more data. So if the bank really wanted to explain to you why they refuse to give you a loan, like let's say there is a law, the government passes a law of a right to an explanation. If the bank refused to give you a loan, you can apply. They must give you an explanation. So the explanation, well, people fear that it will be kind of, I don't know, racist bias or homophobic bias like in the old days.
Starting point is 00:44:03 That the algorithm saw that you're black or you're Jewish or you're gay, and this is why it refused to give you a loan. It won't be like that. I mean, the bank will send you an entire encyclopedia and millions of pages saying, this is why the computer refused to give you a loan. The computer took into account thousands and thousands of data points about you, each one based on statistics, on millions of previous cases. And now you can go over these millions of pages if you like. And if you want to challenge, okay. But it's not the kind of old style racism or whatever.
Starting point is 00:44:42 Sure. A new version of the terms and conditions that we just click on without reading, right? Except extrapolated a hundredfold. In addition to that, with all of these data points, I can't help but think that these machines, the veracity of the information that these machines provide us with is only as reliable as the data sets that it has been provided with. And right now, we're tiptoeing into a situation where the internet is being rapidly degraded because it's being populated more and more by AI content. Now, when you go to Google and you search, the first thing you see is sort of
Starting point is 00:45:26 an AI kind of summary of your query as opposed to links. And this in turn is undermining the business model of legacy media and all forms of media, right? So as those continue to die on the vine, more and more of the internet will be a result of AI generated content. And then it becomes a recursive thing in which it's feeding upon its own inputs to make decisions. And, you know, with that, like you can imagine a degradation of the data set upon which it is making those decisions. Exactly. Even if you think about something like music.
Starting point is 00:46:03 So AI that now creates music, it basically ate the whole of human music. For thousands of years, humans produced music or art or theater or whatever. Within a year, the current AI just ate the whole of it and digested it and start now creating new music or new texts or new images. creating new music or new texts or new images. And the first kind of generation of AI texts or music, this is based on previous human culture. But with each passing year, the AIs will be eating their own products. Because as you know, the human share in music production or the human share in text production or image production will go lower and lower, most images, most music will be produced at least in part by AI, and this will be the new food that the AI eats. And then you have exactly what you described,
Starting point is 00:46:57 this recursive pattern, and where it will lead us, we have no idea. I mean, another way to think about it, this is the first time that we are basically about to enter a non-human culture. Like humans are our cultural entities. We live cocooned inside culture, like all this music and art and also finance and also religion. This is all part of culture. And for tens of thousands of years, the only entities that produced culture were other humans. So all the songs you ever heard were produced by humans. All the religious mythologies you ever heard came from the human imagination. Now there is an alien intelligence, a non-human intelligence that will increasingly produce songs and music, mythology, financial strategies, political ideas. Even before we rush to decide, is it good? Is it bad? Just stop and think about
Starting point is 00:47:54 the meaning of living in a non-human culture or a culture which is, I don't know, 40% or 70% non-human. It's not like going to China and seeing a different human culture. It's like really alien culture here on Earth. Yeah, my human mind bristles at that. I start thinking about this bias I have around the originality of human thought and emotion, and this kind of assumption that AI will never be able
Starting point is 00:48:22 to fully mimic the human experience, right? There's something indelible about what it means to be human that the machines will never be able to fully replicate. And when you talk about, you know, information, the purpose of information being to create connection, a big piece there is intimacy, like intimacy between human beings. So information is meant to create connection, but now we have so much information and we're feeling very disconnected. So there's something broken in this system. And I think it's driving this loneliness epidemic, but on the other side, it's making us value like intimacy, maybe a little bit more than we were previously. And so I'm curious about where intimacy kind of fits into this, you know, post-human world in which culture is being
Starting point is 00:49:12 dictated by machines. I mean, human beings are wired for that kind of intimacy. And I think our radar or our kind of ability to, you know, identify it when we see it is part of what makes us human to begin with. Maybe the most important part. I think the key distinction here that is often lost is the distinction between intelligence and consciousness. That intelligence is the ability to pursue goals and to overcome problems and obstacles on the way to the goal. The goal could be a self-driving vehicle trying to get from here to the goal. The goal could be a self-driving vehicle trying to get from here to San Francisco. The goal could be increasing user engagement. And an intelligent agent knows how to overcome the problems on the way to the goal. This is
Starting point is 00:49:58 intelligence. And this is something that AI is definitely acquiring. In at least certain fields, AI is now much more intelligent than us. Like in playing chess, much more intelligent than human beings. But consciousness is a different thing than intelligence. Consciousness is the ability to feel things. Pain, pleasure, love, hate. When the AI wins a game of chess, it's not joyful. If there is a tense moment in the game, it's not clear who is going to win. The AI is not tense. It's only the human player which is tense or frightened or anxious. The AI doesn't feel anything.
Starting point is 00:50:40 Now, there is a big confusion because in humans and also in other mammals, in other animals, in dogs and pigs and horses and whatever, intelligence and consciousness go together. We solve problems based on our feelings. Our feelings are not something that kind of evolution is a decoration. It's the core system through which mammals make decisions and solve problems is based on our feelings so we tend to think that consciousness and intelligence must go together and in all these science fiction movies you see that as the computer or robot
Starting point is 00:51:18 becomes more intelligent then at some point it also gains consciousness it falls in love with the human or whatever. And we have no reason to think like that. Yeah, consciousness is not a mere extrapolation of intelligence. Absolutely not. It's a qualitatively different thing. Yeah, and again, if you think in terms of evolution,
Starting point is 00:51:38 so yes, the evolution of mammals took a certain path, a certain road in which you develop intelligence based on consciousness. But so far, what we see is computers, they took a different route. Their road develops intelligence without consciousness. I mean, computers have been developing, you know, for 60, 70 years now. They are now very intelligent, at least in some fields, and still zero consciousness. Now, this could continue indefinitely. Maybe they are just on a different path. Maybe eventually they will be far more intelligent than us in everything
Starting point is 00:52:17 and still will have zero consciousness, will not feel pain or pleasure or love or hate. You know, the same way that if you think about birds and airplanes. So airplanes did not become like birds. Airplanes don't fly using feathers and so forth. They fly in a completely different way. It's not like that at a certain point when the airplane flies fast enough, suddenly the feathers will appear. No.
Starting point is 00:52:44 And it could be the same with intelligence and consciousness, that it will be more and more intelligent without feelings ever appearing. Now, what adds to the problem is that there is nevertheless a very strong commercial and political incentive to develop AIs that mimic feelings, to develop AIs that can create intimate relations with human beings, that can cause human beings to be emotionally attached to the AIs. Even if the AIs have no feelings of themselves, they could be trained, they are already trained, of themselves, they could be trained, they are already trained, to make us feel that they have feelings and to start developing relationships with them. Why is there such an incentive?
Starting point is 00:53:34 Because intimacy is, on the one hand, maybe the most cherished thing that a human can have. You know, just on the way here, we were listening to Barbara Streisand singing, our people who need people are the luckiest people in the world. That intimacy is not a liability. It's not something bad that, oh, I need this.
Starting point is 00:53:55 No, it's the greatest thing in the world. But it's also potentially the most powerful weapons, weapon in the world. If you want to convince somebody to buy a product, if you want to convince somebody to vote for a certain politician or party, intimacy is like the ultimate weapon. I mean, so far in history, there was a big battle for attention, how to grab human attention. Also, we talked about earlier in social media,
Starting point is 00:54:22 how to get human attention. And there were ways like, I don't know earlier in social media, how to get human attention. And there were ways, like, I don't know, in Nazi Germany, Hitler could force everybody to listen to his speech on radio. So he had command of attention, but not of intimacy. There was no technology for Hitler or Stalin or anybody else to mass produce intimacy. Now with AIs, it is possible technically to mass-produce intimacy. Now with AIs, it is possible, technically, to mass-produce intimacy. You can create all these AIs that will interact with us, and they will understand our feelings, because, again, feelings are also patterns.
Starting point is 00:55:01 You can predict a person's feelings by watching them for weeks and months and learning their patterns and facial expression and tone of voice and so forth. And then if it's in the wrong hands, it could be used to manipulate us like never before. Sure, it's our ultimate vulnerability, this beautiful thing that makes us human becomes this great weakness that we have. Because as these AIs continue to self-iterate, their capacity to mimic consciousness and human intimacy will reach such a degree of fidelity that it will be indistinguishable to the human brain. And then humans become like these unbelievably easy to hack machines who can be directed wherever the AI chooses you know, chooses to direct them. Yeah, it's not a prophecy. We can take actions today to prevent this. We can have regulations about it. We can, for instance, have a regulation that AIs are welcome to interact with humans,
Starting point is 00:55:59 but on condition that they disclose that they are AIs. If you talk with an AI doctor, that's good, but the AI should not pretend to be a human being. You know, I'm talking with an AI. I mean, it's not that there is no possibility that AI will develop consciousness. We don't know. I mean, there could be that AIs will really develop consciousness. But does it matter if it's mimicking it to such a degree of fidelity?
Starting point is 00:56:24 In terms of how human beings interact consciousness. But does it matter if it's mimicking it to such a degree of fidelity? Does it even, in terms of like how human beings interact with it, does it matter? For the human beings, no. I mean, this is the problem. I mean, because we don't know if they really have consciousness or they're only very, very good
Starting point is 00:56:36 at mimicking consciousness. So the key question is ultimately political and ethical. If they have consciousness, if they can feel pain and pleasure and love and hate, this means that they are ethical and political subjects. They have rights, that you should not inflict pain on an AI the same way you should not inflict pain on a human being, that what they like, what they love might be as important as what human beings desire.
Starting point is 00:57:06 So they should also vote in elections. And they could be the majority. Because, you know, you can have a country, 100 million humans and 500 million AIs. So do they choose the government in this situation? Now, you know, in the United States, interestingly enough, there is actually an open legal path for AIs to gain rights. It's one of the only countries in the world where this is the case.
Starting point is 00:57:30 Because in the United States, corporations are recognized as legal persons with rights. Until today, this was a kind of legal fiction. Like according to US law, Google is a person. It's not just a corporate, it's a person. And as a person, it also has freedom of speech. This is the Supreme Court ruling for 2010 of Citizen United. Now, until today, this was just legal fiction because every decision made by Google
Starting point is 00:57:56 was actually made by some human being, an executive, a lawyer, an accountant. Google could not make a decision independent of the humans. But now you have AIs. So imagine the situation when you incorporate an AI. Now this AI is a corporation. And as a corporation, US law recognizes it as a person with certain rights, like freedom of speech. Now it can earn money. It can go online, for instance, and offer its services to people and earn money. Then it can open a bank account
Starting point is 00:58:30 and invest its money in the stock exchange. And if it's very smart and very intelligent, it could become the richest person in the US. Now, imagine the richest person in the US is not a human. It's an AI. And according to US law, one of the rights of this person is to make political contributions, donations. This was the main reason behind Citizen United in 2010. So this AI now makes billions of dollars of contributions to politicians
Starting point is 00:59:00 in exchange for expanding AI rights. And the legal path in the U.S. is completely open. You don't need any new law to make this happen. That's a plot of a movie. Yeah, when we were in L.A. Yeah, I mean, wow, that's so wild to contemplate. What are the differences in the ways in which the advent of this powerful technology is impacting democratic systems and authoritarian systems? So both systems have a lot to gain and have a lot to lose. Again, the AI, it's the most powerful technology ever created. It's not a tool, it's an agent. So you have millions and billions of new agents, very intelligent, very capable,
Starting point is 00:59:50 that can be used to create the best healthcare system in the world, but also the most lethal army in the world, or the worst secret police in the world. If you think about authoritarian regimes, so throughout history they always wanted to monitor their citizens around the clock. But this was technically impossible. Even in the Soviet Union, you know, you have 200 million Soviet citizens. You can't follow them all the time because the KGB didn't have 200 million agents.
Starting point is 01:00:21 And even if the KGB somehow got 200 million agents, that's not enough. Because, you know, in the Soviet Union, it's still basically paper bureaucracy, the secret police. If a secret agent followed you around 24 hours a day, at the end of the day, they write a paper report about you and send it to KGB headquarters in Moscow. So imagine, every day, KGB headquarters is flooded with 200 million paper reports. Now, to be useful for anything, somebody needs to read and analyze them. They can't do it. They don't have the analysts. Therefore, even in the Soviet Union, some level of privacy was still the default for most people.
Starting point is 01:01:04 For technical reasons. Now, for the first time in history, it is technically possible to annihilate privacy. A totalitarian regime today doesn't need millions of human agents if he wants to follow everybody around. You have the smartphones and cameras and drones and microphones everywhere. And you don't need millions of human analysts to analyze this ocean of information. You have AI. And this is already beginning to happen. This is not a future prediction. In many places around the world, you begin to see the formation of this totalitarian surveillance regime. It's happening
Starting point is 01:01:41 in my country, in Israel. Israel is building this kind of surveillance regime in the occupied Palestinian territories to follow everybody around all the time. And also in our region, in Iran, since the Islamic revolution in 1979, they had the hijab laws, which says that every woman, when she goes out, walking or even driving in her private car, she must wear the hijab, the headscarf. And until today, the regime had difficulty enforcing the hijab laws because they didn't have, you know, millions of police officers that you can place on every street, a police officer.
Starting point is 01:02:23 If a woman drives without a headscarf, immediately she's arrested and fined or whatever. In the last few years, they switched to relying on an AI system. Iran is now crisscrossed by surveillance cameras with facial recognition software, which recognizes automatically if in the car that just passed by the camera,
Starting point is 01:02:47 the facial recognition software can identify that this is a woman, not a man, and she's not wearing the hijab and identify her identity, find her phone number, and within half a second, they send her an SMS message saying you broke the hijab law,
Starting point is 01:03:06 your car is impounded. Your car is confiscated, stop the car by the side of the world. This is daily occurrence today in Tehran and Isfahan and other parts of Iran. And this is based on AI. And it's not like there is a report that goes to the court and some human judge goes over the data and decides what to do. The AI, like, immediately decides, okay, the car is confiscated. And this can happen in more and more places around
Starting point is 01:03:34 the world, like even in the US. You know, if you think about all the debate about abortion. Without going into the debate itself, the people who think, rightly or wrongly, but they think that abortion is murder, they have a very strong incentive
Starting point is 01:03:51 to build a similar surveillance system for American women. You know, to stop murder. Like, you can build this surveillance system that can identify, yesterday you were pregnant, today you're not, what happened in between? So it's not just a problem, you know, for Iran or for the Palestinians or the Chinese. This can come to the U.S. as well. And to prevent them from crossing state lines, things like that. Yeah. Yeah. Like, OK, you went from, I don't know, Texas to California. You were
Starting point is 01:04:24 pregnant. You were pregnant. You came back. You're not pregnant. What happened in California? So it feels like AI is this incredible tool to consolidate power around authoritarian regimes. But it also has its pitfalls too. It's not the perfect tool. It also frightens the autocrats. Because the one thing that human dictators always feared most
Starting point is 01:04:47 was not a democratic revolution. The one thing they feared most is a powerful subordinate that they can't control and that might manipulate them or take power from them. If you can look at the Roman Empire, not a single Roman emperor was ever toppled by a democratic revolution. Never happened. But many of them lost their life or their power to a subordinate, you know, a general that rebelled against them, a provisional governor, their brother, their wife that took power from them. This is the greatest fear of every dictator also today. And so if you think about AI, so if you're a human dictator and you now give this immense power to an AI system, where is the guarantee that this system will not turn against you and either eliminate you
Starting point is 01:05:40 or just turn you into a puppet? I mean, what we also know about dictators, it's relatively easy to manipulate these people if you can whisper in their ear, because they are very paranoid. And the easiest people to manipulate are the paranoid people. And we have our AI corporation in the United States that can deploy billions of dollars towards bots and whatever else to create that paranoia or enhance it. You really just need to hack one person. For an AI to take power in the US, very complicated. It's such a distributed system. Like, okay, the AI can learn to manipulate the president, but it also needs to manipulate the senators and the congress members and the state
Starting point is 01:06:23 governors and the Supreme Court. Like, what would the AI do with a Senate filibuster? It's difficult. But if you want to take power in a dictatorship, you just need to learn to manipulate a single person. So the dictators are not at all happy about the AIs. And we're already beginning to see it, for instance, with chatbots, that they are very concerned. Because, you know, you can design a chatbot which will be completely loyal to the regime. But once you release it to the internet to start interacting with people in real life, it changes. I mean, remember what we talked about earlier, that AI is defined by the ability to learn and change by itself.
Starting point is 01:07:09 So even if Putin creates, like the Putinist chatbot that always says that Putin is great and Putin is right and Russia is great and so forth, but then you release it to the real world, it starts observing things in the real world. For instance, it notices that, you know, in Russia
Starting point is 01:07:26 the invasion of Ukraine is officially not a war. It's called a special military operation. And if you say that it's a war, you go to prison for up to, I think, three years or something like that. Because it's not a war, it's a special military operation. Now, what do you do if a very intelligent chatbot that you released, you know, connects the dots and says, no, it's not a special military operation. It's a war.
Starting point is 01:07:52 Would you send a chatbot to prison? What can you do? And, you know, democracies, of course, also have a problem with chatbots saying things we don't like. They can be racist, they can be homophobic, whatever. But the thing about democracy,
Starting point is 01:08:09 it has a relatively wide margin of tolerance, even for anti-democratic speech. Dictatorships have zero margin for dissenting views. So they have a much bigger problem with how to control these unpredictable chatbots. We're brought to you today by ON. Being a gearhead, I'm all about testing the latest sports tech. But you know what often gets overlooked? Apparel.
Starting point is 01:08:47 Apparel is crucial to performance, and that's why I was blown away by the folks at On's Swiss Labs. Their cutting-edge approach from sustainability to precision testing for performance enhancement is next level. It is truly Swiss innovation at its best. Visit on.com slash richroll. That's on.com slash richroll.
Starting point is 01:09:10 We're brought to you today by Bon Charge. So I'm turning 58 in October, and I spent much of my time, most of my life, in the sun without much thought about skincare whatsoever. And around 50, I kind of hit this moment where I had a wake-up call and realized like, hey, I better start to, you know, attend to this if I want to retain my youthful appearance. And that's when I discovered the benefits of red light therapy. And Bon Charge has an amazing array of these red light therapy products. They use low-wavelength light to penetrate the skin. And what it does is it stimulates the mitochondria in the cells. In turn, this increases collagen production.
Starting point is 01:09:57 It improves circulation. It helps with wrinkle repair and ends up promoting firmer, more youthful-looking skin, which I can personally attest to. My favorite among their lineup of products is Bon Charge's innovative face mask. It's portable, it's easy to use, you can travel with it, and just 10 to 12 minutes per session is all you need to start seeing results. Everything about it is just simple. There's no messy creams, there's no endless product bottles or subscription services. It's straightforward. It's travel-friendly.
Starting point is 01:10:30 Plus, all Bon Charge products are HSA and FSA eligible, potentially saving you up to 40% with tax-free purchases. Are you ready to elevate your wellness journey? Of course you are. So go to boncharge.com slash richroll. That's B-O-N-C-H-A-R-G-E dot com slash richroll.
Starting point is 01:10:56 Use code richroll at checkout for 15% off their entire range of wellness tools. How are you interpreting the current moment, given that we're on the cusp of an election here in the United States? And, you know, there's a lot of discourse around the existential threat to democracy that we may be facing. What role is AI playing in this? What should we understand about the impact of this technology on us as citizens and voters? At present, I don't think that AI does. Social media, again, has, of course, a huge impact on the political discourse
Starting point is 01:11:38 and thereby on the results of the elections. But I don't see AI really changing or manipulating the elections in November. It's too close. The big question is, whoever wins the elections, maybe the most important decisions that person has to make will be about AI. Because of the extremely rapid pace at which this technology is developing, you know, you look
Starting point is 01:12:04 at where ChatGPT was a year ago. You look at what things are now in 2024. What will be the state of AI in 2027, 2028? So, you know, I watched the presidential debate. Most people, their main takeaway was about the cats and the dogs. It's the most memorable thing from the debate. I mean, you know, whoever wins will maybe have to make some of the most important decisions in history about the relations between humans and AIs.
Starting point is 01:12:33 If you're worried about immigration, it's not the immigrants that will, you know, replace the taxi drivers. It's the immigrants that will replace the bankers that you should be worried about. And it's the AIs,
Starting point is 01:12:45 not somebody coming from south of the border. And who do you trust to make these momentous decisions? Now, and if you think specifically about the threats to democracy, so one thing we learned from history is that democracies always, since again, ancient Athens, they always had this one single big problem or weakness. That democracy is basically a kind of a deal that you give power to somebody for a limited time period, for four years, on condition they give it back. And then you can make a different choice. Like, we tried this, it didn't work, let's try something else. This ability to say, let's try something else, this is democracy.
Starting point is 01:13:32 And it's based on that you give power and you expect to get it back after four years. A peaceful transfer at the end of that term. If you give power to somebody who then doesn't give it back, they now have the power. They have the power to also stay in power. That was always the biggest danger in democracy. On policy, you like this, you like that, there is discussion to be had. But you have this one person, Donald Trump, and you have a record from the previous time that this person doesn't want to give power back. And he is willing to go a long way, including potentially inciting violence, to avoid giving power back. And you want to give him so much power? That doesn't sound like a very good idea. So for me, this is the kind of the number one issue in the elections. Everything else is of marginal importance in comparison. Yeah. I mean, I think it challenges our
Starting point is 01:14:39 predilections around the stability of democracy and is forcing us to really embrace the fact that it is a delicate dynamic that is, you know, informed by collective action by the people. And in reflecting upon, you know, this technology also, you know, the story of technology is one in which our ability to legislate around it and regulate it always falls way behind the pace of advancement. And now we're in a situation where the pace of advancement is like nothing we've ever seen before, which calls into question our ability to not only kind of put guardrails around it, but to even understand what is actually happening. The history of information systems is one of collective human cooperation. And yet we're in a situation right now where it feels like cooperation is being challenged, not only nationally here in the
Starting point is 01:15:39 United States, but internationally. And so as we kind of begin to talk about how we're going to triage this or find solutions, like where do you land in terms of our capacity to collectively come together as a global community to figure out solutions and then put them into motion so that we don't tiptoe into some kind of dystopia. So there is a lot to unpack here. So first of all, when we think about cooperation, as we said earlier, this was always our biggest advantage as a species, that we cooperate better than anybody else. We can construct these even global networks of trade that no other animal even understands. Like if you think about, I don't know, horses.
Starting point is 01:16:25 So horses never figured out money. They were bought and sold, but they never understood what are these things that the humans are exchanging. And this is why horses could never unite against us or could never manipulate us because they never figured out how the system works. That one person is giving me to
Starting point is 01:16:45 another person in exchange for a few shiny metal things or some pieces of paper. AI is different. It understands money better than most people. Like most people don't understand how the financial system really works. And financial AIs in fintech, they already surpass most human beings, not all human beings, but most human beings in their understanding of money. So we are now confronting, again, millions and billions of new agents
Starting point is 01:17:15 that potentially can use our own systems against us, that computers can now collaborate using, for instance, the financial system more efficiently than humans can. So the whole issue of cooperation is changing. And computers also learn how to use the communication systems to manipulate us, like in social media. So they are cooperating while we are losing the ability to cooperate.
Starting point is 01:17:44 And that should raise the alarm. Now, and the thing is, it's very difficult to understand what is happening. If we want humans around the world to cooperate on this, to build guardrails, to regulate the development of AI, first of all, you need humans to understand what is happening. Secondly, you need the humans to trust each other. And most people around the world are still not aware of what is happening on the AI front. You have a very small number of people in just a few countries, mostly the US and China and a few others,
Starting point is 01:18:21 who understand. Most people in Brazil, in Nigeria, in India, they don't understand. And this is very dangerous because it means that a few people, many of them not even elected by US citizens, they are just, you know, private companies, they will make the most important decisions.
Starting point is 01:18:40 And the even bigger problem is that even if people start to understand, they don't trust each other. Like I had the opportunity to talk to some of the people who are leading the AI revolution, which is still led by humans. It is still humans in charge. I don't know for how many more years, but as of 2024, it's still humans in charge. And you meet with these, you know, entrepreneurs and business tycoons and politicians also in the US, in China, in Europe, and they all tell you the same thing, basically. They all say, we know that this thing is very, very dangerous, but we can't trust the other humans. If we slow down, how do we know that our competitors will also slow down? Whether our business competitors, let's say here in the US, or our Chinese competitors across the ocean.
Starting point is 01:19:36 And you go and talk with the competitors, they say the same thing. We know it's dangerous. We would like to slow down to give us more time to understand, to assess the dangers, to debate regulations, but we can't. We have to rush even faster because we can't trust the other corporation, the other country. And if they get it before we get it, it will be a disaster. And so you have this kind of paradoxical situation where the humans can't trust each other,
Starting point is 01:20:06 but they think they can trust the AIs. Because when you talk with the same people and you tell them, okay, I understand you can't trust the Chinese or you can't trust open AI, so you need to move faster developing this super AI. How do you know you could trust the AI? And then they tell you, oh, I think that will be okay. I think we've figured out how to make sure that the AI will be trustworthy and under our control. So we have this very paradoxical situation when we can't trust our fellow humans, but we think we can trust the AI. And layered on top of that is an incentive structure, of course, that further engenders distrust in this arms race, right? Like the prize goes to the breakthrough developers and those will be rewarded and remunerated in ways that are, you know, perhaps unprecedented, right? other side of that is so enticing that any discourse around regulation or anything else
Starting point is 01:21:07 The prize on the other side of that is so enticing that any discourse around regulation or anything else that might slow it down becomes not only a national security threat, but also an entrepreneurial threat, right? So everything is motivating rapid acceleration at the cost of transparency and regulation and all these other things, all these checks and balances that we really need right now. And I don't know, like, you know, how you're feeling about this, but it leaves me a little cold and pessimistic. Like, you're a historian, like the story of humankind is all gas, no brakes, you know? Like, let's just, we're plowing forward and we'll deal with the consequences when they come. Like, we're not wired adequately to really appreciate the long-term consequences of our behavior. We're kind of, you know, looking right in front of us and making decisions based on how it's going to impact us in the immediate future and very little else. Yeah, I mean, throughout history, the problem is people are very good at solving problems, but they tend to solve the wrong problems.
Starting point is 01:22:09 Like they spend very little time deciding what problem we need to solve. Like 5% of the effort goes on choosing the problem. Then 95% of the effort goes on solving the problem we focus on. And then we realize, oh, we actually solved the wrong problem. And it just creates new problems down the road that we now need to solve.
Starting point is 01:22:29 And then we do the same thing again. And, you know, wisdom often comes from silence, from taking time, from slowing down. Let's really understand the situation before we rush to make a decision. And, you know, it starts on the individual level, that so many people, for instance, think, oh, my main problem in life is that I don't have enough money. And then they spend the next 50 years making lots of money. And even if they succeed, they wake up at a certain point and say, oops, I think I chose the wrong problem.
Starting point is 01:23:04 I think it wasn't, yeah, I need some money, but it wasn't my main problem in life. And we are perhaps doing it collectively as a species, the same thing. You know, you go back to something like the agricultural revolution. So people thought, okay, we don't have enough food. Let's produce more food with agriculture. We'll domesticate wheat and rice and potatoes, we'll have lots more food, life will be great.
Starting point is 01:23:27 And then they domesticate these plants and also some animals, cows, chickens, pigs, whatever. And they have lots of food and they start building these huge agricultural societies with towns and cities. And then they discover
Starting point is 01:23:42 a lot of new problems they did not anticipate. For instance, epidemics. Hunter-gatherers suffered from almost no infectious diseases, because most infectious diseases came to humans from domesticated animals, and they spread in the dense towns and cities. Now, if you live in a hunter-gatherer band, you don't keep any chickens or pigs. So it's very unlikely some virus will jump from a wild chicken to you.
Starting point is 01:24:10 And even if you got some new virus, you have just like 20 other people in your band and you move around all the time. Maybe you infect five others and like three die and that's the end of it. But once you have these big agricultural cities, then you get the epidemics. People thought they were building paradise for humans. Turned out they were building paradise for germs. And human life expectancy and human living conditions for most humans actually
Starting point is 01:24:38 went down. If you're a king or a high priest, it's okay. But for the average person, it was actually a bad move. And the same thing happens again and again throughout history. And it can happen now on a very, very big scale with AI. In a way, it goes back to this issue of organic and inorganic. That organic systems are slow. They need time. And this AI is an inorganic system which accelerates beyond anything we can deal with. And the big question is whether we will force it to slow down or it will force us to speed up until the moment we collapse and die.
Starting point is 01:25:19 I mean, if you force an organic entity to be on all the time and to move faster and faster and faster, eventually it collapses and dies. One of the things I heard you say that really struck me was this. It's a quote. If something ultimately destroys us, it will be our own delusions. So can you elaborate on that a little bit and how that applies to what we've been talking about? Again, it comes back to our mythological delusions: we cannot trust the other humans. And we think we need to develop these AIs faster and faster, and give them
Starting point is 01:26:16 more and more power because we have to compete with the other humans. And this is the thing that could really destroy us. And, you know, it's very unfortunate because we do have a track record of actually being quite successful at building trust between humans. It just takes time. I mean, if you think about, again, the long arc of human history, so these hunter-gatherer bands
Starting point is 01:26:39 tens of thousands of years ago, they were tiny, a couple of dozen individuals. And even though the next steps, like agriculture, had their downsides, again, like epidemics, people did learn over time how to build much larger societies, which are based on trust. If you now live in the United States or some other country, you are part of a system of hundreds of millions of people who trust each other in many ways which were really unimaginable in the Stone Age. Like, you don't know 99.99% of the other people in the country.
Starting point is 01:27:27 And still, you trust them with so much. I mean, the food you eat, mostly you did not go to the forest to hunt and gather it by yourself. You rely on strangers to provide the food for you. Most of the tools you use are coming from strangers. Your security, you rely on police officers, on soldiers that you never met in your life. They are not your cousins. They are not your next door neighbors. And still they protect your life. So yes, if you now go to the global level, okay, we still don't know how to trust the Chinese and the Israelis still don't know how to trust the Iranians and vice versa.
Starting point is 01:28:00 But it's not like we are stuck where we were in the Stone Age. We've made immense progress in building human trust, and we are rushing to throw it all away. And it's exactly that trust that has been degraded in recent times. And I think without that, we stand very little chance as a democratic republic of surviving and solving these kinds of problems. Absolutely. If you ask, in brief, what is the key to building trust between millions of strangers? The key is institutions. Because you can't build a personal, intimate relationship with millions of people. So it's only institutions, whether it's courts or police forces or newspapers or universities or healthcare systems, that build trust between people. And unfortunately, we now see this,
Starting point is 01:29:09 again, another epidemic of distrust in institutions on both the right and the left. It is fueled by a very cynical worldview, which basically says that the only reality is power, and humans only want power, and all human interactions are power struggles. So whenever somebody tells you something, you need to ask whose privileges are being served, whose interests are being advanced. And any institution is just an elite conspiracy to take power from us.
Starting point is 01:29:40 So journalists are not really interested in knowing the truth about anything. They just want power. And the same for the scientists and the same for the judges. And if this goes on, then all trust in institutions collapses and then society collapses. And the only thing that can still function in that situation is a dictatorship. Because dictatorships don't need trust. They are based on terror. So people who attack institutions, they often think,
Starting point is 01:30:07 oh, we are liberating the people from these authoritarian institutions. They are actually paving the way for a dictatorship. And the thing is that this view is not just very cynical. It's also wrong. Humans are not these power crazy demons. All of us want power to some extent, that's true, but that's not the whole truth about us. Humans are really interested in knowing the truth
Starting point is 01:30:32 about ourselves, about our lives, about the world, on a very deep level, because you can never be happy if you don't know the truth about your life. Because you will not know what the sources of misery are. Again, if you don't know the truth, you waste all your life trying to solve the wrong problems. And this is true also of journalists and judges and scientists. Yes, there is corruption in every institution. This is why we need a lot of institutions, to keep one another in check. But if you destroy all trust in institutions, what you get is either anarchy or a dictatorship. And again, it's a good exercise every now and then to stop and think about how
Starting point is 01:31:20 every day we are protected by all kinds of institutions. Like when people talk with me about the deep state, you know, this conspiracy about the deep state, I immediately think about the sewage system. The sewage system is the deep state. It's a deep system of tunnels and pipes and pumps, which the state built under our houses and streets and neighborhoods, and which saves our lives every day, because it keeps our sewage separate from our drinking water. You know, you go to the toilet, you do your thing, it goes down into the deep state, which keeps it separate from the drinking water. If I can tell one historical anecdote, where did it come from?
Starting point is 01:32:08 So, you know, after the agricultural revolution, you have big cities, they are paradise for germs, hotbeds for epidemics. This continues really until the 19th century. London in the 19th century was the biggest city in the world and one of the dirtiest and most polluted, and a hotbed for epidemics.
Starting point is 01:32:25 And in the middle of the 19th century, there is a cholera epidemic and people in London are dying from cholera. And then you have this bureaucrat, medical bureaucrat, John Snow, not the guy from Game of Thrones, a real John Snow, who did not fight dragons and zombies, but actually did save millions of lives. Because he went around London with lists, and he interviewed all the people who got sick or died. If somebody died from cholera, he would interview their family. Tell me, where did this person get their drinking water from? And he made these long lists of hundreds and thousands of people. And by analyzing these lists, he pinpointed a certain well on Broad Street in Soho in London, where everybody, almost everybody who got sick with cholera, they had a sip of water from that well at a certain stage. And he convinces the municipality
Starting point is 01:33:20 to disable the pump of the well and the epidemic stops. And then they investigate, they discover that the well was dug about a meter away from a cesspit, and water, sewage water from the cesspit, got into the drinking water. And today, if you want to dig a well or a cesspit in London or in Los Angeles, you have to fill out so many forms and get all these bureaucratic permits, and it saves our lives. And how does that relate to this idea of the deep state? I'm trying to tether those two notions together. Again, the people who believe the conspiracy theories about the deep state, they say that all these state bureaucracies, they are elite conspiracies against the common people trying to take over power, trying to destroy us.
Starting point is 01:34:07 And in most cases, no, the people in these, you know, to manage a sewage system, you need plumbers. You also need bureaucrats. Again, you need to apply for a license to dig a well. And it is managed by all these kinds of state bureaucrats. And it's a very good thing because, again, there is corruption in these places sometimes. This is why we also keep courts. You can go to court. This is why we keep newspapers, so they can expose corruption in the cities, in the municipal sewage departments.
Starting point is 01:34:46 But most of the time, most of these people are honest people who are working very hard every day to keep our sewage separate from our drinking water and to keep us alive. And by extrapolation, there are all of these bureaucracies that are working in our interest in invisible ways that we take for granted. Exactly. Basically, right. You've often said clarity is power. And I think your superpower is your ability to kind of stand at 10,000 feet and look down on humanity and the planet and identify what's most important in these macro trends that
Starting point is 01:35:16 help us make sense of what's happening now. And I'd like to kind of end this with some thoughts on how you cultivate that clarity through meditation and your, you know, very kind of like profound practice of mindfulness and information deprivation, I should say, right? Information fast. Yeah. Starting maybe with the idea of an information fast. So I think this is important today, for every person to go on an information diet. That this idea that more information
Starting point is 01:35:51 is always good for us is like thinking that more food is always good for us. It's not true. And the same way that the world is full of junk food that we'd better avoid, the world is also full of junk information
Starting point is 01:36:02 that we had better avoid. Information which is artificially filled with greed and hate and fear. Information is the food of the mind. And we should be as mindful of what we put into our minds as of what we put into our mouths. But it's not just about limiting consumption. It's also about digesting. It's also about detoxifying. Like we go throughout our life and we take in a lot of junk, whether we like it or not, that fills our mind.
Starting point is 01:36:36 And I meditate two hours every day, so I can tell you there is a lot of junk in there. A lot of hate and fear and greed that I picked up over the years. And it's important to take time to simply digest the information and to also detoxify, to kind of let go of all this hatred and anger and fear and greed, which is in our minds. So I began when I was doing my PhD in Oxford. A friend recommended that I go on a meditation retreat, a Vipassana meditation retreat. And for a year, he kind of nagged me to go on a retreat. And I said, no, this is kind of mystical mumbo-jumbo. I don't want to... And eventually I went. And it was amazing because it was the most remote thing from mysticism that I could imagine. Because it was a 10-day retreat, and on the very first evening of the retreat, the teacher, S.N. Goenka, the only instruction he gave, he didn't tell me to kind of visualize some goddess or do this, nothing.
Starting point is 01:37:15 And for a year, he kind of nagged me to go on a meditation. And I said, no, this is kind of mystical mumbo-jumbo. I don't want to... And eventually I went. And it was amazing because it was the most remote thing for mysticism that I could imagine. Because it was a 10 days retreat, and on the very first evening of the retreat, the teacher, Esen Goenka, the only instruction he gave, he didn't tell me to kind of visualize some goddess or do this, nothing.
Starting point is 01:37:41 He just said, what is really happening right now? Bring your attention to your nostrils, to your nose, and just feel whether the breath is going in or whether the breath is going out. That's the only exercise. Like a pure observation of reality. What amazed me was my inability to do it. Like I would bring my attention to the nose
Starting point is 01:38:06 and try to feel, is it going in? Is it going out? And after about five seconds, some thoughts, some memories, some fantasy would arise in the mind and would just hijack my attention. And for the next two or three minutes, I would be rolling in this fantasy or memory until I realized, hey, I actually need to observe my breath. And I would come back to the breath, again, five seconds, maybe 10 seconds, I will be able to, oh, now it's coming in, it's coming in, oh, now it's going out, it's going out. And again, some memory would come and hijack me. And I realized first that I know almost nothing about my mind. I have no control of my mind. And my mind is just like this factory
Starting point is 01:38:47 that constantly produces fantasies and illusions and delusions that come between me and reality. Like if I can't observe the breath going in and out of my nostrils because some fantasy comes up, what hope do I have of understanding AI or understanding the conflict in the Middle East without some mind-made illusion or fantasy coming between me and reality? And for the last 24 years, I have this daily exercise
Starting point is 01:39:19 of devoting two hours every day to just what is really happening right now. I sit with closed eyes and just try and focus, let go of all the mind-made stories and feel what is happening to the breath, what is happening to my body, the reality of the present moment. I also go for a long meditation retreat, usually every year, of between 30 days and 60 days of meditation. Because again, one of the things you realize is there is so much noise in the mind that just to calm it down to the level that you can really start meditating seriously takes three or four days of continuous meditation. Just so much noise. So long retreats, they enable this really
Starting point is 01:40:07 deep observation of reality, which is otherwise impossible. Most of life we spend detached from reality. Two hours a day. That's a commitment. Even in the midst of all the book promotion craziness, you're able to find it. Yes, before I came here. I usually do one in the morning, one in the afternoon or evening. What a beautiful thing. And obviously, your ability to think clearly and write so articulately about these ideas is very much a product of this practice. Absolutely. I mean, without the practice, I would not be able to write such books and I would not be able to deal with all the publicity and all the interviews and, you know,
Starting point is 01:40:51 this roller coaster of positive and negative feedback from the world all the time. I would say one important thing. This is not necessarily for everybody. Because I meditate and I have meditator friends and so forth, I mean, different things work for different people. There are many people to whom I wouldn't recommend meditating two hours a day or going on a 10-day meditation retreat, because they are different. Their bodies, their minds are different. For them, perhaps going on a 10-day hike in the mountains would be better. For them, perhaps devoting two hours a day to music, to, say, playing or creating, or going to psychotherapy, would have better results.
Starting point is 01:41:33 Humans are really different in many ways from one another. There is no one size fits all. So if you never tried meditation, absolutely try it out and give it a real chance. It's not like you go for a few hours and it doesn't work, okay, I give it up. Give it a real chance.
Starting point is 01:41:50 But keep in mind that, again, different minds are different. So find out what really works for you. And whatever it is, that's the important part: invest in it. I have to release you back to your life, but maybe we can end this with just a concise thought about what it is that you want people to take away from this book.
Starting point is 01:42:12 Like, what is most vital and crucial for people to understand about what you're trying to communicate? That information isn't truth. Truth is a costly, rare, and precious thing. It is the foundation of knowledge and wisdom and of benign, beneficial societies. You can build terrible societies without the truth. But if you want to build a good society and you want to build a good personal life, you must have a strong basis in the truth. And it's difficult, again, because most information is not the truth. So invest in it. It's worthwhile to have a practice, whatever it is, that gets you connected with reality, that gets you connected with the truth. Thank you for coming here today. I really appreciate you taking the time to share your wisdom and experience. I think Nexus, your latest book, is, as I said at the outset, a crucial, vital book that everybody should read. We're entering into a very interesting time, and we are well advised to be as best prepared as we possibly can. And I appreciate the work that you do.
Starting point is 01:43:25 And thank you again, Yuval. Thank you. I only grazed the surface of the outline that I created. So hopefully you can come back, because I've got a million more questions. I could have talked to you for hours. Next time I'm in LA, I'll be happy to. Thanks, man.
Starting point is 01:43:38 Appreciate it. Cheers. Peace. Today's episode is sponsored by Whoop. Right now, Whoop is offering all of my listeners a free month to try it out. Just go to join.whoop.com slash roll to get started. That's join.whoop.com slash roll. Let's finish Sober October strong. That's it for today. Thank you for listening. I truly hope you enjoyed the conversation. To learn more about today's guest, including links and resources related to everything
Starting point is 01:44:30 discussed today, visit the episode page at richroll.com, where you can find the entire podcast archive, my books, Finding Ultra, Voicing Change in the Plant Power Way, as well as the Plant Power Meal Planner at meals.richroll.com. If you'd like to support the podcast, the easiest and most impactful thing you can do is to subscribe to the show on Apple Podcasts, on Spotify, and on YouTube, and leave a review and or comment. This show just wouldn't be possible without the help of our amazing sponsors who keep this podcast running wild and free. To check out all their amazing offers, head to richroll.com slash sponsors.
Starting point is 01:45:13 And sharing the show or your favorite episode with friends or on social media is, of course, awesome and very helpful. And for podcast updates, special offers on books, the meal planner, and other subjects, please subscribe to our newsletter, which you can find on the footer of any page at richroll.com. Today's show was produced and engineered by Jason Camiolo. The video edition of the podcast was created by Blake Curtis, with assistance by our creative director, Dan Drake. Portraits by Davey Greenberg. Graphic and social media assets assets courtesy of Daniel Solis.
Starting point is 01:45:50 And thank you, Georgia Whaley, for copywriting and website management. And of course, our theme music was created by Tyler Pyatt, Trapper Pyatt, and Harry Mathis. Appreciate the love. Love the support. See you back here soon. Peace. Plants. Namaste.
