The Rich Roll Podcast - Yuval Noah Harari On Why Clarity is Power

Episode Date: September 17, 2018

What is the relationship between history and biology? What is the essential difference between Homo sapiens and other animals? Is there justice in history? Does history have a direction? Did people become happier as history unfolded? What ethical questions do science and technology raise in the 21st century? These are the queries that compel Yuval Noah Harari – a man unafraid to tackle the biggest questions of our time. For those unfamiliar, Yuval is a renowned historian who received his PhD from the University of Oxford in 2002 and is currently a lecturer at the Department of History at the Hebrew University of Jerusalem. But Yuval is best known as the author of three groundbreaking, massive bestsellers. Sapiens: A Brief History of Humankind is a narrative of humanity’s creation and evolution —a #1 international hit that explores the ways in which biology and history have defined us and enhanced our understanding of what it means to be “human.” A worldwide sensation recommended by Barack Obama, Bill Gates and Mark Zuckerberg, Sapiens has sold over 15 million copies, been translated into nearly 50 languages, was listed on the Sunday Times bestseller list for over six months in paperback, and was a New York Times top 10 bestseller. Whereas Sapiens peered into our past, Homo Deus: A Brief History of Tomorrow turns Yuval's perspicuity to his estimation of our species’ future — specifically our quest to upgrade humans into gods. Within two years of publication, the book has sold in excess of four million copies and been translated into nearly 50 languages. Yuval's latest work is 21 Lessons for the 21st Century, a probing and visionary investigation into today’s most urgent issues as we move into the uncharted territory of the future. Here he stops to take the pulse of our current global climate, focusing on the biggest questions of the present moment: What is really happening right now? What are today’s greatest challenges and choices?
And what should we pay attention to? I can't adequately express the profound extent to which Yuval's work has impacted my perspective on humanity's past. The bizarre future that will undoubtedly reshape our species. And the unprecedented predicaments we currently face — acute problems that if not adequately solved will herald the end of humanity as we currently understand it. Yuval’s work is defined by his ability to see things clearly – with a distance and objectivity that provides a welcome and much needed expanse to explore big ideas. It’s a clarity he credits to meditation, a ritual he diligently practices two hours daily with an annual 60-day silent retreat. Today I sit down with one of the world's great public intellectuals to explore these urgent questions — and what might befall humanity should we fail to craft solutions — all through the clarity of Yuval’s finely ground lens. We discuss the problem of disinformation and distraction. How artificial intelligence is rapidly reshaping our world. Enjoy! Rich

Transcript
Starting point is 00:00:00 It's common to say that information is power and knowledge is power. And this was true for much of history, when information was very scarce. And censorship worked by withholding information, by blocking the flow of information. But we now live in a very different age, when we are flooded by enormous amounts of information. We have far too much of it, we don't know how to make sense of it. And censorship now actually works by distracting people with too much information, with irrelevant information, with disinformation.
Starting point is 00:00:36 And in this age, clarity is more important than ever before because we need to know what to focus on. Attention becomes maybe the most scarce resource of all. That's Yuval Noah Harari, and this is The Rich Roll Podcast. The Rich Roll Podcast. What is going on, everybody? I am Rich Roll. This is my podcast. Welcome aboard.
Starting point is 00:01:13 Thanks for listening. Thank you for subscribing. Thank you for sharing it with your friends. I do see all the social media posts, and I want you to know that I appreciate it. Doing this show is just such an unbelievable honor. It is one of my greatest joys. And it simply would not be possible without all of you guys. That is the truth. So again, thank you. And speaking of honors, I'm not even going to pretend to be low-key about today's guest. This one is definitely a big deal for me, and I think it's fair to say sets a new high-water mark for the show, because today's guest is none
Starting point is 00:01:53 other than Yuval Noah Harari. For the very few unfamiliar, Yuval is a renowned historian. He is a prodigious intellect and one of the truly great thinkers of our time. Yuval received his PhD from the University of Oxford in 2002. He's currently a history professor at the biggest questions of our time. Things like, what is the relationship between history and biology? What is the essential difference between Homo sapiens and other animals? Is there justice in history? Does history have a direction? Did people become happier as history unfolded? And what ethical questions do science and technology raise in the 21st century? Yuval is a guy that I've wanted to meet ever since I read his first book, Sapiens, which is a groundbreaking and utterly fascinating narrative of humanity's creation, of humanity's evolution. This book had a profound impact on me, and all of you should read it immediately if you haven't already. Yuval is also the author of Homo Deus, which is an
Starting point is 00:03:13 equally mind-blowing book that tunes his perspicuity on the estimation of our species' future and our quest to upgrade humans into gods. And Yuval's got a new book out this week. It's called 21 Lessons for the 21st Century. And in the way that Sapiens looks at our past and Homo Deus kind of peers into our future, this book roots us in the present. It's sort of a wrestling match with how to best understand today's most pressing issues. Today's conversation is about this present moment. It's about what Yuval sees as the biggest problems we face and what he imagines we will soon face should we fail to soon craft solutions.
Starting point is 00:04:03 We're brought to you today by recovery.com. I've been in recovery for a long time. It's not hyperbolic to say that I owe everything good in my life to sobriety. And it all began with treatment and experience that I had that quite literally saved my life. And in the many years since, I've in turn helped many suffering addicts and their loved ones find treatment. And with that, I know all too well just how confusing and how overwhelming and how challenging it can be to find the right place and the right level of care. Especially because, unfortunately, not all treatment resources adhere to ethical practices. It's a real problem.
Starting point is 00:04:41 It's a real problem. A problem I'm now happy and proud to share has been solved by the people at recovery.com who created an online support portal designed to guide, to support, and empower you to find the ideal level of care tailored to your personal needs. They've partnered with the best global behavioral health providers to cover the full spectrum of behavioral health disorders, including substance use disorders, depression, anxiety, eating disorders, gambling addictions, and more. Navigating their site is simple. Search by insurance coverage, location, treatment type, you name it. Plus, you can read reviews from former patients to help you decide. Whether you're a busy exec, a parent of a struggling teen, or battling addiction yourself, I feel you. I empathize with you.
Starting point is 00:05:32 I really do. And they have treatment options for you. Life in recovery is wonderful, and recovery.com is your partner in starting that journey. is your partner in starting that journey. When you or a loved one need help, go to recovery.com and take the first step towards recovery. To find the best treatment option for you or a loved one, again, go to recovery.com.
Starting point is 00:06:00 Okay, we did it. Thanks for bearing with me. You've all, you've all know Harari. So super excited about this one. It took quite a bit of jockeying to make this happen. The only way I could make it work in the midst of his insanely busy press tour for the new book. This is a guy who lives in Israel and really doesn't leave unless he has to, unless he has a book that he's promoting. So the only way I could make this happen was to fly to New York City and fit into his schedule.
Starting point is 00:06:27 So that's what I did, and really glad I did it. The best way, I think, to introduce the subject matter of today's conversation is to just read the first sentence of the new book, which goes like this. In a world deluged by irreverent information, clarity is power. Clarity is power. It's a powerful statement. I think it sets the theme for not just this new book, but also all of the work that he does, everything that he speaks about. Yuval's work is really defined by his ability to see things clearly with a distance and a rare objectivity that I think provides room for him to explore these big ideas in really compelling ways. It's a clarity he credits to meditation,
Starting point is 00:07:17 something he practices two hours every single day with an annual 60-day silent retreat. And today, we explore all of this. We explore the urgent questions we face through this clarity of Yuval's finely ground lens. This one's a little bit shorter than my usual conversations. I only had a tight hour with him, so I couldn't probe quite as deeply as I would have preferred, but this is nonetheless packed with plenty of gems to ponder. We discuss the problem of disinformation and distraction, the implications presented by advances in biotech and infotech, how big data hearkens the end of humanism and the potential of AI to do many things,
Starting point is 00:08:00 not the least of which is produce a massive irrelevant class of people. We talk about the problems with education and the growing importance of developing emotional and professional flexibility and resilience. It was truly an honor to spend an hour with one of the great, great minds of the 21st century. I think we had some fun. I think you've all enjoyed it as well. I think we had some fun. I think Yuval enjoyed it as well. So here we go. Without further ado, please enjoy my conversation with Yuval Noah Harari.
Starting point is 00:08:34 Yuval, pleasure to meet you. It's a pleasure to be here. Thank you for carving out some time to talk to me. You're in the midst of what I can only imagine is a hurricane of press at the moment. Today's the day the book comes out in the United States. So congratulations. Thank you. It's an amazing work,
Starting point is 00:08:53 and I've been following your trajectory for a while, and the impact that you're having on culture is really quite something. It's profound. And so thank you for the work that you do before we even get into it. I appreciate it. In looking to introduce you to the audience, you're now in your third book.
Starting point is 00:09:10 The first book, Sapiens, casts a glance backwards on the history of humanity. Homo Deus casts that glance forward at what we can anticipate in the future years to come. And this newest book roots us in the present. We're in very interesting times right now, confusing times. And you're trying to make sense of what is going on and trying to cut a path forward in the interest of humanity. And I think, you know, to kind of launch into this, the first sentence in the book really sets the tone and kind of encapsulates
Starting point is 00:09:48 everything that you're about and the work that you do, which is clarity is power. So can you explain that concept for me? Yes. It's common to say that information is power and knowledge is power. And this was true for much of history when information was very scarce and censorship worked by withholding information, by blocking the flow of information. But we now live in a very different age when we are flooded by enormous amounts of information. We have far too much of it.
Starting point is 00:10:20 We don't know how to make sense of it. And censorship now actually works by distracting people with too much information, with irrelevant information, with disinformation. And in this age, clarity is more important than ever before, because we need to know what to focus on. Attention becomes maybe the most scarce resource of all. And where to direct your attention and how to keep your attention on the important things, this is extremely important. And one of the things that I guess differentiate the powerful people today from the less powerful is knowing what to do with their attention.
Starting point is 00:11:04 Mm-hmm. today from the less powerful is knowing what to do with their attention. Mm-hmm. Yeah, there's never been greater demands for our attention. Things like fake news, although ascending and we're having this fake news moment, are certainly nothing new. Misinformation and disinformation and propaganda are as old as humanity. But this watershed moment that we're having where we're just inundated with attention by virtue of devices that are specifically designed to addict us and distract us, it's become increasingly more and more difficult to discern truth from fiction, truth from fiction, reality from obfuscation, and to just kind of navigate our lives with that sense of clarity.
Starting point is 00:11:51 Yeah, and I would agree that fake news and disinformation are definitely not something new. We had them from the very beginning of history, and in many ways it was much worse in the past. With all the talk about, you know, Facebook and Twitter spreading rumors and fake news, just think about yourself in some medieval village a thousand years ago, maybe in England somewhere. And somebody comes along and tells you, hey, do you know this old lady who lives at the edge of the village, I just saw her flying on a broomstick. And, you know, within an hour, you would have a raging mob with, you know, pitchforks and
Starting point is 00:12:34 torches ready to burn this old lady to death. So fake news is not a new problem created by Facebook or Twitter or Putin or Trump or anything like that. What is new is that now we are surrounded by devices which were designed to hack our brains in a way which was never possible before because until today, nobody really understood the human brain. In the Middle Ages, the understanding of how the human body and how the human brain functioned and how human attention functions was extremely rudimentary. So yes, people knew a few tricks about how to grab attention, but it was nothing like what we see today simply because science has progressed since the Middle Ages. And I think maybe the most important thing for people to realize about living in the 21st century as against the Middle Ages or the Stone Age is that we are now hackable animals. People don't want to hear that. I know.
Starting point is 00:13:49 We like to think that we're sentient and that we have agency, you know, beyond the capabilities that you're sort of writing about. Yes, we are sentient. We do have agency. But something is changing. Until today, basically, nobody could really hack us.
Starting point is 00:14:08 We were just too complicated and science and technology were too primitive. So if you went around believing that, hey, I'm a free agent, nobody can really look into my brain, nobody can really understand my mind, nobody can manipulate me and predict what I'm going to do next. This was true.
Starting point is 00:14:27 But this is no longer true. Humans are extremely complicated animals, but they are not infinitely complicated. And in order for us to cross this watershed, we don't really need algorithms that know us perfectly. This is impossible. Nothing is perfect in the world. You can't build a system that predicts everything or that understands you perfectly. But in order to have a big revolution in politics, in culture, you don't need a perfect system. The system just needs to be better than the average human.
Starting point is 00:15:07 And this is not so very difficult because, you know, humans often make terrible mistakes in the decisions and the most important decisions of their lives. Humans often know surprisingly little about themselves, about their desires, about their minds. little about themselves, about their desires, about their minds. So if this is the yardstick to build an algorithm that knows you better than you know yourself, this is not an impossible mission. And we are very close to the point when somebody like Amazon or like the Chinese government are going to have these kinds of algorithms, these kinds of systems. Yeah, what's interesting about it is that it's not happening against our will. We're voluntarily
Starting point is 00:15:51 signing up for this. And I think most people are proceeding on this assumption that these tools are making our lives better. What's wrong with Amazon predicting the books that I want to read? What's wrong with me being delivered that kind of advertising of the books that I want to read, what's wrong with me being delivered that kind of advertising of the products that I already want to buy. It seems, if not benevolent, on some level, not malevolent. Yeah, I mean, first of all, this is part of the design.
Starting point is 00:16:21 I mean, this is not the secret police from George Orwell in 1984. This is the secret police from Brave New World. I mean, it works by understanding you and by appealing to your own cravings, your own emotions. So yes, I mean, the system works by making you feel that they are on your side. Otherwise, it wouldn't be this kind of system. It will be the old style, Gestapo, KGB. But equally, and even more importantly, in many cases, it is benevolent.
Starting point is 00:16:57 I mean, depicting it as some kind of terrible conspiracy misses the point. If everything, if all these developments were just a terrible conspiracy to abuse us, then I don't think it would have worked very well. The point is that in many cases, these systems do understand us better and can improve our lives in many ways. And this is the temptation. I mean, in some cases, it's obvious. If you think about a healthcare system that monitors what's happening inside your body,
Starting point is 00:17:32 and yes, it knows what's happening inside your body better than your conscious mind. If you have, I don't know, cancer spreading in your body, very often people become aware of it only when it's already a big problem. And you start feeling pain and you don't know what it is, so you go to the doctor and you have this exam and this test and then they find, oh, you have cancer spreading in your liver or something. And it's now going to be very difficult, very painful, very expensive to deal with it.
Starting point is 00:18:06 And the alternative is to have the system that constantly monitors what's happening in your body and is able to discover that, hey, cancer is starting to spread in your liver when it's still in the initial stages. You don't feel anything. But the biometric sensors, they are already picking up the first telltale signs that cancer is beginning, when it's still very easy, very cheap, pain-free to get rid of it. So this is wonderful. Why would we want to block this kind of development? And it's also the case that in many decisions in life,
Starting point is 00:18:48 from which movie to watch to what to study in university, we could use help. We often make very bad choices. That would have helped me out a lot when I was younger. It would save me a decade. So the whole problem is that there is a huge temptation there. It's not just a kind of malevolent conspiracy. There are downsides, but how to resist the temptations or how to enjoy the benefits without suffering the harmful consequences,
Starting point is 00:19:27 this is the big challenge. And just saying, oh, this is all a terrible development, we should just unplug ourselves completely from all the devices, this is not going to work. And this is probably a bad idea because we would be missing so many positive developments. Right. I'm wearing a ring right now that is actually a biometric device that measures my heart rate and my heart rate variability and my various sleep states. And I'm looking at this as a tool, as a benefit, sort of blissfully divorcing myself from the fact that I am a product of big data.
Starting point is 00:20:06 And right now we're in this moment of convergence of infotech and biotech that's going to reshape how we live and what our world is going to look like. So play this dystopia out for me. Oh, well, if you want to go in the dystopian direction, then the question is, who knows, who gets the information that this biometric device is collecting on you? And what are they going to do with it? being harvested by some big corporation or by some government and you have no idea what they are doing, then this could lead in all kinds of dystopian directions.
Starting point is 00:20:53 One dystopian scenario, just imagine North Korea having these rings mass produced and forcing every citizen of North Korea to go around 24 hours with this ring and to give all the information to some central database. And you walk into a room and you look at a picture of Kim Jong-un on the wall and the ring picks up the signs of anger or the signs of dissatisfaction. Next stop, the gulag. Yeah, next stop is the gulag. It's almost like minority report,
Starting point is 00:21:26 like a precognition model of trying to figure out who's going to be an enemy of the state before an act is even perpetrated. Not only before the act is perpetrated, before you even think about doing something. I mean, you can pick, you can have, if you follow all the, I don't know how many citizens there are in North Korea, like 20 million or 30 million.
Starting point is 00:21:47 If you follow enough of them for enough time, you can build the profile or that you can pick up the dissidents when they are in kindergarten. You don't need to wait until they are 30 years old and working in the Department of Transport in Pyongyang and having these schemes to do. No, you can pick them up in kindergarten
Starting point is 00:22:12 and re-educate them or whatever. So if you want to go dystopian, many just think what Stalin would have done if he had these kinds of biometric technologies? Yeah, we tend to think of this age of technology as being in its adolescence, but I think we're really in its infancy. We're still in the birth canal here, and we have no idea how to telescope forward into what this is going to look like. And you, with your superpowers of clarity, have given us a glimpse of what perhaps is to come. And the argument that you make is that big data harkens the end of humanism.
Starting point is 00:22:58 So, can you explain what humanism is and what you mean by that? Well, humanism is the belief in the supremacy of humanity and in particular human feelings. You can kind of encapsulate humanism in the idea that human feelings are the ultimate authority in the universe or at least on earth. And in politics, this means that the voter knows best. The highest authority in politics is the feelings of the voters.
Starting point is 00:23:28 And it should be very clear that we are talking about feelings, not about rationality. When you go, you have a referendum or you have an election, you are appealing to the feelings of the voters, not to their rationality. If elections were about rationality, there was absolutely no reason to give equal voting rights to everybody. We know perfectly well that different people have different rational capabilities and different knowledge, expertise, and so forth. Again, if you have cancer and you go to the hospital, you go to the expert.
Starting point is 00:24:06 You don't have a referendum between all the people you meet on the street. You want the person who has the best capabilities. But in humanist politics, no. You appeal to the feelings of individuals. This is the highest authority. In the field of economics, it manifests itself in the idea that the customer is always right. The highest authority in a humanist economy
Starting point is 00:24:30 is the feelings of the customers. If the customers want something, and it doesn't matter why, because it's a crazy whim, it's a fad, it's a fashion, who cares? This is the highest authority in the economy. If everybody wants it, or if
Starting point is 00:24:45 enough people want it, they vote with their credit cards. There is nobody who can say, well, but this is bad for you. Who are you? I mean, my feelings are the highest authority. And it's the same in art. In art, humanism, the humanist slogan in art is beauty is in the eyes of the beholder. It's just like the custom is always right. There is no authority higher than the beholder. And if you like a certain genre or a certain piece of art, your feelings are the ultimate judge. There is nobody out there who can tell you, yes, you like it,
Starting point is 00:25:22 but it's actually bad. It's actually bad art. And in ethics, the idea is just follow your heart. Just do what feels good. If it feels good, do it. If it doesn't feel good, don't do it. Of course, sometimes, you know, your good feelings collide with somebody else's bad feelings. I feel very good about stealing your car. You feel very bad about it. So we now have an ethical debate. And when you look today in the world, you see that the very complicated ethical debates around things like identity politics,
Starting point is 00:25:55 which actually revolve around human feelings. This is the highest authority. You don't appeal to God or to the Bible or to the Pope. You appeal to, I feel. And all this comes also down to education. The highest ideal of education, according to humanism, is to teach people to think for themselves. Again, they are the highest authority. So this is humanism. forces, political forces, cultural forces all bend to the will and the whims of the collective
Starting point is 00:26:26 emotional landscape of the population. Exactly. And the only thing that can counter the feelings of a human being is the feelings of another human being. That's the game. And all this is now threatened by what happens when a big data algorithm can hack the human system and can not only predict people's feelings, so you don't need to ask them, you know better than they what they feel. It's very difficult to know what you feel. I mean, we have entire professions like therapists who are just trying to get us in touch with our emotions. It's very difficult to know what I actually want. And what happens if you have a system that knows what I want better than me? So authority gradually shifts to that system. And what happens if the system can, if you can, it can predict and understand my feelings so well,
Starting point is 00:27:25 it obviously can manipulate my feelings as well. The oracle becomes a sovereign. And then authority completely shifts away from human emotions. There is a famous quip about, I think, communism, that you have a communist, before the revolution, you have a communist agitator standing in front of a crowd and telling everybody, when the revolution comes, we will all eat strawberries and cream.
Starting point is 00:27:54 And then somebody says, but I don't like strawberries and cream. And then the agitator says, when the revolution comes, you will like strawberries and cream. Because you will like what the state tells you that you like. And you know, in 1917, the way that the state made you like something was very crude and not very pleasant. But in the future, the state can make you like strawberry and cream in very sophisticated and pleasant ways. So where does this lead us?
Starting point is 00:28:47 Specifically, you know, in terms of how this is going to change market forces and change the political landscape and change, you know, specifically how we, you know, navigate the world in terms of careers and making a living. I mean, you talk a lot about the irrelevance, the irrelevant class in terms of AI and irrelevance, the irrelevant class in terms of AI and reshaping the discussion not around the revolution of the robots that are coming to kill us, but in the sense that it's going to render a giant swath of the human population unemployable and irrelevant. Yeah, there are many, many themes that we can explore. There is the theme of irrelevance and uselessness that, again, coming back to the communist revolution, if in the 20th century the big struggle was against exploitation, they are trying to exploit you. In the 21st century, the big struggle might be against irrelevance. They just don't need you. And it's much, much more difficult to struggle
Starting point is 00:29:46 against irrelevance than against exploitation. So this is one thing. Another thing is that we just don't have any idea what human life is going to look like when more and more of the decisions are taken by these algorithms on our behalf. Because for thousands of years, almost all religious and political and artistic traditions depicted life as a drama of decision-making. Whether it's a Shakespeare play or whether it's a Jane Austen novel or whether it's a Hollywood comedy or whether it's a theology book, they all describe life as a kind of journey, that you are walking on the road and every few steps or every few miles you reach an intersection and you need to choose.
Starting point is 00:30:37 You have small decisions, what to choose for lunch. You have big decisions, whom to marry, whom to vote for, whether to start this career or that career, whether to go to war or to make peace. And the whole drama revolves around the decision, making the right decision. Now, how does life look like? How does art look like? How does theology look like? Whenever you reach the intersection, you just take out the smartphone and you say, okay, Google, what should I do? Just try, I don't know, King Lear, Macbeth, Hamlet. As somebody who doesn't believe in free will, how is this different qualitatively?
Starting point is 00:31:19 Oh, I mean, you know, there is a lot of confusion about free will. People definitely have a will. People definitely make decisions all the time. We make choices all the time. To say that I don't believe in free will and science doesn't even understand what free will means, it doesn't mean that people don't make decisions. We make decisions all the time, but these decisions don't reflect freedom. decisions all the time, but these decisions don't reflect freedom. They reflect millions of different factors that we are unaware of the vast majority of them. My main problem with free will, aside from the scientific fact that it just doesn't make any scientific sense, is that if you believe in free will, you are extremely uncurious about the way you make decisions,
Starting point is 00:32:07 about your own desires, about your own mind. You just tend to identify with whatever pops up in your mind. A desire pops up, and hey, that's me. I chose this. This is my free will. I'll do it. On some level, though, not to interrupt, but as a sort of countervailing perspective on this, presuming that every human being is the aggregation of their traumas and their decisions and their various experiences, packed into that is this idea that we're not making clear decisions. Our decisions are clouded by our various experiences that perhaps prevent us from
Starting point is 00:32:42 making the best decision in our best interest. So that smartphone might be able to help us transcend those limitations to make a better decision. This is why it's so tempting. I mean, if the smartphone always made terrible decisions, then very quickly we would throw it away. But the thing is that in more and more fields, we discover by empirical experience that it's a good idea to listen to Google or Netflix or Amazon. If you think about something like navigating your way
Starting point is 00:33:14 around New York, I'm now going on this book tour, so I'm going here and there all over the city. And most of the cab drivers, they just rely on Google now. And it's a good idea. I mean, you reach an intersection, your gut feeling tells you turn right.
Starting point is 00:33:33 Google says, no, no, no, turn left. It's better. You trust your gut feeling, you're stuck in traffic. You miss your appointment. Next time you listen to Google, you arrive on time. So you learn the experience. It's better to listen to them. They know what they're talking about.
Starting point is 00:33:46 And now it's with traffic. In 20 years, it will be decisions like whom to marry and what to study in university. And it will be tempting because they will give us very good recommendations. At the same time, we're moving into this era where the gestalt of automation is accelerating at a crazy pace. And there's a lot of conversations about jobs and lost jobs.
Starting point is 00:34:13 But the problem is much more dire than that. Can you elaborate a little bit more on what that landscape is going to look like? You mean for the job market? Yeah, in 2050, and how that impacts generations to come. Well, the simplistic scenario is that the robots are coming, they'll take all the jobs, and we'll have nothing to do. And for some people in some countries, this may be the case. There are many countries today that rely primarily on cheap manual labor for their economies, people working in sweatshops and textile factories, and these economies may completely crash. In other countries
Starting point is 00:34:54 like in the US, many jobs will disappear, but many new jobs will appear. If you bring back textile production from Honduras to the United States, because you now have 3D printers and robots that are far cheaper than the people in Honduras, then Honduras is in very big trouble. But in the U.S., you maybe have more jobs, not in producing textiles, nobody needs that, but, for example, in writing code. Because the main business of the textile industry is going to be data and code. You need a lot of data on your customers, what they want, what is the new fashion,
Starting point is 00:35:33 and about their bodies. You can now design a shirt specifically for your torso. You don't need to rely on mass production like in the Industrial Revolution. And you'll be able to print that in your home, or in New Jersey. You don't have to bring it from Honduras. But you do need coders,
Starting point is 00:35:55 you do need people who deal with data and so forth. So there are new jobs. But the thing is that these jobs too will gradually be automated and change very rapidly. The landscape in the job market ahead of us is going to be extremely hectic and bumpy. The automation revolution will not be a single big watershed event that lots of jobs disappear, many new jobs appear, you have a few years of turbulence, and then everything settles down to a new equilibrium, end of story. No.
Starting point is 00:36:29 Like every 10 years, I mean, AI is not even near its full potential. It's just in its infancy. We haven't seen anything yet. So every 10 years, you are likely to lose your job or your job is going to be completely transformed by the new wave of the latest machine learning wizardry. And if you want to stay in the game, you will have to basically reinvent yourself. And not just once, but repeatedly. You throw in extended lifespans, people live longer, and they retire at an older age maybe. So maybe you need to reinvent yourself five or six times during your lifetime. So not just the idea of a job for life, but a profession for life, this is going to be completely obsolete. Yeah, and it's going to require a level of skills
Starting point is 00:37:27 far more advanced than this idea of the coal miner then becoming a factory worker where you can easily translate some basic skill sets to re-educate people. It's going to get a lot more complex. Even now, when you look at millennials and Gen Z, they're very acclimated to this freelance economy. Very few of them look at a job as something they're going to do for the rest of their
Starting point is 00:37:56 life, but I think that's only going to accelerate. The lifespan of those careers is going to get shorter and shorter. The thing is that they are young now. When you're 25, when you're 30, reinventing yourself is difficult. It's never easy to reinvent yourself, but it's still feasible.
Starting point is 00:38:16 But when you get to the age of, say, 40, 50, 60, it becomes more and more difficult. And this is what awaits all of us, and certainly the younger generation. So the fact that they may enjoy to some extent the gig economy in their 20s,
Starting point is 00:38:39 this is not necessarily what you would like your life to be in your 50s. But maybe you don't have any option. Maybe if you want to stay in the game, you need to run faster and faster. And the greatest difficulty in this respect is likely to be psychological. How do you maintain the kind of mental flexibility that maybe at age 20 comes naturally to you, but for most people is not natural at all at age 50 or 60? Right.
Starting point is 00:39:11 So, resilience and emotional intelligence become paramount in a way that we haven't seen before, which calls into question our entire educational system. Yes, exactly. We're still stuck in this sort of Elizabethan Victorian model of how we're educating children that's very information-based and is just out of step with the times and what is going to be required. So how do we rethink how we're educating our youth? You know, everybody, almost everybody, I guess,
Starting point is 00:39:52 who deals with these issues knows that the educational system is bankrupt. It's not adapted to the realities of the 21st century. But we don't have an alternative model. We have a lot of experiments, but so far none of these experiments seems to be scalable. You can have a small experimental school
Starting point is 00:40:13 which does amazing things, but then you think, okay, now how do I scale this to millions of teachers and tens of millions of pupils? That's a big challenge. And I certainly don't have the answers. I would definitely agree that the things we need to invest in education
Starting point is 00:40:36 are emotional intelligence and mental flexibility and learning how to learn rather than cramming information. That's the easy part to say that. But how do you teach emotional intelligence? Yeah, I mean, if you were raising a child, what is the curriculum that you would create? Come on, you got to have the answers.
Starting point is 00:41:00 I don't know. I know, it's tough, right? That's, again... I can come up with a list of things we can do. And then you realize, okay, so my idea of, let's take a group of kids to the woods and live there for a week and build a house in a tree and things like that. Okay, how do I do it with government budgets and millions of kids and so forth? That's much more difficult than I thought at first. And that's the big problem. It's not coming up with a solution for a small number of kids in an affluent suburb.
Starting point is 00:41:53 It's coming up with a solution, which is scalable to the whole country and basically to the whole world. I mean, the worst problems, as we mentioned earlier, may not even be in the United States. They are far more likely to be in places like Honduras or like Brazil or like Bangladesh. Yeah, the places that are most prone to be automated first. Yeah, it's the places that are easiest to automate what they
Starting point is 00:42:19 do. And the profits from the automation will not go to Honduras. They will go to California. So if coal miners in Pennsylvania lose their jobs, you can envision a situation in which the government taxes the big high-tech companies in Silicon Valley in order to help the coal miners in Pennsylvania. But I don't see the government of the US taxing Google and Amazon in the US to help people in Honduras. Why do you think we're not talking about this enough? I mean, we talk a lot about jobs and job loss, but that conversation is very rooted in the
Starting point is 00:43:01 present moment. There's not a lot of deep thinking going into forecasting what the future might hold. And perhaps that's systemic. That's a fundamental flaw in our liberal democratic system that relies upon economic market forces to solve these problems for us and doesn't provide the bandwidth or the incentive to think long term about what the world may look like so that we can plan for it in the present. Well, it's always difficult to deal with future problems when you have more urgent business, maybe less important business, but more urgent business right now. And this is especially true in a system where you have the four-year election cycle.
Starting point is 00:43:49 That, okay, why should I now spend my time and my political capital on solving a problem which will hit us in 20 years? That won't get me re-elected in four years. And that's one very important ingredient of the problem. But on a more basic level, it's just, you know, everything is just very, very new. Five years ago, all this talk about AI still sounded like science fiction.
Starting point is 00:44:20 The attention we now have on AI, you know, it happens so fast. I remember that when I published my first book, Sapiens, it was 2014, just four years ago. The book doesn't mention AI at all, as far as I can remember. Nobody was talking about it. A few people in laboratories in Silicon Valley or in some universities, they realized something big is in the pipe. But in 2013, 2014, AI was still not
Starting point is 00:44:52 in the news. Now it's everywhere. So four years is not such a long time to kind of start a completely new political debate. I think humans are hardwired to kind of fundamentally view the world as somewhat static. Liberal democracy is here. It's always going to be here. It's hard for us to imagine a future that looks qualitatively different from the one that we live in now. And how do you square that? Like squaring that with the rapidity. I mean, we've never seen the rapidity of change that we're seeing now. And it's just continuing to accelerate. And it's bewildering. We're playing catch up in a way that if we're not careful is going to lead to dire circumstances for all of us.
Starting point is 00:45:54 And so I feel like we almost need a different system, a system of star chambers, of really smart people who are getting together to talk about these ideas to prophylactically come up with solutions that will keep us on a positive trajectory and not lead us astray. You know, this is allegedly what they are trying to do in China. That forget elections, forget liberal democracy, we need a very different kind of systems, a system that can look ahead decades and plan things without all this troublesome stuff of elections. And it has its advantages and it has some very big disadvantages as well.
Starting point is 00:46:42 And as much as I have a lot of faith in experts and in scientists, there are dangers in giving all the authority, or too much authority, to a small group of experts and thinking that, well, they'll figure out what to do. Especially as these experts are in the end still humans, with all the biases and all the flaws of human nature. And then the next temptation is to say, okay, so let's take it out of the hands of humans and give it to algorithms. But then you have entirely new problems. Part of them are still an inheritance from the human problems
Starting point is 00:47:28 because the humans designed the algorithms and very often they design the algorithms with their own biases built in and they don't even realize it. But there are also entirely new problems when the decision-making process becomes completely opaque to humans. And this, whether we like it or not,
Starting point is 00:47:49 whether it is the solution or a solution or whatever, this is going to happen more and more. As the world becomes more complicated, as the pace of change increases, most people will just not be able to understand what the hell is happening. And more and more power will be concentrated in non-human hands,
Starting point is 00:48:11 in the hands of these algorithms. And we see it happening, for example, right now with the global financial markets, that so much of the transactions there are being done by algorithms. And very often, even the best human experts can't explain what is happening, like with the flash crashes. And this is just likely to increase, even if you keep a human at the top, like the CEO is a human being, the president, we want a human president.
Starting point is 00:48:40 But as everybody who worked, at least for some time, in a big bureaucratic system knows, very often the person at the head, he or she, they are just figureheads. The real decisions are taken down the line by the people who prepare the different options. Like you come to the president with three options. This is what we can do with Iran. Or this is what we can do with the crisis in the market. And okay, so the president gets to pick between three options.
Starting point is 00:49:09 But who decided that these are the three options? Why only these three? What about the other five options, which were never brought to the table? In many, many cases, the real decision is which three options to bring to the president. Very often, two of the options are just so unthinkable that you know what the decision is going to be anyway. And what are the motivations behind the choices made to present those three options? Everybody has their own agenda. Exactly.
Starting point is 00:49:38 And now the question is, what happens when the three choices, which are presented to the president or the CEO, they were shaped by algorithms according to the way algorithms understand the world. And it's so complicated that we can't understand why the algorithm decided that these are the three options that we need to choose from. And this is going to be, we are going to encounter this problem in more and more areas in life. Like, yeah, it starts with the simple, stupid things. Like, why does Netflix give me these five options for the next movie? Why these five movies? Why not 50 others? But here, you know, the stakes are very low. So,
Starting point is 00:50:25 okay, so I watched a bad movie. But what happens when you have an incredibly sophisticated financial system that no human being can understand? And then the computer alerts the president that, look, there is now a crisis in the markets, and these are the three things you can do. And he doesn't even understand what they mean, and he certainly doesn't understand why the computer thinks there is a crisis, or why these are the three options. Well, you're a human, you'll never understand it, but trust us, we know. And it doesn't matter if you call it a democracy or a dictatorship or whatever. It's a kind of system that we have never encountered before.
Starting point is 00:51:15 I don't know what to make of all this. That's a human reaction. Yeah, it's like, what to even do with that? How do we even process that? How do we find a way forward? You can actually look at the present. I mean, how many people today, let's say in the US, can honestly say they understand how the financial system works?
Starting point is 00:51:40 There's probably a lot of people who will claim to understand it, whether they truly do or not. Even, let's be very generous, just the people who claim, I really understand the financial system. Out of how many people out there in the US, like 300 million, 350 million? How many would honestly say, yes, I understand how this system works? I would imagine very, very few.
Starting point is 00:52:04 I mean, maybe a million out of 350 million. That's still a very small percentage. And we are already living in a world which is so complicated that most people don't understand how something important like the financial system works. And one of the things that is happening in politics
Starting point is 00:52:26 is that people talk about the things they understand, not about the things that are important. And it happens in the political arena. I mean, I understand immigration. Okay, you have these foreigners. They want to come here. I don't like it. Okay.
Starting point is 00:52:41 I think one of the main attractions of talking about immigration is that you understand, or at least you think you understand what you're talking about. If we are going to debate the interest rates that the Federal Reserve sets, I don't understand how to talk about it. Well, and the people that would purport to understand it disagree amongst themselves. And fundamentally, all of this is based on, this is something you talk about all the time. Fundamentally, it's based on a story that we all agree is fact, but which is really myth.
Starting point is 00:53:15 Yes, but in the case of immigration, at least we know what the story is. In the case of the interest rates of the Federal Reserve, most people just don't understand the story at all. So you can't have, I mean, there is nothing, people like to disagree. They like to shout and argue, and I think this, and you're wrong.
Starting point is 00:53:35 But you can't have this kind of argument about the interest rates because you don't understand anything. So you talk about immigration. You know, it's like in businesses, like you have a meeting of the staff and you have two items on the agenda. One item is which pension fund, which pension program to have for the staff. And the other item is which coffee machine to buy. So you would have a five-minute debate
Starting point is 00:54:09 about the pension fund because nobody understands. Okay, let's just do whatever he says. I mean, we trust the chief financial officer that he knows what he's doing. And then you have a two-hour argument about the coffee machine. Because we understand. I mean, we can talk. The beauty of humans.
Starting point is 00:54:29 Shifting gears a little bit, I want to talk about global climate change and how you think about this problem, and where we're at in terms of our ability to reckon with one of the things you enumerate in your new book as the greatest challenges facing humankind. Well, with climate change, the two most important things to realize about it are, first, that it's happening. It's not some future scenario. It's not like nuclear war, which is an existential threat to humankind but just a future possibility.
Starting point is 00:55:07 It's not happening right now. Climate change is happening right now. So we need to do something. And the second thing we need to realize is that the only effective way to counter climate change is through global cooperation. It should be absolutely obvious that you cannot stop climate change on the national level. It's not a national problem. No country is ecologically independent. Even the most powerful countries like China or the US, they are not independent countries when it comes to the climate. And this is something which is very annoying and threatening,
Starting point is 00:55:47 especially for people from the nationalist right. Right, we're in this breakdown of global cooperation at the moment with the rise of nationalism, which is not helping this problem. Yeah, it's just taking us in the wrong direction. And I don't think it's therefore a coincidence that almost all the people who deny the reality of climate change, they come from the nationalist right. And at first thought, it seems like a very surprising coincidence.
Starting point is 00:56:17 Why is it that almost all the climate change deniers are right-wing nationalists? Why don't you have left-wing socialists who deny climate change? And historically speaking, it's even more surprising because historically caring about the environment was something which characterized the right and which characterized nationalists. They are the ones who care about forests
Starting point is 00:56:40 and going out to nature and connecting to the earth, whereas the left-wing socialists loved industry and big cities and the proletariat and things like that. And when you look at the first half of the 20th century, environmental causes were the kinds of things that the Nazis cared about. And suddenly this becomes a left-wing thing, and the conservatives don't care about conserving the environment. And my explanation for what happened is the realization that you just can't do much on the national level. So if your ideal is to be this kind of isolated fortress, we don't need global cooperation. We don't like globalization. We
Starting point is 00:57:25 don't like to cooperate with foreigners. So we don't like to admit that there are global problems that demand global cooperation and which just cannot be solved on a national basis. Yeah, it's distinct from the problem presented by nuclear proliferation, because on some level, nationalism still provides nation states with the ability to prevent that Armageddon event from happening because of mutual incentivization. Yeah. Because of mutual destruction that would occur. But that sensibility does not apply to addressing the climate change problem. Yeah, for many countries, if you're a small country, you're completely helpless.
Starting point is 00:58:12 You think about the extreme case: if you are a small island nation in the Pacific Ocean, like Kiribati, which is in danger of being swallowed by the ocean if climate change continues, there is nothing you can do to save yourself. I mean, you can reduce greenhouse gas emissions in Kiribati
Starting point is 00:58:34 to zero, you can even start absorbing CO2 from the atmosphere; if you don't get the Chinese and the Indians and the Americans and the Europeans to adopt better environmental policies, it won't help you. And even for a country like China, which now realizes the danger it faces from climate change, you cannot save Shanghai from the Pacific Ocean unless you cooperate with the Germans and the Americans and the Russians. And what makes it even more problematic is that some nations might calculate,
Starting point is 00:59:15 and perhaps even correctly to a certain degree, that they actually stand to benefit from climate change. This is particularly true of Russia, which could benefit quite a lot from climate change, from global warming. And it's also true of many of the countries... In the short term. At least in the short term, yes. But they have very few assets on the ocean. So if the oceans rise, it's not a big problem for Russia.
Starting point is 00:59:48 If Siberia warms up and becomes a breadbasket of the world, that's a plus. If the Arctic Ocean melts... Do you think those conversations are going on? I don't know, but could be. Another big plus: if the Arctic Ocean melts, then all the shipping from China to Europe, instead of going through Singapore and the long way through the Suez or the Cape of Good Hope, you can just reroute it through Kamchatka and Novaya Zemlya. And suddenly Kamchatka is the new Singapore. So that's a big plus. And also Russia, and also
Starting point is 01:00:27 Iran and Qatar and Saudi Arabia, their economies are based on fossil fuels. So if we suddenly have a big shift from burning fossil fuels to renewable energy, to wind, solar, nuclear, that's going to create an enormous economic shock for a country like Qatar or like Russia, whose economy depends on oil and gas. So for this reason too, it's not going to be so easy. Yeah, so what is the bridge forward towards global cooperation around this issue? Human beings as storytellers, what is the story that we can craft and all agree upon to begin to address this in a constructive way? Well, there are three things. First of all, to realize that for at least the vast majority of humankind, this is a terrible danger. To unite people, you need an enemy,
Starting point is 01:01:26 but the enemy doesn't have to be another human. The enemy can be a danger that threatens all of us and climate change definitely fits the bill. So you have an enemy, which is always good in a story. But it doesn't have a face. Yes, it doesn't. It's ephemeral. Yes, it's not as good an enemy as Hitler or as Voldemort,
Starting point is 01:01:48 but it's something. I mean, you can work with it. Secondly, besides an enemy, you need to have hope that we can do something about it and not only prevent the worst outcome, but we can actually benefit from it in many ways. Over the last few years, there was a growing realization that climate change is not just a danger,
Starting point is 01:02:12 that demands we become good puritans who consume less and drive less and spend less in order to save the planet. We can actually benefit a lot from developing new eco-friendly technologies, even economically. There is a lot of room for that. If there is one thing you can't do in today's world, it is to stop economic growth, because this is the number one value, whether we like it or not, the number one value of almost all countries. They can say whatever they want. They can call themselves communists or liberal democracies or Hindu or Muslim or Jewish or secular. Their number one
Starting point is 01:02:51 value is really economic growth. So if, in order to stop climate change, you need to stop economic growth, it won't happen. No, it has to be incentivized in a capitalistic way. It's in the best interest of all of these corporations to go green. Yeah, but not only incentivized in the sense that we have more regulations and we take externalities into account, but actually developing some types of new eco-friendly technologies is going to be really beneficial. And maybe to give one example that is close to my heart: clean meat.
Starting point is 01:03:29 One of the main polluters and one of the main causes of climate change and of other ecological problems around the world is the dairy and meat industry. And it also has the added problem of causing enormous suffering to billions of sentient beings. Now, we can develop, and we are developing, new technologies for producing meat and dairy simply from cells. If you want a steak, you don't need to raise an entire cow and then slaughter the cow. You just grow a steak. Now, again, 10 years ago, this sounded like science fiction, but this is now being done by quite a number of startups and companies all over the world. The initial price, I think, for a hamburger was $300,000 in 2013. And now I heard it's down to about $10. And it's still going down.
Starting point is 01:04:28 So in five or 10 years, you can have a cheaper hamburger, a much more ethical hamburger, and also a much healthier hamburger. Because if you don't raise a cow, if you just grow the hamburger from cells, you can determine the exact ingredients. And you don't need so many
Starting point is 01:04:49 antibiotics, and you don't need all the fat, and you can have a hamburger that is not just cheaper and more ethical, but much healthier. So we don't need to think about it as, okay, we have to do some difficult stuff in order to prevent climate change; there are actually a lot of potential benefits there. Yeah, the clean meat revolution is happening. It's unclear what the timeframe is. It might take a little bit longer than we may realize, but it's certainly coming in the same way that automated driving is coming. There might be a weird acclimation period of people trying to get used to this idea, but I think that they will. And I've explored this topic on the podcast. I had Paul Shapiro on, who wrote Clean Meat, who I know you know. You wrote the foreword to his book. And
Starting point is 01:05:34 I've had Bruce Friedrich on from the Good Food Institute. Do you know him? Amazing individual. I'm trying to get Uma Valetti on from Memphis Meats because I think it's fascinating what's happening there. And we have a dire crisis in the form of factory farming right now that's wreaking ecological havoc on the world. We need a more innovative, better way to feed our growing population, which brings me to the subject of veganism. You're vegan yourself. You've done a million interviews. People don't ask you that much about this, but I'm interested in what motivated you to adopt this lifestyle. Well, first I would say that I'm vegan-ish. Vegan-ish, okay. Yeah, I'm not very religious about it. I try to limit my involvement as far as possible with the meat and dairy industry,
Starting point is 01:06:30 but I don't see it as a kind of purity laws. I come from a Jewish background, so I know all about the OCDs that people can develop about ritualistic purity laws with food, so I try not to go there. The reason that brought me to veganism is purely ethical. Just a growing awareness of the enormous suffering we are inflicting on billions of sentient beings just to indulge our culinary fancies. I mean, at least in the 21st century, if you were an Inuit living a thousand years ago
Starting point is 01:07:13 and the only thing you have to eat around you is seals and fish, then okay, I won't try to convince you to become vegan. But if you live in New York in 2018, that defense doesn't hold. And so I think that just for the ethical reasons, this is what we need to do. Yeah. Our hour is almost up
Starting point is 01:07:39 and I want to be conscious of your time, but I want to end this conversation bringing it back to where we started, which is this idea of clarity as power. And your methodology for being as clear as possible is your devotion to your meditation practice, Vipassana meditation. You meditate two hours a day. So perhaps a good way to end this is to talk a little bit about the impact that that practice has had on you as a human being and how it informs the work that you do. I don't think I could have survived these last few years without the meditation. You know, all the publication of the book and all the attention and traveling around the world, without the peace of mind that the meditation brings, I could not have done it.
Starting point is 01:08:33 I couldn't have written the book in the first place without the meditation, because as you say, it brings a kind of clarity and focus, especially if you try to condense the entire history of the world into like 450 pages. Which is very ambitious. Yes, you need... It gives you, it gave you this incredible objectivity though, to see everything from 10,000 feet. Yeah, but you need to be able to focus because there are so many details that can take you
Starting point is 01:09:03 here and there and then you end up writing 4,000 pages and not 400 pages. But really for me, the most important contribution of meditation is to really be able to see reality as it is and to tell the difference between what is really happening and what is just stories generated by the mind. But you could pull your smartphone out and the algorithm would tell you. This is the one thing that so far
Starting point is 01:09:34 algorithms are not getting even close to telling you. The human mind is a factory for generating fictional stories about myself, about my family, about my country, about the world. And it's so difficult to tell the difference between the stories we invent and objective reality. When I came to my first meditation retreat, a Vipassana retreat, it was 18 years ago. I was doing a PhD in Oxford, and I thought I was a very smart person, that I knew myself very well and was in control of my life. And the teacher gave us, for the first few days, a practice that is very, very simple. In a way, just observe your breath,
Starting point is 01:10:22 just observe when the breath is coming in. Be aware when it's coming in, you know it's coming in. When it's going out, you just know it's going out. You don't need to do anything. Not a breathing exercise. You don't need to control the breath. You just know now it's coming in. It sounds like the simplest thing in the world.
Starting point is 01:10:40 And I was absolutely shocked that I couldn't do it for more than 10 seconds. Like I would try to just know, oh, is it coming in or out? And within like five seconds, the mind would run away to some memory, some fantasy. Oh, I forgot I needed to do this. So I needed to do that and whatever. And I realize I know almost nothing about my mind. I have absolutely no control over it.
Starting point is 01:11:06 We need to start from zero. And this was, I think it was the most important thing anybody ever told me in my life. And it was the most shocking realization of my life. And also- And transformative. Yeah, very, very transformative. And from then on, both in my private life and also in my work,
Starting point is 01:11:34 I try to just stick with this very simple exercise, just try to see what is really happening right now. Yeah, and you go on annual extended retreats, 30 to 60 days. Yeah, last year it was 60 days. The year before that it was 45 days. And silent the entire time? Absolute silence. Definitely no smartphones and computers,
Starting point is 01:11:59 but also you don't talk with anybody there. You're with your mind 60 days. Unbelievable. Our time is up. Thank you so much. I really appreciate, I appreciate the work that you do. I wish you well, best of luck with the new book. Everybody should pick up the new book, 21 Lessons for the 21st Century. And thank you very much, Yuval. Thank you for listening. Yeah, how do you feel?
Starting point is 01:12:31 I feel great. Good. It was fun. All right, awesome. Thanks, you guys. Incredible human being that Yuval. Such a pleasure. Such an honor to spend an hour with him.
Starting point is 01:12:42 Really hope you guys enjoyed that. Please, please pick up his new book, 21 Lessons for the 21st Century. And if you haven't already, make a point of checking out his other landmark works, Sapiens and Homo Deus. You can thank me later. You can learn more about Yuval at ynharari.com.
Starting point is 01:12:59 And you can follow him on Twitter if you like. He's at Harari underscore Yuval. But like most great minds, he follows absolutely zero people. So he isn't likely to see your tweet if you mention him. I imagine he's too busy meditating or speaking or perhaps cogitating on the next big idea. But I've got plenty of links in the show notes on the episode page on richroll.com to extend your experience of this conversation and to learn more about Yuval and his work and his world. If you would like to support our work here on the podcast, just subscribe to the show on Apple Podcasts or Google Podcasts or on whatever platform you enjoy this content.
Starting point is 01:13:42 You can subscribe to my YouTube channel, youtube.com forward slash Rich Roll, and just share it with your friends and on social media. That's the best thing. If you just share your favorite episode with somebody else, I love that and I appreciate that. If you want to contribute beyond that, we have a Patreon, which you can find at richroll.com forward slash donate. I want to thank everybody who helped put on the show today. Jason Camiello for audio engineering, production, show notes, and interstitial music. Blake Curtis and Margo Lubin for graphics. DK David Kahn for sponsor relations and theme music, as always, by Analema. Appreciate you guys. Thanks for all the love. I'll see you back here next week with running coach, writer, and emissary of all things
Starting point is 01:14:26 running culture, Knox Robinson. It's a really good one. Until then, may peace be with you, my friends. Thank you.