Making Sense with Sam Harris - #138 — The Edge of Humanity

Episode Date: September 20, 2018

Sam Harris speaks with Yuval Noah Harari about his new book “21 Lessons for the 21st Century.” They discuss the importance of meditation for his intellectual life, the primacy of stories, the need to revise our fundamental assumptions about human civilization, the threats to liberal democracy, a world without work, universal basic income, the virtues of nationalism, the implications of AI and automation, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:00 To access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Today I am speaking with Yuval Noah Harari. Yuval has a Ph.D. in History from the University of Oxford, and he lectures at Hebrew University in Jerusalem, where he specializes in world history.
Starting point is 00:01:03 His books have been translated into over 50 languages, and these books are Sapiens, A Brief History of Humankind, Homo Deus, A Brief History of Tomorrow, and his new book, which we discuss today, is 21 Lessons for the 21st Century. Yuval is rather like me in that he spends a lot of time worrying out loud. He's also a long-term meditator. I don't know if there's a connection there. There was so much to talk about. There is much more in the new book than we touched, but we touched a lot. We actually started talking about the importance of meditation for his intellectual life. We talk about the primacy of stories, the need to revise our fundamental assumptions about human civilization and how it works, the current threats to liberal democracy,
Starting point is 00:01:53 what a world without work might look like, universal basic income, the virtues of nationalism. You've all had some surprising views on that. The implications of AI and automation, and several other topics. So without further delay, I bring you Yuval Noah Harari. Thank you. Thank you. And thank you to Rivers Cuomo. That's amazing. Thank you. So, and thank you to Rivers Cuomo. That's amazing. So, you've heard this from me before, if you've been to an event or listened to events on a podcast, but I, so it may get old to hear, but it really doesn't get old to say. I can't tell you
Starting point is 00:02:58 what an honor it is to put a date on the calendar and have you all show up. I mean, it's just astonishing to me that this happened. So thank you. And thank you to Yuval for coming out. It's my honor. It's amazing to collaborate with him. Thank you. So, Yuval, you have these books that just steamroll over all other books,
Starting point is 00:03:28 and I know because I write books. So you wrote Sapiens, which is kind of about the deep history of... Yes. With a few fans. Which is really about the history of humanity. And then you wrote Homo Deus, which is about our far future. And now you've written this book, 21 Lessons for the 21st Century,
Starting point is 00:03:49 which is about the present. I can't be the only one in your publishing world who notices that now you have nothing left to write about. So good luck with that career of yours. So how do you describe what you do? Because you're a historian. One thing that you and I have in common is that we have a reckless disregard for the boundaries between disciplines. You just touch so many things that are not straightforward history. How do you think about your intellectual
Starting point is 00:04:20 career at this point? Well, my definition of history is that history is not the study of the past. It's the study of change, how things change. And yes, most of the time you look at change in the past, but in the end, all the people who lived in the past are dead and they don't care what you write or say about them. If the past has anything to teach us, it should be relevant to the future and to the present also. But you touch biology and the implications of technology. I follow the questions. And the questions don't recognize these disciplinary boundaries. And as a historian, maybe the most important lesson that boundaries. And as a historian, maybe the most important lesson that I've learned as a historian
Starting point is 00:05:08 is that humans are animals, and if you don't take this very seriously into account, you can't understand history. Of course, I'm not a biologist. I also know that humans are a very special kind of animal. If you only know biology, you will not understand things like the rise of Christianity know that humans are a very special kind of animal. If you only know biology, you will not understand things like the rise of Christianity or the Reformation or
Starting point is 00:05:30 the Second World War. So you need to go beyond just the biological basis. But if you ignore this, you can't really understand anything. Yeah, the other thing we have in common, which gives you, to my eye, a very unique slant on all the topics you touch, is an interest in meditation and a sense that our experiences in meditation have changed the way we think about problems in the world and questions like, you know, just what it means to live a good life or even whether the question of the meaning of life is an intelligible one or a valid one or one that needs to be asked. How do you view the influence of the contemplative life on your intellectual pursuits?
Starting point is 00:06:20 I couldn't have written any of my books, either Sapiens or Homo Deus or 21 Lessons, without the experience of meditation, partly because of just what I learned about the human mind, from observing the mind, but also partly because you need a lot of focus in order to be able to summarize the whole of history into like 400 pages. And meditation gives you this kind of ability to really focus. My understanding of at least the meditation that I practice is that the number one question is what is reality? What is really happening? To be able to tell the difference
Starting point is 00:07:07 between the stories that the mind keeps generating about the world, about myself, about everything, and the actual reality. And this is what I try to do when I meditate. and this is also what I try to do when I write books, to help me and other people understand what is the difference between fiction and reality. Yeah, yeah, and I want to get at that difference, because you use these terms in slightly idiosyncratic ways, so I think it's possible to either be confused about how you use terms like story and fiction. For instance, just the way you talk about the primacy of fiction, the primacy of story, the way in which our concepts that we think map onto reality don't really quite map onto reality, and yet they're nonetheless important.
Starting point is 00:08:10 That is, in a way that you don't often flag in your writing, a real meditator's eye view of what's happening here. I mean, it's not, like, you're giving people the epiphany that certain things are made up, like the concept of money, right? The idea that we have dirty paper in our pocket that is worth something, right? That is a convention that we've all agreed about. But it's an idea. It only works because we agree that it works. But the way you use the word story and fiction rather often seems to denigrate these things a little bit more than I'm tempted to do when I talk about it. I don't say that there is anything wrong with it. Stories and fictions are a wonderful thing, especially if you want to get people to cooperate effectively. You cannot have a global trade
Starting point is 00:08:59 network unless you agree on money. And you cannot have people playing football or baseball or basketball or any other game unless you get them to agree on rules that quite obviously we invented. They did not come from heaven. They did not come from physics or biology. We invented them. And there is nothing wrong with people agreeing, accepting, let's say for 90 minutes, the story of football, the rules of football,
Starting point is 00:09:28 that if you score a goal, then this is the goal of the whole game and so forth. The problem begins only when people forget that this is only a convention, this is only something we invented, and they start confusing it with kind of, this is reality, this is the a convention. This is only something we invented. And they start confusing it with kind of, this is reality. This is the real thing. And in football, it can lead to people, to hooligans beating up each other or killing people because of this invented game. And on a higher level, it can lead to, you know, to world wars and genocides in the name of fictional entities like gods and nations and currencies that we've created. Now, there is nothing wrong with these
Starting point is 00:10:14 creations as long as they serve us instead of us serving them. But wouldn't you acknowledge that there's a distinction between good stories and bad stories? Yeah, certainly. The good stories are the ones that really serve us, that help people, that help other sentient beings live a better life. I mean, it's as simple as that. I mean, of course, in real life, it's much more complicated
Starting point is 00:10:40 to know what will be helpful and what not and so forth. But a good starting place is just to have this basic ability to tell the difference between fiction and reality, between our creations and what's really out there, especially when, for example, you need to change the story. you need to change the story or a story which was very adapted to one condition is less adapted to a new condition
Starting point is 00:11:11 which is for example what I think is happening now with the story of the underground liberal democracy that it was probably one of the best stories ever created by humanity and it was very one of the best stories ever created by humanity.
Starting point is 00:11:29 And it was very adapted to the conditions of the 20th century. But it is less and less adapted to the new realities of the 21st century. And in order to kind of reinvent the system, we need to acknowledge that to some extent, it is based on stories we have invented. Right. But so when you talk about something like human rights being a story or a fiction, that seems like a story or a fiction that shouldn't be on the table to be fundamentally revised, right? Like that's where people begin to worry that to describe these things as stories or fictions is to suggest tacitly, if I don't think you do this explicitly,
Starting point is 00:12:13 that all of this stuff is made up, and therefore it's all sort of on the same level, right? And yet there's clearly a distinction between, a distinction you make in your book between dogmatism and the other efforts we make to justify our stories, right? There's stories that are dogmatically asserted, and religion has more than its fair share of these, but there are political dogmas, there are tribal dogmas of all kinds, you know, nationalism can be anchored to dogma. And the mode of asserting a dogma is to be doing so without feeling responsible to counter arguments and demands for evidence and reasons why. Whereas with something like human rights, we can tell an additional story about why we value this convention, right? Like we don't have to... It doesn't have to be a magical story. It doesn't have to be that we were all imbued
Starting point is 00:13:08 by our creator with these things. But we can talk for a long time without saying it's just so to justify that convention. Yeah, I mean, human rights is a particularly problematic and also interesting case. First of all, because it's our story. I mean, we are very happy with you discrediting
Starting point is 00:13:29 the stories of all kinds of religious fundamentalists and all kinds of tribes somewhere and ancient people, but not our story. Don't touch that. It depends what you mean by we. So I guess we, most of the people, I don't see anybody here. It could be just empty chairs and recordings of laughter. But I assume that the people here, most of them, this is our story.
Starting point is 00:13:54 The second thing is that we live in a moment when liberal democracy is under a severe attack. And this was not so when I wrote Sapiens. I felt much fre fear writing these things back in 2011, 2012. And now it's much more problematic. And yes, I find myself, one of the difficulties of living right now, as an intellectual, as a thinker,
Starting point is 00:14:20 that I'm kind of torn apart by the imperative to explore the truth, to follow the truth wherever it leads me, and the political realities of the present moment and the need to engage in very important political battles. And this is one of the costs, I think, of what is happening now in the world, that it restricts our ability, our freedom, to truly go deep and explore the foundations of our system. And I still feel the importance of doing it, of questioning even the foundations of liberal democracy and of human rights, simply because I think that as we have defined them since the 18th century, they are not going to survive the tests of the 21st century. And it's extremely unfortunate
Starting point is 00:15:26 that we have to engage in this two-front battle. That at the same moment, we have to defend these ideas from people who look at them from the perspective of nostalgic fantasies. That they want to go back from the 18th century. And at the same time, we have to also go forward
Starting point is 00:15:52 and think what it means, what the new scientific discoveries and technological developments of the 21st century really mean to the core ideas of what do human rights mean when you are starting to have superhumans? Do superhumans have superhuman rights? What does the right to freedom mean when we have now technologies that simply undermine the very concept of freedom. We kind of, when we created this whole system, not we, somebody, back in the 18th and 19th century, we gave ourselves
Starting point is 00:16:36 all kinds of philosophical discounts of not really going deeply enough in some of the key questions, like what do humans really need? And we settled for answers like, just follow your heart. Yeah. And this was good enough. This is Joseph Campbell. I blame Joseph Campbell for follow your bliss. No, but follow your heart.
Starting point is 00:17:03 The voter knows best. The customer is always right. Beauty is in the eyes of the beholder. All these slogans, they were kind of covering up for not engaging more deeply with the question of what is really human freedom and what do humans really need. And for the last 200 years it was good enough but now to just follow your heart is becoming extremely dangerous and problematic when there are corporations and organizations and governments out there that for the first time in history can hack your heart. And your heart might be now a government agent, and you don't even know it. So telling people in 2018, just follow your heart, is a much, much more dangerous advice than in 70, 76. Yeah. So let's drill down on that circumstance. So we have this claim that liberal democracy
Starting point is 00:18:08 is one, under threat, and two, might not even be worth maintaining as we currently conceive it, given the technological changes that are upon us or will be upon us. It is worth maintaining. It's just becoming more and more difficult. upon us or will be upon us. Well, it is worth maintaining. It's just becoming more and more difficult. Presumably, there are things about liberal democracy that are serious bugs and not features in light of the fact that, as you say, if it's all a matter of putting everything to a vote and we are all part of this massive psychological experiment where we're gaming ourselves with algorithms written by some people in this room to not only confuse us with respect to what's in our best interest, but the very tool we would use to decide what's worth wanting is being
Starting point is 00:19:00 hijacked. It's one thing to be wrong about how to meet your goals. It's another thing to have the wrong goals and not even know that. It's hard to know where ground zero is for cognition and emotion if all of this is susceptible to outside influence, which ultimately we need to embrace because there is a possibility of influencing ourselves in ways that open vistas of well-being and peaceful cooperation that we can't currently imagine, right? Or we can't see how to get to. So it's not like we actually want to go back to when there was no, quote, hacking of the human mind. Every conversation is an attempted hack of somebody else's mind, right? So we're just getting,
Starting point is 00:19:49 it's getting more subtle now. Yeah, it's, you know, throughout history, other people and governments and churches and so forth, they all the time tried to hack you and to influence you and to manipulate you. They just weren't very good at it, because humans are just so incredibly complicated. And therefore, for most of history, this idea that I have an inner arena, which is completely free from external manipulation, nobody out there can really understand what's happening within me. How special you are. And how special I am, what I really feel and how I really think.
Starting point is 00:20:30 And all that, it was largely true. And therefore, the belief in the autonomous self and in free will and so forth, it made practical sense. Even if it wasn't true on the level of ultimate reality, on a practical level, it was good enough. But however complicated the human entity is, we are now reaching a point when somebody out there can really hack it. Now, they want, can really hack it. Now, they want... It can never be done perfectly. We are so complicated.
Starting point is 00:21:08 I'm under no illusion that any corporation or government or organization can completely understand me. This is impossible. But the yardstick or the threshold, the critical threshold is not perfect understanding.
Starting point is 00:21:25 The threshold is just better than me. The key inflection point in history, in the history of humanity, is the moment when an external system can reliably, on a large scale, understand people better than they understand themselves. And this is not an impossible mission because so many people don't really understand themselves very well. No. Similarly, with the whole idea...
Starting point is 00:21:53 Just ask my wife. With the whole idea of shifting authority from humans to algorithms. So I trust the algorithm to recommend TV shows for me. And I trust the algorithm to tell me how to drive from Mountain View to this place this evening. And eventually, I trust the algorithm to tell me what to study, and where to work, and whom to date, and whom to marry, and who to vote for. And then people say, no, no, no, no, no.
Starting point is 00:22:26 That won't happen. Because there will be all kinds of mistakes and glitches and bugs and the algorithm will never know everything. And it can't do it. And if the yardstick is the algorithm, to trust the algorithm, to give authority to the algorithm, it needs to make perfect decisions,
Starting point is 00:22:46 then yes, it will never happen. But that's not the yardstick. The algorithm just needs to make better decisions than me about what to study and where to live and so forth. And this is not so very difficult because as humans we often tend to make terrible mistakes even in the most important decisions in life. Yeah, yeah. I promise this will be uplifting at some point.
Starting point is 00:23:18 So let's linger on the problem of the precariousness of liberal democracy. And there's so many aspects to this. Maybe just to add one thing to this precariousness, the ideas that systems have to change, again, as a historian, this is obvious. I mean, you couldn't really have a functioning liberal democracy in the Middle Ages because you didn't have the necessary technology. Liberal democracy is not this eternal ideal that can be realized anytime, anyplace.
Starting point is 00:23:54 Take the Roman Empire in the 3rd century, take the Kingdom of France in the 12th century, let's have a liberal democracy there. No, you don't have the technology. You don't have the infrastructure. You don't have what it takes. It takes communication. It takes education. It takes a lot of things that you just don't have. And it's not just a bug of liberal democracy. It's true of any socio-economic or political system. You could not build a communist regime in 16th century Russia. I mean, you can't have communism without trains and electricity and radio and so forth, because in order to make all the decisions centrally, if a slogan is that you work,
Starting point is 00:24:41 they take everything, and then they redistribute according to needs. Each one works according to their ability and gets according to their need. The key problem there is really a problem of data processing. How do I know what everybody is producing? How do I know what everybody needs? And how do I shift the resources, taking wheat from here and sending it there? In 16th-century Russia, when you don't have trains, when you don't have radio, you just can't do it.
Starting point is 00:25:15 So as technology changes, it's almost inevitable that the socioeconomic and political systems will change. So we can't just hold on, no, this must remain as it is. The question is, how do we make sure that the changes are for the better and not for the worse? Well, by that yardstick,
Starting point is 00:25:37 now might be the moment to try communism in earnest. We can do it now, right? So you can all tweet that Yuval Noah Harari is in favor of communism. I didn't say anything. I mean, we had a moment in the sun that seemed, however delusionally, to be kind of outside of history.
Starting point is 00:26:00 You know, it's like the first moment in my life where I realized I was living in history was September 11th, 2001. But before that, it's like the first moment in my life where I realized I was living in history was September 11, 2001. But before that, it just seemed like people could write books with titles like The End of History. And we sort of knew how this was going to pan out, it seemed. Liberal values were going to dominate the character of a global civilization, ultimately. We were going to fuse our horizons with people of however disparate background. You know, someone in a village in Ethiopia was eventually going to get some version of the democratic, liberal notion of human rights and
Starting point is 00:26:41 the primacy of rationality and the utility of science. So religious fundamentalism was going to be held back and eventually pushed all the way back, and irrational economic dogmas that had proved that they're merely harmful would be pushed back. And we would find an increasingly orderly and amicable collaboration among more and more people. I think like I say, and we would get to a place where war between nation states would be less and less likely to the point where, by analogy, a war between states internal to a country like the United States, a war between Texas and Oklahoma, just wouldn't make sense, right? How is that the United States. A war between Texas and Oklahoma just wouldn't make sense, right? How is that possibly going to come about? Wait and see. Yeah, exactly. But now we seem to be in a moment where
Starting point is 00:27:31 much of what I just said we were taking for granted can't be taken for granted. There's a rise of populism. There's a xenophobic strand to our politics that is just immensely popular, both in the US and in Western Europe. And this anachronistic nativist reaction is, as you spell out in your most recent book, is being kindled by a totally understandable anxiety around technological change of the story. We're talking about people who are sensing, it's not the only source of xenophobia and populism, but there are many people who are sensing the prospect of their own irrelevance given the dawn of this new technological age.
Starting point is 00:28:18 What are you most concerned about in this present context? I think irrelevance is going to be a very big problem. It already fuels much of what we see today with the rise of populism, is the fear and the justified fear of irrelevance. If in the 20th century, the big struggle was against exploitation, then in the 21st century,
Starting point is 00:28:44 for a lot of people around the world, the big struggle is likely to be against irrelevance. And this is a much, much more difficult struggle. A century ago, so you felt that, let's say you're the common person, that there were always these elites that exploit me. Now you increasingly feel, as a common person, that there are all these elites that just don't need me. And that's much worse on many levels, both psychologically and politically. It's much worse to be irrelevant than to be exploited. Spell that out. Why is it worse? I'll spell that out. Why is it worse?
Starting point is 00:29:27 First of all, because you're completely expendable. If a century ago you mount a revolution against exploitation, then you know that if things, when bad comes to worse, they can't shoot all of us because they need us. Who's going to work in the factories? Who's going to serve in the armies if they get rid of us? That's a motivational poster I'm going to get printed up. I'm not sure what the graphic is, but they can't shoot all of us. If you're irrelevant, that's not the case. You're totally expendable. And again, we are often, our vision of the future is followed by the recent past.
Starting point is 00:30:11 The 19th and 20th century were the age of the masses, where the masses ruled. And even authoritarian regimes, they needed the masses. So you had these mass political movements, like Nazism and like communism, and even somebody like Hitler or like Stalin, they invested a lot of resources in building schools and hospitals and having vaccinations for children and sewage systems and teaching people to read and write, not because Hitler and Stalin were so
Starting point is 00:30:46 nice guys, but because they knew perfectly well that if they wanted, for example, Germany to be a strong nation with a strong army and a strong economy, they needed millions of people, common people, to serve as soldiers in the army and as workers in the factories and in the offices. So some people could be expendable and could be scapegoats like the Jews, but on the whole, you couldn't do it to everybody. You needed them. But in the 21st century, there is a serious danger that more and more people will become irrelevant and therefore also expendable, we already see it happening in the armies. That whereas the leading armies of the 20th century relied on recruiting millions of common people to serve as common soldiers, today the most advanced
Starting point is 00:31:41 armies, they rely on much smaller numbers of highly professional soldiers and increasingly on sophisticated and autonomous technology. If the same thing happens in the civilian economy, then we might see a similar split in civilian society where you have a relatively small, very capable professional elite relying on very sophisticated technology, and most people, just as they are already today, militarily irrelevant, they could become economically and politically irrelevant. Now, that sounds like a a real risk we're running but but but it's the normal intuitions about what is scary about that don't hold up given the right construal and expectations about human well-being
Starting point is 00:32:36 so it's like we know what people are capable of doing when they're irrelevant because aristocrats have done that for centuries. I mean, there are people who have not had to work in every period of human history, and they had a fine old time, you know, shooting pheasant and inventing weird board games. And then if you add to that some more sophisticated way of finding well-being, you know, so if we taught people, you know, stoic philosophy and how to meditate and good sports, and it's nowhere written that life is only meaningful if you are committed to something you only will do because someone's paying you to do it, right?
Starting point is 00:33:20 Definitely. I mean, there is a worst case and a best case scenario. In the best case scenario, people are relieved of all the difficult, can spend your time, your leisure time, on exploring yourself, developing yourself, doing art or meditating or playing sports or developing communities. There are wonderful scenarios that can be realized. There are also some terrible scenarios that can be realized. I mean, I don't think there is anything inevitable. I mean, the technology, the technological revolution, which is just beginning right now, it can go in completely different directions. Again, if you look back at the 20th century,
Starting point is 00:34:22 then you see that with the same technology of trains and electricity, and radio, you can build a communist dictatorship or a fascist regime or a liberal democracy. The trains don't care. They don't tell you what to do with them. And they can be used for anything you can use them for. They don't object. And it's the same with AI and biotechnology and
Starting point is 00:34:46 all the current technological inventions. We can use them to build really paradise or hell. The one thing that is certain is that we are going to become far more powerful than ever before, far more powerful than we are now. We are really going to acquire divine abilities of creation, in some sense even greater abilities than what was traditionally ascribed to most gods from Zeus to Yahweh. If you look, for instance, the creation story in the Bible, the only things that Yahweh managed to create are organic entities. And we are now on the verge of creating the first inorganic entities after four billion years of evolution. So in this sense, we are even on the verge of outperforming the biblical God in creation. And we can do so many different things with that.
Starting point is 00:35:45 Some of them can be extremely good, some of them can be extremely bad. This is why it's so important to have these kinds of conversations because this is maybe the most important question that we are facing. What to do with these powers? Yeah.
Starting point is 00:36:01 So what norms or stories or conventions or fictions, concepts, ideas, do you think stand in the way of us taking the right path here? I mean, so take, we've sort of alluded to it without naming it. Let's say we could all agree that universal basic income was the near-term remedy for some explosion of automation and irrelevance. You look skeptical about that. Yeah, I have two difficulties with universal basic income, which is universal and basic. Income is fine, but universal and basic, they are ill-defined. Most people, when they speak about universal basic income, they actually have in mind national basic income. They think in terms, okay, we'll tax Google and Facebook in California
Starting point is 00:36:54 and use that to pay unemployment benefits or to give free education to unemployed coal miners in Pennsylvania and unemployed taxi drivers in New York. The real problem is not going to be in New York. The real problem, the greatest problem, is going to be in Mexico, in Honduras, in Bangladesh. And I don't see an American government taxing corporations in California and sending the money to Bangladesh
Starting point is 00:37:21 to pay unemployment benefits there. And this is really the automation revolution. They're clapping to stop us from paying. Those are the libertarians in the audience. We've built, over the last few generations, a global economy and a global trade network. And the automation revolution is likely to unravel the global trade network and hit the weakest links the hardest. So you will have
Starting point is 00:37:54 enormous new wealth, enormous new wealth created here in San Francisco and Silicon Valley. But you can have the economies of entire countries just collapse completely. Because what they know how to do, nobody needs that anymore. And we need a global solution for this. So universal, if by universal you mean global, taking money from California and sending it to Bangladesh,
Starting point is 00:38:23 then yes, this can work. But if you mean national, it's not a real answer. And the second problem is with basic. How do you define what are the basic needs of human beings? Now, in a scenario in which a significant proportion of people no longer have any jobs, and they depend on this universal basic income or universal basic services, whatever they get, they can't go beyond that.
Starting point is 00:38:55 This is the only thing they're going to get. Then who defines what is their basic needs? What is basic education? is their basic needs. What is basic education? Is it just literacy? Or also coding? Or everything up to PhD?
Starting point is 00:39:10 Or playing the violin? Who decides? And what is basic healthcare? Is it just, I mean, if you're looking 50 years to the future and you see genetic engineering of your children and you see all kinds of treatments to extend life,
Starting point is 00:39:24 is this the monopoly of a tiny elite? Or is this part of the universal basic package? And who decides? So it's a first step. The discussion we have now about universal basic income is an important first step. But we need to go much more deeply into understanding what we actually mean by universal and by basic. Right. Well, so let's imagine that we begin to extend the circle, coincident with this rise in affluence. And because on some level, if the technology is developed correctly, we are talking about pulling wealth out of the ether, right?
Starting point is 00:40:11 So automation and artificial intelligence, there's more, the pie is getting bigger. And then the question is how generously or wisely we will share it with the people who are becoming irrelevant because we don't need them for their labor anymore. Let's say we get better at that than we currently are. But I mean, you can imagine that we are going to be, we will be fast to realize that we need to take care of the people in our neighborhood, you know, in San Francisco. And we will be slower to realize we need to take care of the people in Somalia, but maybe we'll just, these lessons will be hard. One, we'll realize if we don't take care of the people in Somalia, a refugee crisis, unlike any we've ever seen, will hit us in six months, right? So there'll be some completely self-serving reason why we need to eradicate famine or some other largely economic problem elsewhere. But presumably,
Starting point is 00:41:08 we can be made to care more and more about everyone. Again, if only out of self-interest. What are the primary impediments to our doing that? that? Human nature. It is possible, it's just very difficult. I think we need for a number of reasons to develop
Starting point is 00:41:34 global identities, a global loyalty, a loyalty to the whole of humankind and to the whole of planet Earth. So this is a story that becomes so captivating that it supersedes other stories that seem to say Team America. No, it not abolishes them.
Starting point is 00:41:50 I don't think we need to abolish all nations and cultures and languages and just become this homogeneous gray goo all over the planet. No, you can have several identities and loyalties at the same time. People already do it now. They had it throughout history. I can be loyal to my family, to my neighborhood, to my profession, to my city, and to my nation at the same time. And then suddenly there are conflicts, say, between my loyalty to my business and my loyalty to my family. So I hate to think hard. Sometimes I prefer the interest of the
Starting point is 00:42:23 family. Sometimes I prefer the interests of the family. Sometimes I prefer the interests of the business. So, you know, that's life. We have these difficulties in life. It's not always easy. So I'm not saying let's abolish all other identities. And from now on, we are just citizens of the world. But we can add this kind of layer of loyalty to the previous layers. And this, you know, people have been talking about it for thousands of years. But now it really becomes a necessity, because we are now facing three global problems, which are the most important problems of humankind. And it should be obvious to everybody that they can only be solved on a global level,
Starting point is 00:43:05 through global cooperation. These are nuclear war, climate change, and technological disruption. It should be obvious to anybody that you can't solve climate change on a national level. You can't build a wall against rising temperatures or rising sea levels. No country, even the United States or China,
Starting point is 00:43:30 no country is ecologically independent. There are no longer independent countries in the world if you look at it from an ecological perspective. Similarly, when it comes to technological disruptions, the potential dangers of artificial intelligence and biotechnology should be obvious to everybody. You cannot regulate artificial intelligence on a national level. If there is some technological development you're afraid of,
Starting point is 00:43:59 like developing autonomous weapon systems, or like doing genetic engineering on human babies, then if you want to regulate this, you need cooperation with other countries. Because, like the ecology, also science and technology, they are global. They don't belong to any one country or any one government. So if, for example, the United States bans genetic engineering on human beings, it won't prevent the Chinese or the Koreans or the Russians from doing it.
Starting point is 00:44:34 And then a few years down the line, if the Chinese are starting to produce superhumans by the thousands, the Americans wouldn't like to stay behind. So they will break their own ban. The only way to prevent a very dangerous arms race in the fields of AI and biotechnology is through global cooperation. Now, it's going to be very difficult, but I don't think it's impossible. I actually gain a lot of hope from seeing the strength of nationalism.
Starting point is 00:45:09 So that's totally counterintuitive, because everything you just said, in the space provided, there's only one noun that solves the problem, which is world government on some level. No, we don't need a single emperor or government. You can have good cooperation even without a single emperor. Then we need some other tools by which to cooperate because we have, in a world that is
Starting point is 00:45:35 as politically fragmented as ours, into nation states, all of which have their domestic political concerns and their short time horizons. You're talking about global problems and long-term problems that can only be solved through global cooperation and long-term thinking. And we have political systems that are insular and focused on time horizons that don't exceed four or, in the best case, six years. And then we have the occasional semi-benevolent dictatorship that can play the game slightly differently.
Starting point is 00:46:09 So what is the solution, if not just a fusing of political apparatus at some point in the future? We certainly need to go beyond the national level to a level when we have real trust between different countries. Of the kind you see, for example, still in the European Union. If you take the example of having a ban on developing autonomous weapon systems. So if the Chinese and the Americans today try to sign an agreement banning killer robots, the big problem there is trust. How do you really trust the other side
Starting point is 00:46:48 to live up to the agreement? AI is in this sense much worse than nuclear weapons because with nuclear weapons, it's very difficult to develop nuclear weapons in complete secrecy. People are going to notice. But with AI, there are all kinds of things you can do in secret. And the big question is, how can we trust them?
Starting point is 00:47:11 And at present, there is no way that the Chinese and the Americans, for example, are really going to be able to trust one another. Even if they sign an agreement, every side will say, yes, we are good guys, we don't want to do it, but how can we really be sure that they are not doing it? So we have to do it first. But if you think about, for example, France and Germany, despite the terrible history of these two countries,
Starting point is 00:47:39 a much worse history than the history of the relations between China and the US, if today the Germans come to the French and they tell the French, trust us, we don't have some secret laboratory in the Bavarian Alps where we develop killer robots in order to conquer France, the French will believe them. And the French have good reason to believe them.
Starting point is 00:48:02 They are really trustworthy in this. And if the French and Germans manage to reach this situation, I think it's not hopeless also for the Chinese and the Americans. So what explains that difference? Because it is a shocking fact of history that you can take these time slices that are 40, 50 years apart, where you have the attempted rise of the Thousand-Year Reich, where Germany is the least trustworthy nation anyone could conceive of,
Starting point is 00:48:36 the most power-hungry, the most militaristic. You could say the same about Japan at that moment. And then fast forward a few decades and we have what I guess it's it's always vulnerable to some some change but we have a just a seemingly truly durable basis of trust what is as a historian what what accomplished that magic and why is it hard to just reverse engineer that with respect to Russia or China or any other teaming adversary? It's just a lot of hard work. I mean, in the case of the Germans, what you can say about them is they're very... If you'd like to continue listening to this
Starting point is 00:49:16 conversation, you'll need to subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at SamHarris.org. Thank you.
