Lex Fridman Podcast - #107 – Peter Singer: Suffering in Humans, Animals, and AI

Episode Date: July 8, 2020

Peter Singer is a professor of bioethics at Princeton, best known for his 1975 book Animal Liberation, that makes an ethical case against eating meat. He has written brilliantly from an ethical perspective on extreme poverty, euthanasia, human genetic selection, sports doping, the sale of kidneys, and happiness, including in his books Ethics in the Real World and The Life You Can Save. He was a key popularizer of the effective altruism movement and is generally considered one of the most influential philosophers in the world.

Support this podcast by supporting these sponsors:
- MasterClass: https://masterclass.com/lex
- Cash App – use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w

If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
05:25 - World War II
09:53 - Suffering
16:06 - Is everyone capable of evil?
21:52 - Can robots suffer?
37:22 - Animal liberation
40:31 - Question for AI about suffering
43:32 - Neuralink
45:11 - Control problem of AI
51:08 - Utilitarianism
59:43 - Helping people in poverty
1:05:15 - Mortality

Transcript
Starting point is 00:00:00 The following is a conversation with Peter Singer, professor of bioethics at Princeton University, best known for his 1975 book, Animal Liberation, that makes an ethical case against eating meat. He has written brilliantly from an ethical perspective on extreme poverty, euthanasia, human genetic selection, sports doping, the sale of kidneys, and generally happiness, including in his books Ethics in the Real World and The Life You Can Save. He was a key popularizer of the effective altruism movement and is generally considered one of the most influential philosophers in the world. Quick summary of the ads: two sponsors, Cash App and MasterClass. Please consider supporting the podcast by downloading Cash App and using code LEXPODCAST and signing
Starting point is 00:00:53 up at masterclass.com slash LEX. Click the links, buy the stuff. It really is the best way to support the podcast and the journey I'm on. As you may know, I primarily eat a ketogenic or carnivore diet, which means that most of my diet is made up of meat. I do not hunt the food I eat, though one day I hope to. I love fishing, for example. Fishing and eating the fish I catch has always felt much more honest than participating in the supply chain of factory farming. From an ethics perspective, this part of my life has always had a cloud over it. It makes me think. I've tried a few times in my life to reduce the amount of meat I eat, but for some reason, whatever the makeup of my body, whatever the way I practice the diet I have, I get a lot of mental and physical
Starting point is 00:01:47 energy and performance from eating meat. So both intellectually and physically, it's a continued journey for me. I return to Peter's work often to reevaluate the ethics of how I live this aspect of my life. Let me also say that you may be a vegan or you may be a meat eater and may be upset by the words I say or Peter says, but I ask for this podcast and other episodes of this podcast that you keep an open mind. I may and probably will talk with people you disagree with. Please try to really listen, especially to people you disagree with, and
Starting point is 00:02:27 give me and the world the gift of being a participant in a patient, intelligent, and nuanced discourse. If your instinct and desire is to be a voice of mockery towards those you disagree with, please unsubscribe. My source of joy and inspiration here has been to be a part of a community that thinks deeply and speaks with empathy and compassion. That is what I hope to continue being a part of, and I hope you join as well. If you enjoy this podcast, subscribe on YouTube, review it with 5 stars on Apple Podcasts, follow on Spotify, support it on Patreon, or connect with me on Twitter @lexfridman. As usual, I'll do a few minutes of ads now and never any ads in the middle
Starting point is 00:03:11 that can break the flow of the conversation. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to buy bitcoin, let me mention that cryptocurrency in the context of the history of money is fascinating. I recommend The Ascent of Money as a great book on this history. Debits and credits on ledgers started around 30,000 years ago. The US dollar was created over 200 years ago, and the first decentralized cryptocurrency was released just over 10 years ago.
Starting point is 00:03:53 So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to, and just might, redefine the nature of money. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. This show is sponsored by MasterClass.
Starting point is 00:04:23 Sign up at masterclass.com slash Lex to get a discount and to support this podcast. When I first heard about MasterClass, I thought it was too good to be true. For $180 a year, you get an all-access pass to watch courses from, to list some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, on game design, I promise I'll start streaming games at some point soon, Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com slash Lex to get a discount and to support this podcast. And now, here's my conversation with Peter
Starting point is 00:05:22 Singer. When did you first become conscious of the fact that there is much suffering in the world? I think I was conscious of the fact that there's a lot of suffering in the world, pretty much as soon as I was able to understand anything about my family and its background, because I lost three of my four grandparents in the Holocaust. Obviously I knew why I only had one grandparent and she herself had been in the camps and survived. So I think I knew a lot about that pretty early. My entire family comes from the Soviet Union, I was born in the Soviet Union, sort of World War II has deep roots in the culture and the suffering that the
Starting point is 00:06:26 war brought, the millions of people who died, is in the music, is in the literature, and the culture. What do you think was the impact of the war broadly on our society? The war had many impacts. I think one of them, a beneficial impact, is that it showed what racism and authoritarian government can do. And at least as far as the West was concerned, I think that meant that I grew up in an era in which there wasn't the kind of overt racism and anti-Semitism that had existed for my parents in Europe.
Starting point is 00:07:09 I was growing up in Australia and certainly that was clearly seen as something completely unacceptable. There was also a fear of a further outbreak of war, which this time we expected would be nuclear because of the way the Second World War had ended. So there was this overshadowing of my childhood about the possibility that I would not live to grow up and be an adult because of a catastrophic nuclear war. There was a film, On the Beach, made in which the city that I was living in, Melbourne, was the last place on earth to have living human beings because of the nuclear cloud that was spreading from the north. So that certainly gave us a bit of that sense. There were clearly many other legacies that we got of the war
Starting point is 00:08:04 as well, and the whole setup of the world and the Cold War that followed. All of that has its roots in the Second World War. You know, there is much beauty that comes from war. In a conversation with Eric Weinstein, he said, everything is great about war except all the death and suffering. Do you think there's something positive that came from the war? The mirror that it puts to our society, sort of the ripple effects on it, ethically speaking, do you think there are positive aspects to war?
Starting point is 00:08:41 I find it hard to see positive aspects in war. And some of the things that other people think of as positive and beautiful, I'm maybe questioning. So there's a certain kind of patriotism people say, you know, during wartime we all pull together, we all work together against the common enemy. And that's true. An outside enemy does unite a country. And in general, it's good for countries to be united and have common purposes. But it also engenders a kind of a nationalism and a patriotism that can't be questioned and that I'm more skeptical about. What about the brotherhood that people talk about from soldiers, the sort of counterintuitive, sad idea
Starting point is 00:09:30 that the closest that people feel to each other is in those moments of suffering, of being at the sort of the edge of seeing your comrades dying in your arms, that somehow brings people extremely closely together, suffering brings people closely together. How do you make sense of that? It may bring people close together, but there are other ways of bonding and being close to people, I think, without the suffering and death that war entails. Perhaps you could see, you could already hear the romanticized Russian in me. We tend to romanticize suffering just a little bit in our literature and culture and so on.
Starting point is 00:10:16 Could you take a step back and I apologize if it's a ridiculous question, but what is suffering? If you were trying to define what suffering is, how would you go about it? Suffering is a conscious state. They can be known as suffering for a being who is completely unconscious. And it's distinguished from other conscious states in terms of being one that considered just in itself, we would rather be with that. It's a conscious state that we want to stop if we're experiencing or we want to avoid having again if we've experienced it in the past. And that's, I say, emphasized for its own sake, because of course people will say, well, suffering strengthens
Starting point is 00:10:58 the spirit, it has good consequences. And sometimes it does have those consequences. And of course, sometimes we might undergo suffering. We set ourselves a challenge to run a marathon or climb a mountain, or even just to go to the dentist so that the toothache doesn't get worse, even though we know the dentist is going to hurt us to some extent. So I'm not saying that we never choose suffering, but I am saying that other things being equal, we would rather not be in that state of consciousness. Is the ultimate goal sort of, you have the new 10 year anniversary release of the life you
Starting point is 00:11:33 Can Save book, really influential book, we'll talk about it a bunch of times throughout this conversation, but do you think it's possible to eradicate suffering? Or is that the goal? Or do we want to achieve a kind of minimum threshold of suffering, and then keep a little drop of poison to keep things interesting in the world? In practice, I don't think we ever will eliminate suffering. So I think that little drop of poison, as you put it, or if you like, the contrasting dash of an unpleasant color, perhaps, something like that, in an otherwise harmonious and beautiful composition, that is going to always be there.
Starting point is 00:12:16 and otherwise, harmonious and beautiful composition, that is going to always be there. If you ask me whether in theory, if we could get rid of it, we should. I think the answer is whether in fact we would be better off, or whether in terms of by eliminating the suffering, we would also eliminate some of the highs, the positive highs. And if that's so, then we might be prepared to say it's worth having a minimum of suffering in order to have the best possible experiences as well.
Starting point is 00:12:51 Is there a relative aspect to suffering? So when you talk about eradicating poverty in the world, is this the more you succeed, the more the bar of what defines poverty raises, or is there at the basic human ethical level a bar that's absolute, that wants to get above it, then we can morally converge to feeling like we have eradicated poverty. I think they're both, and I think this is true for poverty as well as suffering. There's an objective level of suffering or of poverty where we're talking about objective indicators like you're constantly hungry, you don't, you can't get enough food, you're constantly cold, you can't get warm, you'll have some physical pains that you're never rid of.
Starting point is 00:13:51 I think those things are objective, but it may also be true that if you do get rid of that, and you get to the stage where all of those basic needs have been met, there may still be new forms of suffering that develop. And perhaps that's what we're seeing in the affluent societies we have: that people get bored, for example. They don't need to spend so many hours a day earning money to get enough to eat and shelter, so now they're bored, they lack a sense of purpose. That can happen. And that then is a kind of relative suffering that is distinct from the objective forms of suffering. But in your focus on eradicating suffering, you don't think about
Starting point is 00:14:37 the kind of interesting challenges and suffering that emerges in affluent societies. That's just not in your ethical philosophical brain is that of interest at all. It would be of interest to me if we had eliminated all of the objective forms of suffering, which I think are of as generally more severe and also perhaps easier at this stage any way to know how to eliminate. So yes, in some future state, when we've eliminated those objective forms of suffering, I would be interested in trying to eliminate the relative forms as well. But that's not a practical need for me at the moment. Sorry to linger on it because you kind of said it, but just is elimination the goal for the affluent society. So, do you see a suffering as a creative force?
Starting point is 00:15:31 Suffering can be a creative force. I think I'll repeating what I said about the highs and whether we need some of the lows to experience the highs. So it may be that suffering makes us more creative and we regard that as worthwhile. Maybe that brings some of those highs with it that we would not have had if we'd had no suffering. I don't really know. Many people have suggested that and I certainly can't have no basis for denying it. And if it's true, then I would not want to eliminate suffering completely.
Starting point is 00:16:08 But the focus is on the absolute, not to be cold, not to be hungry. Yes, that's at the present stage of where the world's population is. That's the focus. Talking about human nature for a second. Do you think people are inherently good, or do we all have good and evil in us that basically everyone is capable of evil based on the environment? Certainly, most of us have a potential for both good and evil. I'm not prepared to say that everyone is capable of evil, that maybe some people who even in the worst of circumstances would not be capable of it.
Starting point is 00:16:46 But most of us are very susceptible to environmental influences. So when we look at things that we were talking about previously, let's say that, what the Nazis did during the Holocaust, I think it's quite difficult to say I know that I would not have done those things, even if I were in the same circumstances as those who did them. Even if, let's say, I had grown up under the Nazi regime and had been indoctrinated with racist ideas, had also had the idea that I must obey orders, follow the commands of the Fura, plus of course, perhaps the threat that if I didn't do certain things, I might get sent to the Russian front and that would be a pretty grim fate.
Starting point is 00:17:37 I think it's really hard for anybody to say, nevertheless, I know I would not have killed those Jews or whatever else it was that What's your intuition? How many people will be able to say that? Truly, you'd be able to say it. I think very few, less than 10%. To me, it seems a very interesting and powerful thing to meditate on. So I've read a lot about the war, the World War II, and I can't escape the thought that I would have not been one of the 10%. Right. I have to say I simply don't know. I would like to hope that I would have been one of the 10% but I don't really have any basis for claiming that I would have been
Starting point is 00:18:20 different from the majority. Is it a worthwhile thing to contemplate? It would be interesting if we could find a way of really finding these answers. There obviously is quite a bit of research on people during the Holocaust, on how ordinary Germans got led to do terrible things. And there are also studies of the resistance, some heroic people in the white rose group, for example, who resisted even though they were likely to die for it. But I don't know whether these studies really can answer your larger question of how many people would have been capable of doing that. Well, sort of the reason I think it's interesting is in the world, as you've described, you know,
Starting point is 00:19:11 when there are things that you'd like to do that are good, that are objectively good, it's useful to think about whether I'm not willing to do something, or I don't even, I'm not willing to acknowledge something as I don't even, I'm not willing to acknowledge something as good and the right thing to do because I'm simply scared of putting my life, of damaging my life in some kind of way. And that kind of thought exercises helpful to understand what is, what is the right thing in my current skill set and the capacity to do. So there's things that are convenient and capacity to do. So if there's things that are convenient, and there's, I wonder if there are things that are highly inconvenient, where I would have to experience
Starting point is 00:19:51 derision or hatred or death or all those kinds of things, but it's truly the right thing to do. And that kind of balance is, I feel like in America, we don't have, it's difficult to think in the current times. It seems easier to put yourself back in history when you can sort of objectively contemplate whether how willing you are to do the right thing when the cost is high. True, but I think we do face those challenges today, and I think we can still ask ourselves those questions. So one stand that I took more than 40 years ago now was to stop eating meat, come of vegetarian at a time, when you hardly met anybody who was a vegetarian, or if you did,
Starting point is 00:20:39 they might have been Hindu, or they might have had some weird theories about meat and health and I know thinking about making that decision. I was convinced that it was the right thing to do, but I still did have to think Our all my friends gonna think that I'm a crank Because I'm now refusing to eat meat So, you know, I'm not saying there were any terrible sanctions, obviously, but I thought about that and I guess I decided, well, I still think this is the right thing to do and if I'll put up with that, if it happens. And one or two friends were clearly uncomfortable with that decision, but that was pretty minor
Starting point is 00:21:20 compared to the historical examples that we've been talking about. But other issues that we have around to, like global poverty and what we ought to be doing about that, is another question where people I think can have the opportunity to take a stand on what's the right thing to do now. Climate change would be a third question where again, people are taking a stand, you know, can look at Greta Tunberg there and say, well, I think it must have taken a lot of courage for a schoolgirl to say, I'm going to go on strike about climate change and see what happens.
Starting point is 00:21:58 Yeah, especially in this divisive world, she gets exceptionally huge amounts of support and hatred both. She's a very difficult. 14 years to operate in. In your book, ethics in the real world, amazing book. People should check it out. Very easy read. 82 brief essays on things that matter. One of the assays asks, should robots have rights? You've written about this. So let me ask, should robots have rates? You've written about this, so let me ask, should robots have rates? If we ever develop robots capable of consciousness,
Starting point is 00:22:33 capable of having their own internal perspective on what's happening to them, so that their lives can go well or badly for them, then robots should have rights. Until that happens, they shouldn't. So consciousness essentially a prerequisite to suffering. So everything that possesses consciousness is capable of suffering put another way. And if so, what is consciousness? put another way. And if so, what is consciousness? I certainly think that consciousness is a prerequisite for suffering. You can't suffer if you're
Starting point is 00:23:12 not conscious. But is it true that every being it is conscious will suffer or has to be capable of suffering? I suppose you could imagine a kind of consciousness, especially if we can construct it out officially, that's capable of experiencing pleasure, but just automatically cuts at the consciousness when they're suffering. So they're like, you know, instant anesthesia as soon as something is going to cause suffering. So that's possible. But doesn't exist as far as we know on this planet yet. You asked what is consciousness. Philosophers often talk about it as they're being a subject of experiences. So, you and I and everybody listening to this is a subject of experience. There is a conscious subject
Starting point is 00:24:03 who is taking things in, responding to it in various ways, feeling good about it, feeling bad about it. And that's different from the kinds of artificial intelligence we have now. I take out my phone, I ask Google directions to where I'm going. Google gives me the directions and I choose to take a different way. Google doesn't care. It's not like I'm offending Google or anything like that. There is no subject of experience is there. And I think that's the indication that Google AI we have now is not conscious, or at least that level of AI is not conscious.
Starting point is 00:24:44 And that's the way to think about it. Now, it may be difficult to tell, of course, whether a certain AI is or isn't conscious. It may mimic consciousness, and we can't tell if it's only mimicking it, or if it's the real thing. But that's what we're looking for. Is there a subject of experience, a perspective on the world, from which things can go well or badly from that perspective. So our idea of what suffering looks like comes from our just watching our cells when we're in pain sort of um, when we're experiencing pleasure, it's not only pleasure and pain. Yes. Yeah, so and then uh, you could actually push back on us,
Starting point is 00:25:25 but I would say that's how we kind of build an intuition about animals is we can infer the similarities between humans and animals and so infer that they're suffering or not based on certain things and their conscious or not. What if robots, you mentioned Google maps and I've done this experiment so I work in robotics just for my own self or I have several Roomba robots and I play with different speech interaction voice-based interaction. And if the Roomba or the robot or Google Maps shows any size of pain like screaming or moaning or
Starting point is 00:26:08 being displeased by something you've done that in my mind. I can't help but immediately upgraded and Even when I myself programmed it in just having another entity that's now for the moment disjointed for me showing size of pain makes me feel like it is conscious. I immediately, then whatever, I immediately realize that it's not obviously, but that feeling is there. I guess, what do you think about a world where Google Maps and Roombas are pretending to be conscious, and we, the descendants of apes are not smart enough to realize they're not, or whatever, or that is conscious, they appear to be conscious, and so you then have to give them rights. The reason I'm asking that is that kind of capability may be closer than we realize.
Starting point is 00:27:09 Yes, that kind of capability may be closer, but I don't think it follows that we have to give them rights. I suppose the argument for saying that in those circumstances we should give them rights is that if we don't we'll harden ourselves Against other beings who are not robots and who really do suffer That's a possibility that you know if we get used to looking at it being suffering and Saying yeah, we don't have to do anything about that that being doesn't have any rights maybe we'll feel the same about animals, for instance. And interestingly, among philosophers and thinkers who
Starting point is 00:27:53 denied that we have any direct duties to animals, and this includes people like Thomas Aquinas and Emanuel Kant, they did say, yes, but still it's better not to be cruel to them, not because of the suffering we're inflicting on the animals, but because if we are, we may develop a cruel disposition and this will be bad for humans, you know, because we would more likely be cruel to other humans and that would be wrong. So you don't accept that. I don't accept that as the basis of the argument for why we shouldn't be cruel to animals. I think the basis of the argument for why we shouldn't be cruel to animals is just that
Starting point is 00:28:32 we're inflicting suffering on them and the suffering is a bad thing. But possibly I might accept some sort of parallel of that argument as a reason why you shouldn't be cruel to these robots that mimic the symptoms of pain, if it's going to be harder for us to distinguish. I would venture to say, I'd like to disagree with you and with most people, I think, at the risk of sounding crazy, I would like to say that if that room was dedicated to faking the consciousness and the suffering, I think it will be impossible for us. I would like to apply the same argument as with animals to robots that they deserve right
Starting point is 00:29:18 in that sense. Now we might outlaw the addition of those kinds of features into rumors, but once you do, I think I'm quite surprised by the upgrade in consciousness that the display of suffering creates. It's a totally open world, but I'd like to just sort of the different scene animals and other humans is that in the robot case, we've added it in ourselves. Therefore, we can say something about how real it is. But I would like to say that the display of it is what makes it real.
Starting point is 00:29:59 And there's some, I'm not a philosopher, I'm not making that argument, but I at least like to add that as a possibility. And I've been surprised by it. It's all I'm trying to sort of articulate poorly, I suppose. So there is a philosophical view has been held about humans,
Starting point is 00:30:17 which is rather like what you're talking about. And that's behaviorism. So behaviorism was employed both in psychology. People like B. of Skinner was a famous behaviorist. But in psychology, it was more a kind of a, what is it that makes this science where you need to have behavior because that's what you can observe. You can't observe consciousness. But in philosophy, the view is defended by people like Gilbert Ryle,
Starting point is 00:30:41 who was a professor of philosophy at Oxford wrote a book called The Concept of Mind, in which, you know, in this kind of phase, this is in the 40s of linguistic philosophy, he said, well, the meaning of a term is its use, and we use terms like so and so is in pain when we see somebody writhing or screaming or trying to escape some stimulus. And that's the meaning of the term. So that's what it is to be in pain. And you point to the behavior. And Norman Malcolm, who was another philosopher in the school from Cornell, had the view that,
Starting point is 00:31:20 so what is it to dream? After all, we can't see other people's dreams. Well, when people wake up and say, I've just had a dream of, you know, here I was, undressed walking down the main street or whatever it is you've dreamt. That's what it is to have a dream. It's to basically to wake up and recall something. So you could apply this to what you're talking about and say, so what it is to be in pain is to exhibit these symptoms of pain behavior. And therefore, these robots are in pain. That's what the word means.
Starting point is 00:31:54 But nowadays, not many people think that Royals kind of philosophical behaviorism is really very plausible. So I think they would say the same about your view. So yeah, I just spoke with Noam Tromsky, who basically was part of dismantling the behaviors movement. But and I'm with that, 100% for studying human behavior, but I am one of the few people in the world, who has made room bus scream in pain. And I just don't know what to do with that empirical evidence because it's hard to sort of philosophically agree. But the only reason I philosophically
Starting point is 00:32:39 agree in that case is because I was the programmer. But if somebody else was a programmer, I'm not sure I would be able to interpret that well. So it's, I think it's a new world that I was just curious what your thoughts are. For now, you feel that the display of the what we can kind of intellectual say is a fake display of suffering is not suffering. That's right. That would be my view. But that's consistent, of course, with the idea that it's part of our nature to respond to this display, if it's reasonably authentically done. And therefore it's understandable that people would feel this. And maybe, as I said, it's even a good thing that they do feel it. And you wouldn't want to harden yourself against it because then you might harden yourself against beings who are really suffering. But there's a line, you know, so you said once an artificial
Starting point is 00:33:38 general intelligence system, a human level intelligence system become conscious. I guess if I could just linger on it. Now I've wrote really dumb programs that just say things that I told them to say, but how do you know when a system like Alexa was just officially complex, that you can't introspect to how it works, starts giving you signs of consciousness through natural language. There's a feeling there's another entity there that's self-aware, that has a fear of death, a mortality,
Starting point is 00:34:12 that has awareness of itself that we kind of associate with other living creatures. I guess I'm sort of trying to do the slippery slope from the very naive thing where I started into something where it's sufficiently a black box, the way it's starting to feel like it's conscious. Where's that threshold? Or you would start getting uncomfortable with the idea of robot suffering, do you think?
Starting point is 00:34:42 I don't know enough about the programming that we're going to this really to answer this question, but I presume that somebody who does know more about this could look at the program and see whether we can explain the behaviors in a parsimonious way that doesn't require us to suggest that some sort of consciousness has emerged, or alternatively, whether you're in a situation where you say, I don't know how this is happening. The program does generate a kind of artificial general intelligence, which is autonomous, you know, starts to do things itself and is autonomous of the basic programming
Starting point is 00:35:25 that set it up. And so it's quite possible that actually we have achieved consciousness in a system of artificial intelligence. Sort of the approach that I work with most of the communities really excited about now is with learning methods, so machine learning. And the learning methods are unfortunately are not capable of revealing, which is why somebody like Nome Chomsky criticizes them.
Starting point is 00:35:51 You have create powerful systems that are able to do certain things without understanding the theory, the physics, the science of how it works. And so it's possible, those are the kinds of methods that succeed. We won't be able to know exactly sort of try to reduce, try to find whether this thing is conscious or not, this thing is intelligent or not. It's simply giving, when we talk to it, it displays wit and humor and cleverness and emotion and fear. And then we won't be able to say, where in the billions of nodes, neurons in this artificial neural network is the fear coming from.
Starting point is 00:36:37 So in that case, that's a really interesting place where we do now start to return to behaviorism and say, Yeah, that's, that's, that is an interesting issue. I would say that if we have serious dads, and think it might be conscious, then we ought to try to give it the benefit of the dad, just as I would say with animals, we, I think we can be highly confident that just as I would say with animals, I think we can be highly confident that vertebrates are conscious, but when we get down and some invertebrates like the octopus, but with insects, it's much harder to be confident of that. I think we should give them the benefit of the doubt where we can, which means I think it would be wrong to torture an insect,
Starting point is 00:37:26 but this doesn't necessarily mean it's wrong to slap a mosquito that's about to bite you and stop you getting to sleep. So I think you try to achieve some balance in these circumstances of uncertainty. If it's okay with you, if we can go back just briefly. So 44 years ago, like you mentioned, 40 plus years ago, you've read an animal liberation, the classic book that started, that launched, it was a foundation of the movement of animal liberation. Can you summarize the key set of ideas that are underpinning that book? Certainly, the key idea that underlies that book is the concept of speciesism,
Starting point is 00:38:09 which I did not invent that term. I took it from a man called Richard Ryder, who was in Oxford when I was and I saw a pamphlet that he'd written about experiments on chimpanzees that used that term. But I think I contributed to making it philosophically more precise and to getting it into a broader audience. And the idea is that we have a bias or prejudice against taking seriously the interests of beings who are not members of our species, just as in the past. Europeans, for example, had a bias against taking seriously the interests of Africans, racism, and men have had a bias against taking seriously the interests of women, sexism. So I think something analogous, not completely identical, but something analogous, goes on and has gone on for a very long time with the way humans see themselves vis-a-vis animals. We see ourselves as more important. We see animals as
Starting point is 00:39:13 existing to serve our needs in various ways and you can find this very explicit in earlier philosophers from Aristotle through to Kant and others. And either we don't need to take their interest into account at all, or we can discount it because they're not humans. They can a little bit, but they don't count nearly as much as humans do. My book I use that that attitude is responsible for a lot of the things that we do to animals that are wrong confining them indoors in very crowded cramped conditions in factory farms to produce meat or eggs or milk more cheaply using them in some research that's by no means essential for survival or well-being
Starting point is 00:40:04 and a whole lot, you know, some of the sports and things that we do to animals. So I think that's unjustified because I think the significance of pain and suffering does not depend on the species of the being who is in pain or suffering. Any more than it depends on the race or sex of the being who is in pain or suffering any more than it depends on the race or sex with the being who is in pain or suffering. And I think we ought to rethink our treatment of animals along the lines of saying if the pain is just as great in animal then it's just as bad that it happens as if it were a human. Maybe if I could ask, I apologize, hopefully it's not a ridiculous question, but so as far as we know, we cannot communicate with animals through natural language, but we would be able to
Starting point is 00:40:55 communicate with robots. So returning to sort of a small parallel between perhaps animals in the future of AI, if we do create an AGI system or as we approach creating that AGI system, what kind of questions would you ask her to try to try to intuit whether, whether there is consciousness, whether or more importantly, whether there's capacity to suffer. I might ask the AGI what she was feeling.
Starting point is 00:41:35 Well, does she have feelings? And if she says yes to describe those feelings, to describe what they were like, to see what the phenomenal account of consciousness is like. That's one question. I might also try to find out if the AGI has a sense of itself. So for example, the idea, would you, we often ask people, so suppose you're in a car accident and your brain were transplanted into someone else's body, do you think you would survive or would it be the
Starting point is 00:42:11 person whose body was still surviving, you know, your body having been destroyed? And most people say, I think I would, you know, if my brain was transplanted along with my memories and so on, I would survive. So we could ask a GI, those kinds of questions, if they were transferred to a different piece of hardware, would they survive? What would survive? Get that sort of stuff. Sort of on that line, another perhaps absurd question, but do you think having a body is necessary for consciousness? So do you think digital beings can suffer? Presumably digital beings need to be running on some kind of hardware, right?
Starting point is 00:42:53 Yeah, that ultimately boils down to, but this is exactly what you said is moving the brain for in place. So you could move it to a different kind of hardware. And I could say, look, you know, your hardware is, needs is getting worn out. We're going to transfer you to a different kind of hardware. And I could say, look, you know, your hardware is getting worn out. We're going to transfer you to a fresh piece of hardware. So we're going to shut you down for a time. But don't worry, you know, you'll be running very soon on a nice, fresh piece of hardware. And you could imagine this conscious AGO saying, that's fine, I don't mind having a little rest. Just make sure you don't lose me or something like that. Yeah, I mean, that's an interesting thought that even with us humans, the suffering is in the software. We right now don't know how to repair the hardware.
Starting point is 00:43:35 Yeah. But we're getting better at it and better in the idea. I mean, a lot of some people dream about one day being able to transfer certain aspects of the software to another piece of hardware. What do you think just on that topic, there's been a lot of exciting innovation in brain computer interfaces. I don't know if you're familiar with the companies like neural link with Elon Musk, communicating both ways from a computer, being able to send, activate neurons, and being able to read spikes from neurons, with the dream of being able to expand, sort of, increase the bandwidth at which your brain can
Starting point is 00:44:19 like look up articles on Wikipedia, kind of thing, to expand the knowledge capacity of the brain. Do you think that notion is that interesting to you as the expansion of the human mind? Yes, that's very interesting. I'd love to be able to have that increased bandwidth. And I want better access to my memory, I have to say too, is yet older, I talked to my wife about things that we did
Starting point is 00:44:46 20 years ago or something. Her memory is often better about particular events where we were, who was at that event, what did he or she wear even, she may know, and I have not the faintest idea about this. But perhaps it's somewhere in my memory, and if I had this extended memory, I could search that particular year and rerun those things. I think that would be great. In some sense, we already have that by storing so much of our data online, like pictures of different. Yes. Well, Gmail is fantastic for that because people email me as if they know me well,
Starting point is 00:45:16 and I haven't got a clue who they are, but then I search for their name. I just emailed me in 2007, and I know who they are now. Yeah. So we're already, we're taking the first steps already. So on the flip side of AI, people like Stuart Russell and others focus on the control problem, value alignment in AI, which is the problem of making sure we build systems that align to our own values, our ethics. Do you think sort of high level, how do we go about building systems?
Starting point is 00:45:48 Do you think is it possible that align with our values, align with our human ethics or living being ethics? Presumably, it's possible to do that. I know that a lot of people who think that there's a real danger that we won't, that we're more or less accidentally lose control of AGI. Do you have that here? You're a self-personally. I'm not quite sure what to think. I talked to philosophers like Nick Bostrom and Toby Ord and they think that this is a real
Starting point is 00:46:20 problem where you need to worry about. Then I talk to people who work for Microsoft or DeepMind or somebody, and they say, no, we're not really that close to producing AGI, you know, superintelligence. So, if you look at Nick Bostrom's sort of the arguments that it's very hard to defend. So, I'm of course an IMSL engineer AS system, so I'm more with the deep mind folks where it seems that we're really far away. But then the counter argument is, is there any fundamental reason that we'll never achieve it? And if not, then eventually there'll be a dire existential risk, so we should be concerned about it. And do you have, do you have, do you find that argument at all appealing in this domain or any domain that eventually
Starting point is 00:47:10 this will be a problem, so we should be worried about it? Yes, I think it's a problem. I think there's, that's a valid point. Of course, when you say eventually, that raises the question, how far off is that? And is there something that we can do about it now? Because if we're talking about, this is going to be 100 years in the future. And you consider how rapidly our knowledge of artificial intelligence has grown in the last 10 or 20 years. It seems unlikely that there's anything much we could do now that would influence
Starting point is 00:47:46 whether this is going to happen 100 years in the future. People in 80 years in the future would be in a much better position to say, this is what we need to do to prevent this happening than we are now. So to some extent, I find that reassuring, but I'm all in favor of some people doing research into this to see if indeed it is that far off or if we are in a position to do something about it sooner. I'm I'm very much of the view that extinction is a terrible thing and therefore even if the risk of extinction is very small, if we can reduce that risk, that's something that we ought to do.
Starting point is 00:48:28 My disagreement with some of these people who talk about long-term risks, extinction risks, is only about how much priority that should have as compared to present questions. No, such a, if you look at the math of it from a huge, authoritarian perspective, if it's existential risk so everybody dies, it feels like an infinity in the math equation. That makes the math with the priorities difficult to do.
Starting point is 00:48:56 That if we don't know the timescale, and you can legitimately argue that it's non-zero probability that it'll happen tomorrow. That how do you deal with these kinds of existential risks, like from nuclear war, from nuclear weapons, from biological weapons, from I'm not sure if global warming falls into that category because global warming is a lot more gradual. And people say it's not an existential risk, because there'll always be possibilities of some humans existing, froming Antarctica or more than Siberia or something of that sort. Well, you don't find the sort of the complete existential risks of fundamental
Starting point is 00:49:37 like an overriding part of the equations of ethics. I wouldn't. Yeah, certainly, if you're treated as an infinity, then it plays havoc with any calculations. But arguably we shouldn't. I mean, one of the ethical assumptions that goes into this is that the loss of future lives, that is of merely possible lives, of beings who may never exist at all,
Starting point is 00:50:03 is in some way comparable to the sufferings or deaths of people who do exist at some point. And that's not clear to me. I think there's a case for saying that, but I also think there's a case for taking the other view. So that has some impact on it. Of course, you might say, ah, yes, but still, if there's some uncertainty about this and the costs of extinction are infinite,
Starting point is 00:50:29 then still it's gonna overwhelm everything else. But I suppose I'm not convinced of that. I'm not convinced that it's really infinite here. And even Nick Bostrom in his discussion of this doesn't claim that there'll be an infinite number of lives left. What is a 10 to the 56th or something? It's a vast number that I think he calculates. This is assuming we can upload consciousness onto these, you know, digital forms and therefore there'll be much more energy efficient, but he calculates the amount of energy in the
Starting point is 00:51:04 universe or something like that. So the number is a vast but non-infinite which gives you some prospect maybe of resisting some of the argument. The beautiful thing with the next arguments is he quickly jumps from the individual scale to the universal scale which is just awe inspiring to think of. When you think about the entirety of the span of time of the universe, it's both interesting from a computer science perspective, AI perspective, and from an ethical perspective, the idea of utilitarianism. Because you say what is utilitarianism? Utilitarianism is the ethical view that the right thing to do is the act that has the greatest expected utility where what that means is it's the
Starting point is 00:51:49 act that will produce the best consequences discounted by the odds that you won't be able to produce those consequences that something will go wrong. But in simple case, let's assume we have certainty about what the consequences of actions will be, then the right action is the action that will produce the best consequences. Is that always, and by the way, there's a bunch of nuanced stuff that you talk with Sam Harris on this podcast on the people should go listen to. It's great. There's like two hours of moral philosophy discussion, but is that an easy calculation? No, it's a difficult calculation, and actually there's one thing that I need to add.
Starting point is 00:52:27 And that is utilitarians, certainly the classical utilitarians, think that by best consequences, we're talking about happiness and the absence of pain and suffering. There are other consequentialists who are not really utilitarians who say there are different things that could be good consequences. Justice, freedom, you know, human dignity, knowledge, they all kind of good consequences too. And that makes the calculations even more difficult
Starting point is 00:52:55 because then you need to know how to balance these things off. If you are just talking about well-being using that term to express happiness and the absence of suffering. I think that the calculation becomes more manageable in a philosophical sense. It's still in practice. We don't know how to do it. We don't know how to measure quantities of happiness and misery. We don't know how to calculate the probabilities, the different actions will produce this or that. So at best, we can use it as a rough guide to different actions. And one way we have to
Starting point is 00:53:33 focus on the short term consequences because we just can't really predict all of the longer term ramifications. So what about the extreme suffering of very small groups? So the utilitarianism is focused on the overall aggregate, right? How do you say yourself or your utilitarian, you just find that? You do a utilitarian. sort of, do you, what do you make of the difficult ethical, maybe poetic suffering of very few individuals? I think it's possible that that gets overridden by benefits to very large numbers of individuals. I think that can be the right answer.
Starting point is 00:54:20 But before we conclude that is the right answer, we have to know how severe the suffering is and how that compares with the benefits. So I tend to think that extreme suffering is worse than or is further, if you like, below the neutral level, than extreme happiness or bliss is above it. So when I think about the worst experience as possible and the best experience as possible, I don't think of them as equidistant from neutral. So like it's a scale that goes from minus 100 through zero as a neutral level to plus 100. Because I know that I would not exchange an hour of my most pleasurable experiences
Starting point is 00:55:06 for an hour of my most painful experiences. Even I wouldn't have an hour of my most painful experiences, even for two hours or ten hours of my most painful experiences. Can I say that correctly? Yeah, yeah. Maybe 20 hours then. Yeah, well, what's the exchange rate? So that's the question.
Starting point is 00:55:24 What is the exchange rate? But that's the question, what is the exchange rate? But I think it can be quite high. So that's why you shouldn't just assume that, you know, it's okay to make one person suffer extremely in order to make two people much better off. It might be a much larger number. But at some point, I do think you should aggregate and the result will be, even though
Starting point is 00:55:48 violates our intuitions of justice and fairness, whatever it might be, of giving priority to those who are worse off. At some point, I still think that will be the right thing to do. Yeah, some complicated nonlinear function. Can I ask a sort of out there question? The more we put our data out there, the more we're able to measure a bunch of factors of each of our individual human lives. And I could foresee the ability to estimate well-being of whatever we public, we together collectively agree and it's a good objective for from a utilitarian perspective.
Starting point is 00:56:25 Do you think it'll be possible and is a good idea to push that kind of analysis to make then public decisions perhaps with the help of AI that, you know, here's a tax rate, here's a tax rate at which well-being will be optimized. Yeah, that would be great if we really knew that, if we could calculate that. No, but do you think it's possible to converge towards an agreement amongst humans, towards an objective function as just a hopeless pursuit? I don't think it's hopeless. I think it would be difficult to get converged towards agreement, at least at present, because some people would say, I've got different views about justice, and I think it's difficult would be difficult to get Converge towards agreement at least at present because some people say you know
Starting point is 00:57:06 I've got different views about justice and I think you ought to give Priority to those who are worse off Even though I acknowledge that the gains that the worse off are making are less than the gains that those who are sort of medium badly off Could be making So we still have all of these intuitions that we argue about. So I don't think we would get agreement, but the fact that we wouldn't get agreement doesn't show that there isn't a right answer there. Do you think who gets to say what is right and wrong? Do you think there's place for ethics
Starting point is 00:57:40 oversight from the government? So I'm thinking in the case of AI, overseeing what is what kind of decisions AI can make or not. But also if you look at animal rights or rather not rights or perhaps rights, but the ideas you've explored in animal liberation, who gets to, so you eloquently, beautifully write in your book that this, you know, we shouldn't do this, but is there some harder rules that should be imposed? Or is this a collective thing we converse towards a society and thereby make the better and better ethical decisions? Politically, I'm still a Democrat, despite looking at the flaws in democracy and the way it doesn't work always very well.
Starting point is 00:58:27 So I don't see a better option than allowing the public to vote for governments in accordance with their policies. And I hope that they will vote for policies that reduce the suffering of animals and the reduce the suffering of distant humans, whether geographically distant or distant because they're future humans. But I recognize that democracy isn't really well set up to do that. And in a sense, you could imagine a wise and benevolent, you know, omnibenevolent leader who would do that better than democracy could, but in the world in which we live, it's difficult to imagine that this leader isn't going to be corrupted by a variety of influences, you know, we've had so many examples of people who've taken power with good intentions and then have ended
Starting point is 00:59:26 up being corrupt and favoring themselves. So I don't know, that's why I say I don't know that we have a better system than democracy to make these decisions. Well, so you also discuss effective altruism, which is a mechanism for going around government for putting the power in the hands of the people to donate money towards causes to help, you know, you know, remove the middleman and give it directly to the cause of the care about sort of Maybe this is a good time to ask, you've 10 years ago wrote the life you can save, that's now, I think available for free online. That's right, you can download either the ebook or the audiobook free from thelifeyouconsafe.org. And what are the key ideas that you present in the book? The main thing I want to do in the book is to make people realize that it's not difficult
Starting point is 01:00:27 to help people in extreme poverty, that there are highly effective organizations now that are doing this, that they've been independently assessed and verified by research teams that are expert in this area, and that it's a fulfilling thing to do, for at least part of your life, we can't all be saints, but at least one of your goals should be to really make a positive contribution to the world, and to do something to help people who through no fault of their own are in very dire circumstances and living a life that is barely or perhaps not at all a decent life for a human being to live. So you describe a minimum ethical standard of giving. What advice would you give to people that want to be effectively altruistic in their life,
Starting point is 01:01:23 like live an effective altruistic in their life, like live an effect of altruism life. There are many different kinds of ways of living as an effective altruist. And if you're at the point where you're thinking about your long-term career, I'd recommend you take a look at a website called 80,000Hours, 80,000Hours.org, which looks at ethical career choices. And they range from, for example, going to work on Wall Street so that you can earn a huge amount of money and then donate most of it to effective charities, to going to work for a really good nonprofit organization so that you can directly use
Starting point is 01:01:59 your skills and ability and hard work to further a good cause, or perhaps going into politics, maybe small chances, but big payoffs in politics. Go to work in the public service where if you're talented you might rise to a high level where you can influence decisions. Do research in an area where the payoffs could be great. There are a lot of different opportunities, but too few people are even thinking about those questions. They're just going along in some sort of preordained rut to particular careers.
Starting point is 01:02:32 Maybe they think they'll earn a lot of money and have a comfortable life, but they may not find that as fulfilling as actually knowing that they're making a positive difference to the world. What about in terms of, so that's like long term, 80,000 hours, shorter term giving part of, well actually it's a part of that, you're going to walk at Wall Street,
Starting point is 01:02:54 if you would like to give a percentage of your income that you talk about and life you can save that. I was looking through it quite a compelling, I'm just a dumb engineer. So I like their simple rules. There's a nice percentage. Okay. So I do actually set out suggested levels of giving because people often ask me about this.
Starting point is 01:03:17 A popular answer is give 10% the traditional ties that's recommended in Christianity and also Judaism. But why should it be the same percentage irrespective of your income? Tax scales reflect the idea that the more income you have, the more you can pay tax. I think the same is true in what you can give. I do set out a progressive donor scale, which starts at 1% for people on modest incomes and rises to 33 and a third percent for people who are really earning a lot. And my idea is that I don't think any of these amounts really impose real hardship on
Starting point is 01:03:57 people because they are progressive and good to income. So I think anybody can do this and can know that they're doing something significant to play their part in reducing the huge gap between people in extreme poverty in the world and people living afternoon lives. And aside from it being an ethical life, it's one need to find more fulfilling because like there's something about our human nature that or some of our human nature's maybe most of our human nature that enjoys doing the the ethical thing. Yeah, so I make both those arguments that it it isn't ethical requirement and that kind of world we live in today to help people in great need when we can easily do so. But also that it is a rewarding thing and there's good psychological research showing that people who give more tend to be more satisfied with their lives.
Starting point is 01:04:57 And I think this has something to do with having a purpose that's larger than yourself. And therefore, if you're like, never being bored sitting around, oh, you know, what will I do next? I've got nothing to do in a world like this. There are many good things that you can do and enjoy doing them. Plus, you're working with other people in the effective altruism movement, who are forming a community of other people with similar ideas, and they tend to be interesting, thoughtful and good people as well. Having friends of that sort is another big contribution to having a good life.
Starting point is 01:05:33 So we talked about big things that are beyond ourselves, but we're also just human and mortal. Do you ponder your own mortality? Is there insights about your philosophy, the ethics that you gain from pondering your own mortality? Clearly, you know, as you get into your 70s, you can't help thinking about your own mortality. But I don't know that I have great insights into that from my philosophy. I don't think there's anything after the death of my body, you know, assuming that we won't be able to upload my mind into anything at the time when I die. So I don't think there's any afterlife for anything to look forward to in that sense. The fear death, so if you look at Ernest Becker and describing the motivating aspects of our ability to be
Starting point is 01:06:29 cognizant of our mortality. Do you have any of those elements in your driving your motivation life? I suppose the fact that you have only a limited time to achieve the things that you want to achieve gives you some sort of motivation to get going and achieving them. And if we thought we were immortal, we might say, I can put that off for another decade or two. So there's that about it. But otherwise, no, I'd rather have more time to do more.
Starting point is 01:06:59 I'd also like to be able to see how things go that I'm interested in. Is climate change going to turn out to be as dire as a lot of scientists say that it is going to be? Will we somehow scrape through with less damage than we thought? I'd really like to know the answers to those questions, but I guess I'm not going to. Well, you said there's nothing afterwards. So let me ask the more absurd question, what do you think is the meaning of it all? I think the meaning of life is the meaning we give to it. I don't think that we were brought into the universe for any kind of larger purpose. But given that we exist, I think we can recognize that some things are objectively bad. Extreme suffering
Starting point is 01:07:48 is an example and other things are objectively good, like having a rich, fulfilling, enjoyable, pleasurable life. And we can try to do our part in reducing the bad things and increasing the good things. So one way, the meaning is to do a little bit more of the good things, objectively good things and a little bit less of the bad things. Yes, so do as much of the good things as you can and as little of the bad things. Peter, beautifully put, I don't think there's a better place to end it. Thank you so much for talking to me today.
Starting point is 01:08:22 Thanks very much. It's been really interesting talking to you. Thanks for listening to this conversation with Peter Singer. And thank you to our sponsors, CashApp and Masterclass. Please consider supporting the podcast by downloading CashApp and using the code Lex Podcasts and signing up at masterclass.com slash Lex. Click the links by all the stuff. It's the best way to support this podcast and the journey I'm on, my research, and start up. If you enjoy this thing, subscribe on YouTube,
Starting point is 01:08:55 review it with 5,000 Apple Podcasts, support on Patreon, or connect with me on Twitter, Alex Friedman's build, without the e, just F-R-I-D-M-A-N. And now let me leave you with some words from Peter Singer. Well one generation finds ridiculous, the next accepts. And the third shudders when looks back at what the first did.
Starting point is 01:09:19 Thank you for listening and hope to see you next time. you
