Modern Wisdom - #291 - John Danaher - What Happens If Robots Automate The World?

Episode Date: March 6, 2021

John Danaher is an author and a lecturer at the National University of Ireland. Having a job is valorised in modern society. But if our jobs are taken over by robots, will we find a sense of purpose in other things outside of work, or are we just going to lead meaningless lives? Expect to learn why technological unemployment might be desirable, what a cyborg utopia might look like, why John thinks losing work might not result in loss of purpose, the risks of sacrificing human values in pursuit of utopia and much more... Sponsors: Get 20% discount & free shipping on your Lawnmower 3.0 at https://www.manscaped.com/ (use code MODERNWISDOM) Extra Stuff: Buy Automation & Utopia - https://amzn.to/3rhiXzL  Follow John on Twitter - https://twitter.com/JohnDanaher  Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Hello, wonderful people. Welcome back. My guest today is John Danaher. We're talking about what happens if robots automate the world. Having a job is valorized in modern society, but if our jobs are taken over by robots, will we find a sense of purpose in other things outside of work, or are we just doomed to lead a meaningless life? So today, expect to learn why technological unemployment might be desirable, what a cyborg utopia might look like, why John thinks losing work might not result in a loss of purpose, the risks of sacrificing human values in pursuit of utopia and much more. Just all the robot episodes at the moment, you know, whether it's from a sex robot to a self-driving
car to robot overlords and us existing in a vat somewhere. It's cool, I like thinking about this stuff, I like thinking about the future and where society might end up. Lots of challenging and difficult assumptions that we need to get past around what it means for the human race to be the lords of this earth. Anyway, for now, it's time to learn about our robot overlords with the wise and wonderful John Danaher. John, welcome to the show. Thanks for having me on. It's a great honor and privilege to be here.
Before we get started, there is a very famous Brazilian Jiu Jitsu teacher who shares your name. Did you know this? I am all too aware of this fact, yeah. Yeah, so I'd posted a big list of upcoming guests on my Instagram, and people were like, mate, I didn't know that you're interested in Brazilian Jiu Jitsu. And I'm thinking, does a guy who talks about, like, automation and robots also do Brazilian Jiu Jitsu training? But it turns out that it's just two different interesting people.
Your fans probably got very excited when they thought it was the other John Danaher. He's got a much higher profile than I do. Little do they know that they actually wanted to learn about robots. So what are we going to be talking about today? What's the topic of our discussion? Yes, so we're going to talk about this book that I wrote a couple of years ago on automation and utopia, which is kind of a very abstract philosophical look at the meaning of life in a post-work world. If robots take all our jobs away, what are we going to do with our time, and will we be able to find meaning and flourish? These sorts of questions are very common at the moment. I think it seems like every other week someone's referring to when the robots take our jobs in a news article.
Starting point is 00:02:56 Yeah, I mean, it's been a fairly persistent theme in popular media and academic discussions as well for the past decade or so. I'd say it really took off after the 2008 financial crisis and the subsequent recession. Ironically, it was probably starting to evaway a little bit more recently due to the uptick in the economy in the past couple of years, but I think COVID-19 has really kickstarted the discussion
once more. An interesting thing I heard today on Ben Shapiro's show was concerns about Joe Biden raising the US minimum wage to $15 encouraging many employers to replace workers with automation, precisely for that reason: if it costs X, 10,000 or 200,000 pounds, to install the robot system, then the more you raise the minimum wage, the more competitive that becomes. Yeah, I mean, so I think economics 101 would tell you that if you raise the price of anything, including the price of labor, you make it less attractive for employers. There
are... I don't know exactly whether the rise in the minimum wage in the US to $15 would kickstart a wave of automation, or whether we're in fact in the midst of a wave anyway and this is just a minor nudge along the path. There was an interesting World Economic Forum report a couple of months ago about the impact of COVID-19 on automation, which had a survey of business leaders around the world and the percentage of them that were looking to automate their workforce. I think it was about 41% of employers that are looking to increase the amount of automation at the moment. And then I believe it's, again, somewhere in the 40% range of people who want to increase
Starting point is 00:04:42 the amount of outsourcing of labor that they do. And it was only a handful of employers that were actually able to expand the workforce in the wake of COVID-19. Is human obsolescence imminent? Yeah, so I mean, that's the sentence that I used to open the book. And it's a little bit of hyperbole, that's what I've said to everybody that's a dreamy.
it is a little bit of rhetorical hyperbole. I think obsolescence means becoming less useful in certain endeavors, or going out of fashion, or something like that, the same way your phone obsolesces over time: it gets taken over by a better technology. So the idea that I start the book with is this sense that maybe humans are obsolescing in more and more domains of activity. And we've seen this historically happen in agriculture and manufacturing industries,
Starting point is 00:05:35 being the classic examples of obsolescence due to technology. And now I think we're starting to see it in a range of other professions from finance to the law profession, even into branches of government, where there is increased use of automated technologies like algorithmic prediction tools or robotics to replace human workers or human decision makers. How about rolling that forward?
There's a lot of talk about, well, yeah, robots might have been able to replace weavers and plowers and stuff like that, but they're not going to be able to replace more complex things. Yeah, I mean, there's a famous paradox, invented by a guy called Hans Moravec in the late 80s, Moravec's Paradox, which is about the fact that a lot of what we historically have called abstract thought, kind of high-level thinking, is actually relatively easy to automate because it's very simple; it involves routines
and rule-following behavior. So it's relatively easy: although it was pretty hard to create a chess-playing computer that could beat the best human players, it turns out to be much more difficult to create things that are capable of doing very fine-grained, dexterous physical movements in a changing environment. So Moravec said this was a paradox: what we think of as very complex work is actually relatively easy to automate, but things that we think are straightforward and easy, like walking across a bumpy field, turn out to be pretty difficult to automate.
Yeah, but not dancing or doing backflips, based on what Boston Dynamics are doing. Yeah, I mean, Boston Dynamics are showing that even Moravec's paradox is now becoming less salient. There's just a lot of dispute about those videos that they've released: how carefully curated they are, and to what extent these robots are really engaging in those behaviors autonomously without a lot of kind of training in advance, a lot of control of the environments that they're in. But I'm certainly impressed by what they've been able to achieve in the past few years.
It looks like they are really kind of pushing the boat out on that level of automation. But yeah, I mean, to go back to your point, I kind of lost track a little bit there, I think we are seeing the automation of a lot of knowledge work nowadays, particularly where that knowledge work involves, you know, relatively routine searching through information, spotting patterns in information.
You see that, to some extent, in the medical profession, with the use of automation in diagnostic techniques, and you're seeing it in the legal professions. I teach law, I teach at a law school, so this is the background that I come from. So you're seeing the increased automation of certain tasks that lawyers do now, such as searching through documents; even basic forms of legal research and developing legal argumentation are now being automated.
Starting point is 00:08:40 Is there anything that you think won't become automated? Is there a last bastion or some final stands? Yeah, I mean, I'm really not sure. I think in principle, I would say that there's nothing that can't be automated. And that kind of comes from a deep philosophical assumption that I have is that, in a sense, humans are just complex machines, complex biological machines. I don't think there's anything special or magical about humans. There's no supernatural essence or soul to them. That's the perspective that I come with this from.
So in principle, we could create similarly complex machines. It's been done once: nature did it through evolution. So it seems possible that we could do it ourselves, through kind of our own intelligence, or with assistance from machines themselves in designing more complex machines. But at the moment, it seems that there's lots of things that aren't under any immediate threat of being automated.
At least, we're not going to have human equivalents for certain kinds of tasks. But you don't always have to have human equivalents for something to be replaced. A cheaper, more efficient robot that isn't necessarily better at a task might still be more attractive to an employer or a business owner. So you've got to think about it in those terms. Why would technological unemployment be desirable for us? Well, I mean, so that kind of comes from a combination of two things, I suppose. One is that one of the arguments that I make in the book, in a chapter that's, you know, somewhat,
I guess, provocatively titled 'Why You Should Hate Your Job', is that I think a lot of work in the modern world is pretty unpleasant. It has a number of negative features to it. And it's being made worse, oftentimes, by technology. Even when machines don't replace humans, humans have to work alongside machines in such a way that actually
disimproves the quality of their work. And then I think it would be desirable to maybe hasten the automation of work because there are alternatives to working for a living that would be better. But I mean, I will say that that argument kind of hinges on how you define work, and on a deeper discussion about what it means for humans to flourish and live better lives. Yeah, precisely, we're going to get into it. Yeah, that was one thing I thought about: I quite like this job. I actually can't believe that I've just referred to it as a job. But I enjoy having these conversations. I don't want some shiny robot bastard to come and take this microphone off me.
Yeah, and I mean, so this is the thing: the observation that I start that chapter of the book with is that I quite like my job as well. I mean, one of the reasons that I like my job is that, for the most part, it doesn't feel like a job either. I'm an academic and I get to spend most of my time sitting in my office at home, even in pre-COVID times, reading books and writing about things that interest me, and nobody's telling me what to do. I find that quite self-actualizing and meaningful. I think I'm probably one
of the lucky ones, and I think you're probably one of the lucky ones too. You've managed to craft this space for yourself. I don't know, I should preface all this by saying I don't know exactly everything that you do apart from this podcast. But if I assume that this podcast is your major economic form of work, then it seems like you've crafted a space for yourself, an audience for yourself, where you get to dictate the terms of your own life in a way that a lot of people would find enviable. This was my observation in the book, because most of us, most people in the world, don't
Starting point is 00:12:38 have that kind of luxury, and that I'm a relatively privileged in my kind of job and what I get to do, and you are also relatively privileged. There's lots of people who started podcasts who aren't successful and haven't managed to turn into a way of life. You're one of the superstars out there that's managed to do that. Don't people find like mastery and community
and status and other stuff in work? They might not love it, but it gives them a sense of meaning. I remember seeing a bunch of different studies about people who retire early and just how much sooner they die. Yeah, I mean, I think that work is certainly a source of good things for people. It's a source of, as you say, mastery. You can master a skill set and gain this sense of pride and achievement from the work that you do. Also, for many people it's their main source of community. They have to work every day for a certain number of hours, and so they have to associate with the people that they work with, and they can build meaningful relationships in that way. It's also a source of social status. We live in societies, for the most part, that really valorize work.
And there's this sense that having a job, having a stable income, being able to provide for people, is kind of the be-all and end-all of life. I suppose what I would say about that is that I'm certainly willing to accept that work is a source of good things for many people. I guess the question is whether we can find those things in other outlets outside of work, and whether there's a sense in which work is a source of those things for many people
because they have no other option. They have to work. It's a matter of economic necessity for them to work; they're not going to be able to survive without it. And so they have to find those things in work. There's no other forum for them to do so. I would also say that even though people make those claims about work being a source of community,
providing a sense of mastery and so forth, I think it's also true for a lot of people that it is not a source of those things, and that actually what they do outside of work, with their hobbies, with their friends, with their families, is more of a source of meaning. Work, for a lot of people, is kind of a means to an end, a form of drudgery. I do cite this example in the book, and I'm certainly not claiming that this is the only evidence for this proposition or idea, or the best evidence, but the polling firm Gallup frequently does these State of the Global Workforce surveys every few years.
And certainly for the past decade and a half, one of their consistent findings in those surveys is that most people are not actively engaged by their work. In fact, I think within Europe, the European Union area, the average is about 10% of people who are actively engaged at work. In the US, it's a little bit higher, something like 30%. But nowhere in the world does it crack through to 40 or 50% of people being actively engaged by their work. Most people seem to find it kind of mundane, a little bit monotonous, and not their main source of pride or mastery. When you think that that's something you're spending
40,000 hours of your life doing, ish. Even if you don't necessarily have a job for life, you kind of vacillate from alright job to slightly less shit job, or whatever it might be. Yeah, I think with a lot of the stuff in your book, you really need to remove the visceral response that you have to some of the things that come up. So for instance, talking about the fact that, well, don't people find community at work? And you're like, well, yeah, I mean,
they find community at work. But would you be friends with those people if it wasn't for the fact that you're at work, if it wasn't for the fact that you have to rock up there Monday to Friday? Well, yeah, and it's, we're all in this together.
Starting point is 00:16:43 It's part of this sort of common cause that we've got. Like, if the only thing that you and somebody else have bonded over is the way that you acquire resources through somebody else who is taking on all of the risk, you know, I'm sure that you can find soulmates both sort of romantically and in terms of friendships in jobs, but I don't think that we should kid ourselves that we're bonding over the job. We are in a job with someone else and we have managed to find common ground between us outside of the job.
Starting point is 00:17:16 And the same thing goes for everything else, like like you said, status, like what does it mean to be a carpenter, farmer, hair, hairdresser, you know, what pick, whatever it might be? Like, yeah, that gives you status, but is that the best status that you could give you? Like, is that the highest form of your actualization that you could have got to? And sadly, we never get to split test our own life, which I've always thought would be a fantastic idea. Like, if whoever's running the simulation could allow us to do that and just allow me to
maybe split test a bunch of different decisions, that'd be phenomenal. But we never get to do that. You don't know, with the fact that you chose to be a hairdresser instead of a masseuse, or a PT instead of an accountant, or whatever it might be, you don't ever know what degree of flourishing you've actually managed to get yourself to. Yeah, I mean, I guess part of me thinks that maybe that's one of the tragedies of human life, that we don't get to run the experiment again.
I guess, you know, it's more true nowadays that people have the opportunity to experiment a little bit with their profession and, as you said earlier, vacillate a bit from job to job and try different things out. And that's more tolerated. A lot of people don't really settle down in a kind of meaningful sense until they're into their 30s, probably, in most kind of developed economies.
Whereas, I guess, you know, in my parents' generation, you entered your job when you were 18, when you left school, and you stayed in that job for the next 40 years. My father, that's literally what happened to him. He entered the bank when he was 18 years old and he stayed there until he retired. Left when he was 65, yeah. So he definitely didn't get to split test his life. He had one path through life, and that was the norm a generation ago. So I think we're in a little bit of a better position when it comes to that.
But your general argument or idea is right: I think there's probably a lot of post-hoc rationalization. I mean, I'm in this job, I'm working with these people, I have to get along with them, and, you know, actually, they have some good features, and yeah, I'll go for a couple of jokes with them, and suddenly they're my friends for life, and then my work is my source of meaning and status, because it's the thing that occupies my attention all day long. And so we don't get to consider those alternative options and see whether there are other ways in which we could flourish. And, you know, there's a lot of sunk costs involved for a lot of people as well. This isn't actually something that I got into in the book,
but it's something I talk about a lot because I teach on finance and the world of banking in one of the courses I teach. It's just the level of indebtedness in the modern world, and how people have less disposable income and fewer options as a result of this. And I think this leads to the sunk cost fallacy in life: well, I'm stuck in this rut, and I can't afford to run the alternative experiment. It's probably only when people really hit rock bottom, or they're forced out of the position that they've been in, that they do get to run that split test on their life, so to speak. Yeah. What, in your opinion, is the good life? How do people find meaning and flourishing while we're alive?
Yeah, well, look, I mean, this is a topic that could fill a thousand podcasts; in a sense, many of your podcasts have dealt with this theme in the past, based on what I've seen. I don't have any radical new answers to that question, apart from repeating what philosophers and psychologists have been saying for centuries. I mean, at a very abstract level, the way in which people think about the good life is as a combination of your subjective satisfaction with your life, the amount of pleasure you have, the desires that you fulfill, the goals that you achieve in life, those are markers of having a good life, and then, combined with that, the objective value of the things
that you're doing in life: what you produce for the world, what you achieve in the world, those are good things. So, I guess the philosophical view is that you could spend your entire life counting the blades of grass in your back garden, and maybe you're really happy doing that and satisfied. Maybe some AI has planted a little chip in your brain so that this is a real source of pleasure, the equivalent of crack cocaine for you or something. But that doesn't look like a good life in a philosophical sense
because you're not doing anything that has objective value or meaning. So one of the accounts that I look at in the book is an account of the good life from the philosopher Susan Wolf, where she talks about the so-called fitting fulfillment theory of the good life, of the meaningful life: that it's one where you're doing something that is objectively worthwhile, that is fitting, and you are fulfilled by doing that thing. And I think you probably need the combination of those two things.
In terms of what kinds of things have objective value, well, again, there are kind of standard answers: there's doing good things for the world, for other people, making their lives morally better, alleviating their suffering; there's achieving kind of breakthroughs in knowledge, producing knowledge or information or goods that are valuable to others in the world. And I guess there's also art and aesthetic production and appreciation as part of a good life. The philosophical slogan is that the things that are worthwhile in life are the good, the true and the beautiful: that's the triumvirate of meaning in life. I'm just about to finish The Happiness Hypothesis by Jonathan Haidt, and he finishes the book by
contesting the 'happiness comes from within' Buddhist claim, and he talks about something similar, which is that happiness comes from within and without, and he's talking about this two-way street. And I wonder how much of that is jaded by a society that's a meritocracy, that's one where you are what you can do, very much about creation and, tacitly, making things be there. You know, we make things happen, we do stuff as a society right now. And I wonder whether we do that because we know that we can; we push people to try and create
things and try and add objective value to the world because we know that that's an option for them, whereas if you're a serf in Romania in, like, the 1400s, and I don't even know if they had serfs in Romania in the 1400s, you know what I mean? Someone hoeing the fields and stuff like that. Like, would it be as forefront in the way that philosophy looks at this stuff if people didn't have the option to do it? Yeah, I mean, this is a good question. I think that in the modern era, we probably are too wedded to this maybe objectivist and productivist view of what provides meaning in life, that it's all about producing good things.
Actually, oftentimes, doing that, for many people, isn't a source of fulfillment. I guess the classical Stoic view is that the only thing that you get to control is your own kind of perception of reality, and how you interpret events and how you understand them, and that you can't rely too much on external phenomena, or even on producing good things in the world, because it's subject to so many contingencies and so much luck. It's a mistake to attach your happiness to things that aren't completely within your control.
And of course, that idea is a feature of a lot of modern psychotherapy too; cognitive behavioral therapy is essentially premised on that ancient Stoic ideal, right, of controlling your perception of events, that you shouldn't be too attached to the approval of others, or attached to achievement, as a source of meaning and happiness in life.
So there's certainly part of me that is attracted to the classic Stoic view that you've got to focus on your perception of events and the things that are within your control. I do think that the productivist ideal of what the good life is is dangerous insofar as a lot of those objective goods, like doing good things for the world, making the world a morally better place, or achieving some kind of great insight into some truth,
Starting point is 00:25:56 or achieving some kind of grace inside and some truth or producing something of value for the world. Those tend to be relatively elitist goods. There's probably only a handful of people that really get to achieve those objectively good things for the world. And I'm not saying that it's impossible for me to do good things for my friends and family,
but the actual scope of my influence is relatively minimal. So I do think it would be wise to kind of rein in this attachment to producing good things as a source of meaning. Like, in one sense, you could read the book that I've written as a way of arguing for that hypothesis or for that idea, even though it's probably not something I brought explicitly to the forefront of the book; but now that we're talking about it, it's something that's crystallizing for me.
I think it aligns as well. Okay, so let's say that it is a good, or that someone's proposing automation should occur. What are the strongest criticisms against letting it happen? I mean, letting automation kind of run rampant in human life. Well, I mean, the most obvious criticism of it, and this, unfortunately, is not something I engage with in the book, is what does it actually do to people's lives from an economic perspective? Because at the moment work is an economic necessity for people. It's how they gain access to an income, and they need
an income in order to survive and thrive. You can lament that fact or regret that fact, but that's just a reality in the modern world. It's more true in some countries than others. In some countries we have fairly robust safety nets and welfare systems that could protect people from the harsh realities of losing their job. I guess one of the interesting features of the COVID-19 pandemic is how a lot of governments have stepped in to provide even more supports for people who've lost their jobs.
In some ways, reasonably generous supports in comparison to what was previously there, although that's always been on the assumption that it's a temporary stopgap measure. That keeps getting extended. Keeps getting extended, and you kind of wonder how much longer it can be extended for. But yeah, I mean, losing your income is going to be the main kind of objection to automation. So unless there's something done to correct for this loss of income,
it's going to be a pretty bad thing for a lot of people. And you can kind of see that happening to some extent already. And, I mean, more generally, I think there are objections to the impact that automation is having on human well-being, not just in a purely economic sense. I discuss kind of five problems in the book that I think are already apparent, but are likely to get worse the more automation there is.
One problem links back to the conversation we were just having about what it takes to live a good life, the sense that you need both subjective satisfaction and some kind of connection to the world around you, to be doing things that are good for the world around you. Look, the very obvious point is that the whole purpose of automation is to sever that link between human effort and production in the world, so that, you know, humans aren't needed for producing that good. And that's happening not just in jobs; it's also happening in other spheres of life. I mean, one example that I look at in the book is in scientific inquiry. Now, these are very kind of
preliminary forms of technology, but there's a group of researchers in Aberystwyth in Wales, actually, who've produced these robot scientists that are able to review the scientific literature, generate their own hypotheses, and test them. The two robots that they've created were, I remember reading, called Adam and Eve.
And they were doing fairly, you could say, basic research on testing different kinds of, creating new kinds of yeast and different kinds of drug treatments, but it's an interesting proof of concept that you could actually have robot scientists not just assisting in the process of scientific inquiry, but actually autonomously generating their own hypotheses and testing them. And there's sometimes this notion that if we don't work, we'll just kind of swan around and have more time for scientific inquiry and intellectual endeavor.
All of those things that people are just moonlighting at in the evening time: they're taking up the violin, and they're doing some Picasso in the garage, and some DNA CRISPR editing on the way to work, and all this. Yeah, I mean, the reality is that, number one, a lot of people don't do that and don't have the capacity to do it or the means to do it, but also it could be the case that automating technologies obviate the need for them to do that in the first place. One of the things I talk about in the book, and I'm happy to admit that it might seem to many people like a satirical example, but the movie WALL-E, one of my favorite Pixar movies, has this depiction of an automated future where you have lots of robots doing
basic tasks around the world. And what do the humans do in that world? The humans are all morbidly obese, they're sitting on floating couches in this interstellar cruise ship that's trying to transport them to a better world, because they've completely environmentally destroyed the Earth, and they're watching light entertainment and being
fed a diet of fast food. So they're almost these passive, slug-like beings, because technology has made their life too convenient and too easy and they don't have any motivation to do anything. And I'm sure that's satire, obviously, in one sense. People have ridiculed this kind of vision of the future as the sofalarity, as opposed to the singularity: where we all end up on our sofas. But, in part, I mean, I think there's something true to it, right? Although, you know, engaging in difficult tasks and difficult forms of physical labor or cognition can be very rewarding and fulfilling, they're also very difficult to do and you have to be very kind of motivated to do them.
And if technology means that we don't have to do these things anymore, I think there's a danger for a lot of people that they'll just fall back into a very passive form of existence. And all the problems that I discuss in the book, the five problems that I discuss, are all kind of linked to that basic idea, this theme of passivity as a result of automation. Yeah, I mean, think about the rise in Stoicism, like, why that's happening, or why people are enjoying doing Ironman triathlons or Brazilian jiu jitsu or cold showers. Cold showers are a perfect example.
Why do people want to do it? Like, it sucks. Nobody enjoys the cold shower. They want to feel alive. They enjoy the satisfaction. They enjoy the state change. Yes, precisely. There's not many things that we do now, unless you skid on the ice outside in your car, there's not many things that we do that make us feel alive, you know, that give us that sort of... there's a dominatrix who Paul Bloom interviewed for one of his upcoming books. She said, nothing captures attention like a whip. And she means that when you slap someone in the face as hard as you can,
they're not thinking about anything for five seconds after you've hit them. They're just thinking, did I just get fucking slapped? Did he just hit me? That's what they're thinking. And I think that, again, Naval Ravikant talks about how we don't want peace of mind, we want peace from mind.
And this desperate desire to kind of just get ourselves into a lower-stimulus state in a significantly higher-stimulus world is just a constant battle. But again, if we were able to have some beautifully omnipotent, omniscient being, an AGI, it could solve all of those problems in any case. So anything that we can think of, any of these issues that we can postulate, it can come up with the correct combination of drug cocktail, the correct virtual environment for us to be in, the perfect robot soulmate sex friend that we need to make us feel fulfilled. That solution should be found if you had an AGI that was sufficiently advanced with enough resources to be able to do it. So, okay, let's say that we managed to replace work with automation in an effort to get to a utopia. What does it even mean to get it right? What is a utopia by your definition?
Yeah, and this is a whole other discussion in many ways, but I guess one of the things I do in the book is contrast two ideas of what a utopia is. There's kind of the traditional popular conception of what a utopia is, or what you find in so-called utopian literature, which is what I call the blueprint, or blueprint model, of utopia. You find this in Plato's Republic: what is the ideal city? And he has this very rigid, hierarchical society. Everyone knows their place. There are very set rules about what people are supposed to be doing. You get it in Thomas More's classic work on Utopia, the first coinage of the term, actually, in modern English, where he depicts this hypothetical society, which is kind of a neo-feudalist society where everyone is divided into these castes and they have certain roles in society. And, you know, arguably, you get it in communism too, though this is less true in the sense that communist theory was probably struck by the fact that the communist utopia was never very precisely specified,
Starting point is 00:36:16 but those kinds of societies that did arise, that espoused a communist philosophy, often had this kind of rigid authoritarian structure. So implementing this blueprint, the idea is that we have this model of what the ideal world is, and we just need to kind of match the actual reality to that blueprint. And if that means that some people have to be sacrificed along the way for the good of the revolution,
And I think that's why utopianism in many people's minds has taken on a negative set of connotations: it's associated with a lot of failed movements, like communism, critiqued as being a utopianist movement, and then it's also associated with a lot of violence and cruelty in the past. It's odd that utopia is seen as a very ruthless sort of thing, where people are just going to be left behind. Makes me think a lot about epigenetic... not epigenetic.
What was the thing? Eugenics. Eugenics, that's it. Like eugenics and stuff, like selective breeding. Like, that's what a utopia brings to mind. But I mean, I'm massively jaded, obviously, by precisely the sort of old literature and the new sci-fi that I insist on reading to make me fall asleep at night. Yeah, and the famous kind of philosopher
of science, Karl Popper, who you might have heard of, he wrote these influential critiques of utopianism, saying that it necessarily leads to violence, because anyone who's part of a utopian movement will just think the ends justify the means. So if you have to break some eggs to make the omelet, and if that means cracking heads and putting people in prison camps, so be it. That's definitely not the model of utopianism that I favor in the book. I contrast that with what I call a horizontal, or you could almost call it a frontier, model of utopia: that the ideal society is one that is open and dynamic,
that actually doesn't have a fixed destination or fixed map for what the ideal society is, but that is focused on not becoming static, not becoming limited, that explores different horizons of possibility for humanity, both in terms of activities, in terms of how we embody ourselves, how we relate to other people, how we explore our environment. It's the sense that there's always more possibilities, that the future can always be better. And maintaining that open horizon in the future
Starting point is 00:38:55 is the key to having a utopian society. So it's maybe a slightly paradoxical idea in the sense that a utopian society for me is not one that has a particular fixed model or blueprint, but it's something that is open-ended and dynamic. Got you. What's a cyborg utopia? You kind of break it into two different types.
What's a cyborg one? Yeah, well, in the last part of the book, I look at two different models of utopia, the cyborg utopia and what I call the virtual utopia. I want to take a step back before I talk about it, just to explain why I chose those two possible futures, because some people might think, well, why did you pick those two? And why is it so binary? So where I arrive at the end of the first half of the book is this notion that humanity is at a crossroads.
That's what's happening. And I use this idea from evolutionary anthropology that humans evolved to fill what I call the cognitive niche, right? What sets us apart from other animals is that we use our brains, both individually and collectively, to solve problems, and we kind of generate our own ecological niche.
We're not as dependent on the physical world, or as susceptible to the whims of the natural world, as other animals, because we've managed to carve out this niche for ourselves using our brain power. What's happening now is that we're creating technologies that are gradually replacing us, that are kind of shunting us out of the cognitive niche,
pushing us out gradually. And so we face a dilemma. The question is, do we try to fight back against the machines and reclaim our dominance of the cognitive niche, or do we try and retreat from the cognitive niche, let the machines watch over us and look after our economic well-being, our needs,
our kind of needs for abundance and affluence and so forth, and do something else. And so I associate those two options with two different models of utopia: the cyborg utopia, which is where we basically try to become like the machines that are gradually replacing us, and the virtual utopia, which is where we essentially retreat from the cognitive niche and do something else. What I think is really interesting about that is it makes me think about the status conversation that we were having before, about how your job sort of gives you your sense of who you are and it's the label you give yourself. And that cognitive niche is kind of like a species-wide
status that we've given ourselves, right? Like, we are the cognitive kings of the jungle. You know, there isn't anything else that we know of in the universe that's smarter than us, that has the powers of abstraction and planning and mindfulness and all of the creativity, everything that we value from the big meatloaf inside of our heads. Pretty soon, unless AGI continues to remain narrow and go deeper, as opposed to actually being able to broaden out, and it seems like there is a bit of debate about whether or not that's going to happen, but if it is able to get to proper Nick Bostrom superintelligence stuff, then we're no longer going to be top of the tree. We are literally going to be, in a best-case scenario, friends with a god that we have managed to constrain or convince to align its goals with ours, but we are no longer going to be top of the tree.
And I wonder what that does to a civilization when that happens. What does it mean to be a human? What does it mean to be supposedly the rulers of a planet when you're no longer the smartest individual on it? Yeah, I mean, that's a really interesting way of kind of framing it or putting it: that, you know, our brain power is the status that we've given ourselves as a species. I guess, you know, evolutionists hate this notion
that we are part of some chain of being, that it's a hierarchy and we, you know, sit at the top of it, and they would argue that, you know, the whole point of evolutionary thinking, of the Darwinian revolution, is to rid us of that notion that we are somehow at the center of the evolutionary universe. It's just this kind of massive, sprawling, branching tree of different organisms. But yeah, I think the reality is that many human civilizations and many humans probably do think of themselves as in some sense superior to the rest of the world, and this notion that we're going to lose that status is problematic. And I think it does pose a major existential threat to us, not in the Bostromian sense of
the machines are going to turn us into paperclips, but in the sense of, what are we here for? What's the purpose of it all? Oh, yeah. So not only could we be paperclips, but mentally we could think of ourselves as paperclips. So there's two different ways that we could be displaced: both spiritually and physically, we could get displaced by the machines. Okay. So, cyborg utopia. Give us, what does that look like? Yeah. I mean, there's a number of different ideas of pathways to a cyborg utopia. For people who aren't aware, though most people are aware of this nowadays, a cyborg is a cybernetic organism.
Actually, the concept comes from a paper written by a couple of scientists who have very suspiciously similar surnames, but they're spelled differently, so I don't actually remember their first names offhand, but it's Clynes and Kline, the names of the people that wrote this paper on the cyborg. And they were actually writing as part of the space race; like, they were talking about,
how can we get humans into space? And they were commenting on the fact that, well, humans aren't very well adapted to space. If you put us outside the spaceship for a couple of seconds, we're not going to be thriving and flourishing, to put it mildly. So how can we improve things? Well, we could turn humans into machines, or integrate human biological systems with machines,
so that we're better adapted to the environment of space. And that's where the term cyborg came from, from that paper that they wrote about this idea. Obviously, it's taken on a whole other life since in popular culture. But that's basically what I'm talking about when I look at the cyborg utopia: we fight back against the machines by trying to become machines ourselves, by integrating ourselves more and more with machines, so that who we are is part and parcel of what
our automating technology is as well. So our fates are bound together, not just in a loose sense, but in the sense that we are the same thing; our identity is the same as theirs. Now, there's two very different pathways to achieving cyborg status. You could achieve it by actually physically integrating yourself with technology.
One of the examples I discuss in the book is this artist, Neil Harbisson. I don't know if you've ever interviewed him; you should probably try and interview him. He's an interesting guy. He's a founder of the Cyborg Foundation and, more recently, the Transpecies Society, which advocates for the rights of people who don't subscribe to a human identity. I think that might be something that's intriguing to you: a post-human identity. So he has this antenna at the back of his head.
And he was born color blind. And what this antenna does is it converts light rays into sound. So it allows him to hear in color. It's a bit of technology that, I guess you'd call it, a technologically induced form of synesthesia; he's combining two senses. And he talks about this a lot, and how it changes his sensory perception and engagement with
the world. So what he's doing there is using a piece of technology to change how he experiences reality. He's integrated himself with the technology. And in interviews that he's done, he refers to himself as a bit of technology. He says, you know, it's not that I use technology to engage with the world; I am kind of an extended piece of technology. This antenna that I've drilled into my skull is part of my identity, it's part of me. And he's won the right to kind of wear it in identity photographs and all that kind of thing. So he's an interesting character. It's a very primitive form of cyborg technology; he's just kind of adding a new sensory modality, but it's, I think, a proof of concept of how we can integrate ourselves with machines.
And there are lots of other people doing similar experiments or developing similar cyborg technologies: brain-computer interfaces that allow people to have robotic arms that are directly attached to their nervous system, usually for people who have suffered from some kind of amputation or loss of limb function; or you have these exoskeletons that people are creating so that you can lift heavier objects and move faster. These are all examples of technical integration between our biological systems and either a computer or a robotic system in some way.
So that's one form of cyborgization. There's another form as well, a kind of looser form; some people say that we are all cyborgs already. There's a Scottish philosopher called Andy Clark who says that we're natural-born cyborgs. Again, to kind of go back to this idea of the cognitive niche: how do we succeed, how do we thrive within the cognitive niche? It was because we built technologies, and we have tied our fate to them. Humans have always been a technological species.
We've used technology to survive, and we're just doing more and more of that nowadays, and we've become highly interdependent with our technologies. It's a trite example, but think of how close you are to your phone, and how often you look at your phone, and how you rely on your phone for memory, for navigation, for financial management, whatever it might be. That's an example of cyborgization. But that's, I think, a looser, metaphorical sense of what it means to become a cyborg.
I'm kind of more interested in the technical form of cyborgization. It's an interesting thought experiment. I've just been going through it in my head there, thinking: I don't think that I'm a cyborg; even with the phone, the phone's outside of me, it's not me. Right, okay. So let's say that everybody within the next 300 years gets a robotic set of hips, because hips are a nightmare and we don't want that. And you go, okay,
like, no, no, no, that's just the robot, and then you just slippery-slope your way all the way down, and you go, okay, so now I've got robotic legs; okay, so now 80% of me, all the way up to the nips or all the way up to the neck, all of that's robotic. We've just dispensed with our bodies, but our heads are still there. Or reverse it and say, okay, maybe we've realized that actually we can replace certain areas of the brain, like we can get rid of the fear and anxiety response by getting rid of the way the amygdala works, and we can put a chip in place of that, and it's kind of the same size and the same shape, and slippery-slope your way all the way down through that as well, and you go, right, okay, now the hippocampus is gone, now the prefrontal cortex is gone, now this is gone. You actually get to a point where you can remove all parts of you
and replace them with a machine, and yet somehow still consider that you're not a machine, I think just because we hold on to our sense of 'I am me', and it's very difficult to abstract ourselves into what it would be like for my consciousness to be placed in something else. You know, we understand what happens when we see someone in a wheelchair: they are not the wheelchair. But at what point does replacing the parts of you that you
consider to be you change that? And this is a broader question that Sam Harris asks a lot, where he says, like, where are you? You consider that you're somewhere in your head, behind your eyes. But really, like, what are we talking about here? And I suppose this is a much deeper sort of philosophical question, but certainly when you talk about cyborgs, it seems that by some sort of objective metric it would be quite feasible to think of a situation in which we were cybernetic organisms. Yeah, and as I say, there are many people who argue that there are already humans that are cyborgs.
You know, kind of neuroprosthetics, the use of retinal implants, cochlear implants. You know, they're not replacing parts of your cortex yet, but they are replacing parts of your, kind of, sensory peripheries. And it seems like a very clear proof of concept that you could do more kind of functional integration with technical systems, and the more of that we do, the more technology-like we become, the more we become cyborgs. You know, there is an interesting philosophical question there: if you replace every single neuron in your head, gradually over time, do you actually maintain the same identity?
Starting point is 00:52:27 Or is there a certain point in time in which the lights switch off? And there's some philosophers who think that maybe that'll happen, like maybe, as you're gradually replacing each neuron, you seem like you're still inside your skull or inside your body, but then at some point suddenly it all disappears.
Like a cybernetic zombie. Yeah, exactly. I mean, there have been people who have argued this, that that might be possible, and that we'll never know. And that's the problem, because you never get to see. Well, I had a bunch of conversations recently about consciousness. Philip Goff was on talking about consciousness, and my favorite quote from that reminded me of something I'd read ages ago, which is,
if it wasn't for the fact that we experience it, the universe would give us no indication that consciousness existed. Yeah, exactly. Right. So that's the problem. Somebody could be walking around making the noises, doing the movements, having the responses, and we wouldn't know. Okay, so cyborg utopia: sounds all right, a little bit of work to be done on it. What's virtual utopia? Right, so this is a much more slippery concept, I think, and something that's difficult to wrap your head around. Partly that's because the concept of what virtual reality is is kind of inherently paradoxical and maybe not well understood.
So within the book, I contrast kind of two ideas of what a virtual reality is. One is, I guess, the technical sense of it, where, you know, you're literally putting on a headset or something, going inside a computer-simulated environment and living out a life like that. It's the idea depicted in lots of movies; I guess The Matrix is a famous variation on this idea of living in a virtual world through technology. And there are other examples that don't spring to mind right now.
I suppose my favorite example is Neal Stephenson's book Snow Crash, if anyone's ever read that, about the metaverse. I've tried. So here's the main question with this: I've started to try and get into that twice and I've just kind of got stuck a few pages deep; it just hasn't gripped me. Is it worth reading?
Yeah, it is worth reading. It's quite dated, and I guess, like, you'll be familiar with a lot of the concepts in it, but I think that's largely because he's been quite influential in, you know, tech culture and Silicon Valley culture. You reckon I should give it a go? I reckon you should give it a go. I do have that problem with a lot of Neal Stephenson books, though, in that several of them are gathering dust on my shelves and they're all like a thousand pages long.
Yeah, yeah, yeah. Two-thousand-page, multi-volume books. Did you try Seveneves? I did. So that's probably... I might have got to a similar point with Seveneves. You might do some other questions. Please keep at it. Please keep at it. I promise you that one is worth it.
Starting point is 00:55:35 the final third of that book just didn't need to be there. Like the first two thirds of it, once you get into it, I couldn't put it down. I went on a stag do to New York and someone recommended that I start reading it. Let's remember that I'd been sort of going at it fairly hard for four days in New York and it was a 10-hour plane journey home or whatever. I didn't sleep. So I opened Seveneves and 230 pages later, or whatever the hell I'd done, I hadn't bothered sleeping. I got to Amsterdam absolutely wrecked, but like I knew what had happened 500 days after the
Starting point is 00:56:14 moon had exploded or whatever it was. And yeah, that was a bizarre experience. Anyway, we've got sidetracked. So, virtual utopia. Yeah, so this idea of like living in a computer-simulated environment, that's what living in a virtual world is. You know, some people would argue that's utopian, and part of me thinks that there is potential to it, in so far as you can create an endless space of possible worlds to live out your
Starting point is 00:56:38 life, and it's kind of this perfect fantasy playground for whatever it is that you happen to desire. In principle, it depends on whether the technology gets there. There is another sense of what a virtual life is that I'm more attracted to, which is the notion, again, that humans have always in a sense been constructing a virtual world for themselves. Again, to go back to my point about the cognitive niche and just flip it around, what I said is that we're using our brain power to make ourselves less susceptible to the
Starting point is 00:57:09 vicissitudes of the natural world, that we're less subject to its whims and its caprice. We live in nice, safe, controlled environments. I'm coming to you from a centrally heated, artificially lit room, right? So I'm not outside in the Siberian winds that are certainly sweeping across the British Isles this evening. So we've always been constructing these artificial environments.
Starting point is 00:57:36 And we're going to continue to do it. It's just that the kinds of lives that we live in these artificially constructed environments are going to have lower stakes associated with them, in so far as what we're doing inside them is not going to determine our fate in life, our economic destiny in life. So the Israeli historian, and futurist as well, Yuval Noah Harari has made this point, right?
Starting point is 00:58:10 He actually made it in a very short op-ed in the Guardian a few years ago, but it's a major theme of both of his first two books, Sapiens and Homo Deus: that what is distinctive about human civilization is that we use our imaginations to kind of project a layer onto our experience of the world that isn't really there.
Starting point is 00:58:32 You know, we perceive these status hierarchies, these kind of mythical symbols and meanings that actually aren't really in the physical environment. And so Harari's argument is that humanity is in a sense built on a series of virtual reality games that we're playing in our heads. And so people who worry, about the end of work and the automation of work, that we're going to lose something important are misguided, because we've always been constructing these new virtual reality games and we're just going to continue doing
Starting point is 00:59:01 it in the future. It's just not going to be an economic game anymore. It's going to be some other kind of virtual reality game that we're playing. And that's the model of the virtual utopia that I defend in the book. This is what I call the utopia of games, where we engage in lots of different game-like activities. And they're very meaningful to us.
Starting point is 00:59:22 And we have achievements within those games, and we build friendships within those games. But we're not playing those games out of some economic necessity, because we're forced to. And my claim is that it's possible for us to construct an infinite number of these games to play, and that can be a kind of utopia. A lot of the objections that I sort of observed coming up inside myself, as I'm hearing you talk and as I read the book, can be answered, again, by a sufficiently powerful or sufficiently high-fidelity
Starting point is 01:00:00 simulation. So one of the things that I was thinking was, well, isn't struggle part of what gives life meaning, and wouldn't hedonic adaptation kick in at just progressively higher and higher levels? But the virtual reality, if it was programmed by something sufficiently intelligent, would know that you need to have struggle in order to get meaning, therefore the struggle would be programmed in. And we get to a question about whether it's like the reverse philosophical zombie, it's like the environment zombie. Like if you experience
Starting point is 01:00:33 everything in the environment, which gives you all of the stimuli that you would have needed to lead a fulfilled and meaningful and flourishing life, but it's not in the real environment, is there anything different going on? Yeah, so look, my view is that there isn't ultimately, or, I mean, there might be something different, but it's not a difference that renders our lives less flourishing or less meaningful. That said, there is a large school of opposition to this idea. There's a philosopher called Robert Nozick, who I quite like as a philosopher.
Starting point is 01:01:08 He's better known as a political philosopher. He defended a version of libertarianism, minimal-state libertarianism, back in the 70s that is very influential in the US. But I kind of like him more as a general philosopher. I think he has lots of interesting ideas. And he has this famous thought experiment of the experience machine, which is basically
Starting point is 01:01:29 what you're imagining. So, you know, imagine you could plug yourself into an artificial simulation that was, you know, high fidelity, very realistic. You wouldn't remember your old life in the real world. You could kind of play out whatever simulation you want. Would you choose to plug into the experience machine?
Starting point is 01:01:49 And he argued that he wouldn't, because he wants to live in the real world, and the experience machine would be missing something that he desires, and he claimed to survey his students on this and that they all agreed with him. And a lot of people, I think, when first presented with this thought experiment, agree with it. It turns out more recently, there's a bunch of experiments that have been done on this, where people are asked kind of variations on the original experience machine thought experiment, where they say, well, OK, what if instead of plugging into a machine, you
Starting point is 01:02:18 were asked to plug out of a machine. So you were told, well, everything that you value in your life right now is a simulation, and you can plug out of that simulation and go into the real world. Basically, the thought experiment that's depicted in the movie The Matrix. Would you choose to do it? And some of the experimental studies on this suggest that actually people wouldn't choose to do it, or they're less likely to choose to do it, significantly less likely to choose to do it, if they are plugging out of the machine. So some people have argued that Nozick's thought experiment was playing upon a status quo bias. Yes, people have that precise bias. People are attached to their current way of life and they're afraid to kind of move out
Starting point is 01:02:53 of it. And it's not really saying anything about whether living inside this virtual reality machine is actually valuable or meaningful or not. This is like a virtual reality trolley problem, like an inverted virtual reality trolley problem. Is there a case here, I'm going to guess, that there will be people who will be pretty swayed by a naturalistic fallacy as well? Yeah, I mean, I think that you can see that in some aspects of the modern environmentalist movement. I'm not going to denigrate all forms of environmentalism by any stretch of the imagination. We are facing into some major environmental catastrophes in the future, but I'm not sure that we can actually do anything about them, whether we have the collective will or
Starting point is 01:03:41 the institutions that can do anything about it, but they are serious problems. But that said, within the environmental movement, you can find kind of pockets that are very much attached to some kind of golden age that was in the past, or kind of a primitivist life more in touch with nature. I think a lot of that is kind of mythical and an overstatement and an over-idealization of the past,
Starting point is 01:04:03 which has always been a feature of human life. I mean, there is an argument as to why that happened, you know, with the kind of Garden of Eden myth that you find. One of the standard historical explanations of that is that this was people who had undergone the agricultural revolution, wishing that they were hunter-gatherers again.
Starting point is 01:04:22 Because it turns out if you study hunter-gatherer tribes nowadays, and if you look at some historical records of them, it seems that they had much more leisure-filled lives than we do and seem to have been maybe happier than we are. Marshall Sahlins wrote this famous paper back in the 60s, called the original affluent society, or the original leisure society, in which he said
Starting point is 01:04:49 the hunter-gatherers were the original leisure society, because if you look at what they do, they spend maybe two hours a day working, that is, looking for food, and the rest of the time is spent kind of playing, hanging around with their families and not doing much else. So it could be that people are appealing to that kind of mythical ideal, but I think there's
Starting point is 01:05:06 lots of unpleasant features of that form of life too, in terms of the dysentery, the disease, the teeth falling out. The tribal warfare. Yeah, the rampant rape. Yeah, I had David Pearce on the show, the transhumanist guy. Sure, yeah, yeah. Yeah, so I had David on the show. It must be years ago now.
Starting point is 01:05:26 It must be a couple of years ago. And prior to speaking to him, I think I had a very different sort of view around us moving forward. But in the way that ideas oddly do tend to infect you, because I spoke to him so long ago, and because I've reflected on the stuff that he does for quite a while, that status
Starting point is 01:05:51 quo bias, like my Overton window of what I consider to be a mental projection of where we could end up as a civilization, has just expanded and expanded and expanded. And now I'm like, well, I mean, you know, if there was a way of virtually, essentially, making me feel like I'm on MDMA all the time and never come down from it and just exist at degrees of human enjoyment that have hitherto never been discovered, I'm like, mm, it becomes a question, it becomes an ethical question. It simply comes down to that as far as I can see. Yeah, I mean, like, I think David's work is groundbreaking
Starting point is 01:06:33 and definitely, you know, perspective-shifting work. His book, The Hedonistic Imperative, which I believe you can access the entire text of online, was written back in the 90s, I believe. So it might seem a little bit dated in some aspects, but it's still, I think, a very provocative and current work, and it will really kind of reshape your thinking. I guess the big idea within it is the notion that some people can have a lower hedonistic baseline than everyone else, and that means they're
Starting point is 01:07:02 kind of disadvantaged and they live a more impoverished life, and that we should do something to kind of up the hedonistic ante for them, to kind of live more flourishing, blissful lives. And so his is, I would say, an alternative model of the utopia to mine, but it's not entirely dissimilar. David is a transhumanist. And I guess what I talk about, when I talk about the cyborg utopia,
Starting point is 01:07:28 is a kind of transhumanist future. David reduces transhumanism to the so-called three supers. That the goal is super longevity, ending death, or not ending death rather, but having extra-long lives; super intelligence, creating more and more intelligence and improving human intelligence; and then super happiness, which is this idea of the hedonistic imperative, making people more blissful and happy.
Starting point is 01:07:55 And I think that kind of chimes with what I view as the cyborg utopia. I'm a little bit more dismissive of that idea in the book, and I favor the virtual utopia, but I'm not completely down on the cyborg utopia; I think it has merits as well. What I'm skeptical about is its technological feasibility in the medium term. What is going to happen next, do you think, in this space, not the science side, but in terms of philosophy? What do you think people are going to spend
Starting point is 01:08:26 the next sort of five to 10 years thinking about with regards to this space? Do you have any inkling of that? Well, I mean, I guess like a lot of my day job is focusing on the ethical implications, legal implications of artificial intelligence and robotics. I hate to call myself an AI ethicist or anything like that, because I think that term is loaded and
Starting point is 01:08:52 a lot of the work that these people, that AI ethicists, do is not very similar to what I do. But I think the main debates there are, unfortunately, I think, very traditional debates about political power, like who controls technology and who controls access to technology. You're seeing these things play out: the effect of artificial intelligence on political polarization and debate, the effect that it has on economic polarization and inequality, issues of like bias in technology and biased decision-making. I know I'm talking about the previous book that I did, but I actually am a co-author on a new book which looks at a lot of these themes. It's called, I actually have it here, this wasn't intentional, I just have it here to hand. It's called A Citizen's Guide to Artificial Intelligence. Which looks at...
Starting point is 01:09:49 Is it out now? It's out at the end of February. So I'm not the lead author on it, but you could actually get the lead author, I think, John Zerilli, to talk about it. Hello. Hi, good luck, guys.
Starting point is 01:10:00 I'll link to that in the show notes below. I've got Brian Christian on next week. Yeah, yeah. Okay. The Alignment Problem. Yeah, I just purchased that, haven't read it yet. It'll be gathering dust next to Seveneves. Yeah, man. I mean, I know that the audience do enjoy this, but they'd better continue enjoying it, because I find discussions about the ethics to do with artificial intelligence and automation and robots and all that stuff, I find them endlessly fascinating. I find them fascinating in a way that I don't think classical philosophy is. It's not a replacement, but it's a fantastic side dish. And it's satisfying. You're probably getting a lot of the debates in classical philosophy
Starting point is 01:10:49 within kind of AI and robotics-related philosophy. It just might seem more kind of current and interesting. So that new book is where you think a lot of the direction will be going? Yeah, like what I'd say is I think there are medium-term issues and long-term issues. So the Bostrom superintelligence, existential risk angle, that's sort of the long-term debate's concern. And then the short-to-medium-term debate is about like the effect on work, you know, the economy, the effect on politics, political debate, polarization, and the legal uses of AI, like the use of predictive tools to determine whether somebody is going to commit a crime again
Starting point is 01:11:31 or in child welfare protection, these kinds of things. These are all examples that we discuss in that book. So that's why I'm raising them. That's where a lot of the conversation is. Now, it's interesting, there's a question of why those two conversations are actually quite separate in the philosophical and legal communities. They probably shouldn't be; they should actually be more joined up.
Starting point is 01:11:52 And I'd be one of the people who'd be arguing for them to be more joined up. But I think there are some people who are very dismissive of the long-term concerns. They think they're too speculative, too fanciful, and that we should limit ourselves to the more medium-term concerns. But I think you have to focus on both. I think you have to have a long-term perspective. Otherwise, I think you're missing out on a lot of what is interesting about the human project. There's also a risk here. The black ball out of the urn or the black swan or the unknown unknown, whatever you want to call it.
Starting point is 01:12:30 If you decide to take your eye off a ball which is a civilization killer, which even has an infinitesimal chance of doing it, it's not a very good idea. I thought I knew existential risk, and then I read Toby Ord's The Precipice last year. And that book made me fully shit myself. I was like, I just can't believe how far public attention and public consciousness is directed in completely the wrong area
Starting point is 01:13:07 when it comes to this. You mentioned sort of the environmental movement earlier on. And I angered a lot of people across multiple different social media domains about a month ago, where I posted the table of Toby Ord's chances of us being destroyed within the next century by, and he lists all the different existential risks, and the caption was, climate change is not an existential risk priority, change my mind. And an awful lot of people got very angry, because the wording was purposefully provocative, but as far as I can see, by anyone that understands what's going on, it's true. And if we were to have a sufficiently powerful superintelligence, that would probably be able to fix and undo any of the damage that we've done to the environment in any case,
Starting point is 01:13:53 but if it's turned us all into fucking paper clips or stuck electrodes in our face, then it kind of doesn't matter. Yeah, I mean, again, I don't want to reopen the can of worms that you opened for yourself on Twitter, but I suppose, like, you know, Toby Ord has a very kind of narrow definition of what an existential risk is. It has to be basically something that ends civilization, ends humanity, and climate change could make things very unpleasant for a lot of people, but humans as a general population will survive it, as they have survived significant climatic events in the past.
Starting point is 01:14:28 Before we tumble further down that rabbit hole, I have your other book here, Automation and Utopia: Human Flourishing in a World Without Work. It'll be linked in the show notes below. Anywhere else that people should go and check out? Also, your new book will be linked as well. Any other stuff people should go and see?
Starting point is 01:14:45 I guess just my website. It has a complicated title, it's Philosophical Disquisitions, maybe something to link with the show notes as well. It's a blog; I write a regular article on it, and I also have a podcast of my own that deals with AI and philosophy and ethics issues. So people might be interested in that if they're interested in this conversation. Amazing. John, thank you, man. All right, thank you.
