Open Book with Anthony Scaramucci - We Will Shape Humanity's Destiny with Nick Bostrom

Episode Date: December 19, 2024

In this conversation, Professor Nick Bostrom discusses his book 'Deep Utopia' and explores the implications of transformative technologies on human life and meaning. He contrasts the potential positive outcomes of AI development with the risks, emphasizing the need for alignment and ethical considerations. He also shares insights on the rapid advancements in AI and the philosophical questions surrounding existence and purpose in a potentially utopian future.

Transcript
Starting point is 00:00:20 So joining us now on Open Book is Professor Nick Bostrom. He is an Oxford professor, and the title of the book is Deep Utopia: Life and Meaning in a Solved World. Professor, first of all, it's great to have you here. This is the reason why you have to go to the bookstore, sir. Okay, I was rummaging through the bookstore. I saw your book. I said, okay, this looks interesting. I read your book and then I had my assistant reach out to you. The book is fantastic. Before we get into the book, let's talk about your background and tell us how this became your passion. Well, I mean, my background is I grew up in Sweden, and then I've been the founding director of a research institute at Oxford University since 2005 called the Future of Humanity Institute.
Starting point is 00:01:24 And so I was a professor at Oxford until earlier this year, when I resigned. And I've long been interested in the impact of transformative technologies on the human condition. My previous book, which came out in 2014, was about the future of AI and what could go wrong and what we might need to do to avoid that. And then this recent book, Deep Utopia, looks at the other side of the coin. What happens if things go right? Yeah, so let's go back. That book, written or at least published in 2014, was titled Superintelligence.
Starting point is 00:02:07 I actually went to a lecture. I have to confess, I didn't read that book, but I went to a lecture at Columbia University when that book came out where there was a group of people debating the merits of your book and suggesting, in the Elon Musk sort of way, that there were catastrophic risks associated with artificial intelligence. This new book, Deep Utopia, is the opposite of that. So where do you stand in that debate?
Starting point is 00:02:39 I know you've written a pro and a con. Where do you stand? It's not so much that my views have changed. They have always been there, both the belief that if things go well, there could be enormous upside, and also the belief that it is by no means guaranteed that things will in fact go well. It's just that the previous book was focusing more on one side of that. And now I'm trying to dive more deeply into where we would actually end up.
Starting point is 00:03:07 Suppose we do develop machine superintelligence. We solve the safety problems so they don't all run amok and kill humans. And suppose we get our act together as humans as well, so we don't use these tools to oppress each other or wage war or do other destructive things. Take the best possible case scenario. Then we end up, I think, in this technologically mature condition, because if we have superintelligence, it will help us invent all kinds of other technologies.
Starting point is 00:03:37 And then we face these very profound questions, I believe, about the meaning of human life. Like, what would you do all day in a world where AIs could do everything much better than we could do? Well, I mean, there's a lot there. I guess the first thing, you know, because I'm meeting you for the first time on this podcast, I read the book and said that Professor Bostrom is an optimist on the human condition
Starting point is 00:04:06 and a long-term optimist on what you just said, we get our stuff together. We start to figure things out better as a human race. Now, am I right on that? Are you an optimist? No, I mean, I've always struggled with that question, because people ask me. I'm kind of attacked from both sides: some people accuse me of being this kind of anti-tech doomer type. And then at the same time, other people hate on me for being this kind of techno-optimist, transhumanist. I sometimes wish I could just invite all of these different
Starting point is 00:04:40 sets of people into the same room and they could hate on each other directly, cut out the middleman. But what are you, sir? Are you the optimist that I see you as? Or are you, I mean, I think we don't know. I mean, we have a lot of uncertainty about how this will go. We've never been through an AI revolution before. And so we just have to recognize that, as far as we know right now, both of these seem in the cards. I mean, if you have to have a label, I guess a fretful optimist, maybe.
Starting point is 00:05:11 Okay, all right. Let's go in that direction for a moment, at least: if we get things right. You write in the book about entering a period of post-instrumentality. Describe what that means to us. What is post-instrumentality? It would be a condition in which instrumental effort is no longer needed from humans.
Starting point is 00:05:37 Instrumental effort being the kind of thing where you do something in order to achieve something else. So maybe you go to work in order to get a paycheck. Or maybe you go to the gym in order to be fit and healthy. And so a lot of what we do has that structure. We do X to achieve Y. And so in this condition of technological maturity, it seems we would enter a post-instrumental condition, because for almost all outcomes we might seek, there would be a shortcut that wouldn't involve our own effort.
Starting point is 00:06:08 So this is a more radical conception than what we might call a post-work condition. Yes. A post-work condition would be one where we wouldn't have to work to make money, because no human labor would be economically valuable. But that still would leave open the idea that there's a whole host of other things you need to put effort into. If you want to be healthy, you can't hire a robot to go to the gym on your behalf in a post-work condition. If you want to have a good relationship with your kids, you have to spend time with them yourself, etc. But the post-instrumental condition is more radical.
Starting point is 00:06:41 For example, instead of going to the gym, you could maybe take a pill that would produce the same physiological effects in your body. Okay, so I'm going to paint two scenarios for you. I want you to react to both. Okay. The post-instrumentality scenario: is it one where we become lazier, or is it one where we become better? I mean, what happens to the universe in that scenario? And then there's the Armageddon scenario, where the technology takes over and decides that human beings are inferior and figures out a way to eradicate human beings, right?
Starting point is 00:07:26 I mean, those are the two scenarios, right? Well, I think there are more, but yeah, certainly those two. There are more. You're right. I mean, you're a way more complex and smarter guy than me. I'm just trying to tease out of you, because of your book, what happens to mankind, basically, and to womankind.
Starting point is 00:07:51 Well, so the second scenario is in a sense what the previous book, Superintelligence, explored. We fail to align the AIs and then they become some antagonistic force. So that's the one type. In this book, let's assume that doesn't happen. We survive, we prosper, there's material plenty. Then you ask, would that mean we all become lazy, kind of lying on the couch with robot butlers serving us? But I think it would depend on our choice ultimately and our human values. We would have the capability, if things go well and we reach this condition of technological maturity, to really shape not just the world around us, but ourselves as well, in whichever way we would want.
Starting point is 00:08:43 I guess what I'm trying to understand is: we worked the fields, we figured out how to irrigate, we farmed for ourselves, we created these small city-states, we then industrialized. And I don't know, I'm still working super hard, Nick. I mean, it feels like you are too. I mean, you're a very productive guy. And yet we have all of this technology around us, and we have all of these things that have made us more productive.
Starting point is 00:09:13 I guess what I'm getting at is, let's say we go to that next level where it's writing our papers, it's writing our books, it's doing our trades, it's doing our stock market transactions, it's doing our medicine. Does it mean that we unplug, or do you think we quantum leap again into something else more productive? Well, right now, I mean, most of the work I'm doing is because it's the only way for me to achieve various things. Like, for example, the book: I couldn't just press a button and have the book written, actually. I had to spend the time writing it myself, right?
Starting point is 00:09:51 Then you probably have a whole bunch of goals you're pursuing that require you actually to invest time and effort and skill. Now, in this scenario where AIs truly achieve general intelligence, and then superhuman levels of general intelligence, I think it is much more difficult to think of tasks where we would actually be helpful. I think there are a few. You could look at cases where, say, consumers have a direct preference that the work be done in a certain way.
Starting point is 00:10:20 So right now, maybe some consumers pay a little bit extra for a trinket that was made by some politically favored group or some indigenous craftsperson, as opposed to produced in a sweatshop in Indonesia or something, right? Even if the actual trinket is equivalent, some people care about how it was made. So that would be an example where maybe human work would still be needed. Or people may prefer to watch human athletes compete in the Olympics, even if there were robots who could run faster or box harder or whatever. So aside from those carve-outs, I think it does look like we could have full unemployment.
Starting point is 00:10:58 Full unemployment. Right. That's kind of the goal of AI. Yeah, exactly. No one needs to work, right? But full unemployment: when you say unemployment in our society, people are concerned, but in this society there's so much abundance and there's so much wealth that nobody really has to work. Right. Yeah. So, and we see our education system now, for example,
Starting point is 00:11:23 to a large extent, it's structured towards producing workers who can be productive and contribute to society. So you take kids in, you put them at a desk, you train them to be disciplined; they receive assignments and they're graded. And all of that is because right now in the world, there are a lot of jobs that need to be done. So we need to have people who go into offices and get assignments, etc. Now, in a future scenario where we didn't need people to do that, I think we would want to change how we think about education. For example, maybe kids could be educated to enjoy life and to develop an appreciation for, you know, the art of conversation and literature and hobbies and appreciating nature and all kinds of
Starting point is 00:12:06 other things, instead of being economically productive. Okay. I mean, it's super, super valuable. I mean, I think this is a fantastic book on so many different levels because it's very, very thought-provoking. So let me ask you this, Nick. What timeline do you give this? Similar to your last book, could we see deep utopia, the problems that you describe, coming sooner than perhaps anticipated? Yeah, I think timelines have been fast. Since Superintelligence came out, the last 10 years in AI have been really remarkable. And I see no signs currently of it slowing down.
Starting point is 00:12:50 So I think there is a real chance this might happen in the lifetime of a lot of people alive today. And it will be not just the biggest event in our lifetime, but perhaps in the history of the human species, this transition to the machine intelligence era. And it's remarkable, just over the past few years, to see how much the public discourse has changed. When I was writing Superintelligence, this was a completely neglected topic. Nobody in academia worked on it;
Starting point is 00:13:21 people kind of snickered: ha-ha, science fiction, futurism, whatever. And now, of course, world leaders are discussing transformative AI, all the leading AI labs have teams specifically researching scalable methods for AI alignment. And it's been just a radical shift,
Starting point is 00:13:38 where these ideas that for many years, you know, I was talking about with a few colleagues. And I think, yeah, increasingly this will just become a main focus for public conversation, as more people realize what's about to happen.
Starting point is 00:14:49 When I finished reading your book, I wanted to ask you this question, but this is a big permutation. So we live in this great mystery, of course. We don't know our origin. We don't understand what happens to us after our demise. It seems like we're a fairly sentient, fairly intelligent species, and yet we live under this cloud of uncertainty and this mystery. Let's say that artificial intelligence was able to resolve that. Suppose we were able to understand the great mystery of what we're living in. The question I have for you, and this is a little bit more of an esoteric, ethereal question: does that make our lives more purposeful or less purposeful? Meaning, I sort of feel like the specter of death forces lots of meaning and lots of
Starting point is 00:15:41 intensity in our lives. But suppose we had an artificially intelligent situation where we could live forever; or even if we didn't live forever, we understood the great mystery. What do you say about all that? Yeah, I mean, I think it's possible it might make our lives less purposeful after this has happened. I think, though, it makes them perhaps more purposeful right now, when we have this immense obligation to try to make sure that this actually pans out right. It might be that the human
Starting point is 00:16:13 lives we're living now are much more consequential. Maybe they will shape humanity's destiny for millions of years to come, much more so than those of any other people previously in history. So it has this paradoxical effect of increasing the purposefulness of our current lives. If we do succeed, then, you know, we're home and dry, and we could maybe spend our time enjoying the future rather than being driven by some very strong purposes. Yeah, but again, I guess that's the issue. So if I lose my purpose, or my current trajectory of purpose, do I find less meaning in life? I guess that's the big question.
Starting point is 00:16:51 Yeah, yeah. So that's exactly the kind of question that the book really dives into, because a lot of the discussion about these issues has so far been very superficial. People think, well, what if there were some increase in the unemployment rate? What would we do, training programs or UBI or whatever? And people's thinking stops there. They don't think through this whole situation where robots can do not just some tasks, but all the tasks, and then what happens. So that's why I wrote this book. And so it's, yeah, really diving into that,
Starting point is 00:17:24 like, about meaning and purpose, as well as other values that we have. And I think that there are significant challenges to those values in this condition of technological maturity. And we will need, I think, to rethink and maybe drop some of the values that we currently hold; others would survive. And we need to reconsider what really gives human life meaning, which is a fairly fundamental question. I'm ultimately positive that there is a good outcome of that. If you go through this process, it will be different. Not everything you might value about the current world would still obtain, but there will be
Starting point is 00:18:03 other values. And overall, I think it might be such that if people look back on the current time, they will just shudder in horror at what kind of utterly atrocious lives we were leading in 2024, by comparison with our descendants, or ourselves if we make it through. It's interesting, we're doing that already, though. See, we're looking back 100, 200 years ago. We're saying, oh, wow, our ancestors, there were slaves, there was mistreatment of different people due to their skin color,
Starting point is 00:18:37 as a result of which we're rebuking and condemning people from 200 years ago. I guess the question is, in 2224, perhaps there will be a very large group of people that rebuke us. They'll say we were spending too much time emitting carbon, we were spending... whatever it is. Yeah. I think that, you know, I don't know. That's a very good point. Yeah, I think it does behoove us: when we look back and judge all other previous generations, we need to also stop and reflect on how we will look in the eyes of posterity. Probably we will appear pretty rotten as well, laboring under huge moral errors and stuff. So I think it is probably good for our wisdom to pause every once in a while to reflect on how much of what we currently take for granted, as just what everybody believes, might clearly be wrong. And yet we look at every other generation and we now see that they were wrong or misguided about many things.
Starting point is 00:19:40 And that's surely holding true for us. But even setting aside the moral failures of earlier generations, along with their heroism in other respects, just take the sheer grinding poverty that most people experienced 100 or 200 years ago. We wouldn't want to go back to that. But I think our current age, even in the richest countries, in the richest strata of society, will seem equally impoverished. Because although maybe some people can buy a lot of stuff, there are a lot of things you can't buy with any amount of money.
Starting point is 00:20:15 For example, perfect health, the ability to live to 200 years, the ability to upgrade your brain to get, like, IQ 400, or the ability to feel happy all the time. You can't just spend X money and buy happiness. In the future, these things would actually be possible. There would be technologies that would allow you to achieve those effects, and that might be very cheap. So, you know, I've made this observation, and I want to get your reaction to it. I feel like there are parts of the world that are becoming less religious, more secular, more atheistic, frankly. But then at the same time that that's happening,
Starting point is 00:20:56 there are other parts of the world that are becoming more fervent, more dogmatic about religion. So why is that, in your mind? And then, if we enter this deep utopia, what happens to religion? Yeah, I don't know the answer. I think potentially it could have a larger role in this future utopian society, inasmuch as there would be fewer other things to compete with it. So right now, even if you are very religious, you might still need to spend most of your day just getting by, making a living, you know,
Starting point is 00:21:37 tidying up your home, doing all kinds of practical things. If you didn't have to do all of that, you could focus more on what ultimately matters, like contemplating the divine and developing your relationship to God, et cetera. Now, ultimately, it's a question of what people would choose; maybe people will choose different things there. But I think it's one of those values that is likely to survive the transition. So many other things, like priding yourself on being a breadwinner, would kind of be undermined if all the bread is just delivered automatically, without anybody having to make an effort, because robots are doing the baking.
Starting point is 00:22:17 But these more spiritual values would be plausible candidates for what remains in that condition to focus on. Right. Unless, of course, we've figured out what the great mysteries are, and then maybe we have more or less religion. It's very hard to know. I mean, it depends on what the answer is, right? Like, what actually is behind the big...
Starting point is 00:22:38 It depends on what the answer is. Although I will say this: I was in Europe this winter, and they were using a church as a conference facility. And I thought of my Catholic grandmother; I think she would have been horrified that they had repurposed the church for business meetings. Yeah, yeah. And yet, there are other parts of the world
Starting point is 00:22:58 where the religious fervor is perhaps as intense as it's ever been. So it's an interesting time to be alive. We are a motley bunch as a human population. What surprised you the most? You know, after all of your years of focusing on this, Nick, what in AI, and then in humanity, has surprised you the most? Well, specifically with AI development, the surprising thing is how anthropomorphic
Starting point is 00:23:27 the current leading-edge AI models are. If you look at GPT-4 or one of these other systems, they really are in many ways very similar to a human mind, including even in some of their foibles. So with some of these systems, to get the best performance out of them, you almost have to give them a little pep talk in your prompt. Like, you're going to think really carefully about this, think step by step, this is important.
Starting point is 00:23:53 The idea that an AI actually gives you a better answer because you say those things to it beforehand would have seemed utterly ridiculous 10 years ago, right? That's not how computers behave. And yet that's where we are today. So that's surprising. Also surprising is that we apparently have a fairly extended period of time when we have AI systems that are roughly human-like
Starting point is 00:24:17 and practically useful for a wide range of tasks, but not yet radical superintelligence. It was not clear that there would be an extended period of many years where that was the case. You could imagine a different scenario where basically AI did nothing, and then somebody in their basement figures out the key missing thing, and suddenly it fooms and becomes superintelligent over a week or something. So those are two surprises.
Starting point is 00:24:40 And I think this relatively slow, more continuous, gradual pace of development in AI has also enabled more people to start to realize what is going on. So you do see more involvement from governments, investors, et cetera, in AI development, because it's not so much foresight; they can kind of see, year by year, capabilities increasing, and then more people can connect the lines than if it were coming as a bolt from the blue. Elon sometimes teases that we're living in a simulation.
Starting point is 00:25:15 Do you think we're living in a simulation, Professor? Well, many have asked. I was actually the person who wrote the original paper on this, the simulation argument, back in 2001 or so. I think there's a significant chance of this, but I have refrained from trying to attach a particular probability to it. Yeah. Well, I mean, one of the things that would reinforce it is that we've got this world that, you know, is sort of incomplete, but man or woman has been left to complete it, right?
Starting point is 00:25:53 I mean, you know, we did have the capability of broadcasting and using radio transmission 1,000 years ago or 2,000 years ago, but obviously we didn't understand how to deploy it or use it. Same thing with steel manufacturing or other things. So it feels like the world was put together and left for man and woman to complete. Does that make sense? Yeah, although that could equally well be the case if we are not in a simulation, it seems. Right.
Starting point is 00:26:25 So I'm not sure whether that particular property of the human experience tells us anything one way or the other with respect to the simulation hypothesis. Right. Right. Well, we're at the end of my podcast. My production team and I have come up with five words or phrases. I'm going to read out the word, then I'm going to ask you to react to it. You can give me a sentence, you can give me a word or some thoughts.
Starting point is 00:26:52 Okay, you ready? So we're going to start with the word future. I say the word future, Nick, what do you say? And so I'm supposed to answer with one word or just a phrase? No, just whatever's coming into your head. You can give me a paragraph if you want. I don't know. Hold on for dear life.
Starting point is 00:27:11 Hold on for dear life. So you see, you know, so many different permutations of what the future could be that it's uncertain. Yeah, buckle down. Yeah, okay. How about trust? I say the word trust.
Starting point is 00:27:30 What do you think? And maybe, relatedly, faith. Yeah, I think we basically have to have that. I think there are more things in heaven and earth than are dreamt of in our philosophy. And in particular with respect to these changes that we are bringing into the world, we are really quite clueless about the bigger picture, and hopefully we can trust that it all works out for the best. Yeah. All right. I mean, so trust comes with some level of optimism. How about, I say the word humanity, you say what?
Starting point is 00:28:09 Yeah, well, work in progress. Well said. I say the word superintelligence, you think about what? I think that there is a lot of headroom above human cognition. So I don't think we should be thinking of superintelligence as kind of like a really clever, nerdy human who is a little bit faster at solving the math problem. It's really a whole other level, more like comparing humans to apes or something like that. But perhaps much bigger than that. Right.
Starting point is 00:28:52 We're smart. We're not as smart as we think we are. How about deep utopia? I say the words deep utopia. You think what? Hopefully. I hope that these problems that the book is wrestling with will become actual problems that we will have to solve in the real world,
Starting point is 00:29:12 which I think they will, if things go reasonably well with respect to these practical challenges we have to face between now and then. Very well said. Well, listen, the book was fantastic. The title of the book is Deep Utopia: Life and Meaning in a Solved World. It's by a professor at Oxford University, Nick Bostrom. Thank you so much for joining us today on Open Book. Thank you, Anthony.
Starting point is 00:29:41 I enjoyed it, Nick. I hope...
