Big Technology Podcast - AI's Doomsday Philosopher Says Maybe It'll All Be Totally Fine — With Nick Bostrom
Episode Date: August 7, 2024. Nick Bostrom is a renowned philosopher and bestselling author of "Superintelligence" and "Deep Utopia." He joins Big Technology to discuss the potential outcomes of advanced artificial intelligence, from existential risks to utopian possibilities. Tune in to hear Bostrom's thoughts on how humanity might navigate the transition to a world of superintelligent AI and what life could look like in a technologically "solved" world. We also cover the evolution of AI safety concerns, the concept of effective accelerationism, and the philosophical implications of living in a post-scarcity society. Hit play for a mind-expanding conversation about the future of humanity and the profound challenges and opportunities that lie ahead. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
The Oxford philosopher who warned the world about the dangers of AI superintelligence comes
on to talk about how we might end up in utopia instead.
And whether we should want that, that's coming up right after this.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech
world and beyond.
We're here today with Nick Bostrom.
He's a philosopher and the best-selling author of Superintelligence and also the author of a new
book, Deep Utopia, Life and Meaning in a Solved World.
I have it here with me today.
Nick, welcome to the show. Great to see you again. Good to see you. You know, I've really struggled to figure out where exactly to start this interview, because I was going to ask you about your past talking about superintelligence and the dangers of that, or the beginning of utopia. And then you know what I said? I'm just going to go back to the last conversation that we had. I'm not sure if you recall it, but I was writing my book Always Day One and I had a Black Mirror chapter talking about what could go wrong with artificial intelligence technology. And I was like, all right, I'm going to call Nick Bostrom for the
Black Mirror chapter, because you became famous predicting, or talking about, the probability that we
might end up with dangerous superintelligence and we should be prepared for that. And this is what
you told me. You said: I don't necessarily think of myself as being in the dark mirror. People
come to me for a quote on the negative side of AI, and then other people will read me saying
something negative, and then more people will come to me to get the negative side. It kind of gets
to be self-amplifying, and people then assume I only have negative things to say about AI. We spoke in
2019. And now you have this book coming out that is a little bit about how we might end up in
utopia with AI. And of course, there are some concerns there. But did I kind of catch you
in part of the journey between the dangers of superintelligence and maybe the utopic living
that we might end up in? Not really. Both these sides have always been there in my thinking
and my expectations. The previous book, Superintelligence, did focus most of the pages on what could
go wrong. That is true. It seemed at the time more pressing to try to draw attention to the
potential risks of developing artificial general intelligence and superintelligence so that we could
hopefully develop ideas for how to avoid those pitfalls. But even back then, it always was clear
to me as well that if things go right, they could go very right indeed and that there was this
enormous upside as well. And the more recent book tries to like actually analyze what that would look
like if things, as it were, go maximally well. And we're going to get back to some parts of the
interview that we did back in the day as we talk through like what this utopia can look like.
In the meantime, I do want to talk to you about what it was like to be effectively this focal point
for AI fear or negativity. Even if you had this more nuanced view, you certainly have lived
sort of the perspective of being this vector for people with fears of AI; they would come to you. And
this is from a New Yorker story written about your book in 2015. The title was "The Doomsday
Invention," and this is describing your argument: true artificial intelligence, if realized,
might pose a danger that exceeds every previous threat from technology, even nuclear weapons, and that
if its development is not managed carefully, humanity risks engineering its own extinction. Central to this
concern is the prospect of an intelligence explosion, a speculative event in which AI gains the
ability to improve itself, and in short order exceeds the intellectual potential of the human
brain by many orders of magnitude. So first of all, how do you feel looking back to that
portrayal? Well, not wrong. I think still today that there are very substantial risks in developing
greater than human machine intelligence, including existential risks.
They have become a lot more widely recognized since the book Superintelligence came out.
Back then, it came out in 2014, but it had been in the works for six years prior.
At that time, this whole idea of, well, A, super-intelligent AI in the first place, and B,
that it could pose existential risks, was very much neglected.
Certainly academia completely ignored it until the book came out. And more broadly in the world, there was basically nobody, maybe a handful of people on the internet, who were starting to think about the alignment problem.
That has radically changed in the intervening years.
Now, there are many research groups focusing specifically on trying to develop scalable methods for AI alignment, including all the frontier AI labs.
Also, the governance challenges have started to receive a lot more attention, including from top-level
policymakers in the last two years. So the landscape has shifted dramatically. And it's, I mean,
I guess it's in some sense validating to see that these concepts that used to be super fringe
and totally outside the Overton window are now kind of very mainstream. Where do you think
society is in terms of their concerns with AI now? Is there a proper amount of concern? Are they
overly concerned? Because the truth is that the sort of doom contingent has really elevated
dramatically and is very prominent, you know, even in the most serious research houses,
OpenAI, Anthropic, et cetera.
Yeah, there has been this surge of the, yeah, the doomers in the last, I'd say,
two years or something like that, where they really started to become vocal, like on Twitter
and stuff.
I think probably we are still below the optimal level of concern, but the kind of first derivative
is that we're moving rapidly towards increasing levels of concern.
And I've started worrying a little bit about us, not, as it were, increasing the level of concern to the optimal level and then stopping there, but kind of overshooting.
If this negativity about AI snowballs, like it might, in some scenarios, get out of control and we might then lose out on the enormous upside that could be unlocked by developing the good kind of superintelligence.
What does that overshooting look like?
Is it just that these concerns grow, I mean, bigger and bigger?
Well, there are different ways.
I mean, one scenario class, which still looks unlikely to me, although less unlikely perhaps
than two years ago, is that you could imagine so much stigma developing that we get
some ban. Like, maybe it starts with a pause or something, but then after six months
nothing has really changed, so you extend the pause, and then you maybe set up regulatory agencies
to police it, and all they are doing is trying to prevent dangerous development.
Or just a kind of negativity might develop into a new orthodoxy where anybody who says
anything positive about AI gets cancelled or shadow banned; it just becomes impossible to.
And so it still seems a bit unlikely, but we can't really accurately predict the kind of social
dynamics that will be playing out.
and now under different circumstances than historically,
because we have new technologies available now
that might at some point make it possible
to lock in an orthodoxy in ways
that were never possible historically
because there could be much more fine-grained control
over what people are saying.
So that's one class. Another might be
that instead of AI being banned,
it kind of gets escalated to the national security level,
which may or may not be good,
but you could certainly imagine scenarios
in which instead of it being a kind of peaceful
pursuit for civilian purposes by cooperating scientists generally motivated by some sort of,
you know, humane, cosmopolitan goals.
Like, it becomes more like a kind of, how can we get an edge over our military rivals?
And it's not clear whether in that context the outcome looks rosier than if it were more
like a kind of freewheeling civilian enterprise.
So that could be like a concern level with possible negative effects, even short of stopping
AI development altogether.
I mean, if you look at today's large language models, and I know we know this isn't the end
of development, but they are very smart in some areas and really stupid in some areas.
And they continually fail tests that are put in front of them to try to demonstrate real-world
understanding.
Where does this fear come from?
And what could eventually happen as we continue this path of development that could lead
to some of the negative outcomes that you and others anticipate as one possibility?
Yeah, I mean, I don't know how much we should read into the fact that current AIs still have limitations; that will remain true until it isn't true any longer, right?
But the idea has always been that we can anticipate that AI capabilities most likely will continue to increase and at some point AI will succeed.
And then we can realize that at that point, like some really powerful dynamics will kick in.
And we could try to use the intervening time to prepare ourselves.
in different ways, including ideally solving the alignment problem, you know, maybe making
some progress on the governance challenges and on like the ethics of, there's a bunch of difficult
ethical questions that will arise with like the moral status of digital minds and the
distribution of benefits, like a lot of quite tricky questions that we could use these intervening
years to try to, you know, think about. Unfortunately, we wasted most of the time we had. I mean,
I think, I mean, I, to me, at least, even back in the 90s when I started thinking about
these types of things, it was clear that we had a good chance of eventually developing AI and
that it was going to pose these risks. We could have used those decades to do our homework.
And it's become increasingly clear, but really only in the last maybe five years or so has
there been a significant effort to actually work on these issues.
So we still have some time left on the clock. We don't know how much, but hopefully we can at
least make good use of the remaining months and years.
Yeah.
Why do you believe that it's a certainty that we'll sort of achieve artificial general
intelligence?
Certainty is too strong a word, I think.
But you feel it's very likely.
Yeah, at least conditional on science and technology continuing.
And like we could have some sort of civilizational collapse or we could eventually, you know,
go extinct or destroy ourselves in some other way, not unrelated to AI.
There are like developments in synthetic biology and other new weapon systems and whatnot.
But if kind of these development efforts in hardware and in algorithms continue,
then it looks very likely that we will succeed in this.
I mean, from first principles, we have the human brain as an existence proof that general intelligence is possible.
There is no reason at all to suppose the human brain is in any sense optimal,
like neither from a hardware point of view
nor presumably from the algorithmic point of view.
So just as we have machines
that are physically much stronger and faster
than human bodies or any animal body is
then likewise we will eventually have
cognitive systems that are much faster and cleverer.
So that's like a very high level argument,
but then you can also look just at the kind of advances
we are seeing where more and more things
that used to be impossible for AIs
to do have now been done, and there are just a lot fewer of these kinds of milestones left.
There is this phenomenon that like before it is done, it looks really hard.
Once AIs have done it, then we kind of quickly forget just how impressive it was and
just take it for granted.
Like, yeah, of course computers can play chess.
Like, of course, what's the big deal?
But like at the time, it was a big deal.
And then you can see, like, oh, we have AIs that can play Go,
they can see, they can imagine, they can write poetry, they can write computer programs,
they can talk to us in ordinary language, they like can pass like, you know, undergraduate level
exams in all these different subjects. Like this, this is a lot of stuff that like to people
30 years ago would have been like, wow, you must be really close to AGI if you have done all
of these things. Okay, so let's talk about utopia a little bit. Just give us your perspective
on what could go right in the best case. Well, why don't we do it this way: what could go wrong in the
worst-case scenario of AI, what could go right in the best-case scenario of AI, and how do
humans have an influence in terms of which direction we go? So I think there are the real
x-risks, existential risks, that will arise as we develop, and possibly not even
superintelligence, but you could imagine even something short of that making it very easy to develop new
weapons of mass destruction using synthetic biology or other means.
Are you more concerned about humans using AI to hurt each other or AI hurting us?
Well, I think they are both worth worrying about.
I think with the x-risks, I mean, maybe a slightly larger weight on the AI being the kind of
agentic part there.
But certainly, we really need to do a decent job on both sort of the alignment and the
governance for us to have a good outcome.
So yeah, so now on the upside,
like there is a huge unlock and a lot of that is just a removal of a bunch of negatives.
If you look around at the world as it is now, it's not that rosy a picture in many ways.
It's quite a horror show with people dying from, you know, Alzheimer's or kids getting cancer
and like starvation and people being bombed and all kinds of things, or just at the more mundane level,
people spending most of their adult life, you know, working in a sort of boring occupation that gives them no fulfillment, but they just have to do it to, you know, to pay the rent.
And like, you know, headaches and stomach aches and like all kinds of, just the totality of all of this, extreme misery and very common more everyday misery, that's just within the human sphere.
And then you add the animal kingdom. Ameliorating all that suffering
would already be, I think, a very strong argument that something at some point needs to be done here.
It can't just go on like this.
I mean, do we want another, you know, 10,000 years, 100,000 years of just this?
But I think on top of that, there is, like, the potential to also unlock, like, new levels of flourishing beyond those which are possible, even under ideal conditions in the current world.
That's a lot harder to paint a very concrete picture of because we are sort of limited in our ability to imagine and appreciate, just as, you know, if you imagine like the great ape ancestors of Homo sapiens kind of thinking about what could be so good about being, you know, human.
And so they might like realize a few things like, oh, we could have banana plantations and have like a lot of bananas and stuff.
And that is true, we can have a lot of bananas now, but there's more to being human than just
unlimited bananas, right? Like we have, sort of, you know, music and poetry and film and humor and
romantic love and science and all kinds of stuff. So similarly, if we
unlock, as it were, a greater space of possible modes of being, there are probably some in there
I'm sure that are extremely valuable, that I think AI would be the most plausible path towards realizing.
If one then really dives in and tries to think more specifically about what would the best possible continuation of a life starting from our current human starting point look like, then there are some quite interesting philosophical questions that arise.
And so this book, Deep Utopia, it's not really an attempt to sort of say, well, before we looked at the downside, now let's make the case for how wonderful the upside could be. I think the upside could
be extremely wonderful, but that's not sort of the thrust of the book. It's more like, let's
just look at this. What would happen if we actually did succeed in creating a solved world,
as I call it, like where all the practical problems are already solved, or to the extent that
there are problems that are not solved. They are in a way better dealt with by advanced AIs and
robots than by us. And there are some aspects of that condition that at least prima facie
look quite unappealing to our current sensibilities. We often define our sense of self-worth
on the idea of being a contributor. Like you're a breadwinner, you make like a positive difference
in the lives of your friends or society at large. You bring value to the world.
So much of our existence is kind of constructed within the constraints of various instrumental necessities that have been with us since the dawn of the human species.
There have always been a lot of things that we need to do just to survive.
And if you remove all of those, there is at least initially this sense of kind of disorientation or an undermining, like we feel, kind of, what's the purpose?
Like we would just be these blobs.
But this is different from what we spoke about the last time.
You and I were on the phone.
This was in 2019, I think.
You said that we'd have to find some new sources of self-worth.
This is from our conversation that I put in my book.
We'd have to find some new sources of self-worth.
But in Disneyland, the job of children there is to enjoy the whole thing.
And Disneyland would be a rather sad place if it weren't for the kids.
So you say we would all be like kids in this giant Disneyland,
maybe one that would be maintained and improved by our AI machine tools.
So effectively, even if we didn't have to do any sort of sustainment work,
if that gets turned over to AI, we could actually be quite fulfilled in life.
Yeah.
So how do you get from there to where you are today?
Well, I think that's basically correct.
But we can distinguish two different senses of fulfilled or of having purpose.
So there is, first of all, what you might call the subjective sense.
It's like the feeling of fulfillment or the feeling of having purpose.
It's like the emotion of being motivated and you're really excited about what you're doing, right?
Like that kind of psychological state, that certainly you could have that in a solved world, in utopia.
The utopians could have like extreme levels of motivation and immersion and subjective purpose.
That's easy.
That's like a checkmark.
And more broadly, you can go through different plausible human values.
And for some of them, you can just, right off the bat, say, well, yeah, sure, of course.
That would be trivially easy to do in utopia.
So in this case, through the psychological engineering techniques that they would have.
I mean, already you could have, like, imagine a drug without side effects and without addiction potential.
That just induced a state of fascination and motivation.
We already have simple versions of that.
But you could imagine far more sophisticated ways that would give the utopians very fine-grained, direct
control over their mental states and their psychology.
That would follow from technological maturity.
Right.
So that's easy.
Some people, however, think that there is also a more objective concept of purpose,
where it's not just that you feel motivated, but that what you are doing is actually
objectively worth doing.
It's a little bit less obvious to what extent the utopians would have that, inasmuch as,
at least at first sight, it looks like anything they could do,
they wouldn't have to do, because they could just press a button and a machine could do it instead.
Except for those few things that you bring up that actually we want humans to do.
For instance, like ordain a marriage, right?
Like that is something or read a poem.
People might want these things to be done by humans.
Potentially, yeah.
So you could automate everything.
But except there might be certain jobs where, as it were, the consumer has a direct preference that the job be done by a human,
in which case, as it were almost by definition, it's not automatable. Exactly. And this whole idea
of a solved world is basically where AI can effectively take care of everything, all of our needs,
all the production, and we are in this utopia because the machines have done all the hard stuff
that we don't want to do anymore. So you could imagine a very dystopian scenario with advanced
technology if you have a sort of totalitarian despotic regime, right? But imagine also that to whatever
extent governance problems can be solved, they have been solved.
Maybe you can't solve governance, but to whatever extent there are better and worse
social-political structures, imagine we get
something at the good end of that, combined with technological maturity.
That's basically the definition of a solved world.
Sorry, I know I took you on a bit of a tangent there.
No, so yeah. So there are kind of layers to this. So you could say, first, well, you could have
a simple utopia, which might be a kind of post-scarcity utopia
where we just have abundance of material goods.
So we already, if you're fortunate enough,
to live in like a developed country,
you know, with a decent education, etc.,
you're already pretty close to that.
You might not be able to have like the ideal yacht of your dreams.
Like there are some limitations.
But if you sort of plot a line that has as its starting point the hunter-gatherer
and as its end point complete post-scarcity,
I think we are more than halfway there.
Like it's a bigger difference to go from not having enough to eat
to having enough to eat than, you know,
to get like a slightly more advanced version of an iPhone
or like a third house if you already have two houses.
Like, there are diminishing returns.
So that's like you could first consider this concept post-scarcity utopia.
Okay.
So then what's a level of, as it were, more radical
utopia than that? Well, you could have a post-work utopia where not only do we have plenty,
but we don't have to work to produce that plenty. So it's not just that we work all day long
and then we have a lot of money and we buy stuff. But imagine you had all this plenty and
you didn't have to work. It's slightly more radical conception, but not that radical. I mean,
there are already people who are born with a trust fund or something and they never have to work
and they have plenty. Again, there are limits to how big their palaces could be. But it's at
least some approximation. But I think we can then go further and consider even more radical
conceptions. So I've already alluded to, there is the post-instrumental utopia that you could
have, where it's not just that we don't have to work, like, to make money, but we also don't
have to do any of the other things that we currently have to do for instrumental reasons.
So if, you know, if you are Bill Gates, you still have to brush your
teeth, you still have to do a whole host of things just in your everyday life to get the
outcomes you want. There's a limit to how much you could ask your assistant to do or that
you could, you know, but in this scenario, like a lot of those other instrumental reasons
we have for doing things would also drop out of the picture with like super advanced automation
technology. And I think there's a step further than that, which is, call it a plastic
utopia, where we also have complete control over ourselves, over our own bodies, minds, and
mental states, using advanced biotechnologies or newer technologies.
We're going to achieve that? That's wild. Yeah, but I think if you consider
what would be possible at technological maturity, which we can at least place some
lower bounds on through a kind of theoretical analysis, we can sort
of estimate what kinds of computational systems could be built. We can see what kind of molecular
manufacturing systems are possible to build in our universe, even though we can't currently
put them together with the tools we have now. We can see that there is like a path there.
Other things like cures for aging and stuff like that, we don't have them yet, but there
is no law of physics that prevents people from living indefinitely if you had repair
technologies, et cetera, et cetera; perfect virtual realities; perfect ways to manipulate
the brain that are better than drugs.
Like, there's, in fact, a table in the book that outlines some of the
affordances you would have at technological maturity.
And maybe there will be additional things we haven't yet thought of, but at least these.
And so they would, I think, enable us to instantiate this condition of plasticity, where
human nature itself becomes malleable.
That's crazy. So that means there are further questions about purpose. So right now, if you
didn't have to work for a living, maybe some people would say, well, you know, maybe I would start
going to the gym more to get fit, right? Like, you can't hire a robot to, you know,
run on the treadmill on your behalf. But with plasticity there would be a shortcut to that:
you could pop a pill that would induce exactly the same physiological and psychological effects. And we're already
on the way there with Ozempic.
Yeah, exactly.
That's like one more step in that direction.
And so the thing is with superintelligence,
you get the telescoping of the future.
So all these sort of science fiction-like technologies
that maybe we would develop
if we had like 20,000 years
for human scientists to work on it.
We probably will have a cure for aging
and perfect virtual reality
and space colonies and all the rest of it, right?
but all of that could happen very quickly
if you have superintelligence
doing the research and development.
So you get this kind of telescoping of the long term.
But yeah, so then there's like a further set of things
that currently fill the lives of people
that we wouldn't need to do,
including things we do for fun.
So maybe some person says, well, you know,
if I didn't have to work,
maybe I would play golf all day long.
But why would you play golf all day long? Well, because it's fun, it gives me joy.
Let's suppose somebody says that. Well, then, in this condition of plasticity, there would be a
different and easier way for them to get joy: they would just pop a pill and they could get
exactly the same level of subjective well-being as a beautifully manicured golf course could induce.
And so, I can't even imagine that type of world. That's crazy. Um, yeah, so it does
then require a fairly fundamental rethink
of what it means to be human
in this radically transformed condition.
But it is kind of the implicit telos
of our current strivings, if you think about it.
So little problems come up,
and we try to solve them.
And then there's another problem.
So, our food rots,
let's invent the refrigerator; oh, we get fat,
let's invent Ozempic; oh, our cars pollute,
let's make cleaner engines.
But if you kind of extrapolate and take all of that to its limit,
then you would end up in a situation where we can do everything with no effort.
That would be kind of the limit of technology.
AI, the goal of AI has all along been not just to automate a few specific things,
but to provide the technology that allows us to automate all tasks.
Like AI hasn't really succeeded until all intellectual labor can be done by machine.
And so, I think, we don't think about it like that. But if you look at what all of this effort is, all these investments we have in science and technology, and our efforts to make the economic system more efficient, to allow kids to learn more in school, all of this kind of adds up to some sort of arrow of attempted progress in a certain direction. And you might as well at some point stop to think about what happens if we actually get there.
And then we do end up, I think, in this condition of a solved world.
And the question is whether we're ready for that.
So, Nick, are you able to stick around for another couple minutes or do you have a hard out?
I could do a few more minutes, maybe.
Okay.
All right.
Let's take a quick break and come back and ask a little bit about whether we can handle that perfect world.
We'll be back right after this.
Hey, everyone.
Let me tell you about the Hustle Daily Show, a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show and your favorite podcast app, like the one you're using right now.
And we're back here with Nick Bostrom.
He's a philosopher, the best-selling author of Superintelligence, and author of the new book, Deep Utopia,
Life and Meaning in a Solved World. It's out now. Great book. Definitely recommend you pick it up.
So you basically said if we get to this perfect world, you think right now we're effectively
unfit to inhabit it. And in your book, we sort of look at, or in the early chapters,
you sort of look at the fact that we've increased our productivity, but we're using it
for consumption rather than leisure. And that's concerning to you. Is that part of the reason
why you think we're not quite ready for this? So where do we
fall short in our preparation for utopia?
Well, I think human nature is kind of forged and evolved under various conditions, including
conditions of scarcity and conditions where there are, like, instrumental demands on us.
We need to exert ourselves, make efforts, try, we need to work just to get by in life.
This has been true for hundreds of thousands of years.
It's still true to some extent today, although with certain relaxations.
Like, for example, food is much less of an issue for people living in wealthy countries.
And increasingly also for more middle-income countries where obesity is becoming an issue.
So there you can already see a little bit of a mismatch,
like where we kind of evolved to live under conditions of food scarcity.
And when that no longer obtains, unless we make adjustments,
like we kind of balloon in size and then we need to try to find fixes for that.
But I think a much more profound mismatch between where we currently are psychologically and biologically,
and our environment could arise if we suddenly moved into the condition of a solved world.
So there would need to be some adjustments in that, I think,
scenario if we wanted to take advantage of all the things that would be possible.
And what would those adjustments be?
Well, I think for a start, if we take, for example, human hedonic well-being,
which maybe is like one of the easiest things to look at,
just like the subjective state of positive affect.
Like, are you actually enjoying the present moment?
Does it feel good or is it like unpleasant?
So this is like a fairly fundamental dimension of our psychological state.
Some people think it's the only thing that matters.
If you are a hedonist, in the philosophical sense,
you think that the only bad is suffering and the only good is pleasure.
not necessarily in the sense of physical pleasure,
but in the broad sense of sort of positive mental hedonic tone.
So it looks like we are kind of designed in a way
where we have a fairly powerful habituation mechanism.
So if somebody's conditions in life improve a lot,
often maybe they win the lottery or something, right?
So they get really happy they win the lottery,
but very quickly this sort of hedonic tone
falls back to the baseline.
because we are not designed for permanent bliss.
We are designed in such a way that our reward system motivates us to produce more effort at whatever level.
Like, no matter how good our situation is, we are designed to always try to want to make it better.
And so we only get reward when things improve rather than when things are at a good level.
To a first approximation, it's not completely true.
I think people under better conditions are slightly happier than people under worse conditions.
and maybe a lot happier than if they're under really bad conditions;
that certainly has a sort of permanent negative effect.
But now, if there were no more opportunities for improvement,
and there is no need for our instrumental efforts,
then it seems very stingy that we would still not be able to actually thoroughly enjoy our lives.
So maybe we would want to change that set point.
So we could all be much happier all the time,
to actually relish this, this future.
Like, it would be sad if everything were, like, super nice
and we were still miserable there,
and then just living like that for, like, millions of years,
that would seem to be very unfortunate.
So that could be one obvious adjustment.
There's, like, a bunch of other things as well,
that, like, you might imagine: upgrades of various human capabilities,
our cognitive abilities, our emotional repertoires,
our ability to connect to other people,
obviously physical health.
And then, kind of at the philosophical level almost,
there is our overall attitude to life.
So the idea that you sort of conceive of your self-worth
as being based on your ability to make a contribution.
Maybe it needs to be rethought here,
like if we can no longer make contributions.
There is an asterisk to that.
I think there are certain ways
in which maybe the utopians could,
but at least our opportunity
to sort of help out other people
would be reduced
if there's just less misery and need
in the world to begin with.
Like if you're a doctor
and there was no disease,
you need to find another occupation
or you can't base your self-worth
of being a really good and caring doctor
if nobody's sick.
Like then you have to rethink that.
And I think at the more general level,
we would all have to sort of rethink what makes like a human life have dignity in this condition
where we are no longer really where it's at.
Yeah, okay, here's the last one.
It's kind of wacky, but I thought I'd throw it out there and see what you think.
If we reach superintelligence and we hit this utopia, do you think, and I know you've spoken
a little bit about the potential of us to be living in a simulation.
So if we reach superintelligence, do you think there's a chance that we're going to crack
out of the simulation and effectively figure out who's running it, if we're in one?
That would be one possible scenario, right?
If you are in a simulation, the simulation could just end with nothing, or it could be rerun,
or you could enter a different sort of environment, where the simulation continues but the
sort of virtual environment changes, or indeed you could be sort of uplifted out of the
simulation into the world of the simulator.
All of those are at least kind of metaphysically possible, conditional on us being in a simulation in the first place.
So the simulation hypothesis expands the space of sort of realistic possibilities and the space of realistic futures.
You might think, if you're living just in a simple materialistic universe, you die, that's the end, your brain rots, there's no more experience, and there's really not much room for other things, given
the laws of physics and us being purely material, without a soul, et cetera, et cetera.
If you are in a simulation, then there's like a much wider range of things that could happen,
that would seem perfectly plausible given the assumption.
I mean, if we're there, it would be so crazy to just unzip and poke our heads out
and see what's behind the thing.
Yeah.
Okay.
The book is Deep Utopia, Life and Meaning in a Solved World.
It's by Nick Bostrom, our guest today, also the author of Superintelligence.
Best-selling author Nick Bostrom, thanks so much
for spending some time with us today.
I enjoyed our conversation.
Me too.
All right, everybody.
Thanks so much.
We will see you next time on Big Technology Podcast.