Making Sense with Sam Harris - #385 — AI Utopia
Episode Date: September 30, 2024

Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don’t perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes’s predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics.

If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
Welcome to the Making Sense Podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you're not currently on our subscriber feed,
and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast,
you'll need to subscribe at samharris.org.
There you'll also find our scholarship
program, where we offer free accounts to anyone who can't afford one. We don't run ads on the
podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Today I'm speaking with Nick Bostrom.
Nick is a professor at the University of Oxford,
where he is the founding director of the Future of Humanity Institute.
He is the author of many books.
Superintelligence is one of them that I've discussed on the podcast,
which alerted many of us to the problem of AI alignment. And his most recent book is Deep Utopia: Life and Meaning in a Solved World.
And that is the topic of today's conversation. We discuss the twin concerns of alignment failure and
also a possible failure to make progress on superintelligence. The only
thing worse than building computers that kill us is a failure to build computers that will help us
solve our existential problems as they appear in the future. We talk about why smart people don't
perceive the risk of superintelligent AI, the ongoing problem of governance, path dependence, and what Nick
calls knotty problems, the idea of a solved world, John Maynard Keynes's predictions about
human productivity, the uncanny valley issue with the concept of utopia, the replacement of human
labor and other activities, the problem of meaning and purpose, digital isolation
and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure
and pain, increasingly subtle distinctions in experience, artificial purpose, altering
human values at the level of the brain, ethical changes in the absence of extreme suffering,
what Nick calls our cosmic endowment, the ethical confusion around long-termism, possible problems with
consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes,
and other topics. Anyway, I think Nick has a fascinating mind. As always,
it was a great pleasure to speak with him. And now I bring you Nick Bostrom.
I'm here with Nick Bostrom. Nick, thanks for joining me again.
Hey, good to be with you.
So you wrote a very gloomy book about AI that got everyone worried some years ago.
This was superintelligence, which we spoke about in the past.
But you have now written a book about all that could possibly go right in an AI future.
And that book is Deep Utopia: Life and Meaning in a Solved World, which I want to talk about. It's a fascinating book.
But let's build a bridge between the two books that takes us through the present. So perhaps you can just catch us up. What are your
thoughts about the current state of AI? And is there anything that's happened since you published
Superintelligence that has surprised you? Well, a lot has happened. I think one of the surprising things is just how anthropomorphic
current-level AI systems are. The idea that we have systems that can talk long before we have generally superintelligent systems. I mean, we've had these for years already. That was not obvious
10 years ago. Otherwise, I think things have unfolded pretty much in accordance
with expectation, maybe slightly faster. Were you surprised? The most surprising thing from
my point of view is that much of our talk about AI safety seemed to presuppose, even explicitly,
that there would be a stage where the most advanced AIs would not be connected
to the internet, right? There's this so-called air gapping of this black box from the internet,
and then there'd be a lot of thought around whether it was safe to unleash into the wild.
It seems that we have blown past that landmark and everything that's being developed is just de facto connected to more or less everything else.
Well, I mean, it's useful to connect it.
And so far, the systems that have been developed don't appear to pose this kind of existential risk that would occasion these more extreme security measures like air gapping.
Now, whether we will implement those when the time comes for
it, I guess remains to be seen. Yeah. I mean, it just seems that the safety ethos doesn't quite
include that step at the moment. I mean, maybe they have yet more advanced systems that they
wouldn't dream of connecting to anything, or maybe they'll reach that stage. But it just seems that, far before we really understand the full capacities of a system, we're building them already connected to much of what we care about.
Yeah, I don't know exactly how it works during the training of the next generation models that
are currently in development, whether there is any form of temporary air gapping
until sort of a given capability level can be assessed.
On the one hand, I guess you want to make sure
that you also learn the kinds of capabilities
that require internet access,
the ability to look things up, for example.
But I guess in principle,
you could train some of those
by having a locally stored copy of the internet.
Right.
And so I think maybe it's something that you only want to implement when you think that the risks are actually high enough to warrant the inconvenience and the limitation of doing it in an air-gapped way, at least during the training and testing phases.
Have your concerns about the risks changed at all?
I mean, in particular, the risk of AI alignment or the failure of AI alignment?
I mean, the macro picture, I think, remains more or less the same. Obviously, there's a lot more
granularity now. We are 10 years further down the track, and we can see in much more specificity what the leading edge
models look like and what the field in general looks like. But I'm not sure the sort of the
P-doom has changed that much. I think maybe my emphasis has shifted a little bit from alignment failure, the narrowly defined problem of not
solving the technical challenge of scalable alignment, to focus a little bit more on the
other ways in which we might succumb to an existential catastrophe.
There is the governance challenge, broadly construed, and then the challenge of digital
minds with moral status that we might also
make a hash of. And of course there is also the potential risk of failing ever to develop superintelligence, which I think would possibly in itself constitute an existential catastrophe.
Well, that's interesting. So it really is a high-wire act: we have to develop it exactly as needed, perfectly aligned with our well-being in an ongoing way. To develop it in an unaligned way would be catastrophic, but to not develop it in the first place could also be catastrophic, given the challenges we face that could only be solved by superintelligence.
Yeah, and we don't really know exactly how hard any of these problems are.
I mean, we've never had to solve them before. So I think most of the uncertainty in how well
things will go is less uncertainty about the degree to which we get our act together and rally,
although there's some uncertainty about that, and more uncertainty in just the intrinsic
difficulty of these challenges. So there is a
degree of fatalism there. Like if the problems are easy, we'll probably solve them. If they are
really hard, we'll almost certainly fail. And now maybe there will be a sort of intermediate
difficulty level, in which case, in those scenarios, it might make a big difference
the degree to which we sort of do our best in the coming months and years.
Well, we'll revisit some of those problems when we talk about your new book, but what do you make
of the fact that some very smart people, people who were quite close or even responsible for the
fundamental breakthroughs in deep learning that have given us the progress we've seen of late. I'm thinking of someone like Geoffrey Hinton. What do you make of the fact that
certain of these people apparently did not see any problem with alignment or the lack thereof,
and that they only came to perceive, in recent months, the risk that you wrote about 10 years ago. I mean,
maybe it was a year ago that Hinton retired and started making worried noises about the possibility
that we could build superintelligence in a way that would be catastrophic.
I gave a TED Talk on this topic in 2016, very much inspired by your book.
Many of us have been worried for a decade about this.
How do you explain the fact that someone like Hinton is just now having these thoughts?
Well, I mean, I'm more inclined to give credit there. It's particularly rare and difficult to change one's mind, or update, as one gets older, and particularly if one is very distinguished, like Hinton is. I think that's the more surprising thing, rather than the failure to update earlier. There are a lot of people who haven't yet really come to appreciate
just how immense this leap into superintelligence will be and the risks associated with it.
But the thing that I find mystifying about this topic is that there's some extraordinarily
smart people who you can't accuse of not understanding the details, right? It's not
an intellectual problem. And they don't accept the perception of risk. In many cases, they just don't even accept that it's a
category of risk that can be thought about, and yet their counter-arguments are so
unpersuasive, insofar as they even have counter-arguments, that it's just some kind of
brute fact of a difference of intuition that is very hard to parse. I mean,
think of somebody like David Deutsch, who you probably know, the physicist at Oxford. Maybe
he's revised his opinion. It's been a couple of years since I've spoken to him about this,
but I somehow doubt it. The last time we spoke, he was not worried at all about this problem of
alignment. I mean, and the analogy he drew at the time was that we all have this problem when we have
kids, we have teenagers, and the teenagers are not perfectly aligned with our view of reality,
and yet we navigate that fine, and they grow up on the basis of our culture and our instruction
and in continuous dialogue with us, and nothing gets too out of whack. And everything he says about this that keeps him so sanguine is based explicitly
on the claim that there's just simply no way we can be cognitively closed to what a superintelligence
ultimately begins to realize and understand and want to actuate. I mean, it's just a matter of
us getting enough memory and enough processing time to have a dialogue with such a mind. And then the conversation sort of peters out without there actually being a clear acknowledgement of the analogies. You know, in our relationship to a species that is much more competent, much more capable, much more intelligent than ours is, there's just the obvious possibility of that not working out well, as we have seen in the way we have run roughshod over the interests of every other species on Earth.
The skeptics of this problem just seem to think that there's something about the fact that we are inventing this technology that guarantees that this relationship cannot go awry.
And I've yet to encounter a deeper argument for why that's guaranteed, or why it's in any way even likely.
Well, I mean, let's hope they are right.
Have you ever spoken with Deutsch about this?
I don't believe I have.
I think I've only met him once, and that was, yeah, I don't recall whether this came up or not.
Long time ago.
So, I mean, do you share my mystification after colliding with many of these people?
Yeah, well, I guess I'm sort of numb to it by now.
I mean, you just take it for granted that that's the way things are.
Perhaps things seem the same from their perspective,
like these people running around with their pants on fire,
being very alarmed, and we have this long track record of technological progress being for the best.
And sometimes ways of trying to control things end up doing more harm than what they are protecting against. But yeah, it does seem, prima facie, like something that has a lot of potential to go wrong. If you're introducing the equivalent of a new species that is far more intelligent than Homo sapiens, even if it were a biological species, that already would seem a little bit perilous, or at least worth being cautious about as we did it. And here it's maybe even more different and much more sudden.
Actually, I think that variable would sharpen up people's concerns. If you just said we're inventing a biological species that is more intelligent and capable than we are and setting it loose, it won't be in a zoo, it'll be out with us, I think the wetness of that invention would immediately alert people to the danger, or the possibility of danger.
There's something about the fact that it is a clear artifact, a non-biological artifact that we are creating that makes people think this is a tool, this isn't a relationship.
Yeah, I mean, in fairness, the fact that it is sort of a non-biological artifact also potentially
gives us a lot more levers of control in designing it. Like you can sort of read off every single
parameter and modify it, and it's software, and you have a lot of affordances with software that you couldn't have with a biological creation. So if we are lucky, it just means we have this precision control as we engineer it, and nothing built in. Maybe you think a biological species might have sort of inbuilt predatory instincts and all of that, which need not be present in a digital mind.
But there's something about intelligence, and having more of it than we have, that entails our inability to predict what it will ultimately do, right? It can form instrumental goals that we haven't anticipated. I mean,
that falls out of the very concept of having an independent intelligence. And it's just,
it puts you in relationship to another mind.
We can leave the topic of consciousness aside for the moment.
We're simply talking about intelligence.
And there's just something about that that is, unless, again, unless we have complete control and can pull the plug at any moment, which becomes harder to think about in the presence of something that is, as stipulated, more powerful
and more intelligent than we are. It just, again, I'm mystified that people simply don't acknowledge
the possibility of the problem. The argument never goes, oh, yes, that's totally possible
that we could have this catastrophic failure of relationship to this independent intelligence,
but here's why I
think it's unlikely, right? Like, that's not the argument. The argument is, you're merely worrying
about this as a kind of perverse religious faith in something that isn't demonstrable at all.
Yeah, I mean, it's hard not to get religious in one way or another when confronting such immense prospects and the possibility of much greater beings, and how that all fits in is a big question. But I guess it's worth reflecting as well on what the alternative is. It's not as if the default course for humanity is just this kind of smooth highway with McDonald's stations interspersed every four kilometers. It does look like things are a bit out of control already from a global perspective.
We're inventing different technologies without much plan, without much coordination.
And maybe we've just mostly been lucky so far that we haven't discovered one that is so destructive that it destroys everything. Because, I mean, the technologies we have developed, we've put them to use, and if they are destructive, they've been used to cause destruction. It's just that so far the worst destruction is kind of the destruction of one city at a time.
Yeah, this is your urn of invention argument that we spoke about last time.
Yeah, so there's that, where you could focus it, say, on specific technological discoveries. But in parallel with that,
there is also this kind of out-of-control dynamics. You could call it evolution or
kind of a global geopolitical game theoretic situation that is evolving. And our sort of
information system, the memetic drivers that have changed presumably
quite a lot since we've developed the internet and social media, that is now driving human
minds in various different directions. And if we're lucky, that will make us wiser and nicer,
but there is no guarantee that they won't instead create more polarization or addiction or other various kinds of malfunctions
in our collective mind. And so that's kind of the default course, I think. So yes, I mean,
AI will also be dangerous, but the relevant standard is how much it will increase the dangers relative to just continuing to do what's currently being done.
Actually, there's a metaphor you use early in the book, the new book, Deep
Utopia, which captures this wonderfully. And it's the metaphor of the bucking beast. And I just want
to read those relevant sentences because they bring home the nature of this problem, which
we tend not to think about in terms that are this vivid. So you say that, quote,
humanity is riding on the back of some chaotic beast of tremendous strength,
which is bucking, twisting, charging, kicking, rearing. The beast does not represent nature.
It represents the dynamics of the emergent behavior of our own civilization,
the technology-mediated, culture-inflected, game-theoretic interactions between billions of individuals, groups,
and institutions. No one is in control. We cling on as best we can for as long as we can,
but at any point, perhaps if we poke the juggernaut the wrong way or for no discernible reason at all,
it might toss us into the dust with a quick shrug or possibly maim or trample us to death.
Right, that's the end of the quote. So we have all these variables that we influence in one way or another
through culture and through all of our individual actions,
and yet, on some basic level, no one is in control,
and there's just no, I mean, the system is increasingly chaotic,
especially given all of our technological progress.
And yes, into this picture comes the prospect of building more and more intelligent machines.
And again, it's this dual-sided risk.
There's the risk of building them in a way that contributes to the problem, but there's
the risk of failing to build them and failing to solve the problems that might only be solvable in the presence of greater intelligence.
Yeah, so that certainly is one dimension of it, I think. It would be kind of sad if we
never even got to roll the dice with superintelligence because we just destroyed
ourselves before. That would be particularly ignominious,
it seems. Yeah, well, maybe this
is a place to talk about
the concept of path
dependence and what
you call knotty problems in the book. What do those two phrases mean?
Well, I mean, path dependence, I guess, means that the result depends sort of on how you got there, and that kind of the opportunities don't supervene just on the current state but also on the history; the history might make a difference. And the knotty problems, basically:
there's like a class of problems that become automatically easier to solve as you get better technology.
And then there's another class of problems for which that is not necessarily the case,
and where the solution instead maybe requires improvements in coordination.
So for example, maybe the problem of poverty is getting easier to solve the more efficient,
productive technology we have.
You can grow more, like if you have sort of tractors,
it's easier to keep everybody fed than if you have more primitive technology.
So the problem of starvation just gets easier to solve over time, as long as we make technological progress.
But say the problem of war doesn't necessarily get easier
just because we make technological progress.
In fact, in some ways, wars might become more destructive if we make technological progress.
Can you explain the analogy to the actual knots in string?
Well, so the idea is with knots that are just tangled in certain ways,
if you just pull hard enough on the ends of that, it kind of straightens out.
But there might be other types of problems
where if you kind of advance technologically
equivalently to tugging on the ends of the string,
you end up with this ineliminable knot.
And sort of the more perfect the technology,
the tighter that knot becomes.
So say you have a kind of totalitarian system to start off with. Then maybe the more perfect the technology you have, the greater the ability of the dictator to maintain himself in power, using advanced surveillance technology, or maybe anti-aging technology, or whatever. With perfect technology, maybe it becomes a knot that never goes away. And so in the ideal, if you want to end up with a kind of unknotted string,
you might have to resolve some of these issues before you get technological maturity.
Yeah, which relates to the concept of path dependence. So let's actually talk about the
happy side of this equation, the notion of deep utopia and a solved world.
What do you mean by a solved world?
One is characterized by two properties. One is it has attained technological maturity, or some good approximation thereof, meaning at least all the technologies we can already see are physically possible have been developed.
But then it has one more feature, which is that political and governance problems have been solved to whatever extent those kinds of problems can be solved.
So imagine some future civilization with really advanced technology,
and it's a generally fair world that doesn't wage war,
and where people don't oppress one another
and things are at least decent in terms of the political stuff.
So that would be one way of characterizing it.
But another is to think of it as a world in which there's a sense in which either all
practical problems have already been solved, or if there remain any practical problems, they are better worked on by
AIs and robots. And so, in some sense, there might not remain any practical problems for humans
to work out. The world is already solved. And when we think about this, well, first,
it's interesting that there's this historical prediction from John Maynard Keynes, which
was surprisingly accurate given the fact that it's 100 years old.
What did Keynes predict?
He thought that productivity would increase four to eightfold over the coming 100 years
from when he was writing it.
I think we are about 90 years since he wrote it now, and that seems to be on track.
So that was the first part of his prediction.
And then he thought that the result of this would be that we would have a kind of four-hour working week, a leisure society, that people would work much less, because they could, you know, get enough of all that they had before and more, even whilst working much less. If every hour of work was eight times as productive, you could work four times less and still have two times as much stuff. He got that part wrong.
Yeah, so he got that mostly wrong, although we do work less. Working hours have decreased. People take longer to enter the labor market, there's more education, they retire earlier, there's more maternity and paternity leave, and slightly shorter working hours, but nowhere near as much as he had anticipated.
I'm surprised. I mean, perhaps it's just a testament to my lack of economic understanding, but I'm surprised that he wasn't an order of magnitude off or more in his prediction of productivity, in one direction or the other. Given what he had to work with, you know, 90 years ago, in terms of looking at the results of the Industrial Revolution, and given all that's happened in the intervening decades, it's surprising that his notion of where we would get as a civilization in terms of productivity was at all close.
Yeah, I mean, so those basic economic growth rates of productivity growth
haven't really changed that much. It's a little bit like Moore's law where it's had,
you know, a relatively steady doubling pace for a good long time now. And so I guess he just
extrapolated that and that's how he got his prediction. So there's this, I think you touch
on it in the book, there's this strange distortion of our thinking when we think about the utopia or the prospect
of solving all of our problems.
When you think about incremental improvements in our world, all of those seem almost by
definition good, right?
I mean, we're talking about an improvement, right?
So you're telling me you're going to cure cancer.
Well, that's good. But once you improve too much, right, if you cure cancer and heart disease and Alzheimer's and then
even aging, and now we can live to be 2000, all of a sudden people's intuitions become a little
wobbly and they feel that you've improved too much. And we almost have a kind of uncanny valley problem for a future of happiness. It all
seems too weird and in some ways undesirable and even unethical, right? I don't know if you know the gerontologist Aubrey de Grey, who has made many arguments about the ending of aging,
and he ran into this whenever he would propose the idea of solving aging itself
as effectively an engineering problem. He was immediately met by opposition of the sort that
I just described. People find it to be unseemly and unethical to want to live forever or to want
to live to be a thousand. But then he would break it down and he would say, well, okay,
but let me just get this straight. Do you think curing cancer would be a good idea? And everyone, of course, would say yes. And what about heart
disease? And what about Alzheimer's? And everyone will sign up a la carte for every one of those
things on the menu, even if you present literally everything that constitutes aging from that menu.
They want them all piecemeal, but comprehensively, it somehow seems indecent and uncanny.
So, I mean, do you have a sense of utopia being almost a hard thing to sell, were it achievable? That people still have strange ethical intuitions about it?
Yeah, so I don't really try to sell it, but more dive right into that counterintuitiveness and awkwardness, and kind of almost the sense of unease that comes if you really try to imagine what would happen if we made all these little
steps of progress that everybody would agree are good individually, and then you think through what that would actually produce. Then there is a sense in which, at least at first sight, a lot of people would recoil from that. And so the book doesn't try to sugarcoat that, but says let's really dive in and think just how potentially repulsive and counterintuitive that condition of a solved world would be, and not blink or look away; let's steer straight into that and then analyze what kinds of values could you actually have in this solved world. And I mean, I think I'm ultimately optimistic that, as it were, on the other side of that there is something very worthwhile, but it certainly would be, I think, in many important ways very different from the current human condition. And there is a sort of paradox there, that we're so busy making all these steps of progress that we celebrate as we make them, but when we really look at where this ends up, if things go well, we kind of recoil. So, I mean,
you could cure the individual diseases, and then you cure aging, but also other little practical things, right? So you have, you know, your black-and-white television, then you have a color television, then you have a remote control so you don't have to get up, then you have a virtual reality headset, and then you have
a little thing that reads your brain. So you don't even have to select what you want to watch. It
kind of directly just selects programming based on what maximally stimulates various circuits in
your brain. And then, you know, maybe you don't even have that, you just have something that
directly stimulates your brain. And then maybe it doesn't stimulate the whole brain, but just the pleasure centers of your brain.
And as you think through, as it were, these things taken to their optimum degree of refinement,
it seems that it's not clear what's left at the end of that process that would still be
worth having.
But let's make explicit some of the reasons why people
begin to recoil. I mean, so you just took us all the way essentially into the Matrix, right? And
then we can talk about, I mean, you're talking about directly stimulating the brain so as to
produce non-veridical but otherwise desirable experiences. So we'll probably end somewhere
near there. But on the way to all of
that, there are other forms of dislocation. I mean, just the fact that you're uncoupling work
from the need to work in order to survive, right, in a solved world. So let's just talk about that
first increment of progress where we achieve such productivity that work becomes voluntary, right? Or we have to think
of our lives as games or as works of art, where what we do each day has no implication for
whether or not we have a sufficient economic purchase upon the variables of our own survival.
What's the problem there with unemployment or just purely voluntary employment
or not having a culture that necessarily values human work because all that work is better
accomplished by intelligent machines? How do you see that? Yeah, so we could take it in stages as
it were layers of the onion. So the outermost and most superficial analysis would say, well, so we get machines that can
do more stuff.
So they automate some jobs, but that just means we humans would do the other jobs that
haven't been automated.
And we've seen transitions like this in the past, like 150 years ago, we were all farmers
basically.
And now it's one or 2%.
And so similarly, in the future, people, you know, will be Reiki instructors or massage therapists or other things we haven't even thought of. And so, yes, there will be some challenges, maybe there will be some unemployment, and we need, I don't know, unemployment insurance or retraining of people. And that's kind of often where the discussion has started and ended so far, in terms of considering the implications of this machine intelligence era.
I've noticed that the massage therapists always come out more or less the last people standing in any of these thought experiments, but that might be a euphemism for some related professions.
The, um, I think the problem goes deeper than that, because it's not just the current jobs that could be automated, right, but the new jobs that we could invent could also be automated, if you really imagine AI that is as fully generally capable as the human brain, and then presumably robot bodies to go along with that.
So all human jobs could be automated
with some exceptions that might be relatively minor,
but are worth, I guess, mentioning in passing.
So there are services or products where we care
not just about the functional attributes of what we're buying,
but also about how it was produced. So right now, some person might pay a premium for a trinket if it were made by hand, maybe by an indigenous craftsperson, as opposed to in a sweatshop
somewhere in Indonesia. So you might pay more for it, even if the trinket itself is functionally
equivalent, because you care about the history and how it was produced. So similarly, if future
consumers have that kind of preference, it might create a niche for human labor, because only
humans can make things made by humans. Or maybe people just prefer to watch human athletes compete
rather than robots, even if the robots could run faster and
box harder, et cetera. So that's the footnote to that general claim that everything could be
automated. So that would be a more radical conception, then, of a leisure society, where it's not just that we would retrain workers, but we would stop working altogether.
And in some sense that's more radical, but it's still not that radical. We already have various groups that don't work
for a living. We have children, so they are economically completely useless, but nevertheless,
often have very good lives. They run around playing and inventing games and learning and
having fun.
So even though they are not economically productive, their lives seem to be great.
You could look at retired people.
There, of course, the situation is confounded by health problems that become more common at older ages.
But if you take a retired person who is in perfect physical and mental health, they often
have great lives. They maybe travel the world, play with their grandkids, watch television, take their dog for a walk in the park, garden, do all kinds of things; they often have great lives. And then there are people who are independently wealthy, who don't need to work for a living, and, you know, some of those have great lives. And so maybe we would all be more like these categories, all be like children, and that would undoubtedly require substantial cultural readjustment. The whole education system presumably would need to change, rather than
training kids to become productive workers who receive assignments and hand them in and do as
they're told and sit at
their desks. You could sort of focus education more on cultivating the art of conversation,
appreciation for natural beauty, for literature, hobbies of different kinds,
physical wellness. So that would be a big readjustment.
Well, you've already described many of the impractical degrees that some of us have
gotten, right? I mean, I did my undergraduate degree in philosophy. I forget what you did.
Did you do philosophy or were you in physics? I did a bunch of things, yeah, physics and
philosophy and AI and stuff. But I mean, you've described much of the humanities there. So it's
funny to think of the humanities as potentially the optimal, I guess not the humanities circa 2024, given what's been happening on college campuses of late, but some purified version of the humanities, like the Great Books program at St. John's, say, is just the optimal education for a future wherein more or less everyone is independently wealthy.
Yeah, or maybe one component of it.
I think there's like, you know,
music appreciation,
many different dimensions of a great life that don't all consist of reading all the books.
But it's definitely like there could be an element there.
But I think the problem goes deeper than that.
So we can peel off another layer of the onion, which is that when we consider the affordances
of technological maturity, we realize it's not just economic labor that could be automated,
but a whole bunch of other activities as well.
So rich people today are often leading very busy lives.
They have various projects they are pursuing, etc.,
that they couldn't accomplish unless they actually put time and effort into them themselves.
But you can sort of think through the types of activities
that people might fill their leisure time with
and think whether those would still make sense at technological maturity.
And I think for many of them, you can sort of cross them out,
or at least put a question mark there.
You could still do them, but they would seem a bit pointless, because there would be an easier way to accomplish their aim.
So right now, take something like shopping as an activity.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access
to all full-length episodes of the Making Sense Podcast. The podcast is available to everyone
through our scholarship program, so if you can't afford a subscription, please request a free
account on the website. The Making Sense Podcast is ad-free and relies entirely on listener support,
and you can subscribe now at SamHarris.org.