Making Sense with Sam Harris - #305 — Moral Knowledge
Episode Date: December 8, 2022
Sam Harris speaks with Erik Hoel about the nature of moral truth. They discuss the connection between consequentialism and Effective Altruism, the problems with implementing academic moral philosophy, bad arguments against consequentialism, the implications of AI for our morality, the dangers of moral certainty, whether all claims about good and evil are claims about consequences, the problem of moral fanaticism, difficulty in thinking about low-probability events, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed
to add to your favorite podcatcher,
along with other subscriber-only content.
We don't run ads on the podcast,
and therefore it's made possible entirely
through the support of our subscribers.
So if you enjoy what we're doing here,
please consider becoming one.
Okay, well, a reminder that we're releasing more episodes of the series The Essential Sam Harris,
created by Jay Shapiro. Jay has mined my catalog and put together episodes on specific themes,
adding his own commentary, read by the wonderful Megan Phelps-Roper, and weaving that together with excerpts from many podcasts on a similar topic. We released
the episode on Artificial Intelligence about a week ago. There will be episodes on consciousness and violence,
free will, belief and unbelief, existential threat and nuclear war,
social media and the information landscape, death, meditation and eastern spirituality,
and perhaps other topics beyond that. Jay has also created workbooks for each episode
and recommendations for further exploration. And these full episodes and other
materials are available to all podcast subscribers. As always, if you can't afford a subscription,
you need only send an email to support at samharris.org and request a free account.
And I remain extremely grateful to have a community of subscribers that support the podcast, knowing that others who can't afford to are nevertheless able to get everything for free. That really is
the business model that makes sense for me in digital media. And it's our policy over at Waking
Up, too. So don't let real concerns about money ever be the reason why you don't get access to my digital work.
Okay.
Well, I've been off Twitter for about 10 days now, and I must say it's been interesting.
It's almost like I amputated a limb.
Actually, I amputated a phantom limb.
The limb wasn't real, and it was mostly delivering signals of pain and disorder.
But it was also a major presence in my life, and it was articulate in ways that I was pretty attached to.
I could make gestures, or seeming gestures, that I can now no longer imagine making.
There's literally no space in which to make those gestures in my life now.
So there's definitely a sense that something is missing.
My phone is much less of a presence in my life.
I've noticed that I sometimes pick it up reflexively,
and then I think, what was I hoping to do with this?
And my sense of what the world is, is different.
My sense of where I exist in the world is different.
This might sound completely crazy to those of you who were never obsessed with Twitter.
But Twitter had really become my newsfeed.
It was my first point of interaction with the world of information each day.
And now that seems far less than optimal.
I once went up in a police helicopter and experienced what it was like to have a cop's eye view of a major American city.
At the time, this really was a revelation to me.
When you're listening to police radio, there's always a car chase, or shots fired, or reports of a rape in progress, or some other astounding symptom of societal dysfunction.
And without a police radio in your life, most of that goes away.
And it's genuinely hard to say which view of a city is more realistic. Is it a more realistic picture of your life in your city
for you to suddenly be told that someone is getting murdered right now, a mere four miles
from where you're currently drinking your morning cup of coffee? Is the feeling of horror and helplessness that wells
up in you a more accurate lens through which to view the rest of your day? Or is it distorting of
it? It does seem possible to misperceive one's world on the basis of actual facts because of
what one helplessly does with those facts. It's almost like the human mind has its own algorithmic boosting of information.
So, misinformation aside, and there was obviously a lot of that,
I now feel like many of the facts I was getting on Twitter
were distorting my sense of what it is to live in the world,
as well as my sense of my place in it.
Today's conversation was recorded
before I got off Twitter, so you'll hear it come up briefly. Actually, it was recorded the day
before I deleted my account, because I did that on Thanksgiving Day, and this was recorded the day
before. And at a few points, you'll hear the residue of how much time I had been spending on Twitter that day. I complain about it. I draw an analogy to it. And frankly, listening back to this
conversation, I sound a little more cantankerous than normal. This conversation had the character
of a debate at times, especially in the second half. And listening to it, I sound a little bit
at the end of my patience. And while that had some reference to the disagreement being discussed,
it was certainly also drawing some energy from my collisions on Twitter that day.
Anyway, today's guest is Erik Hoel. Erik is a neuroscientist and writer. He was a professor
at Tufts University, but recently left to write full-time. He's been a visiting scholar at the
Institute for Advanced Study at Princeton, and a Forbes 30 under 30 notable in science.
He has published a novel titled The Revelations, and he now writes full-time
for his Substack, which goes by the name of The Intrinsic Perspective. And today we talk about
the nature of moral truth, and by implication, the future of effective altruism. We discuss the
connection between consequentialism and EA, the problems of implementing academic moral philosophy,
bad arguments against consequentialism, or what I deem to be bad arguments, the implications of AI
for our morality, the dangers of moral certainty, whether all moral claims are in fact claims about
consequences, the problem of moral fanaticism, why it's so difficult to think about low probability
events, and other topics.
Anyway, I really enjoyed this, despite being slightly prickly.
These are some topics that really are at the core of my interest, as well as Eric's.
And now I bring you Erik Hoel.
I am here with Erik Hoel.
Erik, thanks for joining me.
Thank you so much, Sam.
It's a delight.
I actually grew up selling your books
I grew up in my mom's independent bookstore
and all through high school
which was, I
think, like 2004 or so.
This was right when The End of Faith came out.
And it sat on the bestseller list for a long time.
And so I probably sold, I don't know, 50, maybe even 100 copies of that book.
I mean, I sold it a lot.
It was really dominant during that period of time.
Oh, nice, nice.
Where was the bookstore?
Or where is the bookstore?
Yeah, it's in Newburyport, Massachusetts, which is north of Boston. It's just an independent
bookstore up there. But it was great. I highly recommend growing up in a bookstore if you can
get away with it.
I can only imagine. That would have been my dream at really every point from, I don't know,
12 on. That would have been amazing.
Do you guys still have the store?
We do, actually. It survived COVID incredibly, thanks to the generosity
of the local community who leapt in to support it with a GoFundMe.
And it's now going on 50 years, which is pretty incredible.
Well, let's plug the store. What's the name of the store?
The name of the store is Jabberwocky Books in Newburyport, Massachusetts. I highly recommend
checking it out. Jabberwocky as in Lewis Carroll?
Yep. Cool. Well, that's great. I love that story.
So you and I have a ton in common, apparently. We've never met. This is the first time we've
spoken. I have been reading your essays and at least one of your academic papers.
Let's just summarize your background.
What have you been doing since you left that independent bookstore?
Well, I originally wanted to be a writer, but in college I became very interested in
the science of consciousness, which I'm sure you sort of understand in the
sense of it just being very innately interesting. It seemed like a wild west. It seemed like there
was a lot there that was unexplored. And so I became so interested that I went into it and I
got a PhD and I worked on developing what's arguably one of the leading theories
of consciousness, which is integrated information
theory. Now, I think that particular theory has some particular problems, but I think it's sort
of what a theory of consciousness should look like. And I was very lucky to sort of work on
it and develop it over my PhD. But during that time, I was still writing. And so eventually, that spilled over onto Substack and doing these
newsletters, which is almost, to me, like this emerging literary genre. Maybe that sounds a bit
pretentious, but I really think of it that way, this frictionless form of communication that I
really find intriguing. And so that's what I've been devoting a lot of my effort to lately. Yeah, yeah. So just to back up a second, so you got your PhD in neuroscience,
and did you do that under Tononi? Yeah, I did. So I worked with Giulio Tononi,
and we were working on... This was right around the time when integrated information theory
was sort of coming together.
He's the originator of it, but there was sort of this early theory team, we called ourselves,
that was all built on shoring up the foundations.
And it was, again, an instance of me just being very, very lucky.
It was a deeply formative experience to work on a really ambitious intellectual project, even though now I can sort of see that, like, frankly, I don't think that
the theory is probably 100% true. I think maybe some aspects of it are true. I think some aspects
of it are incredibly interesting. I think it sort of looks very much like what we want out of a
science of consciousness. But regardless of that, I think as an intellectual project, it was incredibly ambitious and intricate. And
it had just a huge... To go into that environment of really high-level science
at a frontier when you're 22 is mind-expanding, right? I mean, it was just absolutely mind-blowing,
and it was a privilege to be a part of that. Yeah, yeah.
Well, there's so many things we could talk about.
Obviously, we can talk about consciousness and free will, the brain, AI.
I know we share some concerns there.
Digital media, you just raised the point of your migration to Substack.
I mean, maybe we'll linger on that for a second, but we have many, many hours ahead
of us if we want to cover all those things. But there's something else on the agenda here,
which is more pressing, which is your views about effective altruism and consequentialism,
which have been only further crystallized in recent weeks by the fall of Sam Bankman-Fried.
So maybe we'll get to some of the other stuff, but we definitely want to talk about moral truth
and the larger question of just what it takes to live a good life, which really are
questions that I think are central to everyone's concern,
whether they think about them explicitly or not. But before we jump in, let's just linger for a
second on your bio, because you made this jump to Substack, which really appears, at least in the
last, what, 10 days or so, to have actually been a jump. You were a professor of neuroscience at Tufts.
Was that correct?
Yeah.
So I'm resigning my professorship at Tufts in order to write full-time on my Substack,
The Intrinsic Perspective.
And one of the reasons I'm doing it is just that the medium itself offers a huge amount
to people who are interested in multiple subjects, right?
I mean, you surely have sort of felt some of these constraints wherein, you know, you're really
expected to be hyper-focused on particular academic problems. And, you know, I do do like
technical work and so on, but I'm also sort of just more interested in general concepts. And
there hasn't been, you know, at least for someone who's a writer, there hasn't been a great way
to make a living off of that. And actually, Substack is now sort of providing that. So I
think I can do stuff that's as in-depth as some of my academic work, but sort of do it in public
and create conversations. And I think that that's really important, and I should seize the opportunity while I can. But why resign your post at Tufts? What do
most people not understand about academia at this moment that would make that seem like an
obvious choice? Because I guess from the outside, it might seem somewhat
inscrutable. I mean, why not maintain your professorship, continue to be a part of the
ivory tower, but then write on Substack as much as you want or can? Yeah, I think what is not quite
understood is how focused you have to be on the particular goalposts that are within academia that move you
towards tenure track. So basically, what every professor wants is this tenure at some major
institution. And to do that now, it's not really just a matter of doing your research, right? It's a matter of sort of crafting
your research so it will receive big governmental grants. And the areas in which I work, which is
like science of consciousness, mathematically formalizing the notion of emergence,
these are not areas where there is a huge amount of funding to begin with, right? But beyond that,
it also means being, you know, involved with the student body,
not just in having students, but in all sorts of ways like extracurricular activities, volunteering,
taking on essentially busy work of editing journals. And it involves you sort of citation
maxing and paper maxing and sitting on all the right committees. And I sort of have tried to avoid
doing that and thought maybe I could make a career within academia without really leaning in heavily
into all that, into sort of all the goalposts and hoops of academia. And I think it's effectively
just impossible. I've sort of been very lucky to have gotten as far as I have.
And the simple truth is that last year, I published a novel and I've been publishing
essays on Substack. And the simple truth is that a tenure committee will never sit down and say,
oh, you wrote a novel and a bunch of popular essays. That's just this massive plus for our biology department.
It's totally inscrutable to them. And I've never had anyone in any sort of administrative or hiring or grant-giving capacity show anything but hesitation and trepidation about my work
outside of either direct academic stuff or direct research stuff.
Yeah. But has something changed or has that always been the case, do you think?
I think it's essentially always been the case. It's just that, you know, I'm not, you know,
my fear is that people think, oh, you know, this is someone hopping on Substack as some sort of
life raft. I think if Substack didn't exist, I would sort of happily split the difference and just take the career hit and keep writing
and probably not get tenure where I want to get tenure, or even if I could, but I would still
try it. But I think of Substack as this sort of emerging genre. You're an author, you've written books, and there's a certain sensation,
at least that I have, and I imagine most authors have at a certain point,
where when you're publishing a book, it's like you're entering this queue behind a line of
massive titans who've all written incredible works. And you're offering up this meager,
here's my book, I hope it at all lives up to any of this stuff.
And I just don't feel that way on Substack.
I feel like, oh, this is new.
People haven't really done this.
Of course, there's been many great essays throughout history, but this constant contact
of the newsletter form and the frictionlessness of it, it strikes me as like
a new genre and I want to sort of explore it. Yeah. The huge difference is the cadence of
publishing. I mean, to be able to hit publish on your own schedule and then to see it instantaneously
land in the hands and minds of readers or listeners in the case of a podcast, that strikes
me as genuinely new.
I mean, you know, the rhythm of book publishing now, it's been some years since I've been
engaged in it, and it's really hard, especially for a nonfiction book, I guess with a novel,
it would probably feel differently, or this wouldn't be quite the pain point.
But if you have an argument to make that you think has pressing intellectual and even social importance, and it all relates to issues
of the day, to spend a year or more crafting that argument and then to wait nearly a year,
I mean, in the usual case, it's something like 11 months for that to be
published. I mean, it just seems like a bizarre anachronism at this point. And so as a counterpoint
to that, Substack and podcasts and blogs generally, anything digital that you have for which you're
the publisher, it's just a different world. Yeah, absolutely. Publishing moves at a glacial speed.
And it's funny as well, just as someone who grew up, as I said, selling books,
I mean, there are a lot of people who have moved to reading primarily on their phone. And what I
don't want is reading to sort of die out, right? I want to have high-level, book-level content that people can read on their phones.
And one reason for that is just that when you wake up in the morning, what a lot of
people do is check their phones, and they'll look through their social media messages,
and they'll read their emails, but they'll also read an essay.
They'll read an essay with their head right on their pillow. And that is so
powerful if you can sort of direct that towards things worth attending to. And I realized this
by looking at my own behavior. As much as I love books, I mean, I'm sitting in my office
surrounded by free books stolen from my mother's bookstore. But as much as I absolutely love books,
I don't wake up in the morning and put a book in my face. I wake up in the morning and I check my phone. And so I
realized this and I thought, well, what am I doing? Why am I putting all this effort into something
that, yeah, I still read books, but clearly there's this huge open market for good high
level content that you can read online or on your computer.
And I want to bring a lot of the old school sort of literary and scientific qualities.
I mean, that's my hope, right, is to bring that sort of stuff online.
But anyways.
Yeah, yeah, yeah.
Well, I think you're executing on that hope because your Substack essays are great and they're quite
literate.
And you also have a great artist you're collaborating with.
I love the illustrations associated with your essays.
Yeah, it's a huge amount of fun.
He does these artistic reactions to the post.
So he reads the draft and then somehow knocks out, you know, with no direction from me,
his sort of reaction to it.
And it's just,
it's a lot of fun. Yeah. Nice. Well, so let's jump into the topic at hand, because this was kicked off by my having noticed one of your essays on effective altruism. And then I think
I signed up for your Substack at that point, or maybe I was already on it, and then you wrote a further essay about Sam Bankman-Fried and his misadventures. So we're going to jump into
effective altruism and consequentialism, and there are now many discontents. Perhaps we should define
and differentiate those terms first. How do you think about EA and consequentialism?
Well, I'd say the movement has made charity intellectually sexy, which I find very admirable. And they've brought a lot of attention to causes
that are more esoteric. But just to give a very basic definition maybe of effective altruism and
how I think about it, is that you can view it at two levels. So the broadest definition is something
like moneyball, but for charities. So it's looking at charities and saying,
how can we make our donations to these charities as effective as possible? And again, this is
something that immediately people say, that sounds really great. But there's also, it comes out of a
particular type of moral philosophy. So the movement has its origins in a lot of these intellectual
thought experiments that are based around utilitarianism. And, you know, where I've
sort of criticized the movement is in its taking those sorts of thought experiments
too seriously. And actually, back in August, I wrote, I think,
the essay that you're referring to. And it's not just because I've decided to randomly critique
effective altruism, which at the time was just people contributing money to charity,
like what's there exactly to critique about it? But they actually put out a call for criticism.
They said, please, we'll pay you to criticize us. Again, something that is very admirable. And so I ended up writing a couple essays in response to this call for self-criticism. And my worry was that they were taking the consequentialism, or you could call it utilitarianism, a bit too seriously.
And my worry was that they would kind of scale that up. And in a sense, the FTX implosion that recently occurred, which now over a million people, it seems like, have lost money in, occurred arguably in part because of taking some of the deep core philosophical motives of effective altruism too seriously and trying to bring it too much into
the real world. And just to give a definition, maybe we should give some definitions here,
because I've said utilitarianism, I've said consequentialism. So very broadly, I would say consequentialism
is when your theory of morality is based around the consequences of actions. Or to be strict about
it, that morality is reducible in some way to only the consequences of actions. And utilitarianism
is maybe like a specific form of consequentialism. People use
these terms in a little bit different ways, but utilitarianism is kind of a specific form
of consequentialism where it's saying that the consequences that impact, let's just be reductive
and say the happiness or pleasure of individuals is sort of all that matters for morality. And all of effective altruism
originally comes from some moral thought experiments around how to sort of maximize
these properties or how to be a utilitarian. And I think that that's, in a sense, the part of the
movement that we should take the least seriously.
And then there's a bunch of other parts of the movement that I think are good and should
be emphasized.
So I just want to sort of make that clear.
Okay, great.
Well, let me go over that ground one more time just to fill in a few holes, because
I think I just don't want anyone to be confused about what these terms mean and what we're
talking about here.
So, yeah, it is in fact descriptively true that many effective altruists are consequentialists.
And as you say, the original inspiration for EA is, you know, arguably the thought experiment that Peter Singer came up with about the shallow pond,
which has been discussed many times on this podcast.
But briefly, if you were to be walking home one day and you see a child drowning in a shallow pond,
obviously you would go rush over and save it.
And if you happen to be wearing some very expensive shoes,
the thought that you can't wade into that pond to save the life of a drowning child
because you don't want to damage your shoes, well, that immediately brands you as some kind
of moral monster, right? Anyone who would decline to save the life of a child over,
let's say, a $500 pair of shoes, you know, just deserves to be exiled from our moral community. But as
Singer pointed out, if you flip that around, all of us are in the position every day of receiving
appeals from valid charities, any one of which indicates that we could save the life of a
drowning child, in effect, with a mere allocation of, let's say, $500.
But none of us feel that we, or anyone else around us who is declining to send yet another check to
yet another organization for this purpose, is a moral monster
for not doing that, right? And yet, if you do the math, in consequentialist terms, it seems like
an analogous situation. It's just at a greater remove. The moral horror of the inequality there
is just less salient. And so we just, you know, we walk past the pond, in effect, every day of
our lives, and we do so with a clear conscience. And so it's on the basis of that kind of thought that a few young
philosophers were inspired to start this movement, Effective Altruism, which, as you say,
I like the analogy, it's essentially moneyball for charity. Let's just drill down on what is
truly effective and how can we do the most good with the limited resources we have. And then
there are further arguments about long-termism and other things that get layered in there.
And I should say that Peter Singer and the founders of EA, Toby Ord and Will MacAskill,
have been on this podcast, and in some cases multiple times.
And there's a lot that I've said about all that.
I guess I would make a couple of points here. One is,
I guess, a further definition here. You brought in the term utilitarianism. So that's
the original form of consequentialism attributed to Jeremy Bentham and John Stuart Mill, which,
when it gets discussed in most circles, more or less gets equated with some form of
hedonism, right? But people tend to think, well, utilitarians really just care about pleasure
or happiness in some kind of superficial and impossible to measure way. And so there are
many caricatures of the view that you should avoid pain at all costs. There's no form of pain that could ever
be justified on a utilitarian calculus. So there's a lot of confusion about that. But I guess
if we wanted to keep these terms separate, I just tend to collapse everything to consequentialism.
You could argue that consequentialism, as you said, is the claim that moral truth,
which is to say questions of right and wrong and good and evil, is totally reducible to talk about consequences, actual or
perhaps actual and potential consequences. And I would certainly sign on to that. You could make
the further claim, which I've also made, that all of the consequences that really matter in the end have to matter to
some conscious mind somewhere, at least potentially, right? So that we care about the,
in the end, the conscious states of conscious creatures and, you know, anything else we say
we care about can collapse down to the actual or potential conscious states of conscious creatures.
So I would, I've argued for that
in my book, The Moral Landscape, and elsewhere. But much of the confusion here,
you know, as I think we're going to explore, comes down to an inadequate picture of just what counts
as a consequence. So I want to get into that. But I guess the final point to make here, just definitionally, is that it seems to me that there's no direct connection, or at least there's no
two-way connection. Maybe there's a one-way connection between effective altruism and
consequentialism, which is to say, I think you could be an effective altruist and not be a
consequentialist, though I would agree that probably most effective altruists
are consequentialists. I mean, you could be a fundamentalist Christian who just wants to get the
souls of people into heaven and then think about effective altruism in those terms. Just how can I
be most effective at accomplishing this particular good that I'm defining in this particular way? And, you know, so I do think EA and consequentialism break apart there. Although,
I guess you could say that any consequentialist really should be an effective altruist:
if you're concerned about consequences, well, then you should be concerned about
really tracking what the consequences of your actions or a charity's actions are.
And you should care if one charity is doing 100 times more good based on your definition of good
than another charity. And that's the charity that should get your money and time, etc. So
I don't know. Do you have anything you want to modify about all that?
No, no. I think that that's correct. And I agree,
actually, that you could sort of separate out the utilitarianism or consequentialism from
effective altruism in some particular ways. But I think that where it gets a little bit difficult
is that the whole sort of point is this effective part of the altruism. So when one makes a judgment about effectiveness,
they have to be choosing something to maximize or prioritize. So you want to be choosing the
biggest moral bang for your buck, which again, strikes me as quite admirable, especially when
the comparisons that you're making are local. So let's say that you set out
with your goal of saving lives in Africa. Well, maybe there are multiple different charities,
and some are just orders of magnitude better in terms of the expected results, in the broad number of
lives saved. And this is actually a big part of precisely what the effective altruism movement
has done. It's isolated some of these charities. There's a couple of them, around things like mosquito
bed nets, that are just really, really effective at saving lives.
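As a concrete sketch of the "moneyball for charities" idea just described, here is a minimal cost-effectiveness comparison, with made-up charity names and made-up cost-per-life figures used purely for illustration:

```python
# A minimal sketch of the "moneyball for charities" comparison described above.
# The charity names and cost-per-life figures are hypothetical, not real estimates.

donation_budget = 10_000  # dollars to allocate

# Hypothetical cost to achieve one "life saved or equivalent," in dollars.
cost_per_life = {
    "bed_net_distribution": 5_000,
    "generic_overseas_aid": 50_000,
    "local_arts_program": 500_000,  # real benefit, but hard to express in lives
}

# Expected lives saved per charity for the same budget, best first.
for name, cost in sorted(cost_per_life.items(), key=lambda kv: kv[1]):
    expected_lives = donation_budget / cost
    print(f"{name:>22}: ~{expected_lives:.3f} expected lives saved")
```

The ranking is only straightforward when the outcomes share a common unit like lives saved, which is exactly where the difficulty raised next comes in.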
But what if you're comparing things that are very far apart? So let's say that you have some money and you want to distribute it between
inner city arts education versus domestic violence shelters. Well, now it gets a lot harder,
and it becomes a little bit clearer that what we mean by morality isn't as obviously measurable
as something like an effective economic intervention
or an effective medical intervention. Maybe it is to some hypothetical being with a really perfect,
good theory of morality. And one way that effective altruists essentially get around some
of these issues is just to say, well, actually, both of those are essentially wastes of money. You shouldn't
really be contributing to inner city arts education or domestic violence shelters. You
really should be arbitraging your money because your money is going to go so much further
somewhere else. And again, this all sounds good. I don't think that this is bad reasoning or
anything like that. But the issue is that the more seriously you take this and the more literally you take
this, what happens is that it's almost like you begin to instantiate this academic moral
philosophy into real life.
And then it begins to become vicious in a particular way.
Like, why are you donating any money within the United States at all? Why not put it where it
goes much further? And that's where people begin to get off the bus to a certain degree. Again,
no one can blame anyone for maximizing charities, but to say that, okay, wait a minute, a dollar
will go so much further in Africa than it will here, so why donate any money to any charity that sort of operates within
the US? And that's where, again, people begin to say, wait, wait, wait, something is going on here.
And I think what's going on is that within this maximizing, totalizing philosophy,
you can have this hardcore interpretation of utilitarianism or consequentialism, and you can
take it really,
really seriously. And if you do, I think it can lead to some bad effects, just like the way that
people who take religious beliefs, and I don't want to make the comparison, I'm certainly not
saying that effective altruism is a religion, but in sort of the same behavioral way that people who
take religious beliefs really, really seriously believe they have some sort of access to
moral truth. And that allows them to strap a bomb to their chest or something. And that is this
level of sort of fanaticism. And I think that you shouldn't take academic philosophy too seriously.
You should sort of take it as interesting and maybe as motivating, but you shouldn't really
go and try to perfectly instantiate it in the world. You should be very wary about that. And that's what this sort of arbitrage is, right? It's this like taking it really, really seriously.
And I'm going to make a couple of claims here, which I think are true and foundational.
And I would love to get your reaction.
But before I do that, I just want to acknowledge that the issues you just raised are issues that I've been thinking about and talking about, all the while defending consequentialism.
This is really the fascinating point at which our reasoning about what it means to live a good life
and the practical implementation of that reasoning is just very difficult to work out in practice.
And so the first thing I would want to claim here is that consequentialism is a theory of moral truth, right? It's a claim about what it means to say that something is morally true, that something is really good or really bad, and about what it is
possible and legitimate to care about. But it isn't a decision procedure, right? It's not a way of doing the math that you just indicated may be impossible to do. And there's a distinction I made
in the moral landscape between answers in practice and answers in principle. And it just should be
obvious that there are a wide variety of questions where we know there are answers in principle.
We know that it's possible to be right or wrong about any given claim in this area.
And what's more to maybe not even know that you're wrong when, in fact, you are wrong.
And yet there may be no way of deciding who is right and who is wrong there
or ever getting the data in hand that could
adjudicate a dispute. And so the example I always go to, because it's both vivid and obviously true
for people, is that the question of how many birds are in flight over the surface of the Earth
right now has an answer, right? You just think about it for a second and you know it has an answer,
and that answer is in fact an integer. And yet we know we'll never get the data in hand. We could
not possibly get the data in hand, and yet the data have changed by the time I get to the end
of the sentence. So there is a right answer there, and yet we know no one knows it. But it would be
ridiculous to have a philosophy where a claim about birds and flight
would rule out the possibility of there being an answer to a question of how many are flying
over the surface of the earth, simply because we don't know how to measure it. And the first
thing many people say about any consequentialist claim about moral truth with respect to well-being,
say, the well-being of
conscious creatures, which is the formulation I often use, the first thing someone will say is,
well, we don't have any way of measuring well-being. Well, that's not actually an argument,
right? I mean, certainly it may be the beginning of one, but in principle, it has no force.
And as you can see by analogy with birds.
But further, I would make the claim that any claim that consequentialism is bad, right,
that it has repugnant implications, is ultimately a claim about unwanted consequences.
And usually it's an unacknowledged claim about consequences.
And so in my view, you just inevitably did it in stating the case against
taking academic philosophy too seriously. You pointed to all of the terrible effects of
doing this, right? The life negative effects, the fact that now you have to feel guilty going
to the symphony because it's such a profligate wastage
of money and moral resources when you could be saving yet further starving children in Africa.
And so we recognize we don't want to live in that sort of world, right? We love art,
and we love beauty, and we love leisure, and we're right to love those things. And we want
to build a civilization wherein there's such abundance that most people most of the time have the free attention not to just think about genocide and starvation, but to think about the beautiful things in life and to live creative lives, right?
And to have fun, right? Are we really going to take the thought experiments of Peter Singer so seriously that you can no longer have
fun, that you can no longer play a game of frisbee because that hour spent in the park with your
children is objectively a waste of time when held against the starvation and immiseration of
countless strangers in a distant country who you could be helping at right this very moment.
Well, we all recognize that that is some kind of race to the bottom that is perverse. It's not
giving us the emotional and cognitive resources to build a world worth living in, the very world
that the people who are starving in Africa would want to be restored to if we could
only solve their problems too. And so it may in fact be true that when brought into juxtaposition,
right, if you put the starving child at my doorstep, well then, all right, we can no longer
play Frisbee, right? So there's a local difference. And that is something that is very difficult to
think about in this context. And we'll get into that. But the claim I want to make here is that it's not a
matter of, as I think you said in one of your essays, it's not a matter of us just adding some
non-consequentialist epicycles into our moral framework. It really is, in the end, getting
clearer and clearer about what all the consequences are
and what all the possible consequences are of any given rule or action. And yeah, so anyway,
that may all stop there. But those are the kind of the foundational claims I would want to make
here. Yeah, absolutely. I mean, I think that the danger that I see is not so much someone saying, let's maximize well-being,
right?
It's more so that someone says, let's maximize well-being, and I have a really specific definition of well-being that I can give you right now.
And what ends up often happening is that, because it's all about
maximization, you can very quickly find these edge cases. In a sense, moral philosophy operates like this game wherein you're trying to find examples
that disagree with people's moral intuitions.
An example that people often give would be something like this serial killer surgeon
who has five patients on
the operating table, and he can go out into the streets, grab someone off the streets,
butcher them in an alleyway, take their organs, and save five people. So it's one for five.
And the difficulty is in specifying something, a definition specific enough, that doesn't tell you to do that. Most people sort of get off
the bus with that sort of example. And that aspect of utilitarianism is very difficult
to do away with. You can sort of say that maybe there are long-term effects, right?
So what people will often say with this example would be, well, wait, if the serial killer surgeon
got caught, if we lived in a society where people were just being randomly
pulled off the streets and murdered, this would impose really high levels
of anxiety on people or something like that. And so the overall net well-being would decrease or
something like that. But I think that that's very difficult to sort of defend, again, once you've
chosen something very specific to maximize, like lives saved or something like that.
But isn't that the mistake of misconstruing consequences? Because I take
this case of the rogue surgeon to be, in my mind, very easy to deal with in consequentialist terms,
and yet it's often, even in your essays, you put it forward as a kind of knockdown argument against consequentialism. And consequentialism
just obviously has a problem because it can't deal with this hard case. But I would just say
you can deal with it. And precisely the way that people recoil from it as a defeater to
consequentialism, that is a sign of what an easy case it is. I mean,
we all recognize how horrible it would be to live in a world, which is to say how horrible
the consequences are that follow from living in such a world. None of us would want to live in
that world. I mean, no one wants to live in a world where they or someone they love could at any moment be randomly selected to be murdered
and butchered for spare parts, right? And when you would think of just what sort of mind you
would have to have as a doctor to believe that was a way to maximize goodness in this world.
I mean, just imagine the conscious states of all doctors as they surveyed
their waiting rooms looking for people that they might be able to put to other use than merely to
save their lives, right? It's just, it perverts everything about our social relationships. And
we're deeply social creatures. And states of mind like love and compassion
are so valuable to us, again, because of how they directly relate to this experience of well-being,
you know, again, this is a suitcase term in my world, which can be kind of endlessly expanded,
but it doesn't mean it's vacuous. It's just that the horizons of well-being
are as yet undiscovered by us, but we know that it relates to maximizing something like love and joy
and beauty and creativity and compassion and something like minimizing terror and misery and
pointless suffering, etc. And so it just seems like a very easy case when you look
closely at what the likely consequences would be. And yet, there are probably local cases
where the situation flips because we really are in extremis, right? I mean, you think of a case
like a lifeboat problem, right? Like, listen, the Titanic has sunk, and now you're on a lifeboat, and it can only fit so many people, and yet there are more people actually clambering on, and you're all going to die if they keep coming, and so people get kicked in the face until they stop trying to climb onto the lifeboat,
because we're no longer normal moral actors in this moment, and we'll be able to justify all
of this later, because this really was a zero-sum contest between everyone's life and the one life,
right? Those are situations which people occasionally find themselves in. And yes, they do function by this kind of callous, consequentialist calculus.
And they're uncomfortable for a reason.
They're uncomfortable because we get very uncomfortable mapping the ethics of extremis onto life as it is in its normal mode, right? And for good reason,
right? I mean, there's so much here. I realize now the fire hose of moral philosophy has been
trained on you. But there are so many edge cases that are worth considering here. But again, it never gets us out of the picture of
talking intelligently and compassionately about consequences and possible consequences.
So I think that there is a certain sort of game that can be played here. And this is basically
the game that is played by academic moral philosophers who are debating these sorts of issues, right?
And just to me, I think the clearest conception is to say, okay, we have some sort of utilitarian
calculation, or consequentialist calculation,
let's say, that we want to make for these particular cases. And so we have the serial killer surgeon, and we say, okay, the first term in this equation
is five for one. So that seems positive, right? So it's adding this positive term.
But then there are these nth order effects, right? So then you say, well, wait a minute,
if we can add in the second term. And the second term is like the terror that people feel from
living in a society wherein they might be randomly butchered, right?
And then the argument is, well, when you add enough of these higher order effects, you know,
into the equation, it still sort of ends up coming out negative, thus, you know, supporting our
dislike or distrust of serial killer surgeons going around. And I think what academic philosophers
often do in this case is they say, okay, so what you've done is you've given me a game
where I just have to add in more assumptions in order to make this equation come up positive or
negative. And the goal would be for the critic to make it come out positive so that utilitarianism
recommends the serial killer surgeon and therefore sort of violates our moral intuition. And I guess what I think is that there are some
ways to do that. So an example might be that you say, well, what if you are a utilitarian and you
learn about a serial killer surgeon? Are you supposed to go report them to the police?
Well, if you did that, it would be
very bad. It would even be bad for utilitarianism itself. So you should sort of try to keep it a
secret if you can. In fact, you should sort of support the act by trying to cover up as much
of the evidence as possible, because now this is still technically maximizing well-being.
And even if you say, well, wait a minute, there might be some further effects,
it seems as if there's this sort of game of these longer-term effects. And not only that,
as you add nth-order effects into this calculation, it gets more and more impossible
to foresee what the actual values will be. There's this great story that David Foster
Wallace, the writer, actually quotes at some
point.
There's this old farmer who lives in a village with his son.
And one day, his beloved horse escapes.
And everyone says, oh, bad luck.
And the farmer says, who knows?
And then the horse comes back.
And it's somehow leading a herd of beautiful horses.
And everyone says, oh, great luck.
And the farmer says, who knows?
And then his son tries to tame one of the wild horses, breaks his leg. And everyone says,
oh, bad luck. And the farmer says, who knows? And then, last instance, you know, the army comes in
and drafts every able-bodied man to go serve in, you know, this horrific, I don't know, Sino World
War I conflict, where he would certainly die. But because his leg's broken, he's not drafted.
And so the farmer says, good luck, bad luck, who knows? And it seems to me that there's two issues.
One, as this calculation gets longer, the terms, first of all, get harder and harder to foresee.
And then second of all, they get larger and larger. So this is sort of like a function of almost like chaos theory, right?
And that's what would seem very strange to me.
And again, maybe it's sort of true from this perspective of like this perfect being who
can sort of calculate these things out.
But once you've sort of specified what you're trying to maximize and set it in our terms,
you can find examples where it's like, well, should this Visigoth save this baby in the woods? Well, if he does, that leads to Hitler.
If the Visigoth leaves the baby in the woods, we never get Hitler, right? And that's because
effects sort of expand, just like how if you go back a thousand years, pretty much everyone is
your ancestor, right? Or 10,000 years or however far
you go back. But pretty much everyone living then is eventually your ancestor because all the
effects get mixed. And I think probably causes are sort of similar to that, where they get mixed
together. And so you have these massive expected terms, and they seem totally defined by... You
can always say, well, what was foreseeable and what wasn't foreseeable?
And I agree, like that's, you know, certainly a reply.
But it just seems that when we try to make this stuff really, why I say to be wary about
it is not that I think that it's automatically wrong.
It's that any attempt to try to make it into something very specific and calculable, to
me, almost always appears to be wrong.
And there are always philosophers in the literature who are pointing out, well, wait a minute,
wait, you can't calculate it that way because that leads to this and you can't calculate
it that way.
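To put the instability Erik is describing in concrete form, here is a minimal sketch with invented numbers: a naive consequentialist sum whose first-order term is plus four lives, followed by higher-order terms that are assumed to grow in magnitude while their sign becomes a coin flip. The verdict ends up depending almost entirely on the guesses made about the later terms.

```python
import random

# A toy ledger for the "serial killer surgeon" calculation described above.
# Every number is invented for illustration; nothing here is a real moral estimate.

random.seed(0)

def simulate_total(n_orders: int = 6) -> float:
    """First-order term plus higher-order terms that grow in size and are uncertain in sign."""
    total = 4.0  # first-order term: five lives saved minus one life taken
    for order in range(2, n_orders + 1):
        magnitude = 4.0 * 2 ** (order - 1)   # assume later effects are larger...
        sign = random.choice([-1.0, 1.0])     # ...and their direction is harder to foresee
        total += sign * magnitude * random.random()
    return total

totals = [simulate_total() for _ in range(10_000)]
share_positive = sum(t > 0 for t in totals) / len(totals)
print(f"fraction of runs where the sum comes out positive: {share_positive:.2f}")
print(f"spread of totals: {min(totals):.1f} to {max(totals):.1f}")
```

The toy model proves nothing about morality; it just shows that once the later terms dominate and can point either way, the sign of the total is set by the assumptions, which is the "good luck, bad luck, who knows" problem in numerical form.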
And I think in the effective altruism movement, in a sense, while many within the movement
do not take it so seriously that they are trying to do exactly that, maximize something
that they can sort of specifically quantify, some people do. And I think Sam Bankman-Fried was one of them. And while I cannot
personally say at all that that actually directly led to his actions, I think that given the evidence
of the case, you could reasonably say that it might have contributed, that this taking on of risk and this notion
of maximization and having something very specific in mind that he's trying to maximize,
I think very well could have led to the FTX implosion. And therefore, it's an instance of
trying to essentially import academic moral philosophy into the real world and just crashing
on the rocks of the real world. Hmm. Okay, well, just briefly on Sam Bankman-Fried, I would think that what's parsimonious
to say at this point about him is that he clearly has a screw loose, or at least some screws loose,
precisely where he should have had them turned down, just in this area of moral responsibility and thinking reasonably about the effects his actions would have or would be likely to have on the lives of other people, right?
I mean, he's just not, you know, the stuff that's come out since I did my last podcast on him has been pretty unflattering with respect to just how he was thinking about morality and consequences.
But I mean, to come back to the fundamental issue here, again, consequentialism isn't a
decision procedure, right? It's not a method of arriving at moral truth. It's a claim about
what moral truth in fact is, right? And what makes a proposition true. So that distinction is
enormously important because I fully agree with you that it's surprisingly difficult to
believe that you understand what the consequences of anything will be ultimately. And there are many
reasons for this. I mean, there's the fact that there are inevitably trade-offs, right? You do one thing, by definition, you at least have
opportunity costs incurred by doing that thing. And it's impossible to assess counterfactual
states of the world, right? You just don't know what the world line looks like where you did the opposite thing. And as you point out in one of your essays, many harms and goods are not directly comparable. You put it this way: you can't compare something like a broken toe, as a mathematical evil, to the loss of a person's life, right?
But it seems like, in consequentialist terms, you should be able to just do the math and just keep adding broken toes.
And at a certain point, okay, it would be good, quote, good in moral terms, to sacrifice one innocent human life to save a certain number of
broken toes in this world, right? And that just may not be the way the world is for a variety
of reasons that we can talk about. But I mean, it seems our moral intuitions balk at those direct
comparisons, perhaps for good reasons, perhaps for bad reasons. I mean, we're living in a world where it's not crazy to think that we may ultimately change our moral
intuitions. And then there has to be some place to stand where we can wonder, well, would that be a
good thing to do? Good in terms of consequences, right? I mean, would it be good if we could all
take a pill that would rewrite our moral code so that we suddenly thought, oh, yeah, it's a straightforward calculation between broken toes and innocent human life, and here's the number, right? And now we all see the light. You know, we see the wisdom of thinking in these ways because we've actually changed our moral toolkit by changing our brains. Would that be good or would that be moral brain damage at the population level?
That's actually a criticism that people have made
of exactly what you're saying, of utilitarianism, where people have basically said, again,
this is sort of a game where I can add a term. So what if in the serial killer example,
I add the term that everyone on earth is a utilitarian and totally buys the fact that you should sacrifice the few to save the many?
And then that actually ends up being positive.
And then you can have a society where everyone's going around and it's like, oh, yeah, you
know, Samantha got taken in by one of the serial killer surgeons, you know, last month.
You know, what a tragedy for us.
But, you know, it's all for the greater good.
Well, I mean, that's the vision of morality that I sketch in The Moral Landscape. I mean,
the reason why I call it The Moral Landscape is that I envision a space of all possible experience
where there are many peaks and many valleys, right? There are many high spots and not so high
spots. And some high spots are very far away from what we would consider a local
peak, and to get there would be a horror show of changes. But maybe there are some very weird
places where it's possible to inhabit something like a peak of well-being where, in the example
I think I gave, is an island of perfectly matched
sadists and masochists, right? Is that possible? It's a cartoon example, right? But maybe something
like that is possible, right? Where I wouldn't want to be there because of all of my moral
intuitions that recoil both from sadism and from masochism. But with the requisite minds,
maybe it's possible that you could have a moral toolkit that perfectly fitted you to that
kind of world and did not actually close the door to other states of well-being that are, in fact, required of any peak on that landscape.
I doubt it in this case, but again, that's just my moral intuitions doubting it.
But the problem is with our moral intuitions.
I mean, first, the general claim I would make here is that there's just no guarantee that our intuitions about morality reliably track whatever moral truths there are. I mean,
they're the only thing we can use, and we may one day be able to change them, but it's always true
to say that we could be wrong and not know it, and we might not know what we're missing. In fact,
in my view, we're guaranteed not to know what we're missing most of the time.
And so this just falls into the bin of, you know, it's just nowhere written that it's easy to be as good as one could be in this life.
And in fact, there may be no way to know how much better one could be in ethical terms.
And that's both true of us individually and collectively.
Yeah, I think that that's absolutely right.
And it's why I personally am very skeptical of moral philosophy and sort of have been
advocating for people to take it less seriously.
And that's because you can very quickly get to some very strange places, right?
I mean, as an example, if you're trying to maximize well-being, it seems...
Now, again, this depends on your definition of well-being. So let's take a relatively reductive
one like happiness or something, but just for ease. But if you're trying to do that,
it seems way easier to do that with AIs than with people. You can copy-paste an AI, right?
So if you make an AI and it has a good life, you just click copy-paste,
you get another AI. You can fit a lot more AIs into the universe than you can fit human beings.
Again, maybe there's some inaccessible to us or just very difficult to specify notion of well-being
that avoids these things. I honestly believe, and I think here is really
getting to the heart of the matter, that there are some sections of the effective altruist movement
who take that sort of reasoning very seriously. And I sort of just strongly disagree with it.
And let me give an example of this, which is William MacAskill, who I think is a good philosopher,
and I read and reviewed his latest book. And I know you talked to him on the podcast about this book as well. But in it, I was sort of struck by when he's talking about existential risks,
and he's talking about things that might end humanity, he has this section on AI because he
views AI as a threat to humanity. And it reads very differently than the other
sections on existential risk. And that's because he takes great pains to emphasize
that in the case of an AI apocalypse, civilization would sort of continue as AIs.
And it's very difficult to even read that section without it appearing to show almost some sympathy for this, probably because William MacAskill accepts sort of a lot of the
conclusions of utilitarianism. From a utilitarian perspective,
it's not necessarily a bad thing in the very long run.
I mean, it's probably very bad when it happens because somehow you have to get rid of all
the humans and so on.
And that sort of reasoning strikes me as almost a little bit dangerous, particularly because
the effective altruist movement are the ones giving so much money to AI safety, right?
So as much as it's strange to say that people could be overly sympathetic to AIs, I think
we're living enough in the future where
that is actually now a legitimate concern. Well, for me, everything turns on whether or not
these AIs are conscious, right? And whether or not we can ever know with something like certainty
that they are, right? And I think this is a very interesting conversation we could have about the hard problem
of consciousness and what's likely to happen to us when we're living in the presence of AI that
is passing the Turing test, and yet we still don't know whether or not anything's conscious,
and yet it might be claiming to be conscious, and we might have built it in such a way that
we're helplessly attributing consciousness to it. And many, many of us, even philosophers and scientists, could lose sight of
the problem in the first place. I understand that we used to take the hard problem of consciousness
seriously, but I just went to Westworld and had sex with a robot and killed a few others,
and I'm pretty sure these things are conscious, right? And now I'm a murderer. It's just, we could lose sight of the problem and still not know what we're dealing with.
But on the assumption that consciousness arises on the basis of information processing
in complex systems, and that's still just an assumption, although you're on firm ground
scientifically if you make it,
and on the assumption, therefore, that the emergence of consciousness
will, in the end, be substrate independent, again, it seems quite rational to make this assumption, but
it's by no means guaranteed, well, then it would seem just a matter of time before we
intentionally or not implement consciousness in a non-biological system.
And then the question is, what is that consciousness like and what is possible for it?
And this is a place where I'm tempted to just bite the bullet of implication here, however
unhappily, and acknowledge that if we wind up building AI that is truly conscious and
open to a range of conscious experience that far exceeds our own in both, you know, good and bad
directions, right, which is to say they can be much happier than we could ever be and more creative
and more enjoying of beauty and all the rest, more compassionate, you know, just more entangled
with reality in beautiful and interesting ways. And they can suffer more. They can suffer the
deprivation of all of that happiness more than we could ever suffer it because we can't even
conceive of it, because we basically stand in relation to them the way chickens stand in
relation to us. Well, if we're ever in that situation, I would have to
admit that those beings now are more important than we are, just as we are more important than
chickens, and for the same reason. And if they turn into utility monsters and start eating us
because they like the taste of human the way we like the taste of chicken, well then, yeah,
there is a moral hierarchy depicted there, and we're not at the top of it anymore. And that's fine.
I mean, that's not actually a defeater to my theory of morality. That's just, if
morality relates to the conscious states of conscious creatures, well then, you've
just given me a conscious creature that's capable of much more important conscious states than we are.
I think it's the same way.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense Podcast,
along with other subscriber-only content, including bonus episodes and AMAs
and the conversations I've been having on the
Waking Up app. The Making Sense podcast is ad free and relies entirely on listener support.
And you can subscribe now at SamHarris.org.