Within Reason - #32 Destiny - Testing Destiny's Ethics
Episode Date: May 22, 2023
Steven Kenneth Bonnell II, known online as Destiny, is an American internet personality and political commentator. Find Destiny on YouTube: https://www.youtube.com/user/DESTINY
Transcript
Destiny, thank you so much for being here.
Yeah, thanks for having me, Cosmic. Do people call you Cosmic, or Alex, or what?
I was going to ask you the same thing. Not if people call you Cosmic, but... I tend to prefer Alex. In fact, I'm considering getting rid of the Cosmic Skeptic moniker altogether. It's kind of a pseudonym that sounds like a gamertag that I came up with when I was like 14, which isn't far off what actually happened. So I'm considering shaking it off, but having a bit of a brand identity crisis. So we'll figure that out eventually.
Where did cosmic skeptic come from? Just early internet edgy philosophy name or what?
You know what it is? There's this guy, and there's no reason why he would be watching my videos, but years and years ago, I did this thing where we made music together at some music club or something. And I saw that he'd set up this SoundCloud, and his name on SoundCloud used the word cosmic. And I was like, that's a pretty cool word. So when I was trying to come up with a YouTube name, I thought Cosmic works. And I was going to go for Cosmic Critic, which sort of makes me internally cringe. But then I guess so did Cosmic Skeptic at the time. But there was the whole, I don't know, it just sounded a bit better.
And to be honest, it looks nice written down, which is, I think, a slightly underrated quality of a good
YouTube name. Have you seen the thing where the guy, it's like a branding guy, and he says,
you know, which one of these pictures is Kiki and which one is Bouba? Have you seen that thing?
I have not, no.
So he gets, like, a picture of some sort of spiky 2D object, and another one's this sort of, you know, rounded, blobby object. And he says, which one of these is Kiki and which one is Bouba? And the audience immediately identified the spiky one as Kiki and the rounded one as Bouba. And he's sort of making this point that people put things together in a way that you might not expect. He asked
if lemon, the taste of lemon, is a fast taste or a slow taste. Okay. And everybody unanimously said
fast. Yeah. Even though that doesn't really make any sense, there's sort of a sense in which
you understand what they mean. And I think something about the way that Cosmic Skeptic looked written down and felt, given the nature of the channel, was something like that. But I don't know if the same is true of you. Where did Destiny come from?
Just really early on, geez, I must have been what, probably 14, 15, when I just played games on the internet. When you make your name for battle.net, I chose Neo Destiny because they were just two words that I thought sounded cool. And that was literally it. And now that's kind of my online moniker that I'm a little stuck with.
They do sound cool, man. So do people call you Stephen or Destiny in these contexts? And which do you prefer?
Definitely Stephen is what I should be called. And definitely Stephen is what I prefer, yeah. For sure.
Okay. So, Destiny, let's jump into things. What I was hoping to do...
You just called me... oh, okay. Yeah, yeah, don't worry, I'm messing. Stephen, Stephen it is.
My British-sarcasm, whatever, humor radar is not on right now. So sorry, okay, go for it.
What I'm hoping to do, Stephen, and I'll emphasize that from my throat, Stephen, is something that I think, well, I've had a lot of requests to speak to you in various different contexts. Usually, people want me to talk to you about something sort of vaguely philosophical.
And when I was thinking about what to speak to you about, I was told by a friend that you had
this entire sort of page on your website dedicated to explaining what your views were on certain
things. And there's this whole entry on philosophy. Now, I don't know when you wrote that or how old
that is. I also saw that you put out a video a couple of years ago trying to sort of systematically
go through what, I guess, your ethical worldview is, what your position is on things.
And I gave it a listen and there were a few things that sort of made the eyebrows raise to
the back of the head. And I was hoping that what we could do was talk about your worldview,
your philosophical worldview, your ethical worldview, what it is, sort of how you might
justify it. And then we can maybe talk about its implications on the practicalities of the things
you believe as well. Yeah, sure.
A little bit of a philosophical audit, if you will, yeah.
I guess so.
And obviously this is going to have implications.
I mean, a lot of people want me to talk to you about your views that you sometimes express about animals, which I do want to talk about.
Because I think I've heard you in the past say that you essentially just don't care about animals in an ethical context.
That might have changed.
I might be wrong about that.
Is that the case?
No, it's pretty close, yeah.
Because we're definitely going to sort of lock horns there.
I think that, having listened to what you said, I want to understand what you're saying, but I think there might be some ways to push back. But I think in order
to do that, we need an idea of what you actually think. So if you were to sort of give an overview for
someone who wasn't familiar with you, there might be people listening who haven't listened to you
talk about ethics, what your outlook is here. How do you determine what the right thing to do is when
you're going about your ethical conduct? Sure. So on a very, very, very fundamental level,
I essentially want to create like the best world for me, like a world that kind of
maximizes the experience that I have.
There are a couple of assumptions that I have that kind of go along with this.
One assumption is that two people together can create more happiness or utility,
whatever you want to call it, than two people individually.
So if one person on their own can create like 10 units of happiness and another person
can create 10 units of happiness, when they come together, they create together like 30 or 40
units of happiness.
So some level of collaboration is essential to, I think, like,
human flourishing or human happiness. And then typically, whatever system of ethics I have, it kind of only works if other people are on board with the same type of thing. There has to be some level of reciprocation. So for instance, I can't have a system that says I'm not going to steal or kill people, and then collaborate with people that think they can steal from me or kill me. So whatever ethical system I have has to be universalized to some extent: whatever rights or privileges I demand for myself, I also have to extend to other people. And I just kind of function in a way that I hope that everybody can share this system, and it kind of works for all of us to maximize the amount of happiness, I guess, that we can all produce together, essentially. That's, on a very, very, very fundamental level, kind of what's going on. So I try to generate from
there. This sounds a bit utilitarian. Is that essentially what you're driving at?
Yeah, I guess you, yeah, sure, I guess you can call it that, yeah. Because what it seems to me like
you're doing here is extrapolating from something like a basic, intuitive truth that my well-being
or sort of pleasurable experiences are seemingly good for me and that painful experiences are
bad for me. And there's a sense in which I just have this basic desire, this want to maximize
my positive experiences. What I'm interested in is how you're extrapolating from that to
treatment of other people. Do you think that the well-being of other people only matters insofar as it
has an effect on your well-being? I think at a really fundamental level, I think yes. Yeah, I think so.
Because I've heard, for instance, on this page, you gave the example of some chocolate bars.
You said, look, there are sort of five people and five chocolate bars. And you might want to say,
well, if you're just interested in maximizing your own experience, why don't you just
eat all five chocolate bars? And you said something like, well, that's because if I ate them all myself, then these people would be less likely to be my friends. You know, they wouldn't be
as kind to me. It would actually have a negative effect on me in the long run. I just thought
to myself, I mean, suppose that they just didn't know that these chocolate bars existed.
Suppose that still they're going to suffer. You know, let's say that this is the only food
available. You're on some desert island. And you just discover sort of buried in the sand, this
food, and you can eat it all yourself, and they're never going to find out. Do you think there can be such a thing as an ethical obligation to share that food? Or do you think, if your view is that maximizing your own positive experiences is the right thing to do, then in fact you might have something like an obligation to not share that food, because in doing so you wouldn't be maximizing your own positive experience?
I think, I guess I would hope that there is...
I guess I would hope that in whatever society I create, there's going to be some, I guess, some type of virtues that all of us kind of hold to and try to do our best to uphold, irrespective of whether we get caught or not.
But I don't really know if I expect the average person to do that.
So this is why I usually advocate for some level of social dynamics or some level of governmental pressure to kind of keep people in line.
So, you know, given the opportunity to pay, like if our taxes were hidden and nobody knew if we paid or not.
And you had the opportunity to pay your fair share or not pay anything at all.
I don't know if I would trust everybody to pay everything.
So this is where I would figure that government would step in and say, like, hey, like, you have to pay this.
Like, this is your obligation, not to bring up a whole tax debate or whatever.
And then same thing with some types of social norms as well: hopefully you would enforce some type of social norms so that, in places where people could get away with things that they otherwise wouldn't get caught for, there's hopefully going to be some sort of social consequence.
But I guess I have a hard time... there is an attraction to thinking of some other sort of system that, I guess, necessitates or requires you to have ethical obligations that have no actual reinforcing mechanism. So for instance, if you found something and you had the opportunity to sneak it or share it, hopefully you'd always share it, but I haven't thought of a way to say, like, well, look, because of this reason, you should always be obligated to share it, other than kind of these very broad
utilitarian arguments, you know, that like, well, if you were on a lost island and everybody
found secret stuff, you know, you'd probably all be better off sharing than just a couple
people hiding or stealing things. But yeah, I don't know really how to like objectively build that
out other than starting from a place of personal preferences, I guess, yeah.
Well, this is my concern with building an ethical worldview on the basis of personal concern: if you say something like, look, I just don't have a way of saying that this person should share this food. I mean, you even said then, well, I would hope that they would share this food.
Yeah.
But why? I mean, is that a sort of, I guess, a moral hope? Is that sort of...
I would hope that people are good people.
I would hope because if I was the person that didn't find the food, I would want some.
Yeah, so this was another interesting sort of rubbing up against each other of two different intuitions.
You seem to have this egoism of saying, you know, what I care about here is essentially the maximization of my own well-being.
But then in situations where it's clear that your positive experience is going to be maximized by doing something that, intuitively, most people consider to be immoral, we can appeal to something like a different principle, something like a Rawlsian veil of ignorance, and say, well, look, if I didn't know who I was
going to be in this circumstance, if I didn't know if I was going to be the person who gets the
food or the person who doesn't, I'd want the food to be shared.
Yeah, there's, have you... or yeah, go ahead.
Yeah. But surely in that circumstance, if you know what circumstance you're in, if you're not behind a veil of ignorance, if you know that you're the person who's going to benefit, to make an appeal to the fact, to say something like, well, if I were the person who wasn't going to get the food, I'd want them to share... fine, but you're not. So what does that matter?
I feel like there are so many different, there are so many different interactions in life
with different things that at some point, like, it's very rare that you're going to be on top
on every single level. So you would hope that there are times that you give a little,
and hopefully there are times where other people give a little because it all balances out on the end.
But I mean, the direct, actually, the direct example of this, have you seen the movie?
It's a Spanish movie called The Platform, I think.
I haven't.
Oh, fuck.
It's basically, like, this moral principle for an hour and a half, where 100 prisoners get shuffled around a hundred-floor prison every month, and there's a thing of food that goes from the top to the bottom.
I'm sure you've heard of this.
I've heard of this, yeah.
It feels like a freshman in high school philosophy or whatever decided to make this film to like illustrate this principle.
But yeah, I guess my hope would be that there are so many different transactions in life that like in some sense,
like, yeah, you could probably fuck somebody over with no consequence at all, but at some point
you're going to be on the receiving end of something, whether it's in old age, whether it's
if you're down on your luck financially, whether it's you happen to be out in a public area and
somebody is trying to kill you or whatever, that like we're all kind of motivated to realize
that, you know, if we peek behind that curtain, you know, we might be in the upper position
here, but, you know, we might not always be in that upper position. Yeah, and maybe this is why
we want some kind of law enforcement to force people to share in situations where
they could keep things for themselves, and this would be something like a taxation system.
But speaking sort of morally here, like you could, in theory, at least in some circumstances,
just get away with this. That is like, yeah, I'd rather live in a world where if the government
were somehow omniscient and knew whenever somebody discovered food and other people were starving
could force them to share it. In this circumstance, you know you're going to get away with it.
No one else is going to find this food. You can eat the food right there and then, and no one
will know that you've eaten it. And nothing is going to come of it.
You can say, yeah, but in 10 minutes' time, I might be the person who requires the altruism of other people,
but there's no reason for them not to give it to you because they don't know that you've been selfish.
They have no idea.
The only thing, in other words, that could call somebody in that specific instance to share the food
would be some kind of care or concern with the well-being of others for its own sake.
It can't be based on a sort of reciprocal notion because in this situation, there is no,
no situation in which they even find out that you've stolen the food. You see what I mean?
Yeah, I understand what you're saying. Yeah, I don't know. What's your solution?
And I think, well, I think it potentially leads to some quite dangerous implications. Like,
you could imagine somebody listening to this, resonating with what you're saying and saying,
you know what, that's a good point. All I care about is maximizing my own pleasure, and other people only matter insofar as they have an effect on me. And you can imagine somebody who discovers a way to get away with not paying their taxes, or discovers a way to exploit other people politically without them realizing that that's what's happening, and just decides to do it, because, you know, what moral intuition is there to the contrary?
And I just wondered, I imagine that if you were speaking in a political context and somebody was,
and you were sort of criticizing somebody for acting immorally, and they said, well, look, I just guess
I never thought I was going to get found out.
And you say, but still, that's evil, that's terrible.
I can't believe you've done this thing.
And they say, well, I mean, to be honest,
I was just following your advice
and trying to maximize my own experience.
Where did I go wrong?
Yeah, I don't know.
It's something I've thought a lot about,
but I guess when I think about these kind of like principles,
I just think of, I try to think of like,
how do humans behave?
And then what standard of behavior
can you realistically hold people to?
And I just haven't been able to think
of any, like, objective moral principle
that says, you know, well, by this principle... I got, damn, fuck, I didn't know we were going to get this hardcore philosophical, but I guess that's your channel. I'm just going to ask you questions, okay, actually, because if you have a solution for me, then, fucking, that'd be great. I wasted so much time reading this bullshit on meta-ethics, and basically meta-anything of philosophy. I hate all of this, even ethics, even normative ethics I hate now. But I'm curious if you
can tell me, what is the value of, um, well, let's say you've got two people on an island,
right?
And you're trying to enforce, like, some sort of behavior.
what is the value of being able to say something is right or wrong
if the other person disagrees with you?
Can you tell me that?
What is the value of doing that?
Yeah, so you've got a big guy on an island. He could beat you up, take all your stuff, and his life would be fine. And then you've got a small dude on an island,
he can't really do anything.
Like, what is the value of being able to say,
like, what's right or wrong in that circumstance?
I'm curious.
Well, the value of being able to do that
is, I think, essentially in line with what you're saying,
which is something like self-preservation.
It's similar to sort of the early justifications for government,
which is essentially just giving up some of your freedoms in return for some kind of security.
And I think this is something that slowly happens over the evolution of our species.
Now, myself, I essentially conform to a view called ethical emotivism,
which has gone a little bit out of fashion, I think, perhaps unfairly.
But I think that ethical expressions are essentially just emotional expressions.
But you can think about it as some form of ethical non-cognitivism.
Positivism, basically? Or...
Yeah, yeah, it is. So moral statements don't actually have truth value at all. There is no... yeah, it's not the kind of sentence that can be true or false. Saying that something is bad, or expressing a moral discontent with it, is a bit like saying "murder, boo," or "don't," or even "don't do that," like a command. I think there's a bit more of a prescriptive element. It's not just an expression of "boo, murder," but also something like "don't murder," which equally is something that doesn't have truth value, but there's some kind of normative
is what you've described, which is thinking, well, if we can sort of engender a situation where
people generally do share, that when I'm the person who needs to be shared with, I'm going to
receive the goods, I think this is essentially what's evolved in human psychology, but it's not
a conscious thing in the way that an ethical framework might be. You might sort of think to yourself,
well, I'm going to share this food because one day I might need the food shared back.
I think this is something that's happened subconsciously
in the way that we evolve lots of behavioral and physical traits
that we don't quite know why they're there, but they serve some purpose.
And I think the purpose of something like compassion and altruism
are ultimately self-serving because the chances are you are going to get found out
every now and again, right?
Yeah.
But this is purely descriptive.
This is just sort of describing why it is that people feel empathetic.
And I think you can show that, if evolution works at the level of the gene, it seems, let's say, suspicious that our level of altruism and willingness for self-sacrifice seems to map onto how closely genetically related we are to the person that we're considering making the sacrifice for.
Somebody might be more willing to save their brother or their son than a cousin or a cousin than a stranger.
And in fact, although people are sort of working to ethically step out of this, people generally, I think, have biases towards people who look like themselves.
They might be more likely to donate to a charity that helps people at home rather than abroad, for example.
And if the reason why this empathetic quality has evolved is just essentially for the benefit of the genes, it would make sense that this would map on perfectly.
There's even some interesting thought along the lines of saying, well, why is it that somebody would care differently for, say, a son versus a brother, or a brother rather than a parent, even
though you should share the same amount of genes. And they suspect it's because of the fact
that a lot of people would have actually been half brothers because of the fact that men were
going around impregnating lots of different women in our evolutionary history. So if you had a brother,
it was more likely to be sort of a half brother, whereas your son was going to share more of
your genes. So, like, so much about who we care for, and why we care for them, seems to be dictated by our genes. So I think what you're describing there is descriptively what's actually happened,
but it's left us essentially with an inability
not to feel a certain level of compassion to other creatures,
and that motivates us to share our food,
even when we might rationalize that I don't need to share this food,
I can't help but feel compassionate to the other members
of this desert island tribe, and so I share the food,
but that doesn't confer an obligation to do so.
It just sort of describes why I happen to feel like I want to,
you know what I mean?
Sure.
Yeah, I guess I don't know if I believe in real ethical obligations, then, I guess, or if I could objectively say that someone ought to have some sort of ethical obligation. So when I think of navigating the world with a system of ethics, I try to think of what maps on the closest to what's going on, what's the least exploitable by a malicious party, and then what could be universalized through everybody so that everybody would kind of work pretty well together. So when I think, for instance, of that island example: if I was somebody that was of the idea that I can make absolute moral statements,
and I can tell this other person like, hey, you can't bully me
or steal my food or kill me because that would be wrong.
And if the other person disagrees with me, then, well, I'm in a lot of trouble
and my moral statements don't really serve me very well.
They don't do anything.
They don't help me navigate the world in a positive way.
But if I come at it and I assume like, okay, well, this guy is coming at this from a
self-interested perspective and I'm coming at this from a self-interested perspective, and if I can't demonstrate some value or something to this dude,
or put him in some sort of conundrum,
he's probably going to kill me and take all my stuff.
So that would motivate me to take actions that ensure my safety and survival
while also, I guess, taking into account his,
assuming we both have to survive on this island.
Yeah, I don't know if you're going to,
how would you engage with that?
Well, I mean, it makes ethics very transactional.
Yeah.
It means that I think the problem people are going to have
with this view is that you want to say
that the thing that is wrong
with me exploiting another human being,
the only thing that makes this wrong
or in any sort of meaningful sense
something that I ought not do
is the fact that sort of maybe
this might contribute to a culture in which
it comes back to bite me.
Or maybe this person sort of escapes my exploitation
and ends up with the whip in their hand
and then I'm screwed. Or maybe someone sees me do this
and thinks, well, you know, why the hell should I share with this person?
Can that be the only thing that is wrong
with exploitation, the infliction of suffering?
Yeah, I mean, I understand what you're saying
that it's not, it doesn't feel very emotionally satisfying. I just, I feel like it maps onto the world
in a pretty good way. And I feel like from that perspective, like I could generate government
policy or prescriptive statements for people that I can expect people to kind of follow through on, because I would understand that, like, the only way to get somebody to follow through on something
that they might not necessarily want to do is with some sort of either social or legal enforcement
essentially.
So I'm imagining a situation in which, sort of, the things that tick for you are different from other people's.
And in fact, we can sort of talk about the idea I had in mind in a moment because I remember
you were asked on this video that I talked about, somebody I think in your chat or whatever
asked you what happens when one person's sort of base internal preferences contradict
another person's.
Because it's easy enough to say we're going to sort of enter into a transactional relationship
where if we share our food, you're going to benefit, I'm going to benefit, you know, it's
all good, but if there's a situation in which you have a basic conflict, your interests versus
somebody else's interests, what do you do when they contradict? And what you said at the time
was, and I'm quoting: "A bad preference is one that demands other people violate things that would satisfy their own preferences, in my opinion. There's like an incompatibility there. But I don't think I have any internal preferences that contradict other people so radically. Or if they do, I would argue that I'm correct and they're incorrect, and fuck them." I wonder if that's still the approach that you would take to
basic value conflicts.
It sounds really harsh, but at some point it would turn into violent conflict, and it would be the destruction of one side and the survival of the other. So if there were two civilizations that came into contact, and in one civilization they thought it was permissible, you know, if you really felt like you really wanted to kill somebody, you ought to be able to pull someone off the street and just kill them, and in another society they're like, we're not doing that: this might be such a fundamental disagreement, or the one civilization of people might be so altered such that this is such a high preference of theirs that they couldn't conceive of living in a world where they weren't allowed to fulfill it. But in that case, when you've got two deeply conflicting values like that, it would have to come out in some sort of violent conflict, because there'd be no other way to resolve it.
Now, for me, would there be any... sorry, carry on.
I was going to say, for me, I firmly believe that, like, 99% of humans, just because we all come from roughly the same
genetic stock. I think we have roughly the same fundamental preferences. I don't think there are
groups of people that have huge differences like that. Now, there might be some that come out
through like socializing or cultural stuff. So like maybe people's opinions on like LGBT issues,
but on really fundamental stuff like should I be allowed to steal from you, rape you, kill you,
you know, attack you without provocation, I think for the most part, almost every human being that is mentally sound will agree on these things, I think.
That may be true on some sort of very basic assumptions, but I think people often overestimate
the moral consensus that humanity has had throughout its history.
Views about whether you're allowed to sort of kill people, I think have evolved.
Our views on sort of a nation's right to expand or ownership of other human beings in situations
of slavery... I don't think it's as obvious as perhaps you make it out to be. In other words, I think there can be these conflicts. But where there are these conflicts, and you say this would essentially devolve into violent conflict, do you think there's a sense in which you can say either side is right in that conflict? I mean, imagining, for example, some version of World War II where the fighting was based on some fundamental value conflict about what you're allowed to do to other human beings, is there a sense in which we can say that one side is correct there? Or do we just have to say that what we're witnessing is might makes right?
I feel like to say one side is correct.
I feel like you would have to be able to ultimately resolve some moral statement down to some truth value.
There would have to be some moral facts to speak of. And I'm kind of... I don't really believe in stuff like that.
So I wouldn't be able to say at the end of the day that like it's right to not want to enslave somebody
or it's right to not want to murder somebody.
I don't know if I believe that any of those moral statements ultimately reduce to some fact of the matter.
So, no, I don't know if I could ever say there's a right or wrong side in any given conflict like that.
There's just the values that I purport to have and hopefully other people around me have them.
And if some people are so incompatible and we can't find common grounds on it,
then at some point it's probably going to come to, yeah, some sort of violent conflict to resolve the difference.
So, actually, hold on, there are so many statements like this getting clipped of me and fucking shipped out on the internet from something like this. So, on the other side of this, the reason why I have an issue with this is because I personally
So on the other side of this, the reason why I have an issue with this is because I personally,
have not found a way to resolve fundamental moral differences between two people.
The problem being the incommensurability of people's moral systems, right? You know, some guy says, well, I read all of Kant, and another guy says, well, I read the entire Bible. Okay, well, it's like you've got a guy speaking in binary and a guy... like, there's just no, you're never, ever, ever going to have any sort of reasonable communication between those two people.
So I think my biggest problem when people try to tell me, what about this?
Don't you think this is right or this is wrong?
when you come to people that have these
like different moral systems built on what they believe
are objectively true statements
because some of those objectively true statements
are like axiomatic to them
I don't know how you resolve that difference
with somebody else who might have some different
fundamentally different axiomatic statements
that their moral system is built off of
that's the big problem that I have
so everything kind of becomes a little bit subjective to me
Have you tried belief in God?
Well, I mean, that works for God-fearing people, but then as soon as you run into somebody who's not, then you kind of, yeah, you have a whole issue there, you know.
That might solve some of your problems with your ability to
rightfully assert your own morality. But I guess you're looking at a situation like World War II
and saying, you know, the classic example, the classic sort of, and it is quite an emotive
point against sort of ethical non-realism is to say, how do we interpret this?
Because you still, I imagine, meaningfully use terms like good and bad in sort of everyday conversation, or in political debates and conversation. You say that this was the wrong thing to do, this was a bad thing, he should have done this, he shouldn't have done that, this kind of thing.
All highly contextual. But then, like, I would also say, I'm a hard-line determinist too, but I'll say, you need to make a better choice. And it's like, well, how do you believe in choice at all? Okay, well, within the context of what we're talking about, I'm using the word choice, right? But same thing with good or bad, like, this is bad behavior, this is good behavior. If somebody would have challenged me, you know, like, oh, well, fundamentally, aren't you a moral subjectivist? I'd probably just shoot him in the face, right? Like, there's no meaningful conversation there.
Yeah.
But yeah, no, like, I understand, I understand. I definitely know there's a desire to speak of, like, moral absolutes. Like, it feels good.
We want to be able to say like slavery is morally absolutely wrong or murdering somebody,
you know, killing somebody without provocation is absolutely morally wrong.
But yeah, without having like a belief in actual reducible moral fact, I don't know how you can make
those statements with that level of certainty.
Do you believe in the concept of moral progress?
Do you think the world is better now than it was 200 years ago?
Like, I believe in moral progress, I guess, but it's a subjective thing. Like, I mean, I'm a citizen of the world today, so I believe that, and I like the morals that exist today, so I believe there's probably been some level of progress. But I don't know, if I had been a person a thousand years ago, maybe I would have felt like this is a moral regression, right?
Yeah, so, in the same way that we can sit here and say, well, you know,
there was a time when people sort of hated homosexuals and thought that women should be confined to the
home, and how crazy is that, look at how much progress we've made... you know, in a close possible universe with the opposite trajectory, or, you know, if things revert in 100, 200 years' time, we could be sat here saying, gosh, I mean, you know, just 100 years ago people thought that women should be in the workplace and that homosexuals should be able to adopt children, how crazy and awful that was, and look how we've progressed. In other words, do you think that we have to, if we have a worldview similar to yours here, just abandon any ability to say that certain states of affairs are better or worse, rather than just saying, like, I prefer it?
I mean, there seems to be a meaningful difference
between me saying,
I prefer a world in which homosexuals aren't persecuted
versus saying something like,
I prefer my showers to be scalding hot
rather than just warm.
You know what I mean?
There seems to be a,
we seem to be talking about a different kind of preference here,
but if it is all just essentially
whatever's going to sort of make me feel better about myself,
there doesn't seem to be much of a difference
between those kinds of preferences, right?
Is that your view that the preference that you have,
that, you know, people not be racist
is similar to the preference that you have
of what temperature your shower should be?
Yeah, I think ultimately at the end of the day,
I think, yeah, I think I would say
it's all roughly the same thing, yeah.
And then I would have to say, in considering... yeah, I can't technically believe... if somebody would ask me, do you believe in moral progress, I'd have to say no,
because progress implies some objective movement
from or towards something else
which implies there is some like moral goal or whatever.
I can only say that like there's been progress insofar as I guess what I would like to see,
but ultimately it would all be contextual and subjective.
So how do you square that with... I mean, I'm not entirely familiar with what you're most interested in right at this moment on a political level, but I imagine there are certain issues that you care deeply about, try to have debates about, and make content about, with a view to, you know, mobilizing people into believing something that you believe, saying things like, this is wrong, this is bad, this is good, this is what you should do. How do you square that sort of commonsensical language on the surface level, when talking
about how to behave with the fact that a little bit of digging reveals that what you're
essentially talking about is your favorite temperature of shower?
I think that the thing that I tried to appeal to is that even if these things are ultimately
subjective and even if I can't ground them out in some like resolute concrete morality,
these preferences are real. They're strong and they're shared by almost everybody.
So when I start to argue with people, even my rhetorical strategies are going to be pretty
similar. If I'm arguing trans issues, for instance, with a conservative, I'm not usually going
to argue that like, hey, don't you think it'd be good if we allowed, you know, every single
type of person in society to have access to the same rights, because then you're in nebulous negative-versus-positive-rights territory, you're in some weird world. I usually do like a direct appeal to,
hey, you're a parent.
If your child had a medical problem
and you had a conversation
with your child and your doctor,
would you want the government
getting in between you and your doctor and making that decision for your child?
Right?
So even rhetorically,
I try ultimately at the end of the day
to appeal to people's
like individual preferences
and to try to demonstrate... Like, when I'm arguing with somebody and I say, I think what you're doing is wrong, that's the verbiage that comes out. But what I'm really saying, and what we'll get at in the conversation, is, hey, the thing that you're advocating for, I don't think you realize it's ultimately contrary to your own self-interest.
That's essentially what I end up arguing with people, yeah.
That's more or less the approach that I've taken as well.
You're essentially running a consistency test
rather than trying to establish some base moral principles
in which you sort of build up to truth.
The thing I think is important to realize is that the only reason why that consistency test works is because we have some shared, I hate that I lean into this so much, but some shared, like, moral intuition, right? Which is a phrase that I hated three years ago, because I was like, we're going to find the ultimate moral fact. But now I lean strongly
on these moral intuitions. But this system
breaks when you, if you run into the occasional
fucking psychopath, right? If you start running into somebody who is like,
I don't care if somebody tries to kill me. I actually would want
to live in a world. We all fight to the death. And it's like, okay,
well, you're insane and there is no arguing with you.
I can't appeal to your preferences because your preferences are
so out of line with, you know, 99% of humanity.
But yeah, that's essentially my rhetorical strategy.
In which case, might makes right?
Essentially, yeah.
With a genuine value conflict,
it's just whoever's got the bigger guns wins out.
And that's kind of how it, I don't want to say, should be, because that's to import moral language, but it's not how it shouldn't be.
Yeah, I mean, it's good. I think that's a good thing, because ideally... oh man, if I say, like, the conservation of angular momentum, does that mean anything to you?
Yeah, sure.
Okay, like, when galaxies are forming, they have an average spin to them. And what happens is, after hundreds of millions of years, all the particles that aren't spinning in a certain way collide, and you get this thing that's moving in kind of one consistent direction. Your angular momentum is, on average, some direction for a spinning cloud of stuff.
I feel like for humans, that same concept has to apply, that if you had 100 civilizations and 50 of them were kind of like what I'm saying,
where it's like, listen, we're all going to work together, we're going to be self-interested, we're going to do things.
And then you've got, like, 50 that are crazy, some that are like, we're going to kill people that we don't like, and blah, blah, blah. Eventually those types of thoughts get weeded out,
and then you're just left with people
that are like, okay, well, we might disagree,
but it's in my best interest to respect your rights
so that you respect my rights and et cetera, et cetera.
I think it's okay to not be tolerant of people
that are so morally out of line with you
that they would be incompatible, you know,
like a free and open society, basically.
And to be clear, that's not because you think
that they're sort of wrong or corrupt,
but just because they disagree with you.
that you're therefore justified in sort of asserting your might, yeah, and imprisoning or potentially killing them, just because they like a different flavor of ice cream to you?
Well, hold on, no. When I say we disagree, what I'm talking about is disagreements on fundamental
moral preferences, right? So say you've got four religious groups, and three religious groups are like, we believe in our God, we think that it's wrong to believe in other gods, but if you worship another
God, that's between you and whatever. That's for you to figure out. Let's say there's a fourth
religious group that says, our God is the only God and we're going to kill you if you don't
believe that. That fourth religious group would have a deeply fundamentally incompatible view of
the world as the other three. And I think it would be okay for the other three to be like, well,
listen, we can all coexist in harmony with each other, even if we have disagreements and you can't,
so you have to go. So the fundamental disagreements I'm talking about are ones related to
like basically huge infringements of the rights of other people, killing people, stealing from people, stuff
like that.
But it would be just as okay for that fourth religious group to try to eliminate the other three.
Yes.
Are you sort of troubled by that implication of your worldview? That if you have this world religion that just says, you know what, no other god but God, and we're going to kill anybody who disagrees, and they just start going ahead and doing that, the only recourse you have is not to say you shouldn't do that, is not to appeal to any kind of moral principle, but just to say, I hope that we've got a better military. And that's the end of it.
Yeah. So, I mean, on a couple levels. So one, yeah, I fully believe that two sides of a conflict
could be fully justified in destroying each other. This comes up interestingly sometimes in
some self-defense things where, let's say two people are having a conflict and somebody enters a
bar and they're trying to figure out, you know, what's going on. And somebody says, oh, John over there
is trying to kill everybody, you know, help. And John isn't trying to kill everybody. And so
a guy that walks in, who heard there was a violent conflict, sees John, or gets it reported to him that John is trying to kill everybody, and he might try to stop John from doing it. John might try to defend himself,
and arguably both people have good justification
for their actions, which is unsatisfying because you
want to say there's one right and one wrong, but I
think that that's the, I think that's the reality
of the world. When we say, and then when you use the term
like, do you want to appeal to some moral right
or wrong, I just don't care too much
because I don't know if it matters, right? Like, I could be the most
morally justified correct person in the world,
but if the other guy has the means and capability
to destroy me, that those moral statements mean
nothing. And my moral thought will disappear with me as soon as they basically overrun me,
you know? Yeah, you said that sometimes both sides and a conflict are justified in essentially
wiping each other out. Isn't in situations of basic value conflict? That always the case?
It's always the case that both sides are equally justified and just completely, when it comes to
it, wiping each other out just because they have what is a fundamental disagreement. But when I said,
you know, which flavor ice cream, what I mean to say is that it's of the same nature. It might be
sort of more fundamental on the sort of hierarchy of their beliefs, but it's got just as much
import as saying which flavor of ice cream you prefer. You just have this preference. Somebody else has
a different preference. Yeah, but it would be the difference of the imposition of that value
on other people, right? So if you've got a society where you're saying we can only eat
vanilla ice cream, then, you know, maybe people have to die for that. But if you've got a society
where you say, I wish we would eat vanilla ice cream, but people are free to eat other flavors
of ice cream, right? It's that how much do you impose your values on other people is where
you run into these fundamental value conflicts, right?
Yeah, I mean, in that circumstance, you know, you've got one group that says,
if somebody tries to eat a different flavor ice cream, we're going to kill you.
Yeah.
Another group says, well, I don't know if I'm too happy about that.
So they get killed.
And what's the problem?
Well, I mean, it depends when you say problem.
There is no problem.
The people that kill the people obviously accomplished their end.
So it's good for them.
Now, the people that got killed, it's bad for them.
But I don't see what the value, in any part of this, is of appealing to some moral principle. It reminds me of the comic where there's a guy standing at the gates of heaven, and it's a biker, and he's holding on to his bike, and God's like, well, what happened? And the biker's like, I had the right of way. I don't know. It's like, it doesn't really matter if you had the right of way if the guy ran you over, you know?
Yeah, I just, I don't see any of the value in appealing to moral principles when it comes to
like conflict. Yeah. Do you see a danger that this kind of line of thought could lead to some
form of social Darwinism? Very popularly, after the theory of evolution became accepted science, or I should say evolution by natural selection, people like to point out that some people take
this to say, well, look, we're just animals, we just have different values, we just conflict with
each other, sometimes people are stronger than others, you know, and that's just the way it is.
You know, if we decide that we're going to sort of embark on a eugenics campaign to remove
disability, and we're going to start by killing everybody who's disabled right now, because we
believe that, you know, human beings are supposed to be the most sort of evolutionarily
proficient versions of themselves. That's what we're going to do. This line of thought seems
to not just sort of lead to that, but justify that line of thought.
Yeah, but if you're fighting back against a group of people that believed that, like, you'd need something more than just saying this is morally wrong, I think, right? And I think, even with the advertisements, I think intuitively most people would even agree, even if they wouldn't
say so, because the advertisements wouldn't just say, like, this is wrong, stop it. It would probably be showing videos of mentally disabled people with families next to them, of them doing jobs in society. It would be an attempt to appeal, whether you like it or not, to these kind of fundamental preferences, right? Like, look, first of all, they look like humans, and you are a human, right? You know, even animals we like because they have human-like features sometimes, and we anthropomorphize them. And, you know, you see mentally disabled people, that doesn't mean that they're worthless, it doesn't mean that they don't have value to family and friends, it doesn't mean they can't function in society. Like, ultimately, the campaign to argue against the eugenicists would end up being an appeal to their
preferences, you know, rather than just saying like, this is right or this is wrong. Yeah, and like
you said earlier, that relies on the idea that there's some, that basically most of the time
when we have what seems to be a moral disagreement, there's actually a lower resolution of
thought in which we have a shared, to use the phrase, moral intuition.
Yeah, I would agree exactly that, like... a lot of the disagreements we have... so I would almost argue that there is no such thing as moral disagreement among humans, except for the case of, like, mental illness. A lot of the moral disagreements we have end up being these second- or third-order thoughts that are more socially influenced. So for instance, like, how we feel about LGBT people, right?
Yeah, it's not like a basic value conflict if one person is pro-LGBT and one person is not. There's going to be something more fundamental.
Yeah, both of them
fundamentally want like what's best for society and what's best for their family. They just think
about it a little bit differently, right? Or even like, you know,
freedom for human beings when you might have like a religious conception of freedom that
means freedom from sin or something. But you ultimately want freedom for human beings and
the value is agreed upon.
Yeah.
But I wonder if there's a circularity lurking in what you just said. You said that, you know, everybody shares these moral intuitions most fundamentally,
except in the cases of mental illness. Would you be defining cases of mental illness there
by their disagreement with the moral principles? I mean, some people might say, for example,
everybody agrees that it's wrong to sort of torture babies for fun. And then we say, well, some people disagree with that, but they're mentally ill. And we say, well, what mental illness do they have? Well, they're psychopaths. But the reason that we say they're psychopaths is because they don't share our intuition that torturing babies for fun is wrong, if you see what I'm saying. So you can't just say something like, well, everybody agrees except for people who are mentally ill, if you're just defining people who don't agree
as mentally ill people.
You've just got the same problem
that some people don't agree.
Well, but I mean, you could, right?
I think you could make that statement, no?
But what grounds do you have to say
that they're mentally ill?
Well, because they,
well, I guess it wouldn't be mentally ill.
Maybe you would just redefine
this form of mental illness as like,
I guess, like, morally incompatible
or something with, like, current society.
Sure.
The reason why, I guess, the reason why I say mentally ill
is because I feel like fundamentally,
most humans have the shared agreement on things,
and then in order to diverge from this,
at a fundamental level,
you're essentially, it sounds bad to say,
but you're almost like not human
in the way that you view things.
Like if you would extrapolate
some types of self-destructive
or societally destructive behaviors
on a fundamental level to everybody else,
like humanity would basically,
would essentially collapse.
If you had a bunch of people that were like,
I want to kill everybody around me,
I want to rape everybody around me,
where like society would necessarily
devolve into absurdity
and disappear pretty quick, I think, yeah.
So they're mentally ill because they're in a minority
in terms of their preferences?
Maybe.
Well, I'm not defining all mental illness this way.
The only reason why I'm making that carve out
for mental illness is because people that are mentally ill
can have psychotic breaks from reality,
so they're not even interpreting reality
in some fundamental level,
or they can have, like, really fundamental parts
of their mind that are, like, manipulative
or just not good for society.
So, like, say somebody that is a psychopath, or somebody with extreme narcissism, people that plug into society in a very exploitative way, essentially, yeah.
Yeah, but without recourse to objective ethics, what you're essentially saying is that their brains are different.
For example, if we took every single narcissist in the world, and we said, well, narcissism is a form of mental disorder.
Because, you know, most human brains work in a particular way, but these people are particularly exploitative and lack empathy.
so they've got this mental disorder.
And we put them all in a bomb shelter
and then we nuke the rest of the world
and the only people who are living
is me and you, Stephen,
and all of these narcissists,
guess what?
Suddenly we're the ones
who are mentally ill
because we're in a minority.
That seems like a bad way
to justify our conception
of what makes a person mentally sound
or mentally ill.
We're essentially saying
that whatever the majority is is wellness.
Yeah, I mean, is it bad?
Have you ever read the book
or heard of the book, I Am Legend?
No.
I've heard of it.
Oh, that's essentially the plot of that book.
There's a guy that goes around hunting vampires all day,
and then eventually they trick him, they infiltrate his stuff,
and then he realizes at the end of the book,
when they're bringing him to account for his crimes,
he's in front of, like, a civilization of these night-dwelling people, and then he realizes, like, oh, well, you know, to me they're all monsters, but I guess to them, I am the monster,
and it's him coming to that realization at the end.
Yeah, I mean, what you're saying,
a lot of the objections that I hear from you,
and I agree with all of these,
is that sometimes a lot of the things that I'm saying
are emotionally unsatisfying.
Like, I think we want to have a righteous conviction
to say, that's wrong, don't do that.
And I feel that emotionally,
and I mean, even I want that emotionally,
but one, I don't see how logically I can ever deduce that.
And two, I don't see how it even really matters that much.
Because, like you said, say we eliminate society,
and it's just you, me,
and then like 20 of the most intolerant people we know, you know, if they come for us with axes
and we're like, well, hold on, stop, like, look at this 42 point syllogism I have to show you
why you're actually morally incorrect. Like they're like, okay, well, I don't care. And then they would
just kill you. And it's like, oh, well, I guess my moral authority here didn't matter much, you know.
Yeah, yeah, I understand that. I mean, to be clear, in this particular conversation, it's because you were making essentially a descriptive claim that most people share the same basic fundamental values. But you said that there's an exception. And again, no moral element here, but you just said that there is an exception for people who are
mentally ill. And I'm saying, I guess I'm saying, let's be careful not to be circular in saying
that we're just going to call anybody who disagrees mentally ill so that we can say, well,
everybody agrees, except for those who are mentally ill. That would be a bit like me sort of,
you know what I mean?
Like, some mentally ill people I disagree with, but not everybody I disagree with would be mentally ill. There are probably going to be very mentally sound people that you still have very fundamental disagreements with, right? Like, in cases of nationalistic or religious conflict, these people might not be mentally ill at all, but, like, second-order facts have caused
them to take a position against you. And the only resolution is through some military conflict
or something, right? Yeah. Although arguably, this takes us back to where we started and saying that
even the sort of the nationalist and the globalist and the religious zealot and the atheist,
like, they still ultimately are motivated by something like, you know, maximization of their own well-being or positive experience or something like this,
in which case we could say that there's some basic moral intuition that most people share.
And the reason I wanted to ask you about that was because so much of what we think about
the world is based essentially on unprovable intuition.
Yeah.
For example, the existence of the external world, the existence of other minds.
Are you familiar with the problem of induction, for example?
The fact that we can't, we have no reason to say that the laws of physics are not going to stop working in five seconds' time. Yeah, the sun's not necessarily going to come up tomorrow
or whatever, right? Yeah. Exactly. And the reason it's called the problem of induction is because
it's essentially unsolvable. It might be solvable with theism. That's kind of another discussion,
but it's not a solvable problem. But we can recognize: I have absolutely no way to justify
objectively to somebody who disagrees with me that the external world exists or that the sun will rise
tomorrow or that other minds exist, any of this stuff. But we still believe it, right? I presume
you still believe these things.
Yeah.
And so if you're willing to allow
unprovable intuitions
in the case of general epistemology
to allow you to say,
well, I think it's objectively true
that the earth orbits the sun,
even though that's based on sort of observations
that rest on unprovable intuitions,
it's just an intuition that the external world exists
and that your sense data is accurate,
but you're just going to say,
yeah, but I'm just going to trust it
and say that because using that intuition,
I see that the earth orbits the sun,
I'm going to say it's objectively true
that the earth orbits the sun.
why can't we just do the same thing for ethics in saying that, yeah, we have this unprovable intuition that sort of my well-being is good for me or, like, maximization of positive experience is a good thing. And through that unprovable intuition, I see that murder is wrong. And so I'm just going to say that murder is objectively wrong. It's as true to say that murder is wrong as it is to say that the earth orbits the sun. If you're going to dismiss one because there's no way to sort of resolve the fundamental conflict with someone who disagrees with you, that's true of, like, all epistemology. Yeah.
Okay, so maybe you can solve this one for me.
Do you know who Perspective Philosophy is?
Oh, yeah, yeah.
Okay, I think he's tried to get me on this argument before,
and we have trouble getting much farther either because my mind's not equipped
or because I'm just so correct and I don't realize it, okay?
So, yeah, maybe you can help me sort this out.
So first let me summarize your argument so you can tell me if I understand this correctly.
So I'm saying, well, ultimately, there is no such thing as moral fact.
We can't prove that moral fact, so it's silly to pretend like you can make objectively true moral statements.
And then you'll say, okay, sure, that might be the case.
However, if we look at things like the problem of induction, or if we look at even the
manifestation of other minds in the world, you can't prove that any of that exists.
However, you don't go through life being, like, an epistemic agnostic or anti-realist,
or you don't go through life assuming that, you know, tomorrow the planet's going to explode,
like you go through life assuming these things are objectively true.
So why would you grant one for, like, metaphysics or epistemology, but you wouldn't do the same for ethics? Is that essentially kind of the question? And to be clear, it's not
just, I think it's a little bit stronger. It's not just that you don't go around, like, acting, you know, as if these things aren't true. You sort of act as if
the external world exists. You act as if other minds are true. That's true. But I think it's
stronger that you believe that it's true that other minds exist as well. It's not just that you
act as if that's the case. You believe, I'm a real person talking to you right now. And that is based
upon, let's say, if I were to run this argument, you might say that is based on as justifiable
an intuition as any moral intuition that you could think of. Okay. So we'll see if you get me over
this hurdle. So my big issue when it comes to things like these, there's a lot of stuff we'll
argue about, and then it'll boil down to this one point. I feel like it's resolving disagreements
between people that point me in a direction of something that I'm more comfortable standing on
solid ground on. So, for instance, when it comes to, like, existence of external universe or
something, there are, I can talk to a lot of other people, and we can have agreements and
disagreements on these things, but ultimately, it seems like, like, if there are four people
standing in a circle, and three people are saying, I think that the earth is round, and another
guy saying, I think the earth is flat, we can run a battery of tests, we can take in all the
sense data, and eventually our minds can come to some agreement because of our ability to interact
with the external world and say, well, look, actually, we are correct, right? And the world is
spherical. And the fourth guy, you were just wrong. You can think what you want, but you're wrong.
However, four people were to stand in a circle and three were to say, murder is wrong. And the other guy
would say, well, I think murder is okay. It feels like there's no possible thing that you could
appeal to or look at or ever have a discussion about to resolve that disagreement. I don't know how
you would do it. Yeah. I think that doesn't work.
Okay, tell me why. I think because in this, in the situation that people disagree about
something like the shape of the earth or that the earth exists, let's say. This might be based upon
some fundamental conflict. Let's say that I said to you, like I don't believe that the moon exists,
right? And you sort of said, but look, I mean, can't you see it? Like, can't you, like haven't you
heard, haven't you seen like the photographs? Can't you see it at night? And I say, well,
actually, yes, I can, but you're not getting me. What I'm saying is I have a more fundamental
skepticism that my sense data is accurate. I think we're living in a simulation or something like
that. Now, if we disagree that the moon exists, there is no test you can run to disprove that
so long as that's what my belief is based upon. Because you could say, well, look, let's get
a telescope. Let's look in the telescope. And I'll look in the telescope and I'll be like,
yeah, I see the same thing as you. There it is. There's the moon. There's the Sea of Tranquility.
But I still don't believe it's there because more fundamentally, I don't think my sense data is
accurate. I don't think there's an external world, right? In the same way, if you have moral
disagreements, let's say you're pro-gay marriage and someone else is anti-gay marriage. And you say,
right, no, we can sort this out. Because look, look at this study that shows that a society that accepts gay marriage is on the whole happier than one that doesn't. And look at this, this, this, the fact that when people aren't allowed to get married, they fall into depression, the suicide rate goes up. You could look at all these sorts of studies and statistics and things. But that doesn't get you any closer to solving the problem because, of course, the fundamental value conflict is one that is insensitive to empirical inquiry. In the case of the basic epistemic intuitions that make up things like belief in the external
world or other minds, they're of the same quality. If you've got four people in a room and they all
agree that sense data is accurate, but one thinks the earth is flat, then yes, based on that fundamental
agreement that sense data is accurate, you could draw a consistency test. But if those four people
in the room, if one of them doesn't believe that the external world exists at all and believes
that they're just a brain in a vat, there is no test you can run. You said there's a battery of tests. There's not one test you could run to disprove that.
So, okay.
Let me just try this time so I'm keeping track of these.
So here, okay, so here are a couple issues.
So I think we both agree that if somebody believes in something super crazy that can be resolved
with sense data, at some point the person will just be wrong and we can safely discard their
opinions about the earth being flat or round, unless we get at a very, very, very fundamental
level about, like, brain-in-a-vat stuff, correct?
Yeah, so the guy's saying, well, actually, I don't think the moon is real.
You're like, well, look at all these tests.
And the guy's like, well, of course you think that.
The Matrix is telling you to think that.
Like at that point, yeah.
So here is a level where, oh, okay, I'm going to pull out the two most disgusting words
ever, okay?
Have you ever heard of, I'm sure you've heard of the phrase like ultimate skepticism, right?
Oh, yeah, yeah, yeah.
I was half concerned it was going to be the N word or something then, but.
Oh, no, no, it's even worse in philosophy than the other word, okay? Because as soon as somebody's pulled this trap card out,
you're basically, the whole conversation is meaningless. I would say that at a very, very, very,
very, very fundamental level, that I would say that I am ultimately skeptical. So the strength
of the conviction of my statements is only going to be up to a certain point. So if I say, for instance,
and this is kind of similar to the context we ran into earlier where it's like, oh, that's a good
or that's a bad thing. And you're like, well, you don't really believe that. It's like, well,
I guess that's true. When I say good or bad, I mean, with respect to the subjective moral system
I have blah, blah, blah, blah. If we were to say something like the earth is flat or there are
other minds that exist, I would say that like all of these statements are also contextually
qualified within the realm of these are the things that I can know to be true, but I'm only
going to go like to a certain level of depth with that statement. So if somebody says, well,
I'm a brain in a vat. What I would say is, oh, okay, I guess you could be and I could be too, but whatever brain in a vat you are disagrees with whatever brain in a vat me and
everybody else is. So that's irrelevant. Like I would never get to an argument that's like so
fundamental that we have to debate whether or not we like actually exist because that would be
a thing where I don't know if I can actually justify that or prove that. I would act as though I do, much the same as I act as though there are things that are right or wrong, but I think ultimately
that's going to rest on like some subjective axiom that I can't like fully truly prove unless you've got
something for me. That's no problem. But I mean, there might be problems in that, like,
if ultimately you actually like in fact did not believe that the external world existed. That's not to say
you believe it doesn't, but say you were actually indifferent or you really had no reason to know
whether induction was true. Yeah. If you actually believe that, it probably would have an effect
on the way that you behave. But putting that aside, I see what you're saying. But why is it that
we're treating these cases differently? Right? Because you would still use, if somebody said, like,
do you think that propositions have truth value? Do you think it's possible for a statement like
the Earth orbits the Sun to be true or false? You'd probably say yes. And you'd probably say,
actually, I think it is true.
And you wouldn't feel the need to sort of issue a throat-clearing that said,
well, actually, I'm kind of an anti-realist about facts.
You would just sort of say, okay, technically, yeah, we could be like massive skeptics
about this whole thing.
But nobody really is.
Everybody agrees that this is the case.
And so it's objectively true that the Earth orbits the sun.
Why can't we just treat morality the same?
I'm happy for you to say, like, yeah, sure, if we dug down deep enough and started
questioning our basic moral intuitions, there's no way for me to prove that.
but I'm never going to get into a conversation that goes that deep.
And in, you know, the reality of my life,
I'm just happy to say, yeah, it's objectively true that murder is wrong
because it's based on the same kind of intuitions
that allow us to escape ultimate skepticism in other contexts as well.
Yeah, I guess I just, I feel like the issue is that ultimately there's like,
there's zero sense data for morality.
Like when we talk about, like, even, like, propositional statements,
even like prop logic or even when we talk about math, right?
Like I can arguably, I think, or you can tell me if I'm wrong,
I can actually use sense data a little bit for math.
If I take a mathematical truth, like 1 plus 1 equals 2,
I can actually have one and one things
and then put them together and see two.
But I feel like there's no sense data anywhere
to resolve any sort of moral disagreement.
I'm going back to that, yeah.
You can tell me if I'm wrong on this.
Okay.
First thing I would say is that I think there would probably
be some mathematical truths
that are insensitive to sense data.
For example, minus one plus minus one is minus two.
I don't think there's really...
I mean, there might be a way,
you can sort of map that onto your senses.
But I don't think you can sort of observe.
Because of how tautologically math is built,
I think that most of it,
because if we talk about negative one plus negative one equals negative two,
if I have a foundation to understand what two is,
I can have a foundation to understand what negative one is
because it's taking one less
and then I can imagine negatives as being the flip side.
At the end of the day,
there's going to be some sort of,
like, I can map that on.
What's, like, the argument against mathematical anti-realism?
It's like math has an uncanny ability
to line up with reality or something like that.
Like, I can generate these things, yeah.
But I understand that, like, negative one and negative one,
but you can argue it's a little bit harder.
But when I say something like murder is wrong,
like your mind is blank.
There's nothing that you can think.
Like, what does wrongness even look like, you know?
But I think what you're doing is you're assuming
that the only kind of evidence,
the only kind of, let's say, experiential evidence
that could count in favor of something
is something like scientific empirical data,
that is, the things that you can see and hear,
not things that you can feel just as strongly and intuitively. Like, in the same way that when you look at something, it just sort of strikes your eye, you don't sort of choose to see the screen that you're looking at right now, it just sort of appears in your brain. In the same way, like, if you have a moral intuition about something, you don't sort of choose to feel it, it just strikes you. And there's, I guess technically, I guess it strikes a part of the brain. People would poetically describe it as, you know, striking the soul, in the way that the screen strikes the eye.
But ultimately, what both are doing
are just making a bit of your brain sort of go zing
and you believe something.
In the one case, you know, through your eyes,
you know, you see a computer screen,
but what's actually causing the experience
is something prods your brain
that makes you go, oh, I believe that the screen's there.
The same way, you see a homeless man getting trodden on on the street
and something in your brain just going,
I believe that that's wrong.
Yeah, so I guess my question would have to be that, like,
or here's a question that I would ask.
It feels like if I give you four stimuli, stimuli or stimulus?
Stimuli is the plural of stimulus.
Stimuli, I would do it.
Stimuli, four stimuli, okay?
If I give you four things to look at,
one thing is a blue circle,
another thing is a planet orbiting another planet,
and another thing is a car,
and then another thing is one person hitting somebody else.
If I ask you to explain all four things, the first three feel like they're fundamentally in a different category than the fourth one.
If I'm starting to get to statements like, this is somebody hurting somebody, or somebody's doing something wrong, I guess.
Like the descriptions of the descriptive things of reality, like a blue circle, you know, it looks blue in my eye and it's got this shape or a tree looks like this or a car ontologically has, you know, four wheels, blah, blah, blah, versus like this is a thing that's going on.
and it's wrong, like, to even be able to say that, there's already, like, a lot of things
that are already being processed, you know, in a person's mind. Like, for instance, if they're
wearing a certain outfit, it might actually not be wrong, it's like a sexual fetish now, you know, and the relationship between the things... like, there's so much more processing there
than, like, what would be, like, a blue circle or a planet orbiting something, or one and one
equals two, I think. Yeah, go ahead. Yeah, but, I mean, to be clear, it's a different kind of
intuition. It would be, I mean, what you're raising, there's a philosopher called J.L. Mackie, who famously raised what he called the queerness objection, that if moral properties were to
exist in the universe, they'd be so, he uses the word queer, so sort of unimaginably different
from everything else we interact with in the universe, that I wouldn't even know how to make sense
of it. And people in response tend to sort of say, well, yeah, but that's kind of what ethics is.
It is this sui generis, you know, totally unique thing.
And yeah, there might be some skepticism in saying, well, if it's this totally unique thing, then how can we even really know anything about it? How can we interact with it? But as long as you say that most people share a basic moral intuition, you're just granting the fact that people do interact with it. They do have that experiential phenomenon of feeling the moral quality.
Sure, but I don't think I technically need... descriptively, I don't have to describe that moral quality as anything different than what you said earlier, the shower thing, right?
Like, people will avoid stepping into a cold shower because it makes them feel bad.
People will avoid socializing with people that hurt others because it makes them feel bad.
Like, arguably, descriptively, I think I can generally generate all of these statements without
even needing to invoke ethics or morality.
I could just do it with the preference thing, right?
Yeah.
Okay, so try this then.
Let's say I want to say that sort of morality is objective, and I say that's because, or even, like, preferences can be objective, and I say that's because, you know, when I step into a shower and it's too hot, it hurts. And you say, well, who cares if it hurts? And I say, well, if something hurts, that's bad for me. And you say, well, can't you just, yeah, you know, that could be false, right? And I just say, I just can't imagine what it means for something to hurt me without thinking that it's just intuitively the case that imagining something harming me is me imagining something bad at the same time. And you say, well, that's just, that's just an intuition. And I'm like, yeah, but my brain just sort of does it. I can't help it. And then I say, okay, so we both see a,
we see a guitar sat behind you, a Fender Stratocaster I'm imagining. And I say, you know, I think
that exists. And you say, it's the other way around. You say that guitar exists. I say it
doesn't exist. And you say, but I can see it. It's right there. And I said, okay, yeah, it does
exist, but it also doesn't exist. It exists and doesn't exist. So we're both right. Would you be
okay with that, would you just grant that it doesn't exist?
Because I'm saying, yeah, I don't disagree with you.
It does exist. It just also doesn't exist.
No, I would probably fight you on that.
Why is that?
Because I can go and touch it and interact with it.
And insofar as anything I've ever thought of is existing.
Of course, again, because it does exist.
Of course you can do that thing.
It just also doesn't exist at the same time.
Wait, I understand, at a very, very, very fundamental level.
Like, if you were to say, like, we're in the matrix and it doesn't exist.
Oh, on that level, I could say, okay, that could be true, but I'm agnostic.
No, no, I mean, it literally exists and doesn't exist at the same time.
Oh, it's both true and false at the same time, and then it doesn't exist? Wait, how? Or, no, tell me what you mean by that.
Right, so notice how your brain just goes, no, that can't be. It's a law, it's what would be described as a law of logic, right? The law of non-contradiction. We just accept these axiological laws, laws like the law of the excluded middle: something must be true or false, it can't be both, it can't be neither. Propositions...
Law of identity.
Law of non-contradiction, law of identity.
What I've just done there is I've just sort of said to you like, well, sure, I agree with
you that the guitar exists, but it also doesn't exist. And you say, well, I can't believe that
because of this, because of this principle that I have, the law of non-contradiction.
When I say, why, why not? Like, why can't that be the case? Your brain just sort of goes...
Sure, I would say there are certain logical properties. It just can't be the case.
Yeah, that we're literally granted a priori in our brains. But all that essentially is, is just this really strong intuition. You know, you can't really explain it, you can't, like, justify it. You just say, look, dude, like, are you telling me you don't feel that? Like, just pay attention to your brain. Like, of course it can't be true and false at the same time. And I feel like that's something like what's going on with, like, some kind of base moral intuitions. Maybe not quite as strongly, but there is an analogy that can be drawn here, right?
I think I would really fight on this. I think I would totally disagree with that. I would argue that, I think, the three... ah, here, oh, God, there was a book I read, Bernard somebody? I don't even remember. But, like, when you talk about, like, non-contradiction, excluded middle, law of
identity, these are things where there is no
room for disagreement on. Nobody can disagree with them.
Like, arguably, your mind is not even human at that point.
It's almost unfathomable to think that somebody could
disagree on them. For
some of these, like, a priori truths that I think are granted to our brains by virtue of being human, I would argue that those types of... I don't even know if I would call them intuitions. Maybe that's what we call them, intuitions. I would say that these are far stronger, or far different, than moral ones, which are, as crazy as it is,
like we can bend some of them, right?
Like, there might be, in horrible situations,
there might be some people that think that rape is okay
or that murder is okay or that stealing is okay,
but you'll never be able to convince somebody out of identity
or convince somebody of contradiction.
Like, that's just, like, unfathomable.
So I would argue that these types of intuitions are different. But go ahead.
In the same way, you wouldn't be able to convince somebody
that their suffering isn't a bad experience for them.
Yeah, I would agree with that, but I think I can describe all of what you just said with
preferences. I don't need to invoke morality, right? Like, it is a, like, the way that the shower
hurts you when you get in, like, that sensation of pain, you might also get, like, a sensation
of pain when you witness a certain thing that makes you feel a certain way. But I don't think
I need to- I guess I was using the word, the word bad there in a moral sense. Like, somebody
sort of has a feeling that that pain is bad for them. It's something that sort of, it makes the world a way that it should not be, that there's a way that the world should be that it isn't right now. Um, I guess there's a question of, like, yeah, are those technically moral statements or not? Or is it like a moral, um, ought? Like, I ought to get out of the shower because it's too hot. Yeah, yeah, you can sort of have descriptive, or, like, moral descriptivism generally. But I guess what I'm trying to do here is show that even if I grant you that
these intuitions are a lot stronger. And there are some senses in which some people can deny
certain tenets of what are generally accepted as logical laws. Like the law of the excluded
middle, a proposition has to be true or false. It can't be both. It can't be neither. People
often ask, like, take the proposition that the king of France has brown hair. Like, is that true
or false? And people say, well, it's kind of neither, right? Because there is no referent for
the king of France, because there is no king of France. And so you want to say it's false, but it doesn't seem quite right to say that it's false that the king of
France has brown hair. There seems to be a sense in which that that kind of breaks down.
Also, there are some people who might want to say that there are certain contexts in which
you might want to speak of things being true and false at the same time. You might fall into the trap
of saying, like, oh, it can be true and false at the same time that it's raining because it's
raining in one place, but not in another. But the proposition would have to be, it's, you know,
it's raining at this particular place and it's also not raining at the same time. There are
like, there are interpretations of logic that say that these aren't actually sort of as hard and
fast as people tend to think. Now, what we might say is that, yeah, sure, there are people
who just sort of deny that logical laws are the case. And I've met skeptics who, when you
push somebody's epistemological nihilism to its core, and you say, well, how do you even
know that the laws of logic are the case? They sort of say, well, in fairness, I guess I can't
know that. I guess in theory, I actually can't prove the laws of logic. Fine. We just sort of say,
look, I mean, there are people who do that, but that's such a sort of wacky minority view.
Like, can we just sort of agree that this really strong intuition is a good reason to base our
epistemology on it?
Same thing could be true of ethics.
Maybe not quite as strongly, but you might say that there are some basic moral intuitions that,
yeah, some people sort of doubt or disagree with, but there's such a minority, such a wacky
position that, you know, we can at least build our moral epistemology upon those intuitions.
I guess we could, but like, at some point I'm probably going to agree with you, but then, I guess I would argue that I feel like your position has weakened to become mine. So, like, if you were to say, well, hold on,
we have very strong intuitions relating to the three, our three, like, fundamental laws of
logic. And I'll go, okay, sure, you know, like, can we have, like, really strong fundamental
intuitions about, like, what's morally right or wrong? At some point, I'll say, like, you know,
in the same way that we prefer things to not be contradictory or to have an identity or to either
be true or false. Yeah, we can probably have really strong preferences over like what's right
or wrong. But I don't know if that gets us any closer to saying that like morality or moral
fact exists, right, or that like ethics are some real thing. I feel like it's just basically
become another way of rephrasing that like, yeah, we all have like certain preferences in life.
Like we might have a really strong inclination towards identity or non-contradiction, much the same as we probably have really strong inclinations about things like murder or torture. But it doesn't
necessarily mean that the moral fact is there. What it might mean is that we can
say something like, if there is such a thing as truth, or I should say accessible truth, because really this is an epistemological problem rather than an ontological one, by which I mean we're talking about how we might sort of come to know moral truths, if we can constantly keep questioning our assumptions. And we could say that, like, okay, we can't say that anything is true, and that's the problem of universal skepticism, that you can't really ground an epistemological worldview without pulling yourself up by your bootstraps. But I could say, the way I phrased it earlier, me saying that murder is wrong is as true or sort of as objective as the fact that the Earth orbits the sun. Maybe both of them are ultimately sort of based on totally unknowable intuitions. But in the same way that you're willing to just say, in the context of general epistemology, yeah, okay, technically, sure, but come on, man, the earth obviously orbits the sun and that's objectively true. Why aren't we willing to do the same thing in the ethical
framework of saying, oh, yeah, okay, technically it's based upon an intuition that you can't prove,
but come on, man, obviously torturing babies for fun is objectively wrong.
Yeah, I guess it, I'm trying to think if I have like a psychological hump that I just can't
get over. Because the first thing I want to say is that like we have sense data to resolve,
we're looping now. We have sense data to resolve the thing about the earth being
round or not. But then I think you want to say, okay, well, we kind of have sense data in a way
we can sense like morally, moral intuitions of something being right or wrong.
Not quite. What I mean to say is that, like, the sort of earth-orbiting-the-sun thing, the intuition it's based upon is not that the earth orbits the sun. The intuition is that your
sense data is accurate. Yeah. That's the intuition, right? That you just have absolutely no
evidence for or against. I guess I feel like the difference in the two propositions
between, like, the ethical one and the earth-orbiting one, the physical one, I guess. It's like, it's what it feels like to me: there's a room, and behind the door, I have really no idea what's behind the door.
And then two people come up and one guy's like,
I think that, I heard a little bit of noise.
I think there might be like a person behind the door.
And then the other guy is like, okay, well,
I think there might be a beluga whale behind the door.
And I would look at the guy that says
there's a whale behind the door and it's like, probably not.
And then he'd say, well, if you think there could be a person behind the door, why not a whale?
And it's like, well, I feel like I have a lot more reasons to believe in the person behind the door than the whale, although I guess theoretically the whale could be there.
I guess I feel the same way when we talk about these physical and ethical statements.
Like physically, I feel like we have so many more reasons to believe there is like universally shared consensus around these kind of like basic logical truths.
They're testable in so many different ways, with multiple sense data that all coincide with each other.
and we can harshly resolve disagreements
and almost resolutely say right or wrong
unless you want to be like ultimately skeptical
of your own existence about disagreements here
but when it comes to these ethical statements,
that's it, isn't it?
As long as you don't have that fundamental disagreement.
Like the same thing with the whale
and the human behind the door,
you're absolutely right that these disagreements
can be clinically solved
so long as there is a fundamental agreement on something.
And in the case of like the whale and the human,
the agreement is something like
we live in a world that obeys physical laws, you know, that sort of the world that we observe is real and will resemble the one that's behind the door, you know, so it's unlikely that the whale is going to be there. Absolutely,
you can resolve that easily. Similarly, in a moral case, if you just grant some kind of assumption,
we live in a moral universe where there are moral properties like goodness and badness and
suffering is bad and pleasure is good, then when it comes to moral disagreements, yeah, you can
clinically solve those problems too. But what you're going to want to say as a moral anti-realist,
it's like, well, yeah, of course you can solve these moral problems if you assume
some moral baseline, but what's the justification for the moral baseline? And I'm saying,
sure, you can easily solve, like, is it more likely to be a human or a whale behind the door
if you assume that we live in a physical world that obeys laws that resemble
what they did yesterday? Sure, but what's your justification for believing that? See what I mean?
It's like the same question can be asked. And I feel like what you're doing is when you say
that descriptive disagreements can be resolved really easily, you're smuggling in agreement upon some
fundamental principle that's not justified.
Yeah.
And you're not willing to do the same thing.
Yeah, then I'm essentially holding a higher standard for the ethical propositions than
like the physical ones or whatever.
And I kind of understand what you're saying there.
I guess the only thing is that like, and maybe this is just the limitation of my mind,
like I could fathom that one day we'll figure out like what is dark matter or we'll
figure out the question of some really challenging thing in physics.
I can't even imagine.
And it almost feels like a God question, like imagining something that's impossible.
I can't imagine ever knowing the fact of the matter of,
is abortion right or wrong?
Like, it just feels like something that is just so out of reach,
like almost asking, like, imagine what it was like to be before you were born.
And it's like, my subjective, I can't, I can't do that.
I can't imagine what it's like to not be.
I can't be and not be at the same time, right?
And that's what it feels like for like the moral questions.
It's just like, I don't even know what direction I would even begin to step in.
And it's so fundamentally different than anything else.
Yeah, go ahead.
Because you've sort of compared,
like, understanding what dark matter is, to, like, fundamental moral truth. Um, it's more like saying, well, yeah, I can't imagine a world in which we sort of suddenly just uncover, like, the truth about moral intuition, we suddenly are just able to prove that pleasure is good or something. But I also can't imagine a world in which we're able to actually scientifically prove that the external world exists, that we're not living in a simulation. I can't, I can't believe in a world in which we are suddenly able to prove that induction is true, like...
I can't see that either.
Well, that is true.
The difference is, maybe this is a difference.
You can talk on this.
I don't feel like there are any real fundamental disagreements on the presuppositions needed to build out, like, physics and chemistry.
Like, nobody's out here seriously saying, you know, like, I don't believe in non-contradiction,
or I don't believe that, like, we can measure anything in a laboratory.
Whereas for the fundamental, the stuff that you need to get morality off the ground, there are massive disagreements.
I believe I am only granted things through special revelation.
God tells me what's right or wrong.
And then someone else might say, like, oh, well, actually through reason, you know,
every perfectly reasonable human can come to the same moral truths.
And there's like, yeah, in physics, I don't see these like fundamental presupposed statements
even existing.
We all generally come from the same place, I'm pretty sure.
But in ethics, how do you resolve people that are coming from fundamentally,
completely different places?
Like these axiomatic statements, how do you ever figure out like who's right or wrong there?
Well, for a start, I think there are people who do quarrel with the fundamental
axiological assumptions of science.
Generally, the scientific community
just looks at them with
skepticism and amusement and sort of excludes them
from the process. But this is kind of like what happens with morals:
there are people who are skeptical
of the basic moral intuitions that most people share,
but what happens then is that the moral community
looks upon them with skepticism and amusement
and essentially excludes them. It's kind of like what you were talking
about earlier. You sort of see someone who has a
fundamental value conflict and you say, well,
mate, you're just not part of our moral community. We're going to throw
you in jail. We're going to kill you if we need to. You're just not a part of this. And we almost
like sort of laugh at the absurdity of the things that they believe. Similar things can happen
in a scientific community. It's just because you're unlikely to need to kill somebody or
imprison them for this reason. You might have a similar situation in which you have, you know,
those wackos who make those crazy hippie videos about how, like, nothing exists, man, and
science is false and all this kind of stuff. Like there are people who believe that. We just think
that they're a minority and that intuition is otherwise so widely shared that we essentially
just ignore it and don't seriously accept the challenge that they're posing to us, which is
justify your basic intuitions about your world view. I think the same thing's happening in both
cases. But for science, like, if you progress to a certain point, at some point, like a paradigm
shift will happen, right? Like, there are people that'll say, like, well, we don't, like, we're past
Newtonian physics. We've moved on from that because empirically, we validated so much stuff that
now we've moved on to the next paradigm. Even if the people that disagreed with it were in a minority
initially, eventually they can argue for those positions. But my understanding today, like,
is Kant any more popular now than he was hundreds of years ago? Or, you know, how many people
are still religious and believe in, you know, morality coming from the Bible or the Quran or the Torah? How many, like, yeah, it seems like for as many years as have existed, there's more moral philosophers, like, how much closer are we to converging on, like, any type of, like, moral truth? Yeah, God. You said yourself a moment ago that there are certain very basic moral assumptions. I mean, in your own words, you said that, yeah, people might disagree about, like,
LGBT or this kind of thing, but the really basic stuff, you know, don't kill people for fun, everybody basically agrees with that all throughout history. Like there just has
been a convergence. I think that the majority of moral history, if you look at what moral
philosophers are doing, in many cases they're just sort of trying to justify intuitions or
they're trying to explain morality or they're trying to get to grips with what it is and
definitions and meta-ethics, but there isn't as much dispute about the kind of things that are right and wrong fundamentally, maybe. I mean, of course that does exist.
But also, like, yeah, sort of we can say, yeah, we had Newtonian physics, now we have, you know, Einstein or whatever the trajectory is.
But there's a sort of more fundamental assumption that's needed for the scientific method, which is that, yeah, Newtonian physics worked yesterday and it works right now.
If I drop this object, it's going to fall to the ground.
You just assume that that's still going to be the case in 20 seconds.
I assume that, like, I'm not just going to start floating and fly into the ceiling or something.
like that. And the point is, I have no way to justify that intuition. I have no way to justify
that, that belief, except intuition. People will want to say, by the way, who are listening to
this, well, can't we say that because it's always, for all of history, things have fallen to the
ground, doesn't that give us a reason to think they'll continue falling to the ground? Technically,
no, and I don't really have time to get into that now, but that's the problem of induction.
And if you want to know why that's the case, you know, look into the problem of induction.
It's fascinating and hugely problematic, but yeah, like, sure, you have scientific progress,
but you've got absolutely nowhere closer to guaranteeing that the laws of physics aren't going
to change tomorrow.
You've got absolutely nowhere closer to proving that the external world exists or that other minds
exist or the very things that the entire scientific project is based upon.
You know what I mean?
Yeah, I understand what you're saying.
But I think that now I'm going to appeal to the satisfying or unsatisfying thing.
I think that I feel like that's an emotionally not compelling argument that, like,
well, sure, like maybe, you know, we haven't figured out what is a moral fact yet,
and you feel like we made scientific progress, but tomorrow all the laws of physics might change.
It's like, okay, maybe.
But that doesn't feel like a very compelling, you know, this might happen, I guess, type of statement.
And it kind of doesn't feel that compelling that, oh, well, maybe the Holocaust was just fine.
You know, people want to say, like, yeah, I guess I can't, like, prove that the Holocaust is wrong
because I'm a moral anti-realist, but, like, you know.
I would argue that I think you could justify that.
And here's how I would do it.
and I'm not trying to bring up any of your drama,
or I don't know if you are comfortable talking about vegan things at all,
but I think that it is totally possible that in 50 to 100 years,
especially depending on the progress made on lab-grown meat,
people might look back and go, Holocaust, I don't even know what that was.
I'm thinking about the hundreds of millions of animals
that were tortured and murdered on a daily basis.
And, you know, the amount of people killed in any war for humans pales in comparison to that.
But today, we are fully on board with, like, eating and doing whatever with animals.
And so, like, in the same way that we might say,
like, I couldn't even imagine thinking the Holocaust was an okay thing, that was clearly wrong, it's like, well, theoretically, a hundred years from now, people might say the same about eating meat, but you have no feeling about
that right now, you know? Yeah, I mean, I think I actually agree with you on that point that
people are more capable than they think. I guess I'm appealing to an intuition here of saying
like, people are going to hear you say, because what was it you said a second ago when
you were like, I'm going to appeal to the emotional thing now? You're like, this is not emotionally
satisfying. Yeah, it doesn't feel satisfying to say, like, well, couldn't all of science change tomorrow, therefore... Right. Yeah, yeah, sure. And in the same way, I could just say the same thing to you, which is like, yeah, I understand why people would think that, but I'm willing to say that people are just actually underestimating their own sort of ability to think certain things, right? In the same way that we might say, well, people are going to think, I could never see the Holocaust as right, but maybe they actually could if they were born in, like, 1930s Germany and they were raised in the right environment. Actually, they would see it that way. But look, I also sort of, we've, I don't want to potentially run in circles or go too much back and forth on these issues. It's been
fascinating. That's why meta-ethics sucks, you know. Yeah, I had another quote from
you actually, which I didn't quite get to bring up yet, which is, somebody said on that stream that you did, they said to you, morality is more complicated than internal happiness, to which you said, quote, I disagree. I will wholly argue and stay mad. All of meta-ethics is fucking trash garbage waste of time. It's philosophers that are bored as fuck circle-jerking against other philosophers that are bored as fuck. All of us have things in life that we want. We try to satisfy those wants. That's all morality is. Okay, so everybody who disagrees, suck a dick. Um, I wonder if some of the conversational, you know, tones that we've been playing, uh, today could maybe persuade some people that it's not quite as dire as you make it out to be,
because we have essentially been doing meta-ethics here.
Yeah, I know.
But then my counter-argument to that would be,
which, by the way, I like wasting my time talking about crazy shit.
It's fun for me.
Or I shouldn't say waste my time.
That's mean.
This has been a fun conversation.
I enjoy it.
But sometimes I feel like we can spend so much time at a meta-level.
It's like, did we get any closer to having an opinion?
Or like, should we have socialized health care or not?
How should we deal with homeless people in the United States?
What's like the correct way to deal with a parent that was abusive in our early years?
On the applied level there are so many fascinating questions, and on the normative level I think there's a lot of interesting questions.
that was a lot of moral terms
you just used there for an anti-realist
True, yeah.
Well, listen, what's the correct thing to do? What should we do? Should we have socialised healthcare? Well, in your view, arguably not.
Sure. So this is why, at the meta level, I just say, listen, I'm just going to assume we all share these basic kind of moral truths: we all generally want to be happy, healthy, have our families taken care of, and be not fucked with. And then boom, then you're done with it, and then you move on.
And I feel like no matter what any kind of, like, ethical philosopher debates,
more or less, we're probably going to come out with about the same answers.
I would be surprised if there were many, like, moral philosophers that would come out with, like, massive disagreements with me on, like, some applied level,
basically.
That, like, the way that I get there might be a little bit weird or somebody might say,
well, you're using people as a means to an end, or, well, you know, I don't like the fact
that you can't say that the Holocaust is objectively wrong.
Yeah, maybe, probably. Now, these things might not feel that satisfactory,
but at the end of the day,
when we get to like our applied statements,
I have like a very Rawlsian view of the world.
I think that most of the ethical statements I generate
are generally pretty positive,
and I don't have to waste all this time on the meta level
to kind of get there.
But I understand that that's also,
it sounds really dismissive and arrogant of me to say that,
which it is.
But yeah.
David Hume said,
I don't know if he was talking specifically
about the problem of induction
or the problems of philosophy in general,
that you have this list of problems,
like the problem of induction,
that you study for hours and hours
and think,
my God, there's no solution to this.
We have absolutely no grounding for our epistemic world. You have no better reason to think that I'm going to start flying than that I'm going to start falling if I jump out of a window.
But then you close the book, you put it back on the shelf, you leave your study, and you just
act as if you hadn't done any of that at all.
Because of course you don't believe you're going to start flying.
And there's a sense in which, like, even philosophers who do that for a living will agree
with you that, okay, it's not going to change how you feel.
But the purpose of this kind of, there's constructive and, I guess, like, destructive
philosophy and constructive philosophy might be trying to sort of build up world views.
But what we're doing here is essentially saying, well, we do believe certain things.
Let's try and figure out why we do, whether it's justified and sort of break it down.
And that's what we're engaged in here.
But it won't change the fact that we do believe these certain things.
But since you brought it up, I wanted to ask, while I still have you, what do you got against
animals, man?
Well, they taste really good.
Isn't that what meat eaters say?
That is what they say, and I don't think any vegan would ever deny that.
I mean, I guess since we're doing sort of meta-ethics, regardless of whether or not you're going to be a vegan, I've heard you say that essentially if there is such thing as moral consideration or moral worth, it just is something that sort of doesn't apply to non-human animals.
Yeah, that basically, I feel like you can start from one of two points.
You can either say, I value human sentience, and that's where I begin all of my moral construction from, or you can just say, like, I value all sentience, and that's where
vegans typically begin their construction from. I don't think there's necessarily a good
argument for one or the other, because I view these as being very foundational statements, and
being a little ethical anti-realist means I can pick whichever one makes you feel better.
So, yeah, I just, I don't know. Yeah, that's basically where I...
And what is it that you happen to value?
The human sentience, basically.
Human sentience. So does that mean... Can you just, like, define human sentience?
That the human brain seems to produce some conscious experience that I would call a human conscious
experience.
Yeah.
Okay.
Is that true of all human beings?
Probably not.
I could imagine somebody having enough of their brain removed such that they don't have
that experience anymore.
It's probably possible.
But would you still sort of value their sentience in so far as they have it just because
they're human beings?
No.
If they're not having a human conscious experience, probably not.
So, for instance, I could imagine a person gets into an accident and the majority of their
brain is destroyed or removed, but their body is kept 100% alive and healthy. Arguably, this person
would have no moral value, other than, like, what the family would think, I guess, yeah.
So you've got kind of two necessary conditions here. One is sentience, the other is being a human.
You sort of need both in order to care. Well, I say human, I say human sentience, essentially, yeah.
So you could say to be a human and to have sentience, I guess.
But like a human, like there's a human conscious experience.
The moral qualifiers, I mean, or I shouldn't say moral qualifiers.
But basically the statements are: do you have a human conscious experience
and do you have the ability to deploy said experience?
Those are, like, the two things I say that, like, make you worthy of moral consideration.
So somebody that like has their brain destroyed or is dead, for instance, like a human
body that might have a full brain, doesn't have the capability to deploy a human
conscious experience. So they have no moral consideration. Somebody who's sleeping does. You can wake them up, unlike a dead person. Somebody with a mental disability does. But you could conceivably
peel away enough parts of the brain, I guess, that they wouldn't. Because you can have sentience
without human beings, and you can have human beings without sentience, right? So I guess what I'm
asking is, is it sort of those are the two boxes you have to fulfill. If you're sentient,
but not human, you don't care. If you're human, but not sentient, you don't care. But if you're
human and you're sentient, then you've sort of conferred moral value onto this being.
Well, I guess the question is, are you considering, is sentience, is all sentience the same
to you? Like, is that just like a thing? Or give me a definition for this. Yeah, I mean, I guess
by sentience, I mean the ability to experience pleasure and pain or desirable and non-desirable
states of affairs. Oh, okay. Sure. Maybe I should say, it's essentially the ability to have
preferences, I suppose, is one way of putting it. Okay, because I feel like human sentience, I would view
differently than like the sentience of a lot of animals or other things. But sure, I could say
then to be a human and to deploy a sentient experience or conscious experience, yeah, have
sentience, yeah. Because the problem that I have with your view is that it's sort of like
an on-off switch, right? You've got like this, this care for human beings that extends presumably
to political activism to saying that we should hold other people at gunpoint to take their money
to make sure that other people aren't suffering. Like very seriously, we're taking this very
seriously the suffering of human beings. And even like slightly more menial sufferings,
you know, like being cold at night, people should be able to warm their homes. And so we should
sort of have a welfare blanket for that kind of purpose. Not that that's menial, but I mean
in comparison to something like, you know, being forced into a gas chamber, it's not as bad. But
when it comes to non-human animals, particularly farm animals or farmed animals, I should say,
pigs and cows. It's just like an off switch. Is it just like there's just nothing? Or is it sort of like,
well, they have moral worth. They just have significantly less. Or in your view, is it, is it just like
these are sort of inanimate objects that you can do with as you please? Yeah, they're basically
philosophical zombies to me, I guess. Yeah. Is that true of every animal that isn't a human being?
So like chimpanzees, dolphins. Yeah. I could imagine there might be some animal of different
sophistication somewhere in the universe or an undiscovered one on the planet Earth. But insofar as animals on the planet go, yeah.
So wait, so it's about sophistication, um, or something? Like, there could be, like, other types of animals in the universe that have, like, a conscious experience, I guess, that is similar enough to, like, a human being or something? Okay. Um, yeah.
The reason why I see it as problematic to have it as sort of a binary on-and-off rather than something like a scale of gradation is because all life on earth exists on a scale of gradation. That is, like, no
species has ever given birth to a new species. So, of course, like many animals have died in the
history of planet Earth, but they all lived at some point. So, you know, your parents were humans,
their parents were humans, their parents were humans, their parents were humans. You go back a few
hundred thousand years, you've got different, well, this would be sort of pre-human, but, you know,
there were different species of humans at first.
And then you go back far enough for a couple of 100,000 years
and you're looking at sort of apish creatures
that more resemble something like a chimpanzee than they do a modern Homo sapiens.
And you go further back and you get to a fish, right?
But there's no sort of like distinct boundary here.
Every single animal gave birth to the same species.
They were just sort of such minor changes
that over, you know, billions of years of evolution,
we get human beings.
The problem is that in principle, if you're just going to say, yeah, human sentience, human experience, human beings, they matter, and any other animal does not.
And if I were to sort of resurrect the evolutionary chain of human beings back to our sort of common ancestor, there has to be a point at which you just sort of arbitrarily say: the sort of apish hominid on this side of the line, I do not care about, inanimate object, do whatever you want with them. And the sort of identical creature on the other
side of the line, human being, sentience, care about, want to sort of hold people at gunpoint
to make sure that they don't get cold at night. That to me seems like an entirely untenable
position. Why is that untenable? It's untenable in the sense that, I mean, would you accept
that, for a start, do you think that that's essentially what you would do or are doing
I mean, I'm sure there'd be some haziness in the middle, right? Much as, like, I'm sure you believe you have a neck and you believe you have a head,
but I don't know if you could tell me exactly where one ends or the other begins.
Yeah, so, I mean, there's going to be some sort of continuum upon which I'll say,
like, it's probably going to be kind of hazy in here.
But I think roughly, yeah, that's essentially what's going on, yeah.
Because this is the weird thing. Like, as you say, I mean, you point out to me that, yeah, of course you're never going to be able to draw the line, but you can't draw the line easily with many things.
But that's why I think that if your ethical views of sort of what counts is based upon essentially the qualities of the animal,
that you have this one animal that has this thing called sort of human sentience and this other animal over here that, in your view, does not.
We should be talking about a sliding scale here rather than an on and off switch because of the fact that you could resurrect every evolutionary link between those two animals.
and there's no point at which the switch just gets turned off, you know what I mean?
And it seems very strange to say that you've got sort of 100, 100, 100, 100, 100.
And then somewhere in the middle, it just suddenly goes from like 100 down to zero, not instantly,
but like over like maybe a few generations.
And then you're just right at zero again.
It seems much more plausible that we should look at this as going slowly from 100 all the way down to zero at some point over here, pretty much evenly, maybe with a slight curve or something. I mean, I'd say that actually it shouldn't go down to zero at all.
Sure. I mean, I could fight with you on this. There's a big problem, I think, in physics right now, where people feel like there needs to be some grand order or some grand unifying thing to unite everything in the universe. And it might be that there is just no clean way to do it. I don't know if it's a reasonable argument to say, well, it's unsatisfying, therefore it's impossible, that our moral consideration would drop off so suddenly. But our capacity for breeding does, right? Like, we have human, human, human, human, human, and we can't breed with the next closest thing to us at all. That goes from 100 to absolute zero, instantaneously. So, I mean, yeah, I guess it's unsatisfying, but.
Of course, it's easy to do now. We're in that situation now because of the fact that the evolutionary links are dead. They don't exist. And so we can quite easily isolate, you know, human beings and chimpanzees and dolphins. But I mean, there was a time when different species of human beings were simultaneously walking around on planet Earth, and I believe at least some of them could, you know, breed with each other. It's not entirely
clear. Like, for example, you know, let's go back to when you have Neanderthals, you have Homo erectus, you have a bunch of different human species all walking around on the planet.
Like, are you okay with factory farming those human beings? Is it specifically Homo sapiens that you care about? Is it human beings broadly? I mean, what is it that you're sort of basing this distinct on-switch for morality upon?
Yeah, I don't know, somewhere around human conscious experience. It's not going to be very satisfying. I don't know exactly; if there were other types of humans that walked the Earth, I think it would be pretty difficult to do. But I don't think that a vegan justification of saying, well, we ought to value all sentience, works. I mean, I feel like that's just about as arbitrary. Like, why value
the sentience of animals over, like, the existence of nature? Like, why not the grand beautiful
structure of a tree versus like the sentient mind of like an animal? Why ought one be valued over
another? I feel like fundamentally it's all kind of a bit arbitrary. So potentially, but surely sort of
valuing the sentience of a non-human is much closer to valuing the sentience of a human
than it is to valuing something like the existence of nature. They're much closer to each other.
I mean, in some ways. If I speak to someone like you who says, well, I actually do have this
intuition that human beings matter. And I say, why is that? And you say, because they have this
thing called human sentience. And I say, well, there's this thing that other animals have that's a bit
like that, which would be like animal sentience, which I think maybe should count too. And you say,
well, that's arbitrary, because why don't I care about, you know, nature and the trees? And I'm like,
well, that's wildly different to the thing that we're talking about. I'm saying that there's
something that seems very similar to what's going on in the human brain, in other human brains.
And in fact, again, on an evolutionary trajectory, it wouldn't really make sense to say that the
consciousness that evolved in human beings is just of a completely different kind and quality
to the consciousness that evolved in other animals. Like, that just sort of doesn't make evolutionary sense.
I mean, I say it doesn't make evolutionary sense, but, I mean, if you look at the progress of humans on the planet, it is distinctly unique compared to every other species on Earth, right? Nobody's even close. I don't think any other creatures
even really have developed language or the capacity for language like humans have. Some things
can use crude words to describe things, but in terms of, like, being able to imagine things that are not, being able to express a negative... Yeah, these are... The ability to abstract is often pointed to as one of the distinctly human features. And it's not even something that exists in a gradient. I don't know what it was or how. Maybe the Prometheus alien guys came down or whatever, but this is like a switch that flipped for human minds. It just doesn't exist at all in any other creature on the planet.
Again, I guess like I can understand it being unsatisfying and we can even appeal to intuition to some extent.
But then I can also appeal to intuition.
It's like, well, every single animal on the planet tortures and eats other animals. And intuitively, I guess, we also kind of torture, maybe not torture, or I guess we could if you count factory farming, and eat other animals as well. So I feel like you can argue the intuition on both ends there. Intuitively, humans might feel a certain way seeing an animal die, where it makes you feel sad; but then, intuitively, we also have all the benefits of eating food, and meat especially, that makes us feel good and helps us in a number of health ways. So I feel like appealing to intuitions there is very difficult as well.
I mean, if I were to grant you that, it certainly wouldn't justify sort of any treatment of other animals. I mean, you could say, yeah, well, animals sort of predate on each other, fine, but they don't sort of lock each other into cages and put them in gas chambers. That would be a very weird and inhuman thing to do.
And I don't mean that in the moral sense. I mean inhuman in the sense of being outside what human beings naturally do.
Also, I mean, the language thing is important.
I've heard some evolutionary biologists suggest that it might be the fact that human beings
have developed complex language that's allowed us to produce, you know, cities and civilizations.
It might actually be the fact that we have language.
That could be a plausible contender.
But also, like, this doesn't seem relevant to me to the question of sentience.
This doesn't seem relevant to me to the question of sort of having preferable states of affairs.
In other words, if, like, if I break your arm and I break the leg of a pig,
I don't see any good reason to think that in terms of their crude physical experience,
it's somehow worse for you than it is for the pig.
Indeed, it might actually be worse.
And I'm not going to claim that it's worse, but I'll give you some thoughts as to why it might be worse.
We accept that other animals are, in many cases, much more sensitive, sensory-driven creatures than we are. And in fact, the fact that we have evolved a capability for language and rationality means that we don't need to rely so much on our crude physical sensations to help us survive. We don't need as strong a sense that, you know, touching the stove hurts your hand, because we can tell each other not to do that, whereas non-human animals don't have that, and so they need to rely on those sensations more strongly. And so, for example,
dogs rely on their sense of smell. And most people accept that dogs experience smell far more
acutely and intensely than we're capable of even imagining. And we think that's probably because
that the way that they've evolved, they're more reliant upon it. Okay, hawks experience eyesight far more
intensely and acutely than we're capable of even imagining because they rely more heavily upon it
to navigate the world. If we are these sort of hyperrational agents that have developed language
and we can talk to each other, we don't need to rely on our sensations of pain as much to navigate
the world. So who's to say that these animals, when they experience that pain, don't experience that
pain in a much more acute and intense manner than we're capable of imagining, in the same way that they experience smell and eyesight? Now, I don't know that that's the case.
Yeah. And like I'm saying, I would say it's possible, but, like, we can barely, we can't even imagine other people's minds. How could we imagine that there's any sort of actual experience going
on in the mind of an animal like that? But you care about other humans, but you don't care about
the pigs, right? Sure. But I only care about other humans because I see that we have the same structure, and thus, like, some conscious experience is obviously arising, I would hope, from a similar structure. But for animals whose brains seem to have markedly different capabilities than ours, I don't know if I'm just supposed to take it on probability that, I guess, they're probably deploying a similar conscious experience; I have no reason to really believe that. Well, they have similar enough structures to think that when they exhibit signs that, in human beings, would indicate the experience of severe physical pain, they're feeling that too, right? Potentially. I mean, I could say that, like, insects exhibit similar behavior. Now, they don't
typically possess all of the different structures of the brain.
Some of them only have like a nervous system and that's it.
But yeah, I guess I just, I have a hard time buying the argument that like, well, our brains
are kind of similar and I know that we have markedly different capabilities, but we should
probably just assume that animals have some sort of conscious experience that's pretty similar, or at least comparable, to ours. I just, I'm not sure I buy into that completely.
Would you say the same thing about, like, eyesight or hearing? Like, do you think that, you know, the eyes of a chimpanzee experience the world radically differently to the way that human beings' do?
It seems to me that
I would imagine that chimpanzee ears
probably work in roughly the same way,
eyes work in roughly the same way.
Maybe they can sort of perceive
a slightly varied set of wavelengths
or something, but the physical experience
is probably roughly the same.
I see no reason to exclude physical pain
from that same comparison.
I have no idea.
I feel like there's a temptation to say
that they must perceive sense data like
us in terms of visuals and in terms of auditory stuff. It's tempting to say that, but I don't know
if there's a compelling rational reason why you ought to accept that. We just say, well, I mean, it looks similar enough, it kind of appears similar to us, they have a kind of similar brain, so they must have a similar type of thing. It's not just how they look now, but also
the origin. If we look at a sort of natural selection picture of the evolutionary development
and we say, well, we sort of have a rough idea of how our eyes evolved and how our ears evolved.
You know, we can say why they evolved in response to different sorts of environmental pressures and this kind of thing.
And we can say, yeah, I mean, the same thing is true of chimpanzees.
I mean, the eye and the ear developed before our split with the chimpanzee.
Like these beings all already had eyes before the split between the modern chimpanzee and the modern human beings.
So we've got good reason to think that they're both basically doing the same thing.
Well, yeah, but we're not talking about an eye or an ear.
We're talking about sight and sound, right?
And those are things that happen within the mind, right, regardless of the development of the organ itself.
So, again, like, I'm sure we have a similar organ that is perceiving light in a certain way.
I shouldn't even say perception of light; light hits it in a certain way, and it has the capability to focus and unfocus on things.
But is the experience that it produces in the mind the same?
I'm not sure.
Are there animals, for instance, that truly create music?
That would be like a big sound thing.
Now, I know we've got birds that kind of, like, sing songs, but are they truly creating music, or is this a heavily instinctual thing where it produces certain songs because it knows it'll get a mate?
Like that would be, for instance, a thing of like, oh, well, here is like an auditory
experience that must be similar to ours.
Yeah.
There's evidence that that may be, again, a product of their inability to produce the music.
I mean, you can, there's been research on this.
I mean, you can watch videos on YouTube of animals listening to music.
You can go and stand by like a field of cows and start like playing the trumpet
and they'll just sort of converge and come and listen.
Now, I have no idea how they're experiencing that music.
But like, when we're talking about the development of physical pain, which seems, in almost all cases, to have evolved as a way of saying: this is dangerous, this is harmful for you, so we're going to give you a negative experience that you would rather not be having, so that the organism gets away from it now and avoids it in
future. That is why we think pain evolved. That's why we think pain receptors exist and why we think
that sort of human beings are capable of having experiences that they'd rather not have.
There is no reason to think that the same thing is not true of other animals, especially when we
share an evolutionary trajectory. I just don't see a tenable way to deny it for other animals that have brains that light up when you do things to them, that react in similar ways, that scream out in pain and try to run away. I recognize that, you know, a plant can grow towards the sun, this kind of thing, but we're ticking so many of the boxes here. And at the very least, it shouldn't be on me to prove that animals do feel pain before we say we can do whatever we like to them; I think it should be on you to prove that they can't before we start doing that, you know?
But, I mean, that's the thing, though: neither of us can ever prove one or the other.
I think it's tempting to say that they must have an experience similar to ours when it comes to pain because of some outward things that we see. But I think we kind of just work backwards and try to rationalize that, because we see a thing that makes us feel a certain way.
If I talked to you about, like, two different species, and I said that they have similar brains, evolved from similar things, and they can both produce almost identical sounds from their mouths, and they have tons of common ancestors or whatever, you would assume these two things could communicate with each other. But, like, parrots and crows can basically speak but have, like, no capacity for language whatsoever. And I feel like, at the very least, that should be there if they can produce sounds that are almost identical to human sounds.
They've got like the ability to enunciate.
They've got similar brains.
We're all, I don't know if they're mammals or not, probably not mammals.
But like we have like similar backgrounds and everything.
Like you would expect that some kind of language back and forth
could happen there.
But they don't even have the capacity for abstract thought like that.
I also have a rough idea of, like, the parts of our brain that are involved in the use of language,
as well as sort of the process of abstracting, the feeling of pain, all of these kinds of things.
We have a sort of good picture of which parts of our brain are involved in different kinds of thinking.
And we can look at other animals and see if those parts of their brains are present as well.
And where they are and when they're lighting up in the same kind of way and having the same kind of effect due to the same kind of stimulus,
I just think, yeah, sure, you can't prove that they're experiencing pain, but it seems ludicrous to think that they don't.
Yeah, I understand what you're saying there, but like, I think you're presupposing a lot.
Like, if I ask you, like, can you point to me which parts of the brain, like, produce consciousness?
I don't think you can do that.
Again, not... well, like, kind of. I mean, you can sort of look at the parts of the brain, and I wish I could remember which parts of the brain I'm talking about here. It's largely associated with, like, prefrontal cortex communication. But there are even questions of, like, in split-brain people, whether there are two conscious experiences happening.
Or I think there was a man that had a severe case of, I want to say, hydrocephalus or something, and, like, 70% of his brain was water, but he was still walking around and could talk and communicate with people.
And it's like, is he even having a conscious experience or is he an actual living philosophical
zombie, you know?
Yeah, and there are instances where people's brains are damaged in such a way that you can show them an image that they're blind to, that they don't know what they've just seen, they couldn't tell you what they've seen, but if you ask them to draw what's in front of them, they can draw it. And it seems like the brain's sort of getting split up, and there are lots of philosophical questions as to whether there are sort of two persons there. But, like, if I have more reason to think that you can feel pain than that a pig can feel pain, I think I only have, like, the tiniest amount more reason to think so. Sure. Okay. And maybe it's got something to do with the fact that you can
communicate with me. Maybe it's got something to do with the fact that you can
tell me. But even then, like, I'd be more convinced that you're in pain if you just clutched your chest and sort of rolled around on the floor than if you calmly told me that your chest really hurt. I'd be more convinced that you're feeling pain
without the language based on just the way that you behave, because I identify that behavior
with the way that I behave as well. And given that we share an evolutionary trajectory, given
that pain exists in human beings so that we can avoid things that are dangerous, and so it
gives us a negative experience that we'd rather not be the case, I just don't see a good reason
to think that this doesn't apply to other animals as well. And maybe it applies in a lesser
sense. Most people, especially if they're trying to justify our treatment of animals, or if they're religious and trying to offer a theodicy against the problem of animal suffering, sort of have to believe that it must be different in some way, that it must be lesser.
But to completely and utterly deny that these animals have any sense of an ability to
feel pain at all, just sort of doesn't seem right to me.
Well, it doesn't seem right to me that I can't eat delicious cheeseburgers.
I understand what you're saying.
Listen, we can prepare for a more formal vegan debate at some point if you want.
Yeah, I understand what you're saying.
It's not so much that I even want to talk to you about veganism, but just, like, I mean, we're doing meta-ethics. It's like, even if animals taste nice, and even if you're justified in inflicting suffering upon them to eat their products, even if that were true, to deny that they feel pain at all... I mean, I know a lot of people who say, yeah, animals feel pain, but animal suffering doesn't really matter, or, like, you know, whatever. But to just deny that they feel it at all, I think, is such a rare position that it isn't any longer taken seriously, either in the sphere of moral philosophy or in the sphere of psychology and neuroscience. I just don't think it's a held position. And I have a feeling that there's some motivated reasoning going on,
that the reason why you might be so reluctant to ascribe any kind of
you know, sentient experience to these animals
or to limit it so dramatically
is perhaps because, you know, it suits how you want to treat them,
if you know what I mean.
Now, I know that sounds very accusatory, but I feel like that's what people are probably going to assume is going on here.
Yeah, probably.
But I mean, like on the same end,
I would look to vegans and I would say that, like,
I think that you see something cute and cuddly
and it produces, like, the correct facial expressions to feel good about it,
and then you produce some motivated reasoning that essentially gives you a reason
to, like, not hurt said cute, cuddly things.
I feel like there is no... Like, I understand what you're saying, and maybe it just sounds like I'm not willing to make what some people would consider a very reasonable jump: that, like, a human brain and an animal brain aren't that much different, so therefore they ought to be able to deploy a similar conscious experience.
But I don't know.
I just don't find that compelling to say, like, well, look, they're close, so, you know,
they're basically the same.
If you want to say, like, am I willing to say the animals don't feel pain?
I think the problem with that feeling thing is there's a lot baked into what it means to feel something.
Like, is there going to be some sensation of pain that an animal's, like, nervous system is capable of producing, you know, to avoid external stimuli or whatever?
Yeah, of course, obviously, I would assume that.
But it's not really a question of can it feel pain or not feel pain.
I think the question is whether or not the animal's deploying a conscious experience that's having, like, the sensation of pain through that experience.
And I think that's, like, the question that we kind of get at with veganism, that's really hard to prove other than to kind of beg, you know: well, look, their brains are kind of close to our brain, so their conscious experience should be kind of close to our conscious experience, which I just don't find very compelling. But I mean, I understand why. Yeah. I didn't ask this earlier because it's something that I'm sure you've talked about myriad times elsewhere. But just for clarity's sake, I mean, if you had a human being with the brain of a pig, would that human being
just have absolutely no worth to you? I think essentially so, yeah. I think so, yeah.
Because it would be the same as like a human like in a coma or a human that was not
deploying a conscious experience at all.
But I will say that like a human with a pig brain or something would be different
than a human that's like disabled.
Like a human with Down syndrome or a human with autism wouldn't be the same as like
a human with a pig brain.
Yeah, of course.
But in the sort of relevant sense of their sentience level: suppose I sort of isolated the sentience part of the human brain and reduced it to the same level as a pig's. So what you essentially have is a human being who, when you prod them, sort of screams in pain and exhibits all the signs of experiencing physical pain, but they're severely cognitively impaired, such that they can't talk to you, they can't communicate their ideas to you. All you know is that they've got this severe impairment, but at the same time, when you inflict what you think is a painful experience upon them, they scream, they reel, they try to get away, they, uh, gesture that they want you to stop. Their brain is still lighting up in a similar way to a non-impaired human being's, this kind of thing. Would you just say, uh, yeah, well, I know they're exhibiting all those signs, but you can't prove that they're feeling pain, and they're so impaired that I'm just going to do literally whatever I want to them, because they have zero moral worth at all? I mean, I understand you saying, uh, maybe you'd favor the well-being of other human beings, or maybe their moral worth would be lowered in your estimation or something. But to say that on these grounds there's just nothing, just a flat line in terms of your moral consideration?
I mean, again, the comparison of, like, a horse brain in a human or something to an impaired human being, I think those are fundamentally different things. I think an impaired human being is an impaired human being. It's not a dog or a cat or a pig, right? In your scale of moral consideration, would you consider, like, a healthy dog to have more consideration than, like, a human with Down syndrome? Well, I'm not advocating
ethics based on sort of a level of human sentience or comprehension or something like that. Because, again, I don't think... Like, a dog might be more intelligent or something than a human being with a particular cognitive impairment, but I don't think that they're therefore, like, more sentient, you know what I mean?
Sure, yeah. I don't know if anything can be more or less sentient. I just think there are probably different types of sentient experiences, is my guess. Like, the conscious experiences of a human and a dog and a bat and a bird, my guess would be that they're all quite different. But, I mean, I don't know. I'm pretty agnostic towards it. I'm not sure.
I mean, Thomas Nagel wrote a famous article called What Is It Like to Be a Bat?, essentially concluding that there's just no way we could ever even hope to know what it's like to be a bat.
It's not imagining yourself sort of in a bat's shoes, as it were, but imagining yourself as a bat, experiencing the world as a bat.
He's like, it's just such an impenetrable area of experience that we will just never know what it's like.
But it seems so radical to me, as I'm sure it will seem to most of the people listening, to say that there is absolutely no moral worth for these other animals. Which, again, if you were to reconstruct this evolutionary trajectory, you would just have to draw a line somewhere between Homo sapiens and, like, some apish ancestor. So you just sort of have to randomly say, like, sorry, buddy.
Like imagine you're like God and you're deciding who gets into the afterlife and you have this rule that human beings get to go to the afterlife.
They get compensated for their suffering.
All other animals do not.
And you have this like chain of human beings.
And at some point, you have to just say to this guy like, sorry, mate, you're not coming in.
And he says, but what is the difference between me and that person on the other side of the gate?
And you have to say nothing.
There is literally no difference between you and that person in terms of your abilities.
But look, man, I've got to draw the line somewhere.
And rather than saying, okay, I'm going to let you in, but we're going to slowly start, like, lowering the standard, no, no, no, it's just off suddenly. It's completely arbitrary, as well as being probably completely unjust and unfair in that circumstance as well.
Sure.
I mean, you say unjust and unfair, which is kind of begging the question. I mean, obviously, you're assuming what's just or fair or unjust or unfair based on what we're granting sanction to. But, I mean, yeah, I think my answers are pretty obvious, given that we just came off of, like, an hour-and-a-half conversation on whether or not I can ever say a thing is, like, objectively right or wrong. So, I mean, yeah, that's my end of it.
I mean, we also concluded that you can't say that the Earth orbits the sun, objectively speaking.
True.
Yeah, if you want to go to some foundational level, I will be ultimately skeptical. But, I mean, if I can't even say, like, human murder is objectively right or wrong, I don't think it's that surprising that I'm not going to offer an opinion on the similarities of animal consciousness and human consciousness to check for, like, right or wrong there as well, right?
But you still think it is wrong in some sense, like, for me to torture another human being, right? Even if it's just, like, subjective or whatever. Like, you still think it's wrong, that I shouldn't do it, that you would vote against me doing so if there was some way that I could ask for your opinion as to whether I do it or not.
Yeah, probably.
In some utilitarian sense. I'd probably say the same for an animal, too, right? The type of human mind that would torture an animal is probably a very unhuman, unhealthy human mind. There's probably going to be some natural inclination there. Like, I think a normal, healthy human mind likes puppies and likes kitties, and a human mind that's willing to torture, or be accepting of, like, causing animals extreme pain, would probably also inflict the same on humans as well, right?
I mean, not necessarily. Maybe somebody thought that, but now they've listened to this conversation and they've heard what you say, and they say, oh, I can just draw a completely distinct line between humans and every single other animal.
So, yeah, I'm perfectly happy to go
and, you know, torture dogs
and vacuum seal cats in a bag.
But I'm never going to do that to humans
because I only care about humans,
just like Stephen does.
Well, listen, if the idea of torturing animals hurts you and bothers you so much from listening to me, then you should probably become a vegan. I mean, I think the most irritating conversations I have are with people that, um, seem to express a very emotionally strong, uh, reluctance to accept any sort of animal killing for fun or whatever, but then they seem to be willing to eat meat, which I think is an untenable position. Um, right. But, I mean, yeah, I don't think I would ever torture a cat or a dog. That seems really fucked up. So, I mean, why? Not just because it makes me feel bad, but... Okay, but earlier you said that the reason that you care about other human suffering is because it makes you feel bad, and that, like, your moral sense of mistreating other humans is ultimately just based on how it makes you feel. So now you're saying that the same is true of cats and dogs? That, you say, yeah, actually, no, I do care about them morally. I mean, only insofar as it, you know, affects my well-being. Yeah, sure, but it is, like, the same thing here, right, as what you were saying earlier about human beings? Yeah, but the
difference is that animals are fundamentally different in the way that they plug into human experiences; there are a lot of ways that we can gain utility out of them that don't involve living alongside them and going to work with them every day and treating them like fellow humans, right? So, like, we don't eat fellow humans, but you can eat animals. We don't keep humans around as slaves for a variety of reasons, but you can have a dog or a cat that's essentially like a slave to you for purposes of, like, entertainment and whatnot.
So, yeah. I mean, if you make the situations the same, often they are actually the same. Like, you can imagine a cognitively impaired human being: we often sort of give them a carer, they're not allowed to leave the house on their own, the carer chooses when they eat, when they sleep, this kind of thing, which is quite similar to sort of having a pet. I mean, you wouldn't describe it in those terms; that would be quite grotesque. But it's a similar kind of justification for saying, actually, we are going to restrict this person's freedom, and we choose when they eat, we choose what they do. We do actually do that in a human
context as well. For extreme levels of impairment, I agree, but we still treat them as distinctly
and markedly human, right, for a variety of reasons.
I mean, you can go down very dark paths about eugenics or not treating mentally disabled people in a certain way, but obviously that's a whole other thing to go down.
Yeah, I mean, like you said a moment ago,
you know, if you're upset with torturing animals,
you should be vegan.
Maybe that's true.
It's certainly true that factory farming should be opposed.
I mean, somebody might say that, like,
well, I do care about animal suffering.
I don't like torturing animals, but I don't think painlessly killing animals is wrong.
And I also, I'm not convinced that, like, you know, boycotting animal products is the way to solve the problem, whatever.
But I think you're right that if you are bothered by, you know, the torture of animals, you should be against it. You should think that factory farming is a bad thing, right? Like, factory farming is a bad thing. And that's basically what I would hope that you would agree with if pressed hard enough. It's like what Lincoln said of slavery: if this is not wrong, then nothing is wrong.
Like if there is this thing called sort of wrongness and badness, it seems that sort of this, this horror show of exploitation, mutilation, gas chambering of animals must be wrong.
I guess I'm sort of surprised that we can have this really long, careful discussion about ethics, so conscious of the implications of different worldviews, talking with such nuance and specificity, and then be so blasé about the suffering of animals, like it just doesn't matter at all, based on what is essentially an arbitrary line in the sand between Homo sapiens and every single other animal that exists on the planet. I'm quite astonished by it, you know? Yeah, I mean, going back to what I said earlier, I think there were, like, three reasons that I gave for my moral system. I was like,
I believe that humans working in concert with each other produce more happiness than when they work separately.
This doesn't apply to animals in the same way at all.
We don't work or live alongside like horses and lions and tigers and pigs and cows.
For the protection of my own preferences, I have to live in a certain way with other humans.
This doesn't apply to animals at all.
Obviously, we can farm them, we can eat them.
They live in separate spaces.
They don't adhere to any of these things.
And then for universalization, I would hope that almost every other group of humans can reciprocate said values to me.
Animals aren't capable of doing this either.
They don't even have concepts of moral or ethical systems. None of the reasons that I gave earlier, for all of the carefulness that I have when it comes to dealing with other groups of humans, none of those things apply to animals.
I also have a really hard time applying a gradient scale to animals.
It seems weird to me to say that it's probably not okay to torture an animal, but if you want to kill an animal, you know, prematurely, to eat it, that is okay. That seems like a weird thing to say: well, I'll give it a little bit of moral consideration, but not that much.
I don't know, I guess maybe there might be some world in the future where it makes sense to value some things more than others. Yeah, but I mean, we don't really need to eat animals at all to survive, right? Like, sure, but so what? Also, something I'm curious about for you: when an animal kills and tortures another animal, would you say that, like, a wrong has been committed there? Do you consider that wrong? So when we see, like, groups of lions catching and torturing and killing an animal, is there, like, a wrong action committed there, or?
No, no, I'd say it's bad but not wrong. I mean, it's complicated, in the fact that there's an interesting question as to whether we should sort of intervene in such cases, because although the lion isn't committing a wrong, because it's not a moral agent in the way that human beings are, we might still say, well, it's still bad, they're still suffering, and we could prevent it from happening. I think the problem with that is we don't know the sort of wider effect this is going to have on the ecosystem. Like, it might actually have adverse effects on the predator-prey numbers and lead to sort of widespread starvation or overpopulation, these kinds of problems.
Sure, but even the existence of an ecosystem presupposes a whole bunch of suffering, right?
Yeah, yeah, yeah.
But to answer your question, like, no, I don't think there's, like, moral agency involved when the lion, you know, rips apart a gazelle.
Would you agree with me that there should be a vegan imperative to genocide all cats on the planet?
I'm not sure about that, no, because...
Because I feel like keeping a cat as a house pet is like a necessarily evil thing if you are vegan.
I don't know how you could ever live with having an obligate carnivore as an animal for recreation.
Oh, sorry.
I thought you were just talking about like pet ownership, the ethics generally.
You were talking about the fact that they have to eat meat, I see.
Yeah, I mean, I don't know what I think about that.
I guess because cats are obligate carnivores, it can't be immoral in the same way,
certainly for them to be eating those foods and potentially not for you to be procuring the food for them.
Arguably, if you're breeding these cats into existence for the purposes of having a pet, which is going to require you to buy animal products to feed them, that would be wrong on a vegan worldview. But most vegans think that, you know, breeding animals into existence for pet ownership is wrong anyway, and that you should favor rescue animals or, you know, stray cats, in which case... But even for rescue animals, I feel like the moral choice then, like, if I were to put it in any other context: imagine I could adopt little humans that only eat other humans. The moral thing would probably be to adopt the human and then kill it immediately, right? So that you can reduce the necessary suffering you're causing by having other humans need to be eaten,
right? That's funny, because I was about to ask you the exact same thing, expecting that you'd have the opposite intuition: like, in a human context, would we be willing to euthanize a human being because their existence somehow necessarily causes the suffering and death of other human beings? I actually,
I don't know what the answer to that question would be,
but I guess for a vegan, at least for like a utilitarian vegan,
the answer would be the same in both cases.
And it sort of doesn't matter which you choose,
I guess, as long as you're being consistent, right?
Maybe, yeah.
I mean, if one human being is causing the harm or destruction
to other human beings, I think those human beings
would have a right to kill that human being, right?
In what, in any circumstance in which a human being is causing harm and, like, suffering to another human being? Well, it's going to depend on the scale of harm or suffering, right? Like, if somebody is farting in a bus, you probably don't have the right to kill that human because you have to smell their farts. But, yeah, like, proportionally, you probably have some right to respond. If somebody's causing, like, a destructive harm, like potential death or whatever, then you have a right. If they're infringing on those rights, their rights are essentially revoked in that sense, right? If somebody's trying to kill you, you can kill them, et cetera. But it's going to depend on the level of, yeah, infringement. Yeah, I mean, I think there's a sense in which
If somebody is even innocently threatening your life, you have a right to self-defense.
Like, if, you know, somebody gets strapped to the front of a tank that's about to run me over, and it's not my fault that they're there, and the only way to stop them is, like, blowing up the tank, I think I have a justification for doing so.
But as a third-party observer, I'm not sure if that would still be the case.
I think if I were observing the situation where an innocent person is strapped onto a tank
that's about to run over another human being, and the only way to stop it is to blow up the tank, it's certainly not obvious to me that I have a right to blow up the tank in the way that I have a right to defend myself if I'm in the situation, you know?
So maybe the fact that you're procuring the food for the obligate carnivore pet makes a difference here, in a way that we wouldn't say it's wrong for the cat to procure the food, or even, as a pet owner, for you to allow the cat to go out and do that; you procuring the food might be different.
Yeah, potentially, yeah.
Yeah, maybe I'd have to think about that more.
I just like to tell vegans to kill their cats, so.
Yeah, I don't know. I mean, there's a whole conversation that we can have about veganism generally, but I guess what I wanted to talk about was the ethical treatment of animals generally, or your sort of ethical view of animals, which is quite separate from the vegan discussion, though it's connected, of course; it informs it. If you don't think animals have worth at all, then you can do whatever you like to them. But even if you think animals have worth, you might not think that they have enough worth to not be killed under any circumstances. You might think that factory farming is bad. You might think that factory farming is still fine because, although they suffer, it's not that much, or something like this. I think that would be a bit weird to say, but you could do that. It's a separate discussion. But just on this topic of animal suffering, it's been sort of interesting to prod you a little bit, I guess. Um, yeah, and I'll be interested to hear what our collective listeners have to say about the matter. They're probably going to be really mad. But I will say, um, I think I'm only the second-worst kind of person.
I think the worst people are people that seem to be gravely concerned with animal suffering but have no problem eating meat products.
I think you have to pick one side of the fence there.
I don't think that you can be concerned with like some animals and not others.
I see people who are very concerned about, like, Cecil the lion, or concerned with cats and dogs, but who seem to have no issue with, or are completely indifferent to, things like factory farming and whatnot.
Yes.
Yeah.
It's possible maybe in the future if I think about it more.
I haven't really thought much about like a sliding scale of morality.
I mean, intuitively it feels better. I guess, like, sometimes if I'm in a grocery store, I might choose, like, a non-factory-farm thing, because the idea of little chickens running around and hatching eggs feels better than, like, the fucking PETA factories of the massive farmed chickens. So, yeah, that might be something I change my mind on in the future.
But, I mean, you do have some moral concern for these animals, then. Yeah, you have to, every human does; like, we share enough outward features. But again, I would say there are thought experiments where you can hijack that system very easily. Like, for instance, I could, or not me, but somebody could very easily program a robot that could exhibit such emotions, but where we'd be very confident there's no, like, experience going on there.
I remember there was one of the, is it the dog robot, or it might be one from the, I wish I could remember the lab that makes the huge walking robots. Boston Dynamics or whatever? Boston Dynamics, yeah. And there's a couple of videos where you watch them, like, push the robot over, and he's trying to stand up, and you actually kind of feel a little bit bad. You're like, oh shit. Or when you talk to, like, that ChatGPT thing, you can kind of bully it in some ways, and it's like, oh, this actually feels a little bit bad.
Did you ever see the movie Blade Runner 2049? No, I'm pretty bad with films. There's a part in that movie where there's a robot that belongs to another robot, and that robot gets killed, and you're like, oh man, that feels really bad, even though, you know, not only is it a robot, but it's a robot that was made to help another robot. It's like, yeah. So in some ways I listen to my intuitions, and then in other ways it's like, okay, yeah, but my intuitions can also take me to kind of silly spots. But, um, yeah, maybe the gradient thing is something I'll change my mind on in the future. We'll see. Yeah, maybe we can, uh, you know,
and talk about it again. I'm hoping that people
will be glad to see us together. I know I've had a lot of requests
to talk to you in various contexts
pretty much as long as I've been doing
YouTube. I think since
a time when
you had fewer subscribers than I did.
Sorry. Yeah. Well, hey, if there's ever, like, a particular applied or any non-meta fucking moral question that comes up, yeah, if you ever want to chop it up again or chat, feel free to shoot me a message.
I'm glad we got to do the meta stuff, because I know that you're a bit sort of allergic to it. So I'm glad we managed to get a conversation on it. Yeah, and, yeah, you too. So yeah, thanks. I appreciate the conversation.
Cool, man. All right. Well, Stephen Bonnell, thanks for coming on the podcast. Thanks for having me.