Modern Wisdom - #214 - Cosmic Skeptic - How Do We Define What Is Good & Bad?
Episode Date: August 27, 2020
Alex O'Connor is a philosopher & YouTuber. Get ready for a mental workout today as Alex poses some of the most famous and most difficult questions in ethics. What does it mean to say that something is... good? Why SHOULD you do one thing instead of another thing? Why should we care about wellbeing? What is the definition of suffering? On whose authority is anything good or bad? Sponsor: Check out everything I use from The Protein Works at https://www.theproteinworks.com/modernwisdom/ (35% off everything with the code MODERN35) Extra Stuff: Watch Alex on YouTube - https://youtu.be/gcVR2OVxPYw Subscribe to Alex on Patreon - https://www.patreon.com/CosmicSkeptic Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hi friends, welcome back.
My guest today is Alex O'Connor, otherwise known as Cosmic Skeptic, and today we are posing
some of the most difficult and famous questions in ethics.
Some of my favourite episodes that we do on Modern Wisdom are ones that pose questions
that force you to actively participate in the discussion.
I really enjoy bringing you along for the ride and making sure that you're engaged in what is being said. So today, Alex gives us some mind bending
questions that force us to question our intuitions around what is good and what is bad.
So today, expect to learn what does it mean to say that a thing is good? Why should you
do one thing instead of another? Why should we care about well-being? What is the definition of suffering and on whose authority
is anything good or bad? Really, it's an episode that you should be focused on while you're listening
to it. Let's put it that way. But for now, it's time for a mental workout with the wise and wonderful Alex O'Connor
Alex bloody O'Connor, how are you?
Chris, I am very well, all the better for seeing you, as they say.
It's been too long.
I haven't seen you in person since we went to that event in London, and I can't think how
long ago that was now, where you made me do the yoga that you seem to be telling everyone
and their dog about.
That photo of me, that photo of me that you keep posting is like, you know, the one of Beyonce that she wanted to disappear from
the internet. That's my version of that. Just me trying to work out how to put my
leg under the, yeah, it was a nightmare. But yeah, it's a shame. It's good to be
speaking to you again, even if it's a public conversation. Yeah, I know, man, it is
that was one hell of a weekend. One hell of a weekend. I've got the full length, one hour yoga form,
recording both of us doing it side by side,
and I'm considering offering it out to the highest bidder.
I'm pretty certain that some people on the internet
some fairly sort of prominent debaters of yours
that would pay good money for that kind of ammunition.
Yeah, yeah, I do worry about some of the ammunition
that my friends have on me and people they could sell it to.
I think maybe I could release it as a Patreon exclusive
or something, that might be a good,
or you could release it as a Patreon exclusive.
That's the only Patreon.
Yeah, they'll jump over to you though.
Yeah, I don't know, we'll see,
but I haven't even seen that video.
So God knows what other weird shapes
I tried to morph my body into.
It was graceful.
It was, it was your first time.
No one's good at the first time.
No one's good at the first time, as they say.
But no, yeah, that was February, man.
That was a while ago.
Was that February?
Yeah.
And it's crazy.
It's a crazy world out there.
I've been getting a bit of flack for it, actually.
Speaking of which, we're going to be talking
about ethics today, because when you reached out to me,
and I was on your podcast before, we talked about
veganism, which was the first time we'd properly spoken,
and that was ages ago now.
But of course, talking about veganism requires ethics more broadly as an underlying framework, but hopefully
I can sway you in an ethical direction that puts you off the idea of sharing those videos of
me online, but we'll see where it goes. I see how you're circling this back around. Now,
I'm a bad friend if that video ever surfaces anywhere. So okay, ethics,
morality, where do we start? Yeah. Well, look, I mean, it's a complicated business,
right? Ethics generally speaking, people have broad intuitions that certain things are
right or wrong. And it's clear that some of these things seem to vary across cultures,
across upbringing, but generally speaking, everybody
seems to have some kind of intuition that there is such thing as right or wrong. When
you break down ethics, there are a number of layers in which you can look at it. So one
of the most important distinctions is between what might be called practical ethics and what
might be called meta-ethics. The simplest way to define these two is to say that practical ethics is answers the question of what is good, whereas meta ethics answers the question of what good is right. So if you're talking about practical ethics, this is generally what people think of when they think of ethics, right, like is abortion moral is youth andia permissible? These kinds of questions, questions of vegan isn't that kind of stuff, social justice, any of that.
But underlying that, we need to have some understanding
of the meaning of the words we're using.
I mean, what does it actually mean to say something is good?
What does it mean to say that you ought
to do something like what are the definitions of these terms?
And this is what you call meta-ethics.
And this is the more difficult part, in my view.
And it's the part that seems to produce the most
kind of irresolvable disagreements.
Because if you just have a different intuition about what good is,
then you're kind of talking past each other.
Whereas if you can at least agree that, say, good
consists in what maximizes well-being for conscious creatures,
then when you have an argument about veganism,
it becomes essentially an objective discussion because you can just objectively
point to what affects well-being in various ways.
But if you disagree about what it actually means, then you run into problems, right?
The thing is, like, nobody seems to agree on this, right?
This isn't like a scientific endeavor where you conduct a bunch of experiments and you
do peer review and then you just figure out what the answer is.
It's like, we're asking the same questions that Aristotle and Plato were asking, not much
has actually changed.
And a lot of the time, it's useful to read those classics because a lot of people end up
reinventing the wheel and doing it badly because they're kind of thinking about ethics and
they have this bright idea and it's amazing. But, you know, a hundred other people have done it
before them, but it's so buried in the literature that they don't know it exists. So it's really
worth kind of familiarizing yourself with some of the, at least the basic kind of conceptions of ethics
across the board before you can jump in, talking about practical ethics. But I don't know, like
the kind of most interesting distinction that people tend to start with is what ethics is kind of driving at.
I mean, so for instance, if I asked you what you think it means to say something is good.
What does that, I mean, what do you, on your intuition, what do you think that really means? Leaving the situation with more well-being or less suffering than when you found it?
Sure, so the question trivially just becomes, why should you care about well-being?
Why does that matter? Why shouldn't we try to maximise suffering? There is a preference in one way and that tends towards something
which is not painful. Okay, but what if someone comes along and says, look, I prefer
other people's suffering. I don't want myself to suffer, but I think, you know, I'm a sadist
and I really enjoy other people's suffering. I think the best thing to do is to cause
as much suffering to them as possible whilst minimizing my own suffering.
Presuming that we're all sovereign wills, the preference of one shouldn't influence
the preference of another.
Right, so now the question becomes why, right?
Because you can say something like, look, generally there's a preference towards this
thing or the other, but just because the majority of people prefer something, that doesn't mean it's necessarily the right thing to do, okay?
And the first question that this highlights is objectivity and ethics, right? One of
the most important distinctions to make is between a conception of objective ethics and
a conception of subjective ethics. To say that something is objectively right or objectively
moral, to say that objective ethics exists is to say that ethical propositions are true
regardless of what people think about them.
It doesn't matter what your opinion is, in other words. Because you could say something like,
well, it could be to do with a preference for well-being, but if everybody on planet
earth decided that, you know, the Holocaust was the right thing to do, right, say the
Holocaust wiped out all of its opposition and Germany won the war, and everybody, or at
least the majority of people on planet earth, were convinced that that was a good thing to do. Most people want to say
that that doesn't matter, it was still bad. It was still wrong, even if everybody agrees
with it, right? There's something universal that sits outside of an individual's sense of what
is right. Right. Now, of course, that leads us to the intuition that morality does have at least some objectivity.
That you can say that an ethical proposition is actually true or false, and it doesn't matter
what you think about it.
The problem then becomes grounding it.
On whose authority is it good or bad?
Traditionally, this is where religion would step up to the mark and still tries to today.
In fact, a popular argument for the existence of God
is the moral argument.
And the moral argument is really simple.
And it says, if God does not exist, objective morality
does not exist.
Objective morality does exist, therefore God exists.
The idea being that if there are such things
objective morality, it needs to be grounded in something.
And it can only really be grounded in some kind of authority, right? It can't be grounded in some kind of
preference of a human being or some kind of naturalistic feature. It has to have some kind of authority behind it, and that can only really come from
a kind of supernatural authority that supersedes everybody else, right? And so it's basically saying that
because objective morality exists, God must exist. And there are multiple ways to respond to this.
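Before getting to the responses, it's worth noting the argument's logical shape, which is a modus tollens. A minimal sketch in Lean (my own rendering, not from the conversation; it shows only that the form is valid, with both propositions left abstract):

```lean
-- Modus tollens form of the moral argument (my own formalisation):
-- P1: if God does not exist, objective morality does not exist.
-- P2: objective morality does exist.
-- C:  therefore, God exists.
example (God ObjectiveMorality : Prop)
    (p1 : ¬God → ¬ObjectiveMorality)
    (p2 : ObjectiveMorality) : God :=
  Classical.byContradiction (fun hng => p1 hng p2)
```

Since the form is valid, the live disputes are over the premises: grounding objective morality in something other than God denies the first premise, while denying that morality is objective denies the second.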
The first is to try to ground objective morality
in something else.
But the second thing you can do
before we get too far ahead of ourselves
is to just say that morality actually isn't objective.
You could say that morality is subjective.
That is, it is dependent on what the person feels.
Now, of course, the biggest problem with this
is that you run into the kind of situations where somebody might say, well, in my subjective opinion, the Holocaust was a good thing.
Therefore, it was the right thing to do, right?
And this doesn't sit right with us, but there are different levels at which subjectivity
can work.
Right, so let me give you one example of the utilitarian case.
Utilitarianism is broadly an objective ethical theory, but I'll give you a kind of form
of it that could demonstrate how the objective and the subjective can come together. So I might say
something like, I subjectively value my well-being, and so do you. You subjectively value your well-being,
and everybody subjectively values their well-being. Now, it's trivially true that you have to value your own pleasure. You have to think it's a good thing.
Right? Some people take pleasure in things that other people find painful, but
that would just be a wrong definition of pain for them, because for them that would be a pleasurable
experience, or at least it would lead to a pleasurable experience. Now, I can say something like, look, the preference for pleasure is subjective.
It's just due to your preference. But there are objective things we can know about how to maximize it.
So if I was a utilitarian who thought that the basic justification for having pleasure and
well-being and suffering as the basis on which we ground ethics, I could
say that that's subjective, but I could then be objectivist in my ways of kind of
analyzing situations and seeing what would actually maximize pleasure.
So it can get a bit complicated, right?
And it's like, this isn't even the beginning.
But like one of the best ways oftentimes
to explore different ethical theories,
and one of the ways in which people like to write about it
in the academy is essentially to rely on intuition,
because you come up with some kind of ethical theory,
and you need a way to test it.
So you have your ethical theory, and then you run it
against some kind of counter example.
And you say, well, this ethical theory
leads us to this conclusion, which seems so absurd
that we have to reject the ethical theory.
And this is a lot of the time how ethics is done.
Right, in order to show why someone's ethical theory is wrong,
you show what it leads to and show that it's an absurdity
or an immorality or something that's so obviously bad
that we have to throw out the theory.
This is known as a reductio ad absurdum.
And that's the best way to explore ethics.
But like jumping into the question of ethics,
it's like, it depends on what you want to do.
It depends on what, if you want to know what good is,
it depends on if you want to sort of work out
famous moral dilemmas.
It depends on if you're just trying to look
at a specific moral issue and trying to kind of break it down.
Like, it depends what you want to do, right?
Like, in a conversation like this, there are so many avenues you can go down.
I understand. Is it possible to have a conversation about practical ethics
with the conversation about meta-ethics still being poorly defined?
It seems a little bit like playing football, but some people think you're allowed to handle the ball
and other people think you can only kick it.
Yeah, well, that's exactly the problem. If you don't have the same meta-ethical
theory, it's like you could be playing chess with someone who's using the rules of rugby,
and it's just, it's not going to work, right?
But generally speaking, people do have certain base level intuitions that are roughly the
same, right? And you can talk about practical ethics without breaking it into meta-ethics.
And I think it's more interesting to do so, because meta-ethics can get complicated and tricky,
and you have to define the difference between doing
and allowing and you have to,
it gets complicated.
I imagine it gets quite semantic as well,
that a lot of the time it's,
what does this word mean?
And then you get into sort of questions of etymology
and sort of bizarre, it's just linguistic territory. This, I think, is one of the big challenges we have with communication,
sort of generally, at the moment in the media, that no one's actually defining what words mean.
Words can mean multiple things. There's a term called semantic overload,
which I learned from Ben Shapiro, and that's what's being used an awful lot at the moment, semantic overload.
So I imagine when you're having a very complex discussion, really trying to get into the
weeds with something and work out the nuance of practical ethics and then someone comes
in and goes, well, I know that you said that you kicked the ball, but is it really a ball?
And why is it called ball?
And you're like, oh, fucking hell mate, come on.
Exactly right.
But this is what you run into, especially if you talk to someone who's got an interest
in philosophy or ethics. If you talk to them on a practical level and you say, well,
look, I think that, you know, the mass confinement and torture of animals is immoral for the
purpose of, you know, a fancy steak, right? If you talk to the average person,
they'll turn around and say, but what if they're treated this way, what if they're
treated that way? They bring up practical concerns. But an ethicist might say, well, why?
Why should I care about animal suffering?
What does that matter?
I could say, well, care about animal suffering
for the same reason
you care about human suffering.
And they say, why should I care about human suffering?
Yeah.
And the question can go on and on and on and on and on and on.
And you never kind of come to a useful stopping point
unless you can agree on something.
But you can talk about practical ethics
without breaking it down to meta-ethics. If you do have certain level agreements, so
if I agree that we shouldn't cause harm to human beings unnecessarily, and so do you, then
I can make an argument from consistency. I can say that, well, I think you're holding
an inconsistent set of beliefs if you're okay
with harming a non-human animal,
but you're not okay with harming a human being.
It's like, what are your reasons for one and the other?
And I could show that maybe there's an inconsistency there, right?
But like, you could be consistently wrong.
Right, like, we could make our ethical, practical ethical case
completely consistent.
I could say that, well, in order to be against animal exploitation,
you also have to be, I don't know,
you have to be pro-life in this instance and that instance. And for some strange reason, you also have to hold this
other belief and this other belief and this other belief. And you can kind of convince
someone of all of those things. But like, they could just have a consistent
worldview that's wrong at its basis, right? So it depends what you're trying to do. If you're trying
to convince someone of a moral cause, then it's better to talk on the practical level and try to point out inconsistencies. But if you're trying to get to the
question of what is actually good, then you're better off talking a bit about meta-ethics.
And one of the most important questions is, what is the focus of ethics? Should ethics,
let's say, focus on the consequence of an action? Should
it focus on the action itself? Should it focus on the agent performing the action? Right,
these are broadly the three ways in which people distinguish ethics. So if we consider
a statement like murder is wrong, some people might analyze that to mean that murder is
wrong because the consequences that it leads to, that is, you know, someone dying, people suffering, people mourning,
are bad, right? And so generally speaking, in order to determine whether something's right
or wrong, we look at the consequences of the action. Seems somewhat intuitive, and some
people like to insist that this should be the focus of ethics, and this is consequentialism,
sometimes called teleology, from the Greek word telos, for end or purpose. Some people prefer to look at the agent. They prefer
to say that the reason you shouldn't murder is because the virtuous person wouldn't murder,
right? Murder is not a virtuous thing to do. Aristotle's ethics was a virtue ethics theory,
and it was kind of like, the right thing to do is what the virtuous person would do, in other words.
So it's not so much about the action or its consequences. It's about
the person committing the action, right?
Some people prefer to just look at the action itself, not the consequences.
They say murder is wrong in and of itself, regardless of the consequences, regardless of who's performing it.
And this is a typical view of religious people, a lot of the time,
with divine command theory: you think that ethics is just what's commanded by God.
So if God says, don't murder, then don't murder. That's kind of it, full stop.
Doesn't matter what the consequences are, it's just wrong in and of itself.
And all of these things have kind of weight to them.
And the reason why people
like kind of flip back and forth in them when they're studying them is because each of
them seem to have kind of difficult ethical territory. Like, I think most people
at the beginning tend to be more attracted to consequentialist ethics. And that's because
I think that, generally speaking, our society is a bit more based on consequentialist
ethics than anything else. Like, in the modern era, that seems to be the implicit way that people do ethics.
But there are some really difficult problems with that.
For instance, let me take your proposition that the right thing to do is what maximizes
well-being.
This would be a consequentialist view, and essentially a utilitarian one. Utilitarianism being the idea that we should maximise utility, and utilitarians
identify utility with pleasure. So essentially the best thing to do is to maximise pleasure
or minimize suffering. Now there are various problems with this, but let me give you one example.
This comes from a guy called Roger Crisp, who's a kind of leading John Stuart Mill scholar, and every undergraduate
at Oxford has to read his commentary on utilitarianism. And he gives this example of the rash doctor,
right? So let me ask you a question here. A doctor has a patient and they've got two potential medicines that they can give to the patient.
Option A and option B. Option A, if successful, will restore the patient to a hundred percent
health, but it's got a 99% chance of failure and only a 1% chance of working.
The 99% chance is that they'll die.
So 99% chance of this is going to kill the patient.
Only 1% chance is going to succeed,
but if it does, it's going to restore them fully to health.
Option B, it will only restore them to, say, 85% health,
but it's got a 99% chance of being successful
and only a 1% chance of the patient dying.
Say the doctor chooses option A and it works. Did the doctor do the right thing?
So everyone who's listening, I've already warned you about this, but I want you to be playing
along at home as well, because this difficulty, the mental gears that you're going to be able
to hear in me that are worrying away, I want you to be suffering along with me. So, from a consequentialist, just the outcome, does the end justify the means? I suppose,
in that form, yes, you could do that a million times and keep on getting the one out of 100.
Well, so the difficult thing to say is, like, intuitively, when faced with the option,
before we know what's actually going to happen, and you've got the two medicines in front of you, you'd probably advise
the doctor to take option B, right? And that seems justifiable, right? It seems like that should be
the case. But the weird conclusion is that if he uses option B and it works and the patient's
restored to 85% health, whether or not the doctor did
the right thing completely depends on what would have happened if he had administered
drug A. Because if, had he administered drug A, it would have failed, then what the doctor did actually
did maximize well-being, right? Because it was 85% health versus death. Whereas if it were
the case that, had he gone for option A, it would have worked,
then what he's done has actually not maximized pleasure, or well-being, let's say, right?
Because instead of 85% health, he could have got 100% health. Now,
the kind of easy answer to this is to say, well,
okay, so it's not actually about you shouldn't do what will actually maximize pleasure,
you should do what will probably maximize pleasure, right?
But you can see we've already kind of adapted the theory, right?
We've already gone from kind of saying, well, obviously
the right thing to do is whatever's going to actually maximize someone's wellbeing.
But like, that's not always the case.
Because of the caveat in that.
Yeah, even if, like in this situation, had you done the other thing, it would have actually, in reality, in the actual world, caused more pleasure, it's like, it probably wasn't justified to do that, right?
So, yeah, now we're kind of talking about probabilistic utilitarianism, right?
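As a side note, the arithmetic behind that probabilistic reading can be tallied directly. A sketch in Python, where counting death as 0% health is my own simplifying assumption rather than something stated in the example:

```python
# A sketch of the expected-value ("probabilistic utilitarian") reading of the
# rash-doctor case. The probabilities and health outcomes come from the example;
# treating death as 0% health is my own simplifying assumption.

def expected_health(outcomes):
    """Expected health, given a list of (probability, health_percent) pairs."""
    return sum(p * h for p, h in outcomes)

# Option A: 1% chance of full recovery, 99% chance the patient dies.
option_a = [(0.01, 100.0), (0.99, 0.0)]
# Option B: 99% chance of 85% health, 1% chance the patient dies.
option_b = [(0.99, 85.0), (0.01, 0.0)]

print(round(expected_health(option_a), 2))  # 1.0
print(round(expected_health(option_b), 2))  # 84.15
```

On expected well-being, B dominates A by a wide margin, which is why the intuitive advice is B; the puzzle in the example is that it's the actual outcome, not the expectation, that seemed to settle whether the doctor did the right thing.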
Does this continue to roll down? So does probabilistic utilitarianism then split into some other subdivision,
and then some other subdivision? Well, it doesn't always kind of subdivide in that manner, but
there are lots of different kinds of divisions. So, I imagine there's just a tree branch that
continues to go down. My point is like, for every situation that you encounter, do you then need
to continue to create a sub discipline within that that allows you to explore that particular
type of solution? Pretty much, and luckily, because these questions have been asked for
thousands of years, you can find hundreds of essays on any particular kind of individual
instance of a moral dilemma that you have. But there's a further distinction
that might be made between what Roger Crisp, at least, calls the criterion of good and
the decision procedure, right?
And these are two separate things.
So the criterion of good is like the criterion by which we determine whether or not something
is good.
Whereas the decision procedure is the way that we try to go about bringing about that good, right?
So take this utilitarian analysis where we've shown that you should act in a way that
probably maximizes pleasure.
That would be our decision procedure.
We kind of decide intuitively that the way to determine how we decide what to do should
be based on probabilistic utilitarianism.
But has it changed the actual criterion of good?
If we offer a kind of abstract analysis of what good is?
Well, we don't think that good is the result of what you should probably do, or something like that.
We still think that the good thing is what maximizes pleasure and minimizes suffering,
even if we've decided that the way we decide which actual action we're going to take is more probabilistic.
So the criterion of good, the utilitarianism, is still what is actually most pleasurable,
but the decision procedure leads us to probabilistic utilitarianism.
The route to get there now has some form of discounting that it's been through.
Yeah, and it seems a bit strange. Why is it that we've got an ethical theory where we've
decided that this is what's good, but that's not what we should actually do in order to try and achieve that good?
Right, it seems like an inconsistency. It seems a little strange, right?
There are further distinctions.
The classic kind of argument against utilitarianism is something like an instance of a gang rape or something.
It's like, well, doesn't the pleasure of the many
outweigh the suffering of the single individual? And some people would say, well, no, because the suffering is so great that even five people, you know, getting immense pleasure, it's
not going to outweigh it. But if you think that, then just make it six people or seven people
or a hundred people until the scales get balanced out. And some people would say something
like, well, clearly, it would still be wrong in all circumstances. Right. So can we really
say that the maximization of pleasure
is the criterion of good, is how we should determine what we're doing. If we've got a situation
where it seems it doesn't matter how much the scale of the well-being tips one way or the other,
we still wouldn't be in favor of this. And it's like, yeah, you've now got to rethink things.
And this is why people prefer sometimes a kind of action-based view of morality,
known as deontology, right?
Like the idea that the thing is wrong in itself, right?
It's not about the consequence,
it's that gang rape is wrong.
It's not wrong because of the suffering
that it will bring about this person
or something like that, it's just wrong, right?
And so when faced with an ethical dilemma like that,
you've kind of got two choices.
You either have to further adapt or explain
or analyze your utilitarianism,
or maybe you have to adopt deontology,
or maybe you have to accept the conclusion
that gang rape is actually moral.
And that's the least popular line to go down,
funnily enough.
But so, the utilitarian might say, okay, well, look, I mean, it's not about
what will maximize pleasure in any given instance, but let's say, you know, the best thing
to do is to act in accordance with a general rule which, if followed broadly, would maximize
pleasure, right?
So even if in that individual instance, you know, it would maximize pleasure to allow
people to commit
horrible crimes.
If we allowed everybody to live by that rule, suffering would rise overall because
of people being scared of being accosted on the street, of being
robbed, or raped, or whatever it may be.
So it now becomes, the thing that we should do is act in accordance with rules, which
if generally abided by would maximize pleasure.
Okay.
So now our decision procedure has kind of morphed into, you shouldn't do what always maximizes
pleasure.
You shouldn't even do what always probably maximizes pleasure.
You should do what would probably maximize pleasure if we made it a rule that everyone
followed.
It's like we're getting a lot more kind of further detached.
From the tree.
We're done.
And you notice the way that we've done that
is simply by taking the ethical theory that we started with
that you kind of hypothesize at the beginning
and just said, but that leads to this.
Okay, so we should adapt it in this way,
but then that leads to this and that leads to this
and that leads to this, right?
And like, yeah, these things kind of come out of nowhere.
Like a lot of the time someone will come up with an idea that just says like,
like, what about this counter example?
And it just kind of blows everyone away and everything has to be rethought.
That happened in the, in the philosophy of knowledge,
because one of the most interesting things about philosophy to me is that
nobody has a sufficient
analysis really of what knowledge is. No one can really decide on what constitutes knowledge.
And the reason for that is because, well, let's think about it. Okay, let me just
ask you, just out of interest. What do you think, if you had to give a definition of knowledge,
what would it be? How can you say that you know something is true? That sounds like two questions. Knowing that something is true versus knowing things.
Well, I mean to say, like, what's the definition of knowledge in either case?
What does it mean to know something?
An accumulation of understanding about the world?
Okay, so basically, you're now holding a belief about the world which is represented accurately
in reality.
Okay, so which is actually true, right?
So you're holding a belief about the world which is true.
How was that, far off?
It's almost as if someone else has said that before.
Well, you can test it, right?
Because you could say something like, if somebody was in a room with no windows
and irrationally, they just believed it was raining. But it was raining. Did they know
that it was raining? Because they believed that it's raining and it's true that it's
raining, but do they know that it's raining? Well, clearly not. Okay, so it can't just
be kind of having a belief that's true. That can't be knowledge because you can accidentally hold a true belief.
So then the definition, the popular definition became justified true belief.
And you'll hear this phrase thrown around all the time. It's like, well, you have to believe something is true,
it has to be true and you have to be justified in believing that it's true.
What do they mean by justified?
That you have to believe that it's true for good reasons.
And people will offer different analyses of what is a good reason and what's not.
So they'll say, for instance, just imagining that it's raining outside isn't a good reason.
But if you look out of the window and you see that it's raining, that's good reason
to believe that it's raining.
And so if you look out the window, you see that it's raining, then you believe that
it's raining, it is actually raining, and you're justified in that belief.
So you know that it's raining.
Well, problem solved, right?
Enter Edmund Gettier, who just kind of blew the lid off everything.
And as far as I recall being told this, he did it somewhat flippantly,
like he was going through different problems that seemed
to just be kind of put to bed, and, just for the hell of it, was seeing if he could come up with counterexamples.
And he wrote this really, really short paper. It's like two pages long and it just blew
up the philosophy of knowledge, right? Got rid of this idea of justified true belief.
And I'll tell it to you now, right? This is kind of great.
This sounds a little bit like, you know, around about springtime when you finally got rid
of all of the shit old clothes that you don't need anymore. And you've like put them all in the charity shop,
your mum's taking them or Oxfam in those big sort of see-through bags. And
you like, right, all my socks are back in, these are only the socks that I wear,
all the drawers are organized, color coded, size, everything else, and
someone's just come in and gone and grabbed all of your stuff and then just
thrown it around the room.
Yeah, it's also sometimes a bit like, someone's kind of,
as your mom is driving away with all of the clothes
in the bag, someone looks and goes,
you know that she's taken this with her,
and you have to go after it.
Oh, fuck, fuck, fuck.
You realize that what you've said
has actually taken away
this really important belief of yours,
because you're like, I think this is the right theory, this is the right way to
go. And someone says, yeah, but you know that if you do that, then this other belief
you hold has to fly out the window and you're like, oh crap, and you're running after it
as fast as you can.
Okay, so this is kind of what happened with knowledge, right? Gettier, yeah.
And such cases, as he presented in this paper, are now known as Gettier cases. Essentially, a Gettier case is an
instance of justified true belief that is not knowledge. Because again, we're working with
counter examples here. So if the theory is that, well, knowledge is justified true belief,
then if you can offer an example of someone having justified true belief that isn't knowledge,
then we have to throw out that theory and we have to come up with something better. So Gettier
says, imagine, and it doesn't need to make perfect sense, but it's been a
while since I've read this, so I want to make sure I get this right. Imagine somebody is waiting;
they're in a job interview. There are two guys in a job interview.
And while they're waiting to hear back from the interviewer, the person he's sitting across
from is getting bored, and he decides to take the
coins that he's got in his pocket out and starts counting them on the table because he's bored.
And he sees him counting 10 coins. So he knows that this guy's got 10 coins in his pocket.
Then what happens is the interviewer comes out and basically says, listen, I haven't
spoken to the board yet, but it seems like you're going to get the job, right?
We're pretty sure you're going to get the job. Or sorry, we're pretty sure the
other guy is going to get the job, the guy who was counting the coins.
He says, we're pretty sure this guy is going to get the job.
And this kind of gives you a justified belief that this man is going to get the job.
And by derivation,
you have a justified belief that the person who will get the job has 10 coins in his pocket.
Because you've seen this guy counting out 10 coins and you've got a justified reason to think
that he's going to get the job.
And so you have a justified belief that the person who gets the job is going to have 10 coins in his
pockets. Now something goes wrong, something like really unexpected, something unlikely. So it's
not fair to say that you could have predicted this, but something happens. And as it turns out,
you end up getting the job. It's you, not the other guy, you get the job.
And you think, oh, this is great, I've got the job.
But just as it happens, you happen
to have 10 coins in your pocket.
Just by chance, you've also got 10 coins in your pocket.
So your belief, you had a justified true belief
that the person who gets the job
would have 10 coins in his pocket.
But it seems,
it seems like you can't say that you knew that. Because, as it turns out, yeah, I guess it was
true that the person who got the job had 10 coins in his pocket, and I guess you were justified
in believing that. But surely that's not knowledge, because clearly you kind of meant
something else, right? That can't be knowledge, but this is an instance of justified true
belief. And this actually happened to me once in person, because there are all kinds of Gettier cases that you can construct.
Mine is maybe a clumsy one to understand, but it happened to me once.
I was in a car and I was driving around a big corner, right? And so I saw this child
kind of above the hedge, around the corner, kind of bobbing up and down, and I looked
over there and I thought she was riding a horse.
So as we're going around the corner, I think, oh man, there's a horse over there.
That's quite exciting. So I was quite excited to see this horse, right?
So we get around the corner, and it turns out she's not on a horse.
She's on, like, her dad's back as he's walking, right?
But just by chance, there also happens to be a horse in the field.
Now I kid you not, this actually happened to me.
I sat there and I thought to myself, this is a Gettier case. Because
although, you know, maybe I wasn't entirely justified in believing that the girl was on a horse,
seeing her that high up, bobbing along, I think I could form a justified belief that she was riding a horse.
And the belief was that there's a horse there. So I believed that a horse was there, it was true that a horse was there,
and I was justified in believing that a horse was there.
But did I know that the horse was there?
Like, can you really say that I knew it, though?
You see what I'm saying?
Like, this doesn't seem to count as knowledge, right?
And so Gettier, kind of,
starts talking about these cases, and people are like,
oh, damn, so now we have to change it up.
And it just, it just kind of completely upends
everything that we think about the analysis
of knowledge. And this is what happens in ethics all the time.
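The justified-true-belief analysis and the Gettier counterexample described above can be sketched as a toy model in code. This is a purely illustrative sketch; the `Belief` type and `is_jtb` predicate are hypothetical stand-ins, not a formal epistemology:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    content: str
    held: bool        # the agent believes it
    true: bool        # the proposition is in fact true
    justified: bool   # the agent has good reasons for it

def is_jtb(b: Belief) -> bool:
    """The classical analysis: knowledge = justified true belief."""
    return b.held and b.true and b.justified

# Gettier's job-interview case: you justifiably believe
# "the person who gets the job has 10 coins in his pocket"
# (you watched the other candidate count 10 coins, and the
# interviewer said he'd get the job), and by sheer luck it
# turns out true of you instead.
gettier = Belief(
    content="the person who gets the job has 10 coins in his pocket",
    held=True,
    true=True,       # true only by accident: you also had 10 coins
    justified=True,
)

print(is_jtb(gettier))  # prints True
```

The gap the counterexample exposes is exactly this: the check passes, yet intuitively this isn't knowledge, because the truth of the belief is accidental. So the three conditions can't be sufficient.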
Some bastard comes in with a cricket bat and breaks everything.
Exactly. But sometimes it can also work in people's favor, right? So an example would
be with an analysis of free will. I'm someone who doesn't believe that free will exists.
Or rather, I say, I have an active belief that free will does not exist.
Why do you have that distinction?
Because it's one thing to be just unconvinced
of something.
It's another thing to believe that it's false.
So, like, let me put it this way.
It's agnostic versus atheist.
Pretty much, yeah.
But I would say that agnosticism is a claim to knowledge,
whereas theism is a claim to belief.
So I'd characterize it like this.
This comes from my friend Matt Dillahunty.
If I had a random jar of coins, I don't have one I can use right now, and
you didn't know how many coins were in the jar, and neither did I, and I said, look, I
think there's an even number of coins in this jar.
Would you believe me?
No.
But that doesn't mean you believe it's false, right?
No, I just don't agree, I think.
It's the distinction between not believing a proposition and believing that the proposition
is false.
So if you believe that it was false, that means you believe there's an odd number.
If you just don't believe that it's true, that means that you're
kind of reserving judgment. So some people might say, I don't believe in free will. That
is I'm reserving judgment. I don't know. I'm not convinced that free will exist. I'm
saying I'm convinced that free will does not exist. But that's a whole other podcast.
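The distinction being drawn here, not believing a proposition versus believing it is false, amounts to three stances rather than two. A small illustrative sketch (the `Stance` enum is a hypothetical toy model, not anyone's formal doxastic logic):

```python
from enum import Enum

class Stance(Enum):
    BELIEVE = "accept P"                     # e.g. "the jar holds an even number of coins"
    DISBELIEVE = "accept not-P"              # e.g. "the jar holds an odd number of coins"
    SUSPEND = "neither accept nor reject P"  # reserve judgment

# Rejecting the claim "there's an even number of coins" only rules
# out BELIEVE; it does not commit you to DISBELIEVE (an odd number).
stance_on_even = Stance.SUSPEND

not_convinced_true = stance_on_even is not Stance.BELIEVE
not_convinced_false = stance_on_even is not Stance.DISBELIEVE
print(not_convinced_true and not_convinced_false)  # prints True
```

On this picture, "I don't believe in free will" is ambiguous between SUSPEND and DISBELIEVE, which is exactly the ambiguity being flagged in the conversation.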
But this has interesting implications for ethics because there's this common intuition
that in order to be held responsible for something, you need to have freely chosen to do
it. It's like, you can't be held responsible for something you didn't freely choose to do.
This is a general intuition that people held for the longest time, which seems to make
a lot of sense.
How can you hold someone responsible if they can't have acted otherwise?
Can you hold them morally responsible?
And so if there's no free will, basically nobody's ever responsible for anything they do.
Is there any morality left?
Well, exactly. This is the big problem. It's like, where does morality go?
But then again, people were coming up with counterexamples. Now, let me see, I might have to search, actually.
Let me try and find something here. So generally speaking, the idea is that the principle could be summed up as:
you can't be held morally responsible if it's the case that you couldn't have acted otherwise.
So if you were a passenger on a train, and a train kills a person, it's not your fault
that the train killed the person, you didn't get to control the train.
Yeah, but also it might be something like, if you trip over by mistake and you knock someone
in front of a train and they get hit by the train, that's not really your fault, because you
accidentally fell over, and that's different from pushing them, potentially.
But if you tripped through no fault of your own, or if someone else pushed you and you
ended up pushing into them, you're not morally responsible because
you couldn't have acted otherwise.
And it's based on intent.
It seems like that appears to be the distinction.
Yes.
I suppose so.
Actually, no, it might not be, because the constraint you're stating is "couldn't
have acted otherwise."
So you might not have intended to do something, but you could have acted otherwise.
Yeah, this is the manslaughter versus murder versus accidental death.
Yeah, pretty much.
Oh, fine.
But then it also gets more complicated.
Of course it does.
Is there anything that doesn't?
Oh, man.
Veganism is quite simple.
Yeah, veganism. It's quite simple.
Vegan bingo at home, that's three times today.
Yeah.
Haha.
Yeah, how do you know if someone's vegan? You know, that's the joke.
That kind of gets on my nerves. It's like, um, how do you know if someone's got an
allergy? Don't worry, they'll tell you.
Like, yeah, of course I'm going to bloody tell you that I'm vegan if I'm going for a meal with
you.
Or I'm going to die.
Yeah.
Okay.
So, uh, intent.
What's this thought experiment?
So, uh, Harry Frankfurt comes along, and basically the job is to disprove this intuition:
you've got to come up with a situation
in which someone couldn't have acted otherwise,
and yet is still morally responsible for their action,
which seems like a hard task.
One example that Peter van Inwagen has given
is of somebody who witnesses a crime
and decides, immorally, not to call the police. He should call the police but decides not to, feeling a bit evil or something, doesn't
want the person to see justice. But this person doesn't know that the phone lines were down
anyway. So even if he had tried to phone the police, he couldn't have reached them. So this
person did not call the police, but he couldn't have acted otherwise, because he couldn't have
phoned the police. But we still hold him morally responsible, even though he couldn't have
acted otherwise. That's a good example. Now, somebody might say something like, yeah, but look,
I mean, clearly what matters here is, like, the intention, right? It's the intent. It's kind
of in the person's mind, because he still could have tried to call the police and then found the phone
lines were down. But, as Harry Frankfurt discusses, there are loads and loads of very particular
cases. One of the examples you could potentially give is that you've got somebody,
like a neuroscientist, who is hooked up to your brain in such a way
that he can kind of prod your brain and make you do certain things and change your mental
states. And he basically says, you know, let's say you're voting in an election or something, and one of the candidates
is some kind of genuine tyrant, and it would be immoral to vote for this person.
The neuroscientist essentially scans your brain and sees that if you're going
to vote for the tyrant of your own accord, he just lets you do it.
But if he realizes from your brain activity
that you're about to go and vote for the other candidate,
he will kind of fiddle about with your brain
in such a way that you become motivated to go
and vote for the tyrant anyway.
So, you know, let's say that you go to vote
for the good party, and this person hooks up
to your brain and changes your motivations
and makes you want to go and vote for the tyrant instead.
You'd say that person's not responsible for voting for the bad person, because it's not their fault that someone was in their brain. But
what if the person goes and votes for the bad person of their own accord?
Clearly they're morally responsible, because they've freely chosen of their own accord to go and
vote for this bad tyrant, right?
But they couldn't have acted otherwise, because
had they tried to vote for the other one, they would have been made to vote for the tyrant
anyway. It's like, can we really say they're morally responsible? Well, intuitively, we
want to say, yeah, they're still morally responsible. But if you're morally responsible,
despite the fact that you couldn't have acted differently, you couldn't have even wanted
differently, then we seem to have a counter example to the case or to the proposition that
you can only be held responsible if you could have acted otherwise.
And again, this leads to a wealth of discussion, and there are lots of different responses to this.
I've got my responses to it, other people have their responses to it, and I think there's
satisfactory responses to this. But at the very least, you can see that it's not as simple as
people originally thought. Because the great thing about
ethics is you can come up with incredibly contrived thought experiments. You can come up with
the most ridiculous scenarios in the world, and someone can say that's the most unrealistic
thing I've ever heard. But it doesn't matter, because if it contradicts your theory,
then it means the theory is wrong, right? It doesn't matter how crazy the thought experiment
has to be. It's kind of like when people are talking about utilitarianism and they say, okay, but what if we painlessly kill somebody,
right? And they suffer no pain. Someone says, well, that's still bad because people around them
would suffer. They say, okay, well, what if we kill somebody who's, like,
homeless and has no family and no one cares about him? We say, okay, yeah, but that's still wrong, because
someone has to do the killing,
and that might kind of affect them.
And they say, okay, so what if
we're in a cabin in the woods
and we painlessly kill someone,
and the person who does the killing immediately forgets,
and the person who makes them forget
doesn't know why they're making them forget.
And it's like, it's getting a bit contrived,
it's getting a bit crazy,
but the thing is, if in that situation you've still got a counterexample, and the counterexample is
logically coherent, as far-fetched as it may be, it can still expose the flaw in the ethical thinking,
right? The battleground of ethics is really fast moving, I'm guessing, based on
that. Like, you think about science, you have to construct an experiment, perhaps you can get some funding,
put some pants on, leave the house, do the actual thing, analyze the data.
Whereas all that you really need to do here is have an armchair and a brain.
One of the great things about ethics is you can do it all by yourself and you don't need
any funding for it.
But at the same time, it's also very slow moving because the basic questions have been the
same questions for thousands of years.
It's just like somebody will come up with an interesting particular thought experiment
that requires various different responses, or a thought experiment that ends up having
quite an impact on the rest of philosophy or something like that.
But like the fundamental question at the basis hasn't really changed. Modern developments can inform our ethical discussions, but they
don't really change the nature of them. For example, when you develop technology, you can
start talking about living in a simulation and this kind of stuff. But the question is still
the same. So the thought experiment might be the experience machine, hooking someone
up to the Matrix, where they experience more pleasure, but it's not real. Yeah, sure, that's kind of a new thought, but the idea of somehow replicating
reality is not new.
We've just moved from Descartes and his evil demon that makes you think you're living
in a particular way, to it being done by the matrix, right?
So, like, particular thought experiments in the way that they're expressed can change quickly.
But the fundamental questions are still the same.
So, has it been quite a while since anything revolutionary,
in terms of a question to be asked,
as opposed to a criticism of an existing question,
has come up?
I mean, you'll tend to know when that's happened,
because this is what can make people famous,
when they come up with a way of kind of putting things together.
A lot of people consider the most recent really important
person to have been someone like Immanuel Kant.
The thing is, it's difficult
because it's so abstract, right?
It's difficult to determine whether what someone's saying
is actually new or if it's just a synthesis
of previous ideas or something like that.
But generally speaking, the progress is slow.
The big steps that are taken in ethics
will be on a very, very particular question
on a very, very particular point of ethics.
If somebody manages to prove somehow, philosophically,
that we're
not living in a simulated reality, that would be a really important philosophical discovery, however we'd managed to argue it. That would have wide-reaching
implications. For instance, suppose someone discovered, say it just so happens that they've
got a philosophical argument that says that we can't replicate consciousness. Because, you know,
the simulation argument of Nick Bostrom says that humanity will
get to the point where it can simulate human consciousness.
And that consciousness will be able to simulate consciousness and so on and so on.
And the likelihood that you happen to be in base reality is minimal.
It's tiny.
So, to interrupt:
did you watch the episode of Joe Rogan where Nick tries to explain that to him?
I didn't, but we've talked about that.
Yes, I brought it up.
Anyone that's listening: so, you'll have heard me talk about Superintelligence a number of times,
one of my favorite books. Recently read The Precipice by Toby Ord, who I sent a text
to you about and asked, is he one of your lecturers at uni? The Precipice, about existential
risk, is from the same Future of Humanity Institute. Nick Bostrom, a guy that I've read tons and
tons of times, sits down with my favorite
podcast, Joe Rogan, and I think, fucking hell, this is great. Basically, brilliant. But Joe simply
does not get the simulation hypothesis, which is at least one of Nick's
crowning works. And it's fairly straightforward to understand. And then for 45 minutes,
he continues to force the audience down the same groundhog sort of exchange. So yeah, if you want to find out about Nick Bostrom, do not watch his episode with Joe Rogan
unless you want to tear your eardrums out.
Precisely. There's one interesting thing I was thinking there: the discovery of knowledge in philosophy or ethics,
where does that come from?
Or what is that discovery?
Because it's not like we have discovered a new star,
or a particular new type of element,
or a new proton.
It's somehow universal and existent, and yet is also manifest by someone's thoughts
and also quite sort of transient and ephemeral.
I think you can think of it in the same way.
This is probably the most helpful way to think of it, potentially, is to think about it
in terms of like mathematical discoveries, because maths is kind of a language that we invent
to describe things that we believe are analytically true.
And it's essentially tautologies:
to say that one plus one equals two
is kind of the same as just saying two is two.
You can make mathematical discoveries
because people put together equations, and I'm not a mathematician,
but you can kind of make discoveries
by putting different propositions together and seeing how they work, right?
And it's weird to think that you can kind of discover things in this manner, in this kind
of weird abstract kind of sense, but like I think the same thing is roughly going on, like
ethical movements are made when people kind of realize implications of beliefs we already
hold or realize a new way of justifying them or something like that, or realize an inconsistency
that we hold.
Most of the time, when you say something like
ethical progression, people tend to think
in practice; they tend to think of things like
slavery being abolished, or the vegan movement.
That's number four.
Yeah.
And like, yeah, sure, but there are two different things
we could be talking about,
because there's that kind of ethical progression, which is where we like to think, oh, well, we're practically
changing to live up to like the objective ethical standard that we've constructed or that
exists or however you want to frame it.
But another question is like, what about the frame itself?
Like can we kind of have a development in that frame?
And sometimes the development in the frame
leads to development and practice.
But this is one of the questions that will help you determine
whether you think objective morality exists.
It's like do we discover ethical truths
or do we invent them?
Do we kind of come across objective truths
about the way we should behave
or do we just kind of decide on a new way of living
to be consistent with our preferences or something like that?
Like that's one of the most fundamental questions to figure out whether ethics actually
objectively exists. Is that the question that Jordan Peterson and Sam Harris got a little bogged
down with during their live debate? I think potentially, because they would both... I'm not
as familiar with Jordan Peterson's morality as I am with Sam Harris's,
but I think both believe in objective morality,
but they have different justifications for it.
And Sam Harris's is just trash.
I mean, it's awful.
It's just, it doesn't make any sense.
And I think Jordan Peterson was kind of trying to poke holes
in it, because Sam Harris kind of says,
well, well-being should be the basis of morality, because we all care about well-being.
And someone says, yeah, but why is well-being good? Why is suffering bad?
And Sam Harris says, well, if you don't believe me, go and put your hand on a hot
stove. And it's like, dude, you're really missing the point here. It's
not, I don't doubt that I wouldn't like putting my hand on a hot stove. But why does that
make it objectively wrong? Like, because for it to be objectively wrong, it would be wrong regardless of what I think
about it. It doesn't matter if I prefer it or not. It's wrong of its own accord. Somehow,
if no human beings existed, it would still be wrong to inflict suffering, even if there's
no one to inflict the suffering upon, something like that. And somehow, it kind of doesn't
address that. And I think Jordan Peterson kind of tries to poke holes in that. But he's
poking holes in the justification, not the idea that ethics is objective.
Did you see, I'm going to guess that you probably won't have done because you're not on social
media as much as me, but Sam is releasing a making sense book, which is a synthesis of
things he's spoken about on the podcast.
He's doing a Tim Ferriss.
Did you see that?
I didn't see that.
It's come out today. So it got announced.
Has it really?
Pre-order available today.
So it's just the Tim Ferriss model, man.
Have a conversation.
Write a book about the conversations.
Then have a conversation about the book that you wrote about the conversations.
And then just continue to keep on going.
You're monetized.
So, okay, man, let's have some of your favorite thought experiments that we might have not
covered yet.
Anything that you think is going to bend people's minds a little bit as they try and answer
it.
Well, broadly speaking...
I mean, okay, so I remember when you asked me to come on, I wasn't sure if you wanted
to talk more about metaethics or not, because you said you just wanted to go
through some, like, ethical dilemmas.
So, like, there are a few that you can think of, and there are different ways in which they're difficult. So for instance,
consider this one; it's a demonstration that sometimes the things
we think are good actually have unexpected consequences.
So broadly speaking, we think that it's good to educate people.
The more educated people are the better.
But education rate is directly correlated to suicide rate.
The more educated the society becomes, the higher the suicide rate.
So is it actually immoral to educate a society more if it's going to lead to more
suicides? Like, are we placing a value on knowledge itself?
Do we not value knowledge because of what it does for humanity?
Like, do we have the right to essentially risk a rise in suicide in
order to educate other people? I don't know; it's like you don't necessarily
think of that consequence. Another example of like an unintended consequence might be something like, I remember I went to
a talk once about the ethics of markets, right?
What can be sold and what can't?
So things like organ selling and prostitution and sex work, right?
And I think broadly speaking, in liberal society, people are in favor of sex work.
They say, look, it's your body, you can
sell it as you please, you should be able to do that. And, you know, I was roughly
in agreement, but I'm listening to this talk and I'm thinking, okay, so let's say we legalize
prostitution. There's a consequence that people don't necessarily think about, right? If
you become like a merchant of sex, that's like a really good name for a band.
Yeah, or a street name maybe. We have laws that exist that say that if you're a merchant,
if you're selling a product, you don't get to discriminate who you sell your product to.
Right now, it's slightly different with things like baking a cake that has a certain message on
it. That was a big controversy, but that's because you're producing something specific,
you know, designed toward a particular thing, right?
It's not just a general product that you're offering.
So the person who baked that cake basically said,
I'm not gonna bake a cake that says
like happy marriage to a gay couple
because that goes against my beliefs,
but I can't deny them service to like a product
that anybody else could buy because they're gay.
So if they came in and bought one of my pre-made cakes, I can't deny them service, and I wouldn't, right?
Now, did they make that distinction? Yeah, the cake seller definitely did.
He actually had a fairly good sort of consistent grounding philosophy.
Yeah, he said, look, it's not about, like, I'm not going to sell to a gay person. That's
not what it is. I'm not going to produce, essentially,
a celebration of that.
In the same way that like, although, you know, I wouldn't want to draw an equivalency between
these two, it works by analogy. If someone came in and said, you know, can you bake a cake that
says the N word on it or something, you'd be like, I don't want to make that because it goes
against my beliefs. And someone could be like, but you're discriminating against me because,
you know, because I'm black and I'm ordering a cake. It's like, no, it's like, it's because
of what you're making me do.
Now, obviously I'm not trying to say that
you should be as uncomfortable writing "happy wedding" on a cake
as writing the N-word.
I'm saying like the kind of basic principle
is that there's a difference between refusing someone's
service because of who they are
and refusing them service
because of what they're asking you to do.
But if you're selling sex,
just as a general product,
we have laws that say that you can't discriminate. You can't refuse someone service because they're gay or because they're black or because they're female or something.
But you should be able to refuse sex at any time for any reason, right? Like if you're
having sex in the porn industry or something, you should be allowed to determine whether
you have sex with men or not.
You should be able to decide whether you have sex with someone who's black or white,
or someone who you find attractive or don't find attractive,
or someone who's disabled. It is your basic right to determine who you have sex with and who you don't.
But if you're selling a general product of sex, does that mean that you're
forced to offer your services to men as well as women? If you became a prostitute, Chris, could I try and hire you, and if you said, look, I don't serve men,
could I take you to the Supreme Court and say, well, look, you're discriminating against me on the basis of sex?
That's an interesting implication that people don't generally think about.
And I raised it with the guy giving the talk, and he couldn't really have an answer for it,
because I don't think it's a particularly common objection to bring up. I discovered actually after the fact that one of my tutors for practical ethics had written a paper to this
effect, discussing this exact question: here's a kind of interesting reason that maybe
we shouldn't be in favor of the legalization of prostitution. And it's not what you think. It's not
like, oh, it's lewd, it's bad. It's that you're going to either have to create
something like an entire new moral and
legal category about denying people service on the basis of sex or disability or attractiveness.
I mean, imagine refusing someone of a service because you don't think they're attractive.
This is brand new territory, but if you don't do that, then you're essentially saying
this person must be compelled to have sex with anybody who wants it if they pay the right price.
And that seems equally uncomfortable. So it's like, maybe not as simple as you first thought.
Isn't it interesting where we have things that we're incredibly familiar with, that are
artefacts of our heritage as human beings, like the implicit assumption that you should only have sex
with people that you want to have sex with. And yet at the same time, that comes crashing
into something that we also know really, really well, which is a more modern invention,
like free markets and equality and equal treatment. When those two things come crashing
together, you're like, hang on, I feel like I'm supporting both teams here.
Like how the fuck is this working?
And people might try to wriggle out of it or synthesize it and
say something like, well, if you agree to be a prostitute, then you agree
to offer your services to anybody.
So it's like, you know, we're not just saying anybody has to have sex with whoever asks.
It's like, if you enter into a contractual agreement
that says you are a prostitute,
you now have to have sex with whoever wants it.
But it's like, are you saying that someone can
sign off their consent?
Are you saying that someone can sign a contract
which says, I am no longer able to consent
in any of my sexual encounters
so long as the person is paying?
It's like, I don't think that's what you want either.
Right, and maybe we might think that,
although every instinct says something like,
well, your body is your private property
and you should be able to do what you like with it
and make money with it as you like,
we might think that we value our principles of equality
under the free market
so highly that it's worth sacrificing this other kind
of principle that people should be able to sell their bodies.
And we say, actually, no, like, we're not going to allow people to sign away their consent.
And so we're not going to legalize prostitution. But it's like every argument that you hear in
the popular discourse against prostitution is like, it's wrong, man, you shouldn't,
that could be your daughter, man. It hasn't got
the right focus, right? There are better arguments that you can make against the legalization of prostitution
that are for the benefit, or the rights, of the person who'd be selling
it.
It's a difficult question.
It's worth your listeners just mulling it over, thinking, what are the
implications of those kinds of beliefs?
There are so many kind of hidden around the corner that you might
not be entirely aware of.
I agree, man.
What else you got?
Anything else that you've got in the tank that you've been thinking about?
Here's one of my favorite ethical dilemmas. It was given to me before I came to university.
I was like, I was like 16, 17.
I was in a pub.
And I met somebody who was studying physics and philosophy at Oxford.
It was the first time I'd met him, and he's a bit of a strange man. He turns to me and says, here's one for you. Okay. Always a good way
to start, isn't it?
This question has remained with me to this day, right? And I want you to answer this honestly.
And you know, you might, you might be kind of exposing some immorality by answering this
honestly, but I want you to try your best. Would you rather kill an innocent person and
then immediately forget about it or
not kill the innocent person but live the rest of your life thinking that you had?
The second one. You'd rather not kill the person but live the rest of your life thinking
that you had. Yeah, probably because I haven't fully thought
through just how painful it would be to believe that I had killed someone.
Right, so this is the answer that
people often give at first, because it kind of sounds like the right one,
and maybe it would actually be better for you. But after thinking about it for the longest possible time,
the majority of people tend to say, well, actually, I think I'd rather kill the person and immediately forget, because,
as you say, you don't know what it's like to live that life believing
that you've actually killed that person. But there are two different questions
on the table here. The first is what you would do, and the second is what you should do, right?
So I think most people, when they think about it deeply enough,
realize that, because of the amount of sacrifice that they'd be making,
they would probably kill the person and then forget, because they just can't deal with the
kind of backlash of that. But that's a separate question from what they should do,
and maybe they shouldn't kill the innocent person. But you could also say, should we expect
somebody to essentially sacrifice their life of well-being for the life of another person?
Like, do we have the right to expect that of a person? Or do they have a right to say,
you know, if I don't commit this action, it's going to have this horrible impingement
on the rest of my life, and I actually have a right to look after my own interests
first, even if my own interests are lesser than another person's? In the same way that if you decide not
to give to charity, that's your right to do so. But like the 25 pounds that you're going
to save is so much less to you than it would be worth to people for whom you could buy
a mosquito net or something and save them from getting malaria. But we say that even though like the benefit that it gives you is much less than the benefit that
we give the other person, like you have a right to look after yourself first and look
to your interests first. And maybe you could say the same thing in this instance, I'm not
entirely sure. But it's a difficult question to think if you were really in that situation, what
do you think you would do?
It's hard, man. I mean, the question here, and it seems like this happens with a lot of
it, is whether or not you're able to take a third party perspective, and the would versus
the should, i.e. the armchair philosophy versus the actual grassroots action.
Those two things often, I'm going to guess, come into conflict with each other, because there
isn't a third party perspective. If we are talking about you doing the thing, there is
no third party perspective for you to take. There is only the would, not the should.
Right, but if morality is objective, then we should say it doesn't matter what you would do. There's a right answer to the question, regardless of what you find yourself
actually doing. Like in the famous trolley problem, when you ask somebody whether they would pull
the lever, or whether they'd push the fat man off the bridge. So, you know, the trolley is going down the track,
and it's about to run into five workers
who are working on the track, and you can pull a lever,
and it will divert the train onto a track
that's got a single person working.
So it will kill one person instead of the five,
and the question is, you know, should you pull the lever,
or would you pull the lever?
Most people say, yeah, of course I pull the lever.
I'd rather, you know, the train goes into one person
than five people.
The principle being, yeah, okay, so I'll sacrifice
one person's life to save five innocent people's lives.
Fine.
But then you ask the question,
what if you're walking along and there's no lever
and the train's going towards five people on the track,
but there's this really fat man walking across the bridge.
And if you push him off the bridge,
he's gonna land on the train, it's gonna kill him,
but it's gonna stop the train.
Would you push the fat man?
And people are like, well, I don't know, I don't think so.
I don't think I have the right to do that,
but it's like, hold on, why not?
Why is it that you're willing to pull a lever
that kills one person to save five people,
but you're not willing to push the man, killing him,
to save those five people? Michael Sandel, who's a philosopher at Harvard,
one of the most famous philosophers living,
he's got a great book by the way,
called Justice, which is a fantastic introduction to ethics.
Like, I don't feel like I've done a very good job here
of like actually going through the various ethical theories,
we've just kind of been mulling here and there,
but if you want, if you want like a really good introduction
to just what ethics is, the different ways of thinking
about it, Justice by Michael Sandel is a fantastic book.
And in a lecture, you can watch on YouTube, he gives the trolley problem,
which is a great starting point for practical ethics.
It's one of the first things that people will talk about, and he's talking to his students,
and he's asking this question, and the student kind of says,
but, you know, it's different, there's a difference between pulling a lever
and, like, driving the train. So Michael Sandel says, you're driving the train and you can turn
the wheel and it goes into one person instead of five people. And that's instead of the
lever, right? So most people say, yeah, they would turn, they would turn the wheel, they'd
go into the one person instead of the five, but they wouldn't push the fat man. And
one of the students says, but look, the difference is, there's a difference
between turning the wheel and actually getting your hands on a man and pushing him off the thing. And so Michael Sandel says, well, what
if the fat man is kind of standing on a trap door,
and the way to open the trap door is to grab a big wheel and just turn the wheel?
And it's like, okay, yeah, I still probably wouldn't do that, right?
But again, the reason I bring this up is to say, I think it's fair to say that
most people would pull the lever but wouldn't
push the fat man. Surely, if the principle is the same, then you should do the same thing
in either situation. Or is there a difference? What's going on there? Interestingly, there
have been studies done where people have undergone an MRI scan whilst being asked the trolley
problem. So the people who say that they would both pull the lever and push the fat
man, when they're thinking about the question, the parts of their brain associated with rationality
are lighting up. For the people who say that they would pull the lever, but wouldn't push
the fat man, when they're thinking about the questions, the emotional parts of the brain
are lighting up. Right, implying that actually,
the reason you wouldn't push the fat man is
because of your emotional tendency
toward what you would and wouldn't do,
rather than your rational thinking
about what you should do.
But okay, there are some situations in which,
I can give you two situations,
which are almost exactly the same,
and yet you would say that one is right and one is wrong.
Let's say, for example, you're an ambulance driver and you've got two people in the back,
and they need to get to hospital immediately, right?
and they need to get to hospital immediately, right?
And if you don't get there immediately, they're going to die.
As you're driving along, you look out the window and you see a boulder
rolling towards an innocent man, right? Now, what you could do is stop the ambulance, get out, push the boulder out of the way, and save the innocent man.
But you shouldn't do that, right?
You should keep driving, because otherwise the two people in the back of the ambulance are going to die.
You stay in the ambulance, and you think, I wish I could save that man, but I can't. I'm willing to allow him to die so that I can make sure these two guys get to hospital. Right? Fair?
Would you say that that's a fair analysis?
Pretty busy day.
Yeah.
I hope that guy gets a pay rise.
But now imagine the same situation, except this time you're driving the ambulance with two people
in the back.
And there's a boulder in the way.
It's in the road.
And the only way to keep moving forward is to push the boulder so that it rolls
and kills an innocent man. Are you allowed to do that? It seems like maybe not. I don't know that, if
an ambulance driver were driving along, it would be a good thing to just kind of bump
into the boulder and make it hit the other person. You'd probably say no, but what's the difference?
Like, what is the difference
between allowing a boulder to kill an innocent man
to save the people in the back, and pushing the boulder into the innocent man
to save the people in the back?
In both situations, you decide whether the innocent man lives or dies.
And the basis in which you make that decision is, you know,
with reference to the people in the back of the ambulance. So what is the difference?
Is it not the continuation, the point at which you enter into the story and the effect
that you have moving forward?
Sure. So this is the distinction of doing and allowing. And this is where
we've stumbled upon semantics. Yeah. Well, I mean, this is kind of
the analysis that's given. It's like, well, there's a difference between doing and allowing.
There's a difference between allowing a bad thing to happen
and being the cause of a bad thing to happen.
But my God, it gets worse.
It gets even more complicated than that.
Because you've got to decide how are you defining the difference
between doing and allowing?
Like, what really is the difference there?
For example, if I walk up to somebody
who is attached to a life support machine
and I unplug them, have I killed them
or have I allowed them to die?
Because the machine was, okay,
so maybe you could think something like,
if you're the doctor, yeah,
and you're the one who plugs in the machine,
then by taking out the machine, you're allowing
them to die, because they were going to die anyway.
You're stopping them from dying by plugging it in; when you unplug it, you allow them to
die.
But if the doctor plugs it in and then walks away and then another person comes in and
unplugs it, they're probably killing the person, right?
But surely the difference between doing and allowing
can't be who's doing the action, right?
That can't be it.
But that seems to be a difficult intuition to think about.
But also, I mean, I wrote a paper about this once for one of my tutors and basically
said, the distinction is a bunk one.
It doesn't make any sense.
If I take it to its extreme, I could say something like, if I come up and slit your
throat, I'm not killing you.
I'm allowing you to die because I'm just removing the barrier of your blood that's kind of
keeping it from like spurting everywhere.
It's a similar kind of thing.
For instance, take the life support machine.
Instead of a mechanical, exterior, electronic thing that plugs into the wall, let's say that
we manage to kind of grow a cellular life support machine.
Right, so someone's on life support, and we design a kind of technology where they can
take a pill that causes them to grow a certain kind of
organ or something, which acts in the same way as the life support machine does.
If that person's just walking down the street, minding their own business, and I come up to
them with a knife and I just cut it out of them, surely I've killed them. But it's the same thing.
All I've actually done is removed the barrier that was keeping them alive, so I've just allowed
them to die. Now, earlier, from the thought experiment with the ambulance driver, it seems to be
that there is a moral distinction, not just like a physical,
because right now we're talking about the non-moral distinction.
We're just talking about what is the descriptive difference
between doing and allowing,
but once you've done that,
you then have to determine whether it's morally relevant or not.
So we've already seen that there seems to be
a morally relevant difference between doing and allowing.
It seems to be that allowing is less bad than doing.
But then what does that mean for cutting out someone's
organ or something? Is that not as bad as killing them in another way or something
like that? It's not as simple as people think it is, right? Because this is the
explanation we go through because I give you the thought experiment with a boulder and your answer
is, well, I mean, surely the difference is that in one case, you're putting yourself into the situation. You're doing rather
than allowing. But it's like, yeah, but I could say the same thing about slitting someone's
throat or something like that, right? I'm not entirely sure that's the case. But also,
sometimes we would think that allowing someone to die is just as bad as killing them.
So for instance, Peter Singer gives the example
of the child drowning in the shallow pond.
Imagine, for example, this is his famous example,
if you're walking down the street
and there's like a really shallow pond next to you,
and there's a child drowning in the shallow pond.
And you could go and save that child really easily,
but you decide not to,
because you don't wanna ruin your shoes,
you just bought them and they cost you 30 quid.
You're an immoral monster, right? That's a horrible thing to do.
So it's really really bad because you've allowed this child to die because you refused to intervene at the cost of 30 pounds
But that's what you do every single time you refuse to give 30 pounds to charity
The child isn't drowning in front of you the child is
starving on the other side of the world, and you're saying, no, I'm not going to intervene because
I want to keep my 30 pounds. What's the difference between that and walking past the child and
the shallow pond and saying, I'm not going to intervene because I don't want to ruin my
shoes, which cost me 30 pounds?
It's distance. It's a lack of directness, or feeling like you are direct. That's how it's
done. So let's take those one by one. I mean, take the distance case. Okay, so a child drowning
in Ethiopia is on the other side of the globe. Okay, what if they're drowning in Morocco?
What about Spain? What about France? What about Wales? What if it was
just in the next
village along? What if it was next door and you had to go outside and knock
on the door to get it done? Like, how close does that have to be?
Surely distance doesn't make a difference if the money that you give will
get to its destination easily. It's not like you have to go over to Ethiopia or something.
You can just donate the money online, you can do it from your laptop. It would be less
effort for you to donate 30 pounds to Oxfam than to go and save a child from drowning in
a puddle, right? But we seem to think that you have to save the child from the puddle,
but you don't have to give money to charity. And Peter Singer uses this to say that essentially,
you refusing to send life-saving treatment to a child for no good reason is just the same as you posting them cyanide, as if you're putting cyanide in the mail
and shipping it over there and killing them. Like, what is the moral difference? Why is
it that just because in one case you're allowing them to die because you don't want to intervene, and in the other case you're causing them to die because you're actually doing the intervention? Like, what's the moral difference here?
Why is one okay and one not? Why do you have the right to allow a child to die of malaria, but you don't have the right to allow a child to die from suffocation, from drowning?
It's interesting when you come across people
who have thought about this stuff thoroughly
and then have the conviction of their efforts to go after it.
So two examples, first one being yourself.
You became convinced, fifth time lucky,
of veganism through just an armchair philosophy
sort of thought framework,
desperately asked YouTube to see if someone out there
could find a rebuttal that would mean
that you didn't have to go vegan,
then, when they didn't, stuck to your guns and went vegan.
The other equivalent is Toby Ord,
this guy that wrote The Precipice,
in his episode with Sam Harris.
Toby Ord, whose publishers Bloomsbury,
who owe me an absolute shit-ton because I keep putting their authors on the show, said he's
not doing press
the week he went on. And I'm like, no, no, no, he's not not doing press.
He's just not doing press that isn't Sam Harris.
Anyway, not that I'm bitter, Toby, if you're listening.
He is one of the key figures in the effective altruism movement.
Have you seen this, where people commit significant portions of their earnings for the rest of
their life? They find the most effective charity that they can give it to, they
maximise the value out of every pound, and they're giving real, significant portions of their wealth forever,
tied into a sort of lifetime contract thing.
He's another person, similar to yourself, who presumably has gone through thought experiments like
this, surrounded by people like fucking Nick Bostrom, who was just bending his mind.
And yet the end result is, you know, 10%, 20% of his salary for the rest of time.
Yeah.
Effective.
Well, this is the thing.
The reason we do ethics, we should remember, is to figure out the right way to live.
This is essentially why we do ethics, to figure out what the right thing to do is.
If you're doing ethics in such a way that you think of it as
a philosophical plaything, then fine, but I think it's far more beneficial to use ethics to determine how you actually live your life. So if you come to a conclusion and it seems
really abstract, right? It seems like you're driving an ambulance and you've got people in there
and a rock just materialized. But you realize that answering that question
can lead you to the conclusion that I should give up 10% of my income to charity. Dangerous. It's like, actually, yeah,
no, that makes sense, you know, and maybe you should then do that. If you become convinced
that that's the case, then act in accordance with it. If you become
ethically convinced that it's wrong to kill animals, then stop killing animals. If you become
ethically convinced there's no difference between refusing to give 30 quid to Oxfam and refusing to save the child
in the puddle, then give 30 quid to Oxfam. It should be actually determining our actions.
Albert Camus, in The Myth of Sisyphus, says that for the man who does not cheat, what he believes
to be true must determine his actions. And that sentence, almost single-handedly, made
me go vegan.
What a bastard. I'm going to tell you a story, because I didn't ever get
round to it. I texted you while I was in Ibiza about this. But I was away with Ricky,
my buddy. We were having a great day. We had a day sort of exploring the island,
and then we went out for dinner in the evening with the photographer we were there with and his missus. She's vegan, he's keto, and Ricky is an ex-RAF soldier and
like a classic sort of CrossFit guy. And I found myself promulgating your position, putting
forward your moral philosophy position on veganism as I had two non-vegans and one vegan sat
around. And I was saying all of the stuff about what is the characteristic that an animal
lacks that, if a human lacked it, blah blah, and everything that we went through. If you
don't know what I'm talking about, check out Alex's channel, or check out his last episode
on my show, which will be linked in the show notes below. And then his missus piped up and was like, so why do you still eat meat?
And it came back to the practical implication, which I think we keep coming back to.
And I keep trying to impress on you, because I think that it is, at least for me, the kryptonite that has meant something I'm convinced of
hasn't become something that I've taken as a lifestyle change,
which is that the amount of effort that is required to do it
is simply, practically, highly inconvenient,
and because of the path of least resistance,
and there's a lot of inertia to overcome with that,
I tend to not do it.
But I said, I held my hands up,
and it's weird when you say something
that's kind of in homage to someone that's a friend,
but they're not there to hear it.
It's just kind of an interesting sort of situation
to be a part of. But I was like, look, man,
I know that I'm living out of alignment
with things that I know to be true.
I'm increasingly convinced now that, as you say, people in the future will
look back on what we do with animals in terms of factory farming as an absolute
travesty.
And yet the food that I ordered was a chicken and bacon pizza.
So yeah, it was interesting to see people
like yourself and like Toby Ord, who don't just
talk the talk, but put their money
and their vegetables where their mouth is.
Yeah, well, this is the thing: why bother?
Why bother doing any of this investigation
if you're not going to allow it to inform your action?
Like it would be like kind of spending years and years
trying to figure out the best way
to develop nuclear technology.
And then you finally work it out and you go,
ah, I can't really be bothered to do it.
I mean, it's a lot of effort, you know.
It's like, then why bother to do the investigations
in the first place?
I would say, remember, difficulty is a relative term.
If I offered you five pounds to run a marathon, that's a difficult thing to do, five pounds.
But if I offered you a million pounds to run a marathon, that's probably the easiest
million pounds you've ever made.
The term difficult is relative to the consequences of the action, right?
And relative to the suffering that you're saving by going vegan,
the difficulty level is like nothing. And we don't allow that kind of
reasoning and that kind of excuse-making to be taken seriously on any other ethical issue. Any other ethical issue that you choose: you could be talking about theft,
you could be talking about domestic abuse or something. If someone was just like, look, I agree that it's wrong, it's just a lot of effort to change it.
It's like, well, tough. Ethics isn't supposed to be easy. If something's wrong, then stop doing it.
It's not like it's impossible for you, right? It's like it's inconvenient. Yeah, well,
you know what else is inconvenient? Being put in a gas chamber. Like, I think I have a bit more
sympathy for the pig there than I do for you, I'm afraid to say.
Okay, the worst part is that it's not really fair
that I lumped that on you,
because I was talking to a friend of mine the other day
at the pub and he's not a vegan
and the rest of us there were vegan
and we were talking about it
and we ended up having a big discussion about it
and by the end of the night we're outside just going,
well, just do it, you know? You know that it's wrong.
And then I saw someone we didn't know,
a stranger, kind of go past on a motorbike or something.
And you know, when you see someone drive past,
and you've never met them,
but you just look at them and think,
I just, I don't like that person.
It was like, it was one of those situations.
And I thought to myself, that guy eats meat.
And I thought, here I am.
Here I am with my friend,
who's one of the nicest people I know.
And here I am, like, lambasting him for his dietary choices.
That guy is going to go and eat meat, and he's not going to suffer for it at all.
It's like, why am I punishing you for being my friend?
You know, like, this doesn't seem fair.
It doesn't seem fair that I should lay this on you when I'm not going to go and lay it on the stranger.
Why should he get away for the rest of his life, never even having to consider that eating meat is wrong,
but you don't get to get away with that, because you had the misfortune to meet me, and now this
is on your mind every time you go and buy some milk? And it annoyed me, it frustrated me,
because I thought, this isn't fair on people. This isn't fair on people, that just by chance
they happen to have met the wrong person who is now putting this in their head. But then
I also thought, well, no, because like by telling you about this and by saying it bluntly,
it's like, it's an indication of how much I respect you as someone who wants to better
themselves, who wants to get rid of false beliefs, who wants to adopt more true beliefs,
and live in line with a virtuous ethic.
And it's like, everything that I see you do, when you're tweeting about things and
you're posting Instagram posts and sometimes they're motivational, sometimes, like you said
earlier, something like "always avoid stupidity".
Right.
And it's like, yeah, it's pretty stupid to think that you're justified in torturing an
animal so that you can have a bit of bacon.
If you want to avoid stupidity, then avoid it.
If you want to avoid stupidity, then avoid McDonald's.
You know, and I realized that I'm justified in talking
to you in this manner because it's not an attack. It's like, man, I see what
you're trying to do. And I reckon, like, I respect you enough to know that you can do better,
and this is how I think you could do it. And that's just, that's just one man's opinion.
Like, you know, who am I to tell you how to live your life? Well, I'm not. I'm
here to tell you how I live my life, why I live my life in
that way and how it's benefited me and how I think it could benefit you too.
And then it's up to you,
as to whether you wanna make that decision or not.
And when history looks back, as Peter Singer says,
you'll be counted among the oppressors
or the liberators, and you've gotta make that choice.
And I love it, Alex, today's been awesome, man.
Thank you so much for coming on.
Where should people go?
They wanna watch something.
If people were to watch one video of yours, what would you recommend?
They can go onto your channel, so search Cosmic Skeptic, which will be linked in the show notes below.
What video should I put in there if there was just one to watch?
I would recommend, the one that I have as the channel trailer is a speech I gave in Tel Aviv,
called Why It's Time to Go Vegan. And it's one of my proudest videos. I think it landed quite well.
People seem to like the speech.
Other than that, there's a video I made called
A Meat Eater's Case for Veganism,
which a lot of people, I think that's how you found me,
because that's the video we originally discussed.
That's one of my favorite videos to talk about,
like ethics and stuff.
I used to be more about philosophy of religion,
atheism and that kind of stuff,
but I could care less at this point if you believe in God
if you're still, like, paying for animals to be tortured.
So, like, those are the most important things, but, you know, the second link
would definitely be patreon.com forward slash cosmicskeptic.
You know, make sure that that's also linked in the description.
And you have a Patreon too now, which I'm so glad to hear about.
Indeed, man, all because of you. And so yeah, everything will be linked in the show notes
below.
Oh, final thing actually.
So you mentioned about the gentleman from America,
whose book was a good read as a basis for ethics.
Michael Sandel.
Yeah.
Anything else that you think is an easy intro, or just an interesting read?
The Myth of Sisyphus, I guess? I'm not sure if that's the easiest intro.
It's about, I mean, that's about the philosophy of suicide, which can be a bit much.
I would recommend, if you're just looking to get into ethics,
Michael Sandel's Justice is a good place to start. You can also read Peter Singer's Practical Ethics,
where he goes over and kind of discusses what does equality
mean, when is it okay to kill, you know, like all of these kind of
questions. It's a really good introduction, Practical Ethics by Peter Singer.
If you're interested in the discussions on utilitarianism that we've been having, then
Roger Crisp's book, Mill on Utilitarianism is the analysis that's worth reading.
And I think if you want to read Utilitarianism itself, yeah, you probably want to make sure that you actually know what you're reading.
It's like, a lot of times when you're reading old philosophy, you probably want to read
it with some kind of analysis, not necessarily because the philosophy is difficult,
but because the language has changed.
People meant different things by different words.
You need to make sure you understand what people are saying.
Yeah, ethics-wise, I think,
I'm just looking at the ethics section of my,
oh, Ethics in the Real World
is also a good collection of essays by Peter Singer as well.
You can probably tell, I mean, Peter Singer
is one of the most influential thinkers on my thinking
and my life.
Ethics in the Real World is a series of very short essays
on very particular ethical questions.
Like, in news stories, should you refer
to animals as "who" or "that"?
So, like, you know, should you say "the cow that" or "the cow
who"? Like, very specific questions
across a broad range of different philosophical issues
and it's quite an interesting read.
So there's some recommendations that I would give
to get people going.
Cool.
Awesome, Alex Mann.
Thank you so much.
Awesome. Alex, man, thank you so much.
I'm going to have to get something else to pull you back on about, yoga or whatever the next
adventure is that we take post-COVID.
Yeah, about that.
Yeah, anytime, man, it's always good to speak to you.
If you ever need me to come on and talk about veganism or something, or
anything, that's fine.
That's fine.
I don't mind.
Or veganism.
Anything, or veganism.
Veganism is everything. Veganism is my everything, my one, my love.
The apple of my eye.
Well, I hope it's a non-GMO apple, because...
Yeah, it's organic.
Whatever the latest is. Free range. Grass fed.
The apple of my eye.
Yeah.
Well, thank you for your time.
Cheers, dude.