The Joe Walker Podcast - Doing Good Better - Will MacAskill
Episode Date: April 30, 2019. Will MacAskill is an Associate Professor in Philosophy at the University of Oxford.
Transcript
Hello there, ladies and gentlemen, boys and girls. Welcome back to the Jolly Swagman podcast.
I'm Joe Walker, your host. And just a reminder, my friend and co-host Angus Isles stepped back from the
podcast. He's focusing on a new career in New York City. You can follow him on Twitter at Angus
Isles. Stay in touch with him there. But the show will go on, although I'll be pivoting the brand
and the jingle a little bit away from the Jolly Swagmen, plural, to Swagman, singular. Actually, I think I'll just
keep the brand. It doesn't make sense, the Jolly Swagmen, but I kind of like it because it lets
us do whatever we want and we don't take ourselves too seriously. But we're going to keep interviewing
fantastic guests on a range of topics. I've got some amazing episodes coming up for you soon.
This episode is with Will MacAskill. So who is Will MacAskill?
Will is one of the founders of the Effective Altruism movement. He's a professor at Oxford
University and, at the age of 32, the youngest tenured professor of philosophy in the world,
a mantle he's held for the last four or five years. Will has co-founded a bunch of organizations
in the effective altruism ecosystem.
He co-founded the Center for Effective Altruism.
He founded the Global Priorities Institute at Oxford.
He's the co-founder and president of 80,000 Hours,
an organization which helps you
have the most impact in your career.
I interviewed their director of research,
Rob Wiblin, last year.
And Will is also the co-founder of Giving What We Can, an organization which encourages people
to give at least 10% of their income to the most effective charities. His non-profits have
collectively raised over 1.13 billion pounds in lifetime pledged donations. And he's also the
author of the book Doing Good Better, which was lauded by the likes of Bill Gates, Steven Levitt, and Steven Pinker.
Now, is charity a personal matter where we select the causes that we feel most attached to
and leave it at that? Or do we have a moral obligation above and beyond that to try and do the most good we can wherever that good happens to be?
In other words, should we search for and give to the most important causes and effective organizations?
And how do we determine what those causes and organizations are in the first instance?
Well, these questions are the crux of the conversation you're about to
listen to with Will. And I really hope you enjoy and take something away from this that might
encourage you to think more about how you can do the most good in your life. So without much
further ado, please enjoy my chat with Will MacAskill. Will, welcome to the podcast. Could you please introduce yourself to us before we get started?
So I'm Will MacAskill, and I'm an Associate Professor of Philosophy at Oxford University
and one of the co-founders of the organizations 80,000 Hours, Giving What We Can,
and the Center for Effective Altruism. And right now I'm working as part of an organization called
the Global Priorities Institute at Oxford University.
Great. So let's get straight into it.
Can you begin by telling us why some charities are better to give to than others?
Right. So many people think that if you're trying to do good, such as by giving to charity, what really matters is just, you know, good intentions. If you're really well-meaning and you're trying to make a difference with your money or your time, then probably things are going to go OK.
I think, sadly, that's not the case. And so I often tell a story of a program called PlayPumps International, a charity that was operating in South Africa and some other countries. And the idea behind the PlayPump was that it was a children's merry-go-round that would
double as a water pump for the local community.
So children would push this kind of brightly colored merry-go-round, and the power of children's play would provide clean water.
And in the early 2000s, this was a really hot idea.
It got a lot of positive press attention.
Newspapers called it the magic roundabout.
It won a World Bank Development Marketplace Award.
And so it would seem like, wow, this is a really exciting way of it being a win-win.
You could provide children with their first playground, and at the same time you could provide clean water for the local
community. It just seems like this amazing innovation. And so this is exactly the sort
of thing that you might do if you're just well-intentioned and you don't think too
much about what you're doing when it comes to doing good.
But sadly, there were a couple of studies done on the PlayPump, and it actually had many
different flaws. As well as posing a health risk (children would often fall off the PlayPumps,
and sometimes they would vomit from the spinning), it just wasn't very good as a pump.
In fact, one estimate suggested that children would need to play on this pump for 27 hours per day in order to provide enough water for the local community.
And the thing is, children wouldn't want to play on this thing all day. They'd get bored pretty quickly, especially because it was very tiring. Unlike a normal merry-go-round that
spins freely, this would need constant torque to pump up the water. So it was left to the elderly
women of the village to turn this brightly colored PlayPump around and around, a task which they found
undignified and demeaning. And when people actually asked the local
community, did you want this thing, they generally said no. They would have much preferred the traditional
Zimbabwe hand pump, which is very boring, very unsexy, but much more effective.
And thankfully, as a result of these investigations by an organization called SKAT and by UNICEF, that came to light,
and the main funders of the PlayPump, the Case Foundation, realized that they'd made a terrible mistake,
publicly disavowed their funding of the organization, and cut off further funding.
And that's actually terrific, because that very rarely happens when it comes to nonprofits.
It's very rare that you actually get this feedback mechanism where, if a charity is not doing a good job or is even doing harm,
the funders actually find out how good or bad a job the charity is doing.
And so that shows us that good intentions aren't good enough. Instead of merely asking, well, are people really well-meaning, are they trying to do good, we should ask: of all the charities out there, which are the ones where, if you give to this charity, you'll do the most good?
You'll have the largest positive impact on people's lives.
And it turns out there's actually loads of data that you can use in
order to help you answer that question. And in fact, different charities vary not just by a
matter of 50% or something; different charities can vary by a factor of 100
in terms of how much of an impact they have.
So there's like a power law distribution for charity returns?
That's exactly right. It's what statisticians call a fat-tailed distribution,
where the best charities are outstandingly good compared to typical charities.
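To make that spread concrete, here is a minimal illustrative sketch in Python. It assumes a lognormal distribution of cost-effectiveness with made-up parameters; only the rough 100x gap between typical and best charities comes from the conversation, none of the other numbers do.

import numpy as np

# Hypothetical illustration: cost-effectiveness across 1,000 imaginary charities,
# modelled as lognormal (a fat-tailed distribution). Parameters are made up.
rng = np.random.default_rng(0)
impact_per_1000_usd = rng.lognormal(mean=0.0, sigma=2.0, size=1000)

typical = np.median(impact_per_1000_usd)        # the "typical" charity
best = np.percentile(impact_per_1000_usd, 99)   # a top-1% charity

print(f"typical charity: {typical:.1f} units of impact per $1,000")
print(f"top-1% charity:  {best:.1f} units of impact per $1,000")
print(f"ratio: roughly {best / typical:.0f}x")  # on the order of 100x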
Will, I want to ask you about the founding of the effective altruism movement now.
Tell me about the time you met Toby Ord and something about a graveyard.
So when I was growing up, I always wanted to make a difference and had done various
ad hoc ventures.
I'd run summer camps for children with disabilities.
I'd worked in an old folks' home.
And I was definitely donating some as well.
I'd been very convinced by the arguments of Peter Singer and was just really trying to wonder what should I do
in order to live my life in accordance with my own values.
And I had come to Oxford to do a PhD in philosophy. And at the time, I was really
aiming to be the most sort of esoteric, abstruse philosopher you can imagine.
If you wanted to know why the word cup refers to cups, I was going to be the specialist in that. But it just made me think,
well, this isn't me living my life in accordance with my own values. And so I started going
around all the academics I knew, various philosophers asking, well, what sort of impact do you think
you've made with your work? And universally, they'd say, yeah, absolutely none. That was
really quite disappointing. This was until I got put in touch, for different reasons
actually, with another graduate student called Toby Ord. I remember seeing on his website
that he took the problem of extreme poverty very seriously and had pledged to give what he
called the required amount. And I thought, wow, this is just total bullshit. This is just like those
other academics who aren't really doing anything. But then we agreed to meet, and we met in
a graveyard in Oxford, not exactly the most romantic place that you might think of for a blossoming intellectual conversation.
But this graveyard actually doubles as the gardens of my old college, St Edmund Hall.
And we were meant to just kind of grab coffee, talk for maybe an hour.
We actually ended up talking for about five hours.
And he had already done a lot of thinking about how he could do the most
good in his life. So firstly, he'd made a commitment to give everything he could above
20,000 pounds per year to whatever charities he thought were going to have the biggest impact.
Then secondly, he'd really thought about all the different ways in which you can have the biggest impact, and had dug into
health economics and development economics in order to work out, of all of these different
programs that you could be spending your money on, maybe that's deworming, maybe that's bed nets, maybe that's
antiretrovirals or condom distribution, how much are you improving people's lives with a given amount of money?
And this just completely blew me away.
And so I helped him over the next six months to set up an organization called Giving What We Can,
which encouraged people to give at least 10% of their income to whatever they believe is the most effective charity.
And secondly, we started doing research to answer this question: what are those charities that
can do the most good?
So that was 2009.
What we realized over time was that these ideas were much more general than we initially
thought.
So not only could we ask the question, how do you do the most good with
your charity, but also, how do you do the most good with your time? And so I also helped to
set up an organization called 80,000 Hours, which tries to answer the question: of all the careers
you could pursue, what are those in which you can have the biggest positive impact? It then gives
one-on-one advice to people in order to coach them and advise them on how they can have the biggest positive impact with their career.
It also broadened in terms of choice of cause areas. If you want to do as much good as possible, it makes
sense to think about trying to benefit the poorest people in the world, because they have so few
resources that additional resources can have a really huge impact. But extreme poverty is definitely not the only
problem in the world, and we started to work on this question of what we call global
prioritization: of all the different causes or problems you could be focusing on, what are those
where you're going to have the biggest impact? And progressively, the effective altruism community
started to also focus on issues involving factory farming, where you can have a truly
huge impact in terms of reducing the suffering of farmed animals with a given amount of money.
And secondly, on reducing what are called existential risks, where these are risks of
events that could permanently derail human civilization or in fact lead to the extinction
of the human race.
These aren't high-probability events, but the downside would be so great
that it can make a lot of sense, in terms of expected value,
to focus on reducing these risks too.
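The expected-value point can be put in a line of arithmetic. The figures below are entirely hypothetical, not numbers from the episode, chosen only to show how a tiny probability change multiplied by an enormous number of future lives can still be very large:

# All numbers are hypothetical, for illustration only.
future_lives_at_stake = 1e15       # assume a very long, very populous future
absolute_risk_reduction = 1e-6     # assume extinction risk can be cut by 0.0001%
expected_lives_saved = future_lives_at_stake * absolute_risk_reduction
print(f"{expected_lives_saved:,.0f}")  # 1,000,000,000 expected lives saved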
So we coined this term, effective altruism, in very early 2011.
We actually took a vote when we were founding the organization, the Center
for Effective Altruism. We knew we had this really important concept, but we didn't know
what term it should have. We briefly were using the term super hardcore do-gooder to
refer to what are now called effective altruists. This was somewhat tongue-in-cheek. I led this project to actually
come up with a name so that we could name this organization that we were wanting
to set up. We collectively converged on this term, effective altruism. We didn't expect
the term to really take off, but it clearly has; it's clearly resonated with people. And now it's the case that there are hundreds of local groups around the world
based around the ideas of effective altruism, and many thousands of people who are really
trying to put these ideas into practice and are using the ideas of careful evidence and
careful reasoning in order to try and have a bigger impact with
their lives.
Now, at what point in that process did you realize that existential risks were worthy
of inclusion in the EA movement?
Do you remember the moment?
I certainly remember the first time I heard about it, which was at the same graveyard
conversation where Toby Ord had mentioned that he thought that it was at least possible that risks of human extinction could be even more important than fighting extreme poverty.
And at the time, I thought this was completely insane as an idea.
I was much more committed to global poverty alone. And then, progressively over the next year or two,
I started to realize the power of these arguments.
I think I had dismissed the idea of existential risks being a global priority just because
of a heuristic: it sounded a bit silly, a bit sci-fi, to be worrying about things like advanced artificial intelligence or synthetic biology,
including the ability to create new viruses, as well as more well-known existential risks
like climate change and nuclear war. It just seemed maybe a bit above my pay grade or something. But now I think, no, we should take these ideas very seriously.
Human extinction would involve the deaths of all seven billion people on the planet,
but it would also mean the curtailment of all of humanity's future potential.
That's a civilization of hundreds of thousands, perhaps millions, perhaps even billions of
years. It's hard to think that anything could be more important than ensuring
that we do continue civilization into the coming centuries.
I want to introduce our listeners to some of the key
concepts in EA now. And one of the most important is the idea of the diminishing marginal value of
money. Could you explain to us what that's about and how it importantly helps us to determine
which causes we should give to?
Right. So this is a crucial concept. It's a very common idea, and actually just very plausible, that resources in general
have diminishing returns. So suppose you didn't have any warm clothes:
the first sweater or jumper that you had would be really, really useful for you. It would stop you from perhaps facing death.
The second jumper that you had would still be useful; it means you can actually have a change of clothes, but it would no longer be life or death. By the time you get to your hundredth jumper,
your hundredth sweater, it's just a nuisance. It's not worth nearly as much,
perhaps worth nothing at all, perhaps even negative. And that's true of resources in general.
Similarly, if you ask, is $1,000 worth more to you if you're currently living on $10,000,
or if you're currently living on a million dollars? Clearly, $1,000 is worth more to you if you're only on $10,000. It's going to have a bigger
impact in terms of your happiness. And that's crucial when we start thinking about what causes
we ought to be focusing on. Because typically, we focus on really big, often really salient problems,
and we don't ask the question: how many resources are already going to this cause?
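One common way to model the diminishing returns point, a standard assumption rather than anything claimed in the episode, is logarithmic utility of income. A quick sketch of the $1,000 comparison above under that assumption:

import math

def gain_from_extra_1000(income_usd: float) -> float:
    # Log utility (a common modelling assumption): the value of an extra
    # $1,000 is the change in log income it produces.
    return math.log(income_usd + 1_000) - math.log(income_usd)

low_income_gain = gain_from_extra_1000(10_000)      # living on $10,000/year
high_income_gain = gain_from_extra_1000(1_000_000)  # living on $1,000,000/year

print(f"{low_income_gain / high_income_gain:.0f}x") # roughly 95x more valuable
                                                    # to the person on $10,000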
And one example of this is disaster relief. When there's a major natural disaster,
like an earthquake, huge amounts of funding flow in, comparatively speaking.
I mean, I definitely don't think it's overfunded. I think it's still a great use of money, but very large amounts of money
are spent compared to what I call ongoing natural disasters like malaria, tuberculosis, and HIV/AIDS.
And that means that, even though obviously it's crucial that we are spending money on disaster relief, I think the impact you're going to be able to have is less than if you'd focused on areas that
are more neglected, which include malaria and deworming and tuberculosis.
And then I think that applies at the level of broader causes as well.
I think the reason you can have such an impact when it comes to animal welfare is in significant part because so little money is spent on combating factory farming,
just a few tens of millions of dollars per year. And the same is true with existential risks,
perhaps partly because they're low probability, they're hard to understand,
they sound maybe a bit sci-fi, and some of the technologies are still a bit further off.
Again, it's only a few tens of millions of dollars of philanthropic money that's spent
on this area. And that means that, as a philanthropist, you have an opportunity to
potentially have a really transformative impact on this area, in a way that's much harder with more established causes.
Great. So another crucial idea in the EA movement is the notion that overheads or admin costs
are not the best metrics by which to measure the effectiveness of a given charity. Can you tell us
why that's true? So when people hear the idea of kind of effective charity, they often think that the way to assess whether a charity is a good charity is by looking at how much is spent on overheads.
They look at, of all the money that the charity receives, what proportion is spent on things like admin and fundraising and what proportion is spent on the program itself. And the idea is that the lower the proportion of money that's spent on admin and fundraising
and other overheads, the better the charity is.
And this is an idea that's been championed by charity evaluators like Charity Navigator.
And I think this is an extremely misleading idea, potentially to the point of being just, you know, worse than nothing in terms of how people should think about charity.
It can be the case that it's good for weeding out particularly bad charities.
So it is the case that a very small number of charities are spending more than 90 percent of their income on fundraising.
And that's absurd.
Charities like that clearly shouldn't exist.
But when we're not thinking about those charities,
and we're instead thinking about at least reasonably good charities, it's just not that relevant
how much a charity spends on its program versus overheads, compared to the choice of program the charity
is implementing.
So if you imagine the PlayPump again, suppose that PlayPumps International had almost no
administration costs.
Suppose it was 0%, that everyone was a volunteer.
That still wouldn't make it a good thing to be implementing.
The problem is that the program, the actual PlayPump, is not a good idea.
Whereas take one of these absolutely fantastic organizations, like the Against Malaria Foundation, which saves a life for every $3,500,
or does as much good as saving a life for every $3,500.
Suppose it wasted 50% of the money it received.
It would still be doing as much good as saving a life for every $7,000.
That's still an amazing use of money.
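To spell out that arithmetic (the $3,500 figure is quoted above; the overhead percentages are just examples):

def cost_per_life(base_cost_per_life: float, overhead_fraction: float) -> float:
    # If a fraction of each donation goes to overheads, the effective cost
    # per life saved scales up by 1 / (1 - overhead_fraction).
    return base_cost_per_life / (1 - overhead_fraction)

print(cost_per_life(3_500, 0.0))  # 3500.0 -> $3,500 per life at 0% overhead
print(cost_per_life(3_500, 0.5))  # 7000.0 -> $7,000 per life at 50% overhead
# A program that doesn't work at all (like the PlayPump) saves no lives at any
# overhead ratio, which is why the choice of program dominates the comparison.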
So when there's such a huge discrepancy in the importance of different causes,
and within that the importance of different programs,
then how much a charity spends on overheads, whether it's 10% or 30%,
is just not very important.
I think when an individual is looking at a charity, my best advice is just
to actually defer to experts on this topic, where that might be GiveWell, or that might be the Open
Philanthropy Project's advice for individual donors. We actually set up a set of funds
where people can simply give to a world-leading
expert within each of the areas of animal welfare, global health and development, and
existential risks, in order that that expert can say, okay, this is where I think the very
best use of money is at the moment.
But if an individual does want to look at a charity themselves, the questions I'd suggest
they ask are: is it working on an important cause area?
Is the problem that it's trying to solve unusually great in scale, and/or tractable, and/or neglected?
Is there good evidence behind the program?
Are the people in charge of running this organization particularly competent?
Do they have particularly good credentials,
whether that's experience running organizations very effectively
or a relevant research background?
I think for an individual donor, going the further step of trying to make
an overarching cost-benefit analysis is just going to be
too difficult and too time-consuming. It takes GiveWell tens of thousands of hours of charity research
to do that.
So this is very different to how people normally give, which is according to their emotions
or their gut feeling. Now, an interesting example of where this can go astray is the ALS Ice Bucket Challenge.
Notwithstanding the incredibly effective marketing campaign, you were a little skeptical of that movement.
That's right.
Yeah, people, again, normally give with their gut.
In the case of the ALS campaign, I was worried that we might have kind of moral licensing effects. I was
using this as an example of a general trend within the marketing of charity,
which is trying to ask more and more people for smaller and smaller amounts of money.
And the worry there is that there's a set of psychological research that suggests that, at least in many
cases, if someone does one good deed, that means they're less likely to do a different good deed.
And so if you get people to donate a small amount of money, perhaps to a charity that's
also not among the most effective charities, there's a very significant chance that you've
therefore caused them to not donate
somewhere else.
So you've kind of moved where they donate, but you actually haven't increased their total
donations.
And I think that should be a really significant concern for those in the charitable sector
in general who are doing advertising and marketing, and suggests that, well, we really should
be trying to focus on,
firstly, advertising for those nonprofits that are having the biggest impact with a dollar,
but then also ensuring that there's not kind of race to the bottom dynamics
where different charities are just trying to squeeze money out of existing donors
at the expense of them donating to other nonprofits.
We want to be increasing the pie rather than just moving it around. Certainly, sometimes people can misunderstand what we're saying. Sometimes people
do use this term that they put in our mouths, which is that we're saying that some charities
are not worthy. We'd never use a term like that.
The way I think about donating to ALS, donating to medical research: it's a great use of money.
In a vacuum, if there were no other charities that I thought could have an even bigger impact,
I would be out on the streets campaigning for people to give more money to that sort
of research.
It's not that I think these organizations aren't doing good.
It's just that we live in this very unusual world, a world where even relative to these extremely good uses of money, we can have, you know, a hundred times, a thousand times as much impact by spending that money elsewhere.
And so it's just the same as saying, well, one investment opportunity is really, really exciting; that doesn't mean the other investment opportunities aren't also good, even if they're not as good.
That's the same sort of idea that we're saying with charities as well.
There's a really big difference between a charity being optimal and a charity being unworthy.
Continuing in this vein of common objections to effective altruism, I listened to your debate with Giles Fraser
at Church House in 2015.
And the two main objections that Giles raised to you
were one, that charity is a personal issue
and that it's legitimate to be guided by our emotions.
And two, he seemed to equate effective altruism
with pure utilitarianism, and he argued that an effective altruist, faced with a burning building, would choose to save a Picasso rather than a child, because they could sell the Picasso for lots of money and, for example, give that
money to an effective charity like the Against Malaria Foundation, who would distribute bed
nets around Africa and ultimately save many more lives than just the one.
Tell us, what were your responses to these two main critiques?
Yeah, so on that second idea, that effective altruism simply is
utilitarianism: that's just definitely not the case. Effective altruism is about making
outcomes better, making the world a better place. Any moral view that's got any shred of
plausibility thinks that at least one part of living a good ethical life is
making the world better.
Utilitarianism is distinctive insofar as it says it's the only thing that matters.
All our actions, no matter what we do, are about maximizing the good.
We understand that in terms of improving people's well-being by as much as possible.
But all effective altruism is saying is: hey, you should at least be thinking about
how you can do so, and here are our best guesses at the moment about how we
can do the most good. Any plausible moral view is going to say that that's part of
what matters. It might not say you should violate people's rights in order to do it;
it might not say
you're obliged to give away all of your income. But the idea that we should be giving some amount
of our income to charity, or doing other good things, is just a perfectly natural part of common
sense morality. And effective altruism doesn't really take a stand on how much we ought to be
doing.
I mean, with respect to the burning building case,
Giles Fraser is a philosopher as well as a priest,
and he loves to come up with these kinds of philosophical thought experiments.
Sadly, I do think that, the way the world is,
in this highly idealized, abstract thought experiment,
it would have to be the case that the better thing to do is to save the Picasso,
if you could, in this weird thought experiment, thereby save the lives of
thousands of people around the world. And the way to see this...
Sorry, just to jump in there.
I mean, one of the reasons it sounds so egregious
is simply because of the particular details
with which he's constructed that thought experiment.
But if, for example, you had one child trapped in a burning building
and 1,000 children trapped in another,
most people would agree that it would be fine
to choose the 1,000 children.
That's exactly right. And what's more, they'd think it's acceptable to save a thousand children,
even if the only way you could do so was by grabbing a Picasso and using it to prop open
the door so children could get out. So in that case, we have no problem with
using a Picasso as a means to saving a thousand lives.
In fact, we think that's the right thing to do when compared to saving one.
It's just that in this thought experiment that Giles gives,
we move the thousand people off the scene.
They're like distant strangers.
Whereas I think, in order to have accurate intuitions,
we should move them back onto the scene.
You know, that's what Peter Singer does in his Drowning Child thought experiment as well.
Don't think of the person whose life you could save on the other side of the world as just
some abstract entity.
Imagine that they're a real person right there in front of you.
In that case, what do you think morality would want you to do?
And that's, I think, a more reliable set of intuitions than when we're just thinking about
people abstractly.
If you were actually faced with that choice, Will,
in some weird alternate universe, which option would you choose?
I mean, I'd certainly be extremely conflicted.
And, yeah, the fact that I think you ought to save the Picasso
definitely doesn't mean that I would.
Yeah, I'd almost certainly save the child, I think.
I mean, doctors have to face this all the time.
And I talk about this in my book, Doing Good Better.
And the way doctors handle it: you know, they're in an emergency room.
They have patients with all sorts of different conditions coming in.
And they have limited resources.
So they need to figure out how to prioritize among these patients, and what they've done is
develop the system of triage. Some problems are just so bad that nothing you do is going to make
a difference, in which case they'd be deprioritized. Some problems are just not very bad;
the patient's going to be fine even if you don't help them.
And for some patients, you're going to make a really big difference
if you choose to spend your time trying to treat them.
And those are the patients that you should try and focus on first.
So when it comes to these real life-or-death decisions we make,
we're very happy with this idea of triage.
We're very happy with this idea of prioritizing.
And I think what we don't realize is that we're in that situation all the time.
Every moment of every day, we are in that emergency room, because we could be choosing
to spend our money one way rather than another, which would involve very significantly
impacting the lives of one set of people rather than another.
And that's a very stressful thought.
I'd encourage people just to condense that thought perhaps
into one time a year where they decide about how to spend their money
and think about their career and so on.
But that is just the fact of the world we live in.
Now, Will, jumping to a slightly different topic, you and 80,000 Hours give some quite
counterintuitive career advice, which is that in thinking about how we should structure
our careers and choose where to work, you advise young people and graduating students
not to necessarily choose what they're passionate about or what they love, which seems to fly in the face of a lot of the career advice
dished out to young people or people leaving school.
How did you arrive at that conclusion?
So there are two ways of thinking about this idea
that's very, very common career advice,
that people should just follow their passion.
The first and most fundamental is that it thinks very much about careers
in terms of what I'm like, rather than what the world needs. It just is the case that
focusing on some areas, some problems, you're going to have more of a contribution than
others, because there's just more need. If I dedicated my life to singing bad karaoke or something, even if I was the most
passionate person in the world about that activity, I just wouldn't make that much of
an impact.
Whereas if I was focusing on something that's very neglected, very important, like factory farming or existential risks, or global health and development, I'm going to have a much larger impact, because this is an area that's really crying out for talented people to be working on it.
So that's the first thing.
I encourage people to think in significant part, not the whole part, but in significant part, about what does the world need, rather than simply, what do I want to do? But then the second part is that even if you're
just thinking about having a rich, rewarding, successful life, I think follow
your passion is kind of the wrong framing. And this is for a couple of reasons. One is that most
people just don't have work-related passions. In one study, only 4% of students had work-related passions.
Lots of people had passions, but they're all for art and music and sports, which precisely
because everyone's passionate about them, are unbelievably difficult to actually find
a career in.
The second is that passions change over time.
We tend to think that the way we are now
is how we're going to be in 10 years' time,
but people's interests change an awful lot.
And finally, this idea of passion.
The way you get passion is not that you're just born with it.
Instead, it tends to develop from being able to autonomously use a skill that you've mastered.
It's in virtue of getting very good at something and then being able to use it in a way that really
contributes. That's what ends up making you feel really passionate about this area. The idea of
following your passion really, I think, often puts the cart before the horse. Now, that's not to say that you shouldn't think about what we call personal fit,
which is, you know, what's the area that I think with work I could become very good at.
I think that's obviously crucially important when you're thinking about your career.
I think that following your passion is not a very good gloss on that. And secondly,
it ignores the incredibly important question of what are actually the most pressing moral problems of our time.
Have you followed your passion?
Yeah, I mean, I think it's just not a question that I can really answer, precisely because, in a way, I'm
passionate about many things. Certainly I never would have thought that I'd be passionate about setting up nonprofit organizations, which includes things like fundraising and management or public speaking.
I do a very large array of things.
And in most cases, it is the case that I didn't have any pre-existing passion, but
I thought this was a really important thing that no one else was doing, so I better do
it.
And then, as a result of developing skill in those areas, I did develop a passion for
that.
Similarly, even within philosophy: it's certainly the case that I've always been very
driven by ideas, including big ideas and academic thought. But even then, if I'd been following
my passion, I would have been that abstruse philosopher of language and logic. I'd be
working on studying Wittgenstein. Instead, I switched my research focus and I work on
questions related to effective altruism within ethics. That wasn't something I had much background
in when I made the switch, but it's now something, yes, I'm incredibly passionate about.
I think it's incredibly important.
And so now, much more than before in my life, I feel extremely passionate about the work
that I do.
But that's because I think it's important,
and because over many years I've developed skills and expertise in these areas.
Awesome.
Will MacAskill, thank you so much for joining me.
Thanks so much.
Thanks so much for listening, guys. I hope you enjoyed that conversation with Will MacAskill.
Links to everything we discussed can be found on our website, thejollyswagmen.com, that's M-E-N dot com. And if you want to follow Will on Twitter,
his handle is at Will MacAskill.
And if you want to follow me on Twitter,
my handle is at Joseph N Walker.
Until next time, thank you for listening.
I treasure you and I appreciate you.
Until next time, ciao. Ciao.