Making Sense with Sam Harris - #228 — Doing Good
Episode Date: December 14, 2020. Sam Harris speaks with Will MacAskill about how to do the most good in the world. They discuss the "effective altruism" movement, choosing causes to support, the apparent tension between wealth and altruism, how best to think about generosity over the course of one's lifetime, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
Welcome to the Making Sense Podcast.
This is Sam Harris.
Okay, so today I'm bringing you a conversation that I originally recorded for the Waking Up app,
and we released it there as a series of separate lessons a couple of weeks back.
But the response has been such that I wanted to share it here on the podcast and put it outside the paywall. This seems like a
better holiday message than most. As I think many of you know, Waking Up isn't just a meditation app
at this point. It's really the place where I do most of my thinking about what it means to live
a good life. And this conversation is about generosity, about how we should think about doing good in the world. Increasingly, I'm looking to use this podcast and the Waking Up app to do more
than merely spread what I consider to be good ideas. That's their primary purpose, obviously,
but I want to help solve some of the worst problems we face more directly than just talking about them. And I want to do
this systematically, really thinking through what it takes to save the most lives or reduce the
worst suffering or mitigate the most catastrophic risks. And to this end, I've taken the pledge
over at Giving What We Can, which is the effective altruism organization started by the
philosophers Will MacAskill and Toby Ord, both of whom have been on the podcast. And this pledge is
to give a minimum of 10% of one's pre-tax income to the most effective charities. I've also taken
the Founders Pledge, which amounts to the same thing, and I've had Waking Up become one of the first corporations
to pledge a minimum of 10% of its profits to charity. And the thinking behind all of this
is the subject of today's podcast. Of course, there is a bias against speaking about this sort
of thing in public or even in private, right? It's often believed that it's better to practice
one's generosity
anonymously, because then you can be sure you're doing it for the right reasons. You're not trying
to just burnish your reputation. As you'll hear in today's conversation, there are very good reasons
to believe that this is just not true, and that the imagined moral virtue of anonymity is something
we really need to rethink. In fact, I've just
learned of the knock-on effects of the few times I have discussed my giving to charity on this
podcast, and they're surprisingly substantial. Just to give you a sense of it: last year, I released
an episode titled Knowledge and Redemption, where we discussed the Bard Prison Initiative,
based on the PBS documentary that Lynn Novick and Ken Burns did. And Lynn was on that podcast.
And at the end, I think I asked you all to consider supporting that work, too. And together,
we donated $150,000, based on that one episode alone. I've also occasionally mentioned on the podcast
that I donate each month to the Against Malaria Foundation, and it was actually my first podcast
conversation with Will MacAskill that convinced me to do that. And I do it through the charity
evaluator GiveWell.org. Well, the good people at GiveWell just told me that they've received over $500,000 in donations from you guys,
and they expect another $500,000 over the next year from podcast listeners who have set up their
donations on a recurring basis. So that's $1 million and many lives saved just as a result
of some passing comments I've made on the podcast. And then I've heard from Will MacAskill's people over at Giving What We Can,
where I took their 10% pledge, which I haven't spoken about much,
but it seems that hundreds of you have also taken that pledge.
Again, unsolicited by me.
But specifically attributing this podcast and the Waking Up app
as the reason. That's hundreds of people, some of whom may be quite wealthy or will become wealthy,
who have now publicly pledged to give a minimum of 10% of their pre-tax income
to the most effective charities every year for the rest of their lives.
That is awesome. So all of this inspired me to share this conversation from the Waking Up app.
Again, this is a fairly structured conversation with the philosopher Will MacAskill. Some of you
may remember the conversation I had with Will four years ago on the podcast. That was episode
number 44, and that's a great companion to today's episode because it gets into some of the
fundamental issues of ethics here. Today's conversation is much more focused on the actions
we can all take to make the world better and how we should think about doing that. Will and I
challenge some old ideas around giving,
and we discuss why they're really not very good ideas in the end. You'll also hear that there's
still a lot of moral philosophy to be done in this area. I don't think these issues are fully
worked out at all. And that's really exciting, right? There's a lot to talk about here, and
there's something for moral philosophers to actually do that might really matter to the future of our species. In particular, I think there's a lot of
work to be done on the ethics of wealth inequality, both globally and within the wealthiest societies
themselves, and I'm sure I will do many more podcasts on this topic. I suspect that wealth
inequality is producing much, if not most, of our political
conflict at this point, and it certainly determines what we do with our resources.
So I think it's one of the most important topics of our time. Anyway, Will and I cover a lot here,
including how to choose causes to support and how best to think about choosing a career so as to do the most good over
the course of one's life. The question that underlies all of this, really, is how can we live
a morally beautiful life, which is more and more what I care about, and which the young Will
MacAskill is certainly doing, as you will hear. Finally, I want to again recognize all of you who have made these donations
and pledges, as well as the many of you who have been supporting my work these many years, and also
the many of you who have become subscribers to the podcast in the last year. I couldn't be doing any
of these things without you, and I certainly look forward to what we're going to do next.
2021 should be an interesting year. So my deep thanks to all of you. And now I bring you Will MacAskill.
I am here with Will MacAskill. Will, thanks for joining me again.
Thanks so much for having me on.
So I just posted a conversation that you and I had four years ago on my podcast onto Waking Up as well, because I thought it was such a useful introduction to many of the issues we're going
to talk about. And it's a different conversation because we got into very interesting questions
of moral philosophy that I think we
probably won't focus on here. So it just seems like a great background for the series of lessons
we're now going to sketch out in a conversation. But for those who have not taken the time to
listen to that just yet, maybe we should summarize your background here. Who are you, Will, and how
do you come to have any opinion about altruism, generosity, what it means to live
a good life?
Give us your potted bio.
So yeah, my potted bio.
So I grew up in Glasgow, and I was always interested in two things.
One was kind of ideas, and then in particular, philosophy, when I discovered that.
And the second was an interest in helping people.
So as a teenager, I volunteered running summer camps
for children who were impoverished and had disabilities.
I worked at a kind of old folks home.
But then it was when I came across the arguments of Peter Singer,
in particular his arguments that we have the moral obligation
to be giving away most of our income to help people in very poor countries, simply because such a move would
not be a great burden on us. It would be a financial sacrifice, but not an enormous sacrifice
in terms of our quality of life, but could make an enormous difference for hundreds of people
around the world. That moved me very much. But kind of being human, I didn't really do very much on the
basis of those arguments for many years until I came to Oxford, met another philosopher called
Toby Ord, who had actually very similar ideas and was planning to give away most of his income
over the course of his life. And together we set up an organization called Giving What We Can,
which encouraged people to give at least 10% of their income to those organizations they think can do the most good.
Sam, I know that you have now taken that 10% pledge, and I'm delighted that that's the case.
And since then, this set of ideas has grown. At the start it was really just two, you know, very impractical philosophy grad students setting this up, and I certainly never thought it was going to be that big a deal. I was just doing it because
I thought it was morally very important. It turned out just a lot of people had had similar sets of
ideas, and Giving What We Can acted like a bit of a lightning rod for people all around the world
who were motivated to try to do good, but also to do it as effectively as possible.
Because at the time,
we had a set of recommended charities.
There was also the organization GiveWell,
whose work we leaned extremely heavily on,
making recommendations about which charities they thought would do the most good.
And Effective Altruism at the time
focused on charity in particular,
and in particular focused on
doing good for people in extreme poverty.
And since then has broadened out a lot.
So now most people in the effective altruism community, when they're trying to do good, are doing so via their career in particular.
And there's a much broader range of cause areas.
So animal welfare is a big focus.
And in particular, and I think increasingly, issues that might potentially affect future generations in a really big way, and in particular, kind of risks to there being a future of civilization at all, which Toby talked about when he was on your podcast.
Now, let me test a fact from memory, which I think I got from your original interview with Tim Ferriss on his podcast.
Am I correct in thinking that you were the youngest philosophy professor at Oxford?
Yes. So the precise fact is when I joined the faculty at Oxford, which was age 28,
I'm pretty confident I was the youngest associate professor of philosophy in the world at the time.
Oh, nice. Nice. All right. Well, no doubt you're quickly aging out of that distinction. Have you lost your record yet?
Yeah. Well, I'm an old man at 33 years old now, and I definitely lost that a few years ago.
Well, so it's great to talk to you about these things because, as you know, you've been very
influential on my thinking. You directly inspired me to start giving a minimum of 10%
of my income to charity and also to commit Waking Up as a company to give a minimum of 10% of its
profits to charity. But I'm very eager to have this conversation because it still seems to me
there's a lot of thinking yet to do about how to approach doing good in the world. There may be
some principles that you and I
either disagree about, or maybe we'll agree that we just don't have good enough intuitions
to have a strong opinion one way or another. But it really just seems to me to be territory that
can benefit from new ideas and new intuition pumps. And there's just a lot to be sorted out
here. And I think, as I said,
we will have a structured conversation here, which will break into a series of lessons. And so this
is really an introduction to the conversation that's coming. And all of this relates specifically
to this movement you started, Effective Altruism, and we'll get very clear about what that means and
what it may yet mean. But this does connect to deeper and broader questions like,
how should we think about doing good in the world in general? And what would it mean
to do as much good as possible? And how do those questions connect to questions like,
what sort of person should I be? Or what does it mean to live a truly good life? These are
questions that lie at the core of moral philosophy and at the core of any person's individual attempt
to live an examined life and develop an ethical code and just form a vision of what would be a
good society. I mean, we're all personally attempting to improve our lives, but we're
also trying to converge on a common picture of what it would mean for us to be building a world
that is making it more and more likely that humanity is moving in the right direction.
We have to have a concept of what the goal is here or what a range of suitable goals might be.
And we have to have a concept of when we're wandering into moral error, you know, personally and collectively.
So there's a lot to talk about here, and talking about the specific act of trying to help people,
trying to do good in the world, really sharpens up our sense of the stakes here and the opportunities.
So I'm really happy to be getting into this with you.
Before we get into what effective altruism is, I think we should address a basic skepticism that people have, and even very rich people have, perhaps especially rich people have this. It's a skepticism
about altruism itself, and in particular a skepticism about charity. And I think there are some good reasons to be skeptical about
charity, at least in a local context, and then there's some very bad reasons. And I just want to
lob you some of these reasons and we can talk about them. Because I meet, I would imagine you've
encountered this yourself, I meet some very fortunate people who have immense resources and can do a lot of good in the world
who are fundamentally skeptical about giving to charity. And the bad reason here that I always
encounter is something we might call the myth of the self-made man, the idea that there's somehow an ethically impregnable position to notice all the ways in which you
are responsible for all of your good luck, no matter how distorted this appraisal might be.
You weren't born into wealth, and you made it all yourself, and you don't owe anyone anything. And in
fact, giving people less fortunate than yourself
any of the resources you've acquired is not really helping them in the end. I mean,
you want to teach people to fish, but you don't want to give them fish. There's some
Ayn Randian ethic of radical selfishness combined with a vision of capitalism wherein, you know,
free markets can account for every human problem simply by all of us behaving like atomized selves,
seeking our own happiness. It will be no surprise to people who've listened to me
that I think there's something deeply flawed in this analysis. But what do you do when someone
hits you with this ethical argument that they're self-made
and everyone should aspire to also pull themselves up by their own bootstraps,
and that we falsify something about the project of living a good life by even thinking in
terms of altruism and charity?
I think there's a few things to say here.
So in the first case, the fact that you're a self-made man, I mean, I do disagree with the premise. I can predict 80% of the information about your income just from your place of birth.
about your income just from your place of birth. Whereas, you know, you could be the hardest
working Bangladeshi in the world, but if you're born into extreme poverty in Bangladesh, it's going to be very difficult indeed to become a billionaire. So I agree with
you that that's a myth. But even if we accepted that, the fact that you have rightly earned your
money yourself doesn't mean that you don't have any obligations to help other people. So Peter
Singer's now very famous thought experiment. You walk past a pond.
It's a very shallow pond.
You could easily kind of wade in as deep as you like.
And you can see that there's a child drowning there.
Now, perhaps it's the case that you're an entirely self-made man.
Perhaps it's the case that the suit that you wore, you justly bought yourself.
But that really seems
neither here nor there with respect to whether you ought to try and wade in and save this child
who might be drowning. And I think that's just quite an intuitive position. In fact, this ideal
of self-actualization, of kind of being the best version of yourself that you can be, which is
the kind of admirable version of this otherwise sometimes
quite dark perspective on the world. I think that part of being a self-actualized,
authentically living person is living up to your ideals and principles. And most people in the
world actually want to be helpful, altruistic people. Acting in that way is acting
in accordance with your deepest values; that is living an authentic,
kind of self-actualized life. And then, just on the second point, about whether maybe charity
gets in the way, maybe it's actually harmful because it makes people rely on handouts: well,
here we've got to just think about, you know, market failure, where in the case of public goods or externalities,
markets don't do what they ought to do. And perhaps you want government to step in,
provide police or defense and streetlights or taxes against climate change. And even the most
kind of hardcore libertarian free market proponent should accept that's a good thing to do sometimes.
But then there are also cases of democratic failure too. So what if people are not
protected by functioning democratic governments? That's true for people in poor countries.
That's true for non-human animals. That's true for people who are yet to be born,
people who don't have a vote. The future generations are disenfranchised. So we shouldn't expect markets or government to be taking appropriate care of those individuals who
are disenfranchised by both the market and by even democratic institutions. And so what else is there
apart from philanthropy? Yeah, yeah. So I've spoken a lot about the myth of the self-made man whenever I criticize the notion of free will.
It's just obvious that however self-made you are, you didn't create the tools by which you
made yourself, right? So if you are incredibly intelligent or have an immense capacity for
effort, you didn't create any of that about yourself. Obviously, you didn't pick your
parents, you didn't pick your genes, you didn't pick the environmental influences that determined every
subsequent state of your brain, right? You didn't create yourself. You won some sort of lottery
there, but as you point out, Will, where you were born also was a major variable in your success,
very likely. You didn't create the good luck not to
be born in the middle of a civil war in a place like Congo or Syria or anywhere else, which would
be hostile to many of the things you now take for granted. So there's something, frankly, obscene
about not being sensitive to those disparities. And as you point out, living a good life and being
the sort of person you are right to want to be has to entail some basic awareness of those facts and
a compassionate impulse to make life better for people who are much less fortunate than we are.
I mean, it's just, if your vision of who you want to be doesn't include being connected to the rest
of humanity and having compassion be part of the operating system that orients you toward the
shocking suffering of other people, even when it becomes proximate, even when you're walking past Singer's shallow pond
and you see someone drowning, we have a word for that orientation, and it's sociopathy or
psychopathy. It's a false ethic to be so inured to the suffering of other people that you can just
decide to close your accounts with even having to pay attention to it and all under the rubric
of being self-made.
But none of this is to deny that in many cases, things are better accomplished by business
than by charity, or by government
than by charity, right? So we're not denying any of that. I happen to think that building electric
cars that people actually want to drive, you know, may be the biggest contribution to fighting
climate change, or certainly one of them, and maybe better than many environmental charities
manage to muster. I mean, so there are different levers to pull here to effect change in the world.
But what also can't be denied is that there are cases where giving some of our resources
to people or to causes that need them more than we do
is the very essence of what it means to do good in the world.
That can't be disputed.
And Singer's Shallow Pond
sharpens it up with a cartoon example,
but it's really not such a cartoon
when you think about the world we're living in
and how much information we now have
and how much agency we now have
to affect the lives of other people.
I mean, we're not isolated
the way people were 200 years ago. And
it is uncontroversial to say that anyone who would walk past a pond and decline to save a
drowning child out of concern for his new shoes or his new suit, that person is a moral monster.
And none of us wants to be that sort of person. And what's more, we're right not to want to be that sort of person.
But given our interconnectedness and given how much information we now have about the disparities
in luck in this world, we have to recognize that we're conditioned to act as though
people at a distance from us, both in space and in time, matter less than people who are
near at hand. If that was ever morally defensible, it's becoming less defensible because the distance
is shrinking. We simply have too much information. So there are just so many ponds
that are in view right now. And a response to that is, I think, morally important.
But in our last conversation, Will, you made a distinction that I think is very significant,
and it provides a much better framing for thinking about doing good.
And it was a distinction between obligation and opportunity.
The obligation is Singer's shallow pond argument.
You see a child drowning, you really do have a moral obligation
to save that child, or there's just no way to maintain your sense that you're a good person
if you don't. And then he forces us to recognize that really we stand in that same relation to
many other causes, no matter how distant we imagine them to be. But you favor the opportunity framing of racing in to save children from a burning
house. Imagine how good you would feel doing that successfully. So let's just put that into play
here, because I think it's a better way to think about this whole project.
Yeah, exactly. So as I was suggesting earlier, for most people around the world, certainly in rich countries, if you ran in and rescued that child, that moment would stay with you for your entire life. You would
reflect on that in your elderly years and think, wow, I actually really did something that was like,
that was pretty cool. It's just, it's worth lingering there because everyone listening to us knows down to their toes that that would be, if not the defining moment in their life,
you know, in the top five, there's just no way that wouldn't be one of the most satisfying
experiences. You could live to be 150 years old and that would still be in the top five most satisfying experiences of your life.
And given what you're about to say, it's amazing to consider that and how opaque this is to most
of us most of the time when we think about the opportunities to do good in the world.
Exactly. And yeah, I mean, continuing this, imagine if you did a similar thing kind of several times. So one week you saved someone from a burning building. The next week you saved
someone from drowning. The month after that, you saw someone having a heart attack and you
performed CPR and saved their life too. You'd think, wow, this is a really special life that
I'm living. But the truth is that we have that opportunity to be as much of a moral hero,
in fact, much more of a moral hero every single year of our lives. And we can do that just by
targeting our donations to the most effective charities to help those people who are poorest
in the world. We could do that too if you wanted to choose a career that's going to have a really big impact on the lives of others.
And so it seems very unintuitive because we're in a very unusual place in the world.
You know, it's only over the last couple of hundred years that there's such a wild discrepancy
between rich countries and poor countries,
where people in rich countries have a hundred times the income of the poorest people in the world, and where we have the technology to be able to change the lives of people on other sides of the world, let alone the
kind of technologies to, you know, imperil the entire future of the human race, such as through
nuclear weapons or climate change. And so our moral instincts are just not attuned to that at all. They are just not sensitive to the sheer scale of what an individual is able to achieve if he or she is trying to make a really positive difference in the world.
We think about William Wilberforce or Frederick Douglass or the famous abolitionists, people who kind of campaigned for the end of slavery, and the amount of good they did, or other of these
kind of great moral leaders, and think, wow, these are like really special people because of the
amount they accomplished. I actually think that's just attainable for many, many people around the
world. Perhaps, you know, you're not quite going to be someone who can do as much as contributing
to the abolition of slavery. But
you are someone who can potentially save hundreds or thousands of lives or make a very significant
difference to the entire course of the future to come.
Well, that's a great place to start. So now we will get into the details. Okay, let's get into effective altruism per se. How do you define it at this
point? So the way I define effective altruism is that it's about using evidence and careful
reasoning to try to figure out how to do as much good as possible, and then taking action on that
basis. And the real focus is on the most good. And that's so important
because people don't appreciate just how great the difference in impact between different
organizations is. When we've surveyed people, they seem to think that the best organizations
are maybe 50% better than typical organizations like charities. But that's not really the way
of things. Instead, it's that the
best is more like hundreds or thousands of times better than a typical organization. And we just
see this across the board when comparing charities, when comparing different sorts of actions.
So for global health, you will save hundreds of times as many lives by focusing on
anti-malarial bed nets and distributing them
than by focusing on cancer treatment. In the case of improving the lives of animals in factory farms,
you'll help thousands of times more animals by focusing on factory farms than if you try to help
animals by focusing on pet shelters. If you look at kind of risks to the future of civilization,
man-made risks like novel pandemics are plausibly just a thousand times greater in magnitude
than natural risks like asteroids that we might be more familiar with.
And that just means that focusing not just on doing some amount of good,
but doing the very best, it's so important.
Because it's easy just not to think about how wild this fact is.
So imagine if this were true with consumer goods.
Suppose you want a beer.
At one store, the beer costs $100.
At another, it costs 10 cents.
That would just be completely mad.
But that's the way things are in the world of trying to do good.
It's like a 99.9% off sale, or 100,000% extra free.
By focusing on these best organizations, it's just the best deal you'll ever see in your
life.
And that's why it's so important for us to highlight this.
Okay, so I summarize effective altruism for myself now along these lines.
So this is a working definition, but it captures a few of the areas of focus and the difference
between solving problems with money and solving problems with your time or your choice of career.
In your response to my question, you illustrated a few different areas of focus. So you could be
talking about the poorest people in the world, but you could also be talking about long-term risk
to all of humanity. So the way I'm thinking
about it now is that it's the question of using our time and or money to do one or more of the
following things. To save the greatest number of lives, to reduce the most suffering, or to mitigate the
worst risks of future death and suffering. So then the question of effectiveness is, as you
point out, there's so many different levels of competence and clarity around goals. There may
be very effective charities that are targeting the wrong goals, and there are ineffective charities
targeting the right ones. And this does lend some credence to the skepticism about charity itself
that I referenced earlier. And there's one example here which does a lot of work in illustrating the
problem. And this is something that you discuss in your book, Doing Good Better, which I recommend
that people read. But remind me about the ill-fated Play Pump.
Yeah, so the now infamous Play Pump was a program that got a lot of media coverage in the 2000s and even won a World Bank Development Marketplace Award.
And the idea was identifying a true problem that many villages in sub-Saharan Africa
do not have access to clean drinking water.
And its idea was to install a kind of children's merry-go-round,
one of the roundabouts, the things you push
and then jump on and spin around.
And that would harness the power of children's play
in order to provide clean water. So the thought was, by pushing on this merry-go-round, you would pump up water from the ground, and it would act like a hand pump, providing clean water for the village. And so people loved this idea. The media loved it. They said, you know, providing clean water is child's play, or it's the magic roundabout. They loved to put a pun on it. So it was a real hit.
But the issue is that it was really a disaster,
this development intervention.
So none of the local communities were consulted
about whether they wanted a pump.
They liked the much cheaper, more productive,
easier to use Zimbabwe hand pumps
that were sometimes in fact replaced by these play pumps.
And moreover, in fact, the play pumps were sufficiently inefficient
that one journalist estimated that children would have to play on the pump
25 hours per day in order to provide enough water for the local community.
But obviously children don't want to play on this merry-go-round
all the time. And so it would be left to the elderly women of the village to push this
brightly colored play pump round and round. One of the problems was that it didn't actually
function like a merry-go-round where it would gather momentum and keep spinning. It actually
was just work to push, right? Well, exactly. The point of
a children's merry-go-round is you push it and then you spin. And if it's good, it's very well
greased. It spins freely. But you need to be providing energy into the system in order to
pump water up from the ground. And so it wouldn't spin freely in the same way. It was enormous
amounts of work. Children would find it very tiring. So there was just a fundamental engineering misconception in delivering this pump in the first place. Yeah, absolutely. And then there's just, like, why would you think you can just go in and
replace something that has already been quite well optimized to the needs of the local people?
Seems quite unlikely. Like, if this was such a good idea, you've got to ask the question: why wasn't it already invented? Why wasn't it already popular? There's not a compelling story about it being a public good or something, some reason why it wouldn't have already been developed. And that's to say nothing of the fact that the main issue in terms of water scarcity for people in the poorest countries is access to clean water rather than access to water.
And so instead, organizations like Dispensers for Safe Water install chlorine dispensers at the point of source, so at these hand pumps, where chlorine can easily be added to the jerry cans people use to carry water, sanitizing it. These are much more effective, because the real issue, most of the time, is dirty water rather than access to water.
Okay. So this just functions as a clear example of the kinds of things that can happen
when the story is better than the reality of a charity. And if I recall correctly,
there were celebrities that got behind this and they raised, it had to be tens
of millions of dollars for the play pump. Even after the fault in the very concept was revealed,
they persisted. I mean, they kind of got locked into this project and I can't imagine it persists
to this day, but they kept doubling down in the face of the obvious reasons
to abandon this project. I mean, this included kids getting injured on these things and kids
having to be paid to run them. And it was a disaster any way you look at it. So this is
the kind of thing that happens in various charitable enterprises. And this is the kind of thing that if you're
going to be effective as an altruist, you want to avoid. Yeah, absolutely. And just on whether they still continue: I haven't checked in the last few years, but a few years ago when I did,
they were still going. And they were funded mainly by corporations like Colgate Palmolive,
and obviously in a much diminished capacity
because many of these failures were brought to light.
And that was a good part of the story.
But what it does illustrate is a difference
between the world of nonprofits and the business world
where in the business world,
if you make a really bad product,
then well, at least if the market's functioning well,
then the company will go out of business.
You just won't be able to sell it because the beneficiaries of the product are also the people paying for it.
But in the case of non-profits, it's very different. The beneficiaries are different
from the people paying for the goods. And so there's a disconnect between how well can you
fundraise and how good is the program that you're implementing. And so the sad fact is that bad
charities don't die, not nearly enough.
Yeah, actually, that brings me to a question about perverse incentives here that
I do think animates the more intelligent skepticism. And it is on precisely this point
that charities, good and bad, can be incentivized to merely keep going. I mean, just imagine a charity
that solves its problem. If you're trying to, let's say, eradicate malaria, and you raise hundreds of millions of dollars to that end, what happens to your charity when you actually eradicate malaria? We're obviously not in that position
with respect to malaria, unfortunately, but there are many problems where you can see
that charities are never incentivized to acknowledge that significant progress has
been made, and the progress is such that it calls into question whether this charity
should exist for much longer. There may be some, but I'm unaware of charities that are explicit about their aspiration to put themselves out of business because they're so effective. Yeah. So I have a great example of this going
wrong. So one charity I know of is called ScotsCare, and it was set up in the 17th century after the personal union of England and Scotland.
And there were many Scots who migrated to London, and they were the poor, the indigent, in London. And so it made sense for there to be a nonprofit helping make sure that poor Scots had a livelihood, were able to feed themselves, and so on.
Is it the case that in the 21st century,
poor Scots in London is the biggest global problem?
No, it's not.
Nonetheless, ScotsCare continues to this day
over 300 years later.
Are there examples of charities that explicitly
would want to put themselves out of business?
I mean, Giving What We Can, which you've joined, is one. Our ideal scenario is a situation where the idea that you would join a community because you're donating 10% is just weird, wild. Like, if you become vegetarian, it's very rare that you join a vegetarian society. Or if you decide not to be a racist, or decide not to be a liar, it's not like you join the no-liars society or the no-racists society.
And so that is what we're aiming for, is a world where it's just so utterly common sense
that if you're born into a rich country, you should use a significant proportion of your
resources to try and help other people, impartially considered. The idea of needing a community, needing to be part of this kind of club or group of people, just wouldn't even cross your mind. So the day that Giving What We Can is not needed is a very happy day, from my perspective.
So let's talk about any misconceptions that people might have
about effective altruism, because the truth is I've had some myself, even having prepared to have conversations with you and your colleague Toby Ord, who's also been on the podcast. My first notion of effective altruism, very much inspired by Peter Singer's Shallow Pond, was that it really was just a matter of focusing on the poorest of the poor in the developing world, almost by definition, and that's the long and the short of it: you give as much as you can possibly sacrifice, but the minimum bar would be 10% of your income. What doesn't that capture
about effective altruism? Yeah, thanks for bringing that up, because it is a challenge we've faced
that the ideas about effective altruism that spread are the most memetic ones, and not necessarily those that most accurately capture where the movement is, especially today. So as you say, many people
think that effective altruism is just about earning as much money as possible to give to well-recommended global health and development charities. But I think there are at least three ways in which that misconstrues things. One is the fact that there's just a wide variety of
causes that we focus on now. And in fact, among the most engaged people in effective altruism, the biggest focus now is future generations, making sure that things go well for the very many future generations to come, such as by focusing on the existential risks that Toby talks about, like man-made pandemics, like AI.
Animal welfare is another cause area. It's by no means the majority focus, but it is a significant minority focus as well. And there are just lots of people trying to get better evidence and understanding of these issues, and a variety of other issues too. So voting reform is something that I've funded and championed to an extent. I'm really interested in more people working on
the risk of war over the coming century. And then secondly, as well as donating, which is a very accessible and important way of doing good, the large majority of people within the effective altruism community are trying to make a difference not primarily via their donations, though often they do donate too, but primarily through their career choice, by working in areas like research, policy, and activism. And then, just as a kind of framing in general,
we just really don't think of effective altruism as a set of recommendations, but rather like a research project and methodology.
So it's more like aspiring towards the scientific revolution than any particular theory.
And what we're really trying to do is to do for the pursuit of good what the scientific revolution did for the pursuit of truth. It's an ambitious goal, but trying to make the pursuit of good this more rigorous, more scientific enterprise. And
for that reason, we don't see ourselves as this kind of set of claims, but rather as a living,
breathing, and evolving set of ideas. Yeah, yeah. I think it's useful to distinguish at least two levels here. One is the specific question of whether an individual cause or an individual charity is a good one.
And how do we rank order our priorities?
And all of that is getting into the weeds of just what we should do with our resources.
And obviously, that has to be done.
And I think the jury is very much out on many of those questions.
And I want to get into those details going forward here. But the profound effect that your work has had on me thus far arrives at this other level of just the stark
recognition that I want to do good in the world by default, and I want to engineer my life such that
that happens whether I'm inspired or not. The crucial distinction for me has been to see that there's the good feeling
we get from philanthropy and doing good, and then there's the actual results in the world.
And those two things are only loosely coupled. This is one of the worst things about us that
we need to navigate around, or at least be aware of as we live our lives: human beings tend not to be the most disturbed by the most harmful things we do, we tend not to be the most gratified by the most beneficial things we do, and we tend not to be the most frightened by the most dangerous risks we run, right? And so it's just, we're very easily distracted by
good stories and other bright, shiny objects. And the framing of a problem radically changes
our perception of it. So the effect, you know, when you came on my podcast four years ago,
was for me to just realize, okay, well, now we're talking about GiveWell's most effective
charities and the Against Malaria Foundation is at the top. I recognize in myself that I'm just
not very excited about malaria or bed nets. The problem isn't the sexiest for me. The remedy isn't
the sexiest for me. And yet I rationally understand that if I want to save human lives, this is, dollar for dollar, the cheapest way to do it.
So the epiphany for me is, I just want to automate this and just give every month to this charity without having to think about it. And so, you know, that is gratifying to me to some degree,
but the truth is, I almost never think about malaria or the Against Malaria Foundation
or anything related to this project. And I'm doing the good anyway because I just decided
to not rely on my moral intuitions day to day or my desire to rid the world of malaria; I just decided to automate it. The recognition is that committing in a way that really takes it offline, so that you no longer have to keep being your better self on that topic every day of the week, is just wiser and more effective: you decide in your clearest moment of deliberation what you want to do, and then you build the structure to actually do that thing.
And that's just one of several distinctions that, you know, you have brought into my understanding
of how to do good.
Yeah, absolutely.
I mean, we've just got to recognize that we are these fallible, imperfect creatures. Donating is much like paying into your pension or something. It's something you might think, oh, I really ought to do that, but it's just hard to get motivated by.
And so we need to exploit our own irrationality.
And I think that comes in two stages.
First, building up the initial motivation. You can get that from, perhaps, a feeling of moral outrage, or just a real kind of yearning to do something. So in my own case, when I was deciding how much I should try and commit to give away over the course of my life, I looked up images of children suffering from horrific tropical diseases. And that really stayed with me, kind of gave me that initial motivation.
Or I still get that if I read about the many close calls we had over the course of the 20th century, where we almost had a nuclear holocaust. Or if I learn more history and think about what the world would have been like if the Nazis had won the Second World War and created a global totalitarian state. Or fiction: I was recently reading 1984, and again, these are ways of just thinking about how bad and different the world could be that can really create that sense of moral urgency. Or just the kinds of moral outrages we see on the news all the time. And then the second stage is how we direct that. And so in your own case, just saying,
yes, every time I have a podcast, I donate three and a half thousand dollars and it saves a life.
Very good way of doing that. Similarly, you can have a system where, every time a paycheck comes in, 10% of it doesn't even enter your bank account, or at least immediately leaves it to go to some effective charity that you've carefully thought about. And there are other hacks too. So public commitments are a really big thing now.
I think there's no way I'm backing out of my altruism now. Too much of my identity is
wrapped up in that now. So even if someone offered me a million pounds and I could skip town,
I wouldn't want to do it. It's part
of who I am. It's part of my social relationships. And that's fairly powerful too.
Actually, in a coming chapter here, I want to push back a little bit on how you are personally
approaching giving, because I think I have some rival intuitions here. I want to see how they
survive contact with your sense of how you should live. There's actually a kind of related point
here where I'm wondering, when we think of causes that meet the test of effective altruism,
they still seem to be weighted toward some obvious extremes, right? Like when you look at the value of a
marginal dollar in sub-Saharan Africa or Bangladesh, you get so much more of a lift in human well-being for your money than you seem to in a place like the United States or the UK, that by default you generally have an argument for doing good elsewhere rather than
locally. But I'm wondering if this breaks down for a few reasons. So, I mean, just take an example
like the problem of homelessness in San Francisco, right? Now, leaving aside the fact that we don't
seem to know what to do about homelessness, it appears to be a very hard problem to solve. You can't just build shelters for the mentally ill and substance abusers and call it a day, right? I mean, they quickly find that even they don't want to be in those shelters. And, you know, they're back out on the streets. And so you have to figure out what services you're going to provide. And there's all kinds of bad incentives and moral hazards here that when you're the one city that
does it, well, then you're the city that's attracting the world's homeless. But let's
just assume for the sake of argument that we knew how to spend money so that we could solve this
problem. Would solving the problem of homelessness in San Francisco stand a chance of rising to near the top of our priorities, in your view?
Yeah, so it would all just depend on the cost to solve homelessness and how that compared with our other opportunities. So in general, it's going to be the case that the very best opportunities to improve lives are going to be in the poorest countries, because the very best ways of helping others have not yet been taken. So malaria is still rife there, but it was wiped out in the US, certainly by the early 20th century. It's an easy problem to solve. It's very cheap.
And when we look at rich countries, the problems that are still left are, you know, the comparatively
harder ones to solve, for whatever reason. So in the case of homelessness, I'm not sure about the original source of this fact, but I have been told... So yeah, for those who haven't ever lived in the Bay Area, the problem of homelessness is horrific there. There are just people with severe mental health issues and clear substance abuse, like, everywhere on the street. It's so prevalent.
It just amazes me that one of the richest countries in the world, in one of the richest
places within that country, is unable to solve this problem. But I believe at least that in terms
of funding at the local level, there's about $50,000 spent per homeless person in the Bay Area.
And what this suggests is that the problem is not
to do with a lack of finances. And so if you were going to contribute more money there,
it's unlikely to make an additional difference. Perhaps it's some perverse-incentives effect,
perhaps it's government bureaucracy, perhaps it's some sort of legislation. I don't know.
It's not an issue I know enough about. But precisely because the US is so rich, and the San Francisco Bay Area is so rich, if this were something where we could turn money into a solution to the problem, then more than likely it would have happened already. But that's
not to say we'll never find issues in rich countries where you can do an enormous amount of
good. So Open Philanthropy, which is kind of a core effective altruist foundation,
one of its program areas is criminal justice reform
that it started, I believe, about five years ago.
And it really did think that the benefits to Americans it could provide by funding changes to legislation to reduce the absurd rates of over-incarceration in the US seemed comparable to the best interventions in the poorest countries. For context, the US incarcerates five times as many people as the UK does on a per-person basis, and there's a lot of evidence suggesting you could reduce that very significantly without changing rates of crime.
Of course, this has now become a much more widely supported issue, so I believe they're finding it harder now to make a difference by funding organizations that wouldn't otherwise have been funded.
But this is at least one example where you can get things that come up
that just for whatever reason have not yet been funded, kind of new opportunities, where you can do as much good.
It's just that I think they're going to be comparatively much harder to find.
Yeah, I think that this gets complicated for me when you look at just what we're going to
target as a reduction in suffering. I mean, it's very easy to count dead people, right? So if we're
just talking about saving lives, that's a pretty easy thing to calculate. If we can save more lives
in country X over country Y, well, then it seems like it's a net good to be spending our dollars
in country X. But when you think about human suffering, and about how so much of it is comparative, the despair of being someone who has fallen through the cracks in a city like San Francisco could well be much worse. I don't know what data we have on this, but there's certainly a fair amount of anecdotal testimony here: while it's obviously terrible to be poor in a country like Bangladesh, and there are many reasons to want to solve that problem, when you look by comparison at homeless people on the streets of San Francisco, they're not nearly as poor as the poorest people in Bangladesh, of course, and nor are they politically oppressed in the same way. I mean, by global standards, they're barely oppressed at all.
But it wouldn't surprise me, if we could do a complete psychological evaluation or just trade places with people in each condition, to discover that the suffering of a person who is living in one of the richest cities in the world and is homeless and drug-addicted and mentally ill, or any combination off that menu of despair, is actually the worst suffering on earth. And again, we just have to stipulate that we could solve this problem dollar for dollar in a way that, we admit, we don't know how to at the moment. It seems like just tracking the GDP in each place, and the amount of money it would take to deliver a meal or get someone clothing or shelter, the power-of-the-marginal-dollar calculation, doesn't necessarily capture the deeper facts of the case, or at least that's my concern.
So I'd actually agree with you on the question of, take someone who, yeah, they're mentally unwell,
they have drug addictions, they're homeless in the San Francisco Bay Area, how bad is their day?
And then take someone living in extreme poverty in India or sub-Saharan Africa, how bad is their
typical day? Yeah, I wouldn't want to make a claim that that homeless person in the US has a better
life than the extreme poor. I think it's not so hard to just hit rock bottom
in terms of human suffering.
And I do just think that the homeless in the Bay Area seem to have really terrible lives. And so the difference in how promising it is as a cause is much more to do with this question of whether the low-hanging fruit has already been taken. Just think about the most sick you've ever been and how horrible that was. And now think about having that for months, having malaria, for example, when you could have avoided it for a few dollars.
That's an incredible fact.
And that's where the real difference is, I think: in the cost of solving a problem, rather than necessarily in the per-person suffering. Because while rich countries are in general happier than poorer countries, especially in the US, which has such a high variance in life outcomes, the lives of the worst-off people can easily be much the same.
Yeah, I guess there are some other concerns I have here, and this speaks to a deeper problem with consequentialism, which is our orientation here, you know, not exclusively, and people can mean many things by that term. But there's just a problem in how you keep score, because obviously there are bad things that can happen which have massive silver linings, which have good consequences in the end, and there are apparently good things that happen that actually have bad consequences elsewhere or in the fullness of time. And it's hard to know when you can actually assess the net consequences of any action, how you get to the bottom line. But when I think about the knock-on
effects of letting a place like San Francisco become a slum, effectively, right? Like, you just
think of, like, the exodus in tech from California at this moment. I don't know how deep or sustained
it'll be, but I've lost count of the number of people in Silicon Valley who I've heard are
leaving California at this point. And the homelessness in San Francisco is very high
on the list of reasons why. That strikes me as a bad outcome that has far-reaching significance for
society. And again, it's the kind of thing that's not captured by just counting bodies or just
looking at how cheap it is to buy bed nets. And I'm sort of struggling to find
a way of framing this that is fundamentally different from Singer's Shallow Pond that
allows for some of the moral intuitions that I think many people have here, which is that
there's an intrinsic good in having a civilization that is producing the most abundance possible. I mean, we want a highly
technological, creative, beautiful civilization. We want gleaming cities with beautiful architecture.
We want institutions that are massively well-funded, producing cures for diseases, rather than just things like bed nets, right? And we want beautiful art. There are things we want, and I think there are things we're
right to want, that are only compatible with the accumulation of wealth in certain respects.
On one framing, I mean, on Singer's framing, those intuitions are just
wrong, or at least they're premature, right? I mean, we have to save the last child in the last
pond before we can think about funding the Metropolitan Museum of Art, right, on some level.
And many people are allergic to that intuition for reasons that I understand, and I'm not sure
that I can defeat Singer's argument here, but I have this image that essentially we have a lifeboat
problem, right? You and I are in the boat, we're safe, and then the question is how many people
can we pull in to the boat and save as well? And as with any lifeboat, there's a problem of capacity. We can't save
everyone all at once, but we can save many more people than we've saved thus far. But the thing
is, we have a fancy lifeboat, right? I mean, civilization itself is a fancy lifeboat. And
there are people drowning, and they're obviously drowning, and we're saving some of them. And you
and I are now arguing that we can save many, many more and we should save many,
many more.
And anyone listening to us is lucky to be safely in this lifeboat with us.
And the boat is not as crowded as it might be, but we do have finite resources in any
moment.
And the truth is, because it's a fancy lifeboat, we are spending some of those resources on things other than reaching over the side and pulling in the next drowning person.
So there's a bar that serves very good drinks, and we've got a good internet connection so we
can stream movies. And while this may seem perverse, again, if you extrapolate from here,
you realize that
I'm talking about civilization, which is a fancy lifeboat.
And there's obviously an argument for spending a lot of time and a lot of money saving people
and pulling them in.
But I think there's also an argument for making the lifeboat better and better so that we have more smart, creative people incentivized to spend some
time at the edge, pulling people in with better tools, tools that they only could have made had
they spent time elsewhere in the boat making those tools. And this moves to the larger topic of just how we envision building a good society, even while there are
moral emergencies right now somewhere that we need to figure out how to respond to.
Yeah, so this is a crucially important set of questions. So the focus on knock-on effects
is very important. So again, let's just take the example of saving a life. The effect isn't just that one life saved, because that person then goes on and does stuff.
They make the country richer.
Perhaps they go and have kids.
Perhaps they will emit CO2.
That's a negative consequence.
They'll innovate.
They'll invent things.
Maybe they'll create art.
There's this huge stream, basically from now until the end of
time, of consequences of you doing this thing. And it's quite plausible that the knock-on effects,
though much harder to predict, are much bigger effects than the short-term effects,
the benefits of the person whose life you saved or who you've benefited.
In the case of homelessness in the Bay Area versus extreme poverty in a poor country,
I'd want to say that if we're looking at knock-on effects of one,
we want to do the same for both.
So, you know, one thing I worry about
over the course of the coming decades,
but also even years,
is a possibility of a war between India and Pakistan.
But it's a fact that rich democratic countries seem not to go to war with each other. So one knock-on effect of, you know, saving lives or helping development in India
is perhaps we get to that point where India's rich enough that it's not going to want to go to war
because, you know, the cost-benefit doesn't pay out in the same way. That would be another kind
of potential good knock-on effect. And that's
not to say that the knock-on effects favor the extreme poverty intervention compared to the
homelessness. It's just that there are so many of them, it's very, very hard to understand
how these play out. And I think, actually, you then mentioned that we want to achieve some of the great things, to reach the kind of highest apogees of art and development. I mean, a personal thing I'm sad that I will never get to see is the point in time where we just truly understand science, where we have actually figured out the fundamental laws, especially the fundamental physical laws. But also just great experiences too: people having peaks of happiness that make the very greatest peaks of joy and ecstasy of the present day seem basically insignificant in comparison. That's something that really I do think is important.
But I think for all of those things,
once you're then starting to take that seriously
and take knock-on effects seriously,
that's the sort of reasoning that leads you to start thinking about
what I call long-termism,
which is the idea that the most important aspect of our actions
is the impact we have over the very long run. And that will make us want to prioritize things like ensuring we don't have some truly massive catastrophe, as a result of a nuclear war or a man-made pandemic, that could derail this process of continued economic and technological growth that we seem to be undergoing. Or it could make us want to avoid certain kinds of just very bad value states, like the lock-in of a global totalitarian regime, another thing that I'm particularly worried about in terms of the future of humanity. Or perhaps it is just that we're worried that technological and economic growth will slow down, and what we want to do is spur continued innovation into the future.
And I think there actually are just really good arguments for that. But I would be surprised if, when that is your aim, the best way of pursuing it goes via some route such as focusing on homelessness in the Bay Area, rather than trying to aim at those ends more directly. Okay, well, I think we're going to return to this concept of the fancy lifeboat
at some point, because I do want to talk about your personal implementation of effective altruism
in a subsequent lesson. But for the moment, let's get into the details of how we think about choosing a cause in the next
chapter. Okay, so how do we think about choosing specific causes? I've had my own adventures and
misadventures with this since I took your pledge. Before we get into the specifics, I just want to
point out a really wonderful effect on my psychology. I mean, I've always been, I think, by real-world standards, fairly charitable. So giving to organizations that inspire me, or which I think are doing good work, is not a foreign experience for me. But since connecting with you and now since taking the pledge, I'm now
aggressively charitable. And what this has done to my brain is that there is a pure pleasure
in doing this. And there's a kind of virtuous greed to help that gets kindled. And rather than
seeing it as an obligation, it really feels like an opportunity.
I mean, just you want to run into that building and save the girl at the window.
But across the street, there's a boy at the window and you want to run in over there too.
And so this is actually a basis for psychological well-being. I mean, it makes me happy to put my
attention in this direction.
It's the antithesis of feeling like an onerous obligation. So anyway, I'm increasingly sensitive
to causes that catch my eye and I want to support, but I'm aware that I am a malfunctioning robot
with respect to my own moral compass. As I said, I know that I'm not as excited about bed nets to stave off
malaria as I should be. And I'm giving to that cause nonetheless, because I just recognize that
the analysis is almost certainly sound there. But for me, what's interesting here is when I
think about giving to a cause that really doesn't quite meet the test, well, that then
achieves the status for me of a kind of guilty pleasure. Like I feel a little guilty that I gave
that much money to the homeless charity because Will just told me that that's not going to meet
the test. So, okay, that's going to have to be above and beyond the 10% I pledged to the most
effective charities. And so just having to differentiate the charitable donations that meet the test and those that don't is an interesting
project psychologically. I don't know, it's just a very different territory than I've ever been
with respect to philanthropy. But so this raises the issue, so one of these charities is newly
formed, right? So it does not yet have a long track record.
I happen to know some of the people who created it.
How could you fund a new organization with all these other established organizations
that have track records that you can assess competing for your attention?
First thing I want to say is just, does this count towards the pledge?
And one notion I definitely want to disabuse people
of is that we think of ourselves as the authority on what is effective. These are our best
guesses. GiveWell and other organizations have put enormous amounts of research into this,
but these are still estimates. There are plenty of things you can disagree with.
And it's actually quite exciting often to have someone come in and start disagreeing with us
because maybe we're wrong and that's great.
We can change our mind and have better beliefs.
And the second thing is that early stage charities absolutely can compete with charities with a more established track record.
In just the same way, if you think about financial investment, investing in bonds or the stock market is a way of making a return, but so is investing in startups.
And if you had the view that you should never invest in startups, then that would definitely
be a mistake. And actually quite a significant proportion of GiveWell's expenditure each year
is on early stage nonprofits that have the potential in the future to become top recommended
charities. And so a set of questions I would ask for any organization
I'm looking at is what's the cause that it's focused on? What's the program that it's implementing?
And then who are the people who are kind of running that program? But the kind of background
is that there's just some things we know do enormous amounts of good and have this enormous
amount of evidence for them. And so I feel like we want to be focusing on things where
either there's like very promising evidence and we could potentially get more,
or it's something where in the nature of the beast, we cannot get very high quality evidence,
but we have good compelling arguments for thinking that this might be super important.
So, you know, funding clean energy innovation, funding, you know, new developments in carbon capture and storage or nuclear power or something.
It's not like you can do a randomized controlled trial on that, but I think there's good kind of theoretical arguments for thinking that might be an extremely good way of combating climate change.
It's worth bearing in mind that saying something is the very best thing you can do with your money is an extremely high bar. So, you know, if there are tens of
thousands of possible organizations, there can only be one or two that have the biggest bang for the
buck. All right. Well, it sounds like I'm opening a guilty pleasures fund to run alongside the
Waking Up Foundation. I'm very glad that they're pleasures. I'm glad that you are sufficiently
motivated. You know, it's a very good instinct that you find out about these
problems in the world, which are really bad and are motivated to want to help them.
And so I'm really glad you think of them as pleasures. I don't think you should be
beating yourself up, even if it doesn't seem like the very most optimal thing.
Yeah, yeah. No, I'm not. In fact, I have an even guiltier pleasure to report, which,
you know, at the time I did it, you know, this is not through a charity. This is just a, you know,
personal gift. And this does connect back to just the kind of lives we want to live and how that
informs this whole conversation. I remember I was listening to the New York Times Daily podcast, and this was when the COVID pandemic was really peaking in the U.S., and everything seemed to be in free fall.
They profiled a couple who had a restaurant in, I think it was in New Orleans, and they had an
autistic child, and they were, you know, everyone knows that restaurants
were among the first businesses crushed by the pandemic for obvious reasons. And it was just a
very affecting portrait of this family trying to figure out how they were going to survive
and get their child the help she, I think it was a girl, needed. So it was exactly the little girl fell down the well
sort of story compared to the genocide that no one can pay attention to because genocides are
just boring. And so I was completely aware of the dynamics of this. Helping these people could not survive comparison with simply buying yet more bed nets. And yet, the truth is,
I really wanted to help these people, right? So, you know, just sent them money out of the blue.
And it feels like an orientation that, I mean, there are two things here that rise to the defense of this kind of behavior. It feels like an orientation that I want to support in myself
because it does seem like a truly virtuous source of mental pleasure.
I mean, it's better than almost anything else I do, spending money selfishly.
And psychologically, it's both born of a felt connection and it kind
of ramifies that connection. And there's something about just honoring that bug in my moral hardware
rather than merely avoiding it that seems like it's leading to just finding greater happiness
in helping people in general, in the most effective ways,
in middling effective ways. Feeling what I felt doing that is part of why I'm talking to you now,
trying to truly get my philanthropic house in order. So it sort of seems all of a piece here.
And I do think we need to figure out how to leverage the salience of connection to other
people and the pleasure of doing good.
And if we lose sight of that, if we just keep saying that you can spend $2,000 here, which
is better than spending $3,000 over there, completely disregarding the experience people
are having engaging with the suffering of others, then I feel
like something is lost. And I guess there's another variable I would throw in here.
You know, this wasn't an example of this. This wasn't a local problem I was helping to solve,
but had it been a local problem, had I been offered the opportunity to help my neighbor,
you know, at greater than rational expense, that might have been the right thing to do. I mean,
again, it's falling into the guilty pleasure bin here compared to the absolutely optimized,
most effective way of relieving suffering. But I don't know, I just feel like there's something
lost if we're not in a position to honor a variable like locality ever. We're not only
building the world or affecting the world here,
we're building our own minds. We're building the very basis by which we would continue to do good
in the world in coming days and weeks and months and years. Yeah. So, I mean, I essentially
completely agree with you and think it's really good that you supported that family. And yeah, it reminds me
in my own case, something that stayed with me. So I lived in Oakland, California for a while
in a very poor, predominantly black neighborhood. And I was just out on a run and a woman kind of
comes up to me and asks if I can stop and help for a second. And I thought she was just going to
want help carrying groceries or something, which would be fine. It turns out she wanted me to move her couch, like, all the way down the street. It took like two hours, and that was out of my working day as well, over lunch. And I just don't regret the use of that time at all. And why is that? Even from a rational perspective, I'm not merely saying that I shouldn't beat myself up or something. And I think it's because, most of the time, we're just not facing the question of, like, what individual action do we do in any particular case, which is the kind of model philosophy has typically focused on, kind of act consequentialism. That's not typically the decision we face. We face these much larger decisions, like what career to pursue or something. Sometimes those are more like actions, but we also face the question of just what person to be, what kind of motivations and dispositions do I want to have. And I think the idea of me becoming this, like, utility-maximizing robot that is utterly cold and calculating all the time, I think, is certainly not possible for me,
given just the fact that I'm an embodied human being, but also probably not desirable either.
I don't think, you know, I don't think that an effective altruism movement would have started had we all been these
cold utility maximizing robots. And so I think cultivating a personality such that you do get
joy and reward and motivation from being able to help people and get that feedback. And that is
like part of what you do in your life, I actually think can be the best way of living a life when
you consider your life as a whole.
And in particular, it's not necessarily,
doing those things does not necessarily trade off
very much at all,
can perhaps even help with the other things that you do.
So in your case, you get this reward
from supporting this poverty-stricken family
with a disabled child,
or get reward from helping people in your local community, which I'm presuming you can channel, and which helps sustain the motivation to do things that might seem much more alien or just harder to empathize with. And I think that's okay. I think we should accept that, and in fact it should be encouraged. So yeah, I think it's very important, once we take these ideas outside of the philosophy seminar room and actually try to live them, to appreciate the instrumental benefits of
doing these kind of everyday actions, as long as it ultimately helps you stand by this commitment
to at least in part try and do just what we rationally, all things considered, think is
going to be best for the world.
Yeah, so you mentioned the variable of time here, and this points to another misconception about effective altruism, that it's only a matter
of giving money to the most effective causes. You spent a lot of time thinking about how to
prioritize one's time and think about doing good over the course of one's life based on how one spends one's time. So in our next chapter, let's
talk about how a person could think about having a career that helps the world.
Okay, so we're going to speak more about the question of giving to various causes and how to
do good in the world in terms of sharing the specific
resource of money. But we're now talking about one's time. How do you think about
time versus money here? And I know you've done a lot of work on the topic of how people can think
about having rewarding careers that are net positive. And you have a website, 80,000 hours, that you might want to
point people to here. So just let's talk about the variable of time and how people can spend it
to the benefit of others. Great. So the organization is called 80,000 hours because
that's the typical number of hours that you work in the course of your life. If that's a, you know, approximately 40 year career, working 40 hours a week, 50 weeks a year.
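[Editor's note: the 80,000 figure is just the product of the round numbers Will cites. A minimal sketch of the arithmetic, using his stated assumptions:]

```python
# Back-of-envelope: hours worked over a typical career,
# using the rough figures cited in the conversation.
hours_per_week = 40
weeks_per_year = 50
career_years = 40

career_hours = hours_per_week * weeks_per_year * career_years
print(career_hours)         # 80000 hours

# Spending even 1% of that time on the career decision itself:
print(career_hours // 100)  # 800 hours
```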
So we use that to illustrate the fact that your choice of career is probably,
altruistically speaking, the biggest decision you ever make. It's absolutely enormous. Yet
people spend very little of their time really thinking through that question. I mean, you might think if you go out for dinner, then you spend maybe 1% of the
time that you would spend at dinner thinking about where to eat, like a few minutes or something.
But spending 1% of 80,000 hours on, you know, your career decision on what you should do,
that would be 800 hours, enormous amount of time. But I mean, why did I do
philosophy? Well, I, you know, I liked it at school. I could have done maths, but my dad did maths. I
wanted to differentiate myself from him. Like I didn't have a very good reasoning process at all,
because we generally don't, you know, pay this nearly enough attention. And certainly when it
comes to doing good, you have an enormous
opportunity to have a huge impact through your career. And so what 80,000 Hours does via its
website, via podcast, and via a small amount of one-on-one advising, is try to help people
figure out which careers are such that they can have the biggest impact. And in contrast,
you know, the question of what charities to donate to is exceptionally hard. This is even harder
again, because firstly, you'll be working at many different organizations over the course of your
life, probably, not just one. And secondly, of course, there's a question of personal fit.
Some people are good at some things and not others.
And so how should you think about this? Well, the most important question,
I think, is the question of what cause to focus on. And that involves big picture worldview judgments and, you know, philosophical questions too. So we tend to think of the question of cause
selection by using a few heuristics.
And by a cause, I mean a big problem in the world like climate change or gender inequality
or poverty or factory farming or pandemics, possibility of pandemics or AI lock-in of
values.
We look at those causes in terms of how important they are, that is, how many individuals are
affected and by how much; how neglected they are, which is how many resources are already going towards them;
and then finally how tractable they are, how much we can make progress in this area.
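[Editor's note: these three heuristics are often combined multiplicatively when comparing causes. A hypothetical sketch; the cause names and scores below are illustrative placeholders, not actual assessments:]

```python
# Illustrative only: combining importance, neglectedness, and
# tractability multiplicatively to rank causes. The scores are
# made-up placeholders on a 1-10 scale, not real figures.
causes = {
    # name: (importance, neglectedness, tractability)
    "pandemic preparedness": (9, 7, 6),
    "climate change": (8, 3, 6),
    "example cause": (5, 5, 5),
}

def score(importance, neglectedness, tractability):
    """Simple multiplicative combination of the three heuristics."""
    return importance * neglectedness * tractability

ranked = sorted(causes, key=lambda c: score(*causes[c]), reverse=True)
for name in ranked:
    print(name, score(*causes[name]))
```

A multiplicative combination captures the idea that a cause scoring zero on any one dimension (no one affected, fully funded, or completely intractable) scores zero overall.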
And in significant part because of those heuristics, effective altruism has chosen the focus areas it has, which includes pandemic preparedness,
artificial intelligence,
climate change, poverty, farm animal welfare, and potentially some others as well, like
improving institutional decision making, and some areas in scientific research. And so that's by far
the biggest question, I think, because that really shapes the entire direction of your career.
And I think, you know, the philosophical assumptions you put
in can result in enormous differences in impact. Like, do you think animals count at
all, or a lot? That would make an enormous difference in terms of whether you ought to be
focusing on that. Similarly, like what weight do you give to future generations versus present
generations? Potentially you can do hundreds of times as much good in one cause area as you can in another. Yeah, and then within that,
the question of where exactly to focus is going to just depend a lot on the particular cause area
where different causes just have different bottlenecks. We tend to find that, you know,
working at the best non-profits is often great. Research is often great, especially in kind of new, more nascent causes
like safe development of artificial intelligence
or pandemic preparedness.
Often you need research.
Policy is often a very good thing to focus on as well.
And in some areas, especially where money is the real bottleneck,
then trying to do good through your donations primarily and therefore trying to take a job that's more lucrative, can be the way to go
too. Yeah, that's a wrinkle that is kind of counterintuitive to people. The idea that the
best way for you to contribute might in fact be to pursue the most lucrative career that you might be especially well-placed to pursue.
And it may have no obvious connection to doing good in the world, apart from the fact that you
are now giving a lot of your resources to the most effective charities. So if you're a rock star,
or a professional soccer player, or just doing something that you love to do,
and you have other reasons why you want to do it, but you're also making a lot of money that you can
then give to great organizations, well then it's hard to argue that your time would be better spent
working in the non-profit sector yourself, or doing something where you wouldn't be
laying claim to those kinds of resources.
Yeah, that's right. And so it can be.
Within the effective altruism community,
it's now, I think, a minority of people
who are trying to do good in their career
via the path of what's called earning to give.
And again, it depends a lot on the cause area.
So what's the, you know, how much money is there
relative to the kind of size of the cause already?
And, you know, in the case of things like scientific research
or AI or pandemic preparedness,
there's clearly just a lot more demand
for altruistically-minded, sensible, competent people
working in these fields than there is money.
Whereas in the case of global health and development,
there's just these interventions and programs that we could scale up with hundreds of millions, billions of dollars that we just know work very well.
And there, money is kind of more of the bottleneck.
And so kind of going back to these misconceptions about effective altruism, this idea of earning to give, again, it's very memetic.
People love how counterintuitive it is.
And it is one of the things we believe, but it's definitely a kind of minority path,
especially if you're focused on some of these areas where there already is a lot of potential
funding. And it's more about just how many people we have working in these areas.
Hmm. This raises another point where the whole culture around
charity is not optimized for attracting the greatest talent. We have a double standard here,
which many people are aware of. I think it's most clearly brought out by Dan Pallotta. I don't know
if you know him. He gave a TED Talk on this topic, and he organized some of the bike rides across America
in support of various causes. I think the main one was AIDS. He might have organized a cancer
one as well. But these are ventures that raised, I think, hundreds of millions of dollars. And
I think he was criticized for spending too much on overhead. But it's a choice where you
can spend less than 5% on overhead and raise $10 million, or you could spend 30% on overhead and raise
$400 million. Which should you do? And it's pretty obvious you should do the latter
if you're going to use those resources well. And yet there's a culture that prioritizes
having the lowest possible overhead.
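[Editor's note: the overhead comparison Sam cites is worth making concrete. A quick sketch of the net funds left for programs under each choice, using his rough figures:]

```python
# Net funds available to the cause under each fundraising choice,
# using the approximate figures from the conversation.
def net_raised(gross, overhead_pct):
    """Dollars left for programs after overhead (integer percent)."""
    return gross * (100 - overhead_pct) // 100

low = net_raised(10_000_000, 5)     # 5% overhead on $10M raised
high = net_raised(400_000_000, 30)  # 30% overhead on $400M raised

print(low)   # 9500000  -> $9.5M for programs
print(high)  # 280000000 -> $280M for programs
```

Judged by outcomes rather than overhead ratios, the higher-overhead campaign leaves roughly thirty times as much for the cause.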
And also there's this sense that
if you're going to make millions of dollars personally
by starting a software company
or becoming an actor in Hollywood or whatever it is,
there's nothing wrong with that.
But if you're making millions of dollars a year
running a charity,
well, then you're a greedy
bastard, right? And the idea that, you know, we wouldn't fault someone for pursuing a comparatively
frivolous and even narcissistic career for getting rich in the meantime, but we would fault someone
who's trying to cure cancer or save the most vulnerable people on earth
for getting rich while doing that. That seems like a bizarre double standard with respect to
how we want to incentivize people. Because what we're really demanding is someone come out of
the most competitive school and when faced with the choice of whether or not to work for
a hedge fund or work for a charity doing good in the world, they have to also be someone who
doesn't care about earning much money. So we're sort of filtering for sainthood or something like
sainthood among the most competent students at that stage. And that seems less than optimal. I don't know
how you view that. Yeah, I think it's a real shame. So
newspapers every year publish rankings of the top-paid charity CEOs, and it's regarded as a
scandal, as if the charity is therefore ineffective. But what we should really care about, if we actually care
about, you know, the potential beneficiaries, the people we're trying to help, is just how much money
are we giving this organization and how much good comes out the other end. And if it's the case that
they can achieve more because they can attract a more experienced and able person to lead the
organization by paying more, well, sure, that's maybe a sad
fact about the world. It would be nice if everyone were able to be maximally motivated purely by
altruism. But we know that's not the case. Then if they can achieve more by doing that, then yeah,
we should be encouraging them to do that. You know, there's some arguments against like,
oh, well, perhaps there's kind of
race to the bottom dynamics
where if one organization starts paying more,
then other organizations should need to pay more too.
And it just, you get bloat in the system.
I think that's the strongest case
for the idea of low overheads
when it comes to fundraising.
Because if one organization is fundraising,
well, perhaps in part they're increasing
the total amount of charitable giving that happens,
but they're also probably taking money away
from other organizations.
And so it can be the case that a general norm
of lower overheads when it comes to fundraising
is a good one.
But when it comes to charity pay,
we're obviously just radically far away from that.
And yeah, it shows that people are thinking about charity in a kind of fundamentally wrong way,
at least, you know, for the effective altruist purposes we're thinking of,
which is not thinking about it in terms of outcomes, but in terms of the virtues you
demonstrate or how much you're sacrificing or something. And ultimately, when it comes to
these problems that we're facing,
these terrible injustices, this horrific suffering,
I don't really care whether the person that helps is virtuous or not.
I just want the suffering to stop.
I just want people to be helped.
And as long as they're not doing harm along the way,
I don't think it really matters whether the people are paid a lot or a little.
I think we should say something about the other side of this equation, which tends to get
emphasized in most people's thinking about being good in the world. And this is the kind of consumer-facing side of things: not contributing to the obvious harms in an egregious way, or dialing down one's complicity
in this unacceptable status quo as much as possible. And so this goes to things like
becoming a vegetarian or a vegan or avoiding certain kinds of consumerism based on concern
about climate change. There's a long list of causes that people get committed to more in the spirit of negating certain bad behavior or polluting behavior rather than
focusing on what they're in fact doing to solve problems or giving to specific organizations.
Is there any general lesson to be drawn from the results of these efforts on both fronts? I mean, how much
does harm avoidance as a consumer add to the scale of merit here? What's the longest lever we can
pull personally? Yeah, so I think there's a few things to say. So right at the start, I mentioned
one of the key insights of effective altruism was this idea that
different activities can vary by a factor of a hundred or a thousand in terms of how much impact
they have. And even within ethical consumerism, I think that happens. So if you want to cut out most
animal suffering from your diet, I think you should cut out eggs, chicken, and pigs,
maybe fish. Whereas beef and milk, I think, are comparatively small factors.
If you want to reduce your carbon footprint,
then giving up beef and lamb,
reducing your long-haul flights,
and reducing how much you drive make significant differences,
with dozens of times as much impact as things like recycling
or upgrading light bulbs or reusing plastic bags.
From the purely consequentialist outcome-based perspective,
I think it is systematically the case
that these ethical consumerism behaviors
are small in terms of their impact
compared to the impact that you can do
via your donations or via your career.
And the reason is just there's a very limited range
of things that you can do
by changing your consumption behavior. There's just things you are buying anyway, and then you can stop.
Whereas if you're donating or you're choosing a career, then you can choose the very most
effective things to be doing. So take the case of being vegetarian. So I've been vegetarian for
15 years now. I have no plans of stopping that. But if I think about how many animals I'm
helping in the course of a year as a result of being vegetarian, and how does that compare when
I'm looking at the effectiveness of the very most effective animal welfare charities, which are
typically what are called corporate campaigns. So it turns out the most effective
way that we know of to reduce the number of hens in factory farms laying eggs in just the most atrocious, terrible conditions of suffering
seems to be campaigning for large retailers to change the eggs they purchase in their supply
chain. You can actually get a lot of push there. And the figures are just astonishing. It's something like 50 animals
that you're preventing the significant torture of
for every dollar that you're spending on these campaigns.
And so if you just do the maths,
the amount of good you do by becoming vegetarian
is equivalent to the amount of good you do
by donating a few dollars to these very most effective campaigns.
I think similar is true
for reducing your carbon footprint.
My current favorite climate change charity,
Clean Air Task Force,
which lobbies the US government
to improve its regulations around fossil fuels
and promotes energy innovation as well,
I think probably reduces a ton of CO2
for about a dollar. And that means if you're
in the US, an average US citizen emits about 16 tons of carbon dioxide equivalent. If you did all
of these most effective things of cutting out meat and all your transatlantic flights and getting rid
of your car and so on, you might be able to reduce that by six tons or so. And that's, you know, the same as giving about $6 to these most effective charities.
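[Editor's note: the comparison Will is drawing reduces to one line of arithmetic. A sketch using his rough figures, about $1 per tonne for the charity, roughly 16 tonnes of average US per-capita emissions, and roughly 6 tonnes avoidable through lifestyle changes:]

```python
# Rough comparison from the conversation (all figures approximate):
# lifestyle changes vs. donating to a ~$1/tonne climate charity.
cost_per_tonne_usd = 1          # Will's estimate for Clean Air Task Force
avg_us_emissions_tonnes = 16    # average US per-capita CO2-equivalent
lifestyle_reduction_tonnes = 6  # cutting meat, flights, driving, etc.

# Donation achieving the same expected reduction as the lifestyle changes:
equivalent_donation = lifestyle_reduction_tonnes * cost_per_tonne_usd
print(equivalent_donation)  # 6 (dollars)

# Offsetting one's entire average footprint at that rate:
print(avg_us_emissions_tonnes * cost_per_tonne_usd)  # 16 (dollars)
```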
And so it just does seem
that these are just much more powerful
from the perspective of outcomes.
The next question philosophically is
whether you have some non-consequentialist
reason to do these things.
And there, I think it differs.
So I think the case is much stronger
for becoming vegetarian
than for climate change. Because if I buy a factory farmed chicken and then donate to
a corporate campaign, well, I've probably harmed different chickens. And it seems like,
you know, you can't offset the harm to one individual with a benefit to another individual. Whereas if I have a lifetime of emissions, but at the same time donate
a sufficient amount to climate change charities, I've probably just reduced the total amount of
CO2 going into the atmosphere over the course of my lifetime. And there isn't anyone who's
harmed in expectation, at least by the entire course of my life.
And so it's not like I'm trading a harm to one person for the benefit to another.
But these are quite subtle issues when we get onto these kind of non-consequentialist
reasons.
Yeah, and there are also ways in which the business community and innovation in general
can come to the rescue here.
So for instance, there's a company,
I believe the name is going to be changed,
but it was called Memphis Meats
that is spearheading this revolution
in what's called cultured meat or clean meat
where they take a single cell from an animal and amplify it
so no animals are killed in the process of making these steaks or these meatballs or these
chicken cutlets, and they're trying to bring this to scale. And I had the CEO, Uma Valetti,
on my podcast a couple of years ago and actually invested in the company along with many other
people, and hopefully this will bear fruit. That's an example of something where, though it was unthinkable some years ago,
we might suddenly find ourselves living in a world where you can buy steak and hamburger meat and
pork and chicken without harming any animals. And it may also have other significant benefits,
like cutting down on xenoviruses, and that connects to the pandemic risk issue.
I mean, our factory farms are really wet markets of another sort. And so it
is with climate change.
On some level, we're waiting and expecting for technology to come to the rescue here,
where you're just bringing down the cost of renewable energy to the point where there is literally no reason
to be using fossil fuels or bringing us a new generation of nuclear reactors that don't have
any of the downsides of old ones. And again, this does connect to the concern I had around the
fancy lifeboat. We have to do the necessary things in our lifeboat that allow for
those kinds of breakthroughs, because those are, in many cases, the solutions that
just fundamentally take away the problem rather than merely mitigate it.
Yeah, so I totally agree. And I think, in the case of, you know, trying to alleviate
animal suffering as much as possible, that funding research into clean meat is
plausibly the best thing you can do. It's hard to make a comparison with the more direct campaigns,
but definitely plausibly the best. In the case of climate change, I've recently been pretty
convinced that the most effective thing we can be doing is promoting clean energy innovation.
This is another example of importance versus neglectedness,
where you mentioned renewables, and they are a really key part of the solution.
But other areas are really notably more neglected.
So carbon capture and storage, where you're capturing CO2 as it emerges from fossil fuel
power plants, and nuclear power get quite a small amount of funding compared to solar
and wind, even though the Intergovernmental Panel on Climate Change thinks that they're
also a very large part of the solution.
But here, I think the distinction is focusing on issues in rich countries in order to benefit
people in those rich countries, or kind of as a means to some other sort of benefit.
And so I think it's very often the case that you should focus on, like, you might be sending
money towards things happening in a rich country like the US, but not because you're trying
to benefit people in the US, because you're trying to benefit the world.
So maybe you're funding, yeah, this clean meat startup,
or you're funding research into low carbon forms of energy.
And sure, that might happen in the US,
which is still the world's research leader.
That's fairly justified.
And the beneficiaries of these things are partly in the US,
but it's also global, it's future generations too.
You're kind of influencing, as it were, the people who are in the positions of power who have the
most influence over how things are going to go into the future.
Okay, so in our next chapter, let's talk about how we build effective altruism into our lives and just make this as personally actionable for people as we
can. Okay, so we've sketched the basic framework of effective altruism and just how we think about
systematically evaluating various causes, and how we think about what to prioritize with respect to things like actual outcomes versus
a good story. And we've referenced a few things that are now in the effective altruist canon,
like giving a minimum of 10% of one's income a year. And that's really, if I'm not mistaken, you just took that as a nice round number
that people had some traditional associations with.
In religious communities, there's a notion of tithing,
that amount, and it seemed like not so large
as to be impossible to contemplate,
but not so small as to be ineffectual.
Maybe let's start there.
So am I right in thinking that the 10% number was kind of pulled out of a hat as a good starting point,
but there's nothing about it that's carved in stone from your point of view?
Exactly. It's not a magic number, but it's in this Goldilocks zone where
Toby originally had the thought that he would be promoting what he calls the further pledge, which is where you just set a cap on your income and give everything above that.
But it seems pretty clear that if he'd been promoting that, well, very few people would have joined him.
We do have a number of people who've taken the further pledge, but it's a very small minority of the 5,000 members we have.
On the other hand, if we were promoting a 1% pledge, let's say, well, we're probably just
not changing people's behavior compared to how much they donate anyway. So in the UK, people
donate on average 0.7% of their income. In the US, if you include educational donations and
church donations, people donate about 2% of their income. So if I was saying, oh, you should donate 1%, probably those people would have been giving
1% anyway. And so we thought 10% is in this Goldilocks zone. And like you say, it has this
long history where for generally religious reasons, people much poorer than us in earlier
historical epochs have been able to donate 10%. We also have 10 fingers.
It's a nice round number.
But many people who are part of the community
donate much more than that.
Many people who are core members
of the effective altruism community
don't donate that much.
They do good in other ways instead.
It's interesting to consider the psychology of this because
I can imagine many people entertaining the prospect of giving 10% of their money away and
feeling, well, I could easily do that if I were rich, but I can't do that now. And I can imagine
many rich people thinking, well, that's a lot of money, right? I'm making a lot of money,
and you're telling me that year after year after year, I'm going
to give 10% away. That's millions of dollars a year. So it could be the case that there's no
point on the continuum of earning where if you're of a certain frame of mind, it's going to seem like a Goldilocks value.
You either feel too poor or too rich, and there's no sweet spot, or to flip that around,
you can recognize that however much money you're making, you can always give 10% to
the most effective ways of alleviating suffering once you have this epiphany.
You can always find that 10%. And if you're not making much money,
obviously 10% will be a small amount of money. And if you're making a lot of money,
it'll be a large amount. But it's almost always the case that there's 10% of fat there to be found.
So yeah, did you have thoughts about just the psychology
of someone who feels not immediately comfortable with the idea of making such a commitment?
Yeah, I think there are two things I'd like to say to that person. One is a
somewhat direct argument, and the second is more pragmatic. The direct one is just that even if
you feel like, oh, I could donate that
amount if I were rich, probably you are rich if you're listening to this. So if you're single
and you earn $66,000, then you're in the global 1% of the world in terms of income distribution.
And what's more, even after donating 10% of your income,
you would still be in the richest 1% of the world's population.
If you earn $35,000, which we would not think of as being a rich person,
even after donating 10%, you'd still be in the richest 5% of the world's population.
And learning those facts was very motivating for me when I first started thinking about my giving.
So that's the direct argument. But the more pragmatic one is to think, well, at most stages in your life, you'll be
earning more in the future than you are now. You know, people's incomes tend to increase over time.
And you might just reflect, well, how do I feel about money at the moment?
Perhaps you're in a situation where you're genuinely worried, where there are serious health issues or something.
Then, okay, take care of that first. But if you're like, well, actually,
life's pretty all right, I don't think additional money will make that much of a
difference. Then what you can do is just think, okay, maybe I'm not going to give a full 10% now,
but I'll give a very significant proportion of the additional money I make from any future raises.
So maybe I give 50% of that amount. That means you're still increasing the
amount you're earning over time, and if you do that,
then within a few years you'll probably end up giving
10% of your overall income. So at no point in this plan do you ever have to go backwards,
as it were, and live on less. In fact, you're always earning more, yet you're giving more at the
same time. And I've certainly found that in my own life where, you know, I started thinking about
giving as a graduate student. I now live on more than twice as much as I did when I first started giving, but I'm also able to give,
you know, a significant amount of my income. Remind me, how have you approached this personally?
Because you haven't just taken a minimum 10% pledge; you think of it differently. So what have
you done over the years? Yeah. So I have taken the Giving What We Can pledge, which is 10%
kind of at any point. And then I also intend and plan to donate everything above
what is the equivalent of 20,000 pounds per year in Oxford 2009, which is now about 27,000 pounds
per year. I've never written this down as like a formal
pledge. The reason being that there were just too many possible kind of exceptions. So if I had kids,
I'd want to increase that. If there were situations where I thought my ability to do
good in the world would be like very severely hindered, I'd want to kind of avoid that.
But that is the amount that I'm giving at the moment. And it's the amount I plan to give
for the rest of my life. Just so I understand it. So you're giving anything you make
above 27,000 pounds a year to charity? Yeah, that's right. Post-tax. And so my income is a
little bit complicated in terms of how you evaluate it because it's my university income, but then also
book sales and so on. I think on the most natural, and there's things like speaking engagements I
don't take that I could, but I think on the most natural way of doing it, I give a little over 50%
of my income. So I want to explore that with you a little bit, because again, I'm returning to
our fancy lifeboat and wondering just how fancy it can be in a way that's compatible with the
project of doing the most good in the world. What I detect in myself and in most of the people
I meet, and I'm sure this is an intuition shared by many of our listeners, is that many people will be reluctant to give up on the aspiration to be wealthy, with everything that that implies.
Obviously, they want to work hard and make their money in a way that is good for the world or at least benign.
They can follow all of the ethical arguments that would say, you know, right livelihood in some sense is important. But if people really start to succeed in life, I think there's something that will strike many people, if not most, as too abstemious and monkish about the lifestyle you're advertising in choosing to live on that amount of
money and give away everything above it, or even just giving away 50% of one's income.
And again, I think this does actually connect with the question of effectiveness. I mean,
so it's at least possible that you would be more effective
if you were wealthy and living with all that, all that that entails, living as a wealthy person.
And I mean, just to take by example, someone like Bill Gates, you know, he's obviously the
most extreme example I could find, because he's one of the wealthiest people on earth still; I think he's number two, perhaps.
And it's also probably well established now
that he's the biggest benefactor of charity in human history.
The Gates Foundation has been funded to the tune of tens of billions of dollars by him
at this point.
And so I'm sure he's spent a ton of money on himself and his family, right?
I mean, his life is probably filled to the brim with luxury, but his indulgence in luxury
is still just a rounding error on the amount of money he's giving away, right?
So it's actually hard to run a counterfactual here, but I'd be willing to bet that Gates would be less effective and less wealthy
and have less money to give away if he were living like a monk in any sense. And I think maybe more
importantly, his life would be a less inspiring example to many other wealthy people. If Bill Gates came out of the closet and said,
listen, I'm living on $50,000 a year and giving all my money away to charity,
that wouldn't have the same kind of kindling effect I think his life at this point is in fact having, which is you can really have your cake and eat it too. You can be a billionaire who lives in a
massive smart house with all the
sexy technology, even fly around on a private jet, and be the most charitable person in human
history. And if you just think of the value of his time, right? Like if he were living a more
abstemious life, and I mean, just imagine the sight of Bill Gates spending an hour trying to save $50 on a new toaster oven, right?
You know, bargain hunting.
It would be such a colossal waste of his time, given the value of his time.
Again, I don't have any specifics about how to think about this counterfactual, but I do have a general sense, and actually this is a point
you made in our first conversation, I believe, which is that you don't want to be an anti-hero in
any sense, right? You want to, like, if you can inspire only one other person to give at the level
that you're giving, you have doubled the good you can do in the world. So on some level, you want
your life to be the most compelling advertisement for this
whole project. And I'm just wondering if, I mean, for instance, I'm just wondering what changes we
would want to make to Bill Gates's life at this point to make him an even more inspiring
advertisement for effective altruism to other very, very wealthy people.
And I mean, it might be dialing down certain things, but given how much good he's able to do,
him buying a fancy car, it doesn't even register in terms of actual allocation of resources.
So anyway, I pitched that to you.
Yeah, terrific. So there's three different strands I'd like to pick apart. The first is whether everyone should be
like me. And I really don't want to make the claim. I certainly don't want to say, well,
I can do this thing so everyone else can. Because I really just think I am in a position of such utter privilege: being born into a middle-class family
in a rich country, being privately educated,
going to Cambridge, then Oxford,
being tall and male and white and broadly straight.
And then also just having inexpensive tastes: my ideal
day involves sitting on a couch, drinking tea, reading some interesting new research,
and perhaps going wild swimming. And then secondly, I also have just these
amazing benefits in virtue of the work that I do. I meet
this incredibly varied, interesting array of people. And so I just don't really think I could
stand here and say, well, everyone should do the same as me, because I think I've just had it kind
of so easy that it doesn't really feel like, you know, if I think about the sacrifices I have made or the things I found hard over the
course of 10 years, that's much more like doing scary things like being on the Sam Harris podcast
or doing a TED talk or, you know, meeting, you know, very wealthy or very important people,
things that might kind of cause anxiety, much more than the kind of financial side of things.
But I recognize there are other people for whom money just really matters. And I think, in part,
you're kind of born with a set of preferences, or perhaps they're molded early on
in childhood, and you don't necessarily have control over them. So that's what I'm trying to convey there about myself.
The second strand is the time value of money.
And this is something I've really wrestled with
because it just is the case
that in terms of my personal impact,
my donations are just a very small part of that.
Because we have been successful: Giving What We Can
has now moved $200 million, and there's over one and a half billion dollars of pledged donations. The EA
movement as a whole certainly has over 10 billion dollars of assets that will be going out.
And then I'm donating, you know, tens of thousands of pounds per year, and it's just very clearly kind of small
on the scale. And so that's definitely something I've wrestled with. I don't think I lose enormous
amounts of time. My guess is that it's maybe a couple of days of time a year. I have done some
things. So, via my work, I have an assistant. If I'm doing business trips, that
counts as expenses rather than my personal money,
so I'm trying to keep it separate.
There are some things you can't do:
if I live close to my office,
I can't count that as a business expense,
even though it would shorten my commute.
So it's not perfect as a way of doing that.
And so I do think there's an argument against that,
and I think that is definitely a reason for caution
about making a very large commitment. And then the final aspect is what sort of message you want
to send? My guess is that you just want a bit of market segmentation here, where
some people should perhaps show what can be done. Others should show,
well, no, actually, you can have this amazing life while, you know, not having to wear the hair shirt and so on. You know, I think
perhaps you could actually convince me that maybe I'm sending the wrong message and would
do more good if I had some other sort of pledge. And maybe you would be right about that. I
definitely, when I made these plans, I wasn't thinking through these things quite as carefully
as I am now.
But I did want to just kind of show a proof of concept.
Yeah, I guess I'm wondering if there's a path through this wilderness that doesn't
stigmatize wealth at all.
I mean, the endgame for me in the presence of absolute abundance is, you know, everyone gets to live like Bill Gates on some
level. If we make it, if we get to the 22nd century and we've solved the AI alignment problem
and now we're just pulling wealth out of the ether, I mean, essentially just we've got
Deutsch's universal constructors building every machine atom by atom,
and we can do more or less anything we want.
Well, then this can't be based on an ethic where wealth is at all stigmatized.
What should have opprobrium attached to it is a total disconnection from the suffering
of other people and comfort with the more shocking disparities
in wealth that we see all around us. Once a reasonably successful person signs on to the
effective altruist ethic and begins thinking about his or her life in terms of earning to give on some level, there's a flywheel effect where
one's desire to be wealthy actually amplifies one's commitment to giving so that in part,
the reason why you would continue working is because you have an opportunity to give so much money away and do so much good.
And it kind of purifies one's earning in the first place.
I mean, I can imagine most wealthy people get to a point where they're making enough money so that they don't have to worry about money anymore.
And then there's this question, well, why am I making all this money?
Why am I still working? And the moment they decide
to give a certain amount of money away
a year, just algorithmically,
then they feel like, well, okay, if this number keeps
going up, that is a good thing, right? I can get out
of bed in the morning and know that today,
if it's 10%, one day in 10
is given over wholly to solving the worst
suffering or saving the most lives or mitigating the worst long-term risk. And if it's 20%, it's
two days out of 10. And if it's 30%, it's three days out of 10. And they could even
dial it up. I mean, I'm just imagining, let's say somebody is making $10 million a year and he
thinks, okay, I can sign on and give 10% of my
income away to charity. That sounds like the right thing to do. And he's persuaded that this should
be the minimum, but he then aspires to scale this up as he earns more money. Maybe this would be
the algorithm. For each additional million he makes in a year, he just adds a percentage point. So if he's
making $14 million one year, he'll give 14%
of his income away. And if it's $50 million, he'll give 50% away, right? And obviously,
if let's say the minimum he wants to make is $9 million a year, well, then he can get up to
91% of $100 million a year. He can give that away. But I can imagine being a very wealthy person who,
as you're scaling one of these outlier careers, it would be fairly thrilling to be the person
who's making $100 million that year, knowing that you're going to give 91% of that away
to the most effective charities. And you might not be the person who
would have seen any other logic in driving toward that kind of wealth when you were the person
who was making 10 million dollars a year, because 10 million dollars a year was good enough.
Obviously you can live on that, and nothing material is going to change
for you as you make more money. But because he or she plugged into this earning-to-give logic,
and in some ways, the greater commitment to earning was leveraged by a desire to maintain
a wealthy lifestyle, right? It's like this person does want $9 million a year, right, every year,
but now they're much wealthier than that and giving much
more money away. I'm just trying to figure out how we can capture the imagination of people who
would see the example of Bill Gates and say, okay, that's the sweet spot, as opposed to
any kind of example that however subtly stigmatizes being wealthy in the first place?
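The percentage "escalator" Sam sketches here, one extra percentage point of giving for each additional million earned above a $10 million baseline, with take-home never dropping below a $9 million floor, can be written out as a small calculation. This is purely an illustrative sketch of the hypothetical scheme described in the conversation, not a formal pledge either speaker proposes; the function name and parameter defaults are invented for the example.

```python
def escalator_give_rate(income_millions, base_income=10, base_rate=0.10, floor=9):
    """Fraction of income given under the hypothetical escalator scheme.

    At or below the baseline income, give the base rate (10%).
    Above it, add one percentage point per additional $1M earned,
    capped so that take-home income never falls below the floor.
    """
    if income_millions <= base_income:
        return base_rate
    # one extra percentage point per extra $1M earned above the baseline
    rate = base_rate + (income_millions - base_income) * 0.01
    # never give so much that take-home drops below the floor
    max_rate = 1 - floor / income_millions
    return min(rate, max_rate)


for income in (10, 14, 50, 100):
    rate = escalator_give_rate(income)
    print(f"${income}M earned: give {rate:.0%}, keep ${income * (1 - rate):.1f}M")
```

At $14 million this gives 14%, at $50 million it gives 50%, and at $100 million the $9 million floor caps the rate at 91%, matching the figures in the conversation.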
Yeah, I think these are good points.
And it's true, I think the stigma on wealth per se is not a good thing, where if you build
a company that's doing good stuff, and people like the product, and they get value from
it, and so there's enormous surplus, so there's a lot of gains from trade
and you get wealth as a result of that.
That's a good thing.
Obviously, there's some people who make enormous amounts of money
doing bad things, selling opioids or building factory farms,
but I don't think that's the majority.
And I do think it's kind of like optimal taxation theory,
but the weird thing is that you're imposing the tax on yourself.
Depending on your psychology, if you say, I'm going to give 100%
as the highest tax rate, well, you're not incentivized to earn any more. And so the
precise amount that you want to give is quite sensitive to the question of how
much the giving disincentivizes you from earning more. So in my own case,
it's very clear that the way I'm going to do good is not primarily via my
donations. So perhaps this disincentive effect is not very important. But if my aim were
to get as rich as possible, then, well, I'd need to really look inside my own psychology,
figure out how much, especially
over the entire course of my life, can I be motivated by pure altruism versus self-interest.
And I strongly doubt that this kind of optimal tax rate, applied to my donations,
would be 100%.
It would be something in between.
That's what I'm kind of fishing for here.
And I'm by no means convinced that I'm right, but I'm just wondering,
in addition to all the other things you want for yourself and the world,
as revealed in this conversation,
and acknowledging that your primary contribution to doing good in the world might in fact be your ideas and
your ability to get them
out there. I mean, like you've had the effect you've had on me and I'm going to have my effect
on my audience and conversations like this have the effect that they have. And so there's no
question you are inspiring people to marshal their resources in these directions and think more
clearly about these issues. But what if it were also the case
that if you secretly really wanted to own a Ferrari, you would actually make different
decisions such that in addition to all the messaging, you would also become a very wealthy
person giving away a lot of money?
Yeah. I mean, it would be different if I was planning to earn to give. And so I think a very common kind of figure for people
who are going to earn to give via entrepreneurship or other high earning careers is a 50% figure
where they plan to give half of what they earn, at least once they start earning a significant amount. And that has seemed to work pretty well for the people I know.
It's also notably the figure that Bill Gates uses for his Giving Pledge, where billionaires can join
the Giving Pledge if they give away at least 50% of their wealth.
For most who take that pledge, if I'm not mistaken, it's pushed off to the end of their life,
right?
They're just imagining they're going to give it upon their death to charity, right?
So you are allowed to do that.
I don't know exactly the proportions.
It varies.
Like the tech founders tend to give earlier than other sorts of people.
I'm actually a little bit confused about what pledging 50% of your wealth
means. So if I'm a billionaire one year and then lose half my money and I've got $500 million the
next year, do I have to give half of that? Or do I have to give half of the amount when I pledged,
which would have been all my money? Anyway, it confuses me a little bit, the details of it,
but it is the case that,
yeah, you can fulfill your pledge completely in the giving pledge by donating entirely after your death. And there are questions about how much people actually fulfill these pledges too.
But then I really do want to say that's also just quite reasonable.
Different people have different attitudes to money. It's a very rare person indeed who can be motivated at all times
by pure altruism; we're talking about motivation over decades,
every single day. I think that's very hard. And so if someone instead wants to pick a
percentage number and aim at that, that seems like a sensible way to go. In particular, you want to be sustainable: if moving from, I don't know, 50% to 60% means that your desire to do all of this kind of burns out and you go and do something else, that's fairly bad indeed.
I think the right attitude you want to have towards giving is not to be
someone where it's like, oh yeah, I'm giving this amount, but it's just so hard, and
I really don't like my life, and it's really unpleasant. That is not an inspiring
message. Julia Wise, a member of the Effective Altruism community,
has this wonderful
post called Cheerfully, where she talks about having kids and thinking about that as a question
and says that, no, what you want to be is this model, this ideal where you're doing what you're
doing. And you're saying, yeah, my life is great. I'm able to do this and I'm still having a really
wonderful life. That's certainly how I feel about my life. And I think for many people who are going into these high-earning careers saying, yeah, I'm donating 50% and my life
is still like absolutely awesome. In fact, it's better as a result of the amount I'm donating.
That's the sweet spot I think that you want to hit.
There's another issue here around how public to be around one's giving. And so, you know, you and I are having a public conversation about
all of this, and this is just, by its very nature, violating a norm that we've all inherited,
or a norm or a pseudonorm around generosity and altruism, which suggests that the highest
form of generosity is to give anonymously. There's a Bible verse
around this. You don't want to wear your virtue on your sleeve. You don't want to advertise your
generosity because that conveys this message that you're doing it for reasons of self-aggrandizement.
You're doing it to enhance your reputation. You want your name on
the side of the building. Whereas if you were really just connected to the cause of doing good,
you would do all of this silently and people would find out after your death, or maybe they
would never find out that you were the one who had secretly donated millions of dollars to cure some terrible disease or to buy bed nets. And yet, you and I, by
association here, have flipped that ethic on its head because it seems to be important to
change people's thinking around all of the issues we've been discussing. And the only way to do that
is to really discuss them. And what's more, we're leveraging a concern about reputation kind of from the
opposite side in recognizing that taking a pledge has psychological consequences, right? I mean,
when you publicly commit to do something, that not only advertises to people that this is the
sort of project a human being can become
enamored of; you then also have a reputational cost to worry about if you decide that you're going
to renege on your pledge. So talk for a few minutes about the significance of talking about any of
this in the first place. Yeah, so I think the public aspect is very important. And it's for
the reason you mentioned earlier, that take the amount of good that you're going to do in your
life via donations, and then just think, can I convince one other person to do the same?
If so, you've doubled your impact. You've done your life's work over again. And I think,
plausibly, people can do that many times over, at least in the world today, by being this kind of inspirational role model for others.
And so I think this religious tradition where, no, you shouldn't show the generosity you're
doing, you should keep that secret, I think that looks pretty bad from an outcome-oriented
perspective.
And I think you need to be careful about how you're doing it.
You want to be effective in your communication as well as your giving.
Where, you know, it was very notable that Peter Singer had these arguments around giving
for almost four decades with comparatively little uptake, certainly compared to the last
10 years of the effective altruism movement.
And, you know, my best hypothesis is that it's the move from a framing that appeals primarily to guilt
(which is a low-arousal emotion;
you don't often get up and start really doing things on the basis of guilt) to inspiration
instead, saying, no, this is an amazing opportunity we have.
And so this is a norm that I just really want to change.
You know, in the long run, I would like it to be a part of common sense morality that you
use a significant part of your resources to help other people. And we will only get there, we will
only have that sort of cultural change if people are public about what they're doing and able to
say, yeah, this is something I'm doing. I'm proud of it. I think you should consider doing it too.
This is the world I want to see. Well, Will, you have certainly gotten the ball rolling in my life, and it's something I'm
immensely grateful for. And I think this is a good place to leave it. I know there will be questions,
and perhaps we can build out further lessons just based on frequently asked questions that come in
in response to what we've said here. But I think that'll be the right way to proceed. So for the meantime, thank you for doing this, because I
think you're aware of how many people you're affecting, but it's still early days, and I think
it'll be very interesting to see where all this goes, because I know what it's like to experience
a tipping point around these issues personally.
And I have to think that many people listening to us will have a similar experience one day or another,
and you will have occasioned it.
So thank you for what you're doing.
Well, thank you for taking the pledge and getting involved.
And yeah, I'm excited to see how these ideas develop over the coming years.