The Joe Walker Podcast - The Art/Science Of A High Impact Career - Rob Wiblin
Episode Date: June 2, 2018. Most people want to help others with their career, but what's the best way to do that? Become...
Transcript
From Swagman Media, this is the Jolly Swagman Podcast. Here are your hosts, Angus and Joe.
Hello, ladies and gentlemen, and welcome back to another episode of the Jolly Swagman Podcast.
I'm Joe Walker, and this week my guest is the Executive Director of Research at 80,000 Hours
one of my favorite people, Rob Wiblin. But before I introduce Rob and this episode, I just wanted
to take a moment to say thank you to all the people who wrote in after our episodes with
Mark Cohodes and Brendan Eich. We were overwhelmed with feedback, and we really appreciate it.
It's one of the many reasons why we keep doing what we do here.
So thank you.
So to today's episode, I met Rob at the home slash office of 80,000 Hours in Berkeley,
California.
And 80,000 Hours is an organization that helps talented young people find the careers where they can have the
most possible social impact. It was founded at Oxford in 2011, it successfully took part in
the Y Combinator startup accelerator program as a non-profit, and it's part of the broader
effective altruism movement, a movement and a philosophy which uses careful reasoning and evidence to
find the ways that we can do the most possible good in the world. If you're at all interested
in the impact you can have with your career or in how to have a greater impact, then you have
to familiarize yourself with 80,000 Hours. And I say that very much as a convert. I'm someone who used their career guide and their career planning tool back in 2016.
And it changed the course of my career, pointing me in the direction of startup entrepreneurship
and finance, two areas which I'd never really considered.
But to not be familiar with 80,000 Hours, if you're interested in having an impact,
would be like trying to get to a destination you've
never been to before without ever having read a map or asked someone for directions. It's absolutely
essential, and these guys really are the world experts when it comes to this topic. So, Rob and
I covered a lot of ground in this episode, and we speak about the cause areas that 80,000 Hours has
painstakingly selected as the topics that you should focus on first if you want to have
the highest impact with your career.
We talk about the rigorous criteria that the team has designed to help select these cause
areas.
We talk about how you can determine whether you're the right fit for a
particular role. We talk about strategies you can use to build up more career capital. And we talk
about how much impact you could actually have over the course of your career, literally down to the
number of lives you could save. I guarantee that at least something in this episode will change
your mind. So without much further ado, please enjoy my conversation with Rob Wiblin.
Boom.
All right.
Sorry.
Thank you for joining me.
It's a pleasure to be on.
I've got to admit, I hadn't been listening to the show until I was preparing for this
episode, but I'm really impressed with some of the guests you've gotten on.
And you've really improved over the course of the last year.
Thank you.
Thank you.
I might start stealing some of your guests for my end.
Likewise, because you do your own podcast as well, the 80,000 Hours podcast.
Yeah.
So, got to get that plug in early.
Straight off the bat.
Yeah.
The 80,000 Hours Podcast with Rob Wiblin is the show.
The pitch is, it's a show about the world's most pressing problems and how you can use your
career to solve them, which I guess we'll be talking a bit about today.
One of my favorite podcasts after the Jolly Swagman, of course.
Oh, don't.
There's no need.
There's no need.
So, Rob, we went to the same university, the ANU in Canberra.
Huh, I actually didn't know that.
Oh, really?
Okay.
Well, there you go.
Well, of course, we didn't know each other at university. You were a few years ahead of me,
but you studied genetics and economics. So, you were top of the class when you graduated
and you're probably one of the most interesting people from the ANU that I've met because, I mean,
you could have done anything, and instead you've gone and joined the effective
altruism movement, which I think is one of the great kind of exciting up-and-coming movements
of our times. So, that's sort of what we're here to talk about and we'll talk about 80,000 hours
as well, which is a part of the EA movement. Let's start with EA more broadly. Just give us a definition
of effective altruism. Yeah. So, the summary we usually give is that effective altruism
is the use of evidence and careful analysis to try to improve the world as much as possible.
So, in any situation where you're trying to raise welfare as much as you can
and you're trying to use evidence and being really analytical about how you do it,
I guess we would class that as effective altruism.
And how old is the movement?
Well, I guess people came up with the name in 2011.
But, of course, this set of ideas didn't start then by any means.
It's grown out of a whole lot of pre-existing intellectual movements.
I guess one of them is kind of utilitarian philosophy.
So, Peter Singer and other moral philosophers like that.
There's also kind of the evidence-based medicine movement, the evidence-based development movement. And I think the third group would probably be
GiveWell and perhaps the rationality community in the Bay Area. A lot of people who are interested
in giving to the most effective charities or finding the most important problems to solve
were kind of clustered around San Francisco in the 2000s, and quite a lot of them have become
involved in what's
now called the Effective Altruism Movement.
Yeah.
And we should say GiveWell is probably the foremost charity evaluator in the world.
Yeah.
They started in 2007, and their goal was to find charities that you could be really confident
were having a very large social impact, were helping people in a big way with each dollar
that they received.
And I think now they are one of the most rigorous and well-known charity evaluators in the world,
and they move pretty substantial money with their recommendations.
So, all of these different threads, utilitarian philosophy, evidence-based health,
charity evaluation sort of coalesced into the effective altruism movement.
Yeah, I think that's right. Yeah, it was a coming together of all of those ideas and I guess trying to push them forward
by pulling together the best ideas from all of them.
And so, from when you graduated university,
how did you find your way into the movement?
Well, I guess my interest in this whole set of ideas
goes back quite a long way.
When I was a teenager in Australia,
Peter Singer is like a pretty well-known philosopher in Australia, obviously, and I encountered his ideas about how we might have pretty substantial moral obligations to help other people in as much as our lives are going well and we have surplus resources that we can use that would make a much bigger difference to other people than they would to ourselves. So I found out about that, I think, when I was like 14 or 15. I was reading some of his essays, and it just really resonated with me. I thought,
like, yeah, this is kind of right. If I'm, like, someone who's very wealthy by global standards,
and, you know, with very small sacrifice on my part, I can radically improve someone else's life,
then that's something that I really ought to do. And, of course, I also found out about his views
on, like, animal rights and animal welfare
and stopped eating meat around the same time.
But there wasn't really a group that you could join
that was thinking about if you adopt this view
that you ought to do as much good as you can,
what does that imply for your career or the rest of your life
beyond perhaps just donating some of the money that you earn.
And so I continued to read about people who had kind of a similar outlook,
and that actually prompted me to switch into studying economics at university
because I found economists seemed to share this worldview
more than any other discipline, thinking about, you know,
how you can, like, maximize the efficiency of the things you're doing
to have the largest impact.
It's kind of an economic way of thinking.
But I didn't make that much progress until I found utilitarian interest groups online and moral philosophy interest groups online.
And perhaps also the rationality community, LessWrong and things like that, and also the Future of Humanity
Institute at Oxford, which was thinking about kind of the long-term future of humanity and where it's
going and how we could push it in the right direction. I think that they were basically
doing, yeah, the cutting-edge work trying to figure out, if you want to have the biggest impact with
your life, what should you do? And because I got to know people involved in all those different groups, when in 2011-12 at Oxford University a whole lot of students there, particularly philosophy PhD students, formed a critical mass that allowed them to start an organization, the Center for Effective Altruism,
I found out about that pretty soon.
And people forwarded me this job advert for being their first director of research and said, you should really apply for this, this is the thing that you've been
talking about since I've known you, just obsessively wondering, you know, how can
you have the largest impact. This is the perfect job, so you should apply. Which I did. I
didn't really know the people there very much, so it was a bit of a, you know, crossing the world
on a hope and a prayer, and being hopeful that this movement would actually take off rather than kind of a charity collapsing a month after I arrived in England.
But it worked out pretty well.
I've been fortunate, I guess, to graduate right around the time that effective altruism
was taking off and be able to get in on the ground floor.
Yeah, that's awesome.
Sort of seems like a perfect fit in hindsight.
And I think one word to describe you is prolific.
It's a double-edged compliment.
No, no, no.
The quality we're not going to comment on.
No, no, no.
Quality and quantity.
And its director of research seems to be a great fit for you.
It's interesting how many Australians are involved
in effective altruism.
You know, really prominent EAs.
You've got Peter Singer, Toby Ord, yourself.
Is that just an accident of personal connections?
Well, it's not only them.
There's Brenton McIntyre.
Yeah.
Sorry, Peter McIntyre and Brenton Mayer, two of my colleagues, also Australians, who moved over to work at 80,000
Hours. There's Tara Mac Aulay and Sam Deere at the Center for Effective Altruism. So, yeah, we are
like extremely overrepresented, it seems. We've wondered what's going on there. I think part of
it might just be the influence of Peter Singer, that he's like better known in Australia than
perhaps anywhere else. And that leads people in this kind of philosophical direction.
Another possible answer is that you get kind of founder effects.
So, you know, a couple of people come over and then their friends in Australia find out
about it, and so it kind of spreads through social networks.
Another possibility is that it's just like quite a good fit for Australian culture, that
we tend to be very pragmatic in how we think about solving problems and perhaps less inclined
towards the kind of continental
philosophy, which leads to something of a different attitude. Australians tend to just kind of want
to get shit done, and that's kind of the attitude that effective altruism tends to
attract.
Interesting. Did you have any other ideas about what it could be?
No, no, I think you've sort of covered them all. It's probably, as with these things, a combination of all those different things.
And you were there during some of the, you know, the earliest conversations about the
movement and what direction it should take.
Can you tell us the story?
I mean, effective altruism wasn't always described as effective altruism, but I believe you were
in the room during the debate as to what label should be applied to the movement.
Can you tell us that
story?
Unfortunately, I arrived a few months late to actually be part of that conversation.
I'll never be able to say that I was there when "effective altruism" was coined.
Can you relay the story anyway?
Yeah. So, there were, I guess, like half a dozen,
maybe a dozen people in Oxford who were planning to start the center for something, the center of something, and they were trying to figure out, what should we call this?
They had a whole bunch of different options. I think strategic altruism was one of them,
extreme do-goodery. There was probably a bunch of bad ideas in there. Basically,
they had a vote on which one they thought was best, and they went with effective altruism.
They knew that whatever name they chose was going to stick and probably
become impossible to change. It's a tricky thing whenever you're starting a project: you have
to make these decisions kind of blind, very early on, not knowing how people are going to react, and
then you're stuck with them basically forever, because it'd be so hard for us to change the name
now. But I think they chose fairly well. There is a bit of a downside with the name
effective altruism: if you say you're an Effective Altruist
or you're part of the Effective Altruism movement,
it sounds a little bit presumptuous perhaps,
because you're assuming that you are effective,
and maybe even more effective than other people.
So, sometimes that puts people's backs up.
It's more of an aspiration.
More of an aspiration, yeah.
I think aspiring Effective Altruists is what we should call ourselves.
But, I mean, it was very perceptive at the time
that so much thought went into the name
because, you know, names are very
important. Like, nominative determinism is a thing. And it was funny, I was reflecting the
other day on the difference between Uber and Lyft and the respective difficulties they've had
with governments around the world. It'd be interesting to consider how much of that effect
has to do with the names.
You know, Lyft is a lot sort of lighter and friendlier
and, you know, Uber is German for above.
Above what? The law?
A somewhat negative association historically.
Exactly.
I think a lot of the credit there actually has to go to Toby Ord,
who founded Giving What We Can,
this group of people who give 10% of their income to the most effective charities, or pledge to do that. It was pretty
young people who were mostly involved in starting this intellectual movement, at least in Oxford.
But Toby had a bit more experience, and he knew that you can have good ideas, but if you put them
the wrong way, if you frame them the wrong way, if you give them the wrong name, that can really turn people off. And I think he got everyone to think, you know, we have to get this right the first time, we have
to think carefully about how we're framing it, we have to think about objections that people might
have and how we can address those, and especially not court kind of unnecessary controversy.
Sometimes there are controversial ideas that are kind of
core to what you're pushing, but there can be this kind of juvenile attitude people have of
wanting to get attention by being controversial and promoting ideas that get people's backs up.
But I think very often that's a distraction from the core message that you're trying to push,
which at least in our case I think is fairly uncontroversial: that if you can help other people
in a huge way at small cost to yourself, then maybe you ought to do that, or at least it would be good if you did that, and, you know, some
people should be looking into how you can do that.
Yeah. So, you just mentioned Giving What We Can,
which was founded by Toby Ord and Will MacAskill.
Yeah.
And that's one of many EA organizations
which now, I guess, sort of form an ecosystem of different organizations. We've spoken about GiveWell,
which preceded the EA movement officially, but it's still an important part of it. There's the,
I guess, the Future of Humanity Institute at Oxford, the Center for Effective Altruism,
and then the most recent EA organization, which is 80,000 Hours. And you're now the Director of
Research at 80,000 Hours.
That's it, yeah.
You also host, as we mentioned at the beginning,
the 80,000 Hours podcast.
We're here in, I guess, the home slash office
of 80,000 Hours in Berkeley, California,
where you guys have recently relocated.
So, let's talk about that now.
I guess, firstly, what does the name 80,000 Hours mean?
What was the inspiration for that?
So, initially, the project was called High Impact Careers,
but we found people did not like that name.
We realized pretty fast that we'd chosen the wrong name.
So, we became 80,000 Hours.
And 80,000 Hours is approximately the number of hours
that someone would work in a full-time career.
So, I think it's eight hours a day, five days a week,
for 50 weeks a year for about 40 years.
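[Spelled out, the arithmetic behind the name:]

$$8 \ \text{hours/day} \times 5 \ \text{days/week} \times 50 \ \text{weeks/year} \times 40 \ \text{years} = 80{,}000 \ \text{hours}$$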
And with that, we're trying to highlight that 80,000 hours is quite a lot of time.
So, you should probably spend a decent amount of time thinking about how you're going to allocate all those resources, you know, at least a few hours, maybe even hundreds, possibly thousands of hours given the stakes involved.
But at the same time, 80,000 hours in your life is not that much relative to the scale of the problems in the world.
You know, people do spend billions of hours trying to solve all of these problems and you've only got a tiny amount relative to that.
So, you should really be very judicious about where you spend that time because you can only bite off a small fraction of all of the issues.
So, what do you guys actually do as an organization?
So, we have a career guide on our website where we offer all of our kind of core advice on how people can have a larger social impact with their career while also having a very
fulfilling and enjoyable career at the same time.
We are constantly producing kind of further research to look into like the world's most
pressing problems and how we think people can solve them in the biggest way in their career.
So, for example, we're worried about the risk of new pandemics.
It's one of the problem areas that we do a lot of research on.
So we're looking for high-impact jobs there,
thinking what interventions can one have that could help to contain diseases
before they spread globally.
And then we publish that on our website and discuss it in the podcast.
And we also look into particular career paths.
And, for example, we think that someone could potentially have a lot of impact in their
career by going into politics in the United States.
So, we're doing research into, you know, what kind of roles do you have there?
You've got kind of think tanks, elected office, being a congressional staffer.
And then we write up reviews of those different career options and how you can get into them and kind of try to assess how much influence they give you.
And then there's also the in-person team who have done coaching with people.
So in the past, people were able to apply for coaching through the website and get kind of one-on-one free advice from us on what we think that they should do that would allow them to have
the biggest social impact with their work. At the moment, we're not doing coaching. Instead,
we're doing headhunting. So, we're trying this alternative approach where we find particularly
high-impact roles and then see if we can find someone who's a really good fit for them and get
them to apply for that role and kind of match them up. We're just trying to figure out how the in-person team can have the biggest impact themselves.
There's a whole lot of different ways that they could help to get people working on the
most pressing problems in the most effective ways.
And we're just experimenting and testing which one gets the biggest bang per hour of work.
So, as we discussed, 80,000 Hours is a relatively young organization.
Why did you guys see a need to
start it? So, our two founders, Will MacAskill and Ben Todd, they were both studying philosophy
at Oxford. And they thought that they wanted to have as much impact, you know, help people as
much as possible with their career. And so, they started just doing some research of their own,
trying to figure out what jobs should they take. You know, should they go into philosophy?
Does that have a big impact?
Should they go into politics instead?
Perhaps they should go and try to make a lot of money
and donate that to effective charities.
And they basically found no one had really tried
to pull together this information before.
So everything that they were putting together
was basically original work.
And I think pretty early on, a couple of months into just doing this investigation
for their own sake,
they gave a presentation at Oxford,
which got a couple of dozen people along.
And they explained basically the key ideas that they'd found.
I think this is up on YouTube
if you want to check it out,
although the information's a little bit outdated now.
But they've found things like,
it seems like doctors don't actually save that many lives,
but you can save a lot of lives if you give to like really effective charities that work in the developing world.
Yeah, I think a doctor saves about 10 lives over the-
That's about right.
In rich countries, that's about right.
Yeah, over the course of his or her career.
Yeah.
Yeah.
So, they, like, gave out kind of the preliminary results that they'd found just after a small amount of investigation. And they found that a number of people in the audience
completely changed their life plans
on the basis of this one presentation that they'd given.
And so they began to wonder,
maybe the most impactful thing that we could do
is to continue doing this research
and then tell other people about it.
And, you know, if you can persuade just one other person
to do the thing that you would have done with your life,
then potentially you've, like, doubled your impact. So, there's this argument that by doing advocacy, by trying to
change other people's behavior, you can potentially get a lot of leverage. It probably seems easier to
persuade one other person to take the career that you would have taken than to spend your entire
life doing that career yourself. And given that it seemed like they'd shifted several people's
career plans in kind of a day or at least a few months of work, there was a lot of gain here.
So, it was a volunteer project for a while. They continued doing research and putting information
up on the website and they got kind of enough promising signs that it was a useful thing to do
that in 2012, they decided to make it a proper organization and hire their first staff member
and they found people who were willing to donate, and kind of the rest is history.
Yeah. So, Rob, what kind of impact could people expect to have
taking a job in a high impact area? So, in some of the priority areas that we're focused on,
it's quite hard to measure the impact. Yeah. It's very hard to like tell how much you've reduced
the risk of these things that may well not happen anyway. But kind of to set a lower
bound, we look at if you took like a really low risk option where we're really confident of the
impact, how much good could you do? And a good baseline there is to think, well, what if you
like went and got just a professional job that you might have taken anyway, and then you donated
that money to a charity that kind of saves lives at the lowest cost that we can find.
And in that case, if you look at GiveWell's estimates of how much it costs to prevent someone from dying in the developing world of an easily prevented disease:
if you give to the Against Malaria Foundation, they predict that it costs about $7,000 to prevent a child from dying of malaria. So if
I'm thinking about, you know, the typical audience that you're talking to, they're probably earning,
or could earn, somewhere between, like, fifty thousand and a hundred and fifty thousand dollars.
You know, it's going to vary over the course of their career, but they might well be able to spare
seven thousand dollars each year without, you know, taking a dramatic hit to their quality of life.
They'd still be able to eat out sometimes and live in a perfectly nice house,
which suggests that they could save someone's life
basically every year over the course of their career,
at least on average,
which would mean maybe they could save 40 people's lives.
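[As a rough check on that figure, using the numbers above, with a donation of $7,000 a year over a roughly 40-year career:]

$$\frac{\$7{,}000 \ \text{per year} \times 40 \ \text{years}}{\$7{,}000 \ \text{per life saved}} \approx 40 \ \text{lives}$$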
And that's just taking a very kind of conservative baseline
where you're not necessarily pursuing a vastly different career.
You're giving a decent amount,
like much more of your income perhaps than other people do,
but not an amount that's going to be really hard to bear.
And you're giving to, like, perhaps the safest option,
an option that's not that highly leveraged,
where you're not going for big possible gains
by taking risks in politics
or research; you're just choosing
the absolute kind of safe index fund.
And nonetheless, you'd be able to save 40 lives,
have a massive impact on 40 different people.
So I think that suggests this is a very important issue.
If I said that at relatively low cost,
you could save 40 people's lives,
I think most people would say
that's a really valuable thing to do.
And our estimate is that it's more impact than a doctor in a rich country has
over the course of their career.
In reality, I think if people focus on, you know, our priority areas
and they're willing to kind of go hard and take some risks,
they can have much more impact than that,
potentially, you know, 10 or 100 fold as much.
But, you know, it's harder to prove that.
It's more kind of a judgment call.
I know Will MacAskill says, you know,
imagine if you rushed into a burning building and saved someone's life. You'd feel like an absolute
hero, and you would be. You know, multiply that by 40, right?
Yeah. I mean, I guess just
pressing the donate button on the website doesn't give you quite the same level of satisfaction.
People are not going to carry you through the streets. But, yeah, you're having a
huge impact on real people.
Now, the most impactful career,
does that mean that you need to be a utilitarian
or do most people tend to be utilitarian
when they come to 80K for advice?
Are they thinking about how they can maximize the good
that they create in the world?
So, if you're a utilitarian,
you're likely to be very interested in our advice
but the vast majority of readers aren't utilitarians.
To be honest, I am quite sympathetic to that view, although I think we don't really have
really strong evidence within moral philosophy to know which approach is correct, if any of them.
So one should generally be a bit uncertain about these things and give a bit of weight to
every approach. But I feel reasonably confident that welfare is one of the things that matters morally, if anything does.
And that's actually why we focus on improving welfare of people and animals.
And welfare meaning like well-being, not necessarily social security.
Yeah, well-being, making people's lives go well.
We focus on that because basically every moral philosophy agrees that that's one of the
things that matters, that it's very often bad when people suffer and that if they're enjoying
their lives or getting the things that they want out of life, then that's better than if they don't.
So, it's a fairly unifying principle that most people are interested in. And if just one of the
things you care about is whether people's lives go well, that they don't suffer unnecessarily and that they mostly have a good
time and, you know, achieve their goals and find fulfillment,
then I think our advice is going to be useful to you.
And that's, I think, why we have quite a large audience: kind of regardless of
your philosophical views, there are potentially quite actionable things that you can get from
reading our career guide.
Would you describe yourself as utilitarian?
Definitely utilitarian leaning.
Yeah. Okay.
Yeah.
I mean, I feel like there's a correlation between people who are
high IQ and utilitarian leaning. Is that fair to say?
It's interesting. I've seen surveys of moral philosophers
which suggest that they're
pretty divided across a lot of different views. I think at least there's a correlation between
kind of, like, analytical thinking, perhaps, and a kind of mathematical or logical style of reasoning,
and utilitarianism. I'm not sure whether, like, every
aspect of intelligence necessarily associates with utilitarianism.
Yes.
Yeah. I think it might also just be that people who are more intelligent are more drawn to kind of
strong, consistent theories, whereas I think people who are less intelligent perhaps spend less time
thinking about this and more often go with kind of common-sense morality, which is kind of a
pastiche of lots of different considerations thrown together.
Or System 1 morality, right?
Yeah. So it wouldn't surprise
me if you found that kind of intelligent people are more likely to be utilitarians and
more likely to be libertarians and more likely to be deontologists and all these different things,
because they're more likely to kind of pick a theory and run with it.
Yeah. So, does the EA
movement then think that, you know, everyone can be persuaded across to pursuing high-impact careers, or is it just temperamentally not for
everyone?
Well, I think many people can be persuaded to make doing good for the world
with their career a significant factor. At least most people who are relatively well off,
or, like, you know, live in rich countries where they don't have to worry about just surviving themselves and providing for their family.
I don't think most people are going to become
utilitarians at any point soon, but many can still be persuaded, precisely because almost
everyone would like the world to be better. We have surveys on this,
on what things people worry about when choosing a career. And from memory, about 80 percent of people say that they would like their career to make the world a better place,
to help other people, and that that's an important aspect of their work. And when we've done
research into what makes for a good career, what makes it enjoyable and fulfilling, feeling
like your work is meaningful, which basically comes down to feeling that it's useful to other
people, is one of the key properties of a career that people want to stay in. Like
medicine: working in medicine is very unpleasant in some ways. It's, like, very long hours, difficult
work, you're dealing with potentially tragic situations. But people in medicine
tend to have very high levels of satisfaction with their career, and the key reason is that they
find it meaningful, because they feel like they're helping people, and in most cases they are. So I think even if you're kind of
only concerned with self-interest, then you have a reason to want your career to be actually
helpful to other people.
I guess every so often I do talk to someone, and I explain
what we do and, you know, what kind of advice we give, and they just say, to be honest, I just don't care about other people. Like, I'm not going to choose a career based on these ethical
considerations, because fundamentally I'm a selfish person and I just want to kind of provide for
myself and my family and friends. But that's rare. I think most people don't feel that way.
And when I encounter those people, I'm just like, okay, I don't agree
with that, but I'm not going to, like, spend my time trying to change your mind, because there's lots of
people who are much more sympathetic and, yeah, easy to persuade.
Do you think that's rare because there
are social norms against publicly stating that you don't care about other people, or is it rare
because people are genuinely, you know, at least partly altruistic?
Yeah, I mean, a bit of column A, a bit of column B. I think it's fairly rare for someone to want to use, say, most of their time or their money
to help complete strangers, at least. That is, yeah, an unusual kind of psychological
quirk. But I think most, like, yeah, humans just are very social animals. We want kind of the
approval of others, we want to seem useful to the group, and so we want to feel like we're contributing. And probably
there are, like, very strong evolutionary reasons why, yeah, people who were useful to others
and good to have around were more successful in the evolutionary environment
and more likely to reproduce. So we have these, like, pretty strong instincts. I mean,
when people feel like they're not contributing to society, I think it produces a lot of mental health issues potentially,
a lot of grief for them.
So, yeah, I do think humans are very complicated animals.
We have a lot of motivations, different kinds,
but I think most people really do want to make a difference to the world.
And given how humans evolved in these very cooperative societies,
that's not too surprising.
So, 80,000 Hours helps people find the most impactful careers
and that necessarily means that you need to help decide
between different causes.
So, firstly, let's talk about some of the different cause areas
that you guys recommend and then secondly, I'll ask you about how you actually manage to quantify them and draw comparisons.
So, what are some of the things that you recommend people work in?
Yeah.
So, we've only been able to look at kind of a fraction of the problems in the world
because there's so many different ways of slicing and dicing them.
Although, we haven't chosen the ones that we've investigated at random.
But some of the things that we think are most important
are kind of global priorities research,
so trying to figure out which problems are most important,
given that not many people work on that.
As I mentioned, kind of disease control, pandemic prevention.
We think people underestimate pretty significantly
how easy it could be for civilisation to be pretty destabilized by a disease that killed, you know, hundreds of millions or billions of people.
We're very interested in the development of kind of new technologies. So, historically,
we've seen that society has been kind of radically changed by technology in the past. It seems like
one of the main drivers of history. So, you know, we invented all kinds of machines that made the industrial revolution possible and completely transformed the world and human life.
And so anything that looks like it could do that in future is potentially something that could have a very large effect on history and where you might want to have people guiding how that technology appears.
There's a bunch of different possible technologies that could have like a very big effect on history.
One of the ones that's most prominent,
people discuss it a lot now, is artificial intelligence.
So what if we managed to make machines
that could do general reasoning the way that humans do,
but do it a lot faster, perhaps, or a lot more cheaply?
How could that transform society?
And obviously, that could have very positive effects
if you could get these AIs doing tasks that humans can't do
or doing all kinds of things that we do do now, but for us,
so we could just have lots of leisure time or a lot more wealth.
But it could also potentially go badly
if the wrong people control this technology
or they apply it in the wrong way
or the AI system is designed in a way that puts it at odds
with human interests.
Then there's other issues like preventing war.
So like one of the ways that the 21st century could go really badly
would be if the United States and China
ended up fighting kind of a great power war.
Decent chance that that could lead to basically
the end of human civilization.
And there's obviously people in government
trying to prevent that from happening.
But there's not a lot of, kind of, charities, organizations in the private sector or the non-profit
sector, that are focused on preventing that, even though it's potentially one of the most important
questions facing us. Then you've got kind of nuclear security in general.
We do still face the possibility, basically, of civilization ending if there's ever kind of a
nuclear accident that prompts a nuclear exchange between the US and China, or Russia and the United States. That's
something that's, like, started getting a bit more attention in the last year or two, unfortunately
for all the wrong reasons. What other issues are there? There's also
improving kind of decision-making procedures in government. So some of your listeners might be
familiar with the work of Philip Tetlock, who's spent the last 30 years doing research into how to predict the future
accurately. And also a former guest of the 80,000 Hours podcast.
Yeah. I'm a huge fan of Philip's work. I was very honored to get to talk to him for a bit.
You can stick up a link to that episode. It was a great episode, yeah.
Yeah, so he spent a lot of time trying to figure out
how can you predict the future accurately
and then how can you use that information to make better decisions.
I mean, he's done a lot of this work for the US intelligence services.
So there's a particular focus on kind of international relations
and politics and predicting disasters that might happen there
and how do you prevent them. So I think his work got a whole lot more funding after the Iraq war when a lot of people
realized that basically a failure of intelligence or at least something that was to some extent a
failure of forecasting, of predicting whether Iraq had weapons of mass destruction led to the
waste of trillions of dollars and the loss of hundreds of thousands of lives, basically.
And so they turned to academics of various different kinds
to figure out how they could improve the methods here
so that you wouldn't just get the groupthink
and perhaps the political meddling that made the Iraq War possible.
So, yeah, if you're interested in that,
he's got this book, Superforecasting,
where he describes how you can predict the future better.
And we think that that could be extremely important
because bad decisions in government are one of the key ways, again,
that the future could go very badly
and civilization could be destabilized
if you get the wrong decisions made.
So, if you make sure that military generals are getting the right advice
and have an accurate idea of what impact their actions will have,
that could be very good.
Yeah, yeah.
So, that's sort of like a meta, an enabling factor?
That's right. So if you could improve these forecasting and
decision-making processes, then that would potentially have a lot of impact over many
different areas. You know, if you could get this into all government departments, it could improve
policy-making in education and in health and all kinds of social policy, as well as questions of kind of defense
and security. So that's one of the reasons why we think it could be quite high-impact.
So that's just a taste, but if people are interested they can look at our
problem profiles on our website.
Yeah. You've got your work cut out for you, people.
We need to solve these, starting now.
We only have about seven or eight staff.
So, we do kind of have to narrow down our focus a bit.
We can't know a ton about all of these different areas.
So, at any point in time, we pick a handful of them and try to learn as much as we can about them so that we can offer really good advice to people who are interested in going into those areas.
Wow.
So, one thing that strikes me about a lot of these causes
is that, for many people, at first glance they would come across as very remote risks. And I guess,
as a species, almost by definition we're bad at thinking about tail risks, especially existential risks, because if we had experienced, you know,
something that destroyed us, we wouldn't have been around for that
to have been built into our evolutionary psychology.
Right.
But I guess one of the key messages maybe that we'd like to get across
for this podcast is that these things are still really important to think about.
And we don't want any more Steven Pinkers. So, I'm not sure if you've read Enlightenment Now.
I have, yeah.
Yeah, quite shocked at how, and we discussed this in our episode with Hugh Price, the Cambridge
philosopher, but I was surprised at, you know, how someone of Pinker's credentials could be so
dismissive of existential risks. In Enlightenment Now, he thinks that,
you know, they're sort of romantic, dystopian ideas pursued by or funded by tech billionaires,
and they distract from the real problems like climate change. So, what do you think of that?
I was also somewhat frustrated by parts of Steven Pinker's book, though I really respect him and I
liked a lot of what he was saying. But I think we dived into talking about what problems we think are most pressing in the world.
But it's worth taking a step back and thinking about how we assess problems to try to shortlist
the ones that we think are most important for people to focus on. So, the three criteria that
we look at in each of our problem profiles or each time we review a new problem area is importance,
tractability, and neglectedness. So, importance or scale is we try to measure how many people
are affected by this problem and how much. So, for example, if you were thinking about
working on curing a disease, we would look at how many people, you know, have this disease,
how many people are forecasted to have it in future,
and how bad is it for each of those people?
You know, how much does it cause them to suffer
or how much does it interfere with their life?
Then for tractability or solvability, we think, well,
if we increase the resources going towards solving this problem by 10%,
you know, what fraction of that problem would we be likely to solve?
Or what would be our probability of completely fixing it,
perhaps if you're thinking of just inventing
a complete cure for a disease?
So that gives you some idea of how easy is it to fix?
Because there's some issues that are very big in scale
and no one's working on them,
but that's because you can't fix them.
So it would be great if we could invent
a perpetual motion machine;
that would solve a lot of the world's problems by giving us unlimited energy. But it's an intractable problem.
And then there's neglectedness, which we often find drives a lot of the differences
between the problems. So because when people start working on a problem, they tend to choose
the most impactful things first,
and then once they've done that,
they move on to solutions that seem somewhat less promising.
You get this thing called declining returns.
So if you're the first person to work on a problem,
you can probably have a lot more impact
than if you're the hundredth person to arrive
or the millionth person to arrive working on a problem.
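[A common stylized way to capture these declining returns, offered here as a gloss rather than a model Rob cites: if total progress on a problem grows with the logarithm of the resources $n$ invested in it, the marginal impact of the $n$-th unit falls off as $1/n$:]

$$\text{progress}(n) \propto \log n \quad \Rightarrow \quad \frac{d}{dn}\,\text{progress}(n) \propto \frac{1}{n}$$

[So the first person to work on a problem can matter enormously more than the millionth.]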
So we very often look for issues that we think are big in scale and can be solved,
but that not many people are working on. So, a lot of your listeners will be worried about
climate change. And we basically agree with the consensus view that this is a very serious problem
that could be destabilizing for civilization. But there is a lot of money spent already and a lot of people focusing their careers
on preventing climate change.
And for that reason, we suspect that a lot of the low-hanging fruit
is already being taken,
that a lot of the most impactful things that people can do
are already being done,
and adding one more person to that effort
won't make such a huge contribution.
Whereas some of the other problems that I mentioned
are potentially similar in
scale, similar in importance or in the kind of risk that they present to humanity,
but there are far fewer people who are focusing their whole career on solving those problems.
And that's basically why they end up looking particularly important. And this is a
reason why we think, if you're looking for the most impactful things to do, they're going to be
weird. Because things that seem absolutely common sense,
lots of people have already noticed them
and started working on them,
and you've probably heard of them before.
Right.
Yeah.
So, we think of it as it's inevitably going to be the case
that if we're doing our job,
the things that we're suggesting
are going to seem a bit counterintuitive in some way.
They probably shouldn't seem absolutely crazy,
but they're not going to be completely mainstream.
And as we've done more and more research, I think our advice has moved further and further away from the mainstream, which is exactly what you'd
expect. Because we start out mostly knowing about what other people know. And we're not yet ready,
perhaps, to make bets that are strongly against the consensus. But as we learn more, and we think
about these ideas and run them past other people and check them,
you can become gradually more confident
that the ideas that you have
that not everyone already believes
are actually worth pursuing.
That maybe they're not guaranteed to be right,
but that they've got a good enough shot
that this is kind of the highest-impact thing
that you, or at least we, are able to work on.
Okay, I'm glad you backed up.
So this sort of makes sense of that list
of very unusual cause areas.
Yeah.
Or at least, yeah, somewhat obscure issues.
Yeah.
So, then let's move on perhaps to the risk thing.
So, yeah, people will have noticed that a lot of the issues that I mentioned were about kind of risk management and risk management at kind of a civilizational level.
So, what's going on there?
I think there's two main drivers. One is that when you're thinking
about how to improve welfare as much as possible, one issue that's kind of neglected in our view is
the very long term. So, there's about 8 billion people alive now, but humanity could potentially
continue for hundreds or thousands of years, potentially even longer. So, there might be, you know, until humanity dies out, it could be potentially trillions of people
who live. And their interests are not hugely represented in kind of the political system,
or people don't tend to pay a lot of attention to that. There's some discussion of intergenerational
equity, but given the magnitude of the damage that we could do to them and the number of people or the number of beings that could potentially exist in the future, I think that the long term doesn't get as much weight as probably it ought to.
And that's a reason why issues that affect future generations primarily, we think, are kind of neglected by our economic and political system.
And you can see that in climate change, but I think that the same thing has played out in these other problem areas. And we're particularly focused on problems that we
think could lead to human extinction, because that would preclude the existence of all these
future generations. So, yeah, if we were to have a nuclear war that resulted in all people dying,
that would be absolutely terrible for the current generation, you know, one of the worst things really possible. But it would also be a catastrophe for all of the
future generations that could have existed. And I have an episode with the philosopher Toby Ord
about this issue of, you know, how should we think about intergenerational equity and the
long-term future and how much weight should we give it relative to other things. So, that's one
reason to focus on kind of making sure that humanity doesn't die out or that
civilization doesn't really run off the rails. Another reason why we focus on these risk
management things is, as you said, we think humanity is, or like people in general are
quite poor at thinking about risk. And I think there are good evolutionary reasons for that,
that the kind of risks that we faced in the historical environment where the human mind was evolving are not really like the ones that we face now.
Since we invented nuclear weapons, we did have the ability to just cause the human race to end.
But there was nothing like that in the ancestral environment. And so, we don't really think that much about tail risks
and kind of these extreme but somewhat unlikely events.
It's very hard for them to have quite the emotional salience that they ought to have,
for them to scare us as much as they should.
And for that reason, tail risks often get neglected.
But at the same time, sometimes they get overweighted.
So, terrorists,
for example, exploit the fact that something extremely visually distinctive sticks with us: they do
precisely the things that kind of terrify us, even if they're not necessarily killing that many
people. And so, they can make us overweight particular risks.
Yeah, this is called the
availability heuristic. Yeah. So, the availability heuristic is that we assess how common things are by how much we can remember them. And because terrorist attacks are so vivid,
so striking, we tend to think that they're more common than they are because they stick out in
our memory. So, yeah, there's ways that we can end up paying too much attention to particular risks
and also ways that we can end up neglecting them. And it's basically something that we're
not terribly good at reasoning about as individuals,
nor as a civilization or as countries. So, you know, we've spoken to people in government about these issues, bureaucrats and politicians, and they'll often say, you know, you're right. These
are like very serious risks that we're facing. And someone should be focused on that. But I can't do
that because that's not what the electorate demands.
There's no money for it in our budget.
There's no political demand for it.
So this is just something that I don't have the discretion to work on,
even though I think it's very important.
And so basically the fact that we're not terribly good at reasoning about risk, and particularly about high-stakes, low-probability risks,
goes all the way up the political system and means that as a civilization,
as a species, we're just kind of flying blind.
Not nearly as many resources are given to these things,
and not nearly as much reasoning goes into these things, as would be ideal.
Wow.
Well, I'm glad you guys are thinking about it.
I mean, you've sort of turned this whole field of what should we work on into a science.
Like, I wouldn't say that.
Okay.
Perhaps early on in the appearance of effective altruism, there were people who wanted to make this into a science.
But it does involve a huge amount of judgment calls.
And especially because we're working,
like we think that the most important areas to work in
are very often kind of new areas
or like problems that not many people have worked on.
So there's often not a huge evidence base.
They're things where you're doing the initial discovery.
You have to be a bit speculative,
a bit willing to deal with imperfect evidence,
which means that you have to rely quite a lot on just human judgment,
like knowing a lot about the world and knowing a lot about history
and being able to make good decisions in a very uncertain environment
about what matters the most and what things will work to solve them.
And that means that it doesn't look so much like the natural sciences.
It looks more like social science, where the evidence is much worse,
much patchier, and you have to accept the fact that often there just isn't
a paper that settles an issue.
Yeah. So, speaking of which, going back to the criteria you guys use
for prioritizing causes: importance, tractability, and neglectedness. So, I'm sort of imagining three
axes in my mind, X, Y, and Z, and one's importance, one's tractability, and one's
neglectedness. And if you had an issue that was equally important, tractable, and neglected,
it would sort of form a cube along those three axes, and you could almost weigh the volume of
that cube against other issues. But is each criterion equally weighted? Like, is one unit
of importance equal to one unit of neglectedness
to one unit of tractability?
So, we define and then measure these terms
in such a way that you can multiply them through.
And you're exactly right.
The volume does indicate kind of how pressing it is,
all things considered.
Kind of the math is a little bit difficult to go through in audio,
but you can link to our framework,
where we explain kind of how we get things to cancel out such that it does work
smoothly. And if you can kind of estimate each of these three parameters, then, yeah,
the cube is the pressingness.
Wow. So you guys have actually modeled this?
Yeah. This model
comes from Owen Cotton-Barratt, who's a mathematician at Oxford, who now kind of does
global prioritization research. I think initially the importance, tractability, and neglectedness
framework started out as this qualitative thing where people were just kind of saying, well,
this seems like very important. This seems like very solvable or like very hard to solve. And
it seems like a lot of people are working on this or not many. So, just kind of score it out of
five. But he found a way that you can attach, you know, specific measurements to each of these words
that matches how we talk about it, but also means that when you multiply them, it's actually a
meaningful number. And so you can try to be more precise about it than just saying this
is, like, very neglected or not neglected.
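[To make the multiplication concrete, here is a minimal sketch in Python. The problem names and all the scores are hypothetical placeholders, not 80,000 Hours' actual estimates, and the real framework defines each factor carefully so the units cancel; the point is simply that the three factors multiply into a single pressingness score, the volume of the cube.]

```python
# Minimal sketch of the importance, tractability, neglectedness (ITN)
# framework discussed above. All problem names and scores below are
# hypothetical placeholders, not 80,000 Hours' actual estimates.

def pressingness(importance, tractability, neglectedness):
    """Multiply the three factors through: the 'volume of the cube'."""
    return importance * tractability * neglectedness

# Rough scores per factor, on an arbitrary 1-10 scale.
problems = {
    "Pandemic preparedness": (9, 5, 7),
    "Climate change": (9, 6, 2),  # important, but far less neglected
    "Global priorities research": (8, 4, 9),
}

# Rank the problems by the product of their three factors.
for name, scores in sorted(problems.items(),
                           key=lambda kv: -pressingness(*kv[1])):
    print(f"{name}: pressingness = {pressingness(*scores)}")
```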
So, speaking of AI, I mean, this is an issue that I've been thinking a lot about recently.
We just spoke to Hugh Price on the podcast,
and I've been reading Bostrom's book, Superintelligence.
What are some of the key AI risks
and why is it an issue we should be worrying about?
So, we think it's an important issue
because the scale of the impact could be very huge,
both positive and negative.
There's not that many people working on it.
And it also just seems like there's things that we can do that would really reduce the risk and increase the potential upside.
So there's a lot of potential ways that AI could go wrong.
One might be that it's used in military technology.
So this becomes kind of a new arms race between countries.
And that most of the advances are in how you use AI in a hostile way, and it could be very destabilizing to the international order.
Another one that people have talked about is that the gains from artificial intelligence could be very concentrated, and the question is how you spread machine learning advances to kind of everyone else so that most people benefit from it.
A more extreme way that things might go wrong is if you have an artificial intelligence system that is significantly more intelligent than humans typically are, and perhaps can think much faster than we can, because messages just move much faster on silicon chips than they do in the human brain. And it would have a
short-term memory that holds much more than seven items. So, it might be able to quickly have
insights that humans might not be able to have. And then if we give it a goal that kind of isn't
what we really meant. So, there's all these, like, stylized examples of how
this could potentially go wrong. The classic one is kind of the paperclip factory, where
you tell an AI to make as many paperclips as cheaply as possible, and it just ends
up converting the whole world into paperclips. Like, obviously it's not going to go down that way,
but just in general if you have a machine intelligence that has a lot of processing power and even
the ability potentially to improve how well it
thinks, that's a very
powerful machine. That's like an intellectual
rocket that's taking off.
And you really want that rocket to be
pointed in the right direction. If it's
pointed in the wrong direction, it's just going to move further and further
away from our goal or never really get to what
we want. And there is
certainly this risk that if we give an AI system a goal that isn't what we
want and it has the ability to think about our intentions and predict the future very
well, it could realize that a risk to it achieving its goal is that we're going to turn it off
and we're going to stop it from achieving its goal because it's not what we intended
for it to do.
And in that case, you very quickly become adversaries, and it's going to try to figure out, how can I make sure that the humans aren't turning me off? Now, I don't think
that's actually going to happen because people have noticed this issue and they're finding ways
of making AI corrigible, which is this term for able to realize that its goal is mistaken in some
sense and undo it. But there's other problems of that kind that seem harder to solve and that probably others
that we haven't even realized yet. And we basically need people to do this technical research to figure out how you design a machine learning system that can notice errors, that we can correct, that's not going to run out of control, that's going to do the things that we want.
And just current algorithms don't have these properties.
They're not easy to inspect.
They're not potentially easy to stop.
They don't notice their own mistakes.
They don't notice when the environment's changed,
and so they're doing something that was not the original goal.
There's all kinds of failure modes that they have.
And if readers are really interested in diving into this,
there's this great paper called Concrete Problems in AI Safety
that describes five different ways that these algorithms
kind of deviate from what humans want.
And if you make them much more powerful,
then they can deviate in ever greater ways.
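Here is a toy illustration of the off-switch incentive Rob described a moment ago. This is not how any real system is built, and every number is made up: it just shows that an agent whose only objective is expected paperclips will, given the option, prefer to disable its off switch, because being shut down reduces expected paperclips.

```python
# Toy model of the incentive problem described above: a naive objective
# ("maximize expected paperclips") makes resisting shutdown instrumentally
# useful. Purely illustrative; all numbers are made up.

ACTIONS = {
    # action: (probability humans shut the agent down, paperclips made if it keeps running)
    "comply_with_oversight": (0.5, 100),
    "disable_off_switch":    (0.0, 100),
}

def expected_paperclips(action):
    p_shutdown, clips_if_running = ACTIONS[action]
    return (1 - p_shutdown) * clips_if_running

best_action = max(ACTIONS, key=expected_paperclips)
print(best_action)  # "disable_off_switch": the naive objective rewards resisting shutdown
```

Corrigibility research is, roughly, about designing objectives under which complying with oversight doesn't score worse than resisting it.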
So what should people do?
If you're someone who's a machine learning researcher,
then obviously you can do that kind of technical research.
You can understand how these algorithms function
and design parts in them that make these mistakes less likely.
If you're thinking about international relations or politics,
then you can think, you know,
what kind of policies should be adopted?
How can you have kind of treaties between countries
that ensure that machine learning is not used
in an adversarial way, and that people coordinate?
And in particular, that they coordinate
to not develop the technology really quickly
because they're in some kind of race against one another.
And so they have to scrimp on these safety issues
and making sure that they're not going to go wrong
in some unanticipated way.
So those are some of the approaches that you could take.
We have quite a long problem profile,
three podcasts,
a couple of follow-up articles on this on our website.
If you find that what I'm saying is a little bit surprising
or confusing or not completely convincing,
then that would be pretty sensible.
Some of what I'm saying isn't completely common sense.
But we flesh that out in these articles
and kind of explain some of the details that I've had to skip over here.
Great.
We'll link to those as well.
Okay.
So, we've sort of spoken about one side of the equation, which is: which are the most important causes to work on if you want to have an impactful career.
But I guess the other question is, you know, what's the best fit for the individual?
So, how do individuals decide which career path they might choose personally?
Yeah.
So, we have an article in the career guide about this that you should link to.
Some of the key things that we say in there are that personal fit is very important.
Sometimes people misread us as saying there's kind of like one most important career that
everyone should do.
But that's absolutely not the case because people differ so much.
There's no way that just a few things could be the most important because it's going to depend on your specific circumstance.
And if you look at the evidence on achievement, it seems like the most successful people in most fields are vastly more successful than the average or the median person within that field. So you see this in scientific research, in business, in politics,
that people in the top 1% of success
are getting most of the citations.
They have kind of most of the political power.
They're making most of the profits in business.
This doesn't definitively show that personal fit is so important, because it could be that the outcome is somewhat random and that you're getting very skewed outcomes that aren't only because of personal fit. But I think it's pretty suggestive that if you're more likely to be one of those people who really thrives in a field, then you're much more likely to have a large impact. So, we discuss that. Then there's a question of, given that personal fit seems to matter quite a lot, how should you figure out what you're good at?
And the bottom line there is that it doesn't seem like it's possible to find that out without actually trying to do things.
So there's kind of career aptitude tests and these personality tests that try to tell you whether you're a good fit for X, Y, and Z.
They don't really have that much predictive value, unfortunately.
What's far better, far more predictive of your performance in a job is doing a work test.
So take a thing that you'd have to do in this job.
Like for me, I guess it'd be hosting a podcast or writing an article. Try to do it, and then get people to assess how good you were at it.
That gives you a better idea than anything else, perhaps unsurprisingly,
of how likely you are to be good at that job.
And you might think,
wow, just doing one piece of work,
that doesn't really give you enough time to learn,
which is exactly right.
So we suggest that early on in people's careers,
when they're undergraduates or early graduates,
that they try to do a whole lot of internships,
usually in quite disparate areas.
Do an internship in politics,
do an internship in business,
do an internship in the nonprofit sector, and then see which one of these things
seems like the best fit for you.
And then after you graduate,
at least to begin with,
keep like moving job every kind of year or two
until you find something where you feel like,
yes, I'm nailing it here.
Like this is really the place for me.
And potentially, if you just keep getting into jobs
where you don't feel like you're killing it,
then you should maybe just keep switching
for potentially quite a while
until you do find something
where you have the potential
to be really extraordinarily good.
I'm curious, has any of your career advice
changed over time?
Yeah, it's changed in a bunch of ways.
I think early on, we talked a lot about earning to give. So this was the idea that one of the highest impact things that you could do would be to go out and make a lot of money and then give it to effective charities. I think part of the reason why we talked about that was both that it seemed like a good idea and also that the media was very interested in covering it, so whenever we talked about it, it kind of had its own momentum, and people would ask us about this all the time. The framing that journalists would give it would be, you know, maybe the most moral thing you can do is to become a banker. And this was soon after the financial crisis, so this was very counterintuitive to people, and so it's kind of an interesting story to run. But we did think then that this was potentially very high impact.
And we do think that it's high impact now.
But as we've looked more into other areas, like doing science R&D, or going into politics, or even just starting a new nonprofit organization focused on one of these priority areas, we think for many people, maybe most people, at least people who are willing to take risks
and be very ambitious and aggressive with their career,
that very often those will be the higher impact paths.
And we updated our website to indicate that around 2015.
What are some other areas?
I think over time, we've also come to appreciate
the importance of personal fit, as we were talking about.
Perhaps early on, we didn't give that quite enough weight.
I guess, as I was saying, also, we've become more confident in some of the counterintuitive problems
that we encourage people to work in. So, early on, we talked quite a lot about global health
and poverty. And we think that is an important issue, and it was one of the areas where the evidence base is much stronger, so you can get a much better sense of what kind of impact you might have, either giving money or working within the area, you know, trying to prevent people from dying of these easily prevented diseases in the developing world. So that was a natural place to start: when you're a bit unconfident, choose something that seems really impactful, where you can get really strong evidence of what the impact is over time.
Yeah, we've moved away from these more common sense answers towards these more unusual answers. So we were aware of these other considerations, like what about preventing war, what about steering the development of new technologies. But when you've only encountered these ideas a couple of months or a few years ago, you really want to go and double-check your reasoning, inasmuch as they run against common sense. And basically, over time, we did that. We ran it past a lot of people, we thought about it more, and thought, no, actually, these things really are important. And so we've gradually started being more aggressive and saying, yeah, we think that these are some really pressing problems that would be very valuable to get more people working on.
Another thing that we kind of changed our mind on was early on we were worried about
the fact that when you take a job, very often you're displacing someone else from that job.
So, you can imagine, let's say you're applying to work in a non-profit and you get a role.
Isn't it the case that kind of someone else would have gotten that job because they had a whole lot of applicants to that job?
And so, really, how much impact are you having?
Maybe this suggests that you'd have a larger impact by going to work and earning to give, because it's very likely that the other person who would have gotten the job in banking wouldn't have been giving nearly as much of their money. So this is the concept of marginal impact as opposed to absolute impact? Well, I think counterfactual impact is maybe the term. So you've got to really think about what would actually have happened otherwise. And we thought at the time that if you apply to get a job, almost always that job would have been filled by someone else who was similarly as good, especially in kind of high-prestige or high-interest roles. And I think we were wrong
about that. Very often, at least in kind of the high skilled roles that we're encouraging people
to go into, the best candidate is significantly better than the second best candidate. And if
that candidate were to disappear, there's a decent chance they just would hire no one at all,
which was somewhat surprising to us. So, how did you actually ascertain that?
I think asking people, like learning more.
Yeah.
Learning more just about kind of the world of business and non-profits and politics.
Interesting.
It just turns out that, I guess maybe because personal fit and experience and skills are so important, very often, at least in these high-skill roles, there's really only a few people who are able to do them to a really high caliber. And if you can't get one of those people, I mean, in these high-skill roles there's a lot of potential to mess things up and make things worse if someone doesn't know what they're doing, so very often people will be too risk-averse to hire anyone. They want to get someone who they really trust, and there just aren't a lot of people who fit that bill. So, yeah, that pushed in favor of people doing direct work. That was one of the things that made us less keen on earning to give and pushed us more towards people just doing jobs that seem directly valuable.
Another issue: initially we were very interested in trying to assess which roles are more replaceable than others, and I think we concluded that it's just too hard to measure. So that brought us back towards the common sense view that you should just take the job that seems highest impact in itself and then hope that this replaceability consideration cancels out across the different roles, because if you can't get evidence of how it differs, then it can't really guide your decision.
So, yeah, we have a blog post that we can link to about that, about how our view changed about replaceability.
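To make the replaceability reasoning above concrete, here is a toy calculation, with entirely made-up numbers, of counterfactual impact: the good that happens because you take a role, relative to what would have happened otherwise.

```python
# Toy counterfactual-impact calculation for the replaceability point above.
# All numbers are invented for illustration.

def counterfactual_impact(your_value, p_replaced, replacement_value):
    """Expected extra good done because *you* took the job.

    your_value:        good done per year if you take the role
    p_replaced:        chance the org would have filled the role anyway
    replacement_value: good per year the next-best hire would have done
    """
    return your_value - p_replaced * replacement_value

# Old assumption: you'd almost surely be replaced by someone nearly as good,
# so direct work looked low-impact at the margin:
print(counterfactual_impact(your_value=100, p_replaced=1.0, replacement_value=95))  # 5.0

# Revised view: the best candidate is often much better, and the org might
# hire no one at all, so direct work keeps most of its value:
print(counterfactual_impact(your_value=100, p_replaced=0.5, replacement_value=60))  # 70.0
```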
Great. Yeah.
I mean, it's amazing how dynamic you guys are. One impression I get from 80,000 Hours, and I guess the movement more broadly, is how self-aware it is, constantly willing to update and upgrade its ideas.
Yeah. I think early on, our views changed a lot. We're not terribly ideological. We're really quite pragmatic because we're trying to achieve particular outcomes, like improve welfare,
and we're not very committed to how you might do that. So, in as much as someone turned up evidence
that you could potentially have a very large impact by going into politics, we had no particular reason to deny that or argue with that.
We just, like, I guess we scrutinize it, right?
But we're not committed to our pre-existing view.
We're very happy to change.
And I think mostly people react the way that you do; it, like, adds to our credibility when we change our minds.
So, we have a mistakes page on our site where we talk about things that we've done wrong and incorrect conclusions that we reached.
And I think it's good to be open about that.
There's just no way that kind of at the start of the project you can know all the answers to these things.
So, it absolutely is the case that our views should change early on.
I think our views kind of have stabilized a bit over time, which you might think maybe we're getting old and sclerotic and not mentally flexible enough. It could also just be that kind of we've settled on better answers
and so it's now harder to overturn these results
that we've kind of believed for some time.
As the director of research,
what's your intuition about how much they'll change
over the next five or ten years?
I think we'll definitely change some of the priority problem areas.
At least we'll, like, expand to some new ones,
especially as we get more researchers and writers
and people working on the in-person team.
So we can have more focus areas.
I think we may get better evidence
about the impact of advocacy versus research
versus direct work versus earning to give.
I mean, certainly the circumstances might change.
So over time, we found that it was easier to raise quite a lot of money than we had expected, but somewhat harder to get people who are a great fit for the roles that we were trying to hire for and the other groups are trying to hire for.
So, that pushed against earning to give.
That was maybe more of a change of circumstance than a change of opinion.
I think the core career guide is going to remain much the same, because with the
core of the career guide, a lot of what we're saying is just summarizing consensus wisdom
from social science about what makes for a satisfying career and how do you find a role
that's a good fit for you? How do you develop skills that are really useful? That's not the
area where we're offering unusual views.
With that, we kind of just want to go with the best evidence that's available.
So that's fairly solid.
An area that we're still exploring is
how do you get people to coordinate very effectively?
So initially, there were only a couple of dozen, maybe hundreds, of people who were interested in following this advice. So the question of how you organize yourself as a group to have more impact by working together was less relevant. But as the effective altruism community has expanded, there's now thousands of people who are trying to have a large impact with their career, and they're working across many different areas, and they're somewhat connected by this reputation that the effective altruism community has as a whole.
And so there's ways that if they coordinate well
and share evidence well and work together,
they can have a much larger impact, we think.
But also if people mess up and do disgraceful things
that harm the reputation of everyone else,
that can make things worse.
So this question of, yeah,
how do you coordinate large groups effectively
is going to become more important,
hopefully, as the number of people following our advice grows.
So, one feature that seems to distinguish the jobs in a lot of these core areas that you guys have identified is that the roles are highly competitive. What can a young person do, say a graduate who has a relevant degree and wants to do something highly impactful with his or her career, but probably doesn't have the skills or experience as of yet to obtain one of these roles? Yeah. So, one of the most important articles in the career guide is about how to
build up career capital, which we define as anything that puts you in a better position to
have an impact in the future. So, obviously that includes skills that you might learn at university or on the job. Also credentials that can open doors that otherwise would just be
closed to you. It's also people you know who can tell you the information that you need, or
give you the introductions that you need. Also sometimes just having money in the bank gives
you a lot more flexibility to change jobs. So there's all kinds of things that enable you to
have a larger impact and get these high-impact roles in future.
So we talk there a lot about what kind of majors are good if you're an undergraduate,
what are the best jobs to get straight out of university
if you can't immediately get one of these high-impact roles?
What other ways can you build up career capital?
How can you meet the people who you need to know?
When is it a good idea to do a PhD and further education?
And when is it not?
And what kind of self-training can you do that adds the most value?
So, there's a lot we could go into there.
Maybe we should just direct people to that article.
Yeah, yeah.
I think that at least, you know, it sounds like you've got the answers.
Well, we have some answers.
Yeah.
I guess, what's a few highlights?
We tend to encourage people to go into fairly challenging,
often quantitative majors in university
because very often the skills that you learn there
are things that are very hard to learn
outside of a structured course.
We encourage people to, early in their career,
take jobs that make them fairly flexible,
that teach them skills that will be relevant
in quite a large number of roles
because very often early on in your career, you don't know what you're going to be doing, so learning some very narrow technical skill that doesn't transfer to any other job is a bit risky. Instead, it's useful to learn something like how to write well, which you're very likely to need in a wide range of roles. And then in terms of doing PhDs, we encourage people to probably not start them straight out of university unless they're really confident they want to go into a particular field. Instead, you know,
try maybe, you know, a job or two before you commit to doing four to seven years on a particular
research topic. Because we have just seen a lot of people do PhDs because they're kind of on
autopilot and they just want to continue studying because it's too scary to enter the real world of
work. And then they get towards the end of it and they're like, wow, I just
burned a bunch of years and I don't even want to work in this field anymore.
I want to loop back to effective altruism now. You know, we began by talking about EA and
under the definition of altruism, it would seem to entail sort of, you know, giving something away,
whether that's a part of you or something you own. How does the concept of,
you know, having a high impact career and the, you know, the organization of 80,000 hours
fit under that definition? Because if you're not earning to give, then you're not necessarily
demonstrating altruism. Yeah. So, altruism is often defined as helping other people at your own expense, but to us, we don't care whether it's at your own expense or not. If it's good for you, then all the better. So probably a more accurate term would just be helping: if you help other people and you enjoy it, then that's fine. We don't particularly value someone sacrificing, paying an extra cost to help someone. I think that's something that people get a bit obsessed by, because they want to show off how giving they are, how much they want to help even at cost to themselves. And that can sometimes lead people to do things that are less effective but more showy of the cost that they've paid. But we're not terribly interested in that kind of thing. We're just thinking about the best outcomes.
Yeah.
If you're having a great time and it's absolutely no sacrifice for you, then all the better.
And I think for many people, the highest impact careers that they can take are really interesting.
And they're often jobs that pay quite well because they're viewed as roles that are important in society.
And so, they tend to pay to get the right people. So, I think for most
people, pursuing a high-impact career in the way that we recommend ranges between a small sacrifice
and kind of a small gain. I don't think it tends to affect the well-being of the people who we're coaching or who are reading our advice very much one way or the other.
As a prominent effective altruist yourself, Rob,
are there any areas where the effective altruism movement is going wrong?
Yeah, that's a very good question and one that I ask a lot of my guests
because we're interested to get feedback and respond to it.
I think I used to have really good answers to that
and strong answers to that.
But I feel we have been reasonably good at improving
in some of the ways that people have criticized us for. One was, for example, people not being friendly enough. And I think that was partly a symptom of the fact that we attract people who really enjoy debating and really enjoy arguing about big ideas, and so that can potentially lead to heated discussions. Also, just a lot of the conversation happens online, and, you know, Facebook and just text is not a great medium for having really friendly exchanges. But in person, people are really nice. And people have learned this lesson to try to be more polite than they would otherwise be.
And that's made things more enjoyable in the community.
I think one way that we're still potentially failing
is that we tend to attract people who are very analytical thinkers.
They enjoy theory.
They love abstraction.
And we do a lot of that.
But that means that we can potentially neglect going into the nitty-gritty details of collecting the empirical information from the real world about the problems that we're worried about and how to solve them.
So, for example, if you're worried about pandemics, there's a lot of people who have expressed concern about this for a long time,
but relatively few who have gone into what specific diseases to worry about, what are the
technical details of how they would change in a way that's dangerous, what policies could you do
and how would you get them implemented, how do people feel about those policies in government,
in the bureaucracy, what are the challenges? That kind of information can be harder to collect, and it feels maybe a bit more like a slog to people because there's just so many facts to
pull together to figure out what to do. But it's very hard to have an impact without at some point
engaging at that level of detail and really understanding how to act in the world. So,
that's something that I'm trying to improve with the podcast because very often I'll talk to people
for two or three hours, kind of subject matter experts, and just grill them on all of these
details. And then we put up the transcript and people can listen to it if they're interested.
And it's very hard to kind of write really polished articles that have that level of detail.
But in a conversation, you can cover a lot of ground quite fast. Perhaps another issue is that
I talk to quite a lot of people who are looking for ways to have an impact
by volunteering or kind of doing internships or just doing it on the side. And I think that that
is quite hard, at least in the priority problems areas that we talk about. Typically, in order to
make a difference, you need to become really specialized in some way, kind of an expert in
some way. And you're just much more likely to achieve that if you find a way to do the work full time. So, I guess I would encourage more
people to think about, yeah, how can I become someone who's very good in at least maybe some narrow area over the course of my career. And that allows you to avoid making mistakes, because if you're only doing something on the side, you just don't have the experience to know how things can go wrong. So, there's a lot of people who, you know,
promote ideas from effective altruism in kind of a casual way. And very often that's useful,
but there's also a lot of ways that you can do that badly. If you explain the ideas
in a way that's, you know, unappealing or, you know, as I said, unnecessarily controversial
or just confusing, that can turn people away.
And, of course, people who are doing that full-time
learn these things very quickly and then can do an excellent job.
Whereas someone who's volunteering, it's a bit more hit and miss.
So, yeah, I guess there's this whole issue of kind of quality control,
perhaps, so trying to do fewer things to a very high standard rather than many things in a scattershot way. That's perhaps another way that I feel we could improve a bit. Awesome. Rob, this has been an amazing conversation, and I really appreciate your time. I think it's clear how passionate you are about these issues, and I think we're probably all glad that you've dedicated your considerable intellectual energies to thinking about how to make the world a better place and not just some corporate career.
But at the same time, do you ever experience regret or I guess what Alain de Botton would
call status anxiety, knowing that you probably could have gone and earned a very high-powered
salary somewhere else, but you've sort of foregone that to work on these causes?
No, not even close. So, one thing is that my salary is fine. It's true, I don't make quite
as much as I would if I went into the private sector and really tried to maximize my earnings.
But I think my work is much more fulfilling. Like, yeah, I really feel like I'm having a
positive impact on the world, which I might not have if I just went into a random corporate job. I've got to admit, I actually just have really cheap taste. I don't particularly like fancy things, I'm happy to travel on the cheap, and I still feel a bit like an undergraduate in some ways. So I'm basically saving money on the income that I have from the nonprofit sector.
Great.
So, that's not an issue.
And also, you know, I enjoy being able to share my ideas with people.
I guess I don't have a lot of capital capital, but, you know, in a sense, I have kind of
cultural capital.
Yes.
Because people listen to the show, they read my articles, and maybe that gives you a different kind of status that is also fulfilling, if I'm honest about it.
Yeah, absolutely. And you're also one of the, I mean, this is only really the second time I've
met you, but I feel like I almost meet you every day on Facebook. You're one of the best,
I guess, utilizers of social media I've ever seen. Every day, I think you're putting out posts or links to articles that are stimulating genuine discussion. I don't think I've seen on anyone else's profile, let alone even a page,
the same level of intellectual debate as I see you generating on Facebook of all places.
Firstly, do you have some sort of goal here? Is this like brand building or something?
Secondly, what techniques are you using? Are you scheduling posts? How do you find the time to do it? Yeah, so my Facebook presence is a mixed blessing. So what is going on there? I think it started just because I have this compulsion to share my ideas. I read a lot, I spend a lot of time, like I said, I have cheap taste, and one of the things that I most enjoy doing is just reading articles on the internet. And then I'm like, oh, I love this, I want to share this with people, or, I thought this was stupid, I want to say why it's stupid. And I've been doing that basically since 2006 or 2007. Facebook turned out to be the place where I think you get the largest audience and reaction to that. I guess this was before Twitter, although Twitter has serious problems because you can't really write anything substantive on there. But people are absolutely addicted to Facebook, right? People are just always checking Facebook, but they often complain that the content on there isn't very good. So if you actually put interesting articles on Facebook, add your commentary, and then have other smart people responding to it, you have this kind of captive audience, because Facebook has figured out how to compel people to come back because they're so addicted to it, and then you can drive them into reading your ideas. So I don't know why other people haven't done it. I guess maybe other people prefer actually writing for newspapers rather than just on their own Facebook wall. I guess you could say you're the first person to find a way to create social benefit out of Facebook.
Yeah, I'm actually, I mean, I'm not a huge fan of Facebook.
I think it may well be a bad thing.
And I'm not sure; it does take up probably more of my time than it ought to. It certainly did when I was an undergraduate.
But it has produced some benefit for some people.
It's been a somewhat useful way of promoting some of the ideas that I really care about.
What advice would I give to people?
One thing is that it takes many years potentially to build up a good audience of readers, especially of readers who leave good comments.
So I have been posting articles that I think are intellectually stimulating on there for about 11 years. So that does allow you to attract smart people gradually over time.
And I have posted several things a day,
probably on average over that time period.
So you can imagine the amount of wasted time from my own life.
What else can I say about that?
I guess I wouldn't recommend that people do this.
But if you find yourself compelled to post things on Facebook, I recommend quoting things from the article, so people don't have to click through and can read the most interesting paragraph, and offering your own response. And perhaps also cultivate people who will leave interesting comments. And I think if there are people who are kind of toxic and leave comments that are nasty, then my recommendation would be to just delete them, because you want to create a nice intellectual community. Yeah, and in a sense it's your house; you don't want people coming in and trashing it. Yeah, I mean, I think I've actually used that analogy before. When people say, well, why don't you let people comment, I'm like, well, I don't just allow total strangers into my house to shout abuse at me and my friends, so why should people be able to say whatever they want on my wall? So, I mean, you have a role that's very intellectually challenging.
How do you balance your Facebook use against what, you know, I guess Cal Newport would describe as
deep work? I fail to. Well, so, I do block myself from Facebook. I have one of those apps that prevents me from going there, particularly during the day. Which one do you use? Freedom. Yeah. It's potentially quite useful. Unfortunately, part of my role is promoting our content on social media. So I guess I kind of take the hit for the rest of the team that I do that. And other people get to have
more peace of mind. Uh, I've also at various points in time, like I'll just take like three
months off of social media.
So just like completely block it and close my account while I potentially do
more deep work.
But the truth is I haven't found a great solution to this. Yeah. It's a process of constant sort of negotiation and avoidance as well.
We have to rope in these various apps.
And internal conflict.
I think one
thing I would say is that a lot of the things that I post are things that I'm encountering through my work one way or another. So often if I'm doing research, I'm like, wow, that was an interesting article, I'd love to share that with people and get their thoughts on it. Yeah, I think there's definitely some sort of creative process to what you're doing. Yeah, I think that's right. I'm kind of putting up drafts of my ideas as I go. Yeah. That's not all of it;
there's definitely some time wasting in there as well.
But some of it's useful.
So, Rob, it's now time for the final five.
Are you ready?
I'm not sure, but hopefully.
First question, what is the last thing you do at night
and the first thing you do in the morning?
You're going to get me on bad habits again, I think.
So, I'm like, I'm not a great person in terms of personal systems
and living a super clean life.
I'm surprised to hear that.
Probably the last thing I do at the moment is watch the Colbert Late Show.
So, your listeners will be aware, I think, that like the politics in the United States is a
little bit depressing at the moment, but it is good to laugh at it, at least with the-
The least we can do.
Exactly. In the morning, snooze my alarm.
How many times on average?
Two or three.
Okay. I'm about the same, actually. It's the sweet spot. The second question: what is one thing that you hold to be true that most or the rest of society doesn't agree with you on? Yeah, this is a classic question. There's a bunch of unusual political views and, I guess, philosophical views I have. I think one of them that shows up quite often in my attitudes to
practical questions is that I guess I don't believe in kind of persistent personal identity.
So, the way that I think about personal identity is not that kind of I'm Rob for my entire life.
I'm just like one same person all the way through. I'm just like a kind of a set of properties that are changing
over time and so i'm kind of like myself when i was 10 but i'm kind of i'm not really the same
person it's just a question of degree kind of so you can imagine someone who's like a completely
different person or maybe not even a person at all and you say well they're like zero similarity
and then you've got like me now compared to like me in five seconds it's like close for one but
it's just kind of a sliding scale and as and as you uh age uh over time like you just become gradually a different person uh
at every point in time um yeah that shows up in kind of issues of like moral responsibility
uh in questions of uh like what if you could you know take one person make them two what if you
could like take a person and then like put them on a computer would they be the same person um would it be good to extend someone's life um actually that reminds
me of another one so uh i think that it would probably be pretty good if we could just um end
aging altogether uh and people could decide when they wanted to die um that's definitely a
controversial opinion yeah it's very very divisive yeah some people are very on board with that some
people think that it's uh that's a bad idea um i could put up a link to uh what i think is a good video about that
Okay. One that advances my view. I mean, I guess I also think that if humanity survives for another thousand years, we'll figure out a way to basically stop aging. I think it's going to be a very difficult technical problem, but I would be surprised if it was an impossible thing to do.
And I think it would basically be good
if people could live as long as they would like to.
Yeah.
There's certainly some promising signs at the moment
coming out of the science of aging.
Yeah.
I mean, I'm not expecting to live forever.
I think it'd be very surprising if it happened soon enough
that they would do anything for me.
It's a depressing thought.
Any other contrarian truths? Oh, what is there? I guess I think that we could probably increase immigration several times over, and that would be good for Australia or the US and also good for the migrants. But I suppose there's the question now of political blowback about that: maybe even if it's good in a direct sense, it leads to bad political outcomes down the line. What else? Maybe I'll leave it at that. I have other controversial views. I think that's a pretty good survey of your opinions. The first one, I mean, we've had quite a few philosophers on the podcast, but no one has spoken about the personal identity idea, which is very Derek Parfit. So in Reasons and Persons there's a lot of exploration of this. I don't completely agree with Parfit's views, or maybe I just have kind of a different emphasis, but yeah, we can stick up a link to some articles about this personal identity question
and kind of the paradoxes that you get if you take the standard view that someone's just the same person all the way through their life.
Okay, third question.
What is the worst piece of advice that you've ever received
relating to your career?
Some people have suggested that I should go into politics,
that I should actually run for office.
Why not?
I guess, you know, never say never.
It sounds like a very stressful thing to do, constantly being in the public gaze like that and having people have a go at you. Maybe also, I feel like perhaps I'm just a bit too honest a lot of the time; I think it would be very hard for me not to say what I actually think, even when that's not the politic thing to say. But maybe I could become more circumspect as I get older. Question number four: what's one thing that you've changed about yourself
in the last year?
I guess I kind of do have the view that
people, once they're adults, don't tend to change a ton.
That like very often kind of your strengths remain your strengths
and your weaknesses remain your weaknesses.
And very often people do better by trying to find a role
in which their kind of strengths are important
and their weaknesses don't matter so much
rather than trying to dramatically change who they are.
What brought about that realization?
I guess just like observation of people.
You're sort of overturning the whole self-development industry here.
I mean, it's worth trying to improve yourself,
but like maybe don't count on it.
At least, you can develop absolutely new skills and habits, but it's hard to change these fundamental things. Like, if you're an agreeable person, you're probably going to be agreeable in 20 years' time; if you're very conscientious, you'll probably still be conscientious; and if you're kind of disorganized, there's a good chance you'll still be disorganized. But that doesn't mean you can't make progress, because you can find roles in which being disorganized isn't such a huge problem, where maybe it's even a benefit.
Yeah, at least as far as those habits are concerned,
I think Daniel Kahneman says that one of the most profound realizations
he had in the behavioral sciences was it's more about changing
your environment, not yourself.
That's the most effective thing we can do.
Yeah, I mean, I'm not sure about that,
but that does kind of fit with my observations a bit.
How have I changed, though? I guess I've become more focused on good communication, and less focused on getting attention and being a bit inflammatory. I think when I was younger, I kind of enjoyed riling people up and potentially saying things that were not entirely well thought through, you know, in order to get a rise out of people.
Okay.
These days, I just find that a bit cringy.
So, I just try to kind of explain things in a pretty plain way and like, you know, think things through before speaking to a greater degree than before.
And not presenting things in kind of an unusual, in a provocative light, just trying to make things seem sensible
rather than peculiar.
Yeah, I think most of us probably look back on our younger selves
with a fair degree of cringing.
I think this is a pretty common thing as people get older.
You're like, look at the things you wrote when you were 20
and just like face palm.
Exactly.
Okay, question number five.
Do you have a final message for our audience?
This is the chance for me to pitch my stuff, isn't it? Plug away. So, yeah, I have this podcast, the 80,000 Hours Podcast. We do deep dives into what we think are the most pressing problems and, you know, concrete things that you might be able to do to solve them and make a real difference. We also have some fun episodes where we discuss topical issues with people who've often written books about them or written
articles about them. I think it's quite a lot of fun. I think it's pretty informative. I really
love the kind of long interview format that we're doing here and that we do on the show. It allows
you to kind of learn a lot more about a topic than you typically do in just, you know, a magazine
article or a newspaper article, where the journalist only has an hour to write it. So they make mistakes, and they don't really get enough detail for you to actually do anything with what you're reading.
So check it out.
I'm trying to think, which episodes would I recommend?
I think the episode with Will MacAskill.
Great episode.
Yeah, would be very interesting to people
if they're interested in kind of the moral philosophy
that we've been talking about.
If you're interested in setting priorities globally,
then maybe the episode with Holden Karnofsky,
who founded GiveWell and now runs the Open Philanthropy Project,
could be a good option.
We've got, I think, three long episodes now
about kind of pandemic control.
We've got three episodes on artificial intelligence,
both kind of the technical side and the policy or strategy side.
We've got the episode with Philip Tetlock, of course,
about government decision-making and how we can improve that.
We've got a number of episodes on factory farming,
which didn't come up here.
So why we think factory farming is really quite morally abominable
and what could be done to end it without really having
to inconvenience people at all.
So, yeah, if any of those topics are interesting to you,
then pull out your phone and type in 80,000 Hours Podcast
and bring it up and see if you like it.
Also, we have our career guide,
which we've been talking about on our website.
It's just got a lot of really useful information there,
even if you're not interested in doing good with your career.
I think people can learn a lot.
That's quite actionable.
We have this... I've been talking about how I'm a bit skeptical
of self-improvement, but our most popular article
is How to Be Successful in Any Career,
which talks about things that you actually can do
that make your life better across a whole lot of different domains,
including your personal life and your professional life,
mental health, where to live, that kind of thing.
So check that one out.
It's a reasonable place to start.
Otherwise, if this is all interesting to you,
then you should seriously consider getting involved
in the effective altruism community as a whole.
So, if you just type in effective altruism into Google,
you'll get a bunch of sites.
There's the Effective Altruism Handbook, which would be a decent place to start reading.
There's a list of resources on effectivealtruism.com.
And there's also this conference, Effective Altruism Global.
So, that's coming up in a month.
Well, I guess when this comes out, it'll be about two weeks in San Francisco.
So, it might be too late for people to apply and go to that one.
But there's one in Melbourne this year, I think, in June or July.
Yeah.
So, if you'd like to meet more people who are involved in 80,000 hours or related issues,
then you can apply to go to that.
I think it's not too expensive and there'll be a couple hundred people there.
So, we'll stick up a link to the application form for that.
Yeah.
Well, Rob, it's been a real pleasure.
Thank you so much for joining me.
It's been a lot of fun.
I hope to talk again soon.
Absolutely.
Cheers.
Thank you, my friend, for listening to that incredible conversation with the great Rob Wiblin. Everything we discussed,
all of the links that we mentioned and all the topics that we covered
are available in the show notes on our website, thejollyswagman.com,
so you can find that all there.
And if you're enjoying what we're doing,
I'd really appreciate it if you could rate and review us on iTunes.
It helps other people who might be interested in this show to find it as well.
So thank you.
And until next week, this has been great.
Ciao.