The Tim Ferriss Show - #612: Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change
Episode Date: August 2, 2022

Brought to you by LinkedIn Jobs recruitment platform with 800M+ users, Vuori comfortable and durable performance apparel, and Theragun percussive muscle therapy devices. More on all three below.

William MacAskill (@willmacaskill) is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will.

His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, "No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book." Please enjoy!

*

This episode is brought to you by Vuori clothing! Vuori is a new and fresh perspective on performance apparel, perfect if you are sick and tired of traditional, old workout gear. Everything is designed for maximum comfort and versatility so that you look and feel as good in everyday life as you do working out. Get yourself some of the most comfortable and versatile clothing on the planet at VuoriClothing.com/Tim. Not only will you receive 20% off your first purchase, but you'll also enjoy free shipping on any US orders over $75 and free returns.

*

This episode is also brought to you by Theragun! Theragun is my go-to solution for recovery and restoration. It's a famous, handheld percussive therapy device that releases your deepest muscle tension. I own two Theraguns, and my girlfriend and I use them every day after workouts and before bed. The all-new Gen 4 Theragun is easy to use and has a proprietary brushless motor that's surprisingly quiet—about as quiet as an electric toothbrush. Go to Therabody.com/Tim right now and get your Gen 4 Theragun today, starting at only $199.

*

This episode is also brought to you by LinkedIn Jobs. Whether you are looking to hire now for a critical role or thinking about needs that you may have in the future, LinkedIn Jobs can help. LinkedIn screens candidates for the hard and soft skills you're looking for and puts your job in front of candidates looking for job opportunities that match what you have to offer. Using LinkedIn's active community of more than 800 million professionals worldwide, LinkedIn Jobs can help you find and hire the right person faster. When your business is ready to make that next hire, find the right person with LinkedIn Jobs. And now, you can post a job for free.
Just visit LinkedIn.com/Tim.

*

[07:20] Recommended reading.
[13:26] How Dostoevsky's Crime and Punishment changed Will's life.
[18:12] Maintaining optimism in the age of doomscrolling.
[23:41] What is effective altruism?
[26:04] Resources for maximizing the impact of your philanthropy.
[27:45] How adopting a check-in system has most improved Will's life.
[32:32] Caffeine limits.
[34:08] Effective back pain relief.
[41:18] What is longtermism, and why did Will write What We Owe the Future?
[43:44] Future generations matter.
[46:42] Finding the line between apathy and fatalism that spurs action toward ensuring there's a future.
[52:23] What Will hopes readers take away from What We Owe the Future.
[55:56] What is value lock-in?
[1:01:38] Most concerning threats projected over the next 10 years.
[1:09:28] Most promising developments happening now.
[1:13:47] How Will refocuses during periods of overwhelm.
[1:18:48] Perils of AI considered plausible by the people who create it.
[1:30:42] Longtermist-minded resources and actions we can take now.
[1:36:29] Parting thoughts.

*

For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast.
For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors.
Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday.
For transcripts of episodes, go to tim.blog/transcripts.
Discover Tim's books: tim.blog/books.

Follow Tim:
Twitter: twitter.com/tferriss
Instagram: instagram.com/timferriss
YouTube: youtube.com/timferriss
Facebook: facebook.com/timferriss
LinkedIn: linkedin.com/in/timferriss

Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil deGrasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, and many more.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
This episode is brought to you by Vuori Clothing, spelled V-U-O-R-I, Vuori. I've been wearing
Vuori at least one item per day for the last few months, and you can use it for everything.
It's performance apparel, but it can be used for working out. It can be used for going out
to dinner, at least in my case. I feel very comfortable with it. Super comfortable, super stylish.
And I just want to read something that one of my employees said.
She is an athlete.
She is quite technical, although she would never say that.
I asked her if she had ever used or heard of Vuori, and this was her response.
I do love their stuff.
Been using them for about a year.
I think I found them at REI.
First for my partner, t-shirts that are super soft but somehow last as he's hard on stuff. And then I got into
the super soft cotton yoga pants and jogger sweatpants. I live in them and they too have
lasted. They're stylish enough I can wear them out and about. The material is just super soft
and durable. I just got their clementine running shorts for summer and love them. The brand seems
pretty popular, constantly sold out. In closing, and I'm abbreviating here, but in closing,
with the exception of when I need technical outdoor gear, they're the only brand I've bought
in the last year or so for yoga running loungewear that lasts and that I think look good also.
I like the discreet logo. So that gives you some idea. That was not intended for the sponsor read. That was just her response via text.
Vuori, again spelled V-U-O-R-I, is designed for maximum comfort and versatility. You can wear it
running. You can wear their stuff training, doing yoga, lounging, weekend errands, or in my case,
again, going out to dinner. It really doesn't matter what you're doing. Their clothing is so
comfortable and looks so good. And it's non-offensive. You
don't have a huge brand logo on your face. You'll just want to be in them all the time.
And my girlfriend and I have been wearing them for the last few months. Their men's Kore Short,
K-O-R-E, the most comfortable lined athletic short, is your one short for every sport. I've
been using it for kettlebell swings, for runs, you name it. The Banks Short, this is their go-to land-to-sea short, is the ultimate in
versatility. It's made from recycled plastic bottles. And what I'm wearing right now,
which I had to pick one to recommend to folks out there, or at least to men out there,
is the Ponto Performance Pant. And you'll find these at the link I'm going to give you guys.
You can check out what I'm talking about, but I'm wearing them right now. They're thin performance sweatpants,
but that doesn't do them justice. So you got to check it out. P-O-N-T-O, Ponto Performance Pant.
For you ladies, the women's performance jogger is the softest jogger you'll ever own.
Vuori isn't just an investment in your clothing, it's an investment in your happiness. And for you,
my dear listeners, they're offering 20% off your first purchase. So get yourself some of the most
comfortable and versatile clothing on the planet. It's super popular. A lot of my friends I've now
noticed are wearing this, and so am I. VuoriClothing.com forward slash Tim. That's V-U-O-R-I
Clothing.com slash Tim. Not only will you receive 20% off your first purchase,
but you'll also enjoy free shipping on any US orders over $75 and free returns. So check it
out. VuoriClothing.com slash Tim. That's V-U-O-R-I clothing.com slash Tim and discover the versatility
of Vuori clothing.
This episode is brought to you by Theragun. I have two Theraguns and they're worth their weight in gold. I've been using them every single day. Whether you're an elite athlete or just a regular
person trying to get through your day, muscle pain and muscle tension are real things. That's
why I use the Theragun. I use it at night.
I use it after workouts. It is a handheld percussive therapy device that releases your
deepest muscle tension. So for instance, at night, I might use it on the bottom of my feet. It's
helped with my plantar fasciitis. I will have my girlfriend use it up and down the middle of my
back and I'll use it on her. It's an easy way for us to actually trade massages in effect. And you can think of it, in fact, as massage reinvented on
some level. Helps with performance, helps with recovery, helps with just getting your back to
feel better before bed after you've been sitting for way too many hours. I love this thing. And
the all new Gen 4 Theragun has a proprietary brushless motor that is
surprisingly quiet. It's easy to use and about as quiet as an electric toothbrush. It's pretty
astonishing. And you really have to feel the Theragun's signature power, amplitude, and
effectiveness to believe it. It's one of my favorite gadgets in my house at this point.
So I encourage you to check it out. Try Theragun. That's Theragun, T-H-E-R-A-G-U-N. There's no substitute for the Gen 4 Theragun with an OLED screen. That's O-L-E-D
for those wondering. That's organic light emitting diode screen, personalized Theragun app,
an incredible combination of quiet and power. And the Gen 4 Theraguns start at just $199.
I said I have two. I have the Prime and I also have the Pro,
which is like the super Cadillac version. My girlfriend loves the soft attachments on that.
So try Theragun for 30 days starting at only $199. Go to therabody.com slash Tim right now
and get your Gen 4 Theragun today. One more time. That's therabody.com slash Tim,
T-H-E-R-A-B-O-D-Y.com slash Tim. Can I ask you a personal question? Now would have seemed an appropriate time. What if I did the opposite? I'm a cybernetic
organism, living tissue over metal endoskeleton. The Tim Ferriss Show.
Hello boys and girls, ladies and germs. This is Tim Ferriss. Welcome to another episode of
The Tim Ferriss Show. My guest today is William MacAskill. That's
M-A-C-A-S-K-I-L-L. You can find him on Twitter, @willmacaskill. Will is an associate professor
in philosophy at the University of Oxford. At the time of his appointment, he was the youngest
associate professor of philosophy in the world. A Forbes 30 under 30 social entrepreneur, he also
co-founded the nonprofits Giving What We Can,
the Center for Effective Altruism, and the Y Combinator-backed 80,000 Hours,
which together have moved over $200 million to effective charities. You can find my 2015
conversation with Will at tim.blog/will. Just a quick side note, we probably won't spend too
much time on this, but in that 2015 conversation, we talked about existential risk and the number one highlight was pathogens.
Although we didn't use the word pandemic, certainly that was perhaps a prescient discussion
based on the type of research, the many types of research that Will does.
His new book is What We Owe the Future.
It is blurbed by several guests of this podcast, including neuroscientist and author
Sam Harris, who wrote, quote, no living philosopher has had a greater impact upon my ethics than Will
MacAskill, dot, dot, dot. This is an altogether thrilling and necessary book. End quote.
You can find him online at williammacaskill.com. Will, nice to see you again. Thanks for making
the time. Thanks for having me back on. It's a delight.
And I thought we would start with some, say, warm-up questions to get people right into some details of how you think, the information you consume, and so on and so forth.
So we're going to begin with a few questions I often reserve for the end of conversations. And we covered some of the other rapid fire questions in the last conversation for people who want a lot on your bio, how you ended
up being the youngest associate professor of philosophy in the world at the time of your
appointment, and so on. They can listen to our first conversation. But we spoke about a few books
last time, and I'd be curious, what is the book or what are the books that you have given most as
a gift and why? Or what are some books that have had a great influence on you? I know we talked
already about Practical Ethics by Peter Singer and then Superintelligence by Nick Bostrom last
time. But do any other books come to mind when I ask that question?
Yeah, so here are a couple. One is The Precipice by my colleague Toby Ord, who I co-founded Giving
What We Can with back in 2009. And it's on the topic of existential risks. So I see it as a
complement to my book, What We Owe the Future. And it details in quite beautiful prose and also painstaking detail
some of the risks that we face as a civilization
from the familiar asteroids
to the less familiar super volcanoes
and to the truly terrifying,
which I also discuss in the book
and discuss how we might try and handle
like artificial intelligence
and engineered pathogens and engineered pandemics.
And it also just talks
about what we can do about them as well. And so I think it's just like absolutely necessary as a
read. We'll talk, I guess, a bunch about some of those topics as we get into my work too.
So I have like another kind of set of books, which are quite different, but they've had
some of the biggest impact on just the background of my thinking
over the last few years in very subtle ways. And that's Joe Henrich's books,
The Secret of Our Success and The Weirdest People in the World. And Joe Henrich is a
quantitative anthropologist at Harvard. And his first book is just why are humans the most powerful and
ecologically dominant species on the planet? And people often say like, oh, it's our big brains.
And he's like, no, our brains are several times the size of a chimpanzee's brain,
but that's not the distinctive thing. The distinctive thing is that we work together.
Essentially, we're capable of cumulative cultural evolution, where I can learn something and then my children
will pick it up from me, even if they don't really understand why I'm doing it. And that
means that the way humans function, it's not like a single brain that's three times the size of a
chimpanzee. It's tens of thousands of brains all working in concert and now millions of brains over many
generations. And that's why there's such a big gap between chimpanzee ability or intelligence and
human intelligence, where it's not a scale-up of 3x, it's a scale-up of 300,000. The hive mind of
hominids, basically. That's exactly right. So on this perspective, humans are not just another species that are weird and not hairy,
and particularly sweaty and good at long-distance running.
Aristotle commented that humans are the rational animal, and that's what made them distinct
from other animals.
Whereas actually, we're just very sweaty, and that's one of our most distinctive characteristics.
And I like to think that I am therefore the most human of humans because I'm the sweatiest person I've met. So he has this book and that alone just, you know, really blew
my mind. It really made a big difference to how I understand humans. And he has this other book,
The Weirdest People in the World, which is about the psychology in particular of WEIRD people: Western, Educated, Industrialized,
Rich, and Democratic, which are the subject of almost all psychology experiments, but they're
not representative at all of most cultures. And in fact, they're very unusual among most cultures,
much more individualistic, much more willing to challenge authority, even perceive the world in slightly different ways. And the overall
picture you get from these two books is an understanding of kind of human behavior that's
very different from the kind of economic understanding of human behavior, as though
we're all these just like self-interested agents going around trying to kind of maximize profit for
ourselves. Whereas on this vision, it's like, no, we're these cultural
beings. We have like a vision for the world and we go and like try and put that vision into the world.
And that's what the kind of big fights are about. And I think it has a much better explanation of
history. When you said quantitative, I think you said quantitative anthropologists. Am I
hearing that correctly? What is a quantitative anthropologist? I know those two words separately, and I can pretend like I understand what those mean
together, but what does a quantitative anthropologist do?
So you might know kind of evolutionary biology has these formal models of how genes evolve
over time.
It's hard to make predictions within this field, but at least you have these like precise
formal methods that you can start to kind of understand what's going on in terms of how organisms evolve.
Now, it turns out you can do the same thing but applied to cultures.
Dawkins made this word meme very famous, and that kind of gets across the idea,
although it's not quite right because it's not like there's a single divisible unit of culture.
But nonetheless, you can think of different cultures
kind of like different species.
And some are more fit than others,
so some are going to win out over time.
And you can apply the same sort of formal methods
that evolutionary biologists use
to study evolution of genetics
to the evolution of cultures as well.
And Joe Henrich does that at least a little bit.
All right, well, I'll have to take a look. Now, we were talking, and I'm using the royal
we, you were talking about, I suppose I mentioned in passing, existential risks and threats, and
I have a number of questions related to those, but I want to touch upon first, perhaps an
unexpected insertion. I have in my notes here, Crime and Punishment by Dostoevsky as a book that
was important to you. And I would like to know why that is the case. That book is actually what
got me into philosophy originally. Back when I was about 15, I read it.
And at the time I was very into literature;
I wanted to be a poet and an author.
And it was via that that I learned about this word philosophy.
And I realized that like, oh, actually,
you can just tackle the big ideas directly.
You don't need to go via fiction.
But I was also particularly interested in the time in existentialist philosophy.
And this is something that honestly, like I kind of still bear with me.
I'm kind of a bit unusual in that I often think to myself,
could I justify my life now to my 15-year-old self?
And if the answer is no, then I'm a bit like,
oh, what are you doing?
You're not living up to
what earlier Will would have wanted for present Will. And the key thing, I think, for 15-year-old
Will, who was inspired by existentialism, was living an authentic life. And I still find that very
liberating and empowering and inspiring. So some of the things I do, so for example,
I give away most of my income, which is like a very unusual thing to do. And you might think,
oh, that's like a sacrifice. It's making my life worse. But actually I find it kind of empowering
because it's like, I am making an autonomous decision. I am not merely kind of following
the dictates of what social convention
is telling me to do, but I'm like reasoning about things from first principles and then making a
decision that's genuinely, authentically mine. And that was something I hadn't particularly
pegged to kind of acting morally when I was 15, although to some extent,
but that was something that really moved me then, and honestly continues
to move me today. How would you just, I often say for the listeners out there who may not be
familiar, but if I'm being honest with myself, I have not studied existentialism. And I hear
certain names associated with it, so I can kind of fake it until I make it and create the illusion.
I'll be like, ah, Kierkegaard, I think, maybe this person, that person. But
what is existentialism as it is portrayed in Crime and Punishment or conveyed?
One of the things I liked about Crime and Punishment and Dostoevsky's work in particular,
at least, is its wrestling with existentialism, which can get used in various ways. But here, one way of thinking about it is just
the world as it is has no intrinsic meaning. And yet we are like placed into it and have to
make decisions. And that's this like absurd position to be in. And you can create your own meaning out of that through radically free acts, like authentic,
genuine acts.
And Dostoevsky in his work kind of wrestles between three positions, I think.
One is this like existentialist position.
A second is just pure nihilism, which is just like actually literally, if you take it seriously,
and there's no God,
then everything is permitted.
There's no reason to do anything.
Not even like reason created from yourself.
And then third is this religious position,
which I think he actually ultimately endorses.
And it's almost like nihilism as like a proof.
The rejection of nihilism, like, therefore guarantees that you should be like
religious. QED, God. Yeah, exactly. Yeah. Well, it's like life is meaningless unless God exists,
you know, I'm now describing it in slightly Pascalian terms, but like you may as well act as if,
as if there's a God that is giving meaning to life. We're not going to spend a whole bunch of
time on it now, but in our last conversation, we talked about Pascal's wager,
but also Pascal's mugging, if I remember correctly.
Yeah, very good.
Something along those lines.
So we won't take a side alley down into Pascal's mugging just now,
but I said I had two things I wanted to ask you about.
The first was crime and punishment, which I think we've covered.
The second, before we jump into our longer conversation, which will go all over the place, and I may ask still some of the shorter
questions, when people hear existential threats, when they hear super volcanoes, AI, man-made
pathogens, et cetera, I think that there will likely be an apprehension, perhaps a little
seizure of the breath for some people listening who might think to themselves, my God, this is
just going to be audio doom scrolling. This is just going to come away from this conversation
with higher blood pressure, more cortisol. And my impression of you in the time that we've spent together is that you are not
nihilistic. You're not apathetic. You are not pessimistic. You're quite the opposite of all
of those things in some respects. How do you do that? Is that just will out of the box and that's
just how you came programmed? Is there more to it? And this is, I think, a crux question, because I don't see effective action in people, say, in my audience, including those who are very competent, if they don't have some degree of optimism or belief that they can exert change. So could you just speak to that?
Because I know I also succumb to just getting
waterboarded with bad news all day from around the world.
I'm like, I can't do, I can't,
I cannot put a salve onto all of this
for all of these people.
And it can be overwhelming.
So how would you respond to that?
I think there's two things that motivate this.
One is just the desire to actually make the world better.
And then second, I'll call low standards.
So on the first side, you know, age 21, and I'm like, man, I'm about to really start my
life.
I'm trying to look for like, I want to act modally.
I'm trying to look for different causes.
I bounce into a lot of the sorts of classic causes that you'd find
for someone socially motivated on a college campus,
like vegetarian society,
left-wing politics,
climate change stuff.
I found there was very little in the way of action.
There was an awful lot of like guilt and an awful lot of talking about the
problems,
but not that much in terms of like,
Hey, here are the solutions. This is how you can actually make the world better. And this is what
we should do. But if you actually care about making the world better,
and that's the key motivation, the size of a problem and like really thinking about the
suffering. I mean, it can be important, especially if it's motivating you, but the ultimate thing is
just what do you do? Something could be the worst problem in the world, but if there's nothing you can do, then it's just not relevant for the purpose of action.
And that therefore really makes me think in the first instance, always about, okay, well,
what's the difference we can make? Not like how scary are things or how bad are things,
but instead like how much of a difference can we make? And there it's like very positive.
So in the last podcast, we talked about a lot about global health and development.
And what's the difference you can make there? Well, if you're a middle-class member of a rich
country, it's on the order of saving dozens, hundreds, maybe even thousands of lives over
the course of your life, if you put your mind to it. That's huge. Now we're talking about
existential risks and the long-term future of humanity. What's the difference you can make? You can play a part in being pivotal
in putting humanity onto a better trajectory for not just centuries, but for thousands,
millions, or even billions of years. The amount of good that you can do is truly enormous.
You can have cosmic significance. And that's pretty inspiring.
And so, yeah, when you think about the difference you can make, rather than just
focusing on the magnitude of the problems, I think there's every reason for optimism.
And then the second aspect I've said was low standards, which is just,
what's a world that you should be sad about? What's a world you should be happy with?
Well, in my own case, I think, look, if I came into the world, and when I leave it,
the world is neither better nor worse. That's like zero. I should be indifferent about that.
If I can make it a bit better in virtue of my existence, hey, that's pretty good.
The more good I can do on top of that, the better.
And I think I have made it much better. I'm not zero. I'm like positive. And so all of the additional good that I potentially do feels like a bonus. And so similarly with humanity, when I
look to the future, what's the level at which I'm like, ah, it's indifferent. That's where just like
the amount of happiness and suffering in the future kind of cancel out. And relative to that, I think the future is going to be amazing. Already, I think the world today
is like much better than if it didn't exist. And I think it's going to be a lot better in the future.
Like even just the progress we've made over the last few hundred years, people today have like
far, far better lives. If you extrapolate that out just, you know, a few hundred years more, let
alone thousands of years, then there's at least a good chance that we could have a future where
everyone lives not just as well as the best-off people alive today, but maybe tens, hundreds,
thousands of times better. Yeah, I mean, kings a few hundred years ago didn't have running water, right? No.
Air conditioning.
They didn't have anesthetic.
Yeah, no antibiotics.
Oops.
If they were gay, they had to keep it secret.
They could barely travel.
Yeah.
Lots of things we easily take for granted, which we can come back to, except it may be related.
But why don't we take 60 to 120 seconds just for you to explain effective
altruism.
Your name is often associated, just so we have a definition of terms and people have
some idea of the scope and meaning of effective altruism, since you're considered one of the
creators or co-creators of this entire movement.
If you wouldn't mind just explaining that briefly,
and that way people will have at least that as a landmark as we go forward.
Effective altruism is a philosophy and a community that's about trying to figure out
how can we do as much good as possible with the time and money we have,
and then taking action on that basis. So putting those ideas into practice to actually try to make the world better as effectively
as possible, whether that's through our donations, with our careers, with how we vote, with our
consumption decisions, just with our entire lives.
What have been some of the outcomes of that?
Over, you know, I've been promoting these ideas along with others for over 12 years now.
We've raised well over or moved well over a billion dollars to the most effective causes.
So that means if we take just one charity that we've raised money for, the Against Malaria Foundation,
we've protected over 400 million people, mainly children, from malaria.
And statistically, that means we've saved about 100,000 lives or maybe a little more, which is, you know, the size of a small town, about the size of Oxford.
And that's just one charity. There's several more within global health and development.
I think in terms of other cause areas that we focused on within animal health and welfare,
hundreds of millions of hens are no longer in cages because of corporate cage-free campaigns that we've helped
to fund. And then within the field of existential risks, it's not as easy to say, oh, we've done
this concrete thing. This thing would have killed us all, but we avoided it. But we have helped make
AI safety a much more mainstream field of research. People are taking the potential benefits, but also the risks from AI
much more seriously than they were. We have also invested a lot in certain pandemic preparedness
measures. Again, it's kind of still early stages, but some of the technology there, I think,
has really promising potential, if put into action, to at least make sure that COVID-19 is the last
pandemic we ever have. One of the many things I appreciate about you and also broadly speaking,
many people in the effective altruism community slash movement is the taking of a systematic
approach to not just defining, but questioning assumptions and quantitatively looking at how you can do good, not just feel good,
if that makes sense. And that seems obvious to anyone who's in the community, but the vast
majority of philanthropy or charity, broadly speaking, is done without that type of approach,
from what I can tell. And it's really worth taking a closer look for those
people listening. Are there just a few URLs you'd like to mention for people who'd like to dig into
that? And then we can move into some of the more current questions. If you're interested in how to
use your career to make the world better, then 80000hours.org is a terrific place to go. I'm a co-founder of that organization,
gives in-depth career advice and one-on-one career coaching as well. If you're interested
in donating some of your money, then givingwhatwecan.org encourages people to take a
giving pledge, typically 10% of one's income or more. It's a great way to live. If you're
interested in donating to effective charities,
then givewell.org is the single best place
for donating to global health and development charities.
That's givewell.org.
There's also the Effective Altruism Funds, or EA Funds,
that allow you to donate within animal welfare
and existential risks
and promotion of these ideas as well.
All right. A few more calisthenics, then we're going to go into the heavy lifting,
the max squats of long-termism. All right. In the last, say, five years, you can pick the
time frame, but recent history, what new belief, behavior, or habit has most improved your life?
I think the biggest one of all, and this was really big during
writing the book, which was this enormous challenge. It was like my main focus for two
years over the course of the pandemic was evening check-ins with an employee of mine who also
functioned a bit like a productivity coach. So every evening I would set deadlines for the next
day, both input and output. So input would be how many hours of tracked writing I would do,
where going to the bathroom did not count. And a really big day would be six hours.
Sometimes, very occasionally, I'd kind of get more than that. And also output goals as well.
So I'd say I will have drafted this section or these sections, or I will have done such and such.
I would also normally make some other commitments as well, such as how much time do I spend looking at Reddit on my phone?
How much caffeine am I allowed to drink? Do I exercise? Things like this. And Laura
Pomerius, who was doing it, is wonderful and the nicest person ever, and she just never beat me up about this, but I would beat
myself up, and it was incredibly effective at making sure I was just actually
doing things. Because I, like many others, find writing hard. It's hard to get
motivated, it's hard to keep going. And sometimes, I don't know, I'd have gotten drunk the night before,
let's say, and it was a Sunday, and normally, you know, it would be a write-off for the whole day.
But I think like, oh no, it would just be so embarrassing at 7pm to have to tell Laura,
like, yeah, I didn't do any work because I got smashed.
And so instead, I would feel hungover and I would just keep typing away.
And that was just huge.
I mean, I think it increased my productivity.
I don't know, it feels like 20% or 25% or something,
just from these like 10-minute check-ins every day.
So these were 10-minute check-ins, seven days a week?
What was the cadence?
I was working six days a week.
So yeah, if she was doing something else at the weekend,
we wouldn't check in.
Right.
So the format would be, walk me through 10 minutes,
would be the first five minutes.
Here's how I measure it up to what I committed.
And here's what I'm doing next.
Exactly.
So you have a view of the day.
Did I hit my input goal, my output goal?
How much caffeine did I drink?
Did I exercise?
And then also, was I getting any migraines or back pain, which are two kind of ongoing
issues for my productivity.
And then next would be a discussion of what I would try to do the following day.
And interestingly, you might think of a productivity coach as someone who's like
really putting your nose to the grindstone, whereas with Laura it's kind of the opposite,
because my problem is that I beat myself up too much, and so we would have a conversation.
Out of the closet with the Reese's Pieces candy. Exactly.
Yeah, so I would be like, oh, I got so little done today,
so I'm going to have to just have a 12-hour day tomorrow or something.
Or I'll work through the night or something like that. And she's like, that doesn't make any sense.
We've tracked this before, and when you try and do this,
maybe you get an hour of extra work where you feel horrible for days afterwards.
So she would be very good at countering bullshit that my brain would be saying, basically.
Just a quick thanks to one of our sponsors and we'll be right back to the show.
This episode is brought to you by LinkedIn Jobs. It's summer 2022 and many small business owners
are busier than ever. Finding the right candidate to join your business can be the leverage you need to be able to ignore the trivial, delegate some of the urgent stuff,
and focus on longer-term strategic decisions. The big picture. LinkedIn Jobs makes it easier
to grow your team by helping you find the people you want to interview faster and for free.
Create a free job post in minutes on LinkedIn Jobs to reach your network and beyond to the
world's largest professional network of more than 800 million people. Then add your job and the purple
hashtag hiring frame to your LinkedIn profile to spread the word that you're hiring so your network
can help you find the right people. Simple tools like screening questions make it easy to focus on
candidates with just the right skills and experience so you can quickly prioritize who
you'd like to interview and hire.
It's why small businesses rate LinkedIn Jobs number one
in delivering quality hires versus leading competitors.
LinkedIn Jobs helps you find the candidates
you want to talk to faster.
Did you know every week,
nearly 40 million job seekers visit LinkedIn?
Post your job for free at linkedin.com slash Tim.
That's linkedin.com slash tim to post your job for free.
Terms and conditions apply.
So a couple of things.
Caffeine, what were your parameters on caffeine?
Like what were the limitations or minimums?
I don't know how you set it on caffeine.
And then how did you choose this employee specifically for this and why?
Caffeine. I think a big thing is just if I drink too much, I'm likely to get a migraine.
So I set my limit at three espressos worth. So about 180 milligrams of caffeine.
And I'm very sensitive. So it's like.
180 is legitimate for a sensitive person.
Yeah. Yeah, exactly. So that's kind
of a max that I do, whereas a double espresso is fine, but then it's like shading in between I'll
be very cautious about. And then how did I choose this person? I think it's like a very subtle
thing, the kind of rapport or personal fit you have with someone who can be a good coach, where
she kind of knew me well enough
that she knew the ways to, like, push me around. The combination of, maybe I'd call it friendly
pushiness or something, was like perfect. And it's, you know, it could be very easy to
go wrong on either side of that line. Sounds like I need an evening check-in. All right. Who is my victim going to be?
All right.
Maybe we can start here.
Yeah.
So I'll give you...
Will, I know, I know.
I know it's four in the morning,
but I had to call you for my evening check-in.
We're in different time zones for people
who may not have picked up on the...
That is not a New Jersey accent that Will has. Okay. Comment, sidebar, on low back pain. I know this came up in our last conversation. Have you not found anything to help? And I may have some suggestions if you would like suggestions, but have you found anything to help? Actually, I've almost completely fixed it.
So it was just, I mean, I was working, you know, I was just sitting in a chair,
especially, you know, pandemic and writing a book for eight hours a day. But there was actually
only one period that I started getting lower back pain. I remember in our conversation,
you recommended me these boots so they could hang upside down. And I did buy them and I confess,
I never used them. So I'm sorry, Tim.
Adherence failure. No, that's a failure of my recommendation. If it's not going to be used,
it doesn't make any sense for me to recommend it.
But what I did do in the end, I just developed my own workout routine where I got advice from
physios and so on. I talked to like loads of doctors. In general, people just aren't really
engaging with what your problems are. And like self-experimentation, I think was just better.
And it's also like, the other thing is just all of this takes loads of time. And like,
if you're a time-pressed individual, firstly, the advice is often geared towards old people.
So it's like very easy stretches or like movement that most people aren't doing. And then secondly,
it's like, man, you want to do all of this? It's like two hours or something. How can you do this more efficiently? So I developed my own routine, which involves
standing on a BOSU ball. So it's all on a BOSU ball. I've got two free weights. I do a squat.
I'm sitting in like squat position as the resting position. That's very good because it stretches
your hip flexors. And for those people who can't see, well, he's got his hands in front of his
chest. Yeah. Imagine looks kind of like a prairie dog, but really I think what that symbolizes is he has
the dumbbells in front of his chest, like a goblet squat, if people know what that is.
Yeah, exactly. With my legs wide, elbows in between your knees so that your
legs are kind of splayed out like that. And you'll feel a stretch on your hip flexors.
So cultures that squat to sit actually experience lower rates of
back pain. So that was the kind of inspiration there. And then from there, standing up squat,
do a bicep curl up into a shoulder press, go down, then deadlift going into an upright row.
That's all in a BOSU ball. And the thoughts here are strengthening your entire kind of anterior
pelvic chain. So I think my hypothesis was like, why was I getting this?
It's because I was an idiot young male who was like,
why would you work anything out apart from your beach muscles?
What would be the point of that?
And that majorly distorted my posture.
And then I would do that.
So kind of one of them every 20 seconds in like two sets of 10 minutes.
And then that combined also with core work.
So plank in particular really just, I think, sorted things out because it's all about,
you know, I've had just bad posture for 25 years, made worse by like very poor focus
at the gym.
And so it's like this long process of reconfiguring your body.
So it makes more sense.
And in particular, as we talked about, I had anterior pelvic tilt. So my, my gut stuck out,
my pelvis was too far forward. And so then it's like your glutes tearing that back and then
stretching out your hip flexors. Oh, and I invented my own stretch as well. So, for the listeners who don't know, I actually, I was previously married and I took a different name,
took my wife's grandmother's maiden name, so my name wasn't always MacAskill. It used to be Crouch.
And so I named this stretch the Will Crouch, in honor of my former self.
It involves hooking your, you stand up, you hook your foot into your two hands and then press out, kind
of extend your leg, but pushing against your two hands, and that stretches out this muscle that
goes all the way from your kind of pelvis up your back. And I've not found any other stretch that
stretches that particular muscle, and that was the one that was really causing all the pain. So the muscle is the longissimus thoracis? Yes. You do
this standing? Oh yeah, it's standing, that's right. Yeah, it's like a Kid 'n Play dance move. Okay, so
people may just, I'll put my liability hat on, I'll just say maybe start on the ground to try this
so you don't get your foot stuck and topple over like a, you know, an army figurine
onto your head. But yes, I can see how that would work. And anyone, anyone would say that I'm not a
professional workout coach. I can't, I can't wait for the Will Crouch YouTube instructional
fitness series. So I did take on this role in the early stages of the pandemic. The house I was in,
I would go outside every lunch at 1 p.m. and put on my best Scottish accent,
and I'd be like, right, you wee pricks, get on the floor and give me 20. Very effective.
Never made it to YouTube, though. Well, you know, it's never too late.
So a couple of things real quick.
The first is these exercises.
Did you do them every day in the morning?
Did you do them midday?
How many days a week?
At what time of day?
So I almost always work out just after lunch.
People always complain to me.
It's like, oh, you'll get a sore stomach or something.
I'm like, but I don't, never happens. But I deliberately time it because I have a real
energy dip just after lunch. And so doing something that's just not work makes a ton of sense.
Yeah. Yeah. Plus after sitting for a few hours, you can break up the two marathons of sitting. And I'll make one other recommendation for folks who may also
suffer from occasional or chronic low back tightness, which has been an issue for me also,
if I sit a lot and it ends up affecting my sleep most significantly and can cause that type of
anterior pelvic tilt and lordosis. So if your gut is sticking out and you look like
you're fat or pregnant, even though you are not, perhaps that means your pelvis is pouring forward.
So if you think about your pelvis as a goblet or a cup full of water, if you're pouring that
water out the front, you have anterior pelvic tilt. And one of the causes of that or contributing factors can be
a really tight iliopsoas or iliacus that then in some fashion connects to the lower back, the
lumbar. And so you get this incredible tightness slash pain. For me, it can cause tossing and
turning at night and really affect my sleep. And the device that was recommended to me a few times before I finally bit the bullet and got it was something called the Pso-Rite, P-S-O hyphen R-I-T-E.
It's the most expensive piece of plastic you'll ever buy, but worth it at something like $50 to
$70 for self-release of the psoas, which is incredibly difficult to achieve, I find, incredibly difficult
to achieve by yourself otherwise. And a lot of soft tissue therapists are not particularly good
at helping with it, nor is it practical really to necessarily have that type of work done every day,
even if you could. So the Pso-Rite is helpful. All right. So let's move from personal long-termism,
making sure that you're able to function and not be
decrepit when you're 45, into the broader sense and discussion of long-termism. What is
long-termism, and why did you write this book?
Well, long-termism is about three things. It's about taking seriously the sheer scale
of the future that might be ahead
of us and just how high the stakes are in anything that could shape that future. It's then about
trying to assess what are the events that might occur in our lifetimes that really would have
impacts, not just for the present generation, but that could potentially shape the entire course
of humanity's future. And then third, trying to figure out like, okay, how do we ensure that we can take actions to
put humanity onto the right path? And I think you're exactly right to talk about personal
long-termism and the analogy there, because in the book, in What We Owe the Future, I talk about
the analogy between the present world and humanity and an imprudent teenager, like a
reckless teenager, where what are the really high stakes decisions that a teenager makes?
It's like not what you do at a weekend. Instead, it's the decisions that would impact the entire
course of your life. So in the book, I tell a story where I was quite a reckless teenager.
I nearly killed myself climbing up a building. That was one of the biggest decisions,
dumbest decisions like I ever made. Because if I had died, then it would have been 60,
70 years of life that I would have lost. In the same way, if humanity dies now,
you know, if we cause our own extinction, or the unrecoverable collapse of civilization,
such as by a worst-case pandemic, then we're losing not just 70 years of life, it's thousands, millions, even billions of years of future civilization. And so similarly, if I made
decisions as a teenager that affected the kind of whole course of my life, like whether to become a
poet or a philosopher, or, you know, I could have become a doctor, and similarly, I think in the
coming century, in our lifetime, humanity potentially makes decisions about how is future society structured?
What are the values we live by? Is society a liberal democracy around the world? Or
is it a totalitarian state? And how do we handle technologies like AI that I think could impact
the very, very long run? So I want to read just a paragraph that you sent me, which I found thought-provoking because it's
a framing that I had not heard before. And here goes. Imagine the entire human story from the
first Homo sapiens of East Africa to our eventual end, represented as a single life.
Where in that life do we stand? We can't know for sure, but suppose humanity lasted only a
tenth as long as the typical mammalian species. Even then, more than 99% of this life would lie
ahead. On the scale of a typical life, humanity today would just be six months old. But we might
well survive even longer, for hundreds of millions of years, until the Earth is no longer habitable, or far beyond. In that case, humanity is experiencing its first blinking moments out of the womb. And what I find, at least among some of my listeners, is that there's a small percentage who are rushing headlong into battle with some
vision of long-termism and feel committed to fighting the good fight. And a non-trivial
percentage have decided it's too late. They've decided that the end is nigh. We are the frog
in the water, slowly heating that will be boiling before we know it.
And I find this at least, whether we put aside for the second, how people might find fault
with it or pick at it, a useful counter frame, right?
Just to even sit with for a few minutes.
Why do you think it's important to at least consider that something like this is plausible?
Maybe it's not 90% likely, but let's just say it's even 10%, 20% likely.
Well, it's so important just because future generations matter, future people matter.
And whatever you value, whether that's, you know, well-being or happiness,
or maybe it's accomplishment, maybe it's great works of art, maybe it's scientific discovery, almost all of whatever
you value would be in the future rather than now.
Because the future just could be vast indeed, where if you look at like what has been accomplished
since the dawn of humanity, well, the dawn of humanity was hundreds of thousands of years ago.
Agriculture was 12,000 years ago. The Industrial Revolution was 250 years ago. And yet,
even on the scale of a typical mammal species, we have 700,000 years to go.
Now, we're not a typical mammal species. We could last only a few centuries. We could last
10 years if we really do ourselves in, in the short term. But we could last much longer.
And that just means that all of what we might achieve, all of the good things that humanity
could produce, they're basically in the future. And that's really worth thinking about taking
seriously and trying to protect and promote. You know, one thing that you and I were chatting a bit about, I brought it
up before we started talking, is the question of if it is possible to make, let's just call it
altruism, or in this case, long-termism, investing in a future we will not necessarily, most likely, see ourselves. Can you make that self-interested? Or how do you position it
such that it appeals to the greatest number of people possible, since our collective
destiny depends on some critical mass of people taking it seriously, right? It probably isn't
one person. We're not going to get, say, 9 billion people. So how many do we hope to embrace this philosophy? And is it possible to position it as self-interested? And this is going to be a bit of a ramble, so I apologize in advance. It brings me back to what you said earlier about Dostoevsky treating nihilism almost as a proof and him ultimately landing on God. Like, yeah, you kind of need
something resembling God to sort of make sense of this sea of uncertainty so that you can
maybe stabilize oneself and feel a sense of meaning. It brought to mind something I read
very recently, and I apologize, this is, again, going to be a bit of meander, but this is something that
Russ Roberts included, so famous for Econ Talk podcast, in an article he wrote called My 12
Rules for Life. Now, he is, I'm not sure this is the best descriptor, but culturally and I would
think religiously Jewish. So he has that as a sort of latticework of sorts. But number two in his 12
rules for life was find something healthy to worship. And I'm just going to take a second
to read this. He quoted David Foster Wallace, and I'm going to tie this into what I just said in a
second, because here's something else that's weird, but true. In the day-to-day trenches of adult
life, there's actually no such thing as atheism. There's no such thing as not worshiping. Everybody worships. The only choice we get is what to worship. And the compelling
reason for maybe choosing some sort of God or spiritual type of thing to worship, be it JC or
Allah, be it YHWH, not sure what that is, or the Wiccan Mother Goddess or the Four Noble Truths or
some inviolable set of ethical principles is that pretty much everything else
you worship will eat you alive. If you worship money and things, if they are where you tap real
meaning in life, then you'll never have enough, never feel you have enough. It's the truth.
Worship your body and beauty and sexual allure, and you'll always feel ugly. And when time and
age start showing, you will die a million deaths before they finally plant you. On one level,
we all know this stuff already. It's been codified as myths, proverbs, cliches, epigrams, parables, the skeleton of every great
story. The whole trick is keeping the truth up front in daily consciousness. Okay, and then dot,
dot, dot. But does the most important thing need to be spun to envelop self-interest,
if it is basically something to worship that gives you purpose when there is so much uncertainty
and chaos and entropy around us?
Anyway, long TED Talk. Thank you for coming. But
what are your thoughts on any of that? And the overarching question is, how do we make
long-termism catch to have some critical mass of people who really embrace it?
I think there's a really important insight there. Actually, one made by John Stuart Mill in a speech to Parliament at the end of the 19th century. And he asked this question, like, what should we do for posterity? After all, what has posterity ever done for us? And then actually, he makes the argument like posterity has done a lot of things for us, because the projects we have only have meaning insofar as we think that they might contribute to this kind of relay race among
the generations. So here's a thought experiment. There's this film Children of Men, and in it,
people are just unable to reproduce. And so it's not that anyone dies, there's no
catastrophe that kills everybody, but there's no future of human civilization.
You know, how would that change your life? And I think for many, many people,
and many, many projects, it would just rob those projects of meaning. I certainly wouldn't be as
nearly as interested in intellectual pursuits, or like trying to do good things, and so on.
Maybe I would to some extent, but for a lot of things, it seems like, oh, they have meaning
because, take scientific inquiry: it is this semi-built
cathedral of knowledge that I've inherited from all of my ancestors, that has then been passed
to us, and it is incomplete. So we've got general relativity and we've got quantum theory, and
they're amazing, but we also know they're incomplete. And like maybe we can work
harder and see farther and build the cathedral a little higher. But if it's like, no, actually it'll just get torn up, it's kind of like,
oh, you're painting an artwork and you can add to the painting a bit, and it's going to just go
in the shredder the day afterwards, you're not going to be very motivated to do it. And so one
thing I think that a lot of people find motivating
is this thought that you're part of this grand project, much, much grander than yourself,
of trying to build a good and flourishing society over the course of not just centuries,
but thousands of years. And that's one way in which our lives have meaning.
What do you hope the effect will be
on people who read What We Owe the Future?
What are you hoping some of the things will be
that they take from that?
The number one thing is just a worldview
that's what my colleague Nick Bostrom calls
getting the big picture roughly right.
So there were just so many problems
that the world faces today.
So many things you could be focusing on and paying attention to. But there's this question just, well, what's most
important? What should be taking most of our attention? And the ideas in the book, I hope,
give a partial answer, which is, well, the things that are most important are those that really
shape the long-term future of the human project. And that really narrows things down, I think.
So that's the kind of broad worldview.
More specifically, though, I would like it to be something
that guides the decisions people make over the course of their lives.
So I think the biggest decision people make is what career they pursue. Do you go and become a management consultant or a financier and make money and live in the suburbs, or do you instead pursue a life that's really trying to make the world better? And if so, then what problems are you focusing on? Where it seems to me some of the biggest issues or events that will occur in a lifetime are the development of advanced artificial intelligence, in particular artificial intelligence that's as smart as humans or maybe considerably smarter. I think that has a good claim to being one of the most important technological discoveries of all time, once we get to that point. And that point: very good chance it's in the coming decades. A second is the
risk of very catastrophic pandemics, things that are far worse than COVID-19, which again, I think
are just on the horizon because of developments in our ability to create new viruses. And a third is
a third world war, which again, if you look at history and at leading scholars' models of war, I think there's a really pretty good chance we see a third world war in our lifetime, something like one in three. And I think that could quite plausibly bring just unparalleled destruction and misery on the world, in the limit just being the end of civilization, whether that's because of nuclear warheads
scaling up a hundredfold
and being used in an all-out nuclear war,
or because of the use of bioweapons.
So these are all things that smart people
who read this book could go and work on.
I'm aware that that, again, kind of sounds bleak,
but perhaps the final thing is like,
there is this positive vision in the book too, which is that if we avoid these threats, or manage these technological transitions well, we really can create a future that's truly amazing. And this is present throughout the book, but I did feel like I hadn't fully given it its due, so there's a little Easter egg in the book as well, right on the final page: a QR code that sketches a little vision of a positive future in short story form.
But maybe that's the final thing of all in terms of this worldview: appreciating there's so much at stake, that there are enormous risks or threats that we face that we need to manage.
But if we do, then we can create a world that is flourishing and vibrant and wonderful for
our grandkids, for their grandkids, for their grandkids.
What is value lock-in?
And could you give some historical examples?
So value lock-in is when a single ideology or value system or kind of set
of ideologies takes control of an area or in the limit the whole world and then persists for an
extremely long time. And this is one thing that I think can have very, very long lasting effects.
And we've already seen it throughout history. And so in What We Owe the Future, I give a story of ancient China. So during this period that's known as the Hundred Schools of Thought, the Zhou dynasty had
fallen, and there was a lot of kind of fragmentation, ideological fragmentation in China.
And wandering philosophers would go from state to state with a package of kind of philosophical ideas and moral views and political policy recommendations and try and convince political
elites of their ideas.
And there were four main schools.
There were the Confucians that we're kind of most familiar with, the legalists, which
are kind of like Machiavellian political realists.
Just, you know, how do you get power was the main focus of them.
Taoists, who are these kind of more, somewhat more spiritual, like acting in accordance with
the way, with nature, like advocating spontaneity, honesty. And then finally, the Mohists, which I read about and I'm like, wow, they were kind of similar to the effective altruists, except in ancient China, where they were about promoting good outcomes, and good outcomes impartially considered. They forwent much fancy spending on luxury or ritual. So their funeral rites were very modest. They wore very modest
clothes. And they were just really concerned about trying to make the world better. And so
they created a paramilitary group in order to defend cities that were under siege. The reasoning being that if defensive technology and defensive
strategy was so good, then no one could ever wage a war because no one could ever win.
And so there was this great diversity of thought. But what happened? One state within China,
the Qin, influenced by legalism, took over and tried to essentially make legalism state orthodoxy.
And the Emperor Qin declared himself a 10,000-year emperor, wanted this ideology to persist
indefinitely. It actually only lasted 14 years because there was a kind of counter-rebellion,
and that was the start of the Han dynasty, which then successfully did, basically, over the course of a while, quell other ideological competition, and instead implemented Confucianism as the official state ideology, and that persisted for 2,000 years. And that's kind of just one example among many.
Over and over again, you see what's the kind of ideology or belief set of a ruling power,
whether that's Catholics or Protestants, or the communism of the Khmer Rouge or of Stalin, or the national socialism of Hitler. Once that ideology gets into power, once people with that ideology get into power, they quickly try and stomp out the competition.
And the worry is that that could happen with the entire world. So again, I spoke of a risk of
third world war. Well, what might happen as a result?
One ideology could take power globally after winning such a war,
implement a world government, a world state,
or at least a dominant world ideology.
Then we're in this situation where there's much less ideological competition.
And at least one reason why we've gotten moral change and moral progress over time is in virtue of having a diversity of moral views that are able to fight it out,
and in ideal circumstances, the best argument wins.
We would no longer have that.
And so if there was a single dominant ideology in the world,
that could persist for an extremely long time, I think.
And if it was wrong, which it is quite likely to be, because I think most of our moral views are probably wrong, that would be very bad indeed.
Just to give you an idea, I mean, this is not exactly ideological, but you mentioned the Han dynasty. You know, one way to say Mandarin Chinese is Hanyu, which is the language of the Han people. And Hanyu Pinyin is the romanization system used, which most people have seen with the diacritical marks for tones for Mandarin Chinese.
So these things can last a very long time indeed.
Do you have any other examples of value lock-in? Could be past tense, historical examples, or attempts that are being made currently that you think are worth making mention of. Could be either.
I mean, historically, one particularly salient or striking example was when the Khmer Rouge took power in Cambodia. Pol Pot just very systematically, like anyone who disagreed with the party ideology generally would just be executed.
So 25% of the population were killed in Cambodia. And again, it's like
very transparent what's happening. He has this quote, purify the party, purify the army,
purify the cadres. So it's just very clear that what's going on is almost like a species or virus kind of taking over and other competitors get wiped out.
This one ideology takes over, competitors are wiped out.
Similarly, if we look at British history, with, at different times, Catholics and Protestants taking power, there was one act passed called the Act of Uniformity, which was the Protestants saying, like, okay, Catholicism is now banned in this country. And again, it's very boldly named. And in general, if you have a particular moral view, then you are going to want everyone else in the world to have that particular moral view as well.
So of AI, pathogens (let's just say we can include bioweapons in that), and World War III, how would you rank those for you personally in terms of concern over the next, let's call it, 10 years?
Over the next 10 years, I'd be most concerned about AI. Over the next 50 years, let's say my lifetime, I'd be most concerned about both developments in AI and war. The reason I say that is
wars are most likely when two countries are of very similar military power. And the historical rate of one major power in the world, like one of the big economies of the world, going to war with another when it gets overtaken economically or militarily is pretty high. Some ways of modeling put it at something like 50%.
But I think that's more likely to happen not kind of within the next 10 years,
though it's definitely possible
that there will be some kind of outbreak of war such as between the US and China,
even though the risk of war between the US and Russia is definitely higher than it has been in
the last, I guess, 30 years, potentially. I still think the odds are like quite low, thankfully.
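To make that base-rate reasoning concrete, here is a minimal back-of-envelope sketch in Python. The per-decade probabilities are illustrative assumptions of mine, not figures from the conversation; the point is only to show how a modest per-decade risk compounds toward the "something like one in three over a lifetime" ballpark mentioned earlier.

```python
# Back-of-envelope: how a per-decade chance of great-power war compounds over a
# lifetime. The inputs are illustrative assumptions, not estimates endorsed here.

def lifetime_risk(per_decade_prob: float, decades: int) -> float:
    """Probability of at least one event, assuming independence across decades."""
    return 1.0 - (1.0 - per_decade_prob) ** decades

if __name__ == "__main__":
    decades = 6  # roughly the remaining lifetime of a young adult
    for p in (0.03, 0.065, 0.10):  # assumed per-decade probabilities
        print(f"per-decade {p:.1%} -> lifetime {lifetime_risk(p, decades):.1%}")
    # A per-decade chance of ~6.5% compounds to roughly one in three over 60 years.
```

The simplifying assumption is independence across decades; correlated risks (say, one unresolved power transition) would change the numbers, but the compounding logic is the same.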
With AI, on the other hand, I think the chance of very rapid and surprisingly rapid developments in AI within the next 10 years are higher than any 10-year point after that.
So as in 2020s, it's more likely there'll be some truly transformative development than 2030s or 2040s, is my kind of view.
And that's for a couple of reasons.
One is that if you look at like how much computing power
different brains use,
and you compare that with how much computing power
the kind of current language models use
or the biggest kind of AI systems use,
the biggest AI systems use the computing power
of approximately the brain of a honeybee.
It's kind of hard to estimate exactly,
but that's kind of where we are,
which is a lot smaller than you might think.
It's much smaller than I thought.
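A very rough way to see what is being gestured at here is to put a large model and a couple of brains on the same scale. The sketch below is my own back-of-envelope illustration, not a calculation from the conversation or the book; the synapse and parameter counts are order-of-magnitude assumptions, and comparing different quantities (compute per second rather than parameter count, for instance) shifts the answer considerably.

```python
# Crude comparison of "brain scale" to model scale via synapse vs. parameter
# counts. Every number is a rough, illustrative assumption with wide error bars.

HONEYBEE_SYNAPSES = 1e9     # assumed: ~10^6 neurons, ~10^9 synapses
HUMAN_SYNAPSES = 1e14       # assumed: ~8.6e10 neurons, ~10^14 synapses
MODEL_PARAMETERS = 1.75e11  # a 2022-era GPT-3-scale language model

print(f"model vs. honeybee brain: ~{MODEL_PARAMETERS / HONEYBEE_SYNAPSES:,.0f}x")
print(f"human brain vs. model:    ~{HUMAN_SYNAPSES / MODEL_PARAMETERS:,.0f}x")
# On this crude measure the biggest models are still hundreds of times smaller
# than a human brain, which is the flavor of the claim above, though the exact
# ratio depends heavily on which quantities you choose to compare.
```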
And you might think like, okay,
it's the point in time where you've got AI systems
that are about as powerful as human brains.
That's like a really crucial moment in time
because that's, you know,
potentially the point in time at which AI systems just become more powerful than us, or at least approximately
then when we start to get overtaken. And that, again, it's like very uncertain, but there's a
decent, pretty good chance that happens in something like 10 years time. And now it's very
hard to do technological prediction. I am not making any confident predictions
about how things go down,
but it's at least something
we should be paying attention to,
just from a kind of outside perspective,
if you think, oh yeah,
and then we're at the point
where we're like training these AI systems
that are like doing as much computing as the brain is.
That's like, okay, well,
it means maybe they're going to be just
of a similar level of power and ability as human brains.
And then that's really big. And that's kind of big for a few reasons, I think.
One is because it could speed up rates of technological discovery.
So historically, we've had like fairly steady, technologically driven economic growth.
That's actually over a couple of hundred years.
But that's because of two things happening.
One is ideas get progressively harder to find, but we throw more and more researchers at them. So we have a bigger population, and we throw a larger percentage of the population at them. If instead we can just create engineers and research scientists that are AI systems, then we could rapidly increase the amount of R&D
that's happening. And what's more, perhaps they'd be
much, much better at doing research than we are. Human brains are definitely not designed for doing
science, but we could create machines that really are. And in the same way that, in Go, the best AI systems are far, far better than even the very best human players now, the same could happen within science.
And if you plug that into like pretty standard economic models, you get the conclusion that,
okay, suddenly things start really moving like really very fast. And you might get many centuries worth of technological progress happening over the course of a few years or a decade.
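For readers who want to see what "plug that into pretty standard economic models" can look like, here is a toy simulation of a semi-endogenous-growth-style idea production function in which AI multiplies the effective number of researchers. The functional form and every parameter are my own illustrative assumptions, chosen only to show the qualitative acceleration being described, not anything taken from the book.

```python
# Toy semi-endogenous growth sketch: ideas get harder to find (phi < 1), but AI
# multiplies the effective research workforce. Parameters are illustrative only.

def simulate(years: int, researchers: float, ai_multiplier: float,
             A0: float = 1.0, delta: float = 0.02, phi: float = -1.0) -> float:
    """Technology level A after `years`, with dA/dt = delta * (effective researchers) * A**phi."""
    A = A0
    for _ in range(years):
        effective = researchers * ai_multiplier
        A += delta * effective * A ** phi
    return A

baseline = simulate(years=50, researchers=1.0, ai_multiplier=1.0)
with_ai = simulate(years=50, researchers=1.0, ai_multiplier=100.0)
print(f"technology level after 50 years, baseline:            {baseline:.1f}")
print(f"technology level after 50 years, with AI researchers: {with_ai:.1f}")
# Multiplying the research workforce compresses what would otherwise take many
# centuries of progress under these assumptions into a few decades.
```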
And that could be terrific. In a sense, I think both the optimists and the doomsayers are correct, where that could be amazing. If it gets
handled very well, then it could be radical abundance for everyone. We could solve all the
other problems in the world. If it gets handled badly, well, the course of that tech development
could produce dangerous pathogens, or it could enable us to lose control to AI systems, or it could involve misuse by humans themselves. There's a lot of things going
on that could be extremely important from the long-term perspective. Well, let's go into it
more. I mean, I, as a simpleton, assume that pretty much any new technology is going to be
applied to porn and warfare first, and that those two would also sort of reciprocally
drive forward a lot of new technology.
I'm actually only 10% joking.
Well, do you know DALL-E 2?
I do, actually.
Yes, I'm going to be using it a bunch this week.
Oh, fantastic.
Well, for listeners who don't know, it's a fairly recent AI system.
And you can tell it to produce a certain image using text. So maybe that image is an astronaut riding a unicorn
in space in the style of Andy Warhol, and it will create a near-perfect rendition of that.
And you can really say a lot of things. You can say, oh, I want a hybrid of a dolphin and a horse riding on a unicycle.
And it will just create a picture of that.
It really does it in a way that makes it seem like it understands the words you're telling it.
And at the moment, it does faces.
Well, it can create like faces of imaginary people, almost picture perfect.
Again, if you pay close attention, you can see weird details.
When you say imaginary people, what do you mean?
As in non...
So if you type in a picture of Boris Johnson,
then it will not give you a picture of Boris Johnson.
And I don't know this for sure,
but my strong guess is that's because
it's been deliberately restrained
so that it does not do that.
So it doesn't deepfake everything?
Exactly, yeah.
Because you were mentioning porn.
With that technology, you could, well, fill in the blanks.
I'll let you think of your own text prompts that you could put in involving, you know, Joe Biden or Tim Ferriss or whoever you want.
Joe Biden and Tim Ferriss, uh-oh.
Yeah, exactly.
Boris Johnson too.
You're all in the frame.
Who knew you were such good friends?
Oh, God.
The horror.
The horror.
So to my knowledge, that's not been used for porn yet,
but I think the technology would make it completely possible.
And then is it going to be used for warfare?
Like, absolutely.
I mean, there'll be a point in time when we can automate weaponry.
So at the moment, part of the cost of going to war is that your people, part of your population will die. That's also a check on dictatorial leaders as well. You need to at least keep the
army on your side. Otherwise there'll be a military coup. Now imagine if there's a world where the army is entirely automated.
Well, dictators can be much more reassured because their army can be entirely loyal to them.
It's just coded in.
Also, the costs of going to war are much lower as well, because, yeah, you're no longer sustaining casualties on your own side.
And so that's just one way in which technological advances could be hugely disruptive via AI. And it's far from the biggest way.
Let's take just a short intermission from Skynet and World War III, just for a second.
We're going to come back to exploring some of those, but what are some actual long-termist projects
today that you are excited about? So one that I'm extremely excited about is investment in
and development of a technology called far UVC lighting. So far UVC is just a very specific
and quite narrow spectrum of light, and with sufficient intensity, just put into light bulbs, it seems like it just sterilizes a room. We're not confident in this yet, we need more research on its efficacy and safety, but if this was just installed in all lighting in every house around the world, basically in the same way that we do for fire regulation.
Every house, at least in a relatively well-off country,
has to meet certain standards for fire safety.
It could also have to meet certain standards
for disease safety,
like having light bulbs with UVC light as part of them.
Then we would make very substantial progress toward never having a pandemic again, as well as, as a bonus, eradicating all respiratory disease. And so this is some extremely exciting technology.
There's a foundation that I've been spending a lot of time helping to set up over the last six
months called the Future Fund. This is something that, yeah, we're donating to and investing in, because it just could make an absolutely transformative difference. So that's one.
Other things that are very concrete within the biotech space include early detection of new pathogens. So just constantly sampling wastewater, or constantly testing healthcare workers, and doing full-spectrum diagnostics of all the DNA in the sample, excluding human DNA: is there anything there that just looks like a pathogen that we don't understand? So that we can react to new pandemics very quickly.
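As a toy illustration of the "subtract the human and known background, flag what's left" logic behind that kind of metagenomic surveillance, here is a minimal Python sketch. The read sequences and reference set are made up for illustration; a real pipeline would align reads against full reference genomes and curated pathogen databases rather than matching toy k-mers.

```python
# Toy metagenomic triage: drop reads that look like known background (e.g. human),
# flag the remainder for follow-up. Sequences and references are illustrative only.

def kmers(seq: str, k: int = 8) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def looks_known(read: str, reference_kmers: set[str], threshold: float = 0.5) -> bool:
    """Call a read 'known' if enough of its k-mers appear in the reference set."""
    read_kmers = kmers(read)
    if not read_kmers:
        return True
    overlap = len(read_kmers & reference_kmers) / len(read_kmers)
    return overlap >= threshold

# Pretend reference: k-mers from "human plus already-characterized microbes".
reference = kmers("ACGTACGTGGCCAATTCCGGAACGTTAGCCGTAGCTAGCTTACG")

sample_reads = [
    "ACGTACGTGGCCAATTCCGG",  # matches the reference, likely background
    "TTTTGGGGCCCCAAAATTTT",  # matches nothing known: flag for follow-up
]

unexplained = [r for r in sample_reads if not looks_known(r, reference)]
print(f"{len(unexplained)} of {len(sample_reads)} reads unexplained:", unexplained)
```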
Also more boringly, just like better PPE, where you could just have, you know, you put on your
super PPE hood and you're now just completely protected from any sorts of pathogens. That
could enable society to continue even if there was an outbreak of a really bad pandemic. So that's very exciting within biotech. Within AI there's a lot of work on technical AI
safety where the idea is just using methods to ensure that AI systems do what we want them to do.
That means even if they're very powerful not trying to seek power and disempower the people who created them, not being deceptive, not causing harm.
And there are various things you can do there, including with these kind of, you know, not as sophisticated models that we're currently using, like tests to see if they are acting deceptively, what structures you can use to make them not act deceptively.
Can we have better interpretability so that we actually understand
what the hell is going on with these AI models?
Because at the moment, they're very non-transparent.
We really don't know how they get to a particular answer. It's just this huge computational process where we've trained it via learning over, in computer time, an extremely long time. So maybe it's tens of thousands or even millions of games of Go that it's played, and now it's very good at Go. But what's the reasoning that's going on? We don't really know. And then, yeah, we could keep going as well. There are many, many things within technical AI safety.
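One concrete flavor of the interpretability work mentioned here is linear probing: checking whether some human-meaningful property can be read off a model's internal activations. The sketch below is a generic toy version on synthetic data, offered as one assumed example of what such work can look like, not a description of any particular lab's method.

```python
# Toy "linear probe": can a simple classifier recover a concept from a model's
# hidden activations? The activations here are synthetic stand-ins, so this only
# illustrates the shape of the technique, not a real model's internals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, hidden_dim = 2000, 64

concept = rng.integers(0, 2, size=n_examples)            # e.g. "statement is about Go"
activations = rng.normal(size=(n_examples, hidden_dim))  # pretend hidden states
activations[:, 0] += 2.0 * concept                       # inject a linear signal

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
# Accuracy well above 50% suggests the concept is linearly represented in the
# activations; that is one small window into an otherwise opaque computation.
```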
And then there's the governance side of things, both for AI, for other technologies,
for reducing the risk of World War III.
Here, I kind of admit it gets tough.
It's like very hard to measure and be confident
that we're doing stuff that's actively good.
And we have to hope a little bit more
that just having smart, thoughtful, competent people
in positions of political influence, where they're able to understand the arguments on both sides and put policies and regulation in place such that we more carefully navigate these big technological advances, or such that we don't go to war or face some sort of race dynamic between different countries. That is also just extremely important, in my view.
Okay.
Let me take a pause to jump back to a number of the questions I have next to me.
When you feel overwhelmed or unfocused,
or if you feel you've lost your focus temporarily,
what do you do?
You know,
the questions you ask yourself,
activities?
Because I think it is easy,
maybe I'll just speak for myself,
to feel like there's so much
potentially coming down the pike
that is an existential threat.
It's easiest just to curl up into the fetal position
and just scroll through TikTok or Instagram and pretend like it
isn't coming. So I'm not saying that is where you end up, but when you feel overwhelmed or unfocused
or under-focused, what do you do? For me, it's most often driven by
my mood dropping. So I've had issues with depression since forever, basically, although now it's far, far better. I think I normally say I'm something like five to ten times happier than I was a decade ago. It's pretty good.
That is good. Happy about that.
And so I have a bit of, have you heard of the term trigger action plan?
Say that one more time. I'm not sure if it's the word or the Scottish accent.
I know. Trigger action plan.
Oh my God, I have Shrek on the podcast. This is amazing. All right, go ahead. Trigger action plan.
Go ahead. No, I don't know what that is.
The idea is there's a trigger, like some event that happens. And when that event happens, you immediately just put into place some action.
So a fire alarm goes off.
Then it's like everyone knows what to do.
There's the fire drill.
You follow the fire drill.
You like stand up, you walk outside, you leave your belongings.
And it's like, it's so that you don't have to think in complex situations.
I do that.
But for when I have like low mood, where the thing that has been very
bad in the past is when it's like, oh, I've got low mood, so I'm not being as effective and
productive. So I'm going to have to work even harder. And therefore I beat myself up and it
makes it even worse. So instead what I do is I'm just like, if I notice my mood is really slumping
and therefore it's harder to work, I just bump, like, fix my mood to the top of my
to-do list. It becomes the most important priority, where the crucial thing is not to let it spiral.
The number one thing I do is just I go to the gym or go for a run. Because there I'm like, look,
I want to do this a certain amount of time per week anyway. It's something I enjoy. I find it
like recreation. So worst case, I'm just moving time around. Similarly, I'll probably
meditate as well. Then at the same time, in terms of how I think, I have, again, certain kind of
cached thoughts that I've found very helpful. So one is just thinking about how, yep, this has happened before and it's not the end of the world. It's been okay. If I've gone through this before, I'll be able to get through it again, probably. Second is just thinking about no longer assessing the individual day that I'm having, but instead some larger chunk of time. It's easy to beat yourself up if you're like, look, I've had just the shittest day and I've done nothing. What a loser. Whereas if you're like, okay, well, how have I done over the last three years or 10 years or my whole life? And at least assuming you feel kind of okay about that, which I do, then that's very reassuring. It's like, okay, I've had a shit day, but if someone were to write a history of my last few years, they probably wouldn't talk about this day. They'd talk about the other things. And so it's kind of like, I've got a little bit in the bank there, so even if I just take the whole day off, in the grand scheme of things it doesn't really matter. That, combined with taking a bit of time away from whatever's making me plunge, and then exercise, which I just find has a mood boost as well but also gives time for these thoughts to really percolate and sink in, generally means that I can come back a couple of hours later and be pretty
refreshed. But the key thing is just, once this happens, you just do the thing and you stop thinking. It's like, look, this is what my plan is.
Can you say trigger action plan one more time in a heavy Scottish accent?
Trigger action plan. That's what you need, pal.
I'm going to put that right at the beginning of the podcast. Oh, so good. So good. Thank you.
Nae bother, pal.
You know, if this longtermism, effective altruism philosophy thing doesn't work out for you, I think you have a future in voice acting. So you always have that.
Effective altruism is about doing the most you can, not being a wee girl's blouse. I don't know why, but if you speak in a proper Scottish accent, suddenly you've got to be somewhat aggressive and insulting someone, otherwise it just doesn't quite work.
It doesn't quite work. You can't whisper in an aggressive Scottish accent. It's very hard.
Very, very challenging. I'm not even going
to try that. It would be embarrassing. But let's hop back into AI for a moment. So you hang out
with a lot of the smart, cool kids and very technical people who really understand this
stuff. When they talk about robots gone bad, or just the plausible scenarios that would be very bad, what are the two or three things that they would see as an event or a development that would
sort of be the equivalent of the trigger action plan, right? Where it's like, oh, this is like life before and life after.
What are the, say, two or three or one to three scenarios
that they've honed in on?
I think there are two, from my perspective,
two extremely worrying scenarios.
One is that AI systems get just much more powerful than human systems,
and they have goals that are misaligned with human goals. And they realize that
human beings are standing in the way of them achieving their goals, and so they take control. And perhaps
that means they just wipe everyone out. Perhaps they don't
even need to. So an analogy is often given between the rise of Homo sapiens from the perspective of
the chimpanzees, where Homo sapiens were just smarter. They were able to work together. They
just had these advantages. And that just means the chimpanzees just have very little say in how
things go over the long term. Basically no say.
It's not that we made them extinct, although in a sense they're kind of lucky; we made many, in fact I think most, large animals extinct due to the rise of Homo sapiens.
But that could happen with AI as well.
We could be to the AI systems what chimpanzees are to humans.
Or perhaps it's actually more extreme because once you've got AI systems
that are smarter than you
and they're building AI systems
that are smarter again,
maybe it's more like we're like ants
looking at humans
when we're looking at advanced AI systems.
Give me the second one
and then I'm going to come back
to the first one
with just a sci-fi thought experiment.
And then the second one is like,
okay, even assume that we do manage
to align AI systems
with human goals, so we can really get them to do whatever we want.
Nonetheless, this could be a very scary thing, where if you do think that AI systems could
lead to much faster rates of technological progress, in particular by automating technological
discovery, including the creation of better AI systems.
So we've got AI writing the code
that builds the next generation of AI,
that then writes even better code
to build the next generation of AI.
Things could happen very quickly.
Well, even if you manage to have AI systems
do exactly what you want them to do,
well, that could concentrate power
in a very small number of hands.
It could be a single country, could be a company, could be an individual within a single country who wants to install a dictatorship. And then once you've got that power, it's kind of similar to what happened during the Industrial Revolution and earlier. So Europe got more and more powerful technology over that period. And what did it do?
It used it to colonize and subjugate a very large fraction of the world. In the same way,
it could happen, but even faster, that a small group gets such power and uses it to
essentially take over the world. And then once it's in power, well, once you've got AI systems,
I think you're able to have indefinite social control
in a way that's very worrying.
And this is value lock-in again, where at the limit, imagine you're kind of the dictator
of a totalitarian state, like 1984, The Handmaid's Tale or something, and that's a global totalitarian
state.
And you really want your ideology to persist forever.
Well, you can pass that ideology on to this AI successor
that just says, yep, you rule the world now.
And the AI has no need to die.
It's like software.
It can replicate itself indefinitely.
So unlike dictators, which will die off eventually,
causing a certain amount of change to occur,
well, this is not true for the AI. It could replicate itself indefinitely, and it could be in every area of society. And so then, when you've got that, why would we expect moral change after that point? It's kind of hard to see. So in general, I think there can be these states where you get into a particular state of the world and you just kind of can't get out of it again, and this kind of Orwellian, perpetual totalitarianism is actually one of the things I really worry about.
Okay, so again, this is a happy book. So within the context of our discussion of the happy book, you talked about, I think it was the Mohists, I can't remember the term you used, but you mentioned they were similar to effective altruists, and they formed a paramilitary group. When are you forming the effective altruism paramilitary group, counter-AI insurgency squad? Is that in the works?
Well, there is an analogy between... we haven't yet got our own army. Probably that won't happen. I think things are going pretty weird if they have, and I might need to intervene at that point. But there is an analogy, where they built very powerful and created very good defensive technology. So you've got trebuchets, very powerful for attacking.
You say trebuchets? Trebuchets.
Oh, trebuchet. Yeah, it's like a catapult.
What is that? It's got a sling on it, is that what it is?
It's like a catapult with a sling. So you get the, yeah, it's like an atlatl, but for throwing much bigger things.
Anyway, the physics involved here, I think, the same.
Yeah, yep, for sure. But, you know, also walls are defensive technology. If you had just really good walls, really good defenses...
You said wolves for a second. I was like, wow, I did not see that coming. We're going to have to resurrect the wolf if we want to have any hope of defensive wolf technology. All right, walls.
Yeah. Well, they are training eagles to attack drones.
So it's not so insane.
Yeah.
All right, walls.
Or wolves to attack the robot overlords.
I don't back the wolves, I've got to say.
But we can think in the same terms of,
look, there's certain technology
that has a kind of offensive advantage, like the ability to design new pathogens, and there's certain technology that has a defensive advantage, like this far UVC radiation. And so one of the things we're doing is really trying to develop and speed up defensive technology. And similarly, when you look at AI, there are some things that are just pure capabilities, it's just AI getting more and more powerful, and then there are some things that are helpful in making sure that AI is safe, like understanding what's under the hood of these models. That just means that, okay, we know what's going on a bit better, we can use it better, we can predict what sort of behavior it'll have.
Let's talk about defensive capabilities. I'll just give another example of an asymmetric offense-defense situation, which would be drone warfare. So the cost to create
weaponized, potentially lethal drones and swarms and so on is much lower than the cost to defend
against them, generally speaking, right?
I mean, certainly that becomes true if you start to combine targeted bioweapons with drones. Things
get really expensive at best to defend against. But let's talk about my sci-fi scenario.
Sure.
So when you use the analogies of humans and chimps, or the analogy of the industrial revolution and the technological gains in Western Europe predominantly, which then allowed the physical, and that's the word I'll underscore, sort of subjugation and colonization and dominance of a significant percentage of the world's population.
I suppose there's part of me that on a simplistic level feels like,
even though this would not be easy to do,
because it would be sort of like a homicide-suicide for a lot of folks,
the more interdependent we become, but it's like,
all right, if AI is constrained to a physical infrastructure
that is dependent upon power,
would not part of the defensive
planning or preemptive planning go into trying to restrict AI to something that can be unplugged,
to put it really simply? But how are people playing out this hypothetical scenario, right?
So would the AI, presumably, if it's as smart or smarter than we are, foresee this and then develop sort of solar-powered extensions of itself so it can do A, B, and C.
I mean, how are people—I'm sure this is part of the conversation, I've just never had it—so what are smarter people exploring with respect to this type of stuff?
I think actually your nose is pointed in a good direction on this one. And it's this sort of thing that makes me, among my peers, on the more optimistic end of thinking that advanced AI would not kill everybody.
Where, yeah, you could have like air-gapped computers, so they can't access the internet now. They don't have other ways of kind of controlling the world apart from kind of text output.
And they've been trained to act as kind of oracles.
So you just ask them a question and they give you, ideally, a very justified, true answer.
And perhaps you have many of these as well.
You've got like a dozen of them and they don't know that the others exist.
And then you just start asking them for help.
So you're like, okay, we're going to start building these incredibly powerful AI systems that are much, much smarter than we are. What should we do? What should our plan be? And so that's a pathway where you're using AI to help solve the problems that we will face with even more powerful AI.
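To make the "oracles in boxes" setup concrete, here is a minimal sketch of the control pattern being described: several isolated models are asked the same question, and a human only acts where their independent answers agree. The `query_isolated_oracle` function is a hypothetical stand-in I've introduced for illustration; nothing here is an actual proposal from the conversation.

```python
# Sketch of the "several isolated oracles" pattern: ask independently sandboxed,
# text-only models the same question and only surface answers they agree on.
from collections import Counter
from typing import Optional

def query_isolated_oracle(oracle_id: int, question: str) -> str:
    # Placeholder: in reality each oracle would be a separate, air-gapped model
    # with no network access and no knowledge of the other oracles.
    canned = {0: "pause and evaluate", 1: "pause and evaluate", 2: "deploy immediately"}
    return canned[oracle_id]

def ask_committee(question: str, n_oracles: int = 3, quorum: float = 2 / 3) -> Optional[str]:
    """Return an answer only if a quorum of independent oracles agrees; else defer to humans."""
    answers = [query_isolated_oracle(i, question) for i in range(n_oracles)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / n_oracles >= quorum else None

if __name__ == "__main__":
    decision = ask_committee("How should we proceed with the next training run?")
    print(decision or "no consensus -- escalate to human review")
```

The design point is simply that the oracles' only output channel is text, and disagreement defaults to human review rather than action.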
And what's the response? I mean, some people would say, oh, well, if the AI systems are really that powerful, they're far better than humans, they will then trick you, and they will be able to do that just by outputting text and telling you, like, oh, do this thing or do this thing, and that will be this long deceptive play. And I just think that's unlikely. That just seems pretty speculative to me. I don't have strong reasons to think that we would have that. The current AI systems we have are more like, you know, they just output text. It's not like they're an agent that's trying to do things in the world. You put in a text input, it gives you a text output, at least for language models. And potentially we can scale that up to the point where they're these kind of sages in boxes. I think that's a significant pathway by which we make even more powerful AI systems that are kind of agentic, are kind of acting, you know, have a model of the world and are trying to do things in the world, and that we make them safe. But that's exactly a good example of, yeah, again, kind of differential technological progress, where an AI system that's just this oracle in a box, separated from the rest of the world, that seems very good from a
defensive perspective.
Whereas an AI system that's been trained on like war games and then it's just like released
into the open seems like potentially very bad.
Oh, yes, indeed.
Robots gone wild. Are you going to create a new subreddit?
I don't know if I'll create a subreddit. I mean, I'm tempted to start digging spider holes in my backyard and learning how to, you know, hunt with bow and arrow. But, you know, all things in due time.
what are the most important actions people can take now? What are some actions for
people who are like, I want to be an agent of change. I want to feel some locus of, if not
control, at least free will. I don't want to just lay on my back and wait for the cyborg raptors to descend upon me or to become a house pet for some
dictator in a foreign land who overtakes the US as a superpower, whatever it is, right? I want to actually do something. What are some of the most important or maybe impactful actions that people can take?
So I think there's kind of two pathways for such a person. One, you might be
motivated to help, but you don't really want to rejig your whole life. Ideally, perhaps you just
don't really want to have to think about this again, but you do want to be doing good. Then I
think the option of donations is just particularly good. So you could take the Giving What We Can
pledge, you make a 10% donation every year of your income, and then you can give to somewhere
like the Long-Term Future Fund that's part of EA Funds, and it'll get redistributed to what some
kind of domain experts think are the highest impact things to do in this space.
That's the kind of like baseline response or something. And I think it's important to
emphasize you can do an enormous amount of good there. You know, there's a lot of ways we could
spend money to do good things in the world, including from this long-term perspective. The second is if you're
like, no, actually, I want to be more actively a kind of agent of change. Then I think the first
thing to do is to learn more. You know, I've tried to pack as much as I can into a book, but
I think there's a lot to engage with. I mean, the book is What We Owe the Future.
You know, it's talking about some big philosophical ideas.
It's also just covering a lot of broad ground of different disciplines, different issues.
Like, you know, we talked about AI and biorisk and World War III.
There's plenty of other issues I didn't talk about.
We haven't talked about nukes.
We haven't talked about technological stagnation, which I think is particularly important as well.
We also haven't talked even just about promoting better values as well, kind of, or more
broad ways of making the long-term future better. So all of these things are things that I think we
can learn. Therefore I'd encourage reading The Precipice by Toby Ord, which I mentioned in my
recommended books. Also 80000hours.org as well has enormous amounts of content. Openphilanthropy.org also has just a lot of really interesting content.
They're a foundation, but they've done some really deep research into some of these topics,
such as this issue, which we didn't get to touch on, of when we should expect human-level intelligence to arise, with some arguments that we should really put a lot of probability
mass, like maybe more than 50% on it coming in the next few decades.
And then following that, I think the most important single decision is how can you either use or leverage your career or switch career in order to work on some of these most important issues.
And again, we've really tried to make this as easy as possible by providing like endless online advice and also like one-on-one coaching such that people can
get advice and then the final thing would be getting involved with the effective altruism
community where this stuff is hard it can be intimidating one of the big things that just
is a fact when we start thinking about these more kind of civilizational scale issues compared to
the kind of original seed of ea which is funding these very well-evidenced programs
that demonstrably improve health,
is like, it can be very overwhelming
and it can be hard to know exactly how to fit in.
But we now have a community of thousands
or tens of thousands of people
who are working together
and really keen to help each other.
And there are many conferences,
like EA Global conferences
at places like London, DC, and San Francisco, or independently organized conferences, EA Global Xs, in many places; there'll be one in India, for example, in early January, as well as hundreds of local groups around the world where people get together and can often provide support and help each other to figure out, okay, what is the most impactful thing you can do. So yeah, that would be my kind of laundry list of advice.
And with respect to, say, ditching the news, or at least going on a lower-information diet with the most manufactured urgency that we get flooded with, and instead spending time
looking at big picture trends, or trying to get that big picture roughly right,
as you put it, both from a historical perspective and a current perspective,
would you still recommend, for podcasts, In Our Time, hosted by Melvyn Bragg,
who I believe discusses history, philosophy, science with leading academics, and the 80,000 Hours podcast. Would those be two you would still recommend?
Yeah, I would still strongly recommend them. There's also another podcast, Hear This Idea, by Fin Moorhouse and Luca Righetti. I also particularly like Rationally Speaking by Julia Galef as well. It's very good.
And then in terms of websites, if you want like big picture,
like beyond the websites I've already said,
to just have like the best big picture understanding of the world,
I don't know of a single better source than Our World in Data,
which is just, I mean, it was very influential during the COVID pandemic, but it has,
you know, if you want to learn about nuclear war or long run economic growth or world population,
its articles present both data and the best understanding of the data in just this timeless, evergreen way, with exceptional rigor and exceptional depth.
It's just amazing.
I use it very heavily to like orient myself for the book.
So, Will MacAskill, people can find you on Twitter at @willmacaskill, M-A-C-A-S-K-I-L-L. On the web, williammacaskill.com.
The new book is What We Owe the Future.
I recommend people check it out.
Is there anything else you would like to add? Any requests to the audience? Anything you'd like to point people to?
Any complaints or grievances with this podcast process that you would like to air publicly?
Anything at all that you'd like to add before we wrap this conversation up?
The main thing to say is just, as we've said over and over again, I think we face truly enormous challenges in our life. Many of these challenges are very scary. They can be overwhelming. They
can be intimidating. But I really believe that each of us individually can make an enormous
difference to these problems. We really can significantly help as part of a wider community
to put humanity onto a better path. And if we do, then the future really could be long
and absolutely flourishing.
And your great great grandkids will thank you.
Well, thank you very much.
Well, I always enjoy our conversations
and I appreciate the time.
Ditto, thanks so much, Tim.
Absolutely.
And to everybody listening,
I will link to all the resources
and the books and websites and so on
in the show notes as per usual at Tim.blog slash podcast. And until next time, be just a little bit
kinder than necessary. And thanks for tuning in. Hey guys, this is Tim again. Just one more thing
before you take off. And that is five bullet Friday. Would you enjoy getting a short email
from me every Friday that provides a little fun
before the weekend?
Between one and a half and two million people subscribe to my free newsletter, my super
short newsletter called Five Bullet Friday.
Easy to sign up, easy to cancel.
It is basically a half page that I send out every Friday to share the coolest things I've
found or discovered or have started exploring over that week.
It's kind of like my diary of cool things. Strange, esoteric things end up in my field, and then I test them, and then I share them with you.
So if that sounds fun, again, it's very short, a little tiny bite of goodness before you head off
for the weekend, something to think about. If you'd like to try it out, just go to tim.blog
slash Friday, type that into your browser, tim.blog slash Friday, drop in your email and
you'll get the very next one. Thanks for listening. This episode is brought to you by Theragun. I have two Theraguns and they're worth their weight
in gold. I've been using them every single day. Whether you're an elite athlete or just a regular
person trying to get through your day, muscle pain and muscle tension are real things. That's
why I use the Theragun. I use it at night. I use it after workouts. It is a handheld percussive therapy device that releases your deepest muscle tension.
So for instance, at night, I might use it on the bottom of my feet.
It's helped with my plantar fasciitis.
I will have my girlfriend use it up and down the middle of my back and I'll use it on her.
It's an easy way for us to actually trade massages in effect.
And you can think of it,
in fact, as massage reinvented on some level. Helps with performance, helps with recovery,
helps with just getting your back to feel better before bed after you've been sitting for way too
many hours. I love this thing. And the all new Gen 4 Theragun has a proprietary brushless motor
that is surprisingly quiet. It's easy to use and about
as quiet as an electric toothbrush. It's pretty astonishing. You really have to feel the Theragun's
signature power, amplitude, and effectiveness to believe it. It's one of my favorite gadgets in
my house at this point. So I encourage you to check it out. Try Theragun. That's Thera,
T-H-E-R-A-G-U-N. There's no substitute for the Gen 4 Theragun with an OLED
screen. That's O-L-E-D for those wondering. That's organic light emitting diode screen,
personalized Theragun app, an incredible combination of quiet and power. And the Gen 4
Theraguns start at just $199. I said I have two. I have the Prime and I also have the Pro,
which is like the super Cadillac
version. My girlfriend loves the soft attachments on that. So try Theragun for 30 days starting at
only $199. Go to therabody.com slash Tim right now and get your Gen 4 Theragun today. One more time,
that's therabody.com slash Tim, T-H-E-R-A-B-O-D-Y.com slash Tim. This episode is brought to you by
Vuori Clothing, spelled V-U-O-R-I, Vuori. I've been wearing Vuori, at least one item per day,
for the last few months, and you can use it for everything. It's performance apparel,
but it can be used for working out.
It can be used for going out to dinner, at least in my case.
I feel very comfortable with it.
Super comfortable, super stylish.
And I just want to read something that one of my employees said.
She is an athlete.
She is quite technical, although she would never say that.
I asked her if she had ever used or heard of Biori,
and this was her response. I do love their stuff. I've been using them for about a year. I think I found them at REI. First for my partner, t-shirts that are super soft but somehow last as he's hard
on stuff. And then I got into the super soft cotton yoga pants and jogger sweatpants. I live
in them and they too have lasted. They're stylish enough I can wear them out and about. The material
is just super soft and durable.
I just got their clementine running shorts for summer and love them.
The brand seems pretty popular, constantly sold out.
In closing, and I'm abbreviating here, but in closing, with the exception of when I need technical outdoor gear,
they're the only brand I've bought in the last year or so for yoga, running, loungewear that lasts and that I think look good also.
I like the discreet logo. So that
gives you some idea. That was not intended for the sponsor read. That was just her response via text.
Vuori, again spelled V-U-O-R-I, is designed for maximum comfort and versatility. You can wear it
running. You can wear their stuff training, doing yoga, lounging, weekend errands, or in my case, again, going out to dinner.
It really doesn't matter what you're doing.
Their clothing is so comfortable and it looks so good.
And it's non-offensive.
You don't have a huge brand logo on your face.
You'll just want to be in them all the time.
And my girlfriend and I have been wearing them for the last few months.
Their men's Kore short, K-O-R-E, the most comfortable lined athletic short, is your one short for every sport. I've been using it for kettlebell swings, for
runs, you name it. The Banks short, this is their go-to land-to-sea short, is the ultimate in
versatility. It's made from recycled plastic bottles. And what I'm wearing right now, which, if I had to pick one to recommend to folks out there, or at least men
out there, is the Ponto Performance Pant. And you'll find these at the link I'm going to give
you guys. You can check out what I'm talking about. But I'm wearing them right now. They're
thin performance sweatpants, but that doesn't do them justice. So you got to check it out. P-O-N-T-O,
Ponto Performance Pant. For you ladies, the women's performance jogger is the softest jogger
you'll
ever own. Vuori isn't just an investment in your clothing, it's an investment in your happiness.
And for you, my dear listeners, they're offering 20% off your first purchase. So get yourself some
of the most comfortable and versatile clothing on the planet. It's super popular. A lot of my
friends, I have now noticed, are wearing this. And so am I.
VuoriClothing.com forward slash Tim.
That's V-U-O-R-I clothing.com slash Tim.
Not only will you receive 20% off your first purchase, but you'll also enjoy free shipping on any U.S. orders over $75 and free returns.
So check it out.
VuoriClothing.com slash Tim.
That's V-U-O-R-I
Clothing.com slash Tim and discover the versatility of Vuori clothing.