Dwarkesh Podcast - Fin Moorhouse - Longtermism, Space, & Entrepreneurship
Episode Date: July 27, 2022

Fin Moorhouse is a Research Scholar and assistant to Toby Ord at Oxford University's Future of Humanity Institute. He co-hosts the Hear This Idea podcast, which showcases new thinking in philosophy, the social sciences, and effective altruism.

We discuss for-profit entrepreneurship for altruism, space governance, morality in the multiverse, podcasting, the long reflection, and the Effective Ideas & EA criticism blog prize.

Watch on YouTube. Listen on Spotify, Apple Podcasts, etc.
Episode website + Transcript here.
Follow Fin on Twitter. Follow me on Twitter.
Subscribe to find out about future episodes!

Timestamps
(0:00:10) - Introduction
(0:02:45) - EA Prizes & Criticism
(0:09:47) - Longtermism
(0:12:52) - Improving Mental Models
(0:20:50) - EA & Profit vs Nonprofit Entrepreneurship
(0:30:46) - Backtesting EA
(0:35:54) - EA Billionaires
(0:38:32) - EA Decisions & Many Worlds Interpretation
(0:50:46) - EA Talent Search
(0:52:38) - EA & Encouraging Youth
(0:59:17) - Long Reflection
(1:03:56) - Long Term Coordination
(1:21:06) - On Podcasting
(1:23:40) - Audiobooks Imitating Conversation
(1:27:04) - Underappreciated Podcasting Skills
(1:38:08) - Space Governance
(1:42:09) - Space Safety & 1st Principles
(1:46:44) - Von Neumann Probes
(1:50:12) - Space Race & First Strike
(1:51:45) - Space Colonization & AI
(1:56:36) - Building a Startup
(1:59:08) - What is EA Underrating?
(2:10:07) - EA Career Steps
(2:15:16) - Closing Remarks

Please share if you enjoyed this episode! Helps out a ton! Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
Today I have the pleasure of interviewing Fin Moorhouse, who is a research scholar at Oxford University's Future of Humanity Institute, and he's also an assistant to Toby Ord, and also the host of the Hear This Idea podcast. Fin, I know you've got a ton of other projects under
your belt. So do you want to talk about all the different things you're working on and how you got
into EA and this kind of research? I think you nailed the broad strokes there. I think,
yeah, I've kind of failed to specialize in a particular thing.
And so I found myself just dabbling in projects that seem interesting to me,
trying to help get some projects off the ground and just doing research on, you know,
things which seem maybe underrated.
I probably won't bore you with the list of things.
And then, yeah, how do I get into EA?
Actually, also a fairly boring story, unfortunately.
I really loved philosophy.
I really loved kind of pestering people by asking them all these questions,
you know, why are you not, why are you still eating meat?
I'd read, you know, Peter Singer and Will MacAskill.
And I realized I just wasn't actually living these things out myself.
I think there's some just like force of consistency that pushed me into really getting involved.
And I think the second piece was just the people.
I was lucky enough to have this student group where I went to university.
And I think there's some dynamic of realizing that this isn't just a kind of free floating set of ideas,
but there's also just like a community of people I like really get on with and have
all these like incredibly interesting kind of personalities and interests.
So those two things, I think.
Yeah, and then so what was a process like?
I know a lot of people who are vaguely interested in EA,
but not a lot of them then very quickly transition to, you know,
working on research with top EA researchers.
So, yeah, just walk me through how you ended up where you are.
Yeah, I think I got lucky with the timing of the pandemic,
which is not something I suppose many people can say.
I did my degree.
I was quite unsure about what I wanted to do.
There was some option of taking some kind of close-to-default path of maybe something like, you know, consulting or whatever. And then I was kind of, I guess,
forced into this natural break where I had time to step back. And I, you know, I guess I was
lucky enough that I could afford to kind of spend a few months just like figuring out what I
wanted to do with my life. And that space was enough to like maybe start like reading more about
these ideas also to try kind of teaching myself skills I hadn't really tried yet. So try to, you know,
learn to code for a lot of this time and so on. And then I just thought, well, I might as well,
wing it. There were some things I applied to. I didn't really rate my chances, but the cost of applying to these things is so low, it just seemed worth it. And then, yeah, I guess I got very lucky, and here I am. Awesome. Okay. So let's talk about one of these things you're working on,
which is that you've set up and are going to be helping judge these prizes for EA writing. One is: you're giving out five prizes of $100,000 each for blogs that discuss effective altruism-related ideas. Another is five prizes of $20,000 each to criticize EA ideas.
So talk more about these prizes.
Why is now an important time
to be talking about and criticizing EA?
That is a good question.
I want to say I'm reluctant to frame this as like me personally.
I certainly have helped set up these initiatives.
So I heard on the inside that actually you've been pulling
a lot of the weight on these projects.
Certainly, yeah.
I found myself with the time to kind of get these things over the line, which, yeah, I'm pretty happy with.
So yeah, the criticism thing, let's start with that.
I want to say something like: in general, being receptive to criticism is just, like, obviously really important. And if as a movement you want to succeed — where succeed means not just achieve things in the world, but also end up having beliefs as close to correct as you can get — then having this kind of property of being, like, anti-fragile with respect to being wrong, like really celebrating and endorsing changing your mind in a kind of loud and public way —
That just seems really important.
And so, I know, this is just like a kind of prima facie, obvious case of wanting to incentivise
criticism.
But you might also ask, like, why now?
There's a few things going on there.
One is, I think, the effective altruism movement overall has reached a place where it's actually beginning to do, like, a lot of really incredible things. There's a lot of funders now kind of excited to find fairly ambitious, scalable projects.
And so it seems like there's kind of an inflection point — you want to get the criticism out the door, and you want to respond to it earlier rather than later, because you want to set the path in the right direction rather than adjust course, which is more expensive later on. Will MacAskill made this point a few months ago.
You can also point to this dynamic
in some other social movements
where the kind of really exciting beliefs
that kind of have this, like, period of plasticity in the early days — they kind of ossify, and you end up with this set of beliefs it's kind of trendy or socially rewarded to hold. In some sense, you feel like you need to hold certain beliefs in order to kind of get credit from, you know, certain people.
And the costs to like publicly questioning some practices or beliefs become too high.
And that is just like a failure mode.
And it seems like one of the more salient failure modes for a movement like this.
So it just seems really important to be quite proactive about celebrating this dynamic where you notice you're doing something wrong and then you change track, and maybe that means shutting something down. Right — you set up a project, the project seems really exciting, you get some feedback back from the world, the feedback looks more negative than you expected, and so you stop doing the project. And in some important sense that is a success: you did the correct thing, and it's important to celebrate that. So I think these are some of the things that go through my head — just, like, framing criticism in this kind of positive way, yeah, seems pretty important.
Right, right. I mean, analogously, it's said that losses are as important as profits in terms of motivating economic incentives, and it seems very similar here.
In a Slack, we were talking, and you mentioned that maybe one of the reasons it's important now is: if a prize of $20,000 can help somebody — help us figure out how to, or not me, I don't have the money, but, like, help SBF figure out how to better allocate, like, $10 million — that's a steal.
It's really impressive that effective altruism is a movement that is willing to fund criticism
of itself.
I don't know.
Is there any other example of a movement in history?
where that's been so interested in criticizing itself and becoming anti-fragile in this way?
I guess one thing I want to say is like the proof is in the pudding here.
Like, it's one thing to kind of make noises to the effect that you're interested in being criticized — and I'm sure lots of movements make those noises. Another thing to really follow through on them.
And you know, EA is a fairly young movement.
So I guess time will tell whether it really does that well.
I'm very hopeful.
I also want to say that this like particular prize is like, you know, one kind of part of a much, a much bigger thing hopefully.
That's a great question.
I actually don't know if I have good.
answers, but that's not to say that there are none, I'm sure there are. Like, political liberalism
as a strand of thought, like in political philosophy comes to mind. It's maybe an example. One of the
random thing I want to point out or mention, you mentioned profits and just like doing the math
and what's the like EV of like investing in just red teaming an idea, like shooting an idea down.
I think thinking about the difference between the for profit and non-profit space is quite an
interesting analogy here. You have this very obvious feedback mechanism in for profit land,
which is you have an idea,
no matter how excited you are about the idea,
you can very quickly learn whether the world is as excited,
which is to say you can just fail.
And that's like a tight, useful feedback loop
to figure out whether what you're doing is worth doing.
Those feedback loops don't, by default, exist
if you don't expect to get anything back
when you're doing these projects.
And so that's, like, a reason to want to implement those things artificially. Like, one way you can do this is with charity evaluators, which in some sense impose a kind of market-like mechanism where, like, now you have an incentive to actually be achieving the thing that you are ostensibly setting out to achieve, because there's this third party that's kind of assessing whether you're getting it. But I think that framing — I mean, we can try saying more about it,
but that's like a really useful framing, I think, to me anyway. And, yeah, but one other reason
this seems important to me is, if you have a movement that's about, like, 10 years old, like this one — you know, we have strands of ideas that are thousands of years old that have had significant improvements made to them that were missing before. So just on that alone, there's reason to expect some mistakes, either at a sort of theoretical level or in the applications. I do have a strong prior that there are such mistakes that could be identified in a reasonable amount of time.
Yeah.
I guess one framing that I like as well is not just thinking about here's a set of claims we have,
we want to like figure out what's wrong, but some really good criticism can look like,
look, you just missed this distinction, which is like a really important distinction to make, or you missed this, like, addition to this kind of naive conceptual framework you're using.
And it's really important to make that addition.
A lot of people are, like, skeptical about progress in kind of non-empirical fields.
So like philosophy, for instance.
It's like, oh, we've been thinking about these questions for thousands of years,
but we're still kind of unsure.
And I think that misses like a really important kind of progress,
which is something you might call like conceptual engineering or something,
which is finding these like really useful distinctions
and then like building structures on top of them.
And so it's not like you're making claims,
which are necessarily true or false,
but there are other kinds of useful criticism,
which include just like getting your kind of models like more useful.
Speaking of just making progress on questions like these,
one thing that's really surprising to me,
and maybe this is just like my ignorance of,
the philosophical history here.
It's super surprising to me that the movement, like long-termism, at least in its modern
form, it took thousands of years of philosophy before somebody had the idea that, oh, like,
the future could be really big, therefore the future matters a lot.
And so maybe you could say, like, oh, you know, there's been lots of movements in history
that have emphasized, I mean, existential risk maybe wasn't a prominent thing to think about
before nuclear weapons, but that have emphasized that civilizational collapse is a very
prominent factor that might be very bad for many centuries, so we should try to make sure
society's stable or something. But do you have some sense of, you have a philosophy background?
So do you have some sense, what is the philosophical background here, and to the extent that these
are relatively new ideas? How did it take so long? Yeah, that's like such a good question, I think.
One name that comes to mind straight away is this historian called Tom Moynihan, who, so he wrote
this book about something like the history of how people think about existential risk.
And then more recently he's been doing work on the question you ask, which is like,
what took people so long to reach this, like what now seems like a fairly natural thought.
I think part of what's going on here is it's really hard or easy, I should say, to underrate
just how much, I guess it's somewhat related to what I mentioned in the last question,
just how much kind of conceptual apparatus we have going on that's like a bit like the water we swim in now
and so it's hard to notice.
So one example that comes to mind is thinking about probability as this thing we can talk formally about.
This is like a shockingly new thought.
Also the idea that human history might end and furthermore that that might be within our control,
that is to decide or to prevent that happening prematurely.
These are all like really surprisingly new thoughts.
I think it just like requires a lot of imagination and effort to put yourself into the shoes of people.
living earlier on who didn't have the kind of yeah like I said the kind of tools for thinking
that make these ideas pop out much more naturally. And of course, as soon as those tools are in place, then the conclusions fall out pretty quickly. But it's not easy, and, I agree, I appreciate that actually wasn't a very good answer, just because it's a hard question.
Yeah, so you know what's interesting is that more recently — maybe I'm unaware of the full context of the argument here — but I think I've heard Holden Karnofsky write somewhere
that he thinks that there's more value in thinking about the issues that EA has already identified
rather than identifying some sort of unknown risk that, for example, what AI might have been
like 10, 20 years ago, AI alignment, I mean, given this historical experience that you can
have some very fundamental tools for thinking about the world missing and consequently miss
some very important moral implications — does that imply anything about the space that AI alignment occupies in terms of our priorities? Should we expect something as big or bigger coming up, or just generally new tools of thinking — like, you know, expected-value thinking, for example?
Yeah, that's a good question. I think one thing I want to say there is, it seems pretty likely that the most important, kind of useful concepts for finding important things are also going to be the lowest-hanging fruit. And, I don't know, I think it's very roughly correct that we did in fact, over the course of building out conceptual frameworks, pick out the most important ideas first, and now we're kind of refining things and adding maybe somewhat more peripheral things. That's at least if that trend is roughly going to hold, and that's a reason for not expecting to find some kind of earth-shattering new concept from left field — although I think that's a very weak and vague argument, to be honest. Also, I guess it depends on what you think your time span is. Like, if your time span is the entire span of time that humans have been thinking about things, then maybe you would think that actually it's kind of strange it took like 3,000 years — maybe even longer, I guess it depends on when you define the start point. It took, you know, 3,000 years for people to realize, hey, we should think in terms of probabilities and in terms of expected impact. So in that sense, maybe it's like, I don't know, it took like 3,000 years of thinking to get to this very basic idea — what seems to us like a very important and basic idea.
I feel like maybe I have, I want to say two things.
If you imagined lining up like every person who ever lived, just like in a row,
and then you kind of like
walked along that line
and saw how much progress people have made
across the line.
So you're going across people rather than across time.
And I think like progress in how people think about stuff
looks a lot more like linear
and in fact started earlier
than maybe you might think
by just looking at like progress over time.
And if it was faster early on,
then if you're kind of following the very long run trend
then maybe you should expect like
not to find these kind of, again, totally left-field ideas soon.
But I think a second thing, which is maybe more important,
is like, I also buy this idea that in some sense, progress about thinking,
in thinking about what's, like, most important, is really kind of boundless.
Like David Deutsch talks about this kind of thought a lot.
When you come up with new ideas, that just generates new problems, new questions, which call for some more ideas.
That's very well and good.
I think there's some sense in which, you know, one priority now could just be framed as giving us time to, like, make that progress.
And even if you thought that like we have this kind of boundless capacity to come up with
a bunch of new important ideas, it's pretty obvious that that's like a prerequisite.
And therefore in some sense it's like a robust argument for thinking that like,
trying not to kind of throw humanity off course, and preventing or mitigating some of these catastrophic risks, is always just going to shake out as, like, a pretty important thing to do —
maybe one of the most important things.
Yeah, I think that's reasonable.
But then there's a question of, like, even if you think that existential risk is the most important thing, to what extent have you discovered all the risks? Again, that's, like, the x-risk argument.
But by the way, earlier what you said about,
you know, trying to extrapolate
what we might know from the limits of physical laws,
you know, that can kind of constrain
what we think might be possible. I think that's an interesting idea, but I wonder, like, partly, like, one argument is
just like, we don't even know how to define those physical constraints. And, like, before you
had the theory of computation, it wouldn't even make sense to say, like, oh, like, this much
matter can sustain, like, so much, so much flops, floating point operations per second. And then
second is, like, yeah, if you know that number, it still doesn't tell you, like, what you could do
with it. You know, I think an interesting thing that Holden Karnofsky talks about is, he has this article called "This Can't Go On", where he makes the argument that, listen, if you just have compounding economic growth, at some point you'll get to the point where, you know, you'll have many, many, many times Earth's economy per atom in the affectable universe.
And so it's hard to see like how you could keep having economic growth beyond that point.
But that itself seems like, I don't know, if that's true, then there has to be like a physical law that's like the maximum GDP per atom is this, right?
Like if there's no such constant, then you can like, you should be able to surpass it.
I guess it still leaves a lot open — like, even if you could know such a number, you don't know how interesting it would be or what kinds of things could be done at that point.
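(A rough sketch of the arithmetic behind that compounding-growth point, assuming a steady 2% annual growth rate and roughly 10^70 atoms as a round figure for the galaxy; both numbers are illustrative assumptions rather than anything stated in the conversation.)

import math

# How long can the economy compound at 2% per year before it implies more than
# one present-day world economy's worth of output per atom in the galaxy?
growth_rate = 0.02        # assumed steady annual growth
atoms_in_galaxy = 1e70    # assumed round figure

# Solve (1 + growth_rate) ** years >= atoms_in_galaxy for years.
years = math.log(atoms_in_galaxy) / math.log(1 + growth_rate)
print(f"~{years:,.0f} years of 2% growth implies ~{atoms_in_galaxy:.0e}x today's economy")
# Roughly 8,100 years -- more than one of today's world economies per atom,
# which is the sense in which growth at this rate can't continue indefinitely.

(Whether the right atom count is 10^67 or 10^73 barely moves that timescale, which is what gives the argument its force.)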
Yeah, I guess the first point is, you know, even if you think that preventing these kinds of very large-scale risks that might curtail human potential — even if you think that's just incredibly important — you might miss some of those risks because you're just unable to articulate them or really conceptualize them.
I feel like I just want to say at some point,
we have a pretty good understanding of kind of roughly what looks most important.
Like for instance, if you kind of, I don't know, get stranded on a camping trip
and you're like, we need to just survive long enough to make it out.
And it's like, okay, what do we look out for?
I don't really know what the wildlife is here because I haven't been here before,
but probably it's going to look a bit like this.
I can at least imagine, you know, the risk of dying of thirst,
even though I've never died of thirst before.
And then it's like, what if we haven't even begun to think of the others? And it's like, yeah, maybe, but there's just some, like, you know, practical reason for focusing on the things which are most salient. And, like, definitely spend some time thinking about the things we haven't thought up yet.
But it's not like that list is just, like, completely endless.
And there's a kind of, I guess, a reason for that.
And then you said the second thing, which I don't actually know if I have, like, a ton of interesting things to say about — or maybe you could try, like, kind of zooming in on what you're interested in there.
Come to think of it, I don't think the second thing has big implications for this argument. But yeah, we have like 20 other topics that are just as interesting that we can move on to. But, I don't know, just as a closing note, the analogy is very interesting to me. The camping trip: you're trying to do what needs to be done to survive.
I don't know, okay, so to extend the analogy, it might be like, I don't know, somebody discovers: oh, that berry that we're all about to eat — because we feel like that's the only way to get sustenance here, while we're, you know, just almost starving — don't eat that berry, because that berry is poisonous.
And then maybe somebody could point out,
okay, so given that fact that we've discovered
one poisonous food in this environment,
should we expect there to be other poisonous foods
that we don't know about?
I don't know.
I don't know if there's anything more to say on that topic.
I mean, one thing, well, like, one, I guess,
kind of angle you can put on this is,
you can ask this question, like,
we have precedent for a lot of things.
Like, we know now that detonating nuclear weapons does not ignite the atmosphere, which was a worry that some people had.
So we at least have some kind of bounds on how bad certain things can be.
And so if you ask this question, like, what is worth worrying about most in terms of what kinds of risks might reach this level of potentially posing an existential risk?
Well, it's going to be the kinds of things we haven't done yet, that we haven't, like, got some experience with. And so you can ask this question, like, what things are there in
the space of, like, kind of big, seeming, but totally novel, precedent-free changes or events?
And it actually does seem like you can kind of try generating that list and getting at answers.
This is why maybe, or at least one reason why AI sticks out, because it's like, fulfills this
criteria of being pretty potentially big and transformative. And also the kind of thing we don't have
experience with yet. But again, it's not as if that list is, in some sense, endless — there are only so many things we can do in the space of decades.
Right. Okay, yeah, so moving on to another topic: we were talking about for-profit entrepreneurship as a potentially impactful thing you can do — sorry, maybe not in this conversation, but we talked about it separately at one point.
Yeah, yeah.
So, okay, so to clarify, this is not just for-profit in order to do earning to give, so you become a billionaire and you give your wealth away.
To what extent can you identify opportunities where you can just build a profitable company
that solves an important problem area or makes people's lives better? One example of this
is Wave. It's a company, for example, that helps with transferring money and banking services in Africa, and it probably has boosted people's well-being in all kinds of different ways.
So to what extent can we expect just a bunch of for-profit opportunities for making people's lives better?
Yeah, that's a great question.
And there is really a sense in which some of the more, like, innovative, big for-profit companies
just are like doing an incredibly useful thing for the world.
They're like providing a service that wouldn't otherwise exist.
And people are obviously using it, because they are successful for-profit companies. Yeah, so I guess the question is something like: you know, you're stepping back, you're asking, how can I have a ton of impact with what I do — and the question is, would I be underrating just starting a company? So I feel like I want to throw out a bunch of kind of disconnected observations.
We'll see if they like tie together. There is a reason why you might in general expect a non-profit
route to do well. And this is like obviously very naive and simple, but where there is,
is a for-profit opportunity, you should just expect people to kind of take it. Like, this is why we don't
see $20 bills lying on the sidewalk. But the natural incentives for, in some sense, taking
opportunities to, like, help people where there isn't a profit opportunity, they're going to be weaker.
And so if you're thinking about the, like, difference you make compared to whether you do something
or whether you don't do it, in general, you might expect that to be bigger where you're doing something
nonprofit. Like, in particular, this is where there isn't a market for a good thing.
So it might be because the things you're helping, like, aren't humans.
It might be because they, like, live in the future so they can't pay for something.
It could also be because maybe you want to help get a really impactful technology off the ground. In those cases, you get a kind of free-rider dynamic, I think, where — when you can't protect the IP and patent something — there's less reason to be the first mover. And so this is like, maybe it's not for-profit, but starting, or helping kind of get, a technology off the ground,
which could eventually be a space
for a bunch of for-profit companies to make a lot of money,
that seems really exciting.
Also, creating markets where there aren't markets
seems really exciting. So, for instance, setting up,
like, AMCs, advanced market commitments,
or prizes, or just giving, yeah,
creating incentives where there aren't any.
So you get the, like, efficiency and competition
kind of gains that you get from the for-profit space.
That seems great.
But that's not really answering your question,
because the question is, like,
what about actual for-profit companies?
I don't know what I have to say here, like in terms of whether they're being underrated.
Yeah, actually, I'm just curious what you think.
Okay, so I think I have like four different reactions to what you said.
I'll remember the number four, just in case I'm at three and I'm like, I think I had another thing to say. Okay, so yeah, so I had drafted an essay about this that I didn't end up publishing, but it led to a lot of interesting discussions between us. So that's why we might have — I don't know, in case the audience feels like they're interrupting a conversation that already preceded the beginning here.
So one is that to what extent should we expect this market to be efficient?
So one thing you can think is, listen, the number of potential startup ideas is so vast and the number of great founders is so small that you can have a situation where only the most profitable ideas get taken — yeah, it's right that somebody like Elon Musk will come along and pluck up all the, maybe, $100 billion ideas.
But if you have a company like Wave — I'm sure they're doing really well, but, you know, it's not obvious how it becomes the next Google or something. And I guess more importantly, it can require a lot of context. For example, you talked about neglected groups. I guess this doesn't solve for animals and future people. But if you have something in global poverty, where the neglected group is, for example, people living in Africa, right? The people who could be building companies don't necessarily have experience with the problems that these neglected groups have. So it's very likely, or I guess it's possible, that you could come upon an idea if you were specifically looking at how to help, for example, you know, people suffering from poverty in the poor parts of the world. You could identify a problem that people who are programmers in Silicon Valley just wouldn't know about.
Okay, so a bunch of other ideas regarding the other things you said.
One is, okay, maybe a lot of progress depends on fundamental new technologies and companies
coming at the point where the technology is already available and somebody needs to really
implement and put all these ideas together.
Yeah, two things on that.
One is, like, we don't need to go down a rabbit hole on this. One is the argument that actually the innovation itself — not the invention, the innovation — is a very important aspect, and potentially a bottleneck aspect, of getting an invention off the ground and scaled.
Another is if you can build a $100 billion company or a trillion dollar company,
or maybe not even just like a billion dollar company,
you have the resources to actually invest in R&D.
I mean, think of a company like Google, right?
Like, how many billions of dollars have they basically poured down the drain on, like, harebrained schemes? You can have your reactions to DeepMind with regards to AI alignment. But, I mean, just other kinds of research things they've done seem to be really interesting and really useful. And, yeah, all the other FAANG companies have a program like this, like Microsoft Research, or I don't know what Amazon's thing is.
And then another thing you can point out is with regards to setting up a market that would make other kinds of ideas possible and other kinds of business as possible.
In some sense, you could make the argument that maybe some of the biggest companies, that's exactly what they've done, right?
if you think of like Uber, it's not a market for companies.
Or maybe Amazon is a much better example here where, you know, like theoretically,
you had an incentive before like if a pandemic happens, I'm going to manufacture a lot of masks,
right?
But Amazon provides, makes the market so much more liquid so that you can just start manufacturing
masks and now immediately put them up on Amazon.
So it seems in these ways, actually maybe starting a company is really is an effective way
to deal with those kinds of problems.
Yeah, man, we've gone so async here.
I should have just like said one thing and then,
sorry for throwing lots of things at you.
There's a lot there.
As far as I can remember,
those are all great points.
Yeah,
I think my like high level thought is,
I'm not sure how much we disagree,
but I guess one thing I want to say is,
again,
thinking about like in general,
what you expect the real biggest opportunities
to typically be for like just having a kind of impact.
You know,
one thing you might think of is,
if you can optimize for two things separately,
that is optimize for the first thing
and then use that to optimize for the second thing,
versus trying to optimize for some, like,
combination of the two at the same time,
you might expect to do better if you do the first thing.
So, for instance, you can do a thing
which looks a bit like trying to do good in the world
and also, like, make a lot of money, like social enterprise.
And often that goes very well,
but you can also do a thing, which is try to make a lot of money
and just make a useful product
that is not directly aimed
at, you know, improving humanity's prospects or anything,
but it's just kind of just great.
And then use the success of that first thing
to then just think squarely,
like how do I just do the most good
without worrying about whether there's some kind of profit mechanism.
I think often that strategy is going to pan out well.
There's this thought about the tails coming apart — if you've had this thought that at the extremes of, like, either kind of scalability in terms of opportunity to make a lot of profit, or at the extreme of doing a huge amount of good, you might expect there to be not such a strong correlation. Again, one reason in particular you might think that is because you might think the future really matters, like humanity's future, and — sorry to be like a stuck record — there's not really a natural market there, because these people haven't been born yet. That is a rambly way of saying that, okay, that's not always going to be true, but I basically just agree. I would want to resist a framing of doing good which just leaves out also building some successful for-profit company. There are just a ton of really excellent examples of where that's just been a huge success and, yeah, should be celebrated. So, yeah, I don't think I disagree with the spirit. Maybe we disagree somewhat on how much we should relatively emphasize these different things, but it doesn't seem like a very deep disagreement.
Yeah, yeah.
Maybe I've been spending too much time with Bryan Caplan or something. So by the way, the tails coming apart, I think, is a very interesting way to think about this. Scott Alexander has a good article on this. And one thing he points out is, yeah, generally you expect different types of strength to correlate, but the guy who has the strongest grip strength in the world is probably not the guy who has the biggest squat in the world, right?
Yeah.
Okay.
So I think that's an interesting place to leave that idea.
Oh, yeah, another thing I wanted to talk to you about was back testing EA.
So you have these basic ideas — we want to look at problems that are important, neglected, and tractable — and you apply them throughout history, so like a thousand years back, 2,000 years back, 100 years back. Is there a context in which applying these ideas would maybe lead to a perverse or unexpected outcome?
And are there examples — I mean, there are many examples in history where you could have easily made things much better — but are there cases where these ideas would have made them much better than even conventional morality or present-day ideas would have?
So, the first part of the question, which as I understand it is something like: if some kind of effective altruism-like movement existed, or if these ideas were in the water significantly earlier, might they have misfired sometimes, or might they have succeeded? In fact, how do we think about that at all? I guess one thing I want to say is that
very often the correct decision, ex ante, is a decision which might do really well in like
some possible outcomes, but you might still expect to fail, right? The kind of mainline outcome
is this doesn't really pan out, but it's a moon shot. And if it goes well, it goes really well.
This is, I guess, similar to certain kinds of investing where if that's the case, then you should
expect, even if you follow the exact correct strategy, you should expect to look back on the decisions you made and see a bunch of failures, where failure is, you know, you just have very little impact. And I think it's important to resist the temptation to really negatively update on whether that was the correct strategy because it didn't pan out. And so, I don't know, if something like EA-type thinking was in the water and was thought through very well — yep, I think it would go wrong a bunch of times, and that shouldn't be terrible news.
When I say go wrong, I mean like not pan out rather than do harm.
If it did harm, okay, that's like a different thing.
I think one thing this points to, by the way, is you could choose to take a strategy which looks something like minimax regret.
Right.
So you have a bunch of options. You can ask about the kind of roughly worst-case outcome, or just the kind of, you know, default "meh" outcome, on each option. And one strategy is just: choose the option with the least bad kind of meh case. And if you take this strategy, you should expect to look back on the decisions you make and not see as many failures — that's one point in favor of it. Another strategy is just: do the best thing in expectation — if I made these decisions constantly, what in the long run just ends up making the world best? And this looks a lot like just taking the highest-EV option. Maybe you don't want to run the risk of causing harm, so, you know, that's okay to include. And, you know, I happen to think that that kind of second strategy is very often going to be a lot better, and it's really important not to be misled by this feature of the minimax regret strategy where you look back and kind of feel a bit better about yourself in many cases — if that makes sense.
Yeah, that's super interesting. I mean, if you think about back testing in terms of, like, you know, models for the stock market — to analogize this — one thing that can happen is that a strategy of just trying to maximize returns from a given trade results very quickly in you going bankrupt, because sooner or later there will be a trade where you lose all your money. And so then there's something called the Kelly criterion, where you reserve a big portion of your money and you only bet with a certain part of it, which sounds more similar to the minimax regret thing here.
Unless your expected value includes a possibility that, I mean, in this context that like,
you know, like losing all your money is like an existential risk, right?
So maybe you, like, bake into the cake, in the definition of expected value, the odds of losing all your money.
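(A toy numerical version of the two decision rules being contrasted here — the "least-bad meh case" rule, which is close to maximin, versus picking the highest expected value. The options, payoffs, and probabilities are invented purely for illustration.)

# Two options, two ways the world could go; all numbers are made up.
options = {
    "modest_project": {"goes_badly": 1.0, "goes_well": 2.0},
    "moonshot":       {"goes_badly": 0.0, "goes_well": 100.0},
}
probs = {"goes_badly": 0.9, "goes_well": 0.1}

# Rule 1: pick the option whose worst ("meh") case is least bad.
by_worst_case = max(options, key=lambda name: min(options[name].values()))
# Rule 2: pick the option with the highest expected value.
by_expected_value = max(options, key=lambda name: sum(probs[s] * options[name][s] for s in probs))

print(by_worst_case)       # modest_project -- you rarely look back on a "failure"
print(by_expected_value)   # moonshot -- usually fails, but best in expectation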
Yeah, yeah, yeah, yeah.
That's a great, that's a really great point.
Like, I guess in some cases you want to take something which looks a bit more like the Kelly bet. But if you're acting at the margins — like, with relatively small amounts compared to the kind of pot of resources you have — then I think it often makes sense to just do the best thing, and not worry too much about what the size of the Kelly bet is. But yeah, that's a great point. And, like, I guess a naive version of doing this is just kind of losing your bankroll very quickly because you've taken on enormous bets and forgotten that it might not pan out.
Yeah, so I appreciate that.
Oh, what did you mean by acting at the margins?
So if you think that there's a kind of
pool of resources from which you're drawing,
which is something like maybe philanthropic funding
for the kind of work that you're interested in doing,
and you're only a relatively marginal actor,
then that's unlike being like an individual investor
where you're more sensitive to the risk
of just running out of money
and when you're more like an individual investor,
then you want to pay attention to what the size of the Kelly bet is.
If you're acting at margins,
then maybe that is less of like a big consideration,
although it is obviously still a very important point.
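(For reference, a minimal sketch of the Kelly sizing being referred to, for a simple repeated binary bet; the win probability and odds are made-up illustrative numbers.)

# Kelly criterion for a binary bet: f* = p - q / b, where p is the win
# probability, q = 1 - p, and b is the net odds (profit per unit staked on a win).
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of the bankroll to stake on a favourable binary bet."""
    q = 1.0 - p
    return p - q / b

print(round(kelly_fraction(p=0.60, b=1.0), 2))  # 0.2 -> stake 20% of the bankroll

# Betting the whole bankroll maximizes single-bet expected value but eventually
# goes bust; the point above is that a small actor drawing on a much larger pool
# of philanthropic funding is less bankroll-constrained, so straight expected
# value matters relatively more than the Kelly-sized fraction.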
Well, and then, by the way,
I don't know if you saw my recent blog post
about why I think there will be more EA billionaires.
Yes.
Okay, yeah, yeah.
I don't know what your reaction to any of the ideas there is.
But my claim is that we should expect
the total funds dedicated to EA to grow quite a lot.
Yeah, I think
I really liked it by the way.
I think it was great.
One thing I made me think of
is that there's quite an important difference
between trying to maximize returns
for yourself and then
trying to get the most returns just like
for the world, which is to say just doing the most good.
Where one
consideration we've just talked about
which is a risk of just like
losing your bankroll, which is where like Kelly betting becomes relevant.
Another consideration is that as an individual, just like trying to do the best for
yourself, you have like pretty steeply diminishing returns from money or just like how well
your life goes without extra money, right? So like if you have like 10 million in the bank and
you make another 10 million, does your life get twice as good? Obviously not, right? And as such,
you should be kind of risk averse when you're thinking about the possibility of, like, making a load of money. If, on the other hand, you just care about, like, making the world go well,
then the world is an extremely big place.
And so you basically don't run into these diminishing returns like at all.
And for that reason, like if you're making money, at least in part to in some sense give it away
or otherwise just like have a positive effect in some impartial sense,
then you're going to be less risk averse, which means
maybe you fail more often,
but it also means that people who succeed
succeed really hard.
So I don't know if that's, in some sense,
I'm just recycling what you said,
but I think it's like a really kind of neat observation.
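(A toy version of that asymmetry, with all numbers invented: under a concave — here logarithmic — personal utility the safe option wins, while under the roughly linear impartial value of money given away, the risky bet wins.)

import math

wealth = 10e6                          # current wealth: $10M
risky = [(0.10, 200e6), (0.90, 0.0)]   # venture: 10% chance of gaining $200M, else nothing
safe_gain = 10e6                       # alternative: a certain extra $10M

def expected(value, lottery):
    return sum(p * value(gain) for p, gain in lottery)

# Personal log utility: $210M is nowhere near 21x as good as $10M, so be risk averse.
personal = lambda gain: math.log(wealth + gain)
print(expected(personal, risky) < personal(safe_gain))    # True: take the safe $10M

# Impartial value: money given away is roughly linear at these scales, so take EV.
impartial = lambda gain: gain
print(expected(impartial, risky) > impartial(safe_gain))  # True: take the $20M-EV bet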
Well, and another interesting thing is that
not only is that true,
but then you're also,
you're also in a movement where everybody else has a similar idea.
And not only is that true,
but also the movement is full of people
who are young, techie, smart,
and as you said,
risk neutral. So basically people who are going to be way overrepresented in the ranks of
future billionaires. And they're all hanging out and they have this idea that, you know, we can
become rich together and then make the world better by doing so. You would expect that this would be
exactly the kind of situation that would lead to people teaming up and starting billion dollar
companies. All right. Yeah, so a bunch of other topics in effective altruism that I wanted to ask you
about. So one is: should it impact our decisions in any way if the many-worlds interpretation of quantum mechanics is true? I know the argument that, oh, you can just translate amplitudes to probabilities, and if it's just probabilities, then decision theory doesn't change. My problem with this is, I've gotten, like, very lucky in the last few months. Now, I think it, like, changes my perception of that if I realize that actually most me's — and okay, I know there's problems with saying me's, to what extent they're fungible — in most branches of the multiverse, I'm, like, significantly worse off.
That makes it worse than, oh, I just got lucky.
But, like, now I'm here.
And another thing is, if you think of existential risk, and you do think that even if existential risk is very likely, in some branch of the multiverse humanity survives — I don't know, that seems better in the end than, oh, the probability was really low, but, like, it just resolved to: we didn't survive.
Does that make sense?
Okay.
all right there's a lot there
I guess rather than doing a terrible job
at trying to explain what this many worlds thing is about
maybe it's worth just kind of pointing people
towards you know just googling it
I should also add this enormous caveat
that I don't really know what I'm talking about. This is just kind of an outsider who thinks this stuff seems interesting.
yeah okay so there's this question of like what
if the many worlds
view is true
what if anything could that mean
with respect to questions about like what should we do
what's important.
And one thing I want to say is,
just like without zooming into anything,
it just seems like a huge deal.
Like every second,
every day,
I'm in some sense,
like,
just kind of dissolving into this,
like,
cloud of me's and like,
just kind of unimaginably large number of me's.
And that each of those me's is kind of in some sense
dissolving into more cloud.
Um,
this is just like wild.
Also seems somewhat likely,
um,
to be true as,
like,
far as I can tell.
Okay, so like, what does this mean?
You, yeah, you point out that you can talk about having a measure over worlds.
In some sense, there's actually a problem of how you get, like, probabilities
or how you make sense of probabilities on the many worlds view,
and there's a kind of neat way of doing that, which, like, makes use of questions
about how you should make decisions.
That is, you should just kind of weigh future yous according to, in some sense, how likely they are. But it's really the reverse: you're, like, explicating what it means for them to be more likely in terms of how it's rational to weigh them.
And then I think it's like a ton of very vague things I can try saying.
So maybe I'll just try doing like a brain dump of things.
You might think that, like, many worlds being true could push you towards being more risk neutral in certain cases, if you weren't before, because in certain cases you're translating from some chance of this thing happening or not into some fraction of worlds where this thing does happen and another fraction where it doesn't. That said, I don't think it's worth reading too much into that, because I think a lot of the, like, important uncertainties about the world are still subjective uncertainties about how most worlds will in fact turn out. But it's kind of interesting and notable that you kind of convert from overall uncertainty about how things turn out to more certainty about the fraction of ways things turn out. I think another interesting feature of this is that the question of how you should act is no longer the question of how you should kind of benefit this person who is you in the future, who is one person. It's more like, how do you benefit this, like, cloud of people who are all successors of you, that's just kind of diffusing into the future? And I think you point out that you could just basically salvage a lot of — basically all of — decision theory, even if that's true. But the, like, picture of what's going on changes. And in particular, I think just intuitively, it feels to me like the gap between acting in a self-interested way and then acting in an impartial way, like helping other people.
It kind of closes a little in a way.
Like,
you're already benefiting many people
by doing the thing that's kind of rational
to benefit you,
which isn't so far from benefiting people
who aren't like continuous with you
in this special way.
So I kind of like that as a thing.
It's interesting.
Yeah.
And then, okay, there is also this like slightly more out there thought,
which is,
here's the thing you could say: if many worlds is true, then there is at least a sense in which there are very, very many more people in the future
compared to the past, like just unimaginably many more.
And even like the next second from now, there are many more people.
So you might think that should like make us have a really steep negative discount rate
on the future, which is to say we should like value future times much more than present times.
And like in a way which is kind of, it wouldn't like modify how we should act.
It just, like, explodes how we should think about this. This definitely doesn't seem right. Maybe one way to think about this is that if this thought was true, or kind of directionally true, then that might also be a reason for being extremely surprised that we're both speaking at, like, an earlier time rather than a later time. Because if you think you're just randomly drawn from all the people who ever lived, it's absolutely mind-blowing that we get drawn from, like, today rather than tomorrow, given there would be, like, 10-to-the-something many more people tomorrow. So it's probably wrong, and wrong for reasons I don't have a very good handle on, because I just don't know what I'm talking about. I mean, I can kind of try parroting the reasons, but it's something I'm, you know, interested in trying to really grok a bit more.
That's really interesting. I hadn't thought about that selection argument. I think one resolution I've heard about this is that you can think of the proportion of, you know, Hilbert space — or the proportion of the universe's wave function — as the probability, rather than each different branch.
You know what I just realized?
The selection argument you made,
maybe that's an argument against Bostrom's idea of we're living in a simulation
because basically his argument is that there would be many more simulations
than there are real copies of you,
therefore you're probably in a simulation.
The thing about saying that, among all the simulations plus you, your prior should be equally distributed among them, seems similar to saying your prior of being distributed along each possible branch of the wave function should be the same. Whereas I think in the context of the wave function you were arguing that maybe you shouldn't think about it that way — you should think about, like, maybe a proportion of the total Hilbert space. Yeah, does that make sense? I don't know if I put that well.
Wait, say it again how it links into simulation type stuff.
Instead of thinking about each possible simulation as an individual thing that is equally as likely — each individual instance of a simulation being equally as likely as you living in the real world — maybe being in a simulation as a whole is equally likely as you living in the real world.
Just as you being alive today rather than tomorrow is equally likely,
despite the fact that there will be many more branches, new branches of the wave function tomorrow.
Okay, there's a lot going on. I feel like there are people who actually know what they're talking about here just tearing their hair out, like, you've missed this obvious thing.
So you mentioned —
That's the nature of an open podcast conversation. That's the point.
But by the way,
if you are one such person,
please do, like, email me or DM me or something.
I'm very interested.
So yeah,
you mentioned like it is,
obviously there is a measure over,
over Wells and this like lets you talk about things being sensible again.
Also, maybe like one minor thing to comment on
is talking about probability.
is kind of hard because in or many worlds just everything happens that can happen.
And so it's like difficult to get the language exactly right.
But anyway, so totally get the point.
And then the question of how it maps on to simulation type thoughts.
Here's a, I don't know, like maybe a thought which kind of connects to this.
Do you know like sleeping beauty type problems?
No, no.
Okay, certainly a vaguely remembered example.
But let's start it.
So in the original sleeping beauty problem,
you go to sleep,
okay,
and then I flip a coin,
or, you know,
whoever,
someone flips a coin.
If it comes up tails,
they wake you up once.
If it comes up heads,
they wake you up once,
and then they,
you go back to sleep and,
you know,
your memory is wiped,
and then you're woken up again, as if you're being woken up in the other world. And, okay, so you go to sleep, you wake up, and you ask: what is the chance that the coin came up heads or tails? And it feels like there's kind of really intuitive reasons for both 50% and one third. Here's a related question which is maybe a bit simpler, at least in my head. I flip a coin. If it comes up heads, I just make a world with one observer in it, and if it comes up tails, I make a world with 100 observers in it — maybe it could be like running civilizations with 100 people. You wake up in one of these worlds. You don't know how many other people are there in the world. You just know that someone has flipped a coin and decided to make a world with either one or 100 people in it. What is the chance that you're in the world with 100 people? And there's a reason for thinking it's a half, and there's a reason for thinking that it's, like, I don't know, 100 over 101. Does that make sense?
So I understand the logic behind the half. What is the reason for thinking — I mean, regardless of where you ended up as the observer, it seems like if the odds of the coin coming up... oh, I guess, is it because you'd expect there to be more observers in the other universe? Like, wait, yeah, so what is the logic for thinking it might be 100 over 101?
Well, you might think of it like this.
How should I reason about where I am?
Well, maybe it's something like this.
I'm just a random observer, right?
Of all the possible observers that could have come out of this.
And there are 101 possible observers.
And you can just imagine that I've been like randomly drawn.
Okay.
And if I'm randomly drawn from all the possible observers, then it's overwhelmingly likely that I'm in the big world.
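(For what it's worth, the arithmetic behind the 100-over-101 answer is just the observer-weighted update described above — weight each world's coin-flip prior by how many observers in it could be you; the "one half" answer comes from skipping that weighting step.)

from fractions import Fraction

prior = {"small_world": Fraction(1, 2), "big_world": Fraction(1, 2)}   # fair coin
observers = {"small_world": 1, "big_world": 100}

# Weight each world by its number of possible "you"s, then renormalize.
weights = {w: prior[w] * observers[w] for w in prior}
total = sum(weights.values())
posterior = {w: weights[w] / total for w in weights}

print(posterior["big_world"])    # 100/101
print(posterior["small_world"])  # 1/101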
Huh.
That's super interesting.
I should say, actually, I should plug someone who does know what they're talking about on this,
which is Joe Carlsmith, who has, like, a series of really excellent blog posts.
Oh.
He's coming on the podcast next week.
Yes.
Amazing.
Okay, you should ask him about this, because he's really going to be able to talk to you about it.
I don't want to like, okay, I don't want to scoop him.
But one thought that comes from him, which is just like really cool, maybe just to kind of round this off is if you're like a,
100-over-101 person on examples like this,
And you think there's any chance
that like the universe is infinite in size,
then you should think that the chance
you're in a universe that is infinite
in extent is just, like, one or close to one,
if that makes sense.
I see, yeah, yeah.
Okay, so in the end, then — does your awareness that many worlds is, like, a good explanation, has that impacted your view of what should be done in any way?
Yeah, so I don't really...
know if I have a good answer.
My best guess is that things just shake out to kind of where they started,
as long as you started off in this kind of, like,
relatively risk-neutral place.
I suspect that if many worlds is true,
this might have, like,
this might make it much harder to hold on to kind of intuitive views
about personal identity for the reason that, like,
there isn't this, like, one person who you're, like, continuous with throughout time
and know other people,
which is how people tend to think about what it is to, like, be a person.
And then there's this kind of like vague thing, which is just occasionally I, you know, just like remember like every other month or so that maybe many worlds is true.
And it's kind of like blows my mind.
I don't know what to do about it.
And I just like go on with my day.
That's about where I am.
Okay.
All right.
Other interesting topics to talk about talent search.
What is EA doing about identifying, let's say, more people like you, basically, right? But maybe even people like you who are not in places next to Oxford.
I don't know where you actually are from originally.
but like if they're from some place like, I don't know, China or India or something.
Yeah, yeah, yeah.
Well, what is EA doing to recruit more Fins from places where they might not otherwise work on EA?
Yeah, it's a great question.
And yeah, to be clear, I just won the lottery on things going right to kind of be lucky
enough to do what I'm doing now.
So, yeah, in some sense, the question is how do you like print more winning lottery tickets?
And indeed, like, find the people who really deserve them
but currently aren't being identified.
A lot of this comes to mind because
I read that book,
Talent,
by
Tyler Cowen and Daniel Gross recently.
And yeah,
there's something really powerful about this fact that this like business of,
you know,
finding really like smart driven people and connecting them with opportunities to
like do the things they really want to do.
This is like really kind of still inefficient.
And there's just still like so many just people out there who like aren't kind of
getting those opportunities.
I actually don't know if I have much more
like kind of insight to add there
other than this is just a big deal.
And it's like,
there's a sense of which it is an important consideration
for this like project of trying to do the most good.
Like you really want to find people
who can like put these ideas in practice.
And I think there's a special premium
on that kind of person now
given that there's like a lot of philanthropic kind of funding
ready to like be deployed.
There's also a sense in which this is just like,
in some sense it's like a cause in its own right.
It's kind of analogous to open borders in that sense,
at least in my mind.
Hadn't really like appreciated it on some kind of visceral level
before I read that book.
And then another thing they talk about in the book is
you want to get them when they're young.
You can really shape somebody's ideas
about what's worth doing if you have,
and then also their ambition about what they can do
if you catch them early.
And you know,
Tyler Cowen also had an interesting blog post
a while back where he pointed out that
a lot of people applying to his Emergent Ventures
program, a lot of young people
applying, are heavily influenced
by effective altruism.
Which seems very, like, it's going to be a very important
factor in the
long term. I mean, eventually these people
will be in positions of power.
Yeah, so maybe
effective altruism is already succeeding to the extent
that a lot of the most
ambitious people in the world are
identified that way, at least, I mean, given
the selection effect that that program
has. But yeah, so
what is it that can be done to
get people when they're young?
Right.
Yeah, I mean, it's a very good question.
And I think, like, what you point out there
is right. There's some —
Nick Whitaker has this blog post.
I think it's called
the Lamplight Model of Talent Curation.
He, like, draws this distinction between casting,
like, a very wide net —
something that's just kind of very legibly prestigious —
and then, you know,
filtering through thousands
of applications, or, in some sense, putting out the bat signal that in the first instance just
like attracts the like really promising people and maybe actually drives away people who will be
better fit for something else. So, like, an example is, if you were to hypothetically write
quite a wonky economics blog, like, every day for however many years, and then run some
fellowship program, you're just, like, automatically selecting for people who read that blog, and that's
like a pretty good kind of starting population to begin with. So I really like that kind of
thought, of just, like, not needing to be, like, incredibly loud and, like, prestigious-sounding, but
rather just, like, being quite honest about what the thing is about, so you just attract the people
who, like, really sought it out,
because that's just quite a good feature.
I think another thing that, again,
this is like not a very interesting point to make,
but something I've really realized the value of
is like having physical hubs.
And so there's this model of, you know,
running like fellowships, for instance,
where you just like find really promising people.
And then there's just so much to be said
for like putting those people in the same place
and, you know, surrounding them with maybe people
who are a bit more like senior,
I'm just kind of like letting this natural process happen
where people just get really excited
that there is this like community of people
working on stuff that previously
you've just been kind of reading about
in your bedroom on like some blogs.
That like as a source of motivation,
I know it's like less tangible than other things
but yeah, just like so so powerful.
And like probably I don't know,
one of the reasons I'm like working here maybe.
Yeah. That is one downside of working from home,
that you don't get that.
But regarding the first point,
So I think
maybe that should update
in favor of not doing community outreach
and community building.
Like maybe that's negative marginal utility
because like if I think about
for example,
my local,
so there was an effective altruism group
at my college that I didn't attend,
and there's also like an effective altruism group
for the city as a whole in Austin
that I don't attend.
And the reason is,
just because I don't know, the people who,
there is some sort of adverse selection here
where the people who are leading organizations like this
are people who
aren't directly doing the things that effective altruism says
they might consider doing,
and are more interested in the social aspects of the movement.
So I don't know, I'd be much less impressed with the movement
if my first introduction to it was these specific groups
that I, like, I've had the personal,
I've personally interacted with,
rather than, I don't know, just like hearing Will MacAskill on a podcast.
By the way, the latter being my first introduction to effective altruism.
Yeah, interesting.
I feel like I really don't want to, like, underrate the job that community builders are doing.
I think in fact it's turned out to have been like, and still is just like incredibly valuable,
especially just looking at the numbers of like what you can achieve as like a group organizer at your university.
Like maybe you could just change the course of like more than one person's career.
over the course of like a year of your time that's like pretty incredible but yeah i guess part of what's
going on is that the difference between like going to your like local group or like engaging with
stuff online is that you get to kind of choose the stuff you engage with and like maybe one upshot here
is that they're like kind of set of ideas that might get associated with um EA is like very big and
you don't need to buy into all of it or just like be passionate about all of it like
if this kind of AI stuff
just like really seems interesting but maybe other stuff
is just like more peripheral then
you know one yeah like this could push
towards wanting to have like just a specific
group for people who are just like you know
this AI stuff seems cool other stuff not my like
cup of tea um so yeah I mean in the future
as like things get scaled up as well as kind of scaling out
I think also maybe having this like
differentiation and kind of diversification of like
different groups I mean seems pretty good but just like more
of everything also seems good.
Yeah, yeah.
I'm probably overfitting on my own experience.
And given the fact that I don't, didn't actively interact with any of those communities,
I'm probably not even that informed about what those experiences are like.
But there was an interesting post on the Effective Altruism Forum that somebody sent me
where they were making the case that at their college as well,
they got the sense that the EA community building stuff had a negative impact
because people were kind of turned off by their peers.
And also, there's a difference between, like, I don't know,
somebody like Sam Bankman-Fried or Will MacAskill advising you, obviously virtually, to do these
kinds of things versus, like, I don't know, some sophomore at your university studying philosophy,
right?
No offense.
Yeah, I do.
I do.
I think my guess is that, like, on net, these efforts are still just, like, overwhelmingly
positive.
But, yeah, I think it's, like, pretty interesting that people have the experience you describe
as well.
Yeah, and interesting to think about ways to kind of like get around that.
So long reflection is a, it seems like a bad idea, no?
I'm so glad you asked.
Yeah, I want to say, I want to say no.
I think in some sense I've, like, come around to it as an idea.
But yeah, okay, maybe it's worth like.
Oh, really interesting.
Maybe it's worth, I guess, like trying to explain what's going on with this idea.
So if you were like to zoom out like really far over time,
and consider our place now, like in history.
And you could like ask this question about,
suppose in some sense,
humanity just became like perfectly coordinated.
What's the plan?
Like what kind of in general should we be prioritizing?
And like in what stages?
And you might say something like this.
It looks like this moment in history,
which is to say maybe this century or so,
just looks kind of wildly and like unsustainably dangerous.
Like, kind of,
so many things are happening at once.
It's really hard to know how things are going to pan out,
but it's possible to imagine things panning out really badly
and badly enough to just like more or less end history.
Okay, so before we can like worry about some kind of longer term considerations,
let's just get our act together and make sure we don't mess things up.
So, okay, like that seems like a pretty good first priority.
But then, okay, suppose that you succeed in that
and like we're in a significantly safer kind of
time — what then?
You might notice that
the scope
for what we could achieve
is like really extraordinarily large,
like maybe kind of larger than most people
kind of like typically entertain.
Like we could just do a ton of really exceptional things.
But also this is kind of feature
that maybe in the future,
not especially long-term future,
we might more or less for the first time
be able to embark on these like really kind of ambitious projects that are in some important
sense, like really hard to reverse.
And that might make you think, okay, at some point it would be great to like, in some,
like, you know, achieve that potential that we have.
And just, like, for instance, a kind of lower bound on this is lifting everyone
out of poverty, who remains in poverty.
And then like going even further, just making everyone even wealthier, able to do more
things that they want to do, making more scientific discoveries, whatever. So we want to do that,
but maybe something to come in between these two things, which is like figuring out what is
actually good. And okay, why should we think this? I think one thought here is it's very plausible.
I guess it kind of links to what we were talking about earlier, that the way we think about,
you know, like really positive futures, like one of the best futures.
It's just like really kind of incomplete.
Almost certainly we're just getting a bunch of things wrong by this kind of pessimistic
induction on the past.
Like a bunch of smart people thought really reprehensible things like 100 years ago.
So we're getting things wrong.
And then it's like second thought is, I don't know, seems possible to actually make progress here
in thinking about what's good.
There's this kind of interesting point that most, like work in,
I guess you might call it like moral philosophy,
has focused on the negatives.
So, you know, avoiding doing things wrong, fixing harms,
avoiding bad outcomes.
But this idea of like studying the positive,
studying like what we should do if we can kind of do like many different things.
This is just like super, super early.
And so we should expect to be able to make a ton of progress.
And so, hey, again, imagining that the world is like perfectly coordinated.
Would it be a good idea to like spend some time, maybe a long period of time,
kind of deliberately holding back from embarking on these like huge irreversible projects,
which maybe involved like leaving Earth in kind of certain, you know, scenarios,
or otherwise just like doing things which are hard to undo.
Should we spend some time thinking before then?
Yeah, sounds good.
And then I guess the very obvious response is, okay, that's a pretty huge assumption
that we can just like coordinate around that.
And I think the answer is, yep, it is.
But as a kind of directional ideal, should we push towards or away from the idea of like taking our time, holding our horses, kind of getting people together who haven't really like been part of this like conversation and like hearing them?
Yeah, definitely seems worthwhile.
Okay.
So I have another good abstract idea that I want to run by you.
So, you know, it seems like kind of wasteful that we have these different companies that are building the same exact product.
But, you know, because they're really building the same exact product separately, they don't have economies
of scale and they don't have coordination.
There's just a whole bunch of loss that comes from that, right?
Wouldn't it be better if you could just coordinate and just like figure out the best
person to produce something together and then just have them produce it?
And then we could also coordinate to figure out like what is the right quantity and quality
for them to produce.
I'm not trying to say this is like communism or something.
I'm just saying it's ignoring what would be required.
Like, in this analogy, you're ignoring what kinds of information get lost and
what it requires to do that so-called coordination in the communism example.
In this example, it seems like you're ignoring whatever would be required to prevent somebody
from realizing — like, let's say somebody has a vision for, we want to colonize a star system,
we want to, I don't know, make some new technology, right?
That's part of something that the long reflection would control.
Maybe I'm getting this wrong, but it seems like it would require almost a global panopticon
or a totalitarian state to be able to prevent people from escaping the reflection.
Okay, so there's a continuum here.
And I basically agree that some kind of panopticon-like thing not only is impossible,
but actually sounds pretty bad, but something where you're just like pushing in the direction
of being more coordinated on the international level about things that matter seems like
desirable and possible.
And in particular, like, preventing really bad things rather than like try to get people to like
all do the same thing.
So the Biological Weapons Convention
just strikes me as an example,
which is like imperfect and underfunded,
but, you know, nonetheless,
kind of directionally good.
And maybe an extra point here is that
there's like a sense in which
the long reflection option,
or I guess the better framing is like
aiming for a bit more reflection rather than less,
that's like the conservative option.
That's like doing what we've already been doing
just a bit longer.
rather than some like radical option.
So I agree.
It's like pretty hard to imagine like, you know,
some kind of super long period
where everyone's like perfectly agreed on doing this.
But yeah, I think framing it as like a directional ideal
seems pretty worthwhile.
And I guess I don't know,
maybe I'm kind of naively hopeful
about the possibility of coordinating better
around things like that.
There's two reasons why this seems like a bad idea to me.
One is, okay, first of all,
who is going to be deciding
when we've come to
a good consensus about, okay, so we've decided like this is the way things should go.
Now we're like ready to escape the long reflection and realize our vision for the rest
of the lifespan of the universe.
Who is going to be doing that?
It's the people who are presumably in charge of the long reflection.
Almost by definition, it'll be the people who have an incentive in preserving whatever power,
well, power balances exist at the end of the long reflection.
And then the second thing I'd ask is, like,
there's like a difference between I think
having a consensus on not using biological weapons
or something like that where you're limiting a negative
versus — it seems like
when we've required society-wide consensus
on what we should aim towards achieving,
the outcome has not been good, historically.
It seems better, on the positive end,
to just leave it open-ended,
and then maybe, when it's
necessary, say that, like, the very bad things we might want to restrict together. Yeah, okay.
I think I kind of just agree with a lot of what you said so I think the best like framing
of this is the version where when you're preventing something which most people can agree
is negative which is to say some actor unilaterally deciding to like do this huge irreversible
or set out on this huge irreversible project.
Like something you said was that the outcome
is going to reflect the, like, values of whoever is, like, in charge.
And then not just the values.
I mean, also, just, like, think about how guilds work, right?
It's like, whenever we, for example in an industry,
let how the industry should progress,
let those kinds of decisions, be made collectively
by the people who are currently
dominant in the industry —
you know, guilds or something like that,
or, like, industrial conspiracies as well —
it seems like the outcome is just bad. And so, like, my prior
would be that at the end of such a situation, our ideas about what we should do
would actually be worse than going into the long reflection.
I mean, obviously it really depends on how it's implemented,
right? So I'm not saying that. But just, like, broadly, given all possible
implementations, and maybe the most likely implementation given how governments run now —
Yeah, yeah, yeah. I should say that, like, I am in fact pretty
unsure myself. I just — I know it's more enjoyable to, like, give this thing its hearing.
no no I enjoy the uh the parts where we have disagreed so one thought here is if you're
worried about the future like the course of the future
you're being determined by some single actor.
I mean, that worry is just symmetrical with the worry of letting whoever wins some race first
go and do, you know, go and do the thing, the like project where they more or less
kind of determine what happens to the rest of humanity.
So the option where you like kind of deliberately wait and let people like have some
like global conversation, I don't know.
It seems like that
is less worrying, even if the worry is still there.
I should also say, I can imagine the outcome is not unanimity.
In fact, it'd be pretty wild if it was, right?
But you want the outcome to be some kind of like stable, friendly disagreement
where now we can kind of, like, maybe reach some kind of Coasean solution
and we'd like go and do our own things.
There's like a bunch of projects which kind of go off at once.
I don't know, that feels really great to me compared to whoever gets there first
determining how things turn out.
But yeah, I mean, it's hard to talk about stuff, right?
Because it's like somewhat speculative.
But I think it's just like a useful like North Star or something to try pointing towards.
Okay, so maybe to make it more concrete.
I wonder about your expectation that the consensus view would be better than the first-mover view.
In, like, today's world — okay, either we have the form of government, and
not just the government, but also the industrial and logistical organization,
that, I don't know, Elon Musk has designed for Mars — so if he's the
first mover for Mars, would you prefer that, or that we have the UN come to a consensus between
all the different countries about, like, how we should have the first Mars colony organized?
Would the Mars colony run better if, after like 10 or 20 years of that,
they're the ones who decide how the first Mars colony goes?
Basically, do you expect global consensus views to be better than first-mover views?
Yeah, that's a good question.
And I mean, one obvious point is not always, right?
Like, there are certainly cases where the consensus view is just like somewhat worse.
I think you limit the downside with the consensus view, right?
Because you give people space to express why they think some particular idea is bad.
I don't know if this is an answer to your question, but, like, it's a really good one.
You can imagine the kind of the UN-led thing.
It's going to be like way slower.
it's going to probably be way more expensive.
The International Space Station is a good example
where I don't know, I think that turned out pretty well,
but a private version of that would have happened
in some sense, like, a lot more efficiently.
I guess I'm not, like the Elon example is kind of a good one
because it's not obvious why that's like super worrying.
The thing I have in mind in the like long reflection example
is maybe like a bit more kind of wild.
But it's really hard to make it concrete.
So I'm yeah, somewhat floundering.
There's also another reason — I don't know, maybe this just gets to an irreconcilable
question about your priors about other kinds of political things — but to the extent that somebody
has been able to build up resources privately, to be able to be a first mover in a way that is going
to matter for the long term, what do you think about what kind of views they're likely to have
and what kind of competencies they're likely to have, versus assuming that the way governments
work and function, and the quality of their governance, doesn't change that much for the next
hundred years —
what kind of outcomes will you have from each?
Basically, if you think the likelihood of leaders like Donald Trump or Joe Biden is
going to be similar for the next hundred years,
and if you think the richest people in the world, the first movers, are going to be people
that are similar to Elon Musk,
I can see two people having genuinely different reasonable views about whether
the Elon Musk of 100 years from now or the Joe Biden of 100 years from now should have
the power to decide the long-run
course of humanity.
Is that a fulcrum in this debate that you think is important, or is that maybe not
as relevant as I might think?
Yeah, I guess I'll try saying some things and maybe it will like respond to that.
Kind of two things are going through my head.
So one is something like you should expect these questions about like what should we do
when we have the capacity to do like a far larger range of things that we currently have
the capacity to do.
That question is going to hinge like much more importantly on like theories people have
and like world views and very big kind of particular details
much more than it does now
and I'm going to do a bad job
at trying to articulate this,
but there's some kind of analogy here where
if you're like fitting a curve to some points
you can like overfit it
in fact you can overfit it in various ways
and they all look pretty similar
but then if you like extend the axis
so you like see what happens to the curves
like beyond the points
those different ways of fitting it
will just, like, go all over the place.
And so, like, there's some analogy here.
When you kind of expand the space of what we could possibly do, different views which
look kind of similar right now, or at least come to similar conclusions, they just like go all
over the shop. And so that is not responding to your point, but I think it's like, maybe worth saying,
like, this is a reason for expecting reflecting on what the right view is to be quite important.
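[As a rough illustration of the curve-fitting analogy — a hedged sketch of my own, not something spelled out in the conversation — here is what happens when you fit the same noisy points with polynomials of different degrees and then extrapolate past the data; the data and degrees are made up.]

```python
import numpy as np

# Fit the same noisy, roughly linear points with polynomials of different
# degrees, then compare predictions inside and well outside the observed range.
# (numpy may warn that the degree-8 fit is poorly conditioned; that's part of the point.)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = x + rng.normal(scale=0.05, size=x.size)

for degree in (1, 3, 8):
    coeffs = np.polyfit(x, y, degree)
    inside = np.polyval(coeffs, 0.5)   # inside the data: all fits roughly agree
    outside = np.polyval(coeffs, 3.0)  # extrapolated: high-degree fits diverge
    print(f"degree {degree}: f(0.5) = {inside:+.2f}, f(3.0) = {outside:+.2f}")
```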
And, like — anyway, I guess I'll at least gesture at a second thought, which is something like,
I guess there's two things going on. One is the thing you mentioned, which is,
there are basically just a bunch of political dynamics
where you can just like reason about
where you should expect values to heads
for like political reasons.
In some sense, it's like now better than the defaults.
And what is that default?
And then there's a like a kind of different way of thinking about things
which is like separately from political dynamics.
Can we actually make progress and like thinking better
about what's best to do?
In the same way that we can like make progress in science,
like, kind of separately from the
fact that, like, people's views about science are influenced by, like, political dynamics. And maybe, like, a
disagreement here is a disagreement about like how much scope there is to just get better at thinking
about these things. I mean one like reason I can give I guess I kind of mentioned this earlier is
this project here of like thinking about what's best to do maybe kind of thinking better about ethics
— the thing it's maybe more relevant to think of here is, like, on the order of kind of 30 years
old rather than on the order of 2,000 years old.
You might call it like secular ethics.
Parfit writes about this, right?
He, like, talks about this kind of — there are at least reasons for hope.
We haven't ruled out that we can make a lot of progress.
Because the thing we were doing before, like, we were trying to think systematically about
what's best to do was just very unlike the thing that we should be interested in.
I'm sorry, that was like a huge ramble, but hopefully there's something there.
Yeah, I want to go back to what you were saying earlier about how you can think of,
I don't know, a global consensus
as the reduced variance
version of future views.
And, you know, so I think that, like, to the extent
that you think a downside is really bad,
I think that's a good argument.
And then, yeah, I mean,
it's like similar to my argument against, like,
monarchism, which is that, like,
actually, I think it is reasonable to expect that
if you could
reliably have people like Lee Kuan Yew
in charge of your country and you have a monarchy,
that things might be better than in a democracy.
It's just that the
bad outcome is just so bad that it's like better just having, like, a low-variance
thing like democracy.
If I want to talk about maybe one last kind of trailing thought on what
you said — I think, I guess, Popper has this thought, and also David Deutsch, like, did a really
good job at kind of explaining it about one like underrated value of democracy is not just in some
sense having this function to like combine people's views into like some kind of you know optimal
path which is, like, some mishmash of what everyone thinks — it's also, like, having the ability for
people who are being governed to just like cancel this current experiment in governance and try
again so it's some you know it's like we'll give you freedom to you know implement this kind
of governance plan that seems really exciting, and then we're just going to, like, pull the
brakes when it goes wrong. And that kind of option to, like, start again in general just feels like
really important as some kind of tool you want in your like toolkit when you're thinking about these like
pretty big futures. I guess my hesitation about this is I can't imagine a like a form of government
where at the end of it I would expect that a consensus view from I mean not just like nerdy communities like
EA but like an actual global consensus would be something that I think is a good path. Maybe to
Maybe it's something like I don't think it's like the worst possible path.
But I mean, one thing about reducing variance is like if you think the far future can be really,
really good, then by reducing variance, you're like cutting off a lot of expected value, right?
And then you can think like democracy works much better in cases where the problem is like closer
to something that the people can experience.
It's like, I don't know, democracies don't have famines because, like, if there's a famine,
you get voted out, right?
Or like, yeah, major wars as well, right?
but if you're talking about like
some form of
consensus way of deciding
what should the far far future look like
it's not clear to me why the consensus view
on that would be
is likely to be correct
yeah yeah yeah I think maybe
some of what's going on here is
I'd want to resist
the —
and it's my fault for, I think,
like, suggesting this framing —
that it's like you just,
you spend a bunch of time thinking
and, like, having this conversation,
and then you just have this, like,
international vote on what we should do.
I think maybe
another framing
is something like, let's just give the time for the
people who like want to be
involved in this to like make the progress
that could be possible on thinking about these things
and then just like see where we end up
where
I don't know, there's like a very weak analogy to
progress in like other fields
where we don't make progress
in like mathematics or science
by like taking enormous votes on what's true.
But we can by just like giving people who are interested in making progress the space and time to do that.
And then at the end, it's like often pretty obvious what turns out to be right.
That's, like, very much begging the question, because it's, like, way more obvious what's right and wrong if you're, like, doing math
than if you're doing this kind of thing.
But no, but also, like — this seems similar to, like, the question about monarchy, where it's like, what happens if you pick the wrong person, or, like, the wrong Politburo, to pick the charter you take to the rest
of the universe?
Yeah, it seems like a hard problem to ensure that you have the group of people who will be
deciding this.
Either if it's a consensus or if it's a single person or anything in between.
Like, it has to be some decision maker, right?
I think you could just imagine there being no decision maker, right?
So like, the thing could be, let's agree to have some time to reflect on what is best.
And we might come to some.
Oh.
And then at the end, like, you know, one version of this is just let things happen.
Like, there's no final decision point where something wraps up.
It's just like that time between doing the thing and thinking about it, just like extending that time for a bit.
It seems good.
I see.
Okay.
Yeah.
Sorry I missed that earlier.
Okay.
So actually, one of the major things we were going to discuss — all the things we've discussed so far were, like, one quadrant
of the conversation.
Actually, you know, before we talk about space governance, let's talk about podcasting.
So you have your own podcast.
I have my own.
Why did you start it?
And, like, what have your experiences been so far?
What have you learned about the joy and impact of podcasting?
So, story is, Luca, who's a close friend of mine who I do this podcast with.
We're both at university together.
And we're like both podcast nerds.
And I think I remember we were in our last year.
And we had this conversation like, we're, like surrounded by all these people who just seem like incredibly interesting.
Like all these, you know, like, academics we'd really love to
talk to.
And if we just like email them saying we're doing a podcast and wanted to interview them,
that could be a pretty good excuse to talk to them.
So let's see how easy it is to do this.
Turns out the startup costs on doing a podcast like pretty low if you want to do like a scrappy version of it.
Right.
We did that, and it turns out that, like, academics especially, but just, like, tons of people, really love being asked to talk about the things they think about all day.
Right.
It's just like a complete win-win where you're like,
you're trying to boost the ideas of someone or some actual person
who you think deserves more air time.
That person gets to like talk about their work and, you know,
spread their ideas.
So it's like,
there's like no downsides to doing this other than the time.
Also, I should say that the kind of yes rates on our emails
was like considerably higher than we thought.
We were, you know, like two random undergrads with microphones.
But there's just a really nice.
like kind of snowball effect where if someone who is like well known is like gracious enough to say yes
despite not really knowing what you're about, and then you do an interview and, like, it's
pretty good interview when you're emailing the next person you don't have to like sell yourself
you can just be like hey i spoke to this other impressive person um and of course you get this like
this kind of snowball so no it's definitely a ponzi scheme it's great it's like the best kind of ponzi
scheme though. Podcasts as like a form of media are just like incredibly special. There's something
about just the incentives between like guest and host just like aligned so much better than like,
I know if this was like some journalistic interview, it'd be like way kind of uncomfortable.
There's something about the fact that it's still kind of hard to like search transcripts.
So there's less of a worry about like forming all your words in the right way. So it's just like more
relax. Yeah. Yeah. Yeah.
Yeah, I know.
And it's such a natural form. You can think of writing as a sort of way of imitating conversation.
And audiobooks are a way of trying to imitate a thing that's trying to imitate conversation.
Because with writing, you're visually perceiving what was originally
an ability you had for understanding, you know,
audible ideas.
Um, but then audiobooks —
it's like you're
going through two layers of translation there, where you don't have
the natural repetition, the ability to gauge the other person's reaction,
and the back and forth
that a natural conversation has.
And, um, yeah, so that,
that's why it's like people
potentially listen to, like, podcasts too much, where,
I don't know, they're just like,
they have something in their ears
the whole day,
which you can't imagine for —
Yeah,
totally.
Right.
Yeah, a few things this makes me think of. One is, there's some experiment — I guess you can just do it yourself —
where if you force people not to use disfluences — disfluencies, sorry — like ums and ahs, those people just get, like, much worse at, uh, reading words.
In some sense, like, disfluencies — like, help us, I guess I'm using the word 'like' right now — communicate thoughts for some reason.
And then if you take — yeah, I guess I can speak for myself — and then you word for word transcribe
what you are saying
or when I say you, I mean me.
It's like hot garbage.
It's like I've just learned how to talk.
Yes.
But that pattern of speech, like you point out,
is in fact easier to digest
or at least it requires less kind of stamina or effort.
No, yeah.
Taleb has an interesting point about this
in Antifragile.
I'm vaguely remembering this,
but he makes a point that
sometimes when a signal is distorted
in some way, it makes it so
you retain or
absorb more of it, because you have to go through
extra effort to understand it.
For example, I think
his example was, if,
I don't know, if somebody's,
like, speaking but they're, like, far away
or something, so their audio is muffled,
you have to, like, apply more concentration,
which makes it,
which means you retain more of their content.
So if you like overlay what someone says
with a bit of noise, or you turn down the volume,
very often people have like better
comprehension of it, because of the thing you just said, which is, like, you're paying more attention. Also,
I think maybe I was misremembering the thing I mentioned earlier, or maybe it's a different thing, which is:
you can, like, take perfect speech recordings and then you can, like, insert ums and
ahs and, like, make it worse, and then you can do, like, a comprehension test where people listen to the
different versions — and I can't remember it exactly — and they do better with the versions which are, like, less
perfect. Is it just about having more space between, uh, words? Like, if
you just added space instead of ums, would that have the same effect, or is there something
specific about it? There's a limit to how much I can stretch from my second-year psychology course.
Um, maybe it's some global consonant that just, like — it's like 'om' or something, it, like, evokes,
it evokes, like, absolute concentration. Yeah, exactly. Um, I'm curious to ask you — like, I know, I want
to know what you feel like you've learned from doing podcasting. So, I don't know, maybe one
question here is like, yeah, what's some kind of underappreciated difficulty of trying to
ask good questions? I mean, obviously, you are currently asking excellent questions. So what have
you learned? So, one thing — I think I've heard this advice that you want to do something
where a thing that seems easy to you is difficult for other people. Like, I have tried —
okay, so one obvious thing you can do is, like, ask on Twitter: I'm interviewing this person,
what should I ask them? And you'll observe that
all the questions that people propose are, like, terrible.
But maybe it's just like, oh, yeah, there's adverse selection:
the people who actually could come up with good questions are not going to spend the time to, like, reply to your tweet.
But then I've even — hopefully they're not listening —
but I've even, like, tried to hire, like, I don't know, research partners or research assistants who can help me come up with questions more recently.
And the questions they come up with also seem like, how did growing up in the Midwest,
like, change your views about blah blah blah? It's just a question
whose answer is not interesting. It's not a question you would organically have — at least
I hope you wouldn't have organically wanted to ask them — if you were only talking to them
one on one. So, um, it does seem like the skill is rarer than I would
have expected. I don't know why. I don't know if you have a good sense of this, because you have an excellent
podcast where you ask good questions. I don't know, what do you think?
Have you observed this, that asking good questions is a rarer skill than you might think?
Certainly I've observed that it's a really hard skill.
I still feel like kind of, I still feel like it's really difficult.
I also at least like to think that we've got a bit better.
First thing I thought there was this example you gave of like,
what was it like growing up in the Midwest?
We always used to ask those kinds of questions.
So, you know, like, how did you get into behavioral economics?
And why do you think it was so important?
These are just, like, guaranteed to get kind of
uninspiring answers.
So specificity seems like a really good,
like kind of...
What is your book about?
Yeah, exactly, exactly.
Yeah, tell us about yourself.
This is why I love conversations with Tyler.
It's one of the many reasons I love it.
He'll just like launch with, you know,
like the first question will be like about some footnote
in this person's like undergrad dissertation.
And that just sets the tone so well.
Also, I think cutting off,
which I've
made very difficult for you, I guess — cutting off answers once the interesting thing has been said.
And the elaboration or like the caveats on the like meat of the answer are often just like way less worth hearing.
I think trying to ask questions which a person has no hope of knowing the answer to, even though it would be great if they knew the answer —
like, so what should we do about this policy — is a pretty bad move.
also if you speak to
people who are, like, familiar with being
asked questions about, like, their book, for instance,
in some sense you need to like flush out the kind of pre-prepared
like spiel that they have in their heads
um like I know you could even just do this like before the interview right
and then like it gets to the good stuff where they're actually being made to
think about things. Rob Wiblin has a really good, um, like, a list of interview tips,
which I think I don't know I guess a reason this is kind of nice to talk about
other than the fact
it's just like good to
have some kind of like inside baseball talk
is that you know like
skills of interviewing feel pretty transferable
to just asking people good questions
which is like a generally useful skill
hopefully
so yeah I guess I found
that it's like really difficult
I still get pretty frustrated with how hard it is
but it's like a cool thing to realize
that you are able to like kind of slowly learn
Yeah, so, okay, so —
how do you think about the value
you're creating through your podcast,
and then what advice do you have for somebody who might want to start their own?
Yeah.
So I don't know.
One reason you might think podcasts are really useful in general is, I guess the way I think about this is like,
you can imagine there's a kind of just stock of like ideas that seem really important.
Like if you just have a conversation with, I don't know, someone who's like researching some cool topic
and they tell you all this cool stuff that, like, isn't written up anywhere,
you're like, oh my God, this
needs to, like, kind of exist in the world.
I think in many cases
this like stock of important ideas
just grows faster than you're able to like
in some sense pay it down and like put it out into the world
and that's just a bad thing
so there's this overhang you want to fix
and then you can ask this question of okay what's just like
one of the most effective ways
to like communicate ideas
relatively well
put them out into the world
well I don't know just like having
conversation with that person is just like one of the most kind of efficient ways of doing it.
I think it's like interesting in general to consider like the kind of rate of information transfer
for different kinds of like media and stuff like transmitting and receiving ideas.
So like on the like best end of the spectrum, right, I'm sure you've had kind of conversations where
everyone you're talking with, like, shares a lot of context. And so you can just kind of blurt out
this, like, slightly incoherent three-minute "I just had this kind of thought in the shower," and they can
fill in the gaps and basically just like get the idea. And then at the kind of opposite end,
like maybe you want to write an article in like a kind of prestigious outlet. And so you're like
kind of covering all your bases and making it like really well written. And then just like
the information per kind of effort is just like so much lower. And I guess like academic, certain kinds
of academic papers are like way out on the other side. So yeah, just like as a way of solving this
kind of problem of this overhang of important ideas, podcasts just seem like a really kind of good way
to do that.
I guess when you don't successfully
put ideas out into the world,
you get these little kind of like
clusters or like fogs of
like contextual knowledge
where everyone knows these ideas in the right circles,
but they're hard to pick up from like legible sources.
And it's like kind of maps onto this idea
of like context being that thing which is scarce.
I remember, like, Tyler Cowen talking about that,
and it eventually made sense in that context.
I will mention that it seems like kind of a,
the thing you mentioned about either just hopping on a podcast and explaining your idea
or take the time to do it in like a prestigious place.
It seems very much like a barbell strategy.
Whereas the middle ground spending like four or five hours writing a blog post
where it's not going to be in something that plays super prestigious.
You might as well just like either just put it up in a podcast
if it's a thing you just want to get over with,
or, you know,
I'd spend some time,
it's a little bit more time,
getting in a more prestigious place.
The argument against it, I guess,
is that the idea seems more accessible
if it's in the form of a blog post for,
I don't know,
for posterity,
if you just want that to be, like,
the canonical source for something.
But again, if you want it to be the canonical source,
you should just make it a sort of like more official thing.
Because if it's just a YouTube clip,
then it's a little difficult for people to, like,
reference to it.
You can kind of get the best of both worlds.
So you can put your recording into like there are, you know,
the software that transcribed your podcast, right?
You can put it into that.
If you're lucky enough to have someone to help you with this, you can get someone or
you can just do it yourself, like go through the podcast, the transcript to make sure
it's kind of there aren't any like glaring mistakes.
And now you have this like artifact that is in text form that like lives on the internet.
And it's just like way cheaper than writing it in the first place.
But yeah, that's a great point.
And also people should read your barbells for life.
Is that it?
Barbell strategies for life?
yeah yeah that's it yeah cool maybe one last thing that seems worth saying on this
topic of podcasting is like it's quite easy to start doing a podcast and um my guess is often worth
at least trying right so i don't know i guess there are probably a few people listening to this
who have like kind of entertained the idea uh one thing to say is it doesn't need to be the case
that if you just like stop doing it and it doesn't really pan out after like five episodes or
even fewer, that it's a failure. Like, you
can frame it as: I wanted to make, like, a small series. It's just, like, a useful artifact to have in the
world which is like I don't know here's this kind of bit of history that I think is underrated.
I'm going to tell the story in like four different hour long episodes. If you like set out to do that
then you have this like self-contained chunk of work. So yeah, maybe that's like a useful framing.
And there's a bunch of resources which I'm sure it might be possible to link to on just like how to set up a podcast.
I, like, tried writing, like, collecting some of those resources. The thing to emphasize, I think, is that —
I think I've talked to like at least
I don't know three or four people at this point
to have told me like oh I have this idea
for a podcast it's going to be about
you know like architecture
it's going to be about like VR or whatever
they seem like good ideas — I'm not knocking
the ideas themselves — but
I just like I talked to them like six months later
and it's like they haven't started it yet
and I just tell them like literally just email somebody right now
whoever you want to be your first guest
I mean, I cold emailed Bryan Caplan,
and he ended up being my first guest.
just email just email
them and like set something on the calendar because I don't know what it is maybe just about life in
general I don't know if it's specific to podcasting but the amount of people I've talked to who have
like vague plans of starting a podcast and have nothing scheduled or like no immediate like they
I don't know what they're they're expecting like some mp3 file to appear on their hard drive on
some fine day um so yeah but yeah just do it like get it on the calendar now yeah that seems
that seems good also there's like some way of
thinking about this
where you could just like
if you just write off
in advance that your first
I don't know let's say seven episodes
I just could be like
embarrassing to listen to
um
that is more freeing
because it probably is the case
um
but you like need to go through the like
the bad episodes before you start getting good at anything
I guess it's like not even a podcast point um
yeah also there's if you're just like brief
and polite there's like very little cost
in being ambitious with the people you reach out to
So yeah, just go for it.
Bryan has an interesting — he wrote an interesting argument about this somewhere, where he was pointing out that actually the costs of cold emailing are much lower if you're, like, an unknown quantity than if you are, like, somebody who has somewhat of a reputation.
Because if you're just nobody, then they're going to forget you ever cold email them, right?
They're just going to ignore it in their inbox.
If you ever run into them in the future, they're just, like, not going to remember that they ignored you the first time.
If you're, like, somebody who has, like, somewhat of a reputation, then there's, like, a mystery of, like, why are we not getting
introduced by somebody who should know both of us, right?
If you claim to be, I don't know, like a professor who wants to start a podcast.
Yeah, but anyways, just reinforcing the point that the cost is really low.
All right, cool.
Okay, let's talk about space.
Space governance.
So this is an area where you've been writing about and researching recently.
Okay, one concern you might have is, you know — Toby Ord has that book,
The Precipice, about how we're in this time of perils,
where we have like one in six odds of going extinct this century.
Is there some reason to think that once we get to space,
this will no longer be a problem,
or will the risk of extinction for humanity,
you know, asymptote to zero?
I think one point here.
So actually, maybe it's worth beginning
with a kind of like naive case for thinking that like spreading through space
is just like the ultimate hedge against extinction.
And this is, you know, you can imagine just like duplicating civilization or at least having kind of civilizational backup like things, which are like in different places in space.
If the risk of any one of them, like, being hit by an asteroid, or, like, otherwise encountering some existential catastrophe — if those risks are independent, then the total risk just, like, falls exponentially with every new backup, right?
It's like having multiple kind of backups of some data in different places in the world, right? So if those risks are independent, then it is in fact the case that, like, going to space is just, like, an incredibly good strategy. I think there are pretty compelling reasons to think that a lot of the most worrying risks are, like, really not independent at all. So one example is, you can imagine very dangerous pathogens: if there's any travel between these places, then the pathogens are going to travel. But, like, maybe the most
more pertinent example is if you think it's worth being worried about artificial general
intelligence that is like unaligned that goes wrong and like really relentlessly pursues
really terrible goals, then just having some like, some just physical space between two
different places is really not going to work as a real kind of hedge. So I'd say something like,
you know, space seems kind of, it seems a net useful to like diversify, go to different
places, but like absolutely not sufficient for like getting through this kind of time of
perils.
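[To put rough numbers on the "backups" point — a hedged sketch with made-up figures, not estimates from the conversation: with independent risks the chance of losing everything shrinks exponentially in the number of settlements, but a single correlated risk puts a floor under it no matter how many backups you add.]

```python
def p_all_lost(per_settlement_risk: float, n_settlements: int,
               correlated_risk: float = 0.0) -> float:
    """Chance every settlement is lost: one fully correlated risk that hits all
    of them at once, plus an independent per-settlement risk otherwise."""
    independent_part = per_settlement_risk ** n_settlements
    return correlated_risk + (1 - correlated_risk) * independent_part

print(p_all_lost(1/6, 1))        # ~0.167: a single planet
print(p_all_lost(1/6, 3))        # ~0.005: three genuinely independent backups
print(p_all_lost(1/6, 3, 0.10))  # ~0.104: a shared 10% risk dominates regardless
```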
Then yeah, I guess there's kind of this follow up question which is like, okay, well, why expect
that there is any hope of like getting the risk down to sustainable levels?
If you're sympathetic to the possibility of, like, really just transformative artificial general
intelligence like arriving, you might think that in some sense getting that transition
right, where the outcome is that
now you have this thing on your side,
which, like, has your interests in mind or has, like, good values in mind,
but has this, like, general purpose kind of reasoning capability,
that in some sense, this just, like, tilts you towards being safe,
just, like, indefinitely long.
And one reason is, if bad things pop up, like, some unaligned thing,
then you have this much better established, safe and aligned thing,
which has this kind of defensive advantage.
So that's one consideration.
and then if you're like less sympathetic to this AI story,
that I think you'd also just tell a story
about like being optimistic for our kind of capacity
to like catch up along some kind of wisdom or coordination dimension.
If you like really zoom out and look at how like quickly
we've just invented all this like kind of insane technology,
that is like a roughly kind of exponential process.
You might think
that that might kind of eventually, like, slow down,
but our like improvements and just how well we're able to coordinate ourselves,
like continues to increase.
And so that you get this like defensive advantage in the long run.
Those are two pretty weak arguments.
So I think it's like actually just a very good question to think about.
And I like, you know, also kind of acknowledge that's like not a very kind of compelling
answer.
I'm wondering if there are aspects that you can
discern from first principles about the safety of space
which suggest that, I don't know, either
there's no good reason to think the time of perils ever ends
I mean because the thing about AI is like that's true
whether you go to space or not, right? Like, if it's aligned, then I guess it can
indefinitely reduce existential risk I mean one thought you can have is maybe
I don't know contra the long reflection thing we're talking about
which is that if you would think that one of the bottlenecks to
a great future could be
I don't know, like, some sort of tyranny —
tyrannical is, like, kind of a loaded term in terms of conventional
political thought, but you know what I mean —
then the diversity of political models that
being spread out would allow, maybe that's a positive thing.
On the other hand, Gwern has this interesting blog post
about space wars, where he points out that
the logic of mutually assured destruction goes away in space,
so maybe we should expect more conflict
because it's hard to identify who the culprit is
if an asteroid was redirected to your planet
and if they can speed it up sufficiently fast,
they can basically destroy your above-ground civilization.
Yeah, so I mean,
is there something we can discern from first principles
about how violent and how, I don't know,
how pleasant our time in space will be?
Yeah, it's a really good question.
I will say that I think I have not, like,
reflected on that question enough to, like, give a really authoritative answer.
Incidentally, one person who absolutely has is Anders Sandberg, who has been thinking about almost
exactly these questions for a very long time.
At some point in the future, he might have a book about this.
So watch that space.
One consideration is that you can start at the end.
You can consider what happens very far out in the future.
and it turns out that because the universe is expanding
for any just like point in space
if you consider the next cluster over
or maybe even the next galaxy over
there'll be a time in the future
where it's impossible to reach that other point in space
no matter how long you have to get there
so even if you sent out like a signal in the form of light
it would never reach there because there'll always be a time
in the future where space starts expanding faster than the speed of light
relative to that other place.
So, okay, there's a small consolation there, which is if you last long enough to get to this kind of era of isolation, then suddenly you become independent again in the, like, strict sense.
I don't think that's especially relevant when we're considering, I guess, relatively speaking, nearer-term things.
Gwern's point is really nice.
So Gwern starts by pointing out that we have this, like, logic with nuclear weapons on Earth, of mutually assured destruction, where the emphasis is on a second strike.
So if I receive a first strike from someone else,
I can identify the someone else that first strike came from
and I can kind of like credibly commit to retaliating.
And the thought is that this like disincentivizes that person
from launching the first strike in the first place,
which makes a ton of sense.
Gwern's point, I guess the thing you already mentioned, is that in space,
there are reasons for thinking it's going to be much harder
to like attribute where a strike came from.
that means that you don't have like any kind of credible way to threaten a retaliation.
And so mutually assured destruction doesn't work.
And that's kind of like actually a bit of an uncomfortable thought because the alternative to mutually
assured destruction in some sense is just first strike, which is, if you're worried about some
other actor being powerful enough to destroy you, then you should destroy their capacity to
destroy you.
So yeah, it's like certainly bleak.
blog post. I think there are like a ton of other considerations, but some of which are a bit more
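To make the attribution point concrete, here's a toy expected-payoff sketch (an editorial illustration with made-up numbers, not a model from the conversation): deterrence works when attribution is near-certain, and can fail when it isn't.

```python
# Toy deterrence model (illustrative numbers only).  A first strike is tempting
# when its expected gain exceeds the expected cost of retaliation, and
# retaliation can only be threatened credibly if the strike can be attributed.

def first_strike_is_tempting(gain_from_strike: float,
                             cost_of_retaliation: float,
                             p_attribution: float) -> bool:
    """True if striking first has positive expected value for the attacker."""
    expected_retaliation_cost = p_attribution * cost_of_retaliation
    return gain_from_strike - expected_retaliation_cost > 0

# Earth-like case: attribution is near-certain, so retaliation deters.
print(first_strike_is_tempting(10, 100, p_attribution=0.95))  # False -> deterred

# Space-like case: a redirected asteroid is hard to trace, so deterrence fails.
print(first_strike_is_tempting(10, 100, p_attribution=0.05))  # True -> tempting
```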
I think there are a ton of other considerations, though, some of which are a bit more hopeful. One is that you might imagine there's in general a kind of defensive advantage in space over offense. One reason is that space is this dark canvas in 3D where there's absolutely nowhere to hide, and so you can't sneak up on anyone. But yeah, there's a lot of stuff to say here, and a lot of it I don't quite fully understand yet.
But I guess that makes it an interesting and important subject to be studying, if we don't know that much about how it's going to turn out.
So von Neumann had this vision that you could set up a sort of virus-like probe that infests a planet and uses up its usable resources to build more probes, which go on to infect more planets. Is the long-run future of the universe that all the available low-hanging resources are burnt up in, you know, some sort of expanding fire of von Neumann probes? Because it seems like as long as one person decides that this is something they want to do, then, you know, the low-hanging fruit in terms of spreading out will just be burned up by somebody who built something like this.
Yeah, that's a really good question.
So, okay, maybe there's like an analogy here where we have on Earth, we have organisms,
which can, like, convert raw resources plus sunlight into more of them, and they replicate.
It's notable that they don't, like, blanket the Earth.
Although, just as a tangent, I remember someone mentioning this thought: if an alien arrived on Earth and asked what the most successful species is, it would probably be grass. But again, the reason that particular organisms that just reproduce using sunlight don't have this kind of green goo dynamic is because there are competing organisms; there are things like, you know, antivirals and so on. So I guess, like you mentioned, the worry is that as soon as this thing gets seeded, it's game over. But you can imagine trying to catch up with these things and stop them. And I don't know what the equilibrium is here, where you have things that are trying to catch things and things which are also spreading. It's pretty unclear, but it's not clear that everything gets burned down. Although, I don't know, it seems worth having on the table as a possible outcome.
And then another thought, I guess something you also basically mentioned: Robin Hanson has this paper called, I think, Burning the Cosmic Commons. I think the things he says are a little bit subtle, but to kind of bastardize the overall point, there's an idea that you should expect selection effects on what you observe in the long run, on which kinds of things have won out in this kind of race for different parts of space. And in particular, the things you should expect to win out are the things which burn resources very fast and are greedy in terms of grabbing as much space as possible. And, I don't know, that seems roughly correct. He also has a more recent bit of work called Grabby Aliens; I think there's a website, grabbyaliens.com, which expands on this point and asks what we should expect to see from such, kind of, yeah, grabby civilizations.
Yeah, I mean, maybe one slightly hopeful upshot here is: you don't want these greedy von Neumann-type probes to win out, because they're also just dead, they have nothing of value. And so, if you think you have something of value to spread, maybe that is a reason to spread more quickly than you otherwise would have planned, once you've figured out what that thing is, if that makes sense.
Yeah, so then does this militate towards the logic of a space race, where, similar to the first-strike case, where if you can't count on credible retaliation you want to do a first strike, maybe there's a logic to: as long as you have at least a somewhat compelling vision of what the far future should look like, you should try to make sure it's you who's the first actor that goes out into space, even if you don't have everything sorted out, even if you have concerns about how...
Yeah. Yeah, yeah. Like, you'd ideally like to spend more time. My guess is that the timescales on which these dynamics are relevant are extremely long compared to what we're familiar with. So I don't think that any of this straightforwardly translates into, you know, wanting to speed up on the order of decades. And in fact, if any delay on the order of decades, or presumably also centuries, gives you a marginal improvement in your long-run speed, then, just because of the timescales and distances involved, you almost always want to take that trade-off. So yeah, I guess I'd be wary of reading too much into all this stuff in terms of what we should expect for some kind of race in the near term. It just turns out that space is extremely big and there's a ton of stuff there. So in anything like the near term, I think this reasoning about, oh, we'll run out of useful resources, probably won't kick in. But that's just me speculating, so I don't know if I have a clear answer to that.
Okay.
So if we're talking about space governance, is there any reason to think, okay, in the far future, we can expect that space will be colonized either by fully artificial intelligence or by simulations of humans, like ems? In either case, it's not clear that these entities would feel that constrained by whatever norms of space governance we detail now. Is there reason for thinking that any sort of charter or constitution that the UN might build, regardless of how, I don't know, how sane it is, will be the basis on which the actual long-run fate of space is decided?
Yeah, yeah. So I guess the first thing I want to say is that it does in fact feel like an extremely long shot to expect that any kind of norms you end up agreeing on now, even if they're good, flow through to the point where they really matter, if they ever do. But okay, you can ask: what about the worlds in which this early thinking does end up being good at the end point? I don't know, I can imagine, for instance, the U.S. Constitution surviving in importance, at least to some extent, if digital people come along for the ride; it's not obvious why there'd be some discontinuity there.
I guess the important thing is considering what happens after anything like transformative artificial intelligence arrives. My guess is that the worlds in which this very long-term question, what norms should we have for settling space, matters or does anything worthwhile are worlds in which, you know, alignment goes well, right? And it goes well in the sense that there's a significant sense in which humans are still in the driving seat, and when they're looking for precedents, they just look to existing institutions and norms. So, I don't know, there are so many variables here that this seems like a fairly narrow set of worlds, but, I don't know, it seems pretty possible. And then there's also stuff like, you know, settling the Moon or Mars, where it's just much easier to imagine how this thinking actually ends up influencing, or positively influencing, how things turn out.
It feels worth pointing out that there are things that really plausibly matter when we're thinking about space that aren't just these crazy, very long-run, sci-fi scenarios, although they are pretty fun to think about. One is that there's just a ton of pretty important infrastructure currently orbiting the Earth, and also anti-satellite weapons are being built. And my impression is, well, in fact, I think it's the case, that there is a worryingly small amount of agreement and regulation about the use of those weapons. Maybe that puts you in a kind of analogous position to not having many agreements over the use of nuclear weapons, although maybe less worrying in certain respects, but it still seems worth taking that seriously and thinking about how to make progress there.
Yeah, and I think there are just a ton of other kinds of near-term considerations. There's this great graph, actually, on Our World in Data, which I guess I can send you the link to after this, which shows the number of objects launched into orbit, especially low Earth orbit, over time. And it's just a perfect hockey stick. And I know it's quite a nice illustration of why it might pay to think about how to make sure this stuff goes well.
And the story behind that graph is kind of fun as well. I was messing around on some UN website, which had this incredible database with more or less every officially recorded launch, logged with all this data about how many objects were contained or whatever.
It was like the clunkiest API you've ever seen.
You have to like manually like click through each page
and it takes like five seconds to load.
And you have to like scrape it.
So I was like, okay, this is great that this exists.
I am not, like, remotely sophisticated enough to know how to make use of it. But I emailed the Our World in Data people saying, FYI, this exists; if you happen to have, like, you know, a ton of time to burn, then have at it. And Edouard Mathieu from Our World in Data got back to me, like, a month later: hey, I had a free day, all done. And it's, like, up on the website.
It's so cool.
So cool.
Cool.
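For what it's worth, a scrape like the one described above might look roughly like the sketch below. This is purely illustrative: the URL, query parameter, and table selector are placeholders made up for the example, not the real UN registry's interface.

```python
# Hypothetical sketch of walking a slow, paginated launch registry.
# Everything site-specific here (URL, params, selectors) is a placeholder.
import time
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.org/launch-registry"   # placeholder, not the real site

def scrape_all_pages(max_pages=500, delay_s=5.0):
    """Fetch each results page in turn, waiting politely between requests."""
    records = []
    for page in range(1, max_pages + 1):
        resp = requests.get(BASE_URL, params={"page": page}, timeout=30)
        if resp.status_code != 200:
            break
        soup = BeautifulSoup(resp.text, "html.parser")
        rows = soup.select("table tr")             # placeholder selector
        if not rows:
            break                                  # no more pages
        for row in rows:
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if cells:
                records.append(cells)
        time.sleep(delay_s)                        # each page is slow; don't hammer it
    return records
```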
Okay.
I think that's my space rambling.
Dwarkesh, I'd quite like to ask you a couple of questions, if that's all right.
I realize I've been kind of hogging the airwaves.
Yeah, so here's one thing I've just been interested to know.
You're doing this like blogging and podcasting right now, but yeah, what's next?
Like, 2024 Dwarkesh, what is he doing?
I think I'll probably be... I don't know, the idea of building a startup has been very compelling to me. And not necessarily because I think it's the most impactful thing that could possibly be done, although I think it is very impactful. It's just, I don't know, people tend to have different things that are like, I want to be a doctor or something; you know, it's something that's stuck in your head. So yeah, I think that's probably what I'll be attempting to do in 2024. I think the situation in which I remain a blogger and podcaster is if it just turns out that, I don't know, the podcast becomes, like, really huge, right? At that point, it might make more sense that, oh, actually, this is the way. Currently, I think the impact the podcast has is, like, 0.00001, and that 0.00001 is just me getting to learn about a lot of different things. So for it to have any... not necessarily that it has to be thought of in terms of impact, but in terms of how useful it is, I think it's only the case if it really becomes much bigger.
Nice.
That sounds great.
Maybe this is going right back to the start of the conversation.
What about a nonprofit startup?
All the same like excitement.
If you have a great idea,
you kind of skip the fundraising stage.
More freedom, because you don't need to make...
Well, no, you still have to raise money, right?
Sure, but like if it's a great idea, then I'm sure there'll be like support to make it
happen.
Yeah.
If there's something where I don't see a way to profitably do it, and I think it's very important that it be done, yeah, I definitely wouldn't be opposed to it. Is that, by the way, where you were leading? Like, if I ask you: in 2024, what is Fin doing? Do you have a nonprofit startup?
I don't have something concrete in mind.
That kind of thing feels very exciting to me, to at least try out.
Gotcha, gotcha. I guess my prior is that there are profitable ways to do many things if you're more creative about it. But there are obvious counterexamples, so many different things where, yeah, I could not tell you how you could make that profitable, right? Like, if you have something like 1Day Sooner, where they're trying to, you know, speed up challenge trials, it's like, how is that a startup? It's not clear. So yeah, I think there's a big branch of the decision tree where I think that's the most compelling thing I could do.
Nice. And maybe a connected question is, I'm curious what you think EA in general is underrating, from your point of view. Also, maybe another question you could answer instead is what you think I'm personally getting wrong, or got wrong, but maybe the more general question is the more interesting one for most people.
So I think when you have
statements which are somewhat ephemeral or ambiguous. Like, let's say there's some historian like Toynbee, right? He wrote A Study of History, and one of the things he says in it is that civilizations die when the elites lose confidence in the norms that they're setting and lose the confidence to rule. Now, I don't think that's actually an x-risk, right? I'm just trying to use that as an example of something off the top of my head. It's the kind of thing that could be true, but I don't know how I would think about it. I mean, it doesn't seem tractable. I don't know how to even analyze whether it's true or not using the modes of analyzing the importance of topics that we've been using throughout this conversation. And I don't know what that implies for EA, because it's not clear to me. Like, maybe EA shouldn't be taking things that are vague and ambiguous like that seriously to begin with, right? But yeah, if there is some interesting way to think about statements like that from a perspective that EAs could appreciate, including myself, from a perspective that I could appreciate, I'd be really interested to see what that would be. Because there does seem to be a disconnect where, when I talk to my friends who are intellectually inclined, who have a lot of interesting ideas, it requires a sort of translation layer, almost like a compiler, or like a transpiler that, you know, converts code from one language into, like, assembly here. It does create a little bit of inefficiency, and potentially a loss of topics that could be talked about.
Nice.
That feels like a great answer.
I'd just say it's something I'm kind of worried about as well, especially leaning towards the more speculative, longtermist end. It seems really important to keep hold of some real truth-seeking attitude in those areas where the obvious feedback on whether you're getting things right or wrong is much harder to come by, and often you don't have the luxury of having it. So yeah, I think just keeping that attitude in mind seems very important.
I like that.
What is your answer, by the way?
What do you think that you should improve on?
Yeah, I guess off the top of my head, maybe I have two answers, which go in exactly opposite directions. So one answer is that something that looks a bit like a failure mode, which I'm a bit worried about, is that as or if the movement grows significantly, the ideas that originally motivated it, which were quite new and exciting and important ideas, somewhat dilute. Maybe, and I guess this is related to what you said, we lose these attitudes of just taking weird ideas seriously, of scrutinizing one another quite a lot, and it becomes a bit like, I don't know, greenwashing or something, where the language stays but the real fire behind it, of taking impact really seriously rather than just saying the right things, fades away. So I don't think I want to say EA is currently underrating that in any important sense, but it's something that seems worth having as a worry on the radar.
And then the roughly opposite thing that seems also worth worrying about: I think it's really worth paying attention to, or considering, the best-case outcomes where a lot of this stuff grows quite considerably. You know, thinking about how this stuff could become mainstream, and thinking about really scalable projects as well, rather than just small interventions on margins. In at least some worlds that becomes very important. And so one part of that is maybe just learning to make a lot of these fields legible and attractive to people who could contribute, who are learning about it for the first time. And just, yeah, in general, planning for the best case, which could mean thinking in very ambitious terms, thinking about things going very well. That also just seems worth doing. So it's a very vague answer, and maybe it's not worth much, but that's my answer.
Perhaps, you know, the opposite of what you were saying about not taking weird ideas seriously enough in the future is maybe taking weird ideas too seriously now. It could be the case that just following basic common-sense morality, kind of like what Tyler Cowen talks about in Stubborn Attachments, is really the most effective way to deal with many threats, even weird threats. You have areas that are more speculative, like biorisk or AI, where it's not even clear that the things you're doing to address them are necessarily making them better; I know there's concern in the movement that the initial grant they gave to OpenAI might have sped up AI doom. Maybe the best thing to do in cases where there's a lot of ambiguity is just to do more common-sense things. And maybe this is also applicable to things like global health, where malaria bed nets are great, but the way that, in fact, hundreds of millions of people have been lifted out of poverty is just through implementing capitalism, right? It's not through targeted interventions like that. Again, I don't know what this implies for the movement in general. Like, even if just implementing the neoliberal agenda is the best way to decrease poverty, what does that mean somebody should do? What does that mean you should do with the marginal million dollars, right? So it's not clear to me. It's something I hope I'll know more about in five to ten years; I'd be very curious to talk to future me about what he thinks about common-sense morality versus taking weird ideas seriously.
I think one way of thinking about quote-unquote weird ideas is that in some sense they are the result of taking a bunch of common-sense starting points and then just really reflecting on them hard and seeing what comes out. So I think maybe the question is how much trust we should place in those reflective processes, versus what my prior should be on weird ideas being true because they're weird: is that good or bad? And then, separately, one thing that just seems kind of obvious and important is that if you take these ideas, first of all, you should ask yourself whether you actually believe them, or whether they're just kind of fun to say, or you're just kind of saying that you believe them. Sometimes, I know, it's fun to say weird ideas, but it's like, okay, I actually don't have good grounds to believe this. And then, second of all, if you do in fact believe something, it's really valuable to ask: if you think this thing is really important and true, why aren't you working on it, if you have the opportunity to work on it? This is like the Hamming question, right? What's the most important problem in your field, and what's stopping you from working on it? And obviously not everyone has the luxury of dropping everything and working on the things that they in fact believe are really important. But if you do have that opportunity, that's a question which, I know, is maybe just valuable to ask.
Maybe this is a meta-objection to EA, which is that I'm aware of a lot of potential objections to EA, like the ones we were just talking about, but there are so many other ones where people will identify, yeah, that's an interesting point, and then nobody knows what to do about it, right? It's like, you know, should we take common-sense morality more seriously? It's like, oh, that is an interesting debate. But then how do you resolve that? I don't know how to resolve that. I don't know if somebody's come up with a good way to resolve that.
I guess it kind of hooks into the long reflection stuff a little bit, because one answer here is just time. So I think the story of people raising concerns about AI is maybe instructive here, where, you know, early on you get some real kind of radical, out-there researchers or writers raising this as a worry, and there's a lot of weird baggage attached to what they write. And then maybe you get a first book or two, and then you get more prestigious or established people expressing concerns. I think one way to accelerate that process, when it's worth accelerating, is just to ask that question, right? Like, do I in fact, can I go along with this argument? Do I see a hole in it? And if the answer is no, if it just kind of checks out, even though obviously you're always going to be uncertain, but if it's like, yeah, this seems kind of reasonable, then by default you might just spend a few years just kind of living with this thing that I guess I think is true, without acting on it. You can, like, skip that step.
I don't know, I'm not sure I agree. I think maybe an analogy here is, I don't know, you're in a relationship and you think, oh, well, I don't see what's wrong with this relationship, so instead of just waiting a few years to try to find something wrong with it, might as well just tie the knot now and get married. I think something similar applies here. A failure mode, maybe not in EA, because we're EAs and you wouldn't see it in EA, but one you see generally in the world, is that people just come to conclusions about how the world works, or how the world ought to work, too early in life, when they don't seem to know that much about what is optimal and what is possible.
Yeah, that's a great point.
So, yeah, maybe they should just wait a little longer. Maybe they should just integrate these weird, radical ideas as things that exist in the world, and wait until their late 20s to decide: actually, this is the thing I should do with the rest of my career, or with my politics, or whatever.
Yeah, I think that's just a really good point, and I think maybe I'd want to walk back what I said based on that. But I think there's some version of it which I'd still really endorse, which is maybe, you know: I've spent some time reflecting on this, such that I don't expect further reflection is going to radically change what I think. You can maybe talk about this being the case for a group of people rather than a particular person. And I can just really see this thing playing out where I just believe something's important for a really long time without acting on it, and that's the thing which seems worth skipping. I mean, to be a tiny bit more concrete: if you really think some of these potentially catastrophic risks just are real, and you think there are things we can do about them, then it sure seems good to start working on this stuff. And you really want to avoid that regret of, you know, some years down the line, oh, I really could have started work on this earlier. There are occasions where this kind of thinking is useful, or at least asking this question: what would I do right now if I just did what my idealized self would endorse doing? Maybe that's useful.
So it seems that if you're trying to pursue, I don't know, a career related to EA, there are like two steps, where the first step is you have to get a position like the one you have right now, where you're learning a lot and figuring out future steps, and then the one after that is where you actually lead or take ownership of a specific project, like a nonprofit startup or something. Do you have any advice for somebody who is before step one?
Huh.
That's a really good question.
I also will just do the annoying thing of saying there are definitely other things you can do other than that kind of two-step trajectory. But yeah.
As in go directly to step two?
Or just never go to step two, and just be a really excellent researcher or communicator or anything else.
Sure, sure, sure.
I think, where you have the luxury of doing it, not rushing into the most salient career option and then retroactively justifying why it was the correct option is quite a nice thing to bear in mind. I suppose often that's quite uncomfortable.
Do you mean something like consulting?
Yeah, something like that.
Yeah, I mean, the obvious advice here is that there is a website designed to answer this question, which is 80,000 Hours. Oh yeah, and there's a particular bit of advice from 80K which I found very useful. After I left uni, I was really unsure what I wanted to do; I was choosing between a couple of options, and I was like, oh my God, this is such a big decision. Because I guess in this context, not only do you have to answer the question of what might be a good fit for me, what I might enjoy, but also, sometimes, what is actually most important, maybe. And how am I supposed to answer that, given that there's a ton of disagreement? And so I just found myself bashing my head against the wall, trying to get to a point where I was certain that one option was better than the other. And the piece of advice that I found useful was that often you should just write off the possibility of becoming fully certain about which option is best. Instead, what you should do is reflect on the decision proactively. That is, you know, talk to people, write down your thoughts, and just keep iterating on that until the dial stops moving backwards and forwards and settles on some particular uncertainty. So it's like, look, I guess I'm 60%, 70% that option A is better than B, and that hasn't really changed having done a bunch of extra thinking. That's roughly speaking the point where it might be best to make the decision, rather than holding out for certainty. Does that make sense?
Yeah, it's kind of like gradient descent, where if the loss function hasn't changed in the last iteration, you call it.
Yeah, nice, I like it.
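A toy version of that stopping rule, in the spirit of early stopping on a plateaued loss (an editorial illustration; the credence numbers are invented):

```python
# Decide once the "dial" stops moving: if your credence that option A beats
# option B barely changes for a couple of reflection rounds, call it.

def decide_when_stable(credences, tolerance=0.05, patience=2):
    """Return the round index at which the running estimate has settled."""
    stable_rounds = 0
    for i in range(1, len(credences)):
        if abs(credences[i] - credences[i - 1]) < tolerance:
            stable_rounds += 1
            if stable_rounds >= patience:
                return i              # the dial has settled; decide now
        else:
            stable_rounds = 0         # a big swing resets the clock
    return len(credences) - 1         # never settled; decide at the end anyway

history = [0.50, 0.62, 0.58, 0.66, 0.65, 0.66, 0.67]   # made-up credences in A over B
round_idx = decide_when_stable(history)
print(f"decide after round {round_idx}, at ~{history[round_idx]:.0%} in favour of A")
```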
That's super interesting, though. I guess one problem maybe that somebody might face is that before they've actually done things, it's hard to know... like, not that this is actually going to be my career, but the podcast was just something I did because I was bored during COVID; the classes went online and I just didn't have anything else to do. I don't think it's something I would have pursued if I'd thought of it as a career. Well, I never thought of it as a career, right? But just doing things like that can potentially lead you down interesting avenues.
Yeah, yeah, yeah, I think that's a great point.
I guess we're both involved with this blog prize, and there was a kind of mini prize last month for people writing about the idea of agency, and what you just said links into that really nicely. There's this property of going from realizing you can do something to actually doing it, which seems both really valuable and learnable. So yeah, just going from the idea of "I could maybe do a little podcast series" to actually testing it, and being open to the possibility that it fails but you learn something from it, is just really valuable.
Also, we were talking about sending cold emails in that same bit of the conversation, right? Like, if there's someone you look up to, and you think it's very plausible that you might end up in their line of research, and you think there's a bunch of things you can learn from them, then, as long as you're not demanding a huge amount of their time or attention, you can just ask to talk to them. I think finding a mentor in places like this is just so useful, and so is just asking people if they could fill that role, again, in a kind of friendly way. It's maybe a move people don't opt for a lot of the time. But yeah, it's about taking the non-obvious options, being proactive about connecting to other people, seeing if you can physically meet other people who are interested in the same kinds of weird things as you. Yeah, this is all extremely obvious, but I guess it's stuff I would really have benefited from learning earlier on.
Yeah, and the unfortunate thing is it's not clear how you should apply that in your own circumstance when you're trying to decide what to do.
Okay, so yeah, let's close out by plugging Effective Ideas, the blog prize you just mentioned, and then the red-teaming EA contest, which we already mentioned earlier.
Yeah, yeah, yeah.
If you just want to leave links and, again, summarize them for us.
Cool. I appreciate that.
Yeah, so the criticism contest: the deadline is the 1st of September. The canonical post that announces it is an EA Forum post, which I'd be very grateful if you could link to somewhere, but I'm happy to do that. The prize pool is at least a hundred thousand dollars, but possibly more if there are just a lot of exceptional entries, and hopefully all the relevant information is there. And then, yeah, there's the blog prize as well, which I've been helping to run; I think you mentioned it at the start. So the overall prize is, yeah, $100,000, with up to five of those prizes, but there are also these smaller monthly prizes that I just mentioned. So last month the theme was agency, and the theme this month is to write some response or reflection on this series of blog posts called the Most Important Century series by Holden Karnofsky, which, incidentally, people should just read anyway. I think it's really, truly excellent, and it's kind of remarkable that one of the most affecting series of blog posts I've basically ever read was written by the co-CEO of this enormous philanthropic organization in his spare time. It's just kind of insane. Yeah, so the website is effectiveideas.org.
Yeah, and then obviously, where can people find you? Your website, Twitter handle, and where can people find your podcast?
Oh yeah, so the website is my name dot com, finmoorhouse.com, and Twitter is my name. And the podcast is called Hear This Idea, as in, listen to this idea. So it's just that phrase dot com, and I'm sure if you Google it, it'll come up.
But by the way, what is your probability distribution of how impactful these criticisms end up being, or just how good they end up being? Like, if you had to guess, what is your median outcome, and then what is your 99th or 90th percentile outcome, for how good these end up being?
Yeah.
Okay.
That's a good question.
I feel like I want to say that doing this stuff is really hard, so I don't want to discourage posting by saying this, but I think, you know, maybe the median submission is really robustly useful, absolutely worth writing and submitting. That said, the difference between the most valuable posts of this kind, or work of this kind, and the median effort is probably very large, which is just to say that the ceiling is really high. If you think you have a 1% chance of influencing $100 million of philanthropic spending, then there is some sense in which an impartial philanthropic donor might be willing to spend roughly 1% of that amount to find out that information, right? Which is like a million dollars. So yeah, this stuff can be really, really important, I think.
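That back-of-the-envelope calculation, spelled out (the numbers are the ones quoted in the conversation, treated purely illustratively):

```python
# 1% chance of redirecting $100M of spending is "worth" about $1M of effort
# to an impartial donor, on a simple expected-value view.
p_influence = 0.01            # chance a criticism actually changes the allocation
spending_at_stake = 100e6     # dollars of philanthropic spending in question
value_of_information = p_influence * spending_at_stake
print(f"${value_of_information:,.0f}")   # -> $1,000,000
```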
Yeah, okay, excellent. The stuff you're working on seems really interesting, and the blog prizes seem like they might have a potentially very big impact. I mean, our worldviews have been shaped so much by some of the bloggers we talked about, so if this leads to one more of those, that alone could be very valuable.
So Finn, thanks so much for coming on the podcast.
This was the longest, but also
one of the most fun conversations I've gotten a chance to do.
The whole thing was so much fun.
Thanks so much for having me.
Thanks for watching.
I hope you enjoyed that episode.
If you did and you want to support the podcast,
the most helpful thing you can do is share it
on social media and with your friends.
Other than that, please like and subscribe on YouTube
and leave good reviews on podcast platforms. Cheers. I'll see you next time.
