Angry Planet - Silicon Valley Wants ‘More Everything Forever’
Episode Date: June 17, 2025

Listen to this episode commercial-free at https://angryplanetpod.com

The futures of the past have curdled into the nightmares of the present. The richest and most powerful people the world has ever known want to colonize Mars, live forever, and digitize human consciousness. To make these technological miracles come to pass, they say, will require people to dramatically change the way they live and work. Will it be worth it? Does science even say it's possible?

On this Angry Planet, astrophysicist and author Adam Becker joins us to explain all the problems with Silicon Valley's dreams of the future. It's not a short list. Much of the tech, and even the physics, doesn't work the way techno-utopians say it does. Some of the people hawking robot slaves and immortality are chasing the impossible for tragic personal reasons. Others are just trying to sell you something. It's all the subject of Becker's new book: More Everything Forever.

Francis Fukuyama and the end of everything
"Death is the ultimate limit, the ultimate loss of control."
Moore's law, the singularity, and Ray Kurzweil's father
The Face on Mars and large language models
Elizas all the way down
The false binaries of the tech bro future
Silicon Valley's lost boys
"Death is avoidable and taxation is theft."
Stasis for me but not for thee
"Mars sucks"
Against Life Extension by Francis Fukuyama
More Everything Forever on Bookshop
More Everything Forever on Amazon

Support this show: http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.
Transcript
Love this podcast.
Support this show through the ACAST supporter feature.
It's up to you how much you give, and there's no regular commitment.
Just click the link in the show description to support now.
Hello and welcome to another conversation about conflict on an angry planet.
I am Matthew Gault.
Jason Fields is in an office today wearing a suit for the first time in years.
Pray for him.
But he will be back.
So today we're doing kind of what I would call part three in an ongoing series we've had,
exploring the masters of the universe as they are now, the ideology of the masters of the universe, or maybe just the people who think that they're the masters of the universe.
And to help us do that, we have a new guest, Adam Becker, talking about his wonderful and kind of prescient book, More Everything Forever.
Sir, can you kind of introduce yourself? I know before we jumped on, you said you tend to go on when people ask you a question.
We're a rambly show, but give me, like, the back-of-the-book summary of what this book says.
Sure.
So I'm a journalist with a background in astrophysics and specifically a science journalist.
And this book is about the horrible ideas that tech billionaires have about the future and why they don't work.
That's succinct.
That's beautiful.
Thank you.
Thanks.
So I was thinking about this conversation earlier in the week.
Did you happen to read the new essay from Francis Fukuyama?
Actually, I didn't.
No.
So he published something on Persuasion, which is Yascha Mounk's Substack,
like on Monday or Tuesday.
Oh, wow.
And it was an argument for death and decay.
He is 72.
and he said, you know, I've lived in the Bay Area for a long time,
and I've been around a lot of people that sure think that they should live forever.
Let me tell you why that's a bad idea.
And he's like, even at 72, I know that younger people are tired of hearing about me
and about the end of history, et cetera, et cetera.
And at a certain time, you've got to move on.
You've got to move on from all of this.
And like death opens up the door for great change.
And something I've often thought about
just like looking at Peter Thiel
and Brian Johnson and a lot of the people you talk about
in this book. And I think
you even say this kind of towards the end is that
kind of the deal with all these guys
at the end of the day
is an inability to reckon
with their own mortality.
Yeah.
So can you talk about
the fear of death and why that seems to
be driving, like, so many bad ideas? I know we're starting off real light.
Oh, yeah, yeah, totally.
I mean, one of the books that I read as research for this book
is an incredibly light book called Death.
And the cover, I have to say, is like the most hardcore metal cover that I've ever seen for a philosophy book.
It's just a crow.
But yeah.
Anyway, no, I think it's completely natural to be afraid of death.
I mean, I'm afraid of death.
I think it's even healthy to be afraid of death, to a point.
But death is not something that we can get around.
It is coming.
We all die.
and there is not a way around that.
You know, there's certainly, you know, we've had major medical advances that have made it possible to live longer, healthier lives.
And that could very well continue to a point.
But we are our bodies.
We do not, you know, inhabit our bodies.
We do not, like, you know, haunt our bodies as spirits, you know, living in a meat sack.
We are our bodies.
and our bodies have their limits.
And those limits can probably be extended a bit farther than we have,
maybe even quite a bit farther, but eventually we die.
And there's this, you know,
there are arguments to be made about, like, yeah,
it's good that we die.
Ultimately, we would not want to live literally forever.
And that's true.
I don't think it would be good for us to live literally forever.
but the real key here with these tech billionaires is that they don't accept the idea of any kind of
limit or any kind of loss of control.
And death is both of those.
It's the ultimate limit and the ultimate loss of control.
And even in our lives, even while we're alive and healthy and in the world,
we do not have control over, you know, 90, 99% of the things that happen to us.
And as, you know, to paraphrase an author I like, a lot of that remaining 1% is whether or not we're going to accept that fact.
You know, lots of things are outside of our control.
But that's very scary for a lot of people.
And I think it's especially scary for these tech billionaires who,
really believe that they are the masters of the universe, that they do have control over their
own lives. And it is true that due to their power and influence, they have more control
than most of the rest of us, but even they are subject to the whims of fate and the ultimate
thing they don't have control over is death. And so it's this ultimate fear that drives so much
of what they do.
Right. Seeding the stars, developing these AI systems, nanotechnology, or, to go back even farther, cryogenics.
A lot of this stuff, underneath all of it, is about the haves being able to live either forever or much longer, even if it means dramatically changing their bodies and their interaction with their bodies, and maybe becoming something else, right?
Yeah.
So like at the core of this book, or I think like the structure of this book is really fascinating because it's kind of built around characters.
And then you use these characters to kind of explore these different ideologies.
Yeah.
So one of the ones that we've never talked about on the show before, but I think is important.
And it's kind of, like, I would almost say, like, an ur-character for a lot of this stuff.
Is it Kurtzweil? I always say his name wrong.
I think it's Kurzweil, but yeah, Kurzweil, Kurtzweil.
I've heard it pronounced both ways, and I asked to interview him for this book, and he said no.
This is a side note real quick. This is one of my favorite parts of the book: at the end you've got, like, the two pages.
Like, this is everybody I actually interviewed and like the days that I interviewed them.
And here's the people I asked that I maybe talked about in the book, like your Mark Andreessen, Elon Musk.
and here's, you know, what they said or didn't say.
Yeah.
Which I thought was very, that was really interesting to see that in a book like this.
But thanks.
Tell me who is Ray Kurzweil and what is the singularity?
So Ray Kurzweil is a technologist and inventor and an author and self-styled futurist.
And, you know, credit where credit is due, he really has invented and created some very good and interesting technology. He was a pioneer in text-to-speech and speech-to-text technology that was really useful in creating assistive devices for the visually impaired. He was an early pioneer in electronic keyboards and electronic instruments.
but the singularity is a different beast.
The singularity is this idea that Kurzweil and others have advocated.
Kurzweil didn't invent it, but he was, and probably still is, its most notorious and well-known advocate.
The singularity is this idea that there has been this accelerating trend, an exponential trend,
in the history of technology and before that in the history of biology and the cosmos,
that is leading to a point in our near future where our technology will keep improving itself
and working upon itself to the point where the world is fundamentally changed
in ways that will make our lives completely unrecognizable to us today. And to be more specific about that, a key role in this is played by artificial
intelligence. And the idea is that AI will get smarter and smarter and smarter and will be
able to design even smarter AIs. And that will lead to a self-reinforcing, accelerating cycle,
which will lead in short order to a godlike, superintelligent AI, an AGI. AGI is artificial general intelligence.
And the idea is that that godlike AI will transform the world and, you know, remove all of
these limits.
It will extend our lives as long as we want.
It will allow us access to the stars and limitless or near limitless energy.
It will reshape literally the fabric of everyday life to make it, you know, subject to our every whim, or so Kurzweil says. And it's all coming very, very soon. Kurzweil has written several books about this. The most famous one was a book in 2005 called The Singularity Is Near.
And then last year, in 2024, he put out a sort of sequel to that book titled, and I really, I'm not kidding, this is the real title,
The Singularity Is Nearer.
And in both books, he said, yeah, human level AI is showing up in 2029, and the singularity will
arrive around 2045.
But you don't even have to make it to that point because starting around 2029, the advent of human
level AI and other exponential trends and technology will mean that, you know, our lives
will start to be extended essentially as long as we want by that point. And there's no reason
to believe any of this and a great deal of reason not to. Like there's no real science behind this.
The exponential trends that Kurzweil points to are mostly illusions, the exponential trends that
are real always end. And that's sort of the fundamental problem and the fundamental mistake
that Kurzweil has made. When you see an exponential trend in nature, the one thing that you know always has to be true is that it ends. Exponential trends end.
You know, the classic example of an exponential trend is like a bacteria colony growing in a
little dish of nutrients. And what happens is, yeah, that that colony of bacteria grows exponentially.
The number of bacteria goes up and up and up and up until they eat all the food in the
dish and then it crashes and they all die. And this is what happens with exponential trends. They either lead to a crash or they level off. And that's just how it works, because there are always limits.
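(A minimal sketch of the dynamic Becker describes, with made-up numbers of my own: logistic growth, meaning exponential growth against a fixed carrying capacity, tracks a pure exponential early on and then levels off.)

```python
# Toy illustration (mine, not from the episode; parameters are arbitrary):
# unlimited exponential growth vs. logistic growth in a dish with capacity K.

def simulate(r=0.5, K=1_000_000, n0=10.0, steps=51):
    exp_n, log_n = n0, n0
    for t in range(steps):
        if t % 10 == 0:
            print(f"t={t:2d}  exponential={exp_n:>16,.0f}  logistic={log_n:>12,.0f}")
        exp_n *= 1 + r                        # no limit: keeps compounding
        log_n += r * log_n * (1 - log_n / K)  # limit: growth stalls as food runs out

simulate()
# The two columns match early on, then the logistic column levels off near K
# while the exponential column rockets toward absurdity.
```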
It's so funny. I've read perhaps more esoteric and new age texts from the Bay Area in the 1960s than is perhaps healthy.
Yeah.
But in a lot of that stuff, there is a lot of "Moore's Law is going to create the singularity," well before Kurzweil. Like you said, he didn't invent this.
And from the view of the 1960s, if you're 20 or 30 years old, reading or writing something like that, there's something about it that I can forgive a little bit more, because you've seen such great change in your past,
there's such turbulence in your present, and it seems like the number could always go up from where you are.
Yeah.
And for a while, Moore's law held and every couple years the computers got faster and faster.
I have a harder time.
I don't feel as sympathetic for anyone that's spouting that line now.
Yeah.
And yet the line keeps getting spouted.
Yeah.
Sorry, go ahead.
Oh, yeah.
I mean, it's a fundamental, like, tenet of belief here in the Bay.
You know, like, it's, it's what so...
"Belief" being the key word, I think.
Absolutely.
Yeah.
I mean, it is what a lot of the, like, philosophy of Silicon Valley, insofar as it has a philosophy, was built on: this belief that there would always be more, and that things would always get faster and better, and that there were no limits.
And now we're hitting up against many of those limits, and the ones that we haven't hit up against are at least, you know, pretty clearly in the front windshield.
It's just making the people who are proclaiming that get even louder.
Well, this is what happens when millenarianism kind of meets the end, or when prophecy fails, right?
Yeah.
There's a doubling down, like almost inevitably every time.
Yeah, exactly.
I mean, I think that's a lot of what The Singularity Is Nearer is about, right? You know, Kurzweil made all kinds of predictions in 2005
about what would happen over the next, you know, 25 to 40 years after that book came out. And now,
20 years later, most of those predictions have not come true. And Kurzweil hasn't accepted that.
There's this, there was this interview that he did in like 2009, I think it was in Rolling Stone.
And in that interview, he said, yeah,
In 15 years, I'll be 70, but I, or 75.
So, yeah, I think 75.
But I expect to be biologically 40.
And I remember reading that at the time.
He does the Brian Johnson thing where he's like, I'm taking the right supplements.
Yeah.
They run these blood tests on me.
Actually, I have the blood of a 40-year-old man.
Right.
Exactly.
And, like, it's not clear what biologically 40 means other than, you know, being actually 40,
especially because, you know, 40 looks different for different people.
But about almost exactly 15 years later, I think it was 14 years later.
So like 2023, the same journalist at Rolling Stone interviewed him again.
And I really, really wanted him to ask, hey, man, how's that biologically 40 thing going for you?
And he didn't ask that.
And I understand, you know, there are other things they had to talk about.
but man, you know, the denial on this stuff, you know, Kurzweil was asked, I think last year,
hey, why hasn't all of this stuff that you were talking about with, say, brain computer interfaces
come true? And he blamed safety regulations and, you know, testing. And then he said,
but it's okay, we're still on track for the singularity because we're going to get those interfaces
once we can just simulate the brain in a computer, which, first of all, there's no reason to think
that we will be able to do that anytime soon and perhaps ever. And second, how would we go about
building that simulation without the brain computer interfaces that it's too dangerous to test
to create? And also the testing is not the only barrier to creating those interfaces. So yeah,
it's just this delusion that they can't break free of. And one of the reasons why I picked Kurzweil
as one of the characters to emphasize or to profile in the book is that he's very honest and clear and, I think, sympathetic about where his motivation comes from. He says it's because of his dad.
I see him, yeah, I see him as a tragic figure.
Yeah, me too. For sure.
Can you, yeah, tell me, tell us about the thing with his father?
Yeah, absolutely. And I want to be really clear: I'm not just being glib when I say I see him as a tragic figure, or that I have a lot of sympathy for him. I really do. And I'd like to think that that comes through in the book. But Kurzweil suffered a real tragedy.
You know, his father, whom he clearly adored, and, like, he seems to have had the kind of relationship with his father that we, you know, would all like to have with our fathers if we don't.
And his dad died relatively young.
And Kurzweil, you know, saw him decline and have heart trouble, but couldn't do anything about it.
And then when Kurzweil was 22 and his dad was, I think in his mid-60s, maybe late-60s,
so, you know, not really young, but not particularly old either.
He died.
And Kurzweil was clearly devastated and said that, you know, that personal experience with death made him want to find a way to get around death using technology because he believes that
with technology you can do anything.
And, you know, I love my dad and I don't want my dad to die, but my dad is going to die at some point,
and so is my mom, and so is everybody else I love, and so am I.
And I don't like that, but I also don't think that I have much of a choice about it.
And I also, like I was saying earlier, wouldn't want to live forever anyway.
So, yeah, but I think Kurzweil's tragedy is that instead of going to therapy, he decided that this was what he was going to do.
And men will literally posit the end of the world as we know it and the beginning of the world to come rather than going to therapy.
Yeah, I mean, some of the scenes that you paint in the book are so sad.
He's got these boxes of artifacts that he attaches to his father.
Does he have some of his DNA as well?
Yeah, I mean, he's got a plan to go collect his father's DNA from his father's grave site
and put together his father's writings.
I mean, he also took a lot of his father's writing and sort of shoved it into a large
language model sometime in the last year or so
to create an early version of what he calls Dadbot.
And he describes this conversation he had with Dadbot.
And it's, I think, really tragic because it looks like a fairly generic conversation with a large language model.
It does not look like a real conversation between, you know, a father and son.
This is a good segue, I think, to start talking about large language models and maybe one of my favorite lines in the book:
seeing intelligence in ChatGPT or an imminent apocalypse in the current state of AI is just a face on Mars for software engineers.
Thanks.
What is the face on Mars and how does it apply to what we read into these conversations with these large language models?
So the face on Mars is like one of my favorite pieces of, you know,
esoteric conspiracy theory woo stuff that people used to subscribe to, and I guess some people still do.
Back in the mid-70s, NASA sent a couple of probes and landers to Mars, the Viking probes,
and those landers also had orbiters associated with them that took, you know, detailed orbital pictures of Mars,
or the most detailed ones that we'd had up to that point.
It was the 70s, so the resolution was, you know, okay.
and there was this one spot on Mars where if you zoomed way in, there was this rock feature that looked, if you kind of squinted at it, a lot like a face.
And this led to all kinds of conspiracy theories about, you know, oh, the ancient aliens who built the pyramids also put this face on Mars as a sign for us and like, and or you know, NASA's covering up alien activity, all this stuff, right?
And then in the 90s and the early 2000s, NASA started sending a lot more stuff to Mars and getting way more detailed pictures of Mars.
And at one point, I think, in the late 90s, they finally got back to the spot where the quote-unquote face was.
And it was just a rock.
It's just a rock.
And if you go back and look at the original picture of the face from the Viking orbiter, it just looks like a rock. It's very clearly the same rock. And, you know, we want to see a face in it,
or we tend to see a face in these things for the same reason we see a man in the moon, right? There's
this very human phenomenon called pareidolia, where we will see patterns, especially human
patterns like faces, where none exist. And it's just the same thing with large language models.
You know, even calling them AI is a kind of marketing gimmick, right?
When I was a kid, now I'm going to date myself, like 30 years ago.
When I was a kid, AI meant Commander Data on Star Trek, right?
It meant like the real deal, the science fictional idea of a machine that could do everything that humans do or almost everything.
And now AI means this crap.
And what we used to call AI now, that's AGI.
They're moving the goalposts by deflating the meaning of the terms.
But, you know, chat GPT, these large language models, they're not AI.
They're text generators.
They automatically generate text without any thought or anything behind it.
It is just pattern matching.
The only thing that these machines do is predict what the next likely word or set of words is given the previous words in the text.
And yeah, sure, that's part of what we do as humans as well, but it's not the whole deal.
We have an idea of what the text refers to.
And we have thought and deliberation behind what we do in a way that these machines don't.
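(To make the "text generator" description concrete, here is a minimal sketch of my own, not Becker's: next-word prediction from counted word pairs. Real LLMs are neural networks trained over vastly more data, but the core objective, predicting the next token from the previous ones, has the same shape.)

```python
# Toy next-word predictor (illustrative only; real LLMs use neural networks
# over billions of parameters, but the objective -- predict the next token
# given the previous ones -- is the same shape).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word: a bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation -- pattern matching, no thought."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (follows 'the' twice; 'mat'/'fish' once)
print(predict_next("cat"))  # -> 'sat' or 'ate' (tied counts)
```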
Yeah, it's funny that this book, with a line like that in it, comes out right as these stories are also landing. I think Rolling Stone and some other places have been writing them: stories of people getting lost in these conversations with these LLMs, developing romantic entanglements and also religious fascinations. And I keep thinking about Plato's Cave, and
just kind of intentionally trapping yourself in there and playing with the shadows, basically,
and thinking that that's really what's going on.
Yeah.
Yeah.
I mean, in some ways, these people are talking to themselves.
They are talking to something that will take what they say and reflect it back to them.
You know, there's an old joke.
I don't know who came up with it, but it's from the early days of AI, I think the 70s or 80s,
that the easiest kind of person to, you know, get an AI to simulate is someone in a coma, right?
Because that's, that's real easy. All you need is a text prompt that never returns anything.
But there was this old program called Eliza. And I really think that a lot of the hype around current AI systems
could really stand to be informed by a better idea of the history of AI chatbots.
You know, not that the new ones aren't doing anything new.
They are doing some new stuff.
But, you know, they're somewhat less impressive when you look at the older stuff.
Eliza was an attempt to create a different kind of somewhat easy person to simulate:
a kind of therapist that just sort of repeats what you've said back to you with slight modifications.
And so you would say, I'm not feeling so great today, Eliza.
And Eliza would say, oh, I'm sorry to hear that you're not feeling so great.
Why do you think that might be?
And then you'd say, oh, well, you know, I got in a fight with my partner.
And I'm really bummed about it.
and Eliza would say, oh, I'm sorry to hear that you've gotten in a fight with your partner and you're really bummed about it.
Why might that be?
I mean, except it had, you know, a few different options.
It wasn't just that every time.
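(The reflection trick Becker describes is simple enough to sketch in a few lines. This is a hypothetical miniature in the spirit of Weizenbaum's ELIZA, not its actual code or rule set, and it only handles the gloomy-statement pattern from the example above.)

```python
# A hypothetical miniature in the spirit of ELIZA (not Weizenbaum's actual
# code): swap pronouns in the user's statement and reflect it back.
import random

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "i'm": "you're", "i've": "you've"}
PROMPTS = ["Why do you think that might be?",
           "How does that make you feel?",
           "Tell me more about that."]

def eliza(statement):
    words = [REFLECT.get(w.lower(), w.lower())
             for w in statement.rstrip(".!?").split()]
    return f"I'm sorry to hear that {' '.join(words)}. {random.choice(PROMPTS)}"

print(eliza("I'm not feeling so great today"))
# -> I'm sorry to hear that you're not feeling so great today.
#    Why do you think that might be?
```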
At the time in the 70s, and I'm pretty sure Eliza came out in, I think, the early 70s.
There were people...
1960s, actually.
Oh, even earlier. Amazing.
People reported that they were having conversations with Eliza and getting a lot out of it.
and feeling that, you know, Eliza really understood them.
This is just that, again, but, you know, beefier, both because the capabilities of the system are somewhat better and because there are more people who have access to it.
You know, back in 1966, if you wanted to have a conversation with Eliza, you needed to have a friend who had access to a mainframe and a text terminal.
Now you just need a phone.
So, yeah, I think there is this desire, this desire to be seen in a safe way, right?
And have this sort of perfect conversation partner who will always see you and reflect back to you everything that you say without ever needing anything of their own.
And that only validates you.
and that's not how actual people are.
And if you just use that as a substitute for real social interaction, you will get lost.
And I think that there's this intense reaction.
You know, I work in the tech press, so I see a lot of this. There's these binaries, right?
Either this stuff is going to save the world or it's going to destroy all of us. And those are kind of the futures that are pitched by a lot of the people in your book, too.
Either the thing that I'm talking about is going to happen and it's going to save us or it's going to
destroy all of us, like in the case of the LessWrong gentleman whose name I can never remember.
Yudkowsky, yeah.
Yes, thank you. I do tend to think, and, you know, you say this in the book as well,
that these are all false dichotomies. Yeah. Like, neither of these things is going to happen.
Yep.
The future is unknown and probably a little bit weirder than than we are anticipating.
Sure.
All of this to say, like, I don't think these large language models are going anywhere.
I think we are going to be making use of them in one way or another.
Oh, you don't.
Well, you've got kind of a...
I would like to think that we'll find a way to rein them in because I think that we're doing.
Oh, I think so, too.
Doing a lot of damage.
I think so, too.
I think that, like, the economics and the environmental costs of them are going to be more shocking to the wider public than they are perhaps now.
Yeah.
You know, like the data centers running on methane in Mississippi.
Just, this stuff is not good.
And for what? For a word calculator?
Yeah, exactly.
And there's also the human cost of the people doing the reinforcement training, the training to make sure, or to attempt to make sure, that these things don't produce harmful content or illegal content. The people doing that training, often in, you know, places in Africa like Kenya, are incurring a really heavy psychological cost. And so you've got the environmental
cost, you've got the human cost, and you've also got, you know, the massive scale of
intellectual property theft. And all of that needs to be addressed before I think these things
can be integrated into society in a healthy way. And I'm worried that that's not going to happen.
Yeah, I don't know what happens to make it burst, but I do see like an inevitable receding.
But I do still think, like,
even if it's just to spin up something
on a computer you have at home
like some model you've pulled off of hugging face
or whatever people are going to make use of these things
yeah right
they may not be as
all over the places they are now
but I think they're going to be around
and and I also don't think
they're going to get hooked up to 3D printers
and produce
like a disease that wipes us all out,
like in the fevered imaginings of,
say his name for me again.
Eliezer Yudkowsky.
Thank you.
And tell us,
we've talked about him on the show before,
but you actually interviewed him.
He was one of the people that said yes.
Yes.
Right.
So what was your read on him?
What were your interactions like?
I mean,
so Yudkowsky is,
Yudkowsky is self-taught in a pretty extreme way, right? He's supposedly an AI researcher, but he doesn't have a PhD in AI or even a college degree. And that's not to say that self-taught
people never produce anything of worth. But my impression of Yudkowsky is that he is exactly the stereotype of why being self-taught can be bad, right?
Because the problem with being completely self-taught is it might lead you to disregard
everyone else's contributions and think that you can just reinvent everything on the fly.
And Yudkowsky has kind of said that, almost that explicitly, about a great many things, and just dismissed expertise in general in a lot of different fields of science and human endeavor. And I think that's a serious mistake. You know,
you can be self-taught and then go out and learn a lot of things and then contribute and make
real meaningful contributions to the community of people working on a particular area. But you need
to know what's come before you and you need to, you know, work with that because there's no one
who's ever lived, who is so smart that they're smarter than everyone else who's ever been around combined.
And, you know, Yudkowsky, I think my impression of him is that that's not really what he believes in his heart of hearts.
Maybe I could be wrong.
Maybe I've misread him.
But, you know, he reminds me of a kid in a lot of ways.
He reminds me of like a 15-year-old who's very, very smart and believes that they know more than everyone else.
And, you know, I knew a lot of 15-year-olds like that when I was 15.
I was kind of like that when I was 15.
But then I grew up.
And Yudkowsky is, you know, three times that age.
He's not 15.
He's 45.
I think. And I have the impression that he hasn't grown up, which is also, I think, kind of tragic in a different way from Kurzweil.
Yeah, I kind of see him as the archetypal millennial internet addict.
Yeah. Yeah.
You kind of build this, like, reinforcing system around yourself, especially as, like, a forum moderator, right? You get on one of these places, you build your own community, you set up the rules, and anyone that acts against those rules is going to get flung outside of the group. And in that, you kind of stop taking in criticism, and maybe stop listening to your own self-criticism, and, you know, start to believe your own hype.
Yeah.
Even if, even if it's stuff that is patently ridiculous to people on the outside.
Yeah.
Yeah.
No, I think, I think that's right.
And, like, I think that's right, and I say that as a millennial internet addict.
Yeah.
I mean, we come by it.
We've seen it before.
Yes.
You know?
Yes, indeed.
Yeah.
Like, we've, we all dug ourselves some bad holes when we were 16, 17.
Oh, totally.
Even a little bit older.
Yeah.
Yeah.
Yeah.
Yeah.
But then, you know, we grow up.
And I think that's a lot of what's going on with the tech billionaires as well is, you know, there's some stunted development going on here where they just don't grow up.
I mean, you take a look at Peter Thiel, who, you know, boasted in 2009 when he was probably roughly my age, that he was remaining true to the faith of his teenage years, that death is avoidable and taxation is theft or something along those lines.
and he definitely said that he remained true to the faith of his teenage years.
And, you know, almost no matter what the faith of your teenage years was, that's not something to be proud of.
You know, we grow and change.
That's an important part of being alive.
Which is so funny because some of them also argue against stasis.
Yeah.
Yeah.
Yeah.
Yeah.
Yeah, yeah, exactly.
Like, they're afraid that our civilization will stagnate.
But they themselves, they got it all figured out.
So they can stagnate.
That's cool.
That's cool because, you know, they're good.
They've got it.
But, but, you know, if our civilization never changes and never grows, that's real bad
because growth is always good no matter what.
And I just, I just... it's a bizarre belief that's completely divorced from reality.
Can you talk a little bit more about the civilization of stasis thing? Because it really, it really threw me, reading the quotes.
Yeah, exactly, because it's so, it's so disconnected from, as we just said, like, what they believe about themselves.
Yeah.
It's really wild that they can't step outside of themselves and see how hypocritical what they're saying is.
Yeah.
Yeah.
No, I mean, Jeff Bezos is a really good example for this, right?
Bezos has talked, I think he used the exact phrase, civilization of stasis.
He says that Blue Origin is the best thing that he could be doing with his money.
And he says, you know, the reason for that is that if we stay here on Earth, then we're going to run out of energy.
And he says, you know, even if we stick to renewables,
there's just a limited amount of solar energy that comes in from the sun that hits the earth.
And our rate of growth in energy usage as a species has been going up exponentially.
And in a few hundred years, we're going to hit a limit.
And actually, all of that is true.
That is all correct.
And what he doesn't say, but which is also true is even if we found some energy source
that wasn't related to solar energy.
Like, you know, solar is obviously solar and wind is also indirectly solar energy and all that.
But if, you know, say we started using fusion energy, which is something that could come down the line decades from now,
or if we, you know, really upped our usage of uranium for fission plants or whatever, something like that,
we'd still have a problem on roughly the same time scale, just because of the laws of thermodynamics saying that we would hit, you know, too much unavoidable waste heat from our energy consumption, and it would make the Earth too hot.
It would be like a different and even more pernicious kind of global warming.
So Bezos is actually right about that.
The conclusion that he draws from that is that if we don't leave Earth as a species and get most of us or all of us off Earth, then we will stagnate and become a civilization
of stasis because we will have to limit the amount of energy that we use. And that means,
you know, horrible rationing according to Bezos. And so therefore, we should go out into space.
And he says in space, we can put, you know, trillions of people into space stations all around the solar system, and with a trillion people living in space stations, we'd have a thousand Mozarts and a thousand Einsteins, and it would be a new golden age, and we could, you know,
take heavy industry off of Earth and make it into a, you know, natural paradise. And, and,
there's a lot of problems with this, like those space stations are just an unrealistic fantasy
that we don't know how to build. But the key problems that he seems to have missed here
are first of all, yeah, it's true that we're going to have that limit on our energy usage if we keep growing it exponentially the way that we have in a few hundred years here on Earth.
But if you go out into the solar system, then your source of energy is the sun, and that's a limited resource as well.
And if you keep growing at that rate, it only buys you about an extra thousand years-ish.
And then if you say, okay, but let's go beyond the sun, you run into the problem that we can't go faster than the speed of light. But if you spot Jeff Bezos a warp drive from Star Trek, which you shouldn't do, that's not coming, but if you do that,
he still only gets like another couple thousand years beyond that before he's using all of the
energy in the observable universe.
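(Becker's timescales check out on the back of an envelope. The figures below are my own rough assumptions, not numbers from the episode: world energy use of about 20 terawatts growing at a couple percent a year, compounded against each successive ceiling.)

```python
# Back-of-envelope check on the energy-growth timescales. The inputs are my
# rough assumptions, not Becker's exact figures.
import math

current_use = 2e13   # W: world power consumption, roughly 20 terawatts
growth_rate = 0.023  # ~2.3% per year, roughly the historical growth rate

limits = {
    "sunlight hitting Earth":   1.7e17,  # W, total solar flux on Earth
    "the Sun's entire output":  3.8e26,  # W, solar luminosity
    "the observable universe":  2e49,    # W, very rough total starlight
}

for name, watts in limits.items():
    years = math.log(watts / current_use) / growth_rate
    print(f"{name}: ~{years:,.0f} years of growth left")

# sunlight hitting Earth:  ~390 years   (Bezos's "few hundred years")
# the Sun's entire output: ~1,330 years (the extra "thousand years-ish")
# the observable universe: ~3,600 years (the "couple thousand" beyond that)
```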
So, you know, if not having constant perpetual growth in energy usage means a civilization of stasis, well, then we better figure out how to make a
civilization of stasis work, because that's coming for us in, you know, less time than has
passed since the invention of writing and the building of the pyramids. So, you know, long on the
span of like a single human life, but in the span of history, not even that long. We have to
figure out how to do that. And that's the other thing, which is, you know,
it's not that hard to imagine a way to stop using ever more energy, because that's also gone up with population.
And the population of Earth is still climbing, but the best population forecasts show that we're going to top out at some point in the next, you know, 75, 100 years.
And then we just figure out how to live within a pretty reasonable energy budget.
And it's not actually that hard.
And there are lots of things and ways to be creative and find novelty without using much more energy than people have used in the past.
So, but Bezos doesn't see any of that.
For him, the number always has to go up.
Line has to go up.
And it's the same reason why, or one of the same reasons why, you'll also see Bezos, well, not Bezos specifically, but many other tech billionaires talk about, you know, falling population rates, or falling birth rates,
as a really serious problem. Part of that is just, you know, eugenics and racism saying,
oh, we've got falling birth rates in countries filled with white people as compared to
countries that aren't white, and that's a problem. But some of that is just, oh, no, we need
the number of people to keep going up, because we need to always have more consumers, because we need the economy to always grow, because growth is always good. And we need lots of
growth forever, which is basically where the title of my book came from, right? More everything
forever. It's what they want.
Yeah, it's so funny that this seems to be, like, the dominant dream of the future right now: more everything forever. And I also keep thinking, it's not even just tech billionaires, if we can even classify tech billionaires as right wing, which is, like, a whole other discussion. But, like, you also see it in, you know, Matt Yglesias. What's his prescription? One Billion Americans.
Yes.
And what does Ezra Klein say?
abundance?
Right.
More.
Yeah.
More of everything.
Yeah.
And also this idea,
to bring it back to Bezos,
like,
there is this fear not just of their own mortality,
but of the mortality of the human race in general.
Like they can't reckon with the fact that not only will they end,
but like, on a long enough timeline, this is all over.
Like, we're all dust.
The sun's going to expand and swallow the Earth.
Yeah.
And like to avoid that, we do have to become a space-faring species.
Is that even possible?
They sure would like to think so.
I mean, the timescale on which the sun is going to expand and make life on Earth impossible is enormous.
I mean, yes, the sun will expand and engulf the Earth in about five billion years,
but the sun will heat up and boil off the oceans in about a billion years, or maybe even a little less.
On that time scale, humans will already not be around, because we will have evolved into something entirely different, or have gone extinct, or both.
I mean, a billion years ago here on Earth, there was no multicellular life, right?
Like a billion years from now, who the hell knows?
500 million years ago, there was some multicellular life, but there were no mammals,
there were no reptiles.
I don't think there were any vertebrates, right?
There were no fish.
It's, like, that problem, I am not generally a fan of kicking the can down the road and saying, let this be future generations' problem. But that problem really is the
problem of future generations, if it's a problem at all, because we cannot know what is going to
happen between now and then well enough to make plans for those creatures, our descendants,
if we have any at that point.
And also, the perpetuation of those descendants and the life of those descendants is used as the justification for the suffering these people are putting on people now.
Yeah, yeah, that's exactly right, because, you know, you see it very directly with Elon Musk,
right?
Musk says that his great mission is to save humanity by making us a multi-planetary species
and that that excuses any safety regulation violations that may have happened at his companies,
that this excuses any bad behavior or allegedly illegal behavior that he engages in.
And that's just nonsense, right?
Because we do not have the technology to live off of Earth right now, and we may never have it, right?
Musk wants to put a million people on Mars so it can serve as a backup for humanity.
Mars sucks.
You cannot get a million people living there.
And there is nothing that could happen to Earth that would make it less habitable than Mars.
The worst day in the history of life on Earth, in the history of complex life on Earth,
was the day the asteroid hit 66 million years ago that killed off all the dinosaurs except for the birds.
That was a horrible day.
You know, it was a rock the size of Brooklyn.
It slammed into the earth and caused, you know, fire and death and destruction to rain down across the globe.
It cooked animals alive.
And yet, that day was a nicer day where it was easier to be alive as a mammal than it has been at any point in the history of Mars.
We know this because mammals survived.
There were mammals around, and they survived, and we are their descendants.
There is no mammal now or then that could survive on the surface of Mars without a space suit for more than a few minutes.
You know, a mammal, any mammal, human or not, would asphyxiate while its saliva boiled off its tongue.
Because there's no air and there's no oxygen in what little air there is.
and they might also freeze to death depending on where you put them.
Well, like, they'd asphyxiate before they freeze to death.
But yeah, it's, it, Mars is awful.
The radiation levels are too high.
The gravity's too low.
There's no air.
And the dirt is made of poison.
It's a terrible place.
And it gets hit by asteroids more often than Earth does,
Because it's closer to the asteroid belt.
there's nothing we could do to Earth that would make it worse.
We could, you know, emit all the carbon we want.
We could set off all the nukes all at once.
And Earth would still be better than Mars.
And Musk talks about, you know, oh, well, we'll terraform Mars.
We'll use technology that doesn't exist and might not be possible to build to make Mars more Earth-like.
Use it here.
If you have that technology and something bad happens to Earth, make Earth more habitable.
Like, he says that a Mars colony needs to survive even if the rockets from Earth
stop coming, because, you know, it needs to be a lifeboat, a backup for humanity. And this opens up
all kinds of questions like who gets to be part of that backup and how many people do you
actually need living there in order to be self-sufficient. It's a lot more than a million.
but, you know, if we did somehow get, you know, a city on Mars of some size, like, you know, half a million people or a million people, which we can't do and we don't know how to do that, but say that somehow we did, that city would not be self-sufficient.
And it couldn't be, because there's not enough people there.
You need more like 500 million people or a billion people to have a self-sufficient, you know, high-tech economy of the kind you need to like make all of the stuff you need to survive on Mars.
If you had a colony on Mars with, you know, a few hundred people, a few thousand people, or even a million people, and something happened to Earth, like an asteroid hitting Earth, and you had to make a choice between staying on Earth or being on Mars to maximize your chances of staying alive, the right answer would be to stay on Earth,
because on Earth, almost everyone would die.
But on Mars, everyone would die because the rockets would stop coming.
And then they wouldn't have any food or water and they'd die.
It's a terrible idea.
That's the kind of down note that we like to strike.
Well, I'm happy to oblige.
Sir, where can people find the book and what else are you working on right now?
Well, they can find more everything forever, wherever books are sold.
You know, it's on, you know, for all that I was trashing Jeff Bezos, it is on Amazon.
But it's also on Bookshop and, you know, at your local bookstores, wherever books are sold.
There's an audiobook, there's an e-book.
And it's available worldwide, in English for now, although translations are, fingers crossed, in the works.
And what else am I working on right now?
Well, I just published an article a couple weeks ago in Fortune magazine about how Silicon Valley has gotten way too optimistic about the possibility of fusion power.
That's a good one.
Yeah.
So if you go to Fortune or just search, you know, Fortune Adam Becker, you'll find that article.
or you can follow me on Blue Sky to find out more of what I've been working on and what I will be working on.
Just Adam Becker on bsky.social. But, yeah, what else am I working on?
I'm still figuring out what, you know, the next book will be. But, yeah, I've got, I've got a few ideas in the hopper and we'll see where they go.
But for now I'm thinking about ways that we can fight back against these tech oligarchs in the here and now. Because, you know, like you were saying, or I don't know if you said this, but you alluded to the fact that I wrote this book before the election, and I did not know what world it was going to land in when it came out.
And to my dismay, you are not the only person to call it prescient.
You know, the single word that people have used the most to describe this book is timely.
And now that we are in this new world, I think it's important that we work together to try to resist the power of these techno-fascists.
Well, we will have you back on to talk about that.
I am sure. Thank you so much for coming on to Angry Planet and walking us through this.
Thank you so much for having me.
This is a lot of fun.
That's all for this week.
Angry Planet listeners, as always, Angry Planet is me, Matthew Gault, Jason Fields, and Kevin Knodell.
If you like the show, go to angryplanetpod.com and pay us $9.
You get early, commercial-free access to all of the episodes.
You get some bonus episodes that are live, and you get access to the written work.
We will be back again soon with another conversation about conflict on an angry planet. Stay safe until then.
