On with Kara Swisher - Science vs. Silicon Valley with Adam Becker
Episode Date: July 14, 2025
How skeptical should we be about the bill of goods (often marketed as needs) sold to us by Silicon Valley? Very, says Adam Becker, an astrophysicist and author of the new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity. From colonizing Mars to building god-like AIs, Becker argues that the fantasies propagated by tech billionaires like Elon Musk, Peter Thiel, Sam Altman, Jeff Bezos and Marc Andreessen aren’t just far-fetched – they’re a convenient cover for a racist, authoritarian power grab. In this conversation, Kara sits down with her “soulmate” to dissect and debunk the narratives that undergird the less-than-benevolent Big Tech agenda and uphold the status quo. They also discuss why some ideas, like Musk’s dream of colonizing Mars, are scientifically impossible; the fallacy of effective altruism; the probability of existential threats against humanity; and how all of these factors add up to more power and more control for the techno-oligarchy. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram, TikTok, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
My video looks a little tilted.
Only I need to look pretty.
Hi everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher and I'm Kara Swisher.
My guest today is Adam Becker, an astrophysicist and journalist and the author of More Everything
Forever.
In it, he argues that Silicon Valley's biggest fantasies, from colonizing Mars to building
godlike AIs, aren't just far-fetched.
They're a convenient cover for a racist, authoritarian power grab.
Oh, just the kind of guy I like to talk to.
Adam doesn't pull any punches, of course, neither do I.
And as a PhD astrophysicist, he actually knows what he's talking about when it comes to the
science fiction tale Silicon Valley has been spinning.
I'm excited to talk to him because this is my wheelhouse.
I've been talking about these issues forever.
Well, a lot of it, this is nonsense.
Many years ago, I actually interviewed an astrobiologist who was telling me how ridiculous
it was to want to live on Mars because it's miserable and we will die as small stupid trolls.
And instead they've decided to become small stupid trolls on Earth all by themselves.
I just think this is critically important to keep being reminded these people do not
have all the answers and Adam does a great job in doing that in this book.
Our expert question comes from journalist and science fiction writer, Cory Doctorow. This is a smart one, so stick around.
Blockchain is reshaping every aspect of society, starting with finance. It's happening across
industries, across sectors,
and across the world. And it's happening with Ripple. With more than a decade of blockchain
experience, over 60 licenses, and strong institutional trust, Ripple provides financial institutions
with blockchain and crypto-powered solutions across payment and digital custody applications.
This means secure 24-7 transactions,
moving value across the world faster.
Find out more at ripple.com.
Imagine a delicious ring of dough
with a sweet mouthwatering spread on top.
Sounds like a donut, right?
Well, if you spread new Philadelphia blueberry
or new Philadelphia pineapple on top of your bagel,
your bagel almost becomes a donut. It becomes a bonnet.
Turn your bagel into a bonnet with new Philadelphia blueberry and Philadelphia pineapple made with real fruit.
It's me, your brain, and I, your mouth. I act on logic.
I act on taste. For me, Pizza Hut's Nashville hot chicken pizza with
spicy fried chicken, pickles, and creamy ranch drizzle is confusing. To me it sounds good.
Pickles on pizza? Amazing. It shouldn't work but it's so good. Try the Nashville hot lineup at
Pizza Hut. Your mouth will get it. It is on.
Adam, thanks for coming on on. I appreciate you being here.
Oh, thanks for having me.
I feel like we're soulmates. I had to have you. I read your book and I loved it and something
I've been talking about a lot and you actually put pen to paper and articulated everything
I feel about some people in Silicon Valley, largely men. But let's start talking about this because your book, More Everything Forever, and I'll
read the subhead, AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the
Fate of Humanity, kind of says it all.
That's kind of pretty much been my last 30 years.
So let's start with Silicon Valley.
It has so many myths.
And one of my first stories when I got there was the lies
Silicon Valley tells itself, including we're all equal,
all we care about is community,
we're here to help humanity, et cetera.
But they perpetuate this myth that it's about liberty,
science, protecting humanity.
And in your book, More Everything Forever,
it's essentially a counter narrative
that really does make the case that a lot of these
leading tech billionaires, people like Elon Musk, Sam Altman, Marc Andreessen, Peter Thiel,
and Jeff Bezos are scientifically illiterate wannabe authoritarians who will lead us to
environmental collapse unless we stop them.
So talk about that messaging and the narrative.
I never believed it for a minute, but it was amazing how they stuck
to it right from the beginning and probably believed it themselves on some level.
Yeah. I mean, look, they really think that because they have more money than anybody
else in the history of humanity, that means that they are the smartest people in the history of humanity. And that's just not true, right?
They are not experts on everything. They are arguably not experts on anything other than
how to make a billion dollars. But they reject the idea of independent expertise that people
might know more than they do about science and technology because
I think they've drunk their own Kool-Aid.
They're high on their own supply.
So where is the source of this?
Because one of the things that I noticed from the beginning is even when they weren't billionaires,
they were like this.
This did not come from the money.
This came from an idea that they, I always say people, it's not what people lie to you
about, it's what they lie to themselves about. What is your feeling of the origin story here?
I think that there was a genuine desire to make money. And also, like, you know, these
people are not all the same, right? Some of them are clearly just cynical, some of them are true believers, right? But I think there was an idea that came partly from science fiction and partly from the sort of like
weird, you know, Californian mix of counterculture and libertarian ethos, that there was like a
happy alignment between the desire to make lots of money through technology and to save humanity
through technology. That there was a way to both make a lot of money and make the world a much,
much better place by bringing about this sort of inevitable science fictional future.
But science fiction is at the heart of it. It really is.
Exactly. Yeah, yeah, yeah. Science fiction is definitely where the ideas come from, right?
It's where Musk gets the idea that we need to go to Mars.
It's where Bezos gets the idea that we need to go out into space.
It's where Altman gets the idea that, you know, super intelligent AI is inevitable.
So let's talk about the worldview that underpins these techno-utopian dreams,
and we'll dive
into the actual plans.
You wrote that Silicon Valley is awash in what you call an ideology of technical salvation,
which is both sprawling and ill-defined.
So define it for us, what's the ideological through line that connects these disparate
companies and personalities?
Yeah, I mean, basically, there's this belief that perpetual growth is possible and that
through perpetual growth, you know, they will be able to solve all problems with technology
and transcend all limits. And those three things, the growth, the reduction of all problems
to technological problems, including like political problems, social problems,
you know, problems that are sort of inherently non-technical still supposedly could be solved
with technology. And, you know, I'm still waiting for somebody to explain to me, you
know, okay, how do you solve, you know, the crisis in the Middle East with technology?
Like that's not...
Right, or poverty or anything else.
Or poverty or yeah, inequality.
One of the ways that a lot of observers would describe it, and I have described this,
is libertarian-lite, because I don't even think they're full libertarians, but mostly
it's leave me alone.
And I'll never forget Bill Gates basically saying that in the 90s, like just leave me
alone.
I think that's what I got.
I know better.
Yeah.
Mostly leave me alone.
But you're describing something much more elaborate and
really insidious in a lot of ways.
Yeah, yeah. It's like taking that leave me alone idea and mixing it with this belief
that science fiction is a roadmap and that you really can use technology to solve all
of these problems and then transcend all of these limits, right? Transcend mortality, transcend our existence here on earth by going out into space, transcend
conventional morality and legal limits, right? Leave me alone. I'm going to go to space.
I'm going to do whatever I want.
I'm going to live forever. And anybody who comes with me will also live forever.
Let's get to these grand schemes themselves. Yeah.
We'll start with going to Mars since it's easy to understand.
Elon Musk has built a whole personality around the idea that humans must become interplanetary
in order to survive in the long term.
Broadly, probably not.
There's a non-zero, the way they would put it is there's a non-zero chance we're going
to get hit by an asteroid.
That's essentially their argument or the Earth is going to collapse and we need to keep humanity
going.
That's their basic argument. Why do you find the idea of occupying Mars implausible? And if
so, if it's so unrealistic, why do tech billionaires believe that going to Mars is not only doable,
but a moral imperative?
You know, Musk wants us to go to Mars as a backup for humanity in case an asteroid hits
Earth. Mars gets hit by more asteroids than Earth does because it's closer to the asteroid belt.
It's a terrible place, right? The radiation is too high, the gravity is too low, there's no air, and the dirt is made of poison. And that's not even like a full list of all of the problems
with Mars. You know, if you were on the surface of Mars without a spacesuit, you would die
almost instantly. You would asphyxiate as the saliva on your tongue boiled off because
the air pressure is less than 1% that of Earth and there's no oxygen. And if you were in
a spacesuit hanging out on the surface of Mars, you would still die, like assuming you
had all the food and water you needed, you'd still die in a few years because the radiation levels are way, way, way too
high because the things that protect us from radiation here on Earth, the Earth's magnetosphere
and atmosphere, Mars doesn't have those.
And so you'd have to live underground in pressurized tunnels, somehow keep all of that
toxic dust out of your habitat. Musk wants to terraform
Mars. We don't have the technology to do that. The schemes that he has proposed for doing
it, absolutely do not work. He's been told that over and over again, and he just denies
it.
Let me give you a pushback. What if he's Christopher Columbus, right? Don't go, you're going to
fall off this thing of the Earth. But, you know, that I've heard that from them.
Yeah, absolutely. Okay, so, so a couple of things. First of all, like the myth of Christopher
Columbus, you know, proving the earth is round. That's a myth, right?
That's correct. That's what I wanted you to say. Yeah, go ahead.
Yeah, exactly. Yeah. People at the time knew that the earth was round. And
the reason people were pushing back on Columbus's scheme was not that they thought the earth was
flat and that he'd fall off, but that they actually knew how big the earth was, because that's also
something we've known for a very long time. And so they said, you're going to starve. If you go
that way, you're not bringing enough provisions, you're going to starve
before you get to Asia. And he would have starved if the Americas had not been there,
but the Europeans didn't know that the Americas were there.
Right, right. They thought it was a big ocean.
Columbus had an inaccurate estimate of how large the earth was. He got very, very lucky
that the Americas were there. And then, you know, went and killed
off an enormous number of people to the point where, like, even by the European standards
of his time, people thought that he was being incredibly brutal. And so, you know, you could
say that the only thing that Musk and Columbus have in common is that they're both horribly
racist. And also, with all of the difficulties that Columbus faced, what he wanted to do was still much, much, much, much
easier than what Musk is trying to do.
Than going to Mars.
Musk doesn't really know anything about space. He doesn't know anything about Mars. If he
did, he would know that everything he has said about Mars is a complete fantasy.
It has to do with H.G. Wells and everything else.
So let's go to AGI, artificial general intelligence.
It's an amorphous concept that more or less means reaching the point where AI can
outperform humans at any task.
Doomers believe AI alignment is the single most important issue facing humanity.
If we achieve AGI and its goals aren't aligned with ours, it will kill us,
if it really cared.
If you ask the Boomers like Sam Altman, AGI will essentially solve all of humanity's problems.
First, explain AGI and the idea of singularity becoming foundational in the techno-utopian
projects and lately they've been doing it a lot.
They seem to be like on an extra dose of ketamine because they've just been going on and on
about AGI recently.
And second, why are you skeptical of the entire premise behind AI?
You don't even think you should call it intelligence.
Yeah, yeah.
I mean, okay, so AGI, artificial general intelligence, is notoriously difficult to define, which
is part of the problem, right?
The sort of vague definition that's usually
given is, you know, an AI that can do everything that a human can do, or is at human level
intelligence. I think the real definition, the true definition, is AI like we have in
science fiction.
Which would be Jarvis or whatever it happens to be.
Yeah, Jarvis, Commander Data, HAL, whatever.
If you take a look at the OpenAI charter, they have a definition of AGI in there and
the definition that they use is something that can reproduce any economically productive
activity that humans engage in at a human level.
Okay, first of all, that's still pretty vague.
And second, economically productive?
Why is that the measure? Like, there's so many
important things that we do that are not economically productive. Like, I don't know, you know,
having a long conversation with a friend. But the dream is still this dream of AGI and singularity,
the idea that once you get to AGI, it will then be able to design an even better and smarter, more intelligent
AI, and then that will design an even smarter one and so on and so forth in short order.
And so once you get to AGI, you very quickly get to super intelligent AGI that is smarter
than all of humanity combined.
And I'm skeptical of this in part because there's no sign that
anything like that is on the horizon. You know, these generative AI tools are...
Interesting.
Yeah, they're interesting. They can do some interesting things, but they have so far proven
to be pretty bad at almost everything that people have tried to sell them to us
for.
Some things they're good at.
Yeah, some things they're good at.
Yeah.
The easy stuff is certainly, they're certainly better.
It's like a mimeograph machine versus a computer thing.
It's just like, that's better.
The computer thing is better than a mimeograph machine.
That to me is the advances, the so-called advances.
Yeah.
I mean, it's better at stringing together coherent sentences, and it can be useful for
solving certain well-defined scientific problems like protein folding.
Can make things faster.
There are positive things, but what you're talking about is super intelligence.
Yeah.
Based on our intelligence, it just becomes super. You start with our intelligence,
meaning our intelligence is going to make a more intelligent intelligence, right?
So they're starting with us, not something else.
Yeah, yeah. They want it to be as smart as we are and then, you know, move beyond.
Right.
And like, it's not, it's clearly much worse than humans at almost everything right now.
Does it have to be?
Is it, you know, because look, the early internet was pretty glum and then it was okay.
Sure.
And then it was better and better.
Yeah.
But that was mostly about like people putting stuff on the internet about people learning
how to use the internet better and, you know, and also like the continuation of Moore's
Law, right? You know, the continued increase in power of computers
and cheaper.
Made it possible to, yeah, and cheaper.
It made it possible to put more computationally intensive
stuff on the internet like video, right?
And that's part of what made it better.
Moore's law is over.
The chips are not gonna be getting appreciably smaller
and faster ever, because we already hit the atomic limits. You can't make the transistors
really much smaller than they already are. And this is exactly what Gordon Moore said
was going to happen. He said Moore's law is going to end sometime in the 2020s. And here
we are.
So when you think about AGI, where do you put it right now? It's a tool, right? A possibly better version of the internet, a version of the internet on steroids, subject to the abuse and subject to
good stuff. Yeah, I think that the AI stuff that we have right now is an interesting tool that was built in
ways that are seriously troubling
and that has an enormous carbon footprint,
an enormous human cost to the training.
They stole a lot of content in order to train them up
in the first place.
But even if you put all of that stuff aside,
you were left with something kind of interesting
that can make certain tasks easier, like, say,
writing code. It is good for that.
We'll be back in a minute.
Support for this show comes from Indeed. So you just realized you needed to hire someone yesterday.
How can you find amazing candidates fast?
Easy, just use Indeed.
Indeed's Sponsored Jobs helps you stand out and hire fast.
With Sponsored Jobs, your post jumps to the top of the page for your relevant candidates,
so you can reach the people you want faster.
And that makes a difference.
According to data from Indeed, sponsored jobs posted directly on Indeed
have 45% more applications than non-sponsored jobs.
Plus with Indeed sponsored jobs,
there are no monthly subscriptions,
no long-term contracts, and you only pay for results.
How fast is it?
According to their data,
in the minute I've been talking to you,
23 hires were made on Indeed across the globe.
There's no need to wait any longer.
You can speed up your hiring right now with Indeed,
and listeners this show will get $75 sponsored job credit
to get your jobs more visibility at indeed.com slash on.
Just go to indeed.com slash on right now
and support our show by saying you heard about Indeed
on this podcast.
Indeed.com slash on.
Terms and conditions apply.
Hiring Indeed is all you need.
Blockchain is reshaping every aspect of society, starting with finance.
It's happening across industries, across sectors and across the world.
And it's happening with Ripple.
With more than a decade of blockchain experience, over 60 licenses, and strong institutional
trust, Ripple provides financial institutions with blockchain and crypto-powered solutions
across payment and digital custody applications.
This means secure 24-7 transactions, moving value across the world faster.
Find out more at Ripple.com.
Support for On with Kara Swisher comes from Upwork.
Running a business right now comes with lots of roadblocks.
Tight budgets, hiring freezes and economic uncertainty are just the beginning.
But the good news is Upwork is helping small businesses do more with less. Upwork is the hiring platform designed for the
modern playbook. You can find, hire and pay expert freelancers on Upwork and
they can deliver results from day one. Perfect for businesses on tight budgets,
fast timelines, zero room for error, there are no subscriptions and no upfront fees.
You pay only when you hire. Posting a job is fast, free, and simple.
If you've never tried Upwork, now's the perfect time.
They're giving our listeners a $200 credit
after spending $1,000 in your first 30 days.
That's $200 you can put toward your next freelancer,
design help, AI automation, admin support, marketing,
whatever your business needs.
Visit Upwork.com slash save right now for this offer.
That's upwork.com slash save to get a $200 credit
to put towards your next freelancer
to help grow your business.
That's U-P-W-O-R-K dot com slash S-A-V-E.
Upwork.com slash save.
Don't wait, this offer is valid June 24th
through August 5th, 2025.
So one of the things you write in the book, quote, lurking underneath all the dreams and
desires and resentment of the tech billionaires lies a fear of death and a final loss of control.
So you've latched on to the idea called transhumanism, a sort of secular religion that says we can
transcend our biological bodies and upload our consciousness into the AI.
One of the people working on this is Sam Altman.
There's many others.
There's lots of them doing it.
You can see it physically manifest in someone like Jeff Bezos, for example.
How seriously do tech billionaires actually take this idea, and how does it shape the
political and moral assumptions they make?
They seem to take it very, very seriously.
This is, I think, a lot of the idea, not just behind the AI companies,
but behind companies like Neuralink.
This is Musk's Neuralink.
Yeah, Musk's company, Neuralink.
But there's others.
There's others.
There are, but that's like the most notorious one.
The idea there is to bridge the gap between computers and the brain.
This is also part of why I am so skeptical
about the idea of AGI. The brain is not a computer. You know, a lot of this stuff is
premised on the idea that the brain is a kind of computer. And it's not. It's just, it's
not. It's an evolved organ. But I think that there is like a real faith in the idea that you can transcend these
biological limits, which is like the main project of transhumanism. The idea that we
don't have to be confined to our bodies and their limits as they are now. That we can
upload our minds into computers or make ourselves into cybernetic organisms
and greatly extend human lifespan, go out into the cosmos and colonize the universe.
And all of that's just pure fantasy.
What's one that you're like, okay, this is interesting?
This is what they're working on. So I think that basic brain-computer interfaces
actually kind of are interesting in that they could, in theory,
allow people to regain capabilities that they lost due
to some sort of accident or injury.
If they can't move their legs, a brain computer interface could maybe,
maybe let them control like a wheelchair more efficiently or something like that.
Or if they can't control, like if they're, if they're quadriplegic, a brain computer
interface might allow them to control a substitute for their hands or something like that. There
is some work in there that has shown some promise. I think that's cool, right? You know,
and I think that that's useful. The problem is then taking that step and saying, okay,
and then once we do that, we'll be able to, you know, upload the entire brain into a computer
link. That's just nonsense.
Right. And it's not going to happen. Now, effective altruism is the idea that a good way to make the world a better place is to make a lot of money
and give it away to worthy causes.
And long-termism is the idea that we have a moral obligation to future generations.
Both seem fairly benign, not like the grandiose plans that we've been discussing,
but you argue they've created a toxic self-serving philosophy that justifies extreme inequality.
Walk us through your reasoning. Yeah. So first of all, yeah, it sounds good. We should care about future generations. We
should try to put more money toward worthy causes. But the devil is always in the details,
right? First of all, relying on philanthropy has its own problems. Billionaire philanthropy is democratically unaccountable, it's an exercise of power.
It would be much, much better if we could fund worthy causes through some democratically
accountable means like government funding.
And that way, we can all collectively make decisions together about where the money should
go.
Of course, we've seen that Musk doesn't care about that and has cut USAID.
He doesn't give anything away. You don't have to worry about that.
Yeah, of course. He doesn't care. Yeah. But on the other hand, you know, I just described
effective altruism, right? So what's the problem? The problem is that first of all,
there are some problems that you can't just solve by throwing money at them, right? If you want to create like systemic change and address problems like, oh, I don't know,
massive wealth inequality, you can't just throw money at that problem. You have to like commit
to systemic change in some way. But the other thing is there is a utilitarian philosophy that comes along with effective altruism. The idea that
what we need to do is make the most happiness and reduce the suffering the most in the world.
And that this is something that can be quantified and this leads the effective altruists and
especially like this influential subgroup within that movement, the long-termists,
to the idea that what we really need to do is ensure that there are as many people in the future as possible living lives that are at least barely worth living.
And so this creates what one of the leaders of that movement, Will MacAskill, called a moral case for space settlement,
which again is nonsense. That's not happening. And it also leads them to prioritize what
they call existential threats to humanity and human civilization over other pressing
problems. Then you get into questions like, okay, what counts as an existential threat
and which existential threats are more pressing.
And they have a very bad track record of answering these questions well.
You mean, for example?
Yeah, yeah.
An example is Toby Ord is a leader in this effective altruist movement who's pushed
long-termism.
And he came up with estimates of the severity of, or probability of different existential threats causing either
the extinction of humanity or unrecoverable collapse of human civilization in the next
hundred years. And if you asked me to make a list like that, or if you asked, I think,
like most experts in the subject to make a list like that. Top of the list would be things like global warming,
nuclear war, right?
Maybe a pandemic, and Ord does rate pandemics pretty highly,
especially an engineered pandemic.
And that seems reasonable, but at the top of his list
is the threat of a super intelligent AGI
wiping out humanity.
And he rates that as 50 times more likely than collapse or extinction from climate change
and nuclear war combined.
And when I asked him why, his answer was essentially, oh, I made those numbers up.
It was my best guess.
And the man is an Oxford philosopher, right?
That gives him a platform, right?
Power and influence.
He has been advising UK parliament on AI issues.
And I think it's really irresponsible for him and others in that movement to make these
claims based on very, very little.
Oh, Adam, they make a lot of things up.
That's been my history with them.
So let me pull something up you wrote, and you can read it out loud, then I'll ask you
a question about it.
Sure.
Silicon Valley's heartless, baseless, and foolish obsessions with escaping death, building
AI tyrants, and creating limitless growth are about oligarchic power, not preparing for the future.
The giants of Silicon Valley claim that their ideas are based on science, but the reality is darker.
They come from a jumbled mix of shallow futurism and racist pseudoscience.
How did eugenics end up driving so many of their grand schemes,
and do they really grasp how deeply racism underpins their plans
because most of them would strongly reject the notion that they're racist?
Yeah, well, but I think most racists will strongly reject the idea that they're racist,
right? It doesn't mean they're not racist. I don't know if, say, Marc Andreessen, just
to pick one of these guys, I don't know if he understands how deeply enmeshed
his worldview is with eugenics and racism, but it is.
Explain that, give me an example.
Yeah, so for example, just the idea of intelligence,
just the markers that they use for intelligence,
the idea that IQ is a good measure of intelligence,
which you'll see over and over again in the writings of these billionaires and the subcultures
that they fund.
You know, IQ is not a measure of inherent intelligence.
There is not, as far as we know, a single number that you can call intelligence. And yet, the notion that
IQ is really deeply important is a racist notion because IQ is not actually measuring
intelligence. It's been shown over and over again to have cultural biases. Another example,
and maybe this is a more immediate and direct one, if you go
and look at like Musk's plans for Mars, he talks about backing up humanity on Mars, like
who makes the decision about like who gets to go to Mars? Who gets to decide who is worthy
of going to space? What cultures and ethnicities get to be backed up on Mars. The space program
historically has excluded a lot of people and, you know, has favored people who look
like me.
It seems to be mighty white.
Yeah, exactly.
And what you're saying.
Yeah.
Talk a little bit about how they reject the idea that they're racist, how the claim
of meritocracy is typical of their arguments.
Yeah, exactly. But then, you know, you come back to the question, okay, you're a meritocracy, how are you measuring
merit?
I think that they believe that being racist means that you want to be mean to people of
a different color to their face by using slurs.
And that's not what racism is, right?
Racism is when you reinforce a system of oppression
against people of a certain race.
And that's what these guys are doing very explicitly.
So let's talk about the consequences, real world
consequences of these fantasies.
Google just released its yearly environmental report.
It says that emissions have gone up 50% since 2019.
A separate report by an advocacy group
actually found Google's emissions had increased
by 65% during the same period.
And Google reports its electricity consumption
from data centers has doubled since 2020.
AI is clearly using a tremendous amount of energy.
They're obviously talking about using nuclear facilities
and everything else. But if you ask them AI proponents, they say artificial intelligence
will come up with a solution to global warming. You say global warming requires social and
political solutions, not technical solutions. There is probably a combination of these things.
But talk a little bit about that, the energy usage, because it's off the charts at this
point. Yeah, it is.
I mean, just the amount of energy needed to run these generative AI systems is truly enormous.
Here's just one statistic: an AI-powered Google search query takes 10 times as much energy
to answer now as it did before they integrated generative AI into search.
Gemini.
And I think most people are annoyed that they did that.
It doesn't make the search better, it made it worse.
We all want old Google back.
And so they are expending 10 times as much energy to make their product
worse. So here's something that's not in my book, because it happened too late for me
to put it into my book. Eric Schmidt, former CEO of Google, tech venture capitalist, billionaire,
he said in I think October that we're not going to meet our climate
goals anyway, so we should use more energy and more resources and pour them into AI so
we can get to super intelligent AGI and then that will tell us how to solve global warming.
You left out the faster than the Chinese. That's usually stuck in there somewhere.
Right.
Yes, there's a faster than the Chinese part as well.
Yeah.
So essentially, it's a problem we're never going to solve and therefore we should just
use more energy to find a solution to the problem technologically.
That's the circular logic.
It's a circular logic, and we don't need much more by way of technological solutions to
solve global warming.
At this point, the primary barriers to solving the climate crisis are social and political,
not technological, right?
Like we have cheap, clean energy.
We just need to get through the various barriers to deploying it.
And a lot of those have to do with government subsidies and interest groups and whatnot.
And that's not a technological problem. That's a problem of persuasion and
politics.
Yeah, I'm not surprised you said that. My favorite nickname for him is that fucking
guy. We used to have a thing at Code where he would say crazy stuff. And we had a ball
gag, you know, a red ball gag. If you've ever seen them, they're, they're sexual.
But every time he said something dumb, we put up the picture of him with a ball
gag and then whatever he said.
And we'd say that fucking guy talked again.
So we had to put the ball gag on him.
I mean, look, you know, I had an idea in my head of what my book was actually
titled rather than more everything forever.
I thought of it as, uh, these fucking people, these fucking people.
Yeah.
I would have figured that maybe that's my next book.
Yeah.
So like, Burn Book is kind of these fucking people.
Yeah, exactly.
So at its core, the book is saying that the more everything forever mentality leads to less
real life for regular people. But AGI, colonizing Mars, and transhumanism seem so far off,
in fact, that it's not obvious how these far-off projects really do affect the public.
So does wasting money and energy on them simply exacerbate existing problems like racism,
income inequality, global warming, or do they create new intractable problems?
I mean, I think that it's a little of both, right?
I think that certainly the idea of these high-flying ideas that don't work has created
cover for these billionaires to amass more power and wealth and that's not just exacerbated
existing problems, but sort of like the amount of power and wealth that they have at this
point is so extreme that I would argue it's created like
a sort of new kind of problem, right? Just because it's such an extreme concentration
of wealth and power. And so, you know, they are able to like openly support fascism and,
you know, and like still, you know, go about doing their business in ways that would have previously been unthinkable
if they'd taken those stances even just a few years ago.
So as we discussed, you think tech billionaires
have an authoritarian worldview.
They do, and many of them have embraced President Donald
Trump, who seems like an aspiring authoritarian
at the very least.
Do you see Trump and his tech industry
moving beyond standard Republican deregulation
and working together on an actual authoritarian project?
It's unclear because of the breaks that are happening rather quickly.
What would it look like if that was the case?
I think it's over already, personally, because they've squeezed the lemon as much as they
can on some levels.
But do you see it continuing?
And who is the more dangerous authoritarian group? The tech people or Donald Trump?
Yeah, that's a good question. Which one's more dangerous? I don't know. They're dangerous
in different ways, right? Like Trump is dangerous in all of these like very obvious ways of
like eroding and destroying confidence in democracy, democratic institutions, guardrails,
eating an entire political party.
But the tech billionaires are going to be with us
for longer and not just because they're younger,
but because they're unelected, right?
And like, yes, Trump is trying to transform America
into an authoritarian state and he may succeed. He's already succeeded in a lot of ways that are horrifying.
But ultimately, there is hope that he can be stopped or those changes can be halted
and reversed through organizing and at the ballot box.
Doing that with the billionaires is a lot harder.
So I feel like the tech billionaires,
like if I had to pick one, are the bigger problem
because they're gonna be with us for longer.
We'll be back in a minute.
Hi everyone, it's Nicole Wallace from MSNBC.
Listen to my new podcast called The Best People.
I get to speak to some of the smartest,
funniest and wisest people I have ever encountered.
People like Kara Swisher, Rachel Maddow, Doc Rivers,
Jason Bateman, Jeff Daniels and Sarah Jessica Parker.
They'll often say, hey Carrie, you know,
they'll call me Carrie and that's all right too.
The Best People with Nicole Wallace.
New episodes drop Mondays.
Listen now wherever you get your podcasts.
President Trump met with the leaders
of five African nations at the White House yesterday.
One oops got all the attention
when Trump paid Liberia's president a compliment.
Well, thank you.
You have such good English, such beautiful.
Where did you learn to speak so beautifully?
English is Liberia's official language.
Were you educated where?
In Liberia.
Yes, sir.
Well, that's very interesting.
Anyway, you know what happened behind closed doors right before that meeting?
President Trump pushed those African leaders to accept people who are being deported from the U.S. That's according to a Wall Street Journal exclusive. In fact,
it's trying all kinds of ideas to increase the pace of deportations. And we're going
to tell you about some of them on Today Explained. Today Explained is in your feeds every weekday.
So every interview we get an expert to send us a question for our guests. Now let's hear yours.
Hi, I'm Corey Doctorow. The big question I would ask is that sometimes technical breakthroughs
really do change the game, whether that's antibiotics or packet switching or other
more modern inventions. Obviously, everyone who comes up with a technical idea wants to market it as one
of these game changers and not some little incremental effect.
I guess what I would ask is how do we know when someone has got one of these big
game changing ideas and how do we know when they're just tinkering in the
margins and how do we assess those claims?
Yeah, no, that's a really good question. And thank you to Corey for asking it.
Part of why that's a good question is that the real honest answer has to be: we can never be completely sure, but there are some signs.
Right. And to me, the most reliable answer to that question, which is not always going
to be right but is often right, is this: the real breakthroughs tend not to be hyped right out
of the gate. They tend to be, hey, we might have something interesting here. You know,
we've got this very interesting looking result in this Petri dish, and we're not sure, but
it seems like it may be killing off bacteria.
We've got this interesting result with silicon, where it seems like you might be able to use
it as a semiconductor, but we're really, we're not sure.
Of course, there are examples of real game changing technologies that were hyped straight
out of the gate.
Electricity.
Yeah.
Although even electricity, it took a while to develop, right?
You can't use this as a hard and fast rule.
But the other thing is, I would say that the most reliable guide is also the hardest thing
to do, right? The hard answer to the hard question
is you look at it skeptically.
Always, yeah.
And you say, okay, sure, can it really do this? Are we sure? And, you know, with something
like electricity, the answer was relatively clear early on, oh yeah, this is actually extremely promising.
And you know, the same thing with say nuclear power.
Whereas with a lot of these technologies that I've been ragging on, a skeptical look makes
them look less likely, not more likely.
Less than interesting.
Now you've also said, quote, we don't need more Elon Musk.
We need at least one fewer Elon Musk, which is funny, putting aside the last few years
in his descent into far right politics for the moment.
Don't we also need these creative geniuses who push the boundaries of what's possible,
the risk takers in fields like electric vehicles, reusable rockets, satellite internet, the
21st century equivalents of Thomas Edison, Nikola Tesla, Henry Ford, Alexander Graham
Bell? Some of them are obviously deeply flawed.
Henry Ford, principal among them.
There's a point where we do need those inventions.
And I would say Musk really does get credit
for pushing forward electric vehicles.
He didn't invent it, but he, same thing with Steve Jobs,
right, very much pushing forward,
not the inventor, yet critical. Would you
rather live in a world without these inventions? I'm playing devil's advocate here.
I just don't think that that's the choice that we're facing.
Okay.
Like you said, Musk didn't invent the electric car. Sure, he pushed it forward, but you know,
Tesla existed before he came along. There are other electric car companies. It was the
kind of thing that was going to happen
with or without him.
I would argue that most of these tech billionaires,
really all of them, are people who,
insofar as they themselves created these innovations at all,
rather than just being the person at the helm
of a company that did,
that these were things that would have happened anyway, even if it hadn't
been them.
Inevitable. Although I would say without Elon, Tesla was going to be another traffic accident.
That's probably right. Yeah.
Yeah. And it was a question of someone who could push it through with such risk-taking.
Risk-taking is one of his best qualities, actually.
Sure, but there's a difference between risk-taking and recklessness, right?
Correct. And he's crossed over.
Yeah, most certainly. I also think that, yeah, okay, maybe Tesla would have gone down without
him, but there are ways of pushing that kind of technology without being the kind of monster that Musk
is.
Yeah, and they often come hand in glove, unfortunately.
So toward the end of the book, you write, for example, the fact that our society allows
the existence of billionaires is a fundamental problem at the core of this book, and you
propose a 100% wealth tax on personal net worth of over $500 million.
Now, Mamdani just noted this and everyone lost their ever-loving minds.
Is the real problem Silicon Valley's ideology of technological salvation or is it capitalism
itself?
If tech weren't the dominant industry right now, if it were agricultural, oil, all big
industries, shipbuilding, coal, name any of them, would we be dealing with the same core
issues of exploitation?
We did, I think, in each of these areas for a long time.
This could be just a different twist.
Yeah.
And talk about the not having billionaires.
Yeah, no, I think that there is something in common here
with all of these other industries
that have dominated society at various times.
I do think that there is something sort of unique
about the
kinds of narratives that are spun by the tech industry. I don't seem to recall like,
you know, the 1980s Masters of the Universe in, you know, Wall Street and financial industry
claiming that what they were doing was, you know, bringing about a permanent utopia for
humanity.
No, they didn't. No.
Yeah. And while those sorts of billionaires
and other billionaires in other industries
have often had really weird ideas,
they have been like not of the same kind.
Of the religious kind.
Yeah, exactly.
They're very religious in a weird way.
Yeah, I mean, but not having billionaires.
Look, you know, I think that a lot of what's happened
in this country over the last, at this point, 10 years has shown like the kinds of risks that we
as a society take by having billionaires, by allowing that kind of concentration of wealth.
It erodes the democratic fabric of the country. And at this point, our democracy is in mortal
danger and may already be lost.
Okay.
And that's awful.
So, so you, you imagine that passing?
No.
Yeah. I mean, I think-
No. Everyone wants to be a billionaire.
I think that everyone wants to be Superman too.
Right.
But nobody actually thinks that they're going to gain superpowers, right?
Right. And if everyone's Superman, no one's Superman.
So the book is full of real-life characters who exude hubris, powerful men who want to
summon god-like power, to create new worlds and escape death, and it's all inevitably
doomed. It's right out of Greek tragedy or myth, essentially.
In this metaphor, the tech billionaires are like Icarus. They've gotten humanity strapped
to their back, so if they fly too close to the sun,
we'll all go down with them.
So let's give us reasons for hope and optimism.
I hate to say that to you, because it's not an optimistic book, I would say,
but what is your most optimistic take?
Is it that we're onto them, or that they will die, or how do we build safeguards?
So there's a better ending to this story.
I mean, I think that the fact that we are onto them
is actually really important and
optimistic, right? You know, like there was for a long time, a narrative about Musk, like
a story about Musk in our society that he was this great genius who was going to save
us all. And I never bought that. But a lot of people did. And for a long time, people were confused by the fact that I didn't like Elon Musk.
And they would say, but Adam, you know, you're an astrophysicist, you like space.
Why don't you like Elon Musk?
He loves space too.
And I would say, no, I really don't think he does.
But not the real stuff.
He has this fantasy. You know, the fact that now there is widespread distrust and distaste for Musk and most tech
billionaires, I think is actually very hopeful because that's the first step that we need
to make the changes that we have to make if we are going to save our democracy
and safeguard it from these tech oligarchs.
When people ask me what hope is there,
the answer that I generally give is like,
we have to organize against these people.
And part of the reason that answer always feels
kind of unsatisfying, I think, is it sounds boring, it sounds unsexy,
and it's like this boring, unsexy solution to a big looming problem that feels larger than life
and intractable. But I think that the history of politics and the history of humanity has shown that often it is boring, unsexy solutions
that win out and actually solve our greatest problems, right? Because there is, for example,
something very boring and unsexy about developing a vaccine, right? There's something very boring
and unsexy about doing the administrative work you have
to do to build like a healthy welfare state.
There is something boring and unsexy about like building a better computer.
And yet these things can solve real problems and have solved real problems.
Right.
That's a very good answer.
Thank you.
It's a very difficult thing because as you note, the money and the power, because they go on and they never stop and they never change,
and they get worse in many ways. Musk is the perfect example of that, but the others are,
I think, more dangerous. I think Musk is just more troubled and has other issues going on.
But someone like a Bezos, or Zuckerberg in particular, whom I call the
world's most dangerous man for a reason, and I stand by that to this day.
And part of it is ignorance, which is really difficult, you know, ignorance and ineffectiveness
and lack of expertise.
And I think, you know, on some level, you appreciate, you know, if you're very wealthy,
you give away money, but in a lot of ways, it comes with such a price and the learning
curve for them is so high.
At one point I wrote a piece called The Expensive Education of Mark Zuckerberg, and I meant
at our expense, not his.
And I think that's where we are, unfortunately, but I agree with you when being onto them
is the beginning of the steps of doing so.
Even if you have hope that some of their things can help us all in some way.
Anyway, I really appreciate it, Adam.
This is a great book and everybody should read it.
It's called More Everything Forever.
And it's, you don't want more.
You don't want everything and you don't want it forever.
But I appreciate it.
Thank you, Kara.
On with Kara Swisher is produced by Christian Castro Rizal, Kateri Yocum, Megan Burney, Allison
Rogers and Kaylin Lynch.
Nishant Kurwa is Vox Media's executive producer of podcasts.
Special thanks to Skyler Mitchell.
Our engineers are Rick Kwan and Fernando Arruda and our theme music is by Trackademics.
If you're already following the show, you get a healthy welfare state.
If not, you're a small, stupid troll.
Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. And
don't forget to follow us on Instagram, TikTok and YouTube at On with Kara Swisher. We'll
be back on Thursday with more.