Tech Won't Save Us - Data Vampires: Fighting for Control (Episode 4)
Episode Date: October 28, 2024

Tech billionaires are embracing extreme right-wing politics. It's not just to enhance their power, but to try to realize a harmful vision for humanity's future that could see humans merging with machines and possibly even living in computer simulations. Will we allow them to put our collective resources behind their science fiction dreams, or fight for a better future and a different kind of technology to go along with it? This is episode 4 of Data Vampires, a special four-part series from Tech Won't Save Us.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The show is hosted by Paris Marx. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.

Also mentioned in this episode: Postdoctoral candidate at Case Western Reserve University Émile P. Torres, features reporter at The Information Julia Black, Goldsmiths University lecturer Dan McQuillan, and former head of the Center for Applied Data Ethics Ali Alkhatib were interviewed for this episode. Pieces by Sam Altman, Marc Andreessen, and an interview with Elon Musk were cited.
Transcript
In 1999, The Matrix arrived on the scene, bringing a philosophically deep cyberpunk tale to a wide audience, just as the internet was infecting the cultural mainstream.
It introduced audiences to Thomas Anderson, better known as Neo, and a group of leather-clad dissidents trying to break humanity out of a computer simulation created by a race of intelligent machines.
The Matrix is everywhere. It is all around us, even now in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work, when you go to church, when you pay your taxes. It is the world that has been pulled over your eyes to blind you from the truth.
It's no overstatement to say the film was a hit, or that some of its themes have remained in popular discourse for the two and a half decades that have followed.
The notion of living in a simulation wasn't new. It had been a staple of science fiction for decades.
But the film came out at a moment when that possibility started to seem like something that might actually come to pass in the near future. Computers were improving, we were all becoming digitally connected, and the exuberance of the dot-com boom was still going full steam ahead. Almost anything seemed possible,
even that we might belong to a complex simulation ourselves.
The world of The Matrix was quite clearly presented as a dystopia. The whole plot revolves around escaping from the simulation, and in the later sequels, destroying it and the intelligent
machines once and for all so humans can reclaim their lives instead of being the batteries that
power the virtual world. Yet Silicon Valley doesn't seem to have gotten the message.
Elon Musk has talked about
how he thinks we live in a simulation, and the ideology embraced by far too many in the valley
openly welcomes such a future. In 2017, Sam Altman wrote a blog post about what he called
the merge. In his view, the merging of humans and machines wasn't an event like the singularity
that would happen all at once, but a process that had already begun with our dependence on our devices
and would eventually reach the point where we need to make a choice. Either we merge with the machines
or we get left behind by them. Quote, if two different species both want the same thing and
only one can have it, in this case, to be the dominant species on the planet and beyond,
they are going to have conflict. We should all want one team where all members care about the
well-being of everyone else, he wrote, before continuing with, quote, my guess is that we can either be
the biological bootloader for digital intelligence and then fade into an evolutionary tree branch,
or we can figure out what a successful merge looks like. There's no question that there are
commercial imperatives behind the push by the tech industry to massively expand computation
and put as many resources as they can muster into accelerating AI advancement. But there's also this
troubling worldview that shapes what they think the future will be and what should be sacrificed
to achieve it. Will we allow our world and ourselves to be sacrificed to pursue this future,
or will we try to stop them? This is your last chance. After this, there is no turning back.
You take the blue pill. The story ends. You wake up in your bed and believe whatever you want to
believe. You take the red pill. You stay in Wonderland. And I show you how deep the rabbit hole goes. This is Data Vampires, a special four-part series from Tech Won't Save Us assembled by me,
Paris Marx. Over the course of this series, we've learned more about hyperscale data centers,
the growing pushback they're facing around the world, and how the generative AI bubble is fueling
a building spree by major cloud companies. This week, to close off the series, we'll dig deeper
on the worldview shaping how some of the most powerful people in the tech industry see the
future, and why it needs to be opposed. This series was made possible by our supporters over on Patreon. And if you learned
something from it, I'd ask you to consider joining them at patreon.com slash tech won't save us so we
can keep doing this important work. In the coming weeks, Patreon supporters will get access to
premium full-length interviews with the experts I spoke to for the series. So with that said,
let's learn about these data vampires and finish driving a stake through their hearts. The idea that humanity might be living in a simulation didn't begin with
the Matrix, but it seems almost certain to have popularized the idea at the very moment that the
power and influence of computer programmers and those cozying up to them were rising.
Money was flowing into the tech industry, and that meant their ideas and visions of the future, even the more outlandish ones,
could get taken a bit more seriously than they had been in the past, especially when they started
to be positioned as a key part of their ambitious business plans. A few years after The Matrix was
released, philosopher Nick Bostrom started arguing not just that it was possible we live in a
simulation, but that it was even likely. If our species doesn't go extinct in the future and doesn't decide it shouldn't run simulations
of the past, then it will probably make a ton of them. And that means, in his view,
we probably live in a simulation. This is how Émile P. Torres, a postdoctoral candidate at
Case Western Reserve University and author of Human Extinction, explained it.
The reason for that third disjunct
is, okay, if we don't go extinct, we survive into the future, we do build this post-human
civilization, and there isn't some moral or legal or some other restriction that prevents
our future post-human descendants from running these ancestor simulations, then it's very likely that they will run a huge number of them.
Consequently, the number of simulated people in the universe could be far greater than the number
of non-simulated people. That leads to another question, which is, how do we know that we are not
in a simulation right now? Maybe you could say, well, we look around and things just
look very real. And Bostrom and others would say, well, actually, these ancestor simulations that
our post-human descendants will be running, they will be really high resolution. People in these
simulations will not be able to tell that they're in a simulation. The simulated world will be
indistinguishable from true reality. So if there's no way to empirically determine,
or there's no known way at this point to empirically determine whether or not we are
in a computer simulation right now, what do you do? Well, then he says what we ought to do is use
the principle of indifference, which just says that if you have no extra information, you should
just consider that you're just an average entity in the set. So if you do that, and you remember that there might be way more simulated
beings than actual real beings, it follows that we're much more likely to be one of the simulated
beings right now than one of the real beings. So let's take a pause here, because if you're encountering this for the first time, it might be a lot. And let's be clear: the idea that we live in a simulation is bullshit. But
this is how people like Bostrom, not to mention a lot of these tech billionaires really think.
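To spell out the arithmetic behind that indifference step, here is a minimal sketch in our own shorthand rather than Bostrom's notation: if $N_{\text{sim}}$ is the number of simulated minds that will ever exist and $N_{\text{real}}$ the number of non-simulated ones, treating yourself as a random member of that set gives

$$P(\text{you are simulated}) = \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}},$$

which approaches 1 whenever $N_{\text{sim}}$ vastly exceeds $N_{\text{real}}$, which is exactly the condition Bostrom's scenario assumes.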
What someone like Bostrom is saying is that if we believe it's possible that people in the future
might run simulations that a ton of simulated post-human beings inhabit, then we must assume
we could be in a simulation of the past created by some future version of humanity that has
completed its merge with computers or has been replaced with them, I guess. Here's how tech
genius Elon Musk described his reasoning for believing we almost certainly live in a simulation
at the Code Conference in 2016. I think here's the strongest argument for us being in a simulation,
probably being in a simulation, I think is the following.
40 years ago, we had Pong, like two rectangles and a dot.
That was what games were.
Now, 40 years later, we have photorealistic 3D simulations with millions of people playing simultaneously, and it's getting better every
year. And soon we'll have virtual reality, augmented reality. If you assume any rate
of improvement at all, then the games will become indistinguishable from reality.
Just indistinguishable.
Even if that rate of advancement drops by a thousand
from what it is right now,
then you just say, okay, well, let's imagine
it's 10,000 years in the future,
which is nothing in the evolutionary scale.
So given that we're clearly on a trajectory
to have games that are indistinguishable from reality,
and those games could be played on any set-top box
or on a PC or whatever,
and there would probably be billions of such computers
or set-top boxes, it would seem to follow that the odds that we're in
base reality is one in billions. Can you imagine that as recently as a few years ago, people used
to sit around and nod along as they listened to this guy ramble on about total bullshit,
thinking they were hearing the train of thought of one of the world's unparalleled geniuses? Shockingly, some people still think that, even after all that's happened. Émile told me Bostrom isn't as convinced as Musk: Bostrom puts the odds that we live in a simulation closer to 20 percent, nowhere near the one-in-billions chance of base reality that Musk claims simply because video games have gotten better graphics. But those ideas, that we may live in a simulation,
will one day merge with computers, and are headed toward this post-human future,
are not just appealing because they sound like the future as presented in science fiction movies
and novels, but also because of how divorced some of these rich folks in tech have become
from the real world. If you are Elon Musk, the simulation hypothesis might seem more plausible than it is for the rest of us.
Because he is in such an extremely improbable situation in the world.
As the richest person or one of the richest people in the world who has had all this massive success and influence and acquired all this power.
So you can imagine how somebody in his situation might
look around and go like, this is just so strange. I mean, it's even stranger than the strangeness
of the lives of an ordinary person. So the science fiction is one piece of this,
and the surreal nature of being a billionaire is another. But there's also the detachment many of
these people have from the average person and the experience of their lives. This doesn't just play out in Musk's delusions or obsessions, but also in an inability, or even an unwillingness, to think about how these grand plans that are supposedly for the future of
humanity will actually impact real humans. Julia Black is a features reporter at The Information,
and she spoke to Sam Altman for a story a year and a half ago. Here's one of the things she told me about talking to him.
In my conversation with Sam as a part of writing the piece, I became very fixated on trying to get
him to answer, you know, really tangible one foot forward questions about like, okay, but how is
this actually going to change life for your average American? Try to picture that person.
And he couldn't, to a shocking degree, he couldn't seem to wrap his head around that question.
And it seemed irrelevant to him.
It seemed, why would I care about your average American today?
We're talking about human civilization on a grand timescale.
If this supposed future isn't for regular people, who is it really for?
People like Altman or Musk will say it's for future generations.
But really, it's little
more than a series of obsessions held by tech billionaires and the people who worship them.
Obsessions that, as we've seen through this series, are increasingly causing harm to communities,
accelerating social inequalities, and making it harder to tackle the climate crisis. But that
future doesn't end with the notion of living in a simulation. It goes much deeper than that.
Years after Nick Bostrom repopularized the idea that we might be living in a simulation,
at least among a certain niche in the tech industry, he built on it with a much more
expansive detailing of the threats facing humanity and the path we must take to combat
them.
Those simulations rely on the assumption of intelligent machines. But in his 2014 book, Superintelligence, he laid out a scenario where
computers exceed the intelligence of humans, achieving artificial general intelligence, or AGI,
then determine we're a threat to their survival and decide to eradicate us or turn us into
paperclips. It's little more than a science fictional thought
experiment, but everyone from Musk and Altman to Bill Gates praised the book. Bostrom argues we need
to be on the lookout for existential risks to the human species and ultimately advocates a worldview
called long-termism that puts the long-term future of humanity before the more immediate concerns we
might face. The ultimate goal of long-termism is to realize this, to quote one
of the leading long-termists, Toby Ord, "vast and glorious" future, where we become post-human,
we go out, spread beyond Earth, colonize the universe, the accessible universe, and create
astronomical amounts of value. It's a very kind of economical way of thinking about the future. I mean, I've said
before that for long-termists, morality is to a large degree essentially reduced to a branch of
economics. Long-termism might initially sound like a good thing. It brings to mind long-term
thinking, something we often acknowledge our leaders don't do enough of. But as Emil describes,
long-termism goes far beyond that, instead advocating that
our focus should be on what humanity might be like in thousands or millions of years,
and making significant sacrifices in the present based on fantasy scenarios like space colonization
and even the notion of building vast computer systems on faraway planets where digital beings
will live, you guessed it, in massive simulations. How exactly do we maximize value? Well,
long-termism is greatly influenced by a theory in ethics called utilitarianism.
And utilitarianism is very much like capitalism. With capitalists, it's about maximizing profit.
With utilitarians, it's about maximizing value in a slightly different sense.
It's not money, but something that has intrinsic rather than just instrumental value.
So what we need to do, as I mentioned before, go out and colonize space, create a sprawling
multi-galactic civilization full of trillions and trillions of people.
Now, there's one last important step to this line of thinking, which is that we could go
out and colonize space maybe as biological beings, but there's a certain carrying capacity to any
given planet. So there's an upper limit to the number of biological beings that could reside
on these planets. Let's say we go out and colonize some solar system, and rather than terraforming the
planets that are circling around the sun, we just convert those planets into planet-sized computers
made out of computronium running virtual reality worlds. Well, you can cram more digital people
per unit of space than you can biological people. So long-termism therefore suggests that we need to
go out and colonize space and become digital beings to build these massive computer simulations
that are full of trillions and trillions and trillions of digital beings supposedly living
happy lives, because that is the way you get the greatest number of people and consequently are
able to truly maximize the total amount of value in the universe.
That may sound completely wild, because it is.
When you hear Elon Musk talking about the need to build a multi-planetary civilization,
or why population decline is an existential threat, it's these ideas that are ultimately behind the arguments he's making.
It allows these billionaires to believe they're building the sci-fi future they dreamed about
in their youths, but also provides them with a supposedly moral justification for hoarding vast amounts of wealth to spend on
AGI and space colonization dreams while people go hungry, are without homes and proper health care,
and the effects of the climate crisis keep getting worse, in part because of the demands
created by trying to realize this future in the first place. In fact, a lot of long-termists even argue the climate crisis is not an existential threat, because even if warming happens far beyond two degrees and the human population declines, over the long term they do not believe humanity will be fully wiped out, and that it will be able to rebuild.
It's a perverse and immoral way to look at the world,
but one they've convinced themselves is justified.
I think the reality of running a country, running a society, an economy is everything is about a
decision of where to allocate resources, where to allocate capital, where to allocate our thinking,
our human capital. And the conversation that I've seen in Silicon Valley, honestly,
even more in the last year, I didn't think it could get more extreme on this front, but it's just been reoriented to this thinking around all of our
resources need to be devoted to these extreme possibilities. So rather than taking care of,
you know, the society at large, like what we need to be caring about and thinking about is the cutting edge of innovation, the frontier, these far out possibilities that are much more resource intensive, by the way.
I think that there's something very significant actually to the fact that Silicon Valley has become so isolated and so removed from the realities of life for most of society.
As Julia says, we've found ourselves in this present reality where the far-out dreams of tech billionaires are taking priority over the real needs of average people. Those dreams serve to erode the power people have over their own lives and to present false technological solutions to problems so political action doesn't have to be taken, all while the
social and environmental crises continue to get worse. You can clearly see how these long-termist
visions relate back to the issues we've been talking about through this series. Tech billionaires
like Musk and Altman are obsessed with AI, and AGI in particular, the version that isn't just
tech being wielded by humans, but tech that begins to think for itself.
Because they want their grand science fictional dreams to come true and will sacrifice virtually anything to try to make them a reality.
Which is why Altman in particular is so determined to see larger and larger data centers being built regardless of the energy or water they need or the broader impacts on the communities they're built in. But it also serves the commercial functions we discussed with Cecilia
Rikap and Dwayne Monroe, where it allows major tech companies to continue expanding their power
and sticking internet connectivity, digital technology, and cloud solutions in places
they're not truly needed. Julia made that link quite explicit in our conversation when talking
about the contrast between the utopian and dystopian AI futures on offer.
I do think that it's important to remember that what both these extremes, these polar
options have in common is that they're fantastical.
None of this is dealing in the tomorrow or the tangible or the what might really be possible
or happen.
It's dealing in these theoreticals, these hypotheticals
that I think are very useful when you're also asking for multiples of money, compute, energy
that are pretty much fantastical. I mean, he's talked about $7 trillion needed for data centers.
The "he" there is none other than Altman, who's doing everything he can, from
searching for capital around the world to courting corporate partners like Microsoft alongside the
White House to support his vision. But there's one big thing that might stand in the way of these
billionaires achieving their dreams and being able to force the cost of them on the wider society.
That's the fact that for all the money and power they have, they still operate in democracies. At least for now.
Marc Andreessen is pretty typical of today's tech billionaires, someone who claims his wealth is a
product of his genius when it's much more a product of luck. He's an incredibly influential
venture capitalist who often has a hand in the rise and fall of new tech bubbles. His venture
capital firm Andreessen Horowitz plowed a ton of money into crypto, for example. Thirty years ago, he was working at the
National Center for Supercomputing Applications at the University of Illinois, where he developed
a web browser called Mosaic with Eric Bina. But as the internet headed toward commercialization
in the early 1990s, eventually being privatized in 1995, Silicon Graphics co-founder Jim Clark saw
there was an opportunity to cash in instead of just seeing the web browser as a university
research project. In 1994, he recruited Andreessen, brought in some deep-pocketed investors, and they
started Mosaic Communications Corporation and released the Mosaic Netscape 0.9 web browser,
just as the dot-com boom was taking off. The company was later renamed Netscape, and its browser became Netscape Navigator, to remove the association with the publicly financed
original. In 1998, before the dot-com crash, AOL bought the company for $4.2 billion,
giving Andreessen his path to further wealth and influence. Andreessen has been a prominent figure
in Silicon Valley ever since, often championing its influence and asserting it should flex its power over U.S.
society more strongly, even though his wealth has unquestionably distanced him from average
people over the decades. In April 2024, the author Rick Perlstein recounted being invited
to one of Andreessen's seven mansions for his book club in 2017. When Perlstein started talking about the
benefits of small-town life, Andreessen interjected with a quite heartless statement. Quote,
I'm glad there's OxyContin and the video games to keep these people quiet, Andreessen said,
according to Perlstein. As criticism of major tech companies and threats of regulation and
higher taxes by government have increased in recent years, Andreessen hasn't reacted well at all. He's been a vocal figure in the tech industry's embrace of extreme right-wing
politics, and in October 2023, he made that clear when he published the Techno-Optimist Manifesto
on his venture capital firm's website. The 5,000-word manifesto was unsurprising in many
ways, but intriguing, if not concerning, in others. It asserted that technology is the
only way to solve the world's problems, and that anyone or anything that stands in the tech
industry's way is holding back the whole of humanity, a very convenient conflation for a
venture capitalist. He wrote, quote, technology is the glory of human ambition and achievement,
the spearhead of progress, and the realization of our potential. For hundreds of years, we properly glorified this, until recently. He wasn't shy about naming enemies
either. It was no surprise to see communists and Luddites on his list, but he also wrote that
society was being harmed by calls for sustainability, social responsibility, trust and safety,
and tech ethics. Few people other than Silicon Valley billionaires and their hangers-on could
agree with that. He even claimed those who stood in the way of AI could be seen as engaging in a
quote, form of murder because of all the lives he believes AI will supposedly save.
Andreessen and many of these other tech billionaires have never been able to accept
that they got to where they are through luck, being in the right place at the right time,
the right industry as it was skyrocketing, and lucking out by cashing in on one of the early internet booms. They had to
convince themselves and the world it wasn't luck, but skill, that the meritocracy was at play and
their wealth means they are also some of the smartest people on the planet. When you believe
something like that, it's no wonder you'll start getting angry when those you deem inferior start
trying to stand in your way. Andreessen's manifesto embraces technocracy, the notion that experts and engineers
should be in charge of society, and explicitly praises a number of fascists, including Filippo
Tommaso Marinetti, the founder of Italian futurism and an ardent supporter of Italian far-right
leader Benito Mussolini. Other billionaires, like Peter Thiel and some of his crew,
have openly opposed democracy itself.
Here's what Julia Black had to say about that when I spoke to her.
One thing that's standing in the way of those people becoming empowered,
as they see it, to do what they want to make this technological future happen
is democracy, is the fact that, as you say,
most people couldn't even
begin to relate to this stuff, certainly wouldn't vote for it, certainly wouldn't,
you know, opt into most of it. And so how do you solve that problem? You remove the obstacle of
democracy and the need for a majority buy-in. The ideologies increasingly taking hold among
the Silicon Valley elite
are profoundly anti-democratic
and they have strong allies
in the far-right movements
growing well beyond the tech industry.
Long-termism, techno-optimism,
and these other worldviews
assert that our future should be shaped
by the billionaires who've made their fortunes
over the last few decades
and that the rest of us should silently accept
the consequences of those decisions,
regardless of what it means for workers' rights, the environment, and the social progress they
seem so desperate to roll back. This is impossible to disconnect from the effort to proliferate AI
through society, expand the digital surveillance apparatus, mediate as many of our interactions as
possible through digital platforms, and build massive data centers the world over to power it,
regardless of the resources needed to operate them. Goldsmiths lecturer Dan McQuillan made these connections quite explicit.
We are living in a time when far-right political ideas, proposals, ideologies, understandings
are on the rise. If we've got a techno-political understanding of what's going on, we understand
that the technology and the politics are not separate, but are really sort of co-productive, then we should at least step back and question
if our amazing new technology that seems to be so facile as a product of our moment has anything to
do with what else seems to be on the rise. Far-right politics is used to divert from underlying social structural injustice and inequality,
and so is AI. And that should be a profound concern to anybody who's advocating for AI,
because its social function at the moment is very overlapping with the social function of
far-right ideologies. Even before you get to the point, which is actually happening at the moment,
where those very same actual fascist movements are turning to AI and going, yeah, this could be really great.
And anyone who's building a large-scale AI mechanism is building a machine for that. AI already works through forms of heightened abstract cruelty, through these mechanisms, and that sort of pre-prepares it for these more fascistic political movements. So I tend to say AI has a tendency towards fascistic solutionism.
In theory, and even in practice, AI technologies can be deployed to do
some good and helpful things. But Dan's argument gets to a deeper point. Technology is not neutral.
It's inherently political, given that the resources that fuel its development and the use cases it's put to are shaped by the politics of the world that surrounds it. On net, we see AI being deployed
in obscene and incredibly harmful ways, denying social supports to people, discriminating in
immigration systems, targeting people in wars like in Israel's ongoing campaign in Gaza and beyond.
And on a broader scale, creating a world where people are constantly ranked and decisions are
increasingly made by opaque algorithms we have little control over. The PR and media coverage
is all too often focused on the potential beneficial applications of AI, many of which
are inflated if not wholly fictional, yet the consequences are far greater and don't get nearly the spotlight shone on them, because that doesn't work for those developing it.
And as they embrace anti-democratic and increasingly socially conservative politics,
and the wider society shifts in that direction too, do we want more powerful AI to be in these
people's hands? If these technologies are deployed into the world and do cause harm to people,
how do we respond? The discussion often turns to regulation and how it can be deployed to rein in the worst excesses
of the tech industry. But increasingly, it feels that's not enough, especially when it becomes
clear how much tech companies have deployed their vast war chests, lobbying power, and the aura
of the tech industry to shape regulation in their favor. More and more often, discussions are going
beyond that to places the tech industry clearly doesn't want us considering. But Ali Alkhatib, former head of the Center for Applied Data Ethics, argued destroying some technology should be in the realm of possibility.
Sometimes a person will encounter an algorithmic system and it is not going to stop hurting them
and they will not be able to escape the system. And given those two facts,
I think it's pretty obvious that it is reasonable to start dismantling the system to destroy it.
And I'm not saying we should necessarily destroy everything that has silicon in it or something
like that, although I'm sure there are probably people that would argue that and I'd be happy to
hear them out. But it doesn't seem radical to me to say, if you can't leave a system,
if the system is harming you, if you can't get it to stop hurting you, there really aren't that many other options.
I think it's reasonable to say you don't have to take it.
You don't have to continue to be harmed.
And if it forecloses on all of the other possible avenues that you have, then one of the avenues that we sometimes don't like to talk about is to start destroying the system.
The growing campaign against the expansion of hyperscale data centers is about water.
It's about energy.
And it's about the mineral resources that go into the chips that power them.
But it's also about something much greater.
The questions of what society we're building, who it ultimately serves,
and who gets to be meaningfully involved in determining our future.
Do we leave it to
sociopathic tech billionaires or fight to reclaim that power for ourselves?
Data centers are such a signifier of a broader set of material relationships,
clearly linked to broader concerns about extractivism. The number of data centers goes
up, the number of servers goes up, the amount of energy goes up, the amount of cooling water. Look at the full range
of values of our time that they embody, the absolute obsession with growth, the absolute
sedimentation of brutal asymmetric global relations. The idea that this is all a legitimate way of addressing
our most fundamental problems is just a massive diversion. That's Dan again. And what he says
there is important to consider. I think regardless of the path we choose, we will require some form
of data storage and processing. But the current path that companies like Amazon, Microsoft,
and Google have us on is fueled by commercial interests and broader ambitions that are completely divorced, not just from what's necessary to build a better world for most people on it, but also from the very real constraints we face. In Frank Herbert's Dune series, there's an event called the Butlerian Jihad, which involves a war to destroy the so-called thinking machines, meaning digital computers are no more.
I don't think our ambition is or should be to go that far, but that doesn't mean we can't question what kind of technologies are appropriate and when it's right to use them.
Here's Ali one more time.
Using technical systems to help us make sense of complex problems is not something that I'm categorically against. I think that computational systems can be great ways of trying to draw
comparisons between two fundamentally different things. But I think that when that system starts to become overly
decisive, or to have an outsized weight in the influence it has in making decisions about
consequential things, then it becomes obviously much more harmful and much more problematic and
dangerous. And again, I think it comes back to this question of like, do people consent to
the influence that the system has over this particular decision about my life? In a lot of
ways,
tech companies find ways to claim that people are not stakeholders in decisions that are about them,
or they find ways to say, well, this person's just not informed enough or too stupid or whatever to make an informed decision that is for the benefit of society or whatever.
It's long past time we had a discussion about our collective future and the role digital technology should play in it, one that's not hijacked by science fiction deceptions about colonized planets and AI servants. That's a discussion the tech industry doesn't want us to have. These massive
tech companies can seem unstoppable, but their executives' embrace of an extreme right-wing
politics is in large part a result of feeling their power being threatened,
not just by government action, but by a public that is turning against them. Around the world,
the push to build more data centers to serve the commercial needs of major tech companies
and the ambitions of the billionaires who control them continues. But there is also
successful opposition, things like temporary moratoriums in Ireland and Singapore, or
projects being stalled or defeated
in Chile and the United States. And that's on top of the broader regulatory and enforcement
efforts that are spreading as countries and their citizens get fed up with the abuses of
tech companies that feel themselves to be above the law. In more and more countries,
people are demanding the return of their digital sovereignty. The hype around generative AI and
the data center buildout it's fueling have not only put a spotlight on the vast material costs of the future Silicon Valley is trying to build, they've also shown how the promise of the internet
revolution has been squandered by people who've been blinded by wealth and power. Now we have to
ask ourselves: should they be charting humanity's course, or is it time we collectively take that power back from them? Do we need to sacrifice so much to build out as much storage and server capacity as the tech industry's constant need for growth demands? Can we imagine pushing back against the system they've constructed, stopping their attempt to have digital interfaces mediate so many of our
interactions, and rejecting their plan to expand their control by rolling out algorithmic decision
making in as many places as they can get away with? Not only do I think we can, I think we should.
A better future is possible, and so is a different vision of technology. But it won't be won without
a fight, and their massive hyperscale data centers are a great place to start. This series was made possible by support from our listeners at patreon.com slash tech won't save us. In the coming weeks, we'll also be uploading the uncut interviews with some of the guests I spoke to for this series exclusively for Patreon supporters, so make sure to go to patreon.com slash tech won't save us to support the show. And thanks for listening to Data Vampires.