Tech Won't Save Us - Data Vampires: Sacrificing for AI (Episode 3)
Episode Date: October 21, 2024

Sam Altman is clear: he's ready to sacrifice anything for his AI fantasies. But are we? We dig into why generative AI has such extreme energy demands and how major tech companies are trying to rewrite climate accounting rules to cover how much their emissions are rising. AI isn't just churning out visual slop; it's also being used to transform how our society works and further reduce people's power over their lives. It's a disaster any way you look at it. This is episode 3 of Data Vampires, a special four-part series from Tech Won't Save Us.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The show is hosted by Paris Marx. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.

Also mentioned in this episode: Hugging Face Climate Lead Sasha Luccioni, Associate Professor in Economics Cecilia Rikap, former head of the Center for Applied Data Ethics Ali Alkhatib, Goldsmiths University lecturer Dan McQuillan, and Director of Research at the Distributed AI Research Institute Alex Hanna were interviewed for this episode. Interviews with Sam Altman and Brad Smith were cited.
Transcript
We do need way more energy in the world than I think we thought we needed before.
And I think we still don't appreciate the energy needs of this technology.
That's Sam Altman, the CEO of OpenAI, speaking to Bloomberg back in January at the World Economic Forum.
In that interview, he was lightly pressed on the climate cost of his generative AI vision.
And he was remarkably honest.
The future he wants to realize is one that will require an amount of energy that's hard to even fathom. And all that energy needs to come on stream in record time at the same moment we're supposed to be phasing out fossil energy in favor of less emitting alternatives like solar, wind, hydro, or, in some people's minds, a ton of nuclear energy. The good news, to the degree there's good news, is there's no way to get there without a breakthrough. We need fusion or we need like radically cheaper solar plus
storage or something at massive scale, like a scale that no one is really planning for.
So it's totally fair to say that AI is going to need a lot of energy, but it will force us,
I think, to invest more in the technologies that can deliver this,
none of which are the ones that are burning the carbon. The way Altman talks about the
massive energy demands his AI ambitions are creating is typical of tech billionaires.
To them, the climate crisis is not a political problem, but simply a technological one.
And we need not worry because our technocratic overlords will deliver a breakthrough in energy
technology so they can continue doing
whatever they want, regardless of whether it makes any real sense to do so. Even though Altman refers
to this as good news, it's hard to see it that way. He's basically acknowledging that warming
far beyond the 1.5 or 2 degrees Celsius limit we're supposed to be trying to keep to is essentially
locked in because of industries like his own, unless they come up
with a technological breakthrough in time. There's no guarantee that will happen. And in fact, it's
highly likely it won't. That's why so many of their scenarios assume we're going to overshoot
on emissions, but hope we'll be able to use some future technology to pull all those greenhouse
gases back out of the atmosphere. And again, another massive gamble with the planet and
everything that lives on it. But in the interview, Altman wasn't just candid on how much energy
the widespread rollout of generative AI will require, but also about that more grim scenario.
I still expect, unfortunately, the world is on a path where we're going to have to do something
dramatic with climate, like geoengineering as a band-aid, as a stopgap.
But I think we do now see a path to the long-term solution.
Altman and his fellow AI boosters want us to gamble with the climate to such a degree
we have to try to play God with weather systems.
All so they can have AI companions and imagine that one day they might be able to upload
their brains onto computers.
It's not only foolish, it verges on social suicide. And I don't think that's a trade-off that many people would openly accept. But it's being made for us regardless of whether we want it or whether it will even make the lives of most people any better. In this week's episode, we'll be digging into how generative AI hype is accelerating the data center buildout
and presenting a series of threats from worsening climate catastrophe to further social harms
that will only become more acute the longer this is allowed to continue.
This series is made possible by our supporters over on Patreon.
And if you learn something from it, I'd ask you to consider joining them at patreon.com slash tech won't save us so we can keep doing this important work. New episodes of Data Vampires will be published every Monday of October, but Patreon supporters can listen to the whole series today. Then enjoy premium full length interviews with experts that will be published over the coming months. Become a supporter at patreon.com slash tech won't save us today. So with that said, let's learn more about these data vampires. And
by the end, maybe we'll be closer to driving a stake through their hearts.
Since the release of ChatGPT in November of 2022, talk of artificial intelligence or AI has been
everywhere. Let's be clear, AI is not a new thing. The term has been in use for decades and has
referred to different things since then. When you type a message on your phone and the keyboard suggests the next word,
or when you're putting together a document in Microsoft Word and a squiggly line appears beneath
the word to tell you it's spelled wrong, that's AI too. It's just not the same kind of AI as what
powers the chatbots and image generators that are all the rage today. That's generative AI,
and it's what's fueling a lot of these problems. Sasha Luccioni is the climate lead at Hugging Face.
And I asked her why this new generative AI is so much more computationally intensive.
This is what she told me.
If you compare a system that uses, I guess, extractive AI or good old-fashioned AI,
to search the internet and find you an answer to your question,
it's essentially converting all these documents, all these like webpages from words to numbers.
And when you're searching for a query, like, I don't know, what's the capital of Canada, it will also convert that
query into numbers using the same system. And then matching numbers is like super efficient.
This stuff goes really, really fast. It uses no compute at all. It's like you can run on your
laptop, you can run anywhere. But if you're using generative AI for that same task, instead of
finding existing text numbers, it's actually generating the text from scratch. And I guess the
advantage, quote unquote, is that instead of just getting Ottawa, you'll get like maybe a full
sentence, like the capital of Canada is Ottawa. But on the flip side, the AI model is generating
each one of these words sequentially. And so like the longer the sentence, the output, the more
compute it uses. And, you know, when you think about it for tasks, especially like question
answering, like finding information on the internet, you don't need to make stuff up from
scratch. You don't need to generate things. You need to extract things, right? So I think
fundamentally speaking, what bothers me is that like we're switching from extractive to generative
AI for tasks that are not meant for that. So basically there's a lot more work that goes
into generating text or images
than simply trying to identify what you're looking for. These generative AI tools are
built on general purpose models that were trained on almost any data these companies could get their
hands on, often by taking it off the open web. That includes everything from Hollywood movies
and published books to paintings and drawings made by all manner of artists, and even many of
the things you or I have posted on social media and other parts of the web over the years. And the vast majority of that
data was taken without anyone's permission. Now, it forms the foundation of the AI tools and models
that kicked off all this hype, and that have companies of all sorts rushing to adopt generative
AI and push it onto regular users, regardless of whether it's really necessary for the task
they're trying to accomplish. And that all comes with a cost. When you're switching between a good old-fashioned
extractive AI model to a generative one, like how many times more energy are you using? We found
that, for example, for question answering, there's like 30 times more energy for the same task, for
like answering a question. And so what I really think about is like the fact that so many tools
are being switched out to generative AI, like what kind of cost does that have? Someone
recently was like, oh, I don't even use my calculator anymore. I just use ChatGPT. And I'm
like, well, that's probably like 50,000 times more energy. Like I don't have the actual number,
but you know, like a solar powered calculator versus like this huge large language model.
Nowadays, people are like, I'm not even going to search the web. I'm going to ask ChatGPT. I'm not going to use a calculator, right? All of that, what's the cost to the planet?
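To make Sasha's distinction a little more concrete, here's a rough, hypothetical sketch. It isn't the code behind any of the systems discussed here; the documents, vectors, and "model call" counts are invented stand-ins. The point is only the shape of the work: an extractive lookup does one cheap comparison against numbers computed ahead of time, while a generative answer pays for a full pass of the model for every single word it produces, which is part of why longer outputs mean more compute.

```python
# A rough, hypothetical sketch (not the code behind any real system): the documents,
# vectors, and "model call" counts are invented stand-ins for how retrieval and
# generation differ in the amount of work they do per answer.
import numpy as np

rng = np.random.default_rng(0)
DIM = 384  # an arbitrary, illustrative embedding size

# Extractive setup: every document is converted to numbers once, ahead of time.
doc_texts = ["Ottawa is the capital of Canada.", "Paris is the capital of France."]
doc_vectors = rng.standard_normal((len(doc_texts), DIM))  # stand-ins for real embeddings

def extractive_answer(query_vector):
    """Match numbers to numbers: one embedding pass, then a cheap similarity lookup."""
    scores = doc_vectors @ query_vector           # a single matrix-vector product
    return doc_texts[int(np.argmax(scores))], 1   # roughly one "model call" per query

def generative_answer():
    """Generate the answer word by word: one full model pass per word of output."""
    canned_output = ["The", "capital", "of", "Canada", "is", "Ottawa", "."]
    model_calls = 0
    for _ in canned_output:    # a real LLM would pick each word by probability here
        model_calls += 1       # every extra word costs another pass through the network
    return " ".join(canned_output), model_calls

query_vector = rng.standard_normal(DIM)  # stand-in for embedding "capital of Canada?"
_, extractive_calls = extractive_answer(query_vector)
_, generative_calls = generative_answer()
print(f"extractive model calls: {extractive_calls}")  # constant, however long the answer
print(f"generative model calls: {generative_calls}")  # grows with the length of the answer
```

Run it and the extractive path reports a single model call no matter the question, while the generative path reports one call per word of the answer.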
And for all that energy, there's no guarantee the outcome is even going to be better or more
accurate. As Sasha explained to me, these tools operate not based on understanding,
but probabilities. Again, think of when the keyboard on your phone is suggesting the next word.
It doesn't know what you're doing. It's using probabilities based on the data it has to see
what word has the highest likelihood of coming after what you've already written.
That's why we so often see examples of ChatGPT and other chatbots generating completely incorrect outputs.
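If it helps to see the "probabilities, not understanding" idea laid bare, here is a toy next-word predictor, purely hypothetical and boiled down to counting which word most often followed the last one in a tiny made-up corpus. Real language models learn those statistics with enormous neural networks trained on enormous datasets, but the underlying move, picking a statistically likely continuation, is the same.

```python
# A toy next-word predictor (purely illustrative, with a made-up corpus): it "knows"
# nothing about the world, it just counts which word most often followed the last one.
from collections import Counter, defaultdict

corpus = "the capital of canada is ottawa . the capital of france is paris .".split()

# Count, for each word, what came next in the training data.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def suggest(word):
    """Return the most frequent next word, like a phone keyboard's suggestion."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(suggest("capital"))  # -> "of", the statistically likeliest continuation
print(suggest("is"))       # -> "ottawa" here; it ties with "paris" and falls back to
                           #    whichever was counted first, not to any understanding
```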
There's no real understanding there, despite how often tech CEOs try to make us believe their large language models are on the cusp of sentience like a human being. But for those generative AI tools to work, they need a ton of computation, which is why Sam Altman says we either need a technological
breakthrough in energy technology or to start geoengineering the planet. The notion of scaling
the tech back is unacceptable. But there are only a small number of massive companies that have
access to anywhere near the amount of computation needed to properly compete in the generative AI game,
which is why the massive tech companies,
especially Microsoft and Google, have become so involved. They're not only providing the cloud
infrastructure to power the generative AI hype, they're also making sure they have a lot of
influence over the startups finding success in this financial cycle. Here's Cecilia Rikap,
the University College London professor from the first episode in the series, explaining how that
works. In 2019, Microsoft decided to invest $1 billion in OpenAI. Of course, Microsoft,
with all the profits it makes annually, has a lot of liquidity and can decide to invest in
many different things. But big tech in particular have decided to pour a lot of money into the
startup world as corporate venture capitalists. So Microsoft did this with OpenAI,
but the main motive is not financial.
It's not that they want to make more money
just like by investing in the company,
but the way to make more money,
it's actually about how OpenAI is developing technology,
what technology OpenAI was working on
and how Microsoft can steer that development.
And by doing it,
you can eventually get access to that technology earlier. So you can adopt it earlier as Microsoft
did with OpenAI, but you eventually may also be able to make extra profits if the company you
invested in is successful and starts developing a business. In early 2023, Microsoft invested
another $10 billion into OpenAI, but Semafor reported
months later that a significant portion of that investment wasn't in cash, but credits
for Microsoft's Azure cloud computing platform, what OpenAI needed to train its models and
run its business.
The company is reportedly losing $5 billion a year, but can continue to operate because
of the support of powerful and deep-pocketed benefactors like Microsoft.
On top of that, Microsoft, Amazon, and Google have effectively raided the talent at Inflection AI, Adept AI, and Character AI, respectively, to the degree that regulators are investigating them.
Meanwhile, Amazon and Google have both put billions of dollars into Anthropic,
and Microsoft has an investment in Mistral AI. This ensures that on its face, the AI ecosystem looks like
there are a bunch of new tech companies rising, but those companies are still completely dependent
on the dominant players, not just for funding, but also for computation. There's one more angle
of this that Cecilia pointed out to me, though. Yes, generative AI is dependent on the centralized
computation of major cloud providers, but the hype around it and the perception that if companies
adopt it, they'll see their share prices rise, has accelerated its adoption. And by extension,
the demand for computation and the energy and water needed to run all those data centers.
Just because everyone is talking about AI these days, as a big company, you don't want to be
left out. Basically, what has happened is a much faster adoption, not only of generative AI, but widely of the cloud and widely
of all the different forms of AI. And because behind all this, we have the power of Amazon,
Microsoft, and Google, not only because of the cloud, but also because they have been investing
as venture capitalists in pretty much every single AI startup in the world.
They keep on expanding not only their profits, but also their
control over capitalism at large. So in a way, it has its own specificities. But if we want to put
it just in a nutshell, it has fast-forwarded something that was cooking from way before.
So in short, the AI boom isn't just creating the stock market bubble and allowing companies like
OpenAI to rise up the ranks with the support of the existing dominant tech firms. The growth of
generative AI isn't a challenge to companies like Amazon, Microsoft, and Google. It further cements
their power, especially as other companies, non-tech companies, adopt it. Because every time
they do so, they're becoming more dependent on the cloud businesses of those three dominant firms,
further increasing their power, their scale, and driving a further build-out of major data centers across the world.
And, as we've touched on in the previous episode, all of that comes with a massive environmental
impact. For quite some time, tech companies have wanted to be seen as green. In the picture they
painted, digital technology was clean and green, the sustainable alternative to the dirty, polluting industrialism of the past.
That was always more a marketing campaign than reality, though, as the internet doesn't emerge
out of nowhere. All the technologies that underpin it have serious material consequences that create
plenty of emissions and environmental damage of their own. But as efforts were ramping up to
tackle the climate crisis, they wanted to keep that image alive. The most ambitious thing we're saying today is,
as you just mentioned, we will be carbon negative as a company by 2030, not just for our company,
but for our supply chain, for our so-called value chain. And by 2050, we will remove from the
environment all of the carbon that Microsoft has emitted, either directly or for electrical consumption, since we were founded in the year 1975.
That's Brad Smith. He's the president of Microsoft.
And that clip is from an interview he gave to Bloomberg back in January of 2020.
Microsoft was rolling out a new climate pledge.
It would not just achieve net zero emissions, but become carbon negative within a decade.
The company called this a carbon moonshot, indicating it was ambitious, but a goal they thought they could
achieve. Well, that was before generative AI became the next big thing that virtually everyone
in Silicon Valley felt they had to chase and that Microsoft saw could significantly expand
its cloud business. Here's Brad Smith again, this time in May 2024.
You know, in 2020, we unveiled what we called our carbon moon
shot, our goal of being carbon negative by 2030. That was before the explosion in artificial
intelligence. So in many ways, as I say across Microsoft, the moon has moved. It's more than
five times as far away as it was in 2020, if you just think about our own
forecast for the expansion of AI and its electrical needs. Yes, you heard that right. The moon had
moved five times farther away in just a few years. That was a generous way of saying Microsoft's
climate pledge was sacrificed on the altar of market ambition. Between 2020 and 2023,
Microsoft's emissions were nowhere near going negative. They'd actually soared by 30%,
in large part because of all the data centers it was building, and continued to build through 2024.
Google wasn't any better. Despite making a carbon neutrality pledge of its own,
it announced earlier this year that its emissions were up 48% over just five years,
once again fueled by data centers.
I'm just worried that once all the dust settles, if the dust settles, if there's no new paradigm
that gets invented in the meantime, that we're going to look back and be like, oh, oops,
that was a lot more carbon than we expected. And I mean, historically, as a species, we have a
tendency to do that, like retroactively look back and be like, oh, this was worse for the planet than we expected.
There are already signs Sasha's worries may be coming true.
In September, The Guardian looked over the emissions figures of the major tech companies and found what they were reporting didn't reflect what the numbers actually showed.
The collective emissions of the data centers controlled by Microsoft, Google, Meta and Apple were 662% higher than what the companies
claimed. When Amazon's data centers were included, the combined emissions of those five companies'
facilities would make them the 33rd highest emitting country in the world, just ahead of
Algeria. And that's just for their data centers that existed up to 2023. But why can these companies
claim to emit so much less than they really do? One expert the Guardian spoke to called it a form of creative accounting.
Basically, they buy a bunch of offsets and act as though having done so means their emissions
have been negated.
Probably the most important of those tools are renewable energy certificates, which show they've bought renewable energy that may have been produced at another time of day or on the other side of the world.
As long as it was generated somewhere,
the companies use it to pretend they didn't actually generate the emissions they very much did add to the atmosphere. And some tech companies are lobbying hard to ensure the rules on carbon
accounting are rewritten to make it look like they're emitting way less than they really are.
According to reporting by the Financial Times, Amazon and Meta are leading the charge to ensure
their deceptive accounting mechanisms are legitimized by the Greenhouse Gas Protocol, which is an oversight body for
carbon accounting. Google is pushing a competing proposal that would force companies to at least
buy renewable certificates that are closer to where they're actually operating, but still relies
on offsets at the end of the day. Companies like Amazon say even that would be too expensive.
Matthew Brander, a professor at the University of Edinburgh,
who spoke to the Financial Times,
gave a pretty good example to show
why this is all so ridiculous.
He said, allowing companies to buy renewable certificates
is like if you paid a fitter colleague of yours
for the right to say you bike to work
when you really drove your gas-powered car.
It's foolishness, but this is how they're planning
to keep expanding their data center networks
while claiming they're reducing, if not eliminating, their emissions.
It's a recipe for disaster on a global scale.
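To see how much daylight that kind of accounting can create, here is a deliberately simplified sketch with invented numbers; it is not any company's real books. It only contrasts what the local grid actually emitted to power a facility (roughly what carbon accountants call the location-based method) with the figure you get once purchased certificates are netted off (the market-based method), which is the gap The Guardian's analysis points to.

```python
# A simplified, hypothetical illustration of the accounting gap described above.
# All numbers are invented; the point is the method, not any company's real figures.

data_center_use_mwh = 1_000_000        # electricity actually consumed in a year
local_grid_intensity = 0.4             # tonnes of CO2 emitted per MWh on the local grid
certificates_bought_mwh = 1_000_000    # renewable energy certificates purchased,
                                       # possibly for power generated far away

# "Location-based" accounting: emissions from the electricity actually drawn.
location_based_tonnes = data_center_use_mwh * local_grid_intensity

# "Market-based" accounting: certificate-covered consumption counts as zero-carbon,
# even though the local grid still burned whatever it burned.
uncovered_mwh = max(data_center_use_mwh - certificates_bought_mwh, 0)
market_based_tonnes = uncovered_mwh * local_grid_intensity

print(f"location-based: {location_based_tonnes:,.0f} tonnes CO2")  # 400,000
print(f"market-based:   {market_based_tonnes:,.0f} tonnes CO2")    # 0
```

Same facility, same power draw, same grid, and the reported number drops to zero once enough certificates are bought.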
We've talked a lot about why AI is using a ton of computation and further fueling the climate crisis, but what is all the compute we're putting into it really achieving?
Maybe there's a world where all those resource demands are justified because the benefits are so great. And indeed, that's what tech CEOs like Sam Altman
or supposed luminaries like Bill Gates would have us believe. But the truth is that the rollout of
this technology only presents a further threat to much of the public. We're used to hearing about
AI as forming the basis for a series of tools that can do all manner of tasks. But I was struck by
how two of the people I spoke with described the broader project that AI seems to be part of when you consider who is developing it
and how it's actually being deployed. Let's start with Ali Alkhatib. He used to be the head of the
Center for Applied Data Ethics at the University of San Francisco. When I asked him how he would
describe AI, he began by noting how the term itself is decades old, but there was a troubling
through line between its various permutations over the years. I think the thing that we would all
recognize all the way through continuously is the techno-political project of taking decisions away
from people and putting consequential life-changing decisions into a locus of power that is silicon or that is automated or something
along those lines, and redistributing or shifting and allocating power away from collective and
social systems and into technological or technocratic ones. And so this isn't really
like a definition of AI that I think a lot of computer science people would appreciate or
agree with. But I think it's the only one that, again, if you were a time traveler, kind of like going back 20 years and
then 20 more years and then 20 more years, you would see totally different methods. But I think
you would see basically the same goals, basically the same project.
Ali's description is unlike anything you'll hear from industry boosters who want you to see AI as
a way to improve many aspects of human life or, on the extreme end, as something that could end humanity if not done right.
They don't want to talk about that more political angle,
the way it's used to cement their power
in a way that can be harder to immediately identify
than say the outwardly authoritarian actions
of a politician or leader.
AI much more quietly erodes the power
of much of the public over their own lives,
taking away their autonomy by shifting decisions
to unaccountable technologies and the people who control them. This is something Dan McQuillan,
a lecturer at Goldsmiths University and author of Resisting AI, identified too.
AI is a specific, in-our-faces example of a general technological phenomenon that claims to solve
things technically. And we see that across the board from tricky social issues
all the way up to the climate crisis. But I think that that sort of diversionary aspect is really an important aspect of contemporary AI, exactly because the issues are so urgent and other forms of collective, social, grounded community action and worker action are so urgently needed, that something that successfully, even semi-successfully, diverts from those things is extremely toxic.
So I'm really talking about AI as a narrative, AI as an idea.
AI is a real technology that appears to do certain things, that can emulate certain things
or synthesize certain things in a way that provides people with a plausibility argument
that maybe this could fill the hole in health services or education or whatever. So that's the technology.
In Dan's telling, AI isn't just a digital technology made up of complex algorithms and
underpinned by the material computational infrastructures that drive it. It's also a
social technology, one that's deployed so the powerful can claim to be addressing what are
very pressing problems in society, the lack of healthcare, inequitable access to education, growing poverty and inequality, not to mention the accelerating
climate crisis, without actually having to take the kind of difficult political measures
that would really be necessary to tackle them, measures the elites in our society likely don't
want to see taken in the first place, as they might erode their power and would certainly require their
wealth to be taxed at much higher rates. Instead, AI, like too many other digital technologies, can be presented as
a seemingly apolitical solution. It doesn't require a sacrifice and doesn't challenge the
hierarchy of capitalist society. Indeed, if anything, it further solidifies it in place.
And all we need to do as a public is have a little patience as our saviors in the tech industry
perfect their techno-fixes so they can deliver us a digital utopia which, it probably doesn't need to be said, never actually arrives, as those deeper issues just keep getting worse. There are many
harms we can talk about with generative AI and some of the more common forms of it too. We could
talk about how companies are stealing all this data and using it to harm the prospects of workers
in different industries, like in visual media,
writing, journalism, and more. Or we could talk about the waves of AI-generated bullshit flooding onto the web, some with malicious intent like non-consensual deepfakes and AI nudes, but much
more of it being made just to try to make a buck through social media engagement or tricking people
into scams. Those things are important, but the deeper issues, to me, seem to be those that Ali and Dan
are describing, and which Alex Hanna, the Director of Research at the Distributed AI Research
Institute, outlined in a bit more detail when I spoke with her. The other harms that we see are these things replacing social services and becoming very, very automated, whether that's in terms of medical services being replaced by generative AI tools.
We've seen this with like Hippocratic AI and the way that they say they want to basically take
nursing that has to do with kind of checking up on patients, doing follow-ups to be replaced by
an automated agent. We're seeing this in kind of the replacement for lawyering services and the ways in which people that don't have means are going to have these things foisted upon them.
We're seeing more and more at the border, intense amounts of AI and automated decision making with biometrics that is not necessarily generative AI, but there are other kinds of things that could be looped in with generative AI, which are used at the border.
Healthcare, education, legal access, virtually anything that happens on the border, and the list of all the places they're trying to falsely present AI as a solution goes on.
Ultimately, generative AI is another one of the tech industry's financial bubbles, where its leading figures hype up the next big thing to drive investment and boost share prices until reality starts to creep in and the crash begins. We saw
it most recently with cryptocurrencies and NFTs, but there are already questions about how long the
generative AI bubble is going to last with everyone from Goldman Sachs to Sequoia Capital starting to
join the existing chorus of critics in calling out the aspects of generative AI that are clearly
inflated and poised to crash. Even after that crash, generative AI won't fully go away,
just as other forms of AI have stuck around as well. It won't be everywhere, or have the
widespread implementations the companies promised, but that doesn't mean there still won't be threats
that emerge from its ongoing presence. As Ali explained to me, we'd be foolish to think it can
be seized and redirected to mostly positive ends.
If people are designing these systems to cause harm fundamentally, then there kind of is no way to make a human-centered version of that sort of system.
In the same way, legislation that makes it slightly more costly to do something harmful doesn't necessarily fix or even really discourage tech companies that find ways to amortize those costs or kind of absorb
those costs into their business model. One example that I think I've given recently in
conversation was that there are all sorts of reasons or all sorts of powers that cause us
to behave differently when we're driving on the streets. Because as individual people,
the costs of crashing into another car or of hitting a pedestrian or something like that are quite substantial for us as individuals. But if a tech company that's
developing autonomous cars is going to put 100,000 or a million cars out onto the streets,
it really behooves them to find a way to legislatively make it not their fault to hit
a pedestrian, for instance. And so they find ways to sort of defer the responsibility for
who ultimately caused that harm or who
takes the responsibility for whatever kind of incident or whatever. And so that creates like
these really wild, perverse incentives to find ways to sort of consolidate and then offload
responsibilities and consequences for violence. And I just don't see a good way with design
out of that, or even with a lot of legislative solutions and everything else
like that. When the harms are acknowledged, the discussion around AI is about how to properly
regulate it. But even then, all too often, the conversations about those regulations are
dominated by industry figures who shape the process and sometimes even present outlandish
scenarios like AI presenting a threat to the human race itself to completely sidetrack the
discussions. The idea that maybe some of these technologies shouldn't be rolled out at all,
or that some use cases should be off-limits, becomes harder to contemplate because the narrative we
have about digital technology is that once the tech is out in the world, it can never be reined
in again, a perspective that not only feels defeatist, but is clearly propagated by the
industry to serve its own interests and prevent any public discussion or democratic say over our collective technological future. In my view, that can't stand
either when it comes to AI or data centers, because that's the other piece of this discussion
about AI and the bubble currently fueling it. Once the crash comes, generative AI might
not fully go away, but neither will the infrastructure that's been built to support it,
namely all those massive hyperscale data centers. The data centers are not going to be decommissioned. They're this huge
capital expenditure. It's a fixed asset. They're going to try to do something with them. Data
centers are not going to go the way of malls, which, like, malls are now just skeletons of their former
selves. There's going to be a demand for computation, but maybe it's not AI and that's
going to have lasting environmental impacts.
What uses will all that additional computation be put to?
It's hard to say for now, but we can be pretty certain it won't be for the social good,
but rather will expand corporate power and further increase the profits of Amazon, Microsoft, and Google.
We started this episode with an honest but troubling statement from Sam Altman,
that the future he imagines, where generative AI is integrated throughout society, regardless of
whether it truly has a beneficial impact, will require an unimaginable amount of energy. And
that means we either find a breakthrough in energy generation, or we begin geoengineering
the planet. The notion that maybe his vision for the future isn't the ideal one, or that the rest of the public might not agree to it, cannot be fathomed. This is the path
that he and many of his powerful buddies in the tech industry want to put us on, and thus it must
be pursued. The rest of us do not have a say. We must simply accept it and hope it works for us.
But that's not good enough. Dan argues this isn't just about AI or data centers, but something greater. And I tend to agree with him. It's a remaking of society by the new dominant group. Whether this hype collapses in some way or deflates in some way, I think it
will tend to condense around the things that were my original set of concerns, which were that AI is
really a part of another restructuring, you know, in the same way that neoliberalism was a
restructuring. I have a feeling that we're living through another phase of restructuring driven by
an attempt of the sort of hegemonic system at the
moment that is currently, you know, reaping the most benefits out of this world of our social
structures and of the wider global arrangements. Neoliberalism has kind of run out of steam. It's
fracturing. There's a need for restructuring. There's no desire to involve any kind of social
justice or redistribution in that restructuring.
So, you know, we've got to find both a mechanism and a legitimation of what we're going to do instead.
And AI is one of the candidates for that. And I think despite the fact that generative AI is demonstrably bullshit, it's still going to serve some kind of function in that, whether we like it or not.
A restructuring sounds about right.
And it's not just one that doesn't consider the broader concerns of the public. It's one we have little say in. The effort to roll out
these AI technologies, ensure digital technology is at the core of everything we do, and increase
the amount of data collected and computation required is wrapped up in all of this, as are
the social harms that are already emerging from it and the broader climate threat presented by
the intense energy demands. It's no wonder communities are pushing back locally. But stopping these data centers and
the new world they're designed to fuel will require even more. Next week, we'll explore
this ideology more deeply and what another path might look like. Go to patreon.com slash tech won't save us to support the show and listen to the rest of the series, or come back next week for episode four of Data Vampires. Thank you.