Tech Won't Save Us - Decomputing For a Better Future w/ Dan McQuillan
Episode Date: July 24, 2025
Paris Marx is joined by Dan McQuillan to discuss the global push by governments to rapidly adopt AI at all costs and how citizens can critically rethink our dependence on these technologies while imagining a collective future that benefits everyone.
Dan McQuillan is a lecturer at Goldsmiths College, University of London and the author of Resisting AI.
Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.
The podcast is made in partnership with The Nation. Production is by Kyla Hewson.
Also mentioned in this episode:
Dan recently gave a talk about decomputing as resistance and published the text on his website.
The UK Labour Government is going all in on data centre development, while planning for future water shortages.
Academic institutions are rapidly adopting AI technologies, with a little help from industry leaders.
The GKN Factory Collective offers an inspiring example of collective action.
Transcript
I do feel that AI is, in many ways actually, a response to a general context of collapse.
I think the optimizations of AI as it manifests socially and economically are actually pretty
well suited to a sort of neoliberal status quo, because that is very much about a sort of reductive, isolating essentialism in which things can be optimized. Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and this week's guest is Dan McQuillan.
Dan is a lecturer at Goldsmiths, University of London and the author of Resisting AI.
There is growing discussion about the harms and the drawbacks of social media,
of the amount of time that we spend on screens, and certainly of the generative AI tools that
are proliferating through our lives, especially in the past few years, and how we are going to respond to all these things.
We know that there are costs that have come
of the way that these digital technologies
have transformed our lives,
in particular the way that these companies
have sought to implement them in workplaces
to suppress the rights of workers,
certainly in government to reduce access to public services
or make them more discriminatory through the argument of efficiency.
And that's not to mention the environmental costs that have come of all of this, especially
as we're seeing these generative AI tools proliferate despite the amount of computation
that they require.
And so the question is not just how these technologies are transforming our society and how we would potentially use less of them,
but what is a different way of understanding these technologies and of thinking of a politics that would change our society for the better
while reducing our dependence on these
computationally intensive tools that seek to capture our attention and make us dependent on them.
And so that's part of the reason why I wanted to have Dan
on the show today, because he has been writing a lot
recently about this concept of decomputing.
And I wanted to understand that better.
What does it actually mean?
What is the politics behind it?
What is it trying to achieve?
And what does a better world actually look like
where we start to critically interrogate
the type of computation that
we use, the type of digital services that we use, and are much more focused on
technology for public good rather than technology that serves shareholders and
the ideological visions of the people who lead the tech industry, particularly
in the United States, but also in other parts of the world as well. Our
governments are really letting us down on this, and so if we want
to try to move a different vision forward and to pressure them to do something different, we also
need to have a vision in mind. And that is what I think something like decomputing might help us to
consider. So this is probably a bit more of a thought-provoking episode, thinking about a bigger
idea than maybe something that we usually talk about, but of course we ground that in things that
are definitely happening, in issues
that you have been seeing us talk about on the show
in the past, so I think that you'll enjoy it.
And if you do, make sure to leave a five-star review
on your podcast platform of choice.
You can share the show on social media
or with any friends or colleagues
who you think would learn from it.
And if you do wanna support the work
that goes into making the show every single week,
so I can keep having these critical,
in-depth conversations that explore the ways that technology is impacting us
and our lives and the world around us,
you can join supporters like Peter from Dawson City,
Yukon, Carol in Portland, Oregon,
Luke in Leiden in the Netherlands,
and Lindsay in Seattle by going to patreon.com/techwontsaveus,
where you can become a supporter as well.
Thanks so much and enjoy this week's conversation. Dan, welcome back to Tech Won't Save Us.
Fantastic to be here.
It's really great to talk to you again.
Obviously you've been on the show in the past
and you were in our series on data centers,
I believe it was last year to kind of dig into all these things.
But I wanted to have you back on the show
because obviously we've talked about different aspects of AI
and data centers and these technologies, of course, based on the
book that you wrote, Resisting AI and some other work that you've been doing since then.
But you've also been working on this concept of decomputing that I find really interesting.
And so I want to dig into this with you, but obviously there's kind of a broader context
that we should talk about before we dig into that specific concept, right, to know what
we're actually responding to. And I feel like a good place for us to start is actually what's
happening in your country right now, because it seems to be a good example of how governments
are just absolutely chasing this AI hype without any consideration of the potential consequences
of this. So what do you make of the Labour government's pursuit of AI and
kind of the policy approach that they have developed to try to attract AI companies and
tech investment?
It is interesting, and I am using air quotes there, because obviously in a lot of really
important ways, this sort of quite florid manifestation of technology's overlap with
fascism is most visible in the USA right now, most probably. But the UK is a really interesting case study because there's a political party
in control without any real opposition at the moment. And what will happen in
a few years is another story that's tangled up with this. But at the moment, it can basically
pursue a line. And unlike the last government, which was basically just a very familiar
bunch of toffs and crooks,
this lot are much more ideologically unified. The problem there being — and this is my kind of
understanding, rather than my assessment — that they don't really have an ideology per se. They're
called the Labour Party, and many years ago they would have had people within it — some people
within it still see themselves as being committed to socialism and committed to the welfare of the ordinary person, and so on and so forth. I mean, a minority, but a number of people
would still say that. But as a party, I mean, it's pretty clear they have completely
abandoned it. Now, I'm laying that groundwork because they are completely AI-pilled. And that's
what leads me to say those things. I mean, you can look at it the other way around and say,
this is a government that is, you know, so kind of cleanly and purely AI-pilled. It's almost like they exist only to sort of form
a fairly subservient relationship with big tech and sort of do its bidding. And they clearly see
that as somehow in their interests and since they identify with the nation, the national interests.
When I say they're AI-pilled, I mean, they don't make any bones about it. Before they were elected
or just after, there's Keir Starmer, who's the Prime Minister. The other probably most relevant
person is a guy called Peter Kyle. He's the Secretary of State for Science and Technology.
He was saying even before the election that it was important to relate to companies
in Silicon Valley not on the basis of, sort of,
a relationship between a government
and a corporation; it was important to relate
on a diplomatic basis.
So essentially sort of treating Silicon Valley companies
as a sort of peer to the British state.
He wanted to have that kind of relationship.
And anyway, since the election,
they've just — Google issued a report a few months before the election saying, okay, UK, you don't want to be left behind,
you don't want to be a bunch of losers. What you need to do is dump copyright and build loads of
data centers. And that's basically been their policy. They've just kind of sabotaged copyright
that's gone through. And they've got a big plan, which includes building lots of data centers.
And they've labeled data centers now as a thing called, here, 'critical national infrastructure', which — you sort of
get the idea. It means a lot of things actually. But
one of the things it means is you can't complain. It's critical national infrastructure. So
don't get funny ideas about trying to use planning legislation to object to the fact
that you're going to have a giant data center built down the road that's going to deplete your already depleted aquifer or whatever.
Yeah, I feel like I've already seen a few stories of like local communities in the UK
opposing data center projects that were planned near their communities and the national government
just coming in and saying, it doesn't matter that you oppose it, it's getting built anyway,
because this is, you know, the policy, right?
Absolutely. Yeah, yeah. And there's a few things going on. The funny thing is,
given the amount of rain there is here, that the UK is actually sort of water poor in some places
already. And there are plans, for example, in the Southwest, they've got a kind of contingency plan
to import water from Norway on tankers or something crazy. And of course, that's tied into
something that is underpinning
a lot of this as well, which is basically Margaret Thatcher — her as a personification
of neoliberalism — because the water infrastructure is collapsing and has been in the hands of
extractive profiteers for a couple of decades who've built nothing, stolen all the money,
and we're left living with it. There isn't one waterway in the UK that isn't polluted by
shit because these companies are doing such a bad job. That's obviously relevant for anyone
considering the material infrastructure of data centers, which place huge demands on
water supply and huge demands on electricity. Our electricity grid is in a similarly parlous
state. People are kind of rightfully
and very immediately alarmed by the idea
of these things being dumped in their local areas,
and they're quite right.
Yeah, and talking about the broader policy as well
and like the consequences of this,
obviously you talked about it.
It was really startling to me, right?
To see Google actually come out and be like,
these are the policies the UK government needs to implement
to be like, you know, a leader on AI or however they framed it. And then to see the government just turn around
and basically do what they wanted, right? You know, to make it easier to build data centers,
and then to really go after, you know, the copyright protections that artists rely on
to the degree that there has even been like a pushback campaign with some major musicians
and other artists in the UK actually like directly opposing the efforts of this government
to, you know, change copyright legislation to allow these AI companies to train on all
this copyrighted material without having to like compensate or worry about facing lawsuits
from these creators.
Yeah, it is interesting. And these are things which, I think as our conversation sort of develops, will all
become obviously very relevant in the UK, but I think more
broadly too. But these are all sort of diagnostics, I would think of
them as, you know. It's like, the UK is very proud of its sort of
creative cultural industries — you know, it's not got that much else
going for it really, apart from the NHS, which is also in the
crosshairs; we'll get to that. But it is interesting because of that self-image of the UK and not just the self-image, but
the actual empirical numbers on hard export earnings that come from that stuff, that it
would normally be seen as kind of the crown jewels in a way.
And to be sort of sacrificed like that, it's just very interesting, very noticeable.
I mean, it should be noted, it should be paid attention to alongside other things. That's what I mean about a sort of unified —
see, I wouldn't even say ideologically unified, because there isn't really an ideology behind it,
but a sort of Stalinistically unified government that is singularly focused on what
it sees as its main goal. And everything else is literally to be trodden
underfoot, including that stuff. I believe that the central thing, and this is relevant
to the technopolitics of AI, I think — the central thing, which they say, which Keir Starmer
frequently, repetitively says, is that they are about growth. They have identified, I
think, that the AI industry equals growth for them. So that's it. Everything else can basically go hang.
Yeah. Unfortunately, we see a very similar thing playing out here in Canada,
where the new government is very clear that growth and the economy are the main priorities.
And that also means incentivizing AI, giving these tech companies what they want.
We've even seen that recently with the elimination of the digital services tax in order to keep
them and of course Donald Trump happy.
But I wanted to pivot then, you know, I'm sure that the UK will still come up in the
rest of our conversation, but I wanted to broaden it out, right?
To start talking about some of these bigger concepts that you have been grappling with.
And in a new book chapter, you relate what we're seeing to this concept of total mobilization.
Can you explain what that is and how it plays into what we're seeing here with AI and these
governments trying to do whatever it is that the companies want in order to encourage this
industry?
Yeah.
I mean, the term total mobilization — I won't try and mangle the German —
is a book title by a guy called Ernst Jünger. He is like a horrific character, but he's a really interesting character. He was
a First World War veteran, most famous for his book, I think it's called Storm of Steel,
his account of the First World War. But he was a thinker. He was a guy who, as I would understand
it, he saw the transition of the First World War from, as it were, conflict
on a human scale to essentially a logistic, industrial, and even kinetic process.
He grasped being right in the middle of it in a way.
He grasped the idea that war had become something that was to do with energy and how much energy
one could enroll, energy in all forms, mineral energy, steam energy, coal energy, and human
energy.
How much that could be focused and directed towards a single aim of total war.
There'd be no distinction between the civilian population and the military.
There'd be no distinction between the human and the machine.
These being part of a total assemblage, if you like, there was this war machine.
That's one thing.
But he pursued that. He pursued this idea further.
I do agree with people who believe that he was very influenced by Nietzsche. He has a nihilistic
vision that we would now call accelerationism. It's not really fundamentally different in that
way, because his vision was to pursue this to its kind of ultimate conclusion as a form of breaking through to a different epoch.
He really saw this as a turning point or a pivot point in the very mode of being and definitely the very mode of civilization.
He saw — and you can be sure his ideas of what civilization meant are deeply problematic, right? — but he saw this idea of civilization and breaking through to a new civilization, you know, with a sort of new man, and a new
race, sort of powered by this conflict form of technological accumulation of all available
energies, and also through this idea of complete transformation of society. So it was what we'd now call
a kind of extreme accelerationism. And it was explicitly again, with relation
to Nietzsche, powered by this will to power, you know, will to ultimate power over opponents,
over planet, over being in general, pretty crazy stuff. But having kind of stumbled on
this idea, I felt
that it was very resonant with what's going on because I feel that I'm not trying to impose
it. I'm not trying to say that anybody else should take this idea up if they don't feel
it fits. But what I do think we need to do is search for other ways of understanding
what's going on because it is irrational in a very, very deep
sense. It's irrational, not in a dispersed way. It's irrational with a sense of direction.
I don't want to keep bringing things back to the UK, or Canada particularly,
but just thinking about this idea of identifying AI with economic growth and geopolitical strength,
which are their tick boxes — that would be awful and brutal, but would have an
element of rationality about it if this stuff actually delivered on that, which it doesn't.
Because the technology doesn't really work properly. It's just a bit crap, essentially,
at a very large scale and in a very expensive way. It isn't delivering massive productivity
or anything else. So what, I ask myself — and I ask your listeners, right?
What is actually going on? So one of the things I'm sort of offering as a way to frame this is
this idea of total mobilization. What we're engaged in here, most of us unwillingly, is
a very authoritarian, very nihilistic mode of seeking a sort of ultimate transformation to a new order.
I want to dig in further, right, to several aspects of what you're talking about so that
we can flesh this out for the listeners and understand it properly.
And I feel like the first thing that I grabbed onto was this notion, and you mentioned it
there, of being so focused on energy and turning things into energy.
And I feel like when I think about the data centers
and what is powering AI
and what some of these tech CEOs have been saying,
there's this total focus on increasing the amount of energy
that we produce, then that we consume,
all so that we can enable this kind of AI powered future
that they have in mind for us.
And that seems like in a very direct sense, aligned with what you're talking about.
And then we can talk about some of the bigger things that come out of that.
But I wonder what you make of like the actual kind of push for AI adoption for creation
for this industry in general and how that relates to the concept that you're talking
about.
Yeah, yeah, absolutely.
And again, I'm really coming to grapple with these ideas by essentially doing what
everyone else is doing, which is trying to understand what's going on in front of us.
And in a way, in a lot of ways, it's what's being shoved down our throats, and what is completely
unavoidable.
I mean, I do find that this current wave of AI is interesting and provocative because
it makes so many things so very visible.
I mean, it's not like anything digital doesn't have a supply chain. It's not like anything digital doesn't have toxic
collateral effects and dependencies and so on and so forth. But we have been able to
sort of roll along while more or less ignoring those things and maybe getting a bit worried
about them in a different context. And AI just puts that stuff slap bang in the center, unavoidable,
because of its — and this
is where it ties to growth as well — this whole kind of obsession with scale.
This 'must scale' — it's like that itself becomes quite a cult-like core belief that the only way
is to get bigger and bigger and bigger. This is not a new thing. Expansion of empire absolutely depends on the idea of unlimited outward expansion.
That's the only way that model can sustain itself. It's not like this is a new pattern, but it
takes a very particular form. It's taking a very extreme and extremely visible form. It's very
visible to, I would say, most people now that AI has these kind of material dependencies. And it's also very observable that the other stuff that these companies are doing, I mean,
this is probably some of what you're driving at.
Whereas before, big data center providers, AWS or Google or whatever, put a lot of time
into performing sustainability, claiming green credentials and all this kind of stuff.
And it's just so interesting that in the last year or so, they've not just abandoned those
things, they've just kind of quite cheerfully abandoned them.
They just say, well, you know, we can't do that anymore.
And anyway, it's not important because this is what's important.
And so goodbye climate targets.
I would suggest that this could be very straightforwardly explained by saying our existing extractive economy just never went away,
never even really had a second thought about continuing, never gave any credence to this
idea of transitioning to anything else, and is just sort of blasting its way back through the
gateway opened up by AI. I guess I don't disagree with myself on that one, but I think I'm trying to
reach for an understanding of this being a more coherent process, a more integrated process
in some way.
Not that there's a giant plan, but that there's something specific and possibly unique going
on at this moment in time.
That is those things and more.
Yeah.
And I feel like as you're talking about that, the other thing that I really related to as
I was reading what you were writing about this concept of total mobilization and how
it fit into what we're seeing with AI in this moment is of course the geopolitical dimension
of AI, right?
Because we can talk about AI as this thing that is kind of driving economic objectives
for these governments.
You know, you were talking about how the British government is always talking about how growth
is the main thing and, you know, it wants to achieve growth at any cost.
But then when you're talking about total mobilization and what Jünger is talking about, there's
this notion that like, that we are on the cusp of this new epoch and that there are all these governments lining up
or all these forces lining up to make sure that they are well positioned for this transition.
Regardless of whether or not AI is actually this new epoch, generative AI in particular,
these large language models, all this kind of stuff, many of our governments have begun to accept that that is the case, or at least that is kind of my read on it.
And that seems directly related to the type of thing that you're talking about, where
you see all these governments kind of scrambling to get their piece of AI in the hopes that
it positions them well for whatever this next stage of capitalism, our collective history,
or whatever is going to actually be or how it's going to look.
Yeah, absolutely. It's funny because you used the term scramble that made me think of this
old phrase, you know, the scramble for Africa, which was the previous sort of imperialist
conflict from sort of late 19th century, which I think is kind of relevant. This is the scramble
for AI, which coincidentally also involves a scramble for Africa because it's also a scramble for minerals, apart from anything else. I do feel
that AI is in many ways actually a response to a general context of collapse. I think the optimizations
of AI as it manifests socially and economically are actually pretty well suited to a sort of
neoliberal status quo, because that is very much about a sort of reductive, isolating essentialism in
which things can be optimized. That's the market, that's Hayek and all that stuff.
But yeah, I don't know. I think that looks pretty broken right now. It's run out of steam.
It was always a load of rubbish really, but it allowed a lot of extraction on the way.
But it's definitely run out of steam and there's a pretty strong understanding that that is
the case amongst these governments in a very empirical way.
It's all breaking.
It's all pretty much broken.
Maybe the Labour Party was a good place to start because, like the British government,
pretty much everywhere else they haven't got any answers to this.
They haven't got any responses.
There are some pretty obvious responses, obviously, which is redistribution of wealth and a turn
to a more ecological production and all this kind of stuff. And they're never going to
do it. They are never, ever going to do that. So they have to do something. They're sort
of buying it. I mean, who knows how much they actually believe it. I have no idea. I don't
know whether they even care, but they're buying into this idea big time. You know, one of the really,
really concerning things for me is that, you know, I'm happy to sit down and snipe at AI and its kind
of obvious failings and sort of ludicrous performance in so many ways, which are also
unfortunately very serious. But the other thing is, I think it is quite good at being destructive.
It's not very good at actually optimizing and actually sort of reliably predicting or
generating something that can be trusted enough to be a functional part of society.
But it is quite good at intervening in ways where essentially collateral damage doesn't
matter.
And you can think of that as anywhere from DOGE to Gaza, you know, it just seems
to play well in the destructive context.
And that is to round off what I think you were just, you know, potentially describing
there.
We are returning to the kind of context that Jünger himself would have both recognized and
relished: essentially a war of all against all, based on not just nationalistic,
actually, but sort of ethno-nationalistic terms.
And I wonder if it's worth, you know, before we pivot into talking about decomputing and what an alternative would look like, to double down a bit more and
talk about those consequences, right? Because we have all of these governments kind of running to
adopt AI, to adopt policies that are favorable to the tech industry, to try to get this investment,
but also to claim that AI is going to make things
more productive, that it's going to allow them to run governments more efficiently, that it will
make their businesses more productive if they can just get them to adopt chat bots for whatever
reason. But then on the flip side of that, there's this whole ignorance of the very real harms that we're already seeing from these technologies and a seeming desire, you know, not to recognize that they're even there. And to me, it even really stands out that like, you can look back even two years ago. And it seemed like the discussion from a lot of our governments was like, okay, we need to adopt AI, but we need to do so safely. You know, We need to think about the ways that this could go wrong.
I even remember at the G7 meeting, I believe it was in Japan, a couple years ago, one of the things
that came out of it was this AI safety thing. Of course, we both know the issues with the AI safety
framing. We've discussed them many times on the show. But then this year coming out of the G7 was just like the statement on how we need to adopt AI
for economic performance and all this kind of stuff.
Like the notion of safety,
regardless of what that safety even meant,
is like completely out the window.
And it's just like AI for geopolitical positioning
and economic expansion.
And any notion of the harms or the consequences of
this, even as it feels like we have more and more stories reporting on the ways that chatbots
are already affecting people, let alone the potential of AI being integrated into government
systems and stuff is already proliferating. So I wonder if you could talk a bit about
that.
Yeah, I 100% agree with you. I mean, you know, a couple of episodes ago on your podcast was
a great discussion about the harms that are being wreaked by things like Character AI
and people forming relationships with chatbots. I mean, it's really horrifying. It's a whole
domain of horror. And, as was really clear from that episode,
not only is there not any attempt to really constrain that, it's more like that's seen
as something to double down on, you know, something that can be intensified. My feeling and the reason I'm trying to sort of
grasp for, let's say, a widescreen understanding of what's going on, the way to frame
it in that sense, is because I am sort of concerned and frustrated also with myself in this,
about how it seems that we are going to be ever more continuously sort of dismayed and horrified by
the effects of these kinds of techno social systems being implemented in the world. And
I really feel like that's sort of playing catch up in a way. You know, how conscious this is,
I'm not sure; that would depend on different people at different points, different decision
makers. With some it's very clear — you know, you've got to listen to that idiot from Andreessen, or is it Marc Andreessen, or people like that.
Delusional, deranged, hubristic, narcissistic, deeply unwell, but also unfortunately very
powerful people. They don't make any secret about where they're coming from. But they're
just figureheads in a way. We're talking about a systemic transformation here. There are
many people involved who may not think of
themselves as consciously engaged in this process. And yet
I would say there is a clear, broad direction here. And if
we don't get ahead of it, then we will be in a very poor
position to do anything about it. And I think it's very hard at
the moment, I would say, to overestimate — and we don't need
to underestimate, right, we are living through a genocide, right. So we don't
need to underestimate the consequences
of world changes at the moment,
and the things that people are prepared to tolerate
if they're seen as somehow in the interest
of those
whose interest is in maybe not just sustaining
the status quo, but in further concentrating power,
wealth and control — powered, I think, a lot of the time by
fear as well. There's this kind of fear that they're going to get caught out. There's
a kind of fear they're going to get held to account. There's a fear of previous social
movements which have been in the other direction. I'm thinking here like of the late 60s, where
a lot of the things that we now consider to be rights we've become accustomed to,
not that they are fully in practice,
but at least conceptually,
racial justice or gender equality,
or these kinds of things, took another step forward.
This seems to be, to the systemic transformation,
its other — the thing that drives it,
the thing that it wishes to erase,
not just to erase, but to completely roll back.
And there doesn't seem to be even any performative
gesture towards, well, should we be nice about this, or should we be ethical about this, or
let's not go too far? I think that's completely abandoned. That to me is consistent with the idea
that the emergent drive of this form, the kind of thing I'm trying to describe with this total
mobilization, it is full, it is unabashed, it is really quite
extreme. And we should consider what the sort of valid modes of response are, you know,
in a way that tries to actually counter it with something not equivalent, but something
that has sufficient compulsion of its own, because simply trying to tread water is just
not happening.
And I think that sets us up really well to talk about decomputing, right? If we're thinking about what some kind of alternative, some way to push back
on this very strong effort to adopt AI, to proliferate it through society, regardless of
the consequences, to set it up as this thing that is this massive change, right? That nothing is
going to be the same because now we have AI. Really great kind of marketing slogan for the companies and a way to sell their product to us, right?
And so you have been writing about this concept of decomputing for a while, and I am really
struck by it and find it really fascinating and think it's something that we need to be
thinking much more about. So for someone who's never encountered this concept before, how
would you describe what decomputing is and what it seeks to achieve?
It's an idea in development, or it's an open idea.
That's maybe a much more positive way to put it.
And I'm really interested in this idea being sort of taken up
and developed.
Where it's coming from for me is on a couple of levels.
One is a very straightforward and something
that we would very immediately share.
It's looking at the very material and immediate manifestation of a hyperscale data
center and what that is doing, what that is not just representing, but what that's putting into
operation and trying to address that in a way that doesn't split off one or other part of the
consequences of that. So what I'm trying to say is a hyperscale data center is both a very
physical and energetic — as in, it sucks up energy —
concrete, in many cases literally, object. It's a real thing. It exists in the material world and puts
a strain on our broad conditions for existence in a way that definitely does make things worse.
But it is also a platform for things that make our life conditions worse in other ways. Without
hyperscale data centers, it would be very difficult to have the forms of precaritization
that we currently have.
The world of Deliveroo, the world of Uber wouldn't work without data centers.
I'm trying to construct a response to these things that are delivered together.
It's something that's putting a strain on literally our ability to continue living across
a number of axes,
if you like, both the physical conditions of existence and the social relations and
the productive relations of existence.
AI specifically and hyperscale data centers and what they enable, what they platform, is
both at the same time.
And so at the very least, decomputing is saying stop, essentially saying enough — that these
things are foundationally toxic
in what they are and what they deliver, and therefore we should be pushing back and saying,
no, thanks very much. Enough of that. That's trying to see in a way why, and this is maybe
relevant to a lot of things, why do these things have the power they do have? Why is it that AI, despite its rather shonky nature,
you know, it's kind of rather sketchy technology, and the advocates are quite lacking in credibility,
they're like very, very poor salesmen of something or other. Why despite all these obvious factors,
is it, as you were saying earlier on, you know, winning over some of the most powerful people
on the planet?
Why is it winning corporate commitments
to the nth degree?
What else is going on here?
And I think what I just say about that really
is the summary slogan you put there about them saying,
oh, this is gonna change the world.
My reading would be because the world was already
not what we were led to believe in the first place.
It's like the reason why these things have such power is because the systems that we are already essentially subject to are already quite a lot
like that. It's not simply the hyperscale data center and the AI algorithm. Those things are
riding on the wave of forms of institutionalization, forms of centralization,
forms of political economy that are already delivering the harms that we're talking about — they're just being intensified by AI — or already delivering the climate damage, which is just
being intensified by AI. One of the things I try to pick out when talking about decomputing is the
idea of scale, because that is a kind of skeleton key for a lot of these harms. It's the thing the
industry itself fetishizes. It is actually quite an important part of getting any of
these janky machines to work properly. And it is also a complete corollary for the general
idea of growth — growth at any cost, without
boundary, never mind what happens to people and planet. I guess decomputing is trying
to be the other to scaling. It's trying to say whatever it is that leads us to believe that
that is the way forward, however we're structuring institutions and technology, then
it'd be better to do something else. And I appreciate that framing of it as well,
right? Because I really feel like looking at the hyperscale data center and looking at these tools
also forces us to consider why does this exist in the first place? Why do we need this kind of infrastructure?
How is this technology actually working?
Why is the hyperscale data center needed?
It is needed because we have this system of mass data collection, or we could call it
mass surveillance on the public, this desire to create these forms of algorithmic management and control that
require all of this computation in the first place.
There are all of these tools and all of these forms of technology that have been built up
now over the course of a couple of decades, a few decades that seem to all be kind of
coming together in this moment as the tech companies, as you were saying before, kind of push for this acceleration of
all of it, right? Under the guise of generative AI and, you know, whatever new future is kind of
coming into being here. And thinking about decomputing is not only just like, oh, how can we
use less computing power or not buy a new laptop as often or something like that, but is looking much more
structurally at the system that has been set up and the way that it has significant consequences
for the way that we all collectively live.
Yeah, yeah, absolutely.
Something that you put really nicely, you know: we are social beings and societies are sociotechnical,
or something like that — or rather, the term I prefer is technopolitics.
You know, if we're looking at how we want our worlds to be arranged and how we
want to think about our power distribution and our sense of agency and our sense of collectivity
or not, then we look at our societies and our technical structures broadly. Now, technical
structures does mean technologies, but it also means the institutions. It means the
procedures, the laws, the protocols and everything else. It's the machine of society that we're really looking
at here. And this machine is not only broken, but clearly extremely vulnerable. I thought
DOGE was really a great example that way. We were all looking at this feeling initially
flabbergasted. How did a bunch of repellent nerds with six laptops in each rucksack, who
were ushered in through the back door under the
aegis of executive power, so quickly manage to bring such a high level of destruction
to, if not venerable, then at least vast institutional and governmental structures.
I think pretty clearly it's because of the way those structures already were arranged.
It's like, neoliberalism wasn't only a process of marketization; it was also a process of quite deep automatization. The
market mechanism of automated optimization was extended to all forms of social and governmental
relations in a way that made it very ripe, I think, for somebody to sort of close that
final loop and take it over and/or destroy large
chunks of it very, very quickly. So yeah, absolutely. I'm just agreeing in a very long-winded
way with what we're looking at here. And I think this is an idea
that works for me as a handy idea because it works at different levels. You know, you
can take this as a reading of the broader collapse and the machine of society and how we're
going to deal with it, but you can also take it to a local context. I mean, I work in higher education. So there, it's like, okay, what is going on with
the sort of California wildfire ripping its way across higher education so quickly with the
introduction of ChatGPT into general distribution? Why was that able to cause such
internal collapse in our educational systems? Well, I would say it's got a lot to do with the way that they were already constructed as essentially metricized exercises in data gathering
and performative, measurable actions substituting for relationships and actual learning.
They were already like that. So along comes this very specific but relatively modest technical development,
in a lot of ways, and
boom — it's like an explosion in a powder-filled room. But more positively,
decomputing being a stance on that. It's saying, well, okay, if we want to, let's say in my
case, reclaim the idea of critical thought, reclaim the idea of learning, reclaim the
idea of education more generally, we want to construct, reconstruct, or reclaim forms of education that are other to this,
that are not vulnerable in this way, that are not constructed in this way from the bottom
up.
Not just ban AI or something like that, but really, yeah, I guess this is the thing.
In every situation where AI is presented as a solution, in every situation where AI is
able to wreak harm, because whoever
it is in the decision making world decides that that is the case, it is the case that
we should re-examine what is going on there in the first place.
Yeah, because I guess in that sense, it's not just looking at the technology, right,
and how the technology is being rolled out, but the broader structures that not just encourage
the creation and the rollout of a particular kind of technology,
but these broader policy decisions
that have been made over many years,
that on the one hand, yes —
and not just policy decisions, right,
but, you know, decisions by government,
decisions by corporations, you know,
the broader way that our society operates —
that on the one hand, yes,
lead to the creation and the deployment of
certain technologies, but on the other hand lead to decisions that, as you're saying,
you know, erode the education system, make it about achieving a certain grade on a test
and passing certain metrics rather than actually engaging in critical thought and learning
and all these sorts of things. And then we see similar dynamics in so many other aspects of society
where you have had this kind of breakdown
and this kind of hollowing out because of,
we could call it these neoliberal processes,
kind of broader pressures of capitalism
on a social safety net and on aspects of society
that don't turn a profit or that are not profitable enough.
Like it seems like it's not just a concept
that is focusing on technology,
but looks so much broader than that.
And probably that also comes out of some of the other
inspirations that you had that kind of come into this idea,
right?
Yes, absolutely.
I mean, with the decomputing, partly
for the sake of my sanity,
I'm trying to boil it down into a couple of ideas
I can actually remember from day to day.
One of the ideas — not a catchphrase, but a term I use — is de-automatization,
and that's meant to address the stuff you just talked about, which for me speaks to the fact that,
again, the reason why we have such a crisis about losing control over things to AI is because we're
already fundamentally stripped of agency or the capacity to think differently about what we're
doing. So de-automatization, not having such technologized relations, particularly decision-making.
And also de-growth, you know, for me, trying to think through the idea of technological
transformations that are also addressing the climate crisis, that are also addressing labor
justice, if you like, and the broad range of the harms I would describe as infrastructure
intersectionalities, this idea that the data center sort of messes with you, it causes
power cuts in the electricity grid, and it also leads to you having a crappy job where you
get exploited by an algorithm.
This is a form of intersectionality.
And there are many forms of degrowth.
The ideas that I cleave to, which are the more transformational forms of degrowth, are
doing the inverse of that.
They're saying, okay, degrowth is not simply about re-examining our fetish for GDP, but
talking about how we would live otherwise. How can we make social arrangements for otherwise?
When I'm talking about this idea of decomputing, it's not that I'm
trying to invent some incredible theory out of nowhere. I'm trying to assemble things
that I think we might find useful in trying to push back. So the idea of degrowth being one of them.
But another area which I think you might have been alluding to, but anyway, I'll bring it
up, which is what I find really handy, is the ideas of Ivan Illich from the 1970s. And
he had a particularly key text from my point of view called Tools for Conviviality. So
first thing about that is when he talks about tools, he explicitly is not talking
about any technology, or even what it sounds like, screwdrivers. And he does mention that,
you know, he says like tools, screwdrivers, bicycles, but tools are also the institutions
that produce, that produce even abstract things like what we call education or what we call
knowledge. The tools for him, you know, cross all those boundaries. And then convivial, which
has quite a lot to do with
relations and quite a lot to do with livable scale. And again, that's why conviviality,
or tools for conviviality — Illich's ideas in general — are a good starting point. They give us some traction.
Even if we're first thinking about AI and its harms, the first thing is thinking about
how scale itself and the effect that it has
on us is important, because Illich's reading was: well, when stuff scales up too much, we have to
start adapting to the machine, if you like, rather than the machine being something that we can use
for our own creative and autonomous purposes — the machine in this broader sense.
So I think ideas around conviviality give us a heuristic for
how we might decide to do things. But obviously the first thing and most fundamental thing,
and I think this is really just like a kind of core part — you know, you like the idea
of decomputing or you don't like the idea of decomputing, fine; you don't believe in the darkness
of total mobilization, fine. But one thing we could at least maybe all agree on is the fact that
technology should be subject to social sanction, essentially. There should be some decision
by society as a whole — by people in general, but particularly by people who are
already immersed as a part of this, who are going to be affected by the introduction of large-scale
technologies. There should be some way of collectively influencing that, if not even having an absolute say in it.
And this idea sounds weird now, but it shouldn't. It's not weird. It wasn't weird back in the 1970s.
It was very common to talk about how society should have a say in what technologies are unleashed on
us. We're in a time, as you were saying earlier on, we've already been through this stuff with
social media. We've already seen how the internal logics of these large scale and
unbridled technological and pervasive so-called innovations can lead to incredibly multidimensional
harms. We've done that. And now we're doing it again. Great. Even more intensely on an
even greater scale. And one of the fundamental reasons is we don't have a say. So conviviality, in order to pursue it in any detail,
implies grasping for, developing, building some form
of control over the introduction of technology.
That can also be at a larger scale
and also at a local scale — just my own context again.
It's like — I mean, I'm appalled, by the way, that
academia and its so-called critical faculties
have just completely rolled over and gone,
oh yeah, it's inevitable, isn't it? This AI stuff, we better embrace it and change everything.
All that this social sanction means is that as people who know what we're up to,
students and staff, we should look at the technology,
carefully examine it on a number of dimensions and say, do we really want this?
It's absolutely wild to me that that is not the case, right?
That we have kind of ceded this
to this industry, to these people who, I would argue, and I think many people would agree,
are so distant from the usual experience of most people in society and have become more so over the
years as they become wealthier and more powerful, and are also kind of seized by these ideologies
that are so distant from how most people think
about the world and what a better future should look like
and all these sorts of things.
And somehow those are the people
not only developing the technologies
but deploying them into the world and saying,
if you don't like them too bad
because this is what we figure is the technology
that you should all be using.
Like just that fundamental notion
that these are the people
who should be making these decisions instead of us collectively. And actually like whether that is
us all kind of getting together or having particular groups that can assess technology,
or even like just expecting that our governments would do the very basic assessment of new
technologies before they're rolling out instead of always kind of being on the reactive kind of back
foot when it comes to how we're
going to respond to these things. And then, of course, not really wanting to do very much
unless they threaten economic growth or whatnot is just such a broken and flawed way of pursuing
any of this. And so I wanted to ask you, I think you were starting to get to it at the end of that
response. If we were thinking about implementing some sort of ethos of
decomputing into how we manage technology right now, how would you see that as being achieved?
Is this something that is done through policy mechanisms and government? Is it done through
collective action and mobilization? Is it done in many different parts of society? Is it an
individual kind of thing? How would you kind of conceive of that?
I wouldn't close off different levels of possibility at the same time.
I think it's an unfortunate case that we don't even have regulatory capture right now.
We have essentially state capture.
As we talked about from the very beginning, all of the entities in society with the existing
concentrations of power seem to be swinging behind the same
thing really.
And even worse, are abandoning a lot of their tokenistic claims to diversity, equality,
inclusion, whatever, and just generally getting weaponized.
So I wouldn't concentrate my efforts on trying to persuade them, I think, at this stage.
On the other hand, on a positive note, I would say that it's possible and legitimate for
us at any level, like for me,
not even in my university, but in my department or wherever it is, or any teacher in a school or in
work or in any kind of context to say, okay, this particular AI for this particular purpose,
being introduced into my life or into my healthcare system or into my children's school,
whatever it is, is not acceptable, actually.
And that does imply a level of self-organization, actually, doing something about it. And I think that is harder to sell to people as a compelling vision, because we don't have the experience of
it now. You know, we've had sort of 50 years of neoliberalism, you know, that have removed most
people's experience of collective empowerment, have removed most people's actual experience of agency, really.
You know, it is transformative, or it can be transformative,
to have the experience of actually being able to say no —
which is a good start, I think, this refusal —
and to start to act on what you honestly think
would be a better option in your situation.
But we mostly don't have the experience of that. And I think it actually does start locally. It starts with people, even individually,
but mainly collectively getting together in that particular context and just saying no to start
with. I think there's a very good argument for the power of refusal and the pleasure of refusal
in this case. We should possibly try to be a little bit punk about this, or definitely a little bit solarpunk about it. This is maybe the other aspect of it: that we need a broader vision,
but I'll come back to that in a second. I think, unfortunately, and this might just be where I'm
coming from, the pace and ferocity with which these things are taking hold means that, yes,
absolutely, things like, let's say, unions are incredibly relevant,
even though the union structure itself
is a bit like one of those tools
that has taken a certain form, you know,
so I'm not really talking about a union,
I'm talking about a union branch
or a group of colleagues in a union or whatever.
That is an essential part of this resistance process
or a community group or whoever,
these are essential parts of it.
But I think the ferocity
with which stuff is changing is very, very brutal and clearly authoritarian at the very
least and constantly edging further and further in the direction of the far right, essentially,
politics and fascistic politics. I know this is probably going to make myself sound really
extreme here in some ways, but I honestly think that it's important at this stage, again,
to get ahead of it, to think in terms of resistance. In the UK, for example, this government, I
honestly believe, their absolutely fruitless attempts to persuade people that they are
the party of growth and it's going to deliver well-being for everybody are going to have
absolutely no influence on the massive anger, disillusionment and sense of betrayal that
really mirrors what has happened in America, almost one to one.
We've already had pogroms where people tried to burn down hotels housing immigrants. We're
not talking theoretically here. If this is a stance taken on by institutions, then I
think we have to think very seriously about how to organize in ways that involve an element
of self-defense and an element of parallel structures. Learning the lessons of the past
is really important throughout this form of self-organization. But the swing back from this, the most important thing,
and again, the thing that's been sort of eliminated by all these years of crushing and deadening
neoliberalism, is just a sense of a better world. I think this was much more pervasive
in the 1970s for a number of reasons. People had a sense that life could be different,
life could be better. Or let's say, going back to the UK: the things that are currently being destroyed are the
things where, however grumpy we might feel about the state of the UK, we say, well, at least
we've got the NHS, at least we've got the education system. These are the things that
are being dismantled and crushed and thrown into the AI shredder, essentially. These came
out of a feeling after the Second World War where it wasn't just that people had gone to war and fought fascism and were trying to prevent
London from being bombed and all this kind of stuff. It was a shared sense that people
weren't going back. They weren't going to go back to the 1930s. I think we should have
the same sense. It's not that we're just opposing AI. We don't want to go back to that status
quo that came just before AI because that was also crap.
What we really need is a vision of, and I'm not saying it's
one vision or a unified vision, but we need a sense that life could be transformationally
better. That's also really vital just to sustain our kind of collective existence at the moment.
If I think about something like the idea of decomputing, yes, it is about addressing a
technology, it is about transforming our productive relationships, and it is about taking technology back under control. But it's doing that as well within
this broader framing of a revived belief in the idea of a better life for everyone.
And I'm very keen to find out from others, you know, what their line of thinking and work is
towards that, because that seems to me a massively missing part of the jigsaw right now.
I couldn't agree with you more, right? And I think that has been part of the reason that the tech industry has been successful at kind of capturing our minds, because they give us some vision of what a future could be or is supposed to be, you know, regardless of whether it's one that is actually better or is going to work
out for most people, but it's a future nonetheless that says that things can get better. Whereas it feels like we've kind of lost that in the political sense, right? This idea that we can actually make
things better through politics rather than just through technology being created by Silicon Valley
companies or something like that. And I think my final question, to build off of what you were
saying there, is whether you see any examples of movements toward decomputing or movements
that are making progress in that direction.
I'll speak for myself: you were talking about hyperscale data centers earlier, and I've been very encouraged to see the movements and the organizing around data centers, the new visibility of those infrastructures, and how that has been a way into thinking about these technologies for people, in a way that maybe was not available before they became so visible and people started paying so much attention to them.
So I wonder on your end, you know, when you think about decomputing, when you think about
these efforts to question the technologies that are being rolled out and these broader
systems within which they exist, do you see any hopeful examples there that come to mind
for you?
There are movements around computing itself, actually, which are surprisingly positive. They come under a variety of names. What they all express to me, and what I identify with because I'm in a computing department, is a sense among people who actually work directly with these technologies of recognising the problems with it. Now, I'm not talking about AI here, because I think the current form of AI that we have at the moment is essentially pretty useless for social benefit. But computing more broadly, you know, has plenty of uses. I'm still a bit of a fan of the internet,
despite the state it's in. And I think these could all be important parts. So movements
like permacomputing, frugal computing, computing within limits, these are all identifiable
clusters of people in technology. But they're to me, in a way, receptors.
They're waiting to connect with, let's say, other social movements or other initiatives
that are coming at it from the other end.
I would say, for example, and this isn't the optimistic example, but something that's top of mind for me at the moment is the way algorithms are being deployed to further punish people for claiming welfare benefits, particularly people who are disabled. I mean, yeah, it's a very broad tendency, and it's as intense in the UK as it is
anywhere else. But actually, the movement of people with disabilities, disabled people
themselves, already have some elements of this. You know, technology is already integral to their lives and to how well or not those lives are lived. They have a technopolitics, and they also have an understanding of a sort of transformative technopolitics. They've got some sense, and also a hands-on sense, because they're often hacking the technologies to make them better for themselves, of how life could be built better: so that society wasn't constructing them as disabled, so that technologies weren't rendering them as disabled, but were doing the opposite.
And I think that's a good, you know, a very real thing.
The disabled movement is like a really good source for all this stuff, but it's also a
paradigm for the rest of us as an idea. Now, the example that I find the most inspiring at the
moment, I don't think it's really got that much to do with computing, but it does have the elements
of what I think of when I envision this, which is a thing they call the GKN collective, the GKN Factory Collective. It's a factory in Italy, near Florence.
They were making car axles and they were bought by a British hedge fund, which tried to do
what hedge funds do, which is shut them down and convert it all into cash.
The workers occupied the factory and refused to allow this to happen.
That's all very well and totally understandable and all power to them for doing that.
What they did then is, to me, kind of figurative of a response around technology in general, and AI in particular. They didn't just do that. They said, okay,
we need to be productive in some way because we've got to sustain ourselves. We're not going to do
this just as the workers left behind. We have to do this with the community. So the collective
structures they formed were with their local community, so the work is anchored in community. And they said, okay, if we're going to do anything, it has to be in the direction of a better world. A just transition is an idea that they subscribe to. So they said, okay,
we need to make stuff, we want to make stuff, we're skilled to make stuff. We don't want to
make car axles anymore. What are we going to do? And so they even collaborated with the local university to sort of develop plans about what they could do productively.
What they're doing at the moment is making cargo bikes,
which is great.
And they're also recycling solar panels, which is also great.
And they have plans, I think, to make PV cells of their own.
And even more than this, they understand
what they are doing within a much broader context.
They understand what they're doing
as part of this idea of a just transition, as the idea of working towards a balance with the planet, working towards
equality and justice for themselves and their communities, and working in a way that is pushing
back. I mean, the slogan they organize under is Insorgiamo, which means we rise up. And that is
a partisan slogan. And that's a very contentious thing in Italy, because there's a fascist government
and they're doing everything they can
to erase the memory of the partisans.
So they're saying, you know,
no, we are part of this movement for a world
that is not new, it's been here
and it's fought for its existence before.
And here we are fighting for our existence
but doing something very constructive and positive.
And I just wanted to connect back to your last question in a way. The thing that generative AI has again made very visible in some very specific ways, going back to the beginning of our conversation about things like copyright, and also all the data labor that a lot of us are now aware of that underpins all this stuff, is that there is nothing in these things that doesn't come from us in the first place.
You know, there's no value, no actual value or anything constructive
or positive or creative in this at all that didn't originally start with us. It's been
extracted, it's being divided, it's being reduced and turned against us. But at the end of the day,
actually, we already make this world every day and there isn't any reason why we couldn't remake it.
And that collective are doing it under duress, but they're doing it. And I think the onus
really is on the rest of us to take that as an example.
I think that's a great place to leave it. You know, it leaves us with a great example,
but also kind of brings this all together for us. It's been fantastic to learn more
about this concept and how you are conceiving of the moment that we are in.
Dan, thanks so much for taking the time. It's always a pleasure to chat with you.
Yeah, likewise. You're very welcome.
Dan McQuillan is a lecturer at Goldsmiths, University of London and the author of Resisting AI.
Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marks.
Production is by Kyla Hewson. Tech Won't Save Us relies on the support of listeners like you to keep providing critical perspectives on the tech industry. You can join hundreds of
other supporters by going to patreon.com slash tech won't save us and making a pledge of your
own. Thanks for listening and make sure to come back next week.