Tech Won't Save Us - Why We Must Resist AI w/ Dan McQuillan

Episode Date: March 9, 2023

Paris Marx is joined by Dan McQuillan to discuss how AI systems encourage ranking populations and austerity policies, and why understanding their politics is essential to opposing them.

Dan McQuillan is a Lecturer in Creative and Social Computing at Goldsmiths, University of London. He's also the author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence. You can follow Dan on Twitter at @danmcquillan.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and is part of the Harbinger Media Network.

Also mentioned in this episode:

Dan wrote specifically about ChatGPT and how we should approach it on his website.
Dan mentions TWIML as a podcast that has conversations with industry players that's informative for how these technologies work (though you're not likely to get a critical perspective on them), and Achille Mbembe's book Necropolitics.
OpenAI used Kenyan workers earning $2/hour to make ChatGPT less toxic.
The UK had to scrap a racist algorithm it was using for visa applications, and many councils dropped the use of algorithms in their welfare and benefits systems.
Dan mentions a Human Rights Watch report on the EU's flawed AI regulations and its impacts on the social safety net.
The Lucas Plan was developed by workers at Lucas Aerospace in the 1970s, but rejected by their bosses.

Support the show

Transcript
Starting point is 00:00:00 It's literally making stuff up and it has no idea what it's making up. Therefore, it is a bullshit engine. It's a bullshit engine in that it makes up stuff that essentially has no semantic content, grounding, causality or any of that in it. And it's bullshit because its only goal is to be plausible. Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx, and this week my guest is Dan McQuillan. Dan is a lecturer in creative and social computing at Goldsmiths, which is at the University of London. And he's also the author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Now, obviously, this is a conversation that is very relevant right now as ChatGPT and these image generation tools get a lot of media attention, public attention. You know, there's a lot of big promises and expectations
Starting point is 00:01:05 for what they might mean in the world, you know, often very positive things, most likely or almost certainly over-promising on what they can deliver, as is so often the case with the tech industry. So I think it's important that we have some conversations that, you know, throw a bit of cold water on this, throw a bit of reality
Starting point is 00:01:24 on what is likely to come of these technologies and whether we should be so quickly embracing them as Sam Altman and these other billionaires expect us to do. One thing that was really fascinating to me in having this conversation was that Dan told me when he set out to write the book, it was actually called AI for Good, right? And he was looking for a good version of AI to promote. And as he did the research, as he further looked into artificial intelligence, these technologies, how they work, the actual impact that they're having in the world, he changed that to what is now Resisting AI, you know, a very different take on these technologies and how we should approach them.
Starting point is 00:02:05 And I think that this conversation is really instructive, not just to understand how these technologies work, but also how they are deployed and the risks for how they can continue to be deployed as they are improved and advanced and made more capable or have more computing power behind them, not just in the private sector as they are rolled out by these companies, but also in the public sector, right? As more of these tools are used to deliver services in governments that are increasingly subject to austerity, spending cuts, don't have the money to deliver the services that publics expect from them, and they look for quote-unquote efficiencies in order to supposedly deliver these things in a better way.
Starting point is 00:02:48 But actually what they end up doing is creating more exclusion, making it more difficult for the poorest and most marginalized people in society. And so if we think about these technologies and the impact that they're going to have, it's much more likely to be more of that rather than being your doctor and your teacher and all these wonderful things that Sam Altman would want us to believe. So I think this is an important conversation. I really enjoyed talking to Dan.
Starting point is 00:03:13 I think you're going to benefit from it, learn from it, enjoy it. At least I hope so. And I think that especially in this moment where AI is getting so hyped up, we need these kinds of conversations to give us a bit of a reality check. So with that said, if you like this conversation, make sure to leave a five-star review on Apple Podcasts or Spotify.
Starting point is 00:03:31 You can also share it on social media or with any friends or colleagues who you think would learn from it. And if you want to support the work that goes into making the show every week so we can keep having these conversations with people like Dan to give us the critical perspectives on these technologies that are being hyped up by the tech industry, you can join supporters like Bayorn from Bern in Switzerland, Pete in Pawling, New York, and Randy Seldinger in Seattle by going to patreon.com slash techwontsaveus and becoming a supporter. Thanks so much and enjoy this week's conversation. Dan, welcome to Tech Won't Save Us. Thanks very much for having me. Very excited to chat with you. You know, obviously, these AI tools are everywhere today. And last year, you had a book published called Resisting AI, which is very relevant to everything that is going on in this moment. Great timing on the book. But it gives us something great to talk about and to dig into it, right? Because I
Starting point is 00:04:25 think it's a really important perspective on AI to have, especially in this moment where there is so much hype and excitement about it. And so I want to start just by getting your kind of initial impressions of what has been going on the past few months. Obviously, last year, we had the kind of slow rollout of these kind of image generation tools, things like DALI, Stable Diffusion, became slowly more popular throughout the year. And then, of course, in the past few months, we've had ChatGPT emerge. We've had the deal between Microsoft and OpenAI to kind of build that into the Bing search engine. Google is moving forward with its own kind of AI search engine integration. Facebook is talking recently about how it also has its,
Starting point is 00:05:05 and it might do something with it soon too, because of course, now that there's the hype, everyone has to get in on it. So I wonder just generally what you've made of the responses that there have been to these technologies over the past few months as they have slowly become much more common and the public has been interacting with them. Well, I suppose what's kind of currently on my mind is just because being stuck in academia, you tend to see what's most immediately around you. And I'm really, really distressed right now about the way a lot of academics are just rolling over and going, oh yeah, well, you know, chat, GPT, all the language models, they're here, you know, they're inevitable and, you know, we have to learn how to live with them. And they even valorize this, you know, they're saying, ah, you know,
Starting point is 00:05:48 we can adapt our ways of writing to, you know, include particular forms of creativity and we can use these tools. No one could say that there's absolutely nothing to be got from these tools, but I think these kinds of statements are anyway, what I call in the book, AI realism for starters, which is the sense of inevitability around AI, no matter how visible its sort of toxicity. And also the fact that they just seem to be able to sweep this toxicity under the carpet. I mean, the absolutely appalling sort of varieties of harm that are immediate and already rolled up in something like a language model just don't seem to trouble them. So I guess that just happens to be my little fishbowl at the moment. I'm not searching for something good to say,
Starting point is 00:06:24 because I have nothing good to say about large language models, but I suppose there's a kind of collateral benefit in, as you say, it's just pushed it further to the front of everyone's agenda. And so in that sense, I was reading a great thread just now by Emily Bender on open AI and their sort of delusionary statements about their careful custodianship of the future of artificial general intelligence and all this kind of nonsense which is just basically marketing stuff really but it also represents that kind of ideology and you know that's that's really great that lots of other stuff is coming kind of out from under the stone some things that were perhaps only of concern to those of us who spent too much time thinking about this kind of thing anyway and worrying about it and trying to get people
Starting point is 00:07:01 alerted to it now it's out in the open It is mostly hype and people seem to be getting swept along with it. But those of us who are in the skeptical corner are trying our best to highlight the urgent downsides. And I think some more of those will come out. I mean, I won't want to go on about it too long, but just to say it's a kind of weird time for me because my main concern and my concern in the book wasn't really to focus on these most sort of spectacular forms of AI in a way. I mean, I think they're important, but my gut feeling has always been actually that the most impactful applications of AI or effects of AI are those things that sort of much more invisibly permeate through the institutions and come to touch people's lives in much more ordinary ways, you know, when they're claiming benefits or when they're
Starting point is 00:07:48 seeking housing or when they're getting educated or whatever it is. And I don't think this is diverting. I think it is just shining a light on that stuff. But my sort of final resentment for the moment would be that by adopting these things, like Microsoft, for example, still sticking with them, GPT and Bing, to my mind is also kind of, it's just adding another brick of legitimation to these kind of technologies in general, and their general application to our lives. And that's what resisting AI is really about, is the broad resistance to these kind of computational assemblages being given any kind of
Starting point is 00:08:21 influence over our everyday life at all. No, I appreciate how you put that. And I guess that is kind of a bright side that you've discussed, right, is even as the hype kind of accelerates around these technologies, that does also give us an opportunity to have more of these kind of skeptical views about these technologies, get more prominence and attention. As you say, Emily Bender is someone I definitely need to have on the podcast to discuss this with her as well. I want to go back to what you were saying at the beginning about, you know, kind of the reaction of the academics that you're seeing, because it does seem like, you know, obviously, there's this kind of boosterish response to it where, oh, my God, this is going to change everything. This is great. The kind of Sam Altman perspective on this and the people like him.
Starting point is 00:09:01 But then it does seem like there's also kind of a pragmatic response that also kind of plays into this as well, that's really beneficial to them, which is associated with what you describe, right? These are here, we need to figure out how to adjust to them, rather than saying, hold on a second, do we actually need these technologies at all? And should we be embracing this and acting like it's just inevitable? Or can we push back on it? Yeah, I mean, just to sort of skip to the punchline in a way, I'm, I guess, a rejectionist or an abolitionist in that sense. And I think we will talk about this. I think that's not just about saying no, it's about saying, what are the alternatives and
Starting point is 00:09:34 how we go about that? But what you made me think of just then was, I do agree that there's, first off, that there is this more self-styled, sort of realistic approach to it. What I think in a way is that that's kind of more dangerous because it's easy to dismiss the hypesters and the boosters and people understand that positioning. You know, there's hypesters and boosters in every area of life. So people are somewhat inured to that. I think it's the positioning of people as responsible to be able to sort of, well, yeah, absolutely, there are problems with these things. But, you know, given that we have them, this is the inevitability, given that we have them weak. somehow these responsible liberal bourgeois intellectuals should be the ones who can take on
Starting point is 00:10:26 this essentially monstrous technology and decide for the rest of us how best it should be managed for the common good which is coincidentally something that they get to decide so um i think it's i mean in a way and i'm not you know I don't really make any sort of secret of my positionings around these things. One of my bigger concerns about AI, I suppose, and its openings towards facilitating reactionary solutions to things is not that that is either a sharp break in technology, it's not sci-fi, or that that's a sharp break in politics. The harms that we're talking about are an intensification of the harms that are happening right now under our nominally democratic, apparently liberal regimes. So it's not just that we've got lunatic, space-faring, long-term visionaries, if that's the right word for them. May they all go off in rockets on a one-way ticket. It's that we've got the responsible authorities who are responsibly closing off borders so that people die in the desert or
Starting point is 00:11:29 on the Mediterranean or responsibly doing all the other things that are causing harm right now. And those are the same people who are coming along and saying, don't worry, we have the capacity and we have the responsibility to manage these things sensibly for everyone's benefit. Yeah, it's a huge concern, right? And we'll obviously come back to it through the course of this conversation to talk about how, you know, this does make AI technologies much more concerning and worrying, especially when, you know, they're not the very visible chat GPT style tool, but are much more kind of secretly integrated into the systems that enable all the problems
Starting point is 00:12:05 that you're just talking about, right? And so before we get to discussing those things, I think it's better for us to get an understanding of how these technologies actually work, right? Because that I think is going to help to illustrate how these problems are caused and why they're difficult to avoid. So can you talk to us a bit about how, you know, AI technologies, so to speak, actually work, you know, how they kind of learn, quote unquote, you know, I'm putting scare quotes on that for listeners who can't see me. Very scary. Totally. So how should we understand these technologies and how they're actually developed and how they function? Okay, well, thanks for that. It gives me a chance to try and ground,
Starting point is 00:12:42 you know, at least sort of backfill some of my opinions with some semblance of being grounded in an understanding of what's going on. And maybe if we work backwards from the language models, the large language models, which are the ones that are so front and center of everyone's attention. I mean, they're, you know, essentially text prediction machines. Inside of AI in general, inside of language models per se, you know, there are some sort some clever ways of doing things. I'm not questioning that. They're somewhat deft and skillful ways of manipulating things under their own terms. What they're largely manipulating are statistical optimizations of one kind or another. It's machine learning. And so language models just have used transformer models, essentially, which is just a particular way of setting up that kind of optimization to really just learn how to fill in the blanks. I mean, they've learned how to, you know, the cat sat on the blank. That's how they've learned to fill in that gap because they have quite sophisticated
Starting point is 00:13:38 methods for doing that, huge volumes of data to do that, quite a large amount of invisible ghost labor, which, you know, invisible human input into that, and, you know, oodles and oodles of computing power, then they do that very well. And doing that very well gives them a number of capacities. I mean, they can produce grammatical, mostly contextual, and sometimes creative seeming texts that are doing what they're meant to do which is be plausible they don't have any capacity to understand if they don't understand anything they don't understanding anything more than a teletype machine understands anything
Starting point is 00:14:19 they produce sequences of letters and they've learned how to do that in a way that achieves a mathematical minimization. Now, that does gift them with certain affordances, and producing sort of plausible-looking text is one of them. And it's plausible exactly because it's been trained against text that was actually produced by humans to be a good imitation. And that's what it is. It's an imitation. It's a somewhat randomized imitation of what humans can produce given the right set of inputs and prompts. And given the vastness of the different examples and subjects and topics and contexts and sources that have been sucked into these things, what they can therefore produce is also quite diverse. But I mean, I guess, and this is where you could really help me by prompting me, I guess having spent a little bit
Starting point is 00:15:04 too much time looking at that, I sometimes get a bit flat-footed when I'm faced with people who kind of read it as anything more. I mean, I can understand it. I'm not trying to belittle anybody. But because it's plausible text doesn't mean what it's saying is plausible. It doesn't mean that it has any conception of what it's saying. It's us reading into that pattern that there's some meaning in it. We are bringing the meaning to it.
Starting point is 00:15:28 There is no innate meaning in what an AI is doing other than a mathematical operation of statistical minimization and a reproduction of that by deploying that model, that set of learned parameters, basically. There's nothing else in it. If it seems reasonable to us or if it seems interesting and weird to us, that's us. That does seem to place some pretty strong limits on what we'd trust it to do. I think of it as a really, really interesting party trick. But why would you do anything else with it?
Starting point is 00:15:55 I don't know. Why would you allow it to be an integral part of our knowledge searching structures and search engines? And why would you let it write your essay? I mean, that sounds like a, I'm not trying, I'm not getting at my students here. If anybody's listening, you know, that's, you want to try it, that's up to you, right? But it's inevitably going to produce what the industry itself calls hallucinations, which I think is still over-reading it because it implies that there's something doing the
Starting point is 00:16:18 hallucinating, but let's use that word for the moment. Definitely, you know, it always, it's literally making stuff up and it has no idea what it's making up therefore it is a bullshit engine it's a bullshit engine in that it makes up stuff that is essentially has no semantic content grounding causality or any of that in it and it's bullshit because its only goal is to be plausible just like somebody who tries to bullshit you at a party that they know all about fusion physics or something like that, and then they don't. They're just bullshitting, and they're just trying to sound convincing. So yeah, I don't know. I mean, that's language models. And what they're doing is a particular application to this whole business of trying to predict blank words and sentences. That's the learning method. That's just a very specific
Starting point is 00:17:02 instance of the general, what we call AI right now's just a very specific instance of the general, what we call AI right now is a very particular slice of what could possibly be called AI. And AI over history has been lots of different things, as I'm sure you've covered before, and you very well know yourself, right? So lots of different kinds of AI. This is one very particular kind. It's called connectionist AI. So it's not relying on the idea that there's rules in the system. There's no expert stuff in the system. What it's relying on is that if you present this kind of learning with a huge enough data set, you give it a certain amount of desired outputs and you make lots and lots of interconnections in between those two things. And then you sort of turn the handle and you keep on doing a sort of self-correction process called back propagation so that you try to more
Starting point is 00:17:46 closely approach the quote-unquote correct output given your input i mean that's all it's just a kind of turn in the hand like a mangle it sort of mangles this stuff until it learns how to fake it it fakes it till it makes it basically and that's pretty much the whole of sort of i mean deep learning reinforcement learning there were variations on a theme, you know, but they all have these common characteristics. They're all based essentially on mathematical optimization. There's absolutely no reasoning involved in any sense to understand it. And they all require vast amounts of computation, vast amounts of data, which have their own political implications, of course, and their own ethical implications and their own social implications.
Starting point is 00:18:24 So yeah, I don't know. That's my starter for 10. No, I think it's a very concise explanation of how these things actually work. And I think what you get at there is why some of the kind of reporting around it has been concerning, right? As a lot of these publications have been publishing transcripts with ChatGPT and saying like, oh my God, look at all the things that it's been saying to me after I fed it these particular prompts to lead it in that particular direction. You know, this is something that we should be very worried about in like, not so much in the ways that I think we'll be talking about, but more in the way that, oh my God, look at the things that's spitting out. What if it actually means these things or gets people to believe them
Starting point is 00:19:03 or what have you, right? It does seem like looking at this through the wrong lens of what the problem is here and how we approach it. You mentioned there the degree of computation that is necessary for these AI tools, ChatGPT and things like that in particular, but part of the reason that the hype around this is happening in this moment is because there has been so much progress in computational power and creating these kind of cloud data centers where there's so much computing power available? Or would you link it more to a development in kind of the politics and how that has evolved in our society in recent decades. I try to earwig a bit on industry conversations, and actually a really good podcast for that kind of thing for anybody who wants to sort of try and listen along is called TwiML, which is This Week in Machine Learning. And the host on there, Sam, interviews key practitioners in all of these areas and does a very good job, I think, of some of it would be just very technical and very industry focused. But if you can sort of listen along, you understand some of the more internal narratives. And I think it's pretty well understood in the industry that the main thing that's going on at the moment is scale.
Starting point is 00:20:14 There have been some actual changes and there's a little bit of competition between, you know, do we have deep learning or reinforcement learning? And now we've got transformer models and they have, well, perhaps not surprisingly transformed a lot of things, but not fundamentally. You know, fundamentally, these are just, there are variations on a theme and essentially it's about scale. It's about compute power, as they say. But I think something that I should have said a little bit earlier in a way is, and I'd like to sort of emphasize this to people, is that I may sound like I'm going on about the computational technology. And I am, because I think it's important to be materialist about these things, you know, across the board. I mean, politically materialist, or rather sort of a materialist in the sense of political economy and follow the money and everything else, but also actually look at
Starting point is 00:20:55 the technologies themselves as, you know, not being simply neutral or passive technical frameworks in which people invest their politics. I mean, of course they do. We know that. We can talk about the weird politics of supremacist politics that gets invested by Silicon Valley in these machinery, but they're not just passive vessels for our imagination. They have particular characters. They make certain things easier and certain things harder. But my reading of the politics of them, it does start from, or at least that's how I try to do it in the book. I do try to start from, let's look at what AI actually does, not what the newspapers think it does, or certainly not what sociologists think
Starting point is 00:21:33 it does, or maybe not even what some computer scientists think. What is it really doing as a material technology? But that's because it never exists like that by itself. That stuff didn't just appear. We tend to talk about it as if it kind of exists over there somehow and then is being brought into society. That is a kind of dangerous fiction. I mean, this stuff emerged out of the same social, political matrix that we currently inhabit and immediately goes back into it. So it's really impossible to separate the operations that I find really interesting from the institutionalist assemblies that they're always part of, of the kind of, you know, ideas and ideologies and sort of values
Starting point is 00:22:11 that are already present throughout their creation process, throughout the process of even conceiving of these things, throughout their operation and throughout their application. These things can never be separated. They just act as a particular condenser or a particular channel within that process. So a real understanding of what these mean in a way in the world can never really be separated from the wider social matrix. And so I guess for me, trying to read the significance of AI is trying to say, well, what are they actually doing and how could this affect people's lives? But it's trying to say, but we're not starting from a blank here. You know, we're starting from the societies that we all have.
Starting point is 00:22:50 As they say, we're all experts by experience, right? We all inhabit this world and we understand some of its very stark and peculiar topologies. These same landscape is that out of which AI has emerged from a particular corner of it. Yeah, but out of this particular world. And that's where it actually operates. There's no other space. So if we need to understand what AI means, where it comes from, what it's going to do, we can't really do that without trying to see it as part of a jigsaw puzzle in this wider jigsaw puzzle of society as we know it and history as we know it. The ongoing dynamic and unfolding of history.
Starting point is 00:23:31 I think AI is part of that history right now. And it's certainly playing an active part in trying to foreclose certain futures and open up certain other futures. That's why I do find it something important to pay attention to. Yeah, you know, I would say that kind of addresses my question on politics, but we can get more specific about it, right? Because in the book, you describe how, you know, obviously we had this neoliberal turn in the 70s and 80s that led to a greater kind of privatization of the functions of government, a greater reliance on the market to kind of provision services for people, you know, cuts to the welfare state, government seeking efficiencies, quote unquote, to deliver services and things like that. And as things get marketized, there's also a vast production of data that goes along with kind of creating these systems. And that seems to be a type of environment, you know, less investment in the public service,
Starting point is 00:24:15 the government needing to seek out efficiencies, the market providing more of the services that we rely on, that seems to be kind of something that would be beneficial to an AI type of approach to, you know, all of these sorts of functions. So maybe you can talk about how AI fits into this broader kind of political picture or economic picture that we've created over these past couple of decades, and how it kind of exacerbates these trends rather than seeking to ameliorate them. Yeah, absolutely. I mean, what you're describing a bit, I think, is, I mean, I originally signed up with Bristol University Press to write a book called AI for Good. I'm in a computer science department, I feel, but I also have experience of, you know, working in what they call the social
Starting point is 00:24:58 sector, you know, but that's my twin concerns. And maybe I can help, you know, to usher in this idea of sort of AI in the public interest or something like that. And I just went through this series of kind of, wait, what sort of moments, you know, as I was kind of really trying to engage with the AI and at the same time have a kind of sense of history and sense of political and social understanding. I was trying to understand what was going on with AI and went through a series of sort of negative revelations, you know, sort of going, wait, this is exactly like neoliberalism and, you know, wait, this is in its own way, it's entirely recapitulating a sort of Hayek idea of what markets are supposed to do, you know, sort of optimal
Starting point is 00:25:36 distillation of all available information. So yes, basically, I mean, you know, AI in that sense, you could have almost predicted AI based on a reading of neoliberalism. I think the aspects of that that are most interesting to me are it plays into this anti-social in the literal sense, anti-society. There's anti-society, privatizing, centralizing, seamless construction of a space of free flows for capital and all of that. It does play into that. One of the useful things I think about the analysis in neoliberalism that's equally important for AI and is somewhat overlooked sometimes perhaps is the idea of the simultaneous production of the subjects. In other words, us, how we experience ourselves,
Starting point is 00:26:18 how we see ourselves and each other's under neoliberalism. I think that's one of the really strong elements of the critique of neoliberalism, as opposed to just like all the capitalisms that we've had, is its hyper-emphasis on hyper-individualism. And the fact that we exist as a very isolated subjects and entities, and that what's relevant for me is a sense that AI, in that sort of feedback way I was saying, comes out of and feeds back into that process as well, that it feeds back into subjectivation. AI produces its own subjects in a way. It's not just a mechanism for manipulating us. It's a mechanism for producing or producing certain kinds of us or producing certain understandings in us of who we are and what we should want. I think that's always how hegemony works.
Starting point is 00:27:05 That establishes a set of cultural and experiential framework and field as much as it does governance, as much as it does industrial relations or relations of work or labor and social reproduction. It establishes these things across the board, from the psyche to the political economy. And I think AI is active in that way. The other thing about it, from what you said, was one of my first kind of revelatory moments in sort of unpicking what actually goes on inside AI and realizing that it doesn't really just do anything except come up with these ways of dividing things and ways of sort of doing a sort of utilitarian optimal division you know a way of sort of ranking in essentially the desirable and the less desirable that again think about the
Starting point is 00:27:52 moment slightly longer political moment when in the sense of the financial crash for example what i'm saying in a very long-winded way is i just looked at this and say oh my god this is a machine for austerity this is an austerity machine It's a machine for reproducing and scaling this mechanism of austerity and this mechanism of precaritization. You know, when you're looking at AI, you're looking at Deliveroo. You're looking at Uber. These are inseparable from the core, we want to call them dark patterns. Somebody called it to me the other day, taking that analogy from web design. You know, these are the dark patterns that perhaps weren't intended to be dark, but they came out of a certain logic and they're embedded in these systems. So yeah, it does absolutely not conflict with neoliberalism, but seems like a kind of, to me, a kind of turning up of the volume of the agenda that's been rolling since, let's pick a date, 1973 and the coup in Chile or whatever. It's a machine for the reproduction of Chicago boys' values or whatever. At the same
Starting point is 00:28:52 time, I think that's pretty bad. But what really concerns me maybe is that this is coming at a time when I think the wheels are coming off that dominant world model. You know, there seems to be quite a lot of unintended friction in trying to sustain this neoliberal world of sort of just-in-time transglobal economic operations. And actually, for a number of reasons, one of the financial crashes, one of them, you know, the so-called refugee crisis or the, you know, the sort of massive displacements for war and climate change and famine and other reasons that happen around the world plus the climate crisis you know sort of overarching everything is causing shocks and tensions in the system that not to mention the pandemic actually that was also another kick in the nuts for the for the existing system and it's really you know it's staggering it's staggering and that's kind of that is worrying because it's not like the relations that have accrued to themselves such power over this period of time even compared to sort of previous versions of capitalism let's say you know the
Starting point is 00:30:01 massive intention to happen over that it's not like they're going to go oh yeah it's working anymore. We better go for a sort of socialist republic or something. They're going to dig in, you know, they're going to dig in and they're going to seek other means as being pretty generative of forms of far-right politics, if not fascism, which is you've got a crisis, you've got a sort of ruling class panic, if you like, and you've got a complementary presence of a sort of solution-offering ideologies. And we don't have to look anywhere. You don't have to look outside, down the street at the moment in many, many places in the world to see rising far-right movements. So that is in my mind. I mean, that's something that has concerned me for a long time. You know, politically, I've always felt myself to be an anti-fascist, but the last few years is incredible, the rise of the far-right. So again, if you're really considering ai as a technology that's
Starting point is 00:31:05 enmeshed and entangled and involved in creating circuits and feedback loops in the society that it comes from and the society that is currently emerging then the fact that this machine for division and ranking and separation and a sort of you know this machine that tends to essentialize the attributes that it imposes on people, for that to come about at a time when not only do we have an existing injustices, but those injustices are veering in a far-right direction in so many ways, whether that's fascist-style far-right or more religiously justified far-right or whatever kind of nationalisms or identitarianisms
Starting point is 00:31:45 we're talking about, it's breaking out all over. And the possibility of that happening in conjunction with the emergence of AI is partly also why I wrote the book. You know, I think it is concerning to see how these things develop, right? And I want to come back to the point on fascism and the relationship to the rising far-right in just a minute. But, you know, even if we think about how this affects our governments that have not elected far-right parties, which is still the case in some countries, luckily, if we look at that, you know, I was talking to Rosie Cullington recently about how, you know, bringing these kind of private sector technologies into the public sector still has
Starting point is 00:32:22 an effect on what governments can do and what they do do, right? Because it does help to shape the responses. And we can already see, you know, people like Sam Altman and these other kind of boosters of AI technologies, whether it's chat GPT stuff, or more broadly, the kind of suite of AI tools, that these tools are going to be your doctor in the future, they are going to be your teacher in the future. So we're going to need less human teachers. And, you know, it won't be kind of their kids, the kids of the billionaires who will have the kind of computer teachers and the chatbot teachers, it will be poor people, right, who don't have the money to buy access to private schools and private hospitals and those sorts of things, right? Even as, you know, especially in this moment, we see the challenges that a lot of governments are facing to fund public healthcare systems and
Starting point is 00:33:09 the talk that, oh, we need more kind of private sector involvement in the healthcare system. And then what is that going to mean into the future? How do these things evolve? How does technology and AI then justify or help to justify, you know, a further privatization of these things because of how AI is used. But then even more kind of, I guess, in a more hidden sense, in a sense that we don't see so much as you describe in the book, there's also the way that governments, as you say, after having been cut so much after having been subject to so much austerity, say maybe these AI tools can help us to, you know, administer our programs in a different way, right? Can help us to see who is a deserving welfare recipient and who is an undeserving
Starting point is 00:33:50 welfare recipient. And what other ways can we use these tools to classify the people that we supposedly serve so that we don't need so many bureaucrats, so that we don't need so much labor, so that we can reduce the cost of delivering services so that we can make them more quote-unquote efficient. How do you see that aspect of this playing out as these technologies get integrated into the public sector and then have these consequences that are often not expected, but maybe people like us would say, pay attention, this is going to happen, even though a lot of times those concerns are not listened to. I think you portrayed a very convincing picture of what, not only what will happen,
Starting point is 00:34:34 what is happening. And that is, although I'm trying to also raise warning flags about potential more extreme outcomes of these things, that is actually really my main concern, because that's the stuff that's hitting people right now. And it hits all the people that are already being hit so hard by everything else that I've been digging into in the last couple of weeks, a little bit more of what's going on behind the benefit system in the UK in this rather notorious benefit called universal credit. And actually, I have to say, I have credit Human Rights Watch here for a report they wrote a couple of years ago that I didn't read until recently. And reading through it, I mean, firstly, in somewhat sort of affirming kind of way, it really made concrete a lot of things that
Starting point is 00:35:10 I'd written into the book without, you know, knowing these on the ground facts about how these things would affect ordinary lives under pressure, if you like, you know, and feeling that these are simultaneously distanced, cold, thoughtless mechanisms of bureaucratic administration trying to achieve sort of impossible target, given the ridiculous goals that politics has in turn set them. And at the same time, sort of genuinely mechanisms of cruelty, that they are actually forms of punishment. I mean, in the UK, again, that isn't really hidden. It's not said so much about universal credit. But for example, the system that's in the UK that's created to deal with a revealing
Starting point is 00:35:59 phrase, asylum seekers, is actually officially called the hostile environment. But I was, you know, chatting to somebody who's working with universal credit beneficiaries, and they were saying that one of those beneficiaries they were talking to referred to universal credit as a hostile environment. And I think that's absolutely right. I mean, it is in the way it sanctions people. It has this incredible sort of twisted logic of saying, well, we're only going to give you the absolute bare minimum. And if something happens, and this can be quite trivial, that somehow transgresses one of the many, many parameters we set up for how your behavior and life should be lived, then we're going to sanction you and only give you 70%, which is by definition 70% of what you need to survive.
Starting point is 00:36:41 So, you know, it's really appalling. And I mean, there are many, many dimensions to this. And I think they perhaps reflect a few of the things I tried to put in the book that I did find quite early on Hannah Arendt's concept of thoughtlessness very helpful in sort of guiding me through what's going on because it was the capacity for such cruelty, such inhumanity, violences of various valences,
Starting point is 00:37:08 you know, happening really from people to people, but through the mediation of these systems is so awful that, you know, unless you've kind of seen it or studied it or experienced it, I think a lot of people still don't really either know about it or sort of want to know about it. Again, these things all play into each other, you know, sometimes we really don't really either know about it or sort of want to know about it. Again, these things all play into each other. Sometimes we really don't want to know what's going on. That's already present. The things that you described are already present. And yes, of course, the thing that you also described is that so much of that is now being absorbed into the private sector, which makes it, I mean, what does that give us? That gives us perhaps an incredible intensification of a single parameter, which is making a profit out of all this stuff. I mean, the fact that people, there's been newspaper stories in the last few
Starting point is 00:37:47 days in the UK comparing the food that you get in an old people's home with the stereotypical oysters that the CEO of that company is eating on their yacht. I mean, it's cartoonish in its horror. Of course, the privatization makes it even less accountable. Not that local authorities or government or local government was accountable before in any meaningful way, but it makes it less accountable, makes it more opaque, makes it more difficult for those people who are trying to exercise some guardianship, journalists or whoever else it is, activists or whoever. It makes it more difficult for them to get traction on it. It is the market ideology, of course, but I'm thinking here of also our own experiences where I work of that same kind of restructuring at the hands of consultancies who talk in terms of efficiencies and optimizations. And I think that's one of the other things I just like to sort of drop in in a way that's a consequence of this privatization, but it's kind of giving much more
Starting point is 00:38:45 of a hold to this single valued good as well, which is this idea that things must be optimized. And optimization is a very, you know, combined with the capacity for cruelty, combined with the capacity to leave people without the bare minimum that they need to survive becomes, again, the term that I adopted in my Jackdaw-like way,like way, picking up little bits as I go along. The term that made sense to me at that point was the idea of necropolitics, which is not meaning some massive transformation of politics. If anything, I think Achillean Bembe, when he wrote this book on necropolitics, he was talking about the still murderous colonialism that exists within post-colonial regimes.
Starting point is 00:39:25 But actually, that seems equally applicable to a lot of the systems that we live under, you know, depending on which bit of the system you experience. And I wanted to emphasize the necropolitics because I just wanted to emphasize, really, that these systems, I guess, in a way, are literally a matter of life and death. They aren't ordering anybody out to die or automated executions or something like that. But what they are doing is seeping out of the system in a way that is justified by being somehow mathematized and algorithmic and computational. Perhaps nobody really believes it's fully objective, but it's systematized at least, and it's authorized, just seeping out the sheer possibility of the continuation of existence for so many
Starting point is 00:40:05 people. So that's much more everyday and very pervasive. And I think all that chat GPT and these other sort of generative models, in a way, they're kind of generative, possibly of further sort of ideas about how to do this for other institutions that didn't yet think they too could get in on this particular party, you know, that the welfare benefits people and border forces, I mean, all over ai you know and now i think it will probably spread to the rest of them under the under the rubric of like oh wow maybe we could get i mean mental health is a particular concern of mine in general as an interest area because it
Starting point is 00:40:39 seems one of those areas which is a focus for the sort of effects of systemic categorization the marginalization of the of the sort of more vulnerable a statement about what society really means how do we care for people who at that particular moment are not able to care for themselves and at the same time it speaks so much to our values of normativity and what we think is okay and what we think isn't okay and who will accept because they're productive and who we won't accept because they're not productive so mental health seems to encapsulate so many things and of course mental health is now you know i've i've been concerned about ai and mental health for many years but right now i imagine that all of the people who are running underfunded programs for cbt therapy or whatever
Starting point is 00:41:18 are looking at gpt models and going maybe that's the answer it's very concerning right especially when you go all the way back to joseph weismbaum and Eliza in the 1960s and how he created the system that was basically an early chatbot, you know, and it was supposed to be like a psychotherapist. And he was like, you know, people will talk to it. They'll never actually believe that, you know, the computer understands them. But then he was shocked to learn that they did, right? And they did kind of buy into it and want to believe that it was real. And he spent his whole life afterwards saying, we really need to be concerned about this AI stuff, right? But we don't like to learn the lessons of the past doesn't feel ethical doubt, right? So if you integrate it into these kind of bureaucratic systems, it doesn't worry that you're giving someone only 70% of what they need to live on, for example, or you're cutting them off from benefits altogether because
Starting point is 00:42:16 you flag them as someone who doesn't deserve it, right? You know, I'll just say for listeners that if they want an example of this, they can go back to episode 72 in August of 2021 with Dakshayini Suryakumaran, where we talked about how this occurred in Australia with their kind of benefits system and how they implemented it to kind of cut people off, how it led to suicides and all these sorts of things based on it was integrated in a way that wasn't appropriate, right? That didn't work. You know, whether these things are ever appropriate, I think is a question, but in this case, it was flagging people who were completely legitimate, right? I wonder, you know, going back to what you were saying about the kind of concerns about how this can lead us toward or further empower a kind of far-right politics, a fascist politics that we're already seeing on
Starting point is 00:43:02 the rise right now. I think I have two kind of questions on that. Do you think the views, the politics, the perspectives of the people developing these technologies matter in this sense? When we look at these people, we know that there are people like Peter Thiel who are involved in this, who is very much, I feel comfortable calling him a fascist, you know, and he promotes the use of these AI tools in ways that we need to think about building artificial general intelligence and ensuring that the human species lives for way long into the future. And particularly people like them, not, you know, the poor people who don't matter so much. Right. Or so how much do you think that matters, the particular politics of these people? But then also, how did these systems in general kind of empower this way of thinking that leads us more in the direction of a far-right politics and that way of kind of seeing the solution to the problems that we very much do have in our society, as we've already seen
Starting point is 00:44:17 our political parties, our political systems shifting in that direction? You know, the center left party moves to the center, the center rightright party moves to the far-right, for example. You know, everyone's kind of shifting in this direction. To what degree are these technologies kind of helping move that along? I think, of course, it's important to do what you did and say, well, actually, this person is a fascist. I think, you know, the kind of meme that I always like, which gets repeated quite a lot at the moment, is the variation on the theme of saying, when they tell you who they are, you should believe them. There are many, many people at the moment who are mouthing and espousing and actively advocating for things that I think are
Starting point is 00:44:58 unambiguously fascistic. We shouldn't beat about the bush. So those people are present in the same way that there are actual fascists on the streets. Those things need to be opposed and dealt with. But they are, in a way, I don't think they're the determinant factors. I don't think there's determinism in this. It's a complex unfolding of stuff. But that doesn't need to leave us powerless in trying to analyze it. I don't think they're the most important factor, actually. I think it's perhaps no surprise that people like that find themselves drawn to or in positions of incubating technologies like this.
Starting point is 00:45:33 But I think that's more a kind of resonance between their pre-existing worldviews and what these technologies offer and what they're good for. I think what's far more concerning is the stuff that you have already been alluding to, which is the kind of systemic side of things and you know we have a what i saw called a few weeks ago which i've now adopted completely that poly crisis you know we've got a plurality of overlapping crises going on which are going to make life just you know obviously for huge parts of the world they're already critically filled with crisis and always have been you know ever since the p the PAPs, I don't know, perhaps since the beginning of empire, let's say, you know, there's always been the global South has always been, you know, subject
Starting point is 00:46:12 to mortal precarity. And that perhaps is simply just coming home a bit more now. I don't know, but whatever it is, I think it's pretty inevitable that all of us in one way or another are subjected to greater precariousness and more sort of threats in some way. This is the crisis of legitimacy that the system itself is always experiencing because even those who are supposed to be loyal to it now are concerned about their own existence and the future of their families and so forth. There's a systemic set of circumstances in which AI emerges. You know, it's no accident that it's emerged at this time in this form, and to which AI forms a kind of feedback loop. And I think, again, from my sort of personal
Starting point is 00:46:51 probings into how AI actually works, it seems to me that, again, that in itself is no surprise in that what it does as a sort of computational party trick, you know, made meaningful in the world, if you connect it to actual world operations, is essentially forms of closing off. It is forms of enclosure, let's say. Or another way of saying that is forms of exclusion. And those kind of dynamics, that's what it offers. I don't think it really offers, I don't know what phrase would be genuinely productive, but it doesn't really add anything new into the world. What it does is provide means of boundarying, dividing, valuing, as you say, particularly ideas of worth terrible outcomes by people who may not see themselves as actually believing in any of these ideologies that the more extreme people propagate, but they
Starting point is 00:47:50 just feel something needs to be done. We need to sort this problem out. You know, too many people coming across the English Channel in small boats and we just can't, you know, I can't hardly get a school place for my kid. And, you know, when I go to hospital, there's a 12 hour wait, all of which is true. It's very easy to point the finger as always you know in scapegoating politics which again is very much a far-right tactic as well as to blame some other groups some out group to find ways of systematizing of algorithmicizing of sort of mathematizing those kind of operations as well there's not this stuff will happen it is happening these kind of recturing politics are happening and they would happen if AI wasn't here. But I do think that AI
Starting point is 00:48:29 is very unfortunately resonates far too closely with these things. So I don't say AI is fascist. I sometimes get sort of queried about that. I don't say that because fascism is a political category. It's something that we do in social forms. But I think we do need to have a broader understanding of politics, actually. I think we need to have a techno-politics, particularly in our time. But I think it's been true since the Luddites. We need to have a techno-politics where we understand that what we understand as politics always rests on, is shaped by, is delivered through particular technologies. And that in itself is shaped by the character of those technologies, shaped by the particular things those technologies are good at doing and the particular sort of opportunities they seem to close off. I fear that might already be getting a little bit vague sounding. But essentially, I see AI as forming a mechanism that's very able to extend processes of enclosure extraction and exclusion.
Starting point is 00:49:25 Yeah, no. And I think I just want to kind of emphasize what you're saying there by comparing that to what we hear from, you know, the people who are developing these technologies, right? Like if we think about the Sam Altmans of the world, for example, or if we think about any of these other people, they are telling us that these AI tools are going to have so many incredible benefits for the world. As I was saying, you know, they're going to be your doctor, they're going to be your teacher and all these other things. They're going to make opportunities and education and all this available to so many more people because these tools are just so powerful and what have you, right? And what we're talking about and what you're talking about is very much that it's very unlikely that those things are going to happen in the way that they are telling us. when they're integrated into systems that are already kind of enabling, you know, forms of exclusion and harm and all these other things that already exist in the world as it's happening.
Starting point is 00:50:31 And especially at a moment where the crises that we face are escalating. And so there's even increased pressure to, like, do something, about anything. And then that opens the gates to say, okay, well, we just need to accept these harms or whatnot because of what's happening here, right? To close off our conversation, I wanted to ask how you think we should respond collectively to AI. You mentioned that you originally started writing a book called AI for Good. So is our response to think about how we can have good AI, or what a progressive AI looks like? Or does it need to be a much more comprehensive response that challenges AI itself and thinks about how we address these problems in other ways?
Starting point is 00:51:14 You know, I definitely think we should challenge it. I mean, it's not the kind of thing that we'd hear in these parts, on this podcast, but in general, you know, people still seem to feel the need to preface any conversation about it, no matter how critical, by saying something like, of course, this has got huge potential for healthcare, or huge potential for education, or whatever else it is, as you say, but, and then go on to list some of the
Starting point is 00:51:45 evidential and empirical harms. And it seems like such nonsense. I mean, why do we do that? That framing is in fact complicit, you know, in extending the range of harms that these things are likely to cause. All of the benefits, I mean, when it comes out of the mouth or the Twitter feed of someone like Altman, then, you know, we know it for what it is: supremacist ramblings mixed with self-interested investment tactics and PR. But when it comes out of lots of other people's mouths, it obscures a lot more the reality of the thing, which is that, actually, and I'd stand by this one in a corner of a boxing ring, the benefits are all speculative. All of the things that are talked about, how they're going to make society better, are, you know, actually I would say, fantasies. Whereas the harms
Starting point is 00:52:29 that these things are able to do are already evidenced, even in the minimal ways that real AI has been deployed in the world. And one of the reasons why, to get onto your question, I talk about an anti-fascist approach, for example, is simply that one of the principles of anti-fascism is that you don't wait for fascism and then go, right, now we need to oppose it, because then you're in a really bad place to do that. And I think, at least by analogy, even if you don't think these technologies are fascistic in some way, at the very least the analogy holds of not waiting for all the bad things to happen before you go, yeah, well, those things are bad, now we need to really undo them. Once they become sedimented in our infrastructures, once we are not training any
Starting point is 00:53:03 doctors anymore because we relied on these systems, or whatever else it is, that seems very foolish. Much more on the plus side, what I guess writing the book taught me is to try to think about technologies in the same way as I would think about what's sometimes called prefigurative politics. If we're going to envision and imagine better worlds, and I think this is what we should be doing, envisioning and imagining, and I say that plural, better worlds, which can be multiple and, in different places, different things; if we're going to imagine those, and the first thing obviously is the need to imagine and believe that they are as possible as the horrendous reality that we've got, then it's only consistent with that to think about, if we're using tools of any kind, and
Starting point is 00:53:41 computation in particular, what forms of computation might be, you know, consonant with that way of living, with those kinds of relationalities that we want. I mean, care is something that I didn't mention explicitly, but it came out in so much of what you were saying, and I was echoing, about the consequences of these systems: they're so uncaring, so callous and cruel. Forms of relationality that actually promote care and mutual aid and solidarity, those are the values that I would like to see much more prevalent in society, values that I have witnessed for myself, have experienced and been part of. I think they offer a huge potential benefit for facing the challenges that are coming, and they're particularly relevant to times of crisis. So we should be trying to conceive of modes of
Starting point is 00:54:20 technology that are consistent with those. And I think that does mean taking a step back. AI is a florid outcome of a particular kind of techno-social matrix, and I don't see anything worth keeping, actually, in contemporary AI. I mean, I have had a couple of exchanges with Gary Marcus about this, because he's sort of, on the one hand, really ready to go out there and denounce AI as it is, but mainly so he can wind it back to the form of AI that he thinks is better, which is a kind of hybrid with expert systems. I'm just not interested. There's no AI of any form that I've encountered, and I've got some historical understanding as well, not just contemporary, that I see differently. I see it all as having either come out of neoliberalism or out of the Cold War. It expresses and instantiates those values. And to be clear, I'm not writing off computing. I think maybe we've already got more
Starting point is 00:55:02 computing than we need, and maybe we should really roll backwards. I love technology, but I still hold out the possibility that there are ways of assembling kinds of technologies that are consistent with the values of the world that we want to live in, and that's the exercise that we should undertake. At the moment, the way I'm thinking about that, as my little shorthand, is as a sort of shift from cyberpunk, which tells us, you know, how the world looks when you actually put this stuff on the street in the hands of the already existing mafia; that's what it looks like, cyberpunk. Towards solarpunk, which is, you know, the idea of what the world might look like if we allow ourselves to be led by
Starting point is 00:55:38 pro-social values of care and solidarity, mutual aid, imagination, creativity, diversity, and inclusion. If we allow ourselves to think about those kinds of societies, then we must allow ourselves to think about what kinds of technologies might support them. And I think the potential is there. I'm not talking about a 50-year research project here. If anything, it's about recombining, taking apart and putting back together in a very different way, some of the stuff that we already have. I do think this is an immediate thing. And the historical example, as you would know, that I put in the book is one from the 1970s, where a bunch of workers at the Lucas Aerospace company did exactly that. They said, we don't want to be in this arms company anymore, for a variety of reasons.
Starting point is 00:56:14 So we're going to use the existing skills we have. And they were all shop floor workers, but because they were in a high-tech industry, they had skills: engineering skills, practical engineering skills, design skills. And they designed early forms of hybrid power, solar cells, and so forth, because that time was the beginning of the environmental movement. They were affected by that, drew those ideas in, and turned them into real, practical, working possibilities. Of course, they were completely crushed at that point.
Starting point is 00:56:37 But the possibility didn't die. That concept of social production didn't go away. It's still there, and it's still to hand. It's to hand as much as OpenAI's, you know, nightmarish visions are to hand. And I think that's what we need to reach for. Yeah. I think the example of the Lucas Plan shows how we can imagine a different way of organizing things, how we can deploy our productive capabilities in a different way, but also how the system, you know, limits what we can do there. And I think one of the functions
Starting point is 00:57:04 of these tech billionaires, like the Sam Altmans, is to limit what we can imagine for the future and to get us to accept, you know, their visions for how things should work, how these systems should roll out, and what role we have in that, which is a very small one: to be more of an object than someone who thinks things up and has much agency in this. I just wanted to say, they're supported also by the, you know, apparently reasonable liberal establishment, who also position themselves as people who are acting on everyone else's behalf, just a lot more sensibly than these raving lunatics. I think that's also a problem. I think if there's anything the Lucas Plan did, what that committee said was, we can do it. You know, ordinary people can
Starting point is 00:57:43 do it. And I think that's the ultimate lesson of AI: no matter how complicated this stuff is, and how whizzy and sci-fi it seems to be, it's nothing compared to the capacity for insight and the capacity for organization in ordinary people. I completely agree. And, you know, I think it's the perfect time to have a conversation like this, as there's so much hype around AI, around these tools, you know, so much imagining of what this is going to mean for our future rather than questioning, should this be our future at all? Dan, thank you so much for taking the time to chat. I really appreciate it. Thank you. It's been a lot of fun. Dan McQuillan is a lecturer at Goldsmiths at the University of London and the author of Resisting
Starting point is 00:58:21 AI, an Anti-Fascist Approach to Artificial Intelligence. You can follow Dan on Twitter at Dan McQuillan. You can follow me at Paris Marx, and you can follow the show at Tech Won't Save Us. Tech Won't Save Us is produced by Eric Wickham and is part of the Harbinger Media Network. And if you want to support the work that goes into making it every week, you can go to patreon.com slash tech won't save us and become a supporter. Thanks for listening.
