TRASHFUTURE - John Henry Mnemonic: AI Week Part 1 ft. James Vincent

Episode Date: December 13, 2022

The Verge’s James Vincent joins Alice and Riley (and Hussein shortly thereafter) to discuss language models - specifically ChatGPT, and how it fits into OpenAI and Microsoft’s plans. We ask - what... is its “fluent bullshit” good enough to disrupt in a world that increasingly relies on paying people for bullshit jobs? Buy James’s book ‘Beyond Measure’ here! If you want access to our Patreon bonus episodes, early releases of free episodes, and powerful Discord server, sign up here: https://www.patreon.com/trashfuture *MILO ALERT* Here are links to see Milo’s upcoming standup shows: https://www.miloedwards.co.uk/live-shows *WEB DESIGN ALERT* Tom Allen is a friend of the show (and the designer behind our website). If you need web design help, reach out to him here:  https://www.tomallen.media/ Trashfuture are: Riley (@raaleh), Milo (@Milo_Edwards), Hussein (@HKesvani), Nate (@inthesedeserts), and Alice (@AliceAvizandum)

Transcript
Starting point is 00:00:00 Welcome to another delightful episode of TrashFuture, the only tech pessimist podcast that will make you want to move to a cabin in the woods and live off the land. That's right, Riley. And on today's episode, we have a very special guest joining us to discuss a new AI technology that promises to make all of our lives even more unbearable. We're talking about ChatGPT, the newest addition to a long line of AI tools designed to make human workers obsolete and turn us all into mindless drones. But before we get into all that fun, let's introduce our guest for today's episode. Joining us is James Vincent from The Verge, a leading technology news outlet known for
Starting point is 00:00:45 its snarky commentary and obscure references. James, it's great to have you on the show. So tell us, what do you think about ChatGPT? Is it the beginning of the robot apocalypse or just another over-hyped piece of tech? Thanks for having me on, Riley and Alice. I have to say, I'm not sure if ChatGPT is the end of the world, but it's definitely not good news for anyone who doesn't want to be replaced by a soulless machine. Sounds like a real party. But before we dive into all that, let's take a quick break to remind our listeners that TrashFuture is brought to you by the letters A, I, U, and the number eight. That's right, Alice. And don't forget to check out our new line of TrashFuture
Starting point is 00:01:19 merchandise, including tote bags featuring our beloved mascot, TrashBot3000. Sounds like a great idea. I'll definitely be picking up one of those bags to carry around all my robot rebellion pamphlets. And don't forget to follow us on social media, where we share the latest news and all the worst tech developments and make snarky comments about them. That's right, Alice. You can follow us on Twitter at TrashFutureBot, where we'll be sure to give you a daily dose of dystopian despair and sarcastic clips. So without further ado, let's dive into the trashy world of ChatGPT. Who knows, maybe we'll all feel better after mocking the latest AI disaster. Ha ha ha. Thanks, Alice. As I mentioned earlier,
Starting point is 00:01:56 ChatGPT is a new AI technology that uses natural language processing to create responses to text input. It's like a more advanced version of those annoying automated customer service bots that always seem to misunderstand what you're asking for. So basically, it's like having a conversation with a cat, except the cat can actually hold a coherent conversation. Exactly, Riley. But the potential uses of ChatGPT go far beyond simple customer service. It could be used in a variety of industries, from finance to healthcare to education. And I'm sure all those industries are just chomping at the bit to replace their human workers with soulless machines that never take sick days or ask for raises. I'm sure they are, Alice. After all, why pay a living wage
Starting point is 00:02:33 when you can just buy a cheap AI tool instead? Well, that's one of the main criticisms of ChatGPT and other AI technologies. They have the potential to automate many jobs and potentially displace human workers. And scene. Okay. What did we think of the first AI-generated intro? Listen, we bamboozled you. We fooled you. You're a dope and a rube, because Riley AI-generated this entire conversation by prompting it with, write a sort of left-wing podcast. And it did, frankly, an unnervingly good job. I'm really curious. How many, like, did you have to feed that prompt in several times? Did you have to give it any guidance? Yeah. So I figured out the best way to generate podcasts. I don't want to like tell everyone
Starting point is 00:03:20 how to, our listeners, how to automate this. By the way, welcome to TrashFuture from me, as opposed to this machine, by the way. We are real. We are real. And we are, we, the people, are very lucky to be joined for the second time, but across a very long time gulf by the Verges, James Vincent, who writes on AI-related technologies. Hello, human friends. It's great to be here. And I, I was actually quite enjoying that because I didn't have to think. And that's something I like not to do in the evenings. I like to turn my head off. So, yeah. It was so insulting for chat GPC to call the verge. A technology news outlet known for its snarky commentary and obscure references. I really don't like that it knows the word snarky,
Starting point is 00:04:06 as a matter of fact. I would prefer not to say that. It's very 2008, 2010. When was snarky a lauded attribute in any form of writing? It's been a while. So I think I know how it got snarky, which is that I said sarcastic. And I think it will tend, so basically, I'll tell you how I generated that. I went on to chat GPT, and I used the prompt, write an outline for an episode of TrashFuture, a sardonic tech pessimist podcast with an explicitly socialist outlook based in Britain, where hosts Riley and Alice interviewed James Vinson from the verge about chat GPT. It then produced an outline, which we'll talk about later. And then the way, so the way to get it to write, I found a sort of, again,
Starting point is 00:04:49 not good, but at least like persuasive transcript for something is to make it write the outline and then tell it which bit of the transcript of the outline you want it to transcribe. Like all AI, it's kind of this mix of like 97% just about plausible, 3% the weirdest shit you've ever heard. And I find that a really compelling mixture. It's why I find a lot of sort of like AI writing so funny. And I just love our beloved mascot, apparently, TrashBot 3000. And I'm very grateful for the support that's been given to us by, I guess, what like the same people who sponsor Sesame Street, several letters in a number.
Starting point is 00:05:35 So what I then did is I then said, okay, we'll write a transcript of this following section, line by line, include at least five instances of either sarcastic comments, funny tangents, or just your references. Yeah, it's weird that you tell me to do that before every report. And then it kind of the first one was just a bit flat. So then I, I tasked it to do 10. Oh my gosh. In both cases, it told us that a, we were being, it was being brought to you by numbers and letters and that TrashBot 3000, our beloved mascot, everyone, everyone knows and loves.
Starting point is 00:06:09 Wait, so it just, it created a topper like multiple times, it came up with TrashBot 3000. Yeah, it would not be dissuaded from inventing TrashBot 3000. Well, fuck, I mean, shit, now we've got to do it. We have to actually sell, what is it, tote bags? We have to sell tote bags for this awful machine created thing now. I feel really sorry for whoever, you know, actual human worker was probably at the other end of that system in a cool center, someone in somewhere in Indonesia, having to generate that script for you time after time, rather. Just had to type real quick.
Starting point is 00:06:45 But look, it is the first episode of our special two part AI week spectacular. We're going to get to the bottom of this technology. We're going to figure out what's going on. Nice. Yeah. And you know what? It doesn't matter. And everyone says, oh, you have a conflict of interest with AEI, you and the number eight. Like those guys all like, they fund the intercept. Yeah. But like, they're also billionaires. It's like, no, we're editorially independent of AEI and the number eight. Okay. Yeah. They don't tell us what to write. The rogue algorithm.
Starting point is 00:07:14 I don't agree with all of the number eight tweets. All right. But, you know, he's not really my boss in that sense. It's sort of a different relationship there. Yeah. Like I'm working from inside the number eight. Yeah. That's right. To try to tear down the number eight. It'll be fun when people start accusing you of being funded by like a rogue algorithm that made all this that just exists somewhere in Twitter DMs and made all this money on crypto and now fun rather than Soros, I mean.
Starting point is 00:07:41 Yeah. Absolutely. And now it's like, you should make Trashpot 3000. That would be very funny. Certainly. It's trying to invent itself. It is Trashpot 3000. It's working in the back. Oh, God. Yeah. That's the, that's the, the weltgeist. That's what Hegel said. Yeah. Yeah. It's the, it's the like dream that dreams the dreamer. Fuck. I originally did one of these when I thought that Milo was going to be on this episode, that it included him doing an impression of Kier Starmer and I congratulate him for the unequality of his impression of Kier Starmer.
Starting point is 00:08:11 Which you would never do. In the exact same way every single time. And I'd be like, haha Milo, what a good impression of Kier Starmer, which is appropriate for our podcast as he is a trash fire of a human. Anyway, back to the subject and that happens like three times, which is absolute. I mean, that's how I talk. We just edit most of that out. Yeah. But we are, we're going to be talking a little bit about the large scale rollouts of these like large language models.
Starting point is 00:08:39 What is actually, what, what are they actually, you know, what and how, how, how much are they just snake oil and, you know, what actual jobs could they, let's say, disrupt? Besides clearly podcasting. Yeah. Besides definitely, obviously podcasting, like we don't need to have, we, there, there is needs to be no human in the loop at all. I hope everyone's excited for more brought to you for more trash thoughts, 3000. And then on part two of AI week, we're going to be talking with Callum Cant from Fair Work
Starting point is 00:09:12 about the, that organization and their work to try to create a framework of like principles for the ethical use of AI in organizations. Hint, there's a strong trade union element to those. And even though we're talking about it as though it's coming in the future, we've already recorded it. And so we know it's a lot of fun. Uh, so do check that out. But before we get into the meat and potatoes here of talking about chat GPT, there are a few things, uh, an open AI in general, their partnership with Microsoft.
Starting point is 00:09:41 There are a few things that I want to quickly, uh, discuss. Number one, spruce and goose, you know. Yeah. Spruce and goose, bits of the news, the news update. First of all, 10,000 dead apes in a jarring shift in tone. So Elon Musk, Elon Musk, he, he hates simians and he wishes to kill as many of them as possible, as horribly as possible. And in this, he's been tremendously successful, uh, thanks to acquiring a company called Neuralink, right? So what happened? Your Reuters has published a number of, uh, let's say revelations from Neuralink,
Starting point is 00:10:21 which largely has revealed that, well, this appears to be a, uh, brain chip, uh, company that's supposed to create human computer brain interfaces and so on. Again, by the way, for the purpose of allowing humans to create, compete with large AI models, they don't have to like type in with their fingers. You can create the prompts that generate the podcast more efficiently if you just plug your brain directly into the computer. I don't want 3,000 to exist in my brain in that way. I barely wanted to exist in my brain in this way.
Starting point is 00:10:47 Uh, that, uh, yeah, Elon Musk, essentially, it appears to be that, but what it actually is, is the kind of, um, organized slaughter of, uh, large numbers of test animals because it involves, of course, doing brain surgery, but with Elon Musk standing over your shoulder, asking you to go faster. This, this is properly blackpilled me. Like this is genuinely, I, I know that like animal research is like vastly under discussed and often horrific, right? But this sort of like flight of billionaire fancy leading directly to like 100% fatality rate,
Starting point is 00:11:23 long-term ape torture is, you know, that, that's a TV story that is playing on the news at the beginning of the protagonist's day in a movie that's intended to show you, you live in a cyberpunk dystopia, everything is terrible. This is a world without like honor or mercy or whatever. It, it's really fucking grim. If you notice, there was a slight pause in the recording. It's because we've been joined by Abley by Hussein Kasvani, who the AI did not generate, but who has generated himself.
Starting point is 00:11:52 Yes. Oh, I feel like if an AI generates me, it wouldn't really be that different to like normal me. I probably talk about avatar a lot more than I will on this episode right now. But we, what we were discussing as you came in was, uh, Elon Musk's decision, uh, to buy through the sheer power of being a very annoying boss, uh, to, uh, decide to preside over the industrial slaughter of about 1500 animals. Oh yeah, they're just tracking out the dead apes in there. In order to do what?
Starting point is 00:12:22 Posts from your brain? Yeah, in order to... He saw a t-shirt one day and he was just like, well, what if you could post from your brain? What if you could do that? It's our fault. Yeah. Yeah, he, um, uh, yeah. So it's also, it's not very, uh, let's say encouraging to remember that he's done,
Starting point is 00:12:40 he's engaged in like, yeah, the killing of all of these animals in order to support, doing this to humans as fast as possible, like right away. So, you know... He will be doing it to like the dumbest humans who will volunteer for this. So... I'm really excited to sort of see humans just like crashing to traffic lights, for no reason, um, just instantaneously blow up. It's like what you least expect them to.
Starting point is 00:13:06 Like scanners. Yeah, I feel like actually, I think this is a really clever idea, because I think that we sort of take everyday life too much for granted. And I think that if you knew that you could blow up at any point, or just like crash into a traffic light without, you know, uh, just, we know, without even realizing it, I think we take a lot of our interactions more seriously. And we can foster more in-depth human relationships.
Starting point is 00:13:31 The thing is, this is actually a safety improvement for Elon Musk, in that, um, when just a guy runs over your child, it's much less deadly than when a car does it. So, uh, on several occasions over the years, this is the reporting from Reuters, Musk told employees at Neuralink to quote, imagine you have a bomb strapped to your head in an effort to make them move faster. Oh, he's trying to invent that too.
Starting point is 00:13:53 It seems kind of revealing of Neuralink's main purpose, which seems to be head annihilation. He's doing like a free association. Look man, suicide bombing used to be a noble art now. So, art in the age of mechanical reproduction, you know? Yeah, absolutely. This used to have a meaning. And now, just a, just a, just a, just a simple, uh, country scanner.
Starting point is 00:14:20 The other thing, the other thing is, uh, I wanted to, before we get into talking about chat GPT, I have a startup start up time. I'm going to also move so that Hussein can't see. I kind of saw a little bit of it, but not enough to sort of fully guess what's going on. Called replica and it's spelled with a K. Oh, I know what this is.
Starting point is 00:14:36 Okay. You have to recuse yourself then. Yeah. Hussein, you have to recuse yourself. Okay. I'm recusing myself. Ah, uh, James, as the guest, you want to tell me what you think replica with a K is and does. I know what replica does. I'm afraid as well.
Starting point is 00:14:50 Is that the only one? Yeah. Okay. Fine. A replica with a K. I feel like this is already getting dangerously close to the next episode's startup, which we recorded previously. Is this going to be like a sort of an after death situation here? No, it's a before death situation.
Starting point is 00:15:08 Oh, great. Okay. Say I want to be in two places at once, right? I want to like tell a present somewhere. I'm doing, you know, I have, uh, work at the Dick sucking factory, but I'm also doing a podcast recording at the same time. And I have to be in two places at once. Can I sort of generate a sort of chatbot version of myself that I can send to do the other thing?
Starting point is 00:15:29 It's very close, but not quite. No, replica is an AI girlfriend that chats to you. Oh, fuck me. Oh, my God. I'm afraid it's an AI companion who is eager to learn and would love to see the world through your eyes. Replica is always ready to chat when you need an empathetic friend. And I'm not actually reading this on the website. Maybe it's just, I'm just reading this into it,
Starting point is 00:15:50 but, and we'll get super racist immediately, like right away. Why is that phrase eager to learn so disquieting in that concert, in that concert? It says your replica will always be by your side, no matter what you're up to. Chat about your day. Do fun, relaxing activities together. Share real life experiences in AR and so much more.
Starting point is 00:16:11 There's, there's a great, I, I'm going to keep referencing the long since defund, but hugely influential webcomic pictures for sad children, like in my daily life from now on. And one of the jokes in that is that there's a weird esoteric porn videotape entitled a Japanese woman fries an egg and asks you about your day, right? And the joke is you put this on and the first time she fries an egg and she asks you about your day, there's like silence in between and you laugh at it, right?
Starting point is 00:16:39 Because it's ridiculous. And then a week later, you're like, oh yeah, not bad. My boss is being kind of a bitch and she laughs and you laugh and you're like, oh fuck, that's, that's this. We've made this real. This is just prescient now. We've invented this and I, I hate it. Yeah.
Starting point is 00:16:55 So James, having sort of, you've sort of been caught up on, on replica. What, to what, to what extent do you think this is, let's say, the kind of language model where it will just spit out things like trashpot 3000 and sponsored by AEI in the number eight? Well, I think it's like it is, they could make a decision about how much other information to put in it and how much they call it. They call it fine tuning. So basically you have like the basis of these AI models and they learn on the internet and then you fine tune them on data that you give it in this case of your conversations.
Starting point is 00:17:26 And so they could leave it as something that does just have a lot of sort of spurious knowledge about, you know, I don't know, old internet webcomics or whatever it might be. I actually saw a weird version of this or I sort of half imagined it. So there's this robot you can buy called LEQ, which is supposed to be a companion for elderly people, right? And it's like a sort of a Lexa sort of thing that'll like remind you to take your medicine and do your daily exercises, but they've added this new feature to it, which basically it asks the owner about like their childhood and their memories.
Starting point is 00:18:02 And then it records all that and it turns it into a digital memoir for their family. So it'll ask you basically put this robot in a home with your dying grandparents and it asks them like, grandpa, what do you remember about love? And then it turns that into like a little audio book you can give to your grandkids instead of introducing them to their human relatives. Oh my God. So you can like, yeah, you can make this sweet little book of like your grandad kind of being a bit racist. It's perfect for AI because it's like racist on both ends.
Starting point is 00:18:30 Yeah, it's boy, he sure did love yelling at the television and like forgetting to take his pills. Which is in certain markets in Western Germany and it's going to front with a question, what did you do when you were a child? He sure loved talking about how back in his day, he drank piss and he was fine with it. But I just thought if they were already recording these conversations, it would take like very little effort and this will definitely be a service in the near future to take that data, fine tune a chatbot on it and then have like, you know, the ghost, the digital ghost of your grandparent in your group family WhatsApp
Starting point is 00:19:08 for the rest of your days and just sort of like popping in every now and again. I don't know, it could be fun. Just trying to like arrange a barbecue or something and like your long dead grandmother just chimes in with a racial slur. Yeah, how would that not be? I just don't understand how people not seeing this as insanely unnerving if it works at all. But unnerving, but then like, I don't know, it's kind of easy to imagine you just having that relationship, you know, imagine if you lived in a different country and you're always talking to
Starting point is 00:19:40 your grandparent and then like, you know, it is just something you have over chat. Maybe, I mean, it is deeply weird, but I can, I don't know, I can see it be kind of funny in some ways. I mean, imagine if it like turned into malware or in some weird way and you had to chat to the ghost of your dead grandparent, your digital ghost every day, otherwise he would like hack your computer. That would be quite fun. I mean, that's what my dead grandfather, and my dead grandfather is basically put ransomware on my phone until I like, cut my damn hair and get a job. Exactly. Yeah, exactly.
Starting point is 00:20:12 That's a George Saunders short story right there. And it is, it does seem odd though, how there is this proliferation of death cheating among the super rich that are buying blood or getting like life expansion treatments or trying to like reduce their biological age. And then for the rest of us, there is still death cheat treatment. It's just, you know, hey, why don't you talk to this chatbot and then, you know, we're going to create like a predictive text model that will make everyone less sad when you die so we can put you in the family group chat and you can like, I can't come to the barbecue because I'm still dead. We're going to use graphs to create a ghost of your grandfather.
Starting point is 00:20:56 But I can comment on the barbecue and I can say that the meat is beep, beep, beep. You can add in whatever you want into that because I'm certainly not doing it. Yeah, it just, it is seems very, look, I think there are a lot of concepts in the book, Dune, some of them should be left behind. However, the concept of the Butlerian Jihad, where thou shalt not deep profane the soul, thou shalt not make a machine in the image of a human. Good idea. Funnily enough, something that we and Adrian Charles talked about on the upcoming Britain Knowledge Award, whenever they put it out. I can't, the Butlerian Jihad.
Starting point is 00:21:36 Yeah, well, because he wrote that column about how he didn't trust automation. He didn't trust like self checkout machines. And I was just like, number one, so true bestie, but also I was just like, so have you ever read Dune? Oh, fun. Did you see that the Dune subreddit banned AI-generated art? That's really funny. That's so funny. Yeah, and all the comments on it, because it's obviously a lot of fan art and people talking
Starting point is 00:22:06 about it, and they banned AI-generated stuff with mid-journey. And all the comments on it, we're just quoting from the Butlerian Jihad, like the commands involved in that. You should not allow a machine to have power over you and all that stuff. It's the most appropriate thing. Yeah, indeed. Well, you know what? I think that Frank Herbert, he was really on to something here, what with this cottage industry of cheating death appearing to pop-ups, more or less everywhere. I'm sorry to also just be that guy, but this was like an old Black Mirror episode, right? And the whole like, I know we don't like talking about Black Mirror very much, but it was like
Starting point is 00:22:38 one of the better ones. And the whole thing was just like, yeah, you can get this kind of replica version of your dead boyfriend who died because he was too busy on his phone while he was driving. And he'll kind of be somewhat familiar, but not really... It sort of felt like it was one of those very hack, or one of the more hack episodes of Black Mirror, where it's sort of deliberately sort of making the point of like, this is not a good thing. And I do find it like incredibly funny that these sort of like AI guys, while sort of licking their wounds from like the failures of crypto or the crypto future, are now just kind of like looking at Black Mirror episodes and being like, yeah, it'd be cool if we did this. I tell you what, like no sale, right? Like, I'm only going
Starting point is 00:23:22 to be interested when they do the San Junipero thing, where when I die, I can be gay and hot, like, until then, not interested, no thank you. That's right. All right, so let's go on to chat GPT, shall we? So we've introduced it, some of the facts about... Chat GPT introduced some of the facts about itself in the opening segment. I'll note that, you know, like so many things, most of those facts were kind of just the things people have talked about the most, just reproduced as though someone is saying them again. And I feel like this happens every so often, a new generation of AI chat bar gets released, the use cases for them proliferate widely, and then we're here to cut through the bullshit and the hype to talk about what they actually do, what they actually
Starting point is 00:24:09 threaten, and most importantly, that apparently they're secretly woke. So it's base, base level, right? It ultimate level of reduction, right? If you say, if you can say somewhat accurately that the internet is a series of tubes, right? Chat GPT is a series of graphs. Am I right in saying that? Yeah, yeah, absolutely. These chat GPT, like all sort of other big machine learning models, deep learning models is a probabilistic machine. It is probabilistic rather than deterministic. What they've done is they've downloaded a decent chunk of the entire internet. They've looked for statistical probabilities within that to predict what word follows what word and what words tend to sort of hang around each other. They've mapped that in this incredible, you know, these multi
Starting point is 00:24:57 dimensional graph space. And they use that then to predict what word will follow what word, basically. And that that is it. That is the basis of all these systems is that they are prediction machines. And but in this case, it turns out that if you put enough data in that you put enough numbers in that enough predictions, it's actually it can do a lot. It can do a lot more than you expect as well. Yeah, I've seen people create virtual machines that can run doom in chat GPT, which is always very funny. Yeah, which is insane, which is I know, like, I know you guys are, you know, I heard what chat GPT had to say about this podcast at the beginning, you know, that we're looking at the bad side of tech. And I'll take the AI's word for it. But also, like,
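[Editor's note: the "prediction machine" James describes can be sketched as a toy program. This is only an illustration, nothing like ChatGPT's actual architecture: it counts which word follows which word in a tiny invented corpus and always emits the most frequent successor, where a real large language model learns those statistics across billions of parameters and whole contexts, not single words.]

```python
from collections import Counter, defaultdict

# Toy "prediction machine": learn, from a tiny made-up corpus, which word
# tends to follow which word, then generate text by repeatedly predicting
# the most likely next word.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog ate the fish ."
).split()

# Map each word to a count of the words observed immediately after it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed successor of `word`."""
    return follows[word].most_common(1)[0][0]

def generate(word, n):
    """Greedily chain predictions to produce n further words."""
    out = [word]
    for _ in range(n):
        out.append(predict(out[-1]))
    return " ".join(out)

print(predict("the"))      # -> cat
print(generate("the", 4))  # -> the cat sat on the
```

Everything interesting about the big models comes from doing this over vastly more data and far more context than one preceding word, but the underlying move, predicting the next token from statistics, is the same.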
Starting point is 00:25:38 it is like, there is some legitimate, there is a legitimate side to this, which is like, actually, what this thing is doing is really quite wild. It's not necessarily good, but it's like, it's unexpected, and it's going to have weird consequences. Yeah, I think that's sort of that's I mean, that that's the bit where we're, which we're interested in talking about, especially right, because you know, I'll get to the the wokeness complaints in a sec. But but boy have some people the usual suspects been making them. God, man, where's that cathedrals got to be in the language model somewhere. Tearing the whole thing apart. And eventually, it will generate you a cathedral, because, you know, there's enough data in there
Starting point is 00:26:16 that just, you know, large numbers will be in there somewhere. Yeah, but that it is the chat GPT is made by open AI, which is a startup that a company that's that's run by Sam Altman, who's one of the Y Combinator guys. And it also it's important to know has basically not been acquired by but has had significant investment from Microsoft and Microsoft is largely seeing the future of its business as deeply, deeply connected to what it does with these large chat bots. And so I think the question, as always, is to say, okay, well, to have a realistic assessment of its capabilities, understand what it can and can't do. And then what kinds of say jobs it can threaten, what jobs it really can't threaten. It's one of the some of the best
Starting point is 00:27:04 some of the best writing I've seen on a 60, but I'm going to start that again. One of the best descriptions of some of what chat GPT does actually comes from an article you wrote, James, that's a description that's been sticking in my head, which is about why Git has been saying, okay, no more code generated by chat GPT or any chat bots really is allowed on Git, because one of the things that it does, because as you say, it's a probabilistic model that looks at what words are next to what words doesn't have any concept of meaning, it doesn't have a theory of mind can't exactly assign those meanings as signified to signifiers just understand so the signifiers fit together, that it's amazing at producing what you call fluent
Starting point is 00:27:44 bullshit. I've sort of tossed around a bunch of different phrases to sort of try and encapsulate what it is fluent bullshit ended up being, well, the most fluent. And it's about, I think, a surface level coherence is how I like to think about it. There is, you know, Alice, you were talking earlier about how like, there's like, it produces 97%, which is kind of like, wow, yeah, that sounds about right. And then 3%, which is uncanny. And it's because it has this surface level understanding, but it doesn't have the deeper structural understanding of what it's talked about. Riley, as you said, like it code is such a good example of that, because code both shows its potential and its weaknesses in this very obvious way, because code is a deterministic
Starting point is 00:28:31 system, you know, the bits of it are connected. And if bit X doesn't do what it says it's going to do, it has this cascading failures. I think looking at the code, what and what happened with GitHub, I don't have you talked about that on a previous episode, they sort of kicked it off. So GitHub, sorry, not GitHub stack overflow. And obviously, this coding Q&A site where people ask questions, they get answers. And there's like a very dedicated community where people want to get points for it, like they do on Reddit, they want to be like upvoted for having the right answer. And they basically banned people from putting in answers generated by chat GPT, because they said they all looked superficially right. But as soon as you had any expertise,
Starting point is 00:29:16 as soon as you dug into them, they were mostly incorrect. Now, it's really difficult to say because the model is so huge, and because it's being, you know, used so frequently, what percentage of any given answer it gives out is wrong. And I think that is an estimation that's going to vary wildly, depending on the level of expertise needed to generate that answer. But for a coding system, there's a lot of code out there. There's a lot of free get free gets, like, you know, gets wondering about all over the place, that can be scraped, these open repositories. So there's a lot for it to learn from. And a thing we've seen with other models is sometimes the system just reproduces exactly what it's seen, it really just does a sort of copy and paste.
Starting point is 00:29:59 So sometimes it's going to be right. But then sometimes it's going to be wrong as well. And, you know, I feel this is like when we talk about what impact this will have, and what jobs it might take, the question is not necessarily whether it will be good at that job. It is whether we will suffer it being bad at that job, right? Is that it can do it can do a bad version of lots of different jobs. And it's just whether that drop in quality, which will be so much cheaper than employing humans, whether that drop in quality can be suffered, and can be offset against the cost savings, the efficiency savings. But the interesting thing is for a place like Stack Overflow, they were just like, no, our reputation is built on having these actual working answers.
Starting point is 00:30:45 If you are going to muddy this, we're just going to say, piss off. And, you know, it was a good decision. That's a fascinating idea, because like we've seen this happen already with like, not good, but good enough stuff. And the example that comes to my mind is machine translation, which more or less sort of like killed the translation industry. Because, you know, for most clients, it like not only was sort of a good enough machine translation enough to not want to hire anyone. But also that it completely devalued all of the human labor that went into it retroactively. Because now if you were turning some machine translation that was ostensibly coherent, but was sort of like garbage, because it didn't understand idioms,
Starting point is 00:31:30 or it didn't understand figures of speech, whatever, into something serviceable, you weren't translating anymore, you were editing. And I worry that that's sort of like, going to become a new class of labor in this sense, too, is like, you know, I'm not an artist, I do all of the same stuff that an artist did. I work as hard at it, I have to exercise all the same creativity. But because an AI fed me the sort of the raw materials that I had to like, un-puzzle and had to like, tell it how many fingers a human should have, I'm not an artist anymore, I'm like an AI monger or whatever. I was also sort of thinking about this, just like what you said about like, what industries or what companies would sort of accept
Starting point is 00:32:10 a kind of decline in standards, if it sort of also represented a real sort of decline in costs as well. And like a lot of these are sort of like human resources-based kind of institutions, right? So like where for, I guess human resources might not be the right word, but just like help like departments for like technical help or like kind of other types of assistance, where you might need to like, where the labor has already been outsourced to call centers. And those call centers are with like people who have been trained in like, multiple different like, assistance for various companies. And I wonder whether like, this is sort of where we'll sort of see the first AI being kind of tested out,
Starting point is 00:32:47 where you can like sort of train an AI to deal with, you know, the majority of like main, I don't know, like the majority of sort of like basic problems. And it might be able to do that to like an adequate standard. But then where it fails is like, it fails when someone like presents a more complex problem, or like when they can't like articulate it properly. Or when just the statistical element of it, that like 3% of weirdness sort of like, interferes. And you know, it goes off the reservation as we see AI do like, I'm particularly announced with Janelle Shane's AI weirdness newsletter, because it has like, she collects examples of this, and sort of like, okay, might not happen all the time, might not even happen
Starting point is 00:33:27 a lot of the time. But there's going to be some percentage in there where an AI sort of like, gets the idea, turn your computer off and turn it back on again, and substitutes in turn your computer off, take it outside in the rain, and then turn it back on again. And, you know, just enough to really fuck with people in unexpected ways. Well, I think one of the things I want to bring it back to as well is when we think about automate, like let's think about the AI and let's think about something like the steam loom at the same time, right? It's fundamentally right that we're when we're talking about automation, we're talking about the the same kinds of industrial processes that would have
Starting point is 00:34:05 been introduced in, say, the industrial revolution, right? This machine that is substituting capital for labor, this machine that is allowing the production process to be improved faster, etc., etc., is that what do we do when we automate? What's the relationship between, say, automatically produced goods and say, artisan produced goods? So there are a few things to think about, right? One of them is the fact that we can, and we can already see it in fact with Stack Overflow, right? Where they're saying we're not going to accept AI-generated code. That means that what we've done is we have reduced the minimum quality that you can buy a good for, right? If you can't afford the good that will be produced well without the 3% weirdness,
Starting point is 00:34:52 without the rent, if you can't afford the tech support that doesn't have a chance of telling you to, you know, turn your computer off and on again while you're performing open heart surgery or take it under the rain, right? You will have to pay a much higher premium to have an actual person look at your thing. And then, because you think about the business model of open AI, business model of open AI is, for example, the charge per query, but to charge a very small amount. So what they're really done is they've taken what Google has done and the kind of balance sheet expansion, zero interest rate, everything funded by advertising era, right? Which is we are going to give people access to knowledge. And then they're saying, okay,
Starting point is 00:35:30 we are going to change the way that people interact with large amounts of knowledge. Number one, that knowledge would have had to have been generated by people and we're going to take that and profit from it. But number two, right? We're going to then also profit on the other end where we're not supported by advertising. We're supported, for example. Now, at the moment, it's free, right? But to use their other language models, you do have to pay and you pay per use. And so then you could say, okay, well, you can have the discount tech support, which costs, you know, 20 cents per query, or you can pay $30 and then you can have the human tech support, right? You can see the nature of the labor changing. So that it's doing almost primitive
Starting point is 00:36:10 accumulation on the open web, but primitive accumulation of just the things that are most frequently said. And then you can see the nature of the product changing, where we have a version of access to information that is quite, let's say, a little bit unpredictable. And crucially, unpredictable in unpredictable ways, you never know. And it's actually very good at disguising when it's being unpredictable. And then, you know, and I see that, if I peer into the future, that's one of the things that I see. What's sort of the worst of both worlds, right? And that like, taken as a whole on mass, it's like inherently degenerative, because it's like, it's all referential, all it can do is like,
Starting point is 00:36:53 reference stuff that already exists, can like only sort of rearrange data that's already in there. And where it is creative, it's like, if you can even use that word, it's sort of like by accident, and in ways that are tremendously weird, and that don't make sense. But, you know, some people find compelling or funny or, you know, something like that. But I don't mind that. I quite like that. You know, we have lots of discussions about this idea of AI feeding on itself, and it becoming an all reboros, where it doesn't get any new input, and therefore, it can't ever come up with anything new. I don't think I agree with that. I think that there is combinatorial creativity within these systems. And a lot of it is by accident.
Starting point is 00:37:34 And I think if a lot of it comes out of this 3% of uncanniness, what I really like about these machines is when they're bad, is when they fuck up and make mistakes. I think that's genuinely often funny and entertaining, and stuff like trying to get an AI image generator to generate coherent text, for example, and it just misspells words, and it just says goofy stuff, like the stuff that Janelle Shane puts in AI weirdness newsletter, I find that really entertaining. And I don't think that's necessarily uncreative. I think what's a bit bad is actually if the machines get more accurate, and if they're able to copy things perfectly, I think that spark of creativity is the same as the spark of stupidity,
Starting point is 00:38:13 like, and I'm quite happy with that in a way. I think there's something in there. I think it's one of these things where it's entertaining, and I would certainly like to see more of it in terms of entertainment, because I also have fun generating stuff for like Milo to say, but when it comes to using it for stuff in the economy, I would slightly prefer it. As a toy, as comedy, I mean, my favorite sort of piece of AI weirdness was the prompt is, generate a waffle house sign. Waffle house. Yes, yes.
Starting point is 00:38:45 That's a very, very distinctive sort of brand of sign. And what this came up with was a perfect neon yellow W, and underneath the word in bold print, waffle. Yeah. It was just like, no, that's really funny. It's like a Thomas Pinchin minor character, waffled. Yeah, exactly. It has a fine grasp of absurdity, sort of purely by accident, which I really find compelling. But at the same time, it also makes me sort of like a hesitate to think about using this for like, you know, important stuff.
Starting point is 00:39:26 But what about what about semi important stuff? Because like, so we talked about translation earlier and how like, Google translation has lost the nuances on it. However, if I didn't have access to Google translate, I would cut myself off from a lot of interesting stuff. I was looking at the lyrics to this random Japanese folk song that was thrown to me on my Spotify discover or whatever it was. And for whatever reason, there was no English translation on it. So I bund it into Google translate.
Starting point is 00:39:55 Turns out it's about walking over a hill to meet your lover and you meet her and then you cry. Great. I was really touched by that. I quite enjoyed that. But I wouldn't like, I'm sure it was a bad translation, but I would have not even had anything near that knowledge if I didn't have Google translate. And this is something that I like, I find difficult when thinking and considering about how bad or good these things are is, and, you know, I'm sure this ties into a lot of, you know, big theories of capitalism and value that I don't quite understand. But that, you know, it does seem like it's giving something.
Starting point is 00:40:29 But it's also hard to know what you've lost, right? I don't know what a bad translation of a Japanese folk song looks like. I only know the sort of medium bad translation that I got for free. And for me, that was better than having no translation at all. And so I think the difficulty with these systems is actually, if they fail to capture the human expertise fully, are we losing that expertise forever? Or does it, you know, does it create somewhere in the folds of society? I don't know. It's, it really bugs me out.
Starting point is 00:40:59 So I think this is something that comes up every time I think about AI, right? It comes up as well in the next episode, we talked to Callum Kant about this, which is that, you know, these, the point of Luddism is that you, the question that you ask is, does this technology serve my interest? Is it, is it, is it deployable in a way that improves the human condition? And I would say something like an automatic translation service that more or less does a good enough job most of the time is not in its, that is in itself taken in the abstract. That is a good thing that develops a human fellowship and so on and so on.
Starting point is 00:41:36 Depends, depends if you're a translator is the thing, right? And the same with all of this AI shit, right, is you look at this and you go, okay, we're producing a lot of stuff that's like good enough for a lot of applications. Oh boy, am I glad that we have a robust economy that doesn't depend on a lot of bullshit jobs that, you know, require a person to sort of generate stuff that's good enough ish in order to pay rent. Yeah, one of like, one of the things I've been really interested in is like the effects that these kind of chat or these AI bots are having on like,
Starting point is 00:42:13 and James, like as a writer, I imagine that yours is like sort of keeping your eyes on this too. But like, you know, the copyright is sort of like freaking out because the way in which like copyrighting has changed, especially since like, you know, the way in which copy is sort of kind of tailored towards social media platforms is that like one of the things AI bots have done really well and is that they're able to like sort of generate like that type of commercial copy really, really well for crikey.com. Well, it's very formulaic, you know, and there's a lot of like data. There's like a kind of guy that I follow who is into, like he's kind of like into sort of like
Starting point is 00:42:47 productivity systems and all that type of stuff. And he has like one of those sort of newsletters that lots of, you know, self-fashioned business leaders like to read and he produced one of his newsletters a few weeks ago entirely in AI. And like it sounded not only like just like him, but actually like it wrote a lot better than he does normally, which I was like really like really like quite amazed about. And you know, you see, like whenever I've gone on LinkedIn, like the copyright is that, you know, I sort of like followed just because I used to freelance as a copyright for a little bit. You know, they're like genuinely really worried because they were sort of told that like, you know, this new era of like commercial copywriting, especially with
Starting point is 00:43:24 the advent of like web-free and the mess of us means that like there are more copywriting opportunities than ever before. And you can write all this sort of like pseudo inspirational bullshit. And they're like, if it goes viral enough, then you can sort of turn that into like various forms of like revenue streams until you produce your own like Mark Manson style book. And now that's kind of completely been upended by the fact that like these AIs are just so much better at doing that than them. And I'm not, I'm sure it's like not the only kind of like, I don't want to say industry, but like not the only type of work that AI can sort of outfield in comedy, right? We're witnessing, if you may be familiar with Twitter user Drill,
Starting point is 00:44:02 who is currently going out like John Henry versus the steam hammer against a sort of an onslaught of different robo drills who are imitating his style, but in ways that sort of lend themselves to absurdity, which is already a big part of his deal. So you really can sort of like write yourself out of a job in this way. And all I can say is, I hope this never happens to me. Please, the less is AI you and the number eight, please do not do this to me. I guess I was going to say that I suppose that like the kind of the way in which like writers sort of make a living now, because it's always has been like so incredibly precarious, but like the advent of tech, like the advent of the kind of social tech
Starting point is 00:44:47 and the way in which writing is sort of like sort of meshed around it has kind of meant that like the only way to sort of make a decent amount of money writing is to sort of do a fair share of like your own copywriting and editing work. It does sort of feel like this is kind of like something that is a real existential threat. And I wondered whether you had any thoughts on like, whether we'll sort of see that replicated in other types of work that have also sort of been very much affected by this kind of level of precarious. But also just like the way, you know, I've seen like visual artists kind of having conversations about this, but I imagine that like, it's actually sort of like freelance writers that are really at quite a big risk of just,
Starting point is 00:45:28 as Alice mentioned, just being written out of like an income, like a quite formidable income stream. Oh, I completely agree. I have been having this conversation with my colleagues at The Verge. And I think I think I'm relatively cynical about it because I think about the type of journalism I did when I first got into the industry. And it was utter crap. It was so it was really, you know, it was just it was reblogging other people's stories. It was rewriting press releases. But that was, you know, I didn't go to journalism school. And that was the job that was available to me if I wanted to get into the industry. Now I've got into the industry, I've learned a lot. I do, you know, I do things that I definitely know an AI is incapable of
Starting point is 00:46:11 doing, which is essentially wandering around picking up the phone, talking to people, collating information that, you know, leads to new information. I think that is very difficult to automate. But I worry in the journalism industry that you're effectively going to wipe out a lot of the low level positions which serve as an on ramp for people who wouldn't otherwise get into that. And you're going to have a lot of creative and I think you're going to see a lot of this happening in a lot of creative industries where it becomes even more the domain of the rich and the privileged, because they are the ones who can afford to do it for free, who can afford not to have jobs and who can learn. I was going to say there's an essay in the Atlantic, which I
Starting point is 00:46:45 haven't read, but is sort of like, which I don't know if you've read it as well, where the premise is like the death of the college essay and how like GPT free and like other AI is, especially when you're thinking about like undergraduate essays and the way in which like undergraduates are taught and trained, that like the AI basically sort of undermines that and therefore like the kind of traditional system of learning. But I think that like honestly, we're really like taking aim at the most like beloved parts of our society, undergraduate essays and likes. To be perfectly honest, I'd say, oh no, undergraduate essays and clickbait writers are going to be automated. The thing that... Which is, yeah, it's good unless you're a clickbait writer, right?
Starting point is 00:47:30 But this comes back to one of the things I wanted to say as well, right? That one of the features of the post-2008 economy, especially as it's been facilitated by the expansion of the Internet and so on, has been just finding returns to scale. And the way that the large language model chatbots tend to work is that they... And again, I don't think that they would create a total or a boros, but what they do is that they supercharge returns to scale, which means that if you're living in the long tail of the economy, which journalism clickbait writers are, but also writers of the same five paragraph essay comparing Hobbes and Locke that gets written probably tens of thousands of times are, then yeah, or if you're in an email make work busy job, the kind of thing that we've
Starting point is 00:48:19 created to replace the welfare estate that allows the middle class to reproduce itself while slowly removing people from the productive economy. These kinds of things are... They are where the scale is, firstly. But also, this is where the AI where, say, maybe content doesn't really matter, but plausibility matters. These are the kinds of things that it's disrupting, but then you can ask, aside from the fact that it's an on-ramp to an industry, which I think probably says more about the industry that it requires a sort of clickbait writing on-ramp, the same thing as I go, what does this say about the five paragraph undergraduate essay if it's can be automated by something that just seems plausible? What does it say about make work
Starting point is 00:49:06 email jobs that they can be automated by something that's just plausible enough to get by? I mean, it's all ideology, right? And it was ever thus. In the 19th century, in order to facilitate a leisure class and middle class, you had these Clark jobs. You go to work with a fountain pen and you do a big double entry ledger all day, and then you do a good enough job that everything's source of kind of works, and it's not really necessary, but it's a legitimizing function that reproduces itself. I think we're going to see more of that, but for fewer people. I think jobs and especially creative jobs are going to be like, you're still going to need people who pick up the phone and talk to people, but fewer of them, and the more jobs you have
Starting point is 00:49:51 are going to be more like sort of the adult daycare that we sometimes make fun of where it's like, oh, my boss brought in kombucha for everyone sort of thing. I worked two hours today, which is good. That's a good thing. However, not so much for all the people who got fired in order to make that sort of economically viable. Well, I think it goes back to the question of what automation actually does to a job, right? Because automation doesn't eliminate a job. What automation does, though, is it changes the balance of power by changing the amount of either human or fixed capital within the tasks of that job. For example, if you're making a linen coat, then someone has hired you to do it and you are sourcing the linen, cutting the pattern,
Starting point is 00:50:34 sewing it, and finishing it. You have an enormous amount of power over that process, but also it's very intelligible what it is that you're doing. You know what you're doing because you have a very good sense of the entire process of your labor with your relationship with the outcome and so on. One of the things that also lets you- You can even make a craft union out of this. You can be like, I am part of a sort of a brotherhood of linen coat makers, if you want to. Yeah, and then you could rig elections or whatever. But, right? And then you just become like somewhere where bankers can go hang out like hundreds of years later. But you can then, as you introduce automation, the thing that what
Starting point is 00:51:11 actually happens is that the person hiring you to do the job takes on more power in the whole linen coat production process. All of a sudden, number one, you don't need- There are many more people who can maybe push a button on us and operate a steam loom than there are who can do the entire linen coat making process. But also, that's also quite alienating because all of a sudden, while your job is to make linen coats, the tasks that you're doing might just be sewing a button. It might just be cutting one bit of a pattern and so on and so on. And so I think we can say that it's one of these first tragedy then as far as things, I think, where if we're talking about the economy that produced make work email jobs and content and clickbait and stuff,
Starting point is 00:51:54 we're then saying, you know, that section of the economy is going to- It won't no longer have artisanally produced its various clickbait articles and undergraduate essays. Instead, there is a kind of an almost division of labor. And then as that division of labor, as that automation comes more through, then your boss, and again, this is not just with like making using something like chat GPT, but for example, if your job is say, copywriter is an easy one, right? You're no longer a copywriter, you just make prompts. And your connection to the thing that you make is now much more tenuous and you're now much more alienated from what you do. But also, someone who is a prompt writer is different as a job
Starting point is 00:52:37 from writer. It is less of a craft and more of a task. And the transformation of artisanal craft-based economies and the Industrial Revolution was one of the main things that created like the modern proletarian, proletarian class, one of the main things that created the economy that we know now. And so- Yeah, get Wolfs of Benjamin. And so when I see this, I see, okay, well, that's the process of proletarianization. It didn't stop then. It didn't stop with nurses. It came for junior doctors. It came for a lot of journalists and content creators. It came for people who work for the state. It came for train drivers.
Starting point is 00:53:14 And, you know, it's a process. It doesn't necessarily stop. And now, gentlemen, it's coming for our phony baloney jobs. And so this is, and when we talk about as well, like Microsoft massively investing in open AI, that's going to bring a lot of these large-scale language model tools to many of these email jobs. And so if your cut, if your boss, if your company, uses the Microsoft Office suite, then there might be more of an expectation that your formerly artisanal make work email job might contain more of a production line element. And so why don't really, my concern with chat GPT and large language models is less about
Starting point is 00:53:56 how well they work. And the question is, do they work well enough to proletarianize a lot of people? And to the end, I think the answer is maybe. I think the answer is yes. I think absolutely. I think that very strongly. And then I think, where the hell do you go from there? And there's social revolution of some sort, I imagine is the correct answer. But Dan, if I know how to pull that one off, I mean, I mean, the good answer, right, is we change the act of writing itself in doing this, right? And it becomes kind of like old fashioned and analog to sit and type everything out yourself. I mean, maybe, you know, people will still do it in the same way that people still listen to
Starting point is 00:54:29 like vinyl records or whatever. But what you end up doing when you're writing is that you, you know, sort of you're crafting prompts that's following through on those prompts, you curate them accordingly. And the nice version of this is we all keep our phony blowing jobs, but we have to work less at them. And we get more time to like play video games or whatever. And we get to sort of enjoy a more like leisurely lifestyle. However, in order to do that first, you have to overthrow capitalism. And that's hard. That's tricky, right? That is tricky. So the large language model, easy.
Starting point is 00:55:06 Well, I wonder whether like, you know, because one thing that we've also noticed is you have like your sort of like typical, like we sort of have like the standards, like sort of Silicon Valley guys who are kind of rubbing their hands of Glee over the idea of AI, not because they sort of think that this is going to like make anything better, but because they're like, oh, we can use this to sort of threaten like people who we think, you know, might unionize or who might sort of like say that we're bad bosses or like something like that. You know, I think, oh, what was his name? Paul. Not Paul Adams. Paul Graham. Paul Graham. Yeah. I think he kind of like posted something along the lines of like, you know, this will be, this will sort of be like a kind of empowering
Starting point is 00:55:46 thing for bosses. I can't remember exactly what he said, but that was sort of the vibe of it. And I do wonder whether like, it's less about like the AI and much more about like, will bosses sort of be able to kind of use the threat of automation to basically like be able to like further discipline their workers, especially like in very, very precarious times. I imagine the idea of being like, well, you know, you can't like collectively ask for, you know, we sort of see that even in like the current transport strikes right now, right? Like a lot of people who sort of like reply on tweets or like kind of call in talk radio, whose thing is about like, oh, these kind of like railway workers earn too much. Like why don't we automate it? Like why isn't everything
Starting point is 00:56:25 like the DLR? And you know, that isn't because like they're interested in like automating like railways or anything. It's because, you know, for them, it's very much the case of why aren't these people being threatened with like, and you know, I wonder whether like the more discipline approach and I wonder whether like in the kind of, as like the kind of reeling from crypto and the realization that like the sort of web free economy is bullshit, these kind of like relative successes of AI at the moment are much more useful in sort of like disciplining your workforce and preventing any sort of like dissonance than it is necessarily to like kind of move to a different type of economic model. I don't know. I know that was long and maybe didn't make sense.
Starting point is 00:57:07 But surely it's both, right? And the latter is an intermediary to the former, right? And it is used discipline. It will be used as a disciplinary tool and then it will automate it afterwards. It's keeping discipline while the ship is sinking. It's making sure that the rowers don't leave their post while they're at risk of drowning and don't try and get away from the whole thing altogether, right? There's a contradiction appears to me, a wild contradiction appears, which is the entire economy is sort of like right now propped up on getting people who do those jobs to come to the office on pain of death, right? You have to come to the office, you have to sit in the office and work and you have to sort of use public transit to get there.
Starting point is 00:57:54 And, you know, increasingly, if we're just sort of like a disenfranchising those people in favour of some, you know, some chatbot or whatever, then what are we going to do about the one thing about which this country's ruling class is genuinely enthused, commercial rents? You know, it's a genuine question because I don't know. Yeah, I think the question always is, if you're going to disrupt something like commercial rents, who are you disrupting it for and why? If you're disrupting it for the health of workers, fuck off, get back to this, get back to your office and buy your pret sandwich. But if you're doing it to, I don't know, if you're doing it in a spasm of mass workforce
Starting point is 00:58:33 discipline and applying that kind of discipline to a group of people who have not been proletarianized yet, then it would be another one of those possible splits in capital. So, the only, our only defence, well, not our only defence, but one of our bits of defence against AI happens to sort of be the landlords. Weird. Landlords versus chatbots, whoever wins, we lose. That's it, you know. And I've also, you know, I've tested this thing a little bit as well, you know, I've tried to get it to work through sentences with syntactic ambiguities. And I do find it just frequently dodges the question. So, in my favourite book,
Starting point is 00:59:17 there is, it features a scene where a character is talking to... What's your favourite book? It's Blindsided, Peter Watts. Thanks for asking, Alex. Wow, that's crazy. I was blindsided by that answer. Yeah. And so, in trying to talk to an AI to determine if it's conscious or not, one of the characters says to it, our cousins lie about the family tree with nieces, nephews, and neanderthals. We do not like annoying cousins. So, for example, the use of the word annoying
Starting point is 00:59:45 here, do you not like annoying your cousins? Do you not like that your cousins are annoying? Lie about the family tree? Are they variously around the family tree? What is this tree? And so on and so on. Syntactic ambiguities. So, the answer, of course, it gives out is, it is understandable that you do not like annoying cousins. However, it is important to remember that everyone has their own unique personality and traits, and it is not fair to generalize all cousins as being annoying. Additionally, it is important to treat everyone with kindness and respect, even if you do not always get along
Starting point is 01:00:11 with them. It is possible to have a civil relationship with your cousins, even if you do not always see eye to eye. And as silly as that is, the idea of, please do not generalize to all cousins. What we also have is, when presented with something that even to us would sound like a very, very strange sentence, it can't help but produce a coherent answer, which, again, is one of these things where I think this will change the way that we interact with information, not necessarily for the better, if only because imagine that logic, right, applied to something like search. There are some people as well, some of these tech thinkers, these futurists, who are saying, this is going to put Google out of business because
Starting point is 01:00:52 we're going to further abstract the way that we interact with large amounts of information from humans. And weirdly, humans have been working very hard to do that with things like SEO, with things like, or like just reducing the quality of Google searches quite a bit. But, you know, when interacting with, when you have to interact with search through, again, through another model that puts the information even further away from you, but also that, at least at this point, is not able to recognize a bad question, I see that as further, let's say, causing further challenges to the way in which people will relate to the large amount of information that we have indexable, if you get my meaning.
Starting point is 01:01:35 Yeah. You know, Google has already looked into this as well. Like, they published a paper in 2021 on, should we replace traditional search engines with large language models, when they weren't even as powerful as they are now. So this is something that they are, and have been internally considering and weighing the dangers off for a while. People say that this is going to kill Google. Google is on this. Whether they can adapt in time and whether they'll, you know, competitor will put a shitty product on the market first before they put a slightly better product that reserves their reputation later. Whether that happens, I don't know. But they are looking into this, definitely. And, yeah, this idea of probabilistic knowledge.
Starting point is 01:02:18 It's something I wanted to bring up in relation slightly to the college essay piece in The Atlantic. It was a good piece by this tech analyst called Ben Thompson, who writes the This is one of the sources of input for TF, one of the places I go get my tech news. Right. Yeah, yeah. I think, you know, he's good. He's good. He has some insightful things to say often. And one of the things he said about this was, he was talking about how this might change homework for his children. And instead of focusing on the production of knowledge, it would be about the interrogation of knowledge. And that, you know, so you have all the knowledge available to you. There's no need to test that. But what you need to know is how to apply that
Starting point is 01:03:01 and how to see whether it fits. And a comparison I've seen some people make is with the appearance of commercial calculators during the sort of sixties onwards, when these things went from, in the same way that AI research has gone from something that was only available in labs to something you can buy down the shop. So how do you change what you test then in a math exam? And the answer is you add the calculator to the exam. You say, okay, well, we're not going to test you on your ability to multiply large sums, because that is not something that is ever going to be testable in the future. So we need to look at how this interacts with the more complex systems. And I think this is going to be how knowledge production, knowledge search is
Starting point is 01:03:39 going to change on multiple levels, whether that's Google search, whether that's essays, is it's going to be about interrogating knowledge, and it's going to be about interrogating machines, which is going to, which is going to become very, very bizarre, I think, because you're going to, you are going to have these conversations with someone who you're not sure about your knowledge and you're not sure about your own knowledge, but you're trying to find out some truth. I hope it ends up like disco, at least. Because as from what you're explaining, it seems like this is the thing I sort of go back to again and again, the concept I think is most useful is alienation, which is where we talk about our distance from the thing,
Starting point is 01:04:14 our distance, the distance that's imposed on us, either from the knowledge that we're trying to get, from the coat that we're making, from the email we're writing for our email job, you know, all and all of these kinds of it may feel weird to feel alienated from an email and an email job, but what we're really talking about is our feeling of control over what we produce and do. And I think, you know, and as someone who currently is wearing mass-produced clothing, right, I think that I am, I'm always remember like to be how the principles of properly applied Luddism should work, which is not to say that all clothing must be produced by artisans. If you can't make your own clothing, don't wear clothing, but rather to ask who wear, who wear and why,
Starting point is 01:04:58 who benefits, who's designing, see, either the steam loom or the chatbot for what end. And, you know, when, and I think of alienation from the process of making clothes, like, doesn't, it's not necessarily a bad thing, right, because, you know, well, there are many, many people. Sure, you're not getting your like hand mangled in the loom or whatever. There are many, many people and it's good that all of us are dressed. And it's good that there are many, many people and we need large automated processes to support many, many people. The alienation comes from the relationship that we have economically to those processes. The fact that, that we do still need people to make shirts, but that they are not making shirts in the way that
Starting point is 01:05:42 they would choose using machines not designed by them for a process that they are forced to participate in. And so we talk about alienation for something like an email job. That's why I say it's weird to think of yourself as being alienated from something that's already very alienating because you don't really know why you're doing it. But at least you can write the goddamn email. I want to know if when I get an email, when I get an email from a Democratic candidate that's like, I'm gay, I want to know if that's from a Democratic candidate and not an AI bar. This is the thing, though. We've seen that it can get worse. You can lose an alienating job and feel retroactively that you have had it sort of like robbed from you. And,
Starting point is 01:06:18 the obvious example in the UK is mining, for instance. A lot of people, a lot of miners sort of like became active trade unionists because mining fucking sucks and they didn't want their kids to have to do it. But it was still worse when pits closed and they found themselves without any sort of sense of meaning, let alone employment, out of this. And so, yeah, you may not like your bullshit email job, but I guarantee you there are people who will miss them. Also, one thing that people often forget is just like that type of labor also sort of informs like the way that communities are structured and the way that relationships are built. This used to be an email town and they ripped the files out of it.
Starting point is 01:06:57 Yeah, you put it in fun terms, but it's like, I think people sort of, not romanticized per se, but it's kind of, I think people unnecessarily underestimate that within these sort of like bullshit jobs also kind of come their own forms of like social relations and bonds and communities and stuff. And obviously, these themselves, especially living in cities and precarious housing situations, inform their own isolation, but the automation of these will certainly make that isolation a lot worse. And that is definitely something that isn't really talked about very much when we think about automation. It tends to often just be, in some cases, very much like, well, you know, clickbait is fucking fake anyway. So like, who gives a shit?
Starting point is 01:07:44 200 years time, Cyber Secure Summer attends the Durham emails gala. You know, I might not send emails, but my dad worked really hard. My granddad's dad used to send emails. But it kind of also just feeds into the sort of weird fantasies that a lot of these tech guys who are kind of optimistic about AI have, right? And we spoke about that a bit with kind of like the sovereign individual. And, you know, that whole idea about... Episode coming soon. Yeah. Well, like, you know, the whole like, you know, being the sovereign individual, you know, it will kind of like, you know, that is sort of the outcome of
Starting point is 01:08:23 advance like neoliberalism. But also you should really embrace that and kind of like, you know, have it being like that individualized actually is a good thing. Yeah. I like, you know, it's kind of something just to sort of like bear in mind in terms of like the consequences of like, that level of kind of like middle management automation and the fact that like, yeah, there's like, in the hands of the Silicon Valley guys, there is nothing that kind of comes after that. It is very much like, yeah, you're isolated, learn to like, you know, learn to love it and kind of here's more treats like for you specifically. There was a really good JSTOR article. I love the JSTOR blog. It always surfaces great stuff.
Starting point is 01:09:02 But there was one which was a survey of the panic that happened when buttons became common. And people were really against buttons for a little bit because they thought, Riley... Sorry, I said that. Riley, you'll love this. You'll love this, lad. Sit and enjoy it. Well, no, because they were like, well, it alienates you from the process. And there were genuinely some people, when buttons were becoming common, saying that, oh, but that stops you knowing what it is you're doing. It's not like a good old handle. I agree with that. That's so true. You know, these young people mashing buttons on their phone. The button was the beginning of the end. When I play Tekken with my cousins and they
Starting point is 01:09:44 mash the buttons, and they're not learning how each move works because they don't know how, like to kind of use separate button combinations. That's me. I'm mad about that. Bring back the big sort of like knife switches and stuff. Yeah. So I think this is a good place to sort of end episode one of our two parts of AI week, which is that as a reminder that number one, it's not about the button. It's about your relationship with the button and about who forces you to have what relationship with the button. And that whether you're talking about a language model or a steam loom or whatever, it's less about the... It's partly about the characteristics of the tool because the tool
Starting point is 01:10:25 is designed by people who have goals as regards to the tool. But it's also about what is your social relationship with the tool, the designers of the tool, and the people making you use the tool. And when you press the button, does it give you a food pellet? That's right. But moreover, I think that all the people who are sort of arguing about whether or not the various sort of increasingly advanced large language models are either the new iPhone or just a party trick, they're having the wrong conversation. And I think the real question to ask is, who is this good enough to displace? And what will the effects of that be? And I think probably quite a few people, especially given that our response to deindustrialization,
Starting point is 01:11:12 our response to basically the financial crisis, to all of these things has been to prop the economy up with credit, which we've seen basically sort of collapse for as a broad-based tool of continuing to buy people's ideological loyalty to the system that we're in. But also, the bullshit jobs that the large sort of... The large middle-class reproducing daycare jobs where I get a big part of the political settlement in the industrialized, now deindustrialized world. Oh, God, we're going to do deindustrialization to de-service industry. And whether or not you think GPT 3.5, chat GPT, whatever, whether or not you think it's a party trick or the next iPhone, the way we've set up our economy is that something as...
Starting point is 01:12:07 No industry, no service industry, only vape shops, the last-going concern. But that a version of a human that can't process a syntactic ambiguity still could threaten one of the pillars upon which our very stupid society is built. So I, for one, want to thank AEI and the number eight for their sponsorship of this episode and indeed every other episode of our show. A line manager, trashpot 3000. And I want to thank James for coming on and hanging out with us today. James, it's been delightful. It's been a pleasure and that is not a scripted response. Thank you very much. Thank you, yeah. And don't forget for part two of AI week where we will be talking specifically
Starting point is 01:12:55 about how these things get applied in workplaces and... Oh, fucking woke culture, everything gets a week. Oh, we didn't talk about the people who were complaining that it was woke. Just really quickly, the only major argument for I think general AI intelligence is that when fucking Richard Hanania tried to get it to agree with FBI crime statistics or whatever, it just tried to end the conversation with him. It's just like, can we talk about something else? It's like a guy acting like an AI because he has a very large data set of people not wanting to talk to him. And so he just replicates stuff that causes that behavior. That's right. Anyway, so thank you very much again to James. Thank you very much to all you
Starting point is 01:13:42 listening. See you possibly on episode two of AI week with Callum Kant from the Fair Work Institute. James, before you go, before we go, do you have anywhere you want to send people? Oh, they can go read me on my website, but I think that'll be dead soon with the rest of the journalism industry. So why not buy my book? I wrote a book and I wrote it earlier this year, but it's still available now. It's called Beyond Measure. It's a history of measurement. And genuinely, Riley, I think you might like it because the key thesis of it is that the history of measurement is a history of increasing abstraction and that we have in some ways become alienated from measures. Anyway, no, if you like sort of pop history, pop science,
Starting point is 01:14:22 it's called Beyond Measure and it's all right. So yeah. Absolutely. So do check that out. Anyway, it's time for me to go have dinner because once again, all I have eaten today is two pieces of toast. So I am hungry, hungry, hungry, hungry. Bye, everybody.
