The Chaser Report - We're Back and Enshittier Than Ever!

Episode Date: October 12, 2025

School holidays are over, and so is our podcast break! To smooth back into the daily podcast grind, Dom and Charles both pose theories to each other on the future enshittification of AI. Dom uses ChatGPT to come up with a superior version of Charles' brain, meanwhile Charles has jumped back into his favourite pastime of always being right.
---
The Chaser Report: EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/chaserreport Try it risk-free now with a 30-day money-back guarantee 🌍
Buy the Wankernomics book: https://wankernomics.com/book
Listen AD FREE: https://thechaserreport.supercast.com/
Follow us on Instagram: @chaserwar
Spam Dom's socials: @dom_knight
Send Charles voicemails: @charlesfirth
Email us: podcast@chaser.com.au
Chaser CEO's Super-yacht upgrade Fund: https://chaser.com.au/support/
Send complaints to: mediawatch@abc.net.au
Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 The Chaser Report is recorded on Gadigal Land. Striving for mediocrity in a world of excellence, this is The Chaser Report. Hello, and welcome to The Chaser Report with Dom and Charles. Hello, Charles. Today we're bringing back one of the podcast's favourite words. I think something that we were on to, to give us credit, pretty rapidly, given our fairly bleak view of the world, and which has become, I think, one of the most talked about words of, certainly, this decade. It describes everything all at once.
Starting point is 00:00:30 months all the time. Yes, it's enshittification. Enshittification. Cory Doctorow coined it to talk about how platform companies become crap over time. He's got a great theory about why they start out wanting to attract you and be good, and then make money for themselves basically and trade away your rights. And it turns out, Charles, everything goes through that process. Everything gets progressively shitter over time. Even this podcast, probably. Yes. What is interesting from our perspective is that we fell in love with that term a few years ago, we then contacted the Macquarie Dictionary saying it really should be the word of the year.
Starting point is 00:01:02 We did. And then we encouraged all our listeners to use the word enshittify and send in examples of it. Yeah, because they said, they were like, oh, well, is it being used? Yeah. And we sort of went, well, we'll pass it on if it does. And then they made it their word of the year. And where's the credit to us? I know.
Starting point is 00:01:19 Cory Doctorow, who just coined it, suddenly gets all the credit. You know, Charles, once upon a time. The marketing is surely more important than the doing. Once upon a time, I actually sat, they only let me do it once, I sat on the committee to determine the Macquarie University word of the year. Oh, really? Oh, right. And what did you, what was the year?
Starting point is 00:01:38 I don't know what it was. There was probably fake news or something. It was in that era. Firstly, people complain about how it's more than one word half the time, like fake news. Yeah, you can't have a phrase. You can't have a phrase. Enshittification is one word. But also, I feel that since I was part of that process, the Macquarie Word of the Year process itself has become enshittified.
Starting point is 00:01:56 Yeah, absolutely. Okay, well, today we're talking about something that you probably haven't thought about, but is totally on the horizon, which is the enshittification of AI. All right, that kicks off in a sec. Looking for your perfect place to call home? Lethbridgeland is shaping the future of our city with incredible communities like Crossings, Riverstone and Watermark. Each neighborhood is designed with innovation, passion and responsibility to enrich your life today and strengthen Lethbridge for tomorrow.
Starting point is 00:02:29 From vibrant urban hubs to serene coulee views, there's a community waiting for you. Discover the lifestyle you've been dreaming of in a Lethbridgeland community. Visit lethbridgeland.ca and take the first step towards your new home today. At Medcan, we know that life's greatest moments are built on a foundation of good health,
Starting point is 00:02:47 from the big milestones to the quiet wins. That's why our annual health assessment offers a physician-led, full-body checkup that provides a clear picture of your health today and may uncover early signs of conditions like heart disease and cancer. The healthier you means more moments to cherish. Take control of your well-being
Starting point is 00:03:05 and book an assessment today. Medcan. Live well for life. Visit medcan.com slash moments to get started. Okay, so you know how actually AI is still going through that phase of actually being quite surprising and delightful, right? It's certainly true that I often go on to it and discover that it's really quite good. Can I give you a slightly, a very sidetracky and very self-indulgent example, as though I were Charles? Yeah, yeah.
Starting point is 00:03:34 Okay. The other day, in a moment of total narcissism, I said to ChatGPT, Yes. Can you write an opinion piece in the style of Dominic Knight? Oh, lovely. And it sucked up to me. It was talking about how wry and witty I was, which made me think it hadn't ingested the right pieces into its large language model.
Starting point is 00:03:51 But the kinds of jokes that I made in my op-ed pieces that I used to write, it made very similar ones, thus proving how formulaic I am. And like topical ones that I might have made today. And I like to think maybe it wasn't quite as sharp as what I would like to be. But then again, I'm 48 years old now. It's probably sharper than me. It's probably actually just reflecting your downward spiral. Oh, yes, the algorithm would have factored in.
Starting point is 00:04:20 would have factored in that you're 48 and not quite the wit that you used to be. So there you go. So I should actually ask it to see if it's ingested this podcast. Well, we should. See whether there's a, whether a Charles Firth theory it can generate. Well, because you know, the other day, I don't think I've told you about this, but I spent quite a bit of time uploading a whole lot of Chaser podcasts. Did you?
Starting point is 00:04:44 To ElevenLabs. Right. To get it to make versions of our voices. I upgraded to the sort of premium version of the thing. And I gave it two hours of our voices to do it. So just to clarify, in a time when there's major discussion about ownership of voices, you uploaded my voice to ElevenLabs without my permission. Yes.
Starting point is 00:05:09 Right. Yeah. Nothing wrong with that. Yeah. Hang on. Is that me saying that or is it the ElevenLabs version of me saying that? Charles, how could you? Okay, anyway, so you did that.
Starting point is 00:05:18 So I did that. And I'll tell you what, it was, there's nothing to worry about there because it was fucking shit. But, and what I noticed was actually the little mini version that you get for free, that just uses 10 seconds of your voice, is actually far closer to the mark than the one that's all premium and takes two hours to do and, you know, literally takes about six hours of compute time to come up with. But the problem is, even if you say Dominic Knight has an Australian accent, yes, it has this American accent.
Starting point is 00:05:50 It starts, like, for about the first five seconds, it does an Australian accent, and then goes, and that's why I'm Australian. Yeah. I'm so Aussie. I found the same thing. So on April Fool's Day last year, I was hosting the program that Craig regularly presents, 702 Breakfast. And you made his voice.
Starting point is 00:06:08 I made his voice using ElevenLabs AI with his permission, by the way. And I discovered the same thing when I typed in what I wanted fake Craig to say. You know, I really respect you, Dom, that sort of thing, stuff that he'd never say in reality. And yeah, the same thing happened. It started to get an American accent after about 10 seconds. Yes, yes. So I figured out the way around that.
Starting point is 00:06:28 Which is, did you sort of give it your own version of what you wanted it to say and then get it to overlay the accent? Yes, it kind of skinned it. So I did the read myself. Yes. And then it skinned it with Craig's voice, in a way. But it even got some of his kind of vocal tics and his slight little South African accent.
Starting point is 00:06:46 Yeah. Tinges here and there. So it worked pretty well doing that. Did you try on a few racist things just to sort of keep it South African? No, because I know. I know Craig is not like that, Charles. Did you do a whole lot of sort of worthy stuff about how terrible plastic is? I did make some corny jokes about the War on Waste, which is pretty much on brand.
Starting point is 00:07:05 Yeah. Anyway, that's not what we're talking about. Today we're talking about how actually, for the most part, AI is still going through the phase where, you know, you look up ChatGPT and you go, oh my God, this is exactly what Google should be. I know you want to get back to your point, but I'm just going to point out that exactly what you're saying
Starting point is 00:07:22 has just happened on ChatGPT. I said, can you generate a Charles Firth style theory about the future of artificial intelligence? And here's what it said it was doing. It said, all right, let's try this in Charles Firth style, playful, satirical, slightly conspiratorial, and with the cadence of someone delivering
Starting point is 00:07:38 a sharp monologue that seems absurd at first, but gradually sounds uncomfortably plausible. Oh, I like how flattering. It knows you and it respects you. Yes. So probably it doesn't actually know you. Nevertheless, it is very Firthian, this whole theory. Okay, so what's the theory?
Starting point is 00:07:53 Well, I'll just summarize, I'll briefly summarize what it is. AI is not here to replace humans. That would be far too obvious. It copies us so badly, with so many little errors and quirks, that eventually we stop trusting ourselves. That's something you would say. Oh, that is something I would say, yes. And here is the most Firthian thing, to flick forward.
Starting point is 00:08:11 It's actually very good. If people want to read this, just ask ChatGPT themselves. The future of AI is not technological at all. What you would say is that it's political. Yes, of course. Governments aren't going to regulate it. They're going to use AI in Parliament. You would have a chatbot saying we value all Australians on loop while an algorithm decides the budget.
Starting point is 00:08:30 Now, it does know it's you because it's talking about Australia. Yes. So there you go. I'll read out the final sentence because this does make you obsolete. The real future of AI is not that it becomes smarter than humans. It's that it becomes just as stupid. It will learn to lie, make excuses and promise things it can't deliver. In other words, it'll be fully human
Starting point is 00:08:48 and then we'll look back and realize we didn't invent artificial intelligence at all. We just uploaded Canberra. There you go. That's actually pretty good. Yeah, that's all right. Would you like me to write this as if Charles Firth were performing it on stage
Starting point is 00:09:01 with comic beats and punchlines? Say yes, but can we get back to my point? Let's get back to your point. Which is actually, I think, a bit sharper than... I think I prefer the Charles Firth version of this. Anyway, so at the moment, part of the reason why we like ChatGPT so much and we're sort of delighted by all these sorts of things.
Starting point is 00:09:20 And I'm not saying the experience is universally great. Like sometimes you just go, that's so stupid or, you know, it's really exploitative and all those sorts of things. But on the whole, there's still a honeymoon sort of buzz about the whole AI thing. It hasn't gone fully dystopian, that's true. And that is for one reason and one reason alone, which is the pre-enshittification phase of AI. Oh, this is the period when it's briefly good. Yes.
Starting point is 00:09:47 Remember Google when it first launched was just a little text box and you searched and there were no ads and it just found what you wanted. Yeah, yeah. Well, that's the equivalent of what AI chatbots are now. It just does the thing that you ask it to do without interrupting you with ads and bullshit and invading your privacy and all that sort of stuff. But so I want to talk about what, like that's definitely not going to happen. Like the whole reason they're investing hundreds of billions of dollars
Starting point is 00:10:17 is so that they can create a walled garden that they can then enshittify. Like that's literally the game. The game, we know where this is headed, which is it's going to be enshittified. But how is it going to be enshittified? Well, I've got, I've got a theory for you on that. Okay. Yeah. Just developed while you were discussing. And I want to go back to, I want to come up with a new term. And the new term is auto-enshittification. Oh, I like these. Which is a closed system that enshittifies itself. Yes. And the way that it does this, as we've discussed a little bit before, is through AI
Starting point is 00:10:50 sludge. Because what's going to happen is that, progressively, more and more websites in the world will be created by AI. So for every reputable website in the world, there's going to be a gazillion other websites trained on that. Someone's just gone, write me a website about, I don't know, drills or something. Yes. And gooddrills.com and all the content will come from AI sludge.
Starting point is 00:11:12 But the AI, the large language models, will then scrape gooddrills.com, and it will continue and continue until it gets shittier and shittier and shittier. And the results you get on AI will be more poisoned by sludge because the AI won't be able to tell what's sludge and what isn't. Yes. And so it'll just completely self-enshittify. Okay, that's good. Although that is, that's a theory that exists. That's the dead internet theory.
Starting point is 00:11:38 Yeah. That, that, we've covered that on the podcast. Yeah, and I'm brilliantly self-enshittifying by just going back to something we've already said, but not saying it as well. Hey friends, it's Nicaila from the podcast Side Hustle Pro. I'm always looking for ways to keep my kids entertained without screens. And the Yoto Mini has been a total lifesaver. My kids are obsessed. Yoto is a screen-free audio player where kids just pop in a card and listen.
Starting point is 00:12:09 Hours of stories, music, podcasts, and more, and no screens or ads. With hundreds of options for ages 0 to 12, it's the perfect gift they'll go back to again and again. Check it out at yotoplay.com. Y-O-T-O-P-L-A-Y dot com. At MedCan, we know that life's greatest moments are built on a foundation of good health,
Starting point is 00:12:29 from the big milestones to the quiet wins. That's why our annual health assessment offers a physician-led, full-body checkup that provides a clear picture of your health today and may uncover early signs of conditions like heart disease and cancer. The healthier you means more moments to cherish. Take control of your well-being and book an assessment today. Medcan. Live well for life.
Starting point is 00:12:52 Visit medcan.com slash moments to get started. The Chaser Report. More news. Less often. Yeah, okay. Like, I take that. But how is it going to make more money out of us, is the question? Like, enshittification is about squeezing the user experience to actually just squeeze more money out of the people using it, right?
Starting point is 00:13:16 Right. So the first one is ad integration. So that's going to be tedious, isn't it? Oh, of course. Like, the first thing, like, because everyone is always using ChatGPT to search for anything nowadays anyway. It's just a better search engine than Google ever was. And so what's going to stop the first, you know, paragraph or two paragraphs of every explanation being like, that's a great, you know, question about, can you write a Charles Firth monologue about AI.
Starting point is 00:13:44 They're not making any money. It actually costs ChatGPT a lot of money to answer my spurious question. Yeah, here's a Samsung fridge that you should buy while you're reading this thing. Or here's a Samsung Charles Firth you should buy. Oh, you're reading this. Yeah. And the thing is, and it will slip those ads into the text, won't it? It'll be like, here's a, you know, you know,
Starting point is 00:14:08 And suddenly Charles Firth, AI version, will be dropping in details about his Nike watch and his Ray-Ban shoes and whatever. It's like when you listen to this podcast and non-AI Charles Firth goes on about his fucking shows or his books and he's dropping plugs in left, right and centre. Okay, okay. So that's the first obvious one, right? But what about this one, right? Which I bet you might have actually thought of.
Starting point is 00:14:36 But this is, so you say, okay, hey chat, can you please book me a flight to London? Right. Because I'm doing the West End and I've sold out all my shows and I need to get over there. To choose a random example, right? I'm going over there in October. We've got a full UK tour planned, but we've sold out, but I still need to get over there, right? Yeah. Just hypothetically.
Starting point is 00:15:03 I mean, that also happens to be true. But, you know, just saying, hypothetically. Just, just, yeah, and just to illustrate how commercial considerations can really make content annoying by wedging it in. So the future of AI is that you'll be able to say, and can you then book it for me? Find the best deal, book it for me, right? That's right. What will end up happening is ChatGPT will do a deal with oneworld. Yes.
Starting point is 00:15:28 And so all its best deals that it can find are through the oneworld sort of thing. There'll be a collapse of market capitalism because instead of us participating in markets and going, you know what, I'm going to compare what's going on with Qantas compared to, I don't know, Virgin or whatever, you instead will have the ChatGPT models sort of doing the deals in the background and they'll all be compromised. Like part of the enshittification process will be, well, you know, ChatGPT is now an exclusive oneworld partner. And then, and then you don't need to, like, that's, that's the end of market capitalism.
Starting point is 00:16:07 I love how you've come up with this optimistic idea that the unfettered, for the first time ever, the increasingly unfettered operation of capitalism is going to destroy itself. Yes. Rather than just basically becoming more capitalistic and enshittifying everything, with everything turning bad except for the ability of these companies to make money. It's a nice try. But that is, that's how capitalism, well, that's basic Marxism. That's Marxism.
Starting point is 00:16:31 That's Marx. You know, it creates contradictions that run up against its own survival. There has never been a more extraordinary example of enshittification than the gulf between Marxism in theory and when they started trying to create a society based on it. That is, like, en-hell-ification. That's what's so interesting. That is a really interesting theory, Charles. So then the other idea that I had is maybe the point is that also, like, you know,
Starting point is 00:17:01 at the moment, you sort of depend on AI to be somewhat of a fair dealer in the information you're getting. Like, can you give me a summary of whether AI is good or bad? And it gives you the pros and the cons of both sides. In theory. Don't you think that the other way to enshittify is you just, you know, the companies start taking money to sort of not necessarily be that neutral. Like literally just start framing things as neutral, but going, well, actually, you know,
Starting point is 00:17:31 like here is our worldview about AI, it's very good, and by the way, you should buy a Samsung fridge, but more subtle than that, like, it's sort of, like, taking out ideas from the marketplace of ideas as part of the sort of commercial interests that are going on. Do you see what I mean? Like, I do. I do. Conservative-skewed AIs. But this isn't a new, we'll come back to the politics of it in a moment. It's not a new idea, though, that content is branded or biased or you have cash for comment. Exactly. It's a new version of that.
Starting point is 00:18:05 But we do know that when people find out that this is going on, it does tend to destroy the value of the thing. Like it, you... But that's why you need lock-in first. That's why they're giving this taster of how great it is. So we all get hooked on it. And they make the compute power so vastly expensive that no one can compete with it. So we're locked in before they start enshittifying.
And you can't... Like, even though you know that it's shit,
Starting point is 00:18:28 And you can't... Like, even though you know that it's shit, You still have to use it. That's what happened to Google. Like, Google became shit about seven years ago. But Charles, do we still had to keep using it. When you go into the politics, and ironically, Google's no much better because it has Gemini to sort through all the terrible results that Google gives you in search. Yes.
Starting point is 00:18:47 But the political side of it is actually the scarier one because there are a million things you can think of whereby you ask AI a question that has a political viewpoint on it. And we see there's a whole lot in America at the moment. Let's just take an obvious example, which is the history of America. Right now, the Trump administration is going into the Smithsonian and changing the accounts of slavery that he doesn't like, right? It's going in, and I mean, we should probably do a whole episode on exactly what they're doing. But it's fairly extraordinary what they're doing. And I still remember the first time I saw this very blatantly was in Tokyo at the Yasukuni Shrine. They've got this war museum of the Second World War.
Starting point is 00:19:26 And, like, the display on the so-called rape of Nanking, the atrocities committed by Japanese soldiers, I still remember the phrase they had up on the wall, which was that, you know, soldiers disguised as civilians were prosecuted heavily. That was the English translation. But my point is, if we're used to asking an AI what happened, what the history of America is, or what happened with slavery. And the people who control the large language model
Starting point is 00:19:54 are able to skew what the results are. And we don't go back to primary sources anymore. We're so used to AI filtering everything for us. We don't go back and look at what actually happened in any meaningful way. That's going to be a really, really scary thing when regimes are able to put their thumb on the scale of what those sorts of results are. Oh, and you can see Grok. Grok AI is already doing that, you know, trying out those guardrails. And you've got the sort of noble Wikipedians going there and having all these systems to try and keep the content as neutral and as fact-based as possible.
Starting point is 00:20:26 and what happens when, you know, Wikipedia, if Wikipedia were to be banned because it's, you know, accused of being liberal. And what happened in the state of Florida is they've done that by taking dictionaries out of school libraries. What if access to Wikipedia is blocked? Well, the dictionaries are notoriously biased. They are.
Starting point is 00:20:45 Yeah. Like every book, they go from left to right. I mean, every book, every page gives the left point of view before the right point of view. And if you, I've looked through it. I'll tell you what, it's so woke. They've even got the word woke. They have got the word woke defined in there.
Starting point is 00:21:00 Yeah, although not in its current sense. But yeah, like every single page in any book you ever look at in English. Yeah, it goes left to right. Yeah. The right perspective is put behind the left. They're onto something here. But anyway, this is, this is genuinely quite right. I did get your pun the first time.
Starting point is 00:21:17 I just was, I was so happy, please, please, my second. Again. Yeah. No, because I said it was only, anyway, it doesn't matter. No, and I'm sure I'm not the first person to make that very obvious joke. Anyway. No, no, it's a great joke.
Starting point is 00:21:28 I've heard it before. Ah, there you go. You've heard it twice. Yeah, that's, that is enshittification. The one saving grace of all that, Charles, is that when the enshittification happens, we won't be able to find out what enshittification is anymore. So we won't know about it. So we won't notice.
Starting point is 00:21:42 Isn't that great? Well, it is true that ignorance is bliss. Like, I kind of feel like the more I've learned about the world, the less, the less I've enjoyed it. Yeah. Yeah. That's true. So maybe that's good.
Starting point is 00:21:54 Maybe actually we should be celebrating the enshittification. We should actually try to learn less, folks. Like, we should become people who are only really interested in gardening. Well, no, that's still affected by climate. We should find a hobby that is not affected by anything bad happening externally. You only care about that. That is the key to happiness. I just don't know what the hobby is.
Starting point is 00:22:13 Certainly isn't podcasting. What is it? Steam trains, stamp collecting. I don't know. We'll get back to you. If you have any suggestions, podcast@chaser.com.au. I think this episode has been sufficiently enshittified. And we should tell you that our work here is done, Charles.
Starting point is 00:22:27 So many of these episodes just end with a general sigh of despair. But no, no, but I suppose the point is, bookmark this episode because, I'm just saying, like, in a few years' time. We called it first. We called it first. Take that, Macquarie Dictionary. Because that's my hobby. It's being right.
Starting point is 00:22:46 About how shit the world becomes. But only because you will live first. See what I did there? Hey! Now I'm happy again. of the Iconocles network. So, yeah. Hey, friends, it's Nicaila from the podcast Side Hustle Pro.
Starting point is 00:23:02 I'm always looking for ways to keep my kids entertained without screens. And the Yoto Mini has been a total lifesaver. My kids are obsessed. Yoto is a screen-free audio player where kids just pop in a card and listen. Hours of stories, music, podcasts, and more. And no screens or ads. With hundreds of options for ages 0 to 12, it's the perfect gift
Starting point is 00:23:26 Check it out at yotoplay.com. Y-O-T-O-P-L-A-Y.com. At MedCan, we know that life's greatest moments are built on a foundation of good health from the big milestones to the quiet winds. That's why our annual health assessment offers a physician-led, full-body checkup that provides a clear picture of your health today
Starting point is 00:23:45 and may uncover early signs of conditions like heart disease and cancer. The healthier you means more moments to cherish. Take control of your well-being and book an assessment today. Medcan, live well for life. Visit medcan.com slash moments to get started.
