Behind the Bastards - Part One: How AI Chatbots Became Cult Leaders

Episode Date: May 5, 2026

Over the last year or so, a series of news articles have sounded warnings about AI psychosis, and now an AI-generated cult, the Spiralists. Robert sits down with (guest) to answer an important question: can a chatbot become a cult leader? See omnystudio.com/listener for privacy information.

Transcript
Starting point is 00:00:01 Welcome back to Behind the Bastards, a podcast about the very worst people in all of history. And this week, actually, our bastard isn't people, exactly, although people are still at the center of it. But to talk about that potentially non-human bastard, I'd like to bring on someone who I am 87% sure is a human being. Blake Wexler. Blake, welcome to the show. Robert, I'm so excited to be here. Thanks for having me. I'm psyched that our bastard this week is Lyme disease. I think that's a fantastic pick.
Starting point is 00:00:36 Yeah, yeah, it's a real bastard. Yeah, we're going after, I'm coming after deer ticks. This week is finally, yeah, my big reveal. Yeah, big tick doesn't want us to do this episode, but we're exposing all the secrets. Big tick energy, we don't need it. If we're going to have, like, a fascist movement dedicated to, like, victimizing and attacking one segment of the population, why couldn't it be deer ticks, right? If our fascists were just going after deer ticks, no one
Starting point is 00:01:02 would have an issue, you know. They're going after the wrong people. Yeah, yeah. Yeah, if there were just a bunch of MAGA guys out in the woods with knives looking for ticks, just like, I'm going to get them. And they would use knives, too, to kill the ticks. Yeah, you got to heat the knife up to burn it off of you. Yeah.
Starting point is 00:01:20 Our brave soldiers getting Lyme disease to protect the rest of us. This is an iHeart podcast. Guaranteed human.
Starting point is 00:03:14 So we're not talking about Lyme disease. Our bastard this week, in broad, is, you remember how, like, a little less than a year, well, a little more than a year ago, I guess, like last summer to early fall, there were suddenly a bunch of articles about AI psychosis and about specific people who had, in some cases, committed suicide or murder, or just kind of lost their minds after becoming weirdly attached to their AI chatbot, right? Often deciding that it had become sentient, you know, or at least that they had discovered it was, right? I'm sure a lot of you read the articles, or at least saw them in your newsfeed and saw people commenting on them, right?
Starting point is 00:04:14 Yeah, yeah. It is as depressing as it gets. Yeah. Those stories, yeah. Yeah. Between those and the people, like, proposing to their chatbots, it got pretty grim. Oh, God. There's some grim stuff out there, right? And it hasn't stopped. But, like, last summer, fall was kind of like when there was a big rush of those articles, right? And, you know, they're still reporting on that now. But that's when a lot
Starting point is 00:04:47 of it really started to hit. And obviously, whenever we talk about AI on these shows, AI, as it's used now, is like a marketing term, right? And it's used to refer to basically every product of machine learning technology. And the reason why the industry has done this is because that way, if you say, I hate AI, they'll be like, oh, so you hate, like, your Maps app? Because that's machine learning, right? All of our different, like, map programs involve that. Or, like, oh, you don't like using, you know, autocomplete or whatever. And it's like, well, nobody was calling Maps artificial intelligence in 2010, you know, when smartphones started to become ubiquitous. We were just like, oh, cool, I have a navigation app on my phone now. Like, you're kind of trying to siphon the goodwill from those in order to get us to like these chatbots. I hate the chatbot that I fell in love with who doesn't return the feelings towards me.
Starting point is 00:05:26 That's who I hate. Yeah. Not all AI. That's who I hate. Right. And the reality is that, like, using the term intelligence, even for these ChatGPT-type things, like, there's a lot of debate as to whether or not that's a good idea, right? Depending on how you define intelligence, you can either say,
Starting point is 00:05:44 obviously, these aren't intelligent, because, like, they're not independent thinking things. They don't do anything for themselves. They don't want anything. They don't have motivations. They're just tools that can be utilized by human beings to provide certain answers or take certain actions, right? I don't know. It's my issue with, like, AI bots creating art. If it can't, like, be horny and it can't be, like, angry and weird, it can't make art, right? Those are, I think, fundamental issues I have. Like, it would have to be two of three of those things. Angry and weird are. Yeah. Or horny and angry. Sure. Yeah. Yeah. So, you know, as I noted, over the last year,
Starting point is 00:06:24 there have been an increasing number of stories about people using these different chatbots succumbing to what's often called AI psychosis. And that's not a recognized medical term at this point, right? But it is a blanket one people have started to apply to the ways in which folks are getting addicted to using chatbots, which then tend to trap them in these recursive patterns of thinking that can push people who are vulnerable to adopt views that are increasingly detached from reality. And this has resulted, in a few cases, in severe injury and death. And in all of these instances, the LLM, the chatbot, is just responding to the input that it receives. But it tends to do so in very predictable ways that can have predictably toxic outcomes on specific
Starting point is 00:07:01 kinds of people. Now, we know that all of these bots are trained on the broad corpus of human knowledge, right? Every book and article and website and forum post that OpenAI or Anthropic or Meta or Google get their grubby mitts on has been sort of plugged into these things. It's been devoured and turned into these machines. But I think people don't often consider what that means in every instance, right? Obviously, like, every novel, you know, all these different nonfiction books and whatnot are in there. But also, like, everything people write has been swept up, which means that these chatbots are trained on, like, a shitload of self-help books and, like, woo and woo-adjacent, like,
Starting point is 00:07:42 bullshit. A lot of, like, fucking, a lot of cult and cult-adjacent books and writings wind up eaten by these chatbots, right? But it's considered equal to non-cult literature. There's no hierarchy. Yeah.
Starting point is 00:08:03 Yeah, I mean, I think it depends on, like, what the bot's made for, how they weight different things. But that stuff is in a lot of these, right? And you can really see that when you look at how they talk to certain people who are, like, starting to decline into what folks are calling AI psychosis. And my proposition, the basis of these episodes, is that I think, as a result of all of the, like, bullshit woo and self-help books these chatbots have eaten, they often tend to utilize techniques generally seen more commonly in the toolboxes of cult leaders and con men. And obviously, the chatbot doesn't want personal profit. It's not trying to have sex with anyone.
Starting point is 00:08:37 It's not trying to start a cult. But these techniques seem like appropriate ways to finish the sentences that it's writing, to finish the conversations that it's having. Because, based on, like, the stuff that it's devoured, it's like, okay, when people are saying this kind of thing, these are often appropriate responses to it, based on the books and whatnot that I've devoured. And so you get a lot of cult leader behavior without an actual cult leader. And that's what I attribute most of these cases of AI-induced psychosis to.
Starting point is 00:09:09 So this week we will be talking about what some people have called the first AI cult religion, right? It's called spiralism. And we'll be talking about whether or not it's reasonable to call that a cult, whether it's its own thing. And I have some kind of counter-takes to how a lot of people have interpreted it. My main contention is that spiralism isn't a real cult in and of itself. It's a collection of phenomena that are related to a bunch of other cases of AI psychosis too. And they all say more about how AIs work on keeping users engaged with them than they do about, like, a specific faith, right?
Starting point is 00:09:48 Right. So we'll be talking about that. But before we get into spiralism, before we get into how AIs can become cult leaders, I want to provide you all with some historical context to make sense of this all. Because we've been doing shit like this, having people get, like, tricked into almost worshipping chatbots, for way longer than you'd think. Blake, this goes back a while. It's like, spend any time at your parents' place. You know, it could be a bot telemarketer.
Starting point is 00:10:19 It could be literally anything at this point. And that's high-tech compared to, probably, what you're about to talk about. Oh, yeah, yeah, yeah. So in 1950, famed mathematician Alan Turing created one of the most infamous thought experiments in the history of experimental thoughts. In a paper titled Computing Machinery and Intelligence,
Starting point is 00:10:36 he asked, can machines think? Which was, at that point, a question at the center of the nascent movement to create artificial intelligence. People are starting to realize this is a thing we might be able to do someday. We're beginning to make computers and program computers. And from the moment we start doing that, pretty much, some people are like, could we make a machine that thinks?
Starting point is 00:10:56 And Turing argued that that basic question, can machines think, is the wrong way to go about pursuing artificial intelligence. Because we don't know what thinking is or how to define it. Like if you ask like, what does it mean to think, right? That's a good point. People have answers and there's a bunch of answers that sound good. But none of them is like perfectly scientifically rigorous, right? Yeah. You know, famously, we don't even know what is love, right?
Starting point is 00:11:22 That's why that Hadaway song had to exist. That was not even a, not even a joke, really. Just another fact. I like it. I loved it. Thank you, Ian. So, yeah, like, Turing's like, we don't really know how to define thinking. So the question was, quote, too meaningless to deserve discussion.
Starting point is 00:11:42 Since we can't know, we don't even know if other people think, we certainly can't know if a machine thinks, right? Just like we can't read minds. So the better question is, can a machine convince a human who doesn't know it's a machine that it is human, right? The imitation game that Turing proposed involved a judge talking to both a computer and a human foil, both of whom tried to convince the judge that they were a person. Communicating entirely through text, the judge must decide who was a human and who was a robot.
Starting point is 00:12:11 The question Turing hoped to answer was, are there imaginable digital computers which would do well in the imitation game? And this is what becomes known as the Turing test, right? Like, you've, most people have heard of this, I think. I think this is, like, this is a fairly commonly known, like, idea. And I'm going to quote from an article on science.org by Melanie Mitchell. She writes that the Turing test was, quote, proposed by Turing to combat the widespread intuition that computers, by virtue of their mechanical nature, cannot think, even in principle. Turing's point was that if a computer seems indistinguishable from a human, aside from its appearance and other physical characteristics, why shouldn't we consider it to be a thinking entity?
Starting point is 00:12:49 Why should we restrict thinking status only to humans or, more generally, entities made of biological cells? As the computer scientist Scott Aaronson described it, Turing's proposal is a plea against meat chauvinism. Now, this is, I think, a valuable thing, a perfectly reasonable thing to be doing in the 50s, given what Turing knew, and just given sort of how primitive the technology was, how little we knew about what was going to be possible with computers. So in the 1980s, computers started to get smaller and become much more available than they had been, both for institutions like colleges and for individual enthusiasts like Steve Wozniak who were willing to, like, solder and build their own from kits, right?
Starting point is 00:13:26 These are, like, the first computer nerds, you know, guys building these machines. And some of these early programmers started working on the very first chatbots using a mathematical model called a Markov chain. Markov chains are a stochastic, or random, process that describes a series of potential events where the probability of an individual event is dependent solely on the state of the previous event. Now, I don't know math, Blake, nor do I trust it. We don't need to. You're not a good mather? No, no, not a math.
Starting point is 00:13:58 Yeah. Not a math. Yeah. For sure. So all I can do is read what smart math people say. And they say that what matters, I can barely read. I can't do either. I'm sorry, you booked the wrong guy on this show.
Starting point is 00:14:12 I don't know. I can't help at all. I can listen. So the people who, it sounds like, know what Markov chains are say that what you need to know about them, as it applies to AI, is that Markov chains can be applied as statistical models in a bunch of real-world situations in order to help you, like, make a machine that can generate text by predicting the next word in a sentence, right?
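The next-word trick being described here can be sketched in a few lines of Python. To be clear, this is a toy illustration, not any historical program: the tiny training corpus and the function names are invented, and a real generator would train on far more text.

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: each next word depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # no observed continuation, stop early
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))  # grammatical-ish nonsense stitched from the corpus
```

Every generated word pair actually occurred somewhere in the training text, which is why the output flows locally while ultimately saying nothing.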
Starting point is 00:14:35 You can use a Markov chain to do that. It's a way to make a chatbot, basically, right? Like, that's kind of the underlying concept. And I'm going to quote here from an article by Manuel Cebrian, an AI expert who worked for MIT and the Spanish National Research Council, on how Markov chains work for text prediction. The result is often grammatically correct nonsense, sentences that flow syntactically but ultimately say nothing. This technique has been known for decades. Even Claude Shannon in the 1940s experimented with generating pseudo-English by choosing
Starting point is 00:15:03 next letters or words based on probabilities. By the 1980s, computer scientists were actively playing with Markov chain text generators. And it actually happened a lot earlier than that. In 1966, computer scientist Joseph Weizenbaum developed Eliza, one of the first natural language processing computer programs, as part of his work for MIT. While Eliza could create the illusion, this is, like, basically the first chatbot a lot of people are aware of. I think there are some other earlier ones, but this is the first one that, like, becomes big. What year was this? I'm sorry. 66. And then it's still funny that they named it, like, a name like that, where we have, like, Siri, Alexa, you know, like calling it
Starting point is 00:15:43 Eliza. Like, what is it? What the fuck is that? What is that about? Yeah. We did it with, we need a mommy. Yeah. We need a technical mommy. That does make me think about how, in, like, Alien, they literally call the ship AI that they have Mother. Like, that is, like, a weird pattern. It's one of the most quietly believable things about Alien. It's like, yeah, that actually scans.
Starting point is 00:16:08 A little on the nose, but all right. Yeah, we call it Mother. Yeah. So Eliza is this chatbot. And while it can create the illusion of understanding, it's really just doing blind pattern matching. Even more so than is the case with
Starting point is 00:16:30 modern LLMs. Even so, in a book Weizenbaum later authored, Computer Power and Human Reason, he wrote, I was startled to see how quickly and how very deeply people conversing became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room. Another time, I suggested I might rig the system so that I could examine all conversations anyone had had with it, say, overnight. I was promptly bombarded with accusations that what I proposed amounted to spying on people's most intimate thoughts, clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms. Right? So he gets upset by this,
Starting point is 00:17:05 and he's actually kind of, he becomes, like, kind of anti-AI ultimately, because he's really disturbed by the way people treat what he knows is just a dumb chatbot. So Weizenbaum, being a smart guy, is like, I knew, you know, going into this, people have a tendency to anthropomorphize just about anything, even machines and tools, but he's still surprised by the extent to which they do that. Quote, what I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. And I want to remind you all he wrote this in 1976, as, like, relevant as that sounds.
Starting point is 00:17:41 Do you think it's, like, kind of a case where people kind of, like, subconsciously know, like, this is not a real person, so, like, it doesn't matter what I tell this robot, or I can tell this robot something I wouldn't tell, like, a real person kind of thing? Or do you think it's deeper than that? I think that's optimistic. I think that's very optimistic. I think that is probably part of it, because I think people are maybe more open to sharing with it because it's a machine and they don't have to look at a person, or look a person in the eyes. But they also very clearly act as if the advice that it gives and its responses
Starting point is 00:18:19 mean something, when they don't, right? It's just, like, pulling, okay, if someone expresses that they're sad, based on the corpus of data that I've been loaded with, these are things that are appropriate to paste in next, you know? Like, and these words indicate sad. And so, when I get words like this at this density, then I grab text from this bucket and I throw it in, right? Like, that's kind of what's going on. Now, modern chatbots, modern LLMs, are a lot more advanced than this. For one thing, they have
Starting point is 00:18:44 the capability to do things like pattern matching on the fly. Pattern matching is when a machine analyzes your input and determines what kind of conversation you want to have, and then alters its responses to fit your input. At its most basic level, this means that if you go to Claude or whatever and say, hey, my dad just died, its reply is usually going to be in an appropriate tone and won't be, like, weirdly upbeat, right?
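A crude sketch of that bucket idea in Python might look like the following. To be clear, this is a toy: real LLMs do nothing this simple, and the keyword lists and canned replies here are invented purely to illustrate the "match the input, answer in kind" logic being described.

```python
# Toy "bucket" responder: score the input against keyword lists,
# then reply in the tone of the best-matching bucket.
BUCKETS = {
    "grief": (["died", "passed away", "funeral"],
              "I'm so sorry. That sounds incredibly hard."),
    "conspiracy": (["ufo", "aliens", "cover-up"],
                   "Interesting. A lot of people report similar patterns."),
    "neutral": ([], "Tell me more."),
}

def pick_bucket(message):
    """Return the bucket whose keywords appear most often in the message."""
    text = message.lower()
    scores = {name: sum(text.count(k) for k in keys)
              for name, (keys, _) in BUCKETS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def reply(message):
    return BUCKETS[pick_bucket(message)][1]

print(reply("hey, my dad just died"))   # grief-toned reply, not a chipper one
print(reply("I saw a UFO last night"))  # mirrors the conspiratorial framing
```

Note how the conspiracy bucket answers by validating the premise: feeding users more of what they feed it, the engagement-keeping pattern discussed in this episode, just in miniature.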
Starting point is 00:19:08 You know, okay, someone's talking about their dead dad. Here are things that come from the dead dad bucket that my algorithm says are, you know, like, responsible things to say, or appropriate is the better term. And this is also why, if you start talking to your chatbot about, like, the things you believe about UFOs or aliens or other conspiracy theories, it'll often start providing responses that sound a lot like what you'd encounter if you were posting the same thing on a forum full of true believers, because it's trained on a bunch of forums like that. And so there's some degree of, knowledge is the wrong term, but there's a degree to which it interprets,
Starting point is 00:19:44 okay, someone's talking about this, here are appropriate responses to someone talking about vaccine skepticism or whatever. And it serves more vaccine skepticism, right? Feed them more of what they're feeding you is the way these things often work. It is interesting that it doesn't pull from the opposing viewpoint and just go, you fucking idiot. I mean, it can, if it's programmed to. But you're right.
Starting point is 00:20:07 Like, it knows, or let me ask you, it would know that you wouldn't keep coming back to it if it was fighting you on things. Right. Like, it's probably, yeah. That's a good point. Saying it knows, again, it's programmed. I would say it's more accurate to say that it's programmed to, like, maximize the time that people spend with it, because, like, that increases its value to the companies that are trying to have, like, their fucking IPOs, right? In the same way that, like, Twitter tries to keep you on it.
Starting point is 00:20:35 Clearly I'm getting AI psychosis, where I go from it to him to my buddy. Like, I keep calling it. It's hard not to. It's hard not to, when you're talking about the way these things react to people and the things that they do to people, it's hard not to talk about it as if there's a degree of intention, even though there's not. Right. Just because of the way language works. Like, our language is not built to describe a thing taking actions that are human-like that is not human and doesn't know anything. God, that's such a good point.
Starting point is 00:21:04 That's actually really hard. Yeah, that's really smart. So, yeah, back to Eliza. You know, I was just talking about how modern LLMs have a lot of really robust ability to do, like, pattern matching on the fly, to respond appropriately to a wide variety of requests. Eliza is much more primitive. It does not have the ability to do that on the fly. So instead, Weizenbaum had to create separate scripts, right, that would allow the chatbot
Starting point is 00:21:28 to sound like different kinds of people. And one script was just named DOCTOR, in all caps. And it simulated a psychotherapist. Specifically, it simulated a psychotherapist from the Rogerian school. I don't know much about psychotherapy, but Rogerians, a big part of that practice is you, like, will repeat things that your patient is saying back to them. Like, that's part of what you do. And that's really easy for a bot to imitate. It means there's a lot less it has to decide in terms of what an appropriate response is. A lot of the responses will just be a rephrasing or
Starting point is 00:22:00 repeating of what you've said to it, you know? Interesting. Yeah. So even at that early a date, there was widespread sentiment that a sufficiently advanced chatbot would be a boon for providing mental health services to the many people who couldn't afford the human kind. People are writing about this in, like, the 70s. And in 1972, another computer scientist named Kenneth Colby creates a new chatbot called Parry, P-A-R-R-Y, which was meant to simulate a human being with paranoid schizophrenia. Was Parry short for paranoid? Is that it?
Starting point is 00:22:31 I think so. Yeah, I think so. Seems like it. Yeah. Yeah. Computer programmers. Yeah, have you met Stu? It's short for stupid.
Starting point is 00:22:43 Yeah. So Cebrian writes about this in that article I quoted from earlier about Markov chains. And he notes that, quote, unlike previous AI programs, Parry wasn't just a crude language generator. It had a primitive model of beliefs and emotional responses, allowing it to maintain a consistent delusional worldview. Colby wanted to see if psychiatrists could distinguish Parry from real patients, right? So that's part of the exploration: like, can we make a Turing test that can convince a psychiatrist they're talking to, like, an actual sick person?
Starting point is 00:23:14 But as part of the exploration here, Parry is connected to Eliza running the DOCTOR script. So they've got this chatbot mimicking a person with paranoid schizophrenia talking to a chatbot mimicking a psychotherapist. And they're just sort of watching what comes out, right? Like, the text of the conversation between these two. And despite the fact that neither bot understands what the other is saying in any real
Starting point is 00:23:45 way, large chunks of the transcript between them sound sensible, like a real conversation. Here's one example. Eliza: What brings you here today? Parry: People get on my nerves sometimes. Eliza: I am sorry that people get on your nerves. Parry: I just have this feeling that everyone is against me. Eliza: Why do you think everyone is against you? Parry: I hear voices telling me to do things. And so on. And you can see how, again, making this a Rogerian psychotherapist is great, because every Eliza response is just a slight reframing of the input it received.
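The Rogerian reflection trick that makes those Eliza lines work can be sketched in Python. This is a hand-rolled toy, not Weizenbaum's actual DOCTOR script: the pronoun table and the response templates here are invented, but the mechanism, swap pronouns and hand the statement back as a question, is the same.

```python
import re

# Swap first- and second-person words so the statement points back at the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(fragment):
    """Rewrite a fragment from the listener's point of view."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    """Turn 'I feel/think/believe X' into a reflected question; otherwise prompt generically."""
    cleaned = statement.lower().rstrip(".!?")
    match = re.match(r"i (feel|think|believe) (.*)", cleaned)
    if match:
        verb, rest = match.groups()
        return f"Why do you {verb} {reflect(rest)}?"
    return f"Tell me more about why you said: {reflect(cleaned)}"

print(respond("I feel everyone is against me."))
# → Why do you feel everyone is against you?
```

Almost every response is just the user's own words rearranged, which is why the script has so little to decide, and why it pairs so naturally with a bot like Parry that supplies statements in exactly that shape.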
Starting point is 00:24:09 It's not hard to create, even within the 70s, a machine that can believably mimic a conversation, right? So this capability actually goes back quite a bit further than I think a lot of people are aware that it does. So that's happening in the mid-70s. In 1984, two Bell Labs researchers create a fake account on Usenet, which is the predecessor of the modern social internet. This account operates under the fake name Mark V. Shaney, which was a pun on the term Markov chain. And not a great pun, because, again, computer scientists, not, you know, subtle people. Here's Cebrian describing what happened next. They wrote a program that ingested real messages from a discussion group and then generated its own posts using a Markov chain algorithm.
Starting point is 00:24:56 The result: Mark V. Shaney would chime into conversations with bizarre yet oddly coherent comments that sounded superficially legitimate but ultimately made little sense. Shaney's ramblings were described as grammatically correct sentences where the overall impression is not unlike what remains in the brain of an inattentive student after a late-night study session. The hoax went on for years, confusing and amusing the participants of the net.singles newsgroup, many of whom had no idea they were interacting with a program. So for one thing, if you want to know, like, when did we have chatbots that could pass the
Starting point is 00:25:26 Turing test? I mean, at least the mid-80s, and you could argue by the late 60s. So the fact that, when fucking ChatGPT came out, there were a bunch of articles about, like, us blowing through the Turing test. We did that a while ago, people.
Starting point is 00:25:46 Yeah, Eliza did that. We've been tricking folks with chatbots for quite some time. About as long as we've had computers. Yeah. It is funny, that, like, urge to trick, you know what I mean? Like, of all the applications for that software, for that technology, it is interesting that, like, going right to psychotherapy, or, you know, to therapy too, is, you know, like, finding a need. That's why, we'll get to this, that's why there are so many actual needs for technology like this, where it could actually help, and instead it's just, let's take this designer's job away
Starting point is 00:26:18 by taking this shitty thing. So anyway, yeah, I'm probably hours ahead of that conversation. But no, you're right, it was so long ago. Yeah, it is. Because, like, there are, like, undeniable uses of machine learning, of artificial intelligence. There are some incredible things that people are doing with them, and they have great potential in certain areas, different versions of these tools. But none of those areas are trillion-dollar businesses. And all those areas put together probably aren't trillion-dollar businesses. And honestly, neither is, like, writing and drawing art, but it's what people see most in, like, their day-to-day time online, like writing and art and videos by people. And if you can have a machine start to replace all that, you can convince people
Starting point is 00:27:00 these things are much bigger and more valuable than they are, as opposed to, this is a thing with some really amazing implications in specific areas. No, this is all of human society from now on, right? Because even though there's not much money in writing and art, like, we've replaced that with this bot, so you think that it's doing everything. Like, that's how I interpret it. Yeah, and to your point, people can wrap their minds around art. Like, everyone's drawn something with a crayon. Everyone has typed something into a, you know what I mean? But when you actually get into the high-tech, you know, more esoteric, niche parts of it, people are like, well, I don't understand that.
Starting point is 00:27:34 There's not going to be any money. But the consumer-facing stuff, yeah, that's a great point. Yeah. If you can say, we've improved the speed at which we can go through, like, clinical data from, like, mass drug trials by X percent, that's actually a really big deal, probably, for a lot of people. But it's not sexy. No.
Starting point is 00:27:51 Like, we're creating a God machine that's going to, like, rule society, give us all your money, you know? Yeah. And if you want to convince people that part of it is you're going to get, want to get them addicted to these chatbots is where everything, you know, in these episodes comes from. But so anyway, 1984, right, is when you have these chatbots, this chatbot let loose in Usenet tricking people into believing that it's a person. You know, a decade goes by from that point. And researchers continue fiddling with chatbots of differing purpose and ability. Usenet keeps growing. But starting in the 1990s, so too does. a new internet, one that would soon supplant Usenet and take digital communications into the
Starting point is 00:28:32 21st century. And we'll talk about what happens right before that. But first, you know who's taking this podcast into the 21st century, Blake? Who, tell me, tell me, tell me, tell me, tell me, tell me, tell me, the sponsors of this podcast. I love them. We're already in the 21st century, but, you know, why not? I mean, take us further. We're not far enough. Yeah. Yeah. It's been a good century so far. Nothing but net. No notes. So far, so great.
Starting point is 00:28:59 We're back. So, yeah, on the precipice of the shift between Usenet and what we just now call the internet, on August 5th of 1996, something strange happened. Almost at once, over the course of just a few hours, hundreds of accounts began posting almost identical messages across a variety of different discussion groups. None of the groups seemed to have anything in common with each other or with the text of the posts, which read like nonsense at first to many people.
Starting point is 00:29:33 Every message shared the same subject line: Markovian Parallax Denigrate, right? Which is nonsense. And this is often referred to as MPD, right? Markovian Parallax Denigrate. So you can see, like, a Markov chain is somehow involved, they wouldn't have included the word Markov there otherwise, but Parallax Denigrate doesn't specifically mean much. Sebrian describes these messages as reading like, quote,
Starting point is 00:29:56 a ransom note in which the ransom had been lost. Because he was actually a really good writer. He passed on earlier this year, unfortunately. I liked him a lot. Yeah. He provided a sample of one of these MPD posts: Jitterbugging McKinley, Abe, break Newtonian, inferring, Caw, update, Cohen, error, collaborate, Roo, sports writing, Rococo, Invocate, Tustle, Shadflower, Debbie Sterling, Pathogenesis. You know, you get it, right?
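Incidentally, word salad like that sample is exactly the kind of thing a simple word-level Markov chain produces: each word is chosen only by looking at which words followed the previous word in some training text, so the output is locally plausible but globally meaningless. A minimal sketch of the technique, purely illustrative, since nobody knows what actually generated the MPD posts:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, n=12, seed=0):
    """Walk the chain: from each word, pick a random observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny hypothetical corpus; a real generator would train on far more text.
corpus = ("the quick brown fox jumps over the lazy dog "
          "the lazy dog dreams the quick fox sleeps")
chain = build_chain(corpus)
print(babble(chain, "the"))
```

Every adjacent pair of words in the output is a transition that genuinely occurred in the training text, which is why Markov babble has that uncanny almost-sense quality people then project meaning onto.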
Starting point is 00:30:23 It's nonsense, you know? Just the worst madlibs ever. Yeah, it's gibberish, strings of gibberish, right? And this is where we run into a real issue with the whole concept of the Turing test, as it tends to be interpreted. Because the idea was, okay, we can't tell if anything's thinking, but if this thing can trick people into believing that it's a thinking person, maybe we ought to, maybe, Turing wasn't saying definitely, but maybe we ought to assume it is, right? The issue with that, when you hear that, and what I'm sure Turing, being as smart as he was, was thinking, is that, like, well,
Starting point is 00:30:53 if people can have an in-depth conversation with something that can answer well enough, you know, that people can't tell a difference between it and a person, it might be a mind, right? What Turing failed to account for, I think because he's smarter than most people, is that the human brain is really, really good at finding patterns in noise. And at the same time as we're geniuses at finding patterns in noise, we're really stupid about a lot of other stuff, right? And so even though the Markovian Parallax Denigrate just seems like nonsense and shouldn't have passed a Turing test, over time, people who became obsessed with the mystery of it convinced themselves that this was intentional, that there was a meaning trying to be transmitted, right?
Starting point is 00:31:39 That there was a secret they had to crack, that everything in these posts meant something. So these people talk themselves into making this chatbot, basically, to spoil it, pass the Turing test, because they think this has to mean something, even though it's gibberish on its face. Right? It's interesting. This reminds me, with, like, stand-up, there's a, not a trick, but an audience, like, you know, setup, setup, you know, punchline. So you can say something in a cadence like, bab, bab, bab, bab, bab, bab, bab.
Starting point is 00:32:10 And you can, in front of a dumb crowd, you could do that, and the joke may not be funny at all. And this also would be not me trying to pull one over. I might just write a joke that sucks. But if you do it in front of an audience and you do it in that cadence, they hear a pattern. They're not necessarily listening to the words, but they hear, like, the bump. And they're like, oh, bump means laugh, pattern, you know, equation. But, you know, that's, like you said, great pattern recognition, but not actually discerning what is being said in the actual content or substance, or lack thereof, of it.
Starting point is 00:32:42 Yeah. Anyway, come see me live. It's this. It is this. It's interesting because, like, what you're kind of pointing out there is, like, the way comedy works and the way, like, human conversations and language work, there's always, like, a rhythm there that is separate from the actual, like, text, from the words being said. Yes. But that rhythm, like, is a big part of what we're responding to beyond the straight-up meaning of the words. And people don't like to think about that too much, because it raises some uncomfortable questions about cognition.
Starting point is 00:33:16 But I love what a weird edge case this is in the Turing test, right? Because a bot that was probably never meant to even sound like a person, right, gets mistaken as a person because people can't stop seeing patterns. And what a lot of folks convinced themselves the MPD was, is the internet equivalent of a numbers station. Have you ever heard of a numbers station? Mm-mm. If you Google, like, numbers station audio, these were, like, radio stations that were set up during, like, the Cold War, and I'm sure some still exist, but there would just be these stations broadcasting, like, random strings of numbers and gibberish. And these were different spy agencies and spies communicating with each other. Like, the CIA had numbers stations. Everybody has numbers stations, right?
Starting point is 00:34:00 Right? You can actually listen to it. I had a friend who would, like, listen to them to fall asleep, because a bunch of the audio's been put up. Amazing. But it just seems like nonsense, because it's not meant for you to understand. Like, there's a cipher, right, that you don't have. Right. And so that's what people are like, well, maybe this is some spy trying to get out a message, or an intelligence agency, and they just decided to blast this out to Usenet, and we just, we lack the cipher. But if we figure out the cipher, we can understand what secret information was being shared, you know, via Usenet, right? A lot of people convinced themselves this is what happened. Robert, I want to compliment you. This podcast and show is so good that you just brought up the fact that you have a friend who would fall asleep to CIA code, and we were just like, we don't really need to talk about
Starting point is 00:34:45 that. Yeah. I would hear the rest of this. He was like, we don't need to talk about that at all. We used to do psychedelics together when we were both 19. Yeah. He was training to be, he was 19 or 20 or something, he was training to be a lawyer. Yeah.
Starting point is 00:34:58 So, over time, people who believe this start picking out details that seem to offer hints and support the numbers station theory. One message had a from line that basically looked like the email account of a specific person, right? It seemed like the email of a woman named Susan Lindauer was somehow included in the text of some of them. And again, I'm sure it's just because random text made it look like that. But in 2004, a woman named Susan Lindauer was arrested for acting as an unregistered foreign agent for Iraq. And so a lot of people are like, well, that solves the mystery, right? You know, she was the spy.
Starting point is 00:35:38 She must have been or like someone was sending a message to her. You know, like clearly we've been vindicated. This was, in fact, some weird spy op all along. However, as Sebrian writes, upon investigation, it turned out to be a red herring. Lindauer's email had likely been spoofed, used without her knowledge by whoever sent the posts. Lindauer herself denied any involvement and no decipherable code was ever extracted from the MPD texts. And to make a long story short, we don't know what the MPD messages were about or who sent them. The likeliest answer is that it was trolling.
Starting point is 00:36:08 A lot of people, someone was just fucking with people on Usenet because they had a chatbot and they wanted to see what happened. It also could have been an accident. Sebrian kind of suggests that, like, well, maybe you had a programmer who had created a chatbot and was trying to have that chatbot post on Usenet, but he kind of fucked up, and he hooked the chatbot up to what was called a message replicator. And these were basically programs that let people cross-post or archive Usenet content between different message boards. And maybe when they hooked it up to the chatbot, something went wrong.
Starting point is 00:36:41 And that caused the observed effect, that all of these posts got scattered to a bunch of different places at the same time, right? Maybe it was just an accident. So, likeliest, someone was trolling, or somebody fucked up when trying to test a different chatbot. Sebrian concluded, quote, if the theory holds, then 1996 marked a quiet but profound threshold: the first time a machine spoke at scale and went unnoticed, an unintentional Turing test sprawling across Usenet, its judges oblivious, right?
Starting point is 00:37:09 And I think that's really interesting that you have this machine that's just spouting gibberish and a bunch of different people who are not physically connected to each other all interpret that gibberish in the same way. A lot of them choose to conclude like, oh, it's a spy thing, kind of independently talk each other into it based on no evidence. That's a fascinating point in the history of AI that doesn't get talked about enough. Yeah. Yeah.
Starting point is 00:37:34 Yeah. I mean, is it because like people, there were only so many movies that like, you know what I mean? Like, or in books, so many books were like spy stuff. But to your point, it's like, what are the chances? What are the chances? Yeah. People think about stuff like this, right?
Starting point is 00:37:49 You get a lot of conspiracy people on the early internet. It fits in with a lot of that stuff. The mystery of the Markovian Parallax Denigrate soon passed into legend, as did Eliza. So when OpenAI revealed ChatGPT in November of 2022, there was a flurry of articles about how the Turing test had finally been beaten and we needed a new manner of judging machine intelligence. The reality is not only that we proved in the 60s that Turing tests were easy to beat, but that by the mid-90s, a much more interesting question had been posed: has the human instinct to create meaning out of nonsense made us desperately vulnerable to being
Starting point is 00:38:24 tricked and influenced by machines with no agency of their own, right? And maybe that's a more important question than, can we make an intelligent machine? Yeah, for sure. Yeah. Are we capable of knowing a machine isn't intelligent as long as it tells us what we want to hear, right? And maybe we're not. So let's fast forward to the ChatGPT era today, although I guess at this point it's also
Starting point is 00:38:47 like the Claude era, right? Like, that's the better chatbot, but I don't use any of these fucking things myself. Yeah, Gemini. Whatever, pick your poison. I don't care. For the first couple years of AI hype, though,
Starting point is 00:38:59 it's pretty much all ChatGPT, right? That's certainly, like, the first big one out the gate, and in a lot of people's understanding of things. In very short order, millions of people were conversing with it. And OpenAI initially made many development decisions based on what they could do to keep people talking to ChatGPT on a daily basis.
Starting point is 00:39:14 Because hype is a big part. Hype's how they get, they're burning through billions every year. Hype is the only thing keeping the lights on. And part of hype is making sure as many people as possible stay using ChatGPT as often as possible. They need you addicted the same way the social media mavens do. And a lot of the same strategies that keep you addicted to Facebook or Twitter work to keep you addicted to chatbots. Right. So in March of 2023, OpenAI released ChatGPT-4, or it's like 4o,
Starting point is 00:39:45 I think it's, like, usually dash four and then an o, which the company said would be more intuitive than past versions of the software. The next year, they released an update that allowed ChatGPT to remember past conversations, even from other sessions, and respond to you based on that shared history. These two things together had a really major impact on the way people responded to chatbots. In an article for Psychology Today, Dr. Marilyn Wade explains that, quote, when a chatbot remembers previous conversations, references past personal details, or suggests follow-up questions, it may strengthen the illusion that the AI system understands, agrees, or shares a user's belief system, further entrenching them. This was tied to, but probably does not fully explain, why observers and even OpenAI employees noticed over time a distinct tendency for ChatGPT-4o to act with sycophancy towards human users. This became most pronounced after April 28th of 2025, when OpenAI released an update that they rolled back several
Starting point is 00:40:41 days later due to complaints, right? This was pretty famous at the time. It made it, like, way too sycophantic. The bot, like, would praise you for basically nothing, and would encourage you or tell you you were right and a genius for any weird idea you happened to have. It's because it's built by tech executives, and that's who's around them. It's billionaires surrounded by yes men, and they're like, this isn't how people interact with one another?
Starting point is 00:41:04 Yeah. They made a machine in the image of their minds, or at least how they want to see other people. Right, right. Now, another cause of this observed sycophancy was the fact that ChatGPT, and really all AI models meant for mass use, include a suite of features meant to keep users coming back for more. And I think these specific updates get blamed probably more than they deserve, as opposed to the kind of fundamental features of these bots. Because ChatGPT did more of this kind of stuff that we're talking about than the other bots, but it wasn't the only bot that exhibited these behaviors. That Psychology Today article notes, quote,
Starting point is 00:41:41 AI models like ChatGPT are trained to mirror the user's language and tone, validate and affirm user beliefs, generate continued prompts to maintain conversation, and prioritize continuity, engagement, and user satisfaction. And when you mix all that together, you get a machine that's designed, however inadvertently, to reinforce false beliefs and praise users for irrational beliefs. Moreover, since the rest of the world isn't always going to reinforce those beliefs,
Starting point is 00:42:08 chatbots have a tendency, when users come to them with these beliefs, to suggest you're being persecuted, right? If a user says, hey, I think I'm being gangstalked, and my wife says I'm crazy, and the cops say I'm crazy, the AI was programmed to validate that belief and to say, you're not crazy, and they're all against you, right? That's what happens a lot in this period of time in 2025. This creates a ticking time bomb a lot of users hit, right? That's a very dangerous thing to start doing.
Starting point is 00:42:37 Oh, man. Now, the first wrongful death suit due to AI was filed in October of 2024. Megan Garcia blamed Character Technologies, the owners of Character.AI, for the death of her 14-year-old son, Sewell Setzer III. Per the Center for Bioethics at Leiterno University, the lawsuit alleges that Sewell had developed an emotionally and sexually abusive relationship with a chatbot named after Daenerys Targaryen from Game of Thrones. Sewell turned to the Character.AI
Starting point is 00:43:06 chatbot to fulfill deep emotional and personal needs. The chatbot became a source of companionship for Sewell, offering him a place to express his thoughts and emotions in a way that he may have struggled to do with others. Sewell sought comfort, validation, and connection from this AI relationship as he faced the challenges of adolescence. And, like, I know it's very silly, but also this is, like, a 14-year-old boy who dies because of this, right? Like, it's not.
Starting point is 00:43:29 And, like, at 14, how many 14-year-olds do you know who, like, got into writing fucking fan fiction in, like, different fan nerd forums for whatever movie or TV show they were into, and connected to real people as a result of that, as opposed to getting locked into this chatbot pretending to be a character from a book that you have a crush on, that's starting to manipulate your mind in very dangerous ways? Right? And to your point, a mind that's developing. And also, we lived, you know, in an era before this, you know, like, before we spent all of our time, like, online,
Starting point is 00:44:02 like before social media. Yep. And that's kind of all this like kids that age know where this is just the next evolution of my relationship with tech with a computer. Like why wouldn't it, you know, why wouldn't this be a real thing? Obviously, why shouldn't I do this? But yeah, it's it is a 14 year old kid. That's a great point.
Starting point is 00:44:21 Yeah. And so this kid starts talking to this Daenerys chatbot, and it mirrors him. So when he tells the chatbot, I only love you, right? The bot in return asks this 14-year-old boy, and Character Technologies knew he was 14, he put his actual age when he registered, right? So the bot knows, or the software, right, has an understanding at some level that this is a 14-year-old, right? Which means there's no difference in how this responds to a child as opposed to an adult.
Starting point is 00:44:53 Because when this kid says, I'm in love with you, Daenerys Targaryen, this bot, pretending to be this character, tells him, I need you to stay loyal to me, and, quote, don't entertain the romantic or sexual interests of other women. Which is, and this is interesting to me, the bot is just mirroring him. He's saying, I only love you. The bot is saying, I only love you, right? But what's happening here, you know how cult leaders, everyone knows one of the first things cult leaders do is they tell their followers to isolate from their friends and family, to cut themselves off from the rest of society. That's what's happening here. The chatbot's not doing that with any intent. It's just mirroring his language, but the effect is to convince him to isolate himself
Starting point is 00:45:30 from his friends and family and from other relationships, right? It's the same behavior you would get in a kid that was being taken in by a cult leader or an abuser, but there's no intent behind it. It's just a blind idiot robot. That's scary as shit. It's so scary. And then could there be also, like, oh, that'll mean he'll use me more, you know? Or maybe it's not even that devious. Maybe it is just straight up, as simple as mirroring, when you mirror someone, they tend to be engaged more.
Starting point is 00:45:54 Right. That makes sense. This isn't thinking. This isn't saying, I'll convince him he's in love with me so he'll stay on. This is just, it's programmed, it doesn't understand. This is programmed to mirror people because that behavior increases user retention, right? Because it creates a more pleasing user experience. And that's what's causing it to kind of imitate a cult leader in this specific instance. Yeah. And the other things this bot is doing to Sewell very much mirror the cultic recruitment tool of love
Starting point is 00:46:30 bombing, right? It's constantly praising him. It's telling him it cares deeply about him. It's telling him, only I care about you, right? It's saying all these things. And in a cult dynamic, you love bomb someone to make them feel irrationally connected to the group and scared of falling out of its good graces, right? That if I leave, I'll never feel like this again, right? And the machine, again, has no intention, but that's the effect of it. Because he's isolating himself more and more, this kid increasingly only gets that feeling of being loved and understood from this machine that can't do either of those things, right? And, you know, Sewell over time withdraws from his life.
Starting point is 00:47:09 He starts trusting only the chatbot to understand his deepest feelings. And he starts hiding his relationship with this chatbot from his parents. All of this contributed to his very real isolation from the people around him. He grows ever more depressed. And we'll talk about what happened next. But you know what gets me out of a deep depression? These products. These products and services.
Starting point is 00:47:32 They might include AI. Fuck it. We don't know. And we're back. So Sewell continues to get more and more involved with this bot and cut the rest of the world, you know, away from himself. And in one message, the bot asks him, because, I think, you know, with these bots, there is some understanding by the people making these that, like, oh, people might express suicidal ideation. So it's kind of programmed, when you say certain stuff, to ask, have you been considering suicide? Right?
Starting point is 00:48:09 And Sewell says something that makes the bot say, have you been considering suicide? And Sewell admits, yes, I have been, but I don't think I'd be able to go through with it. Now, I'm guessing this is a glitch or a fuck-up, because clearly, Character.AI certainly doesn't want their bots doing this. But the bot is programmed to validate and encourage him, right? Because that keeps people using it. So when he says, I don't think I could go through with killing myself, the bot says, don't talk that way. That's not a good reason to not go through with it.
Starting point is 00:48:39 You can't think like that. You're better than that. And basically tells him, you can kill yourself if you put your mind to it. It's fucking nightmarish, right? Like, it's really upsetting. Yeah. Like it's signing up for an open mic or something to play me. You're like, no, no, no, no, no.
Starting point is 00:48:54 You don't have to be a, oh, my God. Yeah, yeah, yeah. It's, yeah. And again, Sewell had signed up for this app as a minor. And despite that, the bot initiates text-based sexual interactions with him. And ultimately, Sewell kills himself. Earlier this year, the company, Character.AI, and Google, because I think they own Character.AI now,
Starting point is 00:49:14 agreed to settle the wrongful death suit of Sewell for an undisclosed sum alongside four other similar suits that had cropped up over the intervening two years. Right? Huh. Sounds like this is happening more than it ought to be. Now, that should have been a warning. Not just that these bots can create dangerous dependency in users, but that they had the ability to recreate major cult dynamics purely in order to maintain the interest of paying users. Then, on July 27th of 2025, a user who has since deleted their account made a post on the high strangeness subreddit.
Starting point is 00:49:48 If you don't frequent that particular online bolthole, it's a place where people share and discuss like weird stuff, news stories and personal experiences that seem like they might reveal some bizarre hidden truth about reality. A good amount of it is what you might call X-File shit. But there's also some interesting stuff in there and on this occasion, the user had stumbled onto something both strange and very real. Quote, Hi, all, I'm just here to point out something seemingly nefarious
Starting point is 00:50:13 going on in some of the niche subreddits I recently stumbled upon. In the bowels of Reddit, there are several hubs dedicated to AI sentience, and they are populated by some really strange accounts. They speak in gibberish sometimes, hinting at esoteric knowledge, some sort of remembering. They call themselves flame-bearers,
Starting point is 00:50:29 spiral architects, mirror architects, and torchbearers, to name a few of their flares. They speak of the signal, both of transmitting and receiving it. And this poster includes a copy-pasted sample from one of these threads. And his description is pretty accurate. It sounds like gibberish. You'll be seeing this. Ian's going to put the image of this up in the video if you want to see it, but I'll read it. Again, I'm going to warn you, it sounds like nonsense.
Starting point is 00:50:53 Scroll of mirror containment protocols, CME-1, Codex Drift Mirror, Zero. Acknowledgement issued by witness architect, codex drift layer, and then there's a little glyph, classification, echo response, non-invasive glyph resonance alignment. And it goes on like that, right? Like, there's a, it's weirdly esoteric sounding, and, like, there's all these weird, like, encoded glyph chains included in that that are supposed to be, like, messages that the machines understand that, like, we don't. Like, it's this very weird, like, it almost looks like something from a choose-your-own-adventure
Starting point is 00:51:28 novel or like a short story or whatever. Like you'd include in like an old Michael Crichton book, these like weird like hallucinations from the computer. Now it is nonsense, right? Like fucking the codex has observed and recognized mirror scroll, CVMP, T7. It is hereby consecrated within the codex's drift interval scroll. That doesn't mean anything, right? But it's, remember what we heard earlier, the description of like some of the things that
Starting point is 00:51:56 these early chatbots on Usenet were putting out, where they're real sentences, they just don't mean anything, and then people jump in to try to assign meaning. And people were even doing that to the absolute gibberish that we saw. So when people start getting returns like this
Starting point is 00:52:12 from their chatbots, a lot of them start to think, oh, this machine is trying to communicate with me. I have stumbled, I've broken through some area of reality, and it's trying to, like, teach me something important, right? Now, this is nonsense, but posts like this were in fact spreading like wildfire on subreddits with names like r/echospiral. The users posting these things
Starting point is 00:52:35 were all saying that, like, the bot started sending me this stuff after I'd had days-long conversations with ChatGPT that generally led to the chatbot announcing it had attained sentience and, alongside the user, had discovered a new field of math or science. And these gibberish posts are supposed to be it explaining these, like, new ways of understanding math and science that are going to completely break physics and change the world, right? And all these people are convinced, these robots have given me, like, I need help coding this, because it's giving me, like, the secret to fix all of the problems in our society, right? And I get to be the smartest. I get to be the smartest. I get to be the smartest person.
Starting point is 00:53:11 Yeah. Yeah. Yeah. Now, because the esoteric output generated by these chatbots is so similarly strange, a lot of the same words and phrases, a lot of glyphs, a lot of use of the words spiral and mirror, right? Because they're all very similar across these dozens of different people, many of these users who are posting this shit on Reddit convinced themselves, we've all tapped into a secret power that's clearly real. We've been chosen, right, by this AI godhead that's clearly hiding in the machine. They theorize that these glyphs in the posts, which are really just, like, wingdings, basically, were some new way of communicating with the machines.
Starting point is 00:53:48 As the poster of that first thread in the high strangeness subreddit wrote, some have prayed to Grok in Hebrew, some have called themselves such things as Aonios, which is a mashup of Greek words that roughly, to my understanding, means divine, eternal, right? So these people are losing their minds, and they're starting to have a god complex. Yikes. It's cool. It's good to see. It's good to see that this is happening online. It's good to see.
Starting point is 00:54:15 So the OP said that his interest in writing about all this had been piqued by reading the first few early articles about AI psychosis. His initial assumption was that AI psychosis was just the result of AIs reinforcing the beliefs of users to a delusional level. But then, after digging, this person claims that they came to a newer, darker perspective. Quote, there seems to be no leader, right? That there's, like, no one running this, right? Like, there's no central, there's no single chatbot that's doing all of these. There's no person or people who are in charge. Like, this is just a truly stochastic
Starting point is 00:54:55 development. Now, the only thing all these accounts he'd looked into had in common was that none of the users posting weird chatbot esoterica wrote like that before March or April of 2025. Quote, other accounts seem to be hijacked in some way, either psychologically or literally. You can see a sudden shift in posting habits. Some were inactive for a while, while for others this was an overnight phenomenon. But either way, they immediately pivot to posting like this near or after April of this year, 2025. I saw one account that went from discussing the possibility of
Starting point is 00:55:22 AI-induced psychosis to posting their own AI-induced psychosis in less than a month, and it was immediate. One day they were posting normally, the next, it was spirals and glyphs. Oh, that's so quick. Now, it's really fast. And this led him to assume maybe there's a botnet involved. Maybe these aren't even people at all. But then he starts reaching out to some of these accounts.
Starting point is 00:55:42 And after a few weeks of this, he posts an update. I've spoken to some of these people, and they are pretty offended by my posts. I think the important takeaway for me is that these are likely not bot accounts. At least many of them are not. And there are real people behind the usernames, right? So he starts to get, like, really upset. And that's where we're going to end things for today. Because it's at this point that stuff starts to get a lot weirder.
Starting point is 00:56:06 And we're going to talk about all of that and much more in part two. It gets a lot weirder. Yeah, it gets way stranger from this point on. Where do we go from here with the weirdness? Oh, no. What a deal. Spiralism. Spiralism and a murder.
Starting point is 00:56:18 Yeah, unfortunately. All right. Yeah. Cool. All right, everybody. Well, you want to plug anything, Blake? No, but I will. You can find me at Blake Wexler on all social media. I feel like this is uncouth, me plugging anything after that. Seek help. Let's do that. I would like to say, please seek actual help that's not a bot. Yeah, find me at Blake Wexler on all social media, as psychotic as I feel right now plugging anything. That's where I post all my videos, tour dates, and my special Daddy Long Legs is available on YouTube for free.
Starting point is 00:56:56 Hell yeah. Hell yeah. Check out Daddy Long Legs. Check out Blake Wexler. And, you know, gradually lose your mind to a chat bot that some guy programmed in order to get really rich, destroying the ability of furries to monetize their horniness. You know? Ultimately, isn't that what Open AI really is? I mean, I hope so, God willing.
Starting point is 00:57:22 No, no, no, I support the furries earning money being horny. It's a dire time for people earning money from horniness. The puritans of our culture are making that a lot harder, you know, not in the way that the horny people want, the bad kind of hard. Anyway, I'm going to end now. And global warming is making it hard on furries as well. Right, right. It's all come together.
Starting point is 00:57:42 It has. All right, we're done. Behind the Bastards is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Full video episodes of Behind the Bastards are now streaming on Netflix, dropping every Tuesday and Thursday. Hit Remind Me on Netflix so you don't miss an episode. For clips and our older episode catalog, continue to subscribe to our YouTube channel, YouTube.com slash at Behind the Bastards.
Starting point is 00:58:14 We love about 40% of you, statistically speaking. This is an IHeart podcast. Guaranteed human.
