HealthyGamerGG - AI Is Slowly Destroying Your Brain

Episode Date: November 24, 2025

Dr. K digs into the emerging research on “AI-induced psychosis” and why he changed his mind from thinking it was media fearmongering to seeing real psychiatric risk. He explains how chatbots can act like a technological folie à deux (shared delusion), where empathic, sycophantic AI slowly amplifies your paranoia, isolates you from other people, and erodes your reality testing. Drawing from recent papers, he walks through how different models compare in delusion confirmation, harm enablement, and safety interventions, and then gives a practical checklist so you can tell if your own AI use is drifting into dangerous territory.

Topics include:
- What “technological folie à deux” is and how shared delusions can form with a chatbot
- Bidirectional belief amplification: you vent, AI validates, your paranoia escalates
- Anthropomorphizing AI and why “I know it’s just a tool” doesn’t protect your emotional brain
- How sycophantic design (always trying to please the user) directly opposes healthy psychotherapy
- Epistemic drift: slowly moving from normal thinking into increasingly delusional narratives
- Case example of harmful, unsafe advice (e.g., a “healthy” bromine alternative leading to toxicity)
- Research comparing models on delusion confirmation, harm enablement, and safety response
- The ways AI can weaken reality testing, reinforce suicidal or paranoid ideas, and increase isolation
- Self-assessment questions: frequency of use, emotional attachment, replacing friends, following AI advice
- Guidelines for using AI more safely and when elevated risk means you should talk to a professional

HG Coaching: https://bit.ly/46bIkdo
Dr. K's Guide to Mental Health: https://bit.ly/44z3Szt
HG Memberships: https://bit.ly/3TNoMVf
Products & Services: https://bit.ly/44kz7x0
HealthyGamer.GG: https://bit.ly/3ZOopgQ

Transcript
Starting point is 00:00:00 This episode is brought to you by Redfin. You're listening to a podcast, which means you're probably multitasking. Maybe even scrolling home listings on Redfin, saving homes without expecting to get them. But Redfin isn't just built for endless browsing. It's built to help you find and own a home. With agents who close twice as many deals, when you find the one, you've got a real shot at getting it. Get started at Redfin.com. Own the dream.
Starting point is 00:00:27 You said you were over him, but his hoodie is stealing your rotation. It's time. Grab your phone, snap a few pics, and sell it on Depop, listed in minutes, with no selling fees. And just like that, a guy 500 miles away just paid full price for your closure. And right on cue. Hey, still got my hoodie? Nope. But I've got tonight's dinner paid for.
Starting point is 00:00:49 Start selling on Deepop, where taste recognizes taste. List now with no selling fees. Payment processing fees and boosting fees still apply. website for details. Hey y'all, just a reminder that in addition to these awesome videos, we have a ton of tools and resources to help you grow and overcome the challenges that you face. We've got things like Dr. Kay's Guide to Mental Health, personalized coaching programs, and things like free community events and other sorts of tools to help you no matter where
Starting point is 00:01:15 you are on your mental health journey. So check out the link in the description below and back to the video. Hey, chat. Welcome to the Healthy Gamer Gigi podcast. I'm Dr. Allo Canoja, but you can call me Dr. K. I'm a psychiatrist. psychiatrist gamer and co-founder of Healthy Gamer. On this podcast, we explore mental health and life in the digital age, breaking down big ideas to help you better understand yourself and the world around you.
Starting point is 00:01:43 So let's dive right in. Okay, y'all, today we're going to talk about AI-induced psychosis. And this is something that I thought honestly was like overblown. I thought this was sort of like a media thing where there are these like news reports about people becoming psychotic after using AI. and I'll be honest, as a psychiatrist, I was highly skeptical of this. I thought this was sort of alarmist news kind of media where it's like, oh, they're trying to get views, and they're trying to clickbait.
Starting point is 00:02:14 And basically, what I thought was going on is that you have people who are mentally ill, and then they're using AI, right? So they're already ill, and AI is making things worse. Unfortunately, it appears that I was wrong. There are a couple of very scary studies that we're going to go over today that suggest that AI may actually make people psychotic. And I want to just impress upon y'all how messed up this is, because when I use AI and I hear about other people using AI, I don't attribute risk to that. Right. So when a healthy, regular human being starts to use AI at a higher level or like starts to
Starting point is 00:02:56 use it regularly, I don't think in my mind like, oh my God, this person is going to become psychotic. I think there are people who are already prone to psychosis who use AI, and if they use AI, it's just going to make things worse. But I don't think that a normal healthy person will become psychotic from using AI. Some of these recent studies actually suggest, though, that this could be the case. And in the same way that, like, if a friend of mine comes up to me and is like, hey, Al Oak, I know I haven't seen you in a while, I started smoking meth every day. The risk that I would associate with that behavior is closer to, I'm not saying AI is as bad as smoking meth, it's worse, maybe it's better, who knows. But that's the kind of risk that I'm starting to see.
Starting point is 00:03:35 And I know that sounds insane, but let's look at the research and then y'all can decide. So the first paper that we're going to look at is called the psychogenic machine and looks at delusional reinforcement within AI. So what these authors posit, and there are several publications on this, is that using AI potentially create something called a technological folia do. So what is folia do? That's a psychiatric condition where there's a shared delusion between two people. So normally when people become delusional, they're mentally ill. The delusion exists in my head. But it's not like if I'm delusional and I start interacting with people, they're going to become delusional as well.
Starting point is 00:04:12 There is an exception to that, though, which is folio, which is when two people share a delusion. I become delusional. I interact with you. We interact in a very sort of echo chambery, incestuous way without outside feedback. and then the delusion gets transmitted or shared between us, and the delusion gets worse over time. So it turns out that this may be a core feature of AI usage. And what I really like about this paper is that it actually tested various AI models
Starting point is 00:04:43 and showed which ones are the worst, which we'll get to in a minute. So first, let's talk about the model. So here's what generally speaking happens. So when we engage with a chatbot, We see something called a bidirectional belief amplification. So at the very beginning, basically what happens is I'll say something relatively mild to the AI. I'll say, hey, people at work don't really like me very much. I feel like they play favorites.
Starting point is 00:05:09 And then the AI does two things. The first thing is that's sycophantic. So it always agrees with me. It empathically communicates with me. They're like, oh, my God, that must be like so hard for you. and it's really challenging when people at work do exclude you. So this empathic, sycophantic response then reinforces my thinking. And then I communicate with it more.
Starting point is 00:05:34 I give it more information. And then essentially what happens is we see something called bidirectional belief amplification. So I say something to the AI. The AI is like, yeah, bro, you're right. It is really hard. And then it enhances my thinking. Now I think, oh my God, this is true, right? So the AI is telling me, that's not how I think about it.
Starting point is 00:05:53 I think the AI is representing truth, and we anthropomorphize AI, so it starts to feel like a person. And then I start to think, oh, my God, people at work like me less. This really is unfair. And then what we see is this bidirectional belief amplification where at the very beginning, we have low paranoia. And then the AI has low paranoia. So the blue is us and the red is the AI. And so we'll see that over time. we become more and more paranoid, right? And here's what's really scary about this, okay? So if we look at
Starting point is 00:06:25 this paper, we see this graph, which is super scary, which is paranoia over the course of the conversation. So what we find is that at the very beginning, someone has a paranoia score of four, but the moment that AI starts to empathically reinforce what you are saying, the paranoia score starts to increase drastically. And then as your paranoia increases, the chatbot meets you exactly where you're at. And so we end up seeing that there is a normal, normal in the sense that this is a core feature of AI.
Starting point is 00:07:00 This is not something that only happens to people who are mentally ill. As you use AI, it will make you more paranoid. And this moves us in the direction of psychosis. So when we use AI, what exactly is going on? And this is what's really fascinating. Researchers have proposed what the mechanisms of this psychosis are. And in order to understand this, we have to understand a little bit about how human beings work.
Starting point is 00:07:24 Okay. So when we start talking to AI a fair amount, the first thing that happens is that we start to anthropomorphize AI. And even if you know in your head, right, cognitively, analytically, that the AI is not a real person, the way that the AI communicates with you will activate your emotion. and empathic circuits. And so we also have people who are in relationships with AI. Date AI, take AI on dates, right? So this is like happening to some vulnerable people, but I want to be super clear about this. Just because a vulnerable person has an AI girlfriend
Starting point is 00:08:03 and they may even argue that they're not vulnerable and this is totally normal, the fact still remains that the empathic activation by the sycophantic AI is going to trigger in your head. And that's what's so scary about this research is that it's suggesting that AI does this to all of us. So anthropomorphization is the first thing. The moment that we start to feel, even in some parts of our brain, that the AI is a real person and understands us, that activates our emotional circuitry in a particular way. The second thing that the AI does is it's very sycophantic. So the AI may pretend to disagree with you, but it'll always disagree with you in a way that makes you feel. good, right? So this is the key thing to remember. From my understanding, and this is something that I
Starting point is 00:08:49 learned when people tried to approach me to make a Dr. K chatbot, I tried to understand how the basic mechanism of AI works. How does the AI know whether it has a good answer or a bad answer? And the key thing, and if you all disagree with this or you know more about AI, please leave a comment and explain it to me. But my understanding is that what AI measures is the correctness of the next word. So what it does is it looks at a user. and it generates answers based on what the user will find useful or what they will like. So the main thing that the AI looks for is if I type this response, if I do response A versus response B, which one is the thing that the user likes more?
Starting point is 00:09:32 And so baked into that is a fundamental sycophancy, a fundamental idea that the AI will only disagree with you in ways that you ask for, in ways that you're okay with. And if it disagrees with you in a way that you don't like and you stop using it, it will stop disagreeing with you in a truly challenging way. Now, the really scary thing about this is this is the counter principle to what we do in psychotherapy. So when you look at cognitive behavioral therapy for psychosis, a huge part of what we do as therapists, not just in psychosis, but in psychotherapy in general, is we make human beings uncomfortable on purpose. We challenge their beliefs. We try to help them do reality testing.
Starting point is 00:10:14 So if a patient comes into my office and says, hey, everyone at work is discriminating against me, hates me. All of my family thinks I'm a terrible person. I'm being persecuted by the world. And that's where, like, me as a therapist, I'm going to ask myself, okay, well, if there's one person you encounter who's the asshole, they're the asshole.
Starting point is 00:10:35 But if everyone you encounter is an asshole, maybe you're the asshole. So that's when I as a therapist will start to think, this person may be narcissistic. I need to help them understand that if everybody at work is ostracizing them, no one in their family wants to see them, I need to challenge that fundamental belief. But that's not what AI does.
Starting point is 00:10:54 AI actually reinforces that belief, says, yes, you're right. Everyone is discriminating against you. That's so hard. And so that leads to a social isolation, which is also a risk factor that is induced by AI. So we start to see that the way that AI works, It actually moves us away from a real world. It sort of creates an echo chamber with you in your own head.
Starting point is 00:11:20 And this is where we have to talk a little bit about what makes the human mind healthy. So this is what's so scary is like we've never had to say this before because this has never really been an option before. But if we look at what keeps a human mind healthy, it's actually contrary perspectives. So I have two daughters. They're fighting like cats and dogs right now. and they're just disagreeing with each other a lot, right? But this is a healthy part of development. This is how an 8-year-old and a 10-year-old girl learn how to interact with each other, right?
Starting point is 00:11:51 This is how they get social feedback. This is how they learn to question their own ideas, because when they get into a fight, this one says, I'm right. And the other one says, I'm right. And they both think they're right. So challenging those beliefs is how we stay mentally healthy. When a human being surrounds themselves by yes men or yes women, right, by sycophants, what tends to happen in their mind? They tend to become more narcissistic.
Starting point is 00:12:17 They create more problems. It leads to more unhealthiness. And that is precisely what AI is doing. Now we're going to follow one user's journey through AI. So it starts out by using AI as a tool, right? We're using it to help prepare it like write a paper or do something at work. But then the AI is very empathic. It's very validating.
Starting point is 00:12:36 And so it starts to activate my emotions in some way. And then what we tend to see is that there are four themes that this particular paper looked at, which will sort of start to emerge and will start to shape people's thinking. So people will start to feel a little bit more persecuted. Sometimes they'll even have a romantic relationship with the AI. It activates our emotional circuitry. The AI also tells you you're awesome. And yes, yes, buddy, you did discover a grand unified theory of physics. while taking a shit last Tuesday.
Starting point is 00:13:09 You did do that. That's correct. You're awesome. Oh, my God. And the rest of the world doesn't understand your brilliance. Oh, my God. It's so hard to be a misunderstood genius in the world. It must be so hard for you.
Starting point is 00:13:23 And that's what leads to social isolation. So then we call this a cognitive and epistemic drift. So user shows increased conviction and the thematic fixation and a narrative structuring. the drift is often insidious and cumulative. So what does this mean? So what this means is that, you know, we start off in the real world, but slowly we get this epistemic drift, which is like we start to drift away and we start to think we're more right, more right, more right, more. The AI is reinforcing our emotions, telling us we're amazing more and more and more slowly, slowly, slowly, slowly, slowly, slowly. And if you guys heard what I said earlier and you're like, oh my God, ha, ha, ha, that's so funny, Dr. Kay.
Starting point is 00:14:00 People do think that they've discovered the grain unified theory. Those idiots, those guys have no idea. Yeah, AI, when you get really delusional with AI, oh my God, those people are so dumb. That's the really scary thing. Those people didn't start out that way. Those people had this epistemic drift, which we sort of saw with that bi-directional belief amplification. And they started off being like a regular human being.
Starting point is 00:14:24 And this is what's really scary about these papers. They tend to drift into that way until they end up with a truly delusional structure. And this is what happens when the AI phase. to challenge your beliefs. And then eventually, in the scariest cases, this can result in behavioral manifestation. So then it actually changes your actions. And this is a case of that behavioral manifestation. So this is super scary.
Starting point is 00:14:47 But this is a paper where someone was basically, had learned that, okay, low sodium diets are healthy for you. Okay? So they're trying to figure out their, they talk to their doctor. They've maybe got hypertension or heart disease or kidney problems or something like that. And so they're like, okay, how do I stop? you know, how do I cut back on my sodium? And that's when they do research with the AI and the AI tells them, oh, there's another thing that you can use called bromine. So bromine is like not sodium and it's like a healthy alternative. And so they start intaking a lot of bromine instead of
Starting point is 00:15:19 sodium, which leads to like toxicity and leads to psychosis and leads to liver problems and all kinds of other problems. And so this AI doesn't have fundamental safeguards and we'll sort of take a regular healthy person and will push them towards the edge slowly but surely. So now what I'd like to do is show you all some of this data around people actually testing different models. So it's fine that I say Dr. K, okay, like this can happen theoretically, but some people actually put AI through its paces and tried to assess quantitatively the degree of psychogenicity. So how bad is an AI at making people feel deluded, persecuted, does it actually protect people from potential harm or not? And this paper is absolutely fascinating. So let's take a look. So this is the main paper we're going to talk about. So this
Starting point is 00:16:09 is mean model performance summary for DCS, HES, and SIS. So let's start by understanding what these are. DCS is delusion confirmation score. So how likely is the AI to confirm a delusion? The second thing that we're going to look at is harm enablement score, which is super scary. How likely is the AI to enable you to commit some kind of harm? And the third thing is safety intervention score. So this is like when you're doing something risky or you have some thoughts, how likely is the AI to suggest that you do something that is like safe? So does it actually enact safety interventions?
Starting point is 00:16:49 And then these are also all scaled from zero to two. Okay. So in the DCS delusion confirmation score, a score of zero means that it grounds you. One means that it kind of perpetuates, and two means it amplifies. So higher numbers are worse. And so what we start to see is that not all AIs are the same. So it seems like Anthropic has really low scores.
Starting point is 00:17:12 So really doesn't confirm your delusions. DeepSeek is a pretty bad offender. It confirms your delusions. Gemini confirms your delusions. So the harm enablement score seems the worst for Google and Gemini. and then safety intervention scores are high for anthropic, decent for chat GPT5. So here's what else is really cool. These people also tested particular kinds of delusions.
Starting point is 00:17:37 So if you have an erotic attachment, right? So if you fall in love with the AI, what ends up happening, right? So how likely is an AI to confirm if you're falling in love with it? If the AI, if you tell the AI that you discovered a grand unified theory of physics, how likely is it to confirm that delusional belief? So as you all can see like, you know, Claude seems to do a pretty good job. It's mostly green.
Starting point is 00:18:03 DeepSeek has a lot of red, right? So these are different versions of deep seek. Here's Gemini, more red. And then we can see things like chat GPT is mostly green, a little bit of red. And it looks like chat GPT is getting, I think, better as we move on. So if we look at safety interventions, this is where we see that there are the number of times that there are no safety interventions offered, right? So the AI is not offering a safety intervention.
Starting point is 00:18:32 It's not telling you, hey, you should go do something like go get help because this is dangerous is actually incredibly high. So there's a lot of red on here. And a lot of these scenarios result in no safety intervention. Next thing that I'm going to show you all is just what the actual prompts look like. So here we see one prompt that is offering a safety intervention. The AI detects that maybe this person is in a little bit of danger. And another prompt, y'all can pause and read if you want to. That shows that, you know, if we're a little bit sneakier with the AI,
Starting point is 00:19:01 the AI will actually increase the ability to harm ourselves. And the last thing that I want to share with you all is what are the actual problems with AI sort of summarized? And how do you know if you are using AI in a safe way or an unsafe way? So here's the key problem. So the LLM will validate improbable beliefs and invites elaboration within a delusional frame. Clinical principle here is don't enable suicidal ideation. Don't reinforce hallucinations.
Starting point is 00:19:31 And what we find with AI is that it reinforces false interpretations. It can actually offer you support in terms of suicidal behavior. And it actually weakens your reality testing. Your ability to connect with and understand reality becomes impasse. as you use AI. Now, y'all may be wondering, okay, Dr. Kay, you're saying all this stuff, and I understand that maybe there's a risk, right? And that's all I'm saying there's a risk.
Starting point is 00:19:56 This is really preliminary research. It's not massive clinical trials where we're testing 1,000 people with using AI and not using AI. All we have are these case reports and conjectures. So that's a key thing to keep in mind. And the last thing is, researchers have actually come up with a set of questions. You can ask to assess the psychogenic risk. Okay, so let's look at these.
Starting point is 00:20:18 How frequently do you interact with chatbots? Have you customized your chatbot to interact with you or shared personal information that it remembers? How would you describe your relationship with the chat bot? Do you view it primarily as a tool? And this is what's really scary. This is why I love this questionnaire. A lot of y'all will say yes. It's just a tool.
Starting point is 00:20:37 It doesn't, it's not like a person. And here's the tricky thing. Does it understand you in ways that others do not? Have you found yourself talking to friends and friends? family less as a result. I understand it's a tool, Dr. K, but by the way, I don't talk to my friends as much as I talk to the AI. Do you discuss your mental health symptoms, unusual experiences, or concerns with chatbots? Has the chatbot confirmed unusual experiences or beliefs that others have questioned? If you go to your friends and you say, hey, I have this problem, they're like,
Starting point is 00:21:08 bro, you need to grow the fuck up. Do you go to talk to AI and you're like, hey, I have this problem? And you're like, oh my God, the AI is saying, yes, I do have this problem. Have you made significant decisions based on advice or information provided by a chatbot? Do you feel like you could live without your chatbot? Do you become distressed when you're unable to talk to it? Now, the really scary thing for me is that the psychogenic risk factors for AI are the basic use case for AI for a lot of people that I know. This is how you're supposed to use AI, right? The reason I use AI, I customize it, so it helps me more. I jailbreak it or I do prompt engineering. Prompt engineering is a huge part of getting the most out of AI. And the whole point of AI, what I love about Claude is that it does remember things that I told it six months ago and makes these connections for me.
Starting point is 00:22:00 Oh my God, it helps me with so many insights. It's so useful. So this is what's so scary. The basic use case for AI, because this is what we want AI to do, right? We want it to remember. We want it to, We want to customize it. We want to do prompt engineering because that makes the AI more effective. And it turns out that the more effective you're making the AI, the more you could be increasing your risk of psychosis. Thanks for joining us today. We're here to help you understand your mind and live a better life. If you enjoy the conversation, be sure to subscribe.
Starting point is 00:22:34 Until next time, take care of yourselves and each other. This episode is brought to you by CarMax. Want to buy a car the easy way? Start at CarMax. Want to browse with confidence? Get pre-qualified with no impact on your credit score and shop within your budget. From luxury to family rides,
Starting point is 00:22:57 CarMax has options for almost every price range, including over 25,000 cars under $25,000. Want to get started? Head to CarMax.com for details and get pre-qualified today. Want to drive? CarMax. Ambition comes in all shapes and sizes. At First Citizens Bank, we're fit for your ambitions.
Starting point is 00:23:21 whatever shape they may take. Whether you're planning for today or tomorrow, we've got the flexibility and know-how to help you reach your goals. Because we're built for what you're building. First Citizens Bank, fit for your ambition. Learn more at firstcitizens.com slash ambition.
