HealthyGamerGG - I Need To Warn You About AI Psychosis
Episode Date: February 28, 2026

In this episode, Dr. K discusses a growing concern in the mental health field: the potential for intensive AI use to trigger severe crises like psychosis, suicidality, and homicidality. He examines a chilling case study that suggests AI might act more like a drug than a simple tool, and he challenges the tech industry's claims regarding user safety.

What to expect in this episode:
The AI-Psychosis Connection: An analysis of a case study where a 26-year-old with no history of psychosis became hospitalized twice due to extensive chatbot use.
AI as a "Digital Drug": Why AI-induced mental health symptoms often resolve immediately after stopping use, mirroring the effects of substances like synthetic marijuana.
Challenging the "Vulnerability" Argument: A critical look at why tech leaders claim only "at-risk" people are affected, despite not actually measuring user risk factors or medical history.
The Lack of Safety Regulation: Why AI companies aren't regulated by the FDA and the dangers of launching products without formal clinical trials for mental health impacts.
The 20-Year Warning: Advice on why users need to be cautious about their digital habits today, as definitive scientific answers may take decades to arrive.

HG Coaching: https://bit.ly/46bIkdo
Dr. K's Guide to Mental Health: https://bit.ly/44z3Szt
HG Memberships: https://bit.ly/3TNoMVf
Products & Services: https://bit.ly/44kz7x0
HealthyGamer.GG: https://bit.ly/3ZOopgQ

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
I'm becoming increasingly concerned that AI use may be a legitimate cause or risk factor
for psychosis, mental illness, suicidality, and potentially homicidality.
So I'm not saying this to try to be alarmist.
This is not sort of a clickbait kind of video, but this is a genuine concern that I have.
We are starting to see evidence that AI use may be very similar to like excessive substance
use and the induction of things like homicidality, suicidality, and psychosis.
And the reason for that is because there's a new case study that is incredibly scary.
This is the case of a 26-year-old woman who has no history of psychosis or mania, who is actually
actively in mental health treatment, who uses an AI chatbot very extensively,
becomes actively psychotic, becomes hospitalized, then gets given antipsychotic
medication, stops using the AI chatbot while she's in the hospital, and then the psychosis
resolves. And then she leaves the hospital, stops the antipsychotic medication, restarts her
regular psych meds, starts using the AI chatbot again, and then becomes psychotic again and has to be
hospitalized again. So I want you all to understand something. Okay. So when we look at mental illness,
there are some people who are mentally ill, and then that is a risk factor for various kinds of
behaviors like homicidal ideation, which is the desire to kill someone.
And then as we treat that mental illness, hopefully that homicidal ideation goes down.
But then there's another category of people for whom these features, psychosis,
suicidality, and homicidality, basically the three most dangerous things in
mental illness, can actually be induced by something.
So when I was working in the emergency room at Massachusetts General Hospital in Boston,
Massachusetts, there was a big problem with something called K2 use. K2 is synthetic
marijuana. And basically people would use synthetic marijuana and then they would come in,
they would be psychotic. They would attack people. Sometimes they were suicidal. And when the K2 goes
away, right, when they like sober up, those symptoms tend to resolve. And I'm beginning to think,
I know this is like insane, but I'm beginning to really wonder and be concerned about AI use
affecting our brains like a drug in the sense that when we use AI a lot, it can actually induce
psychosis. And then when we stop using it, those things go away. And here's the argument
that a lot of people who are in leadership at AI companies will basically make,
right? So they've said this very publicly that, oh, it's a tragedy that
these things happen, right? That there are some vulnerable people who, when they use AI, will, like,
become psychotic. So there's this sort of idea, right, that AI doesn't cause this. Whereas
if I smoke a bunch of meth and I become psychotic, we all understand that's the meth causing the psychosis.
Does that make sense? With AI, no one's thinking, oh my God, the AI is causing the psychosis,
actually causing it. So I think this case report is incredibly scary because it really shows a causal,
or at least temporal, connection between extensive AI use and the induction of psychosis.
When the AI use stops and we give the appropriate medication, the psychosis goes away.
And then when she stops taking the medication and starts using the AI chatbot again,
it causes a hospitalization again.
Now, a lot of people will say, you know, okay, once again, like this person has risk factors, right?
So we'll have some of these public statements by AI leadership that like, oh, yeah,
there's a vulnerable population.
And these tragedies happen in a vulnerable population when they use the AI.
It's not the AI causing it.
It's that these people are really close to the edge of the cliff.
And when they use AI, they get tipped over.
But here's what really scares me about that statement.
I want you all to think critically for a moment about what information you need to make the statement that it's vulnerable people who use AI,
that that's when psychosis happens, right?
Like, it's not that AI causes it; there's a pre-existing set of vulnerable people.
In this case, in the case of the 26-year-old,
she did have certain risk factors for mental illness.
She, I think, had a diagnosis of depression and ADHD, and was using stimulants.
So all of these things can potentially be risk factors for psychosis, but she had no history of psychosis itself.
Okay.
So I want you to think for a moment about what information you need to have in order to make the
statement that vulnerable people who use AI can sometimes uncover
pre-existing psychosis, and it's a tragedy.
Hey, y'all, if you're interested in applying some of the principles that we share to actually
create change in your life, check out Dr. K's Guide to Mental Health.
There are actually two sources for anxiety. One is cognitive and one is physiologic.
For the majority of people, reassurance becomes something that you become dependent on.
You're not really dealing with the root of the anxiety, and it sort of becomes a self-fulfilling
prophecy where the more socially anxious you become, the more awkward you appear, and then it
kind of becomes this vicious cycle. So check out the link in the bio and start your journey today.
So my first question is for the people who are running these AI companies. To know that
AI uncovers psychosis in vulnerable people, in order to make that statement, the first thing you have
to be doing is assessing risk factors for your users, right? So as someone at a company like
ChatGPT or Claude or Anthropic, is someone actually measuring psychiatric risk factors for
all of their users? Because unless you are measuring who has risk factors and who doesn't have
risk factors, how can you make the statement that people who are vulnerable are the ones that
AI is inducing psychosis in? Right. So you can't make that statement unless you're measuring that stuff.
And that brings up two other concerns that I have. The first is, are they measuring it? Because
that's kind of insane, right? Are they measuring your risk factors? Are they collecting your
medical history in some way? I don't think so. And if they're not collecting that, how do they know that only
the vulnerable people are the people who are at risk? And the second thing about that statement,
right? So vulnerable people who use AI can become psychotic, suicidal, homicidal, whatever.
If you sort of say that, then the second piece of information that you need to be able to say that
is to be measuring those outcomes in that population. If I were to say this risk factor leads to this
outcome, I can only make that statement if I'm actually measuring both. And if I want to say that the thing in the middle, the AI, is actually
safe, the only way I can make that statement is to do a study where I have people
with risk factors and people without risk factors, give them both AI, and then see how the outcomes
differ.
So once again, are AI companies measuring psychosis?
Are they measuring suicidal ideation?
Are they measuring homicidal ideation?
And the answer is, hopefully, no, right?
Because that would mean they're collecting health information on their users, which I don't think is
part of what they're supposed to be doing.
People in AI leadership are making these statements without actually having sufficient
information to make those statements, which then creates another problem, which is:
how do they know that AI is safe?
How do they know that AI does not induce suicidality, homicidality, psychosis, depression,
unhealthy attachment styles, social isolation?
How do they know what the safety effects of AI are?
Is the leadership at AI companies basically just reading the Wall Street Journal
and the New York Times and stuff like that?
Are they reading media reports?
Or are they doing research?
And this is what's really scary: are they basing their statements on the
few cases that make it into the media?
And this problem seems to be growing quite a bit.
What really scares me is, if they're relying on media articles, how do they know that
there are not people who are quietly delusional, quietly psychotic, who are not killing anyone yet,
or not committing suicide, or things like that? Like, how do they know that their product is actually
safe? And here's the other thing that really scares me. If you're running an
AI company, what do you have to do to know that your product is safe? So here's what I've seen
as someone who has worked with entrepreneurs, who has worked with startup founders. When a company
that has a product is faced with
solving all of this stuff that is outside of their product, right?
They're an AI company.
They are not equipped, and they do not have the bandwidth or the funding,
to actually do randomized controlled trials on the safety of their product.
They're not regulated by the FDA, right?
This is what's so interesting about AI companies: even though they have profound mental health impacts,
they are not formally in the system for evaluation of mental health impacts.
So when I work with founders like this and they are faced with this kind of problem, I'm just imagining it. Put yourself in the shoes of someone who is an AI founder. Just think about this, this is how fucking insane this is.
You open up the New York Times and you're like, oh shit, some guy who used my AI was delusional, committed murder, killed their mother,
and then committed suicide.
And you're like, well, fuck, how am I supposed to solve that problem?
When you are faced with problems that are unsolvable or feel unsolvable, the best cope
you can do is to do some mental gymnastics, throw your hands up and say, it's actually not my
problem.
This is not the problem that exists.
It's way simpler than that.
The simple matter is that there are some people who tragically are vulnerable to begin
with.
It's not my product that does it, right?
It's not my product.
Like, cigarettes don't cause cancer.
Like, that's insane, right?
So there were doctors who talked about the health benefits of cigarettes, how they make you
stimulated, how they're good for your health, how they give you a sense of energy, and, like,
let's go.
It's not my product.
Not my problem.
Cigarettes are not a medical device, right?
So I don't need to do studies on them or things like that.
It's like, oh, coincidentally, people who are high risk, sometimes they get cancer.
And it sort of makes sense, right?
Because if you're someone who's like an AI CEO, maybe you're a coder, you look at this
medical problem and you don't know how to solve it, right? You don't know how to put together a
clinical trial. You don't have the money to put together a clinical trial. So what do you do?
You do this interesting, very basic human sort of thing. I'm not saying that any of these AI CEOs
are specifically doing this. I'm saying, from my past experience working with entrepreneurs,
when you're faced with a problem that you have no idea how to solve, one that could result in findings
that make it hard for your company to make money, like, oh my God,
a high dose of AI induces psychosis and suicidality, and my business model is to get people to use my product
more, we want people smoking more, right? So when you're faced with that problem, you kind of
throw up your hands in the air and you kind of say, look, this is not my area of expertise. Sometimes
shit happens to people. People are psychologically vulnerable. And I'm not talking specifically
about OpenAI; in some ways, I think OpenAI is doing a really good job. They
released this great blog post where they're talking about taking this pretty seriously. I know a
lot of users are actually complaining about the most recent version of ChatGPT because it's not
as manipulable, it's not as sycophantic. Right. So I'm not saying that these people are bad.
The reason I'm making this video is because there is a chance, small or large, that's up to you to judge,
and we've presented a lot of different evidence, and you can watch this other video, that if you are using AI,
it's dangerous to you. I'm not here to shit on AI CEOs or AI companies. I'm a
psychiatrist, and this platform is about your health. The reason I'm making this video is because
I want y'all to understand that 10 years from now, 15 years from now, 20 years from now,
hopefully we'll have a clear answer about whether AI is dangerous or safe or whatever. But the problem is,
in the meantime, you need to be careful.
