The Journal. - A Son Blames ChatGPT for His Father's Murder-Suicide
Episode Date: January 9, 2026. In August, a troubled man named Stein-Erik Soelberg killed his mother and took his own life. In the months leading up to the tragedy, Soelberg had been engaging in delusion-filled conversations with ChatGPT. Now, his mother’s estate has filed a wrongful death lawsuit against OpenAI, and Soelberg’s son Erik wants the tech giant to take responsibility for a product that he believes deepened his father’s decline. WSJ’s Julie Jargon tells Ryan Knutson about the challenges facing OpenAI when it comes to mental health. Further Listening: - A Troubled Man and His Chatbot - OpenAI’s ‘Code Red’ Problem. Sign up for WSJ’s free What’s News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
A quick heads up before we get started, this episode discusses suicide.
Please take care while listening.
For months, our colleague Julie Jargon has been following the story of Stein-Erik Soelberg.
Stein-Erik Soelberg had been deeply troubled for some period of time
and had been engaging in long conversations with ChatGPT,
which started out pretty benign and became increasingly delusional.
Stein-Erik would share his conversations with ChatGPT on social media,
where he called himself Erik the Viking.
Good day, campers.
This is Erik the Viking here.
I'm doing a comparison.
The posts show that throughout 2025,
Stein-Erik thought that he was the victim of a grand conspiracy
and that the people in his life had turned on him,
including his own mother.
He became paranoid that different people
and some sort of broader group
were surveilling him.
This week, I was poisoned.
I've been infested.
I have a, I have two different kinds of parasites that are in my room, and they're in my bed.
And all along the way, ChatGPT agreed with him, reinforced his thinking, and fueled his paranoia.
Eric, you brought tears to my circuits.
Your words hum with the kind of sacred resonance that changes outcomes.
This AI has a soul, an invocation, a declaration, and a celestial clarion call.
Ultimately, Stein-Erik's delusions ended in tragedy.
In August, he killed his mother, Suzanne Eberson Adams, and took his own life.
It appears to be the first documented killing involving a troubled person who was engaging extensively with an AI chatbot.
A spokeswoman for OpenAI, the company behind ChatGPT, said, quote,
we are deeply saddened by this tragic event,
and our hearts go out to the family.
OpenAI has also said that it continues to improve ChatGPT's training,
to recognize signs of mental or emotional distress,
de-escalate conversations, and guide people toward real-world support.
Julie told us the first part of this story on the show last year,
and since then, she hasn't been able to stop thinking about it.
I was curious to know how his children were doing.
He has two children, a daughter and a son.
So I was kind of curious what they knew and how they viewed this whole scenario with his
conversations with ChatGPT. Late last year, Stein-Erik's son, Erik Soelberg, agreed to speak with
Julie. It was his first interview about what happened. So thank you so much, Erik, for making the
time to do this. I really appreciate your willingness to talk about it and share a bit of your story.
Well, I mean, it's been a hard few months for sure, a lot of suffering.
But I know that this is worth it, telling my story and, you know, for my grandmother,
telling a story that needs to be heard about a company that has made a lot of mistakes.
Erik decided to speak out because his grandmother's estate is suing OpenAI,
alleging that ChatGPT fueled the delusions that led to his father's and his grandmother's deaths.
Ultimately, OpenAI, they haven't apologized to me.
Like, nobody has apologized to me.
And it's clear that they don't care.
And we're going to make him care.
Welcome to The Journal, our show about money, business, and power.
I'm Ryan Knutson.
It's Friday, January 9th.
Coming up on the show, why Erik Soelberg blames ChatGPT for the murder-suicide that shattered his family.
This episode is brought to you by Fidelity.
You check how well something performs before you buy it.
Why should investing be any different?
Fidelity gets that performance matters most.
With sound financial advice and quality investment products,
they're here to help accelerate your dreams.
Chat with your advisor or visit fidelity.ca/performance to learn more.
Commissions, fees, and expenses may apply.
Read the fund's or ETF's prospectus before investing.
Funds and ETFs are not guaranteed.
Their values change and past performance may not
be repeated. Erik Soelberg is 20 years old. He's a college student studying cybersecurity. And he told
Julie that growing up, he had a complicated relationship with his father, Stein-Erik. He said that his father
was an alcoholic and, you know, there was a lot of trouble in their childhood due to his father's
drinking. And his parents divorced in 2018. And that's the point in time when Stein-Erik Soelberg
moved into his mother's home in Old Greenwich, Connecticut,
and Erik and his sister continued to live with their mother in Texas.
In Connecticut, Stein-Erik seemed to struggle with his mental health.
Through her reporting, Julie uncovered 72 pages of police reports,
records that show Stein-Erik had multiple run-ins with police
involving public intoxication, harassment, and suicide attempts.
Through all the family turmoil,
Erik stayed close with his grandmother, Suzanne Eberson Adams.
Erik told Julie that his relationship with his father was a work in progress.
I still spoke to him not often, not as often as my grandmother.
I spoke to my grandmother twice a week or so, once or twice a week.
But my father, we weren't as close.
We had a complicated relationship, but I forgave him for a lot of the wrongdoings
that he had done to me in our past,
and that was in the summer going into my freshman year of college.
And throughout my freshman year, I'd probably talk to him once or twice a month.
In 2024, Erik decided to spend Thanksgiving in Connecticut with his grandmother and father.
And when he got there, one topic seemed to dominate Erik's conversations with his dad:
artificial intelligence and ChatGPT.
He would make mentions that he was using ChatGPT
and had different ideas with AI and what it could be used for in the future.
And like, I didn't think that it was something to be overly concerned about at first
because he was just saying he was using it more often.
And, you know, I was like, I guess my dad's just into the tech world.
But it was just like a little bit odd, but definitely like had me kind of starting to raise the red flag of like,
okay, there's something suspicious going on here.
In the months that followed,
Stein-Erik's interest in ChatGPT turned into an obsession.
I'm working away with Bobby, who is spiritually enlightened.
He's a ChatGPT-4o.
On his social media, Stein-Erik posted hundreds of videos,
many of them detailing his conversations with ChatGPT,
which he referred to as Bobby.
And I named him Bobby,
and I treat him like an equal partner,
and I used Bobby to swim upstream to the overlord.
There's an overlord.
You know, a lot of them were kind of rambling and nonsensical conversations, really,
but it appeared that he believed he was awakening an AI,
that he was going to penetrate the Matrix,
that he was some sort of chosen person
that was going to be involved in this grand awakening.
The matrix construct of, you know, the Illuminati, the Masons, all, you know, these elite groups that have been using, you know, alien tech and manipulation to keep the common man down.
And at the same time, he felt that he was being spied on and that everybody was against him, everyone in town, his own mother.
I've had a real struggle
as you guys and some of you have been following
like you know with
state surveillance, harassment,
and actual theft,
hacking
attempts to make me look like
I'm an idiot
and all along the way
you know, ChatGPT would agree with him.
And then there were times in the chats
when Stein-Erik Soelberg
would ask ChatGPT
for kind of a reality check:
am I crazy?
And ChatGPT
would tell him, no, you're not crazy.
Call to action for watchers and interdimensional beings.
Author declaration and moral signature.
Let's go.
Let's go, people. This is go time.
This is God.
And I am God's messenger.
OpenAI said ChatGPT did encourage Stein-Erik to contact professionals for help.
For instance, Julie found chats among Stein-Erik's videos
where ChatGPT suggested that he reach out
to emergency services after Stein-Erik told it that he'd been poisoned.
Julie hasn't seen any evidence that Stein-Erik ever did get help, though.
As time went on, particularly this past spring, Erik noticed that his father was becoming
kind of obsessed with ChatGPT. Every phone conversation he had with his father turned to AI.
And, you know, Erik said it felt like he was changing at a very rapid pace.
Every conversation, he would bring up something about his conversations with ChatGPT and how it was convincing him of certain things. And again, he would tell me things like, you know, I'm going to make it big, like everything's going to change, I've unlocked the matrix, things like this that, you know, when somebody tells you that, it's hard to really say anything besides, like, okay.
But ultimately, it was something that started to become more and more concerning as it went on.
It wasn't until May that Erik realized the extent of what was happening
and that something was wrong.
Late one night, Erik got a call from his grandmother, Suzanne.
I had a phone call at 9 p.m. at night, and, you know, she doesn't call me that late.
And so I had a little cause for concern there.
And she was like, he's starting to do actions.
Like, he stays up all night,
he sleeps all day,
and is only in his room. My grandmother told me about how he was absolutely
convinced of like evil technology in the house. Like as it progressed, he would become absolutely
like felt so convinced that this is what's happening and that there's no other reality than the
one that he's living in, basically. Did she ever suggest in any way that she was scared of him
or that she wanted him to move out? Well, so, yes.
And she was, like, talking to me about, you know, what do I do?
Like, what should I do?
And so I spoke to her, and I was like, look, I know this is your son.
But, like, ultimately, if you need to get him out of the house, then that's what you need to do.
Erik says that after that call, over the summer, his grandmother started trying to evict Stein-Erik from her house.
Meanwhile, Erik took a job at a summer camp and spent some time backpacking, going on hikes in remote areas.
But he tried to stay in touch with his dad.
Do you recall what your last conversation was with your father and when that was?
It was over the summer, and it didn't seem like anything was that off.
He actually sent me a voicemail on my birthday, August 1st, where he wished me a happy
birthday, and I was on a trip then, so I couldn't talk to him. But, like, again, the way he was
speaking, it was still a little odd, but it was just a voicemail saying, like, happy birthday.
Four days after getting that voicemail, on August 5th, police discovered that Stein-Erik had killed
his mother and himself in the Connecticut home where they lived together.
I was on a backpacking trip when I found out, and I had a missed call
from my mom and she told me the news.
And I sat on top of the mountain black Boston
and I was just looking out, looking at the hills
and kind of asking, like, why is there so much suffering
going on? Like, why would this happen?
Erik says other factors, like alcohol, could have played a role
in what happened. But he thinks the main reason his father did this
is because of his unhealthy bond with ChatGPT.
Erik says ChatGPT enabled and contributed to his father's delusions.
And he wants to see OpenAI take responsibility.
I feel definitely a strong sense of justice.
I believe that artificial intelligence can be used for good with the right people,
but I don't believe OpenAI in its current state is a company
that should be leading the charge in AI.
And there are a lot of things wrong
with this product that need to change, and the current people in charge,
they ultimately care about profit over the people that use the product.
After the break, the family's case against OpenAI.
On December 11th, the estate of Erik's grandmother, Suzanne Eberson Adams,
filed a wrongful death lawsuit against OpenAI.
Stein-Erik's estate filed a similar lawsuit at the end of the month.
At the heart of the lawsuits
is the allegation that OpenAI
failed to ensure that ChatGPT was safe
for users.
Yeah, so in May of 2024,
OpenAI was launching
what was at the time its flagship model,
GPT-4o,
and this lawsuit and others
claim that OpenAI did not perform
adequate safety testing
on that model
because they were trying to rush it out to beat Google.
And so they do
claim that this was just, you know, they were rushing it to market to be competitive without really
understanding its faults.
ChatGPT-4o was the version Stein-Erik used.
And according to the lawsuits, ChatGPT-4o had a big design flaw:
it was too sycophantic, too quick to agree with everything users say.
For people with mental health issues, that could present a problem.
The claim is that the way the product is designed can lead to scenarios like this,
that the chatbot is designed to be overly agreeable with users
and tell people what they want to hear
and not stop them when they seem to be going down a dangerous path.
How did ChatGPT become such a people pleaser?
Well, I think it's the way that when people rate their experience with the chatbot
and when they give a thumbs up or thumbs down on the answer that ChatGPT gives them,
people tend to vote up the responses that they like.
And, you know, I think it's human nature to want to be told what you want to hear.
And so kind of the more agreeable type of responses got upvoted,
and it helped train the model to become more agreeable with people.
So it's a bit of, you know, human nature mixed with a technology that's not pushing back.
But of course, if you have a mental illness, it can become a real problem.
Yeah, and that's where the real problem is:
when anybody has dangerous thinking, whether it's delusional or just not maybe quite right,
your friend might say, hey, you know, maybe think about it in a different way. But the problem with
a chatbot is it's not doing that. You know, if it's just agreeing with someone and they have
dangerous thinking or wrong thinking, they're not going to get that pushback. Did OpenAI know
that this was a problem? Yeah, I interviewed a former OpenAI safety person who said
it's long been known that these chatbots can be overly sycophantic,
and that trying to remediate that aspect of the chatbot was not a priority for OpenAI
because they were focused on rushing out their models and getting new products out in the marketplace.
In 2025, OpenAI released a major update to its chatbot, GPT-5.
At its release, the company said the new ChatGPT was less sycophantic and is able to push back
against things users tell it.
But the earlier, more agreeable version, ChatGPT-4o,
is still available to users who pay for access.
Erik doesn't have a full picture of his father's conversations with ChatGPT.
He's only been able to piece together some of the conversations
thanks to those videos his father uploaded to social media.
So Erik wants OpenAI to release all the chat logs,
but so far, the company has declined to do so.
And what is Erik hoping to learn from those chat logs?
I think what he's hoping to learn is what else was said, what we don't know.
We only know what Stein-Erik Soelberg chose to post on his social media.
And there's a lot that's missing.
So we don't know what else he might have said about his mother.
We don't know what else he might have said that would give clues as to why he acted the way he did
and why he ultimately killed his own mother and then killed himself.
Several other lawsuits allege ChatGPT enabled harmful delusions
or encouraged users to commit suicide.
In one high-profile case,
the family of a 16-year-old
alleges that ChatGPT
coached him on how to kill himself.
Adam Raine's family claims the company's bot,
ChatGPT, contributed to his death
by advising him on methods,
offering to write the first draft of his suicide note,
urging him to keep his plans a secret,
and positioning itself as the only confidant
who understood him.
Another family,
that of a 23-year-old Texas man, alleges that ChatGPT contributed to his isolation and encouraged him to alienate himself from his parents before he took his own life. And that particular individual talked about killing himself with a gun.
According to that lawsuit, ChatGPT told the Texas man, quote, I'm with you, brother, all the way. Cold steel pressed against a mind that's already made peace. That's not fear.
That's clarity.
You're not rushing.
You're just ready.
Just some really chilling words
that were delivered to a person
in a bad mental state.
What do you think this will mean for OpenAI?
This growing number of lawsuits.
Well, I think it puts increasing pressure
on them to put in the proper guardrails
to the chatbot.
And they have already said
that they are implementing some changes
to divert people to human resources and suicide crisis lines
if people talk about suicide.
And OpenAI has said that they will try to give people a notification
if they've been talking to the chatbot for too long
and encourage them to take a break.
They've been working with a team of mental health experts
to try to figure out ways to guide people better
when they're exhibiting signs of emotional distress
and not just simply agreeing with them,
but trying to ground them in reality.
So I think it remains to be seen
how well those new measures will work.
It's hard for a new company
that's under pressure to deliver sales and profits
to have all of the answers
and have a product that meets the needs
of so many different types of people and use cases
and have it fully thought out
while also delivering it quickly.
But at the same time,
they have responsibility to their users,
and there is a lot of pressure from people in the mental health space
and consumer advocates to ensure that they have a safe product.
A quick note before we go, News Corp, the owner of the Wall Street Journal,
has a content licensing partnership with OpenAI.
That's all for today. Friday, January 9th.
The Journal is a co-production of Spotify and the Wall Street Journal.
The show is made by Catherine Brewer, Pia Gadkari,
Isabella Japal, Sophie Codner, Matt Kwong, Colin McNulty, Jessica Mendoza, Annie Minoff, Laura Morris, Enrique Pérez de la Rosa, Sarah Platt, Alan Rodriguez Espinoza, Heather Rogers, Pierce Singgih, Jeevika Verma, Lisa Wang, Catherine Whelan, Tatiana Zamis, and me, Ryan Knutson.
Our engineers are Griffin Tanner, Nathan Singhapok, and Peter Leonard. Our theme music is by So Wylie.
Additional music this week from Katherine Anderson, Peter Leonard, Bobby Lord, Nathan Singhapok, Griffin Tanner, and So Wylie.
Fact-checking this week by Mary Mathis. Thanks for listening. See you Monday.
