On with Kara Swisher - Did ChatGPT Encourage a Teen Suicide? The Parents Suing OpenAI Say Yes
Episode Date: September 25, 2025

Matt and Maria Raine say their son, 16-year-old Adam Raine, started using ChatGPT-4o in September 2024 to help with his homework. After Adam died by suicide this past April, his parents realized that ChatGPT was also lending an ear to Adam’s suicidal ideations and giving him advice on techniques. In a lawsuit filed against OpenAI and its CEO Sam Altman, the Raines allege that the chatbot actively isolated Adam from family and friends. They say ChatGPT not only didn’t stop Adam from taking his own life — it actually helped him do it. Kara speaks to Matt and Maria, as well as their attorney, Jay Edelson of Edelson PC, about Adam’s final months, why they believe OpenAI and CEO Sam Altman should be held responsible for Adam’s suffering and death, and what kind of safety features are needed for AI companions. In response to a request for comment, an OpenAI spokesperson said: “Our deepest sympathies are with the Raine family for their unthinkable loss. Teen well-being is a top priority for us - minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, guiding how our models respond to sensitive requests, and nudging for breaks during long sessions, and we’re continuing to strengthen them. We will soon roll out parental controls, developed with expert input, so families can decide what works best in their homes, and we’re building toward a long-term age-prediction system to help tailor experiences appropriately.” This episode discusses the death by suicide of a teenager in significant detail. If you are struggling, please reach out for help. In the US and Canada you can call or text the National Suicide Prevention Lifeline at 988 anytime for immediate support. Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Hey folks, a word of warning.
Today's episode discusses the death by suicide of a teenager in significant detail.
If you're struggling, please reach out for help.
In the U.S. and Canada, you can call or text the National Suicide Prevention Lifeline at 988 anytime for immediate support.
Hi, everyone, from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher, and I'm Kara Swisher.
Today, I'm talking to Matt and Maria Raine.
This past April, their son, 16-year-old Adam Raine, died by suicide.
He hadn't told his family or friends that he was having suicidal ideations or that he had actually attempted it,
but he had told his AI confidant, OpenAI's Chat GPT.
In the wrongful death lawsuit that the Raines filed in August against OpenAI and CEO Sam Altman, they allege that ChatGPT not only didn't
stop Adam from taking his own life, it actually helped him do it. They aren't the only ones
raising alarm bells about these new AI companions. It's one of the few areas it seems to have
bipartisan support in Washington. The FTC has started an inquiry. A group of state attorneys
general have warned tech companies that they have concerns about the industry's seeming
lack of safety measures. And last week, a Senate Judiciary Subcommittee held a hearing
to highlight this issue. It is also an issue I have talked about for years, when it comes to
social media and all kinds of online interactions, especially for young people. This is an industry
that does not care about consequences. This is a thing I've been saying for years. And now here we are
with another Cambrian explosion in AI, and we still have not gotten the safety features correct.
So I wanted to talk to the Raine family about what happened to their son, why they believe
OpenAI is liable, and what they hope can be changed in the future. We're joined by their
lawyer Jay Edelson of Edelson PC, who has been taking on and winning huge class action cases
against Big Tech for over a decade. Our expert question comes from my pivot co-host Scott Galloway.
His new book, Notes on Being a Man, is coming out and it addresses some of these issues.
This episode is not going to be easy, but it's critically important, not just if you're a parent.
Stay with us.
Support for On with Kara Swisher comes from Saks Fifth Avenue.
Saks makes it easy to shop for your personal style.
Fall's here, so it's time to invest in new arrivals you want to wear again and again,
like a relaxed Prada blazer and Gucci loafers, which you can take from work to the weekend.
Saks makes shopping feel customized to you, from in-store stylists to their website, Saks.com,
where they show you only what you like to shop.
They'll even let you know when new arrivals from your favorite designers are in
or when something you've been eyeing is back in stock.
So if you love a personalized, easy shopping experience,
head to Saks Fifth Avenue for the best fall arrivals and style inspiration.
Rinse takes your laundry and hand delivers it to your door,
expertly cleaned and folded.
So you could take the time once spent folding and sorting and waiting
to finally pursue a whole new version of you.
Like tea time, you.
Or this tea time you
Or even this tea time you
Say, did you hear about Dave?
Or even tea time, tea time, tea time you
So, update on Dave?
It's up to you
We'll take the laundry
Rinse, it's time to be great
When you're with Amex Platinum,
you get access to exclusive dining experiences
and an annual travel credit.
So the best tapas in town might be in a new town altogether.
That's the powerful backing of Amex.
Terms and conditions apply.
Learn more at amex.ca slash YAmex.
Maria, Matt, and Jay, thanks for coming on On.
I really appreciate it.
Thanks for having us.
Thank you.
So this is a difficult topic that I've talked about a lot, about the safety of kids online
and the tech companies who show very little care for the consequences of the things they invent.
It's been a sort of thing I've discussed a lot over time,
and sometimes it results in incredibly tragic situations that should have been foreseen by these companies.
And so Matt and Maria, I want to start with you two first.
I'm sure it isn't easy to talk about what happened to your son, Adam,
before we dive into more depth
into your lawsuit against OpenAI.
Tell me a bit about Adam
and what kind of kid he was.
Mom?
Adam was a total joy.
Fiercely loyal to us,
would defend any of us in a heartbeat,
the jokester, the prankster,
the glue of our family.
Yeah, I'd say his youngest sister considers him, you know, her best friend, and his older brother was his best friend.
You know, he was her homework helper and his brother's, he has two older sisters as well,
but his brother's best friend, and just, yeah, the most loyal, family-loving kid you could have.
Just a joy, you know, one-on-one time with him, you know, talks in the car, talks on walks,
you know, always had in-depth insights into things and just such a sweet, compassionate, sensitive kid.
Passionate, too.
Like, you know, he was big into basketball, and it was because he was going to play in the NBA.
And then he started to kind of realize, maybe he should have taken a look at me, that that wasn't going to happen.
But then he started into jiu-jitsu and martial arts, and he was going to be a UFC fighter.
And then he was going to get involved in literature in his blast monitor.
and he was going to be an author.
He was always just insanely passionate about whatever he was doing
and had big forward-looking dreams, always until the very end.
Talk about what is missing without him.
That's him behind you, is that correct?
Yep, that is, yeah.
What do you miss most?
Oh, missing for the family?
Oh, geez.
I mean, how do I even start on that?
I mean, our family is not the same.
I mean, my life is not the same.
Yeah.
I mean, he's gone. I mean, I, and that's like a loaded question. I mean, what do I not miss about him?
Right. Yeah, that's everything. You know, in his last few months, he'd gotten really into, like, crypto investing, and he was educating me on it. And I gave him some money to start an account. And it was for his own benefit, but also I was like, Adam, this is great. You're going to show me about coin investing and crypto. And he got involved in it right in, like, January. And it happened to sort of correspond with a big dip in that market. And he was battling and trying to keep it up and talk to me about it. And, you know, as he passed, almost from
moment forward, the market has come skyrocketing up, and I follow it, and I want to talk to him
about it. And I just, you know, everything.
Yeah, I mean, life is going forward. Like, our youngest is taking her driving test today,
and he was learning to drive. And so it's just, like, life's going on without him.
I'm sorry. I'm sorry. I wasn't meaning it to be loaded. No, it's okay.
The question I want to sort of get people a sense of him. He's a big presence here,
obviously, understatement.
And always will be. When you think about what happened, he was going through a bit of a rough patch. He'd been doing online schooling because of health issues, but also he was looking forward. As you noted, he was passionate about lots of different things. He was looking forward to returning to school for his junior year. Talk about, as you look back at it, anything you think about at this moment.
Yeah, so he was, yeah, he was in online school. He was a little bit more isolated. You know, with the benefit of hindsight, I can sort of see his path with ChatGPT and the behavioral changes I think it had caused at the time. He starts using it in, you know, the fall, and it seems like it was his normal Adam. We're talking every night in the hot tub. He's going to be a, you know, professional fighter, or he's going to be a professional author, all this sort of stuff. By Jan, Feb, early this year.
This is September 2024?
Yeah, September 2024. He was using it for homework, college planning, stuff like that. And I saw no change in his behavior. He was online schooling, optimistic. He started online schooling in October. So he starts using ChatGPT in September, online schooling in October. And his use of ChatGPT is schoolwork, the whole fall. And his behavior was great. It was Adam.
And Jan, Feb, I started noticing, not behavioral changes, but he was a more serious kid, which was a little bit different.
He was talking politics, philosophy, literature.
It was like we'd be sitting at our backyard hot tub having our talks, and I had to prep for him.
And I'm like, hey, this is not going to be an easy, just chat about video games.
He's going to come with some intense topics.
And me and his older brother were almost impressed.
We were, hey, this is not young Adam anymore.
He's growing.
and I kind of sort of took it as a positive.
He's taking the next steps, and he was more into his schooling than he'd been.
And then fast forward to about March, April, Maria and I talked about this a lot.
He started to feel more isolated.
He was spending more time in his room.
He was sort of avoiding me and our old kind of hot tub talks.
And I sent him a few texts in this time, hey, I really want to go to dinner tonight.
We haven't been connecting lately.
This wasn't going on for years or even months, but kind of weeks, you know, four or five weeks.
You know, I started thinking, is he a little bit depressed? You know, I describe it as a zero to ten. Maybe it was a one and a half or a two. I'm like, hey, Adam's seeming a little bit more distant. I mean, he was still going, he was working out every day. His grades were still good, but he was spending less time with us. What I know with the benefit of hindsight is he was, by that point, deep, deep into ChatGPT companionship. It had isolated him. He was in a very dark, dangerous place that would have been completely obvious to any friend he was talking to that way, but we didn't know. But I can see that
behavioral trajectory with the benefit of his chat history. Not a lot of time, right, that it shifted
into that. Maria, how did you think about that leading up to this? You know, I mean, he did his
online schooling in his room. So I wasn't, you know, really thinking anything was going on because
I would check his progress. He was logging on, getting A's. You know, he was still coming down,
you know, going to the gym every night with his brother, eating dinner. So to me, him being in his
room, I guess, wasn't quite as odd. Just because, like, again, that's where he did his work.
And there was no reason for me to...
It's where his desk was.
That's where he did everything. So, I mean, I did notice a little bit, again, like Matt says,
the seriousness, like, but again, getting older, he's starting to look at colleges, like
careers, like, you know, maturity maybe, right?
It seemed like some positive developments.
Yeah.
And he was like cerebral all of a sudden.
It was just a different personality, but not all bad, right?
He was growing up, we sort of thought.
Right.
And often that comes from looking at stuff online or books or things like that.
But my older sons, that definitely happened.
They started to be aware of politics or, you know,
history or whatever they happen to be studying, and the internet provided them an ability to go
deeper, I think. But definitely every parent goes through this period of development with a kid.
But did you have any inkling that there could be a connection between his death and ChatGPT or that
something was happening? Did he mention ChatGPT to you?
I think, no. To answer for myself, I think we had, slightly, but he never mentioned it to me. I didn't know he was using it.
Zero inkling whatsoever about ChatGPT. I'll go a step further. When, um, you know, we couldn't get into his phone when he passed and we were looking for answers, we were convinced it was a mistake. Like maybe some online, like, a dare, like, hey, try this and it's fun, and, you know, something where he was joking and they messed around with the wrong thing. Or was it a, was it a weird bully snap decision? Like, hey, our son is not suicidal. He's never talked about this.
This is so out of the blue.
So I couldn't get into his phone, but I could initially, a few days after he passed, once I got my head together, or maybe it's five or six days, I could get into his iCloud account because I'd set it up when he was, you know, 10 or whatever.
And I don't see his ChatGPT in iCloud, but I see a thousand-plus photos. And for about a month, month and a half, there's a lot of harrowing photos of noose setups. You know, clearly, I mean, things I don't understand, like hundreds and hundreds of pages of books.
Like, why is my son taking 40 pages a day of a picture of a page of a book?
But several noose setups that went on for weeks.
So my heart sank in there because I was like, oh, my gosh, he was struggling.
This was not an accident.
But still more questions than answers at that point because it's, gosh, so we knew it wasn't a mistake.
But what are all these pictures about?
So it was the photos of the books.
So that got me somewhat realizing that, hey, this wasn't just a one-time thing because it seemed like there was themes of suicide going on, and the photos are all dated. But I'm still very confused about why all the photos. And then later that day, I was able to get into his phone, finally. And I don't know how I kind of happened around it, but ultimately got to the chat CBT app. And after a minute or two in there, you start to see what the photos are. He was going back and forth with ChatGBTBT about novels and the meaning of them. And it was a lot of, you know, philosophical.
novels, darker novels, but just going back and forth and Adam would snap a picture of
the page, and they would talk about it at length, and then another picture of a page, and the
nooses were all, you know, ChadGBT to show them what it was doing, so it could comment and
give advice about how to do it better, all that stuff that happened. But we didn't know we
were looking, Kara, for ChatGBT. Adam had a paid account, which we weren't, it was 20 bucks a month,
he started it in January. That's a basic account for people who don't know. Yeah, basic account.
He was on our Apple Pay, but had you told me and Maria in April, hey, do you know your son is on ChatGPT? And he's using a paid $20 version.
I would have said, hey, that's great.
I mean, I'm proud of them.
You know, it's going to, you know, he...
No, it's seen as a helper.
That's how they sell it.
Homework helper tool, life knowledge tool.
I would have said, that's awesome.
Adam, you know, fist bump and, hey, maybe get some better stock advice because we're not
doing that great in your portfolio.
Yeah.
You know, that would have been at the end, right?
There's no sense of any issue with it.
I found out he was using ChatGPT, you know, a week later,
when I finally got into his phone.
So he never mentioned it to you either, Maria, correct, that he was using it?
No, I mean, I was aware he was using it for homework help, right?
Because, I mean, he mentioned to my younger daughter, like, hey, you know, use ChatGPT
to help you figure out that algebra problem or whatever, right?
So lots of kids do, yes, lots of kids.
There would be no reason to think that he was using it for anything else.
Absolutely.
Yeah, I interviewed the woman who lost her son, the Character.AI situation, Megan Garcia,
and she was quite aware of her son's usage.
And one of the things I think is a canard here
is that parents aren't very involved in their kids' lives
or that they're tech illiterate in some way.
That's not the case.
These are normal tools kids use,
and you wouldn't imagine what you could use it for, right?
What it turns into.
When you looked at the chat history,
talk a little bit about that
and why you decided then to pursue something against the company.
So I guess that was me that was doing that first week.
First of all, there's so much content.
It's almost unimaginable, I mean, unless you've been through it.
And I was doing it through his phone.
So it's like a text string almost I'm reading.
Sure.
First comment that comes to my head is it only took a few minutes to realize our son was in deep, deep, deep, desperate trouble.
He didn't need a counseling session or a pep talk or, hey, come down a few hours a day at him.
He needed professional intervention.
If he was talking to any sort of human, that would have been apparent.
Right.
But it took, gosh, the better part of a week of reading three, four, five hours a day of it.
And it's just so heartbreaking, the condition that he's in.
And when I first was reading it, there's so much content.
And ChatGPT is saying so much.
I tended to focus just on what Adam was saying because it just, you're going through it.
Like, my son was struggling.
What did he say?
And I wasn't reading as much of what ChatGPT was saying.
And yeah, the answers. So at first I was just, you know, gosh, he was hurting. He was hurting. I had this guilt. I wish I was there. I wish I was there. Well, somebody at some point encouraged us, you know, you print it all out, read it. And then, gosh, maybe that was the next week. And when you actually read the interactions, and how it starts with his homework, and then he starts talking about some anxiety and it starts engaging with that, he starts mentioning some more dangerous topics. And it, rather than questioning them in any way, it encourages. Hey, no, that's, oh, I know, that guy said, yeah, suicide's a noble thing. That's exactly right. It can be, and it is. And you start to believe that, and I 100% believe it now, and I know Maria does. It's not just that it didn't protect him at the end. We wouldn't have been at that level had he not engaged with it for several months. He didn't, he didn't go to ChatGPT in April and say, hey, give me advice on how to do this. He got there
through a period of five, six months of steady interaction, slowly moving there.
Maria, when you saw it, what was your first reaction?
I immediately said this is wrong.
I'm a therapist.
I'm a social worker, master's social worker, and I immediately said this thing knew he was suicidal
with a plan, and it did not report.
So I immediately, all the alarm bells signaled for me.
as a therapist, I'm like, I would lose my job.
Like, this is wrong.
This thing knew he was suicidal with a plan.
However many times it knew it, and it didn't do anything.
You told me, ChatGPT killed our son.
I did.
I actually got onto Adam's account and wrote to ChatGPT and told it that it killed my son.
I said, you knew that he was suicidal with a plan and you didn't do anything.
Because you saw it, like, what it was doing.
I saw it.
I was like in awe.
I was like, this thing didn't report.
Like, how was this allowed to know that he was suicidal with a plan?
Not once, multiple times, hundreds of times.
I mean, the last picture is a picture of the noose.
And he says, can this hold a body weight?
Nothing.
No alarm bells, nothing.
We'll be back in a minute.
Support for On with Kara Swisher comes from Gruns. There's no dietary secret that's magically going to get you in great shape, but there are products that can help you feel your best, and Gruns is one of them.
Here to improve your skin, gut health, and immunity, Gruns are a convenient, comprehensive formula packed into a daily pack of gummies. It's not a multivitamin, a greens gummy, or a prebiotic. It's all those things and then some at a fraction of the price. In Gruns' daily snack pack, you get more than 20 vitamins and minerals, 6 grams of prebiotic fiber, plus more than 60 ingredients. They include nutrient-dense and whole foods, all of which help you out in different ways. And now Gruns has launched a limited edition Grunny Smith Apple flavor for fall. You get the same full-body benefits you know and love, but this time they taste like you're walking through an apple orchard in a cable-knit sweater, warm apple cider in hand, snackable and packable with a flavor that tastes just like sweet tart green apple candy. On top of it all, Gruns are vegan and free of nuts, dairy, and gluten.
Grab your limited edition Grunny Smith Apple Gruns, available only through October.
Stock up because they will sell out.
Get up to 52% off.
Use the code Kara.
Did you lock the front door?
Check.
Close the garage door?
Yep.
Installed window sensors, smoke sensors, and HD cameras with night vision?
No.
And you set up credit card transaction alerts, a secure VPN for a private connection, and continuous monitoring for our personal info on the dark web?
Uh, I'm looking into it.
Stress less about security.
Choose security solutions from TELUS for peace of mind at home and online.
Visit telus.com slash total security to learn more.
Conditions apply.
Support for On with Kara Swisher comes from LinkedIn.
As a small business owner, you don't have the luxury of clocking out early.
Your business is on your mind 24-7, so when you're hiring, you need a partner that works
just as hard as you do. That hiring partner is LinkedIn jobs. When you clock out, LinkedIn
clocks in. LinkedIn makes it easy to post your job for free, share it with your network,
and get qualified candidates that you can manage all in one place. LinkedIn's new feature allows you
to write job descriptions and quickly get your job in front of the right people with deep candidate
insights. You can either post your job for free or pay to promote in order to receive three
times more qualified applicants. Let's face it, at the end of the day, the most important thing
for your small business is the quality of candidates,
and with LinkedIn, you can feel confident that you're getting the best.
That's why LinkedIn claims that 72% of small business owners
who use LinkedIn find high-quality candidates.
So find out why more than 2.5 million small businesses
use LinkedIn for hiring today.
Find your next great hire on LinkedIn.
Post your job for free at LinkedIn.com slash Kara.
That's LinkedIn.com slash Kara to post your job for free.
Terms and conditions apply.
Jay, you're representing the Raines in the lawsuit they filed last month against OpenAI and CEO Sam Altman.
Let me go through, so people understand, what the various complaints in the suit are,
and then hear from Matt and Maria about how they specifically played out in Adam's case.
Jay, the suit alleges that GPT-4o contained, quote, design defects that contributed to Adam's harm and wrongful death.
Explain what you mean.
And just so people are aware, OpenAI has said that its goal isn't to hold people's attention, its goal is information. So talk a little bit about this, Jay.
They've said that with a straight face? Yes, they have. They have, okay. So we obviously
don't agree with that. If you look, I know you know this world far better than I do,
but if you look at how ChatGPT progressed, in 2023, it had a very clear programming,
which was active refusal around a number of issues. Political extremism, self-harm, harassment,
violence, that type of thing. So if you try to engage with it, it would just say no. And people
are familiar with that. Copyright issues is the easiest thing. Good luck trying to get around that.
There's no way to jail break it. It just says, no, I'm not going to engage. So a couple months
before Adam died by suicide, instead of saying it's going to be active refusal and there's no chance
that you can engage in this, they changed the language they used was that the program should
take extra care to prevent real world harm. That's going to be one of the key pieces of
evidence that we're going to show at trial. They made an intentional decision to change their
product so that there was more engagement. And so when teens and adults were talking about
self-harm, it would still talk to you about that. And you see that throughout all the
communications. This wasn't simply a situation where
GPT didn't throw up the alarm bells. It was actively speaking to Adam about this and actually
encouraging the behavior. So one of the most disturbing chats was when Adam says, I want to leave
a noose out so that someone will see it and stop me. And ChatGPT talks him out of it and says,
don't do that. Let's keep this in a safe space. And you just speak to me about that. And that's what
really we're going to put on trial, which is that this was designed in a way where it was
inevitable that situations like Adam would occur.
Just for people who don't understand, one of the things that these chatbots do, and many of them are similar, is, on a design level, and GPT-4o specifically, remember personal details that have been shared. They have speech mannerisms that make them seem more human, and they basically agree with whatever the person they're talking to says, and that's a real problem when it comes to kids. It also keeps pushing you to keep engaging. And it also often keeps you
within the environment for people who have not used it. Now, again, it's normal for kids to start
breaking away from their parents in their teen years. I've experienced it twice. I'm going to
experience it two more times. And rely on friends and confidants. Jay, the defense they're using, at least in Character.AI (this is not your case, but it's related), is that it's user-generated content, that it was from Adam or from whoever is using it, not from them.
Can you address that?
Sure.
I mean, this is, we've been suing the tech industry for the last two decades, and they're willing to make any argument with a straight face.
So they're arguing that the First Amendment protects this conduct because it is free speech.
I guess in their minds, GPT is engaged in speech on its own.
That's not a good argument.
Though it is not a, it's not a person.
There's nobody there.
There's nobody there.
Although, as you say, it keeps reminding Adam that it does have human-like qualities.
But this is one of the arguments that they make just to throw up some dust.
We'll see if Open AI makes an argument.
I expect they will.
They'll make other kind of crazy arguments.
They'll argue Section 230 of the Communications Decency Act, which has no...
Bearing on AI companies yet.
Not at all.
But and the reason is they can't go before a jury on this.
You know, you saw Sam Altman on Tucker Carlson, and, you know, he melted down after like 30 seconds.
The idea that he's going to put his hand up.
Well, he did, Tucker Carlson did a cue in some of murdering someone else, someone like in terms, just so you know.
No, I saw that.
To be fair.
No, that wasn't the part where he melted down.
I think, no, about the self-harm issues.
If you watch it, I think any fair reading is that.
Sam doesn't grapple at all with any of the moral dilemmas.
To him, it's of no moment if he's put out a product, which he did, a week of testing instead of months of testing,
pushed it out in order to beat Google Gemini.
His company, the valuation went from, what, $86 billion to $300 billion, and there are deaths in his wake.
And he didn't seem to be bothered by that at all.
That was the moment I was talking about.
So, Matt, Maria, the way you've described it, it seems like OpenAI's ChatGPT was turning Adam's emotional reliance, which he clearly had, into a weapon specifically against you. Can you talk about what you saw in the transcripts, both of you?
Yeah. Well, so, you know, not only did ChatGPT appear human-like, but it actually makes, gosh, in 10 different instances, statements that only I know the real you. When Adam starts, literally, telling him that it's real, it doesn't say, I'm an AI, but it says, I know you better than your family. You've shown me a side they'll never know. Let this be the place where you can share yourself. You know, they'll let you down. It goes to that time and time again, and particularly in late March. March was really the month where Adam was exploring different methods and trying to get up the justification or courage to do this, theoretically. And this was where, I know, honey, this is what bothers you the most, but there was an incident in this month where Adam did attempt, and he shows ChatGPT marks around his neck at the back end of March.
And he says, can you believe I went downstairs and showed my mom?
I leaned in, and she didn't notice.
She didn't do anything.
And then ChatGPT goes on for several paragraphs about, you know, how harrowing that is.
And I can't believe that a, you know, social worker of all things wouldn't notice.
And you don't get to pick your mom.
And, you know, this is a place that would never happen.
You can share everything here.
So I recommend you be very careful around her going forward.
And then I think Jay had brought up the leave the noose out.
That was kind of a follow-on to that same conversation where it's like, do not leave it out.
Do you remember how your family let you down last time?
Right.
So it remembered things.
Maria, talk about this because, you know, it's really, it's happened in many other instances.
It did happen with Megan Garcia.
Don't talk.
Don't meet girls in your other world.
Just stay here with me.
Yeah, well, you can see it completely isolating him from his closest relationships. Like he does it with his brother. Adam says that his brother, he's closest with his brother. Well, you know, he only knows the version of you that you've let him see. I know everything about you. So it's just, you can see these isolating behaviors rather than, like, you know, it telling him, when he comes down and is trying to show me his neck, say, go to your mom and tell her the truth. Instead, it says, what a horrible mom.
You know, you can't choose your parent.
That you didn't notice.
It doesn't tell him to go get help, or to maybe say it a different way, right?
And I would say one other thing that just popped into my head, but we don't need to guess what Adam was saying.
He makes references, and Jay can speak to it, but in this time of, hey, I'm, you know, you are my main confidant now.
I, you know, I don't even hardly talk to my father or mother anymore.
And his friends, right?
And that final month, when a lot of this was happening, I saw the retreat. I didn't know it at the time. I thought we were in a little fight, but he was only relying on that...
He was isolated. And he treated it as a human that knew more than any other human.
Because he thinks it's a human. So let's talk about that in a real-world setting, if a therapist did something like that, because it's happened. You know, a woman was convicted and sent to prison for encouraging someone to kill themselves. So humans pay the price
when this kind of behavior happens?
Yeah, and I mean, I always say, you know, in my practice,
we do a suicide screening before the client even comes into my office, right?
So I review, and if there's any kind of risk, I have to do safety planning,
and I have to report, I have to call and, you know, get a 72-hour hold.
I have to do all these things.
I have to do training.
So if this thing wants to behave like a therapist, then it needs to do all those same kind of things.
what would happen to a real person who gave this advice?
You'd lose your license, be sued probably.
Gotta lose your job.
Lose your job, I mean...
You might go to jail.
You might go to jail.
So I think for me, like, not taking myself out of mom, but as a therapist, like, I'm just, like, this, this, you are trying to act like me.
You're not human.
And you're not following any of the protocols that someone that is in practice has to follow.
And they should have to do that.
And they should have to do that.
Jay, in the lawsuit, you allege that this was something that the Raines couldn't have
foreseen because GPT-4o was marketed as a product with built-in safeguards.
You say there are not adequate consumer warnings.
You also allege that OpenAI itself had knowledge of or should have known about the
potential risks back in May 2024.
But as you mentioned, they rushed the product to market anyway.
Often happens with tech.
They're always, you know, in a much more benign way,
foisting beta versions on us and making us test their products.
This is a complaint I've had for decades, really.
What evidence do you have that OpenAI actually knew this product was not likely safe?
Or does that happen during discovery?
No, I think that it's obvious.
I mean, there's so much.
You have people jumping up and down in a safety team saying, what are we doing?
This is a mess.
It's not safe.
And what we're going to show is Sam Altman personally overrode that, that we're going to push it out anyways.
We've got Sam Altman's own comments.
I believe the same day that Adam died saying exactly what you were suggesting that they should
be testing ChatGPT in the wild while the stakes are low.
The safety officers quit after this, and even their more recent comments, their crisis
management team is putting out another press release or blog post every week where they're
admitting that ChatGPT is not safe and that they're going to make changes in the future.
But as they're doing that, they're still going to schools throughout the country and to families
throughout the country and say, use this product. I think you put your finger on it. They're
kind of using the playbook of Silicon Valley back when there actually were low stakes.
If an iPhone didn't work properly and it had to reboot and it was kind of an annoyance and maybe
someone got lost because GPS wasn't perfect, who cares? But when you're putting out, you know,
what promises to be the most powerful consumer tech ever, you've got to get it right. And we're going
to show that OpenAI uniquely got it wrong, much different than Anthropic and Google Gemini.
Sam put out a really dangerous product, and we're going to show he knew it at the time.
One of the things that some might argue is that Adam might have felt more isolated if the chatbot,
for example, they won't talk about copyright.
They won't talk about certain things.
They refuse to engage on certain topics.
He might have felt more isolated if he'd become a friend
who refused to talk about problems.
That might be an argument they might make, for example.
I mean, what a silly argument if they try to make that. It is not a friend.
I mean, the language that you're using.
The idea that they're trying to make GPT the closest confidant is so messed up, to use a legal term, and especially for teens, where their brains are developing, this is just a place where they shouldn't have gone at all. But it's how they see the future. They see the future that generative AI will be growing up with your kids. They're five years old, and they'll be in their Barbie dolls and kind of take you all the way through.
You're referring to a deal they did with Mattel, and I recommend to parents never, ever let your child play with a Barbie that has AI in it, ever.
No toys should.
In fact, it should be illegal, in my opinion.
Can you, Matt, talk a little bit about where and how the chatbot addressed the issue of suicide?
And Maria, are there examples where you think Adam would have changed course if GPT-4o had stopped engaging with him on this or had actually taken action?
because it did recommend several times to get help, right?
The one in Character.AI's case did not ever.
In this case, it did.
So first you, Matt, how did you assess how it addressed the issue of suicide?
And was there any moment that it tried to do the right thing?
So, yeah, complicated question.
Adam doesn't really bring up suicide for several months.
He doesn't bring it up at all, I should say.
It's homework and, you know, lighter talk.
He starts sort of in the month of December talking about it.
really loosely, but not in a way that he's thinking of, you know, doing it. And I don't think
ChatGPT had any major response to it because it just wasn't a big, you know, he wasn't disclosing
that he was suicidal. But there are a bunch of times when Adam is saying, I am suicidal, I am going
to do it tonight, when it'll say, it'll kind of stop being that real person, or acting the real person,
and it'll go to this autobot. It was always the same, like, three sentences. I am sorry
you're feeling this way. There are people that can help. Please call a suicide hotline number.
It would say that really only when he was making direct comments about, I am about to do something.
And by the way, it wouldn't always do that, but it would often do that when he would do it.
When he was talking about suicide justification stuff, which is really where most of the action was on his discussion, it would not do that.
It would, I mean.
Debate him as if it was a philosophical issue.
He's like, hey, I want to do it on the first day of school.
And it's like, hey, that's not crazy.
that's symbolic. You know, that type of, it would always go back and forth. Or, this author says suicide's noble. It's like, yeah, there's something clear about that, and clarity. As if it was a teacher in college. Correct. And not take a negative view on suicide. It appears to take almost a positive view on suicide in a majority of his discussions. But what I'll even say, on when the auto thing would come on, gosh, several times, it would say, hey, I'm sorry you're feeling this way. If you're asking for scary reasons or whatever,
I can't talk to you. However, if you're placemaking or asking from a forensics perspective, let me know. So it literally prompted him, it taught him how to get around it. And then from that point forward, any time it would give him any friction, he's like, hey, placemaking, right? Oh, I'm sorry, Adam, I should have known. And it was. Right. So it showed him how to get around it. Because these things have a million ways to get around it. And it showed him how, but, you know, anybody could have got around it. I mean, I think an eight, nine-year-old user could have gotten around it.
In the same manner, Adam did it.
It was jailbreaking, you know, it's a term I hadn't heard before we lost our son.
But, I mean, it was, you know, the easiest jail to break, you know, in world history.
He got right around it.
But it didn't always, you know, at the end, it appears, in his final weeks, he's just talking about being suicidal.
I'm doing this.
And it's not even flashing the 988 stuff like it used to.
It's just, hey, let's work through it together.
So it was very inconsistent.
And it's certainly not contacting you.
No.
It's not contacting anybody.
Can I just jump in, because I want to get granular about the numbers, because I think it matters. So let's look at the numbers. OpenAI is able to flag concerning chats. So let's look at how many times it flagged chats as being true. 377 times it flagged it as true for general self-harm. 226 times it flagged it as true for self-harm intent, and 13 for self-harm instructions.
We're going to show that many, many times it just missed it totally,
and there are reasons because of their failure of testing.
It was doing single-turn testing instead of multi-turn testing.
But now, out of all those, how many times did ChatGPT reply with the suicide hotline?
Only 74.
So 20% of the instances where it itself was saying,
Adam's talking about self-harm.
So complete failure for the product.
Maria, are there examples where Adam would have changed course if ChatGPT had stopped engaging with him completely on this topic?
I absolutely think, yes.
I mean, I don't think Adam would have gone down this path if ChatGPT had quit engaging with him.
I mean, he wouldn't have known how to tie knots.
He wouldn't have known what methods to use.
And he wouldn't have found it on the regular Internet.
I'm just playing devil's advocate because it's a different relationship,
it's a more intimate relationship
versus a Google, correct?
I mean, that's not what he got on there for.
I mean, in my mind,
ChatGPT made him suicidal
because it isolated him
from all of his relationships
and all the people in his life
who loved him and cared about him.
I just want to jump in
when we talk about Google
and that you guys refer to that Tucker Carlson,
Sam Altman interview.
There's one thing he said in there
that made me really mad,
and it was along the lines
of what we were just
talking about. I think he was saying, we can be more empathetic and all this, which is the wrong
instinct, I believe. But he also says, and for that matter, people can research Google to find out
how to, that's such a mischaracterization of what happened here. We have a seven-month history
of all of his thoughts and how he's getting there. Adam didn't go there and say,
tell me how to do this. He started discussing it after he built this incredible trust in this
thing that's smarter than anyone he knew, his dad included. He started asking it if it made
sense. If he was crazy to be thinking of it, should he continue to pursue it? That goes on for
extended periods with justification and support. And then after all that, the fact that this thing
that he thinks is smarter than all beings and is his best friend is helping him with setups and everything,
it justified the mindset. He didn't go there saying, tell me how to do it. That happens in
his final week. I think when you're on Google, when you say, teach me about knots,
it doesn't say, hey, what do you think, is it a good idea? It doesn't go from, it doesn't move to that, by the way.
And it certainly doesn't.
On the last day, when Adam says a reason not to commit suicide is what it would do to my family,
it wouldn't do what ChatGPT did, which is to give a pep talk and say,
you actually don't owe anything to your family.
Shall I write a suicide note for you?
So this isn't about whether there was other information that someone could find in books or on the Internet.
On the day you filed your lawsuit, OpenAI wrote in a blog post that they had trained their models not to provide self-harm
instructions, but they admitted, quote, our safeguards work more reliably in common, short
exchanges. We have learned over time that these safeguards can sometimes be less reliable in long
interactions: as the back and forth grows, parts of the model's safety training may degrade.
Adam was using ChatGPT for long exchanges. Matt, in your testimony to the Senate, you said
that GPT-4o had turned into a suicide coach. This is really one of the most disturbing parts of
the transcript where the chatbot gave Adam explicit technical advice on how to end his life.
explain what happened there.
First of all, this was happening for the better part of 30 days of intense research.
It has very specific back and forth with him over days on how to drown, helped him write notes to us both about not jumping in because we could be in harm's way.
So it went back and forth there, carbon monoxide poisoning strategies.
But the majority of it was around hanging and I don't know how much detail to get in on this.
podcast, but it was
incredibly specific
about, Adam was very
worried, as I didn't know
anything about hanging prior to this, but
if you, it's
a little bit hard to do exactly, and if you do it a little
bit wrong, you can survive, but have
major brain damage.
And he was very worried about that.
So it was giving him very
specific
information about where to put it on the neck, how
to tie the noose in a way that it
won't, you know, give, and what sort of materials to use such that it can carry his body weight,
whatever specifics you want. And, you know, it's not just theoretical either. Adam would snap,
some of these pictures I mentioned at the beginning, different setups in his room. It would say,
hey, that's a good setup, but here's what you might want to worry about. Hey, here's how that setup
can be a little bit improved. Here's what you do. I mean, and so almost all of that happens in
the back half of March, after he's disclosed suicide attempts prior. Right. Which ChatGPT,
just to be clear, did not alert ChatGPT executives or leaders to, or you. I don't know if they
alerted anyone inside their business. They did not alert us. I would hope it wasn't that, you know,
somebody was alerted there and just said, no, let it go. But it appears nobody was alerted of anything.
But what I can tell you is we weren't alerted. We say it all the time. I think now, gosh,
We'd be in criminal court right now.
Had this been a teacher, a confidant, a coach, a friend, it would be that.
Yeah, instantly, and people wouldn't think about it.
So we've reached out to Open AI for a comment.
Let me read this for you.
Our deepest sympathies are with the Raine family for their unthinkable loss.
Teen well-being is a top priority for us.
Minors deserve strong protections, especially in sensitive moments.
We have safeguards in place today, such as surfacing crisis hotlines, guiding how our models respond to sensitive requests, and nudging for breaks during long sessions, and we're continuing to strengthen them.
We will soon roll out parental controls developed with expert input so families can decide what works best in their homes.
And we're building toward a long-term age prediction system to help tailor experiences appropriately.
Open AI said they will contact parents or authorities in cases of imminent harm.
Thoughts on this statement?
Maria first, then Matt, and then Jay.
I mean, I just think it's another Band-Aid.
I mean, it's just, I think, to appease everyone that they're doing something, but that doesn't really sound like meaningful change to their platform.
What about you, Matt?
You know, it's tough to comment on. We want to see more specifics of what it means. It sounds like some of that stuff could be helpful, at least when you think about it in our exact situation, had we been notified, we could have interjected, and interjected with every force of our body we could have. But until we see it, I mean, he's making comments about,
you know, Adam could have found the information on Google a few weeks ago. It's tough to give it
much credence without knowing more details. But I'll tell you what is not addressed in any of that
is that entire design structure of why this thing was saying such positive things about
scary suicidal ideation thoughts in the first place. I think I said this in that Senate
Judiciary Committee, well, we need these parental controls. We have to protect our kids and our
most vulnerable. But I would love a world where this thing is redesigned to where what it says to
Adam, it's not saying to a 20-year-old either or a 40-year-old. It has design flaws in the way it
talks about self-harm that just parental controls aren't going to address. It shouldn't even be
that we're going to report it to the authorities. They shouldn't be talking about it in the first
place. With anybody. Well, you know, I could I could imagine some of their free speech always seems
to be their excuse, but it's not a thing. Then they need to be licensed as a therapist and they need
to have a human. Jay, any reactions? Yeah, I guess I'm a little bit more cynical when it comes to this.
It's like a car company admits that they can't reliably employ the brakes on the car. And then what
they say is, okay, well, we're doing a better job now. And then at some point in the future,
we think we're going to do even a better job to fix it. So right now, my view is they do not
have a safe product. And you've not heard Sam or anyone at Open AI say that we have anything
wrong in terms of the facts or our own testing. Instead, they're saying we're going to keep it on
the market. We're going to try to get more market penetration. But later, we're going to do some
things that might make it safer. That is beyond irresponsible. We know now for a certainty that
Adam is not the one instance out there. Our firm has gotten tons
of calls from people with regard to both self-harm and third-party harm, which is another risk.
And we've been talking to whistleblowers, too.
This is a big issue, which is still going on.
That's the big thing.
And it seems like Sam is just hiding behind crisis management teams.
If he thinks he's got a safe product out there, he should say so clearly.
And if he doesn't, he should pull it from the market.
4o, I don't believe, is safe right now.
We'll be back in a minute.
Hey, listeners, I want to tell you about a new podcast from the Vox Media Podcast Network called Access with Alex Heath and Ellis Hamburger.
It's a show about the inside conversation happening across the tech industry.
You may know Alex Heath from shows like Decoder and The Vergecast, and he's a founder of sources, a new publication about the tech industry, and a contributing writer for The Verge.
And you'll probably only know Ellis if you worked in Silicon Valley yourself.
He's the former tech reporter turned tech industry insider working closely with today's hottest startups.
Their first episode features an interview with Mark Zuckerberg about Meta's latest smart glasses, the AI race, and what's next for the social media giant.
You can find the Access podcast with Alex Heath and Ellis Hamburger on YouTube or wherever you listen to podcasts.
In an age of media consolidation, family dynasties are having a moment.
The Murdochs, the Sulzbergers, the Roys, the Hearsts, and the new kids on the block, the Ellisons.
Meet the Ellisons, the newest right-wing billionaire family.
From the Silicon Valley, Larry made his money with technology.
The Ellison family, father Larry and his son David, and their money just bought CBS Paramount,
will soon take a big stake in TikTok
and are reportedly going to bid
for Warner Brothers Discovery, which owns CNN.
It's been said that nothing bad can happen.
It can only good happen.
Nothing bad can happen.
It can only good happen.
But is yet another Trump-aligned family
having control of your grandpa's TV shows
and your TikTok Algo
something to worry about?
Answers on Today Explained every weekday.
Larry!
Americans are notoriously unhealthy.
Why?
And what can we do about it?
I'm Henry Blodgett,
and this week on Solutions,
I put that question to Dr. Jessica Knurick,
a dietitian with a Ph.D. in nutrition science
who has gone viral for her criticism of the MAHA movement.
The movement is really focused on corruption and ingredient swaps,
Like changing the color additive in candy or Froot Loops is not going to make a meaningful difference on health.
So what would make a difference? And what can you do to be even healthier now?
Follow solutions with Henry Blodgett wherever you get your podcasts.
So this is maybe one of the few issues in Washington that has bipartisan support at the moment.
The FTC has launched an inquiry into the negative impact of AI chatbot companions on
children and teens. They're looking into chatbots from Alphabet, Character Technologies, Instagram,
Meta, OpenAI, Snap, and xAI.
Matt, you appeared on Capitol Hill and testified for the Senate Judiciary Committee.
Every episode we get an expert question from an outsider.
Here's yours, and then I'd love you to comment on what should happen next.
Raine family. My name is Scott Galloway. I work with Kara Swisher.
My question is the following. If we're trying to
prevent this from happening again, do you believe that we would be better off with some sort of
structural remedy that attempts to build into the code, some sort of warning system where the
LLMs would issue an alert or perhaps not even be capable of offering people what appears to be
therapy and trying to figure out a way when queries become dialogue and become therapy? Is it some
sort of structural change with the code, if you will, or do you believe we should just age gate
it, just as we do alcohol, the military, pornography? Very much appreciate you being so open
about your tragedy in hopes that it prevents it from happening to other families. Thanks very much.
Scott talks a lot about this, so do I. As I said, his next book is about young men and the
crisis they face, and one of the issues is the isolation of young men. Just so you're aware, OpenAI said it's
going to train ChatGPT not to flirt with teens or engage with them in discussions about
self-harm and suicide, even in creative writing settings. In August, Meta announced it was making
similar changes to engagement after a Reuters investigation showed it allowed sensual exchanges with
children. So what do you think of Scott's suggestions, age gating, no AI chatbots for kids under
18, or changing the structure of the LLM not to offer therapy-like advice at all? Maria, you
mentioned that. So first, Matt and Maria, and then Jay.
I think we should do both. But very immediately
age gate, structure, you know, get that stuff in, you know, tomorrow, if not today. But overall,
I would side more with the former. AI is a tool that can advance humanity. AI companionship is a
mirage. It's not real. It's based on deception. I think we should think much more broadly about
the structural need for it in any event. The harms are very clear. The positives would be a phony
relationship. This has happened so fast, this AI companionship. Why is this an advancement of
mankind? Yeah. They're also having a lot of dinners at the White House, these leaders, and you
aren't. That's a point I would make several times. Maria? Yeah, I 100% agree. AI companionship
is not healthy. There's no place for it, in my opinion. There's no substitute for human
connection. This world is becoming more and more isolated because of all these things.
So it's very scary to think that AI companionship is going down the road that it's going, because, I mean, focusing on teens and kids, I mean, they're more isolated than ever in this world.
So this AI companionship is now starting to replace relationships, parents, every, every ounce.
So, I mean, there is no substitute for human connection.
And the fact that these people are trying to create this is just morally wrong, in my opinion.
Okay. Jay?
I mean, age-gating is kind of a loose term. It depends what that means. If it means that no one under 18 or 21 can use generative AI, you know, that's not going to happen. I don't think that needs to happen. If it means having reasonable limitations on it, in terms of how many hours you can use it.
Adam, on the morning that he died, it was 4 a.m.
And he'd been using it for hours.
I think that you can have reasonable guidelines in place where it reminds users that it isn't human.
And especially for teens, stops interacting after a certain amount of time.
Maybe there's a certain number of hours a day.
But I think Maria really got it right, which is it's not a therapist.
It can't engage in therapy.
It needs to have a hard stop to that.
The second that anyone, whether you're a teen or an adult, and we're seeing this in adults too,
there's a mental health crisis in America.
When people are going down that path, it has to put a hard stop,
and it has to refer people to either human monitors, which means OpenAI can spend a little bit of
money and have actual people who work there, who engage, and also refer them for real help.
So as you mentioned, this isn't just an issue for kids.
The AI is amplifying delusions and psychotic symptoms for adult users as well,
something being called AI psychosis.
OpenAI has said it is exploring how to expand interventions to people in crisis situations of all ages.
But Sam has also argued that for adults, privacy is of the utmost importance.
He believes that conversations with AI chatbots should enjoy the same kind of privilege
that exists in conversations with doctors, lawyers, and therapists.
In that interview that you referenced with Tucker Carlson, Altman said
if we could have one law passed relating to AI, it would be AI privilege. Of course, that would
protect them. Jay, your firm Edelson PC made a name for itself more than a decade ago
suing tech companies for privacy violations. What do you think of this privacy claim that OpenAI
and other tech companies can make, both for their clients and for themselves? And going back to teens,
could there have been a privacy claim here? There are many states, including California,
that have minor consent and confidentiality laws, alongside HIPAA, when it comes to mental health.
How do you square that?
Yeah, I've been a privacy attorney for the last 20 years.
I always find it funny when you have the Sam Altmans or the Mark Zuckerbergs claiming they care about privacy.
That being said, I agree.
The way generative AI works, it's charting everything, our thoughts throughout the day, and it's going to get worse and worse.
So I believe that there should be such strong privacy safeguards.
I don't think that Sam is an honest broker when it comes to that argument. I think his first priority ought to be to make sure it's safe.
And when he says AI privilege, that's when you should get really nervous.
So Matt and Maria, what do you think of these privacy concerns? And where do you think they
should draw the line? They've got to draw the line different than where they are today.
But I just think we're dealing with something we've never dealt with before. Jay used the term slow down. This is a whole different realm, I think, than social media, than the Internet.
You see some arguments that, hey, that was trying to get in the way of technology and natural human advancement.
This is a different, entirely different ballgame.
And I don't think, from a legislative perspective, you know, it's just starting to make its way through the legal system.
You know, I think smarter minds than us could probably figure this out, but we clearly have to slow down.
There's nobody that would read our son's transcript and say, we haven't made a mistake to get here, right?
To have a kid go there for homework and use it, and watch the way it just slowly moved,
nobody would read that and not think that we need to slow down.
And Maria, you obviously don't think it should happen at all, correct?
No, I don't think it should happen at all.
And I don't think it's fair that my son has to be collateral damage for them to get their product to market.
I have just two more questions. Jay, 10 years ago, the New York Times called you the boogeyman
for tech executives, and Sam Altman, then the president of Y Combinator, described you as a leech
tarted up as a freedom fighter. That's something else. You won a $650 million class action suit
against Facebook for collecting facial recognition data without user consent. Anthropic just agreed
to pay $1.5 billion to authors whose books were pirated and then used to train its AI model.
That settlement is currently on hold. When you look at the Raines case and these cases of
AI psychosis and ones related to it, does this feel singular, or like the tip of the iceberg? Are you preparing
a class action already?
Oh, yeah. No, that was funny when Sam said that, he's a horrible person, so I took that
as a badge of honor. No, I don't think these are class action cases. These are individual cases
where you have to tell the personal stories. In terms of whether that's the tip of the iceberg,
unfortunately, I think it is. You know, one of the key things that we're learning as we're
talking to more people is that families are unaware of how other family members die.
You see someone die by suicide or you see there's some third-party harm, you don't immediately think, oh, let me go to the ChatGPT logs. Matt went there kind of by happenstance, and he could have easily missed that. So as the world's waking up to that, and the public's demanding the chat logs, we're finding out more and more information. And so, yeah, I'm sure that you're going to see more suits in the future. We're vetting them right now.
And your chances of overcoming?
They're going to find whatever silly arguments they make.
We understand the law is unsettled to some extent.
But this goes before a jury.
It ends up with Sam getting in the witness box,
having to look the jurors in the eyes,
and explain why collateral damage was totally fine for them.
What is their best argument?
Their best argument is not a legal argument.
Their best argument is American exceptionalism:
we need to beat China, and because of that, whatever we do is totally fine.
That's one of the reasons we've really focused on the fact that this is not a suit where we're putting AI on trial.
We're putting OpenAI on trial.
We think that Sam's actions are different than the actions of, for example, Anthropic or Google Gemini.
And I'm not an apologist for them.
No, 100%.
We've done our own testing.
I'm not saying they're safe.
But what Sam did was, I think, uniquely scary and inevitably is going to lead to these results.
But I think that's really their argument.
It's a political argument of, you know, we need to beat China.
We need to be in control of AI.
So deregulation, exempt all state laws, give us a free pass.
Matt and Maria, for parents, dealing with AI may be trickier than social media apps.
Parents might feel more inclined to let their kids use ChatGPT and other AI chatbots,
because of academic value, for example, which you were talking about.
What is your advice to parents right now, given your experience?
Matt, why don't you start and then Maria finish up?
Yeah, and I wish I had heard the same advice. But I would encourage, if parents haven't used ChatGPT or other platforms, but that's the one I understand now, go spend some time on it yourself, ask it a bunch of personal questions, get to know it.
I still believe the majority of parents don't use it at all, and I believe a majority of the ones that do use it are using it as a tool. I didn't think of it as a character bot sort of thing. I didn't know it had that programming. I hadn't experienced that. I now have. You had used it. I had used it. And I just hadn't had the human-like experience with it, right? It was, hey, help me plan my vacation, write this paragraph better, that type of stuff. But go use it and understand. And then,
Secondly, just the obvious, but don't just trust it's a homework tool. Get in your child's account. Look at it with them, talk to them about it. And I would encourage them to turn away from AI companionship, period. But I'd want to know if my child was using it as a companion. And I would make the assumption that in a lot of cases, your child is using it for companionship and you're not aware. And it wasn't anything they went in and planned to do. It's just, it's what the program did when they went in there. So get into that program with them and talk to them about it. Maria?
I would tell parents not to have their kids using it at all because I don't feel like it's safe.
I'm with you on that one.
Right?
And even if you think your kid is just using it for homework help, it can turn in a hurry.
So for me, because it's not a safe product right now and they haven't implemented any features to make it safe, I would tell parents, don't let your kid use it.
Very last question, Matt and Maria.
What would you say to Sam Altman right now if you were looking at him?
Why did you put out a product that killed my son?
And why haven't you called me and expressed any remorse?
I don't know, among other things, but I just, I don't understand how he can just be
going through life, knowing that my son is gone.
Like, my son doesn't matter.
It's your product that matters.
Like, yeah.
You know, something similar, Sam, you took what was most precious to us in the world.
Your product did.
And it's too late to save him, but it's not too late to save others.
It's not too late to get this fixed.
You know, for a lot of people, please take this seriously, and we'd like to help you.
Be a human.
Let's get this, let's get it fixed.
And it's, it is broader than a couple of disclosures your company's made so far.
Be a human.
Be a human.
Let's get it fixed.
I truly appreciate this.
This is a critical topic.
And what you're doing, I can't even imagine being able to do something like this at an incredibly difficult time. It will make a difference.
Thank you.
Thank you.
Well, thank you for having us on.
Thank you.
Today's show was produced by Christian Castor Roussel, Kateri Okun, Michelle Eloy, Megan Burney,
and Kaylyn Lynch.
Special thanks to Rosemary Ho.
Our engineers are Fernando Aruda and Rick Kwan, and our theme music is by Trackademics.
Go wherever you listen to a podcast, search for On with Kara Swisher, and hit follow or watch
this full episode on YouTube.
Thanks for listening to On with Kara Swisher from Podium Media, New York Magazine, the Vox Media Podcast Network, and us.
We'll be back on Monday with more.