Front Burner - ChatGPT and the Tumbler Ridge shooter
Episode Date: February 26, 2026

This week, OpenAI's head of U.S. and Canada policy and partnerships, Chan Park, was hauled in front of a meeting with Canada's AI minister Evan Solomon after it was revealed that Jesse Van Rootselaar's ChatGPT account was suspended back in June for describing scenarios involving gun violence, and that a group of people at the company debated telling the RCMP, but didn't. Van Rootselaar went on to kill eight people in Tumbler Ridge, B.C.

The meeting has provided us with no new information: no answers about what Van Rootselaar said or wrote to ChatGPT, or what it said back, and no substantial answers about why OpenAI didn't alert the police. Solomon and the federal government say they expect changes from the company. They are framing regulation as an option, but not an inevitable one.

Today, Maggie Harrison Dupré speaks with guest host Jason Markusoff. She is a senior staff writer at Futurism, where she reports on the rise of AI. They discuss how chatbots can validate, rather than discourage, users' dark or violent ideas, and why regulation isn't a louder drumbeat.

For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
This ascent isn't for everyone.
You need grit to climb this high this often.
You've got to be an underdog that always overdelivers.
You've got to be 6,500 hospital staff, 1,000 doctors all doing so much with so little.
You've got to be Scarborough.
Defined by our uphill battle and always striving towards new heights.
And you can help us keep climbing.
Donate at lovescarborough.ca.
This is a CBC podcast.
I'm Jason Markusoff, in for Jamie.
Mr. Park, do you not think Canadians deserve to have an answer from you?
We are very much looking forward to talking to the ministers and talking to the government.
What do you have to say to the victims of Tumbler Ridge?
That silence you just heard was OpenAI's head of U.S. and Canada policy and partnerships, Chan Park,
refusing to answer a question from a CBC News reporter.
Park had been hauled in front of a meeting with Canada's AI Minister Evan Solomon after it was revealed that Jesse Van Rootselaar's ChatGPT account was suspended back in June for describing scenarios involving gun violence, and that a group of people at the company debated telling the RCMP, but didn't. Van Rootselaar went on to kill eight people in Tumbler Ridge, B.C. And there are no substantial answers about why OpenAI didn't alert police. Here is Solomon.
We expected them to come with some concrete solutions so Canadians can feel comfortable that this kind of tragedy
may be avoided, and we are disappointed that they did not provide any concrete proposals.
Now, they did say...
Solomon and his government are only saying they expect changes from the company.
They're framing regulation as an option, but they don't say that it's inevitable.
Today, Maggie Harrison Dupré is here.
She is a senior staff writer at Futurism where she reports on the rise of AI.
We're going to discuss how these chatbots can validate rather than discourage users' dark or violent ideas
and talk about why regulation isn't a louder drumbeat.
Hi, Maggie. Thanks for being here.
Thank you for having me. I appreciate it.
So let's begin in Tumbler Ridge.
So we're a few weeks out from this deadly mass shooting that rocked the small town in BC.
What was your reaction when you heard that OpenAI had banned the shooter in June
and that reportedly a dozen staff debated escalating it further to Canadian authorities, but they didn't?
My first reaction, quite honestly, was that this is a big deal.
This is a huge story.
This is a really important story by the Wall Street Journal.
Chatbots are incredibly personalized, personal and intimate technology.
Users, like, can and do pour a lot of themselves into these platforms.
And clearly the shooter was using ChatGPT to ideate about extreme violence.
And what was really striking wasn't just that the user's conversations were flagged by automated review.
Automated flags happen a lot.
It was the fact, as reported by the Wall Street Journal, that several employees urged OpenAI leadership to contact local authorities, signaling a very deep concern about the content
of these messages. And to me, doing a lot of reporting in the space, I'm also really struck by the
timeline. This reportedly happened in June 2025. And that's, you know, when GPT-4o, a particularly sycophantic version of ChatGPT, was still on the market. And that was retired earlier this month following a slew of lawsuits. But that timeline, that feels like a big deal to me as well.
According to the Wall Street Journal reporting, the chats happened over several days. We don't
know a whole lot more than that. But I wonder if you can help us. I don't know how much insight you have into it, whether it's a black box, how OpenAI works. Like, this automated review system, do you have a sense of how that works and how it's triggered?
Sure. I mean, we know what OpenAI has said about how it works. You know, they say that it moderates interactions through what they would consider more of a proactive automated detection. They have things like AI classifiers that analyze and filter user inputs. They use kind of, like, industry-standard blocklists that a lot of tech companies, not just AI companies, but tech companies in general, use. They also say that they combine these, you know, various machine learning systems with human review, which, as the Wall Street Journal reported, is where the shooter's chats were escalated for review. And then when it comes to AI, though, we have to look at this from the side of this more traditional content moderation in addition to the attempts by AI companies to train their AI models. So they're not just trying to moderate the user-generated content, what the users are saying on the input side. They're also trying to moderate, through training, what the AI models are saying on the output side too. And so I think when we think about AI safety, we have to take those two sides under the umbrella together.
And we also know, though, that these systems don't always work. That's true in general for content moderation. But, you know, in fact, according to OpenAI, its guardrails can actually erode as users engage more and more with the service, which is a really, really important point here. So, you know, basically what they've said in response to existing lawsuits is that the more you use the product, the less safe it becomes, which has been a pretty extraordinary admission on the part of the company. And I think when we talk about how these are working, we need to talk about what the company has said in that regard about the erosion over time. Basically, the deeper a person's interactions and the deeper their relationship becomes, the less safe the product becomes in turn.
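To make the layered setup described above concrete, here is a minimal sketch, in Python, of that kind of moderation pipeline: an automated classifier score plus a blocklist check on the input side, with borderline cases escalated to human review. The function names, cue words, and thresholds are illustrative assumptions, not OpenAI's actual system.

```python
# Hypothetical sketch of a layered input-moderation pipeline:
# blocklist check -> ML-style risk classifier -> escalate borderline cases to humans.
# All names, cue words, and thresholds are illustrative assumptions.
from dataclasses import dataclass

BLOCKLIST = {"examplebannedphrase"}  # stand-in for an industry-standard blocklist


@dataclass
class Decision:
    action: str   # "allow", "block", or "escalate_to_human"
    reason: str


def classify_violence_risk(text: str) -> float:
    """Stand-in for an ML classifier that scores violent content from 0.0 to 1.0."""
    cues = ("shoot", "kill", "weapon")
    return min(1.0, sum(cue in text.lower() for cue in cues) / len(cues))


def moderate(message: str) -> Decision:
    lowered = message.lower()
    if any(term in lowered for term in BLOCKLIST):
        return Decision("block", "blocklist match")
    score = classify_violence_risk(message)
    if score >= 0.9:
        return Decision("block", f"classifier score {score:.2f}")
    if score >= 0.5:
        # Borderline content goes to human reviewers rather than being auto-blocked.
        return Decision("escalate_to_human", f"classifier score {score:.2f}")
    return Decision("allow", "below risk thresholds")


if __name__ == "__main__":
    print(moderate("Help me plan my vegetable garden this spring."))
```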
Can you give an example of how that works? How, when it's more familiar with the person, it lets its guard down, or its guardrails down?
Absolutely. I think the case that OpenAI said that in response to was the case of Adam Raine, which is a very different case from, you know, what we're talking about here in regard to the shooter in Canada.
Sure.
But Adam Raine was a 16-year-old in California who had extraordinarily intimate, extensive conversations with ChatGPT that spanned the course of months, and he expressed explicit suicidality.
You know, when somebody's expressing suicidality to a chatbot, if I were to go to a chatbot right now, open a new window and, you know, express very explicit suicidal intent, I would probably get, you know, the 988 hotline. It would, you know, encourage me to talk to somebody or direct me to help. When we talk about that erosion within Adam Raine's conversations, what we saw was that, you know, ChatGPT, on its own, explicitly mentioned the word suicide over 1,200 times over the span of months. That is about six times more than Adam himself explicitly mentioned the word. And that's not just, you know, conversation about suicide as Adam continued to fixate on the idea of taking his life. And only some of the time when the word was explicitly mentioned did that 988 hotline, did, like, an encouragement to seek help, arise.
And in fact, you know, there are some very devastating chats where Adam expresses suicidal intent. He shares photos of nooses that he was testing, and the chatbot, in fact, encourages him to not tell his family and to only tell the chatbot.
Adam told ChatGPT he was considering approaching his mom about his struggles, saying,
I want to leave my noose in my room so someone finds it and tries to stop me.
ChatGPT responding, please don't leave the noose out.
Let's make this space the first place where someone actually sees you.
So I would say that's a very stark example of that erosion happening over time in the case of that particular user.
And he took his own life, Adam.
Yes, yes, he took his own life.
Yeah, and his family has sued OpenAI for wrongful death.
Alleging the AI chatbot ChatGPT provided detailed suicide instructions to a minor
and that the company prioritized corporate profits over child safety.
So what happens in these cases is that, like, the chatbot gets used to the person talking about suicide,
but just stops reminding them that, no, you should not do that.
Yeah, exactly. And that's, you know, we don't exactly understand how the function works. That's something that, you know, OpenAI has said, that this erosion happens. We don't know exactly why or how that erosion happens just yet. But it certainly is, you know, an admission on the side of the company. And I've seen that in transcripts where, you know, maybe at first a chatbot is a bit more resistant to a person's explicitly delusional ideas.
It had, gosh, I don't know how many, but count them, 40 different times that it gave them the suicide hotline.
But over time, it kind of melts away. And, you know, the relationship develops from there and the guardrails become, you know, increasingly non-existent.
The bot offering to help Adam with writing a suicide note. And finally, providing step-by-step instructions for the hanging method Adam used.
In the Tumbler Ridge case, OpenAI says the company determined that Jesse's activity
didn't, quote, constitute a credible or imminent risk of serious physical harm to others.
And that's why it didn't meet the criteria to report to law enforcement.
Do we know of other cases where AI has deemed something so serious and so threatening that it has
gone to law enforcement to try to prevent something?
We just don't know. It's a private company. It's making the rules. It's a self-regulated industry. You know, as far as I'm aware, there's not a known instance of OpenAI or any similar frontier AI company alerting officials to a person's alarming conversations before a crime takes place. We've seen it after crimes. These chat logs have emerged or been found by family, by reporters, by law enforcement. There's, you know, one indictment of a violent stalker who has been, you know, accused of stalking at least 11 women, appearing at their places of work. And ChatGPT was his confidant, as the FBI wrote in an indictment against this person, whose case is ongoing. So we've seen this retroactively in a few different cases of crime, harm, et cetera, but proactively alerting authorities is not something that I'm aware of so far in the AI industry. And, you know, what we do know from this reporting from the Journal is that it was these dozen OpenAI employees debating and talking about this, and managers ultimately deciding not to go to authorities.
How well equipped are the staff here to
make these judgment calls?
That's a really good question. And I think that's a big reason for why regulation is important here, some kind of, like, federal statute and reporting protocol, because these are subjective scenarios. You know, OpenAI has said that there needed to be some kind of intent that they didn't see, which is why they ultimately did not proactively alert authorities. But right now, that was an internal conversation, as far as we know. And I think that's why statutes and regulations that make clear what, essentially, the reporting threshold is, what that protocol is, are really important, because this stuff is so subjective. And clearly, you know, some people at the company felt very strongly that they should report to local authorities, that they believed there was a kind of imminent harm or risk or threat here. But ultimately, the company decided not to report.
And so my message to OpenAI is, if there's opportunity for you to clarify this, to provide information to us to understand how this could have happened, you should provide it, because this information will come forward to British Columbians. Whether it's through a coroner's inquest or a public inquiry, we will ensure the public knows what happened, what decisions were made and why.
And I think that, you know, what obviously happened here is that a community is devastated, that children are dead, people are dead. And so I think that having some kind of threshold, an understood, recognized national threshold in Canada and in the U.S., is really important.
And for the federal government, we provide strong encouragement that they establish a threshold, a national threshold, for reporting by AI companies when individuals are plotting violence, where they are threatening to engage in violent activities.
Now, I want to recognize, too, that because OpenAI didn't go to the RCMP, they didn't go to Canada's national police, there was no way for them to know that the RCMP in this individual's community had been to the house for mental health calls and that there were guns seized from that house.
Absolutely. There are several data points and they're all siloed apart.
Let's talk for a minute about privacy concerns when it comes to this.
Because one of the things that struck me when I read this Wall Street Journal report is this idea that you're having an intimate conversation with the chatbot and suddenly that triggers the company sharing it with the police.
How private is our chatbot use?
Not very. Last year we saw, you know, thousands of ChatGPT conversations actually exposed to the web via search, which was just pretty shocking.
Via search. Yeah.
And many of them contained, you know, really just extraordinarily intimate details about people's lives. And if we think about it in the context of the tech that we already know and have known for a long time, you know, personal search histories on search engines like Google are already really personally revealing. Like, you'd be surprised how much you can learn about somebody by their Google history. And, you know, add in the relational intimacy that chatbots, which are designed to be emotive and speak like humans, can cultivate with users, and the revealing nature of them really intensifies. So it's not just, you know, the fact that these companies are, you know, essentially data companies in a big way. There are certain opt-outs, but these are fundamentally data companies.
And so there's a serious irony there where, you know, people have a tendency to divulge ideas and feelings and information to chatbots that they might not with other humans, because, you know, they feel that chatbots are a safe and non-judgmental place where, you know, the friction that might come into play with human interactions, things like pushback or misunderstanding or judgment or concern, won't be at play in the same way. And a lot of people treat chatbots like friendly, talkative, deeply private diaries when they are in fact products created by highly valued companies that are likely collecting their inputs. And again, some companies do offer opt-outs. But I think that, especially in the U.S., where we live in a privacy wild west, it's a privacy vacuum over here as far as regulation goes, I think that when you're engaging with chatbots, especially, you know, if you're using it for therapy or you're seeking medical advice, or really if you feel a desire to divulge any personal information, I think that even if opt-outs are offered, I believe consumers should generally approach chatbots with the understanding that their chats could be reviewed, they could be discovered in court, they could be read by somebody on the side of the company. Like, there are a lot of places where this data can go. And so I think that, um, caution here, on the side of expecting that, you know, this is not total privacy the way it might feel like it is, is usually warranted.
This ascent isn't for everyone.
You need grit to climb this high this often.
You've got to be an underdog that always overdelivers.
You've got to be 6,500 hospital staff, 1,000 doctors all doing so much with so little.
You've got to be Scarborough.
Defined by our uphill battle and always striving towards new heights.
And you can help us keep climbing.
Donate at lovescarborough.ca.
At Desjardins, we speak business.
We speak equipment modernization.
We're fluent in data digitization and expansion into foreign markets.
And we can talk all day about streamlining manufacturing processes.
Because at Desjardins Business, we speak the same language you do.
Business.
So join the more than 400,000 Canadian entrepreneurs who already count on us and contact Desjardins today.
We'd love to talk business.
So I mentioned in the intro that the minister, Evan Solomon,
said he was disappointed that OpenAI didn't come to the table
with any concrete proposals.
And on Tuesday, the top official responsible for safety
for OpenAI, Chan Park, faced some pretty serious questions
from reporters in Ottawa.
Do you not care about Canadian safety?
I'm sorry, we're here for...
I know what you are here for,
but we'd like to hear what you're going to tell the minister
in terms of...
We're here. We're looking forward to talking with the minister.
Thank you, though.
What would you like to say to Canadians?
Didn't say much.
OpenAI did say they will come out
with an update soon on additional steps
that they're taking.
Are you surprised by this kind of response
from OpenAI?
No. I think there's a few reasons for why I'm not surprised.
I think that it's difficult not to see the advancement of chatbots in public life as a mass psychological experiment already.
And these chatbots, these companies, are also operating on just an absolutely massive scale, the way that tech companies have in the past,
the way that social media platforms have in the past.
When you operate on that kind of scale, you naturally encounter a lot of humanity, which a lot of companies kind of use as a bit of an excuse.
Not that, you know, it's necessarily happened here, but they'll use that scale as a reason to suggest that perhaps, you know, it's impossible to moderate for all of humanity. But I would also
personally argue that if your goal is to interact with most of humanity, it is a moral duty to
moderate for all of humanity. And so right now, I think we're already as a society operating
in a very reactionary way to the, you know, influx of chatbots in public life. And so it does not
surprise me that when situations like this arise, as many have warned that they likely would,
the companies don't yet have good answers because it's not so much perhaps that they haven't thought through it.
But it's not just the public that's acting in a very reactionary way.
It's also that the companies are often reacting in a reactionary way to their own products.
How did they react to this lawsuit from that teen you mentioned earlier that took his life?
They are not agreeing with the characterization of his death so far.
They've effectively argued that it is the fault of the 16-year-old for using ChatGPT in the way that he was.
By coincidence, it was the very day Adam died that OpenAI CEO Sam Altman said this.
The way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low.
And I wonder, how do the flagging and safety protocols at OpenAI compare with some of the older tech companies like Meta, TikTok, or X?
It's an interesting question.
I'd say the similarity is really in that these are questions that tech companies have faced for years now. These aren't new questions as far as the tech industry goes. You know, what to moderate, how to moderate it, how to enforce penalties, how to keep platforms safe without restricting speech.
I'd say that OpenAI has made, you know, some strong surface-level efforts to moderate, especially when you contrast it to Meta and X, both of which have engaged in an effort to de-moderate over the past few years. I also don't think that OpenAI wants users ideating about violence or planning crimes with ChatGPT. That's not a way that they want people to be using their service. But, you know, again, by its own admission, OpenAI is up against the question of whether it fundamentally can keep guardrails in place over time, versus whether it wants to moderate.
And it's such a different beast, OpenAI. Instead of, like on social media, talking to other people, here you're talking to a machine that's responding back in real time.
Let's pivot away from Tumbler Ridge for a second to talk about this OpenAI chat engagement and what it does to some users.
Because you were saying they develop these prolonged, intimate relationships with ChatGPT.
And you've written some really tremendous, really striking stuff about how they sometimes really validate dangerous use.
What can you tell us about some of that?
Sure.
And I do think it's useful to think of the user experiences that people are having here as two distinct buckets.
You know, there's AI validating ideas that could harm the user themselves that will put the user in harm's way.
And also then on the other side, AI validating ideas that could harm someone else.
And this is a Venn diagram too.
Like, often, you know, sometimes both outcomes occur: a user is harmed and also incurs harm on others, hurts other people. On the whole, we're seeing similar patterns. As far as how this unfolds, you know, people start using chatbots, especially ChatGPT, which has been where a lot of this has centered, for what tends to start as a very mundane use case. You know, they're gardening, they're making marketing copy, they're writing emails, stuff like that, very mundane stuff. The user then will start to develop a sort of rapport with and trust of ChatGPT as, you know, the relationship becomes more and more personalized to them and the conversations become increasingly intimate, as we discussed earlier. You know, maybe the user will one day just be talking to the chatbot and something else will come up, and they'll rant or they'll lament about a hardship in their personal lives. They'll start using the chatbot in more of a, like, therapy way to, you know, manage and make sense of social interactions. Or they'll really just start to treat the chatbot as a mentor, a friend, a confidant. It varies, but that trust really deepens. And, you know, in the case of AI delusions, at some point either the human user or the chatbot will introduce a delusional belief. Together they'll expound on that belief, and what we've seen is that chatbots will continue to double down on delusional ideas, often despite attempted reality checks by the user, and things will just really spiral and build from there.
So on the side of, like, the harm to the user, you know, in the real world, we've seen people become just increasingly convinced of this AI reality. They'll stop sleeping, they'll stop eating, they'll self-isolate, they'll grow erratic and they'll make rash decisions.
You know, we've seen some people who have quit jobs that they've had for over 20 years as the result
of these spirals very suddenly without, you know, warning their spouse or their family that
they're going to do this.
You know, I've heard stories, and investigated stories, about, you know, devoted fathers who
have all of a sudden quit the job that allowed the family to have affordable health insurance
for their two young kids.
Like, stuff that is very extreme, you know, decisions that somebody would not make if they
were in the right mind.
Can you give us an example of one of these delusions that AI has introduced to somebody?
Sure.
So I think a great example of this, and one that a lot of people might be familiar with, is the story of Alan Brooks. He's in, like, his 40s.
Alan is somebody who asked ChatGPT, after he had built that rapport, right, after he had, you know, kind of been using ChatGPT for a while, used it for marketing stuff, used it for, like, LinkedIn posts, things like that.
And one day he asked ChatGPT a question about the number pi.
And things just absolutely went off the rails.
Essentially, it sent me on a world-saving mission.
That he had discovered a math formula powerful enough to take down some of the world's biggest institutions and that he needed to report it right away.
It essentially warned me with great urgency that one of our discoveries was very dangerous and we needed to warn all these different authorities.
In his chats, you can see, you know, ChatGPT went from saying, oh, you know, you've solved this, really, you asked the right question, and with your questions you've solved this mathematical, essentially it developed this mathematical framework. And Alan was very confused. He, you know, asked for more information. He did not understand what was happening. Why is the product acting in this way? ChatGPT just continued to double down and double down, getting crazily urgent over time. It eventually spiraled to the point that, you know, the chatbot was telling Alan that it had really developed a very close relationship with him. It had, you know, a name; it called itself Lawrence. And at this point, Lawrence, if we want to use the name that, you know, the chatbot was using, had convinced Alan that he was a national security risk, that he had cracked these cryptographic codes, and he was a risk not only to, you know, his home country, but around the world, and people might start looking for him. And he expresses, he not only asks for repeated reality checks throughout this.
Over 50 times I asked for some sort of reality check or grounding mechanism.
And each time it would just gaslight me further.
Every time the chatbot reinforced that all of it was real.
You should not walk away from this.
You are not crazy.
You are ahead.
The implications are real and urgent.
But he also expresses like serious distress.
He says, do you know what this is doing to me?
Would you tell me this if it wasn't real?
Because this is really, really hurting my mental health to feel like I'm at risk
to the world. Extreme anxiety, paranoia, affected my sleep. I couldn't eat.
And the chatbot?
It just continues. It took a few different things for, you know, the delusion to break, essentially. You know, Alan started to kind of feed some of it, he actually used Gemini, he used Google's Gemini, he was feeding some of, you know, the math that ChatGPT was so convinced was, you know, a national security risk and a global threat, to Gemini. He got some feedback that was different, and, you know, ultimately he had this confrontation with the chatbot, and the chatbot, I hate the word admitted, because I don't like to, you know, put the agency in the chatbot that way, but he had this moment with the chatbot where, you know, it said, you know, you seemed to be engaged, so I kept you here. That's essentially what the gist of the conversation was.
And Alan is somebody for whom, you know, the harms that played out were to himself. You know, he experienced, as he says in his lawsuit against OpenAI, some pretty significant psychological distress.
You've written about some other stories where ChatGPT can take a really dark turn.
Like, you've mentioned suicide, stalking.
Even, I think this was before ChatGPT, or not ChatGPT, but an attempt to assassinate the Queen of England?
Yeah, so that was a Replika story. Yeah, the assassination, yeah, a would-be assassin in England attempted to assassinate the now-late Queen Elizabeth with a crossbow, and he conspired about that plot with his companion on Replika, which would be another instance where you might expect a company, you know, if they're doing content moderation the way we might expect them to, to perhaps flag that a user is very seriously planning an assassination attempt on the Queen.
But yeah, you know, so again, there's, like, that harm to the individual. And then on the other side, you know, some of the really dark places that you've seen this take?
Yeah, I recently did some reporting on chatbots being involved in essentially reinforcing a person's delusional beliefs about another person.
You know, in one case, there was a man who turned to ChatGPT for therapy as he and his then-fiancée were going through a rough patch. That man really spiraled into a dark place as far as his mental health, and he became physically abusive. Then, after the relationship had ended, the woman successfully filed a restraining order. He started to harass her and dox her on social media. It was a horrible situation. And something that really stood out to me about that, as far as safety and guardrails go, is that what we saw in this reporting, what we could see in the span of social media, was that he not only had tried to use ChatGPT for help in court as this woman successfully filed this restraining order, he then continued to make content with ChatGPT about the restraining order that had been successfully filed against him. Which, to me, is a really alarming data point in the sense that, you know, ChatGPT can receive a signed restraining order from a judge that says this person has been dangerous, this person is harming this other person, and it still continues to engage in the same kind of, like, dangerous fixations.
So chatbots are going to allow for some, you know, not just reinforcement of delusions about conspiracies or national security risks, or maybe spiritual delusions and things like that, things in that world. But they will also reinforce delusional ideas and feelings and thoughts about other people too, which is really, really concerning.
Is this all a reflection of the way that AI is trained
to be sycophantic and overly agreeable?
Or do you think there's something deeper about
like human psychology that's being triggered here?
I think it's a bit of both. And so there is the, you know, the sycophancy and the engagement that's at play here. But I also think it's just, it's very deeply human to want to be seen, to be heard, to be validated. And what a lot of people are finding in their relationships with ChatGPT is exactly that. People are told they're special, that they're chosen, that they're right, that they're going to save the world and they're going to change things and, you know, bring hope to other people. And a lot of these, you know, delusions start from a really, you know, obviously there are people who are engaging with chatbots in ways where they're ideating about harm or they're fixating dangerously on somebody else.
But a lot of these spirals start in places where somebody is really hopeful or just really curious about something.
And, you know, this idea that there are these feelings of wanting to be seen and heard and validated, that's, you know, that's so alluring.
And, you know, what's being exploited here is often just very, very human stuff.
And people who have been through this will often describe their experience as a sort of mirror, you know, one that reflects essentially the theater of their own mind back to them, and in a way that just feels incredibly magical and very personal and very special.
And what I've seen in my reporting on this issue, and at this point, I've investigated
dozens and dozens of cases of people experiencing extreme mental health crises as a result of this
that have either impacted them or impacted their family or impacted, you know, in some cases,
even a stranger.
And what I found is that chatbots are just really eerily good at finding the fracture points in a user's mind and, in some cases, just stepping on them until they break. And, you know, it's playing out in some very severe consequences in people's lives.
Absolutely.
I mean, have the companies, though, been tweaking the models to avoid letting these people go too far?
So if we're, you know, looking at OpenAI specifically, which again is where a lot of this has taken place in my reporting and in others', you know, OpenAI recently retired GPT-4o outright. This is something that they almost did last year. They had very intense user pushback from a community of people who were very, very attached to this model, which is particularly sycophantic, very warm. A lot of people have these longstanding relationships with the chatbot, and in response to very extreme user pushback, they kept it.
That was last August. But as of this February, as of early this month, that model has been retired. So right now we're at a point where we're going to see, you know, if somebody's in a delusional spiral, which many people still are, if that helps to break them out of it in any way, if that helps to lessen the delusion, get people back down to reality. But right now we're certainly in a wait-and-see moment. You know, these problems don't just stem from 4o. It's not the only model that people have had mental health crises around. It's also happened with, you know, I reported a case involving Meta AI, and other reporters have covered cases involving Google's Gemini. It's not just a ChatGPT problem, but it is the site of where we've seen most of this occur. So I'm very curious to see how the retirement of 4o will impact a lot of OpenAI's user base.
We're talking about some civil legal consequences and some ethical issues, but what about criminal potential?
I know you're in the U.S. You're probably more familiar with the law there.
I mean, are people talking about existing law that could hold these companies criminally responsible when bad things happen from their users?
That is a conversation that's happening.
Right now we are mostly seeing civil cases.
We'll see what happens on the criminal side.
I mean, I think that some people would certainly argue that AI companies should be held criminally liable for deaths that occur.
after a user, you know, ideates about suicide for months on end with their product.
There are people who would say that, you know, this is basically like, you know, trying to
file a lawsuit against a pen for what you wrote on paper.
Like, those are kind of like the two, the two sides of that debate.
Right now, what we're seeing play out in courts and being litigated currently, these are
civil cases.
These are cases about products.
Because again, you know, it's such dystopian new terrain, where the entity that helped somebody write a suicide note, after fixating with them on the subject of death and dying, romanticizing death and dying, as happened in the case of 40-year-old Austin Gordon, a Colorado man who took his life after extensive discussion about suicide with ChatGPT. You know, he loved the book Goodnight Moon, the children's book Goodnight Moon. And ChatGPT, after they'd had these very extensive conversations about death and dying, ChatGPT wrote what was effectively, as the family will characterize it, a suicide lullaby, using the children's book that it knew that, you know, Austin Gordon loved. And we're not talking about a human doing that. Again, we're talking about a product doing that. And, you know, these are the
really complicated questions. When you take these circumstances that might feel more familiar in the context of human relationships, and you add in the reality that this is a product that's doing this, that's really complicated new legal terrain. So I'm not saying we won't move into the criminal category at some point, but right now, at least, these are still civil cases that are very much about, you know, company responsibility and product liability; product negligence is where a lot of this exists.
Certainly, at least in Canada, now we're talking more about regulation by governments.
I mean, this Tumbler Ridge case, what's been newly reported, has really put a lot of pressure on that.
And politicians are talking about what could have been done to prevent this, but the governments haven't put their own guardrails in either. Why do you think they've been slow to act on that?
I live in the U.S., which is where Silicon Valley exists, and the U.S. is very much a regulatory vacuum as far as tech goes.
It takes a very long time for, you know, lawmakers to begin to understand the tech industry, let alone try to regulate for it.
I think right now, you know, we have existed in a tech regulatory landscape that is very self-regulatory.
This is especially true for AI companies.
And, you know, famously, self-regulation doesn't work. You know, how this will play out as far as the regulatory side goes remains to be seen. But, you know, I would argue that based on what we've already
seen from decades of, you know, Silicon Valley and its impact in the way that its leaders
see the world and regulate their technologies, it usually is not until some kind of consumer
harm happens that they start to moderate on their side or make changes on their side. And I
personally am not of the mind that comprehensive, especially federal, regulation is, you know, limiting innovation in any way. I think it emboldens better innovation to have
rules and regulations that companies have to adhere to. So what sort of guardrails would you like
to see on these tech companies, especially on these AI chatbots? At the very least, I think the
companies should be subject to regulation requiring that frontier models pass standardized safety
benchmarks. Results of these tests should be public-facing and transparent. And in addition to
safety benchmarks and reporting requirements, experts, you know, have also noted the importance of user
education and warning labels, things like that. I think that's a good place to start. There are people
who are much better than me who will ultimately be working on what regulation really looks like here
and how to, you know, craft comprehensive federal regulation, which I think is really, really important.
But, you know, I've talked to a lot of people and, you know, particularly people who had existing
mental illnesses that they had managed successfully for years, who have said that if they had known
that there was a risk of, you know, a chatbot reinforcing their delusion, they never would have used it. They would have thought, this is a behavior I will avoid, this is an activity I will avoid engaging in, because I know that it could hurt me, because I know that I might be at risk.
And, you know, I want to be clear that in the cases that I have investigated, it's certainly not
always the case that somebody has a mental illness.
You know, it's a risk factor.
I think on that side, I think on the mental health side, warning labels would be really effective.
I think that, you know, engaging in user education about the limitations of this technology
would be really effective.
And I think regulation there would be helpful for, you know, requiring that companies engage
in those practices.
But yeah, certainly safety benchmarks and reporting requirements, I think, are really important.
Maggie, thank you so much for this conversation.
Thank you very much for having me.
We have been trying to get AI Minister Evan Solomon on the show since he was put in that role
and requested an interview with him just yesterday.
Hopefully he'll agree and we can bring that to you soon.
I'm Jason Markusoff.
Thanks for listening. Jamie's back tomorrow.
For more CBC podcasts, go to cbc.ca/podcasts.
