The Decibel - The argument for AI regulation after Tumbler Ridge
Episode Date: February 27, 2026
Months before the mass shooting in Tumbler Ridge, B.C., earlier this month, the shooter was banned from OpenAI, the company behind ChatGPT, for violating its usage policy. The Wall Street Journal, which first reported this, said that the interactions with ChatGPT were describing scenarios involving gun violence. That has furthered calls for the Canadian government to regulate AI companies and their products – but there are challenges. Taylor Owen is an associate professor at McGill and founding director of McGill’s Centre for Media, Technology and Democracy. He’s also host of The Globe and Mail podcast Machines Like Us. He’ll tell us what responsibility companies have to report concerning or violent content, and what the government is up against in trying to regulate AI. Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Transcript
Hi, everyone. I want to give some response to the very disturbing media reports in the wake of the tragedy in Tumbler Ridge.
That's Evan Solomon, the federal minister of artificial intelligence.
The media reports he's talking about are that the Tumbler Ridge shooter had been banned from OpenAI,
the company behind ChatGPT, months before the shooting.
On February 10th, Jesse Van Rutsilar killed her mother and brother,
then went to Tumbler Ridge Secondary School,
and killed five students and an educator before killing herself.
In June last year, OpenAI suspended Van Rutsilar's account
for violating the company's usage policy.
The Wall Street Journal, which first reported this,
said the interactions were describing scenarios involving gun violence.
OpenAI decided not to report these interactions to law enforcement at the time.
When I read those reports, I immediately contacted OpenAI to get an explanation about the situation.
And then I have summoned the senior safety team from OpenAI in the United States to come here to Ottawa,
to have an explanation of their safety protocols.
And when they escalate, and their thresholds of
escalation to police, so we have a better understanding of what's happening and what they do.
Countries around the world, including Canada, are grappling with how to regulate AI companies
and their products.
Canadians expect, first of all, that their children particularly are kept safe and that
these organizations act in a responsible manner.
But how exactly does the government try to ensure that?
Taylor Owen is here to talk me through all of this.
He's an associate professor at McGill University
and the founding director of McGill's Center for Media, Technology, and Democracy.
He also hosts a podcast from the Globe and Mail called Machines Like Us.
I'm Cheryl Sutherland, and this is The Decibel from The Globe and Mail.
Hi, Taylor. Thanks so much for coming on the show.
Hi, thanks for having me.
So to start, what did we learn from this meeting between federal ministers
and the safety reps from OpenAI?
Well, we, the public, haven't learned very much yet.
All we've heard is some short press conference comments
from two of the ministers who were present.
But I did think more broadly, the fact the meeting took place is interesting.
It tells us that the government is taking this issue very seriously, I think.
You've seen a real tone change in the way the government is talking about AI,
from an issue of just broad adoption, we should all be using it,
to I think being legitimately shaken by a public safety and citizen safety concern.
So that's a major difference.
The fact that they needed to summon them to get these answers is itself interesting
because it shows that those answers aren't already known.
These are things we have left to the companies themselves to decide.
And I think the posture of the companies themselves was revelatory.
It's pretty clear from the Minister of AI's comments that they did not get the answers they wanted in that meeting.
Yeah, and when you say the comments from our AI Minister Evan Solomon,
he said that, quote, we expressed our disappointment that no substantial new safety measures were presented at this time, end quote.
So Minister Solomon was clear that they weren't going to be talking about the Tumbler Ridge case in particular because of the ongoing RCMP
investigation. So what exactly were they hoping to learn from OpenAI? I mean, a few
interesting things there. I mean, one, the idea that OpenAI would have brand new safety
protocols two days after being called to a meeting, I think shows that we're both leaving too
much to the companies themselves to make these decisions and also probably belies the bigger
regulatory and policy conversation that needs to happen around exactly this on our end,
not necessarily on OpenAI's end. It sounds like they were looking for clear indications of what
thresholds were being used by the companies monitoring chats, how they were monitoring chats,
whether it was AI or humans, and what decision was made when the 12 people in OpenAI flagged
this particular chat as something that they believed the company should be
communicating with law enforcement in Canada, and then why did someone, or some group of people, decide not to?
Right?
There's a whole series of cascading decisions, both automated, I assume, and human that the government
wanted some context on.
And to be clear, I think they are right to ask for that context.
What I'm more concerned about is that we are reliant on the companies themselves to tell us
and that we don't already know.
And that to me is the real deficiency here.
What responsibility do companies have right now to report concerning content?
How does that work?
I mean, right now, they have none.
That seems very surprising.
It is, except we've placed no obligations on tech companies and social media companies to do it either.
And AI is a very new phenomenon and a new technological capacity.
So we haven't even placed those requirements on the companies that have existed for
almost 20 years, let alone the ones that have been around for really in their current capacity
for under two years. So it's not surprising, but it's surprising when you compare it to other
industries and other sectors of our economy. Like, financial institutions have all
sorts of disclosure requirements for fraud and for illegal financial transactions. Doctors and
teachers have an obligation and sometimes a legal requirement to break confidentiality in certain
cases. So we do this in other spaces. We've just been very reluctant to do it in the digital
domain, particularly as regards spaces that are either private or considered spaces of very
flexible free speech. I think it will be a surprise to many Canadians to know that there
are AIs scanning their chats and flagging for certain kinds of content.
Yeah, that is clearly happening, right?
And in some ways, we're learning as we go through this horrendous example.
Like, I think there is something very intimate and personal about the norms that have
emerged around our conversations with chatbots that we need to acknowledge.
Like, this is not two people talking to each other.
This is a person communicating with an AI that has created
content designed to look like a human exchange.
Like that is a very different thing.
However, it's emerged as a norm because we as humans are very susceptible to thinking this is akin to a human exchange.
The norm has emerged that we will treat it as if it is and we will say things that I think we believe currently are confidential.
And I think people should know that when they're talking to a chatbot, this isn't a
perfectly confidential space. And maybe our norms will then evolve around that.
In this specific case with Tumbler Ridge, the RCMP had already been to the house of the shooter
multiple times for mental health calls and a gun seizure. And so law enforcement was already involved
with Jesse Van Rutsilar, even without OpenAI's flags, right? So it makes me think about the
onus, right? And not just in this case, but beyond this case, when it comes to very harmful
situations, tragedies like this, how much responsibility should be on these companies?
I think that's a very important framing for this conversation. And it's very similar to the one
we have been having for many years around the addictive and mental
health implications of social media on kids. And I think that both things can be true,
that social media can contribute to harmful addiction, mental health issues and self-harm tendencies,
but they aren't necessarily the sole cause of them.
But that does not absolve responsibility from the technologies that could be contributing to it.
We have seen partly through the disclosure around civil law cases in the U.S.
some very worrying conversations that have happened around self-harm, for example.
So we do know there is a possibility that these chatbots can convince people in a meaningful and material way to do very damaging things.
One person who had really strong words when it comes to OpenAI here was B.C. Premier David Eby.
From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being
dead children in British Columbia. I'm angry about that. And he's calling on the federal government
to bring in rules around when companies have to notify police about activity on their platforms.
How do you think that should work? Like, what should be the threshold for reporting concerning
content? I mean, the shorter answer is that I don't know what the actual threshold should be.
Because this is, I think, something legitimately new here that we're dealing with. That being said,
I think two things need to coexist. One,
is probably some form of mandatory flagging for the most egregious possible moments.
And that is an issue that public safety and the RCMP and potentially CSIS will need to
articulate.
Yeah, because who gets to decide that, right?
Because that can be a...
Who decides, right?
I mean, law enforcement will always want more.
This is what we know, right?
This is the history of social media regulation as well.
And I think there's a legitimate pushback from those that are concerned about how law
enforcement sometimes uses those data, or potentially abuses those data, or the privacy implications
and democratic implications of governments having access to unlimited data about our seemingly
private conversations. There's no question there's a privacy issue there. On the other hand,
I think there are a whole host of things on the regulatory front that we should be demanding
of these companies to ensure a baseline level of safety and transparency over how their
products function in our country. And that's very similar to how we treat social media. We just have to put
much more obligation on them to show to us that their products are safe before we use them. And I think that
has to coexist with these very stringent, specific and highly limited criminal justice flagging
mechanisms. We'll be right back. Let's talk about regulation, because this situation with the Tumbler Ridge
shooter isn't the first safety concern with AI chatbots. We've seen several lawsuits in the
US against companies after teens died by suicide after talking to AI chatbots. We've seen stories
of people getting wrapped up in delusions by these chatbots who kind of egg them on and get
deeper and deeper into delusions. And this is causing people, including yourself, to call on
governments to do more around AI regulation. So let's talk about what Canada is doing. Can you
talk me through some of the measures the government has tried so far?
Tried is the operative word. Various pieces of legislation have been either proposed or tabled,
but nothing has passed. So we currently have not tried anything in a material sense. However,
various approaches to regulating AI have been talked about. Initially, this was the AIDA provision
of Bill C-27, the previous government's privacy legislation.
Artificial Intelligence and Data Act, right?
Exactly, which was added quite late in the process of a multi-year consultation
around how we need to modernize our data privacy laws.
And the challenge is that it was based largely on what the EU had done with its AI Act,
and came out just before ChatGPT was launched.
So that shows you the kind of challenges governments face here, right?
They designed a regulatory mechanism for AI safety before we knew what chatbots were.
The other thing the government did, though, that I think is more important for this conversation now,
is over many years they developed an online harm framework for regulating social media.
And I actually think that is a much more applicable model for the
current types of harms, like you mentioned, and others, that we're seeing with consumer
AI tools, consumer AI products like chatbots. And that framework, the online harms safety
framework, essentially just says that there's an obligation on companies to ensure and to show
that their products, their consumer products, are safe before they are used by Canadians. And that
would be overseen by an independent regulator that has some audit capacity, visibility into
these systems, and ability to penalize the companies for noncompliance. So I think your examples of
the self-harm and mental health issues are real, but there's some more tangible ones that
show how this could work. So another big thing that happened in the last couple of months
is this feature in Grok, xAI's chatbot, that allowed people
to undress people inside their social media feeds.
And for a few weeks, this was a very prominent crisis, essentially,
where teenagers in Canada were using it to undress each other
inside their public messaging feeds.
So if we had this online safety regulator and system in place,
there is no way that that would have passed a risk assessment, right?
A new feature that allows you to undress people?
Like, could that pose risks in Canadian society?
Well, clearly it would have been flagged.
And if they didn't show how they were mitigating those risks,
it just never would have been allowed
to be deployed in Canada at all.
Where are we at with this Online Harms Act?
Because I know it died in Parliament in January 2025.
Where are we at with it now?
Well, there's a discussion,
active discussion about retabling a version of it.
My view is they've been far too slow
in doing that and are not treating it with the urgency that they've treated other policy priorities
right now. Now, that might be changing because of incidents like this. In my view, the political
position that it is too risky to impose regulations on tech in the middle of a trade
negotiation with the United States, which I believe is part of the government's hesitancy to do
this, is increasingly untenable as we see these issues being raised time and time again.
So is this uniquely challenging because most of these companies are based in the U.S.?
It absolutely is.
And for two somewhat separate reasons, both that they have financial incentives to commercialize their product and their data,
and also that the U.S. government, which has shown some hostility to our country,
has pretty sweeping access to that data.
You talked about a digital regulator.
Is there a model Canada should be following here?
Yeah, there's three things that have been tried in peer democracies:
the EU Digital Services Act, the Online Safety Act in the UK,
and the eSafety Commissioner in Australia.
And in my view, the version that has been proposed in Canada
in the original C-63 takes what we've learned from each of those
models and builds something that is iteratively better than all three. So I actually think
the Canadian approach, if we were to do something similar to the Online Harms Act and add chatbots into it,
which I think is the key thing that needs to happen. It's not in there right now. It's not in
there. Right now it's just social media. Large social media platforms like Instagram and TikTok and
YouTube. But to leave chatbots out of a digital safety plan would be missing, I think, both the nature of the current online harms
and the vulnerability that citizens currently face.
This stuff's going to keep happening.
There's no question about it.
I was on the government's online AI task force, and my submission to it flagged this in the fall,
that this safety stuff is going to happen and that the Online Harms Act, including chatbots in it, is the way to deal with it.
And so far, that strategy hasn't been released.
On that, when it comes to government regulation, governments are slow when it comes to regulating
fast-moving technology like AI, like social media.
Is there a sense that, you know, if they were to add in AI regulation, AI chatbots,
that perhaps by the time this goes through, there might be something else coming up?
Like, how do you create regulation that will be able to...
Oh, it isn't just a possibility.
We should have the expectation that that will be the case.
So on that, how do you create regulation
that will be able to kind of encapsulate this idea of harms when perhaps you don't even know what's coming up next?
Yeah, it's a great question. I mean, and the key is you need to design regulatory systems that are both adaptive to new technologies
and that focus less on the specific product or specific company and more on the effect that these technologies can have on Canadian citizens,
which would allow it to expand potentially or contract as these technologies evolve and new ones emerge.
Had we had the Online Harms Act and the commission and regulator set up two years ago,
it could have expanded its scope to include chatbots far more quickly than where we're at now,
where we have to stand up an entire regulatory capacity.
So you want to build it, and you want to build something that's
flexible and isn't necessarily limited to a specific company or specific technology,
but is adaptable.
And the core principles of ensuring the safety of Canadians and ensuring transparency over products
are going to transfer to new technologies as they emerge.
And those principles are what matters.
On that, is there the opposite effect, though, if you were to leave something so open that
perhaps you end up regulating something that we don't want as a society to regulate?
A hundred percent, right. And like, this is always a problem in every piece of legislation, right? How much detail do you put in the legislation and have Parliament decide on, and how much do you leave to a regulator who might be more technically suited and have the ability to act quickly when things change? And honestly, there's not an answer to that. We have that same
debate on every piece of legislation that involves regulation. But that's the debate I think we need
to have. And that debate has to happen in government, at parliament. So to have that debate, we need to see a
bill. Tech companies have historically been resistant to regulation. We saw that with social media
companies. How does the government navigate that part of it? There's two issues there. There's one,
the companies themselves being reluctant. And that means they will lobby against it.
And I think that should just be broadly expected.
And that's just part of regulating any industry that a government has to deal with, frankly, and figure out how to navigate.
The bigger issue is the alignment of the U.S. tech industry and the White House right now, which is in a very historically unique place where the White House seems willing to use its most powerful tools internationally to defend
the interests of U.S. tech and U.S. AI. And that includes using its tools of trade in and around trade
negotiations. So there is no question that the government is right to be nervous that the U.S. will
use our renegotiation of the Canada-U.S.-Mexico trade agreement as a lever and a weapon
if we head down this path. I'm not sure that that risk
negates the need to protect Canadian citizens
against what is clearly a growing set of harms.
But that's a decision they have to make.
Taylor, in an ideal world,
what would regulation look like?
In an ideal world, we would have a regulatory body or commission
whose responsibility is to look out for the safety
and the interests of Canadian citizens,
and to ensure that the digital products that they use on a daily basis
and that are embedded in their lives in meaningful and powerful ways
and that the government is encouraging them increasingly
to use and adopt are safe and transparent.
And I think that is a baseline expectation
that we are right to have in a democratic society,
and right now we don't have it.
Taylor, thank you so much for this really thoughtful conversation.
I appreciate it.
My pleasure. Thanks for that.
After we recorded this episode, OpenAI released a letter sent to AI Minister Evan Solomon
in response to their meeting about the Tumbler Ridge shooter's use of ChatGPT.
It says that the company has made changes to ChatGPT over the past several months.
Anne O'Leary, OpenAI's Vice President of Global Policy, wrote,
quote, with the benefit of our continued learnings under our enhanced law enforcement referral protocol,
we would refer the account banned in June 2025 to law enforcement if it were discovered today, unquote.
She also wrote that the shooter had created a second OpenAI account after being banned in June.
The company promised immediate steps to, quote, help prevent tragedies like this in the future.
That was Taylor Owen, an associate professor at McGill University,
and founding director of the school's Center for Media, Technology, and Democracy.
We have a link to his podcast, Machines Like Us, in our show notes.
That's it for today.
I'm Cheryl Sutherland.
Bianca Thompson joins us from the Canadian Journalism Foundation's Black Fellowship Program and is our associate producer.
Our producers are Madeline White, Rachel Levy-McLaughlin, and Michal Stein.
Our editor is David Crosby.
Adrian Chung is our senior producer, and Angela Pacienza is our executive editor.
Thanks so much for listening.
Thank you.
