The Current - What OpenAI knew about the Tumbler Ridge shooter

Episode Date: February 25, 2026

OpenAI banned the Tumbler Ridge school shooting suspect’s ChatGPT account months before the attack, but didn’t alert police. On Tuesday Canada’s AI minister summoned the company’s safety team to Ottawa to explain its reporting protocols. Emily Laidlaw, a cybersecurity law expert and Canada Research Chair at the University of Calgary, joins us to explain who decides when AI companies escalate threats — and whether that threshold should be written into law.

Transcript
Starting point is 00:00:00 This ascent isn't for everyone. You need grit to climb this high this often. You've got to be an underdog that always overdelivers. You've got to be 6,500 hospital staff, 1,000 doctors, all doing so much with so little. You've got to be Scarborough. Defined by our uphill battle and always striving towards new heights. And you can help us keep climbing. Donate at lovescarbro.cairro.com.
Starting point is 00:00:30 This is a CBC podcast. Hello, I'm Matt Galloway, and this is The Current podcast. From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life. That's the Premier of British Columbia, David Eby, reacting to news that the Tumbler Ridge shooter had been flagged and then banned in June by OpenAI, the tech company behind ChatGPT. The Wall Street Journal first reported last week that the shooter's posts on ChatGPT had raised alarm bells internally, but that the company did not report them to police. OpenAI had meetings with the BC government on the day of the shooting, a phone call the day after about setting up an office in the province,
Starting point is 00:01:15 and then two days after the shooting asked David Eby's office for an RCMP contact. Up until then, the company chose not to inform Canadian authorities about their dealings with the shooter who killed eight people, most of them children. Canada's AI minister, Evan Solomon, met with Open AI's senior safety team yesterday. But according to Evan Solomon, the details of the shooter's interactions with the AI platform were not discussed in that meeting. Emily Laidlaw is a Canada Research Chair in Cybersecurity Law at the University of Calgary. Emily, good morning. Thanks for having me. Thanks for being here.
Starting point is 00:01:48 Let's start with what we know. The Wall Street Journal reported that in June, the shooter described scenes involving gun violence over several days. on chat GPT, a dozen or so staffers were so alarmed by this that they debated whether to take action, but were told that OpenAI felt the shooter's activity did not meet its harm threshold, and that's why it didn't say anything until after the mass shooting. How much do we know about how these sorts of protocols are set by tech companies? Well, I think the key thing to understand is that they set it themselves. Currently, there's no mandatory reporting criteria in law,
Starting point is 00:02:26 and so the companies have to develop their own internal policies about when they make a decision to pass that on. And that can be complicated to set precisely what an appropriate threshold is so that you're not overwhelming law enforcement or you're just passing on information that really isn't a credible threat. But clearly we can see from this situation the policy is set appropriately to pass on the key information. The Premier British Columbia, David Eby, we heard at the beginning, says from the outside,
Starting point is 00:02:59 it looks like perhaps this could have been prevented. Do you think Open AI, is it reasonable to say that, that Open AI could have prevented this? So I kind of will step back from this and say that these companies have kind of access to incredibly intimate details about people's thoughts and often those can seem more extreme than not and they have to make a determination of what to do about that. I think that the problem with the policy and they state that they only report credible and imminent threats. And that's an incredibly high threshold, right? I mean, what we can say now is you're a company, you had employees who were clearly deeply concerned about this individual and the threat that they posed
Starting point is 00:03:43 to the public safety and you didn't report it and they should have. The policy set that threshold so high that it had to be almost a very specific risk, a known risk of something to happen. And clearly, you know, that was set too high for an appropriate kind of threshold to pass that on to police. And so Evan Solomon, the federal government's new minister in charge of AI, halls representatives from Open AI into a meeting yesterday. But the shooter's interactions with that platform, the things that alarmed those dozen employees were not discussed. What do you make of that?
Starting point is 00:04:24 Well, the thing that I keep wondering about is what was chat GPT saying back to her? You know, we know from news reports that she was sharing, you know, key intimate details. I think a lot of this, of course, is going to be part of the investigation by law enforcement. So there's only so much that's going to be shared publicly at this time. But what is the point of a meeting with the tech company if you're not going to talk about the thing that everybody is talking about? Should that not be, if not, the only item on the agenda, then the first item on the agenda? Yeah, I think that some of the key things needed to be what precisely were the policies in place. Because, I mean, purportedly that was the reason for the meeting, right, is to sit down and talk about that policy.
Starting point is 00:05:11 What I don't know, and I don't think any of us know, is precisely what was to. discuss when it comes to the detail of kind of the content that she was sharing online, but also what chat GPT was saying back to her. And so Evan Solomon says in the wake of that meeting, the minister responsible, these are his words, we were disappointed, they did not have substantial answers for us, we asked them to have substantial answers. There are people who could say that if you're not going to talk about the thing that you should be talking about, that it's a symbolic meeting, no more than that.
Starting point is 00:05:43 Is that a fair assessment? I think there was high risk that this was going to. to be a symbolic meeting. And I think from both sides, right? From my end, when I look at this as someone deeply involved in the kind of law and policy space of this, is I want to see government leading on passing laws in this space. And that's what David Eby is calling for. David Evey has said that the federal government needs to introduce rules for when artificial intelligence providers must contact police in response to how people are interacting with the platform. Well, but it's more difficult than it seems to set that threshold. And let's talk about that for a
Starting point is 00:06:16 moment because the EU sets this as a requirement that platforms, when they have awareness that there is some criminal content that's a threat to life and public safety, they pass that on to law enforcement. But the question is what that threshold should be, right? There was nothing like mandatory reporting that was proposed in Bill C63, which was the online harms legislation that died on the order paper when Parliament was proroked. So what the government should be revisiting now is, is there an appropriate addition to a bill that would require mandatory reporting? And what is that threshold where we're not overwhelming police? We're not passing on individuals' most intimate details for police to investigate when it's unnecessary.
Starting point is 00:07:09 But also targets precisely this situation where we know now that that information should have been shared. with law enforcement. And I think what we have to focus on is not creating a system of just general surveillance. So when a company becomes aware of information, when there's a risk to public safety that is credible, then they pass it on to police. Is that general surveillance not going to be almost inevitable, given what we're talking about? It's not just a search engine, but these are chatbots. And that the nature of a chatbot is that you create, they're designed for you to create an intimate relationship with it, that they are designed to insert themselves and insinuate themselves into your daily life
Starting point is 00:07:53 and that you will share intimate information. Well, it is, and this is a whole new ballgame. And chatbots were not scoped into the previous online harms legislation, and they should be. They absolutely must be introduced. And a variety of friends, we have this case here, but we also have cases where incredibly vulnerable children, people with mental health risks were exploited because of the intimate nature of this and, you know, passing on how to guys to take their lives and so on. And so this first, we need to add chatbots within the scope, but also we need to think clearly about what from
Starting point is 00:08:30 A to Z, how does this work about the vulnerability of the individuals, the extent to which they're almost being monitored in this. But if we back this up and say, this is a company, you're offering a product or service, you should risk manage this, that's an easier way to go about this, then you know, you require them to have safety measures in place, which they had some. I mean, the content was flagged and sent to a human reviewer. That means that they have certain systems in place. Where it fell apart was after that when they tried to assess their policies.
Starting point is 00:09:04 Is there a sense that the government is as involved or as sophisticated in these safety measures as it could be? Because a lot of the focus of this government and of the minister is not to suggest that he's not doing other things, but has been about AI innovation, right? Well, and that's where this has exposed some of the flaws in that plan. You know, AI innovation is important and it's important for Canada, but it requires guardrails, we require regulation. This isn't something that you can just leave to the companies because their focus is on innovation. And making money. And making money. And you know what? There's only so much a company can do on their own. Because in the end, their job is to make money. It is the job of the government to sit down what those guardrails should be.
Starting point is 00:09:53 And so what I'm hoping to see is the federal government start taking seriously these digital policies and reintroduce an online harms bill, start looking at deeply consulting about an AI bill and finally reintroduce their private. sector privacy law. I want to play one more thing from David E.B., the Premier of British Columbia. Here he is speaking with our colleagues on power of politics yesterday. How do we ensure that there's a threshold for all of the AI companies across Canada to ensure that they report and bring this forward? Because the next question is, are there others? Are there other examples where flags were raised and Open AI or Google or Anthropic or GROC didn't bring it forward to law enforcement? Are they really? evaluating those decisions now, we can't know. And the only way to hold these companies accountable
Starting point is 00:10:46 is to have a consistent standard across the country. Emily, I need to let you go. But just what is at stake here if we don't figure out a way to create digital laws? I mean, the thing that people want is to prevent something of this from happening again. So what is at stake if we don't create the laws in a way that capture how quickly the technology is advancing? Canada lacks safety measures at the moment. And so we need to be passing those laws to ensure that Canadians at least are kept as safe as possible given the power of platforms. Are you confident that will happen? I don't know at this stage. I'm hopeful, but I'm always hopeful.
Starting point is 00:11:28 We'll talk again. In the meantime, thank you very much for this. Yeah, thank you. Emily Laidlaw is a Canada Research Chair in Cybersecurity Law and an associate professor in the Faculty of Law at the University of Calgary. For more CBC podcasts, Go to cBC.ca slash podcasts.
