Front Burner - Are teen social media bans a silver bullet?

Episode Date: May 6, 2026

Australia was the first country to adopt a ban. Canada's federal government is signaling that something is coming from them soon. A recent Angus Reid poll found 75 per cent of Canadians support the idea. But even among those who acknowledge the harm social media causes for young people, the answer is not so clear-cut. We're joined by Taylor Owen, the Beaverbrook Chair in Media, Ethics and Communications at McGill University. He's a part of the federal government's expert advisory group on online safety and on its AI strategy taskforce. He makes the argument that a ban isn't a silver bullet and that we need to focus on making social media safer for everyone. For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts

Transcript
Starting point is 00:00:00 This week on Two Blocks from the White House, we're talking about a Supreme Court decision that could have a big impact on American elections. The decision narrows, some argue guts, the Voting Rights Act of 1965, and it's expected to lead to a major redrawing of electoral maps. Join me, Paul Hunter, and my fellow Washington correspondents, Katie Simpson and Willie Lowry, as we break down U.S. politics from a Canadian perspective. Find and follow Two Blocks from the White House wherever you get your podcasts, and watch us on YouTube. This is a CBC podcast. Hey, everybody, I'm Jamie Poisson. A couple months back, we did this interview with Jonathan Haidt. He's a social psychologist and author of The Anxious Generation.
Starting point is 00:00:52 And he made a really compelling case for why social media is awful for kids and teenagers and why he's been pushing for countries to ban platforms like Facebook, Instagram, and TikTok for kids under 16. Australia was the first country to adopt the ban. Manitoba recently announced that it will do its own. And Canada's federal government is signaling that something is coming from them soon. An under-16 social media ban is super popular here. A recent Angus Reid poll found 75% of Canadians support it. But even among those who acknowledge the harm social media causes for young people,
Starting point is 00:01:29 the answer is not so clear-cut. After that episode we did with Jonathan, we got a ton of emails from people who wanted to hear about privacy concerns, for example. My guest today is Taylor Owen. He's the Beaverbrook Chair in Media Ethics and Communications at McGill University. And he makes the argument that a ban isn't a silver bullet and that we need to focus on making social media safer for young people, but also for everyone. He's also on the federal government's expert advisory group for online safety as well as its AI strategy task force. So I can think of no better person to do this with.
Starting point is 00:02:11 Taylor, hey, it's great to have you. Thanks for having me. So as I mentioned, Premier Wab Kinew says Manitoba will move ahead with putting a ban in place on social media and AI chatbots for kids under 16. Our government will ban social media and AI chatbots for children and youth in Manitoba. So far, though, no word on a timeline or how it would be enforced. But Kinew says it's necessary to keep kids safe. We'll be working over the next few months with families and teachers in the province just to make sure that we get the message out. Marc Miller, the federal minister of culture and identity,
Starting point is 00:02:50 says that the federal government is, quote, very seriously considering a ban. Very seriously. And looking at some of the facets, we have some work to do, frankly, if we want to get it right. The politics, perhaps, of it are convenient, but the policy has to be right as well and has to align with the objectives that we're trying to achieve, but also has to be effective. And just, you know, you're in these rooms. How likely is it that we see something coming from the feds relatively soon? And what could it look like? I mean, in part because of the public appetite and pressure you opened with,
Starting point is 00:03:22 I think it's now pretty likely that some form of age restriction will be a part of a re-tabled Online Harms Act whenever that happens. So as part of a broader package of policies to address potential harms online, and potentially even on chatbots too, it looks like the things they're considering are what age this should be from. So is it under 16? Is it under 14? Is it just enforcing the current terms of service, which is 13, and which don't get enforced particularly rigorously to begin with? Is it a forever ban? Is this a permanent thing? Are we saying these products can never be safe for kids under a certain age, no matter what? Or is this a temporary ban until they prove they're safe?
Starting point is 00:04:12 and what exactly is in scope of a restriction of any kind? Which platforms are we talking about? Which are we not? So is this just the main social media platforms? Does it include sort of peripheral platforms like Snap, where we know there's a lot of harm, but which sit slightly more in the private messaging space? And now critically, and I think absolutely critically,
Starting point is 00:04:36 we've started hearing the folding in of chatbots into this discourse. So should AI chatbots, or potentially companion apps as a subset of chatbots, be included under this restriction? So there's a lot of moving pieces, but I think the intent of the government's pretty clear that they want some form of age restriction included in their package of policies to address online safety. Okay. And this Online Harms Act, we should be getting it kind of relatively soon, right? Like sometime this year? Is that, am I right? I mean, I only know from what they've told us, and so I'm on that expert panel, and they've re-impaneled us in order to address
Starting point is 00:05:17 some of these issues, including this one. And that signals to me that there's a pretty strong desire to get something out the door pretty soon. And my guess would either be before summer or after summer. Okay. So I think that's the best indication. So I want to get into what I think you're recommending and advocating for more a little bit later. But first, I just want to kind of look at how the ban in Australia is working logistically. Because we do have Australia to look at here. The ban for kids under 16, so kids and teens under 16, came into effect in December.
Starting point is 00:06:01 And so their ban says that the social media companies, TikTok, Snap, they're doing Snap, YouTube, Instagram, Facebook, are supposed to prevent teens and kids under 16 from holding an account or they'll get fined up to 50 million bucks. There is no penalty for the kids. And just how are the platforms going about doing that right now? How do we know how it's working? The strongest advocates for the ban, like Jonathan Haidt, make the argument that I think is right
Starting point is 00:06:32 that social media was a large-scale social experiment that we've run over the last 15 or 20 years. A mass social media ban is also a social experiment. And Australia is the first case of this. And so we're starting to see what happens when you attempt to take away social media from a generation of kids who have become dependent on it and use it in their day-to-day lives. And it's only been three-ish months, four months since December. So there honestly isn't a ton of rigorous evidence on what's happened and hasn't. But we do know some things. Initially, the government said about 4.7 million accounts were terminated in the weeks after the ban.
Starting point is 00:07:21 We now know that that includes all sorts of inactive and duplicate accounts, too. So the number is probably significantly less than that. There was a survey of parents that found that 70% of parents say their kids still have an account, so that's a 30% effectiveness rate based on survey data. There's an estimate that about 30% of kids are using VPNs. So that is a pretty big number, and I think much higher than people thought, for a desire to circumvent. The regulator itself thinks about 70% of kids are still using them. So I think it's kind of a mixed record, frankly. We've closed down a bunch of accounts, and kids are, perhaps
Starting point is 00:08:07 unsurprisingly, finding a whole host of ways of getting around it. And part of that is because it doesn't look like the platforms are trying very hard to implement it. Yeah, tell me more about what the platforms are doing right now. Well, it looks like they're kind of dragging their feet, frankly. Facebook, Instagram, TikTok, and I think Snap as well, and YouTube too, are all under investigation by the regulator. There's been cases where they've sort of reminded kids to change their age in the back end of the system, right? So I just don't think they're trying very hard to enforce that.
Starting point is 00:08:43 And that's kind of understandable. And it follows a pattern of them being relatively loose with these kinds of regulatory obligations in other countries as well, on other topics. And just how do people want them to enforce it? Are we talking about IDs, government IDs, facial age estimation? I mean, this gets to the huge challenge, right, of how do you verify that somebody is a kid or not? And there are a wide range of ways of doing this. Everything from uploading IDs, as some states in the United States are requiring for access to adult content, for example, to estimation based on photos, which have sort of a range of accuracy, not particularly accurate, to third-party providers who do the verification off platform and then signal to a platform or assure that someone is the age that they say they are, so it takes the platform out of the verification process,
Starting point is 00:09:42 to pushing it on to the app stores themselves, which is what the major platforms are asking for. They're trying to push it on to Apple and Google to do the verification when you set up your device. So there's a wide range of ways of doing this, with, as we'll talk about, different sorts of privacy considerations involved and different actors who are the core actors responsible for doing this verification. But someone has to do it, right? At some point, you have to know who's a kid or not, and you have to know within some degree of accuracy.
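To make the off-platform option concrete, here is a minimal sketch, in Python, of the general pattern Owen describes: a third-party verifier checks age away from the platform, and the platform only ever receives a signed yes-or-no claim, never anyone's ID. Every name here (the shared key, the token fields, the user reference) is an illustrative assumption, not any real provider's API.

```python
import hmac
import hashlib
import json

# Hypothetical shared key between the platform and the third-party verifier.
SHARED_SECRET = b"hypothetical-key-shared-with-verifier"

def verifier_issues_token(user_ref: str, is_16_or_over: bool) -> dict:
    """Run by the third-party verifier after it checks age off-platform."""
    claim = {"user_ref": user_ref, "age_16_plus": is_16_or_over}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": signature}

def platform_checks_token(token: dict) -> bool:
    """Run by the platform: verify the signature, then read only the yes/no claim."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # reject forged or tampered tokens
    return token["claim"]["age_16_plus"]

# The platform learns "16 or over: yes/no" and nothing else about the user.
token = verifier_issues_token(user_ref="anon-7f3a", is_16_or_over=False)
print(platform_checks_token(token))  # False, so account creation would be blocked
```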
Starting point is 00:10:05 responsible for doing this verification. But someone has to do, it, right? At some point, you have to know who's a kid or not, and you have to know within some degree of accuracy. Yeah. And just like sticking with what's happening in Australia, this idea that the platforms are dragging their feet. The Australian government is saying that, you know, these investigations have started and they could start levying fines pretty soon. So could you not make the argument that this ban has been in place for a couple of months now? And I know, you know, they're seeing 70% of kids and teens are still using it, but, you know, 30% aren't.
Starting point is 00:10:41 Yeah. And that maybe once they start levying fines, there will be an incentive for the social media companies to start taking this more seriously. Absolutely. And I think that's a perfectly reasonable argument. And we did an event with Jonathan Haidt recently where he made that argument that, like, this is 30% of the population in a few months who are now off these platforms. A couple of things there, though. One is they can only do that because they have a regulator. Which is a really important point. So anyone advocating for just a ban on its own, without an independent regulator enforcing all sorts of other measures plus the ability to fine and penalize and hold companies accountable, is really just a non-starter. So a ban on its own just fundamentally won't work. But Australia has this
Starting point is 00:11:27 regulator. So yeah, absolutely. They can now investigate and audit the companies for compliance and issue a series of escalating fines to ensure compliance. And that is almost certainly what will happen here. And those numbers will all improve, I suspect. And I also just want to throw another kind of logistical argument at you that even if it is easy to get around these bans, just the fact that there is a ban is kind of like helping parents out here. Like it gives them an excuse to use when trying to pry that.
Starting point is 00:12:03 phone away from their 13-year-old. Look, like, it's banned. I've got to take you off these apps. Parents are at their wits' end. Both the companies that build these products and the governments that have failed to regulate them and ensure they're safe have put all of the responsibility on parents. We've said, this is an issue of individual choice and parenting, and if your kids are addicted to these tools and are using them too much, that's on you. And I think that's intolerable. And I think these are much bigger problems. They're societal problems. And no parent can push back against this on their own. And so, in the absence of any other reasonable regulations or safety measures, they're saying, fine, just take them away. And at least then we can blame the
Starting point is 00:12:55 government when our kids complain. And it's not us and our teachers being the bad cops here. And I think that's perfectly reasonable. But I think that's a function of our failure, not just of the design of these products. I'm trying to think of a counterargument to that. And I wonder how you would respond to this one, you know, that because it's banned, kids might be less inclined to come to an adult and tell them about something bad happening on these platforms if they know that they're not allowed on them. Does that hold any purchase for you? Yeah, I mean, we're kind of bouncing back and forth on pros and cons arguments here.
Starting point is 00:13:36 And like 100%, like I think there's two elements to that. One, we've created something nefarious out of these tools, right? Which kind of pushes it into darker spaces and like less visible places where we have conversations about them. And if kids think they're going to get in trouble for using them, they're going to be less likely to talk about them, for sure. But it also pushes people into other spaces that are arguably less safe. And I think you've seen some of that in Australia too, that people are moving to Discord channels
Starting point is 00:14:08 or to gaming platforms or to other spaces that aren't necessarily subject to the kind of regulations that Australia does have on social media platforms. They are ahead of Canada in this regard. So in theory, Meta products, TikTok, YouTube, are safer in Australia than they are in Canada because of their safety regulations. And we're pushing kids away from those ones,
Starting point is 00:14:29 And we're pushing kids away from, those ones, rather than ensuring that those products are as safe as possible. Just say a little bit more for me about that. Like why in Australia Instagram safer than a gaming platform? So Australia has online safety regulation. They have something called e-safety Commissioner, who has responsibility to ensure that certain kinds of content are not present. on the large social media platforms.
Starting point is 00:15:04 So child sexual abuse material, certain kinds of sexual content, intimate images shared without consent, direct propositioning from adults to kids. Some of the most harmful types of content are effectively banned in Australia, and that's enforced by their eSafety Commissioner. But only on the big social media platforms.
Starting point is 00:15:26 There are a host of broader platforms, like the chat room in a video game, for example, or any number of locally hosted Discord conversations that kids end up in, that aren't subject to those regulations, and so are inherently less safe. And we don't, just to be clear, we don't have those protections in Canada. We have nothing. We have nothing, which is why the push to the Online Harms Act is so critical, I think,
Starting point is 00:15:53 in all of this conversation, with or without a ban, the baseline need is a digital regulator that enforces safety standards on these products. Okay, three songs. You guess who they're by. Three Little Birds, One Love, and Jammin'. Yeah, that was a really hard quiz. These are all, of course, by Bob Marley. A whole lot of the world felt close to Bob and his music before and after his passing. But the guy who really knew him best was his son, Ziggy.
Starting point is 00:16:29 On Q, Ziggy Marley will tell you about his new record, and about the song he says connects him to his late father, Bob Marley. You can hear that conversation now. Just search Q with Tom Power, wherever you get your podcasts. I want to spend some time interrogating the concerns people have around privacy with these age restrictions. So after we did the Jonathan Haidt interview, we actually got a ton of emails about this. This idea that in order to enforce the ban, it would require everyone, adults included, to, you know, likely or probably, give government ID or do a facial scan to access social media. And like, here's a quote from Wikipedia founder Jimmy Wales, who thinks Australia's ban is an unmitigated disaster. Quote, most of the people who are in favor of this sort of thing aren't in favor of that surveillance state and surveillance capitalism. I just think they haven't really thought it through. He says
Starting point is 00:17:26 he thinks it's teaching kids to accept surveillance from tech companies when they go online. And just flesh out that argument for me a little bit more, what do people say we could lose here in terms of privacy? There isn't a simple answer because there are a range of variables at play here, one of which being we do not have effective federal privacy regulation in this country either. If we did, and I think sequencing really matters here, if we had strong digital data privacy protections in this country, then some of this issue would be diminished. Right now we don't. So it's a free-for-all.
Starting point is 00:18:08 And so Jimmy Wales is totally right. These companies are collecting mass amounts of data about all of our usage, both on their platforms and off of them. The question of what would be needed exactly in order to comply with this kind of age limitation of access depends a bit on how certain we expect them to be. So the Australian bill, for example, says platforms need to take reasonable measures. It doesn't say they have to 100% guarantee that everybody is verified. That's a very different threshold. And what that's opening the door for is other methods that don't necessarily provide 100% certainty, but get pretty close. So for example, you could use a probabilistic method for age verification, where we know that these companies,
Starting point is 00:19:13 partly because we don't have good privacy law, which is why these things are all interconnected, can estimate our age already in order to target ads at us. Right? You can buy an ad to target a 13-year-old, right? Yeah. So they clearly know, to some degree of probability, who's 13. Now, is that a high enough probability for us to ensure compliance with this kind of bill? Well, that kind of depends on how it's audited by the government and what we're okay with. Other measures take this off of the platform entirely, as I mentioned, where you can have third-party companies or even potentially an arm's-length
Starting point is 00:19:52 government capacity, although that gets people nervous for other reasons. That's what Europe's doing, right? They're going to have some broad digital ID system, where the companies never know anything about you. All they know is that this third party has guaranteed your age, has approved your age. So there's a wide range of ways of doing this. I think lumping them all in with the ways that are the most privacy-invasive and require everybody to do it is not sort of looking at the full range here. Let's really dig into this argument that you're making, that just sort of a blanket ban is not enough, right? Or not the solution here? Yeah. That it, and quote, this is your writing, punishes users rather than the products causing harm. And just tell me a little bit more about what you mean there. Absolutely. I mean, I think first and foremost, a ban on its own makes the assumption that these products can never be made safe, that there's something inherently harmful about social media,
Starting point is 00:21:00 so that anyone under a certain age will inherently be harmed, or the risk is inherently too high, therefore we should completely take it away forever. One of the challenges with this is it runs directly against what we know works best in the online safety governance conversation, and the very principle of our Online Harms Act, which is that actually these products can be designed to be safe, but that the companies are not choosing to.
Starting point is 00:21:31 They're prioritizing other incentives, like financial incentives, for example, or user growth incentives, or attention-spent-on-platform incentives, over the safety of the users, particularly the kid users. And we know this from all sorts of whistleblower testimony and all these things, right, these documents that have come out that have shown that they've made decisions to make their products less safe because they prioritized other incentives. And that means they can be made safe, right? So the premise of our whole regulatory model, what's been done in Europe, what's been done in the UK, what we're proposing to do in Canada, is that actually you can make companies design these products to be safe. So how do you do that at the same time as saying they never can be?
Starting point is 00:22:20 Look, like, talk to me about how you could design them to be safe, because their entire business model is predicated upon you spending as much time on them as humanly possible so that they can sell you ads. And what makes you spend more time on it? Rage, social validation. I mean, we know all of this, right? Yeah. And these companies have been completely resistant to any kind of regulation, and they deny over and over and over again that their products are causing harm. Yep.
Starting point is 00:22:55 And here we are 10 years into this debate, and where are we? You know, I think I'm just trying to, like, marshal this argument that I hear from a lot of parents. I hear you, and I feel it too, personally. How we make them safe, ultimately, is we do what the Online Harms Act at its core mandates, which is that the companies have two responsibilities. They have a duty to act responsibly for all of their users, which means they have to do risk assessments on their products. They have to show how they're mitigating the risk of those products,
Starting point is 00:23:33 and they have to be transparent about that, to share data and be audited. But they also have a second duty, which is the duty to protect kids, children. And in that duty, they will have to implement something called an age-appropriate design code. And this is taken from the UK. It was designed by a Canadian when she was the Information Commissioner in the UK. And what it says is that for products that are likely to be used by kids, you actually have a much stricter set of design-based duties you have to
Starting point is 00:24:07 implement. And these include things like no data collection, no infinite scroll, no contact or direct messaging between kids and adults they're not friends with, which is crazy on the surface that that's even a feature, right? But it is on most platforms, and they've banned that. And that list can change based on the products we're using, and it is a very strict set of design requirements that are imposed if you are going to have access to children in our market. So I think there's very specific things you can do to incentivize the design changes. And that's precisely what the bill was intended to do. The challenge is it's going to take a little time to set that up.
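For a sense of how concrete those duties are, here is a minimal sketch, in Python, of an age-appropriate design code expressed as a checklist that a product either passes or fails. The duty names and the product fields are illustrative assumptions drawn from the examples in this conversation, not the actual UK code or anything in the Canadian bill.

```python
from dataclasses import dataclass

@dataclass
class ProductDesign:
    collects_personal_data: bool
    infinite_scroll_enabled: bool
    allows_unsolicited_adult_dms_to_minors: bool

# Illustrative duties, taken from the examples mentioned in the conversation.
DESIGN_DUTIES = {
    "no data collection": lambda p: not p.collects_personal_data,
    "no infinite scroll": lambda p: not p.infinite_scroll_enabled,
    "no unsolicited adult-to-minor DMs": lambda p: not p.allows_unsolicited_adult_dms_to_minors,
}

def audit(product: ProductDesign) -> list:
    """Return the duties the product fails; an empty list means compliant."""
    return [name for name, check in DESIGN_DUTIES.items() if not check(product)]

# A product designed around engagement fails all three duties.
current = ProductDesign(collects_personal_data=True,
                        infinite_scroll_enabled=True,
                        allows_unsolicited_adult_dms_to_minors=True)
print(audit(current))  # lists all three duties, so no access to under-16 users
```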
Starting point is 00:24:55 And this is where I do think a potential temporary ban is where we should be headed. That model I just described, I think it's best practice. I think it will be the best model of a digital regulator that exists in the world right now. But it will take a year or two to get up and going. And I think you're right that parents have every right to say to the government, you're telling us these products are unsafe. That's why you need a regulator. But you're telling us to wait two years. Well, that doesn't help if I have a 14-year-old.
Starting point is 00:25:27 Yeah. Right? So maybe what we need to do here is temporarily limit access to these products until the companies can show, via the regulation, that they're safe. Well, why don't you just flat-out ban it then, and say to these companies and any new company that would like to come into the market, go make something safe. And when we see the thing that's safe and we've tested the thing that's safe... I think we're saying the same thing.
Starting point is 00:25:55 I think we're saying the same thing, right? Like, I think so. Like, at some point you have to have a mechanism for evaluating what you mean by safe and how they're complying. So you ultimately need the regulator to do that, right? And you need a set of obligations that companies have to meet in order to prove they're safe. And that's already in the bill. The real question is, do you make it a tool for compliance? Or do you ban it and say the only way of getting access is by meeting these criteria?
Starting point is 00:26:25 And I honestly think that's just a question of emphasis. The key is, if you want access to people under 16 in the Canadian market, you need to prove they're safe. And this is how you prove they're safe. A permanent ban does something very different. It says they can never be safe. And I just don't think that's true. You get the sense that the Canadian government is really considering this approach? I think so. Yeah, I mean, it's being discussed.
Starting point is 00:27:07 Look, I think politicians look for blunt objects. And when they hear a large majority of citizens calling for something and it seems like an easy fix, then there's a real tendency to reach for that tool. The challenge is, like, and we've talked about this before, there are no silver bullets to these digital governance questions. They require governments to do multiple things at once, and that's the case here too. So limit access, absolutely, if you think this has become untenable, and I think I broadly agree with that, that these companies have proven unreliable and untrustworthy in their guarantees that their products are safe.
Starting point is 00:27:56 However, taking away social media, and potentially AI, as we can talk about, from a generation of kids, some of whom do get benefit from them, feels like an inadequate and narrow policy solution to a pretty complicated problem. I just want to push back on this idea that you can make it safe. So this idea that you could say to the companies, you can't collect data and you can't do endless scrolling. Why would they ever agree to that? That's their entire raison d'être, right? Because if they don't, they don't get access to kids in the country.
Starting point is 00:28:37 End of story. Right? So maybe some decide not to, right? But that's their choice. And I highly doubt it, given what we know about the level of competition between them to get younger and younger users normalized in their ecosystems and with their products. The very reason they don't apply their current terms of service at 13 in any effective way is because they want younger users. And it's a race to the bottom, because they want people on their platform, so those users stick with them and become more valuable as they get older.
Starting point is 00:29:13 I find it hard to believe that at least the main platforms would not comply. But if they choose not to, then we know the products aren't safe for the kids that are using them. I mean, same effect. Yeah. And just the idea that, like, you would tell them to make it safe for kids and teens, but then what? As soon as they turn 16, they're just back into the ecosystem that we all swim in? Yeah, I mean, that's one of the strongest arguments against a ban on its own. So if you say that the one policy solution I have is that we will ban it for kids under 16, then what is it about the platform that becomes safe when you turn 16 and a day? And what is it about the platform that is perfectly safe for all of us? We all face some of these issues. So that's why ultimately
Starting point is 00:30:04 you do need this broader regulatory model that has duties and responsibilities about the safety of adults and all users, and a specific subset of those for kids. Because kids are different, both in that they have different rights, and rights are a part of this conversation. We're talking about what people can and can't do in the digital environment. But kids have different rights. We have different responsibilities to ensure their safety. So you can probably impose a stronger set of responsibilities on the platforms to ensure that safety
Starting point is 00:30:35 and to limit some of the design features that affect kids. For adults, like, do you want to be in a world where we're saying you can't have infinite scroll for adults? I'm not totally sure I'm comfortable with that. With kids, I probably am, right? Because they're different. So there does need to be some nuance there. But I totally agree with you that, like, the idea that something magically happens when you turn 16 and that none of these issues are relevant.
Starting point is 00:31:06 And all of a sudden, we're going to give a generation of kids access to these tools, which have no regulations on them whatsoever, the day they turn 16. It makes no sense at all. I do imagine that there are people listening, though, that would be pretty happy with a world without infinite scroll for adults, too. Oh, I would too. And that actually flashed through my mind when I said that. You know, we haven't spent a lot of time on the chatbots.
Starting point is 00:31:45 But are you thinking about the chatbots in the same way that you're thinking about regulating social media right now? So I think this is so important in the debate, because all of a sudden, and I would say in the last two months, and in part, in fairness, in reaction to what happened around Tumbler Ridge, we have started including chatbots, or many have started including chatbots, in the scope of a ban. Manitoba did this. The Liberal Party Convention did this, right?
Starting point is 00:32:16 They voted on not just a social media ban, but also a chatbot ban. And I'm a bit concerned about this for a number of reasons. One is we have, as we've talked about, almost two decades of evidence about social media. We have a year of evidence about AI. AI is a pretty general-purpose technology, and if you tell me what exactly you want to ban, it gets very quickly to banning all access to any AI tools for everybody under 16, at the same time as we're also having a conversation, or this government's trying to promote a conversation, about how we should be using these things responsibly in our lives and in a manner that has all sorts of positive benefits to society. So I think reaching and adding an entirely new set of technologies and putting them into the most stark version of the policy, which is the ban, doesn't make a lot of sense.
Starting point is 00:33:15 On the other hand, including them, I think, in the scope of the regulations does make sense. So it seems to me that AI companies should have to have an age-appropriate design code for kids, for example, and do risk assessments on their products before they launch them in Canada. All the things, the obligations that social media companies will have to do under the Online Harms Act. But a quick ban on it now feels premature to me. But I just, and again, I just kind of want to push back on this in sort of the same vein. Because, you know, I feel like people might be listening to this. And they might be thinking in their heads about all of the warnings that we got 10 years ago about social media, right? Yeah.
Starting point is 00:34:00 That these companies do not care about your well-being, that the business model is designed that way, that this will not be good for democracy, it will not be good for mental health. And, you know, here we are 10 years later, and you said earlier in this conversation that Canada has nothing, you know? And it feels like, if anything, we're just kind of playing catch-up here at best with social media, right?
Starting point is 00:34:25 And then you have these chatbots, and you're already seeing all of these anecdotal cases: a 14-year-old in Florida whose mother alleges the chatbot encouraged suicidal ideations, a 16-year-old with allegations that the chatbot encouraged secrecy around his suicidal thoughts. And you have these companies saying that they're doing stuff, but, you know, we don't really
Starting point is 00:34:47 know what they're doing. And we don't know how long it could take to come up with a kind of more nuanced legislation. And I just like, how would you respond to that? The same way as social media. And I think I broadly agree with that, that there are some risks inside these systems that are intolerable that are a function of the companies that are building them acting irresponsibly. And therefore, a temporary ban until they prove they're safe makes sense. And that is the lever we should be reaching for.
Starting point is 00:35:24 And you say there, yes, it will take time to do this and to pass other regulation. But I actually think these should be part of the same piece of legislation. These are not separate things. That if the government tables a bill that puts all the things in place that we think will have the best shot at ensuring these products are safe,
Starting point is 00:35:46 recognizing that there aren't perfect solutions here, then tying the temporary limitation of these companies' access to kids in Canada to the implementation of that bill makes a ton of sense to me.
Starting point is 00:36:01 All right. That feels like a good place for us to leave it today. Taylor, thank you for this. Thanks for having me. All right, that is all for today. I'm Jamie Poisson. Thanks so much for listening. Talk to you tomorrow.
Starting point is 00:36:31 For more CBC podcasts, go to CBC.ca slash podcasts.
