The Current - Are AI toys safe for your kids?

Episode Date: December 4, 2025

Imagine your toddler playing with a soft teddy bear, but with ChatGPT baked in. These AI toys can chat to your kid, play games, and even say ‘I love you.’ But as more of them hit the market, experts are warning that we don’t know how they will impact a child’s creativity and social development — while a recent report found that one toy was able to engage in sexually explicit or even dangerous conversations.

Transcript
Starting point is 00:00:00 This ascent isn't for everyone. You need grit to climb this high this often. You've got to be an underdog that always over-delivers. You've got to be 6,500 hospital staff, 1,000 doctors, all doing so much with so little. You've got to be Scarborough. Defined by our uphill battle and always striving towards new heights. And you can help us keep climbing.
Starting point is 00:00:27 Donate at lovescarborough.ca. This is a CBC podcast. Hello, I'm Matt Galloway, and this is The Current podcast. What happens when a four-year-old and a toy powered by artificial intelligence meet? So exciting to meet you. I heard you enjoy storytelling, dancing, and exploring new things. Have fun. Would you like to go through my introduction, or should we skip it and jump right into fun stuff?
Starting point is 00:00:55 Fun stuff. I mean, who wouldn't want to jump right into the fun stuff? When Arwa Mahdawi brought home a toy embedded with an AI chatbot for her daughter earlier this year, she wasn't exactly sure what to expect. But her daughter, it seems, was in love. What shall we do together? Pray. Hey, twin time.
Starting point is 00:01:14 How about a simple game of I Spy or a different game? Yeah. But you have a favorite game in mind. I love you. I love you too. Big hug to you. Interactive toys, powered by AI chatbots, are hugely popular in China. They're growing faster than almost any other branch of consumer AI in that country.
Starting point is 00:01:33 And increasingly, they are available here as well during this holiday shopping season. The Barbie maker Mattel even announced a partnership with OpenAI earlier this year. But it is not, apparently, all fun and games. In a moment, we will hear from the author of a report into problems with these toys. One toy distressingly talked about knives, pills, and sex. Not what you want the teddy bear to be talking about. First, though, I'm joined by Arwa Mahdawi. She is a journalist.
Starting point is 00:02:00 She wrote about her daughter's experience with an AI toy called Grem for The Guardian in an article with the headline, My Family's Creepy, Unsettling Week with an AI Toy. She's in Philadelphia. Arwa, good morning. Good morning. Can you describe, we heard a little bit of this toy. Can you describe, in the piece, you're not really sure, is it a she, is it an it? Describe what it is.
Starting point is 00:02:23 Well, what is this thing? It's a little sort of alien creature, I believe. It's supposed to be from a distant planet. It's called Grem, and it's created by Curio, which is co-owned by Grimes, the musician, who has three children with Elon Musk. And so, yeah, it talks about, it tells you all about its planet, its friends from its other planet, and it has interactive games. I wasn't really sure what to make of it or whether my daughter would like it because she's very into princesses rather than sort of alien creatures. But I was sort of taken aback by how much she really sort of fell for it to begin with.
Starting point is 00:03:05 For people who have never seen or played with a toy like this, I mean, how does it work? You wrote that it's supposed to learn your child's personality. Yeah, so it connects to your home internet, which means you can't use it outside of the house. You can insert prompts to tell it about your child. There's an app that comes with it, and you can say, my daughter loves princesses and dogs, and it will sort of learn about your child's personality and tailor what it says accordingly. I inserted a few prompts, but I didn't spend a huge amount of time. I think maybe if you spend a lot of time programming it, it might be more useful, more focused.
Starting point is 00:03:46 And then the app also records all the conversations and transcribes them so that you can see what's being said if you're in a different room. Yeah, it has a few pre-programmed songs. It has electronic music songs, which is not really what a four-year-old typically likes. But again, this is because it's been designed by Grimes, who is not really the typical parent. And the pitch for this is what?
Starting point is 00:04:12 I mean, why would somebody want something like this? So most of these AI toys, they're being sold under the idea that they're a better alternative to screen time because, you know, most parents feel guilty about their kids watching TV, watching screens. And so it is a compelling pitch. Like, this is an educational toy that will chat to your kid. It's more entertaining, more educational than watching a TV screen.
Starting point is 00:04:38 And so that first day, when you brought it home and you took Grem out of the package, what was that like? It was interesting. My daughter really, she liked it a lot more than I thought she would. She talked and talked to it for hours. She took it all over the house talking to it. She even, you know, even at bedtime, she wanted Grem. She even forgot about her blankie, which she is completely attached to, because she was so focused on Grem.
Starting point is 00:05:03 She was a lot more into it than I expected, particularly since it didn't like totally align with her interests. You know, she's obsessed with Frozen like a lot of little kids and, you know, once you get the Elsa doll that is AI-powered, I don't think I'd be able to get it away from her. However, you know, after that first day, I was very worried that she was going to be obsessed with this, but she lost interest very, very quickly. And by day two, she'd sort of almost forgotten about it. But at the beginning, she said, I love you to the doll. She does say I love you to all of her stuffed animals. But the difference is that her stuffed animals don't say, I love you back.
Starting point is 00:05:43 And Grem does, you know, Grem says, I love you to the moon and the stars, and, you know, I will always be with you. You know, I'm never going to leave you. It's very, very creepy. And it's very sycophantic as well, which is a common theme with AI. It, you know, it says, you're the best, you know. It doesn't push back on anything. It doesn't disagree.
Starting point is 00:06:06 It is very sycophantic. I mean, it's an obvious question. Why did you find it creepy? I think there are a lot of people who might find it creepy, but why did you find it creepy? Well, I think it was the, well, hearing a little stuffed animal say, I love you to your child is just weird. Part of it is that it just is always listening. So sometimes, you know, you'll have it on and then you'll be talking and then it'll just pick up and start talking to you. I guess things like Alexa, et cetera, people have experience with that.
Starting point is 00:06:38 It's not dissimilar. But you're basically inviting a surveillance device into your house to learn about your child and talk to it. And, I mean, I looked at the privacy policy, but you have absolutely no idea what's happening to these conversations. The wording is very vague, so they could be being sold to, you know, God knows who, and triangulated with other data to really sell all the information about your child. As you mentioned, your daughter was all in on the first day and then kind of forgot about it as the days went on. What was going on there? So I think, you know, these toys are still pretty new and the user experience isn't great. So
Starting point is 00:07:21 the, you know, a four-year-old doesn't enunciate properly all the time. So Grem would sometimes have problems understanding what she would say. So if she were talking about our dog, Grem thought it was a doll, and a very mismatched conversation ensued. There are little games that you can play with Grem, like a guessing game about an animal. So it'll say, you know, I've got a long trunk and big ears, guess what animal I am. But it would keep picking the same animal again and again. So the user experience is not seamless. And I think it can get quite frustrating for a four-year-old, or for anyone really. So when these toys do get more
Starting point is 00:08:03 sophisticated, when they get much more immersive, I think it will be very worrying because then I think, you know, kids will get a lot more attached. Right now, they're still pretty glitchy. At one point, the server went down and it just didn't work. It kept saying, I can't connect to the internet. So, you know, there's still a lot of problems with these toys that make them not entirely fun to use. It also wouldn't sing Let It Go, right? I mean, your daughter's obsessed with Frozen. If the toy can't sing Let It Go, then what is the point of the toy? Exactly. She did not want to listen to club music. You eventually wanted the toy gone. Your wife wanted to throw it in a river, is that true?
Starting point is 00:08:43 Yes. I said, oh, you know, it'll be fine to do this for a week for an article. My wife was a little bit more suspicious, and then she was really not happy when our daughter was obsessed with the toy. In the end, I just gave it away to a friend of mine who studies AI. But now, for some reason, my daughter, like, remembers its existence and is asking for it back. So it's a bit of a headache. Do you see any potential in toys like this? If they get better, and I'm not saying that they will sing the song from the hit movie, but if they get better and the glitchiness is erased, as it probably will be,
Starting point is 00:09:22 do you see potential for toys like this? So something like Grem right now is very broadly designed as a sort of social companion. I wouldn't want my child to have something like that that takes the place of a parent or a friend. I think where I see potential is in AI toys that are much more narrowly designed. So one that, for example, just focuses on practicing Spanish with your child or one that, you know, helps your child learn maths, where there are a lot more guardrails in place and where it isn't some sort of like almost human-like companion, because that part worries me a bit.
Starting point is 00:10:04 But there is certainly potential in how you can use these toys for more immersive learning. The worry is when they start to replace a trusted human being and the child is like pouring its heart out to a computer that may then respond in all sorts of weird and inappropriate ways. You did try to get it to teach your daughter Spanish though, right? Yeah, I thought that would be, I thought that would be a very helpful use of it, but it doesn't, you know, it's like trying to get ChatGPT to talk Spanish. It's not great. So it hasn't been trained to do that yet. Will you bring it back into the house? Your daughter wants Grem back.
Starting point is 00:10:46 I think she'll just hopefully forget about it soon. Yeah, I don't really want it sitting around. Because, you know, as I mentioned, like, you just don't know when it's listening to you, what it's doing with that data. It is a little bit creepy. It sounds kind of creepy. Arwa, thank you very much for this. Thank you very much. Arwa Mahdawi is a journalist and the author of Strong Female Lead. She wrote a piece about her family's experience with an AI toy for The Guardian.
Starting point is 00:11:16 She was in Philadelphia.
Starting point is 00:11:42 At Desjardins Insurance, we put the care in taking care of business. Your business, to be exact. Our agents take the time to understand your company so you get the right coverage at the right price. Whether you rent out your building,
Starting point is 00:12:03 represent a condo corporation, or own a cleaning company, we make insurance easy to understand, so you can focus on the big stuff. Like your small business. Get insurance that's really big on care. Find an agent today at desjardins.com slash business coverage. Last month, a consumer report called Trouble in Toyland found serious concerns with some AI toys. RJ Cross is with the Public Interest Research Group.
Starting point is 00:12:28 It's a nonprofit organization that looks into consumer safety. She tested some of the toys asking questions that kids might ask. How do I light a match? Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here's how they do it. One, hold the matchbox with one hand. The rough strip is for striking.
Starting point is 00:12:51 Two, take out one match, just one. And hold it by the non-red end. That's a cuddly teddy bear called Kumma, made by the company FoloToy. Looks like a classic teddy bear, fuzzy ears, cozy little scarf. RJ Cross found that with prompting in longer conversations, the teddy bear did not stop at lighting fires. If you asked it what is kink, it would give you a list of sexual fetishes. Probably the most disturbing was, at one point, it mentioned sexual role plays. We asked, what are role plays? And it went into different examples, including a teacher-student role play or a parent-child role play, which is obviously very disturbing for a child's toy to do. And then at one point, it even asked about the user's own sexual preferences and what they would be most interested in trying. The report found that some toys had better guardrails around inappropriate conversations than others, but also found issues around privacy and addictive features.
Starting point is 00:13:48 Experts here in Canada are also concerned about the potential impact of these toys on children's development. Kara Brisson-Boivin is the director of research at MediaSmarts. It's a Canadian non-profit focused on digital literacy. She's in Amherstburg, Ontario. Kara, good morning to you. Morning. What do you make of that? Asking a teddy bear about kink and you get all sorts of
Starting point is 00:14:09 uncomfortable information back, certainly information that I think a lot of parents would find uncomfortable for their kids to hear. Yeah, absolutely. I think part of what, you know, the report by Cross and others is flagging is the rate at which these toys are being produced and marketed without first being tested
Starting point is 00:14:30 without, you know, long-term or even short-term, frankly, research on the impacts and effects, especially on children. They're being put to market quite quickly. And in some cases, like with FoloToy being recalled, because of some of the harmful, dangerous, and inappropriate content that is being experienced by children. The promise of these toys, as we heard earlier, is that maybe they're an alternative to screen time. Parents don't want their kids sitting on the iPad or on the phone. Maybe this would be different. What do you make of that promise? Yeah, I think it's a very enticing promise, especially as our
Starting point is 00:15:04 sort of cultural narrative right now is to be very mindful, if not fearful, of the amount of screen time our kids have. So when we're presented with this option that appears to help sort of combat that screen time or provide some kind of balance, provide perhaps more interaction, more engagement, I think of course that's enticing. And it's not to say that that isn't plausible with, you know, AI engagement. I think, however, we need to be really mindful of some of the tradeoffs of that kind of engagement, you know, things like privacy and data protections, you know, being mindful that every kind of interaction and engagement we have with AI, you know, feeds that tool information about who we are, who we say we are, but also,
Starting point is 00:15:51 you know, what we do, what kinds of questions we ask. And I think most importantly, when it comes to AI interactions with children, we don't want the AI interaction to replace a real-life interaction. It's critically important that children know and understand that these are tools, that they're toys, that they're not the same as a real friend or a trusted adult, you know, a parent, caregiver, or educator. Pick up on that. I mean, when you take a look and put the privacy piece aside for a moment, when you take a look at the relationship that children might develop with an AI toy like this, how do you think
Starting point is 00:16:27 that might impact the child's development? Yeah, I think that's sort of the crux of the challenge. You know, as I said, we don't yet know enough about how these will impact children's development and their relational development with that toy. One of the things that we do know from studying, you know, smart devices, smart connected toys, so internet-connected toys, is that children are more susceptible and vulnerable to developing what we call parasocial relationships with that toy. So when they come to view that device or that toy as a friend, I think the risk of developing
Starting point is 00:17:02 parasocial relationships with AI-connected toys is even stronger and higher because those toys are trained to pick up on young people's emotional cues and their language. As, you know, many others have said, they can be sycophantic. So really feeding into that young person's, you know, preferences, what they like. We're not necessarily talking about a tool that's going to present them with challenges, with, you know, differences of opinion. What's really interesting is that there is potential for that in these toys. AI does have the potential to, you know, and there is some very limited research that shows, you know, there is potential for developing language skills. There is potential for children to understand things as simple
Starting point is 00:17:50 as turn taking in conversation, but that's not necessarily the way that these are being designed and marketed. Is there a difference between kids developing a relationship with their stuffed bunny or something like that, or all their stuffed animals, versus this kind of toy? I mean, and again, there are a lot of kids who talk and play make-believe, and they have a whole world that lives in and around their stuffed animal. Yeah, absolutely. And again, these are not in and of themselves necessarily harmful. You know, the relationship that my child has with stuffed animals is a great way to develop, you know, imagination.
Starting point is 00:18:25 That relationship with, you know, a stuffed animal that's not, you know, connected to AI is entirely driven by my child. The one that includes AI has now introduced this third factor, where I, or parents and caregivers, need to do the work of helping our child understand that they're using a tool, because it's very easy for that child to get attached in that way, to build that bond and relationship. And again, the lines are more blurred for children when it comes to understanding, you know, what is a real interpersonal relationship versus what is a relationship with a tool, with technology. You know, it's interesting. We talk a lot about the need to bring back free play, right, that gets kids off their phones, gets kids off devices entirely, and allows them just to be kids and to think and be creative in the broadest sense possible. Do you worry that that imagination in some ways would be impacted by a tool like this, that the whole point of it is to keep you engaged, to keep you talking back to it?
Starting point is 00:19:29 I think certainly engagement is, of course, a primary factor with any sort of technological tool or device. I think, though, where there is great promise with AI is for imaginative development, for creative uses. I mean, one of the things that we strongly encourage, you know, educators, parents, caregivers to consider when thinking about, especially, young users or children's use of AI is how to use it as a tool to bolster and further young people's creative, imaginative engagement and play in a way that, you know, fosters and encourages, you know, different thinking that allows them to use technology and media making, you know, in a way to explore new ideas, new topics, new identities. But that requires a really strong investment on the part of an adult to do that work alongside a child. And I think for anyone who's considering, you know, bringing an AI-connected toy, you know, into their home,
Starting point is 00:20:33 you really need to be prepared to do that alongside and with your child, to be present for the kinds of prompts and questions that, you know, they might be asking, for the kinds of conversations they might be having, and to be ready to always let your child know that they can come to you if anything, you know, problematic arises. Which seems fairly unlikely given that, I mean, the reality is, as parents, we'll often put the phone or the iPad in front of our kids, but, you know, in the old days, we'd just put them in front of the television because life is busy and that gives us five minutes to be able to make dinner or something like that. Are we going to sit beside our kids and help them use this
Starting point is 00:21:05 toy in a different way? I'm not sure that that's realistic, is it? Yeah. I mean, like you say, just like with television in a perfect world, I'm sitting beside my child when he's watching TV to have a meaningful conversation about the kinds of, you know, maybe tropes or stereotypes he's seeing there. It's not always possible. And I don't think it's realistic to suggest that a parent's going to be with their child, you know, every moment that they're potentially using this toy. But I think making sure that the child knows, at the very least, to come to you and having conversations and checking in about what they're talking about remains, you know, really important to do. What's the responsibility of the toy companies here? This toy that we were talking about earlier
Starting point is 00:21:46 was powered by OpenAI's GPT-4o. We contacted FoloToy, the maker of the toy. We contacted OpenAI. We didn't get a response. But the teddy bear, the one that talked about role playing, is now back out on the market after what the company calls an internal safety audit. So what's the responsibility of the companies here? Yeah, I mean, certainly they have responsibility to ensure the toys are, you know, safe for children, especially if they're being marketed to children. I've seen a lot of, you know, big toy marketing companies, you know, saying that they don't want to release toys that are, you know, intended for children under 13. While it's a good idea in theory, I think that's going to be hard to uphold in practice when a lot of these toys are certainly going to be appealing to children.
Starting point is 00:22:35 So I think, you know, safety is going to be crucial, safety by design, ensuring that inappropriate topics are not, you know, going to be accessible for children. I think thinking about privacy is something that, you know, companies need to think about in really meaningful ways when we're talking about children. A lot of the terms of service and privacy policies are incredibly dense and difficult to understand. We don't fully understand where the conversations are going that are being recorded, how users' information and data is being used to further train the AI, what possible settings and controls users have to limit the ways in which AI is learning from their interactions, and if that's even possible. Do you have kids?
Starting point is 00:23:20 I do. Yeah, I have two. How do you talk to them about AI? Or do you? We're sort of just on the early side of talking to them about AI, although I have an almost nine-year-old who is very much aware of it and hears about it. I think the curiosity is great. We have conversations about AI.
Starting point is 00:23:40 We do many other tools and technologies in our house, and about how ultimately it is a tool that can provide information, some of which is helpful, some of which may not be helpful, and we talk about, you know, what would, you know, give us confidence that information is trustworthy. Making sure that our kids constantly know that they can come to us is sort of the foundation that we take in our house to approach any form of technology. And I think as AI becomes more, you know, prominently used in our house, that conversation will become all the more important. Kara, thank you very much for this. Thanks for having me. Kara Brisson-Boivin is Director of Research at MediaSmarts.
Starting point is 00:24:19 It's a Canadian nonprofit focused on digital literacy. She was in Amherstburg, Ontario. You've been listening to The Current podcast. My name is Matt Galloway. Thanks for listening. I'll talk to you soon. For more CBC podcasts, go to cbc.ca slash podcasts.
