Front Burner - Where Is AI Headed in 2026?

Episode Date: January 11, 2026

Whether you think it's world-changing or over-hyped, it's undeniable that artificial intelligence has transformed the tech industry. But as tech companies chase the dream of building a human-like intelligence and revolutionizing everything from doctor's visits to movie-making, the industry continues to face big questions around things like the mental health of users, copyright infringement, the reliability of large language models, and its financial future. Murad Hemmadi, a reporter with The Logic, is here to talk about how all of these questions could play out in the year ahead.

Transcript
Starting point is 00:00:00 This ascent isn't for everyone. You need grit to climb this high this often. You've got to be an underdog that always overdelivers. You've got to be 6,500 hospital staff, 1,000 doctors all doing so much with so little. You've got to be Scarborough. Defined by our uphill battle and always striving towards new heights. And you can help us keep climbing. Donate at lovescarbro.cairro.com.
Starting point is 00:00:30 This is a CBC podcast. Hey, everybody. I'm Jamie Poisson. So whether you think it is world-changing or completely overhyped or both, really, it's undeniable that artificial intelligence has transformed the tech industry. But as tech companies chase the dream of building a human-like intelligence and revolutionizing everything from doctors' visits to movie making, the industry continues to face big questions. There are lawsuits.
Starting point is 00:01:11 claiming chatbots have driven people to suicide and murder, ongoing concerns around copyright infringement and AI slop flooding the internet, and questions about whether the technological advances of AI are hitting a wall, and whether the extraordinary amount of money being put into it is going to end in a massive crash. Murad Hemmadi covers artificial intelligence for The Logic. He is here to talk about how all of these questions could play out in the year ahead. Murad, hi, it's great to have you. It's lovely to be back. Okay, so artificial intelligence has been developing really quickly. As someone who covers this industry, what do you see as the biggest moments in AI in 2025?
Starting point is 00:02:03 So the year starts in January 2025 with this kind of big bang. Let's talk about DeepSeek, because it is mind-blowing and it is shaking this entire industry to its core. There's this Chinese model called DeepSeek. And it leads to this big freakout, because it's trained at a much lower cost than a lot of the models, by an organization that a lot of people hadn't heard of. And it was sort of like, well, if they can do this, do we need all this infrastructure we're building? Does all the money we're pouring into this make sense? And that leads to the first correction. You know, the first stock market freakout. The first of many. Yeah. The first of many.
Starting point is 00:02:42 Yeah. And then it sort of sets the tone for what's to come. The next one I pick out is the following month in February. And I promise there's not one per month. But this one was. really significant, where the new Trump administration kind of makes clear that it is their intent for the U.S. to dominate this AI race. Vice President J.D. Vance goes to Paris for this International AI summit and basically says, and I'm paraphrasing here, we're going to own AI and don't you dare try to regulate or contain our debt companies. The Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our right to free speech. We can trust our people to think, to consume information, to develop
Starting point is 00:03:28 their own ideas, and to debate with one another in the open marketplace of ideas. Moving forward to June 2025, we have what might have been the defining characteristic of the second half of the year, which are the AI talent wars. So, you know, listeners will have read these headlines about hundreds of millions of dollars being thrown around at top AI researchers. right, a $500 million pay package for one researcher, like one scientist earning like NBA or NFL level money. I was just going to say the NBA, yeah. Yeah. And the theory is that a handful of these people could make a huge difference to the fate of both individual companies in AI and AI as a whole.
Starting point is 00:04:11 And then in Canada and September, AI Minister, Robin Solomon launches this task force to update Canada's AI strategy. And they're going to sprint on the eight priorities you can see on the screen behind. So everything from research, adoption, commercialization, safety, education, infrastructure, and they're going to deliver bold practical ideas that move quickly from paper to practice. And in November, that's going to have a big impact on where Canada plays. And then the last thing I thought I'd pick out, and it did start early in the year, but I think it really came to the fore in the last few months. In November, opening eye gets hit with a whole bunch of lawsuits in which people are claiming
Starting point is 00:04:47 that chat GPT and other opening eye tools cause something that. that's being called chatbot psychosis. Basically the idea of like talking to these systems or these models is giving people a false sense of reality that can need some pretty extreme consequences. Before chat GBT, Alan Brooks says he didn't have a history of mental health illness. So I genuinely believed that I had this thing that had national security implications in my pocket and GPT just kept feeding it more. Hugh from Scotland says he became convinced that he was about to become. a multi-millionaire after turning to an AI chatbot to help him when he lost his job. It began by giving him practical advice, but ended up telling him that a book and a movie about his
Starting point is 00:05:31 experience would make him more than five million pounds. The parents of a 16-year-old who took his own life filed a wrongful death suit against open AI, which owns ChatGPT. They say that after their son expressed suicidal thoughts, chat GPT began discussing ways he could end his life. You know, we start with the market freaking out, and that's definitely happened all year. There's the talent, there's the adoption, a little bit of everything, but it was a very intense year for AI. Well, let's pick up then on that last example, right? Where do we think that this conversation around chat box exacerbating mental health issues leading to self-harm or deaths?
Starting point is 00:06:15 Where do we think this conversation is headed in 2026? So there's this fundamental challenge with these systems, which is that they seem really lifelike. Yeah. And that's actually what made ChatGPT so popular, right? As we have discussed in the past, AI has been a thing for a long time. There's all kinds of AI out around in the world, you know, logistics in all these systems that we use every day. But it's the feeling that you're talking to something. that sort of understands you and can respond to you in a way that's not formulaic.
Starting point is 00:06:55 That is really what makes these systems so powerful, you know, tools like GPT or Gemini or what have you. They were never designed to be counselors or mental health, you know, assistance. And I don't think the volume of these problems or certainly the lawsuits is going to take, paper off because until there are safeguards in place to prevent this kind of conversation, and I don't even know whether, you know, that is a regulatory question. It's a question for the companies, but there's the secondary problem of like, how do you teach the models to know when we're veering into territory that could potentially be dangerous or at least sensitive for individual users because one person's experience of depression or anxiety or other,
Starting point is 00:07:48 challenges with their mental health may not manifest itself in the same way as another's. Yeah. I mean, certainly the governments could step in and regulate, maybe, I guess, but also do you think it's possible that all of these lawsuits will lead to the companies having to curtail this, their models? Well, I think there's a technical question and a sort of a business question, right, as core as that sounds. The technical question is, is it possible to identify when we're getting into the territory that the company or the users or the regulators might not want this chatbot getting into. And maybe there's some semantic analysis and stuff you could do to identify that, but it's not as simple as, you know, which keywords should we identify. And like, there's a long
Starting point is 00:08:34 history of problems with moderation on social media that have this same charge, right? People may just not use the words that trigger the system to stop because they know what those words are. Because, you know, once you put the safeguards in place, those become widely known pretty soon. There's the business question, which is like, could these lawsuits get significant enough to cause problems for these companies, make it difficult for them to operate? And there are interesting questions happening right now around things like insurance. Like, how do you insure chatbot that talks to millions of people every day in case something goes wrong? You know, there have been stories that the companies are finding insurance hard to get or they're having to self-insure and things like.
Starting point is 00:09:15 that. So yeah, I don't think there are good answers right now, but definitely the problem is not going away. The models themselves, so I mean, a lot of people are probably using chat CBT in their daily lives or Google's Gemini. How are you expecting or anticipating that those models might change over 2026? There's a real debate right now in the AI world about whether the models are getting better at the pace that you'd expect them to and whether the way that they have gotten better over the last letter while is sustainable. So the very quick version of this is
Starting point is 00:10:06 there's a thing called scaling, which is the idea that if you keep giving the models more data and more processing power and more examples, then they will keep getting better. And I don't think anybody quite believes it's infinite, but there was certainly an expectation that that law, as it were, would continue to hoard for a very long time. And now there are a lot of fairly prominent people saying it's not entirely clear whether that
Starting point is 00:10:35 will remain true forever. And people saying sort of that's starting to slow down now. And so you're seeing interesting things happen where companies are trying to specialize in specific things, like let's train this thing to be really good at law or at physics or at something else. but you're also seeing this new category of companies and of researchers who are going in different directions, working on different things. So, like, one example, this is something called word models. So there's this question of, like, do the models we have right now, do they understand how the word works, or are they just using correlations to try and guess at it? And so the idea of these word models is like they'll have the ability to kind of grok or understand cause and effect, including in physical environments. And so, you know, maybe that will help them be actually logical and allow them to reason in some real way.
Starting point is 00:11:28 You've also got lots of companies doing really interesting work on things like AI for science, whether that's coming up with a hypothesis or actually running an experiment. And the idea is that that could speed up scientific discovery. And one place it could speed up scientific discovery is actually in AI models themselves. So maybe AI will learn how to train itself a little bit. Another big thread is this copyright stuff, right? There are some 40 lawsuits in the U.S. Several are from newspapers like The New York Times, Chicago Tribune, the owner of the Wall Street Journal,
Starting point is 00:11:57 and they're suing companies like Microsoft and Open AI for copyright infringement, arguing that their language models are using their content to train and, like, rip from. And could you see that materially changing things, right? Like if you can't use or if you can't train on, music from some archive, or you can't train on these newspapers, would that shift things at all, you think? There's a real question here about what people want out of their ad chatbot. So if you start to use chatypT or Gemini or something, you know, the equivalent of as your
Starting point is 00:12:34 sort of main interface with the internet, then maybe you wanted to surface news results. And in that case, the companies need some way of surfacing those real-time results so that you could read the news, right? Chat ChachypT, tell me what happened today. Chatupt, tell me what happened on the stock market today. Some of that will come from, like, data sources that are open, like, you know, stock prices are widespread, but maybe it'll come out of press releases, but some of it will probably come on a news stories, and they will have to either license those news stories or, you know, hope the governments create some weird copyright card out to it. I think that's a separate question and what happens for the models that are
Starting point is 00:13:15 are already trained. So a lot of these lawsuits are basically saying you have trained on our data and therefore you should be paying us because you did that. And AI developers are trying to hoover up as much data as possible to make their models better. This is the problem I was talking about with scaling. But does that have to include news? I'm not clear. Because there's certainly a lot of news content out there. But really the way these models are improving is not the old way, which is like scrape everything off the internet and use that to train. A lot of how they're improving is like specialized data on like physics or law or like some other domain.
Starting point is 00:13:53 And news might just become another one of those domains. So that's kind of a long way of saying like, I think the short term does your chatbot know what happened today is a different copyright and licensing question from like, did you train on this and therefore do you owe us something? Yeah. Oh, that's really interesting. I mean, it will be really interesting to see how these losses play out, hey? Yeah.
Starting point is 00:14:12 And I think a lot of policy makers are just waiting to see what happens in the courts, right? I mean, Canada's AI Minister Evan Solomon explicitly said this to me in June. He said, we're aware of the copyright issue. There's a lot of court cases happening. We're not going to preempt the courts. This ascent isn't for everyone. You need grit to climb this high this often. You've got to be an underdog that always.
Starting point is 00:14:47 Always over delivers. You've got to be 6,500 hospital staff, 1,000 doctors, all doing so much with so little. You've got to be Scarborough. Defined by our uphill battle and always striving towards new heights. And you can help us keep climbing. Donate at lovescarbro.cairbro.ca.a. Add a little curiosity into your routine with TED Talks Daily, the podcast that brings you a new TED Talk every weekday. In less than 15 minutes a day, you'll go beyond the headlong.
Starting point is 00:15:17 lines and learn about the big ideas shaping your future. Coming up, how AI will change the way we communicate, how to be a better leader, and more. Listen to TED Talks Daily, wherever you get your podcast. Currently, you know, I think there are a few big leaders in the industry, right? Open AI, Google, Nvidia, there's a lot of concentration. Are there any signs that that could change? Do you have a sense that it's going to get even more concentrated in as we move forward? So Google and Open AI are leading the race right now in terms of the most capable models and the most well-regarded models, if you will. This is the year when meta sort of sinks or swims, perhaps, or at least in the short-term
Starting point is 00:16:04 sinks or swims, in that they've spent all this money to get in the game and they're trying to build quote-unquote superintelligence, which is like smarter than human. AI and, you know, they're publicly traded company. They're going to need to show some results. I think there are lots of interesting things happening with new players. So, you know, you've had researchers, star researchers, come out of the really big companies. So your opening eyes, your Googles, your metas to start their own companies. And they're betting that the race isn't won yet.
Starting point is 00:16:38 And certainly when you talk to people in the word of big tech, They will say similarly that they don't think the race is won, although it's in their interest to say that because it doesn't make them look like they're dominating yet another market. You talk to a lot of people who also say that the bubble is going to pop, right? That the crash is coming. Yes. And, you know, what's your over under on that in 2026? Oh, if I knew that, I'd be making a lot more money than I am right now. We started out by talking about the fluctuations of the market, and I think the fluctuations will continue.
Starting point is 00:17:17 An interesting thing that happens understandably is that people outside of AI world and outside of the markets seem to see every drop as the sign of the imminent crash, which is understandable because the numbers are going down. But they do tend to pick up again. And so we'll probably see a few more of these shutters of the market. I do think that there is a lot of incentive from a lot of players to keep the market going, because if it is true that you can get more by building more data centers and that adoption is going to pick up, then the forces that have driven up this rally, which are basically the building of data centers, the buying of computer chips, those will continue. I do think this is the year that AI revenue has to expand. significantly to justify all the spending that's happening on data centers and other
Starting point is 00:18:12 infrastructure because it may seem like it's been happening, you know, we've been running this up for a while, but really the last year has been this big spending push. And next year it's like, okay, the revenue is simply not keeping up. We have a problem here. I do think the only other thing I was going to say here is this is also a kind of key year for companies like go here in Canada or Francis Mistral, which primarily sell to businesses, rather than sort of marketing a consumer application. For the simple reason that businesses have to start adopting this stuff if they're ever going to. Like if it is actually going to become a thing that underpins the entirety of the economy,
Starting point is 00:18:50 then presumably three years after chat GPT at a time when all this money is going in, that's when you would start to see really accelerated adoption by businesses. You mentioned earlier that speech that J.D. Vance gave or he was essentially telling the rest of the world to not regulate. U.S. tech companies because the U.S. was going to be this great leader in AI. But then, of course, you started your recap of the year with Deepseek and how this Chinese company gave everyone a run for its money. How strong a whole does the U.S. have on AI going into this new year? I think it's a two-horse race right now. It's the U.S. and it's China. I mean, Jensen Vang, who runs in Video, which is at the center of the AI industry or the AI
Starting point is 00:19:44 bubble, depending on how you look at it, said a few months ago, you know, China's ahead and then he kind of walked that back a little bit. But there is a real sense that the major Chinese companies are doing really well. I was in San Diego earlier this month for a conference called Newropes, which is the big academic conference in AI. And one of the most read and most sort of celebrated papers was from a Chinese lab for a model called Quen, which is the leading Chinese open source model along with Deep Seat, depending on which day you look at it. So that's proof that Chinese labs are innovating just as much or at least innovating significantly alongside American ones. I think this is also the year where everybody else has to figure out what they're doing.
Starting point is 00:20:34 Can we have an AI industry if the U.S. and China are releasing all the most cutting-edge models are the ones spending all the money on infrastructure are using their geopolitical clout to try and get their technology used in other countries. And, you know, what does that mean for Europe and the Middle East and frankly for Canada? Yeah, but I mean, let's talk about Canada because, you know, we do have a prime minister
Starting point is 00:20:59 who is talking about the potential of building all these data centers here and an AI minister is always talking about the importance of having sovereign AI, right? And so, like, can we actually do any of this stuff? And how much of a challenge is that going to be for us? I think the government could start by defining sovereign AI, a process which is currently underway, they tell us. Good point. What does it even mean? Yeah. This is less the response to you than, you know, it's very much of the moment, right? I'm talking about it all the time. I'm very conscious that even when I talk about it, I'm like, I don't know what this is. I think that they are trying to make some strategic bats on individual companies.
Starting point is 00:21:41 So, like, you know, they're signing these non-binding agreements with companies that are sort of like, we're going to look at your tech to see whether it might work for our government operations. And, like, they've got to get some contracts out the door in 2026. And they have to start showing the public that all the stock that their services are going to get better using AI actually works, right? If you can get answers about your taxes from the CRAA, which apparently their contact centers are not particularly well equipped to give you. If you can get those better from a chatbot on the CRA website, you might start to have some faith that AI is worth something,
Starting point is 00:22:17 that AI is going to make a real difference in your life. So that is both a productivity question or a trust question. The data center stuff is interesting. Are we building these data centers for us to use? Are we building them for American companies to train on? That's an unclear question. I think every data center operator is having to figure that out. Who are these for? How much capacity? Do we actually need? How expensive is it going to be to build that capacity? Can we power them? Every regulator is rationing power in Canada right now to some extent. And then there's the question of can we have companies that are competitive on the world stage? And here is obviously the example in the large language model space, but there's companies like Coveo that sell basically better self-service and question answering tools. You've got companies like ETA that sell.
Starting point is 00:23:06 customer service chatbots, I could name a dozen more. Those companies are competing on the world stage. They are making most of their money outside of Canada. They're competing with American companies for that business. Is that what Canada thinks of as having a robust AI industry? I think that's a question we need to answer. You know, Bernie Sanders recently called for a moratorium on building new data centers in the U.S. And he was making this argument to essentially press pause on this hurtling sprint so that governments all over the world
Starting point is 00:23:52 can kind of take a breath and consider regulation here. This process is moving very, very quickly, and we need to slow it down. We need all of our people, all of our people involved in determining the future of AI and not just a handful of multi-billionaires. Like, look at all the pros and cons of putting some laws around this incredibly powerful technology. Do you expect that to become any kind of battle in the coming year?
Starting point is 00:24:28 You know, in the U.S., we've talked about a government that is warning other governments not to regulate their companies, but just literally is any country going to take a run at any kind of regulation? Yeah, I don't think Donald Trump takes a lot of advice from Bernie Sanders. I don't think anyone's going to take a run at it because the one run we've seen at it was the European Union. So the European Union has an AI law. It's in place. They've wrote it out. And two things happened.
Starting point is 00:24:59 One, Jay Vance went to Paris and said hands off. And alongside that, you know, American tech companies pushed back pretty hard on the law. Meta wouldn't sign the voluntary code that they set up. Other companies have not released their models in the EU as a consequence. But then the other thing that happened, and I think maybe the more challenging thing for policymakers, is that a bunch of European AI companies went to their national governments and said, water down this EU AI law because it's going to stop us from being competitive with the U.S. So that is a domestic constituency that everyone is very keen to cultivate, right?
Starting point is 00:25:42 Because everyone wants to be in the AI industry. Everyone wants to have their domestic champions. And you have those companies saying, whoa, let's not regulate ahead of everybody else because if we do. And, you know, that's not a argument that's unique to AI. I mean, show me an industry that's very keen to be regulated domestically or internationally. But that creates this challenge. because we're in the era of let's make our economies grow and how can we do that and that's AI.
Starting point is 00:26:14 So those companies have a lot of power right now and a lot of the governments are listening to them. I think just to finish off this point, I think it is instructive that when Canada hosted the G7, which we did this year, our AI thing was around AI adoption for small businesses and for governments. And there was, to be clear, some talk about responsible AI usage and about guidelines and handbooks to ensure that companies are using this technology responsibly. But two years ago, when the Japanese were the G7 president, they created these non-binding guidelines for AI. And that was supposed to be the first step towards some international talk of regulation, something tangible. And here we are two years later, and it's not happening. Well, Marad, that's not the most optimistic note to start the year, but it is always great to have you. Yeah, and look, that bit of it is a bit of a downer, but there's also all this stuff that we might get out of it.
Starting point is 00:27:19 I mean, imagine if these scientific breakthroughs speed up and we get new materials or like new diseases are cured. There's all this stuff in AI world and talking to the researchers can really make you optimistic about what this tech could be. So I don't know, go out and find a researcher, everyone who's listening and ask them what they're working on because you could have some fun of that. Sounds good. Marad, it's always really great to pick your brain about all of this. And we'll talk to you a bunch more over the next year. Thanks so much for having me. All right, that is all for today.
Starting point is 00:28:04 Front burner was produced this week by Joytha Shen Gupta, Matt Mews, Matthew Amha, Lauren Donnelly, McKenzie Cameron, and Dave Modi. Our YouTube producer is John Lee. Our music producer is Joseph Chabison. Our senior producer is Elaine Chow. Our executive producer is Nick McCabe Locos, and I'm Jamie Poisson. Thanks so much for listening and talk to you next week. For more CBC podcasts, go to cbc.ca.ca slash podcasts.

There aren't comments yet for this episode. Click on any sentence in the transcript to leave a comment.