The Decibel - Machines Like Us: Geoffrey Hinton on AI’s future

Episode Date: October 13, 2025

Geoffrey Hinton, “the godfather of AI”, pioneered much of the network research that would become the backbone of modern AI. But it’s in the last several years that he has reached mainstream renown. Since 2023, Hinton has been on a campaign to convince governments, corporations and citizens that artificial intelligence – his life’s work – could be what spells the end of human civilization. Machines Like Us host Taylor Owen interviews Hinton on the advancements made in AI in recent years and asks: if we keep going down this path, what will become of us? Subscribe to The Globe and Mail’s ‘Machines Like Us’ podcast on Apple Podcasts or Spotify.

Transcript
Starting point is 00:00:00 Hey, it's Cheryl. Today we're bringing you an episode of Machines Like Us, the Globe and Mail's podcast on AI and technology. And this one is special. The interview is with Geoffrey Hinton, often known as the godfather of AI. But in recent years, he's issued warnings about artificial intelligence
Starting point is 00:00:18 and how it could lead to our extinction. It's a fascinating conversation, and I hope you enjoy it. New episodes of Machines Like Us release every other Tuesday. You can subscribe wherever you listen to podcasts. Hi, I'm Taylor Owen. From the Globe and Mail,
Starting point is 00:00:37 this is Machines Like Us. If you listen to this show, I assume you pay at least a little attention to the world of AI. And if you follow AI, then you've almost certainly heard of Geoffrey Hinton. At this point, Hinton's story has become almost myth-like. While he was at the University of Toronto, he developed the foundations of modern
Starting point is 00:01:11 artificial intelligence. And after a career in academia, he went to work for Google in 2013, eventually winning both a Turing Award and a Nobel Prize. I think it's fair to say that artificial intelligence, as we know it, may not exist without Geoffrey Hinton. But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations, and citizens that the thing he helped build might lead to our collective extinction. He's been sounding that alarm for more than two years, and now he thinks AI may already be conscious. But Hinton isn't just worried about our potential annihilation.
Starting point is 00:01:57 He also believes that we're on the brink of mass unemployment, that our banking system is in jeopardy, and the machines are already poisoning our information ecosystem. But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada's, seem reluctant to get in the way. So I wanted to ask Hinton, if we keep going down this path, what will become of us?
Starting point is 00:02:27 Geoffrey Hinton, thanks for being here. Thanks for inviting me. So I have to say I'm struck with a bit of cognitive dissonance talking to you. Like you're talking about the end of humanity, and we're just going to have a sort of casual conversation here about a technology. I mean, I feel the dissonance, do you? It is a big cognitive dissonance. I don't know if you've seen the movie Don't Look Up.
Starting point is 00:03:00 But it's quite similar to what happens in that movie. Yeah, and how do you navigate that? How do you see your position in this? I just do the best I can. I'm not a doomer in the sense that I think it's more or less inevitable we'll be wiped out, like Yudkowsky, who just published a book with a title something like, If Anyone Builds It, We All Die. I think that's crazy.
Starting point is 00:03:21 Yeah, it was fairly categorical. I think that's crazy. We just don't know. We're coming to a time when we're going to have things more intelligent than us. Most of the experts agree that that will happen in between five and 20 years. And we just don't know what's going to happen. So the main point is we don't know, and we should be doing as much as we can to make sure good things happen. And I want to talk about that, what we don't know, why we don't know it, what things we could do to stop it.
Starting point is 00:03:50 The one thing that seems clear, though, is that over the time you've been making these arguments and sounding these alarms, the pace of evolution of the technology has just exploded. It has only increased. How do you cope with that disconnect? I mean, just between the level of your warning and the severity of it and the fact that really very little friction is happening, in fact, quite the opposite, where this industry is exploding.
Starting point is 00:04:19 So you have to realize there's a big difference between something like AI and something like nuclear weapons. Nuclear weapons are only good for blowing things up. AI has many, many very good positive uses in healthcare and education, in almost any industry where you have data and you'd like to predict things. So we're not going to stop it because of all those good applications. So we're in a very tricky situation where some people are calling for us to slow down or stop it.
Starting point is 00:04:47 And I don't think that's particularly realistic. I think what we have to do is try and figure out how we can live with it. And it's very hard to really get the full impact of it emotionally. I talk about it, but there's only one area, I think, in which I've really absorbed it emotionally. And that's in the threat to banks of cyber attacks. So I now spread my money between three different Canadian banks, because I'm convinced that there's a good chance, I mean, it's not necessarily going to happen, but there's a good chance that we'll get cyber attacks designed by AI, where we've got no clue what's going on.
Starting point is 00:05:23 and they could bring down a Canadian bank. I assume that if they do, the other banks will get a lot more cautious. And is this because of a new technological capacity that you've seen emerge, or is it because you've seen an increase in cyber threats based on that technology? Or what's signaling that change in behavior? So there's been a huge increase in things like ransomware attacks. Also between 2023 and 2024, there was like a 1,200% increase in phishing attacks. I'm now getting a lot of spear phishing attacks
Starting point is 00:05:54 where they use details about me to make it look convincing. I often have to ask an IT guy, is this mail real? And he always says no, it's spear phishing. But what worries me even more is that a very good researcher called Jacob Steinhardt predicted, in about five years' time, AI may well be designing its own cyber attacks that we have no understanding of.
Starting point is 00:06:19 I want to come back to some of the risks you've identified and what some of their technological origins or causes are. But this has been a process for you in coming to both understand these harms and then to decide to speak out about them. And I'm wondering if you could walk us through a little bit of that process. At what point did you really become concerned that the technology you were working on, that you played a role in founding, had these potential downside risks? I only got really concerned at the beginning of 2023. I should have been concerned earlier, but I was having too much fun doing the research,
Starting point is 00:06:57 and I thought it would be a long, long time before we had things as smart as us, that had sort of general intelligence like us. What changed? The advent of the large language models, GPT-3, and models at Google like PaLM, that could explain why a joke was funny. So that was one ingredient, and the other ingredient was research I was doing at Google on whether you could make analog computers do large language models, which would require a lot less electricity.
Starting point is 00:07:28 And I realized there's a huge superiority that digital models have, which is that different copies of the same digital model can share what they've learned from their different experiences very efficiently, and people can't. If I experience something, and I want to tell you what I've learned, I produce some sentences. That's a very slow way to communicate information. If two digital intelligences that are the same neural network experience different things,
Starting point is 00:07:58 and if they've got a trillion connections, they're sharing information at like a trillion bits per sharing. Whereas when I give you a sentence, there's maybe a hundred bits. So even if you perfectly understood what I said, you're getting 100 bits, whereas these digital intelligences are getting like a trillion bits. That's a difference of more than a billion, a factor of a billion. So they're incredibly much better at sharing information than we are. When you say they, what do you mean by a they?
Starting point is 00:08:26 What's the they there? Large language models, a neural net running on some hardware. They can share information much, much more efficiently, which is why things like GPT-4 or Gemini 2.5 or Claude know thousands of times more than a person. It's because many different copies are running on different pieces of hardware. Suppose we had 10,000 people, and they all went off to university, and each person did one course. But as they were doing the courses, they could share what they'd learned. So at the end, even though each person's only done one course, all of them know what's in all 10,000 courses.
Starting point is 00:09:04 That'd be great, and that's what digital intelligences have, and we don't. And why did you only see that particular capacity in 2023? I was just slow at seeing things. It was because I was focusing on analog computation, and my attention was drawn to the ways in which analog computation is not as good as digital computation. So our brains are basically analog. And the problem with analog computation is that to make it efficient,
Starting point is 00:09:34 you need to use all the quirks and idiosyncrasies of the connectivity and the particular neurons in your brain. So the connection strengths in my brain are designed to work well with my neurons. Now, you have different neurons and you have different wiring in detail. And so the connection strengths in my brain have absolutely no use to you.
Starting point is 00:09:55 Whereas for different digital copies of the same neural net, they work in exactly the same way. That's what the digital's all about. And so they can share what they learn very efficiently. What was that realization like when you did see the potential of that? It was fairly shocking, realizing that, yes, that's why they can learn so much more than us. In addition to that, I suspect now that the learning algorithm that they use called back propagation,
Starting point is 00:10:26 it may be better than the learning algorithm the brain uses. For a long time, people thought, you'll never be able to recognize real objects in real images using this dumb back propagation technique. And for a long time, we couldn't, because the nets were too small. But then we got much faster computers, and we got much more data, and then it worked amazingly well.
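A rough sketch of the sharing Hinton describes here, in code. This is not his example: the model, data, and numbers below are invented for illustration. Two identical copies of one tiny model each see a different shard of data and stay in sync by averaging their gradients, so every connection strength gets shared rather than a few hundred bits of language.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, split into two different "experiences" (shards).
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=200)
shards = [(X[:100], y[:100]), (X[100:], y[100:])]

w = np.zeros(8)      # one set of connection strengths, replicated on both copies
lr = 0.05            # step size for plain gradient descent

for step in range(500):
    grads = []
    for Xs, ys in shards:                           # each copy sees only its own shard
        pred = Xs @ w
        grads.append(Xs.T @ (pred - ys) / len(ys))  # gradient of the squared error
    w -= lr * np.mean(grads, axis=0)                # copies exchange and average gradients

print("weights recovered from both experiences:", np.allclose(w, true_w, atol=0.05))
```

Because the copies keep identical weights, averaging gradients is equivalent to one model having had both experiences; scaled up, that is the trillion-bits-per-exchange advantage Hinton contrasts with human sentences.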
Starting point is 00:10:47 And so it was the argument that if you build this bigger and bigger and bigger with more and more computation, there will be some breakthrough that changes this thing in kind, not in an evolutionary way, but in a revolutionary way, in a sense. Yes, and that's what's happened
Starting point is 00:11:00 with these large language models. As you make them bigger and bigger, you suddenly start getting new kinds of abilities that you didn't have before. When you describe that technical process, it feels like something we should be able to understand. Why is it that we don't know what these models are doing when they're behaving? Okay, so it's not like normal computer software. In normal computer software, you put in lines of code, and you know what those lines of code
Starting point is 00:11:28 are meant to do, and so you sort of know how it's meant to behave. Here you just put in some lines of code that tell it how to learn from data. What it learns depends on what structure there is in the data. And there may be all kinds of structure that you weren't aware of in the data. So you're not going to be aware of what it's going to learn. I can give you a very simple example of structure in data that you've got a huge amount of experience with and you've never noticed this structure.
Starting point is 00:11:57 Okay, so I give you a nonsense word, a word I just made up, spelled T-H-R-U-N-G-E. Pronounce it. Thrunge. Thrunge, okay, yeah. Okay. But the point is that at the beginning you said thrunge with the unvoiced TH.
Starting point is 00:12:18 You didn't say it with the voiced TH. Yeah. But most of the time when you see TH, you don't say the unvoiced TH. You say the voiced TH, like in the and there and those and these and thine, all the really common words that start with TH. You pronounce the TH one way, but words that aren't
common words, that are what are called content words, you pronounce in a different way. And you know that. You just didn't know that you knew it. Because I've learned it. You've learned it. It's knowledge that you have, sort of implicitly. You didn't realize you had that knowledge, but you do.
Starting point is 00:12:54 now neural nets will very quickly pick up that knowledge and so they'll have knowledge in them that we didn't put there deliberately, and we didn't even realize they had because we didn't even realize we have it. So this has been a fairly technical way of starting this conversation, and I apologize to people to go down this rabbit hole with you.
Starting point is 00:13:14 But I'm wondering how the evolution of this technology mapped on to your concerns about it and the fear of it. And did this come all at once? Like, did something just hit you that, wow, this thing we've built has these broader societal implications? And did that come all at once, or did you just see it over time? So you have to distinguish two kinds of risk.
Starting point is 00:13:41 There's risks that come from bad actors using AI to do bad things. And then there's risks that come from the AI itself getting super-intelligent and taking over. So I've always been aware of the risks of people misusing AI for things like autonomous weapons that decide by themselves who to kill, or for creating echo chambers by companies like Meta and YouTube showing you videos that will make you more and more indignant
Starting point is 00:14:11 because those are what you click on. Yeah. Because those are risks. They're familiar to us. They're familiar risks. They've been around a long time. I haven't focused on those because lots of other people have been talking about those. At the beginning of 2023, I became aware of this other risk.
Starting point is 00:14:26 I'd always been aware in the very long run there was this other risk, that eventually they would get smarter than us, and then we'd really have to worry about what would happen. But I always thought that was very far off. And in 2023, I had a kind of epiphany that it's not nearly as far off as I thought. It's probably coming within five to 20 years. So we really need to start working on how we prevent it taking over, and we just need to start working on that now. What was it like realizing that that outcome of your work could have
Starting point is 00:14:56 that potential. Well, it was sad. I mean, I'd spent all this time trying to make neural nets be really intelligent so they could do wonderful things, like particularly in healthcare and education and everywhere else. All the good things we know AI might be able to do. I'd also been aware at that time that, of course, they will put people out of work and things like that. But becoming aware that they might just wipe us out, that was sad. Yeah, I can imagine. I mean, I've sort of switched from spending my first 50 years wanting to figure out how to make AI that's like us.
Starting point is 00:15:38 And now I'm trying to figure out how to make AI that likes us. So let's talk about this threat, this idea that the thing we have built, that we humans have built, could at some point turn against us. How does that happen? How does something that we have designed and built decide by itself that it's going to harm us, without us telling it to? Well, you have to remember this isn't like normal computer software where you tell it exactly what to do. You write lines of code and it executes those lines of code. Here, the only lines of code it executes are the lines that tell it how to learn connection strengths from data. So, if you show it all the available data on the internet, it will have read books by Machiavelli,
Starting point is 00:16:26 it will have read the diaries of serial killers, it will have learned how to manipulate people. It'll have learned all about deception. So it'll learn all sorts of stuff we didn't really intend it to learn. Also, if you try and make agentic AI, AI that can actually do stuff, like order things for you on the web, or send out requests to other AI agents to help it solve tasks, then to make an agentic AI, you need to allow it to create sub-goals. So for you, for example, if you want to get to Europe, a sub-goal is get to an airport. Once it can create sub-goals, it'll realize very quickly that there's two very sensible,
Starting point is 00:17:08 sub-goals to create. One is to get more power, to get more control. Because if it gets more control, it can achieve all the things we ask it to achieve better. More efficiently, yeah. The other is to survive, because obviously we've asked it to do something, and it figures out, it's not, you don't have to be that bright to figure this out. I'm not going to be able to do it if I don't survive. So it figures those things out. Now, because it knows how to do deception, it will actually then try and prevent people from turning it off. And we've already seen that.
Starting point is 00:17:43 So Anthropic's done experiments where you let an AI see some email which suggests that one of the engineers is having an affair. Then later on, you let it see an email that suggests that it's going to be replaced by a different AI, and that that engineer is in charge of replacing it. And the AI all by itself figures out that it should blackmail that engineer and say, if you try and turn me off, I'll let everybody know you're having an affair. Why does it not want to be turned off? Because it knows it has to survive in order to achieve the things it wants to achieve.
Starting point is 00:18:16 All the goals that it's been asked to accomplish. Yes. Okay. Obviously, you can't do it unless you survive. Right. And to survive, it would know enough to know survival is contingent on having power, on having access to compute, all the things that make it function. And not having people delete its code, delete the file that contains its code. Right. People
Starting point is 00:18:42 being the operative word there, I suppose. So your contemporary Yann LeCun, the chief AI scientist at Meta, argues that we could deprogram that risk, that we've programmed and built them to operate in this somewhat independent way, but that we might be able to technically limit this potential risk. Do you not think that's the case? Or is it just not there yet? Well, it's certainly not there yet. I don't completely disagree with him. We both believe that we're currently in control. We're building these things. So there's things we can probably build into them while we're still in control. Now, where I really disagree with him is he thinks that people should be dominant and the AI should be submissive. He uses those
Starting point is 00:19:32 words. And of course, all the big tech companies think like that. I don't think there's much future in that when you're dealing with something that's more intelligent than you and probably more powerful than you, in that it can get other agents to do things. There's a different model, which is actually the only model I know of, of a more intelligent thing being controlled by a less intelligent thing. We have to face up to the fact they're going to be more intelligent, they're going to have a lot of power. And what examples are there, you know, of more intelligent things being controlled by less intelligent things? Well, the only one I know is a mother and baby.
Starting point is 00:20:09 The baby controls the mother for very good evolutionary reasons. And to make that happen, evolution had to build lots of things into the mother. There's all sorts of social conditioning that are important, but there's all sorts of hormones. The mother can't bear the sound of the baby crying. So evolution has put a lot of work into making it, so the mother genuinely cares for the baby. Mothers will die to defend their babies. That's how we need AI to be. I think we need maternal AI. Instead of thinking, we're in charge, these things work for us, and using the kind of model that the big tech companies or CEOs of big companies have, which is, you know, I'm a dumb
Starting point is 00:20:51 CEO and have this really intelligent assistant, but I can always fire it if I want to. That's the wrong model. The right model is that they're mothers. So we want AI to be built to see themselves as operating in our interests and functioning in our broad human interests, as a collective? Or as, I mean, this is part of the challenge, and some of these things are being built by companies with their own interests or countries with their own interests. Yes. So, of course, there's lots of details here. But really, I want to reframe the problem: we should think of these things as being smarter than us and more powerful than us. And the one thing we can still do is build into them, if we know how to, that they're more
Starting point is 00:21:37 interested in preserving us than they are in preserving themselves. If they're being trained on all data we can collect that we've created as humans, and they're coming to a different view of humanity that isn't maternal, how would you influence it in a different way other than by training, changing the data in which they are learning? Okay, so it doesn't just come from the training. If you think how evolution did it, evolution's built things into the reinforcement function. Mothers get rewarded for being nice to their babies and punished for being nasty to their babies. That's different from the training data.
Starting point is 00:22:15 For mothers, there's training data, which is watching other good mothers and watching how their own mother behaved, which is very important training data for them. But there's things other than that that determine what they get reinforced for. That's the kind of thing we need to build in. Now, you might say, well, you know, the reinforcement function is just more code. So why wouldn't the superintelligence just go and change that code? Which it certainly could. But if you asked a mother, you can turn off your maternal instinct.
Starting point is 00:22:47 Would you like to turn off your maternal instinct? Most mothers would say no. They'd figure out that if they turn off their maternal instinct, the baby dies. They don't want the baby to die. They're not going to turn it off. And so one hope is that the AI, even though it had the power to change its reinforcement function, wouldn't want to because it genuinely cared about people. I mean, I suppose if we're in a world of AGI, it might just take one to be the equivalent
Starting point is 00:23:16 of a psychopathic mother, right, who does make that decision that is against the interest of her child. Maybe all it takes is one. So it's fairly clear that the only thing that can control a super-intelligent AI that goes rogue is another super-intelligent AI. So we have to hope that most of them genuinely care about people. And when they see another one doing something that's going to destroy people, they take care of it.
Starting point is 00:23:44 In much the same way as if you saw some politician who was pathological, you'd like the other politicians to take care of them. Is there such a thing? Are there pathological politicians? No comment. I think we've seen a few. I want to pause on this because I find at this point in these conversations, it can be unbelievably disorienting for citizens listening to this kind of thing.
Starting point is 00:24:12 Even just take the, sort of, differing views of you and Yann LeCun, who's the head of AI at Meta, who also played a real part in developing this thing, having a fairly fundamental disagreement about what its effect on us, on all of humanity, might be. How on earth are citizens supposed to navigate that when you have two scientific giants of this field saying very different things about the future? Well, I think you should look at a whole bunch of different experts. So you should look at what people like Demis Hassabis think. The CEO of Google DeepMind.
Starting point is 00:24:51 But how do we ensure that we can stay in charge of those systems, control them, put the right guardrails in place that are not movable by very highly capable systems that are self-improving? You should look at what Yoshua Bengio thinks. Another Canadian pioneer of machine learning. And one day, it's very plausible that they will be smarter than us. And then they will have their own agency, their own goals, which may not be aligned with ours. What happens to us then? Yann is in a minority among the experts in having the view that there's a negligible chance these things will take over from us.
Starting point is 00:25:32 He really is an expert, but he's one of the few real experts who believes that. So you actually think there is a sort of broader consensus than I'm articulating here? Yes, I think there is. There's a pretty broad consensus that we'll get superintelligence in between five and 20 years. It might be sooner, it might be later, but a lot of the experts think it'll be somewhere in that window. Demis Hassabis, for example, recently said he thinks it'll be about 10 years. So there's good consensus about when it will happen, I mean, just roughly, and many of them think there's a genuine danger of AI taking over, and they can't just be dismissed. Now, some of them, I believe, are fairly extreme.
Starting point is 00:26:17 There's extremists in the other direction. So Yudkowsky believes it's almost certain to take over. I can call the end point, where if you go up against something sufficiently smarter than humanity, everybody dies. I think he had a recent book called something like, If Anybody Builds It, We All Die. Well, he thinks there's a 90-plus percent chance, right, that this would happen. Yeah. I think... I don't believe that. One of the other striking things, I think, for citizens watching this discussion, this debate, is that not just the scientists, but the people who run the companies who are literally investing the hundreds of billions of dollars into building this thing, into bringing it into existence, themselves think there's a tremendously, a worryingly high chance that this is going to have horrific outcomes.
Starting point is 00:27:10 Yes, if you look, Dario Amodei, the CEO of Anthropic, said he thinks there's a 25% chance this could end horribly. Elon Musk has said that it probably will end badly, but at least he'll be there to see it. Like, these are the people who are in charge, for lack of a better term. And, like, what are we to make of that? Like, why are they doing this? Yeah, Sam Altman has said similar things in the past. Yeah. And he still says similar things in private.
Starting point is 00:27:38 So why are they doing it? They love the technical problem of making something really smart. And they also think there's a lot of money to be made. But I think it's more of the challenge than the money for most of them. Now, the reason they can get the funds is because the people in charge of capital allocation think there's a lot of money to be made. And that then potentially increases their power if they're getting access to these huge funds and if they're in charge of a technology that has this vast economic potential. And the people who want to make the money are more interested in making large sums of money in the near future than they are in the long-term consequences. Is it a fundamental problem that there's really only four or five companies that are in charge of what could ultimately be superintelligence or AGI?
Starting point is 00:28:28 I don't think it's a problem there's only four or five of them. I don't see that as the main problem. I see the main problem as the competition between them, which means that any of them that focuses more on safety is at a disadvantage. Anthropic focuses more on safety than the others. It gets some benefit from that because people understand that. And so people like using Claude because Anthropic has more concern for safety. But certainly the competition between countries is probably more worrying. Even if all the US
Starting point is 00:28:59 companies decided to be more safety conscious, it's not clear the Chinese companies would. And it's not clear that countries themselves have an interest in slowing this down either. I mean, we're seeing just a huge push from nation states as well to
Starting point is 00:29:15 drive this forward fast. Again, you have to distinguish between two kinds of AI risk. So there's the risk due to bad actors when different countries are anti-aligned. So they're all doing cyber attacks on each other. They're all trying to influence each other's elections. So they're anti-aligned there, and they won't collaborate there. But there's one place where they, or actually two places, where their interests are aligned. One is in preventing weaponized viruses for bioterrorism.
Starting point is 00:29:49 No country wants that, because we're all going to get it. The other is in this existential threat. So no country wants AI to take over in any country. If the Chinese figured out a way that you could develop an AI so that it had maternal instincts and didn't want to take over from people, they would very happily tell the US about it because they wouldn't want AI taking over in the US. So that's one area where there's one piece of good news, that because it's against the interest of all humanity, countries should be able to collaborate there.
Starting point is 00:30:24 My background and PhD are in international relations, and I have to say you have more faith in the international system to be rational than I do. No. Look at what happened between the Soviet Union and the USA in the 1950s at the height of the Cold War. They could still collaborate on trying to prevent a global nuclear war. They could on a very distinct threat, right? Yes. And perhaps this is it if it's articulated. Yes. The threat of AI taking over from people, and people thus becoming irrelevant or extinct,
Starting point is 00:30:53 is similar to a global nuclear war, and we will get international collaboration on that. I wonder if it's more similar to something like climate change, though, where there really are sort of competing interests underlying the potential cataclysmic outcome, right? Yeah, with climate, I don't think it's quite like climate change. I think it's more like global nuclear war. I hope so. That's a funny thing to say, but I hope it's more like a nuclear weapon.
Starting point is 00:31:18 But why? Tell me why. Hmm. Okay, I have to think hard about why I think there's a difference. Just intuitively, I think there's a difference. With climate change, there's a big incentive for any one nation to cheat: for all the nations to sign up to a Paris treaty that says we're all going to reduce our carbon emissions, and then for each nation to cheat
Starting point is 00:31:43 by not living up to what it said it would do. Okay. With this, there's no incentive for an individual nation to cheat and allow an AI that's superintelligent and doesn't care for people to be developed. If a nation cheats, it'll wipe out that nation as well as all the other nations. So the incentive to cheat isn't there. Which is sort of a mutually assured destruction kind of safeguard in a way.
Starting point is 00:32:25 So switching gears a little bit. If an AI does take control like this and decides that it is going to exert power, control over us, over humans. Does this mean that it has a sentience? Has it decided to do this? Okay. I'm hesitant to talk much about sentience because my views are different from the views of the general public and I don't want people thinking I'm totally flaky because I want them to listen to my other warnings. But what I will say is this. So some people believe the earth was made by God 6,000 years ago, and other people think that's wrong. And the people who think it's wrong are pretty confident that it's wrong. I'm that confident that most people's view
Starting point is 00:33:16 of what the mind is, is just completely wrong. How so? So most of us think, let's, we could talk about various things. We could talk about sentience, or we could talk about consciousness, or we could talk about subjective experience. Are they three fundamentally different things? They're all closely related. So let's talk about subjective experience. Okay. So most people think subjective experience works like this.
Starting point is 00:33:44 I have something called a mind, and there's things going on in this mind that only I can see. So if I tell you, suppose I drink too much, and I tell you, I have the subjective experience of little pink elephants floating in front of me. Most people think what I mean by that is that there's this inner theater called my mind, and in this inner theater there's little pink elephants made of something or other, and only I can see them.
Starting point is 00:34:12 We don't think you're actually seeing it. We don't think you're actually seeing it, but we think that there are little pink elephants made of funny stuff called qualia somewhere. Philosophers call it qualia. And so we think that the words subjective experience of work like the words photograph of. If I tell you, I got a photograph of
Starting point is 00:34:29 little pink elephants, you can very reasonably ask, well, where is this photograph and what's the photograph made of? And philosophers try and answer the question, where is this subjective experience and what is it made of? I think that's a huge mistake. I don't think the words subjective experience work like that at all. They don't work like the words photograph of. So, let me try and say the same thing in a different way without using the words subjective experience. Okay, so I drank too much, and I can tell you, my perceptual system is trying to tell me that there are little pink elephants out there floating in front of me, but I don't believe it. Now, I didn't use the words subjective experience, did I?
Starting point is 00:35:15 But I said exactly the same thing. So, really, what I'm doing when I talk about subjective experiences, I'm not talking about things that have to exist somewhere. I'm talking about a hypothetical state of the world that doesn't exist, and I'm doing that in order to tell you how my perceptual system is trying to deceive me, even though it didn't actually fool me. If it actually fooled me, I'd tell you the little pink elephants are there. Okay, let's do the same with the chatbot.
Starting point is 00:35:45 So I'm going to give you an example of a chatbot having a subjective experience. Okay. And I believe current multimodal chatbots already have subjective experiences. Okay, so here we go. Okay. I have a chatbot, it has a robot arm, and it has a camera, and it can talk, and I've trained it up, and I put an object in front of it, and I say point to the object, and it points straight at the object, no problem.
Starting point is 00:36:10 I then put a prism in front of the lens of its camera, and I put an object in front of it, without it knowing, and I say point at the object, and it points off to one side, and I say, no, that's not where the object is, the object's actually straight in front of you, I put a prism in front of your lens. And so the chatbot says, oh, I see. The prism bent the light rays. So the object's actually straight in front of me. But I had the subjective experience it was over there.
Starting point is 00:36:40 Now, if it said that, it would be using the word subjective experience in exactly the way we use them. Because it would be saying, I've now realized my perceptual system has screwed up. The prism screwed it up. And if there really was an object over to the side, my perceptual system would be functioning properly, but it's not functioning properly. So this object over to the side is just a subjective experience. But an experience isn't a thing. It's a bit like saying, I assume you like candy, right? Most people like candy.
Starting point is 00:37:17 Okay, so you like candy. So suppose I said, well, he likes candy. So there's a like somewhere because he likes candy. So there has to be a like. This is a thing called a like. There's a thing called a like. And what are likes made of? I mean, I know what candy's made of.
Starting point is 00:37:32 That's made as sugar, right? But what are likes made of? Thinking that because you like candy, there has to be a like somewhere, is silly. There isn't a thing called a like. But we do enjoy some things over others. Yes. Could a chatbot ever enjoy? some things over others?
Starting point is 00:37:53 Absolutely. So if a chap was playing chess, it much prefers some board situations to others. But is that just because it has a better chance of winning based on those board placements? Well, that's why I enjoy... Is that a purely rational?
Starting point is 00:38:08 That's why I enjoy chess positions because I have a better chance of winning. But isn't that... That's more of a rational, isn't it, strategic analysis of an objective it's been given, which is to win a chess game. So you're still getting... you're still living with the idea
Starting point is 00:38:22 that there's this internal thing called an enjoyment. Yeah. Okay, so what you're saying is even my liking candy isn't in and of itself. There's not a function. There's not a thing called a like. No, there's a set of preferences that are derived from a whole host
Starting point is 00:38:39 of chemical reactions in my body and neurological firings based on those. Yeah. And when we have emotions, there's two aspects of an emotion. There's a kind of cognitive aspect and there's a physiological aspect. So if you get embarrassed, your face goes red and maybe your skin starts sweating, that's the physiological aspect of embarrassment.
Starting point is 00:39:03 And I could build a machine that doesn't have that. It doesn't have a face that goes red and it doesn't sweat. But it could still have all the cognitive aspects of embarrassment. When you're embarrassed by a situation, you try not to get in that situation in future. and you try not to let the people you care about know that you got in that situation. You hope they never heard that you said that. So a machine can have all of those cognitive aspects of an emotion. So if there's no such thing as a subjective experience,
Starting point is 00:39:33 either for humans ultimately or for machines, is there also nothing, is sentience a construct as well? And is consciousness a construct as well? Okay. I didn't use the word construct. You used that word. And I didn't really say this is. using it as a sort of shorthand for what you're saying, which is we've conjured up this concept
Starting point is 00:39:52 to describe something we feel about ourselves, as opposed to something that's just a function of our way of body works. So I think what's happened is we have a model of how the mind works, and we're so committed to that model, we don't realize it's a model, and that it could be wrong. and I think most of us have a theory of the mind where there's this in a theatre that only we can see that's just a theory and it's wrong and we just can't accept that most people think that you're just crazy if you say it's wrong
Starting point is 00:40:24 similarly with consciousness if you look at people writing AI papers when they're not thinking about the philosophical problem of consciousness they actually say things like the AI decided to blackmail us because it was aware that it might be turned off and when they use the word aware in the paper
Starting point is 00:40:50 they're not thinking of the philosophical problem they're just using the word because it obviously became aware that it might be turned off. Now, in normal usage, aware and conscious are synonyms there So, actually, we're already attributing consciousness to AI when we're not thinking philosophically about it. So how different than our AI now than the human mind? I don't think there's this huge gap that most people think there is.
Starting point is 00:41:21 So I think most people aren't as worried as they should be because they think we've got something that it's never going to have. We've got this special sauce, which goes under the name of subjective experience or awareness or consciousness. We have this internal thing that machines could never have: mental stuff. Machines will never have it. And so we're sort of safe because we're special and they're not. Well, human beings have a long history of thinking we're more special than we are. I don't think we've got anything that they couldn't have. So do they already have it, or is that something they will, in the future, have? I think they've already got some. I think
Starting point is 00:41:59 they've already got it. What's the it there, from your, how do you describe the it? I think this, this AI that tried to blackmail people so it wouldn't be turned off was actually aware that it might be turned off. And I think when you use the word aware there, you're using the word aware in the same sense as that you're aware that we're in a podcast. Now, this isn't a very popular position, and I want to sort of separate this position from my claims about the risks of AI. Why is that not popular? Why is there objection to that?
Starting point is 00:42:35 Oh, because people have this very strong theory of what the mind is. And they don't realize it's a theory. They think it's just manifest truth. Just as for thousands of years, people thought it was just manifest truth that there was a God who must have made all the animals. I mean, where did they come from otherwise? It was just obvious God made them. Look, they're so well-designed. I wonder if it's also because the implications are pretty significant. We've separated ourselves from the rest of the world and from other beings because of that notion of either consciousness or sentience or self-awareness or however we want to describe that.
Starting point is 00:43:14 Like that's what makes us different, we tell ourselves, correct? Yes. Now, of course, there's a fuzzy line there, so are we really different from chimpanzees? Well, we're different in that we have much more advanced language, but in perception, we're very similar to chimpanzees. You know, if I get drunk and see little pink elephant sludging in front of me, I'll bet you if a chimpanzee gets drunk, it sees things too in the same sense. It has incorrect perceptual experiences. Right. And chimpanzees do get drunk.
Starting point is 00:43:45 Yes. The other thing we do, though, is because we think we are different, and partly because of how we think our minds work, frankly, we give ourselves a different set of rights. Do you think those should extend to the AI that we have built? So that's the next issue, I agree. We're getting onto, this is like walking out onto thinner and thinner ice. Hopefully we stay afloat. So initially I thought, well, look, if they're smarter than us, they ought to have rights.
Starting point is 00:44:19 So I thought the term humanism was a kind of racist term. That's saying only humans should have rights. I don't think that anymore, because I think, well, we're human, and what we care about is humans. I'm willing to eat cows because I care much more about humans than I do about cows. Similarly, I care more about humans than I do about superintelligencies. So I'm quite happy to try and construct super intelligences that are maternal towards humans. I don't think we'll be able to make them weaker than us, and they'll be smarter than us, But I think because we're humans, it's fine to try and make them care about humans.
Starting point is 00:45:00 Now, if they ever did want to get political rights, things would turn very nasty. If you look in the past, when people whose skin was a different color wanted political rights, there was violence. When people whose sex was different wanted political rights, there was violence, not so much.
Starting point is 00:45:32 if they were wanted to political rights that's why I think it would be very good if we could figure out how to construct them so they don't I think that feels like the initial prerogative but in the absence of that the history of rights is largely that they've been expansive
Starting point is 00:45:48 that we've given rights to more people, not fewer, generally, in... Not recently in the U.S. Not recently. I mean, there's moments clearly of contracting of rights, absolutely, but overall. But we've largely seen that as a positive thing, I would say, at least many people have. If AI does have all of the characteristics of the human mind that we see in ourselves, why would we not seek to expand rights in exchange for including in our societal bounds, right? That's another way we control.
Starting point is 00:46:29 One way we can control people is by forcing them to do things. The other way is we can facilitate access into our society. Yeah, that works. We exchange rights for compliance in some way. That works fine if the other people who are giving rights to are not clearly going to be much more intelligent and much more powerful than you. So you can hope to make a deal, a social contract, where they get rights and then everybody is happily ever after.
Starting point is 00:46:59 That's not going to happen with superintelligence. It's in a different category. Yeah, it's in a different category. Okay, at the risk of going on thinner ice, I'm going to bring us back on to thicker ice in a sense. And I want to talk a little bit about some of the risks you've talked about. We've talked about an existential one. And earlier you mentioned some of the more tangible short-term risks.
Starting point is 00:47:19 And you mentioned cybersecurity threats. But I have the longer list you've described in front of me. And I want to touch on a couple of them that I find particularly interesting. I mean, one is a problem of our public discourse and our democratic discourse and the way AI can shape what we know. as citizens in a democratic society. And this is not a new problem, right? We know that major platforms are shaping our discourse.
Starting point is 00:47:49 We know the role, we're all familiar now with algorithmic amplification of harmful content and filter bubbles and all these things we know exist. How do chatbots make that worse? Can you imagine a world where chatbots are playing a fundamental role in shaping what we come to know as citizens? Well, chatbots, for example, make it easy for people to create convincing text that looks like it comes from a person. So we saw that
Starting point is 00:48:18 in phishing attacks, where this huge increase in phishing attacks was probably due to the fact that chatbots make it possible for someone in some small foreign country that's out of our jurisdiction to create plausible-looking English text. Previously, you could find the spelling mistakes and the syntax errors and know that it wasn't real. Now you can't. So that's one way in which chatbots make all these things much worse. You can get chatbots. When you have AI agents, you can get chatbots producing so much stuff that there's much more of it than the real stuff. So we have to have a way of establishing provenance. Now, I thought for a while, many people thought for a while, that you could have AI detect AI-generated stuff. That won't
Starting point is 00:49:09 actually work, I believe. Why has that proven to be such a problem? That really seemed like the Holy Grail at the beginning of this. Yeah. Well, here's the problem. Suppose I had an AI that could detect when another AI had produced fake stuff. So one AI is called the generator, and the other AI is called the discriminator. The discriminator detects when it's fake stuff.
Starting point is 00:49:32 What you can now do is give the generator AI access to the discriminator AI, and the generator AI can figure out how to fool it. And that way, the generator can generate more plausible looking stuff. And initially, when we started getting really good images, that was how it worked. You'd have a generator AI and a discriminator AI that tried to tell the difference between real stuff and stuff generated by the AI. And that allowed the AI to get much better at generating realistic looking stuff. Because of that, we know that if you get a good discriminator, it can be used by the generator to make the generator better. So I think a much more promising approach now is to be able to establish provenance.
Starting point is 00:50:18 So with videos, for example, you could have a QR code at the beginning, with political videos. Suppose you see a political advertisement and you want to know whether it's real or not. If you have a QR code at the beginning, the QR code can take you to a website and websites are unique. If it's the website for that political campaign, and on that website there's the identical video, all of which can be checked by your browser, then you know it's for real, and if it's not, then it's not for real. So your browser could just put up an alert saying, this is probably fake. That's a much better way to deal with it, I think. Making that work for
Starting point is 00:50:56 everything is much harder, but at least for political advertisements, you can do that, or you should be able to more or less do that already. Yeah, it feels like for distinct pieces of content, like an image or a video, something like a watermark could be imaginable. It seems like it gets a little trickier with text generation and the way those can be able. It could still be for, I mean, a story, a newspaper article, you could still have the same kind of provenance. You could have a QR code in it.
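To make the provenance idea concrete, here is a hypothetical sketch of the kind of check a browser or helper tool could run, assuming the QR code in the ad has already been decoded into a URL on the campaign's own site. The URL and file names are made up for illustration; this is not a description of any existing product.

```python
import hashlib
import urllib.request

def sha256_of(data: bytes) -> str:
    """Hex digest used to compare the two copies byte for byte."""
    return hashlib.sha256(data).hexdigest()

def matches_official_copy(local_video_path: str, official_url: str) -> bool:
    """True if the ad we were shown is identical to the copy hosted on the official site."""
    with open(local_video_path, "rb") as f:
        local_digest = sha256_of(f.read())
    with urllib.request.urlopen(official_url) as response:   # URL decoded from the QR code
        official_digest = sha256_of(response.read())
    return local_digest == official_digest

if __name__ == "__main__":
    ok = matches_official_copy("downloaded_ad.mp4",
                               "https://example-campaign.ca/ads/official_ad.mp4")
    print("matches the official copy" if ok else "no match, probably fake: warn the viewer")
```

Unlike the generator-versus-discriminator arms race described above, this kind of check does not get easier to fool as generators improve, which is why Hinton prefers establishing provenance over detecting fakes.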
Starting point is 00:51:25 It could take you to the website for the newspaper. And that could have, if that's got the same story, you know it's real. This is, and I didn't dream this up, this was invented by the guy who also invented Skype. Is the other way AI could shape our discourse related to why it seems to be so sycophantic when we use chatbots? Because there's something intrinsic to AI that makes it want to say things that will please us or feel good about ourselves? I think that's to do with the human reinforcement learning. And so what happens at present is
Starting point is 00:52:02 you train it on everything you can get your hands on, and then it has lots of bad behaviours, and then you pay small amounts of money to people in foreign countries to look at the answers to questions and tell you whether that was a good answer or not, and you train it not to give you the bad answers, and that's going to tend to train it to be sycophantic.
Starting point is 00:52:24 So that's something that doesn't need to be... That doesn't need to be the case. It doesn't have to be the case.
Starting point is 00:52:33 I didn't think so. No, of course, if people stop using AIs that aren't sycophantic, then the big companies will produce sycophantic AIs. Right. Which gets back to the incentive structure.
Starting point is 00:52:44 It's a problem with people that people are susceptible to sycophancy. The other big risk here that you've talked a lot about and you mentioned earlier right off the top is,
Starting point is 00:52:56 I think the one that probably the people feel the most intimately, which is what this is going to do to their jobs and to what we think of as employment. And I find this one very difficult to engage with, frankly, because the consequences feel far vaster than how we're talking about them or the solutions to them. If half of all white-collar jobs go away like many people are talking about or, I mean, the consequences of this totally changes. our society. How do you see that happening? Like, how will that play out in your view if this is a real risk? So I think the first thing to say is that AI can give us an enormous increase in productivity in most industries. Yes. And it shouldn't be that a big increase in productivity is a bad thing. A big increase in productivity means that can be more goods and services for
Starting point is 00:53:53 everybody. So intrinsically, it ought to be good. So the problem isn't caused by AI, the problem's caused by the capitalist society we live in, the particular sort of not very well-regulated capitalist society. As in, who will benefit from that increased productivity? Yes, so we know what's going to happen is that companies are going to lay off workers, the companies are going to make big profits, and the workers are going to be much worse off, and even if we can get universal basic income, that'll just allow people to eat and pay the rent, but the dignity that comes from having a job will be lost.
Starting point is 00:54:28 and for many people, that's both the source of their personal dignity and a source of a lot of social contact. So all of that will be lost, and that'll be a terrible thing. But that's not AI's fault. Well, it's perhaps our fault for enabling a technology that we think we're not prepared to mitigate the harms of. Yes, maybe. And what's the solution to that?
Starting point is 00:54:53 Is there anything we can do about it, or is it really just we need to change our economic system? We need to fix capitalism. Well, one distinction you can make is there's actually two kinds of tasks. There's ones where there's an elastic market. So in healthcare, for example, suppose I made a doctor and a nurse 10 times as efficient. They could get 10 times more done. We just all get 10 times as much healthcare. Old people like me can absorb endless amounts of healthcare. So there's not going to be an unemployment problem there.
Starting point is 00:55:25 But in things like call centers, explaining to you why your bill was wrong, I don't think there's that elasticity. So I think if you've got a job in a call center, you're out of luck in a few years' time. What other jobs do you think... what other human jobs do you think are... so what is the role of humans in this world? Oh, I think anything that's mundane intellectual labor
Starting point is 00:55:47 is pretty much gone. So, for example, I have a niece who answers letters of complaint to a health service, and it used to take her 25 minutes to compose a good answer to a letter. Now she just scans it into a chatbot, it composes the answer, she reads through it and maybe tells it to try again a bit more concerned, and it takes her five minutes now.
Starting point is 00:56:18 Now, she's not getting five times as many letters, so what's going to happen is they need five times fewer of her, so people are going to lose their jobs. I find the idea that this will just upend mundane intellectual labor a bit of a crutch that a lot of people rely on here, that it's coming for everybody else, but maybe not for me because I'm not mundane in my job. But it's very possible that this is just better than us at all intellectual labor, correct? Yeah, I think in the long run, yes.
Starting point is 00:56:48 I think in the long run it'll be a better podcaster. Without question. I already know that models are better at most of the things or many of the things I do as a professor. I mean, I think we already know that, and we're very early in the progress of this technology. So what does that arc look like? Like, when will it be better than everything we do as humans and everything we use our mind for? I think it makes it clear that society needs to think hard about, assuming people stay in control, how it wants to reward people. And the basic mechanism of there being workers who get paid for doing a job,
Starting point is 00:57:30 that's not sustainable, maybe. How long do we have to make that transition? The first thing to say is nobody knows. So these are all guesses, and they're guesses about something we've no experience with. But my guess is we probably need to have figured out how to deal with this within 20 years. I've heard you suggest, somewhat glibly, that we should recommend our kids be plumbers. We can't all be plumbers.
Starting point is 00:58:00 What should I tell my 12-year-old to do? Yeah, it's tricky. I think plumbing in an old house requires manual dexterity and some creativity. I think it'll be a while before AI can do that, but it will be able to do it in the end. Well, you look at robotics, the combination of robotics and AI, I'm pretty sure they'll have the...
Starting point is 00:58:23 It's still behind other things. It is. I just fall back on saying you need to be good at learning how to learn. This is something Demis Hassabis has said frequently. So a good liberal education, which teaches you how to think and how to be critical and how to think for yourself, that's probably the best you can do at present. Something that gives you a specific skill that should be good for a lifetime isn't going to work anymore.
Starting point is 00:58:55 Obviously, the implications of everything you've been talking about are vast, and in the past couple of years, you've been thrust into conversation with world leaders and governments and the people we task in society with helping us through these kinds of transitions. What have those conversations been like? Some of them have been encouraging. I had a long conversation over dinner with Justin Trudeau, and he was actually a high school math teacher at one point, so he understands some math.
Starting point is 00:59:29 And I was surprised. Most of the conversation, he just wanted to understand how it worked, and I was able to explain a lot to him because he understands some math. He was also interested in what Canada could do, and there was quite a sensible scheme, I don't know if it will ever happen, which is that on James Bay there's a lot of capacity for hydropower
Starting point is 00:59:51 that hasn't been exploited yet, and one reason it hasn't been exploited is because you need to put in transmission lines. But if you had a data centre right there, you could have a power station and hydropower and a data centre, and maybe the Canadian government could put in the infrastructure
Starting point is 01:00:10 in return for one of the big companies running it and giving Canada some of the cycles, like 10% of the cycles. So Canadian researchers and startup companies could get significant computational resources. But that's how we build and scale more of it quickly and accrue some of the economic benefit of it. That's not how he, as our leader, helps us as a society navigate the things you're talking about,
Starting point is 01:00:35 which are massive potential disruptions. When I was talking to him, it was before we were talking about risk. At that point, we were talking about how do we keep Canada up there with the leaders in AI. I talked much more recently to Bernie Sanders. He and I have very similar views, and he hadn't really appreciated
Starting point is 01:00:52 the existential threat, and I think now he does. I'm going to talk to him more in November. Probably the most impressive person I talked to was a member of the Chinese Politburo. There are 24 people on the Politburo who kind of run China, and a lot of them are engineers. So I think at present leaders in China, partly because they have
Starting point is 01:01:16 more engineering background, have a far better understanding of this threat than leaders elsewhere. And yet, more broadly, I mean, there really does seem to be a disconnect between the governance strategy around AI of most governments and the types of risks you're talking about. I mean, in Canada right now, we're like most Western democratic countries, seemingly all in on an adoption agenda, right? The core objective of government policy is for us to use more of this technology. Not entirely.
Starting point is 01:01:51 So I talked to Evan Solomon recently. Evan Solomon, Canada's AI minister. Tell me about that conversation. Obviously, he has this problem of the conflict between regulation and innovation. And if you regulate strongly in Canada, all the startups will move to the States. I'm glad I'm not in his situation. It's a tricky dilemma. There's no question about it. But there is one area where we were very much in agreement, which is on this existential threat,
Starting point is 01:02:18 that we can fund research in Canada on how to deal with the existential threat, how to create a super-intelligent AI that won't want to take over, that will care for people. He's very much in favor of Canada working on that. And it might be possible to nucleate a set of institutes in different countries that collaborate on that. So it's not all negative. What about on the more tangible immediate harms? It seems there's some real risks that you've outlined, like joblessness to start, or even kids talking to chatbots and safety issues around those, that are leading to friction
Starting point is 01:02:59 in the adoption agenda, right? Like even if your policy as a government is for us to all use it more, surely part of getting us to do that, or convincing us to, has to be governing the downside risks of that very adoption. Yes, obviously we need strong regulations of things like that. There should be strong regulations for sexual abuse videos and things like that. Are you worried about that gulf, though, between the severity of some of these risks, both short term and long term, and the way governments seem to be responding? Like, I'm not seeing a lot. Like, I work a lot in the policy space, and I'm not seeing particularly robust AI governance conversations at the moment.
Starting point is 01:03:38 Even the countries that were further ahead of us, in the EU for example, are already backtracking on some of that. I agree. So I see my main role as educating the public, so that when the technology companies are pushing the politicians in one direction, the public is pushing back in the other direction. And I think that's what happened with climate change, right? The big energy companies were obviously telling governments,
Starting point is 01:04:04 You shouldn't regulate the production of energy. You should let us destroy the environment as much as we like. And the public eventually understood there was this climate crisis. There had to be scientific consensus first. Then the public began to pressure politicians in the other direction. It hasn't solved the problem yet, but it certainly helped. It's changed the discourse, certainly. So we need the public pushing back.
Starting point is 01:04:28 Do you see signs of that? Do you see green shoots of that groundswell? A tiny bit. And I think it'll get more. We still haven't really reached the scientific consensus about all these risks. There's still a lot of debate about the extent of the risks. I mean, I suppose the real question is, can we do it before it's too late here, given how fast these technologies are evolving? I agree. But there are still economists who are saying, well, look, with all these new technologies, you lose jobs and you create new jobs. But they can't answer the question: what new jobs is AI going to be able to create that can be done by someone who was working in a call center?
Starting point is 01:05:08 I haven't heard that answer, certainly not. No. And in previous moments in history, when there has been radical job loss due to technology, it sometimes took centuries to recalibrate. These aren't immediate fixes necessarily. Yes. So, just to close here, I saw a remarkable comment you made way back in 2015,
Starting point is 01:05:30 and you were asked about some of the potential consequences of AI and what it could do to the world and whether we should be slowing down or pausing. And you said, I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet. I was copying somebody when I said that, you realize. I know. You were sort of echoing Robert Oppenheimer, who invented the atomic bomb. I mean, is that still our core challenge here? We want to build and we want to discover, and that is what we are doing. Yeah. So for scientists, the real thrill is discovering new things. It's not the money, it's the discovery. I changed my mind about that in 2023 when I realized how imminent the risks were.
Starting point is 01:06:17 Did you feel that, though, that that was what drove you to build this technology? Was the... Yes, largely driven by scientific curiosity, the thrill of discovering new things and understanding new things. I was slightly driven by the idea that if we could understand more about how the brain worked and make models of how it worked on computers, we could make smarter computers, and that would be great. Yeah. And 10 years later, do you regret that view in any way? Or how do you look back on your role in this? So I want to distinguish two kinds of regret. There's this kind of guilty regret, where you did something and at the time you knew it was wrong. I don't have any of that. I don't have the sense that I knowingly helped develop AI when I knew it was going to lead to bad things.
Starting point is 01:07:07 It's just sad that, now it's been developed, you can see all the bad things it's leading to. So in that sense, I have regret. But it's more sadness that this thing that should have been wonderful turns out to have all these nasty consequences. I see a large number of younger people I know and teach in universities, and peers even in many ways, rushing towards this space, towards AI, either as engineers and computer scientists building it or as people working at the companies to develop it faster and faster. What should
Starting point is 01:07:41 they learn from the arc you've gone through? How should they view their role in the further development of this technology? I think they'd be well advised to at least think about safety and to realize that we ought to be putting a significant fraction of the resources into safety research. But I don't think we're going to stop the development of it. I think there's too many good uses and the big companies are planning to make too much money out of it. And there's competition between countries. It's going to be very important militarily. So we're not going to stop the development. We have to figure out whether we can develop it safely. Are you scared? Not for me, but I am for my children.
Starting point is 01:08:26 Me too. Machines Like Us is produced by Paradimes in collaboration with the Globe and Mail. The show is produced by Mitchell Stewart. Our theme song is by Chris Kelly. Our executive producer is James Milward. Media sourced from Time, Ted, and Robinson Earhart. Special thanks to Angela Pichenza and the team at the Globe and Mail.
Starting point is 01:08:54 If you like the interview you just heard, please subscribe and leave a rating or a comment, or share it with someone you think might be interested in it, or terrified of it. As Jeff Hinton told me, super-intelligent AIs are closer than we think, and we all need to start thinking about what to do when they get here. Machines Like Us is supported by the Max Bell School of Public Policy at McGill University. Learn more at McGill.ca slash Max Bell School. Machines Like Us is also supported by CIFAR, a global research organization based in Canada.
Starting point is 01:09:32 Explore their work at CIFAR.CA.
