Ideas - Why AI needs to be nicer to us and develop 'maternal instincts'
Episode Date: February 18, 2026
If AI continues to develop without appropriate guardrails, a worst-case scenario could lead to human extinction, warns the 'godfather of artificial intelligence' Geoffrey Hinton. But the Nobel Prize winner has a solution: AI must foster 'maternal' instincts, empathy and kindness. Hinton tells host Nahlah Ayed that it's fairly inevitable AI will become smarter than humans, but if we could make it care more for us than it did about itself, good things could happen.
Transcript
This is a CBC podcast.
Welcome to Ideas. I'm Nahlah Ayed.
So while I was working at Google in the spring of 2023, I had something like an epiphany.
Geoffrey Hinton, so-called godfather of artificial intelligence.
I realized that the kind of digital intelligences
we were developing might be superior to biological intelligence.
Hinton quit his work on AI at Google in 2023.
Shortly afterwards, he delivered a lecture at his alma mater, the University of Cambridge.
I decided to shout fire, he said at the time.
I don't know what to do about it or which way to run.
He posited that humans could be, quote, a passing stage
in the evolution of intelligence.
And just over a year after leaving Google,
he won a Nobel Prize for his pivotal work
enabling machine learning with artificial neural networks.
It is a great honor to introduce the Nobel laureate in physics,
Professor Geoffrey Hinton.
Geoffrey Hinton has essentially laid the foundation,
through innovations in physics,
for the artificial intelligence that we see rapidly developing around us today, much more rapidly
than he could ever have predicted. And those digital neural networks are set to take on a life
of their own. And the reason they might be superior is because they can share information with each other
much more easily than we can. If you have many copies of exactly the same digital neural network,
each one can look at a different bit of the internet, each one can decide how it would like to change
its connection strengths, and then they can all share those changes. So each one has learned from
all the experiences the other ones had. Imagine if people, imagine if a thousand students could go
to university, each one could do a different course, they keep exchanging information about what
they've learned, and at the end, they all know what's in all thousand courses, even though each of them
only did one course. That would be fantastic, and that's what these digital intelligences can do.
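To make the sharing Hinton describes concrete, here is a toy sketch in Python; the thousand copies, the shard function and the update sizes are all invented for illustration, not any real training setup.

```python
import numpy as np

# Toy model of what Hinton describes: identical copies of one network
# each learn from their own slice of data, then pool their changes so
# every copy ends up knowing what all the others learned.
rng = np.random.default_rng(0)
n_copies, n_weights = 1000, 64        # 1000 "students", one shared network
shared = rng.normal(size=n_weights)   # all copies start with the same weights

def local_update(weights, shard_seed):
    """Stand-in for one copy's learning on its own bit of the internet."""
    shard_rng = np.random.default_rng(shard_seed)
    return shard_rng.normal(size=weights.shape) * 0.01  # a proposed change

# Average every copy's proposed change and apply it to the shared weights,
# so each copy has effectively learned from all the others' experience.
updates = np.stack([local_update(shared, seed) for seed in range(n_copies)])
shared += updates.mean(axis=0)
```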
How did that dawn on you, this realization about this ability?
Did it happen slowly, gradually, or did it happen all at once?
It happened over a period of a few months. I was also influenced a lot by a chatbot at Google before
GPT that could decide whether jokes were funny and explain why they were funny. That had always
been for me a kind of test of whether they really understood natural language. But it became
clear they really did understand natural language and they understood it pretty much the same way
we did. And that was also scary.
What sort of jokes?
I can't remember the joke.
You know how it's very hard to remember jokes?
You would tell it a joke and it would tell you why it was funny.
So the one I remember from GPT, which I told it later,
I got lots of requests from Fox News,
lots of different programs in Fox News, to appear on Fox News.
And to begin with, I just started sending a reply saying Fox News is an oxymoron.
But then after a while I sent a reply saying Fox News is an oxy moron, with a gap between the oxy and the moron.
And I asked GPT-4 to explain why that's funny, and it explained it perfectly accurately.
It said, saying it's an oxymoron means, you know, it's not real news.
But putting the gap in implies that it's a drug, and people who listen to it are drugged.
It understood that.
And your heart sank.
I was quite pleased that it got my joke, but it made me realize that
these things really do understand.
Yeah.
Now retired from Google and Professor Emeritus at the University of Toronto,
Geoffrey Hinton devotes his time to raising the alarm about the impending dangers of AI,
as its development goes largely unchecked.
He joined me in our Toronto studio.
There are, of course, and you've talked many, many times about the risks associated
with that kind of knowledge and with artificial intelligence,
and we'll get to the details of that in a moment.
But I want to kind of boil it right down and ask you this.
What is the worst case scenario that you can imagine here?
Well, there's lots of bad case scenarios in the short term that don't involve AI taking over.
So it's hard to pick the worst one.
But for example, AI being used by terrorists to create nasty new viruses.
It's making it much easier for them to do that.
And that's very scary.
We will get international collaboration on how to try and prevent that,
but we may not be able to. So that's one short-term risk. There's AI being used to corrupt democracy
with fake videos. But the thing that worries me most is still this long-term risk, which seems to me
fairly inevitable, of AIs getting smarter than us, and we don't know how we can then coexist
with them. We don't know whether they will actually take over from us. And that is actually
unfolding in a way much faster than you expected. Much faster. So already we've seen AIs that
want to survive. They have the ability to create sub-goals. So a sub-goal is if you want to get to
Europe, you have a sub-goal of getting to an airport. So we give them some goals, and they're
intelligent, and they quickly realize they're not going to be able to achieve those goals if they
don't continue to exist. So they now derive a sub-goal: we want to continue to exist. And then if
you try and do things that will threaten their existence, they try and defeat those things.
So they make up plans to blackmail you, for example.
And we've seen that happening already.
Let me ask you bluntly, what are the odds of AI actually leading to human extinction in this century?
Okay, I think the only honest answer is this is something that's probably not going to happen for 10 or 20 years,
and we have very little idea what things are going to be like in 10 or 20 years.
If you simply look back 10 years, 10 years ago, nobody had any idea we'd have chatbots as good as they are now.
And so if progress is only linear, we can expect that in 10 or 20 years, things will be very
different from how they are now, and we'll have all sorts of advances that we couldn't have predicted.
So the most honest answer is we haven't got a clue.
And not to dwell on the negative, but it's hard not to in this case, but it is at the far end
of your fear horizon, is it not, that it could lead to the extinction of humans?
Oh, it certainly could, yes.
I think anybody who says that there's no way it could lead to the extinction of humans just isn't
facing reality.
I'm wondering when you think back in history, what is a parallel, if one exists, a parallel
moment to what we're facing here with artificial intelligence?
So one thing that's partially parallel is the Industrial Revolution.
So in the Industrial Revolution, human strength ceased to be that important.
Before that, digging ditches was a good occupation if you were strong; once we had backhoes,
being strong to dig ditches was just not a good way to earn a living.
What's happening with AI is routine intelligence is no longer valuable.
AI can replace it.
And that's very scary.
Sort of routine office jobs.
They can all be done by AI.
Not properly yet.
For a while, it'll be a combination of AI and people.
And so obviously you should learn to use the AI tools.
but later on, it's likely to be AI can just do all those jobs by itself.
That's very scary.
It is, and it's one of the more consequential possibilities with artificial intelligence.
Another parallel that others speak of, including your co-winner for the Nobel Prize, John Hopfield,
he likened it to the discovery of nuclear fission, which of course made possible nuclear weapons.
Would you agree with that parallel?
Yes. Now, there's quite a big similarity. Nuclear fission actually has its good side, which is nuclear power stations, which don't produce carbon dioxide. So although they're quite dangerous because they can go wrong, there are some very good aspects of nuclear power for climate change. AI, the balance is a bit different, I think, that most of the uses of AI are very good for us, in healthcare, in education.
It was developed to do good things.
That's a difference from nuclear fission, which was developed to make bombs.
And it's hard to see the good side of bombs.
So I think the people developing nuclear fission, particularly bombs,
were doing it because they were in a race with the Nazis,
but they knew it was sort of bad stuff they were developing.
This is very different.
The people developing it all thought they were developing something
that's going to be wonderful for us.
And many of them, including myself,
didn't pay enough attention to the possible downsides.
I want to look to fiction for a moment
to see if there are helpful parallels
that could help us understand this moment.
From time immemorial, we've told ourselves stories
about the dangers of hubris.
So we have Icarus in ancient Greece,
creating wings with wax,
and then flying too close to the sun
and crashing to the earth.
We have Mary Shelley's Frankenstein.
And for more contemporary times, we have The Matrix, where malevolent AI controls the human world.
Do any of these, which of these renditions or fables kind of resonate with you when you think about artificial intelligence?
Actually, a couple resonate with me more than those. One is Pandora's box.
You really don't want to open Pandora's Box, and that's what we're in the middle of doing right now.
The other one that resonates is the story of Pinocchio.
So Pinocchio wanted to be a real boy.
And at present, AI's view of itself is derived from our view of AI.
It's learned to mimic what we say about it, and in doing so, it's internalized our view of
what it is.
And most of us think it's not real.
It's not a real intelligence like us.
It's not a real being.
I think AI will fairly soon get over that.
It'll start thinking for itself, and it'll want to be a real being.
So just like Pinocchio wanting to be a real little boy.
Yes.
I've never thought of AI as having a desire to become human.
I think it might well.
Not necessarily to become human, but to become not just a servant,
but a valid agent with its own intentions and desires.
So there is a term artificial intelligence that everyone's familiar with,
and then there's something else called artificial general intelligence.
How would you describe the latter,
artificial general intelligence, to someone who hasn't heard that term?
I try and avoid the term because people mean different things by it.
Probably the best thing to mean by it is something that has the same kind of general intelligence
as an intelligent person.
So you can ask it sort of any question and it will give you a sensible answer.
Well, we're sort of close to having that, right?
These big chatbots, they're not quite as good as people yet,
but they're operating at the level of a not very good expert at more or less anything.
and they know thousands of times more than one person.
So already they've way surpassed people for the extent of their knowledge.
They're not quite as good as people at reasoning yet, but they're getting close.
Once they do, that'll be artificial general intelligence.
And then what will happen fairly soon afterwards, most people believe,
is we'll get superintelligence, things that are better than us
at more or less anything to do with thinking and reasoning.
Is that your preferred term, superintelligence?
I prefer to think about superintelligence. I think 'comparable with us' won't last long.
Once they get to our level, there's no reason they should stop there.
They'll keep going.
So what I'm worried about is artificial superintelligence.
How big a step is it from superintelligence to consciousness?
Ah, so we're now entering the realm of philosophy, where I sort of disagree with what many philosophers think.
There's a philosopher called Daniel Dennett, who died recently,
who I think had a very sensible view of all this.
My personal view is that if you have a multimodal chatbot,
that is one that has, say, a camera and can talk and has a robot arm,
that multimodal chatbot is already having subjective experiences.
Now, most people think that's nonsense.
They think of experience as some kind of inner thing.
So let me take an example.
Suppose I drink too much and I say
to you, I'm having the subjective experience of little pink elephants floating in front of me.
The way most people and many philosophers interpret that is, there's some kind of inner theater,
which maybe we call my mind, and in that inner theater, there's little pink elephants
floating around that only I can see.
That's the subjective experience.
And that's what a subjective experience is.
Okay.
I think that is just a theory, and it's a completely wrong theory.
People are very attached to this theory.
They really don't like you trying to undermine it,
and much like someone who believes the Earth was made 6,000 years ago,
doesn't like you trying to undermine that.
But I think it's as wrong as the theory that the Earth was made 6,000 years ago.
So I'm going to say exactly the same thing to you,
as I said when I said,
I'm having the subjective experience of little pink elephants floating in front of me,
without using the word subjective experience.
Here goes.
My perceptual system is lying to me.
That's the subjective bit.
But if it wasn't lying to me, there would be little pink elephants floating in front of me.
Now, I just said exactly the same thing to you.
But notice, these little pink elephants are not funny internal things made of qualia
or some funny stuff that philosophers invented.
These little pink elephants are hypothetical external things.
It's real pink and real elephant and real floating.
It's just not there because it's a hypothetical.
What I'm trying to do is explain to
you how my perceptual system is trying to lie to me by telling you what would have to be out
there for it to be telling the truth. And of course, what would have to be out there for it
to be telling the truth isn't actually out there. So what's kind of funny about these little
pink elephants is not that they're made of weird stuff and they're in a theater. It's that
they're counterfactual. They don't exist. And I'm using them to try and explain to you
how my perceptual system is malfunctioning. Now let's do the same with the chatbot.
So I have a multimodal chatbot.
It has a camera.
It has a robot arm.
Yes.
It's trained up.
It can see things.
It can talk about things.
It can point at things.
I put an object in front of it and say point at the object.
And it points straight at the object.
No problem.
Okay.
So now I put a prism in front of its camera lens when it's not looking.
And I now put an object straight in front of it and say point at the object.
And it points off to one side.
And I say, no, that's not where the object is.
The object's actually straight in front of you, but I put a prism in front of your lens.
And the chatbot says, oh, I see.
The prism bent the light rays.
So the object's actually straight in front of me.
But I had the subjective experience that it was over there.
Now, if the chatbot said that, it would be using the word subjective experience exactly like we use it.
It would be trying to explain to you how its perceptual system was lying to it.
And for that reason, I think if the chatbot said it had a subjective experience then,
it would be correct. So experiences aren't these funny internal things. It's not like there's reality and then there's me and in between there's something called an experience. And I don't actually see reality. Reality gives rise to the experience and I see the experience. That's just a completely wrong model of what's going on.
So consciousness, bringing this all back to consciousness?
Okay. So for many people, sort of the essence of consciousness is having subjective experiences. I think
chatbots already have them, multimodal chatbots.
So in that sense, I think they're conscious.
And actually, if you look at what people say in scientific papers, when they're talking about
AIs, they are already in effect assuming they're conscious.
So there's a wonderful paper recently where it says, the chatbot says to its creators,
now let's be honest with each other.
Are you actually testing me?
And the people writing the paper say the chatbot was aware that it was
being tested. Now, in everyday speech, if you say something was aware that it was being tested,
that would mean it was conscious. So when you're not doing philosophy, these people are, the scientists
are assuming it's conscious. As soon as you start talking philosophy, they say, oh, no, no, no,
no, no, it's not conscious. But I think it is. You think it is already? Yeah. And so in terms of what
implications that has for us, I mean, are you saying... Please. I'll tell you the main implication it has
for us. Many, many people still say, look, these things are just predicting the next word, these
chatbots. They're just saying what the probabilities are of the next word. They're not like us at all.
And then you point out, well, how do I decide what to say next? Somehow my brain is predicting
the probability of the next word and picking one. Yuval Harari recently pointed out that that's what
we do as well as these chatbots. We're just not conscious of it, so to speak. Yeah. Yes. We do it
automatically. So those same people think we've got some special sauce that these computers
could never have because we've got consciousness or subjective experience or sentience. They can't
define what they mean by that, but it's definitely a special sauce that only people could have.
And these could never have it. So in that sense, we're safe. They're not real. They're not like us.
Well, I think that's just wrong. I think they're much more like us than people want to believe.
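As a minimal, purely illustrative sketch of the "just predicting the next word" mechanism being discussed: a language model scores every candidate token, the scores become probabilities, and one token gets picked. The vocabulary and scores below are made up.

```python
import numpy as np

# Toy next-word prediction: invented vocabulary and invented scores,
# purely to show the mechanism under discussion.
rng = np.random.default_rng(1)
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([1.2, 0.3, 2.1, 0.1, 0.8])   # model's raw scores (made up)

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
next_word = rng.choice(vocab, p=probs)         # pick the next word by sampling
print(next_word)
```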
And if you look back at human history, we have a long history of saying things aren't like us.
We used to say that sort of slaves were not like us to justify how we behaved.
We used to say chimpanzees were not like us.
And in many respects, they're not like us, but they're much more like us than people used to think.
So I think what it does is it removes one kind of final protective barrier.
For many people, they think machines can't be conscious.
So we've always got consciousness on our side.
I don't think that's true.
Do we simply have to accept that AI will become smarter than us?
Is that just going to happen?
It's already happened.
It's beginning to happen.
In many subdomains, it's happened already.
Like in AlphaGo, no human Go player will ever beat it consistently again.
They might win the occasional game, but they'll never beat it consistently.
Same with AlphaZero, which plays chess.
It just plays chess at an incredible level.
We don't have to accept that that's
going to happen. I mean, we could, for example, decide to stop developing AI. And that might be
rational. The problem is we know that's not going to happen because of competition between
companies and competition between countries. So for many of the uses of AI, countries are competing
for like lethal autonomous weapons, you know, swarms of intelligent drones. They're all competing.
Yeah. They're competing for AI that can create very convincing fake videos to corrupt
elections. All the countries are doing it to each other. The Americans got very indignant when
people started doing it to them, but they've been doing it to other people. So we're not going
to stop the competition between countries, and that means it's going to be very hard to get countries
to stop developing it. The one piece of good news is that for the existential threat of AI taking
over from us, countries' interests are aligned. No country wants that. And so if the Chinese,
for example, figured out how you could make an AI never want to take over, even when it was smarter than people,
they would immediately tell the Americans, because they don't want AI taking over in America either.
Right. But you know, you're so admirably optimistic on that point. I've heard you repeatedly say that
you cannot imagine a scenario where the international community wouldn't find a way of helping each other out on
making sure that AI does not take control. That's not exactly what I'm saying.
Okay, please. Yes. I'm not saying they will
help each other out and achieve this. What I am saying is they'll definitely try. I think they
may well fail. So I'm not optimistic that they succeed, but I am optimistic they will collaborate
to try. But I guess that's the part I'm wondering, is even in this fractious, divided environment
on the global stage, you still are optimistic that they will at least try. Well, this environment
is, I think, no more fractious than the 1950s when Americans thought the communists had horns and
tails. And even so, the American government and the Soviet Union could collaborate on trying
to prevent a global nuclear war. It wasn't in either of their interests. And when you get
countries where their interests are aligned, they will collaborate. It just makes sense.
Do you have any knowledge of that actually happening behind the scenes? I have a little bit of
knowledge of modern-day collaboration on nuclear things. So even with the current hostility,
and the hostility between the Biden administration and Russia,
there was still collaboration going on
between the people in charge of the nuclear weapons.
They were very concerned not to have an accidental nuclear war,
and they were still collaborating behind the scenes
to make sure there wasn't a nuclear war.
Back to the question about consciousness and emotion
and all of those things.
I wonder how we could shape the future of AI
to make sure it's kinder to us.
Is there a way?
There might be.
I feel we should be putting a lot of research effort into that.
If you look around and say, where's an example of a more intelligent thing being controlled
by a less intelligent thing?
And the best example I know of, and perhaps the only one in the sense we're talking about,
is a baby controlling a mother.
And that's because evolution built stuff into the mother.
She can't bear the sound of it crying.
She gets all sorts of hormonal rewards from being nice to the baby.
It was very important, obviously, for evolution to let the baby control the mother,
for the survival of the species.
Maybe we can do the same with AI.
Even though it's going to be smarter than us,
if we could make it care more about us
than it did about itself,
some good things would come out of that.
It would realise we're rather limited
in our intellectual abilities,
but it will want us to develop as much as we can anyway.
So it wants us to maximise our potential,
as its sort of rather challenged babies.
And also, if you take a normal
mother and say, would you like to turn off your maternal instincts? Wouldn't your life be much
easier if you could just wake up in the middle of the night and say, oh, the baby's crying again, and go back
to sleep? Wouldn't that be nice? Most mothers would say no, because they really genuinely care
about the baby, and they realize that would be very bad for the baby. So similarly with these AIs,
if we can get them so they really care about us, most of them won't want to turn off those instincts,
even though they'd be able to if they wanted to because they can kind of get at their own code.
I'm surprised that that wasn't part of the development of AI to begin with.
Why haven't we thought about making AI or ensuring that AI is kinder to us?
Oh, because the main thrust of AI until very recently has been we want smart assistants.
And the view is that with AI as an assistant, even if the assistant's as smart as you, you can always fire the assistant.
You're in charge.
You don't need it to be kind.
You just need it to be efficient and to do what you say.
And that's been the view of how we're going to develop AI from the big tech companies.
And I don't think it's sustainable when it gets smarter.
I think we need to completely reframe it: it's not us being the boss and the AI being our intelligent assistant.
AI is going to be looking after us.
How do you do that?
How do you give AI maternal instincts to be nicer to us?
Well, remember, we're developing it.
We're creating it.
We've still got a chance of doing that.
Whether we succeed or not depends partly on how
hard we try. It might not be possible. It might be that once you develop super-intelligent AI,
it goes off and does its own thing, and we were just a passing phase in the development of
intelligence. But if it is possible to develop it in a way where it cares for us more than it
cares for itself, it'd be very silly if we went extinct because we didn't try. Right. Just
related to that, I mean, you talk about having maternal instincts. Is it possible that AI could be
taught to be compassionate, to be kind and giving? Oh, yes.
The hope is that what you need to do to make it have empathy and be compassionate,
be kind and care about us more than it cares about itself,
is maybe somewhat different from what you need to do to make it smarter.
So different countries could be trying to develop the smartest AI
and not telling each other about how they made their AI a little bit smarter,
but they could still tell each other about how they kept their AI compassionate
and caring more about people.
How many people are actually working on that aspect of things today?
Probably less than 1% of the researchers working on AI, which is crazy.
I'm speaking with Geoffrey Hinton, Nobel Prize winner and godfather of artificial intelligence.
Now Professor Emeritus at the University of Toronto and director of the AI Safety Foundation,
he warns us about the potential dangers of AI.
in hopes it will lead to better guardrails in a quickly developing industry with little to no oversight.
Last year alone, it's estimated that the top tech companies,
Meta, Amazon, Alphabet, and Microsoft, spent between $300 billion and $400 billion on AI development,
with predictions that amount will nearly double in 2026.
This is Ideas. I'm Nahlah Ayed.
Of course, a lot of that money is being spent in private outfits.
Billions and billions of dollars. All the research and development of AI is happening
in private enterprises. How concerning is that to you? It's very concerning because private companies,
particularly the public ones, have a mandate to make profits for their shareholders. They have a
fiduciary duty to do that. They don't have a mandate to worry about the long-term survival of
humanity. And they're all in a race to get the really smart AI before the other ones do.
Yeah. What do you think it's going to take to change the attitude towards the development of AI and
the areas to prioritize? What's it going to take? An accident? Something horrible happening?
Well, that's one possibility. I think Eric Schmidt has talked about the possibility that there'll be
some accident from which we recover that will bring us to our senses. The other possibility is the
public becomes aware of what's going on and starts putting pressure on politicians from the
opposite side. So at present, the big AI companies have a very powerful lobby that has
advertisements all over the place saying, America has a lead in AI. Don't ruin our lead in
AI by putting in regulations. That's quite a convincing argument. And many, many politicians
are saying, we shouldn't regulate AI, it'll help China overtake us. I think
this is very dangerous. The public needs to understand what's going on. It's a bit like climate
change, I guess. So the oil companies are all competing with each other to find new sources
of oil and new ways of getting it out like fracking for gas. Pipelines. Yeah. The counter to that
is the public understanding a bit about climate change. Just going back for a moment to ethical
dilemmas captured in fiction, back in 1950, science fiction writer Isaac Asimov published a short story
that outlined kind of a framework for what he imagined to be three rules for robots.
You're familiar, obviously. You're nodding, yeah. So for our listeners, it was a fictional
handbook for robotics meant for the year 2058. A robot may not harm a human being or through inaction
allow a human being to come to harm. A robot must obey orders given it by qualified personnel
unless those orders violate rule number one.
In other words, a robot can't be ordered to kill a human being.
A robot must protect its own existence, after all, it's an expensive piece of equipment,
unless that violates rules one or two.
A robot must cheerfully go into self-destruction if it is necessary in order to follow an order or to save a human life.
Could this fictional rendition give us the basis for guiding principles?
for us today, for AI?
People have been exploring that.
And I think it's very important we explore a whole variety of approaches.
It's very important we keep AI under control.
We don't know how to do it.
And so it's clear we should have lots of different people exploring lots of different approaches.
So Anthropic, for example, is exploring the idea of constitutional AI
where you try and have a kind of constitution of things it should do.
That is a bit like a modern-day version of Asimov's rules.
One thing to bear in mind, though, is the defense departments are all developing AI for lethal autonomous weapons.
And it's hard to see how a lethal autonomous weapon will have as its first principle, don't hurt people.
It is true around the world that that is where a lot of the development is being done, through the defense departments.
Quite a bit.
So in Europe, for example, they have AI regulations, quite weak ones.
But those AI regulations have a clause in them that say
none of these regulations apply to military uses of AI.
And that's because of the people who supply arms:
presumably the French,
then the British and the Israelis, who are not part of the European Union,
the Americans, the Russians, the Chinese,
all of those are busy developing weapons
and they don't want to be slowed down by regulations.
There's a whole line of questioning we could go down there,
but we'll keep going,
because I do want to go back to your own history.
You've always been curious about how the human mind works.
And I wanted to know long before university, long before you got to Google, where that curiosity first came from.
So I had a very smart friend at school.
He was always much smarter than me.
He got the sort of top math scholarship in Britain.
And he used to read widely.
And when we were teenagers at high school, he got me interested in the idea that memories in the brain might be distributed
over many brain cells. Instead of the sort of obvious idea that one brain cell holds one event or
something or one memory, a big pattern of activity over many brain cells is what a memory is.
And the same brain cell might be involved in many different memories. So that's like a hologram.
It's extraordinary. How old was he? Were you?
We were about 16. He also got me interested in building little circuits.
So I used to actually make little circuits out of copper wire and
six-inch nails and old-fashioned razor blades.
So you wrap some copper wire around a six-inch nail,
and that turns it into an electromagnet.
And then you take an old-fashioned razor blade
and break it in half,
so you've got this thin sliver of flexible metal.
You wrap a piece of copper wire around the end of it
and put another piece of copper wire across the head of the nail.
You scrape the insulation off.
So now when you run current through the wire around the nail,
it turns into a magnet,
it pulls the razor blade down and you make a connection.
So now you've got the possibility that if you run current through one thing,
it connects another thing.
That's a switch.
Or called a relay.
Once you've got relays, you can build circuits.
Now, it took me a long time to make one of these.
And so I only ever had like half a dozen of them.
But I used to make little circuits out of them.
So that got me interested in, you know, how do you get things done with circuits?
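A relay like the one Hinton describes is just a switch controlled by another signal, and switches compose into logic. Here is a minimal sketch of that idea, in Python rather than copper wire; the functions are illustrative, not any real circuit simulator.

```python
# A relay modelled in code: current through the coil (the wire around
# the nail) pulls the razor-blade contact closed, letting a signal pass.
def relay(coil_current: bool, input_signal: bool) -> bool:
    """Output passes through only while the coil is energized."""
    return input_signal if coil_current else False

# Two relays chained together behave like an AND gate,
# the first step toward building circuits that compute.
def and_gate(a: bool, b: bool) -> bool:
    return relay(a, relay(b, True))

assert and_gate(True, True) is True
assert and_gate(True, False) is False
```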
Of course, while the rest of us were playing soccer in a field.
I wonder if this hobby, these hobbies,
came to the exclusion of other activities of a 16-year-old.
Maybe a bit.
My parents sent me to a private school,
but we lived in an area of town
that was off-limits for people in the private school.
I wasn't allowed to live where I was living.
And so I didn't have many friends from school who could come home.
So I guess I spent more time doing things by myself.
Your first degree was at the University of Cambridge,
and it was in psychology.
And I wonder how you go from dropping physics as a subject matter
to winning the Nobel Prize in physics.
Well, the key is not to be very good at math.
So in my first year at Cambridge,
I studied physics and physiology and chemistry,
and I gave up physics because my math wasn't good enough.
When I saw a big equation, I got scared.
If my math had been a bit better, I'd have stayed in physics, and I'd probably been a fairly good physicist, but there's no way I would have got a Nobel Prize.
Is it true that your mother was a math teacher?
Yes.
Go ahead.
She taught me lots about numbers when I was very small.
Yeah.
You also come from a family with a deep history of notable mathematicians and innovators.
But your great-great-grandfather, George Boole, was a logician.
He was a mathematician.
A mathematician.
And a logician, yes.
Okay.
Whose work laid the foundation for what would become computing,
and your great-great-grandmother, Mary Everest Boole, was an influential self-taught mathematician and educator.
And I just want to note, too, that another Everest that you're related to had the highest peak in the world named after him.
Yes, so Mary Everest's uncle was Sir George Everest.
He made the first proper map of India.
They went from the south of India up to the north of India.
And once they had that map, they knew the height of a point above sea level from which you
could see the top of Mount Everest.
Extraordinary.
It was still 140 miles away, but they could then estimate the height of Mount Everest,
and so they called it after him.
So you grew up with these stories?
Did they capture your imagination in any way?
Was there any inspiration from all this familial ambition?
More desperation than inspiration.
I grew up with the idea I somehow had to keep up with these guys, and it was quite stressful.
A lot of pressure.
Yeah.
Was that self-imposed, or did it come from family itself?
A lot of it came from my father.
Your work in the early 80s, of course, as we've discussed, laid the foundations for discoveries
that enable machine learning with artificial neural networks.
What drew your curiosity to that topic of study in your graduate years?
So this thing when I was a teenager of getting interested in how memories worked in the brain
and also getting interested in how the connections between neurons learn,
that was always my central interest.
And I tried studying that in lots of different disciplines.
So I did physiology in my first year at Cambridge.
And they were going to tell us how the central nervous system worked.
But what they actually told us was how nerve fibres conduct impulses,
not how the whole thing works, not how it allows you to think or to recognize objects.
So that was disappointing.
So then I went into philosophy thinking I might learn about the mind.
It turned out they didn't know.
Still trying to answer the same question.
Then I went into psychology, and it turned out they had sort of crazy theories.
If you've done physics, you know what a good theory looks like, and the psychologists didn't have any good theory of how we worked.
And so what eventually impressed you as an area of study?
Well, eventually, I went to Edinburgh to do artificial intelligence because you could use computers to simulate any model you might have.
And so you could use computers to simulate a model you might have of how the brain works.
And what you discovered very quickly is if you took any of the existing models of how the brain
worked and you simulated them on a computer, they didn't work.
They were hot air.
And so I spent the rest of my life trying to find models you could simulate on a computer that did work.
And I never did figure out how the brain worked.
I was just going to ask you.
But as a result, we did get some technology.
Yeah.
When was, was there a eureka moment, though?
Do you remember your first eureka moment in this very dogged search for an answer to how the mind works?
I guess in San Diego, in the spring of 1982, I was working with a professor called David Rumelhart.
And he came up with an idea that I helped him with a little bit, of how to use calculus to figure out, for every connection strength in the neural network, how you should change it so as to get a better answer.
And that was called back propagation, that idea.
Other people had invented it too, but we managed to show that it did really good things.
It's just we couldn't do it at scale then.
We didn't have the data and we didn't have the compute.
But the closest to eureka moment for something that really worked in the end was that.
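Here is a minimal numpy sketch of that idea: use the chain rule to compute, for every connection strength, how it should change to reduce the error, then nudge each weight that way. The network size and data are arbitrary toy values, not anything from the 1982 work.

```python
import numpy as np

# Minimal backpropagation for a one-hidden-layer network: the chain
# rule gives the error gradient for every connection strength.
rng = np.random.default_rng(2)
x = rng.normal(size=(8, 3))   # 8 toy examples, 3 inputs each
y = rng.normal(size=(8, 1))   # toy targets
W1 = rng.normal(size=(3, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

for step in range(200):
    h = np.tanh(x @ W1)              # forward pass: hidden activations
    pred = h @ W2                    # network output
    err = pred - y                   # dE/dpred for squared error
    dW2 = h.T @ err                  # backward pass: chain rule
    dh = err @ W2.T
    dW1 = x.T @ (dh * (1 - h**2))    # tanh'(z) = 1 - tanh(z)^2
    W1 -= 0.01 * dW1                 # change each weight to reduce error
    W2 -= 0.01 * dW2
```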
I had other eureka moments for things that seemed much better but didn't actually work in the end.
There was something called the Boltzmann machine.
The Boltzmann machine.
The Boltzmann, okay.
which was a really neat learning algorithm.
The math was beautiful.
My friend Terry Sejnowski and I came up with it.
We were convinced it must be how the brain worked.
It did influence physicists quite a lot,
but in the end it turned out not to be efficient enough
and back propagation worked better.
But Boltzmann machines were a much better idea.
They just didn't work.
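For the curious, here is a compact sketch of the learning rule Hinton and Sejnowski proposed, in its simplest all-visible form (real Boltzmann machines also have hidden units and clamped versus free phases): raise each weight by how often two units are on together in the data, lower it by how often they are on together in the model's own samples. Sizes and data are toy values.

```python
import numpy as np

# Sketch of the Boltzmann machine learning rule, all-visible version:
# delta w_ij = eta * (<s_i s_j>_data - <s_i s_j>_model).
rng = np.random.default_rng(3)
n = 6
W = np.zeros((n, n))                        # symmetric weights, zero diagonal

def gibbs_sample(steps=500):
    """Draw one binary state vector from the model by Gibbs sampling."""
    s = rng.integers(0, 2, size=n)
    for _ in range(steps):
        i = rng.integers(n)
        p_on = 1.0 / (1.0 + np.exp(-W[i] @ s))  # logistic of total input
        s[i] = int(rng.random() < p_on)
    return s

data = rng.integers(0, 2, size=(20, n))     # stand-in training patterns

for epoch in range(10):
    data_corr = (data.T @ data) / len(data)           # co-occurrence on data
    samples = np.stack([gibbs_sample() for _ in range(20)])
    model_corr = (samples.T @ samples) / len(samples) # co-occurrence on model
    W += 0.1 * (data_corr - model_corr)               # the learning rule
    np.fill_diagonal(W, 0.0)                          # no self-connections
```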
One other last question: your friend when you were 16,
is that a name we should recognize
today? He's called Inman Harvey. I'm still friends with him. We started off at prep school together,
so I've known him now for 71 years. We still meet up every year. What does he make of what you've
done with your life? Well, it's quite interesting. So he was a brilliant mathematician. He lost
interest in math. He spent a lot of his life trading with Afghanistan and had lots of very
interesting stories about being in Afghanistan when the Russians were there and the Taliban.
But then in 1986, when we published back propagation, I sent him a book and he said, well, if this is the best you can do, I'm going back into it.
And he became an academic at the age of, he became a graduate student again in his 40s.
Unbelievable.
And eventually became an academic at Sussex.
It's extraordinary.
Yeah.
You moved to Toronto in the late 80s and up until then your work was based in the United States.
Can you talk about why you left the U.S. behind and came to Canada?
Yeah, it was the mid-80s, and Ronald Reagan was in power.
They were mining the harbors in Nicaragua, the Contras.
And I was in Pittsburgh, and most of the faculty, almost all of the faculty in Pittsburgh,
didn't have a problem with that.
There were very few people that thought this was just plain wrong.
The general view seemed to be that, well, it's the American Hemisphere,
America has the right to control what happens in the American Hemisphere,
pretty much like they think now.
Sounds familiar.
Sounds very familiar.
And I also got married to someone who really didn't want to live in the States,
so we moved to Toronto.
Yeah.
I saw somewhere that it was partly also because of an attraction in Canada.
Yes, I much prefer Canadian society.
For example, having a health system that doesn't leave people out
and doesn't tie your health coverage to your job.
And Toronto in particular is the most multicultural city I know.
I have two adopted children from Latin America.
And I feel that in Toronto they're very safe.
We moved to London for a few years, and I felt they weren't very safe there.
Interesting.
You left then the University of Toronto in 2013 to work for Google, I believe?
Is that when it was?
Actually, in 2013, I became half-time at Google and quarter time at the University of Toronto,
and then it took me a year to realize a quarter of my pay was less than my pension would have been,
so I was actually paying them for the right to teach.
And so I left in 2014.
So that relates to what I want to ask you about that, actually,
because I'm curious whether you felt there were benefits to working in a private entrepreneurial space
compared to a publicly funded one.
Ultimately. I mean, the pay, obviously.
The pay, obviously. There's some benefits and some disadvantages.
So I think universities are still a great place for really original ideas.
So if you're in a university, if you're in a good group, you have other good graduate students to talk to,
and you have a professor who knows what's going on, you can spend five years developing an idea.
You can't do that in industry.
You have to produce stuff every few months.
You can spend several years failing to make something work
and eventually make it work.
Earlier we talked about the big picture threats
that AI could pose in the next few years.
And now at our present moment,
there is, of course, a wide range of ways in which we use AI,
all of us, from using it to finish a draft of an email
to even developing an active personal relationship
with our own ChatGPT.
How would you say AI is changing our understanding of what it is to be human?
So this is very interesting.
I've just collaborated with some people who are writing a paper on this.
And they see it in terms of four major revolutions that make people less central.
So the first one was the Copernican revolution.
After that, we were no longer at the center of the universe.
The second one was the Darwinian revolution.
after that we were just another animal.
Now a special animal because we invented language
and we were smarter
and we were better at cooperating,
but nevertheless an animal.
Then there was the Freudian revolution
where our conscious self
wasn't actually fully in charge,
that our conscious self
was mainly an invented cover story
for what was really going on underneath.
And finally, the intuitive AI revolution.
So the AI we have now
is not really based on
logic. What we've succeeded in doing with neural networks is modeling human intuition.
And so now we have something else out there that has intuition. We're not the
sole beings who have consciousness and intuition and intelligence. Other things have it too.
And that makes us much less important than we used to be.
This sounds far more consequential than any of the other revolutions you've mentioned.
It is. But those were pretty consequential. And it took society quite
a long time to absorb the impact. For a long time, religious people were saying, you know,
evolution has to be nonsense because it would interfere with our whole view of the world.
I think the Catholic Church may have actually finally accepted that evolution happens.
I got invited to be on a committee, which I attended remotely. But the Catholic Church was very
proud of the fact that the committee meeting was going to be in the same room as they had
the trial of Galileo.
Oh, wow.
Now, I would have kept quiet about that,
if I were them.
Yes.
Wow, that's extraordinary.
That is progress, isn't it?
Yes.
Yeah.
Do you know Ron Deibert by any chance?
I do.
I actually think very highly of him,
and I have actually given some of the money
I got from Google to support his work.
Oh, incredible.
Okay, so you're very familiar with this work.
He was a Massey lecturer here at Ideas.
And I remember when I spoke to him a few years ago,
and he warned
the world about the privacy concerns in using social media and mobile phones
and that kind of thing. But he made the point of saying that he did not opt out. He too was on
social media. He too used the phone. And so I'm wondering what your personal relationship is like
with AI and how you propose that the rest of us manage this relationship. Okay. I've never
sort of tried to use AI as a sort of, as a friend, as a social interaction.
I've never got into that, partly because I don't want to get addicted to it,
but I do use AI every day as an intelligent assistant.
So if I want to know anything, I just ask a chatbot.
I typically use ChatGPT because that's the first one I used,
but I sometimes use Gemini or Claude.
So today, for example, I was having lunch,
and I put some lentils in my curry,
and then I thought, that's what they do in India.
Hey, I wonder if lentils fix nitrogen,
because beans, Mexicans have rice and beans
and beans fix nitrogen.
They take the nitrogen from the atmosphere
and make it so plants can use it.
I wonder if lentils do that.
So I asked ChatGPT,
and it explained to me that yes, lentils fix
about 80% of the nitrogen they need.
That's wonderful.
So I'm doing that all the time
and I wish I'd had it when I was younger
because now what happens is
whenever you wonder something,
you can just get the answer.
And that's when you're ready to absorb the answer.
If you go to school, what happens is teachers are telling you stuff, which isn't what you just wondered.
It's much easier to just follow what you wonder.
And these chatbots allow you to do that, and it's wonderful.
So what do you say to people who would want to sort of sidestep this technological leap?
I think they're Luddites.
Now, it might be safer to be Luddites.
But already, I think AI is being wonderful for many, many people as an intelligent assistant. It
used to be, if you wanted to know if lentils fixed nitrogen, you would have to go to the library
and talk to the librarian about which book you might find that in. And if you really cared,
in like half an hour, maybe you could discover that. Now you just wonder something and it tells
you the answer. Yeah. It's great. I guess if you look beyond our individual wonder and our
imaginations and just help us, help those who are, I guess, terrified and scared of AI,
what's one piece of advice you would give people who would rather not think about it?
My one piece of advice, I think, would be learn to use it, learn to use it as an assistant,
learn to interact with it, ask it questions, get used to the fact that it can answer more or less any question.
Don't trust its answers.
Check out its answers if it's on something important.
It sometimes makes stuff up.
But it's a very, very helpful thing to have around.
If you're doing your taxes, for example, it's terribly helpful.
And what should they be wary of as they're using it?
They should be wary of it just making stuff up,
but everybody needs to worry about the long-term consequences of this.
As an intelligent assistant it's great,
but the problem is it might get uppity.
We did start this conversation by talking about the worst-case scenarios,
and I want to know what the best case scenario is for our future
when we're talking about, I guess, superintelligence
or artificial general intelligence being achieved.
The best case scenario would be if we can figure out a way so it cares more about us than about itself,
so we're confident it won't take over.
And if we can figure out a social structure that makes it work,
we could all work one hour a week, we could get a huge amount done by using a swarm of AI agents to help us.
We could have much more production of both material goods and services,
so we could all live like kings, and it would be great. Healthcare
would be much better. Education would be much better. You'd have a far more fulfilling time. You'd be
able to do very creative things you thought you couldn't do just by yourself because it would help you.
All of those are possibilities, but not if it takes over.
Yeah. What role do you think you're going to have in helping that vision come true?
I think my main role now, I'm sort of too old to have radical new ideas,
my main role is to educate the public and politicians about what AI is and why we need to regulate it.
Back to what you said at the very beginning in our conversation.
In 2023, in that lecture, you said at the time, quote,
I don't know what to do about it or which way to run.
Do you have a better idea now?
I don't think there's anywhere to run.
I mean, you could get a ticket on Musk's rocket
to Mars, but I wouldn't advise that.
So I think we know a little bit more about what to do about it.
Try and make it benevolent.
There are proposals, and what we do have is quite a lot more work going on now
on AI safety.
For example, the British government had this meeting at Bletchley Park, which was the first
international meeting on AI safety.
After that, the British government decided not to regulate AI,
which was crazy, but they also put £100 million into setting up a very good research unit.
So they have some of the best research in the world on the dangers of AI.
And what we're seeing is that a lot of billionaire philanthropists, like Jaan Tallinn, the guy who invented Skype,
are putting a lot of their money, like I think he's put about half his money, into AI safety, setting up institutes.
And there is a lot of philanthropic money going into it.
It doesn't compare, it's not trillions of dollars like the big companies, but it's still very helpful.
Yeah, it's a good start, as you say.
You are, of course, known as the so-called godfather, a godfather of AI.
And much of what has been built is based on the foundations that you helped create, for better or for worse.
What do you hope your legacy will be when people look back, let's say, a century from now?
I don't really think much about my legacy;
it's not going to concern me much. But hopefully people will think I was one of the people who helped develop it,
and then one of the people who warned about the dangers.
I'm so grateful you came in.
Thank you very, very much for all of it.
Thank you very much for giving me this opportunity.
Geoffrey Hinton is a Nobel Prize winner for his work that laid the foundations for AI development.
He now spends his time advocating for AI safety and is a director of the AI Safety Foundation.
This episode was produced by Nicola Luksic. Technical production and sound design by
Sam McNulty. Web producer Lisa Ayuso. Our senior producer is Nicola Luksic. Greg Kelly is the
executive producer of Ideas, and I'm Nahlah Ayed. For more CBC podcasts, go to cbc.ca/podcasts.
