The Agenda with Steve Paikin (Audio) - Geoffrey Hinton: Will AI Save the World or End it?
Episode Date: April 3, 2025. Geoffrey Hinton, also known as the godfather of AI, was recently awarded the Nobel Prize in Physics for his pioneering work in artificial intelligence. He joins host Steve Paikin for a wide-ranging discussion on his Nobel win, his departure from Google, the promise and perils of AI, and why he recently got under Elon Musk's skin.
Transcript
Renew your 2.0 TVO with more thought-provoking documentaries, insightful current affairs coverage, and fun programs and learning experiences for kids.
Regular contributions from people like you help us make a difference in the lives of Ontarians of all ages.
Visit tvo.me slash 2025 donate to renew your support or make a first-time donation
and continue to discover your 2.0 TVO.
Our next guest is known as the godfather of AI
and recently was awarded the Nobel Prize in Physics
for his pioneering work in artificial intelligence.
He's also gotten under Elon Musk's skin.
We'll have to ask about that.
He is Geoffrey Hinton, Professor Emeritus of Computer Science at the University of Toronto,
and he joins us now to discuss the promise and perils of advanced AI.
So good to have you back in the studio.
Thank you for inviting me.
We're going to show a little clip off the top here of I suspect one of the better days
of your life.
Sheldon, if you would.
["The Star-Spangled Banner"] ["The Star-Spangled Banner"]
That was King Carl XVI Gustaf of Sweden, and you, in Stockholm, receiving the Nobel Prize.
When does the buzz of all of that wear off?
I'll tell you when it wears off.
It still has not?
Not completely, no.
Not completely.
How cool a day was that?
It was amazing, yeah.
Particularly since I don't do physics, and I got the Nobel Prize in physics.
You want to explain how that happened?
I think they wanted to award a Nobel Prize for the developments in AI, because that's where a lot of the excitement in science is now.
And so they sort of repurposed the physics one by pretending I did physics.
Did you point that out to them?
Yes.
Thanks for the Nobel Prize, but you guys know I don't do physics.
Is that what you said?
That's what I said, yes.
And did they say don't look a gift horse in the mouth or what?
Pretty much, yeah.
You get a medal, right?
Oh, you do, yes.
Yeah.
Where do you keep the medallion?
I'm not telling you.
I'm not going to steal it, Geoffrey.
No, but somebody else might.
Oh, all right.
All right.
It's six ounces of gold.
It's worth about $15,000 if you melt it down.
So you're not going to tell me if it's at home,
or if you keep it in a safe deposit box, or whatever?
No.
OK, fair enough.
I'm going to read what you won for.
You won for, quote, foundational discoveries and inventions that enable machine learning
with artificial neural networks.
And I know you've been asked a million times, what does that mean in English?
So let's make it a million and one.
What does that mean in English?
Okay.
In your brain, you have a whole bunch of brain cells called neurons, and they have connections.
And when you learn something new,
what's happening is you're changing the strengths
of those connections.
And so to figure out how the brain works,
you have to figure out what the rule is
for changing the strengths of connections.
That's all you need to know.
How does the brain decide whether to make a connection
stronger or weaker so that you'll be better at doing something,
like understanding what I just said?
And your brain has a way of figuring out
whether to make a connection
slightly stronger or slightly weaker.
And the question is, what is that way?
How does it do it?
And what happens if you can mimic that
and take a big network of simulated brain cells?
We now know what happens: it gets very smart.
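(For readers who want to see that idea concretely, here is a minimal sketch in Python, using numpy, with a toy XOR task and layer sizes chosen purely for illustration. It is not Hinton's code, just a bare-bones instance of what he describes: a small network of simulated brain cells, and a learning rule, backpropagation, that decides whether each connection strength should be nudged slightly stronger or slightly weaker.)

import numpy as np

# A tiny network of simulated "brain cells": 2 inputs -> 4 hidden -> 1 output.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 4))  # input-to-hidden connection strengths
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))  # hidden-to-output connection strengths
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # how big each nudge to a connection strength is
for _ in range(5000):
    # Forward pass: each simulated neuron sums its weighted inputs and "fires".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error signals say which way each connection should move.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # The learning rule: make each connection slightly stronger or slightly weaker.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # after training: close to [[0], [1], [1], [0]]

(After a few thousand of those tiny nudges, the network's outputs match the XOR targets: "it gets very smart," in miniature.)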
Of all the thousands and thousands of areas of scientific research that you could
have done, why that one?
Because that's clearly the most interesting one.
To you?
I think it's actually the most interesting.
To everyone.
Because in order to understand people, you really need to understand how the brain works.
And we still don't properly know how the brain works.
We have more ideas than we did.
But that seems like just a huge issue.
You were obviously well known before you got the Nobel,
but then you got the Nobel, and of course,
that has an explosive effect on one's profile.
Since then, you have been warning us about the perils of AI.
You even quit your job at Google a couple of years ago because of concerns about this.
So let's break this down.
The short-term risks of not having adequate control of artificial intelligence are what
in your view?
Okay. So there's really two kinds of risk.
There's risks due to bad actors misusing it,
and those are more short-term risks.
Those are more immediate, it's already happening.
And then there's a completely different kind of risk,
which is when it gets smarter than us,
is it gonna want to take over?
Is it gonna want to just brush us aside and take over? And how many examples
do you know of
more intelligent things being controlled by much less intelligent things? Not many.
I mean, we know sort of more intelligent people can be controlled by less intelligent people,
but that's not a big difference in intelligence.
I was gonna make a Trump joke there, but never mind. We're gonna move on.
So was I, but I avoided it.
I just alluded to it.
OK.
Bad actors.
Let's start with that one there.
Give us an example of the concern
that you have about bad actors exploiting this.
Well, somebody getting lots of data about people
and using that data to target fake AI videos
to persuade those people, for example, not to vote.
That would be a bad actor.
That would be a problem, yes.
And those are the kinds of problems we're already facing.
Cyber attacks.
So between 2023 and 2024,
phishing attacks went up by 1,200%. There were 12 times more phishing attacks in 2024 than in 2023, and that's because
these large language models
made them much more effective. So it used to be you'd get a phishing attack where the syntax was slightly wrong,
it was kind of a direct translation from the Ukrainian or whatever, and the spelling was slightly wrong,
and so you knew it was a phishing attack. Now they're all in perfect English.
It's getting too sophisticated now. Yeah, okay.
How about examples of the second thing you said:
dumber people being in control of smarter people,
or dumber things being in control of smarter things?
There's only one example I know of, and that's a baby and a mother.
It's very important for the baby to control the mother.
And evolution put a huge amount of work
into making the baby's cries be unbearable to the mother.
But that's about it.
The longer-term risks that you are worried about,
we just talked short-term, how about longer-term?
Well, the long-term risk is it's going to get smarter than us. Almost all the leading researchers agree that it will get smarter than us;
they just disagree on when. Some people think it's maybe 20 years away, other people think it's three years away,
a few people think it's one year away.
So we all agree it's going to get smarter than us, and the question is what happens then.
Basically, we have no idea. People have opinions, but we don't have any good foundation
for estimating these probabilities.
So I would guess there's a 10% to 20% chance
it'll take over.
But I have no idea, really.
It's more than 1%, and it's less than 99%.
When you say take over, I think you've gone further than that.
I think you've said there's a 10% to 20% chance that we will be rendered extinct.
Yeah.
If it takes over, that's what'll happen.
You want to give us a time frame on that?
No, because like I say, there's no good way to estimate it.
But if we don't do something about it now, it might not be that long.
Right now, we're at a point in history
where there's still a chance we could figure out
how to develop super-intelligent AI and make it safe.
We don't know how to do that.
We don't even know if it's possible.
Hopefully it's possible, and if it is possible,
we ought to try and figure that out,
and we ought to spend a lot of effort
trying to figure that out.
Can you play out that scenario for us? How would they render us extinct?
There are so many different ways they could do that, if they wanted to, that I don't think there's much point speculating.
I don't think it would be like Terminator.
They could, for example, create a virus that just kills us all.
Okay, so we've clearly got to get a handle on that. Are we doing it?
It would be a good idea to get a handle on that. And there is research on safety
and there's research on this existential threat that they might just take over,
but not nearly enough. And the big companies are motivated by short-term
profits. What we need is for people to tell the governments that they ought to make
these big companies do more research on safety.
They ought to spend like a third of their resources on it, something like that.
How's that going?
People are becoming more aware, politicians are becoming more aware.
Recently in the States there was a step backwards, but...
You want to refer to what you're talking about there?
The Biden administration was interested in AI safety and had an executive order.
And I think it's gone the way of all of Biden's executive orders under Trump.
As in it's been reversed?
Yeah.
OK.
And I presume it's been reversed because the richest, techiest people in the United States
are all supporting this administration right now.
Fair to say? It's sad to say, yes.
Hmm.
What...
All right.
Clearly, you would like to see us get a handle on this.
You know, what can we do since it appears that there isn't the consensus there to do
anything about this at the moment?
Yes.
The first thing to do is build consensus that this is a really serious problem.
It's not just science fiction,
and we need to persuade the big companies
to do more research on safety.
It's like climate change.
You have to first build consensus
that there really is climate change,
and it's really gonna be terrible
if we don't do anything about it,
and then you can start getting action.
Not enough action, but at least some.
With this, we first need the consensus.
But one piece of good news is, for the existential threat that it might wipe people out, all
the different countries should be able to collaborate.
We should be able to collaborate with the Chinese.
Actually, I'm not sure who we is anymore.
I used to think of we as Canada and America, but that's not a we anymore.
It is not.
But anyway, countries should be able to collaborate, because nobody wants to get wiped out. The Chinese leaders
don't want to get wiped out; Trump doesn't want to get wiped out. They can collaborate on the existential threat.
So that's a little piece of good news.
But the bad news is we don't know what to do about it.
I mean, we desperately need research now to figure out what to do about it.
Is there an international institution that you see leading the way to get that collaboration?
There's a number of organizations that are trying to help with that, but no dominant
one yet.
I mean, is it a job for the UN or who?
Well, the UN is sort of a bit pathetic, right?
It's not up to this really.
Who's up to it?
The big companies have the resources.
So to do research on AI safety, you need to be dealing with the latest, most advanced
models, and only the big companies have the resources to train those.
Okay.
Let's talk about the richest man in the world, shall we?
I gather you're not on...
If we have to.
Well, I gather you're not on his Christmas card list anymore.
Okay.
So I agree with him on various things.
I agree with him on the existential threat, for example.
He takes it seriously.
And he's done some good things, like electric cars
and communications for people in Ukraine using Starlink.
So he's definitely done some good things.
But what he's doing now with Doge is obscene.
What's happening is he's cutting, almost at random, lots of government workers, good,
honest people who go to work and do their job.
He's accusing them of being corrupt and lazy and useless and just cutting their jobs.
And it's going to be terrible.
It's going to have terrible consequences on people.
And he just doesn't seem to care.
The only time I've seen him care was when I criticized him and he
said I was cruel.
Well, let's do this here.
You went on his home turf, X, formerly Twitter, and you tweeted, I think Elon Musk should
be expelled from the British
Royal Society, not because he peddles conspiracy theories and makes Nazi salutes, but because
of the huge damage he is doing to scientific institutions in the US.
Now let's see if he really believes in free speech.
And apparently you caught his attention because he tweeted back at you, only craven, insecure
fools care about awards and memberships.
History is the actual judge, always and forever.
Your comments above are carelessly ignorant, cruel and false.
That said, what specific actions require correction?
I will make mistakes, but endeavor to fix them fast.
Okay, what was your reaction to his tweet?
I thought it's best not to get involved in a long series of exchanges with Elon Musk
because I want to be able to get into the U.S.
And my friend, Yann LeCun, answered those questions.
Okay, and where would we be able to see the answers?
On Twitter.
If you look at the...
On X, rather.
If you look at...
So that's the only interaction you had directly with him?
A couple of years ago, he asked me to call him because he wanted to talk about the existential
threat.
He actually wanted to recruit me to be an advisor for X.
So xAI.
So we talked about the existential threat for a bit.
And then he asked if I would be an advisor
for his new xAI company, and I said no.
He thought I might agree because he employed
one of my best students as one of the technical people.
And then he started just rambling.
And so I made up a meeting and said,
I'm sorry, Elon, I have another meeting, so I have to go.
And that's it.
That's it.
If I can sort of break this thing in two,
I mean, he takes some fairly personal shots at you
at the beginning, as you did at him, fair.
I mean, not everybody agrees that what he was doing
when he got up on stage and did that thing
was a Nazi salute, you know?
He would argue he was just throwing his heart
out to the crowd.
Sure.
You're not buying that?
No.
You're not buying that, okay.
Particularly if you look at his history
and his parents' views and so on.
Yeah, he does seem to cozy up to some
fascistic situations here and there, yes.
But then the second part of this is rather constructive.
He's asked you for advice on what corrections he can make.
Yes, and I let somebody else answer that.
Yann answered that, so I left it at that.
Do you want to just share maybe one or two of the things that you think he ought to do?
Well, I mean, let's get straight what's going on here.
He wants there to be an enormous tax cut for the rich.
He wants a $4 trillion tax cut. That's what it's gonna cost.
And in order to get the money for that without increasing the national debt hugely, they have to cut somewhere.
The two things they're planning to do are cut government spending and
have tariffs, which are really a tax on the poor. Tariffs are a non-progressive tax.
They're gonna make everything more expensive.
And so normal people are gonna end up paying
four trillion dollars more for what they buy
to pay for the tax cuts for the rich.
This is disgusting.
This is government policy in the United States right now.
Which is disgusting.
You talk about damage to scientific institutions in the United States.
Referring to what?
Well, for example, if you put a crazy guy with a worm in his brain in charge of the
health system, that's not good.
That would be R.F.K. Jr. that you're referring to there.
Yeah.
You don't like anything of what he's doing right now.
No, I wouldn't say that.
These things are never completely black and white.
I think his emphasis on people having a healthy diet is important.
Maybe some of the things he's dead against, like seed oils, aren't quite right.
But the idea that people should have a healthy diet and that will improve health, that's
an important idea, and he sort of pushes that a bit.
But most of the rest of what he says is just nonsense.
You don't share his suspicion about vaccines and pharma and how we get autism and that
kind of thing?
No, I don't.
So there's been a lot of research on that already.
I mean, people have taken it very seriously because of all these crazy claims.
Most of the people who push that just want to sell you medicines or sell you something.
They're doing it as a sales technique to get your attention.
They don't really believe it themselves.
He's had his own kids vaccinated, as far as I know.
That says a lot.
I mean, it reminds me of the time when Fox News
would be broadcasting 24-7 against the mandatory vaccination,
and yet all the Fox employees had to get vaccinated.
Right. There you go.
Okay, we've talked a lot about the perils of AI.
Is there anything you can leave with us here that should make us somewhat optimistic that
things may actually work out?
Well, one of the reasons AI will be developed, and we can't just stop it now, is
because there are so many good things that will come out of it.
So for example, in healthcare it's going to do amazing things.
You're going to get much, much better healthcare.
Like you're going to have a family doctor
who's seen 100 million patients,
who knows and remembers the results of all the tests
that have ever been done on you and on your relatives,
and can give much, much better diagnosis.
Already, an AI system working with a doctor
makes far fewer errors in diagnosing complex cases
than a doctor alone.
So that's already happening, and it's gonna get much better. It's gonna be amazing in education.
So we know that a kid with a private tutor will learn about twice as fast, because the tutor can see what the kid
misunderstands. Now, AI systems aren't there yet, but sometime in the next 10 years
they'll probably be really good.
And so when a kid is learning something, the AI system will be able to see exactly what
it is the kid misunderstands, because the AI system's seen a million other kids, right?
It knows exactly what the kid misunderstands, exactly what example to give the kid to make
it clear what the misunderstanding is.
And so if a private tutor that's a person is like two times better,
these will be three or four times better.
It may not be good news for universities, but it's very good news for people learning stuff.
Not good news for universities because maybe we won't need them anymore? Won't have to go?
Well, you know, you'll need them for doing graduate research.
I think you'll still need an apprenticeship to learn how to do research,
because we can't say how you do research. We can say, okay, this problem, I would tackle it this way, but we can't really give the rules for it. There aren't any rules.
It's an apprenticeship.
All the kids who thought it was going to be a great idea to go to university and learn
how to code or take computer science, are they in trouble now?
They may well be, yes. I mean, in computer science you'll learn more than just how to code.
They call you the godfather of AI.
Do you like that title?
I quite do, actually.
It wasn't intended kindly.
Someone started calling me that after a meeting I was kind of chairing.
I kept interrupting people,
and therefore they called me the godfather. It was Andrew Ng; it was a meeting in Windsor,
in England, and after the meeting Andrew Ng started referring to me as the godfather.
Because you cut people off?
Because I was sort of... yeah. I was the oldest guy there,
pushing people around.
Got it. Half of your Nobel money, which I gather is what, $350,000, something like that?
The whole prize is a million dollars, about, and so half of it's half a million dollars.
Half a million dollars, okay.
Of that half a million, you donated $350,000 to Water First.
Do I have that right?
Yeah, a quarter of a million U.S. is about $350,000 Canadian.
Got it.
What's Water First?
Okay, Water First is an organization that trains people who live in indigenous communities
in water technology.
So how people who live in those communities can make their water safe.
And why did you pick them?
I adopted a child in Peru, and I lived there for two months, and you couldn't drink the tap water.
It was kind of lethal.
And so I experienced what it's like not to have safe drinking water.
And it just occupies, if you have a baby and you don't have safe drinking water,
it just occupies all your time on how you're going to stop the baby getting sick.
And it's just a crazy extra burden to impose on people.
And I think it's kind of obscene that in a rich country like Canada, there's all these
indigenous communities that don't have safe drinking water.
Like in Ontario, 20 percent of the indigenous communities don't have safe drinking water.
This will not satisfy you.
I don't mean it to satisfy you, but it's better today than it was a decade ago.
Maybe.
No, it is.
I mean, we can say that.
But they should all have safe drinking water.
Of course they should.
Of course they should.
OK.
What's ahead for you?
I'm trying to retire.
I'm doing a very bad job of it.
How old are you?
77.
Oh, that's way too young to retire.
You've got to...
I left Google at 75 because I wanted to retire.
You've got a lot of runway left still, maybe.
I mean, you look awfully good for 77.
Thank you.
No, I think you've got at least one or two or maybe three chapters left.
A good makeup artist makes all the difference, let me tell you.
I'm so grateful you could spare some time to come in and take these impertinent questions today. And
who knows, maybe you and Elon will get back together again and try and solve these
problems that we need solutions to.
I think that's improbable.
That's Geoffrey Hinton, Professor Emeritus of Computer Science at the University of Toronto,
who is now a Nobel Laureate in Physics.
Thank you for joining us on TVO tonight.
Thank you for inviting me.