Front Burner - AI ‘godfather’ on the tech’s global threat
Episode Date: May 8, 2023
Artificial intelligence is developing at such a rapid pace that leading figures in the field are warning about the mortal threats of losing control. Among the trio known collectively as the "godfathers of artificial intelligence," two researchers – both Canadian – are calling out the economic, ethical and existential risks of the tech they pioneered. University of Toronto scientist Geoffrey Hinton recently announced he'd quit his job at Google to speak out, and Yoshua Bengio is calling to pause the development of powerful AI systems like GPT-4. Today, Bengio joins us to explain the near-term dangers of AI, and what it would take for the tech to be a threat to humanity. Bengio is a professor at Université de Montréal and scientific director at Mila - Quebec AI Institute. For transcripts of this series, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
This is a CBC Podcast.
Hi, I'm Alex Panetta.
We are witnessing a remarkable event in human history.
A world-changing technology, artificial intelligence, is evolving rapidly.
In fact, it's evolving at such an astounding pace that some of the leading figures in this field are now sounding worried.
To hear some of their warnings, it brings to mind a sort of Frankenstein moment. And it leaves our society
at an inflection point where we have a choice to make. And we have to make it pretty quickly, they say.
The choice is, on the optimistic side,
do we step on that gas pedal and keep accelerating AI development
with its vast potential to bring countless new inventions into our lives?
Or, on the other hand, do we hit the brakes?
Do we prepare for a world where AI eliminates jobs,
makes it impossible to tell what's real from what's fake, or even worse,
what if it spins out of human control entirely in some dystopian sci-fi scenario where it destroys our species?
This fascinating and crucial debate is playing out right now among an elite group of researchers.
They are three men known collectively as the godfathers of artificial intelligence.
Just a few years ago, these three shared what's sometimes called
the Nobel Prize of Computer Science, the Turing Award.
Now, two of those three sound pretty worried.
It so happens those two are Canadians.
And one of them,
Geoffrey Hinton of the University of Toronto, has just announced he's quitting Google's AI research team to sound a public warning. Today we're speaking to his peer, a genius of the field,
the other Canadian in the big three. He's Yoshua Bengio, a professor at the Université de Montréal and scientific director
at Mila, the Quebec AI Institute. He's a computer scientist with the world-leading number of
academic citations. His discoveries have powered new tools like autocomplete, AI chatbots,
facial recognition. He's here to talk to us today.
Hi, Yoshua.
Hi.
We'll get into the nightmare scenarios in a moment, but I want to start with the positive.
This is your life's work, and it's going to achieve so many things.
I want to talk about the dreams you've had for this technology.
When you were developing it, what scenarios did you envision it for helping humanity?
One of the areas where humanity has the most to gain from AI is in healthcare.
Biology is extremely complex. We don't understand even a small fraction of how a single cell works. But we understand enough that we know we could do a lot better if we had a systematic model of how cells and our bodies work.
And what's happened in recent years with high throughput data being generated with experiments
on drugs, on genes, and so on, is that we're starting to bring together the possibility of
a systematic understanding of our body. And AI is going to be crucial to achieve this. It's not
going to happen just with human brains. There's just too much data to process and to experiment
with. So that's one example. In the last few years also, I've been involved in many projects
where researchers are developing methods to help to fight climate change in various ways.
Whether it is in better modeling the changes in the climate, whether it is in better using the energy we have, dealing with the variations of renewable energies, or even with new materials we may design that can help us, for example, to build better batteries to store energy or to capture carbon. And these are two examples, but there are people working in education, working in agriculture, for example in the developing world. Of course, the big companies are working on using these technologies to bring us better interfaces between humans and machines, which may bring immense reductions in costs for all kinds of things, which also has its potential dangers, of course.
So disease, climate change, education, agriculture, efficiency, that's a lot of potential good.
And, well, I want to ask you about the risks as well and how to mitigate them.
So as far as I can tell, there are different levels of risk.
And for the purposes of this conversation, I'll separate them into three categories.
Economic, societal, then existential.
So let's start at the most basic level.
The potential for economic disruption.
So how many jobs do you expect this technology
might wipe out in the short term? And which sorts of jobs do you think are most at risk?
There are a number of studies by economists in the last few years. They all point in the direction of
changes where for many jobs, it's not going to mean that the job disappears, but the people
will be able to do it more efficiently,
which, of course, can translate into either fewer jobs or more of that work being done.
And that includes jobs which are traditionally more intellectual, like programming.
We're seeing more and more tools helping programmers be much more efficient.
Then, of course, there could be areas where jobs go
completely. And it's interesting that the progress in recent months and the reason of concern is not
with AI in the form of robots that would do manual work, but rather AI systems that can
manipulate text,
that can write text, that can understand images,
that can work in the virtual environment.
Okay, so I'm going to stop gatekeeping on this AI tool.
I'm just going to copy the body of the email and paste it into AI Assist.
I'm going to write a prompt telling AI Assist to generate me a response.
And that means you can literally make ChatGPT pretty much write all of your emails.
Type in, give me 10 marketing angles for a cat brush. As it takes certain words in your video and it doesn't only change your voice but it changes what your mouth actually looks like to
say the word. Check out this demo real fast. And that is dangerous for the white collar jobs.
So, you know, there are numbers that these studies put out, and they are significant. And how societies will prepare for this in terms of reskilling, the social safety net and all that is going to make a big difference.
Now, as someone who writes for a living, I'll ask you maybe to take off your scientist cap for a second and put on a parenting cap.
You've got children.
You know, what advice would you give young people entering this job market in terms of training, education?
Are there things that we can do to protect ourselves from becoming casualties of the AI economy?
Yeah, this is a great question.
It takes a lot of time for the education system to change.
My first recommendation is don't educate yourself towards a very specialized niche job.
Instead, try to learn how to learn.
Try to learn critical judgment skills, general knowledge, so that if the job market changes
in unexpected ways, and it will, you can more easily shift jobs.
The other thing is, I think what will happen,
what I hope will happen, it's going to be a collective choice, is that we'll put more value
on jobs that are not easily replaceable by machines. Not because the machines will be
incompetent, but because we want to interact with a human being for some things. Like, you know, the educators for our young children,
they need a human person that's going to be a role model that's going to, you know, there's an
emotional exchange going on. And similarly, if you think about nurses or managers, you know,
everything that's relational, I don't mean that those machines will not be able to learn eventually
if we choose so to imitate our emotions and things like that.
But it's not the same thing.
We will always need more nurses.
We'll always need more teachers.
It's just that we're going to need to put our priorities there rather than having people being jobless.
Interesting.
People like to dump on the humanities, but perhaps it's not a terrible bet
as an educational path in the 21st century,
if I understand correctly.
Exactly.
Moving on to even bigger societal questions
about our democracies.
I know you're particularly concerned about some of these issues like disinformation and deep fakes.
We're seeing pretty convincing videos, fake videos.
Like there's this really funny one out this week, a very vulgar one, actually, from The Daily Show.
It's a fake Joe Biden election ad.
So all I can say is a vote for me is a vote for four more years of holding
fascism at bay, or as long as this ticker keeps pumping a minimal amount of blood for consciousness.
And if you can't tell this is an AI generated voice, then lots of luck in your senior year,
whatever the [bleep] that means. And that's a pretty amusing example, but this could obviously be
used in far more dangerous ways. So how worried are you about how this could be used in politics?
I'm very worried. In a way, you could say, well, it's already happening. And even without deepfakes,
you know, there are trolls, there are people who are just blatantly lying in order to convince others. Demagogy has been there for thousands of years, right? So humans are unfortunately easy to
fool. And of course, politicians have been trying to take advantage of that forever.
But now they might have tools that could really disrupt the basis of democracy, which is that
citizens are well informed of the consequences of voting one way or another.
And when AI systems can pass for humans, even people, you know, specific people that you
might know, both through a text interaction in your social media and potentially coming more and
more with video, you know, it might really change the game sufficiently to distort democracy even
more than it is. Democracy also relies on trust. And so what may happen is that
if we don't believe what we read,
I mean, it's already going in that direction,
but imagine 10x worse.
That's the sort of thing I'm concerned about.
And there's the potential for bad actors
at a state or geopolitical level.
I mean, last year, this fake video surfaced online
of Volodymyr Zelensky, the Ukrainian president,
saying he was surrendering eastern Ukraine to Russia.
You know, watching the video, you can see something's off about it.
His head's a bit big.
There's some weird digital garble, but it's not too far off.
And how quickly is this type of technology improving?
And what's the worst case scenario in your view with this stuff in terms of military capability? This can come very quickly because a lot of it is a matter of computational power. So
a country with bad intentions, putting enough money into this could really do damage as soon
as, say, the next US election. It's not far away. And there's also the actual battlefield itself.
There's AI-controlled weapons,
even nuclear weapons connected to AI systems.
Is this something that you really worry about?
I do.
No, I do.
And I've been worrying about this for many years, actually.
AI scientists have been signing letter after letter
to ask the international community to
sign treaties to put a halt to what's called lethal autonomous weapons, also known as
killer robots, meaning simply that there are weapons that can autonomously decide to kill a
person. And there are lots of reasons why this would be destabilizing in terms of the military equilibrium that we currently
enjoy, not to mention the moral aspects of machines taking these decisions.
Like a nuclear weapon strike, for instance.
Well, that kind of leads us to the doomsday scenario.
Your colleague and peer, Geoffrey Hinton at the University of Toronto, stunned a lot of people the other day.
He retired from Google and he began warning in media interviews to the New York Times, to us,
that he's worried that this could be the end of our species, like literally the end.
The issue is now that we've discovered it works better than we expected a few years ago, what do we do to mitigate the long-term risks of things more intelligent than us taking control?
Do you agree with him?
Let me put it a little bit differently.
There is a lot of uncertainty about how this technology is going to unfold as more and more companies and organizations around the world
are trying to take advantage of it.
And right now, there are no guardrails.
There's essentially no regulation around the world.
We don't keep track of what is going on.
And to be clear,
there is no immediate danger. The systems that we currently have are absolutely incapable of the kind of autonomy that would scare me. It's when these systems, if somebody programs them
so that they end up having their own goals, that's where we
might be in trouble.
And also, they would need to be smarter than they currently are.
But they already are doing comparably to some humans, let's say, in many areas, and
better in other areas.
So nobody has, unfortunately, a recipe that guarantees that they're going to behave well.
And because there's so much at stake,
I just think we have to be prudent.
But it's not the only place where the human species is at risk.
So think about biotechnology.
If we're not careful with biotechnology,
we could put out new bugs that maybe we think are going to be helpful,
but eventually wipe out life on
Earth.
So technology could be extremely powerful on the good side and the bad side.
And I don't think that we are currently organized at the level of the planet, politically speaking,
to handle those kinds of risks properly.
I want to get into some of the prescriptive stuff in a bit, possible solutions, but just
to conclude about some of the fears you hold here.
You know, science fiction novelists have been writing about, you know, these doomsday scenarios
for generations.
You said yourself you've warned about some of your concerns over time.
If you think about it, the story of an out-of-control invention is as old as humanity itself, from
Prometheus to Frankenstein. So I'm just wondering what now, like why now? What happened recently to strike
this sort of fear into Geoff Hinton? What has you worried?
Yes. I signed a letter a few weeks ago to raise the alarm on the risks of AI,
all the three risks that you've been talking about. And the reason that this was
the moment is because we have these very large AI systems like ChatGPT, starting last fall,
that are much more intelligent, let's say, than we expected, and are also very difficult to understand.
We program how they learn, but it's very difficult to understand how they will behave in different
circumstances.
And they've now reached what's called passing the Turing test.
So Alan Turing, whose name was used for the prize that Geoff and Yann and I won, proposed
this idea in the 50s that in order to check sort of a milestone in machine intelligence,
we would set up a game where you would converse through a keyboard, either with a machine
or a human, and you wouldn't know which. And through the discussion, you try to figure out. And we've reached the level where those machines can
essentially fool us. So that raises the issue with concerns with democracy. But it also means
even though we can see some of their failure modes, it raises the question, you know, maybe is it in three years
or in 10 years that these weaknesses will be figured out? So it's almost sure that we will
be able to build machines that are at least as intelligent as us, and maybe much more. But,
you know, do we want to? And, you know, if we do, how do we do it? These are questions we need to ask.
I want to talk to you maybe about something a bit more philosophical. In some of the doom
scenarios we've discussed, what the end of humanity actually looks like. I mean, does it mean
the end literally of humanity like Jeff Hinton was referring to? Or can it also mean that we've
lost an essential part of ourselves about what it means to be human? We are the only people
who write. We are the only ones who create art. And I'm just wondering if this is something you've
thought about and what your feelings on that are. Yeah, these are hard philosophical questions.
They are among the reasons why we need to be prudent because we don't have answers.
Along similar lines, people are wondering whether we could build AI systems that are conscious. And as far as we know, consciousness and intelligence are things that emerge from the physics of what's going on in our brain. As we understand better that our body and brain are just machines, it becomes clear also that we could at some point build machines that are as intelligent as us and potentially also have some of our attributes of consciousness. And of course, once you pass that, it raises all kinds of philosophical questions for which I don't think society is ready.
And I think we should take the time to think about these questions, think through them,
you know, not just letting things go as if profit maximization were the only force involved.
Okay.
So I'd like to talk about the third person in your trio of godfathers, Yann LeCun.
You referred to him a moment ago.
LeCun now runs AI at Meta, Facebook's parent company, and he's diverged a bit from Hinton and from you, I suppose.
He sounds a bit more positive, saying, you know, maybe we should regulate AI a bit more, but we really shouldn't stop developing this technology.
Obviously, the three of you didn't always disagree on this fundamental question.
So I'm wondering, what provoked the
split? What do you think has happened where three people who've spent so many years working on the
same types of products have started to drift in different directions? So I believe that the three of
us agree that regulation is necessary. Now, I think where there's a real split is about those risks of loss of control of AI systems.
And it's understandable.
And there's a debate in the community because there is so much we don't really understand.
There's actually a subfield of AI which studies those scenarios and tries to reason about them.
But there's still a lot of open questions.
But my stance is that of, if you want,
I'm agnostic. I don't really know if these things are going to happen or if we will find solutions.
And if we do find solutions, how they would be enforced, which is actually where it gets tricky.
But because I don't know, and because so much is at stake, I think we have a moral duty
of prudence and studying before we act.
Okay, well, let's talk about solutions then. You've signed this letter advocating for a pause, maybe a six-month pause,
while we work out new regulations to govern this technology.
But anytime you mention a pause, like I'm usually based in Washington,
that's where I usually work, you hear things like,
we can't pause, the Chinese will achieve world dominance if we do.
And that begs the question of whether this development can still be paused,
or is it too late?
Well, so first of all, the Chinese would not achieve world domination in six months.
And the pause was not for AI in general.
I think there are so many positive applications of AI, it would be unnecessary to pause them.
The pause was about systems that did not exist yet, which would be more powerful than GPT-4.
So about the geopolitical question,
I think these are also important questions.
Questions that worry me.
But the way we need to think about this
is the way we thought about nuclear weapons.
And, you know, the treaties with nuclear weapons
were signed and discussed in the middle of the Cold War.
Once everyone recognizes that everybody is in danger,
then it doesn't matter that much if you're, you know,
aligned with one country's philosophy or another.
I think we're all humans
and we want to make sure
that things end up well for our children
and our descendants.
So I do think that there is a possibility
governments around the world
understand that there are these high-stake risks,
just like they've done for nuclear weapons,
just like they've done, so far unsuccessfully, for climate,
that they start sitting around the same tables,
and especially the Chinese and the US and Europe and Russia.
So these countries should sit down
and start working out agreements
to increase the level of global safety.
Whether they'll do it, I don't know,
but I feel like we have a moral obligation
to speak about this and to encourage them to do so.
You've also talked about specific ideas like watermarks, transparency, and what is an AI and
what isn't. Can you talk to me about a couple of things we could do to make these things safer?
Yes, absolutely. Thank you. This is something we could do very quickly and doesn't require like new
inventions. We know how to do it. So a watermark is something that you would not perceive in the
images or in the texts. It wouldn't change anything to you as a user from that point of view,
but that a computer program could easily detect: this is ChatGPT or GPT-4, you know. And so the watermarking is just a way to identify what has been generated by machines versus what actually comes from a real recording or from a human. Well, and of course, it's not sufficient to have that technology. You also want to make sure that the users get to know that what they're seeing, what they're reading, comes from a machine rather than from a human. Now, it would be
something that companies would want to do to stay legal. And there's going to be bad actors who are
going to try to cheat and there'll be an arms race to try to defeat each other and so on. But at least we can reduce the risks of misinformation and loss of trust
using technology.
As we've done it, for example,
for counterfeit money, right?
We trust our bills
because there are laws against counterfeiting.
But we have to enact those laws.
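[Editor's note: To make the watermark idea concrete, here is a minimal sketch in Python of one way detection can work. It assumes a hypothetical "green list" scheme, in which the generator was nudged to prefer tokens whose hash, seeded by the previous token, lands in a designated half of the vocabulary; the detector then simply counts how often that happened. This illustrates the general statistical idea only, not Bengio's proposal or any vendor's actual method.]

import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Hypothetical rule: a context-seeded hash marks roughly half of
    # all tokens as "green" for any given previous token.
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    # Fraction of tokens that fall on the green list given their context.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(prev, tok) for prev, tok in pairs)
    return hits / max(1, len(pairs))

text = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(text):.2f}")

[Ordinary human text should hover near 0.5 on this score; text from a generator biased toward green tokens scores well above it, and longer samples make the statistical test more decisive.]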
I wanted to ask you just about specific policies that governments are working on.
You've been complimentary of a bill in the Canadian Parliament.
The White House also has a sort of artificial intelligence bill of rights doing some of the things you alluded to.
The White House is actually participating in a sort of hackathon. This summer, they're sending white hat hackers to a conference in Las Vegas to try poking around and finding vulnerabilities in AI systems.
Is any of this enough?
Are governments responding to this on the scale that you think that they should be?
Well, the good news is that they're responding.
But I do think that we need to accelerate that process and it also needs to move to an international stage.
So right now, there is no legislation as far as I know that has an AI specific kind of regulation,
but in major places, this is being discussed. As you mentioned, it could be that Canada would
be the first country with an AI legislation.
The EU has been working on one
for several years.
It's likely to go through this year.
And I mean,
there's a lot of discussions
in the US, let's say,
whether this is going to happen or not.
I'm not the expert.
And that's good.
And by the way,
the Chinese are working on legislation.
You know, they're concerned too,
maybe for different reasons, but still.
I think the national legislation needs to accelerate,
but then also as we move forward, these legislations,
these regulations are going to have to adapt.
It's not like we know how to fix the problem.
We're going to need legislation.
By the way, that's one of the nice features of the Canadian legislation,
that it's designed in a way that's going to make it easy and quick for government to change regulation as new nefarious uses show up.
So we need that sort of flexibility,
and we need governments to start working together
to coordinate these legislations.
First, it's better for business if they're compatible.
And second,
we want a level playing field. That's obvious. And we want to manage the risks globally.
So I've covered politics a long time. And one of the things I've noticed over that time is that
policymakers sometimes act best when they know that there's a deadline, when they've got the
proverbial gun to the temple. I'm just wondering if there's a deadline coming, something technological that you say we cannot pass this threshold without rules. I wish there was such a thing. There's a
lot more uncertainty about when things become very dangerous with AI than there is about when things become very dangerous for the climate. So, you know, in the case of climate, people talk about cascade effects that could really topple the climate in disastrous ways, and scientists can kind of see this potentially coming at the scale of a decade or probably more. But things could go much faster with AI. So I think that's the reason why I believe Geoff was quoted saying that this might even be something we need to worry about more quickly than climate change.
Now, I think climate change is one of the biggest challenges we have.
But what's interesting is that to fix the problem of climate change and to minimize the risks of AI, we need the same sort of thing.
We need much better international coordination.
Great.
Well, thank you so much for taking the time to chat with us.
I really appreciate it.
My pleasure.
And that is all for today.
I'm Alex Panetta.
Thank you for listening to FrontBurner.
For more CBC Podcasts, go to cbc.ca slash podcasts.