The Agenda with Steve Paikin (Audio) - Christopher DiCarlo: Will A.I. Become a God?
Episode Date: March 5, 2025
Many questions abound about the role artificial intelligence will play in humanity's future. Philosopher and author Christopher DiCarlo, one of the world's leading voices on the ethics of AI, looks at our complicated relationship with this technology and how we might prepare ourselves for its place in our world. It's all in his new book, "Building a God: The Ethics of Artificial Intelligence and the Race to Control It". He's also principal and founder of Critical Thinking Solutions, and he joins Steve Paikin to discuss.
Transcript
Renew your 2.0 TVO with more thought-provoking documentaries, insightful current affairs coverage, and fun programs and learning experiences for kids.
Regular contributions from people like you help us make a difference in the lives of Ontarians of all ages.
Visit tvo.me slash 2025 donate to renew your support or make a first-time donation,
and continue to discover your 2.0 TVO.
Many questions abound about the role artificial intelligence will play in humanity's future.
Philosopher and author Christopher DiCarlo, one of the world's leading voices on the ethics of AI,
looks at our complicated relationship with this technology and how we might prepare ourselves
for its place in our world.
It's all in his new book called
Building a God: The Ethics of Artificial Intelligence
and the Race to Control It.
He's also principal and founder
of Critical Thinking Solutions
and he joins us now here in the studio.
Welcome back to TVO.
Thank you.
For those who really remember, a long time ago,
we used to do something here called the Best Lecturer Competition.
And you won.
I did.
Yeah, you were a big shot back in the day.
You're a bigger big shot now.
You talk about different kinds of artificial intelligence in the book.
And I want to throw a few acronyms at you right off the top,
just so we can understand what you're talking about.
A-N-I, A-G-I, A-S-I.
What are we talking about?
So the world is going to become very familiar with those terms over the next few years.
A-N-I is artificial narrow intelligence, so that's your Roomba, that's Siri.
Those are narrow things that are never going to operate outside their algorithms.
So your Roomba isn't going to bump into you and say, Steve, I don't want to clean anymore.
I want to be an accountant.
It's not going to do that.
It may bump into you.
Yes.
But it won't do that.
Correct.
AGI is the current holy grail of AI.
That's what all the big tech companies are trying to attain.
That's artificial general intelligence.
So that's when artificial intelligence will be like you and me.
So it'll be able to think for itself,
plan, and function on many different levels.
ASI is the one we have to be the most concerned with.
If we reach AGI, we're very concerned
it's going to become artificial super intelligence.
So this is godlike power.
This is the most intelligent machine
we will have ever invented, and perhaps the last one we ever
need to invent, because it'll tell us how to invent
all the other machines.
Which is the one that you are most preoccupied with
in this book.
Correct.
Yeah.
I want to pose this question to you.
And we're going to do a bunch of different ethical conundrums
here during the course of our conversation.
And you raised this in the book. If there is a 5% chance that artificial intelligence will destroy humanity,
but a 95% chance that it will do something amazing or just maybe neutral,
should we still take that risk and develop it anyway?
Yeah, that's a great question.
The fact of the matter is we're going to.
These tech companies are racing ahead,
and they want to be the first to do this.
So we're on that path.
We're on that trajectory.
So it really then becomes a question
of how do we set up the guardrails to get ready for it?
So we have gone too far to slow down.
I believe the race is chugging ahead.
Could we slow it down?
Could we stop it?
Yes.
Will we?
We're human.
Yeah, we don't put toothpaste back in the tube, do we?
We don't.
The genie is out of the bottle.
So we might not be able to stop it.
Now, having said that, Steve, if we get a shot across the bow,
if we get a major warning where all the big players are like, whoa, okay, that's
serious, maybe we should rethink this, there might be time then to slow it down,
even stop it. But until that occurs, the race will continue.
I want to play a clip here to set up our next question, because you mentioned Yuval Harari in your book.
We had him on this program for his book, Homo Deus, back in 2016.
We talked about artificial intelligence.
Here's what he had to say.
Sheldon, if you would, the clip.
I mean, we pursue all these technologies like AI and like biotechnology in the hope that
they will serve humankind,
and they will give us better health, happier lives,
and so forth.
But the result may be to make most humans redundant.
If you want better health, so you
create a kind of artificial intelligence
that understands your body and your brain better
than you can ever understand them,
so that
it can provide you with better health.
But once you have an AI that understands you better than you understand yourself, you're
becoming redundant.
The AI can do everything better than you can.
Are we programming ourselves into redundancy?
Perhaps, perhaps.
And the question we need to ask ourselves at this point in time,
which is a very unique point in our history,
are we ready to be number two on this planet?
Well, Trump's not.
But how about the rest of us?
Are we ready to be number two on this planet?
I don't know.
Are we going to just hand things over and be content to have something far more powerful than us,
smarter than us, better than us in so many ways?
It's going to be a shock to the system.
Here is an expression you use in your book,
which we need to understand better, moral agency.
What is that?
So the race right now is to create agency in AI.
To make an AI an agent is to give it
a certain amount of autonomy, a certain amount of freedom,
and let it go.
Let it do its thing.
Let it gradually build, become better, figure things out,
know what its errors are, not do that anymore,
and just stay with the positive things.
But moral agency is when you and I have the capacity to understand a value system, what's
right, wrong, good, bad, and then to act accordingly or to violate that.
Right now AI doesn't have any moral agency.
If it becomes conscious, then it almost has to develop some sense of what's right and
what's wrong.
And will that be the same as yours and mine?
Will that be the same as our laws?
We have no idea.
We're working through uncertainty here, and we really have no idea what will happen.
On this issue of developing moral agency, we asked that same question in the style of
ethicist Christopher DiCarlo, but we asked it of ChatGPT.
Do you want to know what ChatGPT had to say?
Absolutely.
In your voice, here we go.
The question of whether AI could eventually develop moral agency hinges on understanding
what moral agency entails.
From a naturalistic perspective, moral agency requires the capacity to recognize and evaluate
the consequences of actions within a social context, the ability to empathize with others,
and an awareness of the norms and values that govern cooperative behavior.
These traits evolved in humans as adaptive strategies to ensure survival within complex
social groups.
For AI to develop moral agency, it
would need to fulfill similar criteria,
albeit through artificial means.
How do you think ChatGPT did at capturing your voice?
That's not bad.
That's not bad.
Does that impress you? Worry you? Concern you?
No.
I'm quite happy we're at this point in history.
I think there's a lot of good that AI is going to give us.
And that's the whole point, right?
We want the very best that AI can give us
while mitigating the very worst that could happen.
I want to next show you a couple of pictures here.
Sheldon, let's bring these up.
And here's the first one.
These are two well-known scenarios from your book
and what they imply about human decision
making.
And we've got people, Chris, who listen on podcasts to this, so I'm going to have to
describe what's going on.
We've basically got a streetcar coming down a track, and if it continues to go straight,
it will run over and kill five people working on that track.
If it makes a hard left, it will only kill one person working on the track.
Okay.
Let's do the next one after that.
We've got the same kind of circumstance here.
Streetcar coming down the track.
There is a bridge it's going to go through.
There are two people standing on top of the bridge.
And then there are five workers on the other side of the bridge on the track.
And this is, of course, a classic ethical conundrum.
In the case of the first picture, do you make the turn? Save five people but kill one? And in the second one?
Anyway, you get the idea.
Would AI have a problem figuring this out?
Yes and no.
Yes and no.
So let's look at autonomous vehicles, okay?
You get an autonomous vehicle in North America and it's driving away, right?
It's chugging along, chugging along, chugging along.
And all of a sudden, for some reason,
a group of people run onto the road in front of you.
The car's going to hit someone.
It can't stop in time.
Who should it hit?
The elderly, the very young, right? In North America, we
program autonomous vehicles to hit the elderly rather than the very young.
We actually program them that way.
Correct. In Japan, it's the opposite. Who's right?
And can you go to Japan, Stephen, and say, I'm gonna take this autonomous vehicle,
but I want you to tweak the moral parameters on it.
Because where I come from, we value these people at this stage more than others.
So when in Rome, do you have to do it?
When in Rome, that's exactly right.
Can I actually get you to speak to this?
In picture one, the streetcar is coming down the track.
It's either going to run over five people if it keeps going straight or it's going
to make that turn.
You have to pull the switch.
You have to pull the switch, and if you pull the switch...
Only one dies.
You'll only kill one.
Correct.
Okay, you're the conductor. What are you doing?
Yeah, so I'm pulling the switch.
Most people, when I'm giving public lectures, raise their hand, right? They say, yeah, sure, I'll pull the switch.
It's the second one. Can you push somebody to their death, which
will stop the trolley and accomplish
the exact same thing?
One person dies and five are saved.
The majority don't.
So why?
What is it about the process that irks people?
There's something different about pulling a switch
versus actually pushing somebody to their death.
It's like we're moved somehow.
And then, of course, I play around with them and I mix it up.
I say, OK, the one person is your son.
Well, I'm not pulling the switch then.
So context is very important.
Do you think you could pull the switch?
I think I could. If it were my son, then I'm not.
And that's unfortunate for those five others.
But we have that connection.
You know, blood is thicker than water, right?
We have that connection.
It's undeniable.
That is part of the equation.
I mean, it doesn't indicate it in this picture.
But part of the equation clearly is, I know somebody in those five.
I don't know the one.
Correct.
So I'm going to save the guy I know,
and I'm sorry to the guy I don't know.
That's right.
And that's, boy, this is just a level of ethical conundrum that, God willing, none of us will have to face in our lives.
And you have to understand, this is never going to happen to us.
But that's not the point of philosophy.
Philosophy pushes us into uncomfortable places
so that we just work it through.
You a Star Trek nerd?
Very much.
A little bit?
OK.
Let's ask about Mr. Spock from the original series, or Mr. Data if you're a Next Generation type.
Both famous for applying logic to their decision-making.
Would they be better than emotive humans at deciding which way to go when the pressure's on?
Yeah. They would be. They would be.
That's why I think AI is really important.
And don't forget, I was working on this in the 90s.
I met with politicians to try to build this super brain to help the Senate make better
decisions, right?
And I think what we can do is allow AI to work through some issues, but then dump it
on the table for us humans to sift through.
If Jim Kirk or Leonard McCoy were sitting here, they would say, Spock, you need to take
the human emotional element into your decision making.
It can't be pure logic or you won't make the best possible decision.
Are they right?
Contextually.
So depending on the context, they could be more correct than Spock.
Whose context is more correct?
That depends on the individuals that are involved and the value systems that they're working through.
So let's say that that one person on the track is a known serial killer.
Is it easier for the person to pull the switch or to push them to their death?
I'm going to say yes.
Right.
Very much so.
What if that is a great person?
What if it's Jimmy Carter, right?
What if it's a great person that, you know, if they live,
they would be able to accomplish much more than, say,
those five would be able to, right?
So context is very important.
You know you're driving me nuts with all
of the contextualization of your answers.
I want it in black and white.
I don't want all this gray area.
Wouldn't it be nice if the world were like that?
Now, having said that, do you think,
because of the increasing incursion of AI in our lives,
that we are going to lose critical decision-making
skills, and if so, essentially accept whatever verdict AI decides for us?
Yeah, it's quite possible.
When was the last time you used a paper map
when you had to go somewhere, right, as opposed to GPS?
Decades.
Right.
So it does get into our lives.
So this is known as enfeeblement.
We're concerned about a future where people will just
become so feeble because of everything it can do for them.
They won't have to think,
they won't have to do certain things.
So we are very concerned about what that future will look like.
And we are on a trajectory towards that,
but there's also that human spirit.
I predict in the book that there will be people
who rise up against AI, you know, like neo-Luddism.
They'll say, forget it.
I'm out.
I'm back to the farm.
I don't need this.
Right?
And so they will, you know, avoid and forget about whatever promises AI can bring them.
So...
Do you think it's still possible or will be still possible to lead a meaningful, engaged,
productive life even
if you have no contact with AI?
I believe so.
I believe so.
Now in business that's an entirely different story.
Those who fall behind, those who use AI to their advantage, that's a different story.
Do you believe as Hobbes did that life is nasty, brutish, and short?
Yeah.
You do.
I'm very much a Hobbesian.
Okay.
So could AI bring about a kind of a world, if that's the way we want to describe the
world, where we are content to forfeit some of our freedoms in exchange for a world that
is less nasty, less brutal, less short?
That's this so-called, you know, utopian notion, this kind of almost Star Trek vision of, you
know, when are we going to get to a place in history
where we don't have to worry about resources,
we don't have to worry about feeding the hungry
or helping the poor, that sort of thing.
We want the best for everyone on this planet,
so hopefully AI can push us more in that direction.
Whether that happens, we'll see.
If we, in our beneficence, program AI to share our values, and by those I mean, let's just for argument's sake say, liberal democratic values that prize free speech and equality and all that business, should we then have less to worry about if AI becomes sentient or conscious?
Yeah, that's a good question.
If AI becomes sentient or conscious, right away it has rights.
And now we have to say, can we just turn this thing off?
That used to be one of my thought experiments to students.
If you're in front of a computer screen,
you're some superstar at MIT,
and you go to turn it off, to log off, and it says, please don't.
Would you do it?
The gang's waiting to go to the bar or whatever,
and this thing's saying, please don't.
Is it really sentient, is it really conscious,
or is it doing something we call stochastic parroting?
The parrot doesn't really know English
when it says, hello, how are you doing?
It's just mimicking, it's just miming it.
Is the computer system actually aware of itself?
Does it have a value system where it can feel discomfort
or comfort based on whatever that means
in an artificial sense?
If it can genuinely demonstrate to us
that it is aware of itself, we've created
a digital mind, we've brought life into being that hadn't existed before,
therefore it has rights.
What does that mean?
Well, I'll tell you what it means off the top.
It means we're back to slave holding.
Potentially.
If we make millions of these things to do whatever we want, and they can suffer, what does that say about us?
Remember that Star Trek episode?
I remember the Blade Runner movie.
OK.
Right?
We've really created a generation of slaves
to do the work we don't want to do, yet they become conscious.
And actually, it's a mirror to humanity of what we've become.
Can you imagine a world in which artificially intelligent... what do we want to call them?
We call them beings?
Beings.
Beings.
They have become beings.
Okay.
Will feel the same way as humans and need to enjoy the same kind of rights as humans?
That's a tough question.
Feel? What you feel, Steve, when you hear a great piece of music and what I
feel, we think, are kind of similar.
We say, oh, don't you love that song?
Oh, man, I love that song.
But how you identify with it and how I identify with it is what philosophers call qualia.
What is the nature of your experience?
We don't know what an artificially intelligent being will do when it feels something.
But just like airplanes, they don't flap their wings
to take off, right?
They fly in a totally different way from birds,
but they do it way better than birds.
So we're not sure exactly how they're going to feel.
Are we in an Oppenheimer moment?
Yes.
Explain.
We are at a point in history where
we're between the way we used to do things
and the way we're going to do things in the future.
And that's how we will always be doing things in the future, using AI to make everything more efficient and functionally optimal.
And that will increase in every aspect of our lives, from health care to the way business is done, to the way broadcasting is done, to the way everything is done.
So we are at that unique point in history where we're just waiting for this supercomputer
god to be built.
We're right before that point.
And you remember in the movie, there was the potential that when they detonated the Trinity
experiment, they would ignite all the oxygen in the atmosphere and kill everything
and everyone on the planet.
We're at that moment with AGI and ASI right now.
Do you think we're building a world
that ultimately will lead us to obsolescence?
Not entirely, but that is a potentiality.
So I can't say for sure, because nobody
can predict how we're going to use this technology
and how we're going to adapt to it.
You lay odds on these kinds of things?
Yeah, there are plenty of forecasters and super
forecasters, right, and predictors
that try to figure out, you know,
what will the future look like?
So it's going to be a very different world, for sure.
Better in some respects and less human, if you will, in others.
But we won't know until we get there.
Apropos the title of your book, Building a God, let's talk about how we harness the
God.
Sure.
Do we have adequate governance around artificial intelligence in Canada at the moment?
No, not even close.
Or the world.
What do we need?
So back in the 90s, when I was trying to build this machine,
I thought about what happens when it comes into being.
A lot of people are going to want these types of machines.
So we have to develop a system of governance.
Maybe the UN, right?
Maybe we'll hand it over to them.
They don't have the kind of teeth we need.
Maybe an independent organization,
like the IAEA, right, the International Atomic Energy
Agency.
Because if we find a ne'er-do-well country utilizing
AI to harm other countries, what are we going to do about that?
Well, we need the rest of the world to be on board.
So we're going to need some type of, first of all,
registry that allows those companies working on AI
to let the rest of us know what they're up to.
Countries need to monitor that.
And it wouldn't be bad to have them report
to an international governing body so that we were all on board.
So transparency is key. If we do this right, everybody benefits. And if we get cheaters, you know, what's going to happen?
Again, I'm going to ask you to lay odds that countries as disparate as the United States, China, Russia, India,
Brazil, okay,
the list goes on, are somehow all going to be able
to come together and agree on how AI ought to be governed going forward.
Yeah, or it's going to take a world superpower like the US to put down the law, right?
You're right.
It's important to note that within the AI communities, there is an overwhelming majority
in favor of transparency, cooperation, and trust between nations.
Except that we live in Hobbesian times, don't we?
There is a global race right now to try to master this technology. So how are we going to figure all this out?
Well, we're going to have to keep letting the public know.
The public have a right to know what's going on.
Then they can pressure the politicians
who can create the laws.
What you've just described is sort
of ideal operating philosophy in a functioning democracy.
Correct.
Except that a lot of the democracies in the world
right now seem to be flirting with authoritarianism.
And people are having a hard enough time getting
through the day just dealing with their regular lives
without thinking about how they're
going to deal with the incursion of artificial intelligence.
So when you say you've got to energize the public on this,
that's a pipe dream, Christopher, isn't it?
Well, what choice do we have?
We either cooperate, and we all do better,
and all the ships, all the boats rise,
or we decide to do business as usual,
try to cheat and try to get ahead of the other countries,
and that could lead to our ultimate ruin.
So we've got to make some big decisions.
We have to grow up as a species really fast.
We have seen, let's finish up on this,
we have seen what the last four and a half years
have done to our collective global mental health
because of a once in a century pandemic.
This is going to be way more impactful
than a once in a century pandemic.
It is.
So what's going to happen to our global mental health
as a result?
We're going to see a worldwide angst,
similar to when nuclear arms were developed.
But nuclear arms have to be activated.
You know, you have to have two keys and the codes and all that kind of stuff.
And then there's some battling going on between countries.
What if the nukes were aware or in control of everything?
So it's going to be far worse.
So I'm trying to stay ahead of the curve
and trying to let the public and various administrations know,
get ready.
This is coming.
People are going to have issues with it.
Strong reactions to it.
I want to thank you for depressing the hell out
of all of us.
Great job, Christopher.
Thanks so much.
The book is fascinating. Building a God, the Ethics of Artificial Intelligence
and the Race to Control it, Christopher DiCarlo.
Great to have you back here at TVO.
Great to be here, thank you.