The Current - Concern as Google ends ban on AI weapons
Episode Date: February 11, 2025
Google's parent company Alphabet has reversed a longstanding promise against using AI to develop weapons and surveillance tools. As world leaders gather in Paris to talk about responsible AI development, we look at what role Canada can play in regulating this rapidly advancing technology.
Transcript
When a body is discovered 10 miles out to sea, it sparks a mind-blowing police investigation.
There's a man living in this address in the name of a deceased.
He's one of the most wanted men in the world.
This isn't really happening.
Officers are finding large sums of money.
It's a tale of murder, skullduggery and international intrigue.
So who really is he?
I'm Sam Mullins and this is Sea of Lies from CBC's Uncovered, available now.
This is a CBC Podcast.
Hello, it's Matt here.
Thanks for listening to The Current wherever you're getting this podcast.
Before we get to today's show, I wonder if I might ask a favor of you: if you could hit
the follow button on whatever
app you're using.
There is a lot of news that's out there these days.
We're trying to help you make sense of it all and give you a bit of a break from some
of that news too.
So if you already follow the program, thank you.
And if you have done that, maybe you could leave us a rating or review as well.
The whole point of this is to let more listeners find our show and perhaps find some of that information
that's so important in these really tricky times.
So thanks for all of that, appreciate it.
And onto today's show.
World leaders, including Prime Minister Justin Trudeau,
are gathered in Paris to talk about the future
of artificial intelligence.
The AI Action Summit is focusing on responsible AI development
which is timely given that AI technology
is advancing incredibly quickly
and policy is struggling to keep up.
Just last week, Google broke a long held promise
that it would never use its AI systems
for warfare or surveillance.
Nora Young has been tracking these developments.
She is with the CBC's Visual Investigations Unit
and is with me in studio.
Nora, good morning.
Good morning, Matt.
What is the purpose of this AI summit?
I said it's an AI action summit, suggesting things are going to come out of it.
Well, you know, they always do with these summits.
Not just talking.
Yes.
So the supposed purpose is to look at balancing support for
innovation and the financial engine that AI could be, while acknowledging
the need for guardrails and protection from harm.
So they'll be tackling issues like governance, trust, and concerns about
environmental sustainability, which is important with AI, and bringing together
not just technologists, but also politicians, including Trudeau, Macron and JD Vance.
And the thing is, you know, we're clearly at this moment where the technology is
obviously advancing very quickly, but also becoming like a foundational technology.
So not just a technology, but the basis for how a
lot of work and communication, leisure, culture happens.
So I think it's part of a recognition that we need to
think about where we're headed, what this means for
our economy and this move by Google to position
itself vis-a-vis national security and democracy, I think is part of that.
What is Google saying about what it wants to do exactly?
Well, they're not really saying it, which is interesting, right? What they did was they took
out these specific provisions in their public-facing AI policy. It used to list applications that
Google wouldn't pursue, specifically including weapons, and tools for information gathering for
surveillance purposes that violate international norms, which is itself a bit vague.
So CNN found out that they did this by going back into Google's 2018 policies on the
Internet Archive.
So they've taken out this whole section that indicates avenues they won't pursue.
And they now have this language around responsible development and deployment.
There's a lot of air quotations.
I know.
You want the video here.
I'm doing a lot of air quotes here.
Now, none of this means that they have a specific plan to develop AI for weaponry or surveillance,
but certainly it's indicative of a general change of perspective.
Why did Google make this promise originally, such as it is, not to have
its AI systems used in surveillance or in war?
Yeah.
I mean, the politics and ethics of tech have long been a very fraught issue at Google and its parent company, Alphabet, going back to the sort
of "don't be evil" origins.
When it comes to surveillance, I mean, AI applications like facial recognition,
as you know, have long been concerns, whether that's about bias in the training
data or the potential intrusions on people's privacy.
And this is something that Google has been wrestling with for a while, right?
Yeah.
Yeah. Yeah. I mean, these things been wrestling with for a while, right? Yeah, yeah, yeah.
I mean, these things are controversial in society
more broadly, but also within Google, the company,
where employees have this history of kind of pushing back
against work that might have military applications
or surveillance applications.
So last year, you may remember a number of Google employees
protested Google accepting a contract
with the Israeli government, which they argued
might be used for further military applications. And the company responded by firing 50 employees. So
I think we can argue that Google has sort of been moving in this general direction for a while.
Has Google said anything about why it wants to open up its use of AI for weapons and surveillance?
Yeah, they sort of did, a little bit. I mean, you can certainly argue that these
technologies are going to be developed all over
the world, much as we've seen with the
rapid advances in generative AI.
Demis Hassabis, who's the head of Google's AI team,
he argues that democracies should lead in
developing AI guided by democratic values, values
like freedom, equality, and respect for human rights.
So if somebody's going to do it, it might as well be us.
That's basically the argument, right?
So Google's updated AI principles say that
the company will use human oversight and make
sure its technology mitigates unintended or
harmful outcomes, with the goal of supporting
national security.
But the thing is, Matt, it's not like "democracy",
there are those air quotes again, per se has clear
rules about the use of technology.
Countries can and do disagree about the appropriate
limits on technology.
What has been reaction to Google's move,
particularly from the world of defense?
Yeah, I mean, one of the people I talked to
is a guy named Michael Horowitz.
He's worked at the US Defense Department,
working on emerging technologies,
and he's now a political science professor
at the University of Pennsylvania. And he's basically saying it's about time. My concern has actually
been, at least in the context of the United States, that there's a greater
risk of going too slowly rather than too quickly. The United States has not always
been the fastest when it comes to adopting military innovation, and
there are concerns raised all the time about the US
military potentially resting on its laurels. I think there's actually a larger risk that
the US is too slow on the draw when it comes to AI integration than too fast.
So Mike Horowitz is saying that the US is at risk of lagging behind, but different countries
have different views about AI right now. The EU, for example, is much more stringent about regulating the technology in general.
And in the United States, I mean, we saw this at the inauguration of President Trump.
You had gazillionaires lining up behind him, actually seated right around the president.
These are all people who run the major technology firms in this country.
What would that mean for AI and for the military?
Yeah. And we now have Elon Musk playing an active role in the US government. Among the many companies
that he runs, Musk owns xAI. In the past, he has said he believes that AI is the future of warfare.
He's argued that drones are going to be critical for future battles, calling Ukraine a drone war,
and that you need AI assistance in order to coordinate these large clouds of drones, called drone swarms. Musk also wants military
procurement to be made simpler, which fits in with how he's approached other
US agencies in the last few weeks. Other tech companies like Google and Amazon
are offering up their services to the federal government, but of course we're
in early days, so it's hard to say exactly what they want to do.
What are the risks here, do you think?
I mean, in part, it's the same risks that come from any type of data-driven machine learning,
which is that these systems can be wrong, right? They work on pattern recognition and probability,
and there can be flaws in the training data, for example. So of course, there needs to be
a lot of rigorous testing, and accountability is an issue. In our conversation,
Michael Horowitz made the point that you always need a human responsible for the use of force,
meaning not necessarily in the specific tactical operation, but in the decision to use force.
This is so that the robots don't decide to go to war.
Exactly. Yeah. So clearly in the conversation about AI and automating war and national security,
there are still a lot of ethical worries. But it is happening already, this conversation. It's not like it's something
of the future, right? Absolutely. And I think that gets back to where we started from, which is that
people are seeing AI as a foundational technology, which is going to touch all areas of our economy,
all areas of our society.
Nora, thank you.
My pleasure. Thanks, Matt.
Nora Young is with the CBC's Visual Investigations Unit.
In 2017, it felt like drugs were everywhere in the news.
So I started a podcast called On Drugs.
We covered a lot of ground over two seasons, but there are still so many more stories to
tell.
I'm Jeff Turner, and I'm back with season three of On Drugs.
And this time, it's going to get personal.
I don't know who Sober Jeff is.
I don't even know if I like that guy.
On Drugs is available now wherever you get your podcasts.
The global AI summit is underway to address some of these issues.
The Prime Minister, Justin Trudeau, spoke there yesterday and made the case for a future that includes regulation: "We cannot stop progress. We shouldn't want to. But we need to
have guardrails, transparency, accountability. We must put AI to the service of everyone in both
high and low income countries, and not just for an increasingly small group of ultra-rich
oligarchs whose only concern is the value of their stock portfolios."
But just as Trudeau talked about guardrails, he also pitched Canada as a strong, trustworthy
partner for the AI industry.
Sinead Bovell is a strategic foresight advisor and founder of the tech education company WAYE, just back from Paris,
where she took part in a pre-summit event on AI safety and ethics.
Sinead, good morning to you.
Good morning, thanks for having me.
Thanks for being here.
What was the mood like in Paris
as the summit got underway,
given all of the talk about artificial intelligence,
good and bad?
The mood, it's interesting.
So there's kind of many layers that are emerging right now.
So on the one hand, and in the meetings that I was in,
you do have this concern for the speed AI is moving at,
the safety that may or may not be prioritized
as countries kind of rein in influence
over this technology.
You have the broader purpose of the event, which is to address how countries can harness
all of the benefits, mitigate the risks.
But then there's also this shift towards national sovereignty in AI, as countries recognize
that you want technological sovereignty over this technology. Especially
as we see geoeconomic fragmentation and geopolitical tensions arising, AI is becoming a critical component of economic power and
national security.
So there's all of these layers to the mood in Paris at this moment.
I was reading something suggesting that perhaps the doom and gloom, people thinking that this
is going to make us obsolete, that the machines will take over, that that in some ways is taking a bit of a backseat now.
Is that the case?
It's taking a backseat
when it comes to national priorities.
There are researchers that are still just as dedicated
to that cause and believe now it's not getting nearly
as much attention as it was before.
But when it comes to leadership at a state level,
AI safety and kind of existential risk
has certainly taken a backseat.
Can I just ask you,
we heard about how Google is doing some things.
More broadly, how would you explain
how AI is being used right now?
People hear about its promise
and perhaps don't understand where it is surfacing.
So just practically, how is it being employed right now?
Yeah, so I mean, in many ways we've been using AI for years,
whether you are on social media or you're,
you know, watching a streaming platform.
In terms of this newer wave, this generative AI boom,
we're still in the early days of this technology being diffused
and deployed in
businesses and in companies, but it is playing a role. I mean, whether or not workers are open and honest
about using it, the numbers are very high in terms of how many people have subscribed to and use systems
like ChatGPT every single day. So now I think we're in the stage where companies are figuring
out their AI strategy,
figuring out how to adopt this technology, but eventually it's going to be probably bigger than the impact of the internet
and more akin to something like electricity going forward in business models and in everyday society.
That it is going to be that foundational technology that Nora talked about.
It is going to be, yes, a general purpose technology.
So the way we stream electricity,
we will soon be streaming AI on that scale.
And so as companies compete for a piece
of what sounds like a very lucrative market,
how well is Canada positioned?
So we have a lot of potential
because there are non-negotiable,
indispensable ingredients to power the AI age and to power the emerging technology age.
And that is critical minerals, that is cold temperatures for data centers, that's a strong research environment.
So Canada theoretically has all of the components and we have been really strong on the research stage.
But it is time for us now to move a little bit more quickly on building our industrial
strategy around artificial intelligence.
We don't just want to export minerals and energy for other countries to build out the
AI ecosystem.
We want to insert ourselves as an indispensable part of the AI supply chain.
So we have to talk about how are we refining
and processing our critical minerals?
How are we exporting AI efficient energy systems
and smart grids?
So really crafting our competitive advantage in the AI age.
But it goes to your point, power, I mean,
that can mean a number of things,
but one of the things is that artificial intelligence
requires an enormous consumption of power to make it go.
Right?
It requires an enormous amount of power.
And right now we're just talking about training
these massive AI models.
We haven't even got to the distributed power
that's gonna be needed for businesses and citizens
to actually leverage this technology
themselves. But I will preface and say, energy, it does also need to be stable and low cost.
And that's why renewable energy has really taken kind of front and center for these tech
companies because you don't want a volatile energy supply chain that's unpredictable,
that changes the prices of AI. So on the one hand, energy is a really big deal,
but it also has this economic requirement for stable, renewable and cheap sources.
And is that something, I mean, if you think of the advantages that Canada has,
aside from the critical minerals, is that something that we might be able to corner,
not corner the market on, but certainly have an advantage over?
Yes, because we still are a leader in renewable energy. We do have some unmatched advantages at this point in time.
So yes, we can position ourselves as an irreplaceable AI enabler when it comes to energy, when it
comes to renewable energy and an AI driven supply chain.
How worried are you that, I mean, we saw that image at the inauguration of Donald Trump being surrounded by
tech leaders, the people who control many of the platforms
that are working hard to develop and push the AI envelope.
How worried are you that this technology will only benefit them?
I think it's a legitimate concern.
The US is engaged in an AI technological race against China.
And now they're going to take more of a deregulation,
innovate at all cost approach.
And more capital will likely flow into the US
as a result of that. But at the same time,
there are those critical components that are needed to power the AI age. So you can have all
of this technology, but if you can't actually power it, you can't actually move forward.
So I do think Canada needs to insert itself in this moment, and we do need to flex the muscles
that we do have.
We have a lot of leverage in this age.
We just need to step into the game.
But yes, a world in which you have to choose between US AI or China's AI
systems, where you don't really have control over your own AI sovereignty, I do think that that's
a challenge because the US is deprioritizing safety
for the sake of innovation and for the sake of winning
what they see as this race against China.
I don't think AI is a race that can be won.
And I think that there will be some casualties
if that is the only frame countries are looking at AI through.
What do you mean that it's not a race that can be won?
I mean, you had the vice president speaking
at the summit today, pushing back to your point
against what he calls excessive regulation.
There are a lot of people who were shocked
by the appearance of a Chinese chatbot,
which suggested that the race is not between big companies,
but between nation states.
So why is this a race that can't be won?
It can't be won because we will all suffer
if AI moves at a pace that we have no control
over and we really deprioritize safety.
And at the same time, what DeepSeek and these innovations that are coming out of China have
shown us is that you really can't put a border on software.
So you can attempt to go really quickly, but this technology is going to diffuse throughout
the globe regardless.
So I do think we're better off taking a more collaborative approach.
That said, of course, there are some challenges of AI coming out of a country like China that has a bit more of an authoritarian approach to this technology.
There are some cybersecurity challenges.
But at the end of the day, you really can't put a border on this software.
It's going to go to whichever country wants to adopt it.
Do you think there's any real appetite for regulation?
Again, JD Vance talked about how AI systems built in the United States
would, in his words, be free from ideological bias and not restrict free speech.
If you are trying to move as fast as possible,
is there any appetite for regulation
that some people might believe would slow down
that lightning fast development?
I think there is still appetite for regulation
because you need societal adoption and societal buy-in
for this technology to really move markets
and to have that economic benefit.
So I don't think people will feel safe moving forward with the technology if there are absolutely
no strings attached to it.
Businesses do like some guidelines, to know the kind of guardrails or the frameworks
that they have to play within
when it comes to adopting this technology.
So there is some appetite for it.
I don't think there's as much appetite as,
for example, Europe has put forward with their AI regulation.
I think that they went a little bit heavy-handed with it,
but I don't think it should be a technology
that isn't regulated.
I don't think that that's gonna be helpful.
And I think for society to adopt it and want to
buy into it, they do need to feel somewhat safe
and that this technology has been regulated or
there are checks and balances before countries
just go live to air with it.
Just finally, I mean, the technology
moves so quickly, and the summit was meant to,
as I said, be about action.
Something is supposed to come out of it that
will lead to some sort of consensus.
At the same time, you have Elon Musk putting in a
bid to buy OpenAI, which creates ChatGPT.
Do you think that the meeting of the minds in Paris
is actually going to lead to any action that would
make people who are nervous about the power of
this technology feel better?
I do think that these conversations, leaders coming together to figure out some sort of
global architecture for this technology, still have a lot of merit.
It still has a lot of purpose for a country like Canada, for instance, aligning itself
with Europe to kind of create more blocs and more opportunities in a world that's not
just dominated by two countries. But at the end of the day,
I do still think there's this unilateral advantage
that countries are trying to seek, trying to sell AI in their country: come to our country,
this is why we are a good platform for AI development.
But I think it's a technology that does require diplomacy.
You can't really put borders on it.
So we still need to have these international conversations.
Regardless of whether people are signing off on pledges,
we are still better off in a world
where leaders are meeting and at least trying to collaborate
versus one where everybody is going full speed ahead
in their own lane with no coordination whatsoever.
Sinead, good to talk to you.
Thank you very much.
Thanks for having me.
Sinead Bovell is the founder of the tech education company
WAYE, just back from that summit in Paris.
For more CBC podcasts, go to cbc.ca slash podcasts.